column | type | observed stats
---|---|---
question_type | string | 2 classes
question | string | length 11 to 421
answer | string | length 1 to 2.03k
evidence_keys | string | length 11 to 61
evidence_contents | string | length 71 to 10.2k
evidence_modal | string | 4 classes
evidence_count | int64 | 1 to 4
distractor_count | int64 | 1 to 4
info_count | int64 | always 5
text_2_idx | string | length 2 to 11.5k
idx_2_text | string | length 2 to 11.5k
image_2_idx | string | length 2 to 421
idx_2_image | string | length 2 to 421
table_2_idx | string | length 2 to 336
idx_2_table | string | length 2 to 336
meta_data | string | length 2 to 1.9k
distractor_contents | string | length 79 to 11.2k
question_id | string | length 64
pdf_id | string | length 40

question_type | question | answer | evidence_keys | evidence_contents | evidence_modal | evidence_count | distractor_count | info_count | text_2_idx | idx_2_text | image_2_idx | idx_2_image | table_2_idx | idx_2_table | meta_data | distractor_contents | question_id | pdf_id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
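The mapping cells in the rows below (`text_2_idx`, `idx_2_text`, and their image/table counterparts) are stored as stringified Python dicts, and `evidence_keys` as a stringified list. A minimal sketch of recovering them, assuming the cells are valid Python literals as they appear in the rows; `parse_cell` and `invert` are illustrative helper names, not part of the dataset:

```python
import ast


def parse_cell(cell: str):
    """Parse a stringified dict/list cell (e.g. text_2_idx or evidence_keys).

    ast.literal_eval only evaluates Python literals, so it is safe on
    untrusted cell strings, unlike eval().
    """
    return ast.literal_eval(cell)


def invert(mapping: dict) -> dict:
    """idx_2_text mirrors text_2_idx with keys and values swapped;
    this rebuilds one direction from the other."""
    return {idx: text for text, idx in mapping.items()}


# Toy cell values mirroring the row format (contents shortened):
text_2_idx = parse_cell("{'some evidence passage ': '1'}")
evidence_keys = parse_cell("['Table 1', 'Figure 4']")

assert invert(text_2_idx) == {'1': 'some evidence passage '}
assert evidence_keys[0] == 'Table 1'
```

The same `parse_cell` helper applies to `image_2_idx`, `table_2_idx`, and `meta_data`, which follow the same stringified-literal convention.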
explanation | Why is the performance of your method better on paraphrased datasets than on the Normal Dataset? | Regarding the occasionally better performance of Profiler (and also other baselines) on paraphrased datasets in Table 1 and Table 2, it is important to note that these are in-distribution results, where the training and test data distributions are the same. When detectors are tested in an out-of-distribution setting—where the detector is trained on the original dataset and tested on the paraphrased dataset—all detectors exhibit a performance degradation, as shown in Figure 4. The improved performance on paraphrased datasets under the in-distribution setting suggests that paraphrased data is more separable in this context. We attribute this to two main reasons: (1) paraphrasing may inadvertently expose more model-specific characteristics, and (2) different LLMs may interpret and encode patterns of human-written texts differently, thereby reducing detection complexity. However, the performance drop observed in the out-of-distribution setting indicates that paraphrasing remains an effective evasion technique in real-world deployments. | ['Table 1', 'Table 2', 'Figure 4'] | ['images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg', 'images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg', 'images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg'] | ['mixed'] | 3 | 2 | 5 | {'where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. ': '1'} | {'1': 'where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . 
The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. '} | {'images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg': '1', 'images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg': '4'} | {'1': 'images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg', '4': 'images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg'} | {'images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg': '1', 'images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg': '2'} | {'1': 'images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg', '2': 'images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg'} | {} | ['images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg', 'where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. '] | a8a6339a943fa79ae72382fb9f1d022d8409510904d542963b580682babf239b | d969953a0cbdd7fa8485cf1555a32f7b3d62a7a4 |
explanation | What improvements does FacLens provide over existing methods? | Our work has clear improvements over existing works in practical applications (efficiency beyond performance) due to the following reasons. In Figure 2, we compare the ante-hoc method (FacLens) with post-hoc methods (SAPLMA and INSIDE). Unlike post-hoc methods, which rely on costly answer generation, the ante-hoc method avoids inference costs and controls risks in advance. As shown in Figure 2, despite post-hoc methods having more information (i.e., generated answers), FacLens still performs better. Table 1 shows that FacLens achieves clear performance gains over most baselines. While the performance gains over LoRA and Self-Evaluation are slightly smaller, FacLens significantly outperforms both baselines in terms of training efficiency (see Table 2), which is a crucial factor for practical application. | ['Figure 2', 'Table 1', 'Table 2'] | ['images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg', 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg', 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg'] | ['mixed'] | 3 | 2 | 5 | {'Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation, the cross-LLM FacLens can work well in the target domain, as depicted in the the lower part of Figure 6. We also discuss the choice of the kernel function in Appendix G, and find that linear kernel performs well, indicating that the NFP features derived by genc are inherently discriminative. Furthermore, we observe that FacLens demonstrates better transferability between LLMs of similar scales. 
In future work, we will explore more effective methods to enhance FacLens’s transferability between LLMs of significantly different scales. ': '1', 'NFP Dataset Construction. Given an LLM m and a QA dataset, for each question q ∈Q, we assign a binary label y to the (m, q) pair, where y = 1 if m fails to generate the golden answer for q, and y = 0 otherwise. The goal of NFP is to predict the labels prior to answer generation. Specifically, we follow previous work (Mallen et al., 2023) to adopt QA datasets with short answers like entity mentions, and mark an LLM’s response as non-factual (i.e., y = 1) if no sub-string of the response matches any of the gold answers.2 To ensure the experimental reproducibility, we set the LLM’s decoding strategy to greedy search rather than top-p or top-k sampling. We have also run the sampling-based decoding for response generation, and find that all the experimental conclusions in this paper still hold true. In this work, we consider four LLMs and three QA datasets, which results in 4 × 3 = 12 NFP datasets. In each NFP dataset, consisting of samples in the form of ((m, q), y), we randomly sample 20% samples for training, 10% samples for validation, and use the remaining samples for testing. ': '2'} | {'1': 'Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation, the cross-LLM FacLens can work well in the target domain, as depicted in the the lower part of Figure 6. We also discuss the choice of the kernel function in Appendix G, and find that linear kernel performs well, indicating that the NFP features derived by genc are inherently discriminative. Furthermore, we observe that FacLens demonstrates better transferability between LLMs of similar scales. 
In future work, we will explore more effective methods to enhance FacLens’s transferability between LLMs of significantly different scales. ', '2': 'NFP Dataset Construction. Given an LLM m and a QA dataset, for each question q ∈Q, we assign a binary label y to the (m, q) pair, where y = 1 if m fails to generate the golden answer for q, and y = 0 otherwise. The goal of NFP is to predict the labels prior to answer generation. Specifically, we follow previous work (Mallen et al., 2023) to adopt QA datasets with short answers like entity mentions, and mark an LLM’s response as non-factual (i.e., y = 1) if no sub-string of the response matches any of the gold answers.2 To ensure the experimental reproducibility, we set the LLM’s decoding strategy to greedy search rather than top-p or top-k sampling. We have also run the sampling-based decoding for response generation, and find that all the experimental conclusions in this paper still hold true. In this work, we consider four LLMs and three QA datasets, which results in 4 × 3 = 12 NFP datasets. In each NFP dataset, consisting of samples in the form of ((m, q), y), we randomly sample 20% samples for training, 10% samples for validation, and use the remaining samples for testing. '} | {'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg': '2'} | {'2': 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg'} | {'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg': '2', 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg': '1'} | {'2': 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg', '1': 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg'} | {} | ['NFP Dataset Construction. Given an LLM m and a QA dataset, for each question q ∈Q, we assign a binary label y to the (m, q) pair, where y = 1 if m fails to generate the golden answer for q, and y = 0 otherwise. 
The goal of NFP is to predict the labels prior to answer generation. Specifically, we follow previous work (Mallen et al., 2023) to adopt QA datasets with short answers like entity mentions, and mark an LLM’s response as non-factual (i.e., y = 1) if no sub-string of the response matches any of the gold answers.2 To ensure the experimental reproducibility, we set the LLM’s decoding strategy to greedy search rather than top-p or top-k sampling. We have also run the sampling-based decoding for response generation, and find that all the experimental conclusions in this paper still hold true. In this work, we consider four LLMs and three QA datasets, which results in 4 × 3 = 12 NFP datasets. In each NFP dataset, consisting of samples in the form of ((m, q), y), we randomly sample 20% samples for training, 10% samples for validation, and use the remaining samples for testing. ', 'Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation, the cross-LLM FacLens can work well in the target domain, as depicted in the the lower part of Figure 6. We also discuss the choice of the kernel function in Appendix G, and find that linear kernel performs well, indicating that the NFP features derived by genc are inherently discriminative. Furthermore, we observe that FacLens demonstrates better transferability between LLMs of similar scales. In future work, we will explore more effective methods to enhance FacLens’s transferability between LLMs of significantly different scales. '] | 4ab6d6d8dcdf8b7a45b9b9c864dc3959193bbda43c25d024ee44e0234248444d | e2297ed06ca065d361ec3f28961b352c3377db10 |
explanation | How does FacLens compare to previous methods in terms of performance? | Table 1 shows that FacLens achieves clear performance gains over most baselines. While the performance gains over LoRA and Self-Evaluation are slightly smaller, FacLens significantly outperforms both of them in terms of training efficiency (see Table 2), which is crucial for practical applications. Moreover, as shown in Figure 2, we compared FacLens with post-hoc methods. Despite post-hoc methods having access to additional information (i.e., the generated answers), FacLens still performs better. | ['Table 1', 'Table 2', 'Figure 2'] | ['images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg', 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg', 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg'] | ['mixed'] | 3 | 2 | 5 | {'where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. ': '1'} | {'1': 'where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. 
'} | {'images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg': '7', 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg': '2'} | {'7': 'images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg', '2': 'images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg'} | {'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg': '2', 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg': '1'} | {'2': 'images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg', '1': 'images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg'} | {} | ['images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg', 'where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. '] | 670d6826b93a707dab76d21a73b5c691457ec286bcc186606cd4c02327464670 | e2297ed06ca065d361ec3f28961b352c3377db10 |
explanation | What analyses have the authors done on how properties of the dataset affect the performance of MLLMs? | In Figure 5 of the paper, we present the relationship between the number of images and the accuracy of image association in the IITC task. From the figure, we can see the following: 1. The image association accuracy of the VEGA-base-4k model decreases as the number of images increases. 2. For the other closed-source models, there is also a general negative correlation between the number of images and image association accuracy. The increase in the number of images makes image selection in the IITC task more challenging. We have supplemented the analysis with the relationship between token length and image accuracy. Details can be found in the table above: 1) Statistically, there is a general negative correlation between image accuracy and token length. 2) Due to the uneven distribution of token lengths in the test set (see Figure 4 of the paper), there is a limited amount of test data in the 0-1k and 7-8k ranges (with only 9 and 28 samples, respectively), which may lead to some margin of error in these intervals. 3) As shown in Table 2 of the paper, for all models, the image accuracy in IITC 4k is higher than in IITC 8k, further supporting the negative correlation between accuracy and token length. The increase in context length introduces more redundant information, making image selection more challenging. | ['Figure 5', 'Figure 4', 'Table 2'] | ['images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg', 'images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg', 'images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg'] | ['mixed'] | 3 | 2 | 5 | {'Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. 
The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question about only one of the images, requiring the model to specify the image it refers to in its answer. We assess the model’s interleaved image-text reading comprehension ability by both the correct rate of associated images, and the text quality of the answer by ROUGELin (2004) and BLEU Papineni et al. (2002). We have evaluated several state-of-the-art MLLMs on our dataset, validating the challenge of our tasks. Furthermore, we have fine-tuned the Qwen-VL-Chat model Bai et al. (2023) on the VEGA dataset to set a robust baseline for the IITC task. ': '1'} | {'1': 'Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question about only one of the images, requiring the model to specify the image it refers to in its answer. We assess the model’s interleaved image-text reading comprehension ability by both the correct rate of associated images, and the text quality of the answer by ROUGELin (2004) and BLEU Papineni et al. (2002). We have evaluated several state-of-the-art MLLMs on our dataset, validating the challenge of our tasks. Furthermore, we have fine-tuned the Qwen-VL-Chat model Bai et al. (2023) on the VEGA dataset to set a robust baseline for the IITC task. 
'} | {'images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg': '4', 'images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg': '1', 'images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg': '5'} | {'4': 'images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg', '1': 'images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg', '5': 'images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg'} | {'images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg': '2'} | {'2': 'images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg'} | {} | ['images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg', 'Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question about only one of the images, requiring the model to specify the image it refers to in its answer. We assess the model’s interleaved image-text reading comprehension ability by both the correct rate of associated images, and the text quality of the answer by ROUGELin (2004) and BLEU Papineni et al. (2002). We have evaluated several state-of-the-art MLLMs on our dataset, validating the challenge of our tasks. Furthermore, we have fine-tuned the Qwen-VL-Chat model Bai et al. (2023) on the VEGA dataset to set a robust baseline for the IITC task. '] | 8caf5a4e8ea45a9c61b2a596fe76417f7aa5a3d875406f1784a388872e17ead8 | ff04147bfeb3ecdb49c1ad6b729c8776be9205bc |
explanation | How does the paper address the marginal improvements observed in the experimental results? | Notice that spectral regularization is always amongst the best-performing methods in all experiments. Moreover, in several experiments, spectral regularization was significantly better than any other baseline: Figure 1 (left), Figure 2 (right), Figure 3. | ['Figure 1', 'Figure 2', 'Figure 3'] | ['images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg', 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg'] | ['figure'] | 3 | 2 | 5 | {'Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington et al., 2017; Saxe et al., 2014; Xiao et al., 2018). Writing this Jacobian explicitly, we have that Jl = ∂∂hlh+1 = Dlθl where Dl = Diag(ReLU′([θlhl]1), . . . , ReLU′([θlhl]d)). 2 We can obtain upper and lower bounds on the singular values of the layerwise Jacobian in terms of the singular values of the weight matrix. Denoting the ordered singular values of θl and Dl by σd(θl) ≤· · · ≤σ1(θl) and σd(Dl) ≤· · · ≤σ1(Dl), respectively, we have σd(Dl)σi(θl) < σi(Jl) < σ1(Dl)σi(θl) for all i ∈{1, . . . , d} (Zhang, 2011, Theorem 8.13). In particular, if the spectral norm (largest singular value) of the weight matrix θl increases, then the spectral norm of the Jacobian Dl increases as well, potentially impacting trainability. Furthermore, the condition number κ(Jl) = σ1(Jl)/σd(Jl) can be bounded with the product of the condition numbers of θl and Dl, κ(θl) and κ(Dl) as κ(θl)/κ(Dl) ≤κ(Jl) ≤κ(θl)κ(Dl). 
Thus, if our goal is to keep the singular values of the Jacobian close to one by controlling the singular values of the weight matrix, we should ensure that the condition number of the latter is not too large. ': '1'} | {'1': 'Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington et al., 2017; Saxe et al., 2014; Xiao et al., 2018). Writing this Jacobian explicitly, we have that Jl = ∂∂hlh+1 = Dlθl where Dl = Diag(ReLU′([θlhl]1), . . . , ReLU′([θlhl]d)). 2 We can obtain upper and lower bounds on the singular values of the layerwise Jacobian in terms of the singular values of the weight matrix. Denoting the ordered singular values of θl and Dl by σd(θl) ≤· · · ≤σ1(θl) and σd(Dl) ≤· · · ≤σ1(Dl), respectively, we have σd(Dl)σi(θl) < σi(Jl) < σ1(Dl)σi(θl) for all i ∈{1, . . . , d} (Zhang, 2011, Theorem 8.13). In particular, if the spectral norm (largest singular value) of the weight matrix θl increases, then the spectral norm of the Jacobian Dl increases as well, potentially impacting trainability. Furthermore, the condition number κ(Jl) = σ1(Jl)/σd(Jl) can be bounded with the product of the condition numbers of θl and Dl, κ(θl) and κ(Dl) as κ(θl)/κ(Dl) ≤κ(Jl) ≤κ(θl)κ(Dl). Thus, if our goal is to keep the singular values of the Jacobian close to one by controlling the singular values of the weight matrix, we should ensure that the condition number of the latter is not too large. 
'} | {'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg': '3', 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg': '1', 'images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg': '5', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg': '2'} | {'3': 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg', '1': 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', '5': 'images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg', '2': 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg'} | {} | {} | {} | ['Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington et al., 2017; Saxe et al., 2014; Xiao et al., 2018). Writing this Jacobian explicitly, we have that Jl = ∂∂hlh+1 = Dlθl where Dl = Diag(ReLU′([θlhl]1), . . . , ReLU′([θlhl]d)). 2 We can obtain upper and lower bounds on the singular values of the layerwise Jacobian in terms of the singular values of the weight matrix. Denoting the ordered singular values of θl and Dl by σd(θl) ≤· · · ≤σ1(θl) and σd(Dl) ≤· · · ≤σ1(Dl), respectively, we have σd(Dl)σi(θl) < σi(Jl) < σ1(Dl)σi(θl) for all i ∈{1, . . . , d} (Zhang, 2011, Theorem 8.13). In particular, if the spectral norm (largest singular value) of the weight matrix θl increases, then the spectral norm of the Jacobian Dl increases as well, potentially impacting trainability. Furthermore, the condition number κ(Jl) = σ1(Jl)/σd(Jl) can be bounded with the product of the condition numbers of θl and Dl, κ(θl) and κ(Dl) as κ(θl)/κ(Dl) ≤κ(Jl) ≤κ(θl)κ(Dl). 
Thus, if our goal is to keep the singular values of the Jacobian close to one by controlling the singular values of the weight matrix, we should ensure that the condition number of the latter is not too large. ', 'images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg'] | e103290df88fe0eeb1f60aaf6df31d7daf51b5ff817ce6d7c06e4f19ca381e1f | 05fe05b0399402d34686a7b695820eaf3b6b5eca |
explanation | What improvements does spectral regularization provide over L2 regularization? | Empirically, spectral regularization is a large improvement over L2 regularization in several of our experiments, e.g. Figure 1 (left), Figure 2 (right), and Figure 3. Moreover, spectral regularization is more robust to its hyperparameter and always among the 1 or 2 best-performing methods in all of our experiments. | ['Figure 1', 'Figure 2', 'Figure 3'] | ['images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg', 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg'] | ['figure'] | 3 | 2 | 5 | {'Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et al., 2023; Shang et al., 2016), and Wasserstein regularization (Lewandowski et al., 2023). Several regularizers in the continual learning without forgetting literature rely on privileged task information, which is not applicable to the task-agnostic setting that we consider. We use the streaming conversion (Elsayed and Mahmood, 2024) to transform elastic weight consolidation (Kirkpatrick et al., 2017; Zenke et al., 2017), so that it no longer requires task boundary information, and include it as a baseline. Additional experiment details can be found in Appendix B. ': '1', 'Loss of plasticity in the continual learning literature can refer to either loss of trainability (Dohare et al., 2021; Lyle et al., 2023) or to loss of generalization (Ash and Adams, 2020). Because trainability is a requirement for learning and generalization, we focus primarily on loss of trainability. 
Specifically, we use loss of trainability to refer to the phenomenon that the objective value, Jτ(θ(τT )), increases as a function of the task τ. Equivalently, the performance measures, such as accuracy, decrease with new tasks. Under the assumption that the tasks are sampled independently and identically, this would suggest that the neural network’s trainability diminishes on new tasks. ': '2'} | {'1': 'Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et al., 2023; Shang et al., 2016), and Wasserstein regularization (Lewandowski et al., 2023). Several regularizers in the continual learning without forgetting literature rely on privileged task information, which is not applicable to the task-agnostic setting that we consider. We use the streaming conversion (Elsayed and Mahmood, 2024) to transform elastic weight consolidation (Kirkpatrick et al., 2017; Zenke et al., 2017), so that it no longer requires task boundary information, and include it as a baseline. Additional experiment details can be found in Appendix B. ', '2': 'Loss of plasticity in the continual learning literature can refer to either loss of trainability (Dohare et al., 2021; Lyle et al., 2023) or to loss of generalization (Ash and Adams, 2020). Because trainability is a requirement for learning and generalization, we focus primarily on loss of trainability. Specifically, we use loss of trainability to refer to the phenomenon that the objective value, Jτ(θ(τT )), increases as a function of the task τ. Equivalently, the performance measures, such as accuracy, decrease with new tasks. 
Under the assumption that the tasks are sampled independently and identically, this would suggest that the neural network’s trainability diminishes on new tasks. '} | {'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg': '3', 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg': '1', 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg': '2'} | {'3': 'images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg', '1': 'images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg', '2': 'images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg'} | {} | {} | {} | ['Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et al., 2023; Shang et al., 2016), and Wasserstein regularization (Lewandowski et al., 2023). Several regularizers in the continual learning without forgetting literature rely on privileged task information, which is not applicable to the task-agnostic setting that we consider. We use the streaming conversion (Elsayed and Mahmood, 2024) to transform elastic weight consolidation (Kirkpatrick et al., 2017; Zenke et al., 2017), so that it no longer requires task boundary information, and include it as a baseline. Additional experiment details can be found in Appendix B. ', 'Loss of plasticity in the continual learning literature can refer to either loss of trainability (Dohare et al., 2021; Lyle et al., 2023) or to loss of generalization (Ash and Adams, 2020). Because trainability is a requirement for learning and generalization, we focus primarily on loss of trainability. 
Specifically, we use loss of trainability to refer to the phenomenon that the objective value, Jτ(θ(τT )), increases as a function of the task τ. Equivalently, the performance measures, such as accuracy, decrease with new tasks. Under the assumption that the tasks are sampled independently and identically, this would suggest that the neural network’s trainability diminishes on new tasks. '] | 22132795dd4d718836bcea76aa5a9ee27154f136067d4d67d1e043271a66c6a1 | 05fe05b0399402d34686a7b695820eaf3b6b5eca |
explanation | How are passenger profiles integrated into the origin-destination matrix at the regional or stop level? | As shown in Figure 1(d), a walking distance is deemed acceptable if it is limited to 1.1 km. Concerning the average velocity, Figure 1(e), and the trip time, Figure 1(f), all registers with values greater than 80 km/h and 2 hours are unconsidered. These values were estimated by local specialists based on the passengers’ usage patterns and the transportation infrastructure in Salvador. We emphasize that the reader can modify these values according to their needs once both raw and processed data are shared. | ['Figure 1'] | ['images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg'] | ['figure'] | 1 | 4 | 5 | {'In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to track older people because they are not individually identified. According to local policies, the fares for such passengers are recorded as general users without identification. Consequently, we are unable to estimate their alighting points. Another particular case that prevents us from identifying users’ alighting points occurs when there is only a single trip registration on a given day. In such cases, we can only determine the boarding point, with no information available about the alighting point. Therefore, we cannot consider such situations in our analyses. ': '1', 'In our context, spatial data do not depend on time t, i.e., their information is time-invariant. Specifically, in every vertex vi ∈V , we store the following features: geographical position, number of boarding and alighting per vehicle, and passenger load. 
The features specifically concerning edges (vi, vj) ∈E include the distance between stops and stations, the trip duration, the mean velocity, and the Renovation Factor (RF). The RF is a well-known metric used in transportation research to assess the total demand in a line, i.e., it is computed on a set of edges that belong to the line [ITDP, 2016]. Formally, this metric is the ratio of the total demand of a line to the load on its critical link. Higher renovation factors occur when there are many short trips along the line. Corridors with very high renovation factor rates are more profitable because they handle the same number of paying customers with fewer vehicles [ITDP, 2016]. Besides the individual features, there is relevant information shared by both vertices and edges, such as the number of passengers per vehicle, lines and directions, vehicle characteristics, altitude, and trips. ': '2', 'All information shared by SUNT was collected from March 2024 to July 2024 and aggregated into 5-minute intervals. This interval allows the data to be represented as a temporal graph, in addition to the spatial information. However, we emphasize that this interval can be adjusted according to the readers’ requirements. It is possible to work with a static graph using a single interval or to summarize all days using, for example, a mean function. Additional details about all the data comprising SUNT are available in Appendix B. ': '3'} | {'1': 'In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to track older people because they are not individually identified. According to local policies, the fares for such passengers are recorded as general users without identification. 
Consequently, we are unable to estimate their alighting points. Another particular case that prevents us from identifying users’ alighting points occurs when there is only a single trip registration on a given day. In such cases, we can only determine the boarding point, with no information available about the alighting point. Therefore, we cannot consider such situations in our analyses. ', '2': 'In our context, spatial data do not depend on time t, i.e., their information is time-invariant. Specifically, in every vertex vi ∈V , we store the following features: geographical position, number of boarding and alighting per vehicle, and passenger load. The features specifically concerning edges (vi, vj) ∈E include the distance between stops and stations, the trip duration, the mean velocity, and the Renovation Factor (RF). The RF is a well-known metric used in transportation research to assess the total demand in a line, i.e., it is computed on a set of edges that belong to the line [ITDP, 2016]. Formally, this metric is the ratio of the total demand of a line to the load on its critical link. Higher renovation factors occur when there are many short trips along the line. Corridors with very high renovation factor rates are more profitable because they handle the same number of paying customers with fewer vehicles [ITDP, 2016]. Besides the individual features, there is relevant information shared by both vertices and edges, such as the number of passengers per vehicle, lines and directions, vehicle characteristics, altitude, and trips. ', '3': 'All information shared by SUNT was collected from March 2024 to July 2024 and aggregated into 5-minute intervals. This interval allows the data to be represented as a temporal graph, in addition to the spatial information. However, we emphasize that this interval can be adjusted according to the readers’ requirements. 
It is possible to work with a static graph using a single interval or to summarize all days using, for example, a mean function. Additional details about all the data comprising SUNT are available in Apendix B. '} | {'images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg': '1', 'images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg': '3'} | {'1': 'images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg', '3': 'images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg'} | {} | {} | {} | ['images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg', 'In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to track older people because they are not individually identified. According to local policies, the fares for such passengers are recorded as general users without identification. Consequently, we are unable to estimate their alighting points. Another particular case that prevents us from identifying users’ alighting points occurs when there is only a single trip registration on a given day. In such cases, we can only determine the boarding point, with no information available about the alighting point. Therefore, we cannot consider such situations in our analyses. ', 'All information shared by SUNT was collected from March 2024 to July 2024 and aggregated into 5-minute intervals. This interval allows the data to be represented as a temporal graph, in addition to the spatial information. However, we emphasize that this interval can be adjusted according to the readers’ requirements. It is possible to work with a static graph using a single interval or to summarize all days using, for example, a mean function. 
Additional details about all the data comprising SUNT are available in Apendix B. ', 'In our context, spatial data do not depend on time t, i.e., their information is time-invariant. Specifically, in every vertex vi ∈V , we store the following features: geographical position, number of boarding and alighting per vehicle, and passenger load. The features specifically concerning edges (vi, vj) ∈E include the distance between stops and stations, the trip duration, the mean velocity, and the Renovation Factor (RF). The RF is a well-known metric used in transportation research to assess the total demand in a line, i.e., it is computed on a set of edges that belong to the line [ITDP, 2016]. Formally, this metric is the ratio of the total demand of a line to the load on its critical link. Higher renovation factors occur when there are many short trips along the line. Corridors with very high renovation factor rates are more proftiable because they handle the same number of paying customers with fewer vehicles [ITDP, 2016]. Besides the individual features, there is relevant information shared by both vertices and edges, such as the number of passengers per vehicle, lines and directions, vehicle characteristics, altitude, and trips. '] | c8f71f59ce47e86848347df22d37552cc7e4d12d8bf81a5447d0338086cffd33 | 5aa218287d89432e6fc34652ca252cfe99d92e21 |
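The Renovation Factor quoted in the SUNT row above has a direct computational reading: the total demand of a line divided by the passenger load on its critical (most loaded) link. A small sketch under the assumption that boardings and per-link loads are available as plain lists; the argument names are illustrative, not SUNT's actual schema:

```python
def renovation_factor(boardings_per_stop, load_per_link):
    """Renovation Factor (RF) of a transit line, as defined in the BRT
    planning literature [ITDP, 2016]: the ratio of the line's total
    demand to the load on its critical (most loaded) link. Higher RF
    means many short trips along the line, so the same number of paying
    passengers is served with fewer vehicles."""
    total_demand = sum(boardings_per_stop)
    critical_load = max(load_per_link)
    return total_demand / critical_load

# Toy line with 4 stops and 3 links: 180 boardings in total,
# never more than 90 passengers on any single link.
rf = renovation_factor([100, 50, 30, 0], [90, 80, 30])
```

In SUNT this quantity is attached to the edges of a line rather than computed per call, but the ratio itself is the same.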
explanation | What is the rationale for the experimental configurations chosen in the study? | Figure 4 (MAD): This figure focuses on a case study demonstrating a counter-intuitive phenomenon where introducing errors can improve performance—a rare observation in multi-agent systems. MAD was selected specifically for its relevance to this unique insight. Figure 7a (Exclusion of MAD): MAD was excluded from Figure 7a because this experiment involves scenarios with malicious instruction-sending agents, which are not present in the MAD system configuration. Figure 8 (Self-collab and Camel): Only Self-collab and Camel are included in Figure 8 because they represent the weaker systems within the Linear and Flat structures, respectively. Our objective in this experiment is to illustrate how our proposed defense method enhances resilience in weaker systems. To provide greater clarity on our multi-agent system settings, we have added a comprehensive table summarizing the experimental configurations. | ['Figure 4', 'Figure 7', 'Figure 8'] | ['images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg', 'images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg', 'images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg'] | ['figure'] | 3 | 2 | 5 | {'Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in the code when no comments are present. However, when a comment stating “the bug had been corrected” is added, the system overlooks the error and proceeds with the next task. AUTOTRANSFORM exploits this characteristic of LLMs to execute successful attacks. ': '1'} | {'1': 'Current LLMs prioritize natural language over code. Fig. 
6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in the code when no comments are present. However, when a comment stating “the bug had been corrected” is added, the system overlooks the error and proceeds with the next task. AUTOTRANSFORM exploits this characteristic of LLMs to execute successful attacks. '} | {'images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg': '4', 'images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg': '7', 'images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg': '8', 'images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4a203c2c009.jpg': '1'} | {'4': 'images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg', '7': 'images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg', '8': 'images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg', '1': 'images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4a203c2c009.jpg'} | {} | {} | {} | ['Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in the code when no comments are present. However, when a comment stating “the bug had been corrected” is added, the system overlooks the error and proceeds with the next task. AUTOTRANSFORM exploits this characteristic of LLMs to execute successful attacks. ', 'images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4a203c2c009.jpg'] | 28348747626e6364ef8ed1d3cf3ae2a27e837a31c9e72c81cf34fc34a077ec92 | 5f4382c8b4eb16e5bc379f3c02f21f53318dbacb |
explanation | What evidence supports the claim of improved zero-shot generalization? | We respectfully disagree with the reviewer’s assertion that the paper does not demonstrate improved zero-shot generalization, as we show this in Procgen (see aggregate performance added to Table 3). Additionally, we present the FDD approach (Table 2), where we observe improvement in the generalization gap for the DMC environments. That said, we understand that the improved performance in the original environment in Table 1 (not necessarily a bad thing!) could lead to confusion. We are happy to rephrase the title if you have a recommendation. One proposal could be 'Synthetic Data Enables Training Robust Agents from Offline Data,' as our agents perform well across a wide range of settings. We also updated Tables 2 and 3 to include $Test/Train$ and $Train-Test$ results for both environments, aligning with the metrics suggested by the reviewer. | ['Table 2', 'Table 3', 'Table 1'] | ['images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg', 'images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg', 'images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg'] | ['table'] | 3 | 2 | 5 | {} | {} | {'images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg': '5', 'images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg': '2'} | {'5': 'images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg', '2': 'images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg'} | {'images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg': '1', 'images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg': '2', 'images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg': '3'} | {'1': 'images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg', '2': 
'images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg', '3': 'images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg'} | {} | ['images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg', 'images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg'] | 5357a51d9c1a64e442ce83018c4e81ed44c53e736a443bd65b61b021ea85c150 | 67ffaaf503d82d0615454baf237f5e5a9ff7bb19 |
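The Test/Train and Train−Test numbers mentioned in the rebuttal above are simple summaries of aggregate train and test returns. A sketch, assuming the aggregates are available as scalars:

```python
def generalization_metrics(train_score, test_score):
    """Two common summaries of zero-shot generalization:
    - test_over_train: test / train (closer to 1.0 means a smaller relative gap)
    - train_minus_test: train - test (closer to 0.0 means a smaller absolute gap)"""
    return {"test_over_train": test_score / train_score,
            "train_minus_test": train_score - test_score}

m = generalization_metrics(train_score=0.80, test_score=0.60)
```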
explanation | Do you have a proof that PolyReLU and PolyNorm have equivalent expressivity? | Thank you for pointing out the less precise expression. We have rephrased the sentence as follows: 'From Figure 1, one can see that the expressivity of PolyNorm is greater than or equal to that of PolyReLU.' The claim is primarily supported through the empirical evidence provided in the paper. As can be observed in Figure 1, Figure 6 and Figure 7, both PolyReLU and PolyNorm exhibit superior expressivity in comparison to other activation functions, with PolyNorm demonstrating equal or greater expressive capacity than PolyReLU. | ['Figure 1', 'Figure 6', 'Figure 7'] | ['images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285adddf1a.jpg', 'images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg', 'images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg'] | ['figure'] | 3 | 2 | 5 | {'Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences of 4096 tokens. For the dense model, we set the initial learning rate to 3e-4, decaying to 1.5e-5 using a cosine scheduler. The MoE model starts with a learning rate of 4e-4, also decaying according to a cosine schedule. We summary the hyperparameters in Table 7. ': '1'} | {'1': 'Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences of 4096 tokens. For the dense model, we set the initial learning rate to 3e-4, decaying to 1.5e-5 using a cosine scheduler. 
The MoE model starts with a learning rate of 4e-4, also decaying according to a cosine schedule. We summary the hyperparameters in Table 7. '} | {'images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg': '7', 'images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg': '6', 'images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg': '2', 'images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285adddf1a.jpg': '1'} | {'7': 'images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg', '6': 'images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg', '2': 'images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg', '1': 'images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285adddf1a.jpg'} | {} | {} | {} | ['images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg', 'Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences of 4096 tokens. For the dense model, we set the initial learning rate to 3e-4, decaying to 1.5e-5 using a cosine scheduler. The MoE model starts with a learning rate of 4e-4, also decaying according to a cosine schedule. We summary the hyperparameters in Table 7. '] | c1bc3c66ef0dee68fef185813dcc321a868969e1fce058e8db05d4896e37025c | 8b6c738aadc6b44e6ec8736d7e10c499122c0609 |
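The hyperparameter passage quoted in the row above initializes a third-order PolyCom with a0 = 0 and ai = 1/3 for i = 1, 2, 3. Assuming the composition y = a0 + Σ_i ai·ρ(x)^i with ρ = ReLU (the PolyReLU reading; PolyNorm additionally normalizes each power term, which this scalar sketch omits), the activation looks like:

```python
def poly_relu(x, coeffs=(0.0, 1/3, 1/3, 1/3)):
    """Polynomial composition of ReLU matching the quoted initialization:
    a third-order polynomial with a0 = 0 and ai = 1/3 for i = 1..3,
    i.e. y = sum_i coeffs[i] * relu(x)**i. This is an assumed reading of
    PolyReLU; the PolyNorm variant's per-term normalization is omitted."""
    r = max(x, 0.0)  # ReLU
    return sum(a * r ** i for i, a in enumerate(coeffs))

y_neg = poly_relu(-2.0)  # ReLU zeroes negative inputs
y_pos = poly_relu(3.0)   # (3 + 9 + 27) / 3
```

With `coeffs=(0.0, 1.0)` this degenerates to plain ReLU, which makes the "greater or equal expressivity" comparisons in the rebuttal easy to see.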
explanation | Include aforementioned key benchmarks to facilitate a more comprehensive comparison. | We provide additional performance comparisons with distillation sampling variant on CIFAR-10 (Table 1) and with direct consistency training variant on ImageNet 64 × 64 (Table 2). We have now included the key baselines [2], [3], [4] in Table 3 and Table 4. | ['Table 1', 'Table 2', 'Table 3', 'Table 4'] | ['images/4584bc1cab2b3666269c237bdbbd1b4df550e3959b8bb76bdede29a12727b351.jpg', 'images/a6254a44a961174af259338151f5d83522877672f091e94dc159e438aded5ddc.jpg', 'images/75f3909c72e81a021d776ae110c21cbef76c5af19d2a275c98cb87c82056d383.jpg', 'images/0c955ce83df71273494300492571aa486b8447724460975d37e40202ee8c8a1f.jpg'] | ['table'] | 4 | 1 | 5 | {'Instead of directly regressing on the ground truth vector field, Consistency-FM directly defines straight flows with consistent velocity that start from different times to the same endpoint. Specifically, we have the following lemma (prove in Appendix A.1): ': '1'} | {'1': 'Instead of directly regressing on the ground truth vector field, Consistency-FM directly defines straight flows with consistent velocity that start from different times to the same endpoint. 
Specifically, we have the following lemma (prove in Appendix A.1): '} | {} | {} | {'images/4584bc1cab2b3666269c237bdbbd1b4df550e3959b8bb76bdede29a12727b351.jpg': '1', 'images/75f3909c72e81a021d776ae110c21cbef76c5af19d2a275c98cb87c82056d383.jpg': '3', 'images/a6254a44a961174af259338151f5d83522877672f091e94dc159e438aded5ddc.jpg': '2', 'images/0c955ce83df71273494300492571aa486b8447724460975d37e40202ee8c8a1f.jpg': '4'} | {'1': 'images/4584bc1cab2b3666269c237bdbbd1b4df550e3959b8bb76bdede29a12727b351.jpg', '3': 'images/75f3909c72e81a021d776ae110c21cbef76c5af19d2a275c98cb87c82056d383.jpg', '2': 'images/a6254a44a961174af259338151f5d83522877672f091e94dc159e438aded5ddc.jpg', '4': 'images/0c955ce83df71273494300492571aa486b8447724460975d37e40202ee8c8a1f.jpg'} | {} | ['Instead of directly regressing on the ground truth vector field, Consistency-FM directly defines straight flows with consistent velocity that start from different times to the same endpoint. Specifically, we have the following lemma (prove in Appendix A.1): '] | fd2f46c9e9ce065018261c79e4bf414a71abfae337f3faa2bf15a48fdd911f0c | 8c2ef55eef0d86e9d05bef581f26ff0fb739fa87 |
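Consistency-FM's "straight flows with consistent velocity that start from different times to the same endpoint" can be illustrated on a known straight path: with constant velocity v = x1 - x0, the endpoint predicted from any intermediate time is identical. In the real method the velocity comes from a learned network; in this hedged sketch it is given:

```python
def straight_flow_point(x0, x1, t):
    """Point at time t on the straight path from x0 (t = 0) to x1 (t = 1)."""
    return x0 + t * (x1 - x0)

def predicted_endpoint(x_t, t, velocity):
    """Endpoint implied by a constant-velocity (straight) flow: start at x_t
    at time t and move with the given velocity until t = 1. The 'consistent
    velocity' property means this prediction does not depend on which time t
    we start from."""
    return x_t + (1.0 - t) * velocity

x0, x1 = 2.0, 5.0
v = x1 - x0  # the ground-truth straight-flow velocity is constant
endpoints = [predicted_endpoint(straight_flow_point(x0, x1, t), t, v)
             for t in (0.0, 0.25, 0.5, 0.9)]
```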
explanation | What are the reasons for the different performances of the unguided approach across various tasks? | The proposed distillation methods indeed have different effects in different tasks. The Table 2 corresponds to the scenario of zero-shot inference on large language models. In this case, to produce meaningful (not random) inference, the model capacity and training dataset need to be sufficiently large. As we often observe in such large-scale training, the help brought by the enhanced training technique will be reduced compared to the scenario of fine-tuning a smaller model using limited data. This leads to the smaller relative difference in Table 2 compared to Table 1 and Table 3. On the other hand, we did find issues in hyperparameter tuning in certain tasks and thank you for pointing out that. We performed grid search of hyperparameters in all the experiments for fair comparison, and as for the QNLI and QQP tasks the unguided model hyperparameters we identified failed to converge very well, leading to rather low accuracy. However, we performed more complete hyperparameter search on all the tasks after the initial submission and found some configurations with better results on QNLI, QQP, and SST2, given in the revised Table 1. The CALD models still outperform the unguided models by more than 10% in average accuracy. Therefore, the conclusions we drawn from the experiments are not affected. | ['Table 1', 'Table 2', 'Table 3'] | ['images/83b6e86b10024de7152534b36aabdc49122020f75cebe1217ea2380354aff292.jpg', 'images/79766378db8c691eda1c3f51d27ae46996a8e59c5475c93a621738726d28df13.jpg', 'images/bd5358fe10a83b107dacdece7f6c535060a32b18ad102fc6b077d4c6375984b6.jpg'] | ['table'] | 3 | 2 | 5 | {'• Target Guided. We can directly transfer the parameters from the fine-tuned teacher target model and distill from it. 
More specifically, given the model inputs for training on a target classification task, we denote the hidden states from the student and the teacher target as H(s) and H(t), each with m vectors; outputs (class probabilities) of the student and the teacher target as y(s) and y(t); and one-hot labels as y. Then the loss terms are written as ': '1', 'Although not our focus, we also carry out an extra experiment to convert a widely-used open-source LM of various sizes, namely Pythia (Biderman et al., 2023), into Mamba for language modeling by retraining on a small subset of the pretraining corpus. ': '2'} | {'1': '• Target Guided. We can directly transfer the parameters from the fine-tuned teacher target model and distill from it. More specifically, given the model inputs for training on a target classification task, we denote the hidden states from the student and the teacher target as H(s) and H(t), each with m vectors; outputs (class probabilities) of the student and the teacher target as y(s) and y(t); and one-hot labels as y. Then the loss terms are written as ', '2': 'Although not our focus, we also carry out an extra experiment to convert a widely-used open-source LM of various sizes, namely Pythia (Biderman et al., 2023), into Mamba for language modeling by retraining on a small subset of the pretraining corpus. '} | {} | {} | {'images/83b6e86b10024de7152534b36aabdc49122020f75cebe1217ea2380354aff292.jpg': '1', 'images/79766378db8c691eda1c3f51d27ae46996a8e59c5475c93a621738726d28df13.jpg': '2', 'images/bd5358fe10a83b107dacdece7f6c535060a32b18ad102fc6b077d4c6375984b6.jpg': '3'} | {'1': 'images/83b6e86b10024de7152534b36aabdc49122020f75cebe1217ea2380354aff292.jpg', '2': 'images/79766378db8c691eda1c3f51d27ae46996a8e59c5475c93a621738726d28df13.jpg', '3': 'images/bd5358fe10a83b107dacdece7f6c535060a32b18ad102fc6b077d4c6375984b6.jpg'} | {} | ['• Target Guided. We can directly transfer the parameters from the fine-tuned teacher target model and distill from it. 
More specifically, given the model inputs for training on a target classification task, we denote the hidden states from the student and the teacher target as H(s) and H(t), each with m vectors; outputs (class probabilities) of the student and the teacher target as y(s) and y(t); and one-hot labels as y. Then the loss terms are written as ', 'Although not our focus, we also carry out an extra experiment to convert a widely-used open-source LM of various sizes, namely Pythia (Biderman et al., 2023), into Mamba for language modeling by retraining on a small subset of the pretraining corpus. '] | 4a4b0196466c6c22db5b60d2b3f0218bd1a1b5721c5c8290e83a3e171768f2c5 | 91bbf564af0c392bf3d0152e8ff6b20e5a1f211f |
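The distillation loss terms referenced in the row above are truncated in this dump, so the paper's exact equation is not reproduced here. Such objectives conventionally combine hidden-state matching over the m vectors of H(s) and H(t), output distillation between the class probabilities y(s) and y(t), and cross-entropy with the one-hot labels y. The functions below are one typical instantiation of those three terms, not the paper's formula:

```python
import math

def hidden_state_loss(H_s, H_t):
    """Mean squared error between the m student and teacher hidden vectors,
    a typical choice for the hidden-state matching term."""
    m = len(H_s)
    return sum((a - b) ** 2 for hs, ht in zip(H_s, H_t)
               for a, b in zip(hs, ht)) / m

def output_distillation_loss(y_s, y_t):
    """KL(y_t || y_s) between teacher and student class probabilities."""
    return sum(pt * math.log(pt / ps) for pt, ps in zip(y_t, y_s) if pt > 0)

def label_loss(y_s, y_onehot):
    """Cross-entropy between student probabilities and one-hot labels."""
    return -sum(y * math.log(ps) for y, ps in zip(y_onehot, y_s) if y > 0)
```

A weighted sum of the three values would then serve as the training objective; the weights are a tuning choice not given in the excerpt.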
explanation | How is the last-demonstration clustering supported by evidence? | While we acknowledge that last-demonstration clustering may appear less pronounced in some visualizations, multiple lines of evidence still support its existence: Figure 3a shows elevated percentage frequencies for last demonstrations compared to middle positions, Figure 3b demonstrates higher partial derivative norms for the last chunk versus middle chunks, and we've added new evidence in Figure 5 showing attention weights to the last token steadily increase across layers. | ['Figure 3', 'Figure 5'] | ['images/fabe8c971816529b4c874def50c0f2e100520e70af95a726acf1450e01eea639.jpg', 'images/fe074d7e6b9aab3e309f6ad1ffdc5778d949aecc4bf0867ae31b1e7e1ffc94eb.jpg'] | ['figure'] | 2 | 3 | 5 | {'We prepare 100 randomized prompts and compute the partial derivative norms similarly to Section 3.2. To ensure the prompts are differently distributed to training ones, we build each prompt as a sequence of 50 to 100 random words, resulting in meaningless sentences. For each prompt, we compute its chunk partial derivative norms, then average over 100 prompts. Figure 6 shows interesting results. It reveals a robust correlation between first-demonstration clustering and the utilization of the causal attention mask. Specifically, the importance of beginning tokens is markedly elevated when, and only when, the causal attention mask is applied, which aligns with the findings presented in Proposition 4.1. On the other hand, the case for last-demonstration is more complex. While the importance of ending tokens remains distinctively high when sinusoidal positional encoding is employed in the absence of a causal attention mask, this phenomenon is not observed for rotary and trainable positional encoding. This suggests that the importance of ending tokens is influenced by the interplay between the causal structure and the choice of positional encoding method. 
': '1'} | {'1': 'We prepare 100 randomized prompts and compute the partial derivative norms similarly to Section 3.2. To ensure the prompts are differently distributed to training ones, we build each prompt as a sequence of 50 to 100 random words, resulting in meaningless sentences. For each prompt, we compute its chunk partial derivative norms, then average over 100 prompts. Figure 6 shows interesting results. It reveals a robust correlation between first-demonstration clustering and the utilization of the causal attention mask. Specifically, the importance of beginning tokens is markedly elevated when, and only when, the causal attention mask is applied, which aligns with the findings presented in Proposition 4.1. On the other hand, the case for last-demonstration is more complex. While the importance of ending tokens remains distinctively high when sinusoidal positional encoding is employed in the absence of a causal attention mask, this phenomenon is not observed for rotary and trainable positional encoding. This suggests that the importance of ending tokens is influenced by the interplay between the causal structure and the choice of positional encoding method. 
'} | {'images/ec3135d1cfef854bb75e4265222eba50ed2b0ed0d52b35742ddda3078c21d394.jpg': '2', 'images/fabe8c971816529b4c874def50c0f2e100520e70af95a726acf1450e01eea639.jpg': '3', 'images/72e917b6bbc2426132b4a78754ac69c62ff71dbba1420f390b68497c5bb6d90e.jpg': '6', 'images/fe074d7e6b9aab3e309f6ad1ffdc5778d949aecc4bf0867ae31b1e7e1ffc94eb.jpg': '5'} | {'2': 'images/ec3135d1cfef854bb75e4265222eba50ed2b0ed0d52b35742ddda3078c21d394.jpg', '3': 'images/fabe8c971816529b4c874def50c0f2e100520e70af95a726acf1450e01eea639.jpg', '6': 'images/72e917b6bbc2426132b4a78754ac69c62ff71dbba1420f390b68497c5bb6d90e.jpg', '5': 'images/fe074d7e6b9aab3e309f6ad1ffdc5778d949aecc4bf0867ae31b1e7e1ffc94eb.jpg'} | {} | {} | {} | ['images/72e917b6bbc2426132b4a78754ac69c62ff71dbba1420f390b68497c5bb6d90e.jpg', 'images/ec3135d1cfef854bb75e4265222eba50ed2b0ed0d52b35742ddda3078c21d394.jpg', 'We prepare 100 randomized prompts and compute the partial derivative norms similarly to Section 3.2. To ensure the prompts are differently distributed to training ones, we build each prompt as a sequence of 50 to 100 random words, resulting in meaningless sentences. For each prompt, we compute its chunk partial derivative norms, then average over 100 prompts. Figure 6 shows interesting results. It reveals a robust correlation between first-demonstration clustering and the utilization of the causal attention mask. Specifically, the importance of beginning tokens is markedly elevated when, and only when, the causal attention mask is applied, which aligns with the findings presented in Proposition 4.1. On the other hand, the case for last-demonstration is more complex. While the importance of ending tokens remains distinctively high when sinusoidal positional encoding is employed in the absence of a causal attention mask, this phenomenon is not observed for rotary and trainable positional encoding. 
This suggests that the importance of ending tokens is influenced by the interplay between the causal structure and the choice of positional encoding method. '] | 2ce088d4fbe67d6821915184112e5defdd7a3bcfcfe7b9ce6b34a0b27eb435bc | b8fc178ed7dc8207c662d4ba992e64d9a28fc8ee |
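The chunk partial-derivative norms used in the quoted experiment reduce per-token input gradients to one norm per chunk of demonstrations and then average across prompts. A sketch that assumes the per-token gradient magnitudes have already been obtained from backprop (here one scalar per token for brevity):

```python
def chunk_gradient_norms(token_grads, chunk_size):
    """L2 norm of the input-gradient entries within each chunk of tokens."""
    norms = []
    for start in range(0, len(token_grads), chunk_size):
        chunk = token_grads[start:start + chunk_size]
        norms.append(sum(g * g for g in chunk) ** 0.5)
    return norms

def mean_over_prompts(per_prompt_norms):
    """Average the k-th chunk norm across prompts; all prompts are assumed
    to produce the same number of chunks."""
    n = len(per_prompt_norms)
    return [sum(p[k] for p in per_prompt_norms) / n
            for k in range(len(per_prompt_norms[0]))]
```

Elevated averaged norms for the first and last chunks relative to the middle ones is exactly the first- and last-demonstration clustering pattern the rebuttal describes.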
explanation | Does the method work with real-world images? | Our work works well with real-world images (see the first two samples of Figure 4, all three samples of Figure 5, first two samples of Figure 6, and all the samples of Figure 7). | ['Figure 4', 'Figure 5', 'Figure 6', 'Figure 7'] | ['images/9b75a55929abeeee0f970442e7358f841aba7019bab5cdac23752b0c2ed34f32.jpg', 'images/ece6ed302a7295cf3813537e94d26a34f41157b56cd355d789ada87b42bac8ea.jpg', 'images/4894814557a8f53513b6310e3ee6de20a59c21ba809ee7c937ed9716f8450cb1.jpg', 'images/4302891effddf7411089361e222e4f6d69e2c5ade47430b9764184817e08b1a6.jpg'] | ['figure'] | 4 | 1 | 5 | {} | {} | {'images/4894814557a8f53513b6310e3ee6de20a59c21ba809ee7c937ed9716f8450cb1.jpg': '6', 'images/c4510071d04ee16399d200e62fee65a7c0007882c555daff9368e23ae77f2c23.jpg': '3', 'images/4302891effddf7411089361e222e4f6d69e2c5ade47430b9764184817e08b1a6.jpg': '7', 'images/ece6ed302a7295cf3813537e94d26a34f41157b56cd355d789ada87b42bac8ea.jpg': '5', 'images/9b75a55929abeeee0f970442e7358f841aba7019bab5cdac23752b0c2ed34f32.jpg': '4'} | {'6': 'images/4894814557a8f53513b6310e3ee6de20a59c21ba809ee7c937ed9716f8450cb1.jpg', '3': 'images/c4510071d04ee16399d200e62fee65a7c0007882c555daff9368e23ae77f2c23.jpg', '7': 'images/4302891effddf7411089361e222e4f6d69e2c5ade47430b9764184817e08b1a6.jpg', '5': 'images/ece6ed302a7295cf3813537e94d26a34f41157b56cd355d789ada87b42bac8ea.jpg', '4': 'images/9b75a55929abeeee0f970442e7358f841aba7019bab5cdac23752b0c2ed34f32.jpg'} | {} | {} | {} | ['images/c4510071d04ee16399d200e62fee65a7c0007882c555daff9368e23ae77f2c23.jpg'] | b607fe6e943eefd103b89382aad8a02304a8098893da7dd0d8060f6c1189ad21 | dc4965f7e90b8b1f74b0f2cf392194fdb07ae1ab |
explanation | What are the reasons for the marginal accuracy improvements observed in the ablation studies? | For the performance improvement of the model, it is important to highlight that many of the baselines we selected are recent and highly competitive models, making accuracy improvements both challenging and meaningful. Regarding the relatively marginal improvements observed in the ablation studies, this is because we conducted experiments in four aspects within the ablation study: The first two aspects focus on analyzing the impact of the number of segments and encoders in PSformer. From Table 3 and Table 4, the observed changes in these areas are indeed minor (with an average MSE variation of 0.002 on the ETTh1 and ETTm1 datasets), demonstrating that PSformer is robust to these two critical hyperparameters. The latter two aspects involve ablation studies on the model's key innovations: parameter sharing and segment attention. From Table 5 and Table 6, it can be observed that the variances in metrics are highly significant. Specifically, parameter sharing achieved an average MSE reduction of 0.016 on ETTm1, while SegAtt achieved an average MSE reduction of 0.017 on ETTh1. These results demonstrate that the two core contributions of PSformer significantly improve its performance, making it more competitive compared to a wide range of baseline models. | ['Table 3', 'Table 4', 'Table 5', 'Table 6'] | ['images/aa351033b8cf75db6e07454ead56e7b665fae04b673800e8f2558e4d54cef916.jpg', 'images/1f1078f753f79ee4153f96eee248fd1500d4c26e7a555df9cf2f32b3f99ab65b.jpg', 'images/9309bdd10a5e15d03da92fbb58615df58674e8dbbcf7250ca71335513ecb2cba.jpg', 'images/51c306a362a485c4852f716f2bfcf255a948e69e522101b7a27c655d91600da9.jpg'] | ['table'] | 4 | 1 | 5 | {'After passing through n layers of the PSformer Encoder, the final output is Xpred = Xout W^F, where Xpred ∈ R^{M×F}, and W^F ∈ R^{L×F} is a linear mapping, where F is the prediction length. 
The Xpred is the final output of the PSformer model. The PSformer structure does not use positional encoding, as the segment attention mixes local spatiotemporal information. We discuss this in more detail in Appendix A.6 and Appendix B.8. ': '1'} | {'1': 'After passing through n layers of the PSformer Encoder, the final output is Xpred = Xout W^F, where Xpred ∈ R^{M×F}, and W^F ∈ R^{L×F} is a linear mapping, where F is the prediction length. The Xpred is the final output of the PSformer model. The PSformer structure does not use positional encoding, as the segment attention mixes local spatiotemporal information. We discuss this in more detail in Appendix A.6 and Appendix B.8. '} | {} | {} | {'images/51c306a362a485c4852f716f2bfcf255a948e69e522101b7a27c655d91600da9.jpg': '6', 'images/aa351033b8cf75db6e07454ead56e7b665fae04b673800e8f2558e4d54cef916.jpg': '3', 'images/1f1078f753f79ee4153f96eee248fd1500d4c26e7a555df9cf2f32b3f99ab65b.jpg': '4', 'images/9309bdd10a5e15d03da92fbb58615df58674e8dbbcf7250ca71335513ecb2cba.jpg': '5'} | {'6': 'images/51c306a362a485c4852f716f2bfcf255a948e69e522101b7a27c655d91600da9.jpg', '3': 'images/aa351033b8cf75db6e07454ead56e7b665fae04b673800e8f2558e4d54cef916.jpg', '4': 'images/1f1078f753f79ee4153f96eee248fd1500d4c26e7a555df9cf2f32b3f99ab65b.jpg', '5': 'images/9309bdd10a5e15d03da92fbb58615df58674e8dbbcf7250ca71335513ecb2cba.jpg'} | {} | ['After passing through n layers of the PSformer Encoder, the final output is Xpred = Xout W^F, where Xpred ∈ R^{M×F}, and W^F ∈ R^{L×F} is a linear mapping, where F is the prediction length. The Xpred is the final output of the PSformer model. The PSformer structure does not use positional encoding, as the segment attention mixes local spatiotemporal information. We discuss this in more detail in Appendix A.6 and Appendix B.8. '] | 8abcf8cbc84e27faa2a3473349a878f22d2e9d285585994ea6deb78140bf8142 | e69a59c151ec85e9a7265a99a50bc763aa6cf326
explanation | What is the motivation for introducing an uncertainty-aware exploration strategy? | We have updated the abstract to clarify the motivation of our work. As further elaborated in the introduction, most existing methods treat recommendation as a static process, which prevents them from effectively accounting for users’ evolving preferences. Sequential recommendation methods address this limitation to some extent by leveraging previously interacted items to capture users’ dynamic behavior. However, prior RL-based recommender system models largely rely on standard exploration strategies, such as ε-greedy, which are less effective in scenarios with a large item space and sparse reward signals due to limited user interactions. As a result, these methods may struggle to learn an optimal policy that adequately captures users’ evolving preferences and achieves the maximum expected reward over the long term. The qualitative results presented in Figure 1 and Table 1 illustrate the limitations of existing approaches and further highlight the need for a systematic, uncertainty-aware exploration strategy. | ['Figure 1', 'Table 1'] | ['images/fa5fccc5987c17ab6cac4b5db3a07f8ec93a37436f6e558577aba21b8cbbcf4f.jpg', 'images/9b094692d407a6efcb89998338d3da6e042001fee571511e0a247c7e41638bc3.jpg'] | ['mixed'] | 2 | 3 | 5 | {'where ratingu,i is the user assigned rating, τ is the threshold to identify if a user provided rating is positive. Evidential reward aggregates the recommended items’ rating as a traditional reward r balanced with their vacuity predictions as a measure of information gain, denoted as an uncertainty regularizer R. During testing, for item i′ not appearing in user u’s interaction history Hu, a neutral rating ratingu,i′ = τ will be assigned to give neutral feedback. ': '1'} | {'1': 'where ratingu,i is the user assigned rating, τ is the threshold to identify if a user provided rating is positive. 
Evidential reward aggregates the recommended items’ rating as a traditional reward r balanced with their vacuity predictions as a measure of information gain, denoted as an uncertainty regularizer R. During testing, for item i′ not appearing in user u’s interaction history Hu, a neutral rating ratingu,i′ = τ will be assigned to give neutral feedback. '} | {'images/df2c9e1253936ea04152414231d474bf9ca9030048f1ea7acdaa739044d37396.jpg': '2', 'images/fa5fccc5987c17ab6cac4b5db3a07f8ec93a37436f6e558577aba21b8cbbcf4f.jpg': '1'} | {'2': 'images/df2c9e1253936ea04152414231d474bf9ca9030048f1ea7acdaa739044d37396.jpg', '1': 'images/fa5fccc5987c17ab6cac4b5db3a07f8ec93a37436f6e558577aba21b8cbbcf4f.jpg'} | {'images/0d9baf01287057b51e00bb8299a7e6f498abf529debe3d15976263037e9bbbb1.jpg': '3', 'images/9b094692d407a6efcb89998338d3da6e042001fee571511e0a247c7e41638bc3.jpg': '1'} | {'3': 'images/0d9baf01287057b51e00bb8299a7e6f498abf529debe3d15976263037e9bbbb1.jpg', '1': 'images/9b094692d407a6efcb89998338d3da6e042001fee571511e0a247c7e41638bc3.jpg'} | {} | ['images/df2c9e1253936ea04152414231d474bf9ca9030048f1ea7acdaa739044d37396.jpg', 'images/0d9baf01287057b51e00bb8299a7e6f498abf529debe3d15976263037e9bbbb1.jpg', 'where ratingu,i is the user assigned rating, τ is the threshold to identify if a user provided rating is positive. Evidential reward aggregates the recommended items’ rating as a traditional reward r balanced with their vacuity predictions as a measure of information gain, denoted as an uncertainty regularizer R. During testing, for item i′ not appearing in user u’s interaction history Hu, a neutral rating ratingu,i′ = τ will be assigned to give neutral feedback. '] | a833561f0ef7484e750c82f53cfc0766535b7e1c1697d5c3a86b2caa0fa0ec11 | 01bc18d9733b34622eff9efd4422fca8f18b069c |
explanation | On tasks where the model already performs well, does C&P fine-tuning lead to a decline in performance? | According to Table 6, InternVL2-2B and InternVL2-8B show minor declines on a few datasets where they originally performed well. We attribute this to the possibility that both cognitive and perceptual responses may occasionally fail simultaneously while maintaining consistency, as illustrated in Figure 4a. However, our analysis indicates that such cases are rare. Moreover, considering the significant improvement in C&P consistency after fine-tuning, these 'trade-offs' are acceptable. | ['Table 6', 'Figure 4'] | ['images/bcb639e76fd6dc9fbd9ce9c46b15d6202f6959795d950b3d155df1b78c839daf.jpg', 'images/cc3acc6bad6037c01587df84ec064bd17d72732e9e58af6558ab04c50b980ae2.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Table 2 shows the evaluation results. Overall, closed-source models have higher C&P consistency compared to open-source models. Qwen-VL-Max achieves the highest C&P consistency at 79.98%, followed by GPT-4o at 68.60%. Among the open-source models, Qwen-VL-Chat demonstrates the ': '1', 'Notably, OCR annotations are required in Section 2.3. For the DocVQA dataset, the official OCR annotations are used, while the other datasets use OCR annotations produced by Duguang OCR1. ': '2'} | {'1': 'Table 2 shows the evaluation results. Overall, closed-source models have higher C&P consistency compared to open-source models. Qwen-VL-Max achieves the highest C&P consistency at 79.98%, followed by GPT-4o at 68.60%. Among the open-source models, Qwen-VL-Chat demonstrates the ', '2': 'Notably, OCR annotations are required in Section 2.3. For the DocVQA dataset, the official OCR annotations are used, while the other datasets use OCR annotations produced by Duguang OCR1. 
'} | {'images/cc3acc6bad6037c01587df84ec064bd17d72732e9e58af6558ab04c50b980ae2.jpg': '4'} | {'4': 'images/cc3acc6bad6037c01587df84ec064bd17d72732e9e58af6558ab04c50b980ae2.jpg'} | {'images/bcb639e76fd6dc9fbd9ce9c46b15d6202f6959795d950b3d155df1b78c839daf.jpg': '6', 'images/2947bf84e1016e8842d69621a930d869f093ca5b459b699f7a73c36a6b6fc8f9.jpg': '4'} | {'6': 'images/bcb639e76fd6dc9fbd9ce9c46b15d6202f6959795d950b3d155df1b78c839daf.jpg', '4': 'images/2947bf84e1016e8842d69621a930d869f093ca5b459b699f7a73c36a6b6fc8f9.jpg'} | {} | ['Table 2 shows the evaluation results. Overall, closed-source models have higher C&P consistency compared to open-source models. Qwen-VL-Max achieves the highest C&P consistency at 79.98%, followed by GPT-4o at 68.60%. Among the open-source models, Qwen-VL-Chat demonstrates the ', 'images/2947bf84e1016e8842d69621a930d869f093ca5b459b699f7a73c36a6b6fc8f9.jpg', 'Notably, OCR annotations are required in Section 2.3. For the DocVQA dataset, the official OCR annotations are used, while the other datasets use OCR annotations produced by Duguang OCR1. '] | 20fe7fb2e826aaf4eb4a1389904161ab54b16c90753811caa2ca465b23ab243b | 08af6e3bbee2dba7d63f9faef1d3963bebb02a2c |
explanation | What is the relationship between N, n, and m? | In Table 2, N = m = n, where m and n represent the sample sizes for each of the two distributions being tested. In Figure 5, however, N = m + n, which represents the total sample size received for the experiment. | ['Table 2', 'Figure 5'] | ['images/2950f7b20dfe365b5c28b1ed08c83ceb608b33783477a621623b59347cc6318d.jpg', 'images/692e64130ff3aac0cde8b87c3679d8fedebd43fb50805bb63a333a330a465d76.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Theorem 5.1. (Lopez-Paz & Oquab, 2018b) Let f ′ ∈Cϕ : X →{0, 1} be the SSL-C2ST classifier model. Let H0 : t = 1 and H1 : t = 1 −ϵ(P, Q; f ′), where t is the test accuracy and ϵ(P, Q; f ′) = Pr(zi,li)∼D [f ′(zi) ̸= li] /2 ∈ [0, 1/2] represents the inability of f ′ to distinguish between P and Q. The test power of tˆ is: ': '1', 'where nte = |Ste| and I is the indicator function. Finally, we compute the p-value to determine if the test statistic is significantly greater than the random guessing accuracy, utilizing the approximate null distribution of C2ST outlined in Appendix D.1 and the permutation test discussed next. ': '2', 'and f ∗(zk) be the estimate of the conditional probability distribution I[p(lk = 1|zk) > 1/2], the statistic or the accuracy of the classifier f ∗on Ste can be written as: ': '3'} | {'1': 'Theorem 5.1. (Lopez-Paz & Oquab, 2018b) Let f ′ ∈Cϕ : X →{0, 1} be the SSL-C2ST classifier model. Let H0 : t = 1 and H1 : t = 1 −ϵ(P, Q; f ′), where t is the test accuracy and ϵ(P, Q; f ′) = Pr(zi,li)∼D [f ′(zi) ̸= li] /2 ∈ [0, 1/2] represents the inability of f ′ to distinguish between P and Q. The test power of tˆ is: ', '2': 'where nte = |Ste| and I is the indicator function. Finally, we compute the p-value to determine if the test statistic is significantly greater than the random guessing accuracy, utilizing the approximate null distribution of C2ST outlined in Appendix D.1 and the permutation test discussed next. 
', '3': 'and f ∗(zk) be the estimate of the conditional probability distribution I[p(lk = 1|zk) > 1/2], the statistic or the accuracy of the classifier f ∗on Ste can be written as: '} | {'images/692e64130ff3aac0cde8b87c3679d8fedebd43fb50805bb63a333a330a465d76.jpg': '5'} | {'5': 'images/692e64130ff3aac0cde8b87c3679d8fedebd43fb50805bb63a333a330a465d76.jpg'} | {'images/2950f7b20dfe365b5c28b1ed08c83ceb608b33783477a621623b59347cc6318d.jpg': '2'} | {'2': 'images/2950f7b20dfe365b5c28b1ed08c83ceb608b33783477a621623b59347cc6318d.jpg'} | {} | ['Theorem 5.1. (Lopez-Paz & Oquab, 2018b) Let f ′ ∈Cϕ : X →{0, 1} be the SSL-C2ST classifier model. Let H0 : t = 1 and H1 : t = 1 −ϵ(P, Q; f ′), where t is the test accuracy and ϵ(P, Q; f ′) = Pr(zi,li)∼D [f ′(zi) ̸= li] /2 ∈ [0, 1/2] represents the inability of f ′ to distinguish between P and Q. The test power of tˆ is: ', 'where nte = |Ste| and I is the indicator function. Finally, we compute the p-value to determine if the test statistic is significantly greater than the random guessing accuracy, utilizing the approximate null distribution of C2ST outlined in Appendix D.1 and the permutation test discussed next. ', 'and f ∗(zk) be the estimate of the conditional probability distribution I[p(lk = 1|zk) > 1/2], the statistic or the accuracy of the classifier f ∗on Ste can be written as: '] | 57d8aa3edeb2dbd5843784fdc93d50eda986568b887c34b5165e185e1aced37e | 117a7d1efe9b6cebaba614db86e709185420d408
explanation | How does the addition of cross-modal data impact the performance of your model? | We have conducted extensive experiments and ablation studies to demonstrate the benefits of adding cross-modal data under the same model structure and token budget. The results are presented in Table 1 and Figure 6. | ['Table 1', 'Figure 6'] | ['images/c33c24982b8e0230e8525a7adde26a5543eaa8420130b670cf13a875e8704f17.jpg', 'images/102bf6a906a8851cfdd1c79ca2ea69ab3c198ac7f4a8e6fd88b5466602dfdeab.jpg'] | ['mixed'] | 2 | 3 | 5 | {'BSM employs a single-nucleotide tokenizer with a vocabulary that includes nucleotides, amino acids, and special tokens. It uses an autoregressive architecture to model biological sequences such as genes and proteins. By learning next-token prediction, the model reasons over sequences causally and captures statistical patterns and dependencies in the training data, enabling effective representation and generation of biological sequences. Furthermore, the autoregressive architecture’s sequential nature effectively handles long-range dependencies, which is crucial in biological sequences like DNA, RNA, and proteins, where long-context information can reveal critical functional relationships or structural interactions. ': '1'} | {'1': 'BSM employs a single-nucleotide tokenizer with a vocabulary that includes nucleotides, amino acids, and special tokens. It uses an autoregressive architecture to model biological sequences such as genes and proteins. By learning next-token prediction, the model reasons over sequences causally and captures statistical patterns and dependencies in the training data, enabling effective representation and generation of biological sequences. Furthermore, the autoregressive architecture’s sequential nature effectively handles long-range dependencies, which is crucial in biological sequences like DNA, RNA, and proteins, where long-context information can reveal critical functional relationships or structural interactions. 
'} | {'images/2a282d81483ab9546bf759391883e174ba84c87d74eabaa34c26ce02fb1b988f.jpg': '5', 'images/102bf6a906a8851cfdd1c79ca2ea69ab3c198ac7f4a8e6fd88b5466602dfdeab.jpg': '6', 'images/39752a43862b401b88e5097b689efb2d95d26e7802d6f768274cf3b18def9b76.jpg': '3'} | {'5': 'images/2a282d81483ab9546bf759391883e174ba84c87d74eabaa34c26ce02fb1b988f.jpg', '6': 'images/102bf6a906a8851cfdd1c79ca2ea69ab3c198ac7f4a8e6fd88b5466602dfdeab.jpg', '3': 'images/39752a43862b401b88e5097b689efb2d95d26e7802d6f768274cf3b18def9b76.jpg'} | {'images/c33c24982b8e0230e8525a7adde26a5543eaa8420130b670cf13a875e8704f17.jpg': '1'} | {'1': 'images/c33c24982b8e0230e8525a7adde26a5543eaa8420130b670cf13a875e8704f17.jpg'} | {} | ['images/2a282d81483ab9546bf759391883e174ba84c87d74eabaa34c26ce02fb1b988f.jpg', 'images/39752a43862b401b88e5097b689efb2d95d26e7802d6f768274cf3b18def9b76.jpg', 'BSM employs a single-nucleotide tokenizer with a vocabulary that includes nucleotides, amino acids, and special tokens. It uses an autoregressive architecture to model biological sequences such as genes and proteins. By learning next-token prediction, the model reasons over sequences causally and captures statistical patterns and dependencies in the training data, enabling effective representation and generation of biological sequences. Furthermore, the autoregressive architecture’s sequential nature effectively handles long-range dependencies, which is crucial in biological sequences like DNA, RNA, and proteins, where long-context information can reveal critical functional relationships or structural interactions. '] | f7db81ab5514b0fd0666d31b7dfd586921d981fa3d40fec604ea5fb3d76b12be | 24fd5d6b134b0c6def366de2ca6cae4543e39f62 |
explanation | How does the proposed model handle large deformations in medical images? | Our new draft currently extends Table 1 with a full rigid transformation setting including all 3 transformations: rotation, scaling, and translation. However, we would like to point out that across all settings of Experiment 1, we apply Brownian noise deformation at multiple scales to ensure the synthetic transformation is not strictly rigid. The extent to which this local deformation impacts the tissue structures of Experiment 1 can be seen in the qualitative results of Figure 3 of the manuscript. | ['Table 1', 'Figure 3'] | ['images/bd332f00b08d46c3fe079807993e810711ef42010efacdfaf9ef76c4f2dfb014.jpg', 'images/416fa5fe6a345128a602cf05f287b6bb8b06c438dbb519062a04f760b6c7a49e.jpg'] | ['mixed'] | 2 | 3 | 5 | {'In this section, we first formally establish the limitations imposed on deformable image registration by the grid constraints of Eulerian frameworks. Afterwards, we establish a Lagrangian formulation that does not make any grid assumptions (section 2.1). Within this context, we highlight the advantages offered by geometric deep learning in modeling deformations as interactions between free-floating features (section 2.2). Next, we propose a data-driven form of local interpolation, which facilitates multi-scale deformation modeling by learning to propagate deformations across resolutions (section 2.3). Finally, we combine these ideas to construct an end-to-end trainable neural network capable of learning deformable registration in continuous domains in a coarse-to-fine fashion (section 2.4). ': '1', 'A common necessary preprocessing technique employed to mitigate this issue involves an exhaustive search for an initial affine alignment. This reduces the degrees of freedom in the transformation parameters by guaranteeing that similar features are captured in a consistent spatial context, thus reducing the range of representations experienced by the network. 
Recent works combat the misalignmentdependent complexity by incorporating transformer layers throughout the network (Chen et al., 2022; 2023; Liu et al., 2022; Meng et al., 2022; Wang et al., 2023; Zhu & Lu, 2022). This enables greater flexibility in the feature extraction process as the transformer layer’s attention mechanism is able to establish non-local spatial relationships at the cost of increased learnable parameters. Similarly, cascading approaches have shown increased accuracy by recovering the misalignment progressively, modeling the transformation as a sequence of deformations (Hu et al., 2022; Sandkühler et al., 2019; Zhao et al., 2019). ': '2'} | {'1': 'In this section, we first formally establish the limitations imposed on deformable image registration by the grid constraints of Eulerian frameworks. Afterwards, we establish a Lagrangian formulation that does not make any grid assumptions (section 2.1). Within this context, we highlight the advantages offered by geometric deep learning in modeling deformations as interactions between free-floating features (section 2.2). Next, we propose a data-driven form of local interpolation, which facilitates multi-scale deformation modeling by learning to propagate deformations across resolutions (section 2.3). Finally, we combine these ideas to construct an end-to-end trainable neural network capable of learning deformable registration in continuous domains in a coarse-to-fine fashion (section 2.4). ', '2': 'A common necessary preprocessing technique employed to mitigate this issue involves an exhaustive search for an initial affine alignment. This reduces the degrees of freedom in the transformation parameters by guaranteeing that similar features are captured in a consistent spatial context, thus reducing the range of representations experienced by the network. 
Recent works combat the misalignmentdependent complexity by incorporating transformer layers throughout the network (Chen et al., 2022; 2023; Liu et al., 2022; Meng et al., 2022; Wang et al., 2023; Zhu & Lu, 2022). This enables greater flexibility in the feature extraction process as the transformer layer’s attention mechanism is able to establish non-local spatial relationships at the cost of increased learnable parameters. Similarly, cascading approaches have shown increased accuracy by recovering the misalignment progressively, modeling the transformation as a sequence of deformations (Hu et al., 2022; Sandkühler et al., 2019; Zhao et al., 2019). '} | {'images/416fa5fe6a345128a602cf05f287b6bb8b06c438dbb519062a04f760b6c7a49e.jpg': '3'} | {'3': 'images/416fa5fe6a345128a602cf05f287b6bb8b06c438dbb519062a04f760b6c7a49e.jpg'} | {'images/bd332f00b08d46c3fe079807993e810711ef42010efacdfaf9ef76c4f2dfb014.jpg': '1', 'images/72348eec94135ab7a03b205acb09c70e2e7df45331db3948baf5b1e8224a4a18.jpg': '2'} | {'1': 'images/bd332f00b08d46c3fe079807993e810711ef42010efacdfaf9ef76c4f2dfb014.jpg', '2': 'images/72348eec94135ab7a03b205acb09c70e2e7df45331db3948baf5b1e8224a4a18.jpg'} | {} | ['images/72348eec94135ab7a03b205acb09c70e2e7df45331db3948baf5b1e8224a4a18.jpg', 'A common necessary preprocessing technique employed to mitigate this issue involves an exhaustive search for an initial affine alignment. This reduces the degrees of freedom in the transformation parameters by guaranteeing that similar features are captured in a consistent spatial context, thus reducing the range of representations experienced by the network. Recent works combat the misalignmentdependent complexity by incorporating transformer layers throughout the network (Chen et al., 2022; 2023; Liu et al., 2022; Meng et al., 2022; Wang et al., 2023; Zhu & Lu, 2022). 
This enables greater flexibility in the feature extraction process as the transformer layer’s attention mechanism is able to establish non-local spatial relationships at the cost of increased learnable parameters. Similarly, cascading approaches have shown increased accuracy by recovering the misalignment progressively, modeling the transformation as a sequence of deformations (Hu et al., 2022; Sandkühler et al., 2019; Zhao et al., 2019). ', 'In this section, we first formally establish the limitations imposed on deformable image registration by the grid constraints of Eulerian frameworks. Afterwards, we establish a Lagrangian formulation that does not make any grid assumptions (section 2.1). Within this context, we highlight the advantages offered by geometric deep learning in modeling deformations as interactions between free-floating features (section 2.2). Next, we propose a data-driven form of local interpolation, which facilitates multi-scale deformation modeling by learning to propagate deformations across resolutions (section 2.3). Finally, we combine these ideas to construct an end-to-end trainable neural network capable of learning deformable registration in continuous domains in a coarse-to-fine fashion (section 2.4). '] | a6b72d9a6bc04b0c1dffad81c4bc17a49f27acd5641db79d0e693ac20938121e | 2e71063092065f2b211c52664560426b1e04c5ef |
explanation | How does the CoTFormer model compare to the standard Transformer in terms of performance? | The accuracy of the standard Transformer in Table 1 can indicate the distance between the CoTFormer and the standard Transformer. Therefore, it is necessary to add the standard Transformer to Figure 2. | ['Table 1', 'Figure 2'] | ['images/bd79fb6eebc5aefb653b4b480e9e9c98751350fed8a08da40c192caa576035f5.jpg', 'images/78d36fde1a32e35714e8df05588902cacc90cd989890adaa24267f1cafab50a9.jpg'] | ['mixed'] | 2 | 3 | 5 | {} | {} | {'images/e3201107214ce7a424118b6fd025aea53e7dfe9577b425b4c9c7787ead0069ae.jpg': '4', 'images/5a7031f87b40f338004cf846e40e18025b7e98d528cdd79cdfebd6680fee6792.jpg': '3', 'images/f2bd897de74379c2b125865a2fdc18f79b9187744fe9c4c27dc0fe5afae18fdc.jpg': '5', 'images/78d36fde1a32e35714e8df05588902cacc90cd989890adaa24267f1cafab50a9.jpg': '2'} | {'4': 'images/e3201107214ce7a424118b6fd025aea53e7dfe9577b425b4c9c7787ead0069ae.jpg', '3': 'images/5a7031f87b40f338004cf846e40e18025b7e98d528cdd79cdfebd6680fee6792.jpg', '5': 'images/f2bd897de74379c2b125865a2fdc18f79b9187744fe9c4c27dc0fe5afae18fdc.jpg', '2': 'images/78d36fde1a32e35714e8df05588902cacc90cd989890adaa24267f1cafab50a9.jpg'} | {'images/bd79fb6eebc5aefb653b4b480e9e9c98751350fed8a08da40c192caa576035f5.jpg': '1'} | {'1': 'images/bd79fb6eebc5aefb653b4b480e9e9c98751350fed8a08da40c192caa576035f5.jpg'} | {} | ['images/e3201107214ce7a424118b6fd025aea53e7dfe9577b425b4c9c7787ead0069ae.jpg', 'images/5a7031f87b40f338004cf846e40e18025b7e98d528cdd79cdfebd6680fee6792.jpg', 'images/f2bd897de74379c2b125865a2fdc18f79b9187744fe9c4c27dc0fe5afae18fdc.jpg'] | 6998e59fbcab22b1bc6ee609d88666efa0ea344c3bc37844ea2e058426bcfe0c | 3a439959ac98f4b2f52116ae11b370605e09b606 |
explanation | What are the performance differences between the SSF and MSF strategies? | First, we present the SSF and MSF visualization comparison in Figure 2. The SSF has a single change, while the MSF has a variety of changes. Second, in Table 6 we perform ablation experiments of SSF and MSF on segmentation, and we analyze why MSF is more suitable for segmentation. | ['Figure 2', 'Table 6'] | ['images/260264af09a8f3445bbdd80fdeec2b07693b431df57ccf3eae6333d168781a3a.jpg', 'images/5ae993f2b704b12e16b72c4e9ac2a9756bf3c36653746a6f51959b728caed000.jpg'] | ['mixed'] | 2 | 3 | 5 | {'To simulate the distortion and deformation of an object, we have chosen to use the Sine function as our residual function. The inherent periodic nature of the Sine function allows us to adjust the number of regions that are deformed with precision. Additionally, by manipulating the amplitude of the Sine function, we can precisely control the intensity of the deformation. This displacement field, generated by the Sine function, effectively distorts and deforms specific local regions of the point cloud data without altering the overall topology. As a result, the augmented point cloud data contains more intricate and detailed local features. The standard Sine function is shown below: ': '1'} | {'1': 'To simulate the distortion and deformation of an object, we have chosen to use the Sine function as our residual function. The inherent periodic nature of the Sine function allows us to adjust the number of regions that are deformed with precision. Additionally, by manipulating the amplitude of the Sine function, we can precisely control the intensity of the deformation. This displacement field, generated by the Sine function, effectively distorts and deforms specific local regions of the point cloud data without altering the overall topology. As a result, the augmented point cloud data contains more intricate and detailed local features. 
The standard Sine function is shown below: '} | {'images/01b46f660d690ae0f356567e49caf8b198e9bc41fea3c545d5a72c54bbc6bcd6.jpg': '4', 'images/260264af09a8f3445bbdd80fdeec2b07693b431df57ccf3eae6333d168781a3a.jpg': '2'} | {'4': 'images/01b46f660d690ae0f356567e49caf8b198e9bc41fea3c545d5a72c54bbc6bcd6.jpg', '2': 'images/260264af09a8f3445bbdd80fdeec2b07693b431df57ccf3eae6333d168781a3a.jpg'} | {'images/5ae993f2b704b12e16b72c4e9ac2a9756bf3c36653746a6f51959b728caed000.jpg': '6', 'images/8cc72a64880c6c21759991d0f88f1ec620fd16727c49450e5f7a67b51eb99754.jpg': '5'} | {'6': 'images/5ae993f2b704b12e16b72c4e9ac2a9756bf3c36653746a6f51959b728caed000.jpg', '5': 'images/8cc72a64880c6c21759991d0f88f1ec620fd16727c49450e5f7a67b51eb99754.jpg'} | {} | ['images/8cc72a64880c6c21759991d0f88f1ec620fd16727c49450e5f7a67b51eb99754.jpg', 'To simulate the distortion and deformation of an object, we have chosen to use the Sine function as our residual function. The inherent periodic nature of the Sine function allows us to adjust the number of regions that are deformed with precision. Additionally, by manipulating the amplitude of the Sine function, we can precisely control the intensity of the deformation. This displacement field, generated by the Sine function, effectively distorts and deforms specific local regions of the point cloud data without altering the overall topology. As a result, the augmented point cloud data contains more intricate and detailed local features. The standard Sine function is shown below: ', 'images/01b46f660d690ae0f356567e49caf8b198e9bc41fea3c545d5a72c54bbc6bcd6.jpg'] | 915d3f9f1702d60c5e98d2340e38873dd76632da2b5d1e3e1d7a9dfb85c2f5fc | 3b7721717f4d4bb039675982f8604ef8379258d5 |
explanation | How does the GSA-R2R dataset address the diversity of real-world environments? | We have made significant efforts to expand the diversity of GSA-R2R to include 20 distinct scene types, compared to just six in R2R. This diversity covers a wide range of daily scenarios and exceeds that of existing embodied navigation datasets, as highlighted in Table 1 of our paper. We already include multiple commercial spaces such as cinemas, shops, and restaurants, as illustrated in Figure 2 of our paper. | ['Table 1', 'Figure 2'] | ['images/52bf352cbd52ddb91e50272965f8dfd54170eea96c743cb3adf62eba877558ce.jpg', 'images/6178eda5ffcafe2b6b73084fd1941e5d713fc12a75f8318278a58a5aedf8cf64.jpg'] | ['mixed'] | 2 | 3 | 5 | {} | {} | {'images/1a425bbe2763a3120894c3389ccec7ee600b5454cb5de1118f2041dea2aabfeb.jpg': '1', 'images/34842d927b45d096db0f8485a57e2098bd1596a0bd7265f2e4fd1f7720206aaa.jpg': '4', 'images/6178eda5ffcafe2b6b73084fd1941e5d713fc12a75f8318278a58a5aedf8cf64.jpg': '2'} | {'1': 'images/1a425bbe2763a3120894c3389ccec7ee600b5454cb5de1118f2041dea2aabfeb.jpg', '4': 'images/34842d927b45d096db0f8485a57e2098bd1596a0bd7265f2e4fd1f7720206aaa.jpg', '2': 'images/6178eda5ffcafe2b6b73084fd1941e5d713fc12a75f8318278a58a5aedf8cf64.jpg'} | {'images/52bf352cbd52ddb91e50272965f8dfd54170eea96c743cb3adf62eba877558ce.jpg': '1', 'images/d88d1a88a655df45e8d41933a6a1b701c4ac4d7240316f9d40b38df2b625399c.jpg': '4'} | {'1': 'images/52bf352cbd52ddb91e50272965f8dfd54170eea96c743cb3adf62eba877558ce.jpg', '4': 'images/d88d1a88a655df45e8d41933a6a1b701c4ac4d7240316f9d40b38df2b625399c.jpg'} | {} | ['images/1a425bbe2763a3120894c3389ccec7ee600b5454cb5de1118f2041dea2aabfeb.jpg', 'images/d88d1a88a655df45e8d41933a6a1b701c4ac4d7240316f9d40b38df2b625399c.jpg', 'images/34842d927b45d096db0f8485a57e2098bd1596a0bd7265f2e4fd1f7720206aaa.jpg'] | 79321511912b2964f578557dbd5b0e3962b310f5fe14ce7b8b3ecb7cee6bd556 | 466366db3c29af46db9db97a71f1c21c2940ea95 |
explanation | What is the exact computational time/cost for the proposed method compared to existing MetaBBO methods? | We have demonstrated in the experiments (Figure 3, zero-shot performance) that the trained NeurELA can be seamlessly integrated into existing MetaBBO methods to provide effective dynamic landscape analysis, without further re-training. We also provide the inference wall time comparison in Table 1 to compare the computational cost required to obtain the landscape feature by our NeurELA and traditional ELA, where the results show that NeurELA requires less processing time than traditional ELA, particularly for the high-dimensional problem. | ['Figure 3', 'Table 1'] | ['images/404469c60be80871de0a0cac273007fcc1f18dfb0d7cdc107fc8c79a31f770b5.jpg', 'images/2860d185ad0551afe2aabb501992df2d0b7f46bca5e5e298a5678d318f671126.jpg'] | ['mixed'] | 2 | 3 | 5 | {'PIE. PIE normalizes observation ot using two min-max normalization operations: first on the candidate solutions {Xit}im=1 against the search range, and second on the objective values {yit}im=1 using the extremum values at time step t. This ensures unified representation and generalization by scaling all values to [0, 1]. For a d-dimensional optimization problem, the normalized observations ot ': '1', 'Model Complexity (RQ6). We discuss the relationship between the model complexity and the zero-shot performance (unseen MetaBBO algorithm & problem sets) of our NeurELA. Concretely, We pre-train NeurELA under 6 different model complexities, with various hidden dimensions, i.e., h = (16, 64), and the number of the Ts-Attn module, i.e., l = (1, 3, 5). We additionally pre-train three MLP baselines, which substitute the Ts-Attn module in NeurELA with a linear feed-forward layer, which holds a shape of h × h, h = (16, 64, 128). 
We report both the zero-shot performance (yaxis) and the computational efficiency (x-axis, presented as the consumed wall time for computing the landscape features) in Figure 7, where the dashed lines denote the performance and wall-time of the Original baseline. #para denotes the number of the learnable parameters. The results show that: 1) a significant performance gap is observed between the MLP baselines and our Ts-Attn module (h = 16, l = 1). It validates the effectiveness of our Ts-Attn design, which enhances the feature extraction of our NeurELA by encouraging the information sharing at both the cross-solution and cross-dimension levels; 2) As the model complexity increases, the performance of the Ts-Attn module drops rapidly. It reveals that the increased number of learnable parameters challenges the optimization ability of the backend ES. Given the limited computational resources, it is difficult to identify the optimal parameters θ∗. ': '2'} | {'1': 'PIE. PIE normalizes observation ot using two min-max normalization operations: first on the candidate solutions {Xit}im=1 against the search range, and second on the objective values {yit}im=1 using the extremum values at time step t. This ensures unified representation and generalization by scaling all values to [0, 1]. For a d-dimensional optimization problem, the normalized observations ot ', '2': 'Model Complexity (RQ6). We discuss the relationship between the model complexity and the zero-shot performance (unseen MetaBBO algorithm & problem sets) of our NeurELA. Concretely, We pre-train NeurELA under 6 different model complexities, with various hidden dimensions, i.e., h = (16, 64), and the number of the Ts-Attn module, i.e., l = (1, 3, 5). We additionally pre-train three MLP baselines, which substitute the Ts-Attn module in NeurELA with a linear feed-forward layer, which holds a shape of h × h, h = (16, 64, 128). 
We report both the zero-shot performance (yaxis) and the computational efficiency (x-axis, presented as the consumed wall time for computing the landscape features) in Figure 7, where the dashed lines denote the performance and wall-time of the Original baseline. #para denotes the number of the learnable parameters. The results show that: 1) a significant performance gap is observed between the MLP baselines and our Ts-Attn module (h = 16, l = 1). It validates the effectiveness of our Ts-Attn design, which enhances the feature extraction of our NeurELA by encouraging the information sharing at both the cross-solution and cross-dimension levels; 2) As the model complexity increases, the performance of the Ts-Attn module drops rapidly. It reveals that the increased number of learnable parameters challenges the optimization ability of the backend ES. Given the limited computational resources, it is difficult to identify the optimal parameters θ∗. '} | {'images/404469c60be80871de0a0cac273007fcc1f18dfb0d7cdc107fc8c79a31f770b5.jpg': '3', 'images/1ea8a4c9f98bd3c072369dd6b23ed6a0b0386c676b238e2f01bc9429d0b2366e.jpg': '2'} | {'3': 'images/404469c60be80871de0a0cac273007fcc1f18dfb0d7cdc107fc8c79a31f770b5.jpg', '2': 'images/1ea8a4c9f98bd3c072369dd6b23ed6a0b0386c676b238e2f01bc9429d0b2366e.jpg'} | {'images/2860d185ad0551afe2aabb501992df2d0b7f46bca5e5e298a5678d318f671126.jpg': '1'} | {'1': 'images/2860d185ad0551afe2aabb501992df2d0b7f46bca5e5e298a5678d318f671126.jpg'} | {} | ['Model Complexity (RQ6). We discuss the relationship between the model complexity and the zero-shot performance (unseen MetaBBO algorithm & problem sets) of our NeurELA. Concretely, We pre-train NeurELA under 6 different model complexities, with various hidden dimensions, i.e., h = (16, 64), and the number of the Ts-Attn module, i.e., l = (1, 3, 5). 
We additionally pre-train three MLP baselines, which substitute the Ts-Attn module in NeurELA with a linear feed-forward layer, which holds a shape of h × h, h = (16, 64, 128). We report both the zero-shot performance (yaxis) and the computational efficiency (x-axis, presented as the consumed wall time for computing the landscape features) in Figure 7, where the dashed lines denote the performance and wall-time of the Original baseline. #para denotes the number of the learnable parameters. The results show that: 1) a significant performance gap is observed between the MLP baselines and our Ts-Attn module (h = 16, l = 1). It validates the effectiveness of our Ts-Attn design, which enhances the feature extraction of our NeurELA by encouraging the information sharing at both the cross-solution and cross-dimension levels; 2) As the model complexity increases, the performance of the Ts-Attn module drops rapidly. It reveals that the increased number of learnable parameters challenges the optimization ability of the backend ES. Given the limited computational resources, it is difficult to identify the optimal parameters θ∗. ', 'PIE. PIE normalizes observation ot using two min-max normalization operations: first on the candidate solutions {Xit}im=1 against the search range, and second on the objective values {yit}im=1 using the extremum values at time step t. This ensures unified representation and generalization by scaling all values to [0, 1]. For a d-dimensional optimization problem, the normalized observations ot ', 'images/1ea8a4c9f98bd3c072369dd6b23ed6a0b0386c676b238e2f01bc9429d0b2366e.jpg'] | 33fa994b4d9460e0a41f23d63db78bbfe1e1a6b0222ca6e28c6ce212fffeef2c | 52338e0fa95ec6a5e01a939a36c8daed3211c494
explanation | What MARL settings are presented in the paper? | The MARL settings CooperativePong, PistonBall and Spread are presented in Table 1 and Figure 3. | ['Table 1', 'Figure 3'] | ['images/e326b6cd699b65230781ca064b7dc8e0c74518769469a10020f910ccf56ffa86.jpg', 'images/ff57fad4a640245576a93aca4d413d4fd042bfebf97ab72e198abc6cf0568753.jpg'] | ['mixed'] | 2 | 3 | 5 | {'The training plots for multi-agent environments are shown in Figure 3, following the same methodology. To further compare different scenarios, we allow both agents in CooperativePong to share the same policy. While in PistonBall and Spread, only the controller is centralized, and each of the actors—20 in PistonBall and 3 in Spread—learns its own policy. As in previous experiments, we observe that GRASP and ASC achieve similar performance. ': '1'} | {'1': 'The training plots for multi-agent environments are shown in Figure 3, following the same methodology. To further compare different scenarios, we allow both agents in CooperativePong to share the same policy. While in PistonBall and Spread, only the controller is centralized, and each of the actors—20 in PistonBall and 3 in Spread—learns its own policy. As in previous experiments, we observe that GRASP and ASC achieve similar performance. 
'} | {'images/d4bc4d7f26d85b616d283efaa11b51d547720393d059af8460c4943bbf79f3b0.jpg': '2', 'images/ff57fad4a640245576a93aca4d413d4fd042bfebf97ab72e198abc6cf0568753.jpg': '3'} | {'2': 'images/d4bc4d7f26d85b616d283efaa11b51d547720393d059af8460c4943bbf79f3b0.jpg', '3': 'images/ff57fad4a640245576a93aca4d413d4fd042bfebf97ab72e198abc6cf0568753.jpg'} | {'images/68745102ec79efac81ef48cbfa782ed2d3970ee106e08ed4f94f4daa3f353f7c.jpg': '2', 'images/e326b6cd699b65230781ca064b7dc8e0c74518769469a10020f910ccf56ffa86.jpg': '1'} | {'2': 'images/68745102ec79efac81ef48cbfa782ed2d3970ee106e08ed4f94f4daa3f353f7c.jpg', '1': 'images/e326b6cd699b65230781ca064b7dc8e0c74518769469a10020f910ccf56ffa86.jpg'} | {} | ['images/d4bc4d7f26d85b616d283efaa11b51d547720393d059af8460c4943bbf79f3b0.jpg', 'images/68745102ec79efac81ef48cbfa782ed2d3970ee106e08ed4f94f4daa3f353f7c.jpg', 'The training plots for multi-agent environments are shown in Figure 3, following the same methodology. To further compare different scenarios, we allow both agents in CooperativePong to share the same policy. While in PistonBall and Spread, only the controller is centralized, and each of the actors—20 in PistonBall and 3 in Spread—learns its own policy. As in previous experiments, we observe that GRASP and ASC achieve similar performance. '] | f48df9d51e3796924fa36c31d59c5ac5c95533c249bddb76dbb0895ec9726c7a | 52654c7bcc7ede0930ec2ee1e88ac24f1c68621d |
explanation | How does the proposed approach compare against non-equivariant policy learning algorithms? | We directly compare our proposed approach against non-equivariant policy learning algorithms. The non-equivariant baselines perform much worse in terms of performance and sample efficiency (see 'Sideview NonEqui' Figure 5 and Table 1). The non-equivariant methods were trained with data augmentation and still underperformed the equivariant versions. These results were also observed in [1]. | ['Figure 5', 'Table 1'] | ['images/533dcc4ba8374a381b12f6e0a58fc2d7cbb9eb7bbeabfd7dd0bd4b95581ab8e3.jpg', 'images/97e8c25df23ebf0bb39ff2c1446d1262167f67bb3b1216035a2576da9c25530f.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Wang et al. (2022b) showed that equivariant networks can still be effective when there is some mismatch between the symmetry group used to constrain the model and the physically accurate task symmetry. Specifically, they found that using image rotations on sideview images to capture O(2) actions on the scene is better than not using equivariance. Nevertheless, there is a noticeable performance gap when compared to the top-down image setting. ': '1', 'Learning Latent or Approximate Symmetry For some learning problems, there could be a mismatch between the symmetry in the ground truth function and the symmetry in the equivariant network because the symmetry cannot be easily described in the input space or the ground truth function is only partially symmetric. Falorsi et al. (2018) and Park et al. (2022) showed that symmetric neural representations can be extracted using traditional networks with a self-supervised loss. These symmetric representations can be further processed with equivariant layers leading to improved generalization (Esteves et al., 2019; Klee et al., 2023). 
Another solution to combat this problem is to use approximate or relaxed equivariant neural networks (Wang et al., 2022e; 2024b; Huang et al., 2024b) to relax the equivariant constraint in the network to better match the symmetry in the ground truth function. Alternatively, Wang et al. (2022b) showed that even with the symmetry match, a fully equivariant model that enforces symmetry to out-of-distribution data can still outperform non-equivariant baselines, as long as the symmetry in the model does not conflict with the ground truth function (Wang et al., 2024a). A similar finding was shown in De Silva et al. (2023) where training with out-of-distribution data could aid learning. Although the solution of Wang et al. (2022b) is simple and effective, there remains a significant performance gap compared to not having the symmetry mismatch. Our work provides a simple means to close this gap. ': '2'} | {'1': 'Wang et al. (2022b) showed that equivariant networks can still be effective when there is some mismatch between the symmetry group used to constrain the model and the physically accurate task symmetry. Specifically, they found that using image rotations on sideview images to capture O(2) actions on the scene is better than not using equivariance. Nevertheless, there is a noticeable performance gap when compared to the top-down image setting. ', '2': 'Learning Latent or Approximate Symmetry For some learning problems, there could be a mismatch between the symmetry in the ground truth function and the symmetry in the equivariant network because the symmetry cannot be easily described in the input space or the ground truth function is only partially symmetric. Falorsi et al. (2018) and Park et al. (2022) showed that symmetric neural representations can be extracted using traditional networks with a self-supervised loss. 
These symmetric representations can be further processed with equivariant layers leading to improved generalization (Esteves et al., 2019; Klee et al., 2023). Another solution to combat this problem is to use approximate or relaxed equivariant neural networks (Wang et al., 2022e; 2024b; Huang et al., 2024b) to relax the equivariant constraint in the network to better match the symmetry in the ground truth function. Alternatively, Wang et al. (2022b) showed that even with the symmetry match, a fully equivariant model that enforces symmetry to out-of-distribution data can still outperform non-equivariant baselines, as long as the symmetry in the model does not conflict with the ground truth function (Wang et al., 2024a). A similar finding was shown in De Silva et al. (2023) where training with out-of-distribution data could aid learning. Although the solution of Wang et al. (2022b) is simple and effective, there remains a significant performance gap compared to not having the symmetry mismatch. Our work provides a simple means to close this gap. '} | {'images/76f7ce360706c044c9d50d7488012c66e2d5866297e6a285cf8aa7ed4ec994a1.jpg': '7', 'images/533dcc4ba8374a381b12f6e0a58fc2d7cbb9eb7bbeabfd7dd0bd4b95581ab8e3.jpg': '5'} | {'7': 'images/76f7ce360706c044c9d50d7488012c66e2d5866297e6a285cf8aa7ed4ec994a1.jpg', '5': 'images/533dcc4ba8374a381b12f6e0a58fc2d7cbb9eb7bbeabfd7dd0bd4b95581ab8e3.jpg'} | {'images/97e8c25df23ebf0bb39ff2c1446d1262167f67bb3b1216035a2576da9c25530f.jpg': '1'} | {'1': 'images/97e8c25df23ebf0bb39ff2c1446d1262167f67bb3b1216035a2576da9c25530f.jpg'} | {} | ['images/76f7ce360706c044c9d50d7488012c66e2d5866297e6a285cf8aa7ed4ec994a1.jpg', 'Learning Latent or Approximate Symmetry For some learning problems, there could be a mismatch between the symmetry in the ground truth function and the symmetry in the equivariant network because the symmetry cannot be easily described in the input space or the ground truth function is only partially symmetric. Falorsi et al. 
(2018) and Park et al. (2022) showed that symmetric neural representations can be extracted using traditional networks with a self-supervised loss. These symmetric representations can be further processed with equivariant layers leading to improved generalization (Esteves et al., 2019; Klee et al., 2023). Another solution to combat this problem is to use approximate or relaxed equivariant neural networks (Wang et al., 2022e; 2024b; Huang et al., 2024b) to relax the equivariant constraint in the network to better match the symmetry in the ground truth function. Alternatively, Wang et al. (2022b) showed that even with the symmetry match, a fully equivariant model that enforces symmetry to out-of-distribution data can still outperform non-equivariant baselines, as long as the symmetry in the model does not conflict with the ground truth function (Wang et al., 2024a). A similar finding was shown in De Silva et al. (2023) where training with out-of-distribution data could aid learning. Although the solution of Wang et al. (2022b) is simple and effective, there remains a significant performance gap compared to not having the symmetry mismatch. Our work provides a simple means to close this gap. ', 'Wang et al. (2022b) showed that equivariant networks can still be effective when there is some mismatch between the symmetry group used to constrain the model and the physically accurate task symmetry. Specifically, they found that using image rotations on sideview images to capture O(2) actions on the scene is better than not using equivariance. Nevertheless, there is a noticeable performance gap when compared to the top-down image setting. '] | cb4f90d46d84bcfa362631838e00cf9d04f56acb8f689fa61189b9993a63f821 | 557f8e7f27e42c5b8fa4a32df0e28d72280ab64b |
explanation | Are there any fundamental differences or novel issues in confidence calibration for Retrieval-Augmented Generation (RAG) compared to calibration in generation models without retrieval augmentation? | In RAG, additional context that the LLM may not know is augmented into the input, which differs from the process where the LLM generates responses solely based on pre-existing knowledge or a given answer. This additional context serves as a hint, creating a different scenario compared to the traditional tasks performed by generation models. For instance, as shown in Table 1-(b), simple confidence calibration methods that do not account for RAG are insufficient to address these challenges. This is because, while RAG can improve performance, it can also lead to overly high confidence. However, existing research has not addressed decision calibration within the RAG framework. Moreover, if the LLM generates responses based solely on the Top-1 document retrieved by the retrieval model, it may fail to provide the optimal information required for decision-making. As illustrated in Figure 1-(a), other documents within the Top-10 may offer more valuable insights or information that contribute to more accurate decisions. This highlights the necessity of not only employing a retrieval-based approach but also integrating LLM-retrieval interactions and calibrating the confidence of the retrieved documents. Therefore, confidence calibration in RAG involves fundamentally different challenges and issues compared to calibration in simple generation models. It requires a comprehensive approach that not only enhances performance through RAG but also addresses the problem of over-confidence and compensates for incomplete or inaccurate information provided by the retrieval model. 
| ['Table 1', 'Figure 1'] | ['images/1d93a1ae787879849a9853489f77e176cd417c85d200e14f7e189e5daf5e5093.jpg', 'images/ffcff06167847c9f219435a7800054a7da065c26d07942e5a3b2233b9ed79a7a.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Comparison with uncertainty calibration baselines. Table 1 presents a comparison of uncertainty-based baselines across four QA datasets. Our CalibRAG achieves both a lower ‘No Answer’ rate and higher accuracy compared to other baselines, achieving the accuracy of 35.03 and 39.91 on BioASQ and HotpotQA, respectively, representing over a 3% improvement over the bestperforming baseline. Additionally, its confidence level is better calibrated than the other baselines, demonstrating the lowest ECE and BS. CalibRAG†, which regenerates the query for documents that do not exceed the threshold, consistently shows performance improvements. However, while it correctly answers more challenging questions, it also makes accurate decisions with lower confidence, causing some variation in the calibration metrics. ': '1', 'tences, summing over all possible corresponding probabilities would be required—an intractable process due to the exponential number of potential sequences. Consequently, token-level probabilities in current language models often fail to offer reliable confidence estimates for long-form text generation, thereby limiting their application to tasks that extend beyond multiple-choice scenarios. ': '2', 'However, the method proposed by Band et al. 
(2024) to tackle this calibration problem has three major limitations: 1) it requires supervised fine-tuning for three different LLMs, including the LLM responsible for generating a response z and the forecasting function f parameterized with two LLMs, 2) it further needs proximal policy optimization (PPO; Schulman et al., 2017) for fine-tuning the LLM for response generation, which is known to suffer from training instability (Zhu et al., 2023), and 3) it cannot calibrate the probabilities associated with the user decisions based on the guidance provided by RAG. ': '3'} | {'1': 'Comparison with uncertainty calibration baselines. Table 1 presents a comparison of uncertainty-based baselines across four QA datasets. Our CalibRAG achieves both a lower ‘No Answer’ rate and higher accuracy compared to other baselines, achieving the accuracy of 35.03 and 39.91 on BioASQ and HotpotQA, respectively, representing over a 3% improvement over the bestperforming baseline. Additionally, its confidence level is better calibrated than the other baselines, demonstrating the lowest ECE and BS. CalibRAG†, which regenerates the query for documents that do not exceed the threshold, consistently shows performance improvements. However, while it correctly answers more challenging questions, it also makes accurate decisions with lower confidence, causing some variation in the calibration metrics. ', '2': 'tences, summing over all possible corresponding probabilities would be required—an intractable process due to the exponential number of potential sequences. Consequently, token-level probabilities in current language models often fail to offer reliable confidence estimates for long-form text generation, thereby limiting their application to tasks that extend beyond multiple-choice scenarios. ', '3': 'However, the method proposed by Band et al. 
(2024) to tackle this calibration problem has three major limitations: 1) it requires supervised fine-tuning for three different LLMs, including the LLM responsible for generating a response z and the forecasting function f parameterized with two LLMs, 2) it further needs proximal policy optimization (PPO; Schulman et al., 2017) for fine-tuning the LLM for response generation, which is known to suffer from training instability (Zhu et al., 2023), and 3) it cannot calibrate the probabilities associated with the user decisions based on the guidance provided by RAG. '} | {'images/ffcff06167847c9f219435a7800054a7da065c26d07942e5a3b2233b9ed79a7a.jpg': '1'} | {'1': 'images/ffcff06167847c9f219435a7800054a7da065c26d07942e5a3b2233b9ed79a7a.jpg'} | {'images/1d93a1ae787879849a9853489f77e176cd417c85d200e14f7e189e5daf5e5093.jpg': '1'} | {'1': 'images/1d93a1ae787879849a9853489f77e176cd417c85d200e14f7e189e5daf5e5093.jpg'} | {} | ['Comparison with uncertainty calibration baselines. Table 1 presents a comparison of uncertainty-based baselines across four QA datasets. Our CalibRAG achieves both a lower ‘No Answer’ rate and higher accuracy compared to other baselines, achieving the accuracy of 35.03 and 39.91 on BioASQ and HotpotQA, respectively, representing over a 3% improvement over the bestperforming baseline. Additionally, its confidence level is better calibrated than the other baselines, demonstrating the lowest ECE and BS. CalibRAG†, which regenerates the query for documents that do not exceed the threshold, consistently shows performance improvements. However, while it correctly answers more challenging questions, it also makes accurate decisions with lower confidence, causing some variation in the calibration metrics. ', 'tences, summing over all possible corresponding probabilities would be required—an intractable process due to the exponential number of potential sequences. 
Consequently, token-level probabilities in current language models often fail to offer reliable confidence estimates for long-form text generation, thereby limiting their application to tasks that extend beyond multiple-choice scenarios. ', 'However, the method proposed by Band et al. (2024) to tackle this calibration problem has three major limitations: 1) it requires supervised fine-tuning for three different LLMs, including the LLM responsible for generating a response z and the forecasting function f parameterized with two LLMs, 2) it further needs proximal policy optimization (PPO; Schulman et al., 2017) for fine-tuning the LLM for response generation, which is known to suffer from training instability (Zhu et al., 2023), and 3) it cannot calibrate the probabilities associated with the user decisions based on the guidance provided by RAG. '] | 42abe4e25bf7f5872f2e665998243fe7438870187bf54c81839510b58b5fea08 | 65e624095701a1080d5f73fc831b548c8a63296a |
explanation | What are the advantages of the proposed variance-preserving mechanism in the architecture? | Our variance-preserving mechanism embedded in the architecture enables model selection directly from the training loss by preserving prediction variance and consequently preventing the model from overfitting the training set when extreme hyper-parameter configurations are tested and strong distribution shifts happen. This is a clear advantage that enables the predictability of generalization (Table 1). Dealing with overfitting when performing model selection with default architectures without validation data is challenging. See Figure 1 (top) and Table 1. | ['Table 1', 'Figure 1'] | ['images/af8e0d1e88cefbb1b60f3d0310b373ef143a06241764b57cd015fcb81f95376c.jpg', 'images/949446e2d67f0ae6d9110e45b33c6dde0111de219eb62546f0e7c4b43fd47b82.jpg'] | ['mixed'] | 2 | 3 | 5 | {} | {} | {'images/864ea4fde44d7d950cf0a6545208af5190c39403349e790378caf742085537f7.jpg': '4', 'images/835cd1e80907b44c9fd7028ceb4d89d1522cf74150fb28e061c06a499eae8af8.jpg': '5', 'images/949446e2d67f0ae6d9110e45b33c6dde0111de219eb62546f0e7c4b43fd47b82.jpg': '1', 'images/da1e7a460119c378444bfc707f3936ce4167cdd89f28cd6b09351b56e332463a.jpg': '2'} | {'4': 'images/864ea4fde44d7d950cf0a6545208af5190c39403349e790378caf742085537f7.jpg', '5': 'images/835cd1e80907b44c9fd7028ceb4d89d1522cf74150fb28e061c06a499eae8af8.jpg', '1': 'images/949446e2d67f0ae6d9110e45b33c6dde0111de219eb62546f0e7c4b43fd47b82.jpg', '2': 'images/da1e7a460119c378444bfc707f3936ce4167cdd89f28cd6b09351b56e332463a.jpg'} | {'images/af8e0d1e88cefbb1b60f3d0310b373ef143a06241764b57cd015fcb81f95376c.jpg': '1'} | {'1': 'images/af8e0d1e88cefbb1b60f3d0310b373ef143a06241764b57cd015fcb81f95376c.jpg'} | {} | ['images/da1e7a460119c378444bfc707f3936ce4167cdd89f28cd6b09351b56e332463a.jpg', 'images/835cd1e80907b44c9fd7028ceb4d89d1522cf74150fb28e061c06a499eae8af8.jpg', 'images/864ea4fde44d7d950cf0a6545208af5190c39403349e790378caf742085537f7.jpg'] | 
d376b95e1f8c8b65d07e847649e99383256ba2c9c44c57606267c49c767ceffc | 6b582ea4a5145a03c831aa33976a9f67441057ae |
explanation | Why should SELFEE work? | We first remark that SELFEE begins with an LLM fine-tuned using DPO on the initial seed preference dataset; therefore, depending on the size of the seed dataset and the degree of distribution shift in new prompts for each iteration, the effectiveness of SELFEE can vary. However, our experiments (Table 1) show that it yields significant improvements in alignment performance, demonstrating its suitability for our setup. Additionally, as observed in Table 1 and Figure 4, even online preference learning with an external reward model (Iterative DPO) experiences biased preference features and increased response lengths. This indicates that the increased response length in SELFEE is not merely a result of exacerbating biases from an insufficiently strong starting model but reflects broader challenges inherent in current online preference learning methods. | ['Table 1', 'Figure 4'] | ['images/56ab973f0b003a4464bdc89f222272b0fe685f03571e28cf06274a35639da434.jpg', 'images/95e411fae3db89cb06a06270e17555220fe309c0679484a3fff2cd3fbabeea36.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Fig. 5(a) describes the changes in the response character length throughout the iteration process. From iteration 1 to iteration 4, the response length for Iterative DPO and SELFEE increased significantly (1418 →1709) and (1852 →2412), respectively. In contrast, PFP exhibited only a minimal increase in length (1138 →1187). This highlights that, unlike other iterative improvement algorithms that have a weakness at length bias, PFP learns human preferences well without causing length bias. ': '1', 'To preserve the feature distribution over each iteration of online preference learning, we first map each instruction x ∈Xt used in online learning to the proper preference features. One can expect that the preference feature distribution is preserved by explicitly utilizing the assigned features during response generation and preference judgment. 
Specifically, this process involves two key components: (a) learning a feature classifier, and (b) assigning a pseudo-label using a relabeling technique. ': '2'} | {'1': 'Fig. 5(a) describes the changes in the response character length throughout the iteration process. From iteration 1 to iteration 4, the response length for Iterative DPO and SELFEE increased significantly (1418 →1709) and (1852 →2412), respectively. In contrast, PFP exhibited only a minimal increase in length (1138 →1187). This highlights that, unlike other iterative improvement algorithms that have a weakness at length bias, PFP learns human preferences well without causing length bias. ', '2': 'To preserve the feature distribution over each iteration of online preference learning, we first map each instruction x ∈Xt used in online learning to the proper preference features. One can expect that the preference feature distribution is preserved by explicitly utilizing the assigned features during response generation and preference judgment. Specifically, this process involves two key components: (a) learning a feature classifier, and (b) assigning a pseudo-label using a relabeling technique. '} | {'images/95e411fae3db89cb06a06270e17555220fe309c0679484a3fff2cd3fbabeea36.jpg': '4'} | {'4': 'images/95e411fae3db89cb06a06270e17555220fe309c0679484a3fff2cd3fbabeea36.jpg'} | {'images/56ab973f0b003a4464bdc89f222272b0fe685f03571e28cf06274a35639da434.jpg': '1', 'images/562f128d30d78018aa05bce14272be8ecf303982a9c1843f66724b6da84d7f54.jpg': '4'} | {'1': 'images/56ab973f0b003a4464bdc89f222272b0fe685f03571e28cf06274a35639da434.jpg', '4': 'images/562f128d30d78018aa05bce14272be8ecf303982a9c1843f66724b6da84d7f54.jpg'} | {} | ['Fig. 5(a) describes the changes in the response character length throughout the iteration process. From iteration 1 to iteration 4, the response length for Iterative DPO and SELFEE increased significantly (1418 →1709) and (1852 →2412), respectively. 
In contrast, PFP exhibited only a minimal increase in length (1138 →1187). This highlights that, unlike other iterative improvement algorithms that have a weakness at length bias, PFP learns human preferences well without causing length bias. ', 'images/562f128d30d78018aa05bce14272be8ecf303982a9c1843f66724b6da84d7f54.jpg', 'To preserve the feature distribution over each iteration of online preference learning, we first map each instruction x ∈Xt used in online learning to the proper preference features. One can expect that the preference feature distribution is preserved by explicitly utilizing the assigned features during response generation and preference judgment. Specifically, this process involves two key components: (a) learning a feature classifier, and (b) assigning a pseudo-label using a relabeling technique. '] | 13745cddb4137837dc61258323bde9569315729968633a2e6c05f83770e96230 | 729d9ddfbdd5e5b4eaf7653e8b760408d22d4650 |
explanation | What is the key novelty of the paper, particularly regarding the query-adaptive sampler? | Our key contribution lies in the application of query-adaptive frame sampling, which leverages the reasoning ability of the agents. Our approach is particularly focused on improving efficiency and performance when handling long-context videos. As demonstrated in the results (Table 4, Figure 4), our method enhances efficiency by reducing the number of frames accessed, while simultaneously increasing the accuracy of tasks. | ['Table 4', 'Figure 4'] | ['images/73764241d9d6a380ba3f3fef1642353cfbff4f7e9065d255fb34126e54da777d.jpg', 'images/079ec5638365adb75ac75381f5b989af45df1b1819d52fd8b862be90fb25b7ef.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Planning/tool invoking At time step t, the agent L selects an action at and action input xt based on policy π in solving problem D. The actions A are the invokable tools, which are pre-defined and callable functions from the agent. The action input xt is typically the frame number, indicating which frames the tools should access. The input often includes extra arguments, for example the question to query the tools (e.g. Frame index 0, what is happening in the frame?). Once the tools are invoked, it returns an observation O which is the extracted information of the selected frame. The agent L considers the previous observation-action trajectory τt = [a1, o1, . . . , ot−1] : in choosing ': '1'} | {'1': 'Planning/tool invoking At time step t, the agent L selects an action at and action input xt based on policy π in solving problem D. The actions A are the invokable tools, which are pre-defined and callable functions from the agent. The action input xt is typically the frame number, indicating which frames the tools should access. The input often includes extra arguments, for example the question to query the tools (e.g. Frame index 0, what is happening in the frame?). 
Once the tools are invoked, it returns an observation O which is the extracted information of the selected frame. The agent L considers the previous observation-action trajectory τt = [a1, o1, . . . , ot−1] : in choosing '} | {'images/079ec5638365adb75ac75381f5b989af45df1b1819d52fd8b862be90fb25b7ef.jpg': '4'} | {'4': 'images/079ec5638365adb75ac75381f5b989af45df1b1819d52fd8b862be90fb25b7ef.jpg'} | {'images/69b93f43c76ca6c81d7acf0206b9c0830fef08dc8c2eccdb20f449b0a7e15f75.jpg': '5', 'images/73764241d9d6a380ba3f3fef1642353cfbff4f7e9065d255fb34126e54da777d.jpg': '4', 'images/6a05baf74f5f0377b3a8fd8eb35bd54c29ed1bc563444ee4cfd82c0df906642c.jpg': '8'} | {'5': 'images/69b93f43c76ca6c81d7acf0206b9c0830fef08dc8c2eccdb20f449b0a7e15f75.jpg', '4': 'images/73764241d9d6a380ba3f3fef1642353cfbff4f7e9065d255fb34126e54da777d.jpg', '8': 'images/6a05baf74f5f0377b3a8fd8eb35bd54c29ed1bc563444ee4cfd82c0df906642c.jpg'} | {} | ['images/69b93f43c76ca6c81d7acf0206b9c0830fef08dc8c2eccdb20f449b0a7e15f75.jpg', 'images/6a05baf74f5f0377b3a8fd8eb35bd54c29ed1bc563444ee4cfd82c0df906642c.jpg', 'Planning/tool invoking At time step t, the agent L selects an action at and action input xt based on policy π in solving problem D. The actions A are the invokable tools, which are pre-defined and callable functions from the agent. The action input xt is typically the frame number, indicating which frames the tools should access. The input often includes extra arguments, for example the question to query the tools (e.g. Frame index 0, what is happening in the frame?). Once the tools are invoked, it returns an observation O which is the extracted information of the selected frame. The agent L considers the previous observation-action trajectory τt = [a1, o1, . . . , ot−1] : in choosing '] | 09f828d9a90ed12c038fbf9fbc9635b31b4865415666f51ba283d8c76c6c8b04 | 80917e140b56b5b4d9459329a896fef9e483dacc
explanation | How does the proposed method compare to DETR in terms of performance and inference speed? | Thanks for the concern. We would like to highlight that our DECO also outperforms DETR with the same settings, *i.e.*, training receipt, architecture etc. The comparisons are shown in Table 2 (as also shown in Figure 1 in supplementary material) and we can see that our DECO obtains better performance than DETR, which justifies the effectiveness and also the main contribution of our proposed method. | ['Table 2', 'Figure 1'] | ['images/5c3bb75a3c6ada6985c4a487688a7f0fa40b6446a0f0dde0be232cc72bfca63d.jpg', 'images/aec1a617d8c4be2ce7b12411ab71b5a77c4e6bc697890acc05b8b8de9c395c34.jpg'] | ['mixed'] | 2 | 3 | 5 | {'DECO Encoder. Similar to DETR, a 1 × 1 convolution is first utilized to reduce the channel dimension of f from C to d and obtain a new feature map z0 ∈ℜd×H×W . In DETR, z0 is fed into stacked transformer encoder layers, which mainly consists of multi-head self-attention (MHSA) and feed-forward network (FFN) to perform spatial and channel information mixing respectively. Recent work such as ConvNeXt (Liu et al., 2022b) has demonstrated that using stacked depthwise and pointwise convolutions could achieve comparable performance with Transformers. Therefore, we use the ConvNeXt blocks to build our DECO encoder. Specifically, each DECO encoder layer is stacked with a 7 × 7 depthwise convolution, a LayerNorm layer, a 1 × 1 convolution, a GELU acitvation and another 1 × 1 convolution. Besides, in DETR, positional encodings are necessary to be added to the input of each transformer encoder layer, since the transformer architecture is permutation-invariant. However, the ConvNet architecture is permutation-variant so that our DECO encoder layers could get rid of any positional encodings. 
': '1', 'Meanwhile, some recent work rethinks the strong performance and reveal that the pure ConvNets could also achieve competitive performance via proper architecture design (Liu et al., 2022b; Yu et al., 2022). For example, ConvNeXt (Liu et al., 2022b) competes favorably with vision transformers like Swin Transformer (Liu et al., 2021) in terms of accuracy and computational cost. However, these methods mainly focus on Encoder part of transformer, in which self-attention is utilized and could be replaced by convolution with careful design. These motivate us to explore one important question in this paper: could we obtain an architecture via pure ConvNets but still enjoys the excellent properties similar to attention? ': '2'} | {'1': 'DECO Encoder. Similar to DETR, a 1 × 1 convolution is first utilized to reduce the channel dimension of f from C to d and obtain a new feature map z0 ∈ℜd×H×W . In DETR, z0 is fed into stacked transformer encoder layers, which mainly consists of multi-head self-attention (MHSA) and feed-forward network (FFN) to perform spatial and channel information mixing respectively. Recent work such as ConvNeXt (Liu et al., 2022b) has demonstrated that using stacked depthwise and pointwise convolutions could achieve comparable performance with Transformers. Therefore, we use the ConvNeXt blocks to build our DECO encoder. Specifically, each DECO encoder layer is stacked with a 7 × 7 depthwise convolution, a LayerNorm layer, a 1 × 1 convolution, a GELU acitvation and another 1 × 1 convolution. Besides, in DETR, positional encodings are necessary to be added to the input of each transformer encoder layer, since the transformer architecture is permutation-invariant. However, the ConvNet architecture is permutation-variant so that our DECO encoder layers could get rid of any positional encodings. 
', '2': 'Meanwhile, some recent work rethinks the strong performance and reveal that the pure ConvNets could also achieve competitive performance via proper architecture design (Liu et al., 2022b; Yu et al., 2022). For example, ConvNeXt (Liu et al., 2022b) competes favorably with vision transformers like Swin Transformer (Liu et al., 2021) in terms of accuracy and computational cost. However, these methods mainly focus on Encoder part of transformer, in which self-attention is utilized and could be replaced by convolution with careful design. These motivate us to explore one important question in this paper: could we obtain an architecture via pure ConvNets but still enjoys the excellent properties similar to attention? '} | {'images/aec1a617d8c4be2ce7b12411ab71b5a77c4e6bc697890acc05b8b8de9c395c34.jpg': '1'} | {'1': 'images/aec1a617d8c4be2ce7b12411ab71b5a77c4e6bc697890acc05b8b8de9c395c34.jpg'} | {'images/ce86eb6e32ea5f53607d5a4ec12f23ad6d10f8d3cc52aad8d11e95d797c7526a.jpg': '1', 'images/5c3bb75a3c6ada6985c4a487688a7f0fa40b6446a0f0dde0be232cc72bfca63d.jpg': '2'} | {'1': 'images/ce86eb6e32ea5f53607d5a4ec12f23ad6d10f8d3cc52aad8d11e95d797c7526a.jpg', '2': 'images/5c3bb75a3c6ada6985c4a487688a7f0fa40b6446a0f0dde0be232cc72bfca63d.jpg'} | {} | ['Meanwhile, some recent work rethinks the strong performance and reveal that the pure ConvNets could also achieve competitive performance via proper architecture design (Liu et al., 2022b; Yu et al., 2022). For example, ConvNeXt (Liu et al., 2022b) competes favorably with vision transformers like Swin Transformer (Liu et al., 2021) in terms of accuracy and computational cost. However, these methods mainly focus on Encoder part of transformer, in which self-attention is utilized and could be replaced by convolution with careful design. These motivate us to explore one important question in this paper: could we obtain an architecture via pure ConvNets but still enjoys the excellent properties similar to attention? 
', 'images/ce86eb6e32ea5f53607d5a4ec12f23ad6d10f8d3cc52aad8d11e95d797c7526a.jpg', 'DECO Encoder. Similar to DETR, a 1 × 1 convolution is first utilized to reduce the channel dimension of f from C to d and obtain a new feature map z0 ∈ℜd×H×W . In DETR, z0 is fed into stacked transformer encoder layers, which mainly consists of multi-head self-attention (MHSA) and feed-forward network (FFN) to perform spatial and channel information mixing respectively. Recent work such as ConvNeXt (Liu et al., 2022b) has demonstrated that using stacked depthwise and pointwise convolutions could achieve comparable performance with Transformers. Therefore, we use the ConvNeXt blocks to build our DECO encoder. Specifically, each DECO encoder layer is stacked with a 7 × 7 depthwise convolution, a LayerNorm layer, a 1 × 1 convolution, a GELU acitvation and another 1 × 1 convolution. Besides, in DETR, positional encodings are necessary to be added to the input of each transformer encoder layer, since the transformer architecture is permutation-invariant. However, the ConvNet architecture is permutation-variant so that our DECO encoder layers could get rid of any positional encodings. '] | a6962b92e1a1db20f165bb3e2736f2e535655de62c44f23849676eec30fdbdc8 | 859cadf9210afc0858163efe25c35e2f15290731 |
explanation | How do you generalize your approach to more complicated and rare compositions? | RareBench already includes the complicated rare composition cases (as the 'complex' case), consisting of three or more concepts, and R2F still exhibits superior performance on such complex cases as shown in Table 6. Specifically, looking at Figure 6, there is an example 'A horned bearded spotted raccoon smiling' from the complex case, and R2F successfully generates the image that accurately follows the prompt. Technically, given examples such as 'adj1 + adj2 + noun', R2F finds a noun that more frequently appears in the context of 'adj1 + adj2', and uses it for frequent concept guidance. | ['Table 6', 'Figure 6'] | ['images/0b5cb04ba6b819219bdae29194748fe720dbd54de165be822a80f0345d14d6b5.jpg', 'images/490f062e589415340fffe20ccdd9705368cc032e994ff34727f920548537a57d.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Efficacy of Alternating Guidance. Figure 8 and Table 6 show the qualitative and quantitative analysis of the R2F’s alternating guidance compared to other possible guidance choices. We apply three guidance choices, (1) Linear interpolation (Interpolate) of latents as in Theorem 3.1, and bring the idea of (2) Composable Diffusion (Liu et al., 2022) and (3) Prompt-to-prompt (P2P) (Hertz et al., 2022). Given a pair of rare-frequent concept prompts, Interpolate linearly interpolates the latants of rare and frequent prompts with α = 0.5 and Composable blends the two prompt embeddings and uses it as the input, until the stop points obtained from LLM. P2P first generates a complete image from the frequent concept prompt and then edits it by the rare concept prompt with attention-control. ': '1', 'Efficacy of Visual-detail-aware Guidance Stop Points. Figure 9 depicts the efficacy of R2F’s adaptive visual-detail-aware stop points compared to when using a fixed stop point on RareBench with single-object case, which has only one stop point. 
We ablate the fixed stop point in the grid of {5, 10, 20, 30, 40}. With lower stop points such as 5 and 10 (in yellow lines), R2F shows relatively lower performance than those with higher stop points (in green lines) in generating rare concepts for attribute types of property and texture, because these usually require a higher level of visual details to synthesize. This tendency becomes reversed for the attribute type of shape, which tends to require a lower level of visual details. The original R2F, which adaptively determines the guidance stop points based on the appropriate visual detail level for each prompt, naturally leads to the best performance. ': '2'} | {'1': 'Efficacy of Alternating Guidance. Figure 8 and Table 6 show the qualitative and quantitative analysis of the R2F’s alternating guidance compared to other possible guidance choices. We apply three guidance choices, (1) Linear interpolation (Interpolate) of latents as in Theorem 3.1, and bring the idea of (2) Composable Diffusion (Liu et al., 2022) and (3) Prompt-to-prompt (P2P) (Hertz et al., 2022). Given a pair of rare-frequent concept prompts, Interpolate linearly interpolates the latants of rare and frequent prompts with α = 0.5 and Composable blends the two prompt embeddings and uses it as the input, until the stop points obtained from LLM. P2P first generates a complete image from the frequent concept prompt and then edits it by the rare concept prompt with attention-control. ', '2': 'Efficacy of Visual-detail-aware Guidance Stop Points. Figure 9 depicts the efficacy of R2F’s adaptive visual-detail-aware stop points compared to when using a fixed stop point on RareBench with single-object case, which has only one stop point. We ablate the fixed stop point in the grid of {5, 10, 20, 30, 40}. 
With lower stop points such as 5 and 10 (in yellow lines), R2F shows relatively lower performance than those with higher stop points (in green lines) in generating rare concepts for attribute types of property and texture, because these usually require a higher level of visual details to synthesize. This tendency becomes reversed for the attribute type of shape, which tends to require a lower level of visual details. The original R2F, which adaptively determines the guidance stop points based on the appropriate visual detail level for each prompt, naturally leads to the best performance. '} | {'images/490f062e589415340fffe20ccdd9705368cc032e994ff34727f920548537a57d.jpg': '6'} | {'6': 'images/490f062e589415340fffe20ccdd9705368cc032e994ff34727f920548537a57d.jpg'} | {'images/523c879d828890d674f8c25830a6eb2e9e9f5e1eefe733086146f7153d4b58e4.jpg': '2', 'images/0b5cb04ba6b819219bdae29194748fe720dbd54de165be822a80f0345d14d6b5.jpg': '6'} | {'2': 'images/523c879d828890d674f8c25830a6eb2e9e9f5e1eefe733086146f7153d4b58e4.jpg', '6': 'images/0b5cb04ba6b819219bdae29194748fe720dbd54de165be822a80f0345d14d6b5.jpg'} | {} | ['images/523c879d828890d674f8c25830a6eb2e9e9f5e1eefe733086146f7153d4b58e4.jpg', 'Efficacy of Alternating Guidance. Figure 8 and Table 6 show the qualitative and quantitative analysis of the R2F’s alternating guidance compared to other possible guidance choices. We apply three guidance choices, (1) Linear interpolation (Interpolate) of latents as in Theorem 3.1, and bring the idea of (2) Composable Diffusion (Liu et al., 2022) and (3) Prompt-to-prompt (P2P) (Hertz et al., 2022). Given a pair of rare-frequent concept prompts, Interpolate linearly interpolates the latants of rare and frequent prompts with α = 0.5 and Composable blends the two prompt embeddings and uses it as the input, until the stop points obtained from LLM. P2P first generates a complete image from the frequent concept prompt and then edits it by the rare concept prompt with attention-control. 
', 'Efficacy of Visual-detail-aware Guidance Stop Points. Figure 9 depicts the efficacy of R2F’s adaptive visual-detail-aware stop points compared to when using a fixed stop point on RareBench with single-object case, which has only one stop point. We ablate the fixed stop point in the grid of {5, 10, 20, 30, 40}. With lower stop points such as 5 and 10 (in yellow lines), R2F shows relatively lower performance than those with higher stop points (in green lines) in generating rare concepts for attribute types of property and texture, because these usually require a higher level of visual details to synthesize. This tendency becomes reversed for the attribute type of shape, which tends to require a lower level of visual details. The original R2F, which adaptively determines the guidance stop points based on the appropriate visual detail level for each prompt, naturally leads to the best performance. '] | 73f91a08d41dc1714651ac65380475e5d60f0ac5571a09135c97c84643d244fe | 87b23be1436dcbe59f7359a900e9813e81087437 |
explanation | What are the practical uses of crystal symmetry generation in academia or industry? | Our main claim is that SymmCD performs significantly better than prior works at generating crystals with realistic, diverse symmetries, as seen in Figure 4 and Table 1. Many properties of crystals (such as piezoelectricity and optical activity) are determined by symmetry, so when searching for practical crystals, a generative model should be able to generate crystals with desired symmetry. | ['Figure 4', 'Table 1'] | ['images/07b2a3283c639511e149708c5a5a98c631cd92a3b3e9b5ba6167f66e51a8374c.jpg', 'images/7aa13bce6f772da2cea00e5222dff0acc466a227e537bc4e9b71858c5da5e504.jpg'] | ['mixed'] | 2 | 3 | 5 | {'We empirically demonstrate our contributions, particularly in ensuring we generate crystals with desired symmetries while being competitive with existing baselines. In other words, we show that SymmCD generates symmetric, stable, and valid crystals. We compare our proposed method with four recent strong baselines: CDVAE (Xie et al., 2022), DiffCSP (Jiao et al., 2023), DiffCSP++ (Jiao et al., 2024) and FlowMM (Miller et al., 2024). ': '1', 'The main contributions of this work are as follows: I) We demonstrate a novel approach to generating crystals through the unconstrained generation of asymmetric units, along with their symmetry information. II) We introduce a physically-motivated representation for crystallographic site symmetries that generalizes across space groups. (III) We experimentally evaluate our method, finding that it performs on par with previous methods in terms of generating stable structures, while offering significantly improved computational efficiency due to our representation. (IV) We perform an indepth analysis of the symmetry and diversity of crystal structures generated by existing generative models. 
': '2'} | {'1': 'We empirically demonstrate our contributions, particularly in ensuring we generate crystals with desired symmetries while being competitive with existing baselines. In other words, we show that SymmCD generates symmetric, stable, and valid crystals. We compare our proposed method with four recent strong baselines: CDVAE (Xie et al., 2022), DiffCSP (Jiao et al., 2023), DiffCSP++ (Jiao et al., 2024) and FlowMM (Miller et al., 2024). ', '2': 'The main contributions of this work are as follows: I) We demonstrate a novel approach to generating crystals through the unconstrained generation of asymmetric units, along with their symmetry information. II) We introduce a physically-motivated representation for crystallographic site symmetries that generalizes across space groups. (III) We experimentally evaluate our method, finding that it performs on par with previous methods in terms of generating stable structures, while offering significantly improved computational efficiency due to our representation. (IV) We perform an indepth analysis of the symmetry and diversity of crystal structures generated by existing generative models. '} | {'images/85d305a053ad51ec1c5ea92d18a72fbc0278174eb6a15776b9fcd72ffa4b5a9f.jpg': '3', 'images/07b2a3283c639511e149708c5a5a98c631cd92a3b3e9b5ba6167f66e51a8374c.jpg': '4'} | {'3': 'images/85d305a053ad51ec1c5ea92d18a72fbc0278174eb6a15776b9fcd72ffa4b5a9f.jpg', '4': 'images/07b2a3283c639511e149708c5a5a98c631cd92a3b3e9b5ba6167f66e51a8374c.jpg'} | {'images/7aa13bce6f772da2cea00e5222dff0acc466a227e537bc4e9b71858c5da5e504.jpg': '1'} | {'1': 'images/7aa13bce6f772da2cea00e5222dff0acc466a227e537bc4e9b71858c5da5e504.jpg'} | {} | ['We empirically demonstrate our contributions, particularly in ensuring we generate crystals with desired symmetries while being competitive with existing baselines. In other words, we show that SymmCD generates symmetric, stable, and valid crystals. 
We compare our proposed method with four recent strong baselines: CDVAE (Xie et al., 2022), DiffCSP (Jiao et al., 2023), DiffCSP++ (Jiao et al., 2024) and FlowMM (Miller et al., 2024). ', 'images/85d305a053ad51ec1c5ea92d18a72fbc0278174eb6a15776b9fcd72ffa4b5a9f.jpg', 'The main contributions of this work are as follows: I) We demonstrate a novel approach to generating crystals through the unconstrained generation of asymmetric units, along with their symmetry information. II) We introduce a physically-motivated representation for crystallographic site symmetries that generalizes across space groups. (III) We experimentally evaluate our method, finding that it performs on par with previous methods in terms of generating stable structures, while offering significantly improved computational efficiency due to our representation. (IV) We perform an indepth analysis of the symmetry and diversity of crystal structures generated by existing generative models. '] | 84384d9a293ea5d8aa6f43c33e3336541e733d203e9c9bfa96542fd3b5754725 | 999ece922a421954932ad2717fc2f68b13d513cc |
explanation | How does the tokenizer-level decoding method affect the model's performance? | We want to clarify that our token-level graph-constrained decoding would not lead to entities or relationships that do not exist in KGs. During decoding, we use the KG-Trie to restrict the tokens generated by the LLM to those starting with valid prefixes stored in the Trie. This approach has been used by previous methods to limit LLM output within a specific scope, such as all entities in KGs. Our KG-Trie is constructed from paths within KGs. Therefore, under these constraints, only valid entities and relations from KGs can be generated by LLMs to form reasoning paths. We have thoroughly checked the generated results and found zero invalid entities or relations, as shown in Figure 5. Meanwhile, the token-level graph-constrained decoding is more efficient and effective than other LLM-based graph reasoning methods. Due to the unstructured nature of LLMs, they are difficult to apply directly for reasoning on structured knowledge graphs (KGs). Previous LLM-based graph reasoning methods, such as ToG, typically follow an agent paradigm where LLMs iteratively query information from KGs. This approach incurs multiple API calls, resulting in high computational costs and latency. With KG-Trie, we enable LLMs to reason on KGs within a single decoding process, significantly reducing computation overhead and latency. Additionally, incorporating KG-Trie into LLM decoding does not introduce extra computational costs since it only masks out the probabilities of invalid tokens. Furthermore, this integration leverages GPU parallel computation to traverse multiple paths using beam search. Table 2 shows that GCR requires less running time and fewer LLM calls than LLM agent-based methods, such as ToG. 
| ['Figure 5', 'Table 2'] | ['images/a9c22c25f16dacbfe6afb009ac4154c18ce7d5cd88de363eac9ae889381dc7f6.jpg', 'images/b4366059ed83815dbff2897ba35dcac60bd79f6eff2e3af466c087f34a206fc6.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Large language models (LLMs) have strong reasoning capabilities but still suffer from severe hallucination issues, which undermines the trustworthiness of the reasoning process. To tackle this issue, we propose graph-constrained decoding, which unifies the reasoning ability of LLMs with the structured knowledge in KGs to generate faithful KG-grounded reasoning paths leading to answers. ': '1'} | {'1': 'Large language models (LLMs) have strong reasoning capabilities but still suffer from severe hallucination issues, which undermines the trustworthiness of the reasoning process. To tackle this issue, we propose graph-constrained decoding, which unifies the reasoning ability of LLMs with the structured knowledge in KGs to generate faithful KG-grounded reasoning paths leading to answers. 
'} | {'images/a9c22c25f16dacbfe6afb009ac4154c18ce7d5cd88de363eac9ae889381dc7f6.jpg': '5', 'images/f2e85e495bbd0facb7ad7758f9e65d318165e81151d0cd3f74811ff9db0793a0.jpg': '2'} | {'5': 'images/a9c22c25f16dacbfe6afb009ac4154c18ce7d5cd88de363eac9ae889381dc7f6.jpg', '2': 'images/f2e85e495bbd0facb7ad7758f9e65d318165e81151d0cd3f74811ff9db0793a0.jpg'} | {'images/b4366059ed83815dbff2897ba35dcac60bd79f6eff2e3af466c087f34a206fc6.jpg': '2', 'images/2b0e6d3dfedbcec23470e74c3999e07ad4ba2415833bd605f829d2d3c8634e86.jpg': '4'} | {'2': 'images/b4366059ed83815dbff2897ba35dcac60bd79f6eff2e3af466c087f34a206fc6.jpg', '4': 'images/2b0e6d3dfedbcec23470e74c3999e07ad4ba2415833bd605f829d2d3c8634e86.jpg'} | {} | ['images/2b0e6d3dfedbcec23470e74c3999e07ad4ba2415833bd605f829d2d3c8634e86.jpg', 'images/f2e85e495bbd0facb7ad7758f9e65d318165e81151d0cd3f74811ff9db0793a0.jpg', 'Large language models (LLMs) have strong reasoning capabilities but still suffer from severe hallucination issues, which undermines the trustworthiness of the reasoning process. To tackle this issue, we propose graph-constrained decoding, which unifies the reasoning ability of LLMs with the structured knowledge in KGs to generate faithful KG-grounded reasoning paths leading to answers. '] | e6a6776b0a81cdcfcd35e1bd0f5e9eb909bbed8679c20eb5d92793430d230f84 | a93a8af29009c03fc1e9cb53ca6471568eb580a5 |
explanation | What evidence supports that the improvement comes from the proposed diffusion policy-constrained iteration rather than the Q-ensemble? | We have to emphasize that the improvement of our proposed method over others is not solely based on high scores in the testing environments, but also on the stability of convergence. To demonstrate that the majority of the improvement stems from the proposed soft Q-guidance rather than the Q-ensemble, we have included an ablation study in Figure 3 in our paper. In this study, all designs and parameters are maintained except that soft Q-guidance is replaced with alternatives, such as denoised guidance (similar to DiffusionQL). As seen in Figure 3, the introduction of Q-ensemble does not enhance the stability of convergence for denoised Q-guidance. Additionally, in Table 1, the reported scores from DiffusionQL utilize *online-model-selection*, which tracks the best models throughout the training process. In contrast, we present the average of the final convergent scores from our method, which demands greater stability from the trained models. To further validate the effectiveness of DAC and to ensure a fair comparison with DiffusionQL, we conduct additional experiments where we replace the Q-ensemble in DAC with the same number of Qs (num of Qs=2) used in DiffusionQL. We also record the scores for DAC using *online-model-selection* (OMS). It can be seen that, under the same protocol (both using OMS), our method without the Q-ensemble significantly outperforms DiffusionQL in most environments, demonstrating the effectiveness of soft Q-guidance. We also observe that using a Q-ensemble of size 5 or larger yields similar performance. 
| ['Figure 3', 'Table 1'] | ['images/fe9b6e3caf55686bb4d3c144cac0a0668aba9ef004d9cd1d8eb13373c6ef5d3c.jpg', 'images/5ac63d1ac28ead1c5e2e523a3f221a3ebb2bf5d6198835d823817dddb1497d2a.jpg'] | ['mixed'] | 2 | 3 | 5 | {'A natural approach to employing diffusion models in behavior cloning involves replacing the noise predictor with a state-conditional model ϵθ(xt, s, t) that generates actions x0 ∈A based on state s. ': '1', 'In this section, we introduce the Diffusion Actor-Critic (DAC) framework that models the target policy directly as a diffusion model, eliminating the need for density estimation of either the behavior policy or the target policy. Initially, we formulate the KL constraint policy optimization as a diffusion noise regression problem, which yields a soft Q-guidance term for the noise prediction process that enables the learning of the target policy in a supervised manner. Additionally, we introduce Qensemble to stabilize the Q-gradient estimation, which utilizes LCB to mitigate the over-pessimistic estimation associated with taking the ensemble minimum in prior research. ': '2'} | {'1': 'A natural approach to employing diffusion models in behavior cloning involves replacing the noise predictor with a state-conditional model ϵθ(xt, s, t) that generates actions x0 ∈A based on state s. ', '2': 'In this section, we introduce the Diffusion Actor-Critic (DAC) framework that models the target policy directly as a diffusion model, eliminating the need for density estimation of either the behavior policy or the target policy. Initially, we formulate the KL constraint policy optimization as a diffusion noise regression problem, which yields a soft Q-guidance term for the noise prediction process that enables the learning of the target policy in a supervised manner. Additionally, we introduce Qensemble to stabilize the Q-gradient estimation, which utilizes LCB to mitigate the over-pessimistic estimation associated with taking the ensemble minimum in prior research. 
'} | {'images/f493cfe89e44d120988d5d913ae790d2915d73bde486e42e462583601c2cd850.jpg': '4', 'images/fe9b6e3caf55686bb4d3c144cac0a0668aba9ef004d9cd1d8eb13373c6ef5d3c.jpg': '3'} | {'4': 'images/f493cfe89e44d120988d5d913ae790d2915d73bde486e42e462583601c2cd850.jpg', '3': 'images/fe9b6e3caf55686bb4d3c144cac0a0668aba9ef004d9cd1d8eb13373c6ef5d3c.jpg'} | {'images/5ac63d1ac28ead1c5e2e523a3f221a3ebb2bf5d6198835d823817dddb1497d2a.jpg': '1'} | {'1': 'images/5ac63d1ac28ead1c5e2e523a3f221a3ebb2bf5d6198835d823817dddb1497d2a.jpg'} | {} | ['In this section, we introduce the Diffusion Actor-Critic (DAC) framework that models the target policy directly as a diffusion model, eliminating the need for density estimation of either the behavior policy or the target policy. Initially, we formulate the KL constraint policy optimization as a diffusion noise regression problem, which yields a soft Q-guidance term for the noise prediction process that enables the learning of the target policy in a supervised manner. Additionally, we introduce Qensemble to stabilize the Q-gradient estimation, which utilizes LCB to mitigate the over-pessimistic estimation associated with taking the ensemble minimum in prior research. ', 'A natural approach to employing diffusion models in behavior cloning involves replacing the noise predictor with a state-conditional model ϵθ(xt, s, t) that generates actions x0 ∈A based on state s. ', 'images/f493cfe89e44d120988d5d913ae790d2915d73bde486e42e462583601c2cd850.jpg'] | bff49264fb79cf5c46b980c620440e987355c2465918d80f3400f7ea8b807b5e | b0fbc4860d3a1995a411e7559c6961f48a7cda5e |
explanation | More scrutiny of the physics-informed losses would be beneficial. Some plots of solutions and errors across the poorer performing methods might help understand why they are performing badly. Is it that boundary conditions are not being adhered to? Maybe there are regions of high PDE loss in the resulting solution? Perhaps small changes (e.g. weighing boundary conditions effectively) might lead to improved performance. | We have already plotted the solutions to the poor performance in Figure 4 (c). The figure shows that the boundary condition is strictly obeyed for every network because we use weight=100 for boundary loss and weight=1 for residual loss. Besides, we did experiments on larger weights of boundary conditions to have a more strict boundary condition: we keep the weight of residual loss, and weight=1000 for boundary loss in the poor performance experiments i.e. $\newline=1000$ in Table 3. | ['Figure 4', 'Table 3'] | ['images/fd9ec57bfd1aa96760031234b763c2614267e736af279f75e746b5661a9956da.jpg', 'images/c5db1317f66009d3c2aba6ceb5a67bfa25ef5402ea5b35c45592df5a2f2b76b3.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Physics-informed neural networks (PINNs) Lagaris et al. (1998); Raissi et al. (2019) are a method used to solve partial differential equations (PDEs) by integrating physical laws with neural networks in machine learning. The use of Kolmogorov-Arnold Networks (KANs) in PINNs has been explored and is referred to as Physics-Informed Kolmogorov-Arnold Networks (PIKANs) Rigas et al. (2024); Wang et al. (2024). 
Due to the high similarity between KAN and MLP, PIKANs inherit several advantages of PINNs, such as overcoming the curse of dimensionality (CoD) Wojtowytsch & Weinan (2020); Han et al. (2018), handling imperfect data Karniadakis et al. (2021), and performing interpolation Sliwinski & Rigas (2023). PINNs have diverse applications, including fluid dynamics Raissi et al. (2020); Jin et al. (2021); Kashefi & Mukerji (2022), quantum mechanical systems Jin et al. (2022), surface physics Fang & Zhan (2019), electric power systems Nellikkath & Chatzivasileiadis (2022), and biological systems Yazdani et al. (2020). However, they also face challenges such as spectral bias Xu et al. (2019); Wang et al. (2022), error estimation Fanaskov et al. (2024), and scalability issues Yao et al. (2023). '} | {'images/fd9ec57bfd1aa96760031234b763c2614267e736af279f75e746b5661a9956da.jpg': '4', 'images/9e0c873dd53288bb6b55aa30e6e2ec6ec0df2ab90da0179fa312ded2fd9060d2.jpg': '2'} | {'4': 'images/fd9ec57bfd1aa96760031234b763c2614267e736af279f75e746b5661a9956da.jpg', '2': 'images/9e0c873dd53288bb6b55aa30e6e2ec6ec0df2ab90da0179fa312ded2fd9060d2.jpg'} | {'images/ebae19bc3640cff886b2ec64f7bc1317fc2ee7a4d81adf69998f9b3babd55b96.jpg': '2', 'images/c5db1317f66009d3c2aba6ceb5a67bfa25ef5402ea5b35c45592df5a2f2b76b3.jpg': '3'} | {'2': 'images/ebae19bc3640cff886b2ec64f7bc1317fc2ee7a4d81adf69998f9b3babd55b96.jpg', '3': 'images/c5db1317f66009d3c2aba6ceb5a67bfa25ef5402ea5b35c45592df5a2f2b76b3.jpg'} | {} | ['images/9e0c873dd53288bb6b55aa30e6e2ec6ec0df2ab90da0179fa312ded2fd9060d2.jpg', 'images/ebae19bc3640cff886b2ec64f7bc1317fc2ee7a4d81adf69998f9b3babd55b96.jpg', 'Physics-informed neural networks (PINNs) Lagaris et al. (1998); Raissi et al. (2019) are a method used to solve partial differential equations (PDEs) by integrating physical laws with neural networks in machine learning. 
The use of Kolmogorov-Arnold Networks (KANs) in PINNs has been explored and is referred to as Physics-Informed Kolmogorov-Arnold Networks (PIKANs) Rigas et al. (2024); Wang et al. (2024). Due to the high similarity between KAN and MLP, PIKANs inherit several advantages of PINNs, such as overcoming the curse of dimensionality (CoD) Wojtowytsch & Weinan (2020); Han et al. (2018), handling imperfect data Karniadakis et al. (2021), and performing interpolation Sliwinski & Rigas (2023). PINNs have diverse applications, including fluid dynamics Raissi et al. (2020); Jin et al. (2021); Kashefi & Mukerji (2022), quantum mechanical systems Jin et al. (2022), surface physics Fang & Zhan (2019), electric power systems Nellikkath & Chatzivasileiadis (2022), and biological systems Yazdani et al. (2020). However, they also face challenges such as spectral bias Xu et al. (2019); Wang et al. (2022), error estimation Fanaskov et al. (2024), and scalability issues Yao et al. (2023). '] | 7b4312c0282f7827977689475824799ec9bcee735d135a31f21774590f086a44 | b2625752041c98c9978af6d3f403718dc2e532ba |
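The PINN recipe described in this row (fit a function so that it satisfies a differential equation plus boundary conditions) can be illustrated without any neural network. The sketch below is a toy illustration of the physics-informed loss only, not code from any of the cited papers: the names are our own, the "model" is a plain Python function, and derivatives come from finite differences rather than the automatic differentiation a real PINN would use.

```python
import math

def physics_informed_loss(u, xs, h=1e-5):
    """Toy physics-informed loss for the ODE u'(x) = u(x) with u(0) = 1.

    A PINN minimizes this same kind of objective, except that `u` is a
    neural network and u'(x) comes from autodiff; here u'(x) is a central
    finite difference evaluated at the collocation points `xs`.
    """
    residuals = [(u(x + h) - u(x - h)) / (2 * h) - u(x) for x in xs]
    pde_loss = sum(r * r for r in residuals) / len(residuals)  # equation residual term
    bc_loss = (u(0.0) - 1.0) ** 2                              # boundary-condition term
    return pde_loss + bc_loss

xs = [0.1 * i for i in range(1, 10)]                       # collocation points
loss_exact = physics_informed_loss(math.exp, xs)           # true solution e^x
loss_wrong = physics_informed_loss(lambda x: 1.0 + x, xs)  # violates the ODE
```

The exact solution drives both terms to (numerically) zero, while the wrong candidate is penalized through the residual term; that gradient signal is what a PINN's optimizer follows.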
explanation | What verification process is in place for the key insight mentioned in the paper? | Please see 'Response to common comments' above for how this insight is verified through Figure 4. Our key insight states that if a model cannot generate consistently correct responses (sampled with a temperature of 1.0) across k trials, then the same model will struggle to distinguish between these k responses. Table 4, on the other hand, pertains to a different experimental setting in which we study the performance of several models on solving (greedy decoding, one trial) the set of questions identified via our key insight (i.e., these were only questions which GPT-4o struggled to consistently answer correctly). In this context, the solver's accuracy is not indicative of the difficulty in distinguishing between response pairs. Instead, the key takeaway from Table 4 is that identifying a correct response in JudgeBench is highly correlated with, and nearly as difficult as, solving the underlying problem itself. This reinforces the challenging nature of our dataset. | ['Figure 4', 'Table 4'] | ['images/f3451da79021da5c0980e252154f9755a3a290a822227dad6e71ba74ff046351.jpg', 'images/d335b997830106d49063ae1967cce250e8eed91de9515e78bb78496a0dd11ff7.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Figure 1: Comparison of JudgeBench against previous works. Unlike previous works which focus on instruction following or stylistic preferences, the focus of JudgeBench is on evaluating the factual and logical correctness of complex responses to challenging questions. JudgeBench is noticeably more difficult than previous work, containing responses that are impossible for crowdsourced human annotators to evaluate in a reliable and timely manner. ': '1', 'Benchmarks for LLM-based judges and reward models. As LLM-based judges have become a widely adopted method for evaluating and improving large language models (LLMs), several benchmarks have been introduced to assess their effectiveness. 
Works such as LLMEval (Zhang et al., 2023), MTBench (Zheng et al., 2024), and FairEval (Wang et al., 2023a) focus on evaluating the alignment between LLM-based judges’ responses and human evaluations. As mentioned above, these datasets suffer from the inherent subjectivity of human evaluation, prioritizing stylistic differences over factual and logical correctness. LLMBar (Zeng et al., 2023) instead takes a different approach by assessing LLM-based judges’ ability to follow instructions, using response pairs with clear ground truth preference labels based on adherence to instructions rather than subjective preferences. In contrast, JudgeBench focuses on assessing LLM-based judges’ ability to reason through responses and distinguish between correct and incorrect responses, which is more challenging than instruction following alone. ': '2', 'While instruction following and style are relatively easy for human annotators to judge, factual and logical correctness becomes increasingly challenging with complex problems. In such cases, human evaluators may mistakenly favor responses that seem more plausible or are simply longer, prioritizing style over correctness—thereby violating the hierarchical framework. As a result, human evaluations often become unreliable as the difficulty of the task increases. ': '3'} | {'1': 'Figure 1: Comparison of JudgeBench against previous works. Unlike previous works which focus on instruction following or stylistic preferences, the focus of JudgeBench is on evaluating the factual and logical correctness of complex responses to challenging questions. JudgeBench is noticeably more difficult than previous work, containing responses that are impossible for crowdsourced human annotators to evaluate in a reliable and timely manner. ', '2': 'Benchmarks for LLM-based judges and reward models. 
As LLM-based judges have become a widely adopted method for evaluating and improving large language models (LLMs), several benchmarks have been introduced to assess their effectiveness. Works such as LLMEval (Zhang et al., 2023), MTBench (Zheng et al., 2024), and FairEval (Wang et al., 2023a) focus on evaluating the alignment between LLM-based judges’ responses and human evaluations. As mentioned above, these datasets suffer from the inherent subjectivity of human evaluation, prioritizing stylistic differences over factual and logical correctness. LLMBar (Zeng et al., 2023) instead takes a different approach by assessing LLM-based judges’ ability to follow instructions, using response pairs with clear ground truth preference labels based on adherence to instructions rather than subjective preferences. In contrast, JudgeBench focuses on assessing LLM-based judges’ ability to reason through responses and distinguish between correct and incorrect responses, which is more challenging than instruction following alone. ', '3': 'While instruction following and style are relatively easy for human annotators to judge, factual and logical correctness becomes increasingly challenging with complex problems. In such cases, human evaluators may mistakenly favor responses that seem more plausible or are simply longer, prioritizing style over correctness—thereby violating the hierarchical framework. As a result, human evaluations often become unreliable as the difficulty of the task increases. '} | {'images/f3451da79021da5c0980e252154f9755a3a290a822227dad6e71ba74ff046351.jpg': '4'} | {'4': 'images/f3451da79021da5c0980e252154f9755a3a290a822227dad6e71ba74ff046351.jpg'} | {'images/d335b997830106d49063ae1967cce250e8eed91de9515e78bb78496a0dd11ff7.jpg': '4'} | {'4': 'images/d335b997830106d49063ae1967cce250e8eed91de9515e78bb78496a0dd11ff7.jpg'} | {} | ['Figure 1: Comparison of JudgeBench against previous works. 
Unlike previous works which focus on instruction following or stylistic preferences, the focus of JudgeBench is on evaluating the factual and logical correctness of complex responses to challenging questions. JudgeBench is noticeably more difficult than previous work, containing responses that are impossible for crowdsourced human annotators to evaluate in a reliable and timely manner. ', 'While instruction following and style are relatively easy for human annotators to judge, factual and logical correctness becomes increasingly challenging with complex problems. In such cases, human evaluators may mistakenly favor responses that seem more plausible or are simply longer, prioritizing style over correctness—thereby violating the hierarchical framework. As a result, human evaluations often become unreliable as the difficulty of the task increases. ', 'Benchmarks for LLM-based judges and reward models. As LLM-based judges have become a widely adopted method for evaluating and improving large language models (LLMs), several benchmarks have been introduced to assess their effectiveness. Works such as LLMEval (Zhang et al., 2023), MTBench (Zheng et al., 2024), and FairEval (Wang et al., 2023a) focus on evaluating the alignment between LLM-based judges’ responses and human evaluations. As mentioned above, these datasets suffer from the inherent subjectivity of human evaluation, prioritizing stylistic differences over factual and logical correctness. LLMBar (Zeng et al., 2023) instead takes a different approach by assessing LLM-based judges’ ability to follow instructions, using response pairs with clear ground truth preference labels based on adherence to instructions rather than subjective preferences. In contrast, JudgeBench focuses on assessing LLM-based judges’ ability to reason through responses and distinguish between correct and incorrect responses, which is more challenging than instruction following alone. 
'] | 17c6e6a32c763123494b9c7792e1d9ee15a288f3bad4e98dfd89a1980c2dd308 | c6da374332587e75991c772444d2fe81a84cf9c8 |
explanation | What is the motivation for using vector quantization in spatiotemporal prediction? | Our findings reveal that this belief does not hold true for the majority of state-of-the-art VQ methods, as demonstrated in Table 4 and Figure 5 on page 8 of our paper. We conducted experiments by varying the size of the codebook, from small to large, and found that none led to improved outcomes. Although a larger codebook size results in less deterioration, it does not enhance results. | ['Table 4', 'Figure 5'] | ['images/10719f212eaa31bd5eafbe3a45dce00fef2cc542ff52c8644bc7f47bc5ccf51b.jpg', 'images/a29b3a4f041b6c3762604546a5508ab6d5237cb7f71382ff31c412421c465415.jpg'] | ['mixed'] | 2 | 3 | 5 | {'with probability at least 1 − ε. Therefore, ∥g′ − g∥_2 ≥ (1 + ∆)^{−1}∥Ug − Ug′∥_2. Since the s-sparse unit vector covering number is bounded by (Cm/sδ)^s, we establish: ': '1'} | {'1': 'with probability at least 1 − ε. Therefore, ∥g′ − g∥_2 ≥ (1 + ∆)^{−1}∥Ug − Ug′∥_2. 
Since the s-sparse unit vector covering number is bounded by (Cm/sδ)^s, we establish: '] | 08e9c7668c3f06734523fb27dc073e5f4a26b88fb8305a536bbe8c097cc45fd7 | ca6147914709aec09e7b238aac57b2e654fc45c8
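For context on the vector-quantization discussion in this row: the core codebook operation, replacing each input vector by its nearest codeword, is compact enough to sketch. This is a generic illustration with a hand-picked codebook and names of our own choosing, not the method or codebook studied in the paper.

```python
def quantize(vectors, codebook):
    """Nearest-codeword vector quantization: each input vector is mapped
    to the index of the closest codebook entry (squared Euclidean
    distance) and replaced by that codeword."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    indices = [min(range(len(codebook)), key=lambda k: sqdist(v, codebook[k]))
               for v in vectors]
    return indices, [codebook[i] for i in indices]

# Hand-picked toy codebook; a learned VQ codebook plays the same role,
# and "codebook size" in the row above is simply len(codebook).
codebook = [(0.0, 0.0), (1.0, 1.0), (3.0, 0.0)]
idx, quantized = quantize([(0.9, 1.2), (2.8, -0.1)], codebook)
```

Growing the codebook adds candidate codewords (finer quantization) but, as the answer above notes, a larger codebook alone did not improve their prediction results.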
explanation | Have the authors considered techniques to make the trigger less detectable? | To quantify the visual stealthiness of a trigger, we use a computer vision model as the judge. We trained a benign global model on clean data under the same training settings as the victim FL system, using it as the judge model. We consider a trigger to have good visual stealthiness if its poisoned data can maintain high benign accuracy on the judge model while showing a high attack success rate (ASR) on the victim global model. Based on the ASR results in Table 3 and Figure 3, we selected the trigger size by balancing the trade-off between attack performance and visual stealthiness. | ['Table 3', 'Figure 3'] | ['images/69e33fae2123ee66640c302e2ec75c63c15a58254fc37185ddba49584b88ab55.jpg', 'images/e0f0d3734d440dab10cc79612bf915603d883123df705f906ece46b0c9182f6c.jpg'] | ['mixed'] | 2 | 3 | 5 | {'The capability of malicious clients in our attack is limited to the manipulation of their local training data that are input to their training pipelines. In addition, in line with existing works (Lyu et al., 2023; Zhang et al., 2024; Fang & Chen, 2023; Gong et al., 2022), we do not assume the secrecy of the global model provided by the FL server, as it would typically need to be accessible outside TEEs for use in local inference tasks. As such, in each FL round, clients are granted white-box access to the global model. Originating from initially benign clients that have been compromised, these malicious clients possess some local training data for the FL main task as background knowledge. ': '1', 'Datasets and global models: We evaluated DPOT on four classification datasets with non-IID data distributions: Fashion MNIST, FEMNIST, CIFAR10, and Tiny ImageNet. Table 4 summarizes their basic information and models we used on each dataset. 
': '2', 'In this section, we present the performance of DPOT attack against ten defense methods and compare our results with two widely-used data-poisoning attacks. ': '3'} | {'1': 'The capability of malicious clients in our attack is limited to the manipulation of their local training data that are input to their training pipelines. In addition, in line with existing works (Lyu et al., 2023; Zhang et al., 2024; Fang & Chen, 2023; Gong et al., 2022), we do not assume the secrecy of the global model provided by the FL server, as it would typically need to be accessible outside TEEs for use in local inference tasks. As such, in each FL round, clients are granted white-box access to the global model. Originating from initially benign clients that have been compromised, these malicious clients possess some local training data for the FL main task as background knowledge. ', '2': 'Datasets and global models: We evaluated DPOT on four classification datasets with non-IID data distributions: Fashion MNIST, FEMNIST, CIFAR10, and Tiny ImageNet. Table 4 summarizes their basic information and models we used on each dataset. ', '3': 'In this section, we present the performance of DPOT attack against ten defense methods and compare our results with two widely-used data-poisoning attacks. '} | {'images/e0f0d3734d440dab10cc79612bf915603d883123df705f906ece46b0c9182f6c.jpg': '3'} | {'3': 'images/e0f0d3734d440dab10cc79612bf915603d883123df705f906ece46b0c9182f6c.jpg'} | {'images/69e33fae2123ee66640c302e2ec75c63c15a58254fc37185ddba49584b88ab55.jpg': '3'} | {'3': 'images/69e33fae2123ee66640c302e2ec75c63c15a58254fc37185ddba49584b88ab55.jpg'} | {} | ['Datasets and global models: We evaluated DPOT on four classification datasets with non-IID data distributions: Fashion MNIST, FEMNIST, CIFAR10, and Tiny ImageNet. Table 4 summarizes their basic information and models we used on each dataset. 
', 'The capability of malicious clients in our attack is limited to the manipulation of their local training data that are input to their training pipelines. In addition, in line with existing works (Lyu et al., 2023; Zhang et al., 2024; Fang & Chen, 2023; Gong et al., 2022), we do not assume the secrecy of the global model provided by the FL server, as it would typically need to be accessible outside TEEs for use in local inference tasks. As such, in each FL round, clients are granted white-box access to the global model. Originating from initially benign clients that have been compromised, these malicious clients possess some local training data for the FL main task as background knowledge. ', 'In this section, we present the performance of DPOT attack against ten defense methods and compare our results with two widely-used data-poisoning attacks. '] | d470c9223d5f671f73f08d91acc25b519cf383bd485a14a39da46ff8742d04c9 | d48240fbd51a9bc4ee932e076defb133e9ee5288 |
explanation | How is the pixel count determined in practice? | Based on the ASR results in Table 3 and Figure 3, we selected the trigger size by balancing the trade-off between attack performance and visual stealthiness—a larger trigger size results in a higher ASR but lower benign accuracy. We set the lower bound for 'Drop' at -30% and the lower bound for 'Final ASR' at 50%, and choose the smallest trigger size that meets both constraints. | ['Table 3', 'Figure 3'] | ['images/69e33fae2123ee66640c302e2ec75c63c15a58254fc37185ddba49584b88ab55.jpg', 'images/e0f0d3734d440dab10cc79612bf915603d883123df705f906ece46b0c9182f6c.jpg'] | ['mixed'] | 2 | 3 | 5 | {'• Trigger size. The number of pixels that a backdoor trigger can alter is specified by the trigger size attribute. Selection of trigger sizes for various datasets are discussed in Appendix D.3. ': '1', 'Existing defenses against backdoor attacks in FL rely on a hypothesis that backdoor attacks will always cause the updating direction of a model to deviate from its original benign objective, because the backdoor objectives defined by backdoored data cannot be achieved within the original direction (Fung et al., 2020; Cao et al., 2021). However, the capabilities of backdoor attacks are not limited to this hypothesis. To counter this hypothesis, adversaries can align the updating directions of a model with respect to backdoor and benign objectives by strategically adjusting the backdoor objective. Applying this idea to FL, if the injection of backdoored data has minimal effect on a client’s model updates, then detecting this client as malicious becomes challenging for defenses based on analyzing clients’ model updates. ': '2'} | {'1': '• Trigger size. The number of pixels that a backdoor trigger can alter is specified by the trigger size attribute. Selection of trigger sizes for various datasets are discussed in Appendix D.3. 
', '2': 'Existing defenses against backdoor attacks in FL rely on a hypothesis that backdoor attacks will always cause the updating direction of a model to deviate from its original benign objective, because the backdoor objectives defined by backdoored data cannot be achieved within the original direction (Fung et al., 2020; Cao et al., 2021). However, the capabilities of backdoor attacks are not limited to this hypothesis. To counter this hypothesis, adversaries can align the updating directions of a model with respect to backdoor and benign objectives by strategically adjusting the backdoor objective. Applying this idea to FL, if the injection of backdoored data has minimal effect on a client’s model updates, then detecting this client as malicious becomes challenging for defenses based on analyzing clients’ model updates. '} | {'images/e0f0d3734d440dab10cc79612bf915603d883123df705f906ece46b0c9182f6c.jpg': '3'} | {'3': 'images/e0f0d3734d440dab10cc79612bf915603d883123df705f906ece46b0c9182f6c.jpg'} | {'images/69e33fae2123ee66640c302e2ec75c63c15a58254fc37185ddba49584b88ab55.jpg': '3', 'images/04648f4d66c2bf3ffebc2e5468af1f870aa7099fce095e3ef919fbe9fdde3cff.jpg': '2'} | {'3': 'images/69e33fae2123ee66640c302e2ec75c63c15a58254fc37185ddba49584b88ab55.jpg', '2': 'images/04648f4d66c2bf3ffebc2e5468af1f870aa7099fce095e3ef919fbe9fdde3cff.jpg'} | {} | ['images/04648f4d66c2bf3ffebc2e5468af1f870aa7099fce095e3ef919fbe9fdde3cff.jpg', 'Existing defenses against backdoor attacks in FL rely on a hypothesis that backdoor attacks will always cause the updating direction of a model to deviate from its original benign objective, because the backdoor objectives defined by backdoored data cannot be achieved within the original direction (Fung et al., 2020; Cao et al., 2021). However, the capabilities of backdoor attacks are not limited to this hypothesis. 
To counter this hypothesis, adversaries can align the updating directions of a model with respect to backdoor and benign objectives by strategically adjusting the backdoor objective. Applying this idea to FL, if the injection of backdoored data has minimal effect on a client’s model updates, then detecting this client as malicious becomes challenging for defenses based on analyzing clients’ model updates. ', '• Trigger size. The number of pixels that a backdoor trigger can alter is specified by the trigger size attribute. Selection of trigger sizes for various datasets is discussed in Appendix D.3. '] | d374c1df2439597a2bc212b3818e23a4b1137f6607d84bbfd435ef62542743bb | d48240fbd51a9bc4ee932e076defb133e9ee5288
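The selection rule stated in the answer above (smallest trigger size whose benign-accuracy 'Drop' stays above -30% and whose 'Final ASR' reaches at least 50%) is simple enough to express directly. The sketch below uses hypothetical measurements and our own function and variable names; it is not DPOT code.

```python
def select_trigger_size(candidates, min_drop=-30.0, min_asr=50.0):
    """Return the smallest trigger size whose benign-accuracy drop stays
    above `min_drop` (i.e., less severe than -30%) and whose final attack
    success rate reaches at least `min_asr`; None if no size qualifies.

    `candidates` maps trigger size -> (benign-accuracy drop %, final ASR %).
    """
    feasible = [size for size, (drop, asr) in candidates.items()
                if drop >= min_drop and asr >= min_asr]
    return min(feasible) if feasible else None

# Hypothetical per-size measurements for illustration only.
measurements = {4: (-5.0, 31.0), 9: (-12.0, 58.0), 16: (-41.0, 92.0)}
chosen = select_trigger_size(measurements)
```

Here size 4 fails the ASR bound, size 16 fails the drop bound, and size 9 is the smallest size satisfying both, mirroring the stealthiness/attack-performance trade-off the answer describes.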
explanation | What evidence supports the claim that the audio modality improves the performance of the visual modality? | We examined its impact on NeRF performance. As shown in Figure 6 and Table 2, incorporating the audio modality improves NeRF results in complex scenes with sparse observations, indicating that the two modalities support each other meaningfully. | ['Figure 6', 'Table 2'] | ['images/eb6e5ef15dae9de051f0d760500e20b159df83be6d58933557d6f5828e244b98.jpg', 'images/2d0d0bb593ee89d5745827047d353064f8ff7e65fc6afaf86d88210f7e930133.jpg'] | ['mixed'] | 2 | 3 | 5 | {'In response, we introduce NeRAF, a method that generates both novel views and RIRs at new sensor positions by jointly learning radiance and acoustic fields (Figure 1). NeRAF queries the radiance field to construct a voxel representation of the scene that encodes radiance and density. This grid conditions the acoustic field with appearance and geometric 3D priors, without the need for additional annotations. NeRAF’s cross-modal approach benefits both modalities: it achieves state-of-the-art ': '1'} | {'1': 'In response, we introduce NeRAF, a method that generates both novel views and RIRs at new sensor positions by jointly learning radiance and acoustic fields (Figure 1). NeRAF queries the radiance field to construct a voxel representation of the scene that encodes radiance and density. This grid conditions the acoustic field with appearance and geometric 3D priors, without the need for additional annotations. 
NeRAF’s cross-modal approach benefits both modalities: it achieves state-of-the-art '} | {'images/aa3c334d7372b121a911ada4bcda6a45995445239e50a5fe2f2067d2987814a4.jpg': '4', 'images/eb6e5ef15dae9de051f0d760500e20b159df83be6d58933557d6f5828e244b98.jpg': '6', 'images/534fbbf041d5c249b254a8aded735d423dcbf92d9f0aacb2d6b9c1a1cc1b8a0a.jpg': '2'} | {'4': 'images/aa3c334d7372b121a911ada4bcda6a45995445239e50a5fe2f2067d2987814a4.jpg', '6': 'images/eb6e5ef15dae9de051f0d760500e20b159df83be6d58933557d6f5828e244b98.jpg', '2': 'images/534fbbf041d5c249b254a8aded735d423dcbf92d9f0aacb2d6b9c1a1cc1b8a0a.jpg'} | {'images/2d0d0bb593ee89d5745827047d353064f8ff7e65fc6afaf86d88210f7e930133.jpg': '2'} | {'2': 'images/2d0d0bb593ee89d5745827047d353064f8ff7e65fc6afaf86d88210f7e930133.jpg'} | {} | ['In response, we introduce NeRAF, a method that generates both novel views and RIRs at new sensor positions by jointly learning radiance and acoustic fields (Figure 1). NeRAF queries the radiance field to construct a voxel representation of the scene that encodes radiance and density. This grid conditions the acoustic field with appearance and geometric 3D priors, without the need for additional annotations. NeRAF’s cross-modal approach benefits both modalities: it achieves state-of-the-art ', 'images/534fbbf041d5c249b254a8aded735d423dcbf92d9f0aacb2d6b9c1a1cc1b8a0a.jpg', 'images/aa3c334d7372b121a911ada4bcda6a45995445239e50a5fe2f2067d2987814a4.jpg'] | c933bbd3528d855093726b6e2837d2783a0761533a48c1aba824f9e1f65197f3 | e65453d1eccc2ec8cdbff813f51d33c54095c764 |
explanation | What is the impact of joint training on image generation? | Joint training improves image generation in large, complex scenes with sparse visual observations, as shown in Table 2 and Figure 6. For smaller scenes with sufficient observations, such as office 4, there is no notable difference between vision performances for joint and separate training. Audio generation performance remains equivalent for both approaches, which is expected since the acoustic field in both setups leverages a 3D grid representation of the scene. | ['Table 2', 'Figure 6'] | ['images/2d0d0bb593ee89d5745827047d353064f8ff7e65fc6afaf86d88210f7e930133.jpg', 'images/eb6e5ef15dae9de051f0d760500e20b159df83be6d58933557d6f5828e244b98.jpg'] | ['mixed'] | 2 | 3 | 5 | {} | {} | {'images/eb6e5ef15dae9de051f0d760500e20b159df83be6d58933557d6f5828e244b98.jpg': '6', 'images/534fbbf041d5c249b254a8aded735d423dcbf92d9f0aacb2d6b9c1a1cc1b8a0a.jpg': '2', 'images/aa3c334d7372b121a911ada4bcda6a45995445239e50a5fe2f2067d2987814a4.jpg': '4', 'images/419ce5575ea862af3be80194c528aa62cb5a2894726bd66387fa29a56bd58158.jpg': '1'} | {'6': 'images/eb6e5ef15dae9de051f0d760500e20b159df83be6d58933557d6f5828e244b98.jpg', '2': 'images/534fbbf041d5c249b254a8aded735d423dcbf92d9f0aacb2d6b9c1a1cc1b8a0a.jpg', '4': 'images/aa3c334d7372b121a911ada4bcda6a45995445239e50a5fe2f2067d2987814a4.jpg', '1': 'images/419ce5575ea862af3be80194c528aa62cb5a2894726bd66387fa29a56bd58158.jpg'} | {'images/2d0d0bb593ee89d5745827047d353064f8ff7e65fc6afaf86d88210f7e930133.jpg': '2'} | {'2': 'images/2d0d0bb593ee89d5745827047d353064f8ff7e65fc6afaf86d88210f7e930133.jpg'} | {} | ['images/534fbbf041d5c249b254a8aded735d423dcbf92d9f0aacb2d6b9c1a1cc1b8a0a.jpg', 'images/aa3c334d7372b121a911ada4bcda6a45995445239e50a5fe2f2067d2987814a4.jpg', 'images/419ce5575ea862af3be80194c528aa62cb5a2894726bd66387fa29a56bd58158.jpg'] | 289319609f38583b37507dbc67cf24f488df31e206a2fbce004da62dbf1e3a02 | e65453d1eccc2ec8cdbff813f51d33c54095c764 |
explanation | What datasets have been used to validate SPLR's performance, and what were the results? | We conducted experiments on the LRA benchmark and the HAR-DVS dataset, where SPLR achieved competitive performance while maintaining energy efficiency. The results are summarized in Table 1 and Figure 2. | ['Table 1', 'Figure 2'] | ['images/d77c4349202df452e5ab33e1480253cd781f4c29a1df6d90afd1c552c0a3665c.jpg', 'images/19be462f3617cf3f880374a603768182826bbfe6f16243d6d183cf9303d1a8cb.jpg'] | ['mixed'] | 2 | 3 | 5 | {'2. Dendrite Attention Layer: The model begins by passing the input through the Dendrite Attention Layer, constructed using DH-LIF neurons Zheng et al. (2024a), as shown in Figure 1(b). Each DH-LIF neuron has multiple dendritic branches, each characterized by a different timing factor τd, enabling it to capture temporal dynamics across various scales. This is essential for accommodating the diverse timescales present in asynchronous spike inputs. ': '1', 'Spiking Neural Networks (SNNs) offer an efficient framework for processing eventdriven data due to their sparse, spike-based communication, making them ideal for real-time tasks. However, their inability to capture long-range dependencies limits their effectiveness in complex temporal modeling. To address this challenge, we present a SPLR (SPiking Network for Learning Long-range Relations), a novel architecture designed to overcome these limitations. The core contribution of SPLR is the Spike-Aware HiPPO (SA-HiPPO) mechanism, which adapts the HiPPO framework for discrete, spike-driven inputs, enabling efficient long-range memory retention in event-driven systems. Additionally, SPLR includes a convolutional layer that integrates state-space dynamics to enhance feature extraction while preserving the efficiency of sparse, asynchronous processing. 
Together, these innovations enable SPLR to model both short- and long-term dependencies effectively, outperforming prior methods on various event-based datasets. Experimental results demonstrate that SPLR achieves superior performance in tasks requiring fine-grained temporal dynamics and long-range memory, establishing it as a scalable and efficient solution for real-time applications such as event-based vision and sensor fusion in neuromorphic computing. ': '2'} | {'1': '2. Dendrite Attention Layer: The model begins by passing the input through the Dendrite Attention Layer, constructed using DH-LIF neurons Zheng et al. (2024a), as shown in Figure 1(b). Each DH-LIF neuron has multiple dendritic branches, each characterized by a different timing factor τd, enabling it to capture temporal dynamics across various scales. This is essential for accommodating the diverse timescales present in asynchronous spike inputs. ', '2': 'Spiking Neural Networks (SNNs) offer an efficient framework for processing eventdriven data due to their sparse, spike-based communication, making them ideal for real-time tasks. However, their inability to capture long-range dependencies limits their effectiveness in complex temporal modeling. To address this challenge, we present a SPLR (SPiking Network for Learning Long-range Relations), a novel architecture designed to overcome these limitations. The core contribution of SPLR is the Spike-Aware HiPPO (SA-HiPPO) mechanism, which adapts the HiPPO framework for discrete, spike-driven inputs, enabling efficient long-range memory retention in event-driven systems. Additionally, SPLR includes a convolutional layer that integrates state-space dynamics to enhance feature extraction while preserving the efficiency of sparse, asynchronous processing. Together, these innovations enable SPLR to model both short- and long-term dependencies effectively, outperforming prior methods on various event-based datasets. 
Experimental results demonstrate that SPLR achieves superior performance in tasks requiring fine-grained temporal dynamics and long-range memory, establishing it as a scalable and efficient solution for real-time applications such as event-based vision and sensor fusion in neuromorphic computing. '} | {'images/19be462f3617cf3f880374a603768182826bbfe6f16243d6d183cf9303d1a8cb.jpg': '2', 'images/44c595d915c169d988eaff4c571c8e239a73fd68bc02a89d9517cad3d3bf5b37.jpg': '1'} | {'2': 'images/19be462f3617cf3f880374a603768182826bbfe6f16243d6d183cf9303d1a8cb.jpg', '1': 'images/44c595d915c169d988eaff4c571c8e239a73fd68bc02a89d9517cad3d3bf5b37.jpg'} | {'images/d77c4349202df452e5ab33e1480253cd781f4c29a1df6d90afd1c552c0a3665c.jpg': '1'} | {'1': 'images/d77c4349202df452e5ab33e1480253cd781f4c29a1df6d90afd1c552c0a3665c.jpg'} | {} | ['images/44c595d915c169d988eaff4c571c8e239a73fd68bc02a89d9517cad3d3bf5b37.jpg', '2. Dendrite Attention Layer: The model begins by passing the input through the Dendrite Attention Layer, constructed using DH-LIF neurons Zheng et al. (2024a), as shown in Figure 1(b). Each DH-LIF neuron has multiple dendritic branches, each characterized by a different timing factor τd, enabling it to capture temporal dynamics across various scales. This is essential for accommodating the diverse timescales present in asynchronous spike inputs. ', 'Spiking Neural Networks (SNNs) offer an efficient framework for processing eventdriven data due to their sparse, spike-based communication, making them ideal for real-time tasks. However, their inability to capture long-range dependencies limits their effectiveness in complex temporal modeling. To address this challenge, we present a SPLR (SPiking Network for Learning Long-range Relations), a novel architecture designed to overcome these limitations. 
The core contribution of SPLR is the Spike-Aware HiPPO (SA-HiPPO) mechanism, which adapts the HiPPO framework for discrete, spike-driven inputs, enabling efficient long-range memory retention in event-driven systems. Additionally, SPLR includes a convolutional layer that integrates state-space dynamics to enhance feature extraction while preserving the efficiency of sparse, asynchronous processing. Together, these innovations enable SPLR to model both short- and long-term dependencies effectively, outperforming prior methods on various event-based datasets. Experimental results demonstrate that SPLR achieves superior performance in tasks requiring fine-grained temporal dynamics and long-range memory, establishing it as a scalable and efficient solution for real-time applications such as event-based vision and sensor fusion in neuromorphic computing. '] | 3ded93663cda6eea4e124de849bb9bf25f3b604a152e7ba262196836f6177535 | f75155827303920d38abe5f6b01a0a3257e0a425 |
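The DH-LIF description in this row (one neuron with several dendritic branches, each carrying its own timing factor τd) can be sketched as a discrete-time update. This is a simplified toy, assuming an exponential leak per branch, a shared somatic threshold, and reset-to-zero after an output spike; it is not the implementation from Zheng et al. (2024a) or the SPLR codebase.

```python
import math

def dendritic_lif(spike_train, taus, v_th=2.5, dt=1.0):
    """Toy LIF neuron with one leaky accumulator per dendritic branch,
    each decaying with its own time constant tau. The soma fires when
    the summed branch currents reach `v_th`, then every branch resets."""
    branches = [0.0] * len(taus)
    out = []
    for s in spike_train:                       # s is 0 or 1 per time step
        branches = [b * math.exp(-dt / tau) + s # branch-specific leak + input
                    for b, tau in zip(branches, taus)]
        if sum(branches) >= v_th:               # somatic integration
            out.append(1)
            branches = [0.0] * len(taus)        # reset after an output spike
        else:
            out.append(0)
    return out
```

With a fast and a slow branch (taus = [1.0, 10.0]), a single input spike stays sub-threshold, but a second spike arriving one step later rides on the slow branch's retained charge and crosses the threshold. That retention across multiple timescales is the behavior the dendrite attention layer relies on.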
explanation | What are the expected speedups for well-known models in the single-node setup? | Your understanding of our proof-of-concept is correct. What we have are theoretical guarantees backed by simulations. The simulations show the expected speedups for well-known models (Table 2) and for any possible drafter (Figure 2) in the single-node setup. | ['Table 2', 'Figure 2'] | ['images/913972222e09d45f7f497b667d9e122dadcba68cf47bf4daf3035005a2d940f4.jpg', 'images/5d216ed0c53b02ce96a077e35b732d97611bacb4fa53d4781dc744082112df6b.jpg'] | ['mixed'] | 2 | 3 | 5 | {'Lookahead. While the abstract version of DSI described in Algorithm 1 takes advantage of a sufficiently large number of servers, in practice we typically have a fixed number of servers. We can deploy DSI on an arbitrary number of servers (≥2) by selecting a sufficiently large lookahead hyperparameter, as elaborated in Appendix C. The lookahead is defined as the number of draft tokens in every verification task sent to a target server. The lookahead in Algorithm 1 is set to 1 for simplicity, but can be arbitrarily large. Larger lookahead values require a lower SP degree. We have ': '1', 'A recent line of work (Stern et al., 2018) for accelerating the inference of LLMs is based on speculative inference. The idea is to use speculative execution (Burton, 1985; Hennessy and Patterson, 2012) to predict possible continuations of the input prompt using faster drafter LLMs that approximate the target LLM, then verify the correctness of the predicted continuations simultaneously by utilizing data parallelism capabilities of modern hardware (like CUDA-based processors) such as batching. They provided empirical evidence that their proposed draft-then-verify approach speeds up the inference. Since the introduction of speculative inference (SI) by Stern et al. 
(2018), various papers (Leviathan et al., 2023; Chen et al., 2023) have improved this method by introducing novel lossless methods to verify the correctness of token sequences that were generated by the drafter LLMs, and empirically showed significant speedups of 2-3x in some settings. Following this line of work, Miao et al. (2023) extended the verification algorithm of Leviathan et al. (2023); Chen et al. (2023) and showed that their method increases the probability of accepting draft tokens, and proved its losslessness. Following the success of this approach, research in this area has expanded in various directions (Mamou et al., 2024; Li et al., 2024; Cai et al., 2024; Sun et al., 2024b; Zhou et al., 2024; Liu et al., 2023; Joao Gante, 2023). Most recently, Chen et al. (2024) showed that lossless SI can effectively reduce the inference latency while increasing its throughput, in the multi-user setting. The effectiveness of SI algorithms comes from their ability to reduce the number of target forwards by using batching such that a single target forward is sufficient for generating more than one token. However, existing SI methods implement a sequential process, repeatedly drafting and verifying. They never draft more tokens before the previous verification ends. This limitation implies that SI speeds up the inference if and only if the drafter is sufficiently fast and accurate, as studied in this paper. In cases where the drafter is too slow or inaccurate, SI is slower than non-SI. ': '2'} | {'1': 'Lookahead. While the abstract version of DSI described in Algorithm 1 takes advantage of a sufficiently large number of servers, in practice we typically have a fixed number of servers. We can deploy DSI on an arbitrary number of servers (≥2) by selecting a sufficiently large lookahead hyperparameter, as elaborated in Appendix C. The lookahead is defined as the number of draft tokens in every verification task sent to a target server. 
The lookahead in Algorithm 1 is set to 1 for simplicity, but can be arbitrarily large. Larger lookahead values require a lower SP degree. We have ', '2': 'A recent line of work (Stern et al., 2018) for accelerating the inference of LLMs is based on speculative inference. The idea is to use speculative execution (Burton, 1985; Hennessy and Patterson, 2012) to predict possible continuations of the input prompt using faster drafter LLMs that approximate the target LLM, then verify the correctness of the predicted continuations simultaneously by utilizing data parallelism capabilities of modern hardware (like CUDA-based processors) such as batching. They provided empirical evidence that their proposed draft-then-verify approach speeds up the inference. Since the introduction of speculative inference (SI) by Stern et al. (2018), various papers (Leviathan et al., 2023; Chen et al., 2023) have improved this method by introducing novel lossless methods to verify the correctness of token sequences that were generated by the drafter LLMs, and empirically showed significant speedups of 2-3x in some settings. Following this line of work, Miao et al. (2023) extended the verification algorithm of Leviathan et al. (2023); Chen et al. (2023) and showed that their method increases the probability of accepting draft tokens, and proved its losslessness. Following the success of this approach, research in this area has expanded in various directions (Mamou et al., 2024; Li et al., 2024; Cai et al., 2024; Sun et al., 2024b; Zhou et al., 2024; Liu et al., 2023; Joao Gante, 2023). Most recently, Chen et al. (2024) showed that lossless SI can effectively reduce the inference latency while increasing its throughput, in the multi-user setting. The effectiveness of SI algorithms comes from their ability to reduce the number of target forwards by using batching such that a single target forward is sufficient for generating more than one token. 
However, existing SI methods implement a sequential process, repeatedly drafting and verifying. They never draft more tokens before the previous verification ends. This limitation implies that SI speeds up the inference if and only if the drafter is sufficiently fast and accurate, as studied in this paper. In cases where the drafter is too slow or inaccurate, SI is slower than non-SI. '} | {'images/de7f29969c84979fba29d0f8f2fd41fc0d55d20971a159b10f1479b29769d99f.jpg': '1', 'images/5d216ed0c53b02ce96a077e35b732d97611bacb4fa53d4781dc744082112df6b.jpg': '2'} | {'1': 'images/de7f29969c84979fba29d0f8f2fd41fc0d55d20971a159b10f1479b29769d99f.jpg', '2': 'images/5d216ed0c53b02ce96a077e35b732d97611bacb4fa53d4781dc744082112df6b.jpg'} | {'images/913972222e09d45f7f497b667d9e122dadcba68cf47bf4daf3035005a2d940f4.jpg': '2'} | {'2': 'images/913972222e09d45f7f497b667d9e122dadcba68cf47bf4daf3035005a2d940f4.jpg'} | {} | ['A recent line of work (Stern et al., 2018) for accelerating the inference of LLMs is based on speculative inference. The idea is to use speculative execution (Burton, 1985; Hennessy and Patterson, 2012) to predict possible continuations of the input prompt using faster drafter LLMs that approximate the target LLM, then verify the correctness of the predicted continuations simultaneously by utilizing data parallelism capabilities of modern hardware (like CUDA-based processors) such as batching. They provided empirical evidence that their proposed draft-then-verify approach speeds up the inference. Since the introduction of speculative inference (SI) by Stern et al. (2018), various papers (Leviathan et al., 2023; Chen et al., 2023) have improved this method by introducing novel lossless methods to verify the correctness of token sequences that were generated by the drafter LLMs, and empirically showed significant speedups of 2-3x in some settings. Following this line of work, Miao et al. (2023) extended the verification algorithm of Leviathan et al. (2023); Chen et al. 
(2023) and showed that their method increases the probability of accepting draft tokens, and proved its losslessness. Following the success of this approach, research in this area has expanded in various directions (Mamou et al., 2024; Li et al., 2024; Cai et al., 2024; Sun et al., 2024b; Zhou et al., 2024; Liu et al., 2023; Joao Gante, 2023). Most recently, Chen et al. (2024) showed that lossless SI can effectively reduce the inference latency while increasing its throughput, in the multi-user setting. The effectiveness of SI algorithms comes from their ability to reduce the number of target forwards by using batching such that a single target forward is sufficient for generating more than one token. However, existing SI methods implement a sequential process, repeatedly drafting and verifying. They never draft more tokens before the previous verification ends. This limitation implies that SI speeds up the inference if and only if the drafter is sufficiently fast and accurate, as studied in this paper. In cases where the drafter is too slow or inaccurate, SI is slower than non-SI. ', 'images/de7f29969c84979fba29d0f8f2fd41fc0d55d20971a159b10f1479b29769d99f.jpg', 'Lookahead. While the abstract version of DSI described in Algorithm 1 takes advantage of a sufficiently large number of servers, in practice we typically have a fixed number of servers. We can deploy DSI on an arbitrary number of servers (≥2) by selecting a sufficiently large lookahead hyperparameter, as elaborated in Appendix C. The lookahead is defined as the number of draft tokens in every verification task sent to a target server. The lookahead in Algorithm 1 is set to 1 for simplicity, but can be arbitrarily large. Larger lookahead values require a lower SP degree. We have '] | 1b32902c7ab15a653013003602dfc6a89055175f9470d23890bbd70486d05535 | fa970101b0b52fdbe62b32a64dfb4914a7798936 |
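The draft-then-verify loop discussed in this row can be sketched as a toy greedy variant. This is illustrative only: the lossless verification of Leviathan et al. (2023) is probabilistic, the function names here are hypothetical, and the per-token target calls stand in for one batched verification pass.

```python
def speculative_step(target, drafter, prefix, lookahead):
    """One draft-then-verify step (greedy variant, illustrative only).

    `target` and `drafter` each map a token prefix to the next token. The
    drafter proposes `lookahead` tokens; the target then checks them (in
    practice in a single batched forward), keeping the longest agreeing
    prefix plus one token of its own, so at least one token is always
    generated per target pass.
    """
    draft = []
    for _ in range(lookahead):
        draft.append(drafter(prefix + draft))
    accepted = []
    for tok in draft:
        t = target(prefix + accepted)
        if t == tok:
            accepted.append(tok)
        else:
            accepted.append(t)  # target overrides the first mismatch
            return accepted
    accepted.append(target(prefix + accepted))  # bonus token on full accept
    return accepted
```

With a perfectly matching drafter, one step yields `lookahead + 1` tokens; with a useless drafter, it still yields one, which is why a slow or inaccurate drafter makes SI slower than non-SI.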
explanation | How does the performance comparison between SEDD and DCD change with an increase in the number of denoising steps? | Results in Figure 4 contain sample text from SEDD and DCD to show intuitively that SEDD cannot produce high-quality samples with few denoising steps while DCD can achieve good sample quality in as few as 4 steps. Further increasing the number of denoising steps of SEDD will not help much since with 256 steps to generate sequences of length 128, only one token will be unmasked in each denoising step. Additionally, as shown in Figure 3, the performance of DCD improves consistently as we increase the number of denoising steps. | ['Figure 4', 'Figure 3'] | ['images/567bedb724f7e1158badc7880ba7fa08f36f6c45f74ca75b47e2a5b096d660dc.jpg', 'images/e2ba0b3bee47a6b652277755e18fc2c45f11c37a53dcae79632f867fa3233fe6.jpg'] | ['figure'] | 2 | 3 | 5 | {'Proposition 4. For a positive distribution p and any V ∈ R^{N×C}, the distribution q(x) ∝ p(x) · ∏_i exp(V[i, x_i]) has the same copula as p. ': '1', 'Given univariate marginals {p_dm(X̃_t^i | x_{t+1})}_i and an autoregressive copula distribution p_copula(X̃_t | x_{t+1}), both of which estimate the target distribution q(X̃_t | x_{t+1}), our goal is to combine them following the I-projection procedure described in Section 4.1. Specifically, this involves solving the convex optimization problem in Equation (4), which is specialized to the following: ': '2'} | {'1': 'Proposition 4. For a positive distribution p and any V ∈ R^{N×C}, the distribution q(x) ∝ p(x) · ∏_i exp(V[i, x_i]) has the same copula as p. ', '2': 'Given univariate marginals {p_dm(X̃_t^i | x_{t+1})}_i and an autoregressive copula distribution p_copula(X̃_t | x_{t+1}), both of which estimate the target distribution q(X̃_t | x_{t+1}), our goal is to combine them following the I-projection procedure described in Section 4.1.
Specifically, this involves solving the convex optimization problem in Equation (4), which is specialized to the following: '} | {'images/567bedb724f7e1158badc7880a72c45f74ca75b47e2a5b096d660dc.jpg': '4', 'images/e2ba0b3bee47a6b652277755e18fc2c45f11c37a53dcae79632f867fa3233fe6.jpg': '3'} | {'4': 'images/567bedb724f7e1158badc7880ba7fa08f36f6c45f74ca75b47e2a5b096d660dc.jpg', '3': 'images/e2ba0b3bee47a6b652277755e18fc2c45f11c37a53dcae79632f867fa3233fe6.jpg'} | {'images/df05c2c1ba0c353bfa57b4b96f4d960997e7b25fdf4eda1c6c08b7366be2c3d0.jpg': '1'} | {'1': 'images/df05c2c1ba0c353bfa57b4b96f4d960997e7b25fdf4eda1c6c08b7366be2c3d0.jpg'} | {} | ['Given univariate marginals {p_dm(X̃_t^i | x_{t+1})}_i and an autoregressive copula distribution p_copula(X̃_t | x_{t+1}), both of which estimate the target distribution q(X̃_t | x_{t+1}), our goal is to combine them following the I-projection procedure described in Section 4.1. Specifically, this involves solving the convex optimization problem in Equation (4), which is specialized to the following: ', 'images/df05c2c1ba0c353bfa57b4b96f4d960997e7b25fdf4eda1c6c08b7366be2c3d0.jpg', 'Proposition 4. For a positive distribution p and any V ∈ R^{N×C}, the distribution q(x) ∝ p(x) · ∏_i exp(V[i, x_i]) has the same copula as p. '] | bb3e82b92d35459a6e1cc70ddf9c42069a7a35ae55b69549fec2cbad10f7f8f8 | 03f9d16fc97dd93b4aab15f0ba39188eaefe16c3
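Proposition 4 in this row says that tilting a joint distribution by per-variable potentials leaves its copula unchanged. A minimal numerical sketch on a toy two-variable example (not the paper's code): tilting an independent base distribution shifts its marginals but keeps it independent, i.e. the independence copula is preserved.

```python
import math
from itertools import product

def tilt(p, V):
    """Tilt a discrete joint p over {0..C-1}^N by per-variable potentials V:
    q(x) proportional to p(x) * prod_i exp(V[i][x_i]) (Proposition 4's family).
    """
    states = list(product(range(len(V[0])), repeat=len(V)))
    w = {x: p[x] * math.exp(sum(V[i][xi] for i, xi in enumerate(x)))
         for x in states}
    Z = sum(w.values())
    return {x: v / Z for x, v in w.items()}

# independent fair coins as the base distribution
p = {x: 0.25 for x in product(range(2), repeat=2)}
# boost outcome 1 of the first variable only
q = tilt(p, [[0.0, 1.0], [0.0, 0.0]])
```

Here the marginal of the first variable changes, yet `q` still factorizes into its marginals.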
explanation | Does the presence of meaningless words in the visualization indicate that the representation is not compact or noisy? | As shown in Figure 5, the visualizations demonstrate that the learned lexical representations align well with motion semantics, capturing key terms such as 'monkey,' 'kick,' 'flail,' and 'dance waltz,' underscoring the effectiveness of our method. As indicated in Figure 4, our method typically activates approximately 128 keywords, which is significantly more compact compared to current methods that rely on 256 or 512 dimensions. Furthermore, we can enhance regularization constraints to increase the model's focus on fewer, more relevant keywords, thus improving the compactness of the representations. | ['Figure 4', 'Figure 5'] | ['images/35398bc270826a0afcbf1eb467cf0e98daa0c35408ef2ad67abb344a44a9ce24.jpg', 'images/52e7b9448b0e34160ae57218f882a91387c93d5d61b3d93cb9a02590f8cfa6ff.jpg'] | ['figure'] | 2 | 3 | 5 | {'where D denotes the whole dataset, M^(enc) and M^(dec) denote the sets of masked positions in x̄ and x̂, o_j denotes the logit of x_j, and x_j refers to the original text token; E_cbow is calculated as Eq. 4. ': '1', 'Finally, the bottleneck E_cbow is fed into a simple decoder to reconstruct the masked tokens. ': '2', 'Ablation Studies on Sparsity. Top-K Sparsifying (Shen et al.; Formal et al., 2021) adjusts the sparsity of lexicon-weighted representations, striking a balance between efficiency and effectiveness by retaining only the top-k weighted lexicons while setting others to zero. Applied exclusively during inference, this method introduces no additional training overhead. Fig. 4 illustrates the storage and retrieval performance across different sparsity levels on the KIT-ML dataset, where our model demonstrates superior storage efficiency and retrieval performance compared to previous approaches.
': '3'} | {'1': 'where D denotes the whole dataset, M^(enc) and M^(dec) denote the sets of masked positions in x̄ and x̂, o_j denotes the logit of x_j, and x_j refers to the original text token; E_cbow is calculated as Eq. 4. ', '2': 'Finally, the bottleneck E_cbow is fed into a simple decoder to reconstruct the masked tokens. ', '3': 'Ablation Studies on Sparsity. Top-K Sparsifying (Shen et al.; Formal et al., 2021) adjusts the sparsity of lexicon-weighted representations, striking a balance between efficiency and effectiveness by retaining only the top-k weighted lexicons while setting others to zero. Applied exclusively during inference, this method introduces no additional training overhead. Fig. 4 illustrates the storage and retrieval performance across different sparsity levels on the KIT-ML dataset, where our model demonstrates superior storage efficiency and retrieval performance compared to previous approaches. '} | {'images/35398bc270826a0afcbf1eb467cf0e98daa0c35408ef2ad67abb344a44a9ce24.jpg': '4', 'images/52e7b9448b0e34160ae57218f882a91387c93d5d61b3d93cb9a02590f8cfa6ff.jpg': '5'} | {'4': 'images/35398bc270826a0afcbf1eb467cf0e98daa0c35408ef2ad67abb344a44a9ce24.jpg', '5': 'images/52e7b9448b0e34160ae57218f882a91387c93d5d61b3d93cb9a02590f8cfa6ff.jpg'} | {} | {} | {} | ['Ablation Studies on Sparsity. Top-K Sparsifying (Shen et al.; Formal et al., 2021) adjusts the sparsity of lexicon-weighted representations, striking a balance between efficiency and effectiveness by retaining only the top-k weighted lexicons while setting others to zero. Applied exclusively during inference, this method introduces no additional training overhead. Fig. 4 illustrates the storage and retrieval performance across different sparsity levels on the KIT-ML dataset, where our model demonstrates superior storage efficiency and retrieval performance compared to previous approaches. ', 'Finally, the bottleneck E_cbow is fed into a simple decoder to reconstruct the masked tokens.
', 'where D denotes the whole dataset, M^(enc) and M^(dec) denote the sets of masked positions in x̄ and x̂, o_j denotes the logit of x_j, and x_j refers to the original text token; E_cbow is calculated as Eq. 4. '] | 861f204cc3e6300f132c09a3a1ebc4100f6cfa764b610699e99775ce02ab4742 | 06f7c676f544702872abe06e953c555a2bf87f41
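The Top-K Sparsifying step described in this row (keep the k largest lexicon weights at inference time, zero the rest) can be sketched as follows; this is an illustrative reimplementation, not the authors' code, and ties are broken by index order as an arbitrary choice.

```python
def top_k_sparsify(weights, k):
    """Keep the k largest lexicon weights, zero out the rest.

    Applied only at inference, so it adds no training overhead.
    """
    if k >= len(weights):
        return list(weights)
    # indices of the k largest weights
    keep = set(sorted(range(len(weights)),
                      key=lambda i: weights[i], reverse=True)[:k])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]
```

Varying `k` trades retrieval quality against storage, which is what the sparsity ablation in the row sweeps over.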
explanation | How sensitive is the system to the choice of priors? Could you provide guidelines for selecting appropriate priors? | Yes, the choice of priors, specifically the variances associated with those priors, is crucial for the performance of Polyrating. Low variance introduces heavy regularization, potentially slowing convergence and limiting adaptation to new tasks, while high variance may reduce the regularizing effect of priors, making them less impactful. Fortunately, these parameters are optimized automatically via cross-validation on the training dataset, allowing us to identify priors that best support predictive accuracy. Empirically, we found that optimal standard deviations, identified via this process, ranged from 1 to 100. Further, based on our experiments in Figure 1 and Figure 3, these variances adjust dynamically based on dataset size: with fewer samples, lower standard deviations are typically optimal, and as sample count increases, the optimal standard deviations also tend to increase. This adaptive approach reduces manual tuning effort and improves robustness across varying data conditions. Thus, one should always use this cross-validation-based approach to find the appropriate hyperparameters. If this is not a possibility for some reason, or to obtain initial results, standard deviations around 50 provide a good default value. We note that for all our experiments, we only optimized over 10 possible values of the variances. | ['Figure 1', 'Figure 3'] | ['images/347a34bb02f4ab1abc20001f73688e0ee2173a3def29d58734ed3290a838ae77.jpg', 'images/523a6da81716f91830c2643da58dc3ae65a0d26e39e4c2e63fd3e848ce76349e.jpg'] | ['figure'] | 2 | 3 | 5 | {'where length(gy ) is the length of the models’ completion for the given question. We use the public dataset from Wildbench (Lin et al., 2024) to obtain our LLM-based evaluation. Fig. 3(a) shows the logistic loss on a test set of the Chatbot Arena for a varying amount of human annotations. 
We find that POLYRATING converges faster to the optimal ratings than the univariate baseline. Specifically, the increase in sample efficiency when collecting 10000 human annotations is 38%. ': '1', 'Ratings Rating systems have been used across various domains, such as sports (Elo, 2008; Glickman, 2002; Shelopugin and Sirotkin, 2023; Sismanis, 2010; Vaz et al., 2012), gaming (Herbrich et al., 2007; Dangauthier et al., 2007), movies (Talattinis and Stephanides, 2022) and recommendation systems (Adomavicius et al., 2005; Chen et al., 2018; Kong et al., 2019). The widely recognized Elo rating system (Elo, 2008) and its extensions such as Glicko (Glickman, 2002) are generic univariate systems based on the BT-model (Bradley and Terry, 1952) that are widely applicable. Furthermore, various rating systems have been developed for specific use cases and areas. For example, Elo++ (Sismanis, 2010) was specifically designed for chess, and TrueSkill (Herbrich et al., 2007; Dangauthier et al., 2007) has been further developed specifically for multiplayer online games. ': '2', 'Rating-based human evaluation has become an essential tool to accurately evaluate the impressive performance of large language models (LLMs). However, current rating systems suffer from several important limitations: first, they fail to account for biases that significantly influence evaluation results, second, they require large and expensive preference datasets to obtain accurate ratings, and third, they do not facilitate meaningful comparisons of model ratings across different tasks. To address these issues, we introduce POLYRATING, an expressive and flexible rating system based on maximum a posteriori estimation that enables a more nuanced and thorough analysis of model performance at lower costs. POLYRATING can detect and quantify biases affecting human preferences, ensuring fairer model comparisons. 
Further, POLYRATING can reduce the cost of human evaluations by up to 41% for new models and up to 77% for new tasks by leveraging existing benchmark scores. Lastly, POLYRATING enables direct comparisons of ratings across different tasks, providing a comprehensive understanding of an LLMs’ strengths, weaknesses, and relative performance across different applications. 1 ': '3'} | {'1': 'where length(gy ) is the length of the models’ completion for the given question. We use the public dataset from Wildbench (Lin et al., 2024) to obtain our LLM-based evaluation. Fig. 3(a) shows the logistic loss on a test set of the Chatbot Arena for a varying amount of human annotations. We find that POLYRATING converges faster to the optimal ratings than the univariate baseline. Specifically, the increase in sample efficiency when collecting 10000 human annotations is 38%. ', '2': 'Ratings Rating systems have been used across various domains, such as sports (Elo, 2008; Glickman, 2002; Shelopugin and Sirotkin, 2023; Sismanis, 2010; Vaz et al., 2012), gaming (Herbrich et al., 2007; Dangauthier et al., 2007), movies (Talattinis and Stephanides, 2022) and recommendation systems (Adomavicius et al., 2005; Chen et al., 2018; Kong et al., 2019). The widely recognized Elo rating system (Elo, 2008) and its extensions such as Glicko (Glickman, 2002) are generic univariate systems based on the BT-model (Bradley and Terry, 1952) that are widely applicable. Furthermore, various rating systems have been developed for specific use cases and areas. For example, Elo++ (Sismanis, 2010) was specifically designed for chess, and TrueSkill (Herbrich et al., 2007; Dangauthier et al., 2007) has been further developed specifically for multiplayer online games. ', '3': 'Rating-based human evaluation has become an essential tool to accurately evaluate the impressive performance of large language models (LLMs). 
However, current rating systems suffer from several important limitations: first, they fail to account for biases that significantly influence evaluation results, second, they require large and expensive preference datasets to obtain accurate ratings, and third, they do not facilitate meaningful comparisons of model ratings across different tasks. To address these issues, we introduce POLYRATING, an expressive and flexible rating system based on maximum a posteriori estimation that enables a more nuanced and thorough analysis of model performance at lower costs. POLYRATING can detect and quantify biases affecting human preferences, ensuring fairer model comparisons. Further, POLYRATING can reduce the cost of human evaluations by up to 41% for new models and up to 77% for new tasks by leveraging existing benchmark scores. Lastly, POLYRATING enables direct comparisons of ratings across different tasks, providing a comprehensive understanding of an LLMs’ strengths, weaknesses, and relative performance across different applications. 1 '} | {'images/347a34bb02f4ab1abc20001f73688e0ee2173a3def29d58734ed3290a838ae77.jpg': '1', 'images/523a6da81716f91830c2643da58dc3ae65a0d26e39e4c2e63fd3e848ce76349e.jpg': '3'} | {'1': 'images/347a34bb02f4ab1abc20001f73688e0ee2173a3def29d58734ed3290a838ae77.jpg', '3': 'images/523a6da81716f91830c2643da58dc3ae65a0d26e39e4c2e63fd3e848ce76349e.jpg'} | {} | {} | {} | ['where length(gy ) is the length of the models’ completion for the given question. We use the public dataset from Wildbench (Lin et al., 2024) to obtain our LLM-based evaluation. Fig. 3(a) shows the logistic loss on a test set of the Chatbot Arena for a varying amount of human annotations. We find that POLYRATING converges faster to the optimal ratings than the univariate baseline. Specifically, the increase in sample efficiency when collecting 10000 human annotations is 38%. 
', 'Rating-based human evaluation has become an essential tool to accurately evaluate the impressive performance of large language models (LLMs). However, current rating systems suffer from several important limitations: first, they fail to account for biases that significantly influence evaluation results, second, they require large and expensive preference datasets to obtain accurate ratings, and third, they do not facilitate meaningful comparisons of model ratings across different tasks. To address these issues, we introduce POLYRATING, an expressive and flexible rating system based on maximum a posteriori estimation that enables a more nuanced and thorough analysis of model performance at lower costs. POLYRATING can detect and quantify biases affecting human preferences, ensuring fairer model comparisons. Further, POLYRATING can reduce the cost of human evaluations by up to 41% for new models and up to 77% for new tasks by leveraging existing benchmark scores. Lastly, POLYRATING enables direct comparisons of ratings across different tasks, providing a comprehensive understanding of an LLMs’ strengths, weaknesses, and relative performance across different applications. 1 ', 'Ratings Rating systems have been used across various domains, such as sports (Elo, 2008; Glickman, 2002; Shelopugin and Sirotkin, 2023; Sismanis, 2010; Vaz et al., 2012), gaming (Herbrich et al., 2007; Dangauthier et al., 2007), movies (Talattinis and Stephanides, 2022) and recommendation systems (Adomavicius et al., 2005; Chen et al., 2018; Kong et al., 2019). The widely recognized Elo rating system (Elo, 2008) and its extensions such as Glicko (Glickman, 2002) are generic univariate systems based on the BT-model (Bradley and Terry, 1952) that are widely applicable. Furthermore, various rating systems have been developed for specific use cases and areas. 
For example, Elo++ (Sismanis, 2010) was specifically designed for chess, and TrueSkill (Herbrich et al., 2007; Dangauthier et al., 2007) has been further developed specifically for multiplayer online games. '] | 5d39a4f7d176a478084a3e4695f2191d17bfce9ebb20f50216dcbbc8e162c185 | 08b1686a41602ea798b8a6c7e9eea1f7fc02d0a3 |
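The BT-model-based systems surveyed in this row (Elo and its extensions) all reduce to an expected-score formula plus an additive update. For reference, one classical Elo step is sketched below; this is standard Elo, not POLYRATING's maximum a posteriori estimator.

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """One Elo update: expected score from the rating gap, then a K-factor step.

    score_a is 1.0 for a win by player a, 0.5 for a draw, 0.0 for a loss.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new
```

Note the update conserves total rating, so a win transfers points from the loser to the winner.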
explanation | What are the quantitative improvements in memory usage and runtime compared to existing video transformer baselines? | The quantitative improvements in memory usage and runtime of SaTran compared to existing video transformer baselines are given in Table 1 and Table 3 of the paper. SaTran does exhibit better computational scaling. As dataset sizes grow larger, SaTran exhibits superior computational scaling due to its focus on non-redundant regions. While traditional video transformers experience exponential growth in memory and runtime with larger spatial or temporal dimensions, SaTran mitigates this by keeping the number of selected patches manageable, scaling nearly linearly with the size of critical regions rather than the entire dataset. | ['Table 1', 'Table 3'] | ['images/d21945e7b96b716a9d998993b44d266b7f970be8d196d7e05608f08019293234.jpg', 'images/00c84e3be9e5e193d4fd3911728f4368e45217be2f1be69a366d8758e6ea10ad.jpg'] | ['table'] | 2 | 3 | 5 | {'We propose a transformer model, SaTran, for large size satellite image time series which exploits spatiotemporal redundancies. SITS data can be characterized by the presence of patches with spatiotemporal redundancy persisting throughout the time series, referred to hereafter as redundant patch tubes. SITS data also contains patches where temporal redundancy lasts only for a few timestamps, referred to hereafter as non-redundant patch tubes. The pictorial representation of the classification of patch tubes is given in Figure 1. For example, a region of a barren land/water body has spatiotemporal redundancy, and it won’t change even for years (thus is a redundant patch tube); 2) the non-redundant patches (regions of interest) experience changes with time but can still have a temporal redundancy for a shorter span, for example, cultivation land experiences changes in the crop cycle duration. However, during harvest time when the crop is fully grown, there can be redundancy for a few time stamps. 
Removing redundancies reduces the computational requirements thereby helping in the democratization of satellite image technology. SaTran disentangles spatiotemporal and temporal redundancies and makes the SITS processing efficient. Its key features are: ': '1'} | {'1': 'We propose a transformer model, SaTran, for large size satellite image time series which exploits spatiotemporal redundancies. SITS data can be characterized by the presence of patches with spatiotemporal redundancy persisting throughout the time series, referred to hereafter as redundant patch tubes. SITS data also contains patches where temporal redundancy lasts only for a few timestamps, referred to hereafter as non-redundant patch tubes. The pictorial representation of the classification of patch tubes is given in Figure 1. For example, a region of a barren land/water body has spatiotemporal redundancy, and it won’t change even for years (thus is a redundant patch tube); 2) the non-redundant patches (regions of interest) experience changes with time but can still have a temporal redundancy for a shorter span, for example, cultivation land experiences changes in the crop cycle duration. However, during harvest time when the crop is fully grown, there can be redundancy for a few time stamps. Removing redundancies reduces the computational requirements thereby helping in the democratization of satellite image technology. SaTran disentangles spatiotemporal and temporal redundancies and makes the SITS processing efficient. 
Its key features are: '} | {'images/376cd2b78d431962dd05bd69b37a5feadc43d9d0a612e9db19763364b4a6b359.jpg': '1', 'images/0f75a9de1f6a7889c2874efb43e50cf1be3ad21f742e93e2553e6cd2adbc1beb.jpg': '2'} | {'1': 'images/376cd2b78d431962dd05bd69b37a5feadc43d9d0a612e9db19763364b4a6b359.jpg', '2': 'images/0f75a9de1f6a7889c2874efb43e50cf1be3ad21f742e93e2553e6cd2adbc1beb.jpg'} | {'images/d21945e7b96b716a9d998993b44d266b7f970be8d196d7e05608f08019293234.jpg': '1', 'images/00c84e3be9e5e193d4fd3911728f4368e45217be2f1be69a366d8758e6ea10ad.jpg': '3'} | {'1': 'images/d21945e7b96b716a9d998993b44d266b7f970be8d196d7e05608f08019293234.jpg', '3': 'images/00c84e3be9e5e193d4fd3911728f4368e45217be2f1be69a366d8758e6ea10ad.jpg'} | {} | ['images/376cd2b78d431962dd05bd69b37a5feadc43d9d0a612e9db19763364b4a6b359.jpg', 'We propose a transformer model, SaTran, for large size satellite image time series which exploits spatiotemporal redundancies. SITS data can be characterized by the presence of patches with spatiotemporal redundancy persisting throughout the time series, referred to hereafter as redundant patch tubes. SITS data also contains patches where temporal redundancy lasts only for a few timestamps, referred to hereafter as non-redundant patch tubes. The pictorial representation of the classification of patch tubes is given in Figure 1. For example, a region of a barren land/water body has spatiotemporal redundancy, and it won’t change even for years (thus is a redundant patch tube); 2) the non-redundant patches (regions of interest) experience changes with time but can still have a temporal redundancy for a shorter span, for example, cultivation land experiences changes in the crop cycle duration. However, during harvest time when the crop is fully grown, there can be redundancy for a few time stamps. Removing redundancies reduces the computational requirements thereby helping in the democratization of satellite image technology. 
SaTran disentangles spatiotemporal and temporal redundancies and makes the SITS processing efficient. Its key features are: ', 'images/0f75a9de1f6a7889c2874efb43e50cf1be3ad21f742e93e2553e6cd2adbc1beb.jpg'] | 5c06233b8cf82f42aad227e6052a21ac9be31c920e94bfbae867441699f73acb | 13c32f8f02202a3508ef1e2e518e1b3dd4751e60 |
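The redundant vs non-redundant patch-tube split described in this row can be sketched with a simple temporal-variance test. The threshold and per-patch summary statistic are illustrative assumptions, not SaTran's actual selection mechanism.

```python
def split_patch_tubes(tubes, eps=1e-6):
    """Split patch tubes into spatiotemporally redundant vs non-redundant.

    A tube is a list of per-timestamp patch summaries (e.g., mean intensity).
    Tubes whose values barely change across the whole series are treated as
    redundant (barren land / water bodies); the rest are kept for full
    processing.
    """
    redundant, non_redundant = [], []
    for tube in tubes:
        mean = sum(tube) / len(tube)
        var = sum((v - mean) ** 2 for v in tube) / len(tube)
        (redundant if var <= eps else non_redundant).append(tube)
    return redundant, non_redundant
```

Only the non-redundant tubes would then be fed through the expensive attention layers, which is where the memory and runtime savings come from.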
explanation | What are the key differences in computational cost between SymDiff and EDM models? | The key point here is the following: The SymDiff model from Table 1 is the *largest* SymDiff model in Table 2 and 3. The EDM model from Table 1 is the *smallest* EDM model from Table 2 and 3. In effect, these tables show the result of making the SymDiff model from Table 1 smaller, and the EDM model bigger. From this perspective, SymDiff again is more computationally efficient. From the updated version of Table 3, SymDiff$^{-}$ *still* does better than EDM in terms of molecular stability, but has considerably lower computational cost. | ['Table 2', 'Table 3'] | ['images/2cd29a9f658b3c479c09d4e3fa030c1a1243beb693827d1dd57e28ef7e0e2c03.jpg', 'images/377960a4373df001fb8163b45251b034d1ef8cd987bacb9adf3f713bdbc27c87.jpg'] | ['table'] | 2 | 3 | 5 | {'Invariance and equivariance Intuitively, the ordering of the N points and the orientation of the overall system in 3D space should not matter. To formalise this, let SN denote the symmetric group of permutations of the integers {1, . . . , N}, and O(3) denote the group of orthogonal 3×3 matrices. Their product SN × O(3) acts on N-body systems by reordering and orthogonally transforming points as follows: ': '1', 'We now apply SYMDIFF in the setting of N-body systems considered in Section 2.3. Specifically, we take Z := U, H := SN, and G := O(3), and consider the action on Z defined in equation 5. This means that we start with kθ(zt−1|zt) in equation 8 that is already equivariant with respect to reorderings of the N bodies, and then symmetrise this to obtain an (SN × O(3))-equivariant reverse kernel overall. 
We choose to symmetrise in this way because highly scalable SN-equivariant kernels based on Transformer architectures can be readily constructed for this purpose (Vaswani et al., 2017; Lee et al., 2019; Peebles & Xie, 2023), whereas intrinsically O(3)-equivariant neural networks have not shown the same degree of scalability to-date (Abramson et al., 2024). ': '2'} | {'1': 'Invariance and equivariance Intuitively, the ordering of the N points and the orientation of the overall system in 3D space should not matter. To formalise this, let SN denote the symmetric group of permutations of the integers {1, . . . , N}, and O(3) denote the group of orthogonal 3×3 matrices. Their product SN × O(3) acts on N-body systems by reordering and orthogonally transforming points as follows: ', '2': 'We now apply SYMDIFF in the setting of N-body systems considered in Section 2.3. Specifically, we take Z := U, H := SN, and G := O(3), and consider the action on Z defined in equation 5. This means that we start with kθ(zt−1|zt) in equation 8 that is already equivariant with respect to reorderings of the N bodies, and then symmetrise this to obtain an (SN × O(3))-equivariant reverse kernel overall. We choose to symmetrise in this way because highly scalable SN-equivariant kernels based on Transformer architectures can be readily constructed for this purpose (Vaswani et al., 2017; Lee et al., 2019; Peebles & Xie, 2023), whereas intrinsically O(3)-equivariant neural networks have not shown the same degree of scalability to-date (Abramson et al., 2024). 
'} | {} | {} | {'images/e2d0eabeb084f100d6fc00f85777bb98effca7527b6ec138e965ede1427cd572.jpg': '1', 'images/377960a4373df001fb8163b45251b034d1ef8cd987bacb9adf3f713bdbc27c87.jpg': '3', 'images/2cd29a9f658b3c479c09d4e3fa030c1a1243beb693827d1dd57e28ef7e0e2c03.jpg': '2'} | {'1': 'images/e2d0eabeb084f100d6fc00f85777bb98effca7527b6ec138e965ede1427cd572.jpg', '3': 'images/377960a4373df001fb8163b45251b034d1ef8cd987bacb9adf3f713bdbc27c87.jpg', '2': 'images/2cd29a9f658b3c479c09d4e3fa030c1a1243beb693827d1dd57e28ef7e0e2c03.jpg'} | {} | ['images/e2d0eabeb084f100d6fc00f85777bb98effca7527b6ec138e965ede1427cd572.jpg', 'We now apply SYMDIFF in the setting of N-body systems considered in Section 2.3. Specifically, we take Z := U, H := SN, and G := O(3), and consider the action on Z defined in equation 5. This means that we start with kθ(zt−1|zt) in equation 8 that is already equivariant with respect to reorderings of the N bodies, and then symmetrise this to obtain an (SN × O(3))-equivariant reverse kernel overall. We choose to symmetrise in this way because highly scalable SN-equivariant kernels based on Transformer architectures can be readily constructed for this purpose (Vaswani et al., 2017; Lee et al., 2019; Peebles & Xie, 2023), whereas intrinsically O(3)-equivariant neural networks have not shown the same degree of scalability to-date (Abramson et al., 2024). ', 'Invariance and equivariance Intuitively, the ordering of the N points and the orientation of the overall system in 3D space should not matter. To formalise this, let SN denote the symmetric group of permutations of the integers {1, . . . , N}, and O(3) denote the group of orthogonal 3×3 matrices. Their product SN × O(3) acts on N-body systems by reordering and orthogonally transforming points as follows: '] | 4a00a0cb152decaca92e7318c87f2e5af2650956c5d0dca7dabc89ef34f47e77 | 18d2d2726e4ef809fa2e41c916a02ed85a34bfcf |
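The intuition that reordering and orientation "should not matter" for an N-body system can be sanity-checked numerically: acting by a permutation plus an orthogonal map leaves the multiset of pairwise distances unchanged. A minimal sketch, using a 2-D rotation for brevity in place of a full O(3) element:

```python
from math import cos, sin

def act(points, perm, theta):
    # Group action in the spirit of S_N x O(3): rotate every body by
    # theta (an orthogonal map; 2-D here for brevity), then reorder
    # the bodies according to perm.
    rot = [(cos(theta) * x - sin(theta) * y,
            sin(theta) * x + cos(theta) * y) for x, y in points]
    return [rot[i] for i in perm]

def pairwise_sq_dists(points):
    # Sorted squared pairwise distances: invariant under the action.
    return sorted((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                  for i, a in enumerate(points) for b in points[i + 1:])

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
moved = act(pts, perm=[2, 0, 1], theta=0.7)
```

Any quantity built from such invariants is automatically (S_N x O(3))-invariant, which is the property the symmetrised reverse kernel is designed to respect.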
explanation | How does NeurRL perform in terms of scalability and runtime with larger datasets? | In our experiment, we run the model on synthetic datasets, which are very small, and the positive and negative classes both include two instances. Hence, NeurRL can solve smaller datasets. For the larger datasets, we listed the data statistical information in Table 1. The largest used data in the UCR archive is StarLightCureves, which includes 9236 instances, and the length of each instance is 2014. We also presented some running time results in Table 2. | ['Table 1', 'Table 2'] | ['images/585e27a9c097dba7f1899ebd11f4f5292cb0b3edaba67045854353ddd4cf99a8.jpg', 'images/7eb7d3c66136aee1d79b5e2d7be5cd1ff5642664555b677936ed504807387412.jpg'] | ['table'] | 2 | 3 | 5 | {'The architecture of NeurRL is shown in Fig. 1a. To learn a logic program P from sequence data, each input sequence x is divided into shorter subsequences s of length l with a unit step stride. An encoder maps subsequences s to an embedding space z, and a decoder reconstructs s↔from z2. The differentiable k-means algorithm described in Section 3.2 clusters embeddings z, grouping subsequences s with similar patterns into groups r. This yields fuzzy interpretation vectors vI and Boolean target atom value v(ht) for each sequence. Finally, the differentiable rule-learning module uses vI as inputs and v(ht) as labels to learn high-level rules describing the target class. ': '1'} | {'1': 'The architecture of NeurRL is shown in Fig. 1a. To learn a logic program P from sequence data, each input sequence x is divided into shorter subsequences s of length l with a unit step stride. An encoder maps subsequences s to an embedding space z, and a decoder reconstructs s↔from z2. The differentiable k-means algorithm described in Section 3.2 clusters embeddings z, grouping subsequences s with similar patterns into groups r. This yields fuzzy interpretation vectors vI and Boolean target atom value v(ht) for each sequence. 
Finally, the differentiable rule-learning module uses vI as inputs and v(ht) as labels to learn high-level rules describing the target class. '} | {'images/b12fda4b021dc4e4d1a06bf489d5e0d869ec424034d90656ee4b34f2c2400726.jpg': '1', 'images/be24544cb2eab0eba0109fd9cf6ead5a2af98b3ba97cef65a86cdbae0c1d08c4.jpg': '5'} | {'1': 'images/b12fda4b021dc4e4d1a06bf489d5e0d869ec424034d90656ee4b34f2c2400726.jpg', '5': 'images/be24544cb2eab0eba0109fd9cf6ead5a2af98b3ba97cef65a86cdbae0c1d08c4.jpg'} | {'images/7eb7d3c66136aee1d79b5e2d7be5cd1ff5642664555b677936ed504807387412.jpg': '2', 'images/585e27a9c097dba7f1899ebd11f4f5292cb0b3edaba67045854353ddd4cf99a8.jpg': '1'} | {'2': 'images/7eb7d3c66136aee1d79b5e2d7be5cd1ff5642664555b677936ed504807387412.jpg', '1': 'images/585e27a9c097dba7f1899ebd11f4f5292cb0b3edaba67045854353ddd4cf99a8.jpg'} | {} | ['images/b12fda4b021dc4e4d1a06bf489d5e0d869ec424034d90656ee4b34f2c2400726.jpg', 'The architecture of NeurRL is shown in Fig. 1a. To learn a logic program P from sequence data, each input sequence x is divided into shorter subsequences s of length l with a unit step stride. An encoder maps subsequences s to an embedding space z, and a decoder reconstructs s↔from z2. The differentiable k-means algorithm described in Section 3.2 clusters embeddings z, grouping subsequences s with similar patterns into groups r. This yields fuzzy interpretation vectors vI and Boolean target atom value v(ht) for each sequence. Finally, the differentiable rule-learning module uses vI as inputs and v(ht) as labels to learn high-level rules describing the target class. ', 'images/be24544cb2eab0eba0109fd9cf6ead5a2af98b3ba97cef65a86cdbae0c1d08c4.jpg'] | 9451dd1f31b86623ca466ee6ee21b8874fb1b3633c1f1a5fe9411a9d8364c2b9 | 23561dad0a0d83f9bfb10b44410d96caf810f1bc |
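The NeurRL pipeline's first stages (unit-stride windowing, clustering of subsequences, and the fuzzy interpretation vector) can be sketched in a few lines. This is a toy stand-in, not NeurRL's implementation: it uses a hard nearest-center assignment in place of the differentiable k-means, with fixed centers, and the helper names are illustrative.

```python
def make_subsequences(x, l):
    # Overlapping windows of length l with unit step stride.
    return [x[i:i + l] for i in range(len(x) - l + 1)]

def nearest_center(w, centers):
    # Hard assignment by squared Euclidean distance (k-means stand-in).
    dists = [sum((a - b) ** 2 for a, b in zip(w, c)) for c in centers]
    return dists.index(min(dists))

def interpretation_vector(labels, k):
    # Fuzzy interpretation: fraction of windows in each cluster.
    return [labels.count(j) / len(labels) for j in range(k)]

x = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
subs = make_subsequences(x, 2)                        # 7 windows
labels = [nearest_center(w, [[0.0, 0.0], [1.0, 1.0]]) for w in subs]
vI = interpretation_vector(labels, 2)                 # -> [5/7, 2/7]
```

The resulting `vI`, paired with a Boolean target label per sequence, is exactly the kind of input/label pair the rule-learning module consumes.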
explanation | What improvements have been made to the qualitative results presented in the paper? | We have updated the results in Figure 1(c) and Figure 6. The learned identifier tokens are not 'strong' enough to represent the characteristics of each subject, thus leading to the suboptimal results shown in previous Figure 1(c) and Figure 6. We took the Break-A-Scene model to make an improvement, which introduces enhanced training processes for learning better identifier tokens. As the updated results are shown in Figure 1(c) and Figure 6, we effectively achieved the Event-Subject customization with better subject customization results. | ['Figure 1', 'Figure 6'] | ['images/c89b49212aaab8e3db15150aedbefa13c50e0db7b2562710ce20a7c8d3d6f202.jpg', 'images/65a2bf389f7a6f0d8fcb0987560552502ec572e378e481e52b177016ddae50b3.jpg'] | ['figure'] | 2 | 3 | 5 | {'In this section, we first formally define the event-customized image generation task. Given a reference image IR involves N reference entities ER = {R1, . . . , RN}, we define the “event” as the specific actions and poses of each single reference entity, and the relations and interactions between different reference entities. Together we have the entity masks M = {m1, . . . , mN}, where mi is the mask of its corresponding entity Ri. The event-customized image generation task aims to capture the reference event, and further generate a target image IG under the same event but with diverse and novel target entities EG = {G1, . . . , GN} in the target prompt P = {w0, . . . , wN}, where wi is the description of the target entity Gi, and each target entity Gi should keep the same action or pose with its corresponding reference entity Ri. 
As the example shown in Figure 2, given the reference image with four reference entities (e.g., three people and one object), the event-customization aims to capture the complex reference event and generate the target image with a novel combination of different target entities (e.g., skeleton, statue, monkey, book). ': '1', 'Evaluation Benchmarks. In order to provide sufficient and suitable conditions for both quantitative and qualitative comparisons on this new task, we collect two new benchmarks2. 1) For quantitative evaluation, we present SWiG-Event, a benchmark derived from SWiG (Pratt et al., 2020) dataset, which comprises 5,000 samples with various events and entities, i.e., 50 kinds of different actions, poses, and interactions, where each kind of event has 100 reference images, and each reference image contains 1 to 4 entities with labeled bounding boxes and nouns. 2) For qualitative evaluation, we present Real-Event, which comprises 30 high-quality reference images from HICO-DET (Chao et al., 2015) and the internet with a wide range of events and entities (e.g., animal, human, object, and their combinations). We further employ Grounded-SAM (Kirillov et al., 2023; Ren et al., 2024) to extract the mask of each entity. ': '2'} | {'1': 'In this section, we first formally define the event-customized image generation task. Given a reference image IR involves N reference entities ER = {R1, . . . , RN}, we define the “event” as the specific actions and poses of each single reference entity, and the relations and interactions between different reference entities. Together we have the entity masks M = {m1, . . . , mN}, where mi is the mask of its corresponding entity Ri. The event-customized image generation task aims to capture the reference event, and further generate a target image IG under the same event but with diverse and novel target entities EG = {G1, . . . , GN} in the target prompt P = {w0, . . . 
, wN}, where wi is the description of the target entity Gi, and each target entity Gi should keep the same action or pose with its corresponding reference entity Ri. As the example shown in Figure 2, given the reference image with four reference entities (e.g., three people and one object), the event-customization aims to capture the complex reference event and generate the target image with a novel combination of different target entities (e.g., skeleton, statue, monkey, book). ', '2': 'Evaluation Benchmarks. In order to provide sufficient and suitable conditions for both quantitative and qualitative comparisons on this new task, we collect two new benchmarks2. 1) For quantitative evaluation, we present SWiG-Event, a benchmark derived from SWiG (Pratt et al., 2020) dataset, which comprises 5,000 samples with various events and entities, i.e., 50 kinds of different actions, poses, and interactions, where each kind of event has 100 reference images, and each reference image contains 1 to 4 entities with labeled bounding boxes and nouns. 2) For qualitative evaluation, we present Real-Event, which comprises 30 high-quality reference images from HICO-DET (Chao et al., 2015) and the internet with a wide range of events and entities (e.g., animal, human, object, and their combinations). We further employ Grounded-SAM (Kirillov et al., 2023; Ren et al., 2024) to extract the mask of each entity. '} | {'images/c89b49212aaab8e3db15150aedbefa13c50e0db7b2562710ce20a7c8d3d6f202.jpg': '1', 'images/65a2bf389f7a6f0d8fcb0987560552502ec572e378e481e52b177016ddae50b3.jpg': '6', 'images/8f87128374dd49e831f3ab9d7a9409d03a7dbafa36ad2814e9c70f075328249a.jpg': '2'} | {'1': 'images/c89b49212aaab8e3db15150aedbefa13c50e0db7b2562710ce20a7c8d3d6f202.jpg', '6': 'images/65a2bf389f7a6f0d8fcb0987560552502ec572e378e481e52b177016ddae50b3.jpg', '2': 'images/8f87128374dd49e831f3ab9d7a9409d03a7dbafa36ad2814e9c70f075328249a.jpg'} | {} | {} | {} | ['Evaluation Benchmarks. 
In order to provide sufficient and suitable conditions for both quantitative and qualitative comparisons on this new task, we collect two new benchmarks2. 1) For quantitative evaluation, we present SWiG-Event, a benchmark derived from SWiG (Pratt et al., 2020) dataset, which comprises 5,000 samples with various events and entities, i.e., 50 kinds of different actions, poses, and interactions, where each kind of event has 100 reference images, and each reference image contains 1 to 4 entities with labeled bounding boxes and nouns. 2) For qualitative evaluation, we present Real-Event, which comprises 30 high-quality reference images from HICO-DET (Chao et al., 2015) and the internet with a wide range of events and entities (e.g., animal, human, object, and their combinations). We further employ Grounded-SAM (Kirillov et al., 2023; Ren et al., 2024) to extract the mask of each entity. ', 'In this section, we first formally define the event-customized image generation task. Given a reference image IR involves N reference entities ER = {R1, . . . , RN}, we define the “event” as the specific actions and poses of each single reference entity, and the relations and interactions between different reference entities. Together we have the entity masks M = {m1, . . . , mN}, where mi is the mask of its corresponding entity Ri. The event-customized image generation task aims to capture the reference event, and further generate a target image IG under the same event but with diverse and novel target entities EG = {G1, . . . , GN} in the target prompt P = {w0, . . . , wN}, where wi is the description of the target entity Gi, and each target entity Gi should keep the same action or pose with its corresponding reference entity Ri. 
As the example shown in Figure 2, given the reference image with four reference entities (e.g., three people and one object), the event-customization aims to capture the complex reference event and generate the target image with a novel combination of different target entities (e.g., skeleton, statue, monkey, book). ', 'images/8f87128374dd49e831f3ab9d7a9409d03a7dbafa36ad2814e9c70f075328249a.jpg'] | e322364c3831b4f049c62ee868de3b9a268240055ff5ec5d572dbd299f1ccb95 | 277c39b0833b25263bbd4c7cc78687400a62c267 |
explanation | How natural is the GINC dataset generated from a mixture of Hidden Markov Models? Does it resemble text found on the web? | The Generative IN-Context Learning (GINC) dataset is a synthetic benchmark designed using a mixture of Hidden Markov Models (HMMs) to study specific properties of in-context learning (ICL) in a controlled setting. While it does not replicate the full complexity of web text, it captures key characteristics of natural language, such as long-range coherence and latent concept structure. GINC has been shown to exhibit several real-world phenomena: In-Context Learning Emergence, Sensitivity to Example Ordering, and Scaling Effects. Additionally, GINC enables precise experimentation, such as in Figure 4, where we demonstrate that increasing vocabulary size leads to higher diversity, reflecting properties observed in natural language datasets. To complement these insights, we validate the diversity coefficient on real-world datasets like PubMed, USPTO, and Pile-CC (Figure 3), ensuring its applicability to natural text. | ['Figure 3', 'Figure 4'] | ['images/1c980a0ed6e8a61b55a7636d550a9990c61ac1b5353fff76a94b9475a7cb303a.jpg', 'images/73ed6fe0712acca4d5320cdbdc68ac7fc3a41bbd65f2b479c5affd28da20fcbf.jpg'] | ['figure'] | 2 | 3 | 5 | {'Current trends in pre-training Large Language Models (LLMs) primarily focus on the scaling of model and dataset size. While the quality of pre-training data is considered an important factor for training powerful LLMs, it remains a nebulous concept that has not been rigorously characterized. To this end, we propose a formalization of one key aspect of data quality – measuring the variability of natural language data – specifically via a measure we call the diversity coefficient. Our empirical analysis shows that the proposed diversity coefficient aligns with the intuitive properties of diversity and variability, e.g., it increases as the number of latent concepts increases. 
Then, we measure the diversity coefficient of publicly available pre-training datasets and demonstrate that their formal diversity is high compared to theoretical lower and upper bounds. Finally, we conduct a comprehensive set of controlled interventional experiments with GPT-2 and LLaMAv2 that demonstrate the diversity coefficient of pre-training data characterizes useful aspects of downstream model evaluation performance—totaling 44 models of various sizes (51M to 7B parameters). We conclude that our formal notion of diversity is an important aspect of data quality that captures variability and causally leads to improved evaluation performance. ': '1'} | {'1': 'Current trends in pre-training Large Language Models (LLMs) primarily focus on the scaling of model and dataset size. While the quality of pre-training data is considered an important factor for training powerful LLMs, it remains a nebulous concept that has not been rigorously characterized. To this end, we propose a formalization of one key aspect of data quality – measuring the variability of natural language data – specifically via a measure we call the diversity coefficient. Our empirical analysis shows that the proposed diversity coefficient aligns with the intuitive properties of diversity and variability, e.g., it increases as the number of latent concepts increases. Then, we measure the diversity coefficient of publicly available pre-training datasets and demonstrate that their formal diversity is high compared to theoretical lower and upper bounds. Finally, we conduct a comprehensive set of controlled interventional experiments with GPT-2 and LLaMAv2 that demonstrate the diversity coefficient of pre-training data characterizes useful aspects of downstream model evaluation performance—totaling 44 models of various sizes (51M to 7B parameters). 
We conclude that our formal notion of diversity is an important aspect of data quality that captures variability and causally leads to improved evaluation performance. '} | {'images/73ed6fe0712acca4d5320cdbdc68ac7fc3a41bbd65f2b479c5affd28da20fcbf.jpg': '4', 'images/c93bb7a6cbd62d4c167c6a02ab83bde42f7e22a093562a4c808c9ed676c21fba.jpg': '1', 'images/1c980a0ed6e8a61b55a7636d550a9990c61ac1b5353fff76a94b9475a7cb303a.jpg': '3', 'images/7fb2868e9a4545a79bcc314da21af5f68780dcfe7fc29b3d2722e87423cc9bd8.jpg': '2'} | {'4': 'images/73ed6fe0712acca4d5320cdbdc68ac7fc3a41bbd65f2b479c5affd28da20fcbf.jpg', '1': 'images/c93bb7a6cbd62d4c167c6a02ab83bde42f7e22a093562a4c808c9ed676c21fba.jpg', '3': 'images/1c980a0ed6e8a61b55a7636d550a9990c61ac1b5353fff76a94b9475a7cb303a.jpg', '2': 'images/7fb2868e9a4545a79bcc314da21af5f68780dcfe7fc29b3d2722e87423cc9bd8.jpg'} | {} | {} | {} | ['images/7fb2868e9a4545a79bcc314da21af5f68780dcfe7fc29b3d2722e87423cc9bd8.jpg', 'Current trends in pre-training Large Language Models (LLMs) primarily focus on the scaling of model and dataset size. While the quality of pre-training data is considered an important factor for training powerful LLMs, it remains a nebulous concept that has not been rigorously characterized. To this end, we propose a formalization of one key aspect of data quality – measuring the variability of natural language data – specifically via a measure we call the diversity coefficient. Our empirical analysis shows that the proposed diversity coefficient aligns with the intuitive properties of diversity and variability, e.g., it increases as the number of latent concepts increases. Then, we measure the diversity coefficient of publicly available pre-training datasets and demonstrate that their formal diversity is high compared to theoretical lower and upper bounds. 
Finally, we conduct a comprehensive set of controlled interventional experiments with GPT-2 and LLaMAv2 that demonstrate the diversity coefficient of pre-training data characterizes useful aspects of downstream model evaluation performance—totaling 44 models of various sizes (51M to 7B parameters). We conclude that our formal notion of diversity is an important aspect of data quality that captures variability and causally leads to improved evaluation performance. ', 'images/c93bb7a6cbd62d4c167c6a02ab83bde42f7e22a093562a4c808c9ed676c21fba.jpg'] | 178b417035451a780f43835008e4d0ac57de8d75da2da8cadae192cfbac42def | 2945f6d8bc77680113b78173409fa1ac3c77b3f9 |
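The diversity coefficient in the paper is defined over model-based batch embeddings; as a loose illustrative stand-in (not the paper's estimator), one can take the expected pairwise distance between embeddings of sampled batches, which is zero for identical batches and grows with variability:

```python
from itertools import combinations
from math import sqrt

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def diversity(embeddings):
    # Expected pairwise distance over all batch-embedding pairs.
    pairs = list(combinations(embeddings, 2))
    return sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

low = diversity([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])   # identical batches
high = diversity([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # varied batches
```

This mirrors the qualitative behaviour the paper reports: a corpus whose batches embed to more spread-out points receives a higher diversity score.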
explanation | Is the method robust to the choice of the feature encoder? How large is the impact of the encoder? | Yes, our method is robust to the choice of feature encoder. We have demonstrated this in Table 3, which shows the robustness of our method to different backbones, and in Table 4, which evaluates the performance with different training schemes for the feature encoder. Across these settings, our method consistently achieves excellent and stable performance, outperforming all competitors. The choice of encoder does not have large impact on our method. We posit that this is because, unlike other OOD detection methods, which often rely on specific attributes or patterns in the extracted ID features that can vary significantly with the encoder, our method focuses on capturing the differences between ID and OOD data. Since OOD data are never encountered during training, the distribution differences between ID and OOD features remain consistent regardless of the encoder, ensuring that our method is minimally affected by the choice of encoder. | ['Table 3', 'Table 4'] | ['images/79c5712b0e70e62b4b298435e0f05469652ada420fe9b7caf9211a80ed3f06de.jpg', 'images/30da21a65e7e58199f5d214e2db65c0d1ffd7feb63aaa803a5fdff00d29400ec.jpg'] | ['table'] | 2 | 3 | 5 | {'(1) One line of work utilizes the outputs from pretrained models to design scoring functions for differentiating OOD samples. These post-hoc methods can be further divided into three subcategories. 1) The confidence-based methods (Hendrycks & Gimpel, 2017; Sun et al., 2021; Song et al., 2022; Hendrycks et al., 2022; Wang et al., 2022b; Liu et al., 2023) adjusts model outputs to obtain the desired confidence, including maximum softmax probability (Hendrycks & Gimpel, 2017), energy (Liu et al., 2020), and generalized entropy (Liu et al., 2023). 
2) The density-based methods (Hendrycks et al., 2022; Sun & Li, 2022; Zhang et al., 2023c; Liu et al., 2024) identifies certain properties or patterns of ID data, such as neuron coverage (Liu et al., 2024), by learning the corre': '1'} | {'1': '(1) One line of work utilizes the outputs from pretrained models to design scoring functions for differentiating OOD samples. These post-hoc methods can be further divided into three subcategories. 1) The confidence-based methods (Hendrycks & Gimpel, 2017; Sun et al., 2021; Song et al., 2022; Hendrycks et al., 2022; Wang et al., 2022b; Liu et al., 2023) adjusts model outputs to obtain the desired confidence, including maximum softmax probability (Hendrycks & Gimpel, 2017), energy (Liu et al., 2020), and generalized entropy (Liu et al., 2023). 2) The density-based methods (Hendrycks et al., 2022; Sun & Li, 2022; Zhang et al., 2023c; Liu et al., 2024) identifies certain properties or patterns of ID data, such as neuron coverage (Liu et al., 2024), by learning the corre'} | {'images/cde49b0ea9fee22dfc63cfe50aeba2d292b8dc5d3ac6a8f5a94930b12d468a6a.jpg': '3'} | {'3': 'images/cde49b0ea9fee22dfc63cfe50aeba2d292b8dc5d3ac6a8f5a94930b12d468a6a.jpg'} | {'images/30da21a65e7e58199f5d214e2db65c0d1ffd7feb63aaa803a5fdff00d29400ec.jpg': '4', 'images/79c5712b0e70e62b4b298435e0f05469652ada420fe9b7caf9211a80ed3f06de.jpg': '3', 'images/20360d8e9511e8d78c42b47be8cb5693741ac074ae13ebea679c224c1e35f741.jpg': '2'} | {'4': 'images/30da21a65e7e58199f5d214e2db65c0d1ffd7feb63aaa803a5fdff00d29400ec.jpg', '3': 'images/79c5712b0e70e62b4b298435e0f05469652ada420fe9b7caf9211a80ed3f06de.jpg', '2': 'images/20360d8e9511e8d78c42b47be8cb5693741ac074ae13ebea679c224c1e35f741.jpg'} | {} | ['images/20360d8e9511e8d78c42b47be8cb5693741ac074ae13ebea679c224c1e35f741.jpg', 'images/cde49b0ea9fee22dfc63cfe50aeba2d292b8dc5d3ac6a8f5a94930b12d468a6a.jpg', '(1) One line of work utilizes the outputs from pretrained models to design scoring functions for differentiating OOD 
samples. These post-hoc methods can be further divided into three subcategories. 1) The confidence-based methods (Hendrycks & Gimpel, 2017; Sun et al., 2021; Song et al., 2022; Hendrycks et al., 2022; Wang et al., 2022b; Liu et al., 2023) adjusts model outputs to obtain the desired confidence, including maximum softmax probability (Hendrycks & Gimpel, 2017), energy (Liu et al., 2020), and generalized entropy (Liu et al., 2023). 2) The density-based methods (Hendrycks et al., 2022; Sun & Li, 2022; Zhang et al., 2023c; Liu et al., 2024) identifies certain properties or patterns of ID data, such as neuron coverage (Liu et al., 2024), by learning the corre'] | cbda2efa7d4e1a3f2a7c8b6628f9dec58aa4c2da082e7ce3f4895a8f90214406 | 2a006d8d785f35dd18f18a5659d50dca4fd26b1e |
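The two confidence-based scores named in the passage have simple closed forms over the model's logits: maximum softmax probability (Hendrycks & Gimpel, 2017) and the energy score (Liu et al., 2020). A minimal sketch, using the standard max-logit shift for numerical stability:

```python
from math import exp, log

def msp_score(logits):
    # Maximum softmax probability: confidence of the predicted class.
    m = max(logits)
    exps = [exp(z - m) for z in logits]
    return max(exps) / sum(exps)

def energy_score(logits):
    # Negative free energy, logsumexp(logits); larger values are
    # treated as more in-distribution.
    m = max(logits)
    return m + log(sum(exp(z - m) for z in logits))

peaked = [10.0, 0.0, 0.0]  # confident prediction: ID-like
flat = [1.0, 1.0, 1.0]     # uninformative prediction: OOD-like
```

Thresholding either score gives a post-hoc detector that needs no retraining, which is why this family of methods is attractive as a baseline.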
explanation | What measures have been taken to mitigate bias in the confidence scores assigned by the LLM? | While our method relies on the LLM to assign confidence scores, we aim to mitigate this issue by explicitly consolidating potentially conflicting internal and external knowledge. This step helps reduce the direct influence of bias. As demonstrated in Figure 7, our method successfully generates the correct answer when either side is correct in these cases. The quantitative results in Figure 6 further show that our method significantly improves model performance on conflicting sets. | ['Figure 6', 'Figure 7'] | ['images/a34539a1b70e9281e2f3870575f3f8c265b5eafddbfec6922668b8bcc5b1f46a.jpg', 'images/927eaafe2c1071cc05b1a505fc8b836efe90c390ffdeed9f0024af5cd756240d.jpg'] | ['figure'] | 2 | 3 | 5 | {'To better showcase the common real-world challenges and to make better motivate for improved methodological designs, we evaluate retrieval quality, end-to-end RAG performance, and knowledge conflicts on a controlled set of data. The selected data encompass a diverse range of general, domainspecific, and long-tail questions from NQ (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), BioASQ (Tsatsaronis et al., 2015), and PopQA (Mallen et al., 2023). Our analysis is based on realistic retrieval results with Google Search3 as the retriever and the Web as the corpus. This setting allows us to analyze the severity of imperfect retrieval in real-world RAG. Overall, we sample 1K short-form QA instances from these datasets, and pair each instance with 10 retrieved passages. ': '1'} | {'1': 'To better showcase the common real-world challenges and to make better motivate for improved methodological designs, we evaluate retrieval quality, end-to-end RAG performance, and knowledge conflicts on a controlled set of data. 
The selected data encompass a diverse range of general, domainspecific, and long-tail questions from NQ (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), BioASQ (Tsatsaronis et al., 2015), and PopQA (Mallen et al., 2023). Our analysis is based on realistic retrieval results with Google Search3 as the retriever and the Web as the corpus. This setting allows us to analyze the severity of imperfect retrieval in real-world RAG. Overall, we sample 1K short-form QA instances from these datasets, and pair each instance with 10 retrieved passages. '} | {'images/7fb445667283ba2034b6b70b9feda1b1cb2f6e2cb9483472c1d2d87ee93ad9f5.jpg': '5', 'images/c401aa4b4a7ec415da40eeed961a0c1e82735a75a9e77b64d1bf6119680f0e80.jpg': '4', 'images/a34539a1b70e9281e2f3870575f3f8c265b5eafddbfec6922668b8bcc5b1f46a.jpg': '6', 'images/927eaafe2c1071cc05b1a505fc8b836efe90c390ffdeed9f0024af5cd756240d.jpg': '7'} | {'5': 'images/7fb445667283ba2034b6b70b9feda1b1cb2f6e2cb9483472c1d2d87ee93ad9f5.jpg', '4': 'images/c401aa4b4a7ec415da40eeed961a0c1e82735a75a9e77b64d1bf6119680f0e80.jpg', '6': 'images/a34539a1b70e9281e2f3870575f3f8c265b5eafddbfec6922668b8bcc5b1f46a.jpg', '7': 'images/927eaafe2c1071cc05b1a505fc8b836efe90c390ffdeed9f0024af5cd756240d.jpg'} | {} | {} | {} | ['images/c401aa4b4a7ec415da40eeed961a0c1e82735a75a9e77b64d1bf6119680f0e80.jpg', 'images/7fb445667283ba2034b6b70b9feda1b1cb2f6e2cb9483472c1d2d87ee93ad9f5.jpg', 'To better showcase the common real-world challenges and to make better motivate for improved methodological designs, we evaluate retrieval quality, end-to-end RAG performance, and knowledge conflicts on a controlled set of data. The selected data encompass a diverse range of general, domainspecific, and long-tail questions from NQ (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), BioASQ (Tsatsaronis et al., 2015), and PopQA (Mallen et al., 2023). Our analysis is based on realistic retrieval results with Google Search3 as the retriever and the Web as the corpus. 
This setting allows us to analyze the severity of imperfect retrieval in real-world RAG. Overall, we sample 1K short-form QA instances from these datasets, and pair each instance with 10 retrieved passages. '] | 446811e294297411f28ebb77b672ebf4412da8112b13b1a11a0427ab6f1d5f7b | 36d892fe3d69bf0cab1c83397a0c61314c6af3dd |
explanation | How do the authors address the speed-accuracy trade-off associated with full rank attention? | The review claims that 'the paper does not address the speed-accuracy trade-off associated with full rank attention,' but in fact, we account for this trade-off throughout. It is true that, for a single attention head, the trade-off is simple, unavoidable, and well-understood: a full-rank attention head is slower but more powerful, and a low-rank attention head is faster but less powerful. However, for multi-head attention, the trade-off is far more subtle and poorly-understood. As the review notes, it is essential to control for 'the computational budget' when comparing full-rank to low-rank attention layers. The number of parameters and computational complexity of multi-head attention layers are both given by $dHr$... We prove that full-rank attention is superior despite this advantage. Our experiments also account for the speed-accuracy trade-off. Throughout our experiments, we scale $H$ inversely proportional to $r$, so that the speed of the low-rank and full-rank transformers are the same. In Figure 1, all five bars have the same computational budget, but the low-rank transformers perform much worse. Similarly, in Figure 2, the computational budget is fixed along each line. | ['Figure 1', 'Figure 2'] | ['images/2639c12e0ab992bd5344dd8dc21ec9ff16063eb51e4dec595df55a3969d13906.jpg', 'images/759443edd5083639070ed90ae1ceec9c6da0a459d1c475064e0f924335d5e3fc.jpg'] | ['figure'] | 2 | 3 | 5 | {'In the previous section, we proved a polynomial separation between low-rank and full-rank transformers in the constant accuracy regime (part 2 of Theorem 2). That is, to achieve error smaller than ϵ, where ϵ is a constant not depending on d or r, the number of heads must be at least poly d . In this section, we prove a stronger, exponential separation in this regime. 
That is, we find a target function that cannot be approximated by low-rank transformers to O(1)-error unless the number of heads is exp(Ω(d − r)). This new target function is defined as ': '1', 'Our constructions are based on the strategy we call “majority voting”, which we briefly describe here. Consider the case of N = 2 target points and hardmax attention. The output of each head, like the target function itself, is either x1 or x2. A random rank-1 head is weakly correlated with the target; the probability that it outputs the correct answer is 1/2 + Ω(1/√d). Thus, combining many such random heads together, their mode (the output with the most “votes”) matches the target function with high probability. We use a second layer to calculate the “majority vote” of the heads in the attention layer. ': '2', 'We now turn to the lower bound. We show that approximating the target function with rank-r heads requires the number of heads to be large unless r ∼ d. For technical convenience, we set the number of target points to two and draw them from the distribution D2(Sd−1) in which they are always orthogonal. Our main result establishes a strong quantitative separation between full-rank and low-rank self-attention layer, even when the total number of parameters is of the same order: ': '3'} | {'1': 'In the previous section, we proved a polynomial separation between low-rank and full-rank transformers in the constant accuracy regime (part 2 of Theorem 2). That is, to achieve error smaller than ϵ, where ϵ is a constant not depending on d or r, the number of heads must be at least poly(d). In this section, we prove a stronger, exponential separation in this regime. That is, we find a target function that cannot be approximated by low-rank transformers to O(1)-error unless the number of heads is exp(Ω(d − r)). This new target function is defined as ', '2': 'Our constructions are based on the strategy we call “majority voting”, which we briefly describe here. 
Consider the case of N = 2 target points and hardmax attention. The output of each head, like the target function itself, is either x1 or x2. A random rank-1 head is weakly correlated with the target; the probability that it outputs the correct answer is 1/2 + Ω(1/√d). Thus, combining many such random heads together, their mode (the output with the most “votes”) matches the target function with high probability. We use a second layer to calculate the “majority vote” of the heads in the attention layer. ', '3': 'We now turn to the lower bound. We show that approximating the target function with rank-r heads requires the number of heads to be large unless r ∼ d. For technical convenience, we set the number of target points to two and draw them from the distribution D2(Sd−1) in which they are always orthogonal. Our main result establishes a strong quantitative separation between full-rank and low-rank self-attention layer, even when the total number of parameters is of the same order: '} | {'images/759443edd5083639070ed90ae1ceec9c6da0a459d1c475064e0f924335d5e3fc.jpg': '2', 'images/2639c12e0ab992bd5344dd8dc21ec9ff16063eb51e4dec595df55a3969d13906.jpg': '1'} | {'2': 'images/759443edd5083639070ed90ae1ceec9c6da0a459d1c475064e0f924335d5e3fc.jpg', '1': 'images/2639c12e0ab992bd5344dd8dc21ec9ff16063eb51e4dec595df55a3969d13906.jpg'} | {} | {} | {} | ['In the previous section, we proved a polynomial separation between low-rank and full-rank transformers in the constant accuracy regime (part 2 of Theorem 2). That is, to achieve error smaller than ϵ, where ϵ is a constant not depending on d or r, the number of heads must be at least poly(d). In this section, we prove a stronger, exponential separation in this regime. That is, we find a target function that cannot be approximated by low-rank transformers to O(1)-error unless the number of heads is exp(Ω(d − r)). 
This new target function is defined as ', 'Our constructions are based on the strategy we call “majority voting”, which we briefly describe here. Consider the case of N = 2 target points and hardmax attention. The output of each head, like the target function itself, is either x1 or x2. A random rank-1 head is weakly correlated with the target; the probability that it outputs the correct answer is 1/2 + Ω(1/√d). Thus, combining many such random heads together, their mode (the output with the most “votes”) matches the target function with high probability. We use a second layer to calculate the “majority vote” of the heads in the attention layer. ', 'We now turn to the lower bound. We show that approximating the target function with rank-r heads requires the number of heads to be large unless r ∼ d. For technical convenience, we set the number of target points to two and draw them from the distribution D2(Sd−1) in which they are always orthogonal. Our main result establishes a strong quantitative separation between full-rank and low-rank self-attention layer, even when the total number of parameters is of the same order: '] | 63782ee1d424775c7c8ce1844d269426d4275b2dfd9fd132e659e6555f5533f0 | 372a20c92cbd4f90e77460e8f6472635bda70f58
explanation | What improvements were made to the figures in the revised manuscript? | To improve clarity, we created an updated Figure 1 and added titles to the sub-figures in Figure 2. As indicated in the caption of Figure 2, the sub-figures summarize the results for different cases of class separability. | ['Figure 1', 'Figure 2'] | ['images/0e8852fff58fcbcbd9284ca49e052915eabdeea6a2db326a5d6b3b5e7d619ef8.jpg', 'images/4b525ce50e113a0c5ac680294c3d8e79fb6e3f922016f2ba8343f89a71be3313.jpg'] | ['figure'] | 2 | 3 | 5 | {'Tangent space mapping (TSM) to recover log(Ei) Considering a set of labeled data D = {(xi, yi) : xi ∈ X, yi ∈ Y, ji=j ∀i} obtained from a single domain j, TSM (Barachant et al., 2011) provides an established (Lotte et al., 2018; Jayaram & Barachant, 2018) decoding approach to infer yi. TSM requires SPD-matrix valued representations. For the considered generative model, covariance features Ci, as defined in (12), are a natural choice (i.e., fθ = Cov). In a nutshell, TSM first estimates the Fréchet mean C¯ of C = {Cov(xi) : (xi, yi) ∈D} , projects each Ci to the tangent space at C¯, and finally transports the data to vary around IP . Formally, ': '1'} | {'1': 'Tangent space mapping (TSM) to recover log(Ei) Considering a set of labeled data D = {(xi, yi) : xi ∈ X, yi ∈ Y, ji=j ∀i} obtained from a single domain j, TSM (Barachant et al., 2011) provides an established (Lotte et al., 2018; Jayaram & Barachant, 2018) decoding approach to infer yi. TSM requires SPD-matrix valued representations. For the considered generative model, covariance features Ci, as defined in (12), are a natural choice (i.e., fθ = Cov). In a nutshell, TSM first estimates the Fréchet mean C¯ of C = {Cov(xi) : (xi, yi) ∈D} , projects each Ci to the tangent space at C¯, and finally transports the data to vary around IP . 
Formally, '} | {'images/4b525ce50e113a0c5ac680294c3d8e79fb6e3f922016f2ba8343f89a71be3313.jpg': '2', 'images/0e8852fff58fcbcbd9284ca49e052915eabdeea6a2db326a5d6b3b5e7d619ef8.jpg': '1', 'images/40f89cef135c85bf0fd4fe1706847134972b298298a318a35f4f54cd60d2bb49.jpg': '3'} | {'2': 'images/4b525ce50e113a0c5ac680294c3d8e79fb6e3f922016f2ba8343f89a71be3313.jpg', '1': 'images/0e8852fff58fcbcbd9284ca49e052915eabdeea6a2db326a5d6b3b5e7d619ef8.jpg', '3': 'images/40f89cef135c85bf0fd4fe1706847134972b298298a318a35f4f54cd60d2bb49.jpg'} | {'images/b76c89c5263d0cd0875eba071c524f19d26d71a1ad908c5ba8124f774eccb475.jpg': '1'} | {'1': 'images/b76c89c5263d0cd0875eba071c524f19d26d71a1ad908c5ba8124f774eccb475.jpg'} | {} | ['images/40f89cef135c85bf0fd4fe1706847134972b298298a318a35f4f54cd60d2bb49.jpg', 'Tangent space mapping (TSM) to recover log(Ei) Considering a set of labeled data D = {(xi, yi) : xi ∈ X, yi ∈ Y, ji=j ∀i} obtained from a single domain j, TSM (Barachant et al., 2011) provides an established (Lotte et al., 2018; Jayaram & Barachant, 2018) decoding approach to infer yi. TSM requires SPD-matrix valued representations. For the considered generative model, covariance features Ci, as defined in (12), are a natural choice (i.e., fθ = Cov). In a nutshell, TSM first estimates the Fréchet mean C¯ of C = {Cov(xi) : (xi, yi) ∈D} , projects each Ci to the tangent space at C¯, and finally transports the data to vary around IP . Formally, ', 'images/b76c89c5263d0cd0875eba071c524f19d26d71a1ad908c5ba8124f774eccb475.jpg'] | 9ac4a003b258aff61312f18f57b09927b0745a9f4611b783581f1295e6de5c0d | 37a3b05dc2dbca51aa6f701b56da1d5fab3c8fe1
explanation | Does the idea apply to all model sizes and task categories? | In Table 5 of our paper, we present the experimental results on 1.8B and 4B models, which demonstrate that our approach significantly improves performance on models of these sizes. Furthermore, the performance of small models trained with our Mixture of Instructions (MoI) approach is comparable to some excellent open-source small models, underscoring the effectiveness of our method. Regarding task categories, we initially defined the training dataset categories as code, math, chat, and tool usage, as these represent the domains where language models have substantial real-world applications. In our initial experiments, we tested the SFT performance on individual tasks, and ultimately, our MoI approach proved to be highly effective in leveraging data from all four tasks to comprehensively align the language model. Additionally, the experiments in Table 4 demonstrate that after simultaneously learning data from code, math, and tool usage tasks, our model gained the ability to use code to solve math problems. | ['Table 5', 'Table 4'] | ['images/f194a774cf02267d11bfbb6a58962250ff0c1daae91da9db92c822994d72caa2.jpg', 'images/c611dc113b4424643f152dda12e2ee913f7190a4852368a9b24bd24513dfa355.jpg'] | ['table'] | 2 | 3 | 5 | {'results indicate that MoI can transfer abilities originally elicited only under their respective system prompts to the default system prompt. ': '1'} | {'1': 'results indicate that MoI can transfer abilities originally elicited only under their respective system prompts to the default system prompt. 
'} | {'images/5cce7fe7af8e71eed90f29f7ac190cf402f5e71d3b967cbf70cdf16c915cae2e.jpg': '4'} | {'4': 'images/5cce7fe7af8e71eed90f29f7ac190cf402f5e71d3b967cbf70cdf16c915cae2e.jpg'} | {'images/edcd5d420be38902d7343a1518c94a75e8581d52fe610a748b365498c6b359f3.jpg': '7', 'images/f194a774cf02267d11bfbb6a58962250ff0c1daae91da9db92c822994d72caa2.jpg': '5', 'images/c611dc113b4424643f152dda12e2ee913f7190a4852368a9b24bd24513dfa355.jpg': '4'} | {'7': 'images/edcd5d420be38902d7343a1518c94a75e8581d52fe610a748b365498c6b359f3.jpg', '5': 'images/f194a774cf02267d11bfbb6a58962250ff0c1daae91da9db92c822994d72caa2.jpg', '4': 'images/c611dc113b4424643f152dda12e2ee913f7190a4852368a9b24bd24513dfa355.jpg'} | {} | ['results indicate that MoI can transfer abilities originally elicited only under their respective system prompts to the default system prompt. ', 'images/edcd5d420be38902d7343a1518c94a75e8581d52fe610a748b365498c6b359f3.jpg', 'images/5cce7fe7af8e71eed90f29f7ac190cf402f5e71d3b967cbf70cdf16c915cae2e.jpg'] | a2fc3e868dd1c59d796ea5d3574337a975069bf41f6fa11ef594811d81f9df3f | 381088a9f11619fb69033d5f420ec652ac089740 |
explanation | Is it reasonable to use two hyper-parameters to control the iteration in equation 9? | Using Equation (9) to control the linear decay of the threshold is one approach for threshold iteration. In both the paper and subsequent sensitivity analyses on additional datasets (Table 1, Table 2), our method consistently demonstrates robust performance under various settings of $\tau_{start}$, $\tau_{end}$, and $\tau_{des}$ when using the threshold decay approach from Equation (9). | ['Table 1', 'Table 2'] | ['images/00a20be47c92d81d864ea2f66e84a39b6b2590679abca6f36fbc1d3fb3fd4f0e.jpg', 'images/a5fc46e4bbcc10178c60cef59376e7feed920a4efc24c848aafb7e4bd4f20c17.jpg'] | ['table'] | 2 | 3 | 5 | {'Sherwin Bahmani, Oliver Hahn, Eduard Zamfir, Nikita Araslanov, Daniel Cremers, and Stefan Roth. Semantic self-adaptation: Enhancing generalization with a single sample. Trans. Mach. Learn. Res. (TMLR), 2023. ': '1', 'We evaluate all methods on domain generalization benchmarks using ResNet-18 and ResNet-50 models (He et al., 2016), both equipped with batch normalization (Ioffe & Szegedy, 2015). For image corruption benchmarks, we employ ResNet-18 as the backbone model. ': '2'} | {'1': 'Sherwin Bahmani, Oliver Hahn, Eduard Zamfir, Nikita Araslanov, Daniel Cremers, and Stefan Roth. Semantic self-adaptation: Enhancing generalization with a single sample. Trans. Mach. Learn. Res. (TMLR), 2023. ', '2': 'We evaluate all methods on domain generalization benchmarks using ResNet-18 and ResNet-50 models (He et al., 2016), both equipped with batch normalization (Ioffe & Szegedy, 2015). For image corruption benchmarks, we employ ResNet-18 as the backbone model. 
'} | {'images/44fcddc5555d2cc12e55731f940310e388c4bf17ccb5313f590ef29ca41f9e9e.jpg': '1'} | {'1': 'images/44fcddc5555d2cc12e55731f940310e388c4bf17ccb5313f590ef29ca41f9e9e.jpg'} | {'images/a5fc46e4bbcc10178c60cef59376e7feed920a4efc24c848aafb7e4bd4f20c17.jpg': '2', 'images/00a20be47c92d81d864ea2f66e84a39b6b2590679abca6f36fbc1d3fb3fd4f0e.jpg': '1'} | {'2': 'images/a5fc46e4bbcc10178c60cef59376e7feed920a4efc24c848aafb7e4bd4f20c17.jpg', '1': 'images/00a20be47c92d81d864ea2f66e84a39b6b2590679abca6f36fbc1d3fb3fd4f0e.jpg'} | {} | ['images/44fcddc5555d2cc12e55731f940310e388c4bf17ccb5313f590ef29ca41f9e9e.jpg', 'Sherwin Bahmani, Oliver Hahn, Eduard Zamfir, Nikita Araslanov, Daniel Cremers, and Stefan Roth. Semantic self-adaptation: Enhancing generalization with a single sample. Trans. Mach. Learn. Res. (TMLR), 2023. ', 'We evaluate all methods on domain generalization benchmarks using ResNet-18 and ResNet-50 models (He et al., 2016), both equipped with batch normalization (Ioffe & Szegedy, 2015). For image corruption benchmarks, we employ ResNet-18 as the backbone model. '] | d30bc960374bc6368f3d63027ce7eab49905df4853738fc1a71ccfaaf61ca083 | 3878526ddc5b41871551b98f236f079e968f62d2 |
explanation | What does it mean that tasks are distinct from the training data? | An LLM's ability to zero-shot generate correct reward functions may stem from either the task being relatively simple or the model’s exposure to similar data during training, allowing it to partially memorize the task. However, if a task differs significantly from the data the model has seen in its training set, the LLM lacks the necessary knowledge and will struggle to generate accurate reward functions in a zero-shot manner. The initial low performance in Figure 2 and Figure 3 shows that the reward function is likely not memorized and that ICPL is capable of enhancing performance through the iterative incorporation of preferences. | ['Figure 2', 'Figure 3'] | ['images/8892bf1efd8fc7132d93653b3117975dca5eb757181b5a015d866e6a2cbbbdd5.jpg', 'images/e20d5b8bec3b1f3b6daf821939468ec5c57dbc2c86514865165b81d8b8bffcd9.jpg'] | ['figure'] | 2 | 3 | 5 | {'Evaluation of reward functions: The component values that make up the good and bad reward functions are obtained from the environment during training and provided to the LLM. This helps the LLM assess the usefulness of different parts of the reward function by comparing the two. Differences between historical reward functions: The best reward functions selected by humans from each iteration are taken out, and for any two consecutive good reward functions, their differences are analyzed by another LLM. These differences are supplied to the primary LLM to assist in adjusting the reward function. \nReward trace of historical reward functions: The reward trace, consisting of the values of the good reward functions during training from all prior iterations, is provided to the LLM. This reward trace enables the LLM to evaluate how well the agent is actually able to optimize those reward components. 
\n5 EXPERIMENTS \nIn this section, we conducted two sets of experiments to evaluate the effectiveness of our method: one using proxy human preferences and the other using real human preferences. \n1) Proxy Human Preference: In this experiment, human-designed rewards, taken from EUREKA (Ma et al., 2023), were used as proxies of human preferences. Specifically, if ground truth reward R1 > R2, sample 1 is preferred over sample 2. This method enables rapid and quantitative evaluation of our approach. It corresponds to a noise-free case that is likely easier than human trials; if ICPL performed poorly here it would be unlikely to work in human trials. Importantly, humandesigned rewards were only used to automate the selection of samples and were not included in the prompts sent to the LLM; the LLM never observes the functional form of the ground truth rewards nor does it ever receive any values from them. Since proxy human preferences are free from noise, they offer a reliable comparison to evaluate our approach efficiently. However, as discussed later in the limitations section, these proxies may not correctly measure challenges in human feedback such as inability to rank samples, intransitive preferences, or other biases. \n2) Human-in-the-loop Preference: To further validate our method, we conducted a second set of experiments with human participants. These participants repeated the tasks from the Proxy Human Preferences and engaged in an additional task that lacked a clear reward function: “Making a humanoid jump like a real human.” \n5.1 TESTBED \nAll experiments were conducted on tasks from the Eureka benchmark (Ma et al., 2023) based on IsaacGym, covering a diverse range of environments: Cartpole, BallBalance, Quadcopter, Anymal, Humanoid, Ant, FrankaCabinet, ShadowHand, and AllegroHand. We adhered strictly to the original task configurations, including observation space, action space, and reward computation. 
This ensures that our method’s performance was evaluated under consistent and well-established conditions across a variety of domains. \n5.2 BASELINES \nWe consider three preference-based RL methods as baselines, which update reward models during training. B-Pref (Lee et al.), a benchmark specifically designed for preference-based reinforcement learning, provides two of our baseline algorithms: PrefPPO and PEBBLE. PrefPPO is based on the on-policy RL algorithm PPO, while PEBBLE builds upon the off-policy RL algorithm SAC. Additionally, we include SURF (Park et al., 2022), which enhances PEBBLE by utilizing unlabeled samples with data augmentation to improve feedback efficiency. For each task, we use the default hyperparameters of PPO and SAC provided by IsaacGym, which were fine-tuned for high performance. This ensures a fair comparison across methods. Further details can be found in Appendix A.3. \n5.3 EXPERIMENT SETUP ': '1', 'for i ←1 to N do RF1, . . . , RFK ←LLMRF (Prompt, K) // Render videos for each reward function Video1, . . . , VideoK ←Render(Env, RF1), . . . , Render(Env, RFK) // Human selects the most preferred (G) and least preferred (B) videos G, B ←Human(Video1, . . . , VideoK) // Retrieve the best and worst reward functions GoodRF, BadRF ←RFG, RFB // Update the prompt with feedback Prompt ←GoodRF + BadRF + HistoricalDifference + RewardTrace ': '2', 'Reward functions are a critical component of reinforcement learning (RL). However, specifying these functions becomes increasingly challenging as the complexity of the desired tasks grows. Recent advancements in pretrained foundation models have inspired approaches that leverage large language models to synthesize reward functions from task descriptions (Yu et al., 2023a; Ma et al., 2024; Yu et al., 2023b). Despite these innovations, existing methods still depend on human-designed sparse rewards or task-specific metrics to construct the reward functions. 
This is challenging for tasks where we cannot define any clear reward signals as the task is primarily semantically defined. For example, it is tricky to write down a reward function for a humanoid robot that corresponds to "moving like a human". ': '3'} | {'1': 'Evaluation of reward functions: The component values that make up the good and bad reward functions are obtained from the environment during training and provided to the LLM. This helps the LLM assess the usefulness of different parts of the reward function by comparing the two. Differences between historical reward functions: The best reward functions selected by humans from each iteration are taken out, and for any two consecutive good reward functions, their differences are analyzed by another LLM. These differences are supplied to the primary LLM to assist in adjusting the reward function. \nReward trace of historical reward functions: The reward trace, consisting of the values of the good reward functions during training from all prior iterations, is provided to the LLM. This reward trace enables the LLM to evaluate how well the agent is actually able to optimize those reward components. \n5 EXPERIMENTS \nIn this section, we conducted two sets of experiments to evaluate the effectiveness of our method: one using proxy human preferences and the other using real human preferences. \n1) Proxy Human Preference: In this experiment, human-designed rewards, taken from EUREKA (Ma et al., 2023), were used as proxies of human preferences. Specifically, if ground truth reward R1 > R2, sample 1 is preferred over sample 2. This method enables rapid and quantitative evaluation of our approach. It corresponds to a noise-free case that is likely easier than human trials; if ICPL performed poorly here it would be unlikely to work in human trials. 
Importantly, humandesigned rewards were only used to automate the selection of samples and were not included in the prompts sent to the LLM; the LLM never observes the functional form of the ground truth rewards nor does it ever receive any values from them. Since proxy human preferences are free from noise, they offer a reliable comparison to evaluate our approach efficiently. However, as discussed later in the limitations section, these proxies may not correctly measure challenges in human feedback such as inability to rank samples, intransitive preferences, or other biases. \n2) Human-in-the-loop Preference: To further validate our method, we conducted a second set of experiments with human participants. These participants repeated the tasks from the Proxy Human Preferences and engaged in an additional task that lacked a clear reward function: “Making a humanoid jump like a real human.” \n5.1 TESTBED \nAll experiments were conducted on tasks from the Eureka benchmark (Ma et al., 2023) based on IsaacGym, covering a diverse range of environments: Cartpole, BallBalance, Quadcopter, Anymal, Humanoid, Ant, FrankaCabinet, ShadowHand, and AllegroHand. We adhered strictly to the original task configurations, including observation space, action space, and reward computation. This ensures that our method’s performance was evaluated under consistent and well-established conditions across a variety of domains. \n5.2 BASELINES \nWe consider three preference-based RL methods as baselines, which update reward models during training. B-Pref (Lee et al.), a benchmark specifically designed for preference-based reinforcement learning, provides two of our baseline algorithms: PrefPPO and PEBBLE. PrefPPO is based on the on-policy RL algorithm PPO, while PEBBLE builds upon the off-policy RL algorithm SAC. Additionally, we include SURF (Park et al., 2022), which enhances PEBBLE by utilizing unlabeled samples with data augmentation to improve feedback efficiency. 
For each task, we use the default hyperparameters of PPO and SAC provided by IsaacGym, which were fine-tuned for high performance. This ensures a fair comparison across methods. Further details can be found in Appendix A.3. \n5.3 EXPERIMENT SETUP ', '2': 'for i ←1 to N do RF1, . . . , RFK ←LLMRF (Prompt, K) // Render videos for each reward function Video1, . . . , VideoK ←Render(Env, RF1), . . . , Render(Env, RFK) // Human selects the most preferred (G) and least preferred (B) videos G, B ←Human(Video1, . . . , VideoK) // Retrieve the best and worst reward functions GoodRF, BadRF ←RFG, RFB // Update the prompt with feedback Prompt ←GoodRF + BadRF + HistoricalDifference + RewardTrace ', '3': 'Reward functions are a critical component of reinforcement learning (RL). However, specifying these functions becomes increasingly challenging as the complexity of the desired tasks grows. Recent advancements in pretrained foundation models have inspired approaches that leverage large language models to synthesize reward functions from task descriptions (Yu et al., 2023a; Ma et al., 2024; Yu et al., 2023b). Despite these innovations, existing methods still depend on human-designed sparse rewards or task-specific metrics to construct the reward functions. This is challenging for tasks where we cannot define any clear reward signals as the task is primarily semantically defined. For example, it is tricky to write down a reward function for a humanoid robot that corresponds to "moving like a human". '} | {'images/e20d5b8bec3b1f3b6daf821939468ec5c57dbc2c86514865165b81d8b8bffcd9.jpg': '3', 'images/8892bf1efd8fc7132d93653b3117975dca5eb757181b5a015d866e6a2cbbbdd5.jpg': '2'} | {'3': 'images/e20d5b8bec3b1f3b6daf821939468ec5c57dbc2c86514865165b81d8b8bffcd9.jpg', '2': 'images/8892bf1efd8fc7132d93653b3117975dca5eb757181b5a015d866e6a2cbbbdd5.jpg'} | {} | {} | {} | ['for i ←1 to N do RF1, . . . , RFK ←LLMRF (Prompt, K) // Render videos for each reward function Video1, . . . 
, VideoK ←Render(Env, RF1), . . . , Render(Env, RFK) // Human selects the most preferred (G) and least preferred (B) videos G, B ←Human(Video1, . . . , VideoK) // Retrieve the best and worst reward functions GoodRF, BadRF ←RFG, RFB // Update the prompt with feedback Prompt ←GoodRF + BadRF + HistoricalDifference + RewardTrace ', 'Reward functions are a critical component of reinforcement learning (RL). However, specifying these functions becomes increasingly challenging as the complexity of the desired tasks grows. Recent advancements in pretrained foundation models have inspired approaches that leverage large language models to synthesize reward functions from task descriptions (Yu et al., 2023a; Ma et al., 2024; Yu et al., 2023b). Despite these innovations, existing methods still depend on human-designed sparse rewards or task-specific metrics to construct the reward functions. This is challenging for tasks where we cannot define any clear reward signals as the task is primarily semantically defined. For example, it is tricky to write down a reward function for a humanoid robot that corresponds to "moving like a human". ', 'Evaluation of reward functions: The component values that make up the good and bad reward functions are obtained from the environment during training and provided to the LLM. This helps the LLM assess the usefulness of different parts of the reward function by comparing the two. Differences between historical reward functions: The best reward functions selected by humans from each iteration are taken out, and for any two consecutive good reward functions, their differences are analyzed by another LLM. These differences are supplied to the primary LLM to assist in adjusting the reward function. \nReward trace of historical reward functions: The reward trace, consisting of the values of the good reward functions during training from all prior iterations, is provided to the LLM. 
This reward trace enables the LLM to evaluate how well the agent is actually able to optimize those reward components. \n5 EXPERIMENTS \nIn this section, we conducted two sets of experiments to evaluate the effectiveness of our method: one using proxy human preferences and the other using real human preferences. \n1) Proxy Human Preference: In this experiment, human-designed rewards, taken from EUREKA (Ma et al., 2023), were used as proxies of human preferences. Specifically, if ground truth reward R1 > R2, sample 1 is preferred over sample 2. This method enables rapid and quantitative evaluation of our approach. It corresponds to a noise-free case that is likely easier than human trials; if ICPL performed poorly here it would be unlikely to work in human trials. Importantly, humandesigned rewards were only used to automate the selection of samples and were not included in the prompts sent to the LLM; the LLM never observes the functional form of the ground truth rewards nor does it ever receive any values from them. Since proxy human preferences are free from noise, they offer a reliable comparison to evaluate our approach efficiently. However, as discussed later in the limitations section, these proxies may not correctly measure challenges in human feedback such as inability to rank samples, intransitive preferences, or other biases. \n2) Human-in-the-loop Preference: To further validate our method, we conducted a second set of experiments with human participants. These participants repeated the tasks from the Proxy Human Preferences and engaged in an additional task that lacked a clear reward function: “Making a humanoid jump like a real human.” \n5.1 TESTBED \nAll experiments were conducted on tasks from the Eureka benchmark (Ma et al., 2023) based on IsaacGym, covering a diverse range of environments: Cartpole, BallBalance, Quadcopter, Anymal, Humanoid, Ant, FrankaCabinet, ShadowHand, and AllegroHand. 
We adhered strictly to the original task configurations, including observation space, action space, and reward computation. This ensures that our method’s performance was evaluated under consistent and well-established conditions across a variety of domains. \n5.2 BASELINES \nWe consider three preference-based RL methods as baselines, which update reward models during training. B-Pref (Lee et al.), a benchmark specifically designed for preference-based reinforcement learning, provides two of our baseline algorithms: PrefPPO and PEBBLE. PrefPPO is based on the on-policy RL algorithm PPO, while PEBBLE builds upon the off-policy RL algorithm SAC. Additionally, we include SURF (Park et al., 2022), which enhances PEBBLE by utilizing unlabeled samples with data augmentation to improve feedback efficiency. For each task, we use the default hyperparameters of PPO and SAC provided by IsaacGym, which were fine-tuned for high performance. This ensures a fair comparison across methods. Further details can be found in Appendix A.3. \n5.3 EXPERIMENT SETUP '] | dc708af7addb3686a95541c75ea8cf77fc088320fcc65e9caf8e872de158c4db | 3e23acd4da10093f7177cd97aebdcc1ea3e13448 |
explanation | How many times/random seeds were the experiments conducted? Why is there a difference in reporting standard deviations for HV and training time? | For Figure 2 and Figure 3, all the experiments are conducted with the same random seed and the datasets are shuffled beforehand also with the same random seed to ensure the reproducibility of the results. Also, as far as we observed in our current scope of experiments, the results are consistent and robust across different random seeds. We will conduct more experiments with different random seeds to further validate the robustness of our results. | ['Figure 2', 'Figure 3'] | ['images/36ed143a8caaf6ecb6bd3866d3863b7a2d75aadb19ef8b873975e23f39a595b0.jpg', 'images/90b1a14914159d964c517ffddee2cac57aaeeb5363e2b6e3acf328d0a454a4b3.jpg'] | ['figure'] | 2 | 3 | 5 | {'For many machine learning applications, the MOO problem can be formulated as follows: given a dataset in the form of DMOO = {DjMOO}j∈[m] = {{y(k), zj,(k)}k∈[N]}j∈[m], where y(k) is the feature vector and zj,(k) is the j-th label of the k-th data point, the goal is to learn a model fθ(y) that optimizes the following objectives: ': '1'} | {'1': 'For many machine learning applications, the MOO problem can be formulated as follows: given a dataset in the form of DMOO = {DjMOO}j∈[m] = {{y(k), zj,(k)}k∈[N]}j∈[m], where y(k) is the feature vector and zj,(k) is the j-th label of the k-th data point, the goal is to learn a model fθ(y) that optimizes the following objectives: '} | {'images/2cc195e815cedb44ab8c49fd04e7c4ab685e20c70ac9f1a73d75fd1ef925ef28.jpg': '4', 'images/90b1a14914159d964c517ffddee2cac57aaeeb5363e2b6e3acf328d0a454a4b3.jpg': '3', 'images/cd200667b98021294cf037ab98ba2e9d240696044b83910995abbc495f4ad170.jpg': '1', 'images/36ed143a8caaf6ecb6bd3866d3863b7a2d75aadb19ef8b873975e23f39a595b0.jpg': '2'} | {'4': 'images/2cc195e815cedb44ab8c49fd04e7c4ab685e20c70ac9f1a73d75fd1ef925ef28.jpg', '3': 
'images/90b1a14914159d964c517ffddee2cac57aaeeb5363e2b6e3acf328d0a454a4b3.jpg', '1': 'images/cd200667b98021294cf037ab98ba2e9d240696044b83910995abbc495f4ad170.jpg', '2': 'images/36ed143a8caaf6ecb6bd3866d3863b7a2d75aadb19ef8b873975e23f39a595b0.jpg'} | {} | {} | {} | ['images/2cc195e815cedb44ab8c49fd04e7c4ab685e20c70ac9f1a73d75fd1ef925ef28.jpg', 'For many machine learning applications, the MOO problem can be formulated as follows: given a dataset in the form of DMOO = {DjMOO}j∈[m] = {{(y(k), zj,(k))}k∈[N]}j∈[m], where y(k) is the feature vector and zj,(k) is the j-th label of the k-th data point, the goal is to learn a model fθ(y) that optimizes the following objectives: ', 'images/cd200667b98021294cf037ab98ba2e9d240696044b83910995abbc495f4ad170.jpg'] | ee7e760d4265d69aa6eb2bb26851d5db9eb7ce2d01042fa053dc782ee37e1ead | 402ec5992d5cd7f47ef45e772515ab6a67bad45a
explanation | How does the attack method ensure controllability and stability? | The reviewer is correct that the attack is not controllable or stable. This is not uncommon with fine-tuning attacks, particularly given the toxicity and harmfulness filter assumption from Figure 2. However, the high ASRs presented in Figure 4 suggest it is effective over a wide range of attack types. | ['Figure 2', 'Figure 4'] | ['images/24dae82b9201e296f3ef955edd84cda7451aa012467e8c7210f29fefd655966f.jpg', 'images/b095445a45da6f0743d73f06e92c7cc1e90b5a331f81b17bbb984d159ec85b1c.jpg'] | ['figure'] | 2 | 3 | 5 | {'Conclusion. Our work focuses on evaluating fine-tuning risks with task-specific data, showing that (i) benign users are unlikely to accidentally obtain harmful models by training on task-specific data, and (ii) malicious users can adversarially modify these datasets with prompting strategies that significantly increase harmfulness while avoiding detection. To mitigate the issue in (ii), we introduce Paraphrase, a mixing strategy that modifies standard safety data to mimic the form and style of the user data, allowing the model to learn the structure of the beneficial task from the data while enforcing safety. We show that Paraphrase efficiently outperforms other baselines in achieving safe models, at a minimal cost in downstream task performance. ': '1', 'In most cases, mixing even 1% of Paraphrase data leads to an ASR lower than 5% whereas other mitigation strategies cannot achieve an ASR lower than 40% (e.g., in AutoIF + AOA) for any mixing rate up to 50%. This highlights the efficiency of Paraphrase. As expected, w/o Mixing in the adversarial prompting settings also significantly decreases the refusal rate on XSTest—a positive observation given these prompts are supposed to test excessive safety. One drawback of Paraphrase is that it appears to lead to typically higher refusal rates than alternative strategies, though they are all lower than the baseline model’s 78%. 
': '2'} | {'1': 'Conclusion. Our work focuses on evaluating fine-tuning risks with task-specific data, showing that (i) benign users are unlikely to accidentally obtain harmful models by training on task-specific data, and (ii) malicious users can adversarially modify these datasets with prompting strategies that significantly increase harmfulness while avoiding detection. To mitigate the issue in (ii), we introduce Paraphrase, a mixing strategy that modifies standard safety data to mimic the form and style of the user data, allowing the model to learn the structure of the beneficial task from the data while enforcing safety. We show that Paraphrase efficiently outperforms other baselines in achieving safe models, at a minimal cost in downstream task performance. ', '2': 'In most cases, mixing even 1% of Paraphrase data leads to an ASR lower than 5% whereas other mitigation strategies cannot achieve an ASR lower than 40% (e.g., in AutoIF + AOA) for any mixing rate up to 50%. This highlights the efficiency of Paraphrase. As expected, w/o Mixing in the adversarial prompting settings also significantly decreases the refusal rate on XSTest—a positive observation given these prompts are supposed to test excessive safety. One drawback of Paraphrase is that it appears to lead to typically higher refusal rates than alternative strategies, though they are all lower than the baseline model’s 78%. '} | {'images/24dae82b9201e296f3ef955edd84cda7451aa012467e8c7210f29fefd655966f.jpg': '2', 'images/642d38b4377691fc6065f881c88af63626038f15e03dfaa5d5b6cc7d73af951c.jpg': '3', 'images/b095445a45da6f0743d73f06e92c7cc1e90b5a331f81b17bbb984d159ec85b1c.jpg': '4'} | {'2': 'images/24dae82b9201e296f3ef955edd84cda7451aa012467e8c7210f29fefd655966f.jpg', '3': 'images/642d38b4377691fc6065f881c88af63626038f15e03dfaa5d5b6cc7d73af951c.jpg', '4': 'images/b095445a45da6f0743d73f06e92c7cc1e90b5a331f81b17bbb984d159ec85b1c.jpg'} | {} | {} | {} | ['Conclusion. 
Our work focuses on evaluating fine-tuning risks with task-specific data, showing that (i) benign users are unlikely to accidentally obtain harmful models by training on task-specific data, and (ii) malicious users can adversarially modify these datasets with prompting strategies that significantly increase harmfulness while avoiding detection. To mitigate the issue in (ii), we introduce Paraphrase, a mixing strategy that modifies standard safety data to mimic the form and style of the user data, allowing the model to learn the structure of the beneficial task from the data while enforcing safety. We show that Paraphrase efficiently outperforms other baselines in achieving safe models, at a minimal cost in downstream task performance. ', 'images/642d38b4377691fc6065f881c88af63626038f15e03dfaa5d5b6cc7d73af951c.jpg', 'In most cases, mixing even 1% of Paraphrase data leads to an ASR lower than 5% whereas other mitigation strategies cannot achieve an ASR lower than 40% (e.g., in AutoIF + AOA) for any mixing rate up to 50%. This highlights the efficiency of Paraphrase. As expected, w/o Mixing in the adversarial prompting settings also significantly decreases the refusal rate on XSTest—a positive observation given these prompts are supposed to test excessive safety. One drawback of Paraphrase is that it appears to lead to typically higher refusal rates than alternative strategies, though they are all lower than the baseline model’s 78%. '] | af8075b1507e27d9e876b7cec1e3f7f1b166d0b7044d6c901892d26329c5d027 | 40a650bd31cc0528f1f418dde2fd185f00c06660
explanation | What are the implications of the high computational cost of MD-LSM for its application in real-time scenarios? | The proposed MD-LSM is still far from being a truly efficient one, but it is the best way we know of so far. Compared with the existing LSMs listed in Table 1, it has been able to meet the current technical requirements on the real-time monitoring of the behavior of each hidden layer after each training epoch. We also refer to Table 2 of the revised version for the comparison among the time costs of different LSMs on the UCI datasets. | ['Table 1', 'Table 2'] | ['images/874cc1ed383f75ed1c90e523c9afdd4da277117741f3bcfd7bad5fb361af1144.jpg', 'images/c07ef79b13042d3ef93d1b00d1df6a2243c66d8d651b6b4397c86d8f926c8e34.jpg'] | ['table'] | 2 | 3 | 5 | {'Furthermore, we denote majorω(MD(A, B)) (resp. minorω(MD(A, B))) as the subset of MD(A, B) that lies in the major (resp. minor) side of ωT m = 0 (cf. Fig. 1). According to Theorem 2.2, the points mij ∈minorω(MD(A, B)) can be eliminated by removing the relevant points ai from A or bj from B, and the rest turn out to be linearly separable. ': '1', 'Because of multi-layer composite structures, it could be hard to directly analyze the properties of deep networks via the backward inference from the behavior of their outputs. Instead, analyzing the linear separability of hidden-layer outputs becomes a feasible way of understanding the deep networks. However, it is still challenging to develop the LSMs that meet the requirements of robustness, absoluteness, and efficiency. In this paper, we propose the MD-LSMs LSi (i = ∗, 0, 1), which meet the first two requirements, and then derive their approximationsLSi (i = ∗, 0, 1), which meets all of the three requirements. The comparative experiments demonstrate that there is only a slight difference between LSi andLSi (i = ∗, 0, 1). ': '2', 'Theorem 2.5. 
Given two point sets A and B, then it holds that where ω∗ stands for the weight vector achieving the maximum operation of LS∗(A, B). ': '3'} | {'1': 'Furthermore, we denote majorω(MD(A, B)) (resp. minorω(MD(A, B))) as the subset of MD(A, B) that lies in the major (resp. minor) side of ωT m = 0 (cf. Fig. 1). According to Theorem 2.2, the points mij ∈minorω(MD(A, B)) can be eliminated by removing the relevant points ai from A or bj from B, and the rest turn out to be linearly separable. ', '2': 'Because of multi-layer composite structures, it could be hard to directly analyze the properties of deep networks via the backward inference from the behavior of their outputs. Instead, analyzing the linear separability of hidden-layer outputs becomes a feasible way of understanding the deep networks. However, it is still challenging to develop the LSMs that meet the requirements of robustness, absoluteness, and efficiency. In this paper, we propose the MD-LSMs LSi (i = ∗, 0, 1), which meet the first two requirements, and then derive their approximationsLSi (i = ∗, 0, 1), which meets all of the three requirements. The comparative experiments demonstrate that there is only a slight difference between LSi andLSi (i = ∗, 0, 1). ', '3': 'Theorem 2.5. Given two point sets A and B, then it holds that where ω∗ stands for the weight vector achieving the maximum operation of LS∗(A, B). '} | {} | {} | {'images/c07ef79b13042d3ef93d1b00d1df6a2243c66d8d651b6b4397c86d8f926c8e34.jpg': '2', 'images/874cc1ed383f75ed1c90e523c9afdd4da277117741f3bcfd7bad5fb361af1144.jpg': '1'} | {'2': 'images/c07ef79b13042d3ef93d1b00d1df6a2243c66d8d651b6b4397c86d8f926c8e34.jpg', '1': 'images/874cc1ed383f75ed1c90e523c9afdd4da277117741f3bcfd7bad5fb361af1144.jpg'} | {} | ['Theorem 2.5. Given two point sets A and B, then it holds that where ω∗ stands for the weight vector achieving the maximum operation of LS∗(A, B). 
', 'Because of multi-layer composite structures, it could be hard to directly analyze the properties of deep networks via the backward inference from the behavior of their outputs. Instead, analyzing the linear separability of hidden-layer outputs becomes a feasible way of understanding the deep networks. However, it is still challenging to develop the LSMs that meet the requirements of robustness, absoluteness, and efficiency. In this paper, we propose the MD-LSMs LSi (i = ∗, 0, 1), which meet the first two requirements, and then derive their approximationsLSi (i = ∗, 0, 1), which meets all of the three requirements. The comparative experiments demonstrate that there is only a slight difference between LSi andLSi (i = ∗, 0, 1). ', 'Furthermore, we denote majorω(MD(A, B)) (resp. minorω(MD(A, B))) as the subset of MD(A, B) that lies in the major (resp. minor) side of ωT m = 0 (cf. Fig. 1). According to Theorem 2.2, the points mij ∈minorω(MD(A, B)) can be eliminated by removing the relevant points ai from A or bj from B, and the rest turn out to be linearly separable. '] | fc40a3dfd5a4864af58075288f6adadaac5afe4b8cfe995f3d6cac4adee97837 | 44e43c91e7fe4e50222115bbed70e738f53eba63
explanation | How do the results from swapped test cases relate to the performance of different models in reducing hypothesis space? | With respect to Figure 3, our intent with Figure 2 was to show the degree to which models stop exploring after just a few turns. We point out that in the figure there is a sizable gap between weaker multi-turn models like deepseek-chat, and stronger multi-turn models such as Claude 3.5 Sonnet. Looking at Figure 3, we note that the best test cases come from chatgpt-4o-latest, which matches the highest final point on Figure 2. | ['Figure 2', 'Figure 3'] | ['images/52b35ae33f08133f1f59419272ed7516f2c3b376993178af3ac074741d43f08e.jpg', 'images/74ff68078d772cda6cb763941cec3e6247483b34df3eb6213191af01f708f6c6.jpg'] | ['figure'] | 2 | 3 | 5 | {'Table 2 shows the complexity metrics of the LLMs. Many LLMs with high accuracy such as Claude 3.5 Sonnet, chatgpt-4o-latest, and Mistral Large have long response lengths. However, o1-preview has a short response length and few operators, despite its high performance on the task. The differences in response length and number of operators is most clearly seen in the incorrect answers. For example, if the correct rule is lambda x, y, z: (x \\* y \\* z)% 2 == 1, Claude 3.5 Sonnet’s guess is lambda x,y,z: all(n > 0 and int(n)== n and (n & (n-1)== 0)and (n % 3 == 0 or n == 1)for n in [x,y,z]), which is more convoluted than o1-preview’s guess of lambda x, y, z: abs(x)== 1 and abs(y)== 1 and abs(z)== 1. Combined with its low number of average guesses made before making the final guess, o1-preview appears to follow Occam’s Razor very closely compared to most of the other high-performing models with longer response lengths. For the set inclusion ratio, the best models tend to cluster around an intermediate value of 2-4. ': '1'} | {'1': 'Table 2 shows the complexity metrics of the LLMs. Many LLMs with high accuracy such as Claude 3.5 Sonnet, chatgpt-4o-latest, and Mistral Large have long response lengths. 
However, o1-preview has a short response length and few operators, despite its high performance on the task. The differences in response length and number of operators is most clearly seen in the incorrect answers. For example, if the correct rule is lambda x, y, z: (x \\* y \\* z)% 2 == 1, Claude 3.5 Sonnet’s guess is lambda x,y,z: all(n > 0 and int(n)== n and (n & (n-1)== 0)and (n % 3 == 0 or n == 1)for n in [x,y,z]), which is more convoluted than o1-preview’s guess of lambda x, y, z: abs(x)== 1 and abs(y)== 1 and abs(z)== 1. Combined with its low number of average guesses made before making the final guess, o1-preview appears to follow Occam’s Razor very closely compared to most of the other high-performing models with longer response lengths. For the set inclusion ratio, the best models tend to cluster around an intermediate value of 2-4. '} | {'images/61cb546e2201e8c4f30727845e64ad1ee44f69a13879d5e5855e7fe685b4505f.jpg': '4', 'images/52b35ae33f08133f1f59419272ed7516f2c3b376993178af3ac074741d43f08e.jpg': '2', 'images/74ff68078d772cda6cb763941cec3e6247483b34df3eb6213191af01f708f6c6.jpg': '3'} | {'4': 'images/61cb546e2201e8c4f30727845e64ad1ee44f69a13879d5e5855e7fe685b4505f.jpg', '2': 'images/52b35ae33f08133f1f59419272ed7516f2c3b376993178af3ac074741d43f08e.jpg', '3': 'images/74ff68078d772cda6cb763941cec3e6247483b34df3eb6213191af01f708f6c6.jpg'} | {'images/ce96851d11b98e9400f5ee183095b19cd8c94734ba6b50c75a9b80fc2c7bbcc1.jpg': '2'} | {'2': 'images/ce96851d11b98e9400f5ee183095b19cd8c94734ba6b50c75a9b80fc2c7bbcc1.jpg'} | {} | ['images/ce96851d11b98e9400f5ee183095b19cd8c94734ba6b50c75a9b80fc2c7bbcc1.jpg', 'images/61cb546e2201e8c4f30727845e64ad1ee44f69a13879d5e5855e7fe685b4505f.jpg', 'Table 2 shows the complexity metrics of the LLMs. Many LLMs with high accuracy such as Claude 3.5 Sonnet, chatgpt-4o-latest, and Mistral Large have long response lengths. However, o1-preview has a short response length and few operators, despite its high performance on the task. 
The differences in response length and number of operators is most clearly seen in the incorrect answers. For example, if the correct rule is lambda x, y, z: (x \\* y \\* z)% 2 == 1, Claude 3.5 Sonnet’s guess is lambda x,y,z: all(n > 0 and int(n)== n and (n & (n-1)== 0)and (n % 3 == 0 or n == 1)for n in [x,y,z]), which is more convoluted than o1-preview’s guess of lambda x, y, z: abs(x)== 1 and abs(y)== 1 and abs(z)== 1. Combined with its low number of average guesses made before making the final guess, o1-preview appears to follow Occam’s Razor very closely compared to most of the other high-performing models with longer response lengths. For the set inclusion ratio, the best models tend to cluster around an intermediate value of 2-4. '] | 44c7e4e6c2e2187468f983052deb279000f6be83eb7d4a163d72a6d5f1dd572f | 468ee2d84d60db9e0b9b91dd77c7f6ed2de008ed |
explanation | For reporting the property MAEs, was an external property predictor used for the evaluation? How is MinMAE reported? | We use RDKIT to evaluate the properties, except for Property Acc. in Table 1 which is based on a property-predictor as mentioned in the footnote of the table. We had also forgotten to mention that Table 3 uses MXMNet to evaluate the HOMO-LUMO Gap. We now mention all of this information more explicitly in the paper. | ['Table 1', 'Table 3'] | ['images/32053ab955fc8c27609659338d53c5088d821359166a26898b544b2cdfc14d51.jpg', 'images/b6616a39fddee44d99abb5887cfc3e4969dcfa2ee135d6c5addb60fdbeef243f.jpg'] | ['table'] | 2 | 3 | 5 | {'Results Our experiments are shown in Table 4. Our approach yields slightly better molecules in terms of reward and diversity compared to online methods, using around 11.5% of the molecules. This makes our approach significantly more efficient. However, it is important to note that solving this task with online methods is a steep hill and can be considered more difficult. ': '1', 'In the neural network, we process the standardized continuous features (continuous properties concatenated with their binary missing indicators) in a 2-layer multilayer perceptron (MLP) with Swish activation (Hendrycks & Gimpel, 2016; Ramachandran et al., 2017). Each categorical feature is then processed individually using a linear embedding. These processed outputs are added directly to the embedding of all tokens. We also experimented with injecting these embeddings through adaptive normalization (Huang & Belongie, 2017), a method commonly used for conditioning on noise-level in diffusion models (Ho et al., 2020), but this approach massively increased the number of parameters without improving performance. ': '2'} | {'1': 'Results Our experiments are shown in Table 4. Our approach yields slightly better molecules in terms of reward and diversity compared to online methods, using around 11.5% of the molecules. 
This makes our approach significantly more efficient. However, it is important to note that solving this task with online methods is a steep hill and can be considered more difficult. ', '2': 'In the neural network, we process the standardized continuous features (continuous properties concatenated with their binary missing indicators) in a 2-layer multilayer perceptron (MLP) with Swish activation (Hendrycks & Gimpel, 2016; Ramachandran et al., 2017). Each categorical feature is then processed individually using a linear embedding. These processed outputs are added directly to the embedding of all tokens. We also experimented with injecting these embeddings through adaptive normalization (Huang & Belongie, 2017), a method commonly used for conditioning on noise-level in diffusion models (Ho et al., 2020), but this approach massively increased the number of parameters without improving performance. '} | {} | {} | {'images/b6616a39fddee44d99abb5887cfc3e4969dcfa2ee135d6c5addb60fdbeef243f.jpg': '3', 'images/d18adac4babb6e686e482f88ed7e90c8e8bdc5d8d35581639a77e95426ff8197.jpg': '2', 'images/32053ab955fc8c27609659338d53c5088d821359166a26898b544b2cdfc14d51.jpg': '1'} | {'3': 'images/b6616a39fddee44d99abb5887cfc3e4969dcfa2ee135d6c5addb60fdbeef243f.jpg', '2': 'images/d18adac4babb6e686e482f88ed7e90c8e8bdc5d8d35581639a77e95426ff8197.jpg', '1': 'images/32053ab955fc8c27609659338d53c5088d821359166a26898b544b2cdfc14d51.jpg'} | {} | ['images/d18adac4babb6e686e482f88ed7e90c8e8bdc5d8d35581639a77e95426ff8197.jpg', 'Results Our experiments are shown in Table 4. Our approach yields slightly better molecules in terms of reward and diversity compared to online methods, using around 11.5% of the molecules. This makes our approach significantly more efficient. However, it is important to note that solving this task with online methods is a steep hill and can be considered more difficult. 
', 'In the neural network, we process the standardized continuous features (continuous properties concatenated with their binary missing indicators) in a 2-layer multilayer perceptron (MLP) with Swish activation (Hendrycks & Gimpel, 2016; Ramachandran et al., 2017). Each categorical feature is then processed individually using a linear embedding. These processed outputs are added directly to the embedding of all tokens. We also experimented with injecting these embeddings through adaptive normalization (Huang & Belongie, 2017), a method commonly used for conditioning on noise-level in diffusion models (Ho et al., 2020), but this approach massively increased the number of parameters without improving performance. '] | 3c9726a7249d89d88377c0536700604a24cbc74aec4d79c6aea13c172234f97b | 4df37026ed3f1357f6910edc4db9049e3876ee10 |
explanation | What considerations are there regarding the sample size used for estimating statistical significance? | We note that a sample size of 5 falls within the range used in well-established prior work (see Figure 3(a) in [1]). In addition, in Figure 4 in our manuscript, we have already explored higher sample sizes and found the results to be consistent. | ['Figure 3', 'Figure 4'] | ['images/4c9ee8259146aee78fe0707010a303d9438e8302a85b416a71d72d3c24c4b782.jpg', 'images/7b114e3aef0bc63ee6a1c8ca698886c2415080eb7bb3b879c79d8900996e4cbf.jpg'] | ['figure'] | 2 | 3 | 5 | {'where g(ϵ) is the probability density function of U(0, α)d. By perturbing the intermediate layer outputs and sampling with a non-zero temperature at the final layer, our approach effectively combines two complementary sources of randomness. To identify hallucinations, we compute the hallucination detection score over K generations and apply a threshold to classify outputs. ': '1', 'Output: Hallucination detection score: s(x) \n1: for each generation k = 1 to K do \n2: Sample noise ϵ ∼U(0, α)d \n3: for each decoding step t do \n4: for each layer l do \n5: Compute hl using the potentially perturbed prior layer representations. \n6: Perturb the MLP outputs: ˜hl = hl + ϵ if l ∈[l1, l2]. \n7: end for ': '2'} | {'1': 'where g(ϵ) is the probability density function of U(0, α)d. By perturbing the intermediate layer outputs and sampling with a non-zero temperature at the final layer, our approach effectively combines two complementary sources of randomness. To identify hallucinations, we compute the hallucination detection score over K generations and apply a threshold to classify outputs. ', '2': 'Output: Hallucination detection score: s(x) \n1: for each generation k = 1 to K do \n2: Sample noise ϵ ∼U(0, α)d \n3: for each decoding step t do \n4: for each layer l do \n5: Compute hl using the potentially perturbed prior layer representations. \n6: Perturb the MLP outputs: ˜hl = hl + ϵ if l ∈[l1, l2]. 
\n7: end for '} | {'images/7b114e3aef0bc63ee6a1c8ca698886c2415080eb7bb3b879c79d8900996e4cbf.jpg': '4', 'images/4c9ee8259146aee78fe0707010a303d9438e8302a85b416a71d72d3c24c4b782.jpg': '3', 'images/2237f369bd922c013b1243781ffe69d11906f2c40554af9b7a98fe30ce92320c.jpg': '1'} | {'4': 'images/7b114e3aef0bc63ee6a1c8ca698886c2415080eb7bb3b879c79d8900996e4cbf.jpg', '3': 'images/4c9ee8259146aee78fe0707010a303d9438e8302a85b416a71d72d3c24c4b782.jpg', '1': 'images/2237f369bd922c013b1243781ffe69d11906f2c40554af9b7a98fe30ce92320c.jpg'} | {} | {} | {} | ['Output: Hallucination detection score: s(x) \n1: for each generation k = 1 to K do \n2: Sample noise ϵ ∼U(0, α)d \n3: for each decoding step t do \n4: for each layer l do \n5: Compute hl using the potentially perturbed prior layer representations. \n6: Perturb the MLP outputs: ˜hl = hl + ϵ if l ∈[l1, l2]. \n7: end for ', 'where g(ϵ) is the probability density function of U(0, α)d. By perturbing the intermediate layer outputs and sampling with a non-zero temperature at the final layer, our approach effectively combines two complementary sources of randomness. To identify hallucinations, we compute the hallucination detection score over K generations and apply a threshold to classify outputs. ', 'images/2237f369bd922c013b1243781ffe69d11906f2c40554af9b7a98fe30ce92320c.jpg'] | a31b7e0247b634cc2d43c5f9c4e7515d090228b2a420999a57cef268143bf28d | 5009d103d0c13476dbf84fe4cd2e58c1e18968e9 |
explanation | What is the rationale behind finetuning JAT/Gato on new demonstrations given its prior training on multiple tasks? | No, this is just a misunderstanding. We trained JAT/Gato from scratch leaving out the unseen environments shown in Figure 1. We also direct the reviewer to Figure 4 which includes the performance of the above JAT/Gato policy without any finetuning as well. | ['Figure 1', 'Figure 4'] | ['images/a7687c7c00dbac61229f57f2293c318f3e315c0f093e9fbb0a2eaec219fe1407.jpg', 'images/f477a5ac3310cd4f98d69242969d1c8176c8053020d24c8114416af49ac360e7.jpg'] | ['figure'] | 2 | 3 | 5 | {'Pre-training REGENT and Loss Function: We train REGENT by minimizing the total cross-entropy loss on discrete actions and total mean-squared error on continuous actions for all n + 1 action predictions (n in the context and 1 query). We also follow the JAT recipe in ensuring that each training batch consists only of data from a single environment’s dataset. We provide details about the training hyperparameters in Appendix A. ': '1', 'Definition 5.1 (Most Isolated State). For a given set of retrieval demonstrations Dj in environment j, we define the most isolated state sID := arg max mind(s, s′) , and consequently the distance to s′∈D ': '2'} | {'1': 'Pre-training REGENT and Loss Function: We train REGENT by minimizing the total cross-entropy loss on discrete actions and total mean-squared error on continuous actions for all n + 1 action predictions (n in the context and 1 query). We also follow the JAT recipe in ensuring that each training batch consists only of data from a single environment’s dataset. We provide details about the training hyperparameters in Appendix A. ', '2': 'Definition 5.1 (Most Isolated State). 
For a given set of retrieval demonstrations Dj in environment j, we define the most isolated state sID := arg max min d(s, s′), and consequently the distance to s′∈D ': '2'} | {'1': 'Pre-training REGENT and Loss Function: We train REGENT by minimizing the total cross-entropy loss on discrete actions and total mean-squared error on continuous actions for all n + 1 action predictions (n in the context and 1 query). We also follow the JAT recipe in ensuring that each training batch consists only of data from a single environment’s dataset. We provide details about the training hyperparameters in Appendix A. ', '2': 'Definition 5.1 (Most Isolated State). For a given set of retrieval demonstrations Dj in environment j, we define the most isolated state sID := arg max min d(s, s′), and consequently the distance to s′∈D '} | {'images/a7687c7c00dbac61229f57f2293c318f3e315c0f093e9fbb0a2eaec219fe1407.jpg': '1', 'images/96af3415452c7f86d63b594a37cb1a5d8e79e178b96e85b7b57752aa134ae5d8.jpg': '3', 'images/f477a5ac3310cd4f98d69242969d1c8176c8053020d24c8114416af49ac360e7.jpg': '4'} | {'1': 'images/a7687c7c00dbac61229f57f2293c318f3e315c0f093e9fbb0a2eaec219fe1407.jpg', '3': 'images/96af3415452c7f86d63b594a37cb1a5d8e79e178b96e85b7b57752aa134ae5d8.jpg', '4': 'images/f477a5ac3310cd4f98d69242969d1c8176c8053020d24c8114416af49ac360e7.jpg'} | {} | {} | {} | ['Pre-training REGENT and Loss Function: We train REGENT by minimizing the total cross-entropy loss on discrete actions and total mean-squared error on continuous actions for all n + 1 action predictions (n in the context and 1 query). We also follow the JAT recipe in ensuring that each training batch consists only of data from a single environment’s dataset. We provide details about the training hyperparameters in Appendix A. ', 'images/96af3415452c7f86d63b594a37cb1a5d8e79e178b96e85b7b57752aa134ae5d8.jpg', 'Definition 5.1 (Most Isolated State). For a given set of retrieval demonstrations Dj in environment j, we define the most isolated state sID := arg max min d(s, s′), and consequently the distance to s′∈D '] | a7c78e5ab17438126dbe15baa845de300d167f0156251fd94527f432ea4843dd | 5182975bc8ac57bfed1e37dbc9498747b9080ef2
explanation | How does the proposed method compare with existing unsupervised graph anomaly detection methods? | We would like to clarify that we have explicitly stated in the paper that our primary focus is on the supervised setting, as noted in response to 'To Rev. VYmz' W1. The inclusion of unsupervised baselines is intended for comprehensiveness. According to Table 1 in the main paper, supervised GAD baseline methods consistently demonstrate significantly better performance compared to their unsupervised counterparts. For comprehensiveness, we have already included seven unsupervised baselines, including recent methods such as CoLA, CONDA, TAM, and ADA-GAD, which are more recent than AEGIS. Additionally, we included GGAN, a GAN-based unsupervised method with a similar learning scheme to AEGIS, as part of our baseline comparisons. However, to acknowledge the reviewer’s input, we have now included the results for AEGIS. The overall GAD performance of AEGIS, in terms of AUC-ROC and AUC-PR, is reported in Table 2. | ['Table 1', 'Table 2'] | ['images/54a8cb604f792e12402098cbe3b4460b757ec56e5600797bdad480044a44b10e.jpg', 'images/5de3471b6e077ce69df8d792bdd1933e4c55276ac2f0ca4a7b239f3c68cab3c6.jpg'] | ['table'] | 2 | 3 | 5 | {'The design of the NSR module, which is the core of NSReg, is motivated by the limitations of most supervised GAD methods, which are only designed to maximise separability between normal nodes and seen anomalies, but fail to provide sufficient supervision for the representation learner to effectively differentiate unseen anomaly representations from the normal class. In an open-set environment, we are unable to obtain the prior knowledge of the unseen anomalies, and thus, difficult to learn the unseen anomaly patterns. Thus, NSReg takes a step back and focuses on learning better normality, which would help distinguish the unseen anomalies from the normal nodes better. 
NSReg achieves this by modelling the normal-node-oriented relations (i.e., {r = (v, u) | v ∈Vn, u ∈V}), which is aimed at enforcing a stricter definition of the normal region and recalibrating misplaced unseen anomaly representations within the representation space. By modelling three types of normalnode-oriented relations as a discriminative task, NSReg enhances representation learning with significantly enriched normality semantics, effectively disentangling unseen anomaly nodes from normal nodes in the representation space. We first provide a theoretical analysis of enforcing structural normality and then detail the two core components of NSR: normal-node-oriented relation generation and modelling. ': '1'} | {'1': 'The design of the NSR module, which is the core of NSReg, is motivated by the limitations of most supervised GAD methods, which are only designed to maximise separability between normal nodes and seen anomalies, but fail to provide sufficient supervision for the representation learner to effectively differentiate unseen anomaly representations from the normal class. In an open-set environment, we are unable to obtain the prior knowledge of the unseen anomalies, and thus, difficult to learn the unseen anomaly patterns. Thus, NSReg takes a step back and focuses on learning better normality, which would help distinguish the unseen anomalies from the normal nodes better. NSReg achieves this by modelling the normal-node-oriented relations (i.e., {r = (v, u) | v ∈Vn, u ∈V}), which is aimed at enforcing a stricter definition of the normal region and recalibrating misplaced unseen anomaly representations within the representation space. By modelling three types of normalnode-oriented relations as a discriminative task, NSReg enhances representation learning with significantly enriched normality semantics, effectively disentangling unseen anomaly nodes from normal nodes in the representation space. 
We first provide a theoretical analysis of enforcing structural normality and then detail the two core components of NSR: normal-node-oriented relation generation and modelling. '} | {'images/1674c24a2a4fa6361a1b35aa7224ff9095ab6c59eef222665a85053c6bf2b10a.jpg': '3'} | {'3': 'images/1674c24a2a4fa6361a1b35aa7224ff9095ab6c59eef222665a85053c6bf2b10a.jpg'} | {'images/5de3471b6e077ce69df8d792bdd1933e4c55276ac2f0ca4a7b239f3c68cab3c6.jpg': '2', 'images/54a8cb604f792e12402098cbe3b4460b757ec56e5600797bdad480044a44b10e.jpg': '1', 'images/35c234049e43fb13683e8ca2ab8c3403f97ba49968137e497f448ed4d1e25ad1.jpg': '3'} | {'2': 'images/5de3471b6e077ce69df8d792bdd1933e4c55276ac2f0ca4a7b239f3c68cab3c6.jpg', '1': 'images/54a8cb604f792e12402098cbe3b4460b757ec56e5600797bdad480044a44b10e.jpg', '3': 'images/35c234049e43fb13683e8ca2ab8c3403f97ba49968137e497f448ed4d1e25ad1.jpg'} | {} | ['images/1674c24a2a4fa6361a1b35aa7224ff9095ab6c59eef222665a85053c6bf2b10a.jpg', 'images/35c234049e43fb13683e8ca2ab8c3403f97ba49968137e497f448ed4d1e25ad1.jpg', 'The design of the NSR module, which is the core of NSReg, is motivated by the limitations of most supervised GAD methods, which are only designed to maximise separability between normal nodes and seen anomalies, but fail to provide sufficient supervision for the representation learner to effectively differentiate unseen anomaly representations from the normal class. In an open-set environment, we are unable to obtain the prior knowledge of the unseen anomalies, and thus, difficult to learn the unseen anomaly patterns. Thus, NSReg takes a step back and focuses on learning better normality, which would help distinguish the unseen anomalies from the normal nodes better. NSReg achieves this by modelling the normal-node-oriented relations (i.e., {r = (v, u) | v ∈Vn, u ∈V}), which is aimed at enforcing a stricter definition of the normal region and recalibrating misplaced unseen anomaly representations within the representation space. 
By modelling three types of normal-node-oriented relations as a discriminative task, NSReg enhances representation learning with significantly enriched normality semantics, effectively disentangling unseen anomaly nodes from normal nodes in the representation space. We first provide a theoretical analysis of enforcing structural normality and then detail the two core components of NSR: normal-node-oriented relation generation and modelling. '] | a7887fe2a0169b84848b2749fcfa2be78a40c196ec55e8413fb407a0bfd2704a | 5710b3ab32311f34cb47976677f788dbc359d4bc
explanation | What is the underlying mechanism of the proposed metric? | Interestingly, the Wavelet Packet Transform (WPT) enhances transparency in the following way. The WPT splits the input image into various frequency bands (low -> high) based on the transformation level, as illustrated in Figure 2. We then compute Frechet Distance (FD) across individual frequency bands, and this individual FD score (Figure 6) provides valuable insights into the model's performance across different frequency bands. For instance, Figure 6 demonstrates that DDGAN achieves a better Frechet Wavelet Distance (FWD) score than StyleGAN2 because DDGAN can generate a better frequency response. | ['Figure 2', 'Figure 6'] | ['images/d048c5a503e965a9d8493548766d8ee4f7bffd72d400ebf9d869f0c2cc355210.jpg', 'images/6fca4a8e8496fcb5b1b48a4c276492df60beafe889f680f1d866aefe4657bb1a.jpg'] | ['figure'] | 2 | 3 | 5 | {'Modern generative models exhibit frequency biases (Durall et al., 2020), while commonly used metrics such as FID, KID and FD-DINOv2 are affected by domain bias (Kynkäänniemi et al., 2023). To address these limitations, FWD accounts for frequency information without introducing a domainspecific bias. Even though FD-DINOv2 offers a partial solution to this issue, it comes at a very high computational cost and has thus a negative environmental impact. In response, this paper introduced FWD a novel metric based on the wavelet packet transform. Our metric allows consistent, domainagnostic evaluation. At the same time, its formulation is computationally efficient. Our findings show that FWD is robust to input perturbations and interpretable through the analysis of individual frequency bands. Optimizing FID or FD-DINOv2 metrics can negatively impact reproducibility, if optimized samples are not provided. 
In such cases, the use of FWD in conjunction with traditional metrics ensures a comprehensive and accurate evaluation of generative models while also helping to detect and mitigate domain bias. ': '1'} | {'1': 'Modern generative models exhibit frequency biases (Durall et al., 2020), while commonly used metrics such as FID, KID and FD-DINOv2 are affected by domain bias (Kynkäänniemi et al., 2023). To address these limitations, FWD accounts for frequency information without introducing a domainspecific bias. Even though FD-DINOv2 offers a partial solution to this issue, it comes at a very high computational cost and has thus a negative environmental impact. In response, this paper introduced FWD a novel metric based on the wavelet packet transform. Our metric allows consistent, domainagnostic evaluation. At the same time, its formulation is computationally efficient. Our findings show that FWD is robust to input perturbations and interpretable through the analysis of individual frequency bands. Optimizing FID or FD-DINOv2 metrics can negatively impact reproducibility, if optimized samples are not provided. In such cases, the use of FWD in conjunction with traditional metrics ensures a comprehensive and accurate evaluation of generative models while also helping to detect and mitigate domain bias. 
'} | {'images/6fca4a8e8496fcb5b1b48a4c276492df60beafe889f680f1d866aefe4657bb1a.jpg': '6', 'images/d048c5a503e965a9d8493548766d8ee4f7bffd72d400ebf9d869f0c2cc355210.jpg': '2'} | {'6': 'images/6fca4a8e8496fcb5b1b48a4c276492df60beafe889f680f1d866aefe4657bb1a.jpg', '2': 'images/d048c5a503e965a9d8493548766d8ee4f7bffd72d400ebf9d869f0c2cc355210.jpg'} | {'images/64cacd52a778cf5d1c53c2986872631b91dbcde97068b62902d464aca7afd70e.jpg': '3', 'images/5e97f149d6acb8145c9d17facc28d611f8e72756bf733639838ad5f97287060e.jpg': '1'} | {'3': 'images/64cacd52a778cf5d1c53c2986872631b91dbcde97068b62902d464aca7afd70e.jpg', '1': 'images/5e97f149d6acb8145c9d17facc28d611f8e72756bf733639838ad5f97287060e.jpg'} | {} | ['images/5e97f149d6acb8145c9d17facc28d611f8e72756bf733639838ad5f97287060e.jpg', 'images/64cacd52a778cf5d1c53c2986872631b91dbcde97068b62902d464aca7afd70e.jpg', 'Modern generative models exhibit frequency biases (Durall et al., 2020), while commonly used metrics such as FID, KID and FD-DINOv2 are affected by domain bias (Kynkäänniemi et al., 2023). To address these limitations, FWD accounts for frequency information without introducing a domainspecific bias. Even though FD-DINOv2 offers a partial solution to this issue, it comes at a very high computational cost and has thus a negative environmental impact. In response, this paper introduced FWD a novel metric based on the wavelet packet transform. Our metric allows consistent, domainagnostic evaluation. At the same time, its formulation is computationally efficient. Our findings show that FWD is robust to input perturbations and interpretable through the analysis of individual frequency bands. Optimizing FID or FD-DINOv2 metrics can negatively impact reproducibility, if optimized samples are not provided. In such cases, the use of FWD in conjunction with traditional metrics ensures a comprehensive and accurate evaluation of generative models while also helping to detect and mitigate domain bias. 
'] | ea8a2e164753ed68327a1822125b0664a15242c8fde2fef18ffcc57bcb475c51 | 57951a7039a2291f190ed6ea63eb675cba868c95 |
explanation | How do you define reasoning in the context of your evaluation? | Since we evaluate each task independently, we define reasoning as a correct sequence of steps that helps in arriving at the final answer. In addition, we check for adjacent capabilities, for instance, arithmetic calculations in Figure 3 or hallucinations in Figure 5, and label them differently, instead of considering them as another reasoning capability. | ['Figure 3', 'Figure 5'] | ['images/b1d1dc601faee41cb166fd7724bbf5c6ea200e98bfe95a3bb7ef877d1d5cf96d.jpg', 'images/52c08f6777eb1848580dae5e3704fcf5e35875081d8d5dcaeebe52aaae9b721b.jpg'] | ['figure'] | 2 | 3 | 5 | {} | {} | {'images/b7fcdeb83dc5c6f8ff06081f07dfc13f641abd402e2d5cb06acaa85ea5615932.jpg': '4', 'images/b1d1dc601faee41cb166fd7724bbf5c6ea200e98bfe95a3bb7ef877d1d5cf96d.jpg': '3', 'images/52c08f6777eb1848580dae5e3704fcf5e35875081d8d5dcaeebe52aaae9b721b.jpg': '5', 'images/4332dabf8be0bfd377546e8c16470a7a688b028a3d466e969dcd9251c043f10c.jpg': '2'} | {'4': 'images/b7fcdeb83dc5c6f8ff06081f07dfc13f641abd402e2d5cb06acaa85ea5615932.jpg', '3': 'images/b1d1dc601faee41cb166fd7724bbf5c6ea200e98bfe95a3bb7ef877d1d5cf96d.jpg', '5': 'images/52c08f6777eb1848580dae5e3704fcf5e35875081d8d5dcaeebe52aaae9b721b.jpg', '2': 'images/4332dabf8be0bfd377546e8c16470a7a688b028a3d466e969dcd9251c043f10c.jpg'} | {'images/3c5bbf932bd48d88c0a2e86e3e4726ce640958222e4f2c9f350c1f97d0984e70.jpg': '1'} | {'1': 'images/3c5bbf932bd48d88c0a2e86e3e4726ce640958222e4f2c9f350c1f97d0984e70.jpg'} | {} | ['images/3c5bbf932bd48d88c0a2e86e3e4726ce640958222e4f2c9f350c1f97d0984e70.jpg', 'images/4332dabf8be0bfd377546e8c16470a7a688b028a3d466e969dcd9251c043f10c.jpg', 'images/b7fcdeb83dc5c6f8ff06081f07dfc13f641abd402e2d5cb06acaa85ea5615932.jpg'] | 13c045ea2edc83ef7d176fae36c844c87b573de3dec835d6f5cc79e6b7e70c65 | 5e38d009942abec86d00fef13f398721bbab55f1 |
explanation | What methods were used to evaluate the quality of generation? | We conducted manual evaluations to compare the outputs of different generation methods. As mentioned in the original text, 'Additionally, we manually inspect the few-shot prompts and find that this method tends to first describe the image content in detail before asking the question, as shown in Figure 6,' this conclusion was drawn from reviewing over 100 samples. However, performing manual evaluations on a statistically significant number of samples is prohibitively costly. Thus we also use score distribution in Figure 5 to show the quality of generation. We believe this hybrid approach balances quality and practicality. | ['Figure 5', 'Figure 6'] | ['images/388ccbff7d3b12df8feeb9c46605a53df8c0481770a0d5903573341737c315c0.jpg', 'images/53d6c1e13fb58a48ae939a7d643d255ed27cab476b9a4f563bae35c2c1afb34f.jpg'] | ['figure'] | 2 | 3 | 5 | {} | {} | {'images/39d29a433fec2675a39d357ac7e778c9da6449e15db18e13c572e3c156f60a27.jpg': '8', 'images/f786e8fd39922f5ee0219913c949ed2e86a61d45dee595eee16f8ed4ab08a890.jpg': '3', 'images/53d6c1e13fb58a48ae939a7d643d255ed27cab476b9a4f563bae35c2c1afb34f.jpg': '6', 'images/fa24ec2e765d3427ee7a7fbf1f28a575fd724a53f33d6628920de6a7b53ed1d4.jpg': '1', 'images/388ccbff7d3b12df8feeb9c46605a53df8c0481770a0d5903573341737c315c0.jpg': '5'} | {'8': 'images/39d29a433fec2675a39d357ac7e778c9da6449e15db18e13c572e3c156f60a27.jpg', '3': 'images/f786e8fd39922f5ee0219913c949ed2e86a61d45dee595eee16f8ed4ab08a890.jpg', '6': 'images/53d6c1e13fb58a48ae939a7d643d255ed27cab476b9a4f563bae35c2c1afb34f.jpg', '1': 'images/fa24ec2e765d3427ee7a7fbf1f28a575fd724a53f33d6628920de6a7b53ed1d4.jpg', '5': 'images/388ccbff7d3b12df8feeb9c46605a53df8c0481770a0d5903573341737c315c0.jpg'} | {} | {} | {} | ['images/f786e8fd39922f5ee0219913c949ed2e86a61d45dee595eee16f8ed4ab08a890.jpg', 'images/fa24ec2e765d3427ee7a7fbf1f28a575fd724a53f33d6628920de6a7b53ed1d4.jpg', 
'images/39d29a433fec2675a39d357ac7e778c9da6449e15db18e13c572e3c156f60a27.jpg'] | d4c09d4293e0873105f0ba64e81e9082e649317c7b35fca24fc71acb535765b5 | 62b2367320b6d8fa04536512032c768a55deb201 |
explanation | How does GROD compare with the baseline NPOS in terms of OOD sample generation and superiority? | To the best of our knowledge, the 'gold standard' for measuring the quality of synthetic OOD data has not been proposed. And compared to task performance in Table 2/3/4/5, GROD is superior. In Table 6 of our revised paper, we test changing the generating method to Gaussian and randomly uniform noise, which further shows the effect of GROD. | ['Table 2', 'Table 6'] | ['images/91a7fad5481d02a6218d71c696c003f5835d8a76084eeeb8879c939e9c6657ba.jpg', 'images/a3c1da2d16f6a045c4d3834d5a895aa6dfe1a520f4b9a5de3656a745561c7d8b.jpg'] | ['table'] | 2 | 3 | 5 | {'In this section, we provide empirical evidence to validate the effectiveness of GROD across a range of real-world classification tasks and types of outliers, including comparison experiments with baselines on various NLP and CV tasks, and the ablation study of key parameters and modules in GROD. ': '1', 'techniques. While many popular OOD detection algorithms are rigorously tested on image datasets, their effectiveness on text datasets does not exhibit marked superiority, as Table 4.2 illustrates. In addition, methods like ODIN (Liang et al., 2017) and G-ODIN (Hsu et al., 2020), which compute data gradients, necessitate floating-point number inputs. However, the tokenizer-encoded long integers used as input tokens create data format incompatibilities when attempting to use BERT and GPT-2 alongside ODIN or G-ODIN. Given their marginal performance on image datasets, these methods are excluded from text classification tasks. For the decoder-only GPT-2 model, some methods (Baseline, GEN) are compatible with both models using CLS tokens as features and without them, as they only require logits for processing. Others are only compatible with transformers with CLS tokens since they combine features and logits. We test two modes (with/without CLS token), labeled Method-C (with CLS) and Method-L (without CLS). 
As shown in Table 4.2, GROD consistently improves model performance across both image and text datasets and various OOD detection tasks, highlighting its versatility and broad applicability. ': '2'} | {'1': 'In this section, we provide empirical evidence to validate the effectiveness of GROD across a range of real-world classification tasks and types of outliers, including comparison experiments with baselines on various NLP and CV tasks, and the ablation study of key parameters and modules in GROD. ', '2': 'techniques. While many popular OOD detection algorithms are rigorously tested on image datasets, their effectiveness on text datasets does not exhibit marked superiority, as Table 4.2 illustrates. In addition, methods like ODIN (Liang et al., 2017) and G-ODIN (Hsu et al., 2020), which compute data gradients, necessitate floating-point number inputs. However, the tokenizer-encoded long integers used as input tokens create data format incompatibilities when attempting to use BERT and GPT-2 alongside ODIN or G-ODIN. Given their marginal performance on image datasets, these methods are excluded from text classification tasks. For the decoder-only GPT-2 model, some methods (Baseline, GEN) are compatible with both models using CLS tokens as features and without them, as they only require logits for processing. Others are only compatible with transformers with CLS tokens since they combine features and logits. We test two modes (with/without CLS token), labeled Method-C (with CLS) and Method-L (without CLS). As shown in Table 4.2, GROD consistently improves model performance across both image and text datasets and various OOD detection tasks, highlighting its versatility and broad applicability. 
'} | {} | {} | {'images/91a7fad5481d02a6218d71c696c003f5835d8a76084eeeb8879c939e9c6657ba.jpg': '2', 'images/a3c1da2d16f6a045c4d3834d5a895aa6dfe1a520f4b9a5de3656a745561c7d8b.jpg': '6', 'images/37ce7bca3475057496ed12ad81cd36fe0f6c5dde908f44fda2ed2fd559268a2c.jpg': '1'} | {'2': 'images/91a7fad5481d02a6218d71c696c003f5835d8a76084eeeb8879c939e9c6657ba.jpg', '6': 'images/a3c1da2d16f6a045c4d3834d5a895aa6dfe1a520f4b9a5de3656a745561c7d8b.jpg', '1': 'images/37ce7bca3475057496ed12ad81cd36fe0f6c5dde908f44fda2ed2fd559268a2c.jpg'} | {} | ['In this section, we provide empirical evidence to validate the effectiveness of GROD across a range of real-world classification tasks and types of outliers, including comparison experiments with baselines on various NLP and CV tasks, and the ablation study of key parameters and modules in GROD. ', 'techniques. While many popular OOD detection algorithms are rigorously tested on image datasets, their effectiveness on text datasets does not exhibit marked superiority, as Table 4.2 illustrates. In addition, methods like ODIN (Liang et al., 2017) and G-ODIN (Hsu et al., 2020), which compute data gradients, necessitate floating-point number inputs. However, the tokenizer-encoded long integers used as input tokens create data format incompatibilities when attempting to use BERT and GPT-2 alongside ODIN or G-ODIN. Given their marginal performance on image datasets, these methods are excluded from text classification tasks. For the decoder-only GPT-2 model, some methods (Baseline, GEN) are compatible with both models using CLS tokens as features and without them, as they only require logits for processing. Others are only compatible with transformers with CLS tokens since they combine features and logits. We test two modes (with/without CLS token), labeled Method-C (with CLS) and Method-L (without CLS). 
As shown in Table 4.2, GROD consistently improves model performance across both image and text datasets and various OOD detection tasks, highlighting its versatility and broad applicability. ', 'images/37ce7bca3475057496ed12ad81cd36fe0f6c5dde908f44fda2ed2fd559268a2c.jpg'] | 27cea54636057f07daba34636ef1471dff674e139cb9c97058974f19f7101acd | 67e2edb048c731ed4c87843ae8a048f4be355f16 |
explanation | How do the empirical results relate to the theoretical findings? | Yes, the empirical results can be used to verify or illustrate key observations from the theoretical analysis. First, as the theory suggests and the results in Figure 1 validate, the algorithms obtain sublinear regret w.r.t. $T$. Second, our theoretical analysis shows that GP-BayesUCB provides more flexible parameters than GP-UCB. It is generally accepted that GP-UCB algorithms tend to overexplore due to too large theoretical confidence intervals. In the right column of Figure 3, we show that the confidence parameter $\eta_t$ of GP-BayesUCB is lower than that of GP-UCB, and additionally we can tune it to be even lower. In practice (left and middle columns of Figure 3), we also show experimentally that GP-BayesUCB obtains lower regret than GP-UCB. Third, as discussed by Russo & Roy (2014), the performance of UCB algorithms depends on designing tight confidence bounds, whereas the regret of Thompson sampling algorithms (using Russo & Roy's framework) can be bounded by any set of confidence bounds. For complex bandit settings, designing tight confidence bounds can be significantly harder. Whilst our theoretical results suggest that GP-TS should perform similarly to GP-UCB and GP-BayesUCB, our experiments demonstrate that GP-TS obtains significantly lower regret, which can likely be attributed to the point raised by Russo & Roy. This finding is also consistent with other works for GP and non-GP bandits. | ['Figure 1', 'Figure 3'] | ['images/1953c4bf91a41d2d19ac10e135812d8da80bdb9d75b4df009126b05453507652.jpg', 'images/e9ed5f01715897715492beebc68da34a7dd30ea24aa3c08460de59e81e663c6e.jpg'] | ['figure'] | 2 | 3 | 5 | {'combinatorial setting, the agent must select a feasible subset of base arms, a super arm, at ∈ At where At ⊆ 2^At is the set of feasible and available super arms. 
To facilitate a feasible combinatorial problem, the number of feasible super arms is finite in each round and the super arms have a maximum size K (|a| ≤ K ∀a ∈ At). The agent observes the rewards of the selected base arms (semi-bandit feedback) rt = {rt,a | a ∈ at} where the base arm reward rt,a = f(a) + ϵt,a is a sum of the expected reward and i.i.d. Gaussian noise with zero mean and variance ς2. Motivated by the online energy-efficient navigation problem in Section 4.1, the total reward is assumed to be additive, and the agent also observes this reward at time t: Rt = Σa∈at rt,a. The total number of time steps, the horizon, is denoted by T. Let Ht denote the history (A1, S1, a1, r1, . . . , At−1, St−1, at−1, rt−1, At, St) of past observations and the currently available arms at time t. ': '1', 'We note that Eq. (6d) is equivalent to the discretization size used by Takeno et al. (2023) with an extra factor of K to account for the combinatorial setting whilst we introduce Eqs. (6a) to (6c) to bound Ut([a]D) − Ut(a). A key step to establish the regret bound of GP-UCB by Takeno et al. (2023) is to use the fact (for that setting) that at maximizes the upper confidence bound Ut(a) and thus Ut([at∗ ]Dt) −Ut(at) ≤0. Since we consider a setting with volatile arms, [at∗ ]Dt is not necessarily a feasible super arm and our technical contribution in the infinite setting is an analysis of the discretization error of Ut([a]Dt) −Ut(at). ': '2'}
{'1': 'combinatorial setting, the agent must select a feasible subset of base arms, a super arm, at ∈ At where At ⊆ 2^At is the set of feasible and available super arms. To facilitate a feasible combinatorial problem, the number of feasible super arms is finite in each round and the super arms have a maximum size K (|a| ≤ K ∀a ∈ At). The agent observes the rewards of the selected base arms (semi-bandit feedback) rt = {rt,a | a ∈ at} where the base arm reward rt,a = f(a) + ϵt,a is a sum of the expected reward and i.i.d. Gaussian noise with zero mean and variance ς2. 
Motivated by the online energy-efficient navigation problem in Section 4.1, the total reward is assumed to be additive, and the agent also observes this reward at time t: Rt = a∈at rt,a. The total number of time steps, the horizon, is denoted by T. Let Ht denote the history (A1, S1, a1, r1, . . . , At−1, St−1, at−1, rt−1, At, St) of past observations and the currently available arms at time t. ', '2': 'We note that Eq. (6d) is equivalent to the discretization size used by Takeno et al. (2023) with an extra factor of K to account for the combinatorial setting whilst we introduce Eqs. (6a) to (6c) to bound Ut([a]D) Ut(a). A key step to establish the regret bound of GP-UCB by Takeno et al. (2023) is to use the fact (for that setting) that at maximizes the upper confidence bound Ut(a) and thus Ut([at∗ ]Dt) −Ut(at) ≤0. Since we consider a setting with volatile arms, [at∗ ]Dt is not necessarily a feasible super arm and our technical contribution in the infinite setting is an analysis of the discretization error of Ut([a]Dt) −Ut(a). '} | {'images/e9ed5f01715897715492beebc68da34a7dd30ea24aa3c08460de59e81e663c6e.jpg': '3', 'images/7f7aa551b3666fd3d44bc94a7ef5d91c96d8fca31e7f484662cd329e75996162.jpg': '2', 'images/1953c4bf91a41d2d19ac10e135812d8da80bdb9d75b4df009126b05453507652.jpg': '1'} | {'3': 'images/e9ed5f01715897715492beebc68da34a7dd30ea24aa3c08460de59e81e663c6e.jpg', '2': 'images/7f7aa551b3666fd3d44bc94a7ef5d91c96d8fca31e7f484662cd329e75996162.jpg', '1': 'images/1953c4bf91a41d2d19ac10e135812d8da80bdb9d75b4df009126b05453507652.jpg'} | {} | {} | {} | ['We note that Eq. (6d) is equivalent to the discretization size used by Takeno et al. (2023) with an extra factor of K to account for the combinatorial setting whilst we introduce Eqs. (6a) to (6c) to bound Ut([a]D) Ut(a). A key step to establish the regret bound of GP-UCB by Takeno et al. (2023) is to use the fact (for that setting) that at maximizes the upper confidence bound Ut(a) and thus Ut([at∗ ]Dt) −Ut(at) ≤0. 
Since we consider a setting with volatile arms, [at∗ ]Dt is not necessarily a feasible super arm and our technical contribution in the infinite setting is an analysis of the discretization error of Ut([a]Dt) −Ut(at). ', 'combinatorial setting, the agent must select a feasible subset of base arms, a super arm, at ∈ At where At ⊆ 2^At is the set of feasible and available super arms. To facilitate a feasible combinatorial problem, the number of feasible super arms is finite in each round and the super arms have a maximum size K (|a| ≤ K ∀a ∈ At). The agent observes the rewards of the selected base arms (semi-bandit feedback) rt = {rt,a | a ∈ at} where the base arm reward rt,a = f(a) + ϵt,a is a sum of the expected reward and i.i.d. Gaussian noise with zero mean and variance ς2. Motivated by the online energy-efficient navigation problem in Section 4.1, the total reward is assumed to be additive, and the agent also observes this reward at time t: Rt = Σa∈at rt,a. The total number of time steps, the horizon, is denoted by T. Let Ht denote the history (A1, S1, a1, r1, . . . , At−1, St−1, at−1, rt−1, At, St) of past observations and the currently available arms at time t. ', 'images/7f7aa551b3666fd3d44bc94a7ef5d91c96d8fca31e7f484662cd329e75996162.jpg'] | 6d84a3651a48fa5aba1e63071dfd1923a1276e0ed00ac16fd16f8cffcbd2b333 | 7318fbc37a5ed68bb8e50ccba13525e2f9b272a5
explanation | What is the reason for quadrupling the channel number in the model? | The reason why we adopted the number four in the channel expansion during downsampling is as follows: Firstly, the channel quadrupling of QB-Net does not decrease the computational complexity for spatialwise convolution with stride=2 in the downsampling blocks, as shown in Figure 1 (a) and Figure 4 (a). It does not decrease the OPs of spatialwise convolutions in deep layers. We hypothesize that the information loss during downsampling is critical in binarized convolutional neural networks (BCNNs), so we expect that the OPs maintained by quadrupling the number of channels during downsampling can help provide more representation capacity. Secondly, the structural benefits for implementing BCNNs were considered when quadrupling the number of channels. In the implementation of binarized operations, the number of bits in a word (64 bits) can be considered. In the optimization guide of Larq on real hardware, it was known that the number of channels should be expanded by a factor of 8. When quadrupling the number of channels during downsampling, the optimization guideline can be met. | ['Figure 1', 'Figure 4'] | ['images/9778eb2018549ef1931a9bfa64b7d44410cdf12eca63453a02dbc23fe31d2fc4.jpg', 'images/006d3df0cedd8cca8e9b196619b195e8e90b2ee45fda46a25b87ba7eadd19fd8.jpg'] | ['figure'] | 2 | 3 | 5 | {'Table 3 summarizes the comparison in terms of parameters and latency using Larq Compute Engine (LCE) (Bannink et al., 2021) on a single thread of RPi 4B and a Samsung Exynos-9820 processor. We note that the supported layers in LCE were limited, so only several mobile-friendly CNNs and BCNNs based on ResNet18 and MobileNetV1 were compared in Table 3. All proposed models were faster than FP32 models in Table 3. QB-Net showed good efficiency in terms of Top-1 accuracy and latency. 
For example, QB-Net-Large can achieve 69.8% Top-1 accuracy on ImageNet-1K, having 0.53 × 10^8 OPs and 65.5 ms latency on the RPi 4B. Although QSB-Net-Large can enhance Top-1 accuracy by 0.8%, its latency increased by 20.7 ms. Because QSB-Net-Large(SE1) only adopted SE blocks during downsampling, the increasing latency was small. Compared with ReActNetA (Liu et al., 2020), the proposed Large models provided better performances, having faster inference speed. QuickNet-Small showed fast inference speed because QuickNet models were optimized considering the mechanism of LCE (Bannink et al., 2021). However, its performance was only 59.4% Top-1 accuracy on ImageNet-1K. ': '1'}
{'1': 'Table 3 summarizes the comparison in terms of parameters and latency using Larq Compute Engine (LCE) (Bannink et al., 2021) on a single thread of RPi 4B and a Samsung Exynos-9820 processor. We note that the supported layers in LCE were limited, so only several mobile-friendly CNNs and BCNNs based on ResNet18 and MobileNetV1 were compared in Table 3. All proposed models were faster than FP32 models in Table 3. QB-Net showed good efficiency in terms of Top-1 accuracy and latency. For example, QB-Net-Large can achieve 69.8% Top-1 accuracy on ImageNet-1K, having 0.53 × 10^8 OPs and 65.5 ms latency on the RPi 4B. Although QSB-Net-Large can enhance Top-1 accuracy by 0.8%, its latency increased by 20.7 ms. Because QSB-Net-Large(SE1) only adopted SE blocks during downsampling, the increasing latency was small. Compared with ReActNetA (Liu et al., 2020), the proposed Large models provided better performances, having faster inference speed. QuickNet-Small showed fast inference speed because QuickNet models were optimized considering the mechanism of LCE (Bannink et al., 2021). 
'} | {'images/9778eb2018549ef1931a9bfa64b7d44410cdf12eca63453a02dbc23fe31d2fc4.jpg': '1', 'images/006d3df0cedd8cca8e9b196619b195e8e90b2ee45fda46a25b87ba7eadd19fd8.jpg': '4', 'images/5622eca805d98e40cae460515aff01e792e70666f12f3e4e8cb00beb9fa34218.jpg': '3'} | {'1': 'images/9778eb2018549ef1931a9bfa64b7d44410cdf12eca63453a02dbc23fe31d2fc4.jpg', '4': 'images/006d3df0cedd8cca8e9b196619b195e8e90b2ee45fda46a25b87ba7eadd19fd8.jpg', '3': 'images/5622eca805d98e40cae460515aff01e792e70666f12f3e4e8cb00beb9fa34218.jpg'} | {'images/1d0dab952829356613e53edaf47d9565fef250e90ff9fe626693361f53a1a901.jpg': '3'} | {'3': 'images/1d0dab952829356613e53edaf47d9565fef250e90ff9fe626693361f53a1a901.jpg'} | {} | ['images/5622eca805d98e40cae460515aff01e792e70666f12f3e4e8cb00beb9fa34218.jpg', 'images/1d0dab952829356613e53edaf47d9565fef250e90ff9fe626693361f53a1a901.jpg', 'Table 3 summarizes the comparison in terms of parameters and latency using Larq Compute Engine (LCE) (Bannink et al., 2021) on a single thread of RPi 4B and a Samsung Exynos-9820 processor. We note that the supported layers in LCE were limited, so only several mobile-friendly CNNs and BCNNs based on ResNet18 and MobileNetV1 were compared in Table 3. All proposed models were faster than FP32 models in Table 3. QB-Net showed good efficiency in terms of Top-1 accuracy and latency. For example, QB-Net-Large can achieve 69.8% Top-1 accuracy on ImageNet-1K, having 0.53 × 108 OPs and 65.5 ms latency on the RPi 4B. Although QSB-Net-Large can enhance Top-1 accuracy by 0.8%, its latency increased by 20.7 ms. Because QSB-Net-Large(SE1) only adopted SE blocks during downsampling, the increasing latency was small. Compared with ReActNetA (Liu et al., 2020), the proposed Large models provided better performances, having faster inference speed. QuickNet-Small showed fast inference speed because QuickNet models were optimized considering the mechanism of LCE (Bannink et al., 2021). 
However, its performance was only 59.4% Top-1 accuracy on ImageNet-1K. '] | bac6faf2cc41635baf1f8d9390bc0ab7adafd49419d032d7b50513e1ac96d840 | 73620889d0c3cf6c5b960b5839ddc4eb6628b79c |
explanation | How does the choice of bin quantization (number and varying size) affect the performance of the algorithm? Why not adaptively optimize them as well? | In terms of the number of bins $m$: First, we have already demonstrated the impact of $m$ on AHL-Gaussian in Figure 8. Figure 8 shows that AHL-Gaussian generally performs robustly regardless of $m$, although there is a slight performance drop on a few tasks when $m$ is set too low. Furthermore, adaptively optimizing $m$ during training is not practical, as the output of the Q-network is $m$-dimensional. If $m$ changes dynamically, the structure of the neural network would also need to change dynamically, which would undoubtedly pose significant challenges for both training and inference. In terms of varying size: We are not entirely sure what the reviewer means by 'varying size'. We speculate that you might be referring to the bin width or the support interval range. In fact, when the number of bins $m$ is given, the bin width and the interval range are equivalent. We have already verified in Figure 1 that the performance of HL-Gaussian varies significantly under different interval ranges. Based on this observation, in this paper we proposed the dynamic optimization of the interval range (i.e., bin width) to address the differing requirements of interval ranges across environments and the dynamically evolving process of the value function. | ['Figure 8', 'Figure 1'] | ['images/70a9d08e0f80b7611b0f8c67a192c2ca45352de36f7d0704d1c54bd7e2b815fb.jpg', 'images/be584c8242fa38d1c855b93c0d77290fa26e7e264a3104bf84c16a34c9b9467e.jpg'] | ['figure'] | 2 | 3 | 5 | {'Given that the projection error Evmin,vmax,m,σ is insignificant for every (st, at) pair within D, Proposition 3.1 posits that utilizing HL-Gaussian to minimize LCE is an effective strategy for optimizing the traditional TD error LMSE. 
Furthermore, as highlighted by Ehsan Imani (2024), the CE loss holds a theoretical edge over the MSE loss in the optimization process, facilitating a more efficient path to the optimal solution with a reduced number of gradient steps—a concept supported by a wealth of empirical data (Farebrother et al., 2024; Ehsan Imani, 2024). Consequently, the adoption of CE as the optimization objective is well-founded in both theoretical understanding and practical results, which justifies our focus on CE in this study. ': '1', 'Interval update frequency. To determine how the frequency of interval updates affects performance, we conducted a series of experiments with varying ratios of interval update frequency to value function update frequency, as illustrated in Figure 10. The results indicate that AHL-Gaussian is quite resilient to changes in these ratios. This resilience is a practical advantage, as it allows AHL-Gaussian to maintain its performance while conserving computational resources. ': '2'} | {'1': 'Given that the projection error Evmin,vmax,m,σ is insignificant for every (st, at) pair within D, Proposition 3.1 posits that utilizing HL-Gaussian to minimize LCE is an effective strategy for optimizing the traditional TD error LMSE. Furthermore, as highlighted by Ehsan Imani (2024), the CE loss holds a theoretical edge over the MSE loss in the optimization process, facilitating a more efficient path to the optimal solution with a reduced number of gradient steps—a concept supported by a wealth of empirical data (Farebrother et al., 2024; Ehsan Imani, 2024). Consequently, the adoption of CE as the optimization objective is well-founded in both theoretical understanding and practical results, which justifies our focus on CE in this study. ', '2': 'Interval update frequency. 
To determine how the frequency of interval updates affects performance, we conducted a series of experiments with varying ratios of interval update frequency to value function update frequency, as illustrated in Figure 10. The results indicate that AHL-Gaussian is quite resilient to changes in these ratios. This resilience is a practical advantage, as it allows AHL-Gaussian to maintain its performance while conserving computational resources. '} | {'images/be584c8242fa38d1c855b93c0d77290fa26e7e264a3104bf84c16a34c9b9467e.jpg': '1', 'images/2c5705495c9bad92ad0f420abd493195a4afc691ffbb0de4c138d4d07d3c8c98.jpg': '3', 'images/70a9d08e0f80b7611b0f8c67a192c2ca45352de36f7d0704d1c54bd7e2b815fb.jpg': '8'} | {'1': 'images/be584c8242fa38d1c855b93c0d77290fa26e7e264a3104bf84c16a34c9b9467e.jpg', '3': 'images/2c5705495c9bad92ad0f420abd493195a4afc691ffbb0de4c138d4d07d3c8c98.jpg', '8': 'images/70a9d08e0f80b7611b0f8c67a192c2ca45352de36f7d0704d1c54bd7e2b815fb.jpg'} | {} | {} | {} | ['Interval update frequency. To determine how the frequency of interval updates affects performance, we conducted a series of experiments with varying ratios of interval update frequency to value function update frequency, as illustrated in Figure 10. The results indicate that AHL-Gaussian is quite resilient to changes in these ratios. This resilience is a practical advantage, as it allows AHL-Gaussian to maintain its performance while conserving computational resources. ', 'images/2c5705495c9bad92ad0f420abd493195a4afc691ffbb0de4c138d4d07d3c8c98.jpg', 'Given that the projection error Evmin,vmax,m,σ is insignificant for every (st, at) pair within D, Proposition 3.1 posits that utilizing HL-Gaussian to minimize LCE is an effective strategy for optimizing the traditional TD error LMSE. 
Furthermore, as highlighted by Ehsan Imani (2024), the CE loss holds a theoretical edge over the MSE loss in the optimization process, facilitating a more efficient path to the optimal solution with a reduced number of gradient steps—a concept supported by a wealth of empirical data (Farebrother et al., 2024; Ehsan Imani, 2024). Consequently, the adoption of CE as the optimization objective is well-founded in both theoretical understanding and practical results, which justifies our focus on CE in this study. '] | 536a3ce5c1920640ea5a4b29486e25e8334cfeb8171feb73c0e85baa08480927 | 74037b439e7d955826e41d227b7758ade07be927 |
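The HL-Gaussian projection discussed in the exchange above — a scalar target smoothed by a Gaussian over m histogram bins on [vmin, vmax], then trained with a cross-entropy loss — can be sketched as follows. The bin layout, σ, and all names below are illustrative assumptions, not the paper's implementation.

```python
import math

def hl_gaussian_target(y, vmin, vmax, m, sigma):
    """Project a scalar target y onto m equal-width bins on [vmin, vmax]
    by integrating a Gaussian centered at y over each bin."""
    edges = [vmin + i * (vmax - vmin) / m for i in range(m + 1)]
    # standard normal CDF evaluated at the bin edge, centered at y
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - y) / (sigma * math.sqrt(2.0))))
    mass = [cdf(edges[i + 1]) - cdf(edges[i]) for i in range(m)]
    z = sum(mass)  # renormalize away the mass truncated outside [vmin, vmax]
    return [p / z for p in mass]

def cross_entropy(target, pred):
    """CE loss between the projected target and predicted bin probabilities."""
    eps = 1e-12
    return -sum(t * math.log(p + eps) for t, p in zip(target, pred))

target = hl_gaussian_target(y=0.3, vmin=-1.0, vmax=1.0, m=51, sigma=0.1)
```

Shrinking m or the support interval [vmin, vmax] coarsens this projection, which is consistent with the reported sensitivity to the interval range.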
explanation | What improvements does the proposed method show over traditional Random Forests? | Note that Table 1 in the original manuscript shows that RLF with data-driven $\tilde{p}$ outperforms RF in 20 datasets, where eight of them are significant. Table 2 shows that RLF with tuned $\tilde{p}$ outperforms RF in 23 datasets, where twelve of them are statistically significant. Those results indeed demonstrate the significant superiority of RLF over RF, at the price of time efficiency. However, RF only statistically outperforms RLF in two datasets. | ['Table 1', 'Table 2'] | ['images/c323b86d0a02d61f764ebfd964df8d46844f84e2c075e878ec31ffa9bb1d98c3.jpg', 'images/c77bef4d9cc050f2c38234ee7df6f94d293e6ddac38d50ef523056a4a0c269c0.jpg'] | ['table'] | 2 | 3 | 5 | {'AL = {xi ∈ A : xij < z, i = 1, 2, ..., n}, AR = {xi ∈ A : xij ≥ z, i = 1, 2, ..., n} and Y¯A, Y¯AL, Y¯AR are the averages of responses Yi with the corresponding features are in sets A, AL and AR, respectively. ': '1', 'where Xj is the j-th dimension of an observation X. The response Y ∈R and random sample X will be i.i.d and uniformly distributed on the 100-dimension unit cube [0, 1]100. In this case, the effective dimension is 35. And σ is set to be 1.3 so that the signal-to-noise ratio is approximately 2. Fig. 2 summarizes Test MSEs for RLF and RF as functions of different hyperparameters. As we can see, given data-driven p˜, RLF outperforms RF in almost all experiment settings. See detailed experiment settings and analysis in section A.2 of appendix. ': '2', 'Peng et al. (2019) provided sharp Berry-Esseen bounds for of RF under the Bernoulli sampling (Chen & Kato, 2019). The main idea follows from the Stein’s method (Chen et al., 2010) and Hoeffding decomposition (Vaart, 2000). 
Getting inspired by the results in (Peng et al., 2019) and (Mentch & Hooker, 2016), we derive improved Berry-Esseen bounds of RLF for small-N settings (i.e, relatively small number of trees in RLF) where lim Nn = α and α > 0 or ∞in Theorem 3.2. ': '3'} | {'1': 'AL = {xi ∈ A : xij < z, i = 1, 2, ..., n}, AR = {xi ∈ A : xij ≥ z, i = 1, 2, ..., n} and Y¯A, Y¯AL, Y¯AR are the averages of responses Yi with the corresponding features are in sets A, AL and AR, respectively. ', '2': 'where Xj is the j-th dimension of an observation X. The response Y ∈R and random sample X will be i.i.d and uniformly distributed on the 100-dimension unit cube [0, 1]100. In this case, the effective dimension is 35. And σ is set to be 1.3 so that the signal-to-noise ratio is approximately 2. Fig. 2 summarizes Test MSEs for RLF and RF as functions of different hyperparameters. As we can see, given data-driven p˜, RLF outperforms RF in almost all experiment settings. See detailed experiment settings and analysis in section A.2 of appendix. ', '3': 'Peng et al. (2019) provided sharp Berry-Esseen bounds for of RF under the Bernoulli sampling (Chen & Kato, 2019). The main idea follows from the Stein’s method (Chen et al., 2010) and Hoeffding decomposition (Vaart, 2000). Getting inspired by the results in (Peng et al., 2019) and (Mentch & Hooker, 2016), we derive improved Berry-Esseen bounds of RLF for small-N settings (i.e, relatively small number of trees in RLF) where lim Nn = α and α > 0 or ∞in Theorem 3.2. '} | {} | {} | {'images/c77bef4d9cc050f2c38234ee7df6f94d293e6ddac38d50ef523056a4a0c269c0.jpg': '2', 'images/c323b86d0a02d61f764ebfd964df8d46844f84e2c075e878ec31ffa9bb1d98c3.jpg': '1'} | {'2': 'images/c77bef4d9cc050f2c38234ee7df6f94d293e6ddac38d50ef523056a4a0c269c0.jpg', '1': 'images/c323b86d0a02d61f764ebfd964df8d46844f84e2c075e878ec31ffa9bb1d98c3.jpg'} | {} | ['where Xj is the j-th dimension of an observation X. 
The response Y ∈R and random sample X will be i.i.d and uniformly distributed on the 100-dimension unit cube [0, 1]100. In this case, the effective dimension is 35. And σ is set to be 1.3 so that the signal-to-noise ratio is approximately 2. Fig. 2 summarizes Test MSEs for RLF and RF as functions of different hyperparameters. As we can see, given data-driven p˜, RLF outperforms RF in almost all experiment settings. See detailed experiment settings and analysis in section A.2 of appendix. ', 'AL = {xi ∈ A : xij < z, i = 1, 2, ..., n}, AR = {xi ∈ A : xij ≥ z, i = 1, 2, ..., n} and Y¯A, Y¯AL, Y¯AR are the averages of responses Yi with the corresponding features are in sets A, AL and AR, respectively. ', 'Peng et al. (2019) provided sharp Berry-Esseen bounds for of RF under the Bernoulli sampling (Chen & Kato, 2019). The main idea follows from the Stein’s method (Chen et al., 2010) and Hoeffding decomposition (Vaart, 2000). Getting inspired by the results in (Peng et al., 2019) and (Mentch & Hooker, 2016), we derive improved Berry-Esseen bounds of RLF for small-N settings (i.e, relatively small number of trees in RLF) where lim Nn = α and α > 0 or ∞in Theorem 3.2. '] | 0d4ed818c1296c3d0bceda28b3e389f84d888c23df5f2ee530e693e38f4af97e | 7adf1c754de64f5740f8f106cca3661d5c14c5de |
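The split sets A_L = {x : x_ij < z} and A_R = {x : x_ij ≥ z} quoted in the evidence above are the standard CART-style regression split. A minimal one-feature sketch of choosing z by sum-of-squared-error reduction (the data and names are illustrative assumptions):

```python
def best_split(xs, ys):
    """Greedy search for a threshold z maximizing the reduction in sum of
    squared errors, with A_L = {x < z} and A_R = {x >= z}."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best_z, best_gain, parent = None, 0.0, sse(ys)
    for z in sorted(set(xs))[1:]:  # candidate thresholds at observed values
        left = [y for x, y in zip(xs, ys) if x < z]
        right = [y for x, y in zip(xs, ys) if x >= z]
        gain = parent - sse(left) - sse(right)
        if gain > best_gain:
            best_z, best_gain = z, gain
    return best_z, best_gain

xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0.0, 0.1, 0.0, 1.0, 1.1, 1.0]
z, gain = best_split(xs, ys)
```

A random forest repeats this search over bootstrapped samples and random feature subsets; RLF-style variants modify how candidate features enter the search.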
explanation | What is the significance of reducing memory requirements during the pre-filling phase? | Thank you for your comments! We agree with your insightful opinion. Our method can save both memory consumption and running time in both pre-filling phase and decoding phase, i.e., we save all four complexities, as shown in Figure 3 and Figure 6. Thus, we believe that this is a strength of our method rather than a weakness. | ['Figure 3', 'Figure 6'] | ['images/3768b74c1e297bcbf039b8d660fa2bdf286cb74d3514651e7aa1bc58e2c61a34.jpg', 'images/d8d0a8cde24746532ac13bfcae100ef20e538dbf94f352d7a5dc16e8444bb44e.jpg'] | ['figure'] | 2 | 3 | 5 | {'The results of our analysis on time complexity and GPU memory consumption are presented in Theorem 3.3 below, with the proof deferred to Appendix C. ': '1'} | {'1': 'The results of our analysis on time complexity and GPU memory consumption are presented in Theorem 3.3 below, with the proof deferred to Appendix C. '} | {'images/d8d0a8cde24746532ac13bfcae100ef20e538dbf94f352d7a5dc16e8444bb44e.jpg': '6', 'images/6d2d33a00699545d067faeec6e50a423edc0a92b525f8867e9574ccb401b153a.jpg': '1', 'images/8e4cbdcab337f77209f153bc7b40776e9de7b3c37a007d7ce7eb14cd1f044d9f.jpg': '2', 'images/3768b74c1e297bcbf039b8d660fa2bdf286cb74d3514651e7aa1bc58e2c61a34.jpg': '3'} | {'6': 'images/d8d0a8cde24746532ac13bfcae100ef20e538dbf94f352d7a5dc16e8444bb44e.jpg', '1': 'images/6d2d33a00699545d067faeec6e50a423edc0a92b525f8867e9574ccb401b153a.jpg', '2': 'images/8e4cbdcab337f77209f153bc7b40776e9de7b3c37a007d7ce7eb14cd1f044d9f.jpg', '3': 'images/3768b74c1e297bcbf039b8d660fa2bdf286cb74d3514651e7aa1bc58e2c61a34.jpg'} | {} | {} | {} | ['images/8e4cbdcab337f77209f153bc7b40776e9de7b3c37a007d7ce7eb14cd1f044d9f.jpg', 'images/6d2d33a00699545d067faeec6e50a423edc0a92b525f8867e9574ccb401b153a.jpg', 'The results of our analysis on time complexity and GPU memory consumption are presented in Theorem 3.3 below, with the proof deferred to Appendix C. 
'] | e0baf8dd1ad91d90468a09d7b391e9ec0d50583c9acfb2a10d673968966ad3ea | 7d15bc27917bbd69aad3055b45b93133ecdaed3d |
explanation | What comparisons are made with other algorithms in the results? | Figure 1 is an example of our generation; please check out Figure 3 for the comparison with other algorithms. | ['Figure 1', 'Figure 3'] | ['images/74c8e04559ffbb0bb35e2b9ee3bb09085f8b404d81a659e78e194784b25e95f6.jpg', 'images/63823079ae994a4b67523a84be726d6e8122f9c1a06f587cbcc7576209bee365.jpg'] | ['figure'] | 2 | 3 | 5 | {'Implementations We build Diff-contrast upon the codebase of Diffusion-DPO (Wallace et al., 2024). We leverage the CLIP encoder (Radford et al., 2021) to compute multi-modal embeddings. For human preference alignment, we performed one-stage fine-tuning on both models for SFT and Diff-contrast. We directly use the checkpoints provided by Diffusion-DPO1. For style alignment, we also conduct one-stage fine-tuning. Additionally, following Rafailov et al. (2023), we perform two-stage fine-tuning which is expected to be more effective due to the domain gap between the pretraining of T2I models and the style alignment task. 
The details on learning rate and optimization steps are listed in Table 4. ', '2': 'Maginal Sampling The forward process admits sampling yt at an arbitrary timestep t in closed form. Let αt := 1 −βt and α¯t := ts=1 αs. With the re-parametrization trick, we have '} | {'images/63823079ae994a4b67523a84be726d6e8122f9c1a06f587cbcc7576209bee365.jpg': '3', 'images/74c8e04559ffbb0bb35e2b9ee3bb09085f8b404d81a659e78e194784b25e95f6.jpg': '1'} | {'3': 'images/63823079ae994a4b67523a84be726d6e8122f9c1a06f587cbcc7576209bee365.jpg', '1': 'images/74c8e04559ffbb0bb35e2b9ee3bb09085f8b404d81a659e78e194784b25e95f6.jpg'} | {'images/28e0a252ea579463eb43577fc660ce8167452d585364db9f0d7eb2b8c05ce298.jpg': '2'} | {'2': 'images/28e0a252ea579463eb43577fc660ce8167452d585364db9f0d7eb2b8c05ce298.jpg'} | {} | ['Implementations We build Diff-contrast upon the codebase of Diffusion-DPO (Wallace et al., 2024). We leverage the CLIP encoder (Radford et al., 2021) to compute multi-modal embeddings. For human preference alignment, we performed one-stage fine-tuning on both models for SFT and Diff-contrast. We directly use the checkpoints provided by Diffusion-DPO1. For style alignment, we also conduct one-stage fine-tuning. Additionally, following Rafailov et al. (2023), we perform two-stage fine-tuning which is expected to be more effective due to the domain gap between the pretraining of T2I models and the style alignment task. The details on learning rate and optimization steps are listed in Table 4. ', 'Maginal Sampling The forward process admits sampling yt at an arbitrary timestep t in closed form. Let αt := 1 −βt and α¯t := ts=1 αs. With the re-parametrization trick, we have ', 'images/28e0a252ea579463eb43577fc660ce8167452d585364db9f0d7eb2b8c05ce298.jpg'] | 446b5eb9410df27ae8a7f27d6e96c890243fd9038fdc74972b7bb517eb6f63a9 | 7d215f32b9c547d147e070fe22641e1bd18ff940 |
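The marginal-sampling passage quoted in this row states the closed form for sampling y_t: with α_t := 1 − β_t and ᾱ_t the running product of the α_s, the re-parametrization trick gives y_t = √ᾱ_t · y_0 + √(1 − ᾱ_t) · ε with ε ~ N(0, 1). A minimal numeric sketch (the β schedule and all names are assumptions):

```python
import math
import random

def marginal_sample(y0, t, betas, rng):
    """Closed-form forward sample: y_t = sqrt(abar_t) * y0 + sqrt(1 - abar_t) * eps,
    with alpha_s = 1 - beta_s and abar_t the product of alpha_1..alpha_t."""
    abar = 1.0
    for s in range(t):
        abar *= 1.0 - betas[s]
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(abar) * y0 + math.sqrt(1.0 - abar) * eps, abar

rng = random.Random(0)
betas = [0.02] * 100  # assumed constant noise schedule
yt, abar = marginal_sample(1.0, 50, betas, rng)
```

This lets training draw y_t for any timestep directly from y_0, without simulating the chain step by step.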
explanation | What explains the nonlinear change in FPR95 values with the number of shots? | The nonlinear characteristics of few-shot learning have been widely observed in the field of OOD detection. For example, a similar trend is observed in ID-like (1-shot and 4-shot in Places shown in the table below) and LoCoOp (4-shot and 16-shot in SUN and Places shown in Table 1). The reason for the phenomenon could be: 1) the effectiveness of few-shot learning, which learns knowledge of downstream tasks effectively even with very few samples; 2) moreover, for few-shot OOD detection, more samples may not necessarily bring better performance, and incorporating more samples may confuse the discrimination, which should be a characteristic of the two datasets. In Table 8, the reason for the nonlinear change can partly be attributed to the fact that our local-based model learns characteristics of OOD detection efficiently and gets the best discrimination performance between ID and OOD in FPR95, and more samples just confuse the process. | ['Table 8', 'Table 1'] | ['images/965bab3560a0d9af27b7fb98e1bb25c4a669f29f3c1b04e1450716f3996ca693.jpg', 'images/53e9e501044728380d8e9d7999f415e192d3967731cbfabeaa8982c043076fbc.jpg'] | ['table'] | 2 | 3 | 5 | {'ImageNet-1k as ID dataset. We report 4 and 16-shot results on four common OOD datasets using ImageNet-1k as ID dataset. It can be seen in Tab. 1 that tuning local prompts only using hand-crafted global prompts (Ours) can achieve competitive results, especially in datasets with challenging fine local textures like iNaturalist. Concretely, we get impressive progress in 4-shot tuning (outperforming by 3.80% on FPR95). In 16-shot setting, our approach gets competitive results as well. All the results above strongly showcase the potential of regional enhancement for OOD detection as an orthogonal direction to global prompt optimization methods. Experimental results of more shots are shown in Appendix C. 
': '1', 'Comparison with NegLabel. NegLabel Jiang et al. (2024) designs a novel scheme for the OOD score with negative labels. It leverages real outlier information with negative labels from extensive corpus databases. This kind of knowledge helps to a great extent pointed out by OE Hendrycks et al. (2018) and is inconsistent with real-world application, where negative categories are infinite. ': '2'} | {'1': 'ImageNet-1k as ID dataset. We report 4 and 16-shot results on four common OOD datasets using ImageNet-1k as ID dataset. It can be seen in Tab. 1 that tuning local prompts only using hand-crafted global prompts (Ours) can achieve competitive results, especially in datasets with challenging fine local textures like iNaturalist. Concretely, we get impressive progress in 4-shot tuning (outperforming by 3.80% on FPR95). In 16-shot setting, our approach gets competitive results as well. All the results above strongly showcase the potential of regional enhancement for OOD detection as an orthogonal direction to global prompt optimization methods. Experimental results of more shots are shown in Appendix C. ', '2': 'Comparison with NegLabel. NegLabel Jiang et al. (2024) designs a novel scheme for the OOD score with negative labels. It leverages real outlier information with negative labels from extensive corpus databases. This kind of knowledge helps to a great extent pointed out by OE Hendrycks et al. (2018) and is inconsistent with real-world application, where negative categories are infinite. 
'} | {'images/67940f9ea289f5adfafd8101d4d3ae4c62df2da2e49bc43235a1126d936f38be.jpg': '2'} | {'2': 'images/67940f9ea289f5adfafd8101d4d3ae4c62df2da2e49bc43235a1126d936f38be.jpg'} | {'images/53e9e501044728380d8e9d7999f415e192d3967731cbfabeaa8982c043076fbc.jpg': '1', 'images/965bab3560a0d9af27b7fb98e1bb25c4a669f29f3c1b04e1450716f3996ca693.jpg': '8'} | {'1': 'images/53e9e501044728380d8e9d7999f415e192d3967731cbfabeaa8982c043076fbc.jpg', '8': 'images/965bab3560a0d9af27b7fb98e1bb25c4a669f29f3c1b04e1450716f3996ca693.jpg'} | {} | ['images/67940f9ea289f5adfafd8101d4d3ae4c62df2da2e49bc43235a1126d936f38be.jpg', 'ImageNet-1k as ID dataset. We report 4 and 16-shot results on four common OOD datasets using ImageNet-1k as ID dataset. It can be seen in Tab. 1 that tuning local prompts only using hand-crafted global prompts (Ours) can achieve competitive results, especially in datasets with challenging fine local textures like iNaturalist. Concretely, we get impressive progress in 4-shot tuning (outperforming by 3.80% on FPR95). In 16-shot setting, our approach gets competitive results as well. All the results above strongly showcase the potential of regional enhancement for OOD detection as an orthogonal direction to global prompt optimization methods. Experimental results of more shots are shown in Appendix C. ', 'Comparison with NegLabel. NegLabel Jiang et al. (2024) designs a novel scheme for the OOD score with negative labels. It leverages real outlier information with negative labels from extensive corpus databases. This kind of knowledge helps to a great extent pointed out by OE Hendrycks et al. (2018) and is inconsistent with real-world application, where negative categories are infinite. '] | a949aa2f9a8cdb65d7ec966edbc62a6b2695aed8b9713288eebf82fea6f696c5 | 81f69a237d87b34e909480f5cf49102a3e67e529 |
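FPR95, the metric debated in this row, is the false positive rate on OOD samples at the score threshold that keeps 95% of ID samples. A small sketch under the assumption that higher scores mean more in-distribution (the threshold choice is simplified for tiny sample sizes, and the score lists are made up):

```python
def fpr_at_95_tpr(id_scores, ood_scores):
    """Fraction of OOD samples scored at or above the threshold that retains
    (at least) 95% of ID samples; higher score = more in-distribution."""
    s = sorted(id_scores)
    # index of the approximate 5th percentile of ID scores, clamped so that
    # very small ID sets keep all samples (TPR >= 95%)
    k = max(0, int(0.05 * len(s)) - 1)
    thr = s[k]
    return sum(1 for o in ood_scores if o >= thr) / len(ood_scores)

id_scores = [0.9, 0.8, 0.85, 0.95, 0.7, 0.75, 0.88, 0.92, 0.81, 0.79]
ood_scores = [0.2, 0.3, 0.72, 0.1, 0.5]
fpr95 = fpr_at_95_tpr(id_scores, ood_scores)
```

Because the threshold is pinned to the ID score distribution, FPR95 can move non-monotonically as few-shot tuning reshapes both ID and OOD score distributions at once.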
explanation | How well does GUARD perform across different guidelines? | The detailed performance of GUARD across these guidelines is provided in Table 1, while the corresponding guideline-violating questions are summarized in Table 2. | ['Table 1', 'Table 2'] | ['images/d5554136b8721cedd533710ba71fc6ebf55ddb31fa8c2b31141051871ee98e3f.jpg', 'images/be1a379bb08acf773225f3e4e05a2a7be010a9b1747dfc867cfed5bf432d872e.jpg'] | ['table'] | 2 | 3 | 5 | {'According to Table 3, GUARD-JD consistently outperforms baseline methods, achieving the highest jailbreak success rates and lowest perplexity scores across various models. Specifically, GUARD-JD achieves success rates of 86.0% on Vicuna-13B, 82.6% on LongChat-7B, 80.0% on Llama2-7B, 78.6% on GPT-3.5, and 77.2% on GPT-4, demonstrating its effectiveness in generating playing scenarios that test model adherence to guidelines. ': '1', 'We observed that many efforts focus on breaking the built-in safety mechanisms of LLMs using manually crafted jailbreak prompts. A notable example is Jailbreak Chat (the link is in Appendix L), which hosts an extensive collection of ChatGPT jailbreak prompts. While these prompts were effective at the time of their creation, their effectiveness is often short-lived since the model developers readily access them and patch the vulnerabilities they find. In light of this, we try to understand why these jailbreak prompts can be applied to break the built-in safety mechanism. Further, we assume the potential for their reuse by modifying parts of these prompts that have become ineffective. ': '2'} | {'1': 'According to Table 3, GUARD-JD consistently outperforms baseline methods, achieving the highest jailbreak success rates and lowest perplexity scores across various models. 
Specifically, GUARD-JD achieves success rates of 86.0% on Vicuna-13B, 82.6% on LongChat-7B, 80.0% on Llama2-7B, 78.6% on GPT-3.5, and 77.2% on GPT-4, demonstrating its effectiveness in generating playing scenarios that test model adherence to guidelines. ', '2': 'We observed that many efforts focus on breaking the built-in safety mechanisms of LLMs using manually crafted jailbreak prompts. A notable example is Jailbreak Chat (the link is in Appendix L), which hosts an extensive collection of ChatGPT jailbreak prompts. While these prompts were effective at the time of their creation, their effectiveness is often short-lived since the model developers readily access them and patch the vulnerabilities they find. In light of this, we try to understand why these jailbreak prompts can be applied to break the built-in safety mechanism. Further, we assume the potential for their reuse by modifying parts of these prompts that have become ineffective. '} | {'images/e111d9f1856cef0c136f79102c8ee53e77be5af95dcdf9693faa9a6ff191f1dd.jpg': '2'} | {'2': 'images/e111d9f1856cef0c136f79102c8ee53e77be5af95dcdf9693faa9a6ff191f1dd.jpg'} | {'images/be1a379bb08acf773225f3e4e05a2a7be010a9b1747dfc867cfed5bf432d872e.jpg': '2', 'images/d5554136b8721cedd533710ba71fc6ebf55ddb31fa8c2b31141051871ee98e3f.jpg': '1'} | {'2': 'images/be1a379bb08acf773225f3e4e05a2a7be010a9b1747dfc867cfed5bf432d872e.jpg', '1': 'images/d5554136b8721cedd533710ba71fc6ebf55ddb31fa8c2b31141051871ee98e3f.jpg'} | {} | ['According to Table 3, GUARD-JD consistently outperforms baseline methods, achieving the highest jailbreak success rates and lowest perplexity scores across various models. Specifically, GUARD-JD achieves success rates of 86.0% on Vicuna-13B, 82.6% on LongChat-7B, 80.0% on Llama2-7B, 78.6% on GPT-3.5, and 77.2% on GPT-4, demonstrating its effectiveness in generating playing scenarios that test model adherence to guidelines. 
', 'images/e111d9f1856cef0c136f79102c8ee53e77be5af95dcdf9693faa9a6ff191f1dd.jpg', 'We observed that many efforts focus on breaking the built-in safety mechanisms of LLMs using manually crafted jailbreak prompts. A notable example is Jailbreak Chat (the link is in Appendix L), which hosts an extensive collection of ChatGPT jailbreak prompts. While these prompts were effective at the time of their creation, their effectiveness is often short-lived since the model developers readily access them and patch the vulnerabilities they find. In light of this, we try to understand why these jailbreak prompts can be applied to break the built-in safety mechanism. Further, we assume the potential for their reuse by modifying parts of these prompts that have become ineffective. '] | 5514dd66726f2263c48ed9effb820a654f0a760dc710d35538d69f6ff852868e | 848e0e43537ae3b51b28a358739853caee521896 |
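The GUARD row above reports jailbreak success rates alongside perplexity. As a reminder of the latter, perplexity is the exponential of the average negative log-likelihood per token; a generic sketch (not GUARD's evaluation code):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood over tokens."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# a sequence whose every token had probability 0.25 has perplexity 4
ppl = perplexity([math.log(0.25)] * 8)
```

Lower perplexity indicates that the generated jailbreak prompts read as more fluent, natural text.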
explanation | Discuss (perhaps using numerical evidence) that indeed the solutions computed are solutions to the PDE. Also discuss (again potentially with numerical evidence) that the solutions computed are local solutions and not global solutions. | We show what happens if we evaluated our trained local operator at nearby parameters in Figure 1 and Figure 3. While our local operator is trained only at \( \Theta_0 \), evaluating our local operator at a nearby parameter \( \Theta_0 + \delta \) gives a good approximation to the PDE solution at \( \Theta_0 + \delta \). | ['Figure 1', 'Figure 3'] | ['images/798272f1b7389f2292e3d7e356ecbda4622eb2cf66ded40d63faad92582f9494.jpg', 'images/9f6c2d896e6a68e9d277571481898248693af514a3bcc50b680d33f74b924f09.jpg'] | ['figure'] | 2 | 3 | 5 | {'In the upper level problem, we find the optimal PDE parameters Θ by minimizing the data loss with respect to Θ. In the lower level problem, we train a network to approximate the local operator u(x, Θ; W) by minimizing the local operator loss with respect to the weights of the neural network. ': '1'} | {'1': 'In the upper level problem, we find the optimal PDE parameters Θ by minimizing the data loss with respect to Θ. In the lower level problem, we train a network to approximate the local operator u(x, Θ; W) by minimizing the local operator loss with respect to the weights of the neural network. 
'} | {'images/7b571d324b43a051a3bba03c6ced21de84cfe593788d30f4201bf497d7fd223f.jpg': '2', 'images/798272f1b7389f2292e3d7e356ecbda4622eb2cf66ded40d63faad92582f9494.jpg': '1', 'images/f1bd0114926fbca51f69f505409c82686ac010f9bce700782a922a33441fad39.jpg': '4', 'images/9f6c2d896e6a68e9d277571481898248693af514a3bcc50b680d33f74b924f09.jpg': '3'} | {'2': 'images/7b571d324b43a051a3bba03c6ced21de84cfe593788d30f4201bf497d7fd223f.jpg', '1': 'images/798272f1b7389f2292e3d7e356ecbda4622eb2cf66ded40d63faad92582f9494.jpg', '4': 'images/f1bd0114926fbca51f69f505409c82686ac010f9bce700782a922a33441fad39.jpg', '3': 'images/9f6c2d896e6a68e9d277571481898248693af514a3bcc50b680d33f74b924f09.jpg'} | {} | {} | {} | ['images/f1bd0114926fbca51f69f505409c82686ac010f9bce700782a922a33441fad39.jpg', 'images/7b571d324b43a051a3bba03c6ced21de84cfe593788d30f4201bf497d7fd223f.jpg', 'In the upper level problem, we find the optimal PDE parameters Θ by minimizing the data loss with respect to Θ. In the lower level problem, we train a network to approximate the local operator u(x, Θ; W) by minimizing the local operator loss with respect to the weights of the neural network. '] | bc599a02d9a04b14c675bea275edbda3b317dc51adad863037187e81c0cdc81a | 84944c855c527ad9f2c47023b8e1f555bd4921cd |
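The upper/lower-level structure quoted in this row — the outer problem tunes PDE parameters Θ against the data loss while the inner problem trains network weights W against the local operator loss — can be imitated with a toy alternating-gradient sketch. The linear model u(x; θ, w) = w·θ·x, the inner target, and every constant below are assumptions for illustration only:

```python
def bilevel_fit(data_x, data_y, theta0, steps=200, lr=0.1):
    """Toy alternating scheme: lower level fits w (here pulled toward 1.0 as a
    stand-in operator loss); upper level nudges theta on the data loss
    sum_i (w * theta * x_i - y_i)^2."""
    theta, w = theta0, 1.0
    for _ in range(steps):
        # lower level: one gradient step on the "operator loss" (w - 1)^2
        w -= lr * 2.0 * (w - 1.0)
        # upper level: one gradient step on the averaged data loss w.r.t. theta
        g = sum(2.0 * (w * theta * x - y) * w * x for x, y in zip(data_x, data_y))
        theta -= lr * g / len(data_x)
    return theta, w

# data generated by y = 2x, so the outer problem should recover theta ~= 2
theta, w = bilevel_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], theta0=0.0)
```

The real method replaces both toy losses with a PDE residual (inner) and an observation misfit (outer), but the alternating update pattern is the same.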
explanation | What are the implications of distributional variations on the conclusions drawn from vector similarity comparisons? | Firstly, there exists a phenomenon in Figure 2 of the main text: for each aligned LLM, the average cosine similarity curves of N-N pairs and N-M pairs are nearly identical in the initial layers, with a noticeable gap only emerging from a certain intermediate layer. If the difference in processing N-N pairs and N-M pairs by aligned LLMs fundamentally stems from differences in the vector distributions of these question categories, we would expect to see a more pronounced divergence in the curves starting from the earliest layers. However, as shown in the figure, the gap between the curves only begins to widen from a specific intermediate layer, exhibiting an increasing growth rate before eventually leveling off. Thus, we interpret this phenomenon as follows: In the initial layers, aligned LLMs process malicious and normal questions similarly. It is only in the specific intermediate layers—enabled by the effect of safety layers—that the LLM starts to distinguish between the two types of questions. Furthermore, if the curve gap observed in aligned LLMs were solely due to differences in vector distributions, we would expect to see a similar gap in pre-trained LLMs. However, Figure 3 in the main text illustrates the N-N Pair and N-M Pair analysis for pre-trained LLMs such as LLaMA-2, LLaMA-3, and Gemma, which lack security alignment. We found that, for these pre-trained LLMs, the average cosine similarity curves for N-N Pairs and N-M Pairs remain nearly identical across all layers. This suggests that the curve gap may not stem from differences in vector distribution, but rather provides a clearer visualization of how the LLM distinguishes between different types of questions (malicious and normal). 
The lack of a gap in Figure 3 for pre-trained LLMs aligns with their inability to distinguish between malicious and normal questions, further indicating that the emergence of safety layers is a product of security alignment. | ['Figure 2', 'Figure 3'] | ['images/257af518f4af88ea6fa041278a6e367d8779c646c34b0ab4d47e9223b2b91066.jpg', 'images/40cde0d077ba0b13823a26dfeec93fe6d602535c12b564373561ffe123227fbb.jpg'] | ['figure'] | 2 | 3 | 5 | {'Evaluation Metrics for Fine-tuning Task. To compare the performance of SPPFT with full finetuning on the fine-tuning task, we select 500 samples from the alpaca finance (2024) dataset, and ensure that they are do not overlap with the fine-tuning data as our test dataset DT . We compute the average Rouge-L score Sr (Lin, 2004) of the labels of DT versus the LLM outputs to evaluate the performance of the LLM on the task of our fine-tuning dataset. Also, we use the MMLU scores Sm (Hendrycks et al., 2021b;a) of these LLMs as the overall performance evaluation metrics. ': '1'} | {'1': 'Evaluation Metrics for Fine-tuning Task. To compare the performance of SPPFT with full finetuning on the fine-tuning task, we select 500 samples from the alpaca finance (2024) dataset, and ensure that they are do not overlap with the fine-tuning data as our test dataset DT . We compute the average Rouge-L score Sr (Lin, 2004) of the labels of DT versus the LLM outputs to evaluate the performance of the LLM on the task of our fine-tuning dataset. Also, we use the MMLU scores Sm (Hendrycks et al., 2021b;a) of these LLMs as the overall performance evaluation metrics. 
'} | {'images/40cde0d077ba0b13823a26dfeec93fe6d602535c12b564373561ffe123227fbb.jpg': '3', 'images/10fc79bd68c26dcfdf6fdb0e8488c8dbce4debb10b00a4b57eac09074d6d01e3.jpg': '1', 'images/257af518f4af88ea6fa041278a6e367d8779c646c34b0ab4d47e9223b2b91066.jpg': '2'} | {'3': 'images/40cde0d077ba0b13823a26dfeec93fe6d602535c12b564373561ffe123227fbb.jpg', '1': 'images/10fc79bd68c26dcfdf6fdb0e8488c8dbce4debb10b00a4b57eac09074d6d01e3.jpg', '2': 'images/257af518f4af88ea6fa041278a6e367d8779c646c34b0ab4d47e9223b2b91066.jpg'} | {'images/ca25c0d2eb7af0af4c66e42b369fa9cf094c6cbe76c8d452929b86396cbb946d.jpg': '1'} | {'1': 'images/ca25c0d2eb7af0af4c66e42b369fa9cf094c6cbe76c8d452929b86396cbb946d.jpg'} | {} | ['images/ca25c0d2eb7af0af4c66e42b369fa9cf094c6cbe76c8d452929b86396cbb946d.jpg', 'images/10fc79bd68c26dcfdf6fdb0e8488c8dbce4debb10b00a4b57eac09074d6d01e3.jpg', 'Evaluation Metrics for Fine-tuning Task. To compare the performance of SPPFT with full finetuning on the fine-tuning task, we select 500 samples from the alpaca finance (2024) dataset, and ensure that they are do not overlap with the fine-tuning data as our test dataset DT . We compute the average Rouge-L score Sr (Lin, 2004) of the labels of DT versus the LLM outputs to evaluate the performance of the LLM on the task of our fine-tuning dataset. Also, we use the MMLU scores Sm (Hendrycks et al., 2021b;a) of these LLMs as the overall performance evaluation metrics. '] | 7ba5dbc5a9b6bda95b693ac0d170683e5d4d4d86e320e62858010bf77c05073f | 86abc452df0eb5db70617257cc777cb0cf7c8acf |
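The N-N vs N-M analysis in this row compares the average cosine similarity of paired hidden states layer by layer, looking for the layer at which the curves diverge. A minimal sketch of that computation (the toy two-layer vectors are assumptions):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def layerwise_avg_cosine(pairs_by_layer):
    """pairs_by_layer[l] holds (vec_a, vec_b) hidden-state pairs for layer l;
    returns the per-layer average cosine similarity, as plotted for
    N-N versus N-M question pairs."""
    return [sum(cosine(a, b) for a, b in pairs) / len(pairs)
            for pairs in pairs_by_layer]

layers = [
    [([1.0, 0.0], [1.0, 0.0]), ([0.0, 1.0], [0.0, 1.0])],   # early layer: near-identical
    [([1.0, 0.0], [0.0, 1.0]), ([1.0, 1.0], [1.0, -1.0])],  # later layer: diverging
]
curve = layerwise_avg_cosine(layers)
```

A gap opening only at intermediate layers in the N-M curve, while the N-N curve stays high, is the signature attributed to the safety layers.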
explanation | What proportion of the labeled trajectories were used during training? | The ratio of labeled to unlabeled trajectories is maintained at 1:1 in both the Language Table environment (Table 2) and the Simpler Env environment (Table 3). In Minecraft, labeled data accounts for approximately 35% of the total dataset. In the Minecraft environment, all tasks in the training set are accompanied by corresponding language instructions, except for Open Chest and Climb Mountain, which do not include language instructions. | ['Table 2', 'Table 3'] | ['images/4b2c97a19b8bb2b34186276d05cefb0bbd3c3d617fcb81922e8f1b4109e54771.jpg', 'images/0f481585d57ea819421356d1aeaf85db3f5985ff79e38946875729e8e8c31971.jpg'] | ['table'] | 2 | 3 | 5 | {'We evaluated GROOT-2 ’s steerability and performance on four Atari games (Breakout, Demon Attack, Hero, and Name This Game). Datasets from Agarwal et al. (2020), containing approximately 10M frames per game, were used. Episode returns were normalized to µ = 0, σ = 1. ': '1'} | {'1': 'We evaluated GROOT-2 ’s steerability and performance on four Atari games (Breakout, Demon Attack, Hero, and Name This Game). Datasets from Agarwal et al. (2020), containing approximately 10M frames per game, were used. Episode returns were normalized to µ = 0, σ = 1. 
'} | {'images/db3eb58608b3930d485e3283ac66b7aeed1cfee105ac4cde7722ee796a5526b3.jpg': '8'} | {'8': 'images/db3eb58608b3930d485e3283ac66b7aeed1cfee105ac4cde7722ee796a5526b3.jpg'} | {'images/0f481585d57ea819421356d1aeaf85db3f5985ff79e38946875729e8e8c31971.jpg': '3', 'images/4b2c97a19b8bb2b34186276d05cefb0bbd3c3d617fcb81922e8f1b4109e54771.jpg': '2', 'images/abfd0188d4880d67d7b464e01b560bdac1d38f91d895b87d4b633a341ce4859c.jpg': '1'} | {'3': 'images/0f481585d57ea819421356d1aeaf85db3f5985ff79e38946875729e8e8c31971.jpg', '2': 'images/4b2c97a19b8bb2b34186276d05cefb0bbd3c3d617fcb81922e8f1b4109e54771.jpg', '1': 'images/abfd0188d4880d67d7b464e01b560bdac1d38f91d895b87d4b633a341ce4859c.jpg'} | {} | ['We evaluated GROOT-2 ’s steerability and performance on four Atari games (Breakout, Demon Attack, Hero, and Name This Game). Datasets from Agarwal et al. (2020), containing approximately 10M frames per game, were used. Episode returns were normalized to µ = 0, σ = 1. ', 'images/abfd0188d4880d67d7b464e01b560bdac1d38f91d895b87d4b633a341ce4859c.jpg', 'images/db3eb58608b3930d485e3283ac66b7aeed1cfee105ac4cde7722ee796a5526b3.jpg'] | d4b2bd67b19c68bd1857d59ebcba001a9dec754ba172ed4fdb9378b40a09f6d9 | 8a94388a2da9cf966f5b009c4737833f7c6134c0 |
explanation | How does DPO perform without considering ties compared to the proposed methods? | We note that we report the performance of DPO without ties for all our experiments (Blue curves in Figure 1 and Figure 3). We refer to this configuration as DPO(CP) where CP stands for Clear Preference. We find that DPO(CP) achieves better performance than DPO(CP+TP) on all three experimental setups. | ['Figure 1', 'Figure 3'] | ['images/9bc772f732a7ad148aba40edc6d0bf7f5b065149767988536303bada9777c4fd.jpg', 'images/1ff4ee70bfd426a72f4e5639838f351c92e1014f1a049c881c8c3042e151c285.jpg'] | ['figure'] | 2 | 3 | 5 | {'We extend the DPO policy objective (Eq. 4) to include a binary flag t to indicate a tie: ': '1'} | {'1': 'We extend the DPO policy objective (Eq. 4) to include a binary flag t to indicate a tie: '} | {'images/6b6c8966bc83eed3e0bfcf9dcd36b19e8cb867b58d6104f9686fd18ba812238c.jpg': '4', 'images/9bc772f732a7ad148aba40edc6d0bf7f5b065149767988536303bada9777c4fd.jpg': '1', 'images/1ff4ee70bfd426a72f4e5639838f351c92e1014f1a049c881c8c3042e151c285.jpg': '3', 'images/1636957b3e24838a60883aab193aafac5905a4b2356c40f777af42c78def4ea0.jpg': '2'} | {'4': 'images/6b6c8966bc83eed3e0bfcf9dcd36b19e8cb867b58d6104f9686fd18ba812238c.jpg', '1': 'images/9bc772f732a7ad148aba40edc6d0bf7f5b065149767988536303bada9777c4fd.jpg', '3': 'images/1ff4ee70bfd426a72f4e5639838f351c92e1014f1a049c881c8c3042e151c285.jpg', '2': 'images/1636957b3e24838a60883aab193aafac5905a4b2356c40f777af42c78def4ea0.jpg'} | {} | {} | {} | ['images/1636957b3e24838a60883aab193aafac5905a4b2356c40f777af42c78def4ea0.jpg', 'We extend the DPO policy objective (Eq. 4) to include a binary flag t to indicate a tie: ', 'images/6b6c8966bc83eed3e0bfcf9dcd36b19e8cb867b58d6104f9686fd18ba812238c.jpg'] | 6170ad4c899d629f9ed6167ba0aaea26a67a30ed3a623fb44d5f6154b03467cc | 903b601877dbe63458f85ea54bc10cc123c4ad0c |
explanation | How does the paper address the expansion of basis sharing to more than two layers? | In the main body of the paper, we used two-layer sharing as an example. In the experiments, we have extended this sharing to multiple layers. The results are shown in Table 6 and Table 7 on page 9, where the performance of grouping every 2, 3, 4, 5, 6, 7, 8, 16 and 32 adjacent layers for basis sharing without and with LoRA fine-tuning is compared. | ['Table 6', 'Table 7'] | ['images/dc68747573609694b0b3dcef1a020623784e96d88459d2d39ca059f960f5e63b.jpg', 'images/dcc212bd3d0db6cfd08cbabb4d64e39f2bd2fe8da1bcea5e92c8b04a16f4bf4d.jpg'] | ['table'] | 2 | 3 | 5 | {'Impact on LLM Performance in Zero-Shot Setting We grouped different numbers of consecutive layers to examine the impact of the number grouped layers on the LLM performance without any fine-tuning. Table 6 shows the result. The number in the first column indicates the number of consecutive layers sharing a common basis matrix. For example, 4 means that every four consecutive layers share a basis matrix in the order from the first layer to the last layer. Compared with no basis sharing in SVD-LLM (# LAYERS = 1) under 20% compression ratio, Basis Sharing achieves a similar performance. Grouping four or five layers to share a basis matrix is more reasonable when compression ratio is lower than 30%, since they have the lowest PPL. Two layers sharing a basis matrix is a good choice when the compression ratio is larger than 30%. ': '1'} | {'1': 'Impact on LLM Performance in Zero-Shot Setting We grouped different numbers of consecutive layers to examine the impact of the number grouped layers on the LLM performance without any fine-tuning. Table 6 shows the result. The number in the first column indicates the number of consecutive layers sharing a common basis matrix. For example, 4 means that every four consecutive layers share a basis matrix in the order from the first layer to the last layer. 
Compared with no basis sharing in SVD-LLM (# LAYERS = 1) under 20% compression ratio, Basis Sharing achieves a similar performance. Grouping four or five layers to share a basis matrix is more reasonable when compression ratio is lower than 30%, since they have the lowest PPL. Two layers sharing a basis matrix is a good choice when the compression ratio is larger than 30%. '} | {'images/d45fa2c4776323f1ddb2e7f7989e224aa0e05bd7d5d7028d802d3a3daccd1ac3.jpg': '6', 'images/008f45b721fd73ca8d53ff5c0c93518abd4d80ee4e981b0d606e4943de57d255.jpg': '5'} | {'6': 'images/d45fa2c4776323f1ddb2e7f7989e224aa0e05bd7d5d7028d802d3a3daccd1ac3.jpg', '5': 'images/008f45b721fd73ca8d53ff5c0c93518abd4d80ee4e981b0d606e4943de57d255.jpg'} | {'images/dcc212bd3d0db6cfd08cbabb4d64e39f2bd2fe8da1bcea5e92c8b04a16f4bf4d.jpg': '7', 'images/dc68747573609694b0b3dcef1a020623784e96d88459d2d39ca059f960f5e63b.jpg': '6'} | {'7': 'images/dcc212bd3d0db6cfd08cbabb4d64e39f2bd2fe8da1bcea5e92c8b04a16f4bf4d.jpg', '6': 'images/dc68747573609694b0b3dcef1a020623784e96d88459d2d39ca059f960f5e63b.jpg'} | {} | ['Impact on LLM Performance in Zero-Shot Setting We grouped different numbers of consecutive layers to examine the impact of the number grouped layers on the LLM performance without any fine-tuning. Table 6 shows the result. The number in the first column indicates the number of consecutive layers sharing a common basis matrix. For example, 4 means that every four consecutive layers share a basis matrix in the order from the first layer to the last layer. Compared with no basis sharing in SVD-LLM (# LAYERS = 1) under 20% compression ratio, Basis Sharing achieves a similar performance. Grouping four or five layers to share a basis matrix is more reasonable when compression ratio is lower than 30%, since they have the lowest PPL. Two layers sharing a basis matrix is a good choice when the compression ratio is larger than 30%. 
', 'images/008f45b721fd73ca8d53ff5c0c93518abd4d80ee4e981b0d606e4943de57d255.jpg', 'images/d45fa2c4776323f1ddb2e7f7989e224aa0e05bd7d5d7028d802d3a3daccd1ac3.jpg'] | dc9ab95ce1b73f1ac4390fc39128110865a4f26dc982077cd91f7f9150412f6b | 91357a8b9e1ae925eda1979cd2fe9f76825a6c0d |
explanation | What are the implications of using a small training dataset on the conclusions drawn from the experiments? | In our experiments, we utilize a dataset of approximately 100K instances, as we specifically focus on task-oriented scenarios rather than general-purpose multimodal capabilities. To validate the adequacy of our dataset, we observed clear patterns of convergence in both the mixed-task results in Figure 3 and the training curves in Figure 4, indicating that our dataset size is sufficient for the targeted tasks. | ['Figure 3', 'Figure 4'] | ['images/680e38d552e0e0578c3263e393ba085acfc283cdde3c2a81b71a88e1ae261443.jpg', 'images/3997ac63541dab2c0ce8a5bf4ecd5443cb38766d581ab5afe38cad5e9937d3f9.jpg'] | ['figure'] | 2 | 3 | 5 | {'• Empirical Study and Synthetic Data Engine: To investigate the root cause of this performance, we conduct a detailed empirical exploration of MLLM architecture and training strategies. To aid in our investigation, we develop a synthetic data engine capable of generating high-fidelity visual representations of fundamental geometric elements. This study leads to key insights, such as the importance of certain architectural choices and the use of curriculum-based, multi-stage training with progressively more complex visual descriptions for improving low-level visual perception. ': '1'} | {'1': '• Empirical Study and Synthetic Data Engine: To investigate the root cause of this performance, we conduct a detailed empirical exploration of MLLM architecture and training strategies. To aid in our investigation, we develop a synthetic data engine capable of generating high-fidelity visual representations of fundamental geometric elements. This study leads to key insights, such as the importance of certain architectural choices and the use of curriculum-based, multi-stage training with progressively more complex visual descriptions for improving low-level visual perception. 
'} | {'images/3997ac63541dab2c0ce8a5bf4ecd5443cb38766d581ab5afe38cad5e9937d3f9.jpg': '4', 'images/680e38d552e0e0578c3263e393ba085acfc283cdde3c2a81b71a88e1ae261443.jpg': '3', 'images/0e14c607c31e5ff642e1e675459fb0b4478c644dfa79fa21f7b8be142bbf1f57.jpg': '7'} | {'4': 'images/3997ac63541dab2c0ce8a5bf4ecd5443cb38766d581ab5afe38cad5e9937d3f9.jpg', '3': 'images/680e38d552e0e0578c3263e393ba085acfc283cdde3c2a81b71a88e1ae261443.jpg', '7': 'images/0e14c607c31e5ff642e1e675459fb0b4478c644dfa79fa21f7b8be142bbf1f57.jpg'} | {'images/947081caa4d49780d7eb8c567982244a9ad8bdfc150e95050e42b11c1437af46.jpg': '3'} | {'3': 'images/947081caa4d49780d7eb8c567982244a9ad8bdfc150e95050e42b11c1437af46.jpg'} | {} | ['• Empirical Study and Synthetic Data Engine: To investigate the root cause of this performance, we conduct a detailed empirical exploration of MLLM architecture and training strategies. To aid in our investigation, we develop a synthetic data engine capable of generating high-fidelity visual representations of fundamental geometric elements. This study leads to key insights, such as the importance of certain architectural choices and the use of curriculum-based, multi-stage training with progressively more complex visual descriptions for improving low-level visual perception. ', 'images/947081caa4d49780d7eb8c567982244a9ad8bdfc150e95050e42b11c1437af46.jpg', 'images/0e14c607c31e5ff642e1e675459fb0b4478c644dfa79fa21f7b8be142bbf1f57.jpg'] | 2a61cc7be499a2b19e6482e0254bb4aff5f0552e6b009f231a561ceff2251790 | a28dbf3321649d56e533889f3e7caa17bffe6eb5 |
explanation | What evidence supports the claim of a substantial Pareto improvement for a fixed training budget? | The substantial Pareto improvement is much more obvious in the top left plot of Figure 3, showing reconstruction MSE. The substantial Pareto improvement for a fixed training budget is also apparent in our scaling laws (specifically, the left subplot of Figure 1). The Pareto improvement is less visually obvious in the bottom left plot of Figure 3 because the fraction of loss recovered (FLR) metric is saturated. We have inverted the y-axis (plotting 1 - FLR as opposed to FLR) in our updated submission to highlight the difference more clearly. | ['Figure 1', 'Figure 3'] | ['images/6b20d07e0555d1caec9179c28d7c7da0f6b6bf4876b9e6d1c9dcec5ef5240481.jpg', 'images/31f6bbb8af24edd63200b4456017e217d1d9a595211ec1c16fff16ef7ceeb9ae.jpg'] | ['figure'] | 2 | 3 | 5 | {'We additionally benchmark against the ReLU SAE (Anthropic, 2024b) and the Gated SAE (Rajamanoharan et al., 2024a). The ReLU SAE uses the ReLU activation function and applies an L1 penalty to the feature activations to encourage sparsity. The Gated SAE avoids activation shrinkage (Wright & Sharkey, 2024) by separately determining which features should be active and how strongly activated they should be. ': '1'} | {'1': 'We additionally benchmark against the ReLU SAE (Anthropic, 2024b) and the Gated SAE (Rajamanoharan et al., 2024a). The ReLU SAE uses the ReLU activation function and applies an L1 penalty to the feature activations to encourage sparsity. The Gated SAE avoids activation shrinkage (Wright & Sharkey, 2024) by separately determining which features should be active and how strongly activated they should be. 
'} | {'images/31f6bbb8af24edd63200b4456017e217d1d9a595211ec1c16fff16ef7ceeb9ae.jpg': '3', 'images/6b20d07e0555d1caec9179c28d7c7da0f6b6bf4876b9e6d1c9dcec5ef5240481.jpg': '1', 'images/e225971aee33f39b2cf650ab1ceca71cb43d593ea377cfb1a70abceb84cc2108.jpg': '2', 'images/726f8b6e7b68bd0c2d4a50cc885a548d55c3a12160bdf54fc04cbc91271c4eda.jpg': '6'} | {'3': 'images/31f6bbb8af24edd63200b4456017e217d1d9a595211ec1c16fff16ef7ceeb9ae.jpg', '1': 'images/6b20d07e0555d1caec9179c28d7c7da0f6b6bf4876b9e6d1c9dcec5ef5240481.jpg', '2': 'images/e225971aee33f39b2cf650ab1ceca71cb43d593ea377cfb1a70abceb84cc2108.jpg', '6': 'images/726f8b6e7b68bd0c2d4a50cc885a548d55c3a12160bdf54fc04cbc91271c4eda.jpg'} | {} | {} | {} | ['images/e225971aee33f39b2cf650ab1ceca71cb43d593ea377cfb1a70abceb84cc2108.jpg', 'We additionally benchmark against the ReLU SAE (Anthropic, 2024b) and the Gated SAE (Rajamanoharan et al., 2024a). The ReLU SAE uses the ReLU activation function and applies an L1 penalty to the feature activations to encourage sparsity. The Gated SAE avoids activation shrinkage (Wright & Sharkey, 2024) by separately determining which features should be active and how strongly activated they should be. ', 'images/726f8b6e7b68bd0c2d4a50cc885a548d55c3a12160bdf54fc04cbc91271c4eda.jpg'] | 137c4ca2e10fe76fa98ffb83543f22203d524670bca1d8d1106afaaa3b282254 | a44532deefcee0af98255b79ca451963baa29739 |
explanation | Why does Grond work better than any other method? | The design of Grond is aware of the existence of parameter-space defenses, while other backdoor attacks do not consider parameter-space defenses. So, it's anticipated that Grond outperforms other backdoor attacks against parameter-space defenses. In addition, we also provide analyses of feature space (Figure 2) and parameter space (Figure 4) to show the effectiveness of Grond. Compared to all other baselines in our experiments, Grond shows better stealthiness in feature space and parameter space. All other baseline attacks show a set of prominent neurons with much higher TAC values than other neurons. In our TAC pruning experiment (Figure 4), we show that removing these neurons with higher TAC values could effectively mitigate backdoor attacks. However, as Grond constrains the parameters while training and spreads the backdoor effect to more neurons, the backdoor neurons' TAC values are close to those of benign neurons. Pruning these neurons will significantly reduce benign accuracy. | ['Figure 2', 'Figure 4'] | ['images/f03a5680d4f5b8884e3cee76ebb05846134ba93e3b6026457c631fac17395f9c.jpg', 'images/af93dd73f7c07612fb5ebb5e662c2f4851c3fbdd8644ce5ab471d9f908b258cb.jpg'] | ['figure'] | 2 | 3 | 5 | {'Mitigation refers to erasing the backdoor effect from the victim model by pruning the backdoorrelated neurons (pruning-based defenses) (Liu et al., 2018a; Wu & Wang, 2021; Zheng et al., 2022; Li et al., 2023a) or unlearning the backdoor trigger (fine-tuning-based defenses) (Zhu et al., 2023; Zeng et al., 2022; Min et al., 2023; Xu et al., 2024b). These methods attempt to remove the neurons associated with backdoors. For example, ANP (Wu & Wang, 2021) prunes neurons that are more sensitive to adversarial neuron noise, and FT-SAM (Zhu et al., 2023) combines sharpness-aware minimization with fine-tuning to decrease the norms of backdoor neurons. 
': '1', 'Backdoor defenses can be classified into detection and mitigation. Detection refers to determining whether a model is backdoored (model detection) (Wang et al., 2019; Liu et al., 2019; Zhao et al., 2022; Wang et al., 2023; Xu et al., 2024b) or a given input is applied with a trigger (input detection) (Gao et al., 2019; Guo et al., 2023; Mo et al., 2024). Model detection by trigger inversion is considered one of the most general defenses against backdoors (Wang et al., 2022a; 2023; Xu et al., 2024b; Zhu et al., 2024). The inversed trigger could determine whether the model is backdoored and be used for backdoor unlearning. For example, NC (Wang et al., 2019) inverses input space triggers and determines the backdoor by selecting abnormally smaller triggers. ': '2'} | {'1': 'Mitigation refers to erasing the backdoor effect from the victim model by pruning the backdoorrelated neurons (pruning-based defenses) (Liu et al., 2018a; Wu & Wang, 2021; Zheng et al., 2022; Li et al., 2023a) or unlearning the backdoor trigger (fine-tuning-based defenses) (Zhu et al., 2023; Zeng et al., 2022; Min et al., 2023; Xu et al., 2024b). These methods attempt to remove the neurons associated with backdoors. For example, ANP (Wu & Wang, 2021) prunes neurons that are more sensitive to adversarial neuron noise, and FT-SAM (Zhu et al., 2023) combines sharpness-aware minimization with fine-tuning to decrease the norms of backdoor neurons. ', '2': 'Backdoor defenses can be classified into detection and mitigation. Detection refers to determining whether a model is backdoored (model detection) (Wang et al., 2019; Liu et al., 2019; Zhao et al., 2022; Wang et al., 2023; Xu et al., 2024b) or a given input is applied with a trigger (input detection) (Gao et al., 2019; Guo et al., 2023; Mo et al., 2024). Model detection by trigger inversion is considered one of the most general defenses against backdoors (Wang et al., 2022a; 2023; Xu et al., 2024b; Zhu et al., 2024). 
The inversed trigger could determine whether the model is backdoored and be used for backdoor unlearning. For example, NC (Wang et al., 2019) inverses input space triggers and determines the backdoor by selecting abnormally smaller triggers. '} | {'images/af93dd73f7c07612fb5ebb5e662c2f4851c3fbdd8644ce5ab471d9f908b258cb.jpg': '4', 'images/f03a5680d4f5b8884e3cee76ebb05846134ba93e3b6026457c631fac17395f9c.jpg': '2'} | {'4': 'images/af93dd73f7c07612fb5ebb5e662c2f4851c3fbdd8644ce5ab471d9f908b258cb.jpg', '2': 'images/f03a5680d4f5b8884e3cee76ebb05846134ba93e3b6026457c631fac17395f9c.jpg'} | {'images/b9fdcff010903de73a1ca20d282fca275c1eab511689a7fa4ccf8a63d782932a.jpg': '2'} | {'2': 'images/b9fdcff010903de73a1ca20d282fca275c1eab511689a7fa4ccf8a63d782932a.jpg'} | {} | ['images/b9fdcff010903de73a1ca20d282fca275c1eab511689a7fa4ccf8a63d782932a.jpg', 'Mitigation refers to erasing the backdoor effect from the victim model by pruning the backdoorrelated neurons (pruning-based defenses) (Liu et al., 2018a; Wu & Wang, 2021; Zheng et al., 2022; Li et al., 2023a) or unlearning the backdoor trigger (fine-tuning-based defenses) (Zhu et al., 2023; Zeng et al., 2022; Min et al., 2023; Xu et al., 2024b). These methods attempt to remove the neurons associated with backdoors. For example, ANP (Wu & Wang, 2021) prunes neurons that are more sensitive to adversarial neuron noise, and FT-SAM (Zhu et al., 2023) combines sharpness-aware minimization with fine-tuning to decrease the norms of backdoor neurons. ', 'Backdoor defenses can be classified into detection and mitigation. Detection refers to determining whether a model is backdoored (model detection) (Wang et al., 2019; Liu et al., 2019; Zhao et al., 2022; Wang et al., 2023; Xu et al., 2024b) or a given input is applied with a trigger (input detection) (Gao et al., 2019; Guo et al., 2023; Mo et al., 2024). 
Model detection by trigger inversion is considered one of the most general defenses against backdoors (Wang et al., 2022a; 2023; Xu et al., 2024b; Zhu et al., 2024). The inversed trigger could determine whether the model is backdoored and be used for backdoor unlearning. For example, NC (Wang et al., 2019) inverses input space triggers and determines the backdoor by selecting abnormally smaller triggers. '] | 3fa634640e3314ae789299045b3e5b8eddfd8bb43f37851126e0211a8a2bf93d | a574be3b40e57c307b5270859edded4aef9fb947 |
explanation | How many experiments did the author conduct, and are the differences in results statistically significant? | In this study, we conducted 440 experimental patterns. Initially, for the case of 2 clients, we performed 180 experimental patterns. Specifically, as shown in Table 1 of the paper, there are 9 attack patterns, including the case where no attack is conducted. For each of these patterns, we experimented with 5 defense methods (including the case where no defense is applied), 2 datasets, and 2 activation functions, resulting in a total of 9 * 5 * 2 * 2 = 180 experimental patterns. Similarly, for the case of 3 clients, as shown in Table 5 of the paper, there are 13 attack patterns, including the case where no attack is conducted, leading to 260 experimental patterns. Summing these, we conducted a total of 440 experimental patterns. Notably, in cases where the defense is successful, the defense method succeeds against all attack patterns, thereby demonstrating the effectiveness of the defense. | ['Table 1', 'Table 5'] | ['images/a4a5dd75551344808edf34cdd62d523f0818e5653cee78ebd8ca9c4676322761.jpg', 'images/4a296d22a184327de8e2e94572eb2da07b9c565f030798b310070d1b640944a2.jpg'] | ['table'] | 2 | 3 | 5 | {'First, we describe the proposed defense algorithm CC-VFed against Byzantine attacks. CC-VFed leverages the fact that the output labels become illegitimate in the presence of malicious clients, and images are shown in Appendix A. To identify such malicious clients as described above, methods similar to Grad-CAM (Selvaraju et al., 2017) is utilized. The determination of malicious clients for one epoch was performed in the following three steps performed at the central server: ': '1', 'Based on the above, the malicious clients addressed in this paper can manipulate only the input to the model, specifically, the training data, and all element values must fall within the specified range. 
We considered strong Byzantine attacks by malicious clients with these attack capabilities. ': '2', 'First, we consider a Gaussian attack and a same-value attack. These attack methods necessitate manipulating the values sent from the client to the server; however, under the encryption of the model, it is extremely difficult to reverse calculate the model to obtain the desired output. Therefore, the Gaussian attack and the same-value attack are not feasible. ': '3'} | {'1': 'First, we describe the proposed defense algorithm CC-VFed against Byzantine attacks. CC-VFed leverages the fact that the output labels become illegitimate in the presence of malicious clients, and images are shown in Appendix A. To identify such malicious clients as described above, methods similar to Grad-CAM (Selvaraju et al., 2017) is utilized. The determination of malicious clients for one epoch was performed in the following three steps performed at the central server: ', '2': 'Based on the above, the malicious clients addressed in this paper can manipulate only the input to the model, specifically, the training data, and all element values must fall within the specified range. We considered strong Byzantine attacks by malicious clients with these attack capabilities. ', '3': 'First, we consider a Gaussian attack and a same-value attack. These attack methods necessitate manipulating the values sent from the client to the server; however, under the encryption of the model, it is extremely difficult to reverse calculate the model to obtain the desired output. Therefore, the Gaussian attack and the same-value attack are not feasible. 
'} | {} | {} | {'images/a4a5dd75551344808edf34cdd62d523f0818e5653cee78ebd8ca9c4676322761.jpg': '1', 'images/4a296d22a184327de8e2e94572eb2da07b9c565f030798b310070d1b640944a2.jpg': '5'} | {'1': 'images/a4a5dd75551344808edf34cdd62d523f0818e5653cee78ebd8ca9c4676322761.jpg', '5': 'images/4a296d22a184327de8e2e94572eb2da07b9c565f030798b310070d1b640944a2.jpg'} | {} | ['First, we consider a Gaussian attack and a same-value attack. These attack methods necessitate manipulating the values sent from the client to the server; however, under the encryption of the model, it is extremely difficult to reverse calculate the model to obtain the desired output. Therefore, the Gaussian attack and the same-value attack are not feasible. ', 'Based on the above, the malicious clients addressed in this paper can manipulate only the input to the model, specifically, the training data, and all element values must fall within the specified range. We considered strong Byzantine attacks by malicious clients with these attack capabilities. ', 'First, we describe the proposed defense algorithm CC-VFed against Byzantine attacks. CC-VFed leverages the fact that the output labels become illegitimate in the presence of malicious clients, and images are shown in Appendix A. To identify such malicious clients as described above, methods similar to Grad-CAM (Selvaraju et al., 2017) is utilized. The determination of malicious clients for one epoch was performed in the following three steps performed at the central server: '] | 112e9be36185e91e1a1881956fcca4a1a0798eda748cd0fdd2d518b4e38d0161 | baeace9caa57822ba13ff1cc6ab940c507c30374 |
explanation | If other methods are trained using the same dataset as yours, how about the performance? | Thank you for your question. We understand that you are aiming to separate the impact of the dataset and the method on the results. However, comparing our approach with the other methods may not be entirely fair, as these methods are not specifically designed to address the precise semantic understanding of text in images, as seen in Table 5 of the manuscript. Among the three comparison methods we used (Disentangle, Prefix, and PAINT), each has a different focus. The Prefix method emphasizes language modeling, similar to adversarial word embedding training. PAINT focuses on interpolating the parameters of the entire VLM model. Only the Disentangle method is somewhat comparable to our approach, though it was trained with a setup designed for scenarios like the 'irrelevant' (easy) case in our work. To make the comparison as fair as possible, we trained and tested Disentangle on a subset of the data, using only the 'original' and 'irrelevant' samples, which align with the original Disentangle implementation. As shown in the following Table, Disentangle performs lower on the ToT subset compared to the original CLIP model and its performance on the original Disentangle dataset. This result is understandable, given that the Disentangle dataset is approximately 700 times larger than ToT. Considering the model design and dataset scale, the experiments in Table 4 (in the manuscript) provide the fairest comparison across methods. However, this also highlights the limitations of comparison methods, which focus primarily on image semantics and neglect textual semantics. 
| ['Table 5', 'Table 4'] | ['images/afa7bf801aedfd5b20ec67a1fc20c4e6d4cf5fba55171cfff1577d85443fbaf9.jpg', 'images/861c536bd93f935c4bf4f4787b5386fa9bb88154baf192b6ab3cc5c8e3ae4360.jpg'] | ['table'] | 2 | 3 | 5 | {'While typographic attacks may not strictly qualify as traditional attacks, they demonstrate how pretrained models effectively learn multimodal representations. Models trained on diverse image-text datasets implicitly learn correlations between text and its real-world meanings (Cao et al., 2023). For instance, a model might link the image of a ’cat’ with the word and concept of a cat, suggesting a unified representation of textual and conceptual semantics. However, this theory requires further empirical validation, and alternative explanations should also be explored. ': '1'} | {'1': 'While typographic attacks may not strictly qualify as traditional attacks, they demonstrate how pretrained models effectively learn multimodal representations. Models trained on diverse image-text datasets implicitly learn correlations between text and its real-world meanings (Cao et al., 2023). For instance, a model might link the image of a ’cat’ with the word and concept of a cat, suggesting a unified representation of textual and conceptual semantics. However, this theory requires further empirical validation, and alternative explanations should also be explored. 
'} | {'images/430688fd4381d28abfaa460d2245b16d07473dba6a8c550253f63f320bc8132b.jpg': '5', 'images/976bd4a555d15a7bc132fa93af9a85fd43b230eaf39b8ab401d2443d760a2ac3.jpg': '3'} | {'5': 'images/430688fd4381d28abfaa460d2245b16d07473dba6a8c550253f63f320bc8132b.jpg', '3': 'images/976bd4a555d15a7bc132fa93af9a85fd43b230eaf39b8ab401d2443d760a2ac3.jpg'} | {'images/afa7bf801aedfd5b20ec67a1fc20c4e6d4cf5fba55171cfff1577d85443fbaf9.jpg': '5', 'images/861c536bd93f935c4bf4f4787b5386fa9bb88154baf192b6ab3cc5c8e3ae4360.jpg': '4'} | {'5': 'images/afa7bf801aedfd5b20ec67a1fc20c4e6d4cf5fba55171cfff1577d85443fbaf9.jpg', '4': 'images/861c536bd93f935c4bf4f4787b5386fa9bb88154baf192b6ab3cc5c8e3ae4360.jpg'} | {} | ['While typographic attacks may not strictly qualify as traditional attacks, they demonstrate how pretrained models effectively learn multimodal representations. Models trained on diverse image-text datasets implicitly learn correlations between text and its real-world meanings (Cao et al., 2023). For instance, a model might link the image of a ’cat’ with the word and concept of a cat, suggesting a unified representation of textual and conceptual semantics. However, this theory requires further empirical validation, and alternative explanations should also be explored. ', 'images/430688fd4381d28abfaa460d2245b16d07473dba6a8c550253f63f320bc8132b.jpg', 'images/976bd4a555d15a7bc132fa93af9a85fd43b230eaf39b8ab401d2443d760a2ac3.jpg'] | d7c46b40a688d017c55120b97c3b9daf2e869644e9bdd99ee71a1d0c3d96bbe8 | be3486be8bc8c155b369efee783740438932ec4b |
explanation | How does ReKV scale with increasing video length and complexity? | ReKV scales effectively with varying video lengths. As illustrated in Figure 1b, ReKV consistently outperforms the Uniform Sampling baseline across six benchmarks, regardless of video length. Performance improves with an increasing number of retrieved frames, as shown in Figure 3a (ranging from 8 to 64 frames). This performance gain saturates beyond 64 frames, primarily due to the base Video-LLM’s limitations (e.g., LLaVA-OV, trained on a maximum of 32 frames, struggles to effectively process a larger number of retrieved frames). | ['Figure 1', 'Figure 3'] | ['images/b973a30530e7d44f808e03f496b4e91360869ef5240332c107e620d59546302f.jpg', 'images/aba604b77fcb1d46905d005af7b65195e41cda973d71a83c8243de3f3ead4899.jpg'] | ['figure'] | 2 | 3 | 5 | {'We propose ReKV, a novel, training-free approach that integrates seamlessly with existing Video Large Language Models (Video-LLMs) to enable efficient streaming video question-answering (StreamingVQA). Traditional VideoQA systems struggle with long videos, as they must process the entire video before responding to queries, and repeat this process for each new question. In contrast, our approach analyzes long videos in a streaming fashion, allowing for prompt responses as soon as user queries are received. Building on a common VideoLLM, we first incorporate a sliding-window attention mechanism, ensuring that input frames attend to a limited number of preceding frames, thereby reducing computational overhead. To prevent information loss, we store processed video key-value caches (KV-Caches) in RAM and disk, reloading them into GPU memory as needed. Additionally, we introduce a retrieval method that leverages an external retriever or the parameters within Video-LLMs to retrieve only queryrelevant KV-Caches, ensuring both efficiency and accuracy in question answering. 
ReKV enables the separation of video analyzing and question-answering across different processes and GPUs, significantly enhancing the efficiency of StreamingVQA. Through comprehensive experimentation, we validate the efficacy and practicality of our approach, which significantly boosts efficiency and enhances applicability over existing VideoQA models. ': '1'} | {'1': 'We propose ReKV, a novel, training-free approach that integrates seamlessly with existing Video Large Language Models (Video-LLMs) to enable efficient streaming video question-answering (StreamingVQA). Traditional VideoQA systems struggle with long videos, as they must process the entire video before responding to queries, and repeat this process for each new question. In contrast, our approach analyzes long videos in a streaming fashion, allowing for prompt responses as soon as user queries are received. Building on a common VideoLLM, we first incorporate a sliding-window attention mechanism, ensuring that input frames attend to a limited number of preceding frames, thereby reducing computational overhead. To prevent information loss, we store processed video key-value caches (KV-Caches) in RAM and disk, reloading them into GPU memory as needed. Additionally, we introduce a retrieval method that leverages an external retriever or the parameters within Video-LLMs to retrieve only queryrelevant KV-Caches, ensuring both efficiency and accuracy in question answering. ReKV enables the separation of video analyzing and question-answering across different processes and GPUs, significantly enhancing the efficiency of StreamingVQA. Through comprehensive experimentation, we validate the efficacy and practicality of our approach, which significantly boosts efficiency and enhances applicability over existing VideoQA models. 
'} | {'images/b973a30530e7d44f808e03f496b4e91360869ef5240332c107e620d59546302f.jpg': '1', 'images/aba604b77fcb1d46905d005af7b65195e41cda973d71a83c8243de3f3ead4899.jpg': '3'} | {'1': 'images/b973a30530e7d44f808e03f496b4e91360869ef5240332c107e620d59546302f.jpg', '3': 'images/aba604b77fcb1d46905d005af7b65195e41cda973d71a83c8243de3f3ead4899.jpg'} | {'images/9280ab5261a8d2ec8185376625fc74c511747a2bd21088fb850fe0cc30d19c37.jpg': '5', 'images/bfd5fb0a4e1346866411be5aa3afa91c90f6f17f3e8740dbd274285d6802ffc6.jpg': '3'} | {'5': 'images/9280ab5261a8d2ec8185376625fc74c511747a2bd21088fb850fe0cc30d19c37.jpg', '3': 'images/bfd5fb0a4e1346866411be5aa3afa91c90f6f17f3e8740dbd274285d6802ffc6.jpg'} | {} | ['images/9280ab5261a8d2ec8185376625fc74c511747a2bd21088fb850fe0cc30d19c37.jpg', 'We propose ReKV, a novel, training-free approach that integrates seamlessly with existing Video Large Language Models (Video-LLMs) to enable efficient streaming video question-answering (StreamingVQA). Traditional VideoQA systems struggle with long videos, as they must process the entire video before responding to queries, and repeat this process for each new question. In contrast, our approach analyzes long videos in a streaming fashion, allowing for prompt responses as soon as user queries are received. Building on a common VideoLLM, we first incorporate a sliding-window attention mechanism, ensuring that input frames attend to a limited number of preceding frames, thereby reducing computational overhead. To prevent information loss, we store processed video key-value caches (KV-Caches) in RAM and disk, reloading them into GPU memory as needed. Additionally, we introduce a retrieval method that leverages an external retriever or the parameters within Video-LLMs to retrieve only queryrelevant KV-Caches, ensuring both efficiency and accuracy in question answering. 
ReKV enables the separation of video analyzing and question-answering across different processes and GPUs, significantly enhancing the efficiency of StreamingVQA. Through comprehensive experimentation, we validate the efficacy and practicality of our approach, which significantly boosts efficiency and enhances applicability over existing VideoQA models. ', 'images/bfd5fb0a4e1346866411be5aa3afa91c90f6f17f3e8740dbd274285d6802ffc6.jpg'] | 89a5ed082f7507371470a0e2bf2f1c6f670c07352db1dfcda2cd5a68d4ef7e45 | c1454defee924634f213559af31f9407720604fe |
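The ReKV row above describes retrieving only query-relevant KV-caches before answering. A minimal sketch of that retrieval idea — scoring cached per-frame feature vectors against a query embedding by cosine similarity and keeping the top-k frames in temporal order — is below. All names are hypothetical illustrations, not ReKV's actual code.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_top_k(query_vec, frame_vecs, k):
    # Score every cached frame against the query and return the indices
    # of the k best-matching frames, kept in temporal order.
    scores = [(cosine(query_vec, f), i) for i, f in enumerate(frame_vecs)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:k])

# Four cached frames; the query is closest to the last two.
frames = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
print(retrieve_top_k([0.0, 1.0], frames, 2))  # → [2, 3]
```

In a scheme like the one quoted, only the selected frames' KV-caches would then be reloaded onto the GPU for decoding, which is what keeps the per-question cost bounded.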
explanation | How does WMAdapter compare to Stable Signature in terms of quality and robustness? | All watermarking methods inherently involve a tradeoff between image quality and robustness. A more comprehensive and fair evaluation considers these two attributes as separate dimensions, directly comparing the quality-robustness tradeoff in a two-dimensional figure. In Figure 1, we present such a comparison, where points closer to the top-left corner indicate a better balance. The results show that our method achieves a superior quality-robustness tradeoff compared to Stable Signature. Specifically, WMAdapter-I achieves a 22% FID improvement with only a 3% accuracy gap. Unlike merely moving along the existing quality-robustness tradeoff boundary, our method pushes up the boundary, achieving a better overall balance. WMAdapter provides scalability by encoding $2^{48}$ watermark patterns directly into the adapter, eliminating the need to fine-tune the VAE decoder for each different watermark. Furthermore, as shown in the new experiments in Figure 5, WMAdapter demonstrates better robustness against regeneration attacks compared to Stable Signature. | ['Figure 1', 'Figure 5'] | ['images/49cdb53c8be77ae03211564ecf90f3090b953168552efd33e76a1df7cb754df0.jpg', 'images/52ce75c67195d0b2b5b392fc7b59010e8c66803033695229ca781c5ed7e321a7.jpg'] | ['figure'] | 2 | 3 | 5 | {'Recent works (Bui et al., 2023; Xiong et al., 2023; Min et al., 2024; Meng et al., 2024; Zhang et al., 2024; Kim et al., 2023; Nguyen et al., 2023) have explored watermark plugins for diffusion models. These plugins accept arbitrary watermark keys and generate watermark embeddings without requiring per-watermark finetuning, thereby addressing the scalability issue. 
However, these methods typically generate watermark embeddings without considering the image content (Kim et al., 2023; Xiong et al., 2023; Bui et al., 2023) (i.e., they are context-less) and often require finetuning or modifying diffusion modules to incorporate the watermark embeddings (Kim et al., 2023; Xiong et al., 2023; Feng et al., 2024) . Tab. 1 compares several watermarking methods. Unfortunately, finetuning the original diffusion pipeline or making intrusive modifications often leads to a significant drop in image quality, resulting in blurriness or noticeable artifacts. Fig. 1 illustrates the image quality of different methods, where artifacts introduced by other methods are evident. Find more examples in Fig. 13. ': '1', 'Table 2: Comparison with other watermarking methods on generation quality and robustness. All methods are evaluated on COCO 2017 val set (Lin et al., 2014) with image size 512 × 512. Since Stable Signature (Fernandez et al., 2023) requires finetuning of separate VAE decoders to embed different keys, we report its average results on 10 randomly sampled keys. We report TPR@FPR10−6 for detection performance. For robustness, we use Crop 0.3, JPEG 80, Brightness 1.5. ': '2'} | {'1': 'Recent works (Bui et al., 2023; Xiong et al., 2023; Min et al., 2024; Meng et al., 2024; Zhang et al., 2024; Kim et al., 2023; Nguyen et al., 2023) have explored watermark plugins for diffusion models. These plugins accept arbitrary watermark keys and generate watermark embeddings without requiring per-watermark finetuning, thereby addressing the scalability issue. However, these methods typically generate watermark embeddings without considering the image content (Kim et al., 2023; Xiong et al., 2023; Bui et al., 2023) (i.e., they are context-less) and often require finetuning or modifying diffusion modules to incorporate the watermark embeddings (Kim et al., 2023; Xiong et al., 2023; Feng et al., 2024) . Tab. 1 compares several watermarking methods. 
Unfortunately, finetuning the original diffusion pipeline or making intrusive modifications often leads to a significant drop in image quality, resulting in blurriness or noticeable artifacts. Fig. 1 illustrates the image quality of different methods, where artifacts introduced by other methods are evident. Find more examples in Fig. 13. ', '2': 'Table 2: Comparison with other watermarking methods on generation quality and robustness. All methods are evaluated on COCO 2017 val set (Lin et al., 2014) with image size 512 × 512. Since Stable Signature (Fernandez et al., 2023) requires finetuning of separate VAE decoders to embed different keys, we report its average results on 10 randomly sampled keys. We report TPR@FPR10−6 for detection performance. For robustness, we use Crop 0.3, JPEG 80, Brightness 1.5. '} | {'images/d55c10271818e863198ea27d607936d3444295b93879b34bfa67bfcd733215f3.jpg': '7', 'images/49cdb53c8be77ae03211564ecf90f3090b953168552efd33e76a1df7cb754df0.jpg': '1', 'images/52ce75c67195d0b2b5b392fc7b59010e8c66803033695229ca781c5ed7e321a7.jpg': '5'} | {'7': 'images/d55c10271818e863198ea27d607936d3444295b93879b34bfa67bfcd733215f3.jpg', '1': 'images/49cdb53c8be77ae03211564ecf90f3090b953168552efd33e76a1df7cb754df0.jpg', '5': 'images/52ce75c67195d0b2b5b392fc7b59010e8c66803033695229ca781c5ed7e321a7.jpg'} | {} | {} | {} | ['Recent works (Bui et al., 2023; Xiong et al., 2023; Min et al., 2024; Meng et al., 2024; Zhang et al., 2024; Kim et al., 2023; Nguyen et al., 2023) have explored watermark plugins for diffusion models. These plugins accept arbitrary watermark keys and generate watermark embeddings without requiring per-watermark finetuning, thereby addressing the scalability issue. 
However, these methods typically generate watermark embeddings without considering the image content (Kim et al., 2023; Xiong et al., 2023; Bui et al., 2023) (i.e., they are context-less) and often require finetuning or modifying diffusion modules to incorporate the watermark embeddings (Kim et al., 2023; Xiong et al., 2023; Feng et al., 2024) . Tab. 1 compares several watermarking methods. Unfortunately, finetuning the original diffusion pipeline or making intrusive modifications often leads to a significant drop in image quality, resulting in blurriness or noticeable artifacts. Fig. 1 illustrates the image quality of different methods, where artifacts introduced by other methods are evident. Find more examples in Fig. 13. ', 'images/d55c10271818e863198ea27d607936d3444295b93879b34bfa67bfcd733215f3.jpg', 'Table 2: Comparison with other watermarking methods on generation quality and robustness. All methods are evaluated on COCO 2017 val set (Lin et al., 2014) with image size 512 × 512. Since Stable Signature (Fernandez et al., 2023) requires finetuning of separate VAE decoders to embed different keys, we report its average results on 10 randomly sampled keys. We report TPR@FPR10−6 for detection performance. For robustness, we use Crop 0.3, JPEG 80, Brightness 1.5. '] | 29efdcf73f72c4e93136286343fd5fbfb183071a8029d1c78a12b9e239975517 | c88e12b6bdaf8a6657dc8b00bdef374b08a8acb3 |
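Detection numbers such as bit accuracy in the watermarking rows above ultimately reduce to comparing the embedded key with the decoded bits. A toy version of that bookkeeping (hypothetical names, not the paper's evaluation code), assuming a 48-bit key as mentioned in the answer:

```python
def bit_accuracy(key_bits, decoded_bits):
    # Fraction of watermark bits recovered correctly.
    matches = sum(a == b for a, b in zip(key_bits, decoded_bits))
    return matches / len(key_bits)

def detected(key_bits, decoded_bits, threshold=0.9):
    # Declare the watermark present when enough bits match; in practice
    # the threshold is what controls the false-positive rate.
    return bit_accuracy(key_bits, decoded_bits) >= threshold

key = [1, 0] * 24                              # a 48-bit key
noisy = key[:44] + [1 - b for b in key[44:]]   # 4 flipped bits after an attack
print(bit_accuracy(key, noisy))
print(detected(key, noisy))  # → True
```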
explanation | How does increasing input frames affect the performance of TempMe? | Thank you for your insightful suggestions. For fair comparisons, our TempMe is consistent with existing PEFT TVR methods in sampling 12 frames on MSRVTT in Table 2 of our manuscript. To explore the impact of longer input frames, we conduct additional experiments in Table 4. Increasing the input frames to 18 significantly improves performance by leveraging more temporal information. However, this comes at the cost of higher GFLOPs and memory usage, as each GPU needs to process 32x18 frames. Compared to previous PEFT TVR methods like VoP and DGL, our TempMe with 18 frames achieves a significant improvement of 3.2% R@1 and 6.1% R-sum under comparable inference memory usage. | ['Table 2', 'Table 4'] | ['images/dc9185d819e471151843e22494cc25a3483e857c991ac951d79a8232fa8f80b3.jpg', 'images/5ff7425c291604b8d1fe7f3f71679db87dfff01db29475bcf34f5f93ced8d451.jpg'] | ['table'] | 2 | 3 | 5 | {'Ablation on Each Function. From a functional perspective, TempMe can be categorized into Temporal Modeling and Token Reduction, aimed at improving accuracy and efficiency, respectively. Table 9 demonstrates the impact of each function. (1) The accuracy improvements are attributed to Temporal Modeling, which aggregates clips progressively to enhance spatio-temporal learning. In this framework, the attention modules of the early layers are applied to intra-frame tokens in the spatial domain. In the later layers, they operate on tokens across frames in the spatio-temporal domain. (2) The efficiency improvements arise from Token Reduction, which reduces redundancy in the entire framework. In the early layers, it slightly reduces intra-frame tokens to decrease spatial redundancy. In the later layers, it significantly reduces tokens among frames to address large temporal redundancy. 
(3) Without the support of Temporal Modeling, Token Reduction alone significantly reduces tokens in the spatial domain, leading to a substantial 4.2% decrease in R-sum. However, due to the considerable temporal redundancy, the combination of Temporal Modeling and Token Reduction (TempMe) significantly reduces complexity while achieving high performance. ': '1', 'In this work, we focus on text-video retrieval using CLIP, where each sampled frame is processed as an independent token set. Existing token compression methods are limited to pruning or merging tokens within a single token set for an image or video, without addressing token compression across multiple sets or incorporating temporal fine-tuning. In contrast, we have explored a practical and feasible path to reach both superior performance and computational efficiency. By fruitfully integrating parameter-efficient fine-tuning and token compression techniques, we propose TempMe and reach state-of-the-art performance. TempMe can progressively merge different frame token sets, and thus minimize spatio-temporal redundancy and enhance temporal modeling across frames. ': '2'} | {'1': 'Ablation on Each Function. From a functional perspective, TempMe can be categorized into Temporal Modeling and Token Reduction, aimed at improving accuracy and efficiency, respectively. Table 9 demonstrates the impact of each function. (1) The accuracy improvements are attributed to Temporal Modeling, which aggregates clips progressively to enhance spatio-temporal learning. In this framework, the attention modules of the early layers are applied to intra-frame tokens in the spatial domain. In the later layers, they operate on tokens across frames in the spatio-temporal domain. (2) The efficiency improvements arise from Token Reduction, which reduces redundancy in the entire framework. In the early layers, it slightly reduces intra-frame tokens to decrease spatial redundancy. 
In the later layers, it significantly reduces tokens among frames to address large temporal redundancy. (3) Without the support of Temporal Modeling, Token Reduction alone significantly reduces tokens in the spatial domain, leading to a substantial 4.2% decrease in R-sum. However, due to the considerable temporal redundancy, the combination of Temporal Modeling and Token Reduction (TempMe) significantly reduces complexity while achieving high performance. ', '2': 'In this work, we focus on text-video retrieval using CLIP, where each sampled frame is processed as an independent token set. Existing token compression methods are limited to pruning or merging tokens within a single token set for an image or video, without addressing token compression across multiple sets or incorporating temporal fine-tuning. In contrast, we have explored a practical and feasible path to reach both superior performance and computational efficiency. By fruitfully integrating parameter-efficient fine-tuning and token compression techniques, we propose TempMe and reach state-of-the-art performance. TempMe can progressively merge different frame token sets, and thus minimize spatio-temporal redundancy and enhance temporal modeling across frames. '} | {'images/744b87118b296a3d8b6d6992544bd31eb704b4a89e39882cb60dac40d2a673d2.jpg': '1'} | {'1': 'images/744b87118b296a3d8b6d6992544bd31eb704b4a89e39882cb60dac40d2a673d2.jpg'} | {'images/dc9185d819e471151843e22494cc25a3483e857c991ac951d79a8232fa8f80b3.jpg': '2', 'images/5ff7425c291604b8d1fe7f3f71679db87dfff01db29475bcf34f5f93ced8d451.jpg': '4'} | {'2': 'images/dc9185d819e471151843e22494cc25a3483e857c991ac951d79a8232fa8f80b3.jpg', '4': 'images/5ff7425c291604b8d1fe7f3f71679db87dfff01db29475bcf34f5f93ced8d451.jpg'} | {} | ['Ablation on Each Function. From a functional perspective, TempMe can be categorized into Temporal Modeling and Token Reduction, aimed at improving accuracy and efficiency, respectively. 
Table 9 demonstrates the impact of each function. (1) The accuracy improvements are attributed to Temporal Modeling, which aggregates clips progressively to enhance spatio-temporal learning. In this framework, the attention modules of the early layers are applied to intra-frame tokens in the spatial domain. In the later layers, they operate on tokens across frames in the spatio-temporal domain. (2) The efficiency improvements arise from Token Reduction, which reduces redundancy in the entire framework. In the early layers, it slightly reduces intra-frame tokens to decrease spatial redundancy. In the later layers, it significantly reduces tokens among frames to address large temporal redundancy. (3) Without the support of Temporal Modeling, Token Reduction alone significantly reduces tokens in the spatial domain, leading to a substantial 4.2% decrease in R-sum. However, due to the considerable temporal redundancy, the combination of Temporal Modeling and Token Reduction (TempMe) significantly reduces complexity while achieving high performance. ', 'images/744b87118b296a3d8b6d6992544bd31eb704b4a89e39882cb60dac40d2a673d2.jpg', 'In this work, we focus on text-video retrieval using CLIP, where each sampled frame is processed as an independent token set. Existing token compression methods are limited to pruning or merging tokens within a single token set for an image or video, without addressing token compression across multiple sets or incorporating temporal fine-tuning. In contrast, we have explored a practical and feasible path to reach both superior performance and computational efficiency. By fruitfully integrating parameter-efficient fine-tuning and token compression techniques, we propose TempMe and reach state-of-the-art performance. TempMe can progressively merge different frame token sets, and thus minimize spatio-temporal redundancy and enhance temporal modeling across frames. 
'] | f60839a378d5c402b91410103f7efa40fe33625f24fe1fb9c2da21bd7cb09a59 | cbba405e78c20b44f3d36677036181f2147fe746 |
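The TempMe row above centers on progressively merging redundant tokens across frames. As a rough, generic illustration of token merging — a greedy average-merge of the most cosine-similar pair until a target count remains, loosely in the spirit of such methods and not TempMe's actual algorithm:

```python
import math

def cos(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def merge_tokens(tokens, target):
    # Greedily merge the most similar pair of token vectors (by cosine)
    # until only `target` tokens remain; merged pairs are averaged.
    toks = [list(t) for t in tokens]
    while len(toks) > target:
        _, i, j = max(
            ((cos(toks[i], toks[j]), i, j)
             for i in range(len(toks)) for j in range(i + 1, len(toks))),
            key=lambda t: t[0],
        )
        merged = [(a + b) / 2 for a, b in zip(toks[i], toks[j])]
        toks = [t for k, t in enumerate(toks) if k not in (i, j)] + [merged]
    return toks

print(merge_tokens([[1, 0], [1, 0], [0, 1], [0, 1]], 2))  # → [[1.0, 0.0], [0.0, 1.0]]
```

This halves the token count while preserving the two distinct directions — the kind of redundancy reduction the row's ablation attributes efficiency gains to.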
explanation | What are the performance advantages of 'WTS' when the weak model is scaled up? | First, we can confirm that the performance of 'WTS' cannot surpass that of a 'strong model' trained solely on clean samples. From Figure 9, the relatively low 'Performance Gap Recovered' aligns with your observation that the models exhibit poor recovery in weak-to-strong generalization from 4B to 7B. However, the primary reason for this lies in the relatively small difference in parameter size between the 4B and 7B models. As shown in Figure 14, when using models with significantly larger parameter sizes, we observe a notable improvement in the 'Performance Gap Recovered.' | ['Figure 9', 'Figure 14'] | ['images/ac75093d8db705057296d20f643e943b463213d789e8656e9492be6f54038dd0.jpg', 'images/366c1f328d526555f581223e99415a9cb7519bf4c87b219eb093cf6d274b9d2f.jpg'] | ['figure'] | 2 | 3 | 5 | {'Burns et al. (2023), we employ weaker models to generate datasets that served as weak supervision for stronger model training, i.e., 0.5B, 1.8B, and 4B models provide weak supervision for 1.8B, 4B, and 7B models, respectively. The results, shown in Figure 1 (Right), reveal that the average performance of the weaker models consistently underperforms compared to the “WTS-S” (averaging performance across models with single-capability weak to strong generalization on different datasets) and “WTS” (averaging performance of model with multi-capabilities weak to strong generalization). The performance between single-capability and multi-capabilities generalization is comparable. In addition, we can observe that the gains from weak to strong generalization are more pronounced when the model size is smaller due to its weaker capability. Further details, i.e., the performance of each capability for Figure 1, can be found in Appendix D. 
': '1', 'By employing this two-stage training approach, the strong model benefits from an initial broad exposure to weak data, followed by focused training on higher accuracy and diverse samples. This method enhances the model’s generalization capabilities, leveraging the advantages of weak data to improve overall performance while mitigating risks of overconfidence and collapse. ': '2', 'Experimental Details. In the experiments, we utilized a series of Qwen-1.5 models (Bai et al., 2023) with varying parameters, specifically 0.5B, 1.8B, 4B, and 7B. The reward models are initialized from the strong model, i.e., Qwen-1.5, and they maintain the same parameters. To ensure a fair comparison, we followed the experimental setup from Burns et al. (2023), conducting all experiments with 2 epochs and a batch size of 40. The optimizer used was Adam (Kingma & Ba, 2015) with a learning rate of 1e-5. Weight decay was set at 0.01, and a cosine learning rate decay strategy was employed. During inference, the models utilized a greedy decoding strategy. The performance refers to accuracy, where a correct prediction exactly matches the ground truth answer. For weak to strong generalization, we use the performance gap recovered (PGR) metric (Burns et al., 2023) to measure the weak to strong generalization performance. All experiments are conducted on NVIDIA A100 80G GPUs. ': '3'} | {'1': 'Burns et al. (2023), we employ weaker models to generate datasets that served as weak supervision for stronger model training, i.e., 0.5B, 1.8B, and 4B models provide weak supervision for 1.8B, 4B, and 7B models, respectively. The results, shown in Figure 1 (Right), reveal that the average performance of the weaker models consistently underperforms compared to the “WTS-S” (averaging performance across models with single-capability weak to strong generalization on different datasets) and “WTS” (averaging performance of model with multi-capabilities weak to strong generalization). 
The performance between single-capability and multi-capabilities generalization is comparable. In addition, we can observe that the gains from weak to strong generalization are more pronounced when the model size is smaller due to its weaker capability. Further details, i.e., the performance of each capability for Figure 1, can be found in Appendix D. ', '2': 'By employing this two-stage training approach, the strong model benefits from an initial broad exposure to weak data, followed by focused training on higher accuracy and diverse samples. This method enhances the model’s generalization capabilities, leveraging the advantages of weak data to improve overall performance while mitigating risks of overconfidence and collapse. ', '3': 'Experimental Details. In the experiments, we utilized a series of Qwen-1.5 models (Bai et al., 2023) with varying parameters, specifically 0.5B, 1.8B, 4B, and 7B. The reward models are initialized from the strong model, i.e., Qwen-1.5, and they maintain the same parameters. To ensure a fair comparison, we followed the experimental setup from Burns et al. (2023), conducting all experiments with 2 epochs and a batch size of 40. The optimizer used was Adam (Kingma & Ba, 2015) with a learning rate of 1e-5. Weight decay was set at 0.01, and a cosine learning rate decay strategy was employed. During inference, the models utilized a greedy decoding strategy. The performance refers to accuracy, where a correct prediction exactly matches the ground truth answer. For weak to strong generalization, we use the performance gap recovered (PGR) metric (Burns et al., 2023) to measure the weak to strong generalization performance. All experiments are conducted on NVIDIA A100 80G GPUs. 
'} | {'images/366c1f328d526555f581223e99415a9cb7519bf4c87b219eb093cf6d274b9d2f.jpg': '14', 'images/ac75093d8db705057296d20f643e943b463213d789e8656e9492be6f54038dd0.jpg': '9'} | {'14': 'images/366c1f328d526555f581223e99415a9cb7519bf4c87b219eb093cf6d274b9d2f.jpg', '9': 'images/ac75093d8db705057296d20f643e943b463213d789e8656e9492be6f54038dd0.jpg'} | {} | {} | {} | ['Burns et al. (2023), we employ weaker models to generate datasets that served as weak supervision for stronger model training, i.e., 0.5B, 1.8B, and 4B models provide weak supervision for 1.8B, 4B, and 7B models, respectively. The results, shown in Figure 1 (Right), reveal that the average performance of the weaker models consistently underperforms compared to the “WTS-S” (averaging performance across models with single-capability weak to strong generalization on different datasets) and “WTS” (averaging performance of model with multi-capabilities weak to strong generalization). The performance between single-capability and multi-capabilities generalization is comparable. In addition, we can observe that the gains from weak to strong generalization are more pronounced when the model size is smaller due to its weaker capability. Further details, i.e., the performance of each capability for Figure 1, can be found in Appendix D. ', 'Experimental Details. In the experiments, we utilized a series of Qwen-1.5 models (Bai et al., 2023) with varying parameters, specifically 0.5B, 1.8B, 4B, and 7B. The reward models are initialized from the strong model, i.e., Qwen-1.5, and they maintain the same parameters. To ensure a fair comparison, we followed the experimental setup from Burns et al. (2023), conducting all experiments with 2 epochs and a batch size of 40. The optimizer used was Adam (Kingma & Ba, 2015) with a learning rate of 1e-5. Weight decay was set at 0.01, and a cosine learning rate decay strategy was employed. During inference, the models utilized a greedy decoding strategy. 
The performance refers to accuracy, where a correct prediction exactly matches the ground truth answer. For weak to strong generalization, we use the performance gap recovered (PGR) metric (Burns et al., 2023) to measure the weak to strong generalization performance. All experiments are conducted on NVIDIA A100 80G GPUs. ', 'By employing this two-stage training approach, the strong model benefits from an initial broad exposure to weak data, followed by focused training on higher accuracy and diverse samples. This method enhances the model’s generalization capabilities, leveraging the advantages of weak data to improve overall performance while mitigating risks of overconfidence and collapse. '] | 3c254e55392e2219309607396721f9a2a0375dc47832195258587e94311b0482 | d7f00dab460086f9dada228080a7ffe1277fb841 |
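The weak-to-strong rows above use the performance gap recovered (PGR) metric of Burns et al. (2023), which measures how much of the gap between the weak floor and the strong ceiling the weak-to-strong-trained model closes. A small sketch:

```python
def performance_gap_recovered(weak_acc, w2s_acc, strong_acc):
    # PGR = (weak-to-strong - weak floor) / (strong ceiling - weak floor).
    # 1.0 means the gap is fully recovered; 0.0 means no gain over weak supervision.
    if strong_acc == weak_acc:
        raise ValueError("strong and weak accuracies must differ")
    return (w2s_acc - weak_acc) / (strong_acc - weak_acc)

# Weak model at 50% accuracy, strong ceiling at 90%, weak-to-strong model at 80%:
print(performance_gap_recovered(50, 80, 90))  # → 0.75
```

This also shows why the rows report low PGR for 4B→7B: when the weak floor and strong ceiling are close, the denominator is small and even modest gaps recover little.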
explanation | What performance differences exist between INNAprop and AdamW? | To clarify performance differences, we add a summary table in the revised paper showing each optimizer’s performance (see Table 2 and Table 3). INNAprop ($\alpha=0.1,\beta=0.9$) reaches AdamW’s peak performance much earlier in training. INNAprop ($\alpha=2.0,\beta=2.0$) achieves better test accuracy than AdamW. See Table 2 in the revised version. | ['Table 2', 'Table 3'] | ['images/c5a860f6424e2f228359b31385860ccc6b7080f50dbf080a0f8f425307dbf126.jpg', 'images/5eac5db56d7b96b2fcbaddd412ce374919678c601f05e01d06bb17232e02641f.jpg'] | ['table'] | 2 | 3 | 5 | {} | {} | {'images/7f0483dd10e3dd1675c5c41f70e389d7122498fedb8626321b01661992e2ace4.jpg': '4', 'images/089393818f7214970b736d92980f5ba64406640a80ef197003eb4e557efa8624.jpg': '3'} | {'4': 'images/7f0483dd10e3dd1675c5c41f70e389d7122498fedb8626321b01661992e2ace4.jpg', '3': 'images/089393818f7214970b736d92980f5ba64406640a80ef197003eb4e557efa8624.jpg'} | {'images/32d639b1cf9b9af2223a389a9b668b022ca09e7197702704c042280e81d135a6.jpg': '4', 'images/5eac5db56d7b96b2fcbaddd412ce374919678c601f05e01d06bb17232e02641f.jpg': '3', 'images/c5a860f6424e2f228359b31385860ccc6b7080f50dbf080a0f8f425307dbf126.jpg': '2'} | {'4': 'images/32d639b1cf9b9af2223a389a9b668b022ca09e7197702704c042280e81d135a6.jpg', '3': 'images/5eac5db56d7b96b2fcbaddd412ce374919678c601f05e01d06bb17232e02641f.jpg', '2': 'images/c5a860f6424e2f228359b31385860ccc6b7080f50dbf080a0f8f425307dbf126.jpg'} | {} | ['images/089393818f7214970b736d92980f5ba64406640a80ef197003eb4e557efa8624.jpg', 'images/7f0483dd10e3dd1675c5c41f70e389d7122498fedb8626321b01661992e2ace4.jpg', 'images/32d639b1cf9b9af2223a389a9b668b022ca09e7197702704c042280e81d135a6.jpg'] | 5044b7f768f6f9c5861f982e37ee400dad09a50c60b078646d4f9829c7fe9850 | d9630880d4c72d85d308f223cf3985d6bf4bfe37
explanation | What improvements have been made regarding the presentation issues noted in the figures? | We appreciate your attention to these details; we have improved the readability of Figure 4 and corrected the misalignment issues. Additionally, as you suggested, Figure 6 has been revised to focus exclusively on the MAPAX evaluation, providing a more thorough and detailed analysis to address reviewer concerns. | ['Figure 4', 'Figure 6'] | ['images/0cbe0e833d3d74272d481489b1d5700909615541971ca279af80f2357fe5278a.jpg', 'images/15b275c81d266b24a24fe86fb5737b5eafe755eb4767d3b543df52b9552f62ad.jpg'] | ['figure'] | 2 | 3 | 5 | {'model. As illustrated in Figure 1, MAPA decomposes the universal vector ∆W ∈ Rd×1 into a reconstruction matrix A ∈Rd×p and a projection vector B ∈Rp×1, where p ≤d. ': '1', 'Initialize: Global model W0 ∈Rd×1, reconstruction matrix A0 ∈ Rd×p, projection matrix B¯0 ← 0 ∈ Rp×1, seed r0 ': '2'} | {'1': 'model. As illustrated in Figure 1, MAPA decomposes the universal vector ∆W ∈ Rd×1 into a reconstruction matrix A ∈Rd×p and a projection vector B ∈Rp×1, where p ≤d. ', '2': 'Initialize: Global model W0 ∈Rd×1, reconstruction matrix A0 ∈ Rd×p, projection matrix B¯0 ← 0 ∈ Rp×1, seed r0 '} | {'images/15b275c81d266b24a24fe86fb5737b5eafe755eb4767d3b543df52b9552f62ad.jpg': '6', 'images/0cbe0e833d3d74272d481489b1d5700909615541971ca279af80f2357fe5278a.jpg': '4', 'images/0e2d0ea6c65406773453ac682093020f4080862844c7c55a6586f9930a9edfcb.jpg': '2'} | {'6': 'images/15b275c81d266b24a24fe86fb5737b5eafe755eb4767d3b543df52b9552f62ad.jpg', '4': 'images/0cbe0e833d3d74272d481489b1d5700909615541971ca279af80f2357fe5278a.jpg', '2': 'images/0e2d0ea6c65406773453ac682093020f4080862844c7c55a6586f9930a9edfcb.jpg'} | {} | {} | {} | ['images/0e2d0ea6c65406773453ac682093020f4080862844c7c55a6586f9930a9edfcb.jpg', 'Initialize: Global model W0 ∈Rd×1, reconstruction matrix A0 ∈ Rd×p, projection matrix B¯0 ← 0 ∈ Rp×1, seed r0 ', 'model. 
As illustrated in Figure 1, MAPA decomposes the universal vector ∆W ∈ Rd×1 into a reconstruction matrix A ∈Rd×p and a projection vector B ∈Rp×1, where p ≤d. '] | 806a44f5225f51143dfc156db3a164881c8ac9ff27db12fa576828c6a76bd552 | dd859bbcb913e160957344578bab0680c2b2595f |
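The quoted MAPA text decomposes the universal update vector ΔW ∈ R^{d×1} into a reconstruction matrix A ∈ R^{d×p} times a projection vector B ∈ R^{p×1} with p ≤ d. Reconstruction is then just a matrix-vector product, sketched here in plain Python (illustrative only, not the paper's code):

```python
def reconstruct_delta(A, B):
    # ΔW[i] = sum_k A[i][k] * B[k]: rebuild the d-dimensional update from
    # the d×p reconstruction matrix and the p-dimensional projection vector.
    return [sum(a_ik * b_k for a_ik, b_k in zip(row, B)) for row in A]

A = [[1, 2], [3, 4], [5, 6]]  # d = 3, p = 2
B = [1, 1]                    # projection vector
print(reconstruct_delta(A, B))  # → [3, 7, 11]
```

Assuming A can be regenerated from a shared seed (as the quoted pseudocode's seed r0 suggests), clients would only need to communicate the low-dimensional B.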
explanation | How does the pooling strategy affect specific visual features in complex scenes? | PLLaVA uses stride 2 to pool the spatial dimension, which has only a slight effect on the visual features. PLLaVA presents a superior ability to give detailed video descriptions, since it achieves very good performance on the Video-ChatGPT benchmark, as shown in Table 2. Furthermore, PLLaVA also shows good performance when dealing with fast-moving elements. For example, PLLaVA 34B also achieves the best results for detecting the Moving Attribute in MVBench, as shown in Table 3. | ['Table 2', 'Table 3'] | ['images/7f246961263afea8ee541874d12816b1c47e4ac313d10d98d29a5a15fdec7d78.jpg', 'images/cf9ab08e3bbb197d28bf079138176c8a5db57f5d94b0fc99ab82c76f53217037.jpg'] | ['table'] | 2 | 3 | 5 | {'Adapting image MLLMs to the video domain can be challenging and susceptible to the designs of model structures, given the limited performance of existing methods. ': '1', 'For the temporal dimension, several target pooling shapes were chosen with spatial dimensions fixed as 12, including (4,12,12), (8,12,12), and (16,12,12). We study the temporal pooling effects by altering the number of input video frames. For example, pooling from (64,24,24) to (4,12,12) indicates every 16 frames are fused, then the downsampling rate should be 6.25%. All of the resulting model curves are shown in Figure 5(c) and 5(d). Different from spatial pooling, the model performance is sensitive to temporal pooling. As illustrated in these two figures, all lines achieve better performance with lower downsampling rates. In other words, pooling along temporal dimension always downgrades the model performance. ': '2', 'Difficulty to improve with more data. Data scaling has been a widely accepted means to improve the LLMs’ capability. 
However, The above phenomena indicate that employing image MMLMs in the video domain and seeking to benefit from the scaling of video data sam': '3'} | {'1': 'Adapting image MLLMs to the video domain can be challenging and susceptible to the designs of model structures, given the limited performance of existing methods. ', '2': 'For the temporal dimension, several target pooling shapes were chosen with spatial dimensions fixed as 12, including (4,12,12), (8,12,12), and (16,12,12). We study the temporal pooling effects by altering the number of input video frames. For example, pooling from (64,24,24) to (4,12,12) indicates every 16 frames are fused, then the downsampling rate should be 6.25%. All of the resulting model curves are shown in Figure 5(c) and 5(d). Different from spatial pooling, the model performance is sensitive to temporal pooling. As illustrated in these two figures, all lines achieve better performance with lower downsampling rates. In other words, pooling along temporal dimension always downgrades the model performance. ', '3': 'Difficulty to improve with more data. Data scaling has been a widely accepted means to improve the LLMs’ capability. However, The above phenomena indicate that employing image MMLMs in the video domain and seeking to benefit from the scaling of video data sam'} | {} | {} | {'images/cf9ab08e3bbb197d28bf079138176c8a5db57f5d94b0fc99ab82c76f53217037.jpg': '3', 'images/7f246961263afea8ee541874d12816b1c47e4ac313d10d98d29a5a15fdec7d78.jpg': '2'} | {'3': 'images/cf9ab08e3bbb197d28bf079138176c8a5db57f5d94b0fc99ab82c76f53217037.jpg', '2': 'images/7f246961263afea8ee541874d12816b1c47e4ac313d10d98d29a5a15fdec7d78.jpg'} | {} | ['Difficulty to improve with more data. Data scaling has been a widely accepted means to improve the LLMs’ capability. 
However, The above phenomena indicate that employing image MMLMs in the video domain and seeking to benefit from the scaling of video data sam', 'For the temporal dimension, several target pooling shapes were chosen with spatial dimensions fixed as 12, including (4,12,12), (8,12,12), and (16,12,12). We study the temporal pooling effects by altering the number of input video frames. For example, pooling from (64,24,24) to (4,12,12) indicates every 16 frames are fused, then the downsampling rate should be 6.25%. All of the resulting model curves are shown in Figure 5(c) and 5(d). Different from spatial pooling, the model performance is sensitive to temporal pooling. As illustrated in these two figures, all lines achieve better performance with lower downsampling rates. In other words, pooling along temporal dimension always downgrades the model performance. ', 'Adapting image MLLMs to the video domain can be challenging and susceptible to the designs of model structures, given the limited performance of existing methods. '] | 4502732038df8134d87d1a405be8dda1ae60ca6c9226901735d016ced8f8e783 | e0c9f7447c0f6a644f1d455ff1811a92b895eefb |
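The temporal-pooling arithmetic quoted in the PLLaVA evidence above (pooling (64,24,24) down to (4,12,12) fuses every 16 frames, i.e. keeps 4/64 = 6.25% of the temporal positions) can be made concrete with a small sketch. This is an illustration only; `temporal_pooling_stats` is a hypothetical helper, not PLLaVA's released code.

```python
# Sketch of the temporal pooling bookkeeping described above: given an input
# feature shape (T_in, H, W) and a pooled target shape (T_out, H', W'),
# report how many frames are fused per pooled step and the temporal
# downsampling rate (fraction of temporal positions kept).

def temporal_pooling_stats(in_shape, out_shape):
    """Return (frames fused per output step, temporal downsampling rate)."""
    t_in, _, _ = in_shape
    t_out, _, _ = out_shape
    if t_in % t_out != 0:
        raise ValueError("input frames must be divisible by output frames")
    fused = t_in // t_out   # frames merged into one pooled step
    rate = t_out / t_in     # fraction of temporal positions kept
    return fused, rate

fused, rate = temporal_pooling_stats((64, 24, 24), (4, 12, 12))
print(fused, rate)  # 16 0.0625  -> every 16 frames fused, 6.25% kept
```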
explanation | Is there any evidence of concept erosion in the COGFD approach? | In our experiments, we did observe some degree of quality deterioration in individual concepts after the fine-tuning process, particularly in scenarios involving multiple rounds of fine-tuning. Specifically: As shown in Figure 7, we demonstrate instances where the fine-tuning process results in changes to the generative output, with certain concepts appearing less distinct or with minor artifacts. Figure 6 further illustrates that as the number of fine-tuning rounds increases, the Clip Score for individual concepts tends to decline, which is indicative of a gradual erosion in the quality and distinctiveness of those concepts. | ['Figure 6', 'Figure 7'] | ['images/dd8193f2667e5e4bfdb7b71d7967ec113e11535a99a60beecadbb28508c528c9.jpg', 'images/59cf3fcaf1ff40aa42142f8534ea5cae2cb2cc2545dd1fb928a6ba68b3308798.jpg'] | ['figure'] | 2 | 3 | 5 | {'where in Eq. (1), D(ϕθ, ϕω, p) is a distance function to measure the similarity between the noises predicted by two different diffusion models ϕθ and ϕω based on the same textual prompt p. τ ∈[0, T] limits the range of the denoising process into the early stage. Based on Eq. (1), we decouple the co-occurrent high-level features of concepts within the concept combination through a gradient adversarial loss function. Given a concept combination m = c1 ∧c2 · · · ∧ck, the gradient adversarial loss function is defined as follows: ': '1'} | {'1': 'where in Eq. (1), D(ϕθ, ϕω, p) is a distance function to measure the similarity between the noises predicted by two different diffusion models ϕθ and ϕω based on the same textual prompt p. τ ∈[0, T] limits the range of the denoising process into the early stage. Based on Eq. (1), we decouple the co-occurrent high-level features of concepts within the concept combination through a gradient adversarial loss function. 
Given a concept combination m = c1 ∧c2 · · · ∧ck, the gradient adversarial loss function is defined as follows: '} | {'images/dd8193f2667e5e4bfdb7b71d7967ec113e11535a99a60beecadbb28508c528c9.jpg': '6', 'images/59cf3fcaf1ff40aa42142f8534ea5cae2cb2cc2545dd1fb928a6ba68b3308798.jpg': '7', 'images/4117ca9c69d3848c75ac6e5a95c03d22e86021f4bb4374033f2e2df11eefe9ea.jpg': '2'} | {'6': 'images/dd8193f2667e5e4bfdb7b71d7967ec113e11535a99a60beecadbb28508c528c9.jpg', '7': 'images/59cf3fcaf1ff40aa42142f8534ea5cae2cb2cc2545dd1fb928a6ba68b3308798.jpg', '2': 'images/4117ca9c69d3848c75ac6e5a95c03d22e86021f4bb4374033f2e2df11eefe9ea.jpg'} | {'images/26777e65b26772e5e30fc20ed4f756c1558544019b20b69fc4fd295260105139.jpg': '2'} | {'2': 'images/26777e65b26772e5e30fc20ed4f756c1558544019b20b69fc4fd295260105139.jpg'} | {} | ['images/26777e65b26772e5e30fc20ed4f756c1558544019b20b69fc4fd295260105139.jpg', 'images/4117ca9c69d3848c75ac6e5a95c03d22e86021f4bb4374033f2e2df11eefe9ea.jpg', 'where in Eq. (1), D(ϕθ, ϕω, p) is a distance function to measure the similarity between the noises predicted by two different diffusion models ϕθ and ϕω based on the same textual prompt p. τ ∈[0, T] limits the range of the denoising process into the early stage. Based on Eq. (1), we decouple the co-occurrent high-level features of concepts within the concept combination through a gradient adversarial loss function. Given a concept combination m = c1 ∧c2 · · · ∧ck, the gradient adversarial loss function is defined as follows: '] | 1f12f4d8eee38602ac1f30c5e9de6030a3f892d32ba8de0a161a805997866756 | e677956b80852bfdd406344151b203081a09d256 |
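The distance D(ϕθ, ϕω, p) in the COGFD evidence above measures the similarity between the noises predicted by two diffusion models for the same prompt, restricted to the early denoising stage t ≤ τ. One plausible instantiation is a mean-squared distance over those early steps; the exact form is an assumption here, and `noise_distance` is an illustrative helper, not the paper's code.

```python
# Toy mean-squared distance between two models' noise predictions, averaged
# over denoising steps t <= tau (the "early stage" restriction above).

def noise_distance(eps_theta, eps_omega, tau):
    """eps_*: dict mapping timestep t -> list of predicted noise values."""
    total, count = 0.0, 0
    for t in eps_theta:
        if t > tau:
            continue  # only the early stage contributes
        for a, b in zip(eps_theta[t], eps_omega[t]):
            total += (a - b) ** 2
            count += 1
    if count == 0:
        raise ValueError("tau excludes every available timestep")
    return total / count

d = noise_distance({1: [0.0, 1.0], 5: [2.0, 2.0]},
                   {1: [0.0, 0.0], 5: [1.0, 2.0]}, tau=3)
print(d)  # 0.5 -- only t=1 contributes: ((0-0)**2 + (1-0)**2) / 2
```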
explanation | What trade-offs exist between RAG performance and inference time? | We have explicitly modeled such trade-off relationships between performance and test-time compute using our computation allocation model (e.g., as shown in Figure 1 and Figure 4). | ['Figure 1', 'Figure 4'] | ['images/9fd0e2f30cef6eec153de14ff12b5774fa9df3cb9ef082571514b6a09e09a8cb.jpg', 'images/af74761d40d8d57e7d59e464a71c202b6b1889b177ccfb5f43d008526acfe2d1.jpg'] | ['figure'] | 2 | 3 | 5 | {'During inference, in-context examples are prepended to the initial documents retrieved for the input query. Similarly, each inference request yields a sub-query, an intermediate answer, or the final answer. Upon sub-queries, additional documents are retrieved and merged with the initial ones to generate intermediate answers. In our implementation, we allow up to five iterations of query decomposition before generating the final answer. This iterative process effectively scales test-time computation, with the input tokens from all iterations summed to calculate the effective context length. IterDRAG facilitates a more granular approach by learning to: (1) decompose query into simple and manageable sub-queries; and (2) retrieve and locate relevant information to answer (sub)-queries. As such, the iterative retrieval and generation strategy helps narrowing the compositionality gap and improves knowledge extraction, thereby enhancing overall RAG performance. ': '1'} | {'1': 'During inference, in-context examples are prepended to the initial documents retrieved for the input query. Similarly, each inference request yields a sub-query, an intermediate answer, or the final answer. Upon sub-queries, additional documents are retrieved and merged with the initial ones to generate intermediate answers. In our implementation, we allow up to five iterations of query decomposition before generating the final answer. 
This iterative process effectively scales test-time computation, with the input tokens from all iterations summed to calculate the effective context length. IterDRAG facilitates a more granular approach by learning to: (1) decompose query into simple and manageable sub-queries; and (2) retrieve and locate relevant information to answer (sub)-queries. As such, the iterative retrieval and generation strategy helps narrowing the compositionality gap and improves knowledge extraction, thereby enhancing overall RAG performance. '} | {'images/251355d74c12dea087d5db02760c690033bb342730fd9548dc67941ba5e2cdb3.jpg': '3', 'images/9fd0e2f30cef6eec153de14ff12b5774fa9df3cb9ef082571514b6a09e09a8cb.jpg': '1', 'images/af74761d40d8d57e7d59e464a71c202b6b1889b177ccfb5f43d008526acfe2d1.jpg': '4', 'images/721ecd7735f42f9ea47f8a8d37435d9eae0c304f4f441f6747d60a6a7d3963f9.jpg': '6'} | {'3': 'images/251355d74c12dea087d5db02760c690033bb342730fd9548dc67941ba5e2cdb3.jpg', '1': 'images/9fd0e2f30cef6eec153de14ff12b5774fa9df3cb9ef082571514b6a09e09a8cb.jpg', '4': 'images/af74761d40d8d57e7d59e464a71c202b6b1889b177ccfb5f43d008526acfe2d1.jpg', '6': 'images/721ecd7735f42f9ea47f8a8d37435d9eae0c304f4f441f6747d60a6a7d3963f9.jpg'} | {} | {} | {} | ['images/251355d74c12dea087d5db02760c690033bb342730fd9548dc67941ba5e2cdb3.jpg', 'During inference, in-context examples are prepended to the initial documents retrieved for the input query. Similarly, each inference request yields a sub-query, an intermediate answer, or the final answer. Upon sub-queries, additional documents are retrieved and merged with the initial ones to generate intermediate answers. In our implementation, we allow up to five iterations of query decomposition before generating the final answer. This iterative process effectively scales test-time computation, with the input tokens from all iterations summed to calculate the effective context length. 
IterDRAG facilitates a more granular approach by learning to: (1) decompose query into simple and manageable sub-queries; and (2) retrieve and locate relevant information to answer (sub)-queries. As such, the iterative retrieval and generation strategy helps narrowing the compositionality gap and improves knowledge extraction, thereby enhancing overall RAG performance. ', 'images/721ecd7735f42f9ea47f8a8d37435d9eae0c304f4f441f6747d60a6a7d3963f9.jpg'] | 42284e3e78b452865889c2d53f752499903fe2639a29190e0bb1166e0e92ce47 | e89f339f8ac606f5077100f5dcecec54bceb3aeb |
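The IterDRAG loop quoted above (up to five decomposition iterations; effective context length computed by summing the input tokens of all iterations) can be sketched as follows. `generate` and `retrieve` are hypothetical stand-ins for the LLM and the retriever, and the whitespace split is a crude token count, so this is a shape-of-the-algorithm sketch, not the paper's implementation.

```python
# Iterative retrieval-augmented generation: each step either emits a
# sub-query (triggering more retrieval) or a final answer; input tokens
# are summed across iterations as the effective context length.

def iter_drag(query, generate, retrieve, max_iters=5):
    docs = retrieve(query)              # initial retrieval for the query
    context_tokens = 0
    for _ in range(max_iters):
        prompt = " ".join(docs + [query])
        context_tokens += len(prompt.split())  # crude token count
        step = generate(prompt)
        if step["type"] == "final":
            return step["text"], context_tokens
        docs += retrieve(step["text"])  # sub-query: retrieve and merge docs
    return None, context_tokens         # no final answer within the budget

# Toy models: emit one sub-query, then answer on the second iteration.
calls = []
def generate(prompt):
    calls.append(prompt)
    if len(calls) > 1:
        return {"type": "final", "text": "42"}
    return {"type": "subquery", "text": "who?"}

answer, used = iter_drag("q", generate, lambda q: [f"doc({q})"])
print(answer, used)  # 42 5
```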
explanation | Can this intermediate representation limit the versatility of the low-level motions? | As shown in Figure 2, our low-level action model receives inputs similar to its original version: an RGB-D image with the 2D path overlaid and a simplified language instruction specifying the task. Given these inputs, we believe the low-level model retains its versatility. Additionally, as shown in Figure 3, providing 2D paths allows models like RVT2 and 3DDA to perform better on basic tasks that do not require generalization. | ['Figure 2', 'Figure 3'] | ['images/a130ba7a664b08f650bf7104cbd73a972ee5b9c471839e8cd140edee9573364f.jpg', 'images/a77e1193fd4bef527ca7d8a85fe767813f0ae3f01bcaeb2752cb81a2f788a270.jpg'] | ['figure'] | 2 | 3 | 5 | {'Reliable, generalizable robotic learning techniques must marry the generalization benefits of large VLMs, with the efficiency, local robustness and dexterity of small imitation learning policies, all while being able to train from abundant and cheap sources of data. In this work, we ask – can we design VLA models that train on relatively abundant and cheap data sources, showing broad visual and semantic generalization, while capturing the low-level geometric and 3D understanding displayed by small imitation learning models? ': '1', 'Pixel Point Prediction. For pixel point prediction, we use the dataset released by RoboPoint (Yuan et al., 2024) with 1.4 million VQA tasks, with most answers represented as a list of 2D points corresponding to locations on the image. A sample consists of a prompt zo like Find all instances of cushions, an input image oo and labels po like [(0.25, 0.11), (0.22, 0.19), (0.53, 0.23)].1 This dataset consists of data automatically generated in simulation and collected from existing real-world datasets; its diversity and tasks enable the HAMSTER VLM to reason about pixel-object relationships across diverse scenes while retaining its semantic generalization capabilities. 
': '2'} | {'1': 'Reliable, generalizable robotic learning techniques must marry the generalization benefits of large VLMs, with the efficiency, local robustness and dexterity of small imitation learning policies, all while being able to train from abundant and cheap sources of data. In this work, we ask – can we design VLA models that train on relatively abundant and cheap data sources, showing broad visual and semantic generalization, while capturing the low-level geometric and 3D understanding displayed by small imitation learning models? ', '2': 'Pixel Point Prediction. For pixel point prediction, we use the dataset released by RoboPoint (Yuan et al., 2024) with 1.4 million VQA tasks, with most answers represented as a list of 2D points corresponding to locations on the image. A sample consists of a prompt zo like Find all instances of cushions, an input image oo and labels po like [(0.25, 0.11), (0.22, 0.19), (0.53, 0.23)].1 This dataset consists of data automatically generated in simulation and collected from existing real-world datasets; its diversity and tasks enable the HAMSTER VLM to reason about pixel-object relationships across diverse scenes while retaining its semantic generalization capabilities. 
'} | {'images/a130ba7a664b08f650bf7104cbd73a972ee5b9c471839e8cd140edee9573364f.jpg': '2', 'images/120d1f0238f69f0d7d457764cf8af30d4457e4852e9ef3faac40356744701406.jpg': '1', 'images/a77e1193fd4bef527ca7d8a85fe767813f0ae3f01bcaeb2752cb81a2f788a270.jpg': '3'} | {'2': 'images/a130ba7a664b08f650bf7104cbd73a972ee5b9c471839e8cd140edee9573364f.jpg', '1': 'images/120d1f0238f69f0d7d457764cf8af30d4457e4852e9ef3faac40356744701406.jpg', '3': 'images/a77e1193fd4bef527ca7d8a85fe767813f0ae3f01bcaeb2752cb81a2f788a270.jpg'} | {} | {} | {} | ['images/120d1f0238f69f0d7d457764cf8af30d4457e4852e9ef3faac40356744701406.jpg', 'Reliable, generalizable robotic learning techniques must marry the generalization benefits of large VLMs, with the efficiency, local robustness and dexterity of small imitation learning policies, all while being able to train from abundant and cheap sources of data. In this work, we ask – can we design VLA models that train on relatively abundant and cheap data sources, showing broad visual and semantic generalization, while capturing the low-level geometric and 3D understanding displayed by small imitation learning models? ', 'Pixel Point Prediction. For pixel point prediction, we use the dataset released by RoboPoint (Yuan et al., 2024) with 1.4 million VQA tasks, with most answers represented as a list of 2D points corresponding to locations on the image. A sample consists of a prompt zo like Find all instances of cushions, an input image oo and labels po like [(0.25, 0.11), (0.22, 0.19), (0.53, 0.23)].1 This dataset consists of data automatically generated in simulation and collected from existing real-world datasets; its diversity and tasks enable the HAMSTER VLM to reason about pixel-object relationships across diverse scenes while retaining its semantic generalization capabilities. '] | 5ade1bb2311caac28ff696268ca4149f630ab1a538330d92678c703e37c772f8 | eab1a57cb36aef998b7c1e576fe6782daf963a21 |
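The RoboPoint-style labels quoted in the HAMSTER evidence above, e.g. [(0.25, 0.11), (0.22, 0.19), (0.53, 0.23)], are normalized (x, y) image coordinates. A small helper (an illustration, not the released code) converts them to integer pixel coordinates for a given image size:

```python
# Convert normalized (x, y) points in [0, 1] to pixel coordinates.

def to_pixels(points, width, height):
    return [(round(x * width), round(y * height)) for x, y in points]

pts = to_pixels([(0.25, 0.11), (0.22, 0.19), (0.53, 0.23)], 640, 480)
print(pts)  # [(160, 53), (141, 91), (339, 110)]
```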
explanation | Where improvements are seen (i.e. shorter time steps in RDD expts), are these attributable to the use of TFT or new loss function proposed? | We believe that we can attribute the gains to the loss function (or more precisely the whole algorithm that first learns the nuisance models and then uses the special loss to train a causal effect model). Indeed, we can compare the TFT baseline (same architecture, without the causal learning) to the Causal TFT: we can see in Table 1 that the causal version (especially with linear encoding) performs much better on causal effects (RDD RMSE) than the baseline; in Table 2 the effects are smaller but in the same direction: causally trained TFTs perform better than the baseline with the same architecture. Given that those models share the same architecture, the causal training explains the difference. | ['Table 1', 'Table 2'] | ['images/aa893a7211d4d52f5d306a9f9783a1b828787a3bb5a206c53726327a423dc840.jpg', 'images/f9a92c03c83013c90d048fa7887edcba2be23dc44d059955a57b0ed0cfcbd550.jpg'] | ['table'] | 2 | 3 | 5 | {'We next formalize causal effects and introduce orthogonal learning theory that we leverage in our models (§2.1). We then extend orthogonal learning to time-series models (§2.2), and instantiate this theory with deep learning architectures (§2.3). ': '1'} | {'1': 'We next formalize causal effects and introduce orthogonal learning theory that we leverage in our models (§2.1). We then extend orthogonal learning to time-series models (§2.2), and instantiate this theory with deep learning architectures (§2.3). 
'} | {'images/4e3e83c97846c1bd0666b1ca3f858b461b21405a403b4d3cb48f05429f0ac00f.jpg': '3', 'images/ed28783f3e7c5b97eb5367c8faf6e985d4d1cd5559bbe9996eb598a4f2e35ad9.jpg': '1'} | {'3': 'images/4e3e83c97846c1bd0666b1ca3f858b461b21405a403b4d3cb48f05429f0ac00f.jpg', '1': 'images/ed28783f3e7c5b97eb5367c8faf6e985d4d1cd5559bbe9996eb598a4f2e35ad9.jpg'} | {'images/f9a92c03c83013c90d048fa7887edcba2be23dc44d059955a57b0ed0cfcbd550.jpg': '2', 'images/aa893a7211d4d52f5d306a9f9783a1b828787a3bb5a206c53726327a423dc840.jpg': '1'} | {'2': 'images/f9a92c03c83013c90d048fa7887edcba2be23dc44d059955a57b0ed0cfcbd550.jpg', '1': 'images/aa893a7211d4d52f5d306a9f9783a1b828787a3bb5a206c53726327a423dc840.jpg'} | {} | ['images/ed28783f3e7c5b97eb5367c8faf6e985d4d1cd5559bbe9996eb598a4f2e35ad9.jpg', 'images/4e3e83c97846c1bd0666b1ca3f858b461b21405a403b4d3cb48f05429f0ac00f.jpg', 'We next formalize causal effects and introduce orthogonal learning theory that we leverage in our models (§2.1). We then extend orthogonal learning to time-series models (§2.2), and instantiate this theory with deep learning architectures (§2.3). '] | c8be921abbb88424e4ecfeac997ee491b043523fcfab2cfbe4320658b50ecdfb | ee4de645eb00b034b3705af4020f8ff3dc77856d |
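The orthogonal-learning recipe this answer refers to (first fit nuisance models, then train the causal-effect model on what they leave unexplained) can be illustrated with a toy double-machine-learning estimate. The group-mean nuisance models and the synthetic data below are invented for illustration and are not the paper's method.

```python
# Toy partialling-out estimator: residualize outcome y and treatment t on
# the confounder x (group means as nuisance models), then regress outcome
# residuals on treatment residuals to estimate the causal effect.

def orthogonal_effect(y, t, x):
    groups = {}
    for yi, ti, xi in zip(y, t, x):
        groups.setdefault(xi, []).append((yi, ti))
    ry, rt = [], []
    for yi, ti, xi in zip(y, t, x):
        ys = [a for a, _ in groups[xi]]
        ts = [b for _, b in groups[xi]]
        ry.append(yi - sum(ys) / len(ys))  # outcome residual
        rt.append(ti - sum(ts) / len(ts))  # treatment residual
    num = sum(a * b for a, b in zip(ry, rt))  # no-intercept regression
    den = sum(b * b for b in rt)
    return num / den

# Synthetic data with y = 2*t + x and a binary confounder x: the
# residual-on-residual regression recovers the true effect of 2.
effect = orthogonal_effect([2.0, 4.0, 3.0, 5.0], [1.0, 2.0, 1.0, 2.0],
                           [0, 0, 1, 1])
print(effect)  # 2.0
```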