{"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.04", "parag_1": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until the highest accuracy on D val is reached.", "parag_2": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until ˆ y = y for all samples in D train and take the checkpoint of it when ˆ y reaches the highest accuracy on D val .", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.09", "parag_1": "The task was created with reference to the previous study [25]. Fig- ure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground. First, participants clicked on the start area, and the cursor was fixed at the center of the start area. Assuming the initial position of the cursor may affect the cursor path andthe performance of pointing, we strictly fixed a starting position of the trial. Participants clicked again at the starting position, and the trial began. The start area disappeared as a feedback for the beginning of the trial. Partici- pants aimed at the target and ended the trial with the next click. If participants clicked correctly on the target, we marked the trial as a success; else, the trial was marked as a failure (error). 
We presented a sound feedback in response to the success or failure of the trial.", "parag_2": "The task was created by referring to a previous study [28]. Figure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray background. The participants clicked on the start area; the cursor positioned at the center of the start area. We strictly fixed the starting position of the cursor for the trial assuming that the initial position of the cursor can affect the cursor path and performance of pointing [28]. The trial started once the participant clicked on the starting position. The start area then disappeared, which acted as feedback to indicate the start of the trial. Participants aimed at the target and ended the trial with the next click. If participants clicked the target correctly, we marked the trial as a success; else, the trial was marked as a failure (error). We presented a sound feedback in response to the success or failure of the trial.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the middle part of the paragraph to make it more better. Replace some words in the paragraph.", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Slightly revise for readability, you can reorganise ideas in sentences if necessary.", "annotator": "annotator_07"}} {"id_paragraph": "SyGfyinsH.I2YVGmIp0.00", "parag_1": "A + C + D refers to our approach. In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain and using the direct bound produces a large gain on average; these gains are most noticeable in the tails. Using the accumulated confidence produces a smaller, but still significant, gain. In (c) and (d), we show how the sizes vary with ε and δ , respectively. The trends are similar to those for ResNet.", "parag_2": "A + C + D is our approach. 
As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given ε and δ . In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain; these gains are most noticeable in the tails. Using the accumulated confidence produces a smaller, but still significant, gain. In (c) and (d), we show how the sizes vary with ε and δ , respectively. The trends are similar to those for ResNet.", "annot_1": {"annotation": ["Content_addition", "Concision"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "WldWha1MT.LL2ZsGpJga.03", "parag_1": "A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G . However, it is limited as it ignores the spatial correspondence of the topological features within their respective images (see Figure 2(b)).", "parag_2": "Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G . However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial correspondence (see Fig.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this definition in a more direct and academic style.", "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.03", "parag_1": "We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020). 
The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions. Next we concatenate these representations and apply a linear transformation to obtain cross-modal context representation. We predict channel-wise weights for each modality based on this context representation through two independent fully-connected layers. Finally, these weights are used to re-calibrate the channel-wise features per modality.", "parag_2": "We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020). Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector. We concatenate the two vectors and apply linear transformation. We refer to its output as context representation. Next, for each uni-modal branch, we implement a fully connected layer on the context representation and get a vector with a dimension of the number of feature maps. Feature maps are re-scaled by this vector before passing to the next layer of the uni-modal branch.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rearrange the structure to make the structure clearer.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph completely to make it clearer.", "annotator": "annotator_02"}} {"id_paragraph": "uJRtLYIOIq.e9xxGlB_c.00", "parag_1": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enough constants; for example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design, which is Eq. (1) in section 4. 
Still, given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix [ c + ˜ k ( x i , x j )] Ni,j =1 ⪰ 0 . Hence, in this work, we do not need the value of c , but we can compute it if we do need its value, e.g., deriving the feature map of c + ˜ k .", "parag_2": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added. For example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design (Eq. (1) in section 4). Given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix [ c + ˜ k ( x i , x j )] Ni,j =1 ⪰ 0 . Hence, we do not need the value of c , but we can compute it if needed, e.g., deriving the feature map of c + ˜ k .", "annot_1": {"annotation": ["Concision"], "instruction": "Rewrite some formulations, giving preference to shorter ones.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision"], "instruction": "Shorten this paragraph a bit while keeping all the informations.", "annotator": "annotator_07"}} {"id_paragraph": "xV0XmrSMtk.sYfR73R9z.02", "parag_1": "Discrete Variational Auto-Encoder. In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the MNIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image.", "parag_2": "Discrete Variational Auto-Encoder (DVAE). 
In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise by introducing acronyms earlier.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Introduce the acronym DVAE earlier to avoid repeating it.", "annotator": "annotator_07"}} {"id_paragraph": "PDvmJtmgQb.gGrpxbc7UI.02", "parag_1": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own. Another line of work (Papernot et al., 2016; 2018; Bassily et al., 2018b; Dwork & Feldman, 2018; Nandi & Bassily, 2020) considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily. So far only two papers have considered out-of-distribution data. Bassily et al. (2020c) assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples. They show that halfspaces can be learned in this model. Liu et al. (2021) consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. Abadi et al. 
and Tramer & Boneh (2020) provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD. However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work.", "parag_2": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own. Another line of work [8, 16, 29, 31, 32] considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily. So far only two papers have considered out-of-distribution data. [12] assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples. They show that halfspaces can be learned in this model. [26] consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. [1] and [37] provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD. However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": "I want to use numbers for in-text citations. ", "annotator": "annotator_09"}} {"id_paragraph": "E2pFUCGYZ1.5hMS4Fg2b_b.00", "parag_1": "ADO iterations in the Bayesian framework are shown in Sec. 3.3 and Appendix A.3. 
Finally, with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identified system with an ensemble of parameters. To further improve the prediction capability, especially for chaotic systems, we propose to leverage data assimilation techniques, which is shown in the green box and discussed in Sec.3.4 and Appendix A.5.", "parag_2": "ADO iterations in the Bayesian framework are shown in Sec. 3.3 and supplemental materials. Finally, with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identified system with an ensemble of parameters. To further improve the prediction capability, especially for chaotic systems, we propose to leverage data assimilation techniques, which is shown in the green box and discussed in Sec.3.4 and supplemental materials.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Use \"supplemental materials\" instead of \"Appendix\"", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Lightly revise for readability.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.14", "parag_1": "AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQN-based architectures because the top-K greedy list action building ignores list interdependence.", "parag_2": "AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. 
Additionally, DQN is worse than CDQN-based architectures because the top-K greedy list-action ignores intra-list dependence.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Remove unnecessary words and fix the words if they are not in the correct form", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Remove terms that might be considered biased. Make the writing more clear.", "annotator": "annotator_03"}} {"id_paragraph": "mFNezF8ubW.g-sOkbqBcm.00", "parag_1": "Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile. The hidden nodes of a concept is also connected to the output prediction node for the concept itself and those for each of its children category nodes. An additional type of connectivity constrains the concept and category predictions to follow the hierarchical organization of the ontology. We illustrate each of these connections below.", "parag_2": "Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile. Consequently, the hidden representation for a child concept is computed from that of its parent. Given the representation in capture in the hidden nodes, two types of output prediction nodes detects the presence of the concept itself and any children category in the input. 
An additional type of connectivity explicitly constrains the concept and category predictions to follow the hierarchical organization of the ontology. We illustrate each of these connections below.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.25", "parag_1": "• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach does improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 . 125 for 128 × 128 ), but is also a bit slower. For simplicity, we therefore opted to not using relation networks. The architecture of the set encoder g is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )–FSPool (Zhang et al., 2020). The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 . • Instead of ResNet34 to encode the input image, we use the smaller ResNet18. This did not appear to affect results. • We increase the batch size from 32 to 128. There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization. • We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0. instead of standard gradient descent without momentum. • Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs. This had slightly better training loss than starting training with 40 iterations. 
We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. • We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs. This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss. • In preliminary experiments, we rarely observed spikes in the training loss. Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help.", "parag_2": "• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach would improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 .for 128 × 128 ), but is also a bit slower. For simplicity, we therefore opted to not using relation networks. The architecture of the set encoder g in iDSPN is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )– FSPool. The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 . • Instead of ResNet34 to encode the input image, we use the smaller ResNet18. This did not appear to affect results. • Instead of using a learned initial set Y 0 as in DSPN, we find that it makes no difference to randomly sample the initial set for every example. We therefore use the latter for simplicity. In initial experiments we found that even initializing every element to 0 causes no problems. • We increase the batch size from 32 to 128. 
There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization. • We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0. instead of standard gradient descent without momentum. • Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs. This had slightly better training loss than starting training with 40 iterations. We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. iDSPN training was stable in all of these configurations. • We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs. This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss. • In preliminary experiments, we rarely observed spikes in the training loss. Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "lLwt-9RJ2tm.XJsauLjck.03", "parag_1": "That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative for at least the dissimilarity objective of [15]; ourstructural decomposition of the cost function and its subsequent implications carry over identically. 
In particular for this cost function, our results imply (1 + o (1)) ψ -approximate algorithms for HCin weighted graphs that use ( i ) a single-pass and e", "parag_2": "That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative; we can in fact achieve even stronger performance guarantees for both objectives in the sublinear resource regime by exploiting the fact that their corresponding optimal hierarchies have large objective function values 5 , allowing us to tolerate even larger additive errors in our cut-sparsifiers. A straightforward application of our structural decomposition of the cost function along with its downstream implications in each of the three models of computation directly gives us (1 − o (1 /ψ )) ψ -approximate algorithms for both HC maximization objectives in weighted graphs that use ( i ) a single-pass and e", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "9ALnOEcGN_.4eEIRZ-dm.00", "parag_1": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]. However, thereare several major distinctions between the existing methods and our proposed one. Previous workgenerates heatmaps based on supervised signals (each training graph is paired with its best solution)[4, 19], which are costly to obtain. DIMES is directly optimized with gradients estimated by the REINFORCE algorithm, which do not require supervised signals. 
As a result, DIMES can scale to large graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions without the need for costly generation of supervised training data or human specification of problem-specific heuristics.", "parag_2": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]. However, there are major distinctions between the existing methods and our DIMES. For instance, Fu et al. [17] learn to generate heatmaps via supervised learning (i.e., each training instance is paired with its best solution) [4, 19], which is very costly to obtain on large graphs. DIMES is directly optimized with gradients estimated by the REINFORCE algorithm without any supervision, so it can be trained on large graphs directly. As a result, DIMES can scale to large graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions without the need for costly generation of supervised training data or human specification of problem-specific heuristics.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.16", "parag_1": "Pascal: Scribble annotations. Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new SOTA: We get 75 . 9% mIoU, achieving 98 . 6% of full supervision performance.", "parag_2": "Pascal: Scribble annotations. Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing. We get 74 . 2% ( 76 . 1% ) mIoU, achieving 97 . 5% ( 98 . 
4% ) of full supervision performance in these two categories respectively.", "annot_1": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ByZyHzZC-.HktKf7-AW.01", "parag_1": "Our work is also related to other work on the importance of noise in SGDs, which have been previously explored. The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis allows us to derive the impact of the gradient noise in the SGD stationary distribution. Additionally, our work also provides intuition toward explaining the recently proposed Cyclic Learning Rate (CLR) schedule (Smith, 2015). CLR schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation rather than on a theoretical understanding. We show that one can replace learning rate annealing with an equivalent batch size schedule. It suggests that the benefit of CLR relates to the noise that it induces and can be thought of as mixing in Monte Carlo Markov Chain (MCMC) methods. In the MCMC setting, annealing processes enable better mixing (Graham & Storkey, 2017).", "parag_2": "Our work is also related to the importance of noise in SGD, which has been previously explored. The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis allows us to derive the impact of the gradient’s noise in the SGD stationary distribution. Additionally, our work also provides intuitions toward explaining the recently proposed Cyclic learning rate (CLR) schedule (Smith, 2015). 
Cyclic learning rate schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation. We also show that one can replace learning rate annealing with an equivalent batch size schedule. It suggests that the benefit of cyclic learning rate relates to the noise that it induces.", "annot_1": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove unnecessary content in the last sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Make the last sentence shorter, only keep the main idea. Slightly concise this paragraph and improve the english.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.11", "parag_1": "Design A supportstwo sorts of medication entries: drug or phys- ical activity. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity. All other calendar entries are represented with rectangles filled with different shades of grey.", "parag_2": "Design A supports medication (or drug) entries and physical activ- ities. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity. 
All other calendar entries are represented with rectangles filled with different shades of grey.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make this paragraph a bit more fluid.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "I want to rewrite the first sentence.", "annotator": "annotator_09"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.04", "parag_1": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass,Zhang et al. (2019) backpropagate through the gradient descent iterations in order to compute the gradients of the training objective with respect to the input vector z and the parameters θ of the encoder. ", "parag_2": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass, the goal is to differentiate Equation 7 with respect to the input vector z and the parameters θ of the encoder. To do this, Zhang et al. (2019) unroll the gradient descent applied in the forward pass and backpropagate through each gradient descent step.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Add a sentence to explain the last sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the logical flow of the last half of the paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "cW17DDjQa_.6iDdN7-bYz.00", "parag_1": "We propose an algorithm to solve above optimization problem (3). The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve. 
Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables. Then we tackle the non-differentiable objective function using self-defined numerical differentiation. At last, we summarize all the optimization details into a gradient-based optimization formulation.", "parag_2": "To address the optimization problem (3), we adopt the alternating direction method of multipliers (ADMM) for the reformulation. In detail, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables. Then we tackle the non-differentiable objective function using self-defined numerical differentiation. At last, we summarize all the optimization details into a gradient-based optimization formulation.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "33RNh69fYq.kMvWVl725x.02", "parag_1": "Setup . Anomaly detection aims to detect whether an image contains anomalous regions. The performance is evaluated on MVTec-AD [3]. The image size is selected as 224 × 224 , and the size for resizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4 [37] respectively have the channel of 24, 32, 56, and 160, and they are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. AdamW optimizer [18] with weight decay 1 × 10 − 4 is used for training. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs. 
The neighbor size is set as 7 × 7. The jittering scale and jitteringprobability are chosen as 20 and 1, respectively. The evaluation is run with 5 random seeds.", "parag_2": "Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [4]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[39] are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. AdamW optimizer [20] with weight decay 1 × 10 − 4 is used. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs. The neighbor size, jittering scale, andjittering probability are set as 7 × 7, 20, and 1, respectively. The evaluation is run with 5 random seeds.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove some details on model training to make the paragraph more concise.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision"], "instruction": "Remove unnecessary details to shorten this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.21", "parag_1": "In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. Then, we hypothesized that the existence of the complex relations between actionsin the environment (e.g., tools and activators in CREATE) injects the complex action relations in the environment. For instance, an appropriate pair of an activator and a tool to use in CREATE depends on the situation. 
To this end, we implemented the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended. Since action relations are complex, AGILE is expected to outperform the ablations. Figure 14 shows that AGILE beats the baselines and in Fig. AGILE slightly but consistently outperforms the ablations. In Fig.16, AGILE outperformed AGILEGCN shows that a GAT is capable of modeling the action relations correctly and AGILE converging faster than AGILE Only Action shows that the intermediate list information is crucial to efficiently learn to attend the other half in the pairing of items.", "parag_2": "In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations. In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. We hypothesize that these environments require complex relations between actions (e.g., tools and activators in CREATE). To this end, we implement the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended. Since action relations are complex, AGILE is expected to outperform the ablations. Figure 14 shows that AGILE beats the baselines and in Fig.15 AGILE slightly but consistently outperforms the ablations. In Fig.16, AGILE outperforming AGILE-GCN shows that a GAT is capable of modeling the action relations correctly. AGILE converges faster than AGILE Only-Action. 
This shows that the state and the partially constructed list are crucial to learning to attend the other half in pairing items efficiently.", "annot_1": {"annotation": ["Rewriting_medium", "Content_deletion"], "instruction": "Make this paragraph shorter and easier to understand", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Concision"], "instruction": "Simplify the less essential ideas of the paragraph to make it more concise.", "annotator": "annotator_03"}} {"id_paragraph": "NwOG107NKJ.0PPYM22rdB.02", "parag_1": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) Weber and Luo [2014]. Other features includeproject volume, documentation volume, presence ofsupporting files, codevolume and standardlibrary usage. The popularity velocity can be measured by (Total_Stars / project_life). Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs.", "parag_2": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) [Weber and Luo, 2014]. Other features include project size, file volume, critical folder, lines of code and calling of basic functions. The popularity rate can be measured by (Total_Stars / project_life). Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make the use of a citation in the second sentence correct. 
Update the third sentence.", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the readability of this paragraph.", "annotator": "annotator_03"}} {"id_paragraph": "ByZyHzZC-.HktKf7-AW.00", "parag_1": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In particular, Mandt et al. (2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with Mandt et al. However, while Mandt et al. (2017) aims at performing approximate Bayesian inference, our end goal is to analyse the stationary distribution reached by SGD.", "parag_2": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In particular, Mandt et al. 
(2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with Mandt et al. However, while Mandt et al. (2017) study SGD as an approximate Bayesian inference method in the final phase of optimization in a locally convex setting, our end goal is to analyse the stationary distribution over the entire parameter space reached by SGD. Further, our analysis allows us to compare the probability of SGD ending up in one minima over another, which is novel in our case.", "annot_1": {"annotation": ["Development", "Content_addition"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.05", "parag_1": "During training, the uni-modal branch largely focuses on the associated modality. The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:", "parag_2": "During training, each uni-modal branch largely focuses on its associate input modality. 
The fusion modules generate context representation using all modalities and feed such information to the unimodal branches. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make the sentence understandable.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the wording of this paragraph.", "annotator": "annotator_02"}} {"id_paragraph": "eyheq0JfG.lDLi0nFVcl.00", "parag_1": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%. This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).", "parag_2": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%. In comparison, when we trained Real-to-Bin Martinez et al. (2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II. This suggests that, thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).", "annot_1": {"annotation": ["Content_addition", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.05", "parag_1": "Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not necessarily setequivariant. Zhang et al. find that FSPool-based encoders (Zhang et al., 2020) perform by far the best among the ones they have tried. With this type of encoder, DSPN becomes exclusively multiset-equivariant . 
This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant. We prove this in Appendix A.", "parag_2": "Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but depending on the encoder, it is not necessarily set-equivariant. Zhang et al. find that FSPool-based encoders (Zhang et al., 2020) achieved by far the best results among the encoders they have tried. With FSPool, DSPN becomes exclusively multiset-equivariant to its initialization Y 0 . This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant (Appendix A).", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.05", "parag_1": "Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly introduced recursive learning in DRCN to decrease model size (Kim et al., 2016b). Ahn et al . designed a cascading mechanism upon a residual network in CARN (Ahn et al., 2018). Hui et al . proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019). Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020). Recently, neural architecture search was introduced for image SR in FALSR (Chu et al., 2019a). Besides, model compression techniques, like knowledge distillation, have been investigated for image SR. He et al . 
proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020). Lee et al . trained a teacher network to distill its knowledge to a student (Lee et al., 2020). Although those lightweight networks have achieved great progress, we still need to investigate deeper for more efficient image SR models.", "parag_2": "Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly decreased parameter number by utilizing recursive learning in DRCN (Kim et al., 2016b). Ahn et al . proposed CARN by designing a cascading mechanism upon a residual network (Ahn et al., 2018). Hui et al . proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019). Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020). Recently, neural architecture search was applied for image SR, like FALSR (Chu et al., 2019a). Also, model compression techniques have been explored for image SR. He et al . proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020). Lee et al . distilled knowledge from a larger teacher network to a student one (Lee et al., 2020). 
Those lightweight image SR models have obtained great progress, but we still need to investigate deeper for more efficient image SR models.", "annot_1": {"annotation": ["Rewriting_medium", "Concision"], "instruction": "Can you make my paragraph more concise?", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Concision"], "instruction": "Use shorter formulations and more direct language to make the paragraph more concise.", "annotator": "annotator_04"}} {"id_paragraph": "gIp_U0JsFa.T3RdAsTpzN.00", "parag_1": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [78, 84, 32]. 5 A Algorithm 1 (Conditional) independence testing assessing the nature of shift S on a single variable U ∈ G .", "parag_2": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). 
As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [66, 71, 27].", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.22", "parag_1": "We report means and standard deviations of the models’ test accuracy in Table 1.[-\n-] The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, where their performances are very close.", "parag_2": "We report means and standard deviations of the models’ test accuracies in Table 1.[-\n-] 3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods in all four cases.", "annot_1": {"annotation": ["Content_substitution", "Development"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1-LZxvKX.rJ009I8RX.03", "parag_1": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch. NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude. Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training. 
These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks. As will be discussed in Section 5, our method, more scalable and computationally efficient than these previous approaches, fully closed the generalization gap for the first time between training a compact sparse network and compression of a large deep CNN.", "parag_2": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch. NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude. Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training. These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks. We show that the method we propose in this paper is more scalable and computationally efficient than these previous approaches, while achieving better performance on deep convolutional networks.", "annot_1": {"annotation": ["Concision"], "instruction": "Edit the last sentence of this paragraph to make it shorter and remove the reference to Section 5.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Rewrite the last sentence to make it more concise.", "annotator": "annotator_07"}} {"id_paragraph": "XXtXW925iG.JHwYPw52XHb.00", "parag_1": "In the previous section, we showed that the limiting diffusion exists when ⌘ and \u0000 go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ⌘\u0000 ! 
0 while ⌘\u0000 varies and is only upper bounded by some constant. A concrete example is ⌘ ! 0and \u0000 being fixed.", "parag_2": "In the previous section, we showed that the limiting diffusion exists when η and λ go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant. A concrete example is η → 0 and λ beingfixed.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aFWzpdwEna.MCecpd3utK.00", "parag_1": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to. To address this problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method, Pareto policy pool (P3), that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, providing the flexibility to select the best policy for each realistic environment from the pool. P3 provides a simple and principal approach that addresses the two major challenges in model-based offline RL: “model exploitation” and generalization to different unseen states. On the D4RL benchmark, P3 substantially outperforms several recent baseline methods over multiple tasks and shows the potentiality of learning a generalizable policy when the quality of pre-collected experiences is low.", "parag_2": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment. 
To address the problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, which provides flexibility to select the best policy in the inference stage. We extensively validate the efficacy of our method on the D4RL benchmark, where ours largely outperforms several recent baselines and exhibits promising results on low-quality datasets.", "annot_1": {"annotation": ["Concision", "Rewriting_heavy"], "instruction": "Make this paragraph more concise by rewriting the second half.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision", "Content_deletion"], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "YkiRt7L93m.jgDbnUD7s.01", "parag_1": "We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. Italso provides a unique solution to the projection problem under mild conditions and can replicate the geometric properties of the target measure, such as its shape and support. To achieve this, we work in the 2Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2 -Wasserstein distance.", "parag_2": "A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of general probability measures supported on Euclidean spaces. It provides a unique solution to the projection problem under mild conditions. 
To achieve this, we work in the 2 -Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2 -Wasserstein distance.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Please, make this paragraph easier to read.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite and reorganise this paragraph to improve the english and be more convincing, let the last sentence as it is.", "annotator": "annotator_07"}} {"id_paragraph": "jzQGmT-R1q.ugUt9B3XaO.02", "parag_1": "In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, but we see a similar trend across most architectures and prediction tasks. The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks. This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions.", "parag_2": "In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by saturated units and that this phenomenon will be easiest to detect in settings where the network architecture is not highly over-parameterized relative to the prediction task. 
The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks. This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.08", "parag_1": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an opticalmouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI). The mouse-cursor speed via the OS setting was set to the middle of the slider in the control display and ” Enhance pointer precision ” setting was turned on to match the participant’s usual settings. The experimental system was implemented with Hot soup processor 3.6 and used in full-screen mode.", "parag_2": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an optical mouse (Logitech gaming mouse, G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was turned on to match the usual settings of the participant.). 
The experimental system was implemented with Hot Soup Processor 3.6 and used in the full-screen mode 1 .", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Slightly revise the linking between phrases.", "annotator": "annotator_07"}} {"id_paragraph": "_nwyDQp-7.85dN7i1zNm.00", "parag_1": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. This assumption leads to the bounds having the following form:", "parag_2": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. Intuitively, this means that source and target tasks are independent, which does not reflect real-world applications of few-shot learning where the former are often different draws (without replacement) from the same dataset. Under this unrealistic assumption, the above-mentioned works obtained the bounds having the following form:", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.02", "parag_1": "Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i . e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i . e ., individuals have their preferences regarding treatment selection, making the population across different groups heterogeneous. 
To cope with missing counterfactuals, meta-learners (R et al., 2019) decompose the ITE estimation task into solvable subproblems. However,as shown in Section 2.1, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained over the treated/untreated group to the entire population, and the ITE estimation isthus biased.Representation-based methods mitigate this selection bias by minimizing the distribution discrepancy between groups in the representation space. In particular, Uri et al.", "parag_2": "Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i . e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i . e ., individuals have their preferences for treatment selection, making units in different treatment groups heterogeneous. To handle missing counterfactuals, meta-learners (K¨unzel et al., 2019) decompose the ITE estimation task into solvable factual outcome estimation subproblems. However, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained within respective treatment groups to the entire population; consequently, the derived ITE estimator is biased.", "annot_1": {"annotation": ["Unusable", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.07", "parag_1": "We first give a brief view of the problem setting about deep CNN for image SR. We also observe that there exists heavy redundancy in the networks. To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to compress them.", "parag_2": "We first present an overview of the problem setting about deep CNN for image SR. It is also observed that excessive redundancy exists in the SR deep CNNs. 
Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more efficient SR networks.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Can you paraphrase the last sentence?", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the last sentence preferring passive voice over active.", "annotator": "annotator_04"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.02", "parag_1": "Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021). Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on protein-protein binding. The major challenge is the scarcity of experimental data — only a few thousands of protein mutations annotated with the change in binding affinity are publicly available (Geng et al., 2019b). This hinders supervised learning as the insufficiency of training data tends to cause over-fitting. Another difficulty is the absence of the structure of mutated protein-protein complexes. Mutating amino acids on a protein complex leads to changes on sidechain conformations (rotamers) (Najmanovich et al., 2000; Gaudreault et al., 2012). They account for the change in binding free energy but we do not have the knowledge of how exactly the conformation changes upon mutation.", "parag_2": "Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021). However, developing deep learning-based models to predict mutational effects on protein-protein binding is challenging due to the scarcity of experimental data. 
Only a few thousand protein mutations, annotated with changes in binding affinity, are publicly available (Geng et al., 2019b), making supervised learning challenging due to the potential for overfitting with insufficient training data. Another difficulty is the absence of the structure of mutated protein-protein complexes. Mutating amino acids on a protein complex leads to changes mainly in sidechain conformations (Najmanovich et al., 2000; Gaudreault et al., 2012), which contribute to the change in binding free energy. However, the exact conformational changes upon mutation are unknown.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the following paragraph using a more formal language.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph for better readability.", "annotator": "annotator_07"}} {"id_paragraph": "g5N2H6sr7.6J3ec8Dl3p.02", "parag_1": "Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 1 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN.", "parag_2": "Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We also include the results of recent supervised graph classification models: GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), GIN (Xu et al., 2019b). 
We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 2 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.11", "parag_1": "We defined the notch position ( Position ) as the condition. Position = Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target. When the angle of entry to a target adjacent to a top edge with respect to the target was based on they-axis, an equivalent effect was observed at the angles of entry that were lineally symmetric about the y-axis [3]. Therefore, the performance would be the same whether the target was to the left or right of the starting area. To avoid increasing the workload of the participant, we always placed the starting area to the left of the target.", "parag_2": "We defined the notch position ( Position ) as the condition. Position = Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target. An equivalent effect is observed at angles of entry that are lineally symmetric about the y-axis when the angle of entry the target adjacent to a top edge with respect to the target is based on the y-axis [3]. Therefore, the performance is the same whether the target is to the left or right of the starting area. 
We always place the starting area to the left of the target to avoid increasing the workload of the participant.", "annot_1": {"annotation": ["Rewriting_medium", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aVemIPPM7t.-8hV3QV4L9.00", "parag_1": "Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM. It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described in this paper.", "parag_2": "Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM. It takes less than a week of compute on a single r5.24xlarge instance to run all the experiments described in this paper.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "SRquLaHRM4.vI2x5N-YHC.00", "parag_1": "We solve this problem by introducing the optimal transport theory [51] and formulate the feature sets as a discrete probability distribution where each feature has an equal probability value. Furthermore, to reduce the computational cost and avoid the extra model parameters, we learn the prompts with a two-stage optimization strategy. At the first stage in the inner loop, we fix both visual and text features and optimize the optimal transport problem by a fast Sinkhorn distances algorithm. Then, in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learn the prompts with different characteristics. Compared with conventional distance (such as Euclidean distance of mean features), optimal transport can align different visual features for each local prompt, which is more robust to the visual misalignment and tolerates well feature shift [44]. 
It is because OT learns an adaptive transport plan to align features, which achieves fine-grained matching across two modalities. We conduct experiments on 11 datasets following the standard setting of CLIP [39] and CoOp [63] to evaluate our method. These experiments span the visual classification of generic objects, scenes, actions, fine-grained categories, and so on. The significant result improvement demonstrates that PLOT can effectively learn representative and comprehensive prompts.", "parag_2": "We solve this problem by introducing the optimal transport theory [50] and formulate the feature sets as a discrete probability distribution where each feature has an equal probability value. Furthermore, to reduce the computational cost and avoid the extra model parameters, we learn the prompts with a two-stage optimization strategy. At the first stage in the inner loop, we fix both visual and text features and optimize the optimal transport problem by a fast Sinkhorn distances algorithm. Then, in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learn the prompts with different characteristics. We conduct comprehensive experiments on 11 datasets following the standard setting of CLIP [39] and CoOp [62] to evaluate our method. These experiments span the visual classification on generic objects, scenes, actions, fine-grained categories and so on. The significant result improvement demonstrates that PLOT can effectively learn representative and comprehensive prompts.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove any unessential information in this paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Please exclude the content related to optimal transport.", "annotator": "annotator_09"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.20", "parag_1": "Model Size and Mult-Adds. 
Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number. We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720. Our SRPN-L operates less Mult-Adds than most compared methods. Those comparisons indicate that SRP reduces parameters and operations efficiently.", "parag_2": "Model Size and Mult-Adds. Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN. The comparison in terms of Mult-Adds (measured when the output size is set to 3 × 1,280 × 720) is also presented. As seen, our SRPN-Lite costs fewer Mult-Adds than most comparison methods. These results demonstrate the merits of SRP against other counterparts in striking a better network performance-complexity trade-off.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Give me a more formal version of this paragraph", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the text and change SRPN-L to SRPN-Lite", "annotator": "annotator_06"}} {"id_paragraph": "MnewiFDvHZ.iAYttXl-uH.00", "parag_1": "• Fixed constraints g_t(x) = g(x), ∀t, where the constraint functions are the same across the time but they are not necessary to be known when making decision at round t . Note the setting of known and fixed constraints in [14, 17, 29, 33] is a special case of ours. • Adversarial constraints g_t(x), where the constraint function g_t(x) is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as in [24, 20, 30].", "parag_2": "• Fixed constraints g_t(x) = g(x), ∀t, where the constraint function is known (fixed) when making decision at round t as in [15, 12, 30, 26]. 
• Adversarial constraints g_t(x), where the constraint function g_t(x) is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as in [22, 18, 27].", "annot_1": {"annotation": ["Concision"], "instruction": "Make paragraph more concise", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "3686sm4Cs.AJMXMDLVn.01", "parag_1": "Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100, CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters. shows that our approach outperforms or is on par with prior work in efficient ensembling. b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles. See Section 4 for discussion", "parag_2": "Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100, CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters. Unlike methods like BatchEnsemble (BE) (Wen et al., 2020) and MIMO (Havasi et al., 2021), which shows that our approach outperforms or is on par with prior work in efficient ensembling. (b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles. 
See Section 4 for discussion", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.08", "parag_1": "However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration. As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochastic mini-batch level:", "parag_2": "However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration. A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "check the wordings but keep the original content as much as possible", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the language to make it more formal.", "annotator": "annotator_07"}} {"id_paragraph": "5Eyr2crzI.s502diDSt.00", "parag_1": "We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made. We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable.", "parag_2": "We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. 7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration from 2 to 128. From N = 16 and lower, coverage performance starts to diminish while little speed gains are made. 
We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.15", "parag_1": "Pascal: Image tag annotations. On Pascal VOC dataset, our method outperforms others by a large margin. Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU by a sizable 4.5% .", "parag_2": "Pascal: Image tag annotations. Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4.4% , and those without saliency by 5 .", "annot_1": {"annotation": ["Content_deletion", "Content_substitution"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OzYyHKPyj7.O9Mk1uqXra.01", "parag_1": "The stack of Joulin & Mikolov (2015) simulates partial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively). In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed. The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018). The stack reading is the top cell of the stack. This model has quadratic time and space complexity with respect to input length. 
We refer the reader to Appendix A.2 for full details.", "parag_2": "The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stacks. In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed. The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018). The stack reading is the top cell of the stack. This model has quadratic time and space complexity with respect to input length. We refer the reader to Appendix A.2 for full details.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "BkwlK_dPB.SJfZLu8oB.00", "parag_1": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem such as volume of the goal set |F RLgoal | and how complex and long the solution needs to be. ˆ b depends on the probability of sampling states that will expand the solution in the right direction. Therefore, ˆ b is a function of the dimensionality of S and the visibility of F , i.e. how constrained the tree expansion is. 
We refer the reader to Appendix S for more details on how the tail bound in Theorem 1 is derived.", "parag_2": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem. It grows as |F RLgoal | becomes smaller or as the length of the solution trajectory becomes longer. ˆ b depends on the probability of sampling states that will expand the tree in the right direction. It therefore shrinks as the dimensionality of S increases. We refer the reader to Appendix S2 for more details on the meaning of ˆ a, ˆ b and the derivation of the tail bound in Theorem 1.", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the text to make it more direct and readable when necessary.", "annotator": "annotator_07"}} {"id_paragraph": "URRc6L6nmE.yUoqIf6zGY.00", "parag_1": "A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al.", "parag_2": "A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. 
A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "kAwMEYEIN.RlDWAM6qF.00", "parag_1": "HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widely used L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is a better choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training. We believe this work provides important insights into the loss design in Physics-Informed deep learning.", "parag_2": "HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widely used L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is a better choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training. One limitation of this work is that we only work on the HJB Equation. Theoretical investigation of other important equations can be an exciting direction for future works. We believe this work provides important insights into the loss design in Physics-Informed deep learning.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "YCmehaMzt.kHwUIOFr_.00", "parag_1": "In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples. 
Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this composing method under different training strategies and find that it can always keep effective. Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises. It can serve as a new benchmark to evaluate the abilities to learm critical information under the disturbance of composed non-semantic representations.", "parag_2": "Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this ensemble method under different training strategies and find that it can always keep effective. Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises. It can serve as a new benchmark to evaluate the ability to learn critical information under the disturbance of composed non-semantic representations.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Change the idea of \"composition\" to \"ensemble\" if this paragraph. Fix any spelling mistake.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the first sentence. 
Improve English in this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "NcdK3bdqnA.kF_TmXY8G0.00", "parag_1": "The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections. The two types of image-specific linear projections do not lead to substantial performance differences. Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency.", "parag_2": "The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias. Introducing additional image-specific linear projection weights does not lead to further performance increase. Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency.", "annot_1": {"annotation": ["Rewriting_medium", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "mS4xvgSiEH.i-a3xp3usm.00", "parag_1": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.", "parag_2": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. 
We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "g5N2H6sr7.6J3ec8Dl3p.04", "parag_1": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.", "parag_2": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.06", "parag_1": "Neural Network Pruning. Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pruning (a.k.a. unstructured pruning). The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., a scalar). Structured pruning results in regular sparsity after pruning. It does not demand any special hardware features to achieve considerable practical acceleration. In contrast, unstructured pruning leads to irregular sparsity. 
Leveraging the irregular sparsity for acceleration typically demands special software libraries, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platform (Han et al., 2016a). In this paper, we focus on filter pruning for easy acceleration. Most efforts in pruning (mainly in classification task) have been spent on finding a better pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017). Magnitude-based (Han et al., 2015; 2016b; Li et al., 2017) is the most prevailing criterion, which we will also employ to develop our method in this paper. As far as we know, no work before has managed to apply filter pruning to compressing image SR networks. This paper is meant to fill the blank.", "parag_2": "Neural Network Pruning. Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). The methodology of pruning mainly falls into two groups: filter pruning (or more generally known as structured pruning) * and weight-element pruning (also referred to as unstructured pruning). The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., scalars). Structured pruning results in regular sparsity after pruning. It does not demand any special hardware features to achieve considerable practical acceleration. In contrast, unstructured pruning leads to irregular sparsity. Leveraging the irregular sparsity for acceleration typically demands special software supports, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platforms (Han et al., 2016a). In this paper, we tackle filter pruning instead of weight-element pruning for effortless acceleration. 
The major efforts in pruning (mainly in image classification) have been focusing on proposing a more sound pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017). Criteria based on weight magnitude (Han et al., 2015; 2016b; Li et al., 2017) are the most prevailing ones, which we will also employ to develop our method in this paper.", "annot_1": {"annotation": ["Development", "Concision"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Rewrite the last sentence to make it more concise by removing shortcomings of other work.", "annotator": "annotator_04"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.21", "parag_1": "Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla). For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40 and", "parag_2": "Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019). For each algorithm, we train each model three times with the same learning rate. We use 0.01, 0.1 and 0.01 as learning rate for Colored-and-gray-MNIST, ModelNet40 and", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "sIqSoZ9KiO.KLlOZMoJ9G.01", "parag_1": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. 
Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there is a single stochastic vector and not a hierarchy of feature maps, which enables better control of the latent space. In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations.", "parag_2": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning, at least in the sense of (Higgins et al., 2016) where the aim is to factorize the dimensions of a latent vector. In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make sentence precise.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Rephrase the second sentence, mostly focusing on the second half.", "annotator": "annotator_07"}} {"id_paragraph": "q4rMz7ZfFG.uyxGiQeMP.01", "parag_1": "We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”. 
The source code finds all substrings by calling re.findall () build-in fucntion. In the second case, the query is “Combing the individual byte arrays into one array”, and the model searches a source code from Java candidate codes. As we can see, the source code concatenates multiple arrays into one array by calling System.arraycopy () build-in fucntion.", "parag_2": "We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes. In the fine-turning step, we set the learning rate as 2e-5, the batch size as 32, the max sequence length of queries and codes as 128 and 256, and the max number of nodes as 64. We use the Adam optimizer to update model parameters and perform early stopping on the development set.", "annot_1": {"annotation": ["Rewriting_heavy", "Content_substitution"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.04", "parag_1": "SRCNN. Tai et al . later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure. Lim et al . (Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters. Zhang et al . (Zhang et al., 2018b) proposed an even deeper network, residual channel attention network (RCAN), where the attention mechanism was firstly introduced in image SR. Liu et al . proposed FRANet (Liu et al., 2020) to make the residual features more focused on critical spatial contents. Later, Zhang et al . (Zhang et al., 2019) proposed residual non-local attention for image restoration, including image SR. Mei et al . proposed CSNLN (Mei et al., 2020) by combining local, in-scale/cross-scale non-local feature correlations, and external statistics. 
Most of them have achieved state-of-the-art results with deeper and wider networks. However, they suffer from huge model size (i.e., network parameter number) and/or heavy computation operations (i.e., FLOPs).", "parag_2": "SRCNN. Tai et al . later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure. Lim et al . (Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters. Zhang et al . (Zhang et al., 2018b) proposed an even deeper network, residual channel attention network (RCAN), where the attention mechanism was firstly introduced in image SR. Liu et al . proposed FRANet (Liu et al., 2020) to make the residual features focus on critical spatial contents. Later, Zhang et al . (Zhang et al., 2019) proposed residual non-local attention for image restoration. Mei et al . proposed CSNLN (Mei et al., 2020) by combining local, in-scale/cross-scale non-local feature correlations, and external statistics. Most of those methods have achieved SOTA results. However, they suffer from huge model size (i.e., network parameter number) and/or heavy computation operations (i.e., FLOPs).", "annot_1": {"annotation": ["Concision"], "instruction": "Be more concise.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Use shorter formulations to make some sentences more concise.", "annotator": "annotator_04"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.16", "parag_1": "In Figure 6, we analyze the agent performance qualitatively. (a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially covers all the tools that get activated with spring, such as trampoline. At t = 1 , the trampoline tool is selected with a strong attention on spring. 
This shows that for selecting trampoline, the agent checks for the presence of spring, so it is possible to place it before or after the trampoline. (b) In Grid World, we visualize the Summary-GAT ablation to see how summarizer utilizes attention. We consider the case where both dig − lava skills are available. The agent goes right, digs the orange lava, and is about to enter the pink lava. At this point, the Right action attends with a high weight to Dig − Pink skill, checking for its presence before making an irreversible decision of entering the lava. In contrast, the Utility Policy always follows the safe suboptimal path as it is blind to the knowledge of dig-skills before entering lava. Finally, in RecSim, we observe that the agent is able to maximize the CPR score by selecting 5 out of 6 items in the list from the same primary category. In contrast, Utility Policy cannot determine the most common category and is unable to maximize CPR well.", "parag_2": "In Figure 6, we analyze the agent performance qualitatively. (a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially the tools that get activated with spring , such as trampoline . At t = 1 , the trampoline tool is selected with strong attention on spring . This shows that for selecting the trampoline , the agent checks for its activator, spring , to ensure that it is possible to place spring before or after the trampoline. (b) In Grid World, we visualize the inter-action attention in Summary-GAT ’s summarizer. We consider the case where both dig − lava skills are available. The agent goes right, digs the orange lava, and is about to enter the pink lava. At this point, the Right action attends with a large weight to the Dig − Pink skill, checking for its presence before making an irreversible decision of entering the lava. 
In contrast, the Utility Policy always follows the safe suboptimal path as it is blind to the knowledge of dig-skills before entering lava. (c) In RecSim, we observe that the agent can maximize the CPR score by selecting 5 out of 6 items in the list from the same primary category. In contrast, Utility Policy cannot determine the most common available category and is unable to maximize CPR.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make this paragraph better. Rewrite a sentence about the Grid World", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the clarity in this paragraph.", "annotator": "annotator_03"}} {"id_paragraph": "aFzc_2nNz.WIdHkazOg.00", "parag_1": "Further improvement is expected if γ is selected independently for each training sample as shown through the Sample-Dependent Focal Loss (FLSD-53) proposed in [19], which, however, is based on heuristics and, as shown in this paper, does not generalize well. In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal loss) and adaptively modifies γ t for different groups of samples based on (1) γ t −from the previous step (2) the magnitude of the model’s under/over-confidence. We evaluate AdaFocal on various image recognition tasks and one NLP task, covering a variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy. Additionally, models trained with AdaFocal are shown to achieve a significant boost in out-of-distribution detection capability.", "parag_2": "Further improvement is expected if γ is selected independently for each training sample (Sample-Dependent Focal Loss (FLSD-53) [19]). However, FLSD-53 is based on heuristics and does not generalize well. 
In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal) loss and adaptively modifies γ t for different groups of samples based on γ t − 1 from the previous step and the knowledge of model’s under/over-confidence on the validation set. We evaluate AdaFocal on various image recognition and one NLP task, covering a wide variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy. Additionally, we show that models trained with AdaFocal achieve a significant boost in out-of-distribution detection.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make the ideas in these paragraph more modular and easier to understand.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Concise this academic paragraph a bit and smooth out the writing.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.17", "parag_1": "L 1 -norm as pruning criterion, the same as (Li et al., 2017). However, our results are significantly better than theirs. The reason is that they do not impose any regularization on the pruned structure, thus the kept feature map channels are misaligned in residual blocks after pruning. In contrast, our method does not have this problem, thanks to the proposed structure regularization.", "parag_2": "L 1 -norm as the scoring criterion to select unimportant filters, same as (Li et al., 2017). Nevertheless, our method delivers significantly better results than theirs. The primary reason is that they do not impose any regularization on the pruned structure; the remaining feature maps are thus mismatched in residual blocks after pruning. 
In contrast, our method SRP is not bothered by this issue, showing the effectiveness of our proposed structure regularization.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}} {"id_paragraph": "ryaiZC9KQ.ryt3YptA7.00", "parag_1": " DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts, suggesting that modern DNNs approximately follow a similar bag-of-feature strategy.", "parag_2": "ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years are mostly achieved by better fine-tuning rather than by qualitatively different decision strategies.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "eYzycFMXwr.8-KFmZiCM.01", "parag_1": "Normally, C a is too large because the size of batch is too large. In this case, we can reduce the size of batch and the number of the model partitions, and replicate each stage on the newly idle accelerator devices for data parallelism to increase the total batch size to the original size, as shown in Figure 6. This not only reduces the C a, but also does not add new accelerator device and change the batch size. Of course, we need to weigh the communication overhead between model parallelism ( C WPipe ) and data parallelism ( C DP ). When C WPipe is greater than C DP , we reduce the model-parallel batch size and the number of model partitions, and increase the number of data-parallel groups.", "parag_2": "Normally, too large C a is caused by too large microbatch. 
We can proportionally reduce the depth of the pipeline while reducing the size of the microbatch, and then we proportionally increase the width of data parallelism to maintain the same global batch size, as shown in Figure 6. As a result, the size of the micro-batch becomes smaller, and the C a also decreases, while the number of accelerators and the global batch size remain unchanged. Of course, we need to weigh the communication overhead between model parallelism ( C WPipe ) and data parallelism ( C DP ) to choose the appropriate ratio of depth ( d ) and width ( w ). When C WPipe is greater than C DP , we reduce the value of d : w, d ∗ w = N GPU .", "annot_1": {"annotation": ["Development", "Rewriting_heavy"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "NvI7ejSHFe.ppieLd2M4a.00", "parag_1": "PAU (Molina et al., 2019) leverages Padé approximation to form its search space. Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as an smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013). Our work proposes to learn adaptive activation function as a weighted sum of candidate functions, whose weights can be adapted to the underlying physics laws when modelling different PDE systems. While learning combinations of activation functions has been studied for convolutional neural networks on image classification (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018), we would like to argue that it is non-trivial to explore this idea in the context of PINNs. First, different PDE systems have various characteristics, which are difficult to model accurately by a single activation function. 
In contrast, learning a combination of candidate functions makes it possible to embed prior knowledge about the physics system into neural networks by including activation functions with suitable properties. In addition, while previous methods (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018) experiment with limit choice of activation functions, we add most of commonly-used activation functions into the candidate function set to ensure its diversity and to avoid the repetitive evaluations of each candidate activation function for different PDEs.", "parag_2": "PAU (Molina et al., 2019) leverages Padé approximation to form its search space. Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as a smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013). Our work proposes to learn an adaptive activation function as a weighted sum of candidate functions, whose weights can be adapted to the underlying physics laws when modelling different PDE systems. While similar ideas have been studied for convolutional neural networks in image classification (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018; Sütfeld et al., 2020), some technical challenges remain unexplored in the context of PINNs, which have a higher demand for the smoothness and diversity of the candidate functions. First, the optimization of PDE-based constraints needs the activation function to provide higher-order derivatives, which causes the failure of widely-used ReLUs in PINNs. Second, unlike the image classification tasks, different PDE systems could have various characteristics, such as periodicity and rapid decay. This leads to a higher requirement for the diversity of the candidate functions. 
To overcome these challenges, we propose to build the candidate function set with simple elementary functions to embed the prior knowledge of physics systems, as well as commonly-used activation functions to ensure the diversity.", "annot_1": {"annotation": ["Rewriting_heavy", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "PDvmJtmgQb.gGrpxbc7UI.01", "parag_1": "In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. indistribution ) (Bassily et al., 2018a; Zhou et al., 2020; Kairouz et al., 2021a; Asi et al., 2021), and where the distributions are different (a.k.a. out-of-distribution ) (Abadi et al., 2016; Papernot et al., 2016; 2018; Liu et al., 2021). In principle, our algorithm can be used in out-of-distribution settings , but our results in this paper are for the in-distribution case. In the in-distribution setting, it is typical that there are fewer public data samples available than private data samples – i.e., n pub ≪ n priv – as it is harder to obtain public data sets than ones with privacy constraints attached. In-distribution public data could come from either altruistic opt-in users (Merriman, 2014; Avent et al., 2017) or from users who are incentivized to provide such data (e.g., mechanical turks). Out-of-distribution public data may be easier to obtain but can have various degrees of freedom; e.g., the domains of private and public data may not be identical, the representation of some classes may vary, the distributions can be mean shifted, etc. It is usually hard to quantify these degrees of freedom to the extent that we can provide precise guarantees. 
Hence, we leave this aspect for future exploration, and work with the idealized assumption that the public data comes from the same distribution as the private data, or, at least, that the differences between these two distributions are not material.", "parag_2": "In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. in-distribution ) [4, 7, 23, 39], and where the distributions are different (a.k.a. out-of-distribution ) [1, 26, 31, 32]. In principle, our algorithm can be used in out-of-distribution settings , but our results in this paper are for the in-distribution case. In the in-distribution setting, it is typical that there are fewer public data samples available than private data samples – i.e., n pub ≪ n priv – as it is harder to obtain public data sets than ones with privacy constraints attached. In-distribution public data could come from either altruistic opt-in users [5, 28] or from users who are incentivized to provide such data (e.g., mechanical turks). Out-of-distribution public data may be easier to obtain but can have various degrees of freedom; e.g., the domains of private and public data may not be identical, the representation of some classes may vary, the distributions can be mean shifted, etc. It is usually hard to quantify these degrees of freedom to the extent that we can provide precise guarantees. 
Hence, we leave this aspect for future exploration, and work with the idealized assumption that the public data comes from the same distribution as the private data, or, at least, that the differences between these two distributions are not material.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": "Convert in-text citations to numbers.", "annotator": "annotator_09"}} {"id_paragraph": "lLwt-9RJ2tm.XJsauLjck.01", "parag_1": "Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V . discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq.", "parag_2": "Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V . is the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to the cost also appears as a positive term in its parent’s contribution to the cost. We can pass this term as a discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "isfcBsgB-H.SBe0hOLmg9.00", "parag_1": "Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability. Here we propose using chemical reactions to assist learning molecule representation. The key idea of our approach is to preserve the equivalence of molecules with respect to chemical reactions in the embedding space, i.e., forcing the sum of reactant embeddings and the sum of product embeddings to be equal for each chemical equation. 
This constraint is proven effective to 1) keep the embedding space well-organized and 2) improve the generalization ability of molecule embeddings. Moreover, our model can use any GNN as the molecule encoder and is thus agnostic to GNN architectures. Experimental results demonstrate that our method achieves state-of-the-art performance in a variety of downstream tasks, e.g., 17.4% absolute Hit@1 gain in chemical reaction prediction, 2.3% absolute AUC gain in molecule property prediction, and 18.5% relative RMSE gain in graph-edit-distance prediction, respectively, over the best baseline method. All experimental code is provided in the supplementary material.", "parag_2": "Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability. Here we propose using chemical reactions to assist learning molecule representation. The key idea of our approach is to preserve the equivalence of molecules with respect to chemical reactions in the embedding space, i.e., forcing the sum of reactant embeddings and the sum of product embeddings to be equal for each chemical equation. This constraint is proven effective to 1) keep the embedding space well-organized and 2) improve the generalization ability of molecule embeddings. Moreover, our model can use any GNN as the molecule encoder and is thus agnostic to GNN architectures. Experimental results demonstrate that our method achieves state-of-the-art performance in a variety of downstream tasks, e.g., reaction product prediction, molecule property prediction, reaction classification, and graph-edit-distance prediction. The code is available at https://github.com/hwwang55/MolR .", "annot_1": {"annotation": ["Concision"], "instruction": "Remove unnecessary details on specific numerical performance of the model. 
Link to https://github.com/hwwang55/MolR instead of supplementary material.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Content_substitution"], "instruction": "Make the second last sentence from the end of this paragraph more concise by removing too precise details. For the last sentence, the code is now provided on github.", "annotator": "annotator_07"}} {"id_paragraph": "ZpvHK3zB43.QhVM4p3DKI.00", "parag_1": "We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection. FROB tackles the few-shot problem using classification with OoD detection. In real-world applications, in the wild, it is a challenge to robustly perform classification and few-shot OoD detection with high levels of reliability. The contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposition of low confidence at this learned boundary. To improve robustness, FROB generates strong adversarial samples on the boundary, and forces samples from OoD and on the boundary to be less confident. By including the boundary, FROB reduces the threshold linked to the model’s few-shot robustness. FROB redesigns, restructures, and streamlines OE to work even for zero-shots. FROB maintains the OoD performance approximately constant, independent of the few-shot number. The performance of FROB with the self-supervised learning boundary is robust and effective as the performance is approximately stable as the few-shot outliers decrease in number, while the performance of FROB without O ( z ) decreases as the few-shots decrease. The evaluation of FROB, on many sets, shows that it is effective, achieves competitive state-of-the-art performance, and outperforms benchmarks in the few-shot OoD detection setting in AUC-type metrics. 
In the future, in addition to confidence and class, FROB will also output important regions and bounding boxes around abnormal objects.", "parag_2": "We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection. FROB tackles the few-shot problem using classification with OoD detection. The contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposition of low confidence at this learned boundary. To improve robustness, FROB generates strong adversarial samples on the boundary, and forces samples from OoD and on the boundary to be less confident. By including the self-produced boundary, FROB reduces the threshold linked to the model’s few-shot robustness. FROB redesigns, restructures, and streamlines OE to work even for zero-shots. It robustly performs classification and few-shot OoD detection with a high level of reliability in real-world applications, in the wild. FROB maintains the OoD performance approximately constant, independent of the few-shot number. The performance of FROB with the self-supervised learning boundary is robust and effective, as the performance is approximately stable as the few-shot outliers decrease in number, while the performance of FROB without O ( z ) decreases as the few-shots decrease. The evaluation of FROB, on many sets, shows that it is effective, achieves competitive state-of-the-art performance, and outperforms benchmarks in the few-shot OoD detection setting in AUC-type metrics. 
In the future, in addition to confidence and the class, FROB will also output important regions and bounding boxes around abnormal objects.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "lLwt-9RJ2tm.XJsauLjck.02", "parag_1": "Several other variations of this basic setup have been considered. For example, [12] have considered this problem in the presence of structural constraints. [11, 31, 34] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances. The most relevant to our work amongst these is [34] which considered this metric embedded hierarchical clustering problem in a streaming setting. However, the stream in their setting is composed of vertices while edge weights can be directly inferred using distances between vertices; whereas the stream in our streaming setting is composed of edges while vertices are already known. Moreover, their study is only limited to the streaming setting. There has also been work on designing faster/parallel agglomerative algorithms such as single-linkage, average-linkage etc.[40, 17]. However, these algorithms are not known to achieve a good approximation factor for Dasgupta’s objective, which is the main focus of our paper. [27] studied the hierarchical clustering problem in an MPC setting. However, their work only considered the maximization objectives[32, 15], while our work is primarily focussed on the minimization objective of [16].", "parag_2": "Several other variations of this basic setup have been considered. For example, [12] have considered this problem in the presence of structural constraints. [11, 34, 37] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances. 
The most relevant to our work amongst these is [37] which considered this metric embedded hierarchical clustering problem in a streaming setting. However, the stream in their setting is composed of vertices while edge weights can be directly inferred using distances between vertices; whereas the stream in our streaming setting is composed of edges while vertices are already known. Moreover, their study is only limited to the streaming setting. There has also been work on designing faster/parallel agglomerative algorithms such as single-linkage, average-linkage etc. While these works share the same motivation as ours, namely, scaling HC algorithms to massive datasets, these results are largely orthogonal to ours. The primary philosophical difference is that these aforementioned works are aimed at speeding up/parallelizing very specific kinds of linkage based algorithms, while recovering the same or similar cluster trees (under very different notions of similarity) that would have been computed by the slower/sequential algorithm. Moreover, the specific algorithms considered in these works have no known approximation guarantees for Dasgupta’s objective. Our work on the other hand approaches this problem from an optimization perspective. Through data sparsification, we aim to recover a cluster tree with marginal loss in objective function value as compared to one computed over the entire (dense) input data by any given HC algorithm as a blackbox, achieving a speedup in runtime or reducing its memory requirement due to sparsity. [29] studied the hierarchical clustering problem in an MPC setting. 
However, their work only considered the maximization objectives [35, 17], while our work is primarily focussed on the minimization objective of [18].", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "zzdwUcxTjWY.rVxmgW1FRK.00", "parag_1": "Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they do not necessarily know what they don’t know (Nguyen et al., 2015). In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs, which should not be predicted by the model. Taking self-driving car as an example, an object detection model trained to recognize in-distribution objects ( e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object of a moose (see Figure 1(a)). Such a failure case raises concerns in model reliability, and worse, may lead to a catastrophic effect when deployed in safety-critical applications.", "parag_2": "Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they often struggle to handle the unknowns. In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs (Nguyen et al., 2015), which arise from unknown categories and should not be predicted by the model. Taking self-driving car as an example, an object detection model trained to recognize in-distribution objects ( e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object of a moose; see Figure 1(a). 
Such a failure case raises concerns in model reliability, and worse, may lead to catastrophe when deployed in safety-critical applications.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Make this paragraph more formal and fitting to academic style.", "annotator": "annotator_07"}} {"id_paragraph": "jP_amc4U0A.Y2t7AFVo5Z.00", "parag_1": "We proposed a new inference lgorithm for distributions parametrized by normalizing flow models. The need for approximate inference is motivated by our theoretical hardness result for exact inference,which is surprising given that it applies to invertible models. We also presented a detailed empirical evaluation of our method with both quantitative and qualitative results on a wide range of tasks and datasets. Overall, we believe that the idea of a pre-generator creating structured noise is a useful and general method for leveraging pre-trained generators to solve new generative problems.", "parag_2": "We proposed a new inference algorithm for distributions parametrized by a flow. The need for approximate inference is motivated by the hardness of exact inference. We also presented a detailed empirical evaluation of our method with both quantitative and qualitative results on a wide range of tasks and datasets. Overall, we believe that the idea of a pre-generator creating structured noise is a useful and general method for leveraging pre-trained generators to solve new generative problems.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove details which are unnecessary for the overall paragraph. 
Fix any spelling mistakes.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Correct and concise the two first sentences.", "annotator": "annotator_07"}} {"id_paragraph": "fDUdAYCQqZy.0cNiGAHFml.02", "parag_1": "We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a random generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ . However, applying operators with an offline dataset raises a noise on the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen τ , as depicted in Figure 2. The estimation error can be significant when the dataset is small, and EVL needs a smaller τ to be more conservative and closer to behavior cloning. When the dataset is large, the estimation error becomes small, and we can use a larger τ to recover the optimal policy. However, the expectile operator in Equation 2 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator", "parag_2": "We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a random generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ . However, applying operators with an offline dataset raises a noise on the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. 
Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen τ , as depicted in Figure 2. The noise upon the operator largely depends on the size of the dataset. Estimation error can be significant with insufficient data. In this case, we need a small τ to be conservative and be close to behavior cloning. When the dataset is large and we are able to have an accurate estimation for the operator, we can use a larger τ to recover the optimal policy. By adjusting τ , the expectile operator can accommodate variant types of datasets. However, the expectile operator in Equation 4 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator
Moreover, various recent methods (van denOord et al., 2018; He et al., 2020; Wu et al., 2018; Hadsell et al., 2006; Tian et al., 2019; Chen et al., 2020a) improve data efficiency by self-supervised learning. However, most existing data-efficient methods focus on classification setup while rare attention has been paid to deep regression .", "parag_2": "Xie et al., 2020; Sohn et al., 2020) (a.k.a. input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b). Further, Co-Training (Blum & Mitchell, 1998b), Deep Co-Training Qiao et al. and Tri-Training (Zhou & Li, 2005a) improve data efficiency from an interesting perspective of different views of classifiers. MixMatch (Berthelot et al., 2019), ReMixMatch (Berthelot et al., 2020) and UDA (Xie et al., 2020) reveal the crucial role of noise produced by advanced data augmentation methods. FixMatch (Sohn et al., 2020) uses predictions from weakly-augmented images to supervise the output of strongly augmented data. Meta Pseudo Labels (Pham et al., 2021) further improves data efficiency by making the teacher constantly adapted by the feedback of the student’s performance on the labeled dataset. SimCLRv2 (Chen et al., 2020b) first fine-tunes the pre-trained model from the labeled data and then distills on the unlabeled data. Self-Tuning (Wang et al., 2021) introduces a pseudo group contrast (PGC) mechanism but is limited on classification setup. Besides of involving unlabeled data from the same distribution, another promising direction for improving data efficiency is introducing a complementary perspective to further improve data efficiency by introducing a related but different domain (Long et al., 2015; Ganin & Lempitsky, 2015; Long et al., 2017; Saito et al., 2018b; Lee et al., 2019; Zhang et al., 2019; Saito et al., 2018a; 2019). 
Moreover, various recent methods (van den Oord et al., 2018; He et al., 2020; Wu et al., 2018; Hadsell et al., 2006; Tian et al., 2019; Chen et al., 2020a) improve data efficiency by self-supervised learning. However, most existing data-efficient methods focus on classification setup while rare attention has been paid to deep regression .", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "tOMAf1V5dI.SNeLZ71pb5.00", "parag_1": "CNN-based Architectures. Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely uses a series of 3 × 3 convolution and fully connected layers, and obtains outstanding performance in image classification. Furthermore, ResNet (He et al., 2016) is proposed, which utilizes the residual connection to transfer features in different layers, thereby alleviating the problem of gradient vanishing and obtaining superior performance. After that, the residual module becomes an important component of the network design and is also employed in subsequent transformer-based architectures and MLP-based architectures. Some papers have made further improvements to the convolution operation in CNN-based architecture, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017). EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNN to search for a suitable structure.", "parag_2": "CNN-based Architectures. 
Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely uses a series of 3 × 3 convolution and fully connected layers. ResNet (He et al., 2016) utilizes the residual connection to transfer features in different layers, which alleviates the gradient vanishing and obtains superior performance. Some papers make further improvements to the convolution operation in CNN-based architecture, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017). EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNN to search for a suitable network structure.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_deletion", "Concision"], "instruction": "Remove the sentence about the residual module. Make the paragraph more concise.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.13", "parag_1": "In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Subsection 4.1), the differences between our implicit and automatic differentiation (Subsection 4.2), and the applicability of iDSPN to a larger-scale dataset (Subsection 4.3). 
We provide detailed descriptions of the experimental procedure in Appendix D, show example inputs and outputs in Appendix E, and open-source the code to reproduce all experiments at https://github.com// .", "parag_2": "In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Section 4.1), the differences between automatic and our approximate implicit differentiation (Section 4.2), and the applicability of iDSPN to a larger-scale dataset (Section 4.3). We provide detailed descriptions of the experimental procedure in Appendix D, show example inputs and outputs in Appendix E, and open-source the code to reproduce all experiments at https: //github.com// and in the supplementary material.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Be clear about references.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Lightly clarify the text. Add a reference to appendix at the end.", "annotator": "annotator_07"}} {"id_paragraph": "CzTbgFKuy.hfDu8DsDq6.02", "parag_1": "Our main example willinstead be online job scheduling via minimizing the fractional makespan, following Lattanzi et al. They consider the problem of assigning each in a sequence of variablesized jobs to one of m machines [30, Section 3]. The authors provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good\" machine weights w ∈ R m> 0 to assign jobs based on how well ˆw corresponds to machine demand; the algorithm has a runtime guarantee of O (cid:0) log min { max i ˆw [ i ] / w [ i ] , m } (cid:1) . They also discus learning linear and more complicated predictors, but without guarantees. In this section we provide guarantees for the linear prediction setting in which we target the logarithm of the machine weights, which makes the problem convex. 
Note we assume features lie in the f -dimensional simplex, and for simplicity we only consider learning the linear transform from features to predictors and not the intercept, as the latter is subsumed by the former. For the online result, we use the parameter-free algorithm of Orabona and Pal [38], an OGD-type method that allows us to not assume any bound on the machine weights and thus compete with the optimal linear predictor in all of R m × f .", "parag_2": "Our main example will be online job scheduling via minimizing the fractional makespan [30], where we must assign each in a sequence of variable-sized jobs to one of m machines. Lattanzi et al. [30] provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good” machine weights w ∈ R m> 0 to assign jobs based on how well ˆw corresponds to machine demand; the method has a performance guarantee of O (log min { max i ˆw [ i ] w [ i ] , m } ) . They also discuss learning linear and other predictors, but without guarantees. We study linear prediction of the logarithm of the machine weights, which makes the problem convex, and assume features lie in the f -dimensional simplex. For simplicity we only consider learning the linear transform from features to predictors and not the intercept, as the former subsumes the latter. For the online result, we use KT-OCO [38, Algorithm 1], a parameter-free subgradient method with update x t +1 ← 1+ (cid:80) ts =1 (cid:104) g s , x s (cid:105) t + (cid:80) ts =1 g s for g s = ∇ U s ( x s ) ; it allows us to not assume any bound on the machine weights and thus to compete with the optimal linear predictor in all of R m × f .", "annot_1": {"annotation": ["Concision", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "KUhhOtV2Yw.nPdxbHsbU.00", "parag_1": "Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (eq. 
5), ¯ y ⊥ s | y (eq. 6–7), and y ⊥ s | ¯ y (eq. 8–9). If our training set has no positive outcome for the demographic s = 0 , i.e. M y =1 ,s =0 = ∅ , the true positive rate for this group will suffer, and therefore we will likely not be able to satisfy, among others, equality of true positive rate (eq. 6).", "parag_2": "Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (equation 5), ¯ y ⊥ s | y (equation 6 – equation 7), and y ⊥ s | ¯ y (equation 8 – equation 9). If our training set has no positive outcome for the demographic s = 0 , i.e. M y =1 ,s =0 = ∅ , the true positive rate for this group will suffer, and therefore we will likely not be able to satisfy, among others, equality of true positive rate.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Prefer extended forms over abbreviations of words.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Write the abbreviation in their full form.", "annotator": "annotator_07"}} {"id_paragraph": "slsGUcTSZI.DH75WqDfD7.00", "parag_1": "We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models. During the training phase, sBN does not track running estimates and simply normalize batch data. We do not track the local running statistics as the size of local models may also vary dynamically. This method is suitable for HeteroFL as every communication round is independent. After the training process finishes, the server sequentially query local clients and cumulatively update global BN statistics. Thus, this method greatly reduces the risk of leaking private data because the calculation of BN statistics and the optimization of parameters are isolated. 
We also empirically found this trick significantly outperforms other forms of normalization methods including the InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018) , and LayerNorm (Ba et al., 2016) as shown in Table 4 and Table 5.", "parag_2": "We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models. During the training phase, sBN does not track running estimates and simply normalize batch data. We do not track the local running statistics as the size of local models may also vary dynamically. This method is suitable for HeteroFL as every communication round is independent. After the training process finishes, the server sequentially query local clients and cumulatively update global BN statistics. There exist privacy concerns about calculating global statistics cumulatively and we hope to address those issues in the future work. We also empirically found this trick significantly outperforms other forms of normalization methods including the InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018) , and LayerNorm (Ba et al., 2016) as shown in Table 4 and Table 5.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "8_oadXCaRE.Kt4-LpYuM.01", "parag_1": "Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single \"hard\" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective of these networks. We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results. 
As a generative model, SoftHebb has a broader scope than classification, but we test it in simulations on the tasks of recognizing MNIST handwritten digits and Fashion-MNIST fashion products. First, we confirm that SoftHebb is more accurate than a hard-WTA model. Second, we validate that it minimizes a loss function (cross-entropy) even though it has no access to it or to labels during learning. In addition, likely owing to its Bayesian and generative properties, the unsupervised WTA model outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data, and increased robustness to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) (Madry et al., 2017), and without any explicit defence. Interestingly, the SoftHebb model also exhibits inherent properties of deflection (Qin et al., 2020) of the adversarial attacks, and generates object interpolations.", "parag_2": "Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single \"hard\" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective of these networks. We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results. As a generative model, SoftHebb has a broader scope than classification, but we test it on image classification tasks. 
Surprisingly, in addition to overcoming several inefficiencies of backpropagation, the unsupervised WTA model also outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data and to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) (Madry et al., 2017), and without any explicit defence. Interestingly, the SoftHebb model also exhibits inherent properties of deflection (Qin et al., 2020) of the adversarial attacks, and generates object interpolations.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter by removing details.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_deletion", "Concision"], "instruction": "Summarize the middle of the paragraph to make it shorter and more concise. Remove unnecessary details.", "annotator": "annotator_07"}} {"id_paragraph": "9wfZbn73om.FhHH15YtKt.02", "parag_1": "Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; 2020). However, a rigorous relationship between mutual information and the downstream classification error has not been established. Tschannen et al. (2019) also find that optimizing tighter bounds of MI does not imply better representations. Thus, MI may not fully explain the success of InfoNCE. Besides, Arora et al. (2019) directly analyze the generalization of InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes, which is different from practical contrastive algorithms. Ash et al. (2021) study the role of negative samples in contrastive SSL, and show an interesting collision-coverage trade-off theoretically. Furthermore, HaoChen et al. 
(2021) study contrastive SSL from a matrix decomposition perspective, but it is only applicable to their spectral contrastive loss. The behavior of InfoNCE is also studied from the perspective of alignment and uniformity (Wang & Isola, 2020), sparse coding model (Wen & Li, 2021), and the “expansion” assumption (Wei et al., 2020).", "parag_2": "Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; 2020; Tschannen et al., 2019). However, a rigorous relationship between mutual information and downstream performance has not been established. Besides, Arora et al. (2019) directly analyze the generalization of InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes, which is different from practical algorithms. Ash et al. (2021) study the role of negative samples and show an interesting collision-coverage trade-off theoretically. HaoChen et al. (2021) study contrastive SSL from a matrix decomposition perspective, but it is only applicable to their spectral contrastive loss. The behavior of InfoNCE is also studied from the perspective of alignment and uniformity (Wang & Isola, 2020), sparse coding model (Wen & Li, 2021), the expansion assumption (Wei et al., 2020), stochastic neighbor embedding (Hu et al., 2022), and augmentation robustness (Zhao et al., 2023).", "annot_1": {"annotation": ["Content_deletion", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.02", "parag_1": "Set prediction modelsmake use of set-to-set functions that are permutation-equivariant (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020). This is desirable when processing sets because it prevents a function from relying on the arbitrary order of the set in its matrix representation. 
Permutation-equivariant functions can be easily composed to build larger models that remain equivariant, which fits well into deep learning architectures as building blocks.", "parag_2": "Recent set prediction models (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020) make use of set-to-set (permutation-equivariant list-to-list) functions to refine an initial set Y 0 , which is usually a randomly generated or learnable matrix. Permutation-equivariance is desirable when processing sets because it prevents a function from relying on the arbitrary order of the set in its matrix representation. Such functions can be easily composed to build larger models that remain equivariant, which fits well into deep learning architectures as building blocks.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.04", "parag_1": "Hollinworth et al. found that senior citizens generally lose the cursor due to poor eyesight and sustained concentration [15]. Therefore, the implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the screen when the user hold the mouse. This technique reduced the time required to search for the cursor, which in turn reduced the movement time. Stephane et al. focused on screen torus settings [16]. With this setting, for example, when the cursor reaches the right edge, it appears from the left edge. As a result, users can easily lose sight of the cursor when it warps. Therefore, they proposed a TorusDesktop technique that adds appropriate visual feedback between the time the cursor warping, and making it difficult to lose sight of the cursor.", "parag_2": "Hollinworth et al. 
found that senior citizens lose the cursor because of poor eyesight and sustained concentration, and therefore, they implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the screen when the user holds the mouse [15]. This technique help reduce the time required to search for the cursor, which in turn reduces the movement time. Stephane et al. focused on screen torus settings [16]. With this setting, when the cursor reaches the screen edge, it appears from the opposite end. For example, when the cursor reaches the right edge, it appears from the left edge. However, users can easily lose sight of the cursor when it warps around the edges. To overcome this issue, they proposed a TorusDesktop technique that adds appropriate visual feedback between the time the cursor warps. These studies focused on the user losing sight of the cursor, but these did not focus on a scenario where the cursor is hidden.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_07"}}
We refer to the hyperparameter Q as the re-balancing window size .", "annot_1": {"annotation": ["Development"], "instruction": "Change the descriptions so that the hyperparameters can be easily referred to later", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "r1DvZQwjB.Hk8CzQDiB.00", "parag_1": "Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore, enables the solution of high order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse problems where the optimized loss function consists of few elements: fidelity terms of Land L ∞ norms, boundary conditions constraints and additional regularizers. This setting is flexible in the sense that regularizers can be tailored to specific problems. We demonstrate our method on several free shape 2D second order systems with application to Electrical Impedance Tomography (EIT), diffusion and wave equations.", "parag_2": "Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore, enables the solution of high order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse problems where the optimized loss function consists of few elements: fidelity terms of Land L ∞ norms that unlike previous methods promote a strong solution. Robust boundary conditions constraints and additional regularizers are included as well. This setting is flexible in the sense that regularizers can be tailored to specific problems. 
We demonstrate our method on several free shape 2D second order systems with application to Electrical Impedance Tomography (EIT), diffusion and wave equations.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.17", "parag_1": "The design of the calendar should avoid design elements that in- troduce clutter to the calendar ( DG3 ) . Design elements such as colors, sliders, labels, and markers should be carefully employed to avoid overwhelming the calendar. One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape, and size. The size of a medication entry should be as small as possible so as not to occupy too much space. Size should not be used to indicate either allowed or preferred administration time of a medication entry, and the size of the entry should also be uniform regardless of the length of the allowed period of administration. Using shapes with a colored outline and transparent fill was associated with less noise and hence preferred by the users. While the slider design was effective in com- municating both the allowed and preferred administration period, it made the entry occupy a lot of calendar space and was also misread by some participants as an indicator of delayed release for certain medications. Sliders should thus be avoided. Familiar icons such as tablets can be used to indicate medication entries.", "parag_2": "The design of the calendar should avoid design elements that in- troduce clutter to the calendar ( DG3 ) . One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape, and size. The size of a medication entry should be as small as possible so as not to occupy too much space. 
Size should not be used to indicate either allowed or preferred administration time of a medication entry, and the size of the entry should also be uniform regardless of the length of the allowed period of administration. Using shapes with a colored outline and transparent fill was associated with less noise by participants. While the slider design was effective in communicating both the allowed and preferred administration period, it made the entry occupy a lot of calendar space and was also misread by some participants. Familiar icons such as tablets can be used to indicate medication entries.", "annot_1": {"annotation": ["Concision", "Content_deletion"], "instruction": "Remove unnecessary details and explanations.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion", "Development"], "instruction": NaN, "annotator": "annotator_09"}} {"id_paragraph": "SkMm_pDYm.rkQWRbxAQ.00", "parag_1": "S t +1 = f st ( S t , A t , U st ) . This is always possible using auto-regressive uniformization. The DAG G of the resulting SCM is shown in fig. 1. This procedure is closely related to the ‘reparameterization trick’ for models with lotion-scale distributions (Kingma & Welling, 2013; Rezende et al., 2014).", "parag_2": "S t +1 = f st ( S t , A t , U st ) . This is always possible using auto-regressive uniformization, see Lemma 2 in the appendix. The DAG G of the resulting SCM is shown in fig. 1. This procedure is closely related to the ‘reparameterization trick’ for models with location-scale distributions (Kingma & Welling, 2013; Rezende et al., 2014).", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.12", "parag_1": "The prior rotamers ˜ χ j are inaccurate or unknown in many cases. 
For example, if we mutate some amino acids in the protein complex, the rotamers of the mutated amino acids are unknown, and the rotamers of amino acids nearby the mutated ones are inaccurate because they are affected by the mutation. The probability density is defined over the d -dimensional torus T D = ( S 1 ) D , and we show below our proposed flow-based architecture to model the density.", "parag_2": "The prior rotamers ˜ χ j are often inaccurate or unknown. For example, if we mutate some residues, the rotamers of the mutated residues are unknown, and the rotamers of residues nearby the mutated ones are inaccurate because they are affected by the mutation. The probability density is defined over the d -dimensional torus T D = ( S 1 ) D , and we describe below the flow-based architecture to model the density.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Replace every apparition of \"amino acids\" or \"amino acids in the protein complex\" by \"residues\"", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Replace occurrences of amino acids by residues. Make this paragraph a lit bit more concise.", "annotator": "annotator_07"}} {"id_paragraph": "skR2qMboVK.lmwxQfhmln.01", "parag_1": "Margin-Density (Nguyen & Smeulders, 2004). Scores candidates by the product of their margin and their density estimates, so as to increase diversity. The density is computed by first clustering the penultimate layer activations of all |Z| candidate points via K -means. Then, the density score of candidate x i is computed as: | C ( x i ) | / |Z| , where C ( x i ) is the cluster containing x i . We useclusters.", "parag_2": "Margin-Density (Nguyen & Smeulders, 2004). Scores candidates by the product of their margin and their density estimates, so as to increase diversity. The density is computed by first clustering the penultimate layer activations of the current model on all |Z| candidate points via K -means. 
Then, the density score of candidate x i is computed as: | C ( x i ) | / |Z| , where C ( x i ) is the cluster containing x i . We use min { 20 , |Z|} clusters.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "IoTyuVEanE.Et-c0vQfeb.04", "parag_1": "Parsing signal from noise is critical to learning from weak and rule-based supervision. Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1. For each dataset, we provide exactly one seed LF foreach class. Each seed LF contains exactly six single-token keywords. If any of these keywords is found in document d i , the LF assigns its label; otherwise, it abstains from labeling d i .", "parag_2": "Parsing signal from noise is critical to learning from weak and rule-based supervision. Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1. We provided exactly each class with exactly one labeling function consisting of six keywords or phrases adapted from [16]. If any of these keywords is found in document d i , the LF assigns its label; otherwise, it abstains from labeling d i .", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "LC37_sQl_t.XlHDVLz97W.01", "parag_1": "In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time. 
Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being given a high-level, symbolic specification of their structures, and after being trained with simpler concepts. In addition, we demonstrate that an independently trained ZeroC is able to transfer hierarchical concepts across different domains at inference. Although this work is evaluated only in grid-world domain, we are the first to address this difficult challenge, and hope that this work will make a useful step in the development of composable neural systems, capable of zero-shot concept recognition and acquisition and hence suitable for more diverse tasks.", "parag_2": "In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time. Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being given a high-level, symbolic specification of their structures, and after being trained with simpler concepts. In addition, we demonstrate that an independently trained ZeroC is able to transfer hierarchical concepts across different domains at inference. Although this work is evaluated only in grid-world visual domain, we are the first to address this difficult challenge. We are also excited to see its potential application in broader domains, e.g. in AI for scientific discovery, where it may infer novel patterns and concepts from data in a zero-shot manner. 
We hope that this work will make a useful step in the development of composable neural systems, capable of zero-shot concept recognition and acquisition and hence suitable for more diverse tasks.", "annot_1": {"annotation": ["Development"], "instruction": null, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": null, "annotator": "annotator_08"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.17", "parag_1": "Attention on all AP metrics. It is better at the attribute classification ignoring the 3d coordinates (96. → 98.8) and improves especially for the stricter AP thresholds like AP 0 . 125 (7.9 → 76.9). Note that a stricter AP threshold is always upper bounded by the looser AP threshold, so iDSPN is guaranteed to be better than Slot Attention † on AP 0 . We observed some overfitting when using 128x image inputs, which did not show up in preliminary experiments when training an autoencoder with the ground-truth set as input. We reduced this overfitting significantly by increasing the image size to 256x256 while keeping the latent vector size the same, which results in further performance improvements. We thus believe that the overfitting is due to the ResNet18 image encoder rather than iDSPN.", "parag_2": "Attention on all AP metrics. It is better at attribute classification when ignoring the 3d coordinates (AP ∞ , 96.4% → 98.8%) and improves especially for the metrics with stricter 3d coordinate thresholds (AP 0 . 125 , 7.9% → 76.9%). This is despite Slot Attention † using a three times higher weight on the loss for the coordinates than iDSPN. Note that a stricter AP threshold is always upper bounded by a looser AP threshold, so iDSPN is guaranteed to be better than Slot Attention † on AP 0 . We observe some overfitting with 128x128 image inputs (1.6e-4 train loss, 5.4e-4 validation loss), which did not appear in preliminary experiments when training an autoencoder with the ground-truth set as input.
We reduce this generalization gap by increasing the image size to 256x256 while keeping the latent vector size the same, which results in further performance improvements (1.1e-4 train loss, 2.5e-4 validation loss). We thus believe that the overfitting is due to the ResNet18 image encoder rather than iDSPN.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "SyF8k7bCW.HytIRPamf.01", "parag_1": "Less Constraints: During encoding, the explicit word order information used inRNN will help the vector representation capture more of the temporally-specific relationships among words, but this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process.", "parag_2": "The results are presented in the Table 1. Generally, the three different decoding settings didn’t make much of a difference in terms of the performance on selected downstream tasks, with RNN or CNN as the decoder. The results tell us that, in terms of learning good sentence representations, the autoregressive decoder doesn’t require the correct ground-truth words as the inputs.", "annot_1": {"annotation": ["Unusable"], "instruction": null, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Can you reformulate my entire paragraph?", "annotator": "annotator_09"}} {"id_paragraph": "lLwt-9RJ2tm.XJsauLjck.00", "parag_1": "Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse. Hereis the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to thecost also appears as a positive term in its parent’s contribution to the cost.
We can pass this term as a since there always exists an optimal hierarchy that is binary.", "parag_2": "Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse. Here optimal hierarchy that is binary.", "annot_1": {"annotation": ["Unusable"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "p8yrWJS4W.eHA5NswPr.02", "parag_1": "Results. Fig. 4 shows that certain alterations—such as completely removing articles from the evaluated text—have almost no impact on the divergence between our reference and test corpora for various ∆ . In fact, text without any articles is judged as better than GPT-2 XL ’s by most of the cluster-based divergences. Further, while this perturbation undoubtedly affects the text’s fluency, it has less of an effect on this divergence than, e.g., truncating texts. This is arguably undesirable: A metric of text quality should place more emphasis on fluency than surface statistics, such as length.", "parag_2": "Results. Fig. 4 shows that certain alterations to the evaluated text—such as completely removing articles—have almost no impact on its divergences from the reference corpora for various ∆ . In fact, text without any articles is judged as better than GPT-2 XL ’s by all of the cluster-based divergences (see Fig. 9 for a zoomed in version). Further, while this perturbation undoubtedly affects the text’s fluency, it has less of an effect on ∆ than, e.g., truncating texts.
This is arguably undesirable: A metric of text quality should place more emphasis on fluency than surface statistics, such as length.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make the concepts a bit more specific, such that some vague ideas are more clear.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Revise the writing for better readability.", "annotator": "annotator_07"}} {"id_paragraph": "rkwFe19K7.BJCfw3tCm.00", "parag_1": "We follow several rules when selecting victim nodes. First, the attack must be successful on the victim node to fool the model. Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties. Finally, we choose victim nodes among those with the same degree uniformly at random to perform the detection. We observe that without considering detection, the Multi-edges direct attack is the most successful attacking model, followed by Single-edge attack and finally Multi-edges indirect attack. Therefore, we selected 20, 10, 6 victim nodes respectively for these three attack methods. The selected victim node degrees are shown in the appendix.", "parag_2": "We follow several rules when selecting victim nodes. First, the attack must be successful on the victim node to fool the model. Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties. Finally, we choose victim nodes among those with the same degree uniformly at random to perform the detection. We observe that without considering detection, the Multi-edges direct attack is the most successful attacking model, followed by Single-edge attack and finally Multi-edges indirect attack. Therefore, we selected 20, 10, 6 victim nodes respectively for these three attack methods on real-world data. 
For synthetic data, we simply pick two victim nodes, one with the smallest degree and the other with the largest degree. The selected victim node degrees are shown in the appendix.", "annot_1": {"annotation": ["Content_addition"], "instruction": null, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "B1SkMaDvr.W2MCLgZGr.00", "parag_1": "In this paper, we prove new generalization bounds for convolutional networks that take account of this effect. As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters. Additionally, our bounds are “size-free”, in the sense that they are independent of the number of pixels in the input, or the height and width of the hidden feature maps.", "parag_2": "In this paper, we prove new generalization bounds for convolutional networks that take account of this effect. As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters. Additionally, our bounds independent of the number of pixels in the input, or the height and width of the hidden feature maps.", "annot_1": {"annotation": ["Concision"], "instruction": "Make the ideas more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Remove unnecessary details.", "annotator": "annotator_07"}} {"id_paragraph": "MnewiFDvHZ.iAYttXl-uH.03", "parag_1": "We compared RECOO with Algorithm 1 in [30], where learning rates are summarized in Table 4in Appendix H. Figure 2 includes the cumulative losses and violations. In particular, The (mean,variance) pair of RECOO for loss and violation are p 42510 . 05 q and p 713 . 45 , 1 . 60 q at the end of learning horizon, respectively while that of Algorithm 1 in [30] are p 43011 . 17 q and p 1684 . 17 , 2 . 85 q .
It is justified RECOO performs better, especially in terms of the cumulative violation,which further shows the “rectified” design reduces the cumulative hard constraint violation.", "parag_2": "We compared RECOO with Algorithm 1 in [27], with α t “ η t “ ? t , γ t “ t 1 { 2 ` 0 . 01 in RECOO; andα t “ 0 . 8 {? t, β t “ 5 {? t and γ t “ 0 . 5 {? t for Algorithm 1 in [27] (these are the optimized learningrates). Figure 2 includes the cumulative losses and violations. In particular, The (mean, variance)pair of RECOO for loss and violation are p 42510 . 05 q and p 713 . 45 , 1 . 60 q at the end of learning horizon, respectively while that of Algorithm 1 in [27] are p 43011 . 17 q and p 1684 . 17 , 2 . 85 q . It is justified RECOO performs better, especially in terms of the cumulative violation, which furthershows the “rectified” design reduces the cumulative hard constraint violation.", "annot_1": {"annotation": ["Development"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.00", "parag_1": "Intelligent agents can solve tasks in a variety of ways depending on the action set at their disposal. For instance, while using a toolkit for repair, the choice of tool (the action) closely depends on what other tools are available. Yet, such dependence on other available actions is ignored in conventional reinforcement learning (RL)since it assumes a fixed action set. In this work, we posit that learning the interdependence between actions is crucial for RL agents acting under a varying action set. To this end, we propose a novel policy architecture that consists ofan input graph composed of available actions and a graph attention network to learn the action interdependence. We demonstrate that our architecture makes action decisions by correctly attending to the relevant actions in both value-based and policy-based RL.
Consequently, it consistently outperforms non-relational architectures on applications where the action space can vary, such as recommender systems and physical reasoning with tools and skills.", "parag_2": "Intelligent agents can solve tasks in various ways depending on their available set of actions. However, conventional reinforcement learning (RL) assumes a fixed action set. This work asserts that tasks with varying action sets require reasoning of the relations between the available actions. For instance, taking a nail-action in a repair task is meaningful only if a hammer-action is also available. To learn and utilize such action relations, we propose a novel policy architecture consisting of a graph attention network over the available actions. We show that our model makes informed action decisions by correctly attending to other related actions in both value-based and policy-based RL. Consequently, it outperforms non-relational architectures on applications where the action space often varies, such as recommender systems and physical reasoning with tools and skills.", "annot_1": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": null, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Make reasoning understandable, use accurate words.", "annotator": "annotator_08"}} {"id_paragraph": "S6FTGJ2qg.pZxjlXjpkL.00", "parag_1": "Ermon, 2019). Song et al. (2020b) shows that diffusion models are trained using denoising score matching (Vincent, 2011), a conditional objective that provides unbiased gradients with respect to the score matching objective. Conditional Flow Matching draws inspiration from this result, but generalizes to matching vector fields directly.
Due to the ease of scalability, diffusion models have received increased attention, producing a variety of improvements such as loss-rescaling (Song et al., 2021), adding classifier guidance along with architectural improvements (Dhariwal & Nichol, 2021), and learning the noise schedule (Nichol & Dhariwal, 2021; Kingma et al., 2021). However, (Nichol & Dhariwal, 2021) and (Kingma et al., 2021) only consider a restricted setting of Gaussian conditional paths defined by simple diffusion processes with a single parameter—in particular, it does not include our conditional OT path. While existing works make use of a connection between diffusion processes and continuous normalizing flows with the same probability path (Maoutsa et al., 2020b; Song et al., 2020b; 2021), our work allows us to generalize beyond the class of probability paths modeled by simple diffusion. With our work, it is possible to completely sidestep the diffusion process construction and reason directly with probability paths, while still retaining efficient training and log-likelihood evaluations.", "parag_2": "Ermon, 2019). Song et al. (2020b) shows that diffusion models are trained using denoising score matching (Vincent, 2011), a conditional objective that provides unbiased gradients with respect to the score matching objective. Conditional Flow Matching draws inspiration from this result, but generalizes to matching vector fields directly. Due to the ease of scalability, diffusion models have received increased attention, producing a variety of improvements such as loss-rescaling (Song et al., 2021), adding classifier guidance along with architectural improvements (Dhariwal & Nichol, 2021), and learning the noise schedule (Nichol & Dhariwal, 2021; Kingma et al., 2021). 
However, (Nichol & Dhariwal, 2021) and (Kingma et al., 2021) only consider a restricted setting of Gaussian conditional paths defined by simple diffusion processes with a single parameter—in particular, it does not include our conditional OT path. In an another line of works, (De Bortoli et al., 2021; Wang et al., 2021; Peluchetti, 2021) proposed finite time diffusion constructions via diffusion bridges theory resolving the approximation error incurred by infinite time denoising constructions. While existing works make use of a connection between diffusion processes and continuous normalizing flows with the same probability path (Maoutsa et al., 2020b; Song et al., 2020b; 2021), our work allows us to generalize beyond the class of probability paths modeled by simple diffusion. With our work, it is possible to completely sidestep the diffusion process construction and reason directly with probability paths, while still retaining efficient training and log-likelihood evaluations.", "annot_1": {"annotation": ["Content_addition"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "skR2qMboVK.lmwxQfhmln.00", "parag_1": "BALD (Houlsby et al., 2011) estimates the mutual information (MI) between the datapoints and the model weights, the idea being that points with large MI between the predicted label and weights have a larger impact on the trained model’s performance. The measure, denoted I , is the conditional entropy over predictions given weights less the earlier entropy term.", "parag_2": "BALD (Houlsby et al., 2011) estimates the mutual information (MI) between the datapoints and the model weights, the idea being that points with large MI between the predicted label and weights have a larger impact on the trained model’s performance.
The measure, denoted I , is approximated as:", "annot_1": {"annotation": ["Unusable"], "instruction": null, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "p8yrWJS4W.eHA5NswPr.03", "parag_1": "On the other hand, our metrics deem text with stopwords removed as utterly different from the reference. Permuting words within texts has a similar effect, demonstrating that, at least to some extent, the embedding space captures notions of syntax and grammaticality, rather than pure unigram These results inspire us to investigate which surface features of text are encoded in embedding clusters. Following our setup in §6.1, we look at whether clusters encode the percentage of stopwords or punctuation in texts. We use solely the WebText dataset to train our clustering functions in this setting. We then compute the average percentage of stopwords or punctuation per cluster in half of our strings.", "parag_2": "On the other hand, our metrics deem text with stopwords removed as utterly different from the reference. Permuting words within texts has a similar effect, demonstrating that, at least to some extent, the embedding space captures notions of syntax and grammaticality, rather than pure unigram statistics. The increase in ∆ shown when performing sentence-level permutations likewise suggests that the clusters delineate different levels of coherence to some extent. In Fig. 10 (in App. E), we perform an additional experiment where we again probe the clusters (as in §6.1), but for surface features of text this time, such as the percentage of stopwords and punctuation symbols in a text.
There we see evidence that such features of text are not strongly encoded in the clustering scheme.", "annot_1": {"annotation": ["Development", "Content_addition"], "instruction": null, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "Sy-6xpqtX.S1Ogmz_a7.00", "parag_1": "Optimization algorithms can provide insight and guidance in the design of deep network architectures (Vogel & Pock, 2017; Yang et al., 2016; Zhang & Ghanem, 2018). For example, Yang et al. (2016) have proposed a deep network architecture for compressed sensing. Their network, dubbed ADMM-Net, is inspired by ADMM updates (Boyd et al., 2011) on the compressed sensing objective. Similarly, Zhang & Ghanem (2018) demonstrated that unrolling a proximal gradient descent solver (Beck & Teboulle, 2009) on the same problem can further improve performance. ", "parag_2": "Optimization algorithms can provide insight and guidance in the design of deep network architectures (Vogel & Pock, 2017; Kobler et al., 2017; Yang et al., 2016; Zhang & Ghanem, 2018). For example, Yang et al. (2016) have proposed a deep network architecture for compressed sensing. Their network, dubbed ADMM-Net, is inspired by ADMM updates (Boyd et al., 2011) on the compressed sensing objective. Similarly, Zhang & Ghanem (2018) demonstrated that unrolling a proximal gradient descent solver (Beck & Teboulle, 2009) on the same problem can further improve performance. Moreover, the work of Kobler et al. (2017) demonstrated a relation between incremental proximal methods and ResNet blocks.", "annot_1": {"annotation": ["Content_addition", "Development"], "instruction": null, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "wxzHtn7XId.d4DKAyZjOj.00", "parag_1": "Optimization Dilemma in OOD Algorithms.
Along with the developments of OOD methods, the optimization dilemma in OOD generalization is gradually perceived in the literature, and raises new puzzles to the community. In fact, several recent works also notice the optimization dilemma in OOD algorithms, specifically, the trade-off between discovering the statistical correlations (i.e., ERM) and preventing the usage of spurious correlations (e.g., IRM). Empirically, Gulrajani & Lopez-Paz (2021) observe that, with careful hyperparameter tuning and evaluation setting, many OOD algorithms cannot outperform ERM in domain generalization, demonstrating the difficulties of properly mitigating the trade-offs between OOD and ERM objectives in practice. Moreover, Sagawa* et al. (2020); Zhai et al. (2022) find that, regularization on ERM, or sacrificing ERM performance, is usually needed for achieving satisfactory OOD performance, which aligns with our findings through Pareto front as shown in Fig. 6(a) and Fig. 7(a). Besides, Lin et al. (2022a) find that IRM can easily overfit and learns unexpected features when applying IRM on large neural networks. Zhou et al. (2022) propose to alleviate this problem by imposing sparsity constraints. Orthogonal to Lin et al. (2022a) ; Zhou et al. (2022) that focuses on the optimization consequences, we focus on the optimization process of OOD objectives. In addition, Zhang et al. (2022a) find that, the performance of OOD algorithms largely relies on choosing proper pretraining epochs which aligns with our findings in Fig. 1(d), hence propose to construct a ready-to-use features for stable OOD generalization performance. Orthogonal to Zhang et al. (2022a), we focus on developing better optimization scheme for OOD algorithms, including choosing the proper objectives and the achievability of the invariant predictors. Besides, Lv et al. (2021) propose ParetoDA to leverage MOO to resolve the gradient conflicts amon the objectives in Domain Adaption. 
ParetoDA uses the guidance of validation loss based on the data that has the identical distribution to test distribution, to trade-off the conflicts in domain adaption objectives.", "parag_2": "Optimization Dilemma in OOD Algorithms. Along with the developments of OOD methods, the optimization dilemma in OOD generalization is gradually perceived in the literature, and raises new puzzles to the community. In fact, several recent works also notice the optimization dilemma in OOD algorithms, specifically, the trade-off between discovering the statistical correlations (i.e., ERM) and preventing the usage of spurious correlations (e.g., IRM). Empirically, Gulrajani & Lopez-Paz (2021) observe that, with careful hyperparameter tuning and evaluation setting, many OOD algorithms cannot outperform ERM in domain generalization, demonstrating the difficulties of properly mitigating the trade-offs between OOD and ERM objectives in practice. Moreover, Sagawa* et al. (2020); Zhai et al. (2022) find that, regularization on ERM, or sacrificing ERM performance, is usually needed for achieving satisfactory OOD performance. A similar phenomenon has also been observed by Zhao et al. Xie et al. ; Sadeghi et al. ; Sener & Koltun (2022); Teney et al. (2022), which aligns with our findings through Pareto front as shown in Fig. 6(a) and Fig. 7(a). Besides, Lin et al. (2022a) find that IRM can easily overfit and learns unexpected features when applying IRM on large neural networks. Zhou et al. (2022) propose to alleviate this problem by imposing sparsity constraints. Orthogonal to Lin et al. (2022a) ; Zhou et al. (2022) that focuses on the optimization consequences, we focus on the optimization process of OOD objectives. In addition, Zhang et al. (2022a) find that, the performance of OOD algorithms largely relies on choosing proper pretraining epochs which aligns with our findings in Fig. 1(d), hence propose to construct a ready-to-use features for stable OOD generalization performance. 
Orthogonal to Zhang et al. (2022a), we focus on developing better optimization scheme for OOD algorithms, including choosing the proper objectives and the achievability of the invariant predictors. Besides, Lv et al. (2021) propose ParetoDA to leverage MOO to resolve the gradient conflicts amon the objectives in Domain Adaption. ParetoDA uses the guidance of validation loss based on the data that has the identical distribution to test distribution, to trade-off the conflicts in domain adaption objectives.", "annot_1": {"annotation": ["Content_addition"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "HWNjBFvR-q.BTfXOtvRW9.00", "parag_1": "We thank Huan Zhang for helpful discussions and Alex Wang for helpful comments on a draft of this work. CW was supported by a NSF Graduate Research Fellowship. Toyota Research Institute provided funds to support this work.", "parag_2": "We thank Huan Zhang for helpful discussions and Alex Wang for helpful comments on a draft of this work. CW was supported by a NSF Graduate Research Fellowship. Portions of this work were supported by funds from Toyota Research Institute and the Bosch Center for AI.", "annot_1": {"annotation": ["Content_addition"], "instruction": null, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": null, "annotator": "annotator_07"}} {"id_paragraph": "W6V9WgTOwm.kXnvpTSqMp.00", "parag_1": "Discussion. In this paper, we show that ID-calibrated ensembles, a simple method of calibrating a standard and robust model only on ID data and then ensembling them, can eliminate the tradeoff between in-distribution (ID) and out-of-distribution (OOD) accuracy on a wide range of natural shifts. We hope that this leads to more widespread use and deployment of robustness interventions.", "parag_2": "Conclusion and Future Work.
In this paper, we show that ID-calibrated ensembles, a simple method of calibrating a standard and robust model only on ID data and then ensembling them, can eliminate the tradeoff between in-distribution (ID) and out-of-distribution (OOD) accuracy on a wide range of natural shifts. We hope that this leads to more widespread use and deployment of robustness interventions.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rename this section to a more appropriate title.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Rename the section \"Conclusion and Future Work\"", "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.11", "parag_1": "Overview Our method consists of three parts. The core component is the rotamer density estimator (RDE), a conditional normalizing flow that models the probability density of sidechain conformations (rotamers) given the amino acid type and environments (Section 3.2). Next is the algorithm for estimating the entropy of the distribution parameterized by the normalizing flow (Section 3.3). Finally, we show how to use the entropy of the mutated and wild-type protein-protein interfaces at both bound and unbound states to estimate the change in binding free energy ( ∆∆ G ) upon mutation, and how to use neural networks to predict ∆∆ G more accurately using the unsupervised representations from the RDE (Section 3.4). A protein-protein complex is a multi-chain protein structure that can be divided into two groups. Each group contains at least one protein chain and each chain consists of multiple amino acids. For a protein complex containing n amino acids, we number them from 1 to n . The two groups of the complex can be represented by two disjoint sets of indices A, B ⊂ { 1 . . . n } An amino acid is characterised by its type, position, orientation, and sidechain conformation. We denote the type, position, and orientation of the i -th ( i ∈ { 1 . . . 
n } ) amino acid as a i ∈ { 1 . . . 20 } , p i ∈ R 3 , and O i ∈ SO (3) respectively. The sidechain conformation of the amino acid is called rotamer . As the conformational degree of freedom of the sidechain is defined by rotatable bonds, a rotamer can be sufficiently parameterized by torsional angles w.r.t. the rotatable bonds. The number of torsional angles vary between 0 to 4 depending on the amino acid type. For an amino acid with d torsional angles, we denote the k -th ( k ∈ { 1 . . . 4 } ) torsional angle by χ ( k ) i ∈ [0 , 2 π ) . Collectively, all the torsional angles are denoted by a vector χ i = ( χ ( k ) i ) dk =1 . Using the language of geometry, an angle can represented by a point on the unit circle S 1 . A vector consisting of d angular values reside on the product of d unit circle, known as the d -dimensional torus T D = ( S 1 ) D .", "parag_2": "Overview Our method comprises three main components. The first is the Rotamer Density Estimator (RDE), which is a conditional normalizing flow that models the probability density of sidechain conformations (rotamers) based on the amino acid type and backbone structures (Section 3.2). The second component is an algorithm that estimates the entropy of the distribution parameterized by the normalizing flow (Section 3.3). Lastly, we describe how we use the entropy of the protein-protein interfaces in both the mutated and wild-type states, both bound and unbound, to estimate the change in binding free energy ( ∆∆ G ). We also detail how we use neural networks to achieve more accurate predictions of ∆∆ G using the unsupervised representations from the RDE (Section 3.4). Definitions and Notations A protein-protein complex is a multi-chain protein structure that can be divided into two groups. Each group contains at least one protein chain and each chain consists of multiple (amino acid) residues. For a protein complex containing n residues, we number them from 1 to n . The two groups of the complex can be represented by two disjoint sets of indices A, B ⊂ { 1 . . . n } . A residue is characterized by its type, position, orientation, and sidechain conformation. We denote the type, position, and orientation of the i -th ( i ∈ { 1 . . . n } ) residue as a i ∈ { 1 . . . 20 } , p i ∈ R 3 , and O i ∈ SO (3) respectively. The sidechain conformation of the residue is called rotamer . As the conformational degree of freedom of the sidechain is defined by rotatable bonds, a rotamer can be parameterized by torsional angles w.r.t. the rotatable bonds. The number of torsional angles varies between 0 to 4 depending on the residue type. For a residue with d torsional angles, we denote the k -th ( k ∈ { 1 . . . 4 } ) torsional angle by χ ( k ) i ∈ [0 , 2 π ) . Collectively, all the torsional angles are denoted by a vector χ i = ( χ ( k ) i ) dk =1 . Using the language of geometry, an angle can be represented by a point on the unit circle S 1 . A vector consisting of d angular values resides on the product of d unit circle, known as the d -dimensional torus T D = ( S 1 ) D .", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Generate a more formal version of this paragraph", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Replace all mentions of amino acid by 'residue'. Revise this paragraph for clarity.", "annotator": "annotator_07"}}
{"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.00", "parag_1": "Lightweight image super-resolution (SR) networks have obtained promising results with moderate model size. However, they are impractical or neglected to be extended to larger networks. At the same time, model compression techniques, like neural architecture search and knowledge distillation, typically consume considerable computation resources. In contrast, network pruning is a cheap and effective model compression technique. However, it is hard to be applied to SR networks directly, because filter pruning for residual blocks is well-known tricky. To address the above issues, we propose structure-regularized pruning (SRP), which imposes regularization on the pruned structure to make sure the locations of pruned filters are aligned across different layers. Specifically, for the layers connected by the same residual, we select the filters of the same indices as unimportant filters. To transfer the expressive power in the unimportant filters to the rest of the network, we employ L 2 regularization to drive the weights towards zero so that eventually their absence will cause minimal performance degradation. We apply SRP to train efficient image SR networks, resulting in a lightweight network SRPN-L and a very deep one SRPN. We conduct extensive comparisons with both lightweight and larger image SR networks. SRPN-L and SRPN achieve superior performance gains over recent methods quantitatively and visually.", "parag_2": "Several image super-resolution (SR) networks have been proposed of late for efficient SR, achieving promising results. However, they are still not lightweight enough and neglect to be extended to larger networks. At the same time, model compression techniques, like neural architecture search and knowledge distillation, typically consume considerable computation resources. In contrast, network pruning is a cheap and effective model compression technique. However, it is hard to be applied to SR networks directly because filter pruning for residual blocks is well-known tricky. To address the above issues, we propose structure-regularized pruning (SRP), which imposes regularization on the pruned structure to ensure the locations of pruned filters are aligned across different layers. Specifically, for the layers connected by the same residual, we select the filters of the same indices as unimportant filters. To transfer the expressive power in the unimportant filters to the rest of the network, we employ L 2 regularization to drive the weights towards zero so that eventually, their absence will cause minimal performance degradation. We apply SRP to train efficient image SR networks, resulting in a lightweight network SRPN-Lite and a very deep one SRPN. We conduct extensive comparisons with both lightweight and larger networks. SRPN-Lite and SRPN perform favorably against other recent efficient SR approaches quantitatively and visually.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Replace all occurrences of SRPN-L with SRPN-Lite. Improve the english of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Replace SRPN-L by SPRN-Lite. Make the first and last sentence more fitting to the academic style.", "annotator": "annotator_07"}}
{"id_paragraph": "_VWsQJEH-X3.Tr4NZOz3iN.00", "parag_1": "For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] See Section (b) Did you describe the limitations of your work? [Yes] See Section (c) Did you discuss any potential negative societal impacts of your work? [No] (d) Have you read the ethics review guidelines and ensured that your paper conforms tothem? [Yes]2. If you are including theoretical results... (a) Did you state the full set of assumptions of all theoretical results? [Yes] See Section 2(b) Did you include complete proofs of all theoretical results? [Yes] See the supplementalmaterial (Appendix.pdf)3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main ex- perimental results (either in the supplemental material or as a URL)? [Yes] See the supplemental material. Our codes contains README.md to reproduce the our experi- mental results. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section (c) Did you report error bars (e g., with respect to the random seed after running experiments multiple times)? [Yes] See Section (d) Did you include the total amount of compute and the type of resources used (e.g., typeof GPUs, internal cluster, or cloud provider)? [Yes] See Section 5. If you are using existing assets (e g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [N/A] (b) Did you mention the license of the assets? [Yes] See the supplemental materials. Our codes contains LICENSE.txt. (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] See the supplemental materials containing our source codes (d) Did you discuss whether and how consent was obtained from people whose data you’reusing/curating? [N/A] Our dataset was reproduced by ourselves based on the literature (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, ifapplicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]", "parag_2": "For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] See Section(b) Did you describe the limitations of your work? [Yes] See Section(c) Did you discuss any potential negative societal impacts of your work? [No] (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] The paper conforms with the provided guidelines. If you are including theoretical results... (a) Did you state the full set of assumptions of all theoretical results? [Yes] See Section(b) Did you include complete proofs of all theoretical results? [Yes] See the supplemental material (Appendix.pdf) 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplemental material. Our codes contains README.md to reproduce the our experimental results. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] See Section(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Section 5. 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [N/A] (b) Did you mention the license of the assets? [Yes] See the supplemental materials. Our codes contains LICENSE.txt. (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] See the supplemental materials containing our source codes (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A] Our dataset was reproduced by ourselves based on the literature (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] 5. If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}}
{"id_paragraph": "H1O5OQGfz.SksTSgdMG.00", "parag_1": "We conduct a preliminary investigation of this issue by studying the generalizability of KD, BU and LID for detecting previously unseen attack strategies on the CIFAR-10 dataset. The KD, BU and LID detectors are trained on samples of the simplest attack strategy, FGM, and then tested on samples of the more complex attacks BIM-a, BIM-b, JSMA and Opt. The training and test datasets are generated in the same way as in our previous experiments, and the test attack data is standardized by scaling so as to fit the training data. The results are shown in Table 2, from which we see that the LID detector trained on FGM can accurately detect the much more complex attacks of the other strategies. The KD and BU characteristics can also achieve good performance on this transfer learning task, but are less consistent than our proposed LID characteristic. The results appear to indicate that the adversarial regions generated by different attack strategies possess similar dimensional properties.", "parag_2": "We conduct a preliminary investigation of this issue by studying the generalizability of KD, BU and LID for detecting previously unseen attack strategies on the CIFAR-10 dataset. The KD, BU and LID detectors are trained on samples of the simplest attack strategy, FGM, and then tested on samples of the more complex attacks BIM-a, BIM-b, JSMA and Opt. The training and test datasets are generated in the same way as in our previous experiments with only the FGM attack applied on the train set while the other attacks applied separately on the test set. The test attack data is standardized by scaling so as to fit the training data. The results are shown in Table 2, from which we see that the LID detector trained on FGM can accurately detect the much more complex attacks of the other strategies. The KD and BU characteristics can also achieve good performance on this transfer learning task, but are less consistent than our proposed LID characteristic. The results appear to indicate that the adversarial regions generated by different attack strategies possess similar dimensional properties.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}}
{"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.14", "parag_1": "A , three with Design B, and 10 with Design C . To complete this task, participants could rely on the bars that indicate allowed medication intake times with Design A and Design C . With Design B , the marker on the medication entry indicated the allowed time. As most participants were not successful in completing this task with Design B , they provided ample feedback on this design. Five participants said the design does not support the task, for example, P1 “it doesn’t show any other time of the day that you can take it”. Three participants (P5, P11, and P12) said they can reschedule in any free slot, for example, P9 said “It seems that 7am is a possibility because there is no other and there is no indication of conflicts.” . P10 commented they would move it to a slot and observe if a conflict was flagged, saying “I don’t know. I think I would just move it and see if a conflict came up.” . P8 participants indicated that use use of the bar to indicate allowed schedule times could also be read as extended release time, saying “That would mean that it’s something that it’s an extended release. Warfarin is not an extended release.” P10 and P6 reasoned that a medication that is supposed to be taken at a specific time point should not occupy a full hour on the calendar. For example, P10 said “I really don’t like this fact that it says 6am on the side and then it makes it a block of time.”", "parag_2": "B . To complete this task, participants could rely on the bars that indicate allowed medication intake times with Design A and Design C . With Design B , the marker on the medication entry indi- cated the allowed time and five participants said Design B did not support that task. For example, P1 said “it doesn’t show any other time of the day that you can take it”. Three participants (P5, P11, and P12) said they could reschedule in any free slot, for example, P9 said “It seems that 7am is a possibility because there is no other and there is no indication of conflicts.” . P10 commented they would move it to a slot and observe if a conflict was flagged, saying “I don’t know. I think I would just move it and see if a conflict came up.” . P indicated the use of the bar to indicate allowed schedule times could also be read as extended release time, saying “That would mean that it’s something that it’s an extended release. Warfarin is not an extended release.” P10 and P6 reasoned that a medication that is supposed to be taken at a specific time point should not occupy a full hour on the calendar. For example, P10 said “I really don’t like this fact that it says 6am on the side and then it makes it a block of time.”", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph much more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Please write more concisely about design B.", "annotator": "annotator_09"}}
{"id_paragraph": "atxti8SVk.3K9AmPwALM.09", "parag_1": "Semantic co-occurrence. Semantic context characterizes the co-occurrences of different objects and can be used to group and separate pixels. We define semantic context as the union of object classes in each image. Even without the location of labels, we can leverage semantic context to impose global regularization in the latent feature space, i.e., the pixel-wise feature embedding should be separated from all the other pixels from images without overlapping object categories.", "parag_2": "Semantic co-occurrence. Semantic context characterizes the co-occurrences of different objects, which can be used as a prior to group and separate pixels. We define semantic context as the union of object classes in each image. Even without the pixel-wise localization of semantic labels, we can leverage semantic context to impose global regularization on the latent feature: The feature should separate images without any overlapping object categories.", "annot_1": {"annotation": ["Concision"], "instruction": "Rewrite this paragraph to be more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Split the last sentence and make it slightly shorter. Improve the english.", "annotator": "annotator_07"}}
{"id_paragraph": "CVRUl83zah.I75TtW0V7.14", "parag_1": "Results Table 2 shows our results. iDSPN is close to solving the problem at any training set size by being exclusively multiset-equivariant like DSPN. As expected, the set-equivariant models are unable to solve this task because they cannot map the equal elements in the input to different elements in the output. This applies even to transformers with random position embeddings, which similarly to TSPN (Kosiorek et al., 2020) and Slot Attention (Locatello et al., 2020) use noise to make elements not exactly equal. Meanwhile, transformers with position encoding and BiLSTM require at least 100 × more training samples to come close to the performance of iDSPN. This is because they lack the correct structural bias of not relying on the order of the elements, which makes them less sample-efficient. Note that the non-equivariant models are unlikely to benefit significantly from even more training data because they overfit on 1 × and 10 × , but no longer on 100 × .", "parag_2": "Results. Table 2 shows our results. The two exclusively multiset-equivariant models DSPN and iDSPN perform well at any training set size. As expected, the set-equivariant models are unable to solve this task because they cannot map the equal elements in the input to different elements in the output. This applies even to transformers with random “position” embeddings, which similarly to TSPN (Kosiorek et al., 2020) and Slot Attention (Locatello et al., 2020) use noise to make elements not exactly equal. Meanwhile, BiLSTM and transformers with normal positional encoding require at least 100 × more training samples to come close to the performance of iDSPN. This is because they lack the correct structural bias of not relying on the order of the elements, which makes them less sample-efficient. Note that the non-equivariant models are unlikely to benefit significantly from even more training data because they overfit on 1 × and 10 × , but no longer on 100 × .", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Simplify the second sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the 2nd sentence to make it easier to read and less confusing.", "annotator": "annotator_07"}}
{"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.12", "parag_1": "Colored-and-gray-MNIST (Kim et al., 2019) is a synthetic dataset based on MNIST (LeCun et al., 1998). In the training set of 60,000 examples, each example has two images, a gray-scale image and a monochromatic image, with color strongly correlated with its digit label. For the validation set of 10,000 examples, each example also has a gray-scale and a corresponding monochromatic image however with a low correlation between the color and its label. We consider the monochromatic image as the first modality m 0 and the gray-scale one as the second modality m 1 . We use a neural network with four convolutional layers as the uni-modal branch and employ three MMTMs (Joze et al., 2020) to connect them. The corresponding uni-modal DNNs trained on the monochromatic images and We use this synthetic dataset mainly to show that the proposed greedy learner hypothesis happens in practice.", "parag_2": "Colored-and-gray-MNIST (Kim et al., 2019) is a synthetic dataset based on MNIST (LeCun et al., 1998). In the training set of 60,000 examples, each example has two images, a gray-scale image and a monochromatic image, with color strongly correlated with its digit label. For the validation set of 10,000 examples, each example also has a gray-scale image and a corresponding monochromatic image however with a low correlation between the color and its label. We consider the monochromatic image as the first modality m 0 and the gray-scale one as the second modality m 1 . We use a neural network with four convolutional layers as the uni-modal branch and employ three MMTMs to connect them. The corresponding uni-modal DNNs trained on the monochromatic images and the gray-scale images achieve the validation accuracies of 43% and 98%, respectively. We use this synthetic dataset mainly to demonstrate the proposed greedy learner hypothesis.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_05"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}}
{"id_paragraph": "JKGrgCQWiO.vNgWh1ZGc.00", "parag_1": "In this work, we develop a novel third family of attacks, recursive gradient attack on privacy (RGAP), that is based on a recursive, depth-wise algorithm for recovering training data from gradient information. Different from the analytical attack using the bias term, R-GAP utilizes much more information and is the first closed-form algorithm that works on both convolutional networks and fully connected networks with or without bias term. Compared to optimization-based attacks, it is not susceptible to local optima, and is orders of magnitude faster to run with a deterministic running time. Furthermore, the insights gained from the closed form of our recursive attack have lead to a refined rank analysis that predicts which network architectures enable full recovery, and which lead to provable security or noisy recovery due to rank-deficiency. This explains well the performance of both closed-form and optimization-based attacks.", "parag_2": "In this work, we develop a novel third family of attacks, recursive gradient attack on privacy (RGAP), that is based on a recursive, depth-wise algorithm for recovering training data from gradient information. Different from the analytical attack using the bias term, R-GAP utilizes much more information and is the first closed-form algorithm that works on both convolutional networks and fully connected networks with or without bias term. Compared to optimization-based attacks, it is not susceptible to local optima, and is orders of magnitude faster to run with a deterministic running time. Furthermore, we show that under certain conditions our recursive attack can fully recover training data in cases where optimization attacks fail. Additionally, the insights gained from the closed form of our recursive attack have lead to a refined rank analysis that predicts which network architectures enable full recovery, and which lead to provable noisy recovery due to rankdeficiency. This explains well the performance of both closed-form and optimization-based attacks.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}}
{"id_paragraph": "CVRUl83zah.I75TtW0V7.16", "parag_1": "Finally, we evaluate iDSPN on the CLEVR (Johnson et al., 2017) object property prediction task that was used in DSPN (Zhang et al., 2019) and Slot Attention (Locatello et al., 2020). Given an image of a synthetic 3d scene containing several objects, the goal is to predict the set of their properties: 3d coordinate, size, color, and material (18 dimensions in total). An additional dimension is added as a binary indicator for whether an element is present or not to account for the different set sizes in the dataset. The main comparison we intend to make here is against Slot Attention, the state-of-the-art model on this dataset. Setup We follow the experimental setup of DSPN where a ResNet (He et al., 2016) encodes the image into the input vector z , which is decoded into the set by iDSPN. We include a few modifications to improve our results and also simplify the model slightly, which we elaborate on in Appendix D. Most notably, we use Nesterov’s Accelerated Gradient as iDSPN optimizer.", "parag_2": "Finally, we evaluate iDSPN on the CLEVR (Johnson et al., 2017) object property prediction task that was used in DSPN (Zhang et al., 2019) and Slot Attention (Locatello et al., 2020). Given an image of a synthetic 3d scene containing up to ten objects, the goal is to predict the set of their properties: 3d coordinate, size, color, and material (18 dimensions in total). To account for different set sizes in the dataset, a 19th dimension is added as a binary indicator for whether an element is present or not. Setup. We mostly follow the experimental setup of DSPN where a ResNet (He et al., 2016) encodes the image into the input vector z , which is decoded into a set by iDSPN. We also include a few modifications, which we elaborate on in Appendix D. Most notably, we use Nesterov’s Accelerated Gradient as iDSPN optimizer and increase the number of optimization steps, both changes that would make DSPN even slower and use more memory.", "annot_1": {"annotation": ["Development", "Content_addition"], "instruction": NaN, "annotator": "annotator_07"}}
{"id_paragraph": "kJOgIGrJMU.jC95Y2Lt4f.00", "parag_1": "Pratt, 1998) and (multi-source) domain adaptation. In fact, there are a few works in domain generalization that are inspired by the meta-learning principles, such as Li et al. (2018a) ; Balaji et al. ; Li et al. Dou et al. In Ren et al. (2018), we also see the leveraging of gradient inner product in meta-learning, where it is used to determine the importance weight of training examples. We discuss the connection between our proposed algorithm to meta-learning in more details in Appendix A.1.", "parag_2": "Pratt, 1998) and (multi-source) domain adaptation. In fact, there are a few works in domain generalization that are inspired by the meta-learning principles, such as Li et al. (2018a); Balaji et al. ; Li et al. Dou et al. Specifically, Li et al. (2020) also proposes to adapt Reptile for domain generalization tasks, however they study their method under the sequential learning setting, whereas our method can be trained on all domains and therefore learns faster, especially when the number of domains is large. In Ren et al. (2018), we also see the leveraging of gradient inner product in meta-learning, where it is used to determine the importance weight of training examples. We discuss the connection between our proposed algorithm to meta-learning in more details in Appendix A.1.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}}
{"id_paragraph": "hegI87bI5S.fL6Q48sfx8.19", "parag_1": "Twelve local university students from a different participants group from experiment 1 participated in this experiment. The average age was 22.3 years ( SD = 1 . 67). All participants were skillful in mouse operation and used their dominant right hand.", "parag_2": "A total of 12 local university students from a different participants group from that in Experiment 1 participated in this experiment. The average age was 22.3 years ( SD = 1 . 67). All participants were skilled in mouse operation and used their dominant hand (right hand).", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rephrase the paragraph", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Revise this text to make it more clear.", "annotator": "annotator_07"}}
{"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.18", "parag_1": "Reweighting methods weight individuals with balanced score to obtain globally balanced distributions, represented by the inverse propensity score (IPS) approach (Rosenbaum & Rubin, 1983a) and its doubly robust variant (Robins et al., 1994). Imai & Ratkovic (2014) and Fong et al. (2018) propose to calculate the balancing score via an optimization problem. Kuang et al. (2017b) and Kuang et al. (2017a) further consider the non-confounding factors in covariates. However, these methods suffer from high variance and are vulnerable to non-overlapped units.", "parag_2": "Reweighting-based methods weight individuals with balanced scores to achieve globally balanced distributions, represented by the inverse propensity score (IPS) approach (Rosenbaum & Rubin, 1983a) and its doubly robust variant (Robins et al., 1994). Imai & Ratkovic (2014) and Fong et al. (2018) propose calculating the balancing score by solving an optimization problem. Kuang et al. (2017b) and Kuang et al. (2017a) consider additional non-confounding factors in covariates. However, these methods are susceptible to non-overlapping units and suffer from a high variance issue.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Use formal words in the last sentence.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Reorder the last sentence arguments. Make this paragraph a bit more precise.", "annotator": "annotator_07"}}
{"id_paragraph": "X50LVGSli.jqJzurpUu.02", "parag_1": "MVC problem respectively. In both problems and across the five datasets, Meta-EGN siginificantly outperforms EGN and RUN-CSP, both before and after the fine-tuning step. In comparison with the traditional CO solvers, Meta-EGN narrows the gap from Gurobi9.5 on those real small graphs. For RB graphs, Meta-EGN outperforms Gurobi9.5 for the MC problem. For the MVC problem, Meta-EGN outperforms Gurobi9.5 on RB500.", "parag_2": "MVC problem respectively. In both problems and across the five datasets, Meta-EGN significantly outperforms EGN and RUN-CSP, both before and after the fine-tuning step. In comparison with the traditional CO solvers, Meta-EGN narrows the gap from Gurobi9.5 on those real small graphs. For RB graphs, Meta-EGN outperforms Gurobi9.5 on RB500 for both the MC and MVC problems.", "annot_1": {"annotation": ["Concision"], "instruction": "Fuse the last two sentences for conciseness.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Merge the two last sentences to make it shorter.", "annotator": "annotator_07"}}
{"id_paragraph": "Rd7TGMaUy.dkY5HcKwZ1.02", "parag_1": "A curious feature of our model is that during training one has to back-propagate over the gradient of target distribution multiple times to optimize R . In (Titsias & Dellaportas, 2019), the authors avoid multiple back-propagation by stopping the derivative calculation at the density gradient term. In our experiment, we find it is necessary for good performance.", "parag_2": "A curious feature of our model is that during training one has to back-propagate over the gradient of the target distribution multiple times to optimize R . In (Titsias & Dellaportas, 2019) the authors avoid multiple back-propagation by stopping the derivative calculation at the density gradient term. In our experiment we do not use this trick and perform full back-propagation without encountering any issue. We found that stopping the derivative computation instead harms performance.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}}
{"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.19", "parag_1": "Results As shown in Figure 3, | d util ( f ) | increases along λ , especially when log( λ ) ≥ − 5 and | d util ( f ) | is positively correlated with R ( f ) . In other words, the stronger the regularization, the larger the imbalance in utilization between modalities we obverse. It confirms the second conjecture in §3.2, which states that the stronger regularization we apply, the greedier the multi-modal learning process becomes. We see | d speed | follows the same trend as | d util | . Again, it supports our choice of using the conditional learning speed to predict the conditional utilization rate.", "parag_2": "Results As shown in Figure 3, | d util ( f ) | increases along λ , especially when log( λ ) ≥ − 5 . We also see that | d util ( f ) | is positively correlated with R ( f ) . In other words, the stronger the regularization is, the larger the imbalance in utilization between modalities we observe. We see that | d speed | follows the same trend as | d util | . Again, it supports our choice of using the conditional learning speed to predict the conditional utilization rate.", "annot_1": {"annotation": ["Content_deletion", "Rewriting_medium"], "instruction": "Split first sentence in two and delete the third sentence", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Exclude redundant expression.", "annotator": "annotator_08"}}
{"id_paragraph": "g5N2H6sr7.6J3ec8Dl3p.03", "parag_1": "The classification accuracies on the five benchmarks are shown in Table 1. MLG, as a kernel method, performs well on PROTEINS. However, it suffers from a long run time and takes more than 1 day on two larger datasets, as observed in INFOGRAPH. Our method achieves the best results in 4 out of 5 datasets compared with both kernel and unsupervised models, e.g., it achieves 1.5% improvement over previous state-of-the-art on Reddit-Binary, and 1.8% improvement on IMDB-Binary, which shows the superiority of our methods. ", "parag_2": "The classification accuracies on the five benchmarks are shown in Table 4. MLG, as a kernel method, performs well on PROTEINS. However, it suffers from a long run time and takes more than 1 day on two larger datasets, as observed in INFOGRAPH. Our method achieves the best results in 4 out of 5 datasets compared with both kernel and unsupervised models, e.g., it achieves 1.5% improvement over previous state-of-the-art on Reddit-Binary, and 1.8% improvement on IMDB-Binary, which shows the superiority of our methods. When compared with supervised graph classification models, ours beats the best supervised classification model GIN on IMDB-BIN, on-par with GIN on PROTEINS, IMDB-MULTI, and only loses on REDDIT.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}}
{"id_paragraph": "IuxfzBFSR0.CSFycBGzvd.01", "parag_1": " Theorem 6.1 applies to the general cost function with ρ set to 0. Note that the regret upper bound depends on the total number of time steps T , which is random. To replace the T -dependence by the", "parag_2": "K rather than T . Furthermore, for finite horizon MDPs, the number of steps is equal to T = KH . In this case, the result in 6.1 can avoid the extra factor of B, c min and other logarithmic term. Theorem 6.1 applies to the general cost function with ρ set to 0. Note that the regret upper bound depends on the total number of time steps T , which is random. To replace the T -dependence by the", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_08"}}
{"id_paragraph": "BkxG1CvhWf.wcpE7maMLZ4.02", "parag_1": "First, it is the tightest topological property of state spaces that has been studied in the literature of model checking and planning, as far as we know. Secondly, although the worst-case complexity of computing the diameter for a factored transition system, and succinct digraphs more generally, is Π P 2 -hard (Hemaspaandra et al. 2010), there are practical methods that can compositionally compute upper bounds on the diameter (Baumgartner, Kuehlmann, and Abraham 2002; Rintanen and Gretton 2013; Abdulaziz, Gretton, and", "parag_2": "First, it is the tightest topological property of state spaces that has been studied. Secondly, although the worst-case complexity of computing the diameter for a succinct graph is Π P 2 -hard (Hemaspaandra et al. 2010), there are practical methods that can compositionally compute upper bounds on the diameter (Baumgartner, Kuehlmann, and Abraham 2002; Rintanen and Gretton 2013; Abdulaziz, Gretton, and", "annot_1": {"annotation": ["Concision"], "instruction": "Remove unnecessary details.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Concision"], "instruction": "Concise by removing unnecessary details.", "annotator": "annotator_07"}}
{"id_paragraph": "jyac3IgQ44.f4au9jfat5.00", "parag_1": "Vision transformer. Transformer [42, 5] has recently achieved great success in computer vision [6,1 , 21, 18, 58, 50, 44]. Swin-transformer [21] restricts self-attention to non-overlapping local windowswhile allowing cross-window connection to improve efficiency. SSA [29] divides attention headsinto multiple groups to aggregate image features with different granularities. Guo et al. [9] and Zhao et al. [55] make the first step towards introducing the transformer for point cloud analysis. Recently, many approaches [45, 54, 25, 10, 22] apply local self-attention on voxels to learn richerfeature representation. Our work extends the window-based attention on 3D voxelsby introducingscale-aware attention learning equipped with novel sampling strategies for the queries and the keys toimprove both accuracy and efficiency.", "parag_2": "Vision transformer. Inspired by the great success of Transformer [35, 4] in NLP, some worksemploy it in the field of computer vision [5, 1, 18, 16, 50, 42, 36]. Swin-transformer [18] increasesefficiency by limiting self-attention computation to non-overlapping local windows while allowing forcross-window connection. SSA [24] divides the attention head into multiple groups and aggregatesimage features with different granularity, respectively, which has achieved excellent performance. In the field of point cloud, Guo et al. [8] and Zhao et al. [47] introduce transformer paradigm forpoint cloud classification and segmentation.
Recently, many methods [37, 46, 22, 9, 19] applylocal self-attention mechanism on voxels to learn richer feature representations. Our model extendswindow attention to 3D voxels and flexibly combines different window sizes to capture multi-scaleand multi-granularity features while shunting them to different attention heads in SSA.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph for improved readability and clarity.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_heavy", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Rd7TGMaUy.dkY5HcKwZ1.00", "parag_1": "In MCMC, one chooses a transition kernel that leaves the target distribution invariant and constructs a Markov Chain by applying the kernel repeatedly. The MCMC method relies only on the ergodicity assumption. Other than that it is general, if enough computation is performed, the Markov Chain generates correct samples from any target distribution, no matter how complex the distribution is.", "parag_2": "In MCMC, one chooses a transition kernel that leaves the target distribution invariant and constructs a Markov Chain by applying the kernel repeatedly. The MCMC method relies only on the ergodicity assumption, other than that it is general. If enough computation is performed, the Markov chain generates correct samples from any target distribution, no matter how complex the distribution is.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rephrase the paragraph", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Balance sentences length.", "annotator": "annotator_07"}} {"id_paragraph": "-JRdgpyZWz.82w43do7ak.00", "parag_1": "Choose a [ t ] = w.p. m ] = w.p 1 − m t then end end end are multiple types of arms and train a separate NeurWIN for each type. 
During testing, the controller calculates the index of each arm based on the arm’s NeurWIN and schedules the M arms with the highest indices.", "parag_2": "For such scenarios, we consider that there are multiple types of arms and train a separate NeurWIN for each type. During testing, the controller calculates the index of each arm based on the arm’s state and schedules the M arms with the highest indices.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Rd7TGMaUy.dkY5HcKwZ1.01", "parag_1": "To incorporate the gradient of the target distribution into the sampler, we take inspiration from the HMC algorithm. Basic HMC starts with drawing a random initial momentum v 0 , followed by several steps of leapfrog integration. In the following, we denote the momentum variable after n updates by v n and position by x n . In a leapfrog step, the integrator first updates v with a half step of the gradient: v n (cid:48) = v n − 1 − (cid:15) 2 ∂ x U ( x n − 1 ) , followed by a full step of x update: x n = x n − 1 + (cid:15)v n (cid:48) , and another half step of v update: v n = v n (cid:48) − (cid:15) \n ∂ x U ( x n ) . After several steps, the update of x can be written as: x n = x 0 + (cid:80) ni =\n v i (cid:48) , which has the form x (cid:48) = x + z with z = (cid:80) ni v i (cid:48) = − nv 0 − n(cid:15) 2 (cid:2) ∂ x U ( x 0 ) (cid:3) − (cid:15) (cid:2)(cid:80) ni =1 ( n − i ) ∂ x U ( x i ) (cid:3) . Note that, although z does not have a tractable density, the equation suggests that a tractable model of z that use gradient information effectively should subtract the gradient of the target distribution evaluated at intermediate point of x from a sample of a base distribution v 0 .", "parag_2": "The gradient of the target distribution enters our model in those affine transformations. 
To motivate the particular form we choose, we take a closer look at the HMC algorithm. Basic HMC starts with drawing a random initial momentum v 0 , followed by several steps of leapfrog integration. Let x n be the momentum variable after n updates by v n and position. In a leapfrog step, the integrator first updates v with a half step of the gradient: v n (cid:48) = v n − 1 − (cid:15) 2 ∂ x U ( x n − 1 ) , followed by a full step of x update: x n = x n − 1 + (cid:15)v n (cid:48) , and another half step of v update: v n = v n (cid:48) − (cid:15) \n ∂ x U ( x n ) . After several steps, the overall update of x can be written as: x n = x 0 + (cid:80) ni =\n v i (cid:48) , which has the form x (cid:48) = x + z with z = (cid:80) ni v i (cid:48) = − nv 0 − n(cid:15) 2 (cid:2) ∂ x U ( x 0 ) (cid:3) − (cid:15) (cid:2)(cid:80) ni =1 ( n − i ) ∂ x U ( x i ) (cid:3) . The equation for generating z through affine transformations, describes how the gradient of the target distribution, evaluated at some intermediate point of x , should be included.", "annot_1": {"annotation": ["Development", "Concision"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Development", "Concision"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1BhqsOsB.1mgtDFRDc.03", "parag_1": "We show a visualization of the occupancy grids in Figure 9 (right). We visualize the occupancy grids by converting them to heightmaps. This is achieved by multiplying each voxel’s occupancy value by its height coordinate in the grid, and then taking a max along the grid’s height axis. The visualizations show that the occupancy module learns to fill the “holes” of the partial view, effectively imagining the complete 3D scene.", "parag_2": "We show a visualization of the estimated occupancy volumes in Figure 9-right. We visualize the occupancy volumes by converting them to heightmaps. 
This is achieved by multiplying each voxel’s occupancy value by its height coordinate in the grid, and then taking a max along the grid’s height axis. The visualizations show that the occupancy module learns to fill the “holes” of the partial view.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove second part of last sentence and Replace \"grids\" by \"volumes\" ", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Delete unnecessary details. Make the text more formal.", "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.07", "parag_1": "Pointing operation to the edge target takes advantage of the cursor stopping at the edge of the screen to complete the pointing without precise control. However, pushing-edge (pushing the cursor to the edge of the screen) behavior increases the distance traveled by the mouse, thereby increasing the movement time. Yamanaka [25] defined PE (Path Efficiency) to calculate the efficiency of the cursor movements (Eq. 7).", "parag_2": "A pointing operation for an edge target exploits the fact that the cur- sor stops at the edge of the screen to complete the pointing without precise control. However, pushing-edge behavior, i.e., pushing the cursor to the edge of the screen, increases the distance traveled by the mouse, and this increases the movement time. Yamanaka [28] defined PE (Path Efficiency) to calculate the efficiency of the cursor movements (Eq. 
7).", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Change some words in this paragraph for the better ", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Improve the linking between ideas to make the paragraph more precise and readable.", "annotator": "annotator_07"}} {"id_paragraph": "vhQqjIOkI.IA5eA6BTPs.00", "parag_1": "The second component focuses on modeling the global temporal patterns in the dataset through identifying a small set of temporal global basis functions . The basis time-series, when combined in different ways, can express the individual dynamics of each time series. In our model, the basis time-series are encoded in a trained seq-2-seq model (Sutskever et al., 2014) model in a functional form. Each time series is then associated with a learned embedding vector that specifies the weights for decomposition along these basis functions. Predicting a time series into the future using this model then just involves extrapolating the global basis functions and combining them using its weight vector, without explicitly using the past values of that time series. The coherence constraints therefore only impose constraints on the embedding vectors of each time series, which can be easily modeled by a hierarchical regularization function. We call this component a basis decomposition model . In Section A.2, we also provide theoretical justification for how such hierarchical regularization using basis decomposition results in improved prediction accuracy.", "parag_2": "The second component focuses on modeling the global temporal patterns in the dataset through identifying a small set of temporal global basis functions . The basis time-series, when combined in different ways, can express the individual dynamics of each time series. In our model, the basis time-series are encoded in a trained seq-2-seq model (Sutskever et al., 2014) model in a functional form. 
Each time series is then associated with a learned embedding vector that specifies the weights for decomposition along these basis functions. Predicting a time series into the future using this model then just involves extrapolating the global basis functions and combining them using its weight vector, without explicitly using the past values of that time series. The coherence constraints therefore only impose constraints on the embedding vectors of each time series, which can be easily modeled by a hierarchical regularization function. We call this component a basis decomposition model . As we will see, this part of the model is only approximately coherent unless the embedding constraints hold exactly. In particular, in this paper, we focus on improving model accuracy rather than preserving exact coherency. In Section A.2, we also provide theoretical justification for how such hierarchical regularization using basis decomposition results in improved prediction accuracy.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "9wfZbn73om.FhHH15YtKt.00", "parag_1": "To this end, we define a kind of ( σ, δ ) -measure to mathematically quantify the data augmentation, and then provide an upper bound of the downstream classification error rate based on the measure. It reveals that the generalization ability of contrastive self-supervised learning is related to three key factors: alignment of positive samples, divergence of class centers, and concentration of augmented data. The first two factors are properties of learned representations, while the third one is determined by pre-defined data augmentation. With the above theoretical findings, we then study two canonical contrastive losses, InfoNCE and cross-correlation, to see how they satisfy the first two factors. 
Furthermore, we conduct various experiments to study the third factor, and observe that the downstream performance is highly correlated to the concentration of augmented data.", "parag_2": "To this end, we define a kind of ( σ, δ ) -measure to mathematically quantify the data augmentation, and then provide an upper bound of the downstream classification error rate based on the measure. It reveals that the generalization ability of contrastive self-supervised learning is related to three key factors: alignment of positive samples, divergence of class centers, and concentration of augmented data. The first two factors are properties of learned representations, while the third one is determined by pre-defined data augmentation. We further investigate two canonical contrastive losses, InfoNCE and cross-correlation, to show how they provably achieve the first two factors. Moreover, we conduct experiments to study the third factor, and observe a strong correlation between downstream performance and the concentration of augmented data.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Use accurate words.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Make the second half of this paragraph more precise and direct.", "annotator": "annotator_07"}} {"id_paragraph": "5t8NvKONr.tls-ZX2iE.01", "parag_1": "We now introduce the theorem, which offers a guideline on the neural network architecture for operator learning. It suggests that if the entire architecture can be replaced with a fully connected neural network, large complexity should be required for training. It also verifies that the lower bound for a universal activation function is a sharp bound on the number of parameters. We note here an assumption, which is a sufficient condition for proving the theorem.", "parag_2": "We now introduce the theorem, which offers a guideline on the neural network architecture for operator learning. 
It suggests that if the entire architecture can be replaced with a fully connected neural network, large complexity should be required for approximating the target function. It also verifies that the lower bound for a universal activation function is a sharp bound on the number of parameters. First, we give an assumption to obtain the theorem.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove redundant details. Use more precise words.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Make it more precise when necessary.", "annotator": "annotator_07"}} {"id_paragraph": "fJhx73ErBg.NeKLbmOxG8.02", "parag_1": "LogicRiskNet - a differentiable parametric ptSTL risk monitor. We describe how we may construct a ptSTL risk monitor based on a learned stochastic risk measure that can be applied to aprobabilistic description of human behaviors. We extend the definition of robustness degree (introduced in Section 2) to encompass belief states. In order to calculate the risk of a trajectory of beliefstates w.r.t a ptSTL formula φ , we need to modify the definition of the robustness. Recall that thebasic element of φ is a predicate of the form p ( x ) < c , define α ( x ) = c − p ( x ) as the predicatefunction (this is same as the robustness definition of the predicate). Since x is stochastic, rather thandeterministic, we can evaluate ptSTL formulas instead using a risk measure ρ : X → R , representing expectation, mean-variance, etc. {+\n+} We can then apply this risk measure to the robustness definitionyielding α ρ ( x ) = ρ ( α ( x )) , x ∈ X (This treatment is similar to [12]).", "parag_2": "LogicRiskNet - a differentiable parametric ptSTL risk monitor. We describe how we may construct a ptSTL risk monitor based on a learned stochastic risk measure that can be applied to aprobabilistic description of human behaviors. 
We extend the definition of robustness degree (introduced in Section 2) to encompass belief states. In order to calculate the risk of a trajectory of beliefstates w.r.t a ptSTL formula φ , we need to modify the definition of the robustness. Recall that thebasic element of φ is a predicate of the form p ( x ) < c , define α ( x ) = c − p ( x ) as the predicate (thisis same as the robustness degree for the predicate). Since x is stochastic, rather than deterministic,we can evaluate ptSTL formulas instead using a risk measure ρ : X → R , representing expectation,mean-variance, value-at-risk, etc. {+\n+} In this work, we assume the expectation risk measure, but canequally well replace this with others. We can then apply this risk measure to the robustness definitionyielding α ρ ( x ) = ρ ( α ( x )) , x ∈ X (This treatment is similar to [13]).", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.07", "parag_1": "According to the greedy learner hypothesis, we can make multi-modal learning less greedy by controlling the speed at which a multi-modal DNN learns to rely on each modality. To this end, we derive conditional learning speed to measure the speed at which the DNN learns from one modality. It serves as an efficient proxy to the conditional utilization rate of the corresponding modality. We then propose the balanced multi-modal learning algorithm, which controls model’s conditional learning speed between modalities in order to prevent it from being greedy.", "parag_2": "We aim to make multi-modal learning less greedy by controlling the speed at which a multi-modal DNN learns to rely on each modality. To this end, we define conditional learning speed to measure the speed at which the DNN learns from one modality. 
It serves as an efficient proxy to the conditional utilization rate of the corresponding modality, as shown empirically in §5.2 and §5.3. We then propose the balanced multi-modal learning algorithm, which controls the difference in conditional learning speed between modalities that the model exhibits during training.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "Byyb66j52G.hR5KKRfhQm.13", "parag_1": "On the contrary, random convolution can induce a growing difficulty by increasing the number of factors on a single background. Therefore, the generalization rapidly decreases after augmentationinterrupted when training with a single background because the learning direction toward generalization about various backgrounds is not helpful to train. On the other hand, the training can have helpwhen their difficulty is solved by augmentation, such as Figure 2(b) and Figure 2(c). Thus, in deep RL, neural networks maintain the regularization when augmentation helps the training.", "parag_2": "On the contrary, random convolution can induce a growing difficulty by increasing the number of factors on a single background. Therefore, the generalization rapidly decreases after augmentation is interrupted during training with a single background because the learning direction toward generalization about various backgrounds is not helpful to train. In contrast, the training can help when their difficulty is solved by augmentation (Figure 2(b), 2(c)). 
Thus, in deep RL, neural networks maintain the regularization when augmentation helps the training.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Add missing spaces.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the english in the paragraph, make it slightly more formal.", "annotator": "annotator_07"}} {"id_paragraph": "v8Vdrwfrg.Hrx_LZTUq.01", "parag_1": "Dynamic p. during training g ( (cid:101) w t ) (on pruned model) dynamically adapting mask m t + adaptive every few iterations + can recover from premature pruning from a theoretical point of view, and to provide further insights and interpretation. We do not require tuning of additional hyperparameters, and no retraining of the sparse model is needed (though can further improve performance).", "parag_2": "Dynamic p. during training g ( (cid:101) w t ) (on pruned model) dynamically adapting mask m t + adaptive every few iterations + can recover from premature pruning tuning of additional hyperparameters, and no retraining of the sparse model is needed (though can further improve performance).", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "yUZ7b8bJWZ.rcOtDrFL7.00", "parag_1": "For example, Cohen et al. ; Rezende & Racani`ere (2021) approximate the OT map to define normalizing flows on Riemannian manifolds, Hamfeldt & Turnquist (2021a;b); Cui et al. (2019) derive algorithms to approximate the OT map on the sphere, Alvarez-Melis et al. HoyosIdrobo (2020) learn the transport map on hyperbolic spaces. However, the computational bottleneck to compute the Wasserstein distance on such spaces remains, and, as underlined in the conclusion of (Nadjahi, 2021), defining SW distances on manifolds would be of much interest. 
Notably, Rustamov & Majumdar (2020) proposed a variant of SW, based on the spectral decomposition of the LaplaceBeltrami operator, which generalizes to manifolds given the availability of the eigenvalues and eigenfunctions. However, it is not directly related to the original SW on Euclidean spaces.", "parag_2": "For example, Cohen et al. Idrobo (2020) learn the transport map on hyperbolic spaces. However, the computational bottleneck to compute the Wasserstein distance on such spaces remains, and, as underlined in the conclusion of (Nadjahi, 2021), defining SW distances on manifolds would be of much interest. Notably, Rustamov & Majumdar (2020) proposed a variant of SW, based on the spectral decomposition of the LaplaceBeltrami operator, which generalizes to manifolds given the availability of the eigenvalues and eigenfunctions. However, it is not directly related to the original SW on Euclidean spaces.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.21", "parag_1": "We had to exclude 10 runs of DSPN due to significantly worse results making their average uninformative. The following excluded DSPN runs were all using 40 iterations:", "parag_2": "We had to exclude 10 runs of DSPN due to significantly worse results making their average uninformative. We did not observe any stability issues with iDSPN, so we did not have to exclude any iDSPN runs. The following excluded DSPN runs were all using 40 iterations:", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "fDUdAYCQqZy.0cNiGAHFml.01", "parag_1": "Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while avoiding extrapolation error in the action space. 
EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with enhanced advantage estimation. A formal description for the VEM algorithm is shown in Algorithm 1 in Appendix A.1.", "parag_2": "Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while avoiding extrapolation error in the action space confines value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with enhanced advantage estimation. A formal description for the VEM algorithm is shown in Algorithm 1 in Appendix A.1.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.06", "parag_1": "Traditional approaches to predicting the effect of mutation on protein binding can be roughly divided into two classes: biophysical methods and statistical methods. Biophysical methods focus on modeling inter-atomic interactions, e.g. hydrogen bonding, electrostatic forces, etc., with mechanical and statistical energy functions. These methods use their underlying energy function to sample conformations of the mutated protein complex and predict the change in binding free energy upon mutations (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017; Steinbrecher et al., 2017). Statistical methods are based on feature engineering. 
Descriptors that summarize the geometric, physical, evolutionary, and motif properties of proteinsare used to fit the statistical model that predicts the effect of mutations on binding free energy (Geng et al., 2019a; Zhang et al., 2020). Traditional methodsgenerally face the trade-off between speed and accuracy due to the time-consuming sampling process. Sophisticated energy functions and feature engineering underlying these methods depend heavily on human knowledge, thus limiting their pace to improve with the fast-growing of available protein structures.", "parag_2": "Traditional approaches to predicting the effect of mutation on protein binding can be roughly divided into two classes: biophysical and statistical methods. Biophysical methods utilize energy functions to model inter-atomic interactions. These methods sample conformations of the mutated protein complex and predict changes in binding free energy (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017; Steinbrecher et al., 2017). Statistical methods rely on feature engineering, which uses descriptors summarizing geometric, physical, evolutionary, and motif properties of proteins to predict mutational effects (Geng et al., 2019a; Zhang et al., 2020). Traditional methods face the trade-off between speed and accuracy. Their performance depends heavily on human expertise, which limits their pace to improve with the fast-growing of available protein structures.", "annot_1": {"annotation": ["Concision", "Content_deletion"], "instruction": "Summarize this:", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "BkVj6Z-AW.SytnTZWCZ.01", "parag_1": "LSTM or RNN networks for this task Fragkiadaki et al. ; Jain et al. ; Bütepage et al. (2017); Martinez et al. and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc. 
However, these existing methods also have a critical drawback: the motion becomes unrealistic within a couple of seconds and is unable to recover.", "parag_2": "LSTM or RNN networks for this task (Fragkiadaki et al., 2015; Jain et al., 2016; Bütepage et al., 2017; Martinez et al., 2017), and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc. However, these existing methods also have a critical drawback: the motion becomes unrealistic within a couple of seconds and is unable to recover.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.01", "parag_1": "Electronic calendars have become instrumental in the manage- ment of daily activities [16–18]. They are used to coordinate interactions among individual schedules of family or team members and can convey meaning and values behind the priorities of schedul- ing [19]. Calendars have been used to visualize temporal trends that include everyday activities such as energy use in work places, fitness tracking, and work routines [20–22]. In healthcare, unit-of-use packaging that incorporates a simple day or date feature have been used in efforts to help manage prescriptions and improve adherence by prompting patients to maintain the prescribed dosing schedule [23]. While such prescription managers come as stand-alone tools, we look at the possibility of integrating prescription management into the main calendar already used by the patient. Such integration comes with challenges. The first challenge is how to render the pre- scription entries so that the user can differentiate between a normal calendar entry and one that is part of a prescription. 
The second chal- lenge is how to ensure that the patient rescheduling a prescription entry does so within the safety envelop of the prescription.", "parag_2": "Electronic calendars have become instrumental in the manage- ment of daily activities [16–18]. They are used to coordinate interactions among individual schedules of family or team members and can convey meaning and values behind the priorities of scheduling [19]. Calendars have been used to visualize temporal trends that include everyday activities such as energy use in work places, fitness tracking, and work routines [20–22]. We are interested in exploring the possibility of integrating pre- scription management into electronic calendars that are already used by many patients [23,24]. Such integration comes with challenges. The first challenge is how to render the prescription entries so that the user can differentiate between a normal calendar entry and one that is part of a prescription. The second challenge is how to ensure that patients who reschedule prescriptions do so within the specified safety constraints.", "annot_1": {"annotation": ["Concision"], "instruction": "Rewrite the latter half of this paragraph to make it more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Merge the two sentences in the middle about integrating prescription management in a new shorter sentence. Improve the english in the last sentence.", "annotator": "annotator_07"}} {"id_paragraph": "RPX7thbt2Mv.PdsbQ4ckYr.00", "parag_1": "Ermon, 2016), require a potentially large number of online samples during training, resulting in poor sample efficiency. Moreover, algorithms such as GAIL follow a similar training paradigm as in the Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which formulate the problem of IRL as a minimax optimization problem to learn a discriminator that implicitly minimizes an f-divergence. 
It was found however that AIL methods can be very difficult to optimize, requiring careful tuning of hyperparameters (Orsini et al., 2021). Another popular approach is to perform distribution matching between the policy and the expert demonstrations as popularized by algorithms such as ValueDICE (Kostrikov et al., 2022b).", "parag_2": "Ermon, 2016), require a potentially large number of online samples during training, resulting in poor sample efficiency. Moreover, algorithms, such as GAIL, follow a training paradigm that is similar to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), by formulating the problem of IRL as a minimax optimization problem to learn a discriminator that implicitly minimizes an f-divergence. It was found, however, that Adversarial imitation learning (AIL) methods such as GAIL can be very difficult to optimize, requiring careful tuning of hyperparameters (Orsini et al., 2021). Another popular approach is to perform distribution matching between the policy and the expert demonstrations (Englert et al., 2013; Kostrikov et al., 2022b).", "annot_1": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "HJ1yWQ-A-.BkSfNXWAb.00", "parag_1": "Humans excel at recognizing objects such as handwritten digits from modified or perturbed inputs. Even if presented with digits which are translated, corrupted, or inverted, we can usually correctly label them without the need of re-learning them from scratch. This may be due to the fact that human intelligence utilizes mechanisms (such as translation) that are generic and generalize across object classes.
These mechanisms are modular, re-usable and broadly applicable , and the problem of learning such mechanisms from data is a fundamental question for the study of transfer.", "parag_2": "Humans are able to recognize objects such as handwritten digits based on distorted inputs. When presented with digits which are translated, corrupted, or inverted, we can usually correctly label them without the need of re-learning them from scratch. The same applies for new objects, essentially after having seen them once. This may be due to the fact that human intelligence utilizes mechanisms (such as translation) that are generic and generalize across object classes. These mechanisms are modular, re-usable and broadly applicable , and the problem of learning them from data is a fundamental question for the study of transfer.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "JSl6-2Rvl.EjaJa1fUzn.00", "parag_1": "Limitations and Future Work EAGER assumes the QA system was pre-trained using a preexisting set of example trajectories. Next steps will consist in investigating how to remove this limitation, e.g. by implementing autotelic strategies based on QG/QA learned online. Besides, in this work we tested our method on BabyAI, a 2D environment with synthetic language. In the future, we would like to consider a more complex language, generating more complex questions than the one obtained by masking, and testing our method on more realistic environments with true human instructions, as in the ALFRED dataset [32].", "parag_2": "Limitations and Future Work EAGER assumes the QA system was pre-trained using a preexisting set of example trajectories. Next steps will consist in investigating how to remove this limitation, e.g. by implementing autotelic strategies based on QG/QA learned online. Besides, in this work we tested our method on BabyAI, a 2D environment with synthetic language. 
In the future, we would like to consider a more complex language, generating more complex questions than the one obtained by masking, and testing our method on more realistic environments with true human instructions, as in the ALFRED dataset [37]. Acknowledgments and Disclosure of Funding This work benefited from the use of the Jean Zay supercomputer associated with the Genci grant A0091011996, as well as from the ANR DeepCuriosity AI chair project.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "xV0XmrSMtk.sYfR73R9z.01", "parag_1": "In the case when a user has no control about the incoming gradient ∆ I ω = − d (cid:96)/ d y , the update ∆ I ωin ker P can be much larger in magnitude than ∆ I ω 1 in Im P . In theory, this is not very problematic, as updates in ker P do not affect the optimization problem. However, in practice, these irrelevant updates can lead to explosion of the cost vector as well as problems with adaptive optimizers, which will take into account the irrelevant updates to adjust the learning rate and hence slow down the convergence.", "parag_2": "In the case when a user has no control about the incoming gradient ∆ I ω = − d (cid:96)/ d y , the update ∆ I ωin ker P can be much larger in magnitude than ∆ I ω 1 in Im P . In theory, if updates were applied directly in cost space, this would not be very problematic, as updates in ker P do not affect the optimization problem. However, in practice, the gradient is further backpropagated to update the weights in the components before the solver. 
Spurious irrelevant components in the gradient can therefore easily overshadow the relevant part of the update, which is especially problematic as the gradient is computed from stochastic mini-batch samples.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "PTxXw98AiB.3lJGkowFzW.00", "parag_1": "(Chen et al., 2018) devised GCBD method to learn and generate noise in the given noisy images using W-GAN (Arjovsky et al., 2017) and utilized the unpaired clean images to build a supervised training set. Our GAN2GAN is related to (Chen et al., 2018), but we significantly improve their noise learning step and do not use the clean data at all. Table 1 summarizes and compares the settings among the above mentioned recent baselines. We clearly see that only our GAN2GAN and N2V do not utilize any “sidekicks” that other methods take advantage of.", "parag_2": "Chen et al. (2018) devised GCBD method to learn and generate noise in the given noisy images using W-GAN Arjovsky et al. and utilized the unpaired clean images to build a supervised training set. Our GAN2GAN is related to Chen et al. , but we significantly improve their noise learning step and do not use the clean data at all. Table 1 summarizes and compares the settings among the above mentioned recent baselines. 
We clearly see that only our GAN2GAN and N2V do not utilize any “sidekicks” that other methods take advantage of.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Integrates quotations into the text.", "annotator": "annotator_07"}} {"id_paragraph": "otEbOIweB6.rbCKB0Uy9.00", "parag_1": "Transition reparametrized actions tion reparamtrization does not have to rely on any hierarchical structures in the offline data, and can therefore utilize highly suboptimal datasets (e.g., with random actions).", "parag_2": "Transition reparametrized actions imitation without the need to reduce control frequency. Unlike learning temporal abstractions, action reparamtrization does not have to rely on any hierarchical structures in the offline data, and can therefore utilize highly suboptimal datasets (e.g., with random actions).", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "5t8NvKONr.tls-ZX2iE.03", "parag_1": "The proof can be divided into three parts. Firstly, we come up with a neural network approximation ρ NNn of ρ n of which size is O ( w · ϵ − m/rn ) within an error ϵ n . Next, construct a neural network approximation of Φ using the lemma 3. Finally, the inner product is replaced with a neural network as in the inequality 10.", "parag_2": "The proof can be divided into three parts. Firstly, we come up with a neural network approximation ρ NNn of ρ n of which size is O ( w · ϵ − m/rn ) within an error ϵ n . Next, construct a neural network approximation of Φ using the Lemma 3. 
Finally, the inner product π p n ( β n , τ n ) is replaced with a neural network as in (10) of Lemma 2.", "annot_1": {"annotation": ["Rewriting_light", "Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.05", "parag_1": "Action Utility In order to use the relational action representations with RL, we follow the insight of utility network π u from Jain et al. It takes the relational action representations, state, and action summary as input for each action and outputs a utility score reflecting how useful the action is for the current state and in relation to the other available actions. The utility score can be used as Q-value directly if the training algorithm is DQN, whereas it can be used as a logit fed into a softmax function to form a probability distribution over the available actions.", "parag_2": "Action Utility : To use the relational action representations with RL, we follow the utility network architecture π u from Jain et al. It takes the relational action representation, the state, and the action set summary as input for each available action in parallel. It outputs a utility score π u ( c Ra , s, ¯ c R ) for how useful an action a is for the current state and in relation to the other available actions. 
The utility scores can be used as a Q-value directly for value-based RL or as a logit fed into a softmax function to form a probability distribution over the available actions for policy-based RL.", "annot_1": {"annotation": ["Concision", "Content_addition"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}} {"id_paragraph": "S1CMuZFor.H1NchtnoS.00", "parag_1": "Linear networks without activation functions are important subject, and there are a number of theoretical works on the implicit regularization in over-parameterized neural networks mainly focusing on linear models (Ji & Telgarsky, 2018; Gidel et al., 2019; Arora et al., 2019a). In contrast, whole properties of over-parameterized DNNs that may result from nonlinearity of activation functions cannot be captured by the NTK analysis. This is because DNN models optimized by gradient descent are approximated by linear models, specifically, a linear combination of corresponding NTKs. However, some mechanisms of the implicit regularization can depend on nonlinearity . This leads to the next question:", "parag_2": "Linear networks without activation functions are important subject, and there are a number of theoretical works on the implicit regularization in over-parameterized neural networks mainly focusing on linear models (Ji & Telgarsky, 2018; Gidel et al., 2019; Arora et al., 2019a). In contrast, whole properties of over-parameterized DNNs that may result from nonlinearity of activation functions cannot be captured by the approximated linear models, specifically, the kernel regression predictor using the NTK. However, some mechanisms of the implicit regularization can depend on nonlinearity . 
This leads to the next question:", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.16", "parag_1": "Table 1 lists the results of model fitting using all (120) data points. Along with comparing the adjusted R 2 data, we showed the Akaike information criterion ( AIC ) values because of the different number of constants included in the model [2]. A model with higher R 2 and lower AIC was defined as the better model. If the difference between the AIC s was higher than 2, the difference was worth considering, and if is the difference was higher than 10, the difference was consid- ered significant. In experiment 1, we set the starting position of the trials. We excluded Eq. 5 and Eq. 6 from the comparison because the values of B in these models could not be obtained correctly.", "parag_2": "Table 1 lists the results of model fitting using all 120 data points. We showed the Akaike information criterion ( AIC ) values because of the different number of constants included in the model along with the adjusted R 2 data [2]. A model with a higher adj. R 2 and lower AIC was defined as the better model. When the difference between the AIC s was higher than 2, the difference was worth considering; when it was higher than 10, it was considered significant. 
In Experiment 1, we set the starting position of the trials, and we excluded Eqs. 5 and 6 from the comparison because the values of B in these models could not be obtained correctly.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Restructured some sentences in this paragraph and merge the last two sentences ", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Improve the linking between phrases.", "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.02", "parag_1": "Pointing (point targets such as buttons or icons) should be fast and accurate. The two main factors the affect the movement time are the target size and the distance from the initial position of the cursor to the target [11,19]. The movement time increases as the distance increases and the target size decreases. Furthermore, placing distractors (which do not hide the cursor) on the path to the target increases the movement time [6,23]. By placing a notch, the user could miss the cursor position inside the notch or lose sight of the cursor, which may increase movement time considering the user avoids the notch or moves the mouse cursor carefully near the notch. In experiment 1, longer movement times were recorded because the cursor was hidden by the notch, when pointing a target at the top edge from another target, which was also at the top edge.", "parag_2": "Pointing, i.e., using the cursor to point at targets such as buttons or icons should be fast and accurate. Two factors that affect the movement time are target size and distance from the initial position of the cursor to the target [11,19]. The movement time increases with an increase in distance and a decrease in target size. Further, placing distractors (which do not hide the cursor) on the path to the target increases the movement time [6,25].
The notch can cause a user to miss the cursor position when it is inside the notch or to lose sight of the cursor, which can increase movement time. Avoiding the notch or moving the cursor carefully around the notch can increase the movement time. We performed three experiments to evaluate the effect of the notch on the movement of the mouse cursor. In Experiment 1, we recorded longer movement times because the cursor was hidden by the notch when moving between targets at the top edge of the display.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improve English in this paragraph. Explain more about the experiments", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Add a sentence to introduce the experiment. Improve the paragraph for better readability.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.20", "parag_1": "Medication entries should have a marker that communicates the time that their reminder will be triggered. The marker should not communicatea time range (as we used in all three designs), but a point in time, such as a minute, when the reminder is triggered. The calendar should have daily summaries. These summaries should be used to give an overview of the list of medications to be administered every day. They should only contain the name of the medication. Their design should be such that the user can decide whether to add or remove them from the display. When dealing with conflicts, emphasis should be placed on theposition of the conflicting entries and not the connectors. If present, lines that connect conflicting entries should have less emphasis. On-calendar conflict representation should not be used as the main indication of an error after a rescheduling activity. The user shouldinstead be notified of the impending conflict beforehand. 
Participants preferred that normal, dismissible error messages be displayed and show the full information regarding the conflicts being introduced by the action. When rescheduling medication entries, cells that are either safe or unsafe should be communicated to the user by highlighting them using a color that implies either success or warning (e.g., green for the former and yellow for the latter). Although some participants felt that the design should not allow them to schedule an entry in the space that is likely to cause a conflict, there might be situations where this possibility would be desirable. The user should, in this case, be guided on possible moves that will resolve the conflict. This can be done by shading or using an outline for all the cells to which an entry may be rescheduled to resolve the conflict, and letting users configure the amount of warnings and error messages they want to receive.", "parag_2": "Medication entries should have markers that communicate the times that their reminders will be triggered. The markers should not communicate time ranges but points in time when reminders are triggered. The calendar should have daily summaries of the list of medications to be administered each day. These summaries should only contain the name of the medication and users should be able to show or hide them. Medication conflicts should be emphasized on the conflicting entries rather than on the connectors. The user should be notified of a newly created conflict upon rescheduling an entry, preferably via dismissible error messages that describe the conflict. When rescheduling medication entries, cells that are either safe or unsafe should be highlighted to the user to guide their action. Although some participants felt that the design should not allow them to schedule an entry in the space that is likely to cause a conflict, there might be situations where this possibility is unavoidable. 
The user should, in this case, be guided on possible moves that will resolve the conflict. This can be done by shading or using an outline for all the cells to which an entry may be rescheduled to resolve the conflict, and letting users configure the amount of warnings and error messages they want to receive.", "annot_1": {"annotation": ["Content_deletion", "Concision"], "instruction": "Heavily remove details from this paragraph to make it more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Please condense my paragraph related to medication conflicts.", "annotator": "annotator_09"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.14", "parag_1": "In practice, we stack multiple bijectives to enable more complex transform. The derivative of the composite can be computed efficiently by applying the chain rule. At inference time, the inverse mapping f − 1 ( y ) can be computed efficiently (Rezende et al., 2020). To find the solution of f − 1 ( y ) , the first step is to locate the unique bin that contains y . Assuming y belongs to the k -th bin, finding its corresponding x amounts to finding the root of the quadratic equation f k ( x | x k,k +1 , y k,k +1 , δ k,k +1 ) = y in [ x k , x k +1 ] , whose solution is closed-form.", "parag_2": "In practice, we stack multiple bijectives to enable more complex transformation. The derivative of the composite can be computed efficiently using the chain rule. At inference time, we can efficiently compute the inverse mapping f − 1 ( y ) (Rezende et al., 2020): To find the solution of f − 1 ( y ) , the first step is to locate the unique bin that contains y . 
Assuming y belongs to the k -th bin, finding its corresponding x amounts to finding the root of the quadratic equation f k ( x | x k,k +1 , y k,k +1 , δ k,k +1 ) = y in the interval [ x k , x k +1 ] , for which a closed-form solution exists.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Please, review this paragraph, modify only if necessary", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.04", "parag_1": "Optimal transport defines a powerful geometry to measure the distribution discrepancy. Monge first formulated optimal transport asa problem of finding an optimal mapping between two measures. However, this formulation cannot guarantee the existence and uniqueness of solutions. More applicable is Kantorovich’s formulation, which can be seen as a generalized Monge problem.", "parag_2": "Optimal transport (OT) instantiates distribution discrepancy as the minimum transport cost, which provides a grip for quantifying the treatment selection bias in Figure 1(a). Monge (1781) first formulated OT as finding an optimal mapping between two distributions. However, this formulation cannot guarantee the existence and uniqueness of solutions. Kantorovich (2006) proposed a more applicable formulation in Definition 2.3, which can be seen as a generalization of Monge problem.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "XWtcKUZcZw.EuDy1zb2AN.00", "parag_1": "Scenes based targets namely traversable surfaces and object openability. 
On the test set, our (Cache) SIR consistently outperforms multiple baselines (Fig 5) including: Auto Encoder on AI2-THOR images, a Navigation agent within AI2-THOR, and Classifier – a CNN trained to classify objects in", "parag_2": "Predictions with synthetic images - We use SIRs to predict geometry and appearance-based targets namely depth, surface normals, object class, object depth and object normals, as well as affordancebased targets namely traversable surfaces and object openability. On the test set, our (Cache) SIR consistently outperforms multiple baselines (Fig 5) including: Auto Encoder on AI2-THOR images, a Navigation agent within AI2-THOR, and Classifier – a CNN trained to classify objects in", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "YaBfIcnhEVq.bz9r9z9RFt.00", "parag_1": "We make this assumption for the following reasons. First of all, our offline learning guarantee (Theorem 3.2) provides simultaneously comparison to all the policies, which is stronger than competing with optimal policy only (whereas relaxed assumption suffices, e.g. sup x (cid:62) < ∞ (Uehara & Sun, 2021)). As a consequence, the behavior distribution µ must be able to explore each feature dimension for the result to be valid. Second, even if Assumption 2.2 does not hold, we can always restrict our algorithmic design to the effective subspan of Σ ph , which causes the alternative notion of κ := min h ∈ [ H ] { κ h : s.t. κ h > 0 } . In this scenario, learning the optimal policy cannot be guaranteed as a constant suboptimality gap needs to be suffered due to the lack of coverage and this is called assumption-free RL in Yin & Wang (2021b). Lastly, previous works analyzing the linear MDPs impose very similar assumptions, e.g. Xie et al. (2021a) Theorem 3.2 where Σ − 1 D exists and Min et al. (2021) for the OPE problem.", "parag_2": "We make this assumption for the following reasons. 
First of all, our offline learning guarantee (Theorem 3.2) provides simultaneously comparison to all the policies, which is stronger than competing with optimal policy only (whereas relaxed assumption suffices, e.g. sup_{x ∈ R^d} x Σ_{π⋆} x^⊤ / x Σ_µ x^⊤ < ∞ (Uehara and Sun, 2021)). As a consequence, the behavior distribution µ must be able to explore each feature dimension for the result to be valid. Second, even if Assumption 2.2 does not hold, we can always restrict our algorithmic design to the effective subspan of Σ ph , which causes the alternative notion of κ := min h ∈ [ H ] { κ h : s.t. κ h = smallest positive eigenvalue at time h } (see Appendix G.1 for detailed discussions). In this scenario, learning the optimal policy cannot be guaranteed as a constant suboptimality gap needs to be suffered due to the lack of coverage and this is called assumption-free RL in Yin and Wang (2021b). Lastly, previous works analyzing the linear MDPs impose very similar assumptions, e.g. Xie et al. (2021a) Theorem 3.2 where Σ_D^{−1} exists and Min et al. (2021) for the OPE problem.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "jzQGmT-R1q.ugUt9B3XaO.00", "parag_1": "In this section we show that neural networks progressively lose their ability to quickly fit new targets when trained on sequential prediction tasks (i.e. settings in which the agent must solve a regression problem that iteratively changes over the course of training) including but not limited to those found in value-based RL. We find that capacity loss is particularly pronounced in sparse prediction tasks, where many of the target values the agent seeks to predict are zero.
To study the effect of extreme capacity loss on performance in greater depth, we present a special case of the target-fitting capacity measure which is efficient to compute and has the intuitive interpretation of measuring the ability of the representation to linearly disentangle states. We find evidence that agents which have greater capacity according to this metric tend to achieve better performance in challenging environments from the Atari suite where agents fail to match human performance, and that those suffering from representation collapse according to this metric fail to make any learning progress at all.", "parag_2": "In this section we demonstrate conditions under which neural networks progressively lose their ability to quickly fit new targets when trained on sequences of prediction tasks including but not limited to those found in value-based RL. We find that capacity loss is particularly pronounced in sparse prediction tasks, where many of the target values the agent seeks to predict are zero. To study the effect of extreme capacity loss on performance in greater depth, we present a special case of the target-fitting capacity measure which is efficient to compute and has the intuitive interpretation of measuring the ability of the representation to linearly disentangle states. We find evidence that agents which have greater capacity according to this metric tend to achieve better performance in challenging environments from the Atari suite where agents fail to match human performance, and that those suffering from representation collapse according to this metric fail to make any learning progress at all.", "annot_1": {"annotation": ["Rewriting_light", "Content_deletion"], "instruction": "Rewrite the first sentence. Remove the example to make it shorter.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Revise the first sentence in a more academic style. 
Remove unnecessary details.", "annotator": "annotator_07"}} {"id_paragraph": "WldWha1MT.LL2ZsGpJga.00", "parag_1": "Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for many segmentation tasks. In recent years, their limitations fueled a growing interest in topology-aware methods, which aim to recover the correct topology of the segmented structures. However, so far, none of the existing approaches achieve a spatially correct matching between the topological features (persistence barcodes) of label (ground truth) and prediction (output of the neural network).", "parag_2": "Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for many segmentation tasks. In recent years, their limitations fueled a growing interest in topology-aware methods, which aim to recover the correct topology of the segmented structures. However, so far, none of the existing approaches achieve a spatially correct matching between the topological features of ground truth and prediction.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.01", "parag_1": "Image super-resolution (SR) is a fundamental computer vision task, which aims to recover a high-resolution (HR) image from its low-resolution (LR) counterpart. In general, image SR is ill-posed as a many-to-one mapping problem. 
To alleviate this problem, plenty of deep convolutional neural networks (CNNs) (Dong et al., 2014; 2016; Kim et al., 2016b; Tai et al., 2017b) have been investigated to achieve accurate mapping from LR image to its HR target.", "parag_2": "Image super-resolution (SR), a classic task in computer vision, aims to recover a high-resolution (HR) image based on its low-resolution (LR) counterpart. Essentially, image SR is ill-posed as a many-to-one mapping problem. To tackle this problem, plenty of deep convolutional neural networks (CNNs) (Dong et al., 2014; 2016; Kim et al., 2016b; Zhang et al., 2018c; 2020; 2021) have been investigated to learn the accurate mapping from LR image to its HR target.", "annot_1": {"annotation": ["Rewriting_light", "Content_substitution"], "instruction": "Replace the citation to (Tai et al., 2017b) with a citation to (Zhang et al., 2018c; 2020; 2021). Improve the english of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Make the language of this paragraph a bit more simple.", "annotator": "annotator_07"}} {"id_paragraph": "jyac3IgQ44.f4au9jfat5.02", "parag_1": "We gather the non-empty voxels within the query window {+size (e.g. s \n = (2 , 2 , 2) )+} and apply Chessboard Sampling (CBS) to sample the queries. {+key window sizes (e.g. s \n =+} For the keys, we gather the non-empty voxels from the key windows of different sizes separately, and {+s \n = (4 , 4 , 4) ), respectively. Then, we apply CBS to obtain sampled queries, while employ FPS to+} get multiple sets of keys through Balanced Multi-window Sampling, with each set representing information of a specific scale. Keys from windows of different sizes are assigned to differenthead groups to perform scale-aware attention learning, thus simultaneously capturing both long-range context and fine-grained details.", "parag_2": "In MsSVT block, we gather the voxels with the query window {+size (e.g. 
s \n = (2 , 2 , 2) )+} and the {+key window sizes (e.g. s \n =+} (2 , 2 , 2) and {+s \n = (4 , 4 , 4) ), respectively. Then, we apply CBS to obtain sampled queries, while employ FPS to+} get sampled keys and values of same number (e.g. N K = 3 ) among all key windows. Finally, we feed the same query and the different keys into different attention head groups, respectively.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.25", "parag_1": "The rotamer density estimator (RDE) is a generative model for sidechain structures. It can be used to predict sidechain conformations by sampling from the estimated distribution. We use the RDE to sample sidechain torsional angles (rotamers) for structures with sidechains removed in our test split of PDB-REDO. For each amino acid, 10 rotamers are sampled independently, and the one with the highest probability is selected as the final prediction. We compare RDE with two baseline methods Rosetta (fixbb) (Leman et al., 2020) and SCWRL4 (Krivov et al., 2009). Table 4 shows that RDE outperforms the baselines on all four torsional angles in terms of angular errors. Detailed per-amino-acid accuracy is presented in Table 11 in the appendix.", "parag_2": "RDE is a generative model for protein sidechain structures, which can predict sidechain conformations by sampling from the estimated distribution. We use RDE to sample sidechain torsional angles (rotamers) for structures with 10% sidechains removed in our test split of PDB-REDO. For each residue, 10 rotamers are sampled independently, and the one with the highest probability is selected as the final prediction. We compare RDE with two baseline methods Rosetta (fixbb) (Leman et al., 2020) and SCWRL4 (Krivov et al., 2009). 
Our results shown in Table 4 demonstrate that the RDE outperforms the baselines on all four torsional angles in terms of angular errors. For a detailed per-amino-acid accuracy, please refer to Table 9 in the appendix.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make this paragraph more fluid.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Make the beginning of the paragraph more concise. Make the end of the paragraph more fitting to the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "VRrAfKMSF8.g8rfar4U7.00", "parag_1": "Besides AT and RND, diverse defenses have also been proposed, and it would be interesting to see their results. DENT [19] optimizes the model in test-time, trying to learn the AE distribution. PNI [51] injects noise during training, making the learned weights less sensitive to input perturbations. TRS [60] ensembles three models with low attack transferability between each other. They are originally developed for gradient-based attacks, but also provide some protection against SQAs. As displayed in Table 3, however, they are not comparable to AAA-linear in real cases regarding the accuracy, calibration, and defense performance. Here, we also test a strong SQA QueryNet, which uses three architecture-alterable models to steal the DNN. Due to its utilization of large-scale testing samples, QueryNet greatly hurts DNNs, but AAA is still the defense that protects the model best.", "parag_2": "Besides AT and RND, diverse defenses have also been proposed. DENT [19] optimizes the model in test-time. PNI [54] injects noise during training. TRS [64] ensembles three models with low attack transferability. They are developed for gradient-based attacks, but also provide protection against SQAs. However, seeing Table 3, they are not comparable to AAA-linear in real cases regarding the accuracy, calibration, and defense performance. 
Here, we also test a strong SQA QueryNet, which uses three architecture-alterable models to steal the DNN. Due to its utilization of large-scale testing samples, QueryNet greatly hurts DNNs, but AAA is still the defense that protects the model best.", "annot_1": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove unnecessary details.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision"], "instruction": "Delete unnecessary details, mostly in the two first sentences.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.13", "parag_1": "Lightweight Network. First, we revise EDSR baseline (i.e.16 residual blocks) (Lim et al., 2017) by removing the final Conv layer to save parameters. Same as IMDN (Hui et al., 2019), the reconstruction was done within the pixel-shuffle layer (Shi et al., 2016). We set the channel number in revised EDSR baseline as 256 and then prune it to 45. For × 2, we reduce the number of parameters from 19.5M to 609K and name the compressed model as SRPN-L.", "parag_2": "Lightweight Networks. First, we adapt the EDSR baseline (Lim et al., 2017) with 16 residual blocks by removing the final convolutional layer to save parameters. The reconstruction upscaling is realized by the pixel-shuffle layer (Shi et al., 2016) following common practice. We set the channel number in the revised EDSR baseline as 256 and then prune it to 45. For × 2 scale, we reduce the number of parameters from 19.5M to 609K and name the compressed model as SRPN-Lite.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Please, remove the clarifications that do are not necessary for the development of the idea:", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove Hui et al. 
citation and improve writing of the paragraph", "annotator": "annotator_06"}} {"id_paragraph": "usz0l2mwO.5ie3V0GP-.02", "parag_1": " We evaluate the performance on seven benchmarks on different tasks, including text classification, natural language inference, similarity, and paraphrase detection. For NLI, we experiment with the SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) benchmarks. For text classification, we evaluate on two sentiment analysis datasets, namely IMDB (Maas et al., 2011) and Yelp (YELP) (Zhang et al., 2015). We additionally evaluate on three low-resource datasets in the GLUE benchmark (Wang et al., 2019). These include paraphrase detection using MRPC (Dolan and Brockett, 2005), semantic textual similarity using STS-B (Cer et al., 2017), and textual entailment using RTE (Dagan et al., 2006). We evaluate on the standard validation and test splits. Since the test sets are not available for MNLI, we tune on the matched dev set and evaluate on the mismatched dev set (MNLI-M) or vice versa (see Appendix A for datasets statistics and more details on the experimental setups).", "parag_2": "Datasets We evaluate the performance on seven different benchmarks for multiple tasks, in particular text classification, natural language inference, similarity, and paraphrase detection. For NLI, we experiment with two well-known NLI benchmarks, namely SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018). For text classification, we evaluate on two sentiment analysis datasets, namely IMDB (Maas et al., 2011) and Yelp2013 (YELP) (Zhang et al., 2015). We additionally evaluate on three low-resource datasets in the GLUE benchmark (Wang et al., 2019): 3 paraphrase detection using MRPC (Dolan & Brockett, 2005), semantic textual similarity using STS-B (Cer et al., 2017), and textual entailment using RTE (Dagan et al., 2006). For the GLUE benchmark, SNLI, and Yelp, we evaluate on the standard validation and test splits. 
For MNLI, since the test sets are not available, we tune on the matched dev set and evaluate on the mismatched dev set (MNLI-M) or vice versa. See Appendix A for datasets statistics and Appendix B for hyper-parameters of all methods.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.18", "parag_1": "We used different apparatus for experiments 1 and 2, which did not have a significant effect on the conclusions of this study. We used a desktop PC (Intel Core i9-12900KF, GeForce RTX 3070 Ti, 32GB RAM, Windows 10 Home). The display was manufactured by AOPEN (model 25XV2QFbmiiprx; 24.5” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 360 Hz. We used an optical mouse, Logitech Gaming Mouse (G300s; 1600 DPI). The mouse- cursor speed via the OS setting was set to the middle of the slider in the control-display and ” Enhance pointer precision ” setting was turned on to match the participant’s usual settings. The experimental system was implemented with Hot Soup Processor 3.6 and used in full-screen mode.", "parag_2": "We used a different apparatus for both experiments; however this did not have a significant effect on the conclusions of this study. We used a desktop PC (Intel Core i9-12900KF, GeForce RTX 3070 Ti, 32 GB RAM, Windows 10 Home). The display was manufactured by AOPEN (model 25XV2QFbmiiprx; 24.5” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 360 Hz. We used an optical mouse (Logitech gaming mouse, G300s; 1600 DPI, and the mouse- cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was turned on to match the usual settings of the participant.). 
The experimental system was implemented with Hot Soup Processor 3.6 and used in the full-screen mode.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English in this paragraph by choosing better words", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Improve the linking between phrases.", "annotator": "annotator_07"}} {"id_paragraph": "VWgazAa3VJ.iaVGYcsIw.00", "parag_1": "Recently, many methods to induce sparsity in neural networks have shown that it is possible to train models with an overwhelming fraction of the weights being \n Molchanov et al. ; Gale et al. ; Frankle and Carbin (2018); Louizos et al. ; Evci et al. ; Zhu and Gupta (2017). Many of these methods gradually decrease the number of weights in the network through training by using some combination of each weight’s gradient and magnitude. Fine grained sparsity is hard to accelerate on modern hardware, although there have been some recent results demonstrating that speedups are possible Elsen et al. Vecchi et al. (2019) considered inducing sparsity in quaternion networks. Primitives that increase computational density of fundamental interactions would increase the performance of sparse methods as demonstrated on the GPU by Mueller-Roemer et al. in scientific computing.", "parag_2": "Recently, many methods to induce sparsity in neural networks have shown that it is possible to train models with an overwhelming fraction of the weights being \n (Molchanov et al., 2017; Gale et al., 2019; Frankle and Carbin, 2019; Louizos et al., 2018; Evci et al., 2019; Zhu and Gupta, 2017). Many of these methods gradually decrease the number of weights in the network through training by using some combination of each weight’s gradient and magnitude. Fine grained sparsity is hard to accelerate on modern hardware, although there have been some recent results demonstrating that speedups are possible (Elsen et al., 2020). 
(Vecchi et al., 2019) considered inducing sparsity in quaternion networks. Primitives that increase computational density of fundamental interactions would increase the performance of sparse methods as demonstrated on the GPU by (Mueller-Roemer et al., 2019) in scientific computing.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Review this paragraph, make it easier to read.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "dD-sDO1KaC.N7AsVzdCxV.00", "parag_1": "Backdoor-based model watermarking relied on an assumption that the trigger matches hidden backdoors contained in the suspicious model. However, the assumption may not hold since the backdoor may be changed during the stealing process. In this section, we verify this limitation.", "parag_2": "Backdoor-based model watermarking relied on an assumption that the trigger matches hidden backdoors contained in the suspicious model. However, the assumption may not hold. In this section, we verify this limitation.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove information on why the assumption might not hold.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Make the second sentence much shorter, only keep the main idea.", "annotator": "annotator_07"}} {"id_paragraph": "xV0XmrSMtk.sYfR73R9z.03", "parag_1": "The explanation is that, for ranking-based loss, the incoming gradient does not point toward achievable solutions, as discussed in Sec. 3.2. For more details see Suppl. B.3. We conclude that for the retrieval experiment, Identity does not match the performance of BB. 
Presumably, this is because the crude approximation of the permutahedron by a sphere ignores too much structural information.", "parag_2": "The explanation is that, for ranking-based loss, the incoming gradient does not point toward achievable solutions, as discussed in Sec. 3.2. For more details, including additional evaluation of other applicable projections, see Suppl. B.3. We conclude that for the retrieval experiment, Identity does not match the performance of BB. Presumably, this is because the crude approximation of the permutahedron by a sphere ignores too much structural information.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MZYBK_Wp2X.HVFitLjAId.03", "parag_1": "Fig. 3a shows that the first two components take about 90% of the explained variance. Fig. 3b shows that these components include only τ 1 , avg degree, and modularity. The fact that n is not used means that the size of the graph as well as the density are not important for choosing the best measure. The fact that τ 2 is not used means that the difference in sizes of clusters is not important, too.", "parag_2": "Fig. 3a shows that the first two components take about 90% of the explained variance. Fig. 3b shows that these components include only τ 1 , avg degree, and modularity. The fact that n is not used means that the size of the graph as well as the density are not of primary importance for choosing the best measure. 
So is not τ 2 measuring the diversity of cluster sizes.", "annot_1": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.04", "parag_1": "Therefore, we seek to predict the change in binding free energy via estimating the change in conformational flexibility . Specifically, our method consists of three main parts. The first part is a conditional generative model built upon normalizing flows on torus for estimating the density of amino acid sidechain conformations (rotamers) given the structural environment of the amino acid, named Rotamer Density Estimator (RDE). We choose normalizing flows because it allows exact likelihood computation, which is indispensable for the second part of the method — entropy estimation. We estimate the entropy of the rotamer distribution predicted by the model and use it as the measurement of conformational flexibility. Finally, we use the entropy of protein-protein interfaces at different states to predict the change in binding free energy ( ∆∆ G ). In addition to directly using the entropy to predict ∆∆ G , we also use simple neural networks to extract ∆∆ G from the unsupervised representations produced by the RDE.", "parag_2": "Therefore, by comparing the entropy losses of wild-type and mutated protein complexes, we can estimate the effect of mutations on binding affinity. Based on this principle, we introduce a novel approach to predict the impact of amino acid mutations on proteinprotein interaction. The core of our method is the Rotamer Density Estimator (RDE), a conditional generative model that estimates the density of amino acid sidechain conformations (rotamers). We use the entropy of the estimated density as a metric of conformational flexibility. 
By subtracting the entropy of the separated proteins from the entropy of the complex, we obtain an estimation of binding affinity. Finally, we can assess the effect of mutations by comparing the estimated binding affinities of wild-type and mutant protein complexes. In addition to directly comparing entropy, we also employ neural networks to predict ∆∆ G from the representations learned by RDE.", "annot_1": {"annotation": ["Content_substitution", "Rewriting_heavy"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "IoTyuVEanE.Et-c0vQfeb.00", "parag_1": "Follow-up studies [12] on Snorkel have also shown that Snorkel users tended to select rules by looking at individual labeled instances and seeking to create rules reflecting the patterns they observe in the data. This study further demonstrated that label efficiency can be improved by showing users currently unlabeled instances and instances with high label conflict, i.e. seeking labels in highly valuable areas of the training data. It also demonstrates that even subject matter experts tend to relyon the data to create their rules, rather than creating them from independent knowledge.", "parag_2": "Follow-up studies [12] on Snorkel have also shown that Snorkel users tended to select rules by looking at individual labeled instances and seeking to create rules reflecting the patterns they observe in the data. This study further demonstrated that label efficiency can be improved by showing users currently unlabeled instances and instances with high label conflict, i.e. seeking labels in highly valuable areas of the training data. 
ReGAL provides the possibility to extend this by automatically proposing rules suited to conflicted or unlabeled data slices, thus reducing the manual effort involved.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.18", "parag_1": "The only set prediction model we are aware of that can be exclusively multiset-equivariant is DSPN (Zhang et al., 2019; Huang et al., 2020). DESP (Zhang et al., 2021) is exclusively multiset-equivariant but not a standard set predictor. It also uses the Jacobian of sorting, but has the slightly different goal of diverse sampling by learning set energies without a traditional set loss, so we do not compare against them in the experiments.", "parag_2": "The only set prediction model we are aware of that can be exclusively multiset-equivariant is DSPN (Zhang et al., 2019; Huang et al., 2020). Note that Zhang et al. (2021a) also make use of the exclusively multiset-equivariant Jacobian of sorting, but instead focus on learning multimodal densities over sets, which is tangential to our work.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise by using more direct formulations", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the paragraph shorter but don't touch at the first sentence.", "annotator": "annotator_07"}} {"id_paragraph": "txe2sPPkO.id6Xr1pUq.00", "parag_1": "In this section we discuss how SafeNet can be instantiated in practice. There are two aspects the data owners need to agree upon before instantiating SafeNet: i) The MPC framework used for secure training and prediction phase and ii) the parameters in Theorem 6 to achieve poisoning robustness. 
The MPC framework is agreed upon by choosing the total number of outsourced servers N participating in the MPC, the number of corrupted servers T and the nature of the adversary (semihonest or malicious in the SOC paradigm). The owners then agree upon a filtering threshold ϕ and the number of poisoned owners t that can be tolerated. Once these parameters are chosen the maximum allowed error probability of the local models trained by the honest owners based on Lemma 5 and", "parag_2": "In this section we discuss how SafeNet can be instantiated in practice. There are two aspects the data owners need to agree upon before instantiating SafeNet: i) The MPC framework used for secure training and prediction phase and ii) the parameters in Theorem 6 to achieve poisoning robustness. The owners agree upon the number of outsourced servers N participating in the MPC, the number of corrupted servers T along with the role of the adversary (semi-honest or malicious) in the MPC and consequently choose an appropriate training framework that satisfies this criteria. The owners then agree upon a filtering threshold ϕ and the number of poisoned owners t that can be tolerated. Once these parameters are chosen the maximum allowed error probability of the local models trained by the honest owners based on Lemma 5 and", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the middle sentence of this paragraph to make it clearer.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the long sentence in the middle sentence to improve clarity.", "annotator": "annotator_07"}} {"id_paragraph": "NwOG107NKJ.0PPYM22rdB.03", "parag_1": "TERGM models estimated within the markov chain assumption are typically incapable of generating and reproducing realistic dynamics observed in real-world online social networks. 
We hypothesized that increasing the model’s capacity to describe triadic network properties would reduce the error between the model and empirical observations. We propose TTERGM here to sequentially predict network probabilities by integrating the dynamics between influencers and followers (Code isavailable at https://github.com/alvin68633466/TTERGM-Social-Theory-Driven-network-simulation). TTERGM was run on a computer with 12900K CPU, 1080TI and 128GB RAM. Figure 2 shows the framework of TTERGM that has five major components - data collection module, network processing module, feature extraction module, pattern analysis module, and a generative network module. The data collection module is discussed in Section 3.3.", "parag_2": "TERGM models estimated within the markov chain assumption are typically incapable of generating and reproducing realistic dynamics observed in real-world online social networks. We hypothesized that increasing the model’s capacity to describe triadic network properties would reduce the error between the model and empirical observations. We propose TTERGM here to sequentially predict network probabilities by integrating the dynamics between influencers and followers. TTERGM was run on a computer with 12900K CPU, 1080TI and 128GB RAM. Figure 2 shows the framework of TTERGM that has five major components - data collection module, network processing module, feature extraction module, pattern analysis module, and a generative network module.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove the information about the code. And remove the last sentence.", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove the mentions to the code and to other sections.", "annotator": "annotator_03"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.08", "parag_1": "Low-level image similarity. We propagatethe labels within visually coherent regions,as visual similarity often goes with semantic similarity. 
To generate an over-segmentation, we follow Hwang et al. by using the HED contour detector (Xie & Tu, 2015) (pre-trained on BSDS dataset (Arbelaez et al., 2010)) and the procedure in gPb-owt-ucm (Arbelaez et al., 2010). Such bottom-up segmentation techniques consider both local and global appearance affinity without semantic information. Some of the segments could contain pixels from different categories. For learning pixel embedding at pixel i , we define positive and negative segments w.r.t. i as i ’s own segment and all the other segments, denoted as V + and V − , respectively. Such a formulation helps learn embeddings that respect low-level visual cues. In our implementation, we only consider segments in the same image as i ’s image as positive and negative samples. We follow SegSort Hwang et al. (2019) to align the contour-based over-segmentations with segmentations generated by K-Means clustering in SegSort.", "parag_2": "Low-level image similarity. To propagate labels within visually coherent regions, we generate a low-level over-segmentation. Following SegSort Hwang et al. (2019), we use the HED contour detector (Xie & Tu, 2015) (pre-trained on BSDS500 dataset (Arbelaez et al., 2010)) and gPb-owtucm (Arbelaez et al., 2010) to generate a segmentation without semantic information. We define i ’s positive and negative segments as i ’s own segment and all the other segments, denoted as V + and V − respectively. We only consider segments in the same image as pixel i ’s. We align the contour-based over-segmentations with segmentations generated by K-Means clustering as in SegSort.", "annot_1": {"annotation": ["Concision", "Content_deletion"], "instruction": "Make this paragraph considerably more concise. 
Remove any unnecessary details that are not essential for the main point of the paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion", "Concision"], "instruction": "This paragraph is too long, make it almost 50% shorter but keep the important informations.", "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.17", "parag_1": "Neural Network Predictor The hidden representation h i of each amino acid for parameterizing the normalizing flows contains sufficient information about the rotamer probability density. To extract binding information from the representations in a more flexible way, we also try neural networks. Specifically, we use MLPs to transform the representation h i and apply max-pooling to obtain a global representation of the structure. Then, we subtract the representation of the wild-type structure from the representation of the mutant and feed it to another MLP to predict ∆∆ G . To enforce anti-symmetry, we swap the wild-type and the mutant to predict − ∆∆ G and take (∆∆ G − ( − ∆∆ G )) / 2 as the final prediction. The network is trained with the MSE loss. During training, gradients are not back-propagated through h i so the rotamer density estimator is frozen, as our goal is to exploit the unsupervised representations learned by the RDE.", "parag_2": "Neural Network Predictor Each residue’s hidden representation h i used to parameterize the normalizing flows contains sufficient information about the rotamer distribution. To extract binding information from these representations in a more flexible way, we employed neural networks. Specifically, we utilized a network that shares the same architecture as the encoder to transform the representation h i and applied max-pooling to obtain a global structure representation. We then subtracted the representation of the wild-type structure from the mutant representation and fed it into another MLP to predict ∆∆ G . 
To enforce anti-symmetry, we swapped the wild-type and mutant to predict − ∆∆ G , and computed (∆∆ G − ( − ∆∆ G )) / 2 as the final prediction. The network was trained using the MSE loss. During training, we freeze the RDE weights and do not back-propagate gradients through h i to fully exploit the unsupervised representations learned by RDE.", "annot_1": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Sjtl3UReL.bnGXjxmB3.00", "parag_1": "Remark 3 . The major novelty of obtaining these theoretical results relies on the developed variant of the augmented Lagrangian function and subsequently derived recursion of the successive dual variables, which quantify well the consensus errors resulting from both UL and LL optimization processes in terms of primal variables. Note that this is distinct from the existing theoretical analysis of stochastic algorithms [38, 40, 41], even including stochastic primal-dual algorithms [42, 45], and also the classical deterministic primal-dual algorithms [46] for decentralized optimization.", "parag_2": "Remark 3 . The major novelty of obtaining these theoretical results relies on the developed variant of the augmented Lagrangian function and subsequently derived recursion of the successive dual variables, which quantify well the consensus errors resulting from both UL and LL optimization processes in terms of primal variables. Note that this is distinct from the existing theoretical analysis of stochastic algorithms, such as distributed SGD [38, 39], gradient tracking [40, 41], stochastic primal-dual algorithms [42, 45], etc.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "IuxfzBFSR0.CSFycBGzvd.00", "parag_1": "Update criteria As mentioned before, Algorithm 1 runs in epochs indexed by j , and one epoch ends when either of the two update criteria is triggered (Line 9). 
The first updating criterion is satisfied once the determinant of Σ t is doubled compared to the determinant at the end of the previous epoch. This is called lazy policy update that has been used in the linear bandits and RL literature (Abbasi-Yadkori et al., 2011; Zhou et al., 2021b), which reflects the diminishing return of learning the underlying transition. Moreover, this update criterion reduces the computational cost as the total number of epochs would be bounded by O (log T ) . The doubling visitation criterion used in tabular", "parag_2": "Update criteria As mentioned before, Algorithm 1 runs in epochs indexed by j , and one epoch ends when either of the two update criteria is triggered (Line 9). The first updating criterion is satisfied once the determinant of Σ t is doubled compared to the determinant at the end of the previous epoch. This is called lazy policy update that has been used in the linear bandits and RL literature (Abbasi-Yadkori et al., 2011; Zhou et al., 2021b), which reflects the diminishing return of learning the underlying transition. One intuition behind the determinant doubling criterion is that the determinant can be viewed as a surrogate measure of the exploration in the feature space. Thus one only updates the policy when enough further exploration has been made. Moreover, this update criterion reduces the computational cost as the total number of epochs would be bounded by O (log T ) . Here T denotes the total number of steps through all K episodes. The doubling visitation criterion used in tabular", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.24", "parag_1": " We used a general cursor in allof our experiments. However, pointing-facilitation technique have been proposed to improve the pointing performance. 
For example,when using Bubble Cursor [13] or Ninja Cursors [18], the notch effect for the movement time may be reduced. However, we did not consider them in this study because of the somewhat tricky behavior of these techniques. In this study, we only considered the movement time and error rate for evaluation. However, placing the notch can increase psychological stress for users. For example, if the notch is changed to an area where the cursor cannot enter, the user may feel uncomfortable about the cursor unnecessarily getting caught in the notch. Therefore, the limitation of this study is that we only investigated the effect of the notch for movement time and error rate.", "parag_2": "Bubble Clusters [26] and Attribute Gates [24]). We are interested in whether Eq. 8 can be applied to more than just predicting movement time in the scenario where the notch is placed. We used a general cursor in all our experiments. However, pointing-facilitation technique have been proposed to improve the pointing performance. For example, the notch effect for the movement time may be reduced when using Bubble Cursor [13] or Ninja Cursors [18]. However, we did not consider them in this study because of the somewhat tricky behavior of these techniques. We instructed the participants to avoid any clutching action during the trial; however, in actual use of the PC, the user may perform clutching actions. For example, if the cursor is hidden by a notch during a clutching action, the effect of the notch may be increased. Restricting the clutch action may have restricted the user’s operation strategy, which is a limitation of this study. In this study, we only considered the movement time and error rate for evaluation. However, placing the notch can increase the psy- chological stress experienced by users. For example, the user may feel uncomfortable about the cursor unnecessarily getting caught in the notch if the notch is changed to an area where the cursor cannot enter. 
Therefore, we only investigated the effect of the notch for movement time and error rate, which is another limitation.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.23", "parag_1": "The user model takes as input the user information(i.e. , a concatenation of user attributes and a sequence of the user interactions) and a set of item embeddings in the list. The user information is passed on to a single layer gated recurrent network(GRU (Cho et al., 2014)) followed by a 2-layer MLP to extract the compact representation of the state. Also, the set of item embeddings is processed by the same architecture of the GRU network(this and the one before are not shared and initialized differently) as the list-embedding. Finally, a 2-layer MLP takes as input the concatenation of those two embeddings(i.e. , state-embedding and list-embedding) and provides the scores of items in the list followed by the sigmoid function to transform to the individual click likelihood.", "parag_2": "The user model takes as input the user information(i.e. , a concatenation of user attributes and a sequence of the user interactions) and a set of item embeddings in the list. The user information is passed on to a single layer gated recurrent network(GRU (Cho et al., 2014)) followed by a 2-layer MLP to extract the compact representation of the state. The same GRU network architecture (with different weights) processes the set of item embeddings into a list-embedding. Finally, a 2-layer MLP takes as input the concatenation of those two embeddings(i.e. 
, state-embedding and list-embedding) and provides the scores of items in the list followed by the sigmoid function to transform to the individual click likelihood.", "annot_1": {"annotation": ["Concision"], "instruction": "Make the third sentence shorter and easier to understand", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Concision"], "instruction": "Simplify the convoluted sentences to make the paragraph more concise.", "annotator": "annotator_03"}} {"id_paragraph": "fJhx73ErBg.NeKLbmOxG8.03", "parag_1": "Temporal logic (TL)-guided policy learning is an area that we take have taken inspiration from. In this area, TL is often used to specify the ego agent’s desired high-level behavior and used to generaterewards. The authors of [17, 18, 19] provide surveys of recent work on the use of TL in RL. The exploration problem still exists in these methods. The authors of [20, 21] learns finite state automata from demonstration and use them to guide planning with the value iteration network which avoids exploration. It can sometimes be tedious to manually design a TL formula that yields satisfying behaviors. Work has been done to make components of the formula learnable from data. In [22]learns linear temporal logic (LTL) formulas from demonstrations. Given the close relationship between TL and automaton[23], the authors of [24] proposes a method that learns reward machines (an automata-like reward presentation) from demonstrations. A commonality to these methods is that the LTL that they use operate on propositions (binary variables with values true or false). Therefore,either high-level demonstrations (proposition traces) or a mapping from low-level states to propositions is required. This also limits these methods to operate on discreteand finite state and actionspaces. 
Our method works with STL which operates on continuous signals, therefore, we are able tolearn from continuous demonstration trajectories which is more readily available in robotic systems. The LogicRiskNet that we propose is also learnable from demonstrations.", "parag_2": "Temporal logic (TL)-guided policy learning is an area that we take have taken inspiration from. In this area, TL is often used to specify the ego agent’s desired high-level behavior and used togenerate rewards. The authors of [18, 19, 20] provide surveys of recent work on the use of TL in RL. The exploration problem still exists in these methods. To address these challenges, the authorsof [21, 22] learn finite state automata from demonstration and use them to guide planning with the value iteration network which avoids exploration. It can sometimes be tedious to manually design a TL formula that yields satisfying behaviors. Work has been done to make components of the formula learnable from data. In [23], the authors propose learning linear temporal logic (LTL) formulas from demonstrations. Given the close relationship between TL and automaton[24], theauthors of [25] propose a method that learns reward machines (an automata-like reward presentation) from demonstrations. A shortcoming of these methods is that the LTL that they use operates onpropositions (binary variables with values true or false). Unlike our STL-based approach, theseapproaches require discrete state and action spaces and require the demonstrations themselves tohave the same discrete representations.", "annot_1": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make this paragraph easier to read, remove unnecessary details if needed", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Summarize the last third of this paragraph in one sentence. 
Smooth out the writing.", "annotator": "annotator_07"}} {"id_paragraph": "jyac3IgQ44.f4au9jfat5.06", "parag_1": "We build our 3D backbone by stacking multiple MsSVT blocks, as shown in Fig. Noted that weset both the query and the key window size in the last MsSVT block as (1 , 1 , ∞ ) to compress the 3Dvoxels into a 2D feature map, where the query is the average voxel features within the pillar window.", "parag_2": "As shown in Fig 2, we build our 3D backbone by stacking Mixed-scale Sparse Voxel Transformer(MsSVT) blocks. It is worth noting that both the query and key window size in the last block of MsSVT are set to (1 , 1 , ∞ ) so as to compressing the 3D voxels into 2D feature map, where the queryis the average of all the voxel features within the pillar window.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hAi0PMz9T7.Ut8ESfYp1.04", "parag_1": "This work studied the distillation of NN-based deep reinforcement learning agents into symbolic policies for performance-oriented congestion control in TCP. Our branched symbolic frameworkenjoys better simplicity and efficiency while exhibiting comparable and often improved performancesover their black-box teacher counterparts on both simulation and rigorous emulation testbeds. Ourresults point towards a fresh direction to make congestion control extremely light-weight and simpler,via a symbolic design. Our future work aims for more integrated neural-symbolic solutions and fastermodel-free online training/fine-tuning for performance-oriented congestion control. Exploring thefairness between our learned CC and legacy CC is also an interesting next step. 
Besides, we also aimto apply symbolic distillation to a wider range of systems and networking problems.", "parag_2": "This work studies the distillation of NN-based deep reinforcement learning agents into symbolic policies for performance-oriented congestion control in TCP. Our branched symbolic framework has better simplicity and efficiency while exhibiting comparable and often improved performance over their black-box teacher counterparts on both simulation and emulation environments. Our results point towards a fresh direction to make congestion control extremely light-weight, via a symbolic design. Our future work aims at more integrated neurosymbolic solutions and faster model-free online training/fine-tuning for performance-oriented congestion control. Exploring the fairness of neurosymbolic congestion control is also an interesting next step. Besides, we also aim to apply symbolic distillation to a wider range of systems and networking problems.", "annot_1": {"annotation": ["Rewriting_medium", "Concision"], "instruction": "Rewrite the second-to-last sentence to make it more general and shorten some formulations.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_light", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1qImCcFQ.Ske132uA7.01", "parag_1": "Let { A n , b n } be the dynamics parameters associated with node n . Even though only the discrete states are associated with the leaf nodes, we will introduce dynamics at the internal nodes as well. These internal dynamics serve as a link between the leaf node dynamics via a hierarchical prior,", "parag_2": "Let { A n , b n } be the dynamics parameters associated with node n . Although the locally linear dynamics of a discrete state are specified by the leaf nodes, we introduce dynamics at the internal nodes as well. 
These internal dynamics serve as a link between the leaf node dynamics via a hierarchical prior,", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ByngnZiT7.BkN8nWiaX.00", "parag_1": "Table 1 summarizes the results, comparing the cost-sensitive robust error between the baseline model trained for overall robustness and a model trained using our cost-sensitive robust optimization. The proposed cost-sensitive robust defense model is trained with (cid:15) = 0 . 2 based on loss function (3.1) and the corresponding cost matrix C . The regularization parameter α is tuned via cross validation (see Appendix A in the supplementary materials for details). We report the selected best α , classification error and cost-sensitive robust error on the testing dataset.", "parag_2": "Table 1 summarizes the results, comparing the cost-sensitive robust error between the baseline model trained for overall robustness and a model trained using our cost-sensitive robust optimization. The proposed cost-sensitive robust defense model is trained with (cid:15) = 0 . 2 based on loss function (3.2) and the corresponding cost matrix C (see Appendix B.2 for comparison results with different (cid:15) ). The regularization parameter α is tuned via cross validation (see Appendix A in the supplementary materials for details). We report the selected best α , classification error and cost-sensitive robust error on the testing dataset.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.03", "parag_1": "Cole & Warwicker, 2002; Kastritis & Bonvin, 2013). Intuitively speaking, amino acids on the interface become less flexible (lower entropy) after they get contact with other proteins due to the geometrical and physical restraints imposed by the binding partner (Figure 1). 
Higher binding affinity implies better complementarity between the two parts, thus higher rigidity.", "parag_2": "Cole & Warwicker, 2002; Kastritis & Bonvin, 2013). When two proteins bind, the residues located at the interface tend to become less flexible (i.e. having lower entropy) due to the physical and geometric constraints imposed by the binding partner (Figure 1). A higher amount of entropy loss corresponds to a stronger binding affinity.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "QSAQjBO0aj.srl-4uM-pl.00", "parag_1": "Trigger Feature Hypothesis We hypothesize that the trigger features are sparsely encoded in only afew channels, while clean image features need to be encoded across many channels for the effectiveclassification . This is a key difference from normal data features that are presumably distributed moreevenly across channels, which indicates that these two types of features might behave differently incertain situations, leading to our main technical contribution. More illustrations of trigger featuresare provided in App. A3.", "parag_2": "Trigger Feature Hypothesis We hypothesize that the trigger features are sparsely encoded in only afew channels, while clean image features need to be encoded across many channels . This is a keydifference from normal data features that are presumably distributed more evenly across channels,which indicates that these two types of features might behave differently in certain situations, leadingto our main technical contribution. Illustrations of triggers are provided in App. 
A3.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph a bit shorter.", "annotator": "annotator_07"}} {"id_paragraph": "nkOpNqg-ip.OwJsIhe_p.02", "parag_1": "Despite being only an example scheme, one advantage of Naive AutoML over black-box optimization that already becomes clear here is that it directly generates important insights that can significantly support the data scientist working with it. For example, even such a simple question as “what is the potential of feature selection on the given data?” cannot be directly answered by the existing black-box approaches. In our scheme, the filtering stage (cf. Sec. A.3) is a very good basis to give an initial answer to this question. More complex stage schemes, e.g. including wrapping , can answer such questions even in much more detail. In this sense, Naive AutoML presents itself as more amenable to the growing demand for meaningful interaction between the tool and the human Wang et al. Crisan and Fiore-Gartland (2021); Drozdal et al. ; Wang et al. compared to the currently adopted black-box approaches.", "parag_2": "Despite being only an example scheme, one advantage of Naive AutoML that already becomes clear here is that it directly generates important insights that can significantly support the data scientist working with it. For example, a question like “what is the potential of feature selection on the given data?” can be answered by black-box approaches only after some post-processing, if at all. In our scheme, the filtering stage (cf. Sec. A.3) is a very good basis to give an initial answer to this question. 
In this sense, Naive AutoML presents itself as more amenable to the growing demand for meaningful interaction between the tool and the human (Wang et al., 2019; Crisan and Fiore-Gartland, 2021; Drozdal et al., 2020; Wang et al., 2021) compared to the currently adopted black-box approaches.", "annot_1": {"annotation": ["Concision"], "instruction": "Make the paragraph more concise by focusing on the main points.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ssjKKm0b5y.3wi5X8wrM_.02", "parag_1": "We evaluate Pareto HyperNetworks (PHNs) on a set of diverse multi-objective problems. The experiments show the superiority of PHN over other MOO methods. Wewill make our code publicly available in order to facilitate further research.", "parag_2": "We evaluate Pareto HyperNetworks (PHNs) on a set of diverse multi-objective problems. The experiments show the superiority of PHN over previous MOO methods. We make our source code publicly available at: https://github.com/AvivNavon/pareto-hypernetworks .", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "9ALnOEcGN_.4eEIRZ-dm.01", "parag_1": "Our proposed method in this paper belongs to the category of construction heuristics leaners in the sense of producing a one-shot solution per problem instance. But unlike previous methods which generate the solutions via a constructive Markov decision process (MDP) with rather costly decoding steps (adding one un-visited node per step to a partial solution), we introduce a compact continuesspace to parameterize the underlying distribution of discrete candidate solutions, and to allow efficient sampling from such distribution without costly neural network-involved decoding. 
", "parag_2": "Our proposed method in this paper belongs to the category of construction heuristics learners in the sense of producing a one-shot solution per problem instance. However, there are major distinctions between previous methods and ours. One distinction is how to construct solutions. Unlike previous methods which generate the solutions via a constructive Markov decision process (MDP) with rather costly decoding steps (adding one un-visited node per step to a partial solution), we introduce a compact continuous space to parameterize the underlying distribution of discrete candidate solutions, and to allow efficient sampling from that distribution without costly neural network-involved decoding. Another distinction is about the training framework.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "RPX7thbt2Mv.PdsbQ4ckYr.01", "parag_1": "Hopper-v2 ) from the OpenAI Gym MuJoCo locomotion tasks. For each environment, we use the medium-v2 , medium-replay-v2 and medium-expert-v2 datasets to construct the expert demonstrations and the unlabeled dataset. For the expert demonstrations, we choose the best episodes from the D4RL dataset based on the episodic return. In practice, the expert demonstrations can be provided separately and we are only selecting the expert demonstration in this way for ease of evaluation. To obtain the unlabeled dataset, we discard the original reward information in the dataset. We then run OTR to label the dataset based on the optimal coupling between the unlabeled episodes and the chosen expert demonstrations. Afterward, we proceed with running the offline RL algorithm.", "parag_2": "Hopper-v2 ) from the OpenAI Gym MuJoCo locomotion tasks. For each environment, we use the medium-v2 , medium-replay-v2 and medium-expert-v2 datasets to construct the expert demonstrations and the unlabeled dataset. 
For the expert demonstrations, we choose the best episodes from the D4RL dataset based on the episodic return. 2 To obtain the unlabeled dataset, we discard the original reward information in the dataset. We then run OTR to label the dataset based on the optimal coupling between the unlabeled episodes and the chosen expert demonstrations. Afterward, we proceed with running the offline RL algorithm.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove the fourth sentence", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision"], "instruction": "Exclude unnecessary information.", "annotator": "annotator_08"}} {"id_paragraph": "rJv8ichjB.k-Li2JE2Z.00", "parag_1": "Much (though not all) work on program synthesis is focused on domain specific languages that are less than maximally expressive (Gulwani, 2011; Balog et al., 2016; Wang et al., 2017; Alur et al., 2015). We would like to focus on the synthesis of programs in a Turing complete language, but this presents technical challenges: First, general purpose languages such as C++ or Python are typically quite complicated and sometimes not fully specified; this makes it a challenge to search over partial programs in those languages. Second, sandboxing and executing code written in these languages is nontrivial. Finally, searching over and executing many programs in these languages can be quite slow, since this is not what they were designed for.", "parag_2": "Much (though not all) work on program synthesis is focused on domain specific languages that are less than maximally expressive (Gulwani, 2011; Balog et al., 2016; Wang et al., 2017; Alur et al., the search procedure used will tend to emit ‘shorter’ programs first, and so there is an Occam’s-Razor-type argument (Spade & Panaccio, 2019) to be made that you should get this for free. 
We would like to focus on the synthesis of programs in a Turing complete language, but this presents technical challenges: First, general purpose languages such as C++ or Python are typically quite complicated and sometimes not fully specified; this makes it a challenge to search over partial programs in those languages. Second, sandboxing and executing code written in these languages is nontrivial. Finally, searching over and executing many programs in these languages can be quite slow, since this is not what they were designed for.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "9ALnOEcGN_.4eEIRZ-dm.02", "parag_1": "Here the higher valued θ i,j means the higher probability for the edge from node i to node j to be sampled. More importantly, notice that we use matrix θ ∈ R n × n to parameterize the probabilisticdistribution of n ! discrete feasible solutions. The compact, continuous and differentiable spaceof θ allows us to leverage gradient-based optimization without costly MDP-based construction offeasible solutions, which has been a bottleneck for scaling up in representative DRL solvers so far. Inother words, we also no longer need costly MCMC-based sampling for optimizing our model dueto the chain-rule decomposition. Instead, we use autoregressive factorization for sampling from theauxiliary distribution, which is faster than sampling with MCMC from the distribution defined by theenergy function.", "parag_2": "Here a higher valued θ i,j corresponds to a higher probability for the edge from node i to node j to be sampled. The compact, continuous and differentiable space of θ allows us to leverage gradientbased optimization without costly MDP-based construction of feasible solutions, which has been a bottleneck for scaling up in representative DRL solvers so far. In other words, we also no longer need costly MCMC-based sampling for optimizing our model due to the chain-rule decomposition. 
Instead, we use autoregressive factorization for sampling from the auxiliary distribution, which is faster than sampling with MCMC from the distribution defined by the energy function.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Delete the second sentence. Improve the english in the first sentence.", "annotator": "annotator_07"}} {"id_paragraph": "By2l1r_DB.BJLFqqnsr.00", "parag_1": "Our initial experiments compare our agent to baseline agents trained on a single policy. For these experiments, we use the navigation environment defined previously with three objectives: stay on the road, avoid hazards, and move right. Note that the opposite of each of these objectives are also included in possible behavior specifications due to the semantics of our language that enable minimization. We define a set of 50,000 specifications to sample from during training, with which the agent learns to generalize to novel specifications. The left plot in Figure 3 shows the average episodic reward for 100 never-before-seen test specifications throughout training for our agent with and without curriculum learning. We compare the results of these two agents with 100 baseline DQN agents trained on each of these 100 behaviors. The error bars in Figure 3 show one standard deviation in average reward for multiple agent initializations.", "parag_2": "Our initial experiments compare our agent to baseline agents trained on a single policy. For these experiments, we use the navigation environment defined previously with three objectives: stay on the road, avoid hazards, and move right. Note that the opposite of each of these objectives are also included in possible behavior specifications due to the semantics of our language that enable minimization. 
We define a set of 50,000 specifications to sample from during training, with which the agent learns to generalize to novel specifications. These specifications are randomly generated according to number of atomic statements, logical connectives, hard vs. soft constraints, and value of constraints. We randomly sample test specifications from these generated specifications. The left plot in Figure 3 shows the average episodic reward for 100 never-before-seen test specifications throughout training for our agent with and without curriculum learning. We compare the results of these two agents with 100 baseline DQN agents trained on each of these 100 behaviors. The error bars in Figure 3 show one standard deviation in average reward for multiple agent initializations.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "SyF8k7bCW.HytIRPamf.00", "parag_1": "Mirkovic, 2009; Binder & Desai, 2011). The idea of learning from the context information was first successfully applied to vector representation learning for words in Mikolov et al. (2013b) and learning from the occurrence of words also succeeded in Pennington et al. ", "parag_2": "Mirkovic, 2009; Binder & Desai, 2011). The idea of learning from the context information (Turney & Pantel, 2010) was recently successfully applied to vector representation learning for words in Mikolov et al. (2013); Pennington et al. Collobert et al.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_medium", "Content_addition"], "instruction": NaN, "annotator": "annotator_09"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.22", "parag_1": "We further compare our network pruning method with representative model compression techniques for image SR. 
Specifically, we compare with neural architecture search based methods (Chu et al., 2019b;a) and knowledge distillation (KD) based methods (Lee et al., 2020). We provide quantitative results in Tab. 4. Our SRPN-L obtains the best performance with the least parameter number and Mult-Adds. With our SRP pruning method, we do not have to search lots of architectures or train a teacher network, which consumes extra computation resources. These comparisons show that our SRP method has great potential for efficient image SR.We provide more discussions and comparisons with related works (e.g., DCP (Zhuang et al., 2018), DHP (Li et al., 2020)) in the appendix.", "parag_2": "We further compare our SRP to other representative efficient image SR approaches via model compression. Concretely, neural architecture search based methods (Chu et al., 2019b;a) and knowledge distillation (KD) based methods (Lee et al., 2020) are compared to. Quantitative results at × 2 scale are presented in Tab. 4, where our SRPN-Lite delivers better PSNR results across different datasets with fewer parameters and Mult-Adds. With our SRP pruning method, there is no need to search massive network architectures or pretraining a teacher network, which usually consumes considerable computation resources. These comparisons show that our SRP, as a network pruning method, has as much potential (if not more) as other model compression techniques for efficient image SR.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Please, rewrite this paragraph, make it easier to read", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium", "Concision"], "instruction": "Write in a more passive style and remove the last sentence", "annotator": "annotator_06"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.11", "parag_1": "We collect interaction data for one month for a listwise online campaign recommender system. Users are represented by attributes such as age, occupation, and localities. 
Items attributes are also given such as text features, image features, and reward points of campaigns. We simulate a representative RL environment by training a reward model on a set of 68,775 users and 57 items to estimate the click likelihood of the users(Appendix A.4). Weaugment the user reward model with the CPR reward to incentivize the agent to recommend high-CPR user-relevant items. We train a VAE (Kingma & Welling, 2013) to learn action representations, which are given to the RL agent as input. The test data consists of 82,445 users and 58 items, about 30 of which are shared with the training items. We report the test reward for models trained with CDQN algorithm.", "parag_2": "We collect four-week interaction data in a listwise online campaign recommender system. Users are represented by attributes such as age, occupation, and localities. Item attributes include text features, image features, and reward points of campaigns. We train a VAE (Kingma & Welling, 2013) to learn item representations. We create a representative RL environment by training two click-estimation models using data from the first two weeks for training and the last two weeks for evaluation. The training environment consists of 68,775 users and 57 items, while testing has 82,445 users anditems, with an overlap of 30 items. The reward function combines the user-click and CPR value of the list. The explicit CPR reward is a representative scenario for when the designer has listwise objectives in addition to user satisfaction. 
We train with CDQN and report the test reward.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the majority of the paragraph, avoiding we and writing in a more neutral tone.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite and reorganize the paragraph to convey the ideas more clearly.", "annotator": "annotator_07"}} {"id_paragraph": "HJRpJl_vr.H115opYiS.00", "parag_1": "The shot-number k only appears in first two terms of the denominator. It does not contribute to the last term of the denominator. This implies diminishing returns in expected accuracy when more support data is added without altering φ . 2. By observing the degree of terms in equation 7 (and treating the last term of the denominator as a constant), it is clear that increasing k will decrease the sensitivity (magnitude of partial derivative) of this lower bound to Σ c , and increase its sensitivity to Σ . 3. If one postulates that meta-learning updates on φ is similar to gradient ascent on this accuracy lower bound, then learning with smaller k emphasizes minimizing noise, while learning with higher k emphasizes maximizing signal.", "parag_2": "The shot-number k only appears in first two terms of the denominator, implying that the bound saturates quickly with increasing k . This is also in agreement with the empirical observation that meta-testing accuracy has diminishing improvements when more support data is added. By observing the degree of terms in equation 7 (and treating the last term of the denominator as a constant), it is clear that increasing k will decrease the sensitivity (magnitude of partial derivative) of this lower bound to Σ c , and increase its sensitivity to Σ . 3. 
If one postulates that meta-learning updates on φ is similar to gradient ascent on this accuracy lower bound, then learning with smaller k emphasizes minimizing noise, while learning with higher k emphasizes maximizing signal.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "BkxG1CvhWf.wcpE7maMLZ4.00", "parag_1": "A gap in the literature seems to be a practical completeness threshold for cost optimal planning problems that have actions with 0-cost. This is one hurdle to the application of SAT-based planning to such problems since, without a reasonable completeness threshold, optimality can only be proved after solving the compilation for a horizon that is the number of states in the state space. This is impractical for most problems since it can be exponentially bigger than the size of the given problem. It should be noted that some approaches try to circumvent the need for a tight completeness threshold, such the ones by Robinson et al. and Leofante et al., which add an over-approximation of the transition relation underlying the planning problem to the encoding. Optimality of a given solution is then proved when this over-approximation is unsatisfiable. Nonetheless, these approaches still need to compute compilations for multiple horizons and they are susceptible to having to solve compilations for an exponential horizon, unless a tighter completeness threshold is available, since the over-approximation is a relaxation of the given problem, i.e. it could be solvable even if the concrete problem is not solvable.", "parag_2": "A gap in the literature seems to be a practical completeness threshold for cost optimal planning problems that have actions with 0-cost. 
This is one hurdle to the application of SAT-based planning to such problems, since without a reasonable completeness threshold, optimality can only be proved after solving the compilation for a horizon that is the number of states in the state space. This is impractical for most problems since it can be exponentially bigger than the size of the given problem. It should be noted that some approaches try to circumvent the need for a tight completeness threshold, such the ones by Robinson et al. and Leofante et al., which add an over-approximation of the transition relation underlying the planning problem to the encoding. Optimality of a given solution is then proved when this over-approximation is unsatisfiable. Nonetheless, these approaches still need to compute compilations for multiple horizons and they are suscepteble to having to solve compilations for the same exponential horizon, since the overapproximation is generally incomplete, i.e. it could be solvable even if the concrete system is not solvable.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Concise the last sentence of this text.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.23", "parag_1": "Evaluation We compute the accuracy at the sample-level, meaning a predicted set is considered correct if and only if every element is correct. The baselines are very volatile during training, further resulting in very large variances at the end of training. To reduce this variance, we pick the best model according to the validation accuracy which we evaluate every 500 training steps. We report the accuracy on the test set for the best model. Each run samples new datasets based on the random seed, so we evaluate all models using the same set of random seeds.", "parag_2": "Evaluation. 
We compute the accuracy at the sample-level, meaning a predicted set is considered correct only if every element is correct. The predicted ID for each predicted element is obtained by taking the argmax over the elements’ dimensions in the output. The baselines are very volatile during training, which results in very large variances at the end of training. To reduce this variance, we pick the best model according to the validation accuracy, evaluated every 500 training steps. We report the accuracy on the test set for the best model. Each run samples new datasets based on the random seed, so we evaluate all models using the same set of random seeds.", "annot_1": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.01", "parag_1": "Implicit DSPN. Motivated by our analysis on exclusive multiset-equivariance, we seek tofurther improve DSPN. We propose implicit DSPN (iDSPN) in Section 3: a version of DSPN that uses implicit differentiation, which enables better optimizers and more iterations to be used at a constant memory cost and less computation time. We then simplify this approach to avoid computing a Hessianentirely while still keeping the same benefits.", "parag_2": "Implicit DSPN. Despite this beneficial property, DSPN is outperformed by the set-equivariant Slot Attention (Locatello et al., 2020), which motivates us to improve other aspects of DSPN. We propose implicit DSPN (iDSPN) in Section 3: a version of DSPN that uses approximate implicit differentiation. Implicit differentiation enables us to use better optimizers and more iterations at a constant memory cost and less computation time. 
The approximation makes this faster to run and easier to implement by avoiding the computation of a Hessian while still having the same benefits.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "c-9Hob6rd2.H4aN8Z9LDS.01", "parag_1": "FetchPush . As shown in Figure 5(c), in the fetch environment, the agent is trained to fetch an object from the initial position (rectangle in green) to a distant position (rectangle in red). Although the fetch tasks are more complicated than they reach ones in the maze, GSRL also yields large performance gain, as shown in Figure 6(c).", "parag_2": "FetchPush . As shown in Figure 11(c), in the fetch environment, the agent is trained to fetch an object from the initial position (rectangle depicted in green) to a distant position (rectangle depicted in red). Let the origin (0 , 0 , 0) denote the projection of the gripper’s initial coordinate on the table. The object is uniformly generated on the segment from ( − 0 . 0 , − 0 . 0 , 0) to (8 , 8 , 0) , and the goal is uniformly generated on the segment from ( − 0 . 0 , − 0 . 0 , 0) to (8 , 8 , 0) .", "annot_1": {"annotation": ["Development", "Content_addition"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.01", "parag_1": "Traditional computational approaches are mainly based on biophysics and statistics (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017). Though having dominated the area for years, their limitations are non-negligible. In general, biophysics-based methods face the trade-off between efficiency and accuracy as they rely on sampling from energy functions. Statistical methods are efficient but their capacity is limited by the descriptors considered in the model. Both biophysics and statistics-based methodsrely heavily on human knowledge. 
Thus, they can hardly benefit from the fast-growing ofavailable protein structures. These limitations mark that predicting the effect of mutation on protein binding remains an open problem.", "parag_2": "Traditional computational approaches are mainly based on biophysics and statistics (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017). Although these methods have dominated the field for years, they have several limitations. Biophysics-based methods face a trade-off between efficiency and accuracy since they rely on sampling from energy functions. Statistical methods are more efficient, but their capacity is limited by the descriptors considered in the model. Furthermore, both biophysics and statistics-based methods heavily rely on human knowledge, preventing it to ∗ Equal contribution. benefit from the growing availability of protein structures. As a result, predicting the effects of mutations on protein-protein binding remains an open problem.", "annot_1": {"annotation": ["Unusable", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "c8pZvSp-5r.zd4IIIuixp.01", "parag_1": "In addition, both DeiT and ViT utilize an extra learnable class token to perform classification ( i.e ., cls token shown in Figure 1 (a) and (b)). By design, the class token is not translation-invariant although it can learn to be so. A simple alternative is to directly replace it with a global average pooling (GAP), which is inherently translation-invariant, resulting in our CVPT-GAP. Together with the translation-equivariant positional encodings, CVPT-GAP is uttermost translation-invariant and thus can achieve much better image classification performance.", "parag_2": "In addition, both DeiT and ViT utilize an extra learnable class token to perform classification ( i.e ., cls token shown in Figure 1 (a) and (b)). By design, the class token is not translation-invariant although it can learn to be so. 
A simple alternative is to directly replace it with a global average pooling (GAP), which is inherently translation-invariant, resulting in our CVPT-GAP. Together with the conditional positional encodings, CVPT-GAP can achieve much better image classification performance.", "annot_1": {"annotation": ["Concision"], "instruction": "Simplify the conclusions of this paragraph wo make it clearer and more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Simplify the last sentence by removing the notion of translation-equivariant and just calling it conditional positional encodings.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.19", "parag_1": "When compared with all previous methods, our SRPN-L performs the best on all the datasets with all scaling factors. Different from careful network designs as most compared methods have done, we start with the existing EDSR baseline (Lim et al., 2017) and prune it to a much smaller network, showing the effectiveness of our proposed SRP.", "parag_2": "When compared with all previous methods, our SRPN-Lite performs the best on all the datasets under all scaling factors. Unlike most comparison methods, which achieve efficiency through careful network designs, our work starts with the existing EDSR baseline (Lim et al., 2017) and prunes it to a much smaller network, showing the effectiveness of our proposed SRP.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the following paragraph, make it more formal.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the writing and change SRPN-L to SRPN-Lite", "annotator": "annotator_06"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.22", "parag_1": " Shan et al. (2022) identifies 5 single-point mutations on a human antibody against SARS-CoV-2 that enhance neutralization (effectiveness). 
There are 494 possible single-point mutations on the heavy chain CDR region of the antibody in total. We use the most competitive methods benchmarked in Section 4.1 to predict ∆∆ G s for all the mutations and rank them in ascending order (lowest ∆∆ G in the top). A predictor is considered more effective if it ranks more favorable mutations in the top place. Table 2 shows that RDE-Network and DDGPred successfully identify three mutations (Ranking ≤ 10%), and RDE-Network ranks them higher.", "parag_2": "In Shan et al. , the authors report five single-point mutations on a human antibody against SARS-CoV-2 that enhance neutralization effectiveness. These mutations are among the 494 possible single-point mutations on the heavy chain CDR region of the antibody. We use the most competitive methods benchmarked in Section 4.1 to predict ∆∆ G s for all the single-point mutations and rank them in ascending order (lowest ∆∆ G in the top). The effectiveness of a predictor is determined by the number of favorable mutations ranked in the top place. As shown in Table 2, RDE-Network and DDGPred successfully identify three mutations (Ranking ≤ 10%), with RDE-Network ranking them higher.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Fluidify this paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the English in this paragraph in an academic style.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.10", "parag_1": "Given the issue above, it is necessary to prune all the Conv layers in residual blocks if we seek acceleration of practical use. Thus, we need a method to align the pruned indices in all constrained Conv layers. 
Regularization then arises as a natural solution given its prevailing use to impose priors on the sparsity structure in pruning (Reed, 1993; Wen et al., 2016; Wang et al., 2021).", "parag_2": "Given this issue, it is imperative to prune all the Conv layers in residual blocks, thus calling for an approach to align the pruned indices of all constrained Conv layers. Regularization then arises as a promising solution considering it has been widely used before to impose priors on the sparsity structure in classification (Reed, 1993; Wen et al., 2016; Wang et al., 2021).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "I want to use other words in my paragraph.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Revise this text to make it a little more concise and fitting to the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "FKg16y0Y9A.ztJ9BPSr-.00", "parag_1": "We have implemented Stars as part of[redacted for blind review] which is based on the Adaptive Massively Parallel Computation (AMPC) model [7]. Each logical unit of computation is automatically distributed across a number of worker machines, with the experiments in this paper scaling to thousands of individual workers.", "parag_2": "We have implemented Stars as part of the Grale [25] graph building system using Flume - a C++ counterpart to FlumeJava [13]. which is based on the Adaptive Massively Parallel Computation (AMPC) model [7]. Each logical unit of computation is automatically distributed across a number of worker machines, with the experiments in this paper scaling to thousands of individual workers.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.06", "parag_1": "Note that exclusive multiset-equivariance is not always obtained in DSPN, but depends on the choice of encoder. 
For instance, a DeepSets encoder (Zaheer et al., 2017) – which is based on sum pooling – has the same gradients for equal elements, which would make DSPN set-equivariant. It is specifically the use of the exclusively multiset-equivariant gradient of sorting that makes DSPN exclusively multiset-equivariant.", "parag_2": "Note that DSPN is not always exclusively multiset-equivariant, but it depends on the choice of encoder. A DeepSets encoder (Zaheer et al., 2017) – which is based on sum pooling – has the same gradients for equal elements, which would make DSPN set-equivariant. It is specifically the use of the exclusively multiset-equivariant gradient of sorting that makes DSPN exclusively multiset-equivariant.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Change the subject in the first sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Lightly revise this paragraph for better readability while trying to make it a little shorter without loosing informations.", "annotator": "annotator_07"}} {"id_paragraph": "7VIguXRv9h.yPdniQMisK.00", "parag_1": "• Implicit Curricula: Examples are learned in a consistent order (Section 2). We show that the order in which examples are learned is consistent across runs, similar training methods, and similar architectures. Furthermore, we show that it is possible to change this order by changing the order in which examples are presented during training. Finally, we establish that well-known notions of sample difficulty are highly correlated with each other. • Curricula achieve (almost) no improvement in the standard setting (Section 4). We show curriculum learning, random, and anti-curriculum learning perform almost equally well in the standard setting. Furthermore, we establish that using similar techniques to remove examples from the training set (as opposed to introducing them) also does not help. 
• Curriculum learning improves over standard training when training time is limited (Section 5). Imitating the large data regime, where training for multiple epochs is not feasible, we limit the number of iterations in the training algorithm and compare curriculum, random and anti-curriculum ordering against standard training. Our experiments reveal a clear advantage of curriculum learning over other methods. • Curricula improves over standard training in noisy regime (Section 5). Finally, we mimic noisy data by adding label noise. Similar to Jiang et al. ; Saxena et al. ; Guo et al. , our experiments indicate that curriculum learning has a clear advantage over other curricula and standard training.", "parag_2": "• Implicit Curricula: Examples are learned in a consistent order (Section 2). We show that the order in which examples are learned is consistent across runs, similar training methods, and similar architectures. Furthermore, we show that it is possible to change this order by changing the order in which examples are presented during training. Finally, we establish that well-known notions of sample difficulty are highly correlated with each other. • Curricula achieve (almost) no improvement in the standard setting (Section 4 and 6). We show curriculum learning, random, and anti-curriculum learning perform almost equally well in the standard setting.• Curriculum learning improves over standard training when training time is limited (Section 5 and 6) . Imitating the large data regime, where training for multiple epochs is not feasible, we limit the number of iterations in the training algorithm and compare curriculum, random and anti-curriculum ordering against standard training. Our experiments reveal a clear advantage of curriculum learning over other methods. • Curricula improves over standard training in noisy regime (Section 5 and 6). Finally, we mimic noisy data by adding label noise. Similar to Jiang et al. ; Saxena et al. ; Guo et al. 
, our experiments indicate that curriculum learning has a clear advantage over other curricula and standard training.", "annot_1": {"annotation": ["Concision", "Content_deletion"], "instruction": "Remove the less important details in the results.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove unnecessary details.", "annotator": "annotator_07"}} {"id_paragraph": "tUjROCVSs0.LG_Cl6t7Bt.00", "parag_1": "We develop our approach focusing on the shape space of discrete shells, where shapes are given by triangle meshes and the manifold is equipped with an elasticity-based metric. In principle, our approach is also applicable to other shape spaces such as manifolds of images, and we will include remarks on how we propose this could work. We evaluate our approach with experiments on data manifolds of triangle meshes, both synthetic ones and ones extracted from data via SPGA, and we demonstrate that the proposed composite network architecture outperforms a monolithic fully connected network architecture as well as an approach based on the affine combination of the factors. ", "parag_2": "We develop our approach focusing on the shape space of discrete shells, where shapes are given by triangle meshes and the manifold is equipped with an elasticity-based metric. In principle, our approach is also applicable to other shape spaces such as manifolds of images, and we will include remarks on how we propose this could work. We evaluate our approach with experiments on data manifolds of triangle meshes, both synthetic ones and ones extracted from data via SPGA, and we demonstrate that the proposed composite network architecture outperforms a monolithic fully connected network architecture as well as an approach based on the affine combination of the factors. 
We see this work as a first step to use NN to accelerate the complex computations of shape manifold parameterizations.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "NAxP0iFmBr.5QBuYp8GH.03", "parag_1": "We trained all the learning-based control policies in a mix of 60 instances of the Blank Environment containing 1 to 6 humans (uniformly sampled). Each training run has 500,000 steps of environment interactions. For fair evaluation, we report all of the mean metrics based on the data of a hundred 500-step episodes. In all experiments, the cameras and humans interact in a 10m × 10m area.", "parag_2": "We trained all the learning-based control policies in a mix of 28 instances of the BlankEnv uniformly containing 1 to 6 humans. Each training run has 700,000 steps (1,000 training iterations). For fair evaluation, we report all of the mean metrics based on the data from the latest hundred 500-steps episodes. In all experiments, the cameras and humans interact in a 10m × 10m area.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.06", "parag_1": "Kullback-Leibler divergence) fails (Seguy et al., 2018). In addition, it does not require adversarial training and is, therefore, easier to optimize than adversarial-based measures (Kallus, 2020).", "parag_2": "Kullback-Leibler divergence) fails (Seguy et al., 2018). 
In addition, the calculated discrepancy can be optimized with the traditional supervised learning framework instead of the adversarial learning framework, and is therefore easier to optimize than adversarial-based methods (Kallus, 2020).", "annot_1": {"annotation": ["Development", "Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.03", "parag_1": "Action Graph The input to our policy framework consists of the state s and a list C = [ c a 0 , ..., c a k ] of action representations for each action a i ∈ A . We build a fully connected action graph G with vertices corresponding to each action. If certain action relations are predefined in domain knowledge, we can remove other edges to ease training (Appendix B.1). The relations that we must infer depend on the current state. For instance, a screwdriver is relevant to a screw while repairing furniture, but a drill machine is more related when the screw is for a wall. Following this insight, we join state and action representations, c (cid:48) a k = ( s, c a k ) to form the input nodes of the graph. In Sec. 6.3.2, we validate this insight by comparing against having graph nodes as action representations only.", "parag_2": "Action Graph : The input to our policy framework consists of the state s and a list C = [ c a 0 , ..., c a k ] of action representations for each action a i ∈ A . We build a fully connected action graph G with vertices corresponding to each available action. If certain action relations are predefined via domain knowledge, we can reduce some edges to ease training (Appendix B.1). We note that the action relations can vary depending on the environment state. For instance, a screwdriver is related to a screw for furniture repair, but a drill machine is more related when the screw is for a wall. Therefore, we join the state and action representations, c (cid:48) a i = ( s, c a i ) to obtain the nodes of the graph. Sec. 6.3. 
validates that learning state-dependent action relations leads to more optimal solutions.", "annot_1": {"annotation": ["Rewriting_light", "Content_substitution"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Review this paragraph, when needed try to make it clearer.", "annotator": "annotator_01"}} {"id_paragraph": "UlHNcByJV.W1RxpkrWx8.03", "parag_1": "Adult dataset. The de-biased classifier achieves higher recall while maintaining predictive ability evidenced by its precision value. We note that trade-off between precision and recall can be regulated by changing the number of epochs and not resetting the weights for each batch. Our code generates a full log of performance metrics for the biased and de-biased classifiers for every run of the algorithm.", "parag_2": "Adult dataset. The de-biased classifier achieves higher recall while maintaining predictive ability evidenced by its precision value. Our code generates a full log of performance metrics for the biased and de-biased classifiers for every run of the algorithm.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove non-essential sentences.", "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.07", "parag_1": "Recently, deep learning-based approaches to predicting mutational effects on protein binding have emerged. We group them into three categories: end-to-end models, pre-training-based models, and unsupervised models. End-to-end models take both mutant and wild-type(not mutated) protein structuresalong with other features as input and directly predict the difference in binding free energy. End-to-end models achieve satisfactory correlation between prediction and ground truth on the whole SKEMPI benchmark consisting of many different structures (Shan et al., 2022), but the perstructure correlation, which is more relevant to practical use, is still low. 
In attempt to settle the issue of data scarcity, another line of work proposes to pre-train a feature extraction networkand then use regression models to predict the effect of mutations based on the learned features (Liu et al., 2021; Yang et al., 2022).", "parag_2": "Recently, deep learning-based approaches have emerged. We group them into three categories: endto-end models, pre-training-based models, and unsupervised models. End-to-end models directly predict the difference in binding free energy by taking both mutant and wild-type protein structures as input (Shan et al., 2022). Pre-training-based models attempt to address data scarcity by pre-training a feature extraction network (Liu et al., 2021; Yang et al., 2022; Zhang et al., 2022).", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Give me a shorter version of this:", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Content_deletion", "Concision"], "instruction": "Make this paragraph twice as short by making the content more concise and deleting unnecessary details.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.20", "parag_1": "Previous uses of implicit differentation in meta-learning [iMAML] and neural architecture search [iDARTS] also involve solving a linear system, but their Y corresponds to the neural network parameters and thus usually has millions of entries. In contrast, in our setting we work with a much smaller Y , which for example only contains 190 entries in sec [].", "parag_2": "Previous uses of implicit differentiation in meta-learning (Rajeswaran et al., 2019) and neural architecture search (Zhang et al., 2021b) also involve solving a linear system, but their Y corresponds to the neural network parameters and thus usually has millions of entries. In contrast, in our setting we work with a much smaller Y , which for example only contains 190 entries in Section 4.3. Regularization. 
In Rajeswaran et al.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "HJi5QRusB.3ELqS2sPA.00", "parag_1": "Main results Table 1 displays the accuracy of each model on the test set of each dataset, after they were training on ImageNet-only or all datasets. Traffic Signs and MSCOCO are not used for training in either case, as they are reserved for evaluation. We propose to use the average (over the datasets) rank of each method as our metric for comparison, where smaller is better. A method receives rankif it has the highest accuracy, rank 2 if it has the second highest, and so on. If two models share the best accuracy, they both get rank 1.5, and so on. We find that fo-Proto-MAML is the top-performer according to this metric, the Finetune Baseline notably presents a worthy opponent, while fo-MAML, to our surprise, performs quite poorly on M ETA -D ATASET . We include more detailed versions of these tables displaying confidence intervals and per-dataset ranks in the Appendix.", "parag_2": "Main results Table 1 displays the accuracy of each model on the test set of each dataset, after they were trained on ImageNet-only or all datasets. Traffic Signs and MSCOCO are not used for training in either case, as they are reserved for evaluation. We propose to use the average (over the datasets) rank of each method as our metric for comparison, where smaller is better. A method receives rankif it has the highest accuracy, rank 2 if it has the second highest, and so on. If two models share the best accuracy, they both get rank 1.5, and so on. We find that fo-Proto-MAML is the top-performer according to this metric, Prototypical Networks also perform strongly, and the Finetune Baseline notably presents a worthy opponent 3 . 
We include more detailed versions of these tables displaying confidence intervals and per-dataset ranks in the Appendix.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "oS9Uk_Rig.NE2g1bZGme.00", "parag_1": "To encourage evasion with fewer mutation turns. We further update the reward over a mutation period using the reward function defined as R t = R t − 1 − nσ where the value for R t is either given by R l or R s depending the environment used, n is the number of mutation turns within current episode, and σ is the constant step penalty, which is set to be 0.1 in our environment.", "parag_2": "To encourage evasion with fewer mutation turns. We further update the reward over a mutation period using the reward function defined as R = R t − σt where the value for R t is either given by R l or R s at step t inside one episode, and σ is the constant step penalty, which is set to be 0.1 in our environment.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "SyF8k7bCW.HytIRPamf.04", "parag_1": "We adopted the idea proposed in Chen et al. They aim to build a model for supervised SNLI task (Bowman et al., 2015), and the model concatenates the outputs from a global mean-pooling function and a global max-pooling function to serve as a sentence representation, and shows a performance boost on the SNLI dataset. Besides, Conneau et al. (2017) found that the model with global max-pooling function has stronger transferability than the model with a global mean-pooling function after supervised training on SNLI.", "parag_2": "We followed the idea proposed in Chen et al. They built a model for supervised SNLI task (Bowman et al., 2015) that concatenates the outputs from a global mean pooling and a global max pooling to serve as a sentence representation, and showed a performance boost on the SNLI dataset. Also, Conneau et al. 
(2017) found that the model with global max pooling function has stronger transferability than the model with a global mean pooling function after supervised training on SNLI.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rewrite this paragraph using more formal language", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Rephrase the text", "annotator": "annotator_06"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.26", "parag_1": "In this work, we propose the rotamer density estimator (RDE) that estimates the distribution of rotamers. We find the entropy of the estimated distributions and the unsupervised representations produced by the RDE enable more accurate prediction of binding ∆∆ G . One of the major limitations of this work is that it cannot directly model backbone flexibility. Neither do all the considered baselines. Therefore, an importantnon-trivial future direction is extending the proposed model to backbone-flexible cases.", "parag_2": "In this work, we introduce the Rotamer Density Estimator (RDE) which estimates the distribution of rotamers for protein sidechains. We demonstrate that RDE leads to improved accuracy in predicting binding ∆∆ G compared to other methods. One limitation of RDE is the inability to model backbone flexibility directly which is an important future direction for extending the proposed model. Nonetheless, our work highlights the potential of using machine learning techniques to improve mutational effect prediction for protein-protein interaction.", "annot_1": {"annotation": ["Rewriting_heavy", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.03", "parag_1": "Reminders are among the most common technological interventions to improve adherence to medication [4,27,28]. 
Reminders can take many forms, including interventions of caregivers through video and voice calls [29] and text messages [30], smart pill boxes [10], and computer applications [13]. Focusing on systems that visually represent reminder events to stay within the scope of this research, we identified two relevant technological interventions. The first one is a health literacy tool called Medication Calen- dar [14]. It was designed to improve antihypertensive medication adherence. The Medication Calendar provides a graphical view of medication to be taken during a given section of a day. Its layout shows Morning, Afternoon, Evening, and Bedtime as columns and the medications as rows. It displays (in text format) the medication name; the time of day that it should be consumed, the number of times daily that it should be taken; the dose of medication administered; and clinical indication for medication. The application then triggers reminders when it is time to take a medication. The second relevant intervention is a tool for representing graph- ically enhanced interventions in coronary heart disease [31] The tool they designed also shows Morning, Afternoon, Evening, and Bedtime as columns, the These two contributions provide foundational design elements to consider for visualizing medications and their reminders in a calendar design. Specifically, the layout (columns and rows) and design concepts (graphical representation of drug entities) provide a starting point for design.", "parag_2": "Reminders are among the most common technological interventions to improve adherence to medication [4,26,27]. Reminders can take many forms, including interventions of caregivers through video and voice calls [28] and text messages [29], smart pill boxes [10], and computer applications [13]. Focusing on systems that visually represent reminder events to stay within the scope of this research, we identified two relevant technological interventions. 
The first one is a health literacy tool called Medication Calendar [14]. It was designed to improve antihypertensive medication adherence. The Medication Calendar provides a graphical view of medication to be taken during a given section of a day. Its layout shows Morning, Afternoon, Evening, and Bedtime as columns and the medications as rows. It displays (in text format) the name of the medication, the time of day that it should be administered, the number of times daily that it should be taken, dosage information, and additional clinical indications. The application then triggers reminders when it is time to administer the medication. The second relevant intervention is a tool for representing graphically enhanced interventions for coronary heart disease [30] The tool shows the time of day (Morning, Afternoon, Evening, and Bedtime) as columns and the list of medication as rows. An additional column indicates the purpose of the medication. Row headers include medication name, dosage, and the time of day when it should be administered. The novelty in this work is that the table cells contain graphical images of the corresponding medication. These two contributions provide foundational design elements to consider for visualizing medications and their reminders in a calendar-style design. Specifically, the layout (columns and rows) and design concepts (graphical representation of drug entities) provide a starting point for an integrated design.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "SRquLaHRM4.vI2x5N-YHC.01", "parag_1": "Optimal Transport The Optimal Transport [30] is initially introduced to solve the problem of how to reduce the cost when moving several items simultaneously.
Recently, OT theory has drawn wide attention in the machine learning and computer vision community by comparing distributions readily available to them under the form of feature sets [37]. Due to the brilliant property of distribution matching, OT has been applied in many theoretic and application tasks including generative models [1, 45, 60], structural matching [4, 57, 61, 56] (e.g. sequence matching [4] and graph matching [56]), and other distribution-based tasks (such as clustering [22], distribution estimation [2], and causal discovery [50]). In this paper, we use OT to align the features of vision and language modalities which represents the data structure by learning an adaptive transport plan [44].", "parag_2": "Optimal Transport The Optimal Transport [30] is initially introduced to solve the problem of how to reduce the cost when moving simultaneously several items. Recently, OT theory has drawn wide attention in the machine learning and computer vision community by comparing distributions readily available to them under the form of feature sets [37]. Due to the brilliant property of distribution matching, OT has been applied in many theoretic and application tasks including generative models [1, 44, 59], structural matching [4, 56, 60, 55] (e.g. sequence matching [4] and graph matching [55]), and other distribution-based tasks (such as clustering [22], distribution estimation [2], and causal discovery [49]). In this paper, we use OT distance to align the features of vision and language modalities and propose a two-stage learning strategy to guide the learning of multiple prompts.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_medium", "Unusable"], "instruction": "I want to improve the last sentence.", "annotator": "annotator_09"}} {"id_paragraph": "x8CcXI4Ei.4yg90qT46L.01", "parag_1": "Generalization of meta learning.
The excess risk , as a metric of generalization ability of gradient-based meta learning has been analyzed recently [3,4,9,14,18,42]. The generalization of meta learning has been studied in [27] in the context of mixed linear regression, where the focus is on investigating when abundant tasks with small data can compensate for lack of tasks with big data. Generalization performance has also been studied in a relevant but different setting - representation based meta learning [13,16]. Information theoretical bounds have been proposed in [10,26], which bound the generalization error in terms of mutual information between the input training data and the output of the meta-learning algorithms. The PAC-Bayes framework has been extended to meta learning to provide a PAC-Bayes meta-population risk bound [1,15,19,34]. These works mostly focus on the case where the meta learning model is underparameterized; that is, the total number of meta training data from all tasks is larger than the dimension of the model parameter. Recently, overparameterized meta learning has attracted much attention. Bernacchia [6] suggests that in overparameterized MAML, negative learning rate in the inner loop is optimal during meta training for linear models with Gaussian data. Sun et al. [39] shows that the optimal representation in representation-based meta learning is overparameterized and provides sample complexity for the method of moment estimator.", "parag_2": "Generalization of meta learning. The excess risk , as a metric of generalization ability of nested meta learning has been analyzed recently [3,4,9,13,17,41]. Generalization performance has also been studied in a relevant but different setting - representation based meta learning [12,15]. Information theoretical generalization bounds have been proposed in [10,25], which bound the generalization error in terms of mutual information between the input training data and the output of the meta-learning algorithms.
The PAC-Bayes framework has been extended to meta learning to provide a PAC-Bayes meta-population risk bound [1,14,18,32]. These works mostly focus on the case where the meta learning model is underparameterized; that is, the total number of meta training data from all tasks is larger than the dimension of the model parameter. Recently, overparameterized meta learning has attracted much attention. Bernacchia [6] suggests that in overparameterized MAML, negative learning rate in the inner loop is optimal during meta training for linear models with Gaussian data. Sun et al. [37] shows that the optimal representation in representation-based meta linear regression is overparameterized and provides sample complexity bounds for the method of moment estimator.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove a redundant sentence. Use clearer expression.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Improve the English and remove the second sentence.", "annotator": "annotator_02"}} {"id_paragraph": "I6_1TEti_.kbRnfqVqh.00", "parag_1": "Consistency layers. Approaches ensuring consistency by embedding the constraints into the predictive layer as in SPLs include MultiplexNet [37] and HMCCN [31]. MultiplexNet is able to encode only constraints in disjunctive normal form, which is problematic for generality (D4) and efficiency (D6) as neuro-symbolic SOP tasks involve an intractably large number of clauses – e.g. our pathfinding experiments involves billions of clauses. HMCCN encodes label dependencies as fuzzy relaxation and is the current state-of-the-art model for HMLC [31]. Even its recent extension [32] is restricted to a certain family of constraints (D4) that can be represented with fuzzy logic.", "parag_2": "Consistency layers. Approaches ensuring consistency by embedding the constraints into the predictive layer as in SPLs include MultiplexNet [38] and HMCCN [32].
MultiplexNet is able to encode only constraints in disjunctive normal form, which is problematic for generality (D4) and efficiency (D6) as neuro-symbolic SOP tasks involve an intractably large number of clauses – e.g. our pathfinding experiments involves billions of clauses. HMCCN encodes label dependencies as fuzzy relaxation and is the current state-of-the-art model for HMLC [32]. HMCCN and even its recent extension [33] are restricted to only certain constraints that can be exactly encoded with fuzzy logic easily. SPLs instead can express constraints encoded as arbitrary propositional logical formulas (D4).", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1BhqsOsB.1mgtDFRDc.01", "parag_1": "Unimodal losses such as mean squared error are not very useful when predicting high dimensional data, due to the stochasticity of the output space. Researchers have tried to handle such stochasticity using latent variable models (Loehlin, 1987) or autoregressive prediction of the output pixel space, which involves sampling each pixel value from a categorical distribution conditioned on the output thus far (Van den Oord et al., 2016). Another option is to make predictions in a latent feature space. Recently, Oord et al. (2018) followed this direction and used an objective that preserves mutual information between the future bottom-up extracted features and the predicted contextual latent features, applying it in speech, text and image patches in single images. The view contrastive loss proposed in this work is a non-probabilistic version of their contrastive objective. 
However, our work focuses on the video domain as opposed to image patches, and uses drastically different architectures for both the contextual and bottom-up representations, using a 3D representation bottleneck.", "parag_2": "Unimodal losses such as mean squared error are not very useful when predicting high dimensional data, due to the stochasticity of the output space. Researchers have tried to handle such stochasticity using latent variable models (Loehlin, 1987) or autoregressive prediction of the output pixel space, which involves sampling each pixel value from a categorical distribution conditioned on the output thus far (Van den Oord et al., 2016). Another option is to make predictions in a feature space which is less stochastic than the input. Recently, Oord et al. (2018) followed this direction and used an objective that preserves the mutual information between “top-down” contextual features predicted from input observations, and “bottom-up” features produced from future observations; it applied this objective in speech, text, and image crops. The view-contrastive loss proposed in this work is a non-probabilistic version of their contrastive objective. However, our work focuses on the video domain as opposed to image patches, and uses drastically different architectures for both the top-down and bottom-up representations, involving a 3D egomotion-stabilized bottleneck.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1CMuZFor.H1NchtnoS.01", "parag_1": "• The NTK is defined using the gradient of the DNN output with respect to weight parameter space . In contrast, the linear approximation Lemma in this paper) is defined using the gradient of the DNN output with respect to input parameter space . In other words, the variables to be differentiated are different. • Although NTK analysis is limited to gradient descent , our analysis can be applied to stochastic gradient descent . 
• The random walk analysis indicates that over-parameterized ReLU DNNs interpolate almost linearly between the data points. For ReLU activation, since the NTK kernel mapping is not Lipschitz but 1/2-Hölder, it is difficult to obtain such a result in the NTK analysis without a tradeoff between smoothness and approximation (Bietti & Mairal, 2019).", "parag_2": "• The NTK is defined using the gradient of the DNN output with respect to weight parameter space . In contrast, the linear approximation (Lemma 3 in this paper) is defined using the gradient of the DNN output with respect to input parameter space . In other words, the variables to be differentiated are different. • The random walk analysis indicates that over-parameterized ReLU DNNs interpolate almost linearly between the data points. For ReLU activation, since the NTK kernel mapping is not Lipschitz but 1/2-Hölder, it is difficult to obtain such a result in the NTK analysis without a tradeoff between smoothness and approximation (Bietti & Mairal, 2019).", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Please exclude the content that seems unnecessary.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove the second item of the list.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.22", "parag_1": "Dataset The input of every example is represented by a 64 × 4 matrix, where each row is a one-hot vector that is sampled i.i.d. from the multinomial distribution over the equally weighted 4 classes. We generate the target matrix of size 64 × 64 by counting the occurrences for each unique input class sequentially from top to bottom and represent the count as a 64 dimensional one-hot vector. The setting with 1 × samples has a training dataset of size 640.
We additionally use a validation set of size 6,400 and a test set of size 64,000.", "parag_2": "MSE as pairwise loss for DSPN/iDSPN and cross-entropy as pairwise loss for the other models. The baselines perform worse with MSE as pairwise loss. For each example, the input multiset has a size of 64 with 4-dimensional elements corresponding toclasses. This is represented as a 64 × 4 matrix where each row is a one-hot vector that is sampled i.i.d. from the multinomial distribution over the equally-weighted 4 classes. We generate the target multiset of the same set size with 64-dimensional elements ( 64 × 64 matrix), each element being a one-hot vector that represents a number. For each class, we number the corresponding elements sequentially. The setting with 1 × samples has a training dataset of size 640. We additionally use a validation set of size 6,400 and a test set of size 64,000 for every run.", "annot_1": {"annotation": ["Rewriting_heavy", "Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.05", "parag_1": "Our method provides a solution to the aforementioned challenges. Training the rotamer density estimator requires only protein structures. Thus, itis an unsupervised learner of the effectof mutations on binding and it alleviates the difficulty rising from the scarcity of annotated mutation data. In addition, our method does not require the structure of the mutated protein as input. Rather, it treats mutated structures as latent variables and the rotamer density estimator is an approximator of the latent distribution. Our method outperforms empirical energy functions and machine learning models for ∆∆ G prediction . In addition, as a generative model for rotamers, the RDE predicts sidechain conformations accurately .", "parag_2": "Our method is an attempt to address the aforementioned challenges. 
The Rotamer Density Estimator is trained solely on protein structures, not requiring other labels, making it an unsupervised learner of the mutation effect on protein-protein interaction. This feature mitigates the challenge posed by the scarcity of annotated mutation data. Moreover, our method does not require the mutated protein structure as input. Instead, it treats mutated structures as latent variables, which are approximated by RDE. Our method outperforms both empirical energy functions and machine learning models for predicting ∆∆ G . Additionally, as a generative model for rotamers, the RDE accurately predicts sidechain conformations.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Give me a more formal version of the following paragraph.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph in a more formal and academic way.", "annotator": "annotator_07"}} {"id_paragraph": "UlHNcByJV.W1RxpkrWx8.00", "parag_1": "In this section we describe our main contribution, Adversarial Optimism (AdOpt) in detail. At a high level, AdOpt uses two classifiers. The first one is a classifier trained on all the accepted data thus far, without any de-biasing. We refer to this as the “biased” classifier hereafter. The second one is our adversarially de-biased classfier trained on the same data. AdOpt then proceeds as follows:", "parag_2": "In this section we describe our main contribution, Adversarial Optimism (AdOpt) in detail. AdOpt uses two classifiers. The first one is a “biased” classifier trained on all the accepted data thus far. The second one is our adversarially de-biased classifier. 
AdOpt then proceeds as follows:", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "usz0l2mwO.5ie3V0GP-.03", "parag_1": "Besides improving fine-tuning on low-resource data by removing irrelevant features, we expect VIB to improve on out-of-domain data because it removes redundant features. In particular, annotation artifacts in a specific dataset are known to create shortcut features, which are superficial cues correlated with a label (Gururangan et al., 2018). We hypothesize that these shortcuts are easy to learn (especially when the amount of data is not sufficient), but are actually redundant with deeper features, which capture the true generalizations in the task. By compressing the input embeddings, VIB encourages learning these desirable general features(Shamir et al., 2010; Tishby and Zaslavsky, 2015). By removing the redundant features, the trained model generalizes better to out-of-domain datasets that lack these shortcut features (Belinkov et al., 2019). To evaluate out-of-domain generalization, we take NLI models trained on medium-sized 6K subsampled SNLI and MNLI in Section 3.2 and test their generalization to several NLI datasets.", "parag_2": "Besides improving fine-tuning on low-resource data by removing irrelevant features, we expect VIB to improve on out-of-domain data because it removes redundant features. In particular, annotation artifacts create shortcut features, which are superficial cues correlated with a label (Gururangan et al., 2018; Poliak et al., 2018) that do not generalize well to out-of-domain datasets (Belinkov et al., 2019a). Since solving the real underlying task can be done without these superficial shortcuts, they must be redundant with the deep semantic features that are truly needed. We hypothesize that many more superficial shortcut features are needed to reach the same level of performance as a few deep semantic features. 
If so, then VIB should prefer to keep the concise deep features and remove the abundant superficial features, thus encouraging the classifier to rely on the deep semantic features, and therefore resulting in better generalization to out-of-domain data. To evaluate out-of-domain generalization, we take NLI models trained on medium-sized 6K subsampled SNLI and MNLI in Section 3.2 and evaluate their generalization on several NLI datasets.", "annot_1": {"annotation": ["Development", "Rewriting_heavy"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "33RNh69fYq.kMvWVl725x.00", "parag_1": "Feature reconstruction . A linear projection is first applied to these feature tokens to reduce C org to a smaller channel, C . Then these tokens are processed by NME and LQD. The learnable position embeddings [12, 13] are added in the attention module to inform the spatial information. Afterward, another linear projection is used to recover the channel from C to C org . After reshape, the reconstructed feature map, f rec ∈ R C org × H × W , is finally obtained.", "parag_2": "Feature reconstruction . The feature map, f org , is first tokenized to H × W feature tokens, followed by a linear projection to reduce C org to a smaller channel, C . Then these tokens are processed by NME and LQD. The learnable position embeddings [14, 15] are added in attention modules to inform the spatial information. Afterward, another linear projection is used to recover the channel from C to C org . After reshape, the reconstructed feature map, f rec ∈ R C org × H × W , is finally obtained.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.04", "parag_1": "Metric learning develops a feature representation based on data grouping and separation cues. Our method (Fig.
3) segments an image by learning a pixel-wise embedding with a contrastive loss between pixels and segments. We start from defining two disjoint sets – positive and negative segments (exemplars) with respect to pixel i . Our goal is to group i with positive segments while separating it from negative ones. Given latent feature φ ( i ) at pixel i and a specific distance metric, the objective is to decrease (increase) the distance between φ ( i ) and its positive (negative) segments.", "parag_2": "Metric learning develops a feature representation based on data grouping and separation cues. Our method (Fig. 3) segments an image by learning a pixel-wise embedding with a contrastive loss between pixels and segments: For each pixel i , we learn a latent feature φ ( i ) such that i is close to its positive segments (exemplars) and far from its negative ones in that feature space.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph heavily more concise in the explanations made.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Don't give to much details about the method of learning, just keep the main idea.", "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.23", "parag_1": "Statistical Significance To show that there is a statistically significant relationship between the entropy estimated by the RDE and the experimental ∆∆ G values, we conduct linear regression analysis using the RDE-Linear model defined in Eq.9. The linear model contains 7 coefficients and 1 bias: w bound W L , w bound W R , w unbnd W L , w bound M L , w bound M R , w unbnd M L , ( w unbnd M R − w unbnd W R ) , and b . Here, w unbnd M R and w unbnd W R are merged because the receptor is not mutated, so the entropy estimated by the model are identical. For short, we denote the merged coefficient by w unbnd R .
We perform linear regression on the SKEMPI2 dataset and present the regression coefficients, bias, and P-values in Table 3.", "parag_2": "Statistical Significance To demonstrate a statistically significant relationship between the entropy estimated by RDE and experimental ∆∆ G values, we conduct linear regression analysis using the RDE-Linear model defined in Eq. The linear model consists of seven coefficients and one bias term: w bound W L , w bound W R , w unbnd W L , w bound M L , w bound M R , w unbnd M L , w unbnd R = ( w unbnd M R − w unbnd W R ) , and b . Note that w unbnd M R and w unbnd W R are merged into w unbnd R , as the receptor is not mutated. We perform linear regression on the SKEMPI2 single-mutation dataset and present the regression coefficients, bias, and P-values in Table 3.", "annot_1": {"annotation": ["Rewriting_medium", "Concision"], "instruction": "Simplify the explanation of the merged w unbnd M R and w unbnd W R.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Concise the penultimate sentence. Improve the English in this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "OzYyHKPyj7.O9Mk1uqXra.04", "parag_1": "For all tasks, we see that our RNS-RNN (denoted NS+S+U) attains near-optimal cross-entropy (within 0.05 nats) on the validation set. All stack models effectively solve the deterministic marked reversal and Dyck tasks, although we note that on marked reversal the NS models do not generalize well on held-out lengths. Our new model excels on the three nondeterministic tasks: unmarked reversal, padded reversal, and hardest CFL. We find that the combination of including PDA states in the reading and allowing the action weights to be unnormalized (+S+U) greatly improves performance on unmarked reversal and hardest CFL over previous work. 
For unmarked reversal, merely changing the task by adding EOS causes the baseline NS model to perform worse than Gref and JM; using both enhancements (+S+U) is essential to surpassing them. For padded reversal, we see that the addition of PDA states in the stack reading (+S) proves essential to improving performance.", "parag_2": "For all tasks, we see that our RNS-RNN (denoted NS+S+U) attains near-optimal cross-entropy (within 0.05 nats) on the validation set. All stack models effectively solve the deterministic marked reversal and Dyck tasks, although we note that on marked reversal the NS models do not generalize well on held-out lengths. Our new model excels on the three nondeterministic tasks: unmarked reversal, padded reversal, and hardest CFL. We find that the combination of both enhancements (+S+U) greatly improves performance on unmarked reversal and hardest CFL over previous work. For unmarked reversal, merely changing the task by adding EOS causes the baseline NS model to perform worse than Gref and JM; this may be because it requires the NS-RNN to learn a correlation between the two most distant time steps. Both enhancements (+S+U) in the RNS-RNN are essential here; without unnormalized weights, the model does not find a good solution during training, and without PDA states, the model does not have enough information to make optimal decisions. For padded reversal, we see that the addition of PDA states in the stack reading (+S) proves essential to improving performance.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "8jLtSbSLbC.T9bU3kyKC4.00", "parag_1": "Mathematically, any irreversible mapping y = f(args...) can be trivially transformed to its reversible form y += f(args...) or y (cid:89) = f(args...) ( (cid:89) is the bit-wise XOR ), where y is a pre-emptied variable. 
But in numeric computing with finite precision, this is not always true. The reversibility of arithmetic instruction is closely related to the number system. For integer and fixed point number system, y += f(args...) and y -= f(args...) are rigorously reversible. For logarithmic number system and tropical number system (Speyer & Sturmfels, 2009), y *= f(args...) and y /= f(args...) as reversible (not introducing the zero element). While for floating point numbers, none of the above operations are rigorously reversible. However, for convenience, we ignore the rounding errors in floating point + and - operations and treat them on equal footing with fixed point numbers in the following discussion. Other reversible operations includes SWAP , ROT , NEG et. al., and this instruction set is extensible. One can define a reversible multiplier in NiLang as in Listing. 2.", "parag_2": "Mathematically, any irreversible mapping y = f(args...) can be trivially transformed to its reversible form y += f(args...) or y (cid:89) = f(args...) ( (cid:89) is the bit-wise XOR ), where y is a pre-emptied variable. But in numeric computing with finite precision, this is not always true. The reversibility of arithmetic instruction is closely related to the number system. For integer and fixed point number system, y += f(args...) and y -= f(args...) are rigorously reversible. For logarithmic number system and tropical number system (Speyer & Sturmfels, 2009), y *= f(args...) and y /= f(args...) as reversible (not introducing the zero element). While for floating point numbers, none of the above operations are rigorously reversible. However, for convenience, we ignore the round-o ff errors in floating-point + and - operations and treat them on equal footing with fixed-point numbers in the following discussion. In Appendix ?? , we will show doing this is safe in most cases provided careful implementation. Other reversible operations includes SWAP , ROT , NEG et. 
al., and this instruction set is extensible. One can define a reversible multiplier in NiLang as in Listing. 2.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "TFoRhVCpnb.yqo5NaW74.00", "parag_1": "Image semantic segmentation is the task of pixel-level semantic label allocation for recognizing objects in an image. The development of Deep Neural Networks (DNNs) has promoted the rapid development of the semantic segmentation task [6, 58, 19] in recent years. However, training sucha fully-supervised semantic segmentation model requires large numbers of pixel-wise annotations.", "parag_2": "Image Semantic Segmentation is the task of pixel-level semantic label allocation for recognizing objects in an image. The development of Deep Neural Networks (DNNs) has promoted the rapid development of the semantic segmentation task [7, 20, 63] in recent years. However, training such a Fully-Supervised Semantic Segmentation model requires large numbers of pixel-wise annotations.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Use uppercases properly.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Use capital letters at the beginning of every word in the names of segmentation methods.", "annotator": "annotator_07"}} {"id_paragraph": "fDUdAYCQqZy.0cNiGAHFml.03", "parag_1": "In real-world problems, the dynamics are often nearly deterministic. We leverage this assumption and remove the expectation over the next states in the operator, which leads to Expectile V -Learning, where we train the value network to minimize the following loss:", "parag_2": "In real-world problems, the dynamics are often nearly deterministic. 
We leverage this assumption We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, where we train the value network to minimize the following loss:", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Byyb66j52G.hR5KKRfhQm.12", "parag_1": "Interrupted augmentation . We wonder how generalization would change after regularization stopped. Thus, we stop the DA during training, such as (0, 5), (0, 15). When we compare the graph Figure 2(d) and Figure 2(e), the generalization performance in Figure 2(d) rapidly decreases after interrupted on both (0, 5) and (0, 15). In contrast, the curves (0, 5) and (0, 15) in Figure 2(e) maintain the generalization in spite of interrupting augmentation. In training performance, InDA, which uses augmentation throughout training, performs better than PPO in Figure 2(b); however, augmentation does not improve the training performance in Figure 2(a). These results mean that the random convolution alleviates the difficulty by various backgrounds.", "parag_2": "Interrupted augmentation . To determine how generalization would change after regularization stopped, we stop the DA during training, such as (0, 5), (0, 15). Jumper with easybg mode rapidly lost generalization performance (after interruption at both (0, 5) and (0, 15)) (Figure 2(d)), whereas Jumper with easy mode do not (Figure 2(e)). InDA, which uses augmentation throughout training, performs better than PPO during training (Figure 2(b)), but augmentation does not improve the training performance (Figure 2(a)).
These results mean that the random convolution alleviates the difficulty by various backgrounds.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make sentences concise, add missing spaces.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_medium", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "SyF8k7bCW.HytIRPamf.03", "parag_1": "While an RNN decoder, is designed to produce thenext word, a CNN decoder is free to find any relevant local patterns within the target sequences. The generation process is only conditioned on the sentence representation. Although the word order information is implicitly encoded in the CNN decoder, it is not emphasized as it is explicitly in the RNN decoder. The CNN decoder cares about the quality of generated sequences globally instead of the quality of the next generated word. Relaxing the emphasis on the next word, may help the CNN decoder model to explore the contribution of context in a larger space.", "parag_2": "In our model, the CNN decoder predicts all words at once during training, which is different from autoregressive decoders, and we call it a predict-all-words CNN decoder. We want to compare the performance of the predict-all-words decoders and that of the autoregressive decoders separate from the RNN/CNN distinction, thus we designed a predict-all-words CNN decoder and RNN decoder. 
The predict-all-words CNN decoder is described in Section 2, which is a stack of 3 convolutional layers, and all words are predicted once at the output of the decoder.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Please rephrase the entire paragraph for better readability.", "annotator": "annotator_09"}} {"id_paragraph": "Mu-tqfqX-.6NSudk3nD.03", "parag_1": "As a quick example of selection collider bias, if we were to ask you the gender of some random person born in 1801,and one in 1999, you may toss a coin to determine your answer, asbirth date and gender are unconditionally independent in the real world. However, if instead we where to ask about the gender of a person born in 1801, and one in 1999, that we saw on two random Wikipedia articles today, then you may condition your guess on some combination of birth date, gender, and importantly, what gets recorded in Wikipedia.", "parag_2": "If someone was to ask you the gender of a random person born in 1801, you may toss a coin to determine your answer, as gender at birth is invariant to time. However, if instead someone was to ask about the gender of a person born in 1801 on a random Wikipedia page, you may then inform your guess with the knowledge that the level of recognition required to be recorded in Wikipedia is not invariant to time. 
Thus in your answer, you would have induced a conditional dependency between date and gender, that you may reapply when asked to guess gender of a person born in 2001 on a random Wikipedia page.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph to improve its clarity.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "This paragraph is confusing, rewrite to make it clearer and more readable.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.17", "parag_1": "We validate the choice of using graph attention network as the relational architecture. In Figure 7, we compare GAT against a graph convolutional network (GCN) (Kipf & Welling, 2016) in the action graph of AGILE. We observe that for thesimple grid world and RecSim tasks, GCN achieves optimal performance. This is because GCN can still act like a summarizer, despite the edge weights notbeing learned. However, it suffers in CREATE and RecSim-pairing (Figure 16) where the action space is large and requires diverse action relations. Moreover, we believe that the attention in GAT makes the graph sparse to ease RL training, which in contrast, is difficult in a fully-connected GCN.", "parag_2": "We validate the choice of using graph attention network as the relational architecture. In Figure 7, we compare GAT against a graph convolutional network (GCN) (Kipf & Welling, 2016) to act over AGILE’s action graph. We observe that GCN achieves optimal performance for the grid world and RecSys tasks. GCN can learn simple action relations even though the edge weights are not learned. However, it suffers in CREATE and RecSim-pairing (Figure 16), where the action relations are diverse and plenty. 
Moreover, we believe that the attention in GAT makes the graph sparse to ease RL training, which in contrast, is difficult in a fully-connected GCN.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the middle part of this paragraph and improve the English in the remainder", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make this paragraph a bit more concise.", "annotator": "annotator_03"}} {"id_paragraph": "wnT56xFToh.QBiYZ6j1pM.00", "parag_1": "CNNs to tag images of road scenes from 52 possible labels. In the medical domain, (Wang et al., 2017) present a chest X-ray dataset in which one image may contain multiple abnormalities. Multilabel classification is also prominent in natural language processing (Nam et al., 2014). Our proposed method is therefore relevant to a wide range of applications in the real world.", "parag_2": "CNNs to tag images of road scenes from 52 possible labels. In the medical domain, (Wang et al., 2017) present a chest X-ray dataset in which one image may contain multiple abnormalities. Multilabel classification is also prominent in natural language processing (Nam et al., 2014). Recent work also provides a theoretical analysis of multi-label classification under various measures (Wu & Zhu, 2020). Our proposed method is therefore relevant to a wide range of applications in the real world.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1BhqsOsB.1mgtDFRDc.00", "parag_1": "Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. 
Our ability to imagine and fill in missing visual information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our (2D) retinas. This paper explores the connection between view-predictive representation learning and its role in the development of 3D visual recognition. We propose inverse graphics networks, which take as input 2.5D video streams captured by a moving camera, and map to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera. The model can also project its 3D feature maps to novel viewpoints, to predict and match against target views. We propose contrastive prediction losses that can handle stochasticity of the visual input and can scale view-predictive learning to more photorealistic scenes than those considered in previous works. We show that the proposed model learns 3D visual representations useful for (1) semi-supervised learning of 3D object detectors, and (2) unsupervised learning of 3D moving object detectors, by estimating motion of the inferred 3D feature maps in videos of dynamic scenes. To the best of our knowledge, this is the first work that empirically shows view prediction to be a useful and scalable self-supervised task beneficial to 3D object detection.", "parag_2": "Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our retinas. This paper explores the role of view prediction in the development of 3D visual recognition. 
We propose neural 3D mapping networks, which take as input 2.5D (color and depth) video streams captured by a moving camera, and lift them to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera. The model also projects its 3D feature maps to novel viewpoints, to predict and match against target views. We propose contrastive prediction losses to replace the standard color regression loss, and show that this leads to better performance on complex photorealistic data. We show that the proposed model learns visual representations useful for (1) semi-supervised learning of 3D object detectors, and (2) unsupervised learning of 3D moving object detectors, by estimating the motion of the inferred 3D feature maps in videos of dynamic scenes. To the best of our knowledge, this is the first work that empirically shows view prediction to be a scalable self-supervised task beneficial to 3D object detection.", "annot_1": {"annotation": ["Development", "Concision"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Shorten this paragraph while making it more precise, mainly on the sentence about prediction losses.", "annotator": "annotator_07"}} {"id_paragraph": "sIqSoZ9KiO.KLlOZMoJ9G.02", "parag_1": "Summary. This paper introduced a novel neural network architecture, specifically suitable for modeling image generators (decoders) in the context of deep generative modeling. The proposed SDN layer was analyzed in the context of: (a) a complex hierarchical VAE model, where we obtained state-of-the-art performance in non-autoregressive density modeling; (b) a vanilla VAE, resulting in improvements in both density modeling and representation learning.", "parag_2": "Summary. This paper introduced a novel neural layer suitable for deep neural networks that produce images – image generators (decoders). 
Proposed SDN improves upon convolutional networks in terms of incorporating the prior on spatial coherence and modeling of long-range spatial dependencies. SDN was analyzed in the context of: (a) a complex hierarchical VAE model, where the state-of-the-art performance was obtained in non-autoregressive density modeling; (b) a vanilla VAE, resulting in improvements in both density modeling and disentangled representation learning.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "416QeRWm9c.EIldqblSQa.00", "parag_1": "Gansbeke et al., 2021; Yang et al., 2020). However, existing analysis typically assumes that the pre-training data distribution is the same as the target distribution, while the difference between the two is the critical reason for the trade-off focused in this work. Thus, our work needs to propose new analysis approaches. Recently, Cole et al. try to identify conditions where self-supervised contrastive representation learning methods can produce “good” visual representations and point out the “diversity-difficulty trade-off” phenomenon, which is most relevant to our work. However, they only empirically show the trade-off. They do not have a systematic study and do not give any analysis to explain why it happens. Bommasani et al. ask for further research on the issue of specialization vs. diversity in foundation model training data but do not give any thorough study as well. At the same time, our work provides a better understanding of the trade-off.", "parag_2": "Gansbeke et al., 2021; Yang et al., 2020). However, existing analysis typically assumes that the pre-training data distribution is the same as the target distribution, while the difference between the two is the critical reason for the trade-off focused in this work. Thus, our work proposes new analysis approaches. Recently, Cole et al. 
have tried to identify conditions where self-supervised contrastive representation learning methods can produce “good” visual representations and point out the “diversity-difficulty trade-off” phenomenon, which is most relevant to our work. However, they only empirically show the trade-off, but do not provide a systematic study and analysis to explain why it happens. Bommasani et al. call for further research on the issue of specialization vs. diversity in foundation model training data, but do not provide a thorough study as well. Our work attempts to provide a better understanding of the trade-off between universality and label-efficiency.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the last sentence to give it a more modest town, and change some formulations to improve the flow of the paragraph.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "wSf7BpyxTb.ZCPjX5OcL.01", "parag_1": "In this section, we further strengthour algorithm with SPIDER variance reduction technique [11], a variantof SARAH [31, 32], as stated in algorithm 3. Indeed, we use a large batchsize of b in every q iterations and use small batchsizes of b x and b y for the rest. We prove that SAPD+ using variance reduction, i.e., with VR-flag = true , achieves an oracle complexity of O", "parag_2": "In this section, we equip SAPD+ with SPIDER variance reduction technique [12], a variant of SARAH [32, 33]. More precisely, for inexactly solving SCSC subproblems given in (4), we propose using VR-SAPD as stated in algorithm 3. Note VR-SAPD employs a large batchsize of b in every q iterations and use small batchsizes of b x and b y for the rest. 
We prove that SAPD+ using variance reduction, i.e., with VR-flag = true , achieves an oracle complexity of O", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OCBr-AN0r.k4RhAVo6Ik.00", "parag_1": "In general, the model parameter is fixed and the attacker can only provide crafted examples to foolthe model. Based on the amount of information the attacker can access, the adversarial attack canbe categorized into three classes. (1) White-box attack . The attacker has full accessto the system,including parameters and gradients of the target model, the input data, and the label. (2) Grey-boxattack . The attacker can partially access the system, including the target model and training inputdata, except the labels. (3) Black-box attack . The attacker can only access training input data butcannot access the target model and labels.", "parag_2": "Note the adversarial attack happened in the testing stage, and the attackers cannot manipulate theforecasting model or its output. On the benign testing set, the forecasting model can perform well. Based on the amount of information the attacker can access in the testing stage, the adversarial attackcan be categorized into three classes. White-box attack . The attacker can fully access the target model,including the model architecture, the model parameters, gradients, model outputs, the input trafficstates, and the corresponding labels. Grey-box attack . The attacker can partially access the system,including the target model and the input traffic states, but without the labels. Black-box attack . 
Theattacker can only access the input traffic states, query the outputs of the target model or leverage asurrogate model to craft the adversarial examples.", "annot_1": {"annotation": ["Development", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "v8Vdrwfrg.Hrx_LZTUq.00", "parag_1": "We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy. Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks. As a key feature, our algorithm not only evolves the pruned sparse model alone, but jointly also a (closely related) dense model that is used in a natural way to correct for pruning errors during training. This results in better generalization properties on a wide variety of tasks, since the simplicity of the scheme allows us further to study it time sparsity low med high t", "parag_2": "We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy. Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks. As a key feature, our algorithm not only evolves the pruned sparse model alone, but jointly also a (closely related) dense model that is used in a natural way to correct for pruning errors during training. This results in better generalization properties on a wide variety of tasks, since the simplicity of the scheme allows us further to study it from a theoretical point of view, and to provide further insights and interpretation. 
We do not require time sparsity low med high t", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.06", "parag_1": "Listwise RL (CDQN) : To solve the combinatorial action space problem of listwise actions, we follow the Cascaded DQN (CDQN) framework of Chen et al. (2019a). The main challenge is that building the list all at once is not feasible due to the intractably large number of possible lists. Therefore, the key is to build the list incrementally, one action at a time. Thus, each list index can be treated as an individual non-combinatorial action which can be trained with RL. We replace the Q-network of CDQN with AGILE in order to accommodate a varying action space. We share the weights of the cascaded Q-networks. Algorithm 1 provides complete details on listwise AGILE.", "parag_2": "Listwise RL (CDQN) : For tasks with listwise actions, we follow the Cascaded DQN (CDQN) framework of Chen et al. (2019a). The main challenge is that building the action list all at once is not feasible due to a combinatorial number of possible list-actions. Therefore, the key is to build the list incrementally, one action at a time. Thus, each list index can be treated as an individual action decision trained with independent Q-networks. We replace the Q-network of CDQN with AGILE to support a varying action space. Sharing the weights of the cascaded Q-networks led to better performance. Algorithm 1 provides complete details on CDQN for listwise AGILE.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Make first sentence more concise. Rewrite phrases, prefer short formulations and avoid we.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Make first sentence more concise. 
Rewrite phrases, prefer short formulations and avoid we.", "annotator": "annotator_01"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.18", "parag_1": "We conduct this study with ModelNet40, using the front and rear views. We choose ten values from an interval [10 − 9 , 10 − 3 ] as λ . We use SGD without momentum, set the learning rate to 0.1 and batch size to eight. For each combination of hyperparameters, we train three model repetitions.", "parag_2": "We conduct this study with ModelNet40, using the front and rear views. We choose ten values from an interval [10 − 9 , 10 − 3 ] as λ . We use SGD without momentum, set the learning rate to 0.1 and batch size to eight. Using each combination of hyperparameters, we repeat training for three times with random initialization and get three models.", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "BJ49j43UH.B15bvYjiH.00", "parag_1": "Datasets: We consider 12 public datasets (3 public tabular datasets, 7 public image datasets, and 2 public language datasets) to evaluate DVRL in comparison to multiple benchmark methods.public tabular datasets are (1) Blog, ( 2) Adult, (3) Rossmann; 7 public image datasets are (4) HAM 10000, (5) MNIST, (6) USPS, (7) Flower, (8) Fashion-MNIST, (9) CIFAR-10, (10) CIFAR-100;public language datasets are (11) Email Spam, (12) SMS Spam. Detailsof the datasets can be found in the provided hyper-links (in blue).", "parag_2": "Datasets: We consider 12 public datasets (3 public tabular datasets, 7 public image datasets, andpublic language datasets) to evaluate DVRL in comparison to multiple benchmark methods. 
3 public tabular datasets are (1) Blog, ( 2) Adult, (3) Rossmann; 7 public image datasets are (4) HAM 10000, (5) MNIST, (6) USPS, (7) Flower, (8) Fashion-MNIST, (9) CIFAR-10, (10) CIFAR-100; 2 public language datasets are (11) Email Spam, (12) SMS Spam. Details can be found in the hyper-links.", "annot_1": {"annotation": ["Concision"], "instruction": "Make the last sentence more concise.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the last sentence shorter.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.00", "parag_1": "A prescription is a common and important form of medical interven- tion that is used in modern clinical settings. It comes as a recommendation from an healthcare provider to a patient [1]. It indicates actions such as taking medications, following a diet, or executing physical exercises [2]. When agreed upon between the patient and the healthcare provider, the patient is expected to follow a prescription [3]. The extent to which this implementation corresponds to the agreed upon recommendation is known as adherence [2]. Non-adherence to prescriptions is a significant problem in healthcare [2,4]. Adherence rates average 50% and account for 33-69% of hospital re-admissions, resulting into billions of dollars per year [5,6].", "parag_2": "A prescription is a common and important form of medical inter- vention provided a clinician to a patient [1]. It indicates actions such as taking medications, following a diet, or executing physical exercises [2]. When agreed upon between a patient and their healthcare provider, the patient is expected to follow their prescription [3]. The extent to which a patient follows an agreed-upon prescription is referred to as adherence [2]. Non-adherence to prescriptions is a significant problem in healthcare [2,4]. 
Adherence rates average 50% and account for 33-69% of hospital re-admissions, resulting into billions of dollars per year [5,6].", "annot_1": {"annotation": ["Concision"], "instruction": "Revise this paragraph to be more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Merge the two first sentences in one shorter one. Improve the sentence defining adherence to make it clearer.", "annotator": "annotator_07"}} {"id_paragraph": "UlHNcByJV.W1RxpkrWx8.01", "parag_1": "• If a data point is accepted by the biased classifier, it is accepted and added to the dataset with the true label. • If a data point is instead rejected by the biased classifier we use our de-biased classifier to decide whether to add it to the Pseudo-label dataset from PLOT. • We then apply the Pseudo-label mechanism from PLOT, i.e. retraining on optimistic lables , on these candidates to decide final acceptance.", "parag_2": "• If a data point is accepted by the biased classifier, we accept it and add it to the dataset with the true label. • If a data point is instead rejected by the biased classifier, we use de-biased classifier to decide whether to add it to the pseudo-label dataset. • As in PLOT, retrain on the pseudo-label candidates with optimistic labels to decide final acceptance.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rewrite the bullet points, making them more independent and preferring active over passive formulations", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Shorten the last sentence. Make this paragraph more direct.", "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.03", "parag_1": "Wang et al., 2020; Fan et al., 2020; Sun et al., 2020). Xu et al. (2015) formulates all types of weak supervision as linear constraints on a SVM. 
Recent works (Lin et al., 2016; Kolesnikov & Lampert, 2016;Pathak et al., 2015) typically use Class Activation Map (CAM) (Zhou et al., 2016) to obtain an initial dense mask and then train a model iteratively. GAIN (Li et al., 2018) utilizes image tag or bounding box annotations to refine these class-specific activation maps. Sun et al. (2020) considers within-image relationships and explores the idea of co-segmentation. Fan et al. (2020) estimates the foreground and background for each category, with which the network learns to generate more precise CAMs. Regularization is enforced at either the image level (Lin et al., 2016; Kolesnikov & Lampert, 2016; Pathak et al., 2015) or the feature level (Tang et al., 2018a;b) to produce better dense masks. We incorporate this concept into adaptive feature learning and train the model only once. It is worth noting that each of the annotations carries different assumptions, we propose to unify all these types of weak annotations in a single contrastive learning framework.", "parag_2": "Wang et al., 2020; Fan et al., 2020; Sun et al., 2020). Xu et al. (2015) formulates all types of weak supervision as linear constraints on a SVM. Papandreou et al. bootstraps segmentation predictions via EM-optimization. Recent works (Lin et al., 2016; Kolesnikov & Lampert, 2016; Pathak et al., 2015) typically use CAM (Zhou et al., 2016) to obtain an initial dense mask and then train a model iteratively. GAIN (Li et al., 2018) utilizes image tags or bounding boxes to refine these class-specific activation maps. Sun et al. (2020) considers within-image relationships and explores the idea of co-segmentation. Fan et al. (2020) estimates the foreground and background for each category, with which the network learns to generate more precise CAMs. Regularization is enforced at either the image level (Lin et al., 2016; Kolesnikov & Lampert, 2016; Pathak et al., 2015) or the feature level (Tang et al., 2018a;b) to produce better dense masks. 
We incorporate this concept into adaptive feature learning and train the model only once. All types of weak annotations are dealt with in a single contrastive learning framework.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development", "Concision"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.22", "parag_1": "Bi-LSTM : The raw action representations of candidate actions are passed on to the 2-layer MLP followed by ReLU. Then the output of the MLP is processed by a 2-layer bidirectional LSTM (Huang et al., 2015) followed by another 2-layer MLP to create the action-summary to be used in the subsequent utility network.", "parag_2": "Bi-LSTM : The raw action representations of candidate actions are passed on to the 2-layer MLP followed by ReLU. Then, the output of the MLP is processed by a 2-layer bidirectional LSTM (Huang et al., 2015). Another 2-layer MLP follows this to create the action set summary to be used in the following utility network.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Update the last sentence and split it into two sentences to make it easier to understand", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Split this paragraph into smaller and more focused points.", "annotator": "annotator_03"}} {"id_paragraph": "9B3Sn8E9.J-9pEjms.00", "parag_1": "We thank Yasaman Bahri for significant code contributions, frequent discussion and useful feedback on the manuscript, Sergey Ioffe for feedback on the text, as well as Greg Yang, Ravid Ziv, and Jeffrey", "parag_2": "We thank Yasaman Bahri for frequent discussion and useful feedback on the manuscript. We additionally appreciate both Yasaman Bahri and Greg Yang for the ongoing contributions to improve the library. 
We thank Sergey Ioffe for feedback on the text, as well as Ravid Ziv, and Jeffrey", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MnewiFDvHZ.iAYttXl-uH.02", "parag_1": "Moreover, we replace the original constraint function g t p¨q with ˆ g ` t ´ 1 p¨q and the dual variables λ twith Q p t ´ 1 q such that Q p t ´ 1 q ˆ g ` t ´ 1 p x q is a rectified approximator of λ t g t p x q . We also added thethe regularization (or smooth) term α t } x ´ x t } 2 that helps the stability of the algorithm. Note thisdesign is related to penalty-based proximal optimization where we aim to minimize an approximatedf t p x q w.r.t. proximal operator on the “old” rectified function ˆ g ` t ´ 1 p x q . In the penalty update of Q p t q , we first rectify the original constraint function g t p¨q with ˆ g ` t p¨q andadd it to Q p t ´ 1 q such that penalty increases when violation occurs in each round. Further werectify Q p t q with a round-dependent constant η t to impose a “minimum” penalty price. The designof rectified penalty update induces conservative decisions in the decision-making step to minimizeconstraint violation. Note that it is different with the traditional primal-dual algorithm that doesnot rectify the constraint violation and impose a minimum penalty price, and when the price (dualvariable) is zero, the algorithm can take very aggressive decisions that lead to large hard violation. This is not a problem when primal-dual algorithm is used as a numerical method for solving aconstrained optimization problem, but leads to overly aggressive decisions and large violation whenapplying it to COCO. For a similar reason, RECOO rectifies the amount of violation g t p x q in thedecision-making step so the violation does not become negative to prevent overly optimistic decisions. 
We will see that the rectifiers in both decision-making and penalty update leads toan upper boundof “regret + violation” as a whole (Lemma 1) such that we can establish regret and (hard) constraintviolation directly. These techniques are different from the primal-dual optimization that quantifiesthe constraint violation indirectly by bounding the dual variables/virtual queues. Finally, we comment that our algorithm only needs to solve an “almost” unconstrained optimizationproblem ( X is usually a simple set like the box constraints). Therefore, the gradient-based methodsare sufficient to find the minimizer or we might even find its close-form with the inverse operationof the function by taking “gradient “ zero”.", "parag_2": "Moreover, we replace the original constraint function g t p¨q with ˆ g ` t ´ 1 p¨q and the dual variables λ twith Q p t ´ 1 q such that Q p t ´ 1 q ˆ g ` t ´ 1 p x q is a rectified approximator of λ t g t p x q . We also added thethe regularization (or smooth) term α t } x ´ x t } 2 that helps the stability of the algorithm. In the penalty update of Q p t q , we first rectify the original constraint function g t p¨q with ˆ g ` t p¨q and addit to Q p t ´ 1 q such that penalty increases when violation occurs in each round. Further we rectify Q p t qwith a round-dependent constant η t to impose a “minimum” penalty price. The design of rectifiedpenalty update induces conservative decisions in the decision-making step to minimize constraintviolation. Note that it is different with the traditional primal-dual algorithm that does not rectifythe constraint violation and impose a minimum penalty price, and when the price (dual variable)is zero, the algorithm can take very aggressive decisions that lead to large hard violation. This isnot a problem when primal-dual algorithm is used as a numerical method for solving a constrainedoptimization problem, but leads to overly aggressive decisions and large violation when applying it to COCO. 
For a similar reason, RECOO rectifies the amount of violation g t p x q in the decision-makingstep so the violation does not become negative to prevent overly optimistic decisions. We will see thatthe rectifiers in both decision-making and penalty update lead to a small cumulative hard constraintviolation ř Tt “ 1 g ` t p x t q .", "annot_1": {"annotation": ["Concision", "Development"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove unnecessary content to make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "b5cpxvkUTu.haUk--I4J.00", "parag_1": "If you are using existing assets (e g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] (b) Did you mention the license of the assets? [No] (c) Did you include any new assets either in the supplemental material or as a URL? [No] (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [Yes] (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]", "parag_2": "If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] (b) Did you mention the license of the assets? [No] (c) Did you include any new assets either in the supplemental material or as a URL? [No] (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? 
[Yes] (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] 5. If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "5t8NvKONr.tls-ZX2iE.02", "parag_1": "The core of this proof is based on substituting a neural network for the inner product between the branch net and trunk net. The neural network with a low complexity could approximate the inner product with better performance since the inner product is an infinitely differentiable function. The DeepONet showed defects as it could be replaced with a neural network with E ( u ) and y as input. A large number of basis p in DeepONet increases the number of parameters of the trunk net, which can be considered a target network in HyperDeepONet. Models such as Shift-DeepONet and flexDeepONet could achieve the desired accuracy with a small number of basis. Still, there was a trade-off in which the first hidden layer of the target network required numerous units. There was no restriction on the dimension of the last hidden layer in the target network for NOMAD, which uses a fully nonlinear reconstruction. However, the first hidden layer of the target network had to be wide enough, increasing the number of parameters. 
Details can be found in Appendix C.", "parag_2": "The core of this proof is showing that the inner product between the branch net and the trunk net could be replaced with a neural network that has a low complexity (Lemma 1). Therefore, the entire structure of DeepONet could be replaced with a neural network that receives [ E ( u ) , y ] ∈ R d y + m as input. It gives the lower bound for the number of parameters in DeepONet based on Theorem 1. The proof can be found in Appendix C.1. The analogous results holds for variant models of DeepONet. Models such as Shift-DeepONet and flexDeepONet could achieve the desired accuracy with a small number of basis. Still, there was a trade-off in which the first hidden layer of the target network required numerous units. There was no restriction on the dimension of the last hidden layer in the target network for NOMAD, which uses a fully nonlinear reconstruction. However, the first hidden layer of the target network also had to be wide enough, increasing the number of parameters. Details can be found in Appendix C.2.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove unnecessary details. Include citation.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the beginning of the paragraph to improve the argumentation.", "annotator": "annotator_07"}} {"id_paragraph": "uUr8LbBzx8.u296507jDZ.00", "parag_1": " TransBoost’s performance is compared to the standard inductive (fully supervised) performance. Forinstance, using 20% of the training set and 25% of the test set, we obtained the best top-1 accuracygain of +3.58% in the transductive setting while the performance in the inductive setting degraded by-1.21%. 
We note that the performance in the inductive setting (highlighted in blue) is evaluated usingthe test instances that were not used at training time.", "parag_2": "Table 4 presents 16 experiments of TransBoost’s procedure performed on all combinations of the instance, % of training set and 25% of the test set, we obtained the best top gain of +3.58% in the transductive setting while the performance in the inductive setting degraded by We note that the performance in the inductive setting (highlighted in blue) is evaluated usingthe test instances that were not used at training time.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.15", "parag_1": "Four participants answered that they intentionally used edges (Figure 5 (iv)) in all conditions. However, they were affected by the cursor hidden by the notch, causing them to lose sight of the cursor. All participants answered that when the cursor was hidden by the notch, they tried to find the cursor by moving the mouse vigorously.", "parag_2": "Four participants answered that they intentionally used edges (Figure 5 (iv)) in all conditions. However, the notch hid the cursor, which caused them to lose sight of the cursor. All participants answered that they attempted to find the cursor by moving the mouse vigorously when the cursor was hidden by the notch.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Restructure the last two sentences in this paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Revise this paragraph to make it more clear and concise.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.19", "parag_1": "To make this point more concrete, we implement push apart from the main text using the Jacobian of sorting. We use sort : R n → R n to denote sorting of a multiset with scalar elements inascending order. 
This is extended to FSPool with higher-dimensional elements by independently sorting each dimension across the multiset. We begin by defining the function g ([ a, b ]) = sort ([ a, b ]) · [ − 1 , 1] . This multiplies the smaller value with − 1 and the larger with 1 (ties broken arbitrarily), then adds them together. Taking the gradient of g with respect to the input set means following the permutation that the sort applied backwards (similar to how max pooling is differentiated), so the − 1 gets propagated to the smaller element andto the larger one. In other words, ∇ [ a,b ] g = [ − 1 , 1] if a ≤ b (alternatively a < b ), else [1 , − 1] . Now we can define push apart ([ a, b ]) = [ a, b ] + ∇ [ a,b ] g to obtain exactly the same function as in the main text.", "parag_2": "To make this point more concrete, we implement push apart from the main text using the Jacobian of sorting. We omit the transposes in the following for brevity. We begin by defining the function g ([ a, b ]) = sort ([ a, b ]) · [ − 1 , 1] . This multiplies the smaller value with − 1 and the larger with(ties broken arbitrarily), then adds them together. Taking the gradient of g with respect to the input set means following the permutation that the sort applied in reverse ( S (cid:62) X ), so the − 1 gets propagated to the smaller element and 1 to the larger one. In other words, ∇ [ a,b ] g = [ − 1 , 1] if a ≤ b (alternatively a < b ), else [1 , − 1] . Now we can define push apart ([ a, b ]) = [ a, b ] + ∇ [ a,b ] g to obtain exactly the same function as in the main text.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.05", "parag_1": "Because typical targets on GUIs are rectangular, target height ( H ) also affects the movement time [3,8,14,20,21,27]. Accot and Zhai [1] proposed a model for a bivariate (2D) pointing tasks that takes H . Zhang et al. [28] proposed to balance the effects of W and H (Eq. 
2).", "parag_2": "Target height ( H ) also affects the movement time because typical targets on grafical user interfaces (GUIs) are rectangular [3,8,14,20, 22,30]. Accot and Zhai [1] proposed a model for a bivariate (2D) pointing tasks that considers H . Further, Zhang et al. [31] proposed to balancing the effects of W and H .", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rewrite this paragraph and focus more on the first sentence", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve this paragraph for clarity, mainly the first sentence.", "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.01", "parag_1": "According to the greedy learner hypothesis, it is the speed at which a multi-modal DNN learns from each modality that leads to imbalance in conditional utilization rate. If we appropriately intervene in the learning process to adjust these speeds, we may be able to prevent the hurtful imbalance across input modalities. We characterize such learning speed by introducing a metric, called conditional learning speed . We empirically show that it is a reasonable proxy for conditional utilization rate. We introduce a training algorithm, balanced multi-modal learning which accelerates the model to learn from either modality alternatively according to their conditional learning speeds. We show that it produces models that learn to use all modalities appropriately and achieve stronger generalization on three multi-modal datasets: Colored MNIST dataset (Kim et al., 2019), ModelNet40 dataset of 3D objects (Su et al., 2015) and NVGesture Dataset (Molchanov et al., 2015).", "parag_2": "According to the greedy learner hypothesis, it is the diverged speed at which a multi-modal DNN learns from different modalities that leads to an imbalance in conditional utilization rate. 
If we intervene in the training process to adjust these speeds, we may be able to prevent the hurtful imbalance across input modalities. We analyze the learning dynamics of model components and propose a metric named by conditional learning speed using the gradient norm and weight norm of models’ parameters. It measures the relative learning speed at which the model learns from one modality against the other modality. We empirically show that it is a reasonable proxy for conditional utilization rate. We introduce a training algorithm, balanced multi-modal learning which guides the model to learn intentionally from one of the modalities according to the conditional learning speeds. We show that models trained with this algorithm learn to use all modalities appropriately and achieve stronger generalization on three multi-modal datasets: Colored MNIST dataset (Kim et al., 2019), ModelNet40 dataset of 3D objects (Su et al., 2015) and NVGesture Dataset (Molchanov et al., 2015).", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.00", "parag_1": "Although the mouse cursor can enter the notch area at the top of the MacBook Pro (2021) display, it is partially or entirely hidden by the notch. Avoiding the notch or moving the cursor carefully around the notch can increase the movement time. In this study, we conducted a series of experiments to evaluate the effect of the notch on the movement of the mouse cursor. Experiment 1 showed that the notch increased the pointing movement time in specific situations. Experiment 2 showed that it is better to avoid the notch than enter the notch in the current specification of the notch. 
Experiment 3 showed that changing the notch to an area where the cursor cannot enter is an effective way of pointing at the target more rapidly and accurately, if the target is adjacent to the notch. Consequently, the outer edge of the notch stops the cursor, resulting in faster and more accurate target-pointing. Therefore, the notch should be an area where the cursor cannot enter .", "parag_2": "The notch on the top edge of the MacBook Pro (2021) display hides the mouse cursor even though the cursor can move under this area. Avoiding the notch or moving the cursor carefully around the notch can increase the movement time. In this study, we perform three experiments to evaluate the effect of the notch on the movement of the mouse cursor. In Experiment 1, we showed that the notch increases the pointing movement time under specific scenarios. In Experiment 2, we showed that it is better to avoid the notch instead of moving the cursor under the notch given its current specification. Finally, in Experiment 3, we showed that changing the notch to an area where the cursor cannot enter is an effective approach that allows the user to point at the target more rapidly and accurately if the target is adjacent to the notch. This is because the outer edge of the notch stops the cursor, and this results in faster and more accurate target pointing. Thus, the notch should be an area where the cursor cannot enter .", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the words using in this paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Modify this paragraph to make it more direct and easy to read.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.16", "parag_1": "When dealing with conflicts, arrows are effective in communi- cating the suggested conflict resolution action. 
End-of-line arrows can be used to indicate that the medication entries which have been scheduled too close together should be taken apart and vice-versa. The calendar should also have support for indication that a given entry is optional. Suchsupport targets entries such as prescription medication that should be taken as needed and non-prescription medication that is bought over the counter. These design decisions are influenced by everyday activities that users are familiar with.", "parag_2": "When dealing with conflicts, arrows are effective in communi- cating the suggested conflict resolution action. End-of-line arrows can be used to indicate that medication entries that have been sched- uled too close together should be taken apart and vice-versa. The calendar should also have support for indication that a given entry is optional. Such entries would be used for medications that should be administered “as needed” and non-prescription medications that are sold “over the counter”. These design decisions are influenced by everyday activities that users are familiar with.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Clarify the wording in this paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Reword my sentence about entries.", "annotator": "annotator_09"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.15", "parag_1": "The goal is to reconstruct the input with a permutation-invariant latent vector as bottleneck. Varying the set size n and dimensionality of set elements d allows us to control the difficulty of the task. Note that while this appears like a toy task, it is also likely harder than many real-world datasets with similar n and d . This is because the i.i.d. 
nature of the elements means that there is no global structure for the model to rely on; the only way to solve this task is to memorize every element exactly, which is especially difficult when we want the latent space to be permutation-invariant. ", "parag_2": "The goal is to reconstruct the input with a permutation-invariant latent vector as bottleneck. Varying the set size n and dimensionality of set elements d allows us to control the difficulty of the task. Note that while this appears like a toy task, it is also likely harder than many real-world datasets with similar n and d . This is because the independent nature of the elements means that there is no global structure for the model to rely on; the only way to solve this task is to memorize every element exactly, which is especially difficult when we want the latent space to be permutation-invariant. This task is also challenging because it indirectly requires a version of push apart : similar elements in the initialization Y 0 might need to be moved far away from each other in the output.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.03", "parag_1": "Patrick et al. proposed the Mouse Ether technique on finding out that when using multiple displays with different resolutions, a user loses the cursor because of unnatural cursor movement between displays [5]. The results showed that the technique improved performance by up to 28% by preventing unnatural warping when the cursor was moved between displays.", "parag_2": "Patrick et al. found out that a user loses the cursor when using multiple displays with different resolutions based on an unnatural cursor movement between displays, and proposed a Mouse Ether technique [5]. 
The proposed technique improved performance by up to 28% by preventing unnatural warping when the cursor was moved between displays.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the writing of this paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Modify the logical flow of ideas to improve the readability of the paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "jyac3IgQ44.f4au9jfat5.05", "parag_1": "To leverage the natural sparsity of point clouds andfurther improve efficiency, we sparsely implementall our window center searching, window gathering, and balanced window sampling into CUDAoperations. These operations are mainly based on a hash map that establishes the mapping fromcoordinate space to voxel index [23]. Taking the window gathering operation as an example, wequery each possible position w.r.t. the given center within the window, and retrieve the correspondingfeatures if the position is a valid key in the pre-built hash map. More details are available in the supplementary materials.", "parag_2": "We implement all our window center searching,window gathering, and balanced window samplingsparsely in cuda operations to leverage the natural sparsity of point clouds and improve efficiency. These operations are mainly based on a hash mapwhich establishes the mapping from coordinatespace to voxel index as in [20]. For example, forwindow gathering, we query each possible positionwrt the given center within the window and retrievethe feature if the position is a valid key in our prebuilt hash map. 
More details are available in the supplementary materials.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Revise this paragraph for better readability.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the flow of ideas for better readability.", "annotator": "annotator_07"}} {"id_paragraph": "nkOpNqg-ip.OwJsIhe_p.01", "parag_1": "The surprising result of our experimental evaluation is that, for runtimes of 1h, the black box optimizers are hardly ever able to improve upon Naive AutoML. In fact, the experiments even show that the “Ex-def” baseline itself is already quite strong in this time frame. This is not a contradiction to the results in Thornton et al. (2013), where a timeout of 30 hours was used. While this observations calls for more exhaustive experiments (with longer runtimes), it is already evident that simple baselines are much stronger than they were supposed to be until now.", "parag_2": "The surprising result of our experimental evaluation is that, for runtimes of 1h, the black box optimizers are hardly ever able to improve upon Naive AutoML. In fact, the experiments even show that the “Ex-def” baseline itself is already quite strong in this time frame. This is not a contradiction to the results in (Thornton et al., 2013), where a timeout of 30 hours was used. While this observations calls for more exhaustive experiments, it is already evident that simple baselines are stronger than perhaps believed.", "annot_1": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Rewrite the last sentence, making it more concise.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the last sentence more concise.", "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.06", "parag_1": "It is known that the edge target (placing the target adjacent to the edge of the screen) can reduce the movement time [3,9,10,24,25]. 
Pointing at a target in the center of the screen requires the cursor to stop just inside the target. When pointing an edge target, the cursor stops at the edge. As a result, the pointing can be completed by moving a cursor horizontally relative to the edge at which the target is adjacent. Additionally, the target adjacent to the corner of the screen can be pointed fast simply by hitting the corner with the cursor [24].", "parag_2": "An edge target (target adjacent to the edge of the screen) can reduce the movement time [3,9,10,27,28]. Pointing at a target at the center of the screen requires the cursor to stop inside the target. The cursor stops at the edge when pointing at an edge target. Thus, the pointing task can be completed by moving a cursor horizontally relative to the edge to which the target is adjacent. Further, a target adjacent to the corner of the screen can be pointed at fast simply by hitting the corner with the cursor [27].", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rewrite this paragraph and choose better words", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Reorganise the flow of ideas when a sentence is confusing. Try to shorten the paragraph a bit.", "annotator": "annotator_07"}} {"id_paragraph": "ByZyHzZC-.HktKf7-AW.02", "parag_1": "Given the probability density P ( θ ) , we are now interested in deriving the probability of ending at a given minimum, θ A , which we will denote by lowercase p A = ˜ p A C , where C is a normalization constant (the unnormalized probability ˜ p A is all we are interested in when estimating the relative probability of finishing in a given minimum compared to another one). This probability is derived in Appendix D, and given in the following theorem, which is the core result of our theory. ) algorithm in the Bayesian learning setting (Sato & Nakagawa, 2014). 
While in this setting it was found that the stochastic noise can be ignored, we arrive at a different conclusion.", "parag_2": "Given the probability density P ( θ ) , we are now interested in deriving the probability of ending at a given minimum, θ A , which we will denote by lowercase p A = ˜ p A C , where C is a normalization constant which is the same for every mimnima (the unnormalized probability ˜ p A is all we are interested in when estimating the relative probability of finishing in a given minimum compared to another one). This probability is derived in Appendix D, and given in the following theorem, which is the core result of our theory. context of the stochastic gradient langevin dynamics (SGLD) algorithm in the Bayesian learning setting (Sato & Nakagawa, 2014). While in this setting it was found that the stochastic noise can be ignored, we arrive at a different conclusion.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.12", "parag_1": "We instructed the participants to (1) point the target as quickly and accurately as possible after clicking the starting position, (2) avoid any clutching action (floating the mouse in the middle of an operation) during the trail, and (3) check the presented conditions before starting the trial. Clutching action decreases the model fit of Fitts’ law [7]. We instructed participants to avoid clutching action as possible in order to restrict the effect for model fits to the experimental conditions. 
Note that any A was a distance that the cursor could be moved without clutching action.", "parag_2": "We instructed the participants to (1) point the target as quickly and accurately as possible after clicking the starting position, (2) avoid any clutching action (floating the mouse in the middle of an operation) during the trial, and (3) check the presented conditions before starting the trial. The clutching action decreases the model fit of Fitts’ law [7]. We also instructed the participants to avoid the clutching action to restrict the effect for the model that fits the experimental conditions. A was set the distance where the cursor could be moved without a clutching action; no participant performed the clutching action during the trial.", "annot_1": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "kBsx5htyKn.qV5njV8W5.00", "parag_1": "Active learning is a powerful technique to get the best out of an annotation budget. It consists of integrating the current model itself in the selection of which data points should be annotated next, which are selected based on an heuristic that is supposed to maximize the improvement of the model.", "parag_2": "Active learning is a powerful technique to get the best out of an annotation budget for machine learning services in general, and Natural Language Processing ones in particular. It consists of integrating the current model itself in the selection of which data points should be annotated next, which are selected based on an heuristic that is supposed to maximize the improvement of the model.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.00", "parag_1": "Estimating individual treatment effects from observational data is very challenging due to the existence of treatment selection bias. 
Most existing representation-based methods mitigate this issue by aligning distributions of different treatment groups in the representation space. However, they still suffer from two critical problems: (1) Mini-batch Sampling Effects (MSE), where the alignment easily fails due to the outcome imbalance or outliers in the batch; (2) Unobserved Confounder Effects (UCE), where the unobserved confounders damage the correct alignment.", "parag_2": "Estimating individual treatment effects from observational data is highly challenging due to the existence of treatment selection bias. Most prevalent approaches mitigate this issue by aligning distributions of different treatment groups in the representation space. However, there are two critical problems circumvented: (1) mini-batch sampling effects (MSE), where the alignment easily fails due to the outcome imbalance or outliers at a mini-batch level; (2) unobserved confounder effects (UCE), where the unobserved confounders damage the correct alignment.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the english of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Edit this paragraph by making more formal choices of wording.", "annotator": "annotator_07"}} {"id_paragraph": "XnxT9Uofth.vN0Ie05Cbd.00", "parag_1": "The learning problem can be arbitrarily difficult, especially when f ∗ ( z ) is close to 1 2 , in which case itwill be difficult to determine the true value of r ( z ) . To give a problem-dependent bound, we assume the following bounded noise assumption. In the literature on statistical learning, this assumption is also referred to as Massart noise [Massart and Nédélec, 2006, Giné and Koltchinskii, 2006, Hanneke et al., 2014]. 
Our framework can also work under the low noise assumption - due to space limit, wedefer the discussion to Appendix F.3.", "parag_2": "The learning problem can be arbitrarily difficult, especially when f ∗ ( z ) is close to 1 2 , in which caseit will be difficult to determine the true value of r ( z ) . However, the marginal case is rare in manyreal-world problems, and the learning goal is not that difficult to identify for human beings. To give a problem-dependent bound, we assume the following bounded noise assumption. In the literature on statistical learning, this assumption is also referred to as Massart noise [Massart and Nédélec, 2006, Giné and Koltchinskii, 2006, Hanneke et al., 2014].", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "HkW3nTM6X.S1d278zJ4.00", "parag_1": "SDE model. Figures 2c-dclearly illustrate that time-variant drift function leads to reduced prediction error, which hints that the dynamics underlying a golf swing motion are learned better. We also see that the error consistently decreases as the number of inducing points M is increased, and reaches the minimum at M = 80 .", "parag_2": "SDE model. Figures 2c-d illustrate that time-variant drift function can also reduce prediction error, at least for golf swing trajectories that span approximately the same part of the state space. 
We also see that the error consistently decreases as the number of inducing points M is increased, and reaches the minimum at M = 80 .", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the sentence related to Figures 2c-d.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the first long sentence to better fit the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "HU3k56fdo.UBQDwHj6Ebd.00", "parag_1": "One can fine-tune the global model on the local data [41], perform MAML-based personalizedapproaches [37, 38], or achieve the personalization by local batch normalization layers [11]. Our proposed method is perpendicular to the above studies and potentially can be combined with them for further improvement.", "parag_2": "One can fine-tune the global model on the local data [37], perform MAML-based personalizedapproaches [38, 39], or achieve the personalization by local batch normalization layers [11]. Thereare also many other emerging explorations dealing with FL data heterogeneity, such as heterogeneousoptimization [40, 41, 42], robust aggregation [15], etc. Our proposed method is perpendicular to the above studies and potentially can be combined with them for further improvement.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "NwOG107NKJ.0PPYM22rdB.01", "parag_1": "General classes of network formation methods include: 1) exponential random graph models (ERGMs) Lusher et al. [2013]Pattison and Wasserman [1999], meta-networks, and meta-matrices Carley and Hill [2001]Krackhardt and Carley [1998] for multilayer social networks. 2) block modeling Guimerà and Sales-Pardo [2009] 3) geographic or characteristic based approaches, Boucherand Mourifié [2012]Leung [2014]; 4) link formation techniques Christakis et al. [2010]Bramoulléet al. 
[2012], and 5) subgraph model-based approaches (SUGMs) Chandrasekhar and Jackson [2016].", "parag_2": "General classes of network formation methods include: 1) exponential random graph models (ERGMs) [Lusher et al., 2013][Pattison and Wasserman, 1999], meta-networks, and meta-matrices [Carley and Hill, 2001][Krackhardt and Carley, 1998] for multilayer social networks. 2) block modeling [Guimerà and Sales-Pardo, 2009] 3) geographic or characteristic based approaches, [Boucher and Mourifié, 2012][Leung, 2014]; 4) link formation techniques [Christakis et al., 2010][Bramoullé et al., 2012], and 5) subgraph model-based approaches (SUGMs) [Chandrasekhar and Jackson, 2016].", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "F3z0hchpGy.xeuzrNJiNW.00", "parag_1": "This is done by applying a matrix ρ ( g q → p ) ∈ R C out × C in to the coefficients of the feature at q , in order to obtain the coefficients of the feature vector transported to p , which can be used for the convolution at p . The transporter depends on the geometric type of the feature, denoted by ρ . Details of how the tangent space is defined, how to compute the map to the tangent space, angles θ pq , and the parallel transporter are given in Appendix A.", "parag_2": "This is done by applying a matrix ρ ( g q → p ) ∈ R C out × C in to the coefficients of the feature at q , in order to obtain the coefficients of the feature vector transported to p , which can be used for the convolution at p . The transporter depends on the geometric type (group representation) of the feature, denoted by ρ and described in more detail below. 
Details of how the tangent space is defined, how to compute the map to the tangent space, angles θ pq , and the parallel transporter are given in Appendix A.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.17", "parag_1": "To avoid the significant variance in the IHDP benchmark, we conduct ablation study on the ACIC benchmark, to evaluate the effectiveness of ESCFR’s components and validate our claims in Section 3. In Table 2, ESCFR firstly augments TARNet with stochastic optimal transport in Section 3.1, which effectively reduces the out-of-sample PEHE from 3.254 to 3.207. Afterwards, it mitigates the MSE issue with RMPR in Section 3.2 and the UCE issue with PFOR in Section 3.3, reducing the out-of-sample PEHE to 2.768 and 2.633, respectively. Finally, ESCFR combines the RMPR and PFOR in a unified framework in Section 3.4 and further reduces the out-of-sample PEHE to 2.316.", "parag_2": "To verify the effectiveness of individual components, an ablation study is conducted on the ACIC benchmark in Table 2. Specifically, ESCFR first augments TARNet with stochastic optimal transport in Section 3.1, which effectively reduces the out-of-sample PEHE from 3.254 to 3.207. Then, it mitigates the MSE issue with RMPR in Section 3.2 and the UCE issue with PFOR in Section 3.3, reducing the out-of-sample PEHE to 2.768 and 2.633, respectively. Finally, ESCFR combines the RMPR and PFOR in a unified framework in Section 3.4, reducing the out-of-sample PEHE to 2.316.", "annot_1": {"annotation": ["Rewriting_medium", "Concision"], "instruction": "Make the first sentence more concise.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Simplify the first sentence.
Improve the connections between sentences.", "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.10", "parag_1": "We refer to the training steps at which we perform forward and backward passes normally as regular steps . We introduce re-balancing steps at which we force the model to update only one of the unimodal branches in order to accelerate learning from the associated input modality. See Appendix A. for the full explanation of the re-balancing step.", "parag_2": "We refer to the training steps at which we perform forward and backward passes normally as regular steps . We introduce re-balancing steps at which we update one of the uni-modal branches intentionally in order to accelerate the model to learn from the corresponding modality. See Appendix A.2 for the full explanation of the re-balancing step.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Rewrite this paragraph", "annotator": "annotator_05"}} {"id_paragraph": "UlHNcByJV.W1RxpkrWx8.02", "parag_1": "We compare against four baseline methods: PLOT, NeuralUCB, Greedy (no exploration), and decayed ϵ -greedy method. For ϵ -greedy, we follow (Kveton et al., 2019) and use a decayed schedule, dropping to 0.1% exploration by T=2500. In addition we evaluate the performance of the Standalone Adversarial classifier as an ablation study.", "parag_2": "We compare against four baseline methods: PLOT, NeuralUCB, Greedy (no exploration), and decayed ϵ -greedy method. For ϵ -greedy, we follow (Kveton et al., 2019) and use a decayed schedule, dropping to 0.1% exploration by T=2500. We combine PLOT with ϵ -greedy selection of pseudolabel candidates as in Pacchiano et al. 
In addition we evaluate the performance of the standalone adversarially de-biased classifier as an ablation study.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.02", "parag_1": "Deep CNN is firstly utilized for image SR in SRCNN (Dong et al., 2014) and continuously shows promising SR performance. There are only three convolutional (Conv) layers in SRCNN, hindering its performance. Kim et al . increased the network depth in VDSR (Kim et al., 2016a) with residual learning and obtained notable improvements over SRCNN. Lim et al . (Lim et al., 2017) built a much deeper network EDSR by using simplified residual blocks. Zhang et al . (Zhang et al., 2018b) proposed RCAN, which is one of the deepest networks in SR. With increased network size, very deep networks, like EDSR (Lim et al., 2017) and RCAN (Zhang et al., 2018b), have achieved remarkable SR performance. However, they also suffer from heavy model parameters, number of operations, and inference time. Therefore, it is infeasible to directly deploy them on resource-limited platforms without neural processing units or off-chip memory (Lee et al., 2020).", "parag_2": "Deep CNN for image SR is pioneered by SRCNN (Dong et al., 2014) and has continuously shown promising SR performance. There are only three convolutional (Conv) layers in SRCNN, constraining its expressivity. Kim et al . increased the network depth in VDSR (Kim et al., 2016a) with residual learning and obtained notable improvements over SRCNN. Lim et al . (Lim et al., 2017) built a much deeper network EDSR by using simplified residual blocks. Zhang et al . (Zhang et al., 2018b) proposed RCAN, which is one of the deepest networks in SR at present. Empowered by increased network size, deep SR models like EDSR (Lim et al., 2017) and RCAN (Zhang et al., 2018b) have seen remarkable SR performance.
However, as a cost, the large model size brings about problems such as excessive memory footprint, slow inference speed. It is thereby infeasible to directly deploy them on resource-constrained platforms (Lee et al., 2020).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Can you make the last sentence simple?", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Use shorter, more direct formulations to make this paragraph more concise. Rewrite the last two sentences to make them more understandable.", "annotator": "annotator_04"}} {"id_paragraph": "6Olckfg0rk.4SAcuLPvUb.00", "parag_1": "Our handcrafted backdoor attacks directly modify a pre-trained model’s parameters to introduce malicious functionality. Because our attack does not require training, knowledge of or access to the training data is unnecessary. More importantly, handcrafted attacks have more degrees of freedom in optimizing a model’s behaviors for malicious purposes. Our handcrafted attack works by injecting a decision path between the trigger that appears in the input neurons and the output of the neural network, so that the models exhibit different behaviors in the presence of the trigger.", "parag_2": "Our handcrafted backdoor attacks directly modify a pre-trained model’s parameters to introduce malicious functionality. Because our attack does not require training, knowledge of or access to the training data is unnecessary. More importantly, handcrafted attacks have more degrees of freedom in optimizing a model’s behaviors for malicious purposes. Our handcrafted attack works by injecting 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
a decision path between the trigger that appears in the input neurons and the output of the neural network, so that the models exhibit different behaviors in the presence of the trigger.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.09", "parag_1": "We further derive the upper bound of PEHE in the stochastic batch form as in Theorem 3.1 based on Uri et al. (2017), which demonstrates that the PEHE can be optimized by iteratively minimizing the factual outcome estimation error and the optimal transport discrepancy at a mini-batch level .", "parag_2": "To further investigate the effectiveness of this shortcut, Theorem 3.1 demonstrates that PEHE can be optimized by iteratively minimizing the factual outcome estimation error and the mini-batch group discrepancy (6). The proof of the theorem can be found in Appendix A.3.", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": "At the last part, state that the proof will be shown in the appendix. Also, make the sentence more sophisticated.", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make this sentence more concise. Add a reference to Appendix A.3. where the proof is.", "annotator": "annotator_07"}} {"id_paragraph": "S1qImCcFQ.Ske132uA7.02", "parag_1": "To validate the model and inference procedure, we used the neural spike train data recorded from the primary visual cortex of an anesthetized macaque monkey collected by Graf et al. The dataset is composed of short trials where the monkey viewed periodic temporal pattern of motions of 72 orientations, each repeated 50 times. Previous state space modeling of the dataset showed that for each orientation of the drifting grating stimulus, the neural response oscillates over time, but in a stimulus dependent geometry (Zhao & Park, 2017b). 
We used 25 trials each from a subset of 4 stimulus orientations grouped in two (140 and 150 degrees vs 230 and 240 degrees). Each trial contained 140 neurons, and their spike trains were binarized with a 10 ms window. We truncated the onset and offset neural responses, resulting in 111 time bins per trial.", "parag_2": "To validate the model and inference procedure, we used the neural spike train data recorded from the primary visual cortex of an anesthetized macaque monkey collected by Graf et al. The dataset is composed of short trials where the monkey viewed periodic temporal pattern of motions of 72 orientations, each repeated 50 times. Dimensionality reduction of the dataset showed that for each orientation of the drifting grating stimulus, the neural response oscillates over time, but in a stimulus dependent geometry captured in 3-dimensions (Zhao & Park, 2017). We used 50 trials each from a subset of 4 stimulus orientations grouped in two (140 and 150 degrees vs. 230 and 240 degrees) where each trial contained 140 neurons. Out of the 140 neurons, we selected 63 well-tuned neurons. The spike trains were binarized with a 10 ms window for Bernoulli observation model and we truncated the onset and offset neural responses, resulting in 111 time bins per trial.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "HJZRDGZCb.SkuHQG2zz.00", "parag_1": "To evaluate the proposed spatial-wise and channel-wise sparse-complementary (SW-SC and CWSC) convolutional kernels, we applied them onto state-of-the-art network architectures for the image classification task, including ResNet (He et al. (2016a;b)) and DenseNet (Huang et al. for the CIFAR-10/100 (Krizhevsky & Hinton(2009)) and ImageNet-1K (Russakovsky et al. (2015)) datasets. For all experiments, we replace all 3 × 3 and 1 × 1 kernels by SW-SC and CW-SC convolutional kernels, respectively.
We add a suffix − sc to refer to our models with either SW-SC,", "parag_2": "To evaluate the proposed spatial-wise and channel-wise sparse-complementary (SW-SC and CWSC) convolutional kernels, we applied them onto state-of-the-art network architectures for the image for the CIFAR-10/ Krizhevsky Hinton and ImageNet-1K ) datasets. For all experiments, we replace all 3 × 3 and 1 × 1 kernels by SW-SC and CW-SC convolutional kernels, respectively. We add a suffix − sc to refer to our models with either SW-SC,", "annot_1": {"annotation": ["Concision"], "instruction": "Exclude unnecessary details.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "5t8NvKONr.tls-ZX2iE.00", "parag_1": "In this section, we would like to clarify the complexity of the DeepONet required for the approximation A and reconstruction R based on the theory in Galanti & Wolf (2020). Furthermore, using the results on the upper bound for the complexity of hypernetwork Galanti & Wolf (2020), we will show that the HyperDeepONet entails a relatively lower complexity than the DeepONet.", "parag_2": "In this section, we would like to clarify the complexity of the DeepONet required for the approximation A and reconstruction R based on the theory in Galanti & Wolf (2020). Furthermore, we will show that the HyperDeepONet entails a relatively lower complexity than the DeepONet using the results on the upper bound for the complexity of hypernetwork (Galanti & Wolf, 2020).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Use correct citation format.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Reorder the last sentence.", "annotator": "annotator_07"}} {"id_paragraph": "CswFOyPyhT.FUeqrAFby.00", "parag_1": "Previous work has shown GFlowNets are useful in settings with multi-modal posteriors.
This is of particular interest to us where many admissible structures can explain the observed data equally well. Next, we discuss a toy system in which has many modes in section 5, then present our GFlowNet-based solution in section 4.", "parag_2": "Previous work has shown GFlowNets are useful in settings with multi-modal posteriors. This is of particular interest to us where many admissible structures can explain the observed data equally well. Next, we present our GFlowNet-based solution in section 4, then discuss a toy system which has many modes in section 5.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make the citation in correct order.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Revise this paragraph to present the sections in a coherent order.", "annotator": "annotator_02"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.10", "parag_1": "We would like to highlight that, sequence-based (evolution-based) methods for single proteins are not suitable for protein-protein interactions due to the lacking of evolutionary information in most cases — protein-protein interactions involve two or more chains. The chains might belong to different species (e.g. host and virus). Sometimes inter-chain co-evolution simply does not happen (e.g. immune response where effective antibodies are produced to induce rapid clearance of pathogens, leaving no time for the pathogen to evolve). These make it infeasible to predict mutational effects via mining sequence databases with existing powerful tools such as multiple sequence alignments, protein language models, etc. Hence, currently, effective ways for predicting mutational effects on binding are based on structures rather than sequences alone.", "parag_2": "However, it is important to note that sequence-based methods are not suitable for predicting mutational effects on general protein-protein interactions due to the lack of evolutionary information in many cases.
Protein-protein interactions typically involve two or more chains, which may belong to different species or may not experience inter-chain co-evolution. As such, it is infeasible to predict mutational effects via mining sequence databases using existing powerful tools such as MSAs or PLMs. Thus, effective ways for predicting mutational effects on protein-protein interaction rely on structure-based approaches rather than sequences alone.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove unnecessary examples.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision"], "instruction": "Rewrite this paragraph to make it shorter while keeping all the informations.", "annotator": "annotator_07"}} {"id_paragraph": "X50LVGSli.jqJzurpUu.01", "parag_1": "Previous works on unsupervised learning for CO have studied max-cut Yao et al. and TSP problems Hudson et al. (2021), while these works depend on carefully selected problem-specific objectives. Some works have investigated satisfaction problems Amizadeh et al. ; Toenshoff et al. Applying these approaches to general CO problems requires problem reductions. The works most relevant to us are Karalias & Loukas (2020), Wang et al. and Schuetz et al. Karalias & Loukas (2020) propose an unsupervised learning framework EGN for general CO problems based on the Erdős’s probabilistic method, which bonds the quality of the final solutions with probability. Wang et al. (2022) generalize EGN and prove that if the CO objective can be relaxed into an entry-wise concave form, a solution of good quality can be deterministically achieved. This further inspires the design of proxy objectives for the CO problems that may not have closed-form objectives, such as those in circuit design. Schuetz et al.
(2022) have recently extended", "parag_2": "Previous works on unsupervised learning for CO have studied max-cut (Yao et al., 2019) and TSP problems (Hudson et al., 2021), while these works depend on carefully selected problem-specific objectives. Some works have investigated satisfaction problems (Amizadeh et al., 2018; Toenshoff et al., 2019). Applying these approaches to general CO problems requires problem reductions. The works most relevant to us are (Karalias & Loukas, 2020), (Wang et al., 2022) and (Schuetz et al., 2022). Karalias & Loukas (2020) propose an unsupervised learning framework EGN for general CO problems based on the Erdős’ probabilistic method, which bonds the quality of the final solutions with probability. Wang et al. (2022) generalize EGN and prove that if the CO objective can be relaxed into an entry-wise concave form, a solution of good quality can be deterministically achieved. This further inspires the design of proxy objectives for CO problems that may not have closed-form objectives, such as those in circuit design. Schuetz et al. (2022) have recently extended", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "BkVj6Z-AW.SytnTZWCZ.03", "parag_1": "Deep Learning Approaches. The use of recurrent networks is a natural approach to dealing with the problem of human motion. LSTM and RNN networks are trained to generate output sequentially, each output conditioned on the previous elements in the sequence. These networks have shown much success in Natural Language Processing for generating text Sutskever et al. (2011), hand written characters Graves (2013); Gregor et al. (2015), and even captioning images Vinyals et al.
LSTM and RNN networks are trained to generate output sequentially, each output conditioned on the previous elements in the sequence. These networks have shown much success in Natural Language Processing for generating text (Sutskever et al., 2011), hand written characters (Graves, 2013; Gregor et al., 2015), and even captioning images (Vinyals et al., 2014).", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.20", "parag_1": "RDE-Linear achieves performance comparable to Rosetta and outperforms some unsupervised representation learning baselines. Though it does not outperform most of the baselines over the whole SKEMPI2 dataset, we find that its performance is much better when considering only single-point mutations (Table 6 in the appendix). The reason might be that simple linear models cannot capture well the non-linear relationship dominating multi-point mutations. Anyway, RDE-Linear shows that using simple statistics of the estimated rotamer density alone can predict ∆∆ G , laying the foundation for the more accurate RDE-Network.", "parag_2": "RDE-Linear achieves comparable performance to Rosetta and outperforms some unsupervised learning baselines. While it does not surpass most baseline methods over the entire SKEMPI dataset, we observe that its performance is better when considering only single-point mutations (Table 6 in the appendix). This might be attributed to the fact that simple linear models cannot capture well the non-linear relationship dominating multi-point mutations. 
Nevertheless, RDE-Linear demonstrates that using the basic statistics of the estimated rotamer density alone can predict ∆∆ G , which lays the foundation for the more accurate RDE-Network.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make this paragraph more clear.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the English in this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "h-HkkpC3wm.NSZZHkfytf.00", "parag_1": "• The strongest benefit of employing digital circuits (in the form of ASICs or FPGAs) to implement XOR gates is that all XOR gates can be performed simultaneously (unlike GPUs or CPUs where each core needs to simulate only a few XOR gates). Thus, all XOR operations of our proposed decoder are completed within just one clock cycle. • (with shift registers) would increase the latency. Throughput, however, maintains to be the same regardless of through pipelining technique which is a basic hardware design principle. • Overall, the design complexity (in terms of area overhead and latency) of XOR-gate decoders is extremely low (note that one XOR gate consumes only 6 transistors). • XOR-gate decoders would work as memory decompressors (located in-between memory and computation logic). In the view of computational units that receive outputs of an XOR-gate decoder, then, the amount of memory is simply reduced while regular memory access patterns are not disturbed. • In the case of , the overall design cost would be 2 additional clock cycles for the latency and a few hundred XOR gates (which would be approximately equivalent to a few thousand transistors). Overall, we can provide full memory bandwidth (based on regular memory access patterns through fixed-to-fixed sparsity formats) while the overall hardware design cost is only marginal.
• While designing DNN inference accelerators is gaining increasing attention, our work can provide a new research direction.", "parag_2": "• The strongest benefit of employing digital circuits (in the form of ASICs or FPGAs) to implement XOR gates is that all XOR gates can be performed simultaneously (unlike GPUs or CPUs where each core needs to simulate only a few XOR gates). Thus, all XOR operations of our proposed decoder are completed within just one clock cycle. • N s (with shift registers) would increase the latency. Throughput, however, maintains to be the same regardless of N s through pipelining technique which is a basic hardware design principle. • Overall, the design complexity (in terms of area overhead and latency) of XOR-gate decoders is extremely low (note that one XOR gate consumes only 6 transistors). • XOR-gate decoders would work as memory decompressors (located in-between memory and computation logic). In the view of computational units that receive outputs of an XOR-gate decoder, then, the amount of memory is simply reduced while regular memory access patterns are not disturbed. • Given N s , an XOR-gate decoder requires N s additional clock cycles for the latency. • Since M matrix has the size of N out × N in and an element of M is randomly filled with 0 or 1 , the number of XOR gates is ( N out · N in / 2) . Thus, the total number of transistors to design an XOR-gate decoder is (3 · N out · N in ) . • Overall, we can provide full memory bandwidth (based on regular memory access patterns through fixed-to-fixed sparsity formats) while the overall hardware design cost is only marginal. • While designing DNN inference accelerators is gaining increasing attention, our work can provide a new research direction.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "YkiRt7L93m.jgDbnUD7s.02", "parag_1": "The most closely related works to ours are Bonneel et al. and Werenski et al.
The former develops a regression approach in barycentric coordinates with applications in computer graphics as well as color and shape transport problems. Their method requires solving a computationally costly bilevel optimization problem, which does not need to achieve a global solution. The latter works on a tangential structure like we do, but is based on “Karcher means” (Karcher, 2014, Zemel & Panaretos, 2019). This implies that their method works between absolutely continuous measures with densities that are bounded away from zero, with the target measure lying in the convex hull of the control measures.", "parag_2": "The most closely related works to ours are Bonneel et al. (2016), Mérigot et al. (2020), and Werenski et al. The first develops a regression approach in barycentric coordinates with applications in computer graphics as well as color and shape transport problems. Their method is defined directly on W 2 and requires solving a computationally costly bilevel optimization problem, which does not necessarily yield global solutions. The second introduces a linearization of the 2 -Wasserstein space by lifting it to a L 2 -space anchored at measure that is absolutely continuous with respect to Lebesgue measure. This approach relies on the existence of optimal transport maps between this absolutely continuous “anchor” distribution and other distributions and hence only defines tangent spaces at absolutely continuous measures. The third works on a tangential structure based on “Karcher means” (Karcher, 2014, Zemel & Panaretos, 2019), which is more restrictive still. 
This implies that their method requires all involved measures to be absolutely continuous measures with densities that are bounded away from zero, with the target measure lying in the convex hull of the control measures.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "c8pZvSp-5r.zd4IIIuixp.00", "parag_1": "Second, it makes the model not translation-equivariant because a unique positional encoding vector is added to every one patch. The translation equivalence plays an important role in classification because we hope the networks’ responses changes accordingly as the object moves in the image. One may note that the first issue can be remedied by removing the positional encodings since except for the positional encodings, all other components ( e.g ., MHSA and FFN) of the vision transformer can directly be applied to longer sequences. However, this solution severely deteriorates the performance model has no way to extract the order without the positional encodings. The experiment results on", "parag_2": "Second, it makes the model not translation-equivariant because a unique positional encoding vector is added to every one patch. The translation equivalence plays an important role in classification because we hope the networks’ responses changes accordingly as the object moves in the image. can directly be applied to longer sequences. However, this solution severely deteriorates the performance. This is understandable because the order of the input sequence is an important clue and the model has no way to extract the order without the positional encodings. 
The experiment results on", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "NvI7ejSHFe.ppieLd2M4a.03", "parag_1": "In this paper, we reveal the high sensitivity of PINNs to the choice of activation functions and relate it to various characteristics of the underlying PDE system. To avoid the inefficient manual selection of activation functions when solving different PDEs, we sought to learn specialized activation functions automatically for PINNs. The proposed physics-informed activation function is presented as learnable combinations of a set of candidate functions, whose coefficients can be adapted to the governing PDEs. Intuitively, we show that PIAC makes the neural tangent kernel of PINNs learnable, which partially explains the performance improvement of PIAC. Extensive experiments on a series of challenging benchmarks demonstrate the effectiveness and generalization ability of PIAC.", "parag_2": "In this paper, we reveal the high sensitivity of PINNs to the choice of activation functions and relate it to various characteristics of the underlying PDE system. We sought to learn specialized activation functions automatically for PINNs to avoid the inefficient manual selection of activation functions and to alleviate the optimization difficulty of PINNs. The proposed physics-informed activation function is presented as learnable combinations of a set of candidate functions, whose coefficients can be adapted to the governing PDEs. Intuitively, we show that PIAC makes the neural tangent kernel of PINNs learnable, which partially explains the performance improvement of PIAC. 
Extensive experiments on a series of challenging benchmarks demonstrate the effectiveness and generalization ability of PIAC.", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CzTbgFKuy.hfDu8DsDq6.03", "parag_1": "Assuming a bound N ≥ 2 on the number of days and B > 0 on the buy price we can show that U t is bounded and Lipschitz w.r.t. λ . We can thus run randomized exponentiated gradient over the product set [ N ] × { δ/ 2 , . . , 1 − δ/ 2 } on the functions U t ( · , x t ) for some δ s.t. 1/δ ∈ Z ≥ 2 to get a bound on the expected regret (proof in A.3).", "parag_2": "Assuming a bound of N ≥ 2 on the number of days and B > 0 on the buy price implies that U t is bounded and Lipschitz w.r.t. λ . We can thus run exponentiated gradient on the functions U t to learn a categorical distribution over the product set [ N ] × { δ/ 2 , . . , 1 − δ/ 2 } for some δ s.t. 1/δ ∈ Z ≥ 2 . This yields the following bound on the expected regret (proof in A.3).", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7FjOLfaFz.ZBfN_37c2x.00", "parag_1": "In the literature, there is a line of research on estimation and inference of the heterogeneous treatment effects (HTE) (Athey & Imbens, 2016; Taddy et al., 2016; Wager & Athey, 2018; Yu et al., 2020). In particular, Yu et al. (2020) proposed an online test for QTE. We remark that QTE and QTE are related yet fundamentally different hypotheses. There are cases where both the alternative hypothesis in Yu et al. and the null hypothesis in our paper hold. Consequently, applying their test will fail in our setting.", "parag_2": "In the literature, there is a line of research on estimation and inference of the heterogeneous treatment effects (HTE) (Athey & Imbens, 2016; Taddy et al., 2016; Wager & Athey, 2018; Yu et al., 2020). In particular, Yu et al.
(2020) proposed an online test for HTE. We remark that HTE and QTE are related yet fundamentally different hypotheses. There are cases where HTE exists whereas QTE does not. See Figure 1 for an illustration. Consequently, applying their test will fail in our setting.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "WldWha1MT.LL2ZsGpJga.02", "parag_1": "Topology aware segmentation Multiple studies have described the importance of topologically correct segmentations in various computer vision applications. Clough et al. (2020) used the concept of Betti numbers as a prior for the segmentation network. A key publication by Hu et al. (2019) introduced the Wasserstein loss as a variation of the Wasserstein distance to improve image segmentation with neural networks. The method computes persistence diagrams for the ground truth and the prediction and matches the points in the persistence diagrams of dimension-1 by minimizing the squared distance of matched points. However, this matching algorithm has a fundamental limitation, in that it cannot guarantee that the 0-dim and 1-dim structures matched by the algorithm are spatially related in any sense, see Figure 2. Put succinctly, thematched cyclesand connected components are matched irrespective of the location within the image, frequently leading to situations where the resulting loss will emphasize the wrong features and penalize the correct features. See Figure 2 for two schematic examples and Figures 1,11 for the result of the Wasserstein matching method on the CREMI dataset. Moreover, the method only considers 1-dimensional structures in their implementation and restricts to superlevel filtrations. Persistent homology and distance-based barcode matching have been usedin different settings, for example, by Abousamra et al. (2021) for crowd localization, by Wang et al. (2022) for gland segmentation, and by Waibel et al. 
(2022) for reconstructing 3D cell shapes from 2D images. An alternative approach to persistent homology is employed by Hu & Chen (2021), whose method penalizes different locations of critical points using the warping error metric Jain et al. (2010). Other frequently used methods in topology-aware segmentation focus on overlap-based segmentation, where the overlapped structures are not the entire volume but certain topologically relevant structures. For example, the skeleta of curvilinear structures play this role in the clDice score Shit et al. (2021), which is calculated as the harmonic mean of the overlap of the prediction skeleton with the label volume and the label skeleton with the prediction volume.", "parag_2": "Topology aware segmentation Multiple works have highlighted the importance of topologically correct segmentations in various computer vision applications. Persistent homology is a popular tool from algebraic topology to address this issue. A key publication by Hu et al. (2019) introduced the Wasserstein loss as a variation of the Wasserstein distance to improve image segmentation. They match points of dimension 1 in the persistence diagrams – an alternative to barcodes as descriptor of persistent homology – of ground truth and prediction by minimizing the squared distance of matched points. However, this matching has a fundamental limitation, in that it cannot guarantee that the matched structures are spatially related in any sense (see Fig. 1 and App. A). Put succinctly, the cycles are matched irrespective of the location within the image, which frequently has an adverse impact during training (see App. F). Clough et al. (2020) follows a similar approach and train without knowing the explicit ground truth segmentation, but only the Betti numbers it ought to have. Persistent homology has also been used by Abousamra et al. (2021) for crowd localization and by Waibel et al. (2022) for reconstructing 3D cell shapes from 2D images. 
Other methods incorporate pixel-overlaps of topologically relevant structures. For example, the clDice score, introduced by Shit et al. (2021), computes the harmonic mean of the overlap of the predicted skeleton with the ground truth volume and vice versa. Hu & Chen (2021) and Jain et al.", "annot_1": {"annotation": ["Concision", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "l1R3hsGaL.wDtYQAe21.01", "parag_1": "We test a ResNet-18 [5] on each environment we generate of ∼ 3600 images, and optimize it usingthe Empirical Risk Minimization (ERM) objective [16]. For the sake ofcompleteness, we also testwith group DRO [15], a popular OOD objective that accounts for the environment label e attached toeach data point ( x , y, e ) , and present its results in Appendix A. To test for each DGP construction (Figure 2 a), b) and c)), we construct four experiments within eachregime. In each experiment, we train on three environments and test on one held-out environment. The distributions are detailed in Table 1.", "parag_2": "We conduct tests in which we aim to evaluate the impact of statistical and geometric skews on the performance of a vision classifier. To achieve this, we use a ResNet-18 [4] as our base model architecture and construct four generalization tasks for each skew regime given in Figure 2. Further, we compare the performance of ERM against gDRO: while ERM shuffles all the data in its training domain, gDRO assigns an environment label e to each environment and optimizes the worst-case loss. Results are shown in Table 1, and experiments were performed using NVIDIA 3090 GPUs.", "annot_1": {"annotation": ["Rewriting_heavy", "Content_substitution"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "hoDIDL_gcF.NYncw25Z2M.00", "parag_1": " Figure 2 shows the results. We can observe that in both experiments N. 
+ emp + opt successfully exploits the fast spectral decay of Gaussian kernel and significantly outperforms other methods. Also, even without using the knowledge of any expectations, N. + emp (and Thinning ) show a decent convergence rate comparable to Herding or iid Bayes , which actually use the additional information.", "parag_2": "Figure 2 shows the results. We can observe that in both experiments N. + emp + opt successfully exploits the fast spectral decay of Gaussian kernel and significantly outperforms other methods. Also, even without using the knowledge of any expectations, N. + emp shows a decent convergence rate comparable to Herding or iid Bayes .", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "BkVj6Z-AW.SytnTZWCZ.02", "parag_1": "Simulation-based Methods. Simulation-based techniques are able to produce physically plausible animations Levine & Popović (2012); Clegg et al. ; Hämäläinen et al. ; Ha et al. (2012), including realistic balancing, motion on terrain of various heights, and recovery from falling. These techniques tend to consider physical constraints on the human skeleton while optimizing a motion objective. For example, in the work of Levine et al. Levine & Popović (2012), one task they employ is moving the skeleton in a certain direction without falling over. Similarly, in Ha et al. Ha et al. (2012), given initial fall conditions, they seek to minimize joint stress due to landing impact while ensuring a desired landing pose. Though the output is plausible and realistic, they require specific objectives and constraints for each individual task. It is not feasible to use such explicit objectives or constraints for stylized motion. 
The correct arch of the back and flourishes of the limbs for a dance cannot be described a priori and must instead be inferred from the data.", "parag_2": "Simulation-based Methods. Simulation-based techniques are able to produce physically plausible animations (Levine & Popović, 2012; Clegg et al., 2015; Hämäläinen et al., 2015; Ha et al., 2012), including realistic balancing, motion on terrain of various heights, and recovery from falling. These techniques tend to consider physical constraints on the human skeleton while optimizing a motion objective. For example, in the work of Levine et al. (Levine & Popović, 2012), one task they employ is moving the skeleton in a certain direction without falling over. Similarly, in Ha et al. (Ha et al., 2012), given initial fall conditions, they seek to minimize joint stress due to landing impact while ensuring a desired landing pose. Though the output is plausible and realistic, they require specific objectives and constraints for each individual task. It is not feasible to use such explicit objectives or constraints for stylized motion. The correct arch of the back and flourishes of the limbs for a dance cannot be described a priori and must instead be inferred from the data.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ssjKKm0b5y.3wi5X8wrM_.01", "parag_1": "In this area, leading approaches include NSGA-III (Deb & Jain, 2013) and MOEA/D (Zhang & Li, 2007). However, these gradient free methods scale poorly with the number of parameters and are not suitable for training large scale neural networks. Sener & Koltun (2018) proposed a gradient based MOO algorithm for MTL, based on MDGA (Désidéri, 2012), suitable for training large scale neural networks. Other recent works include (Lin et al., 2019; Mahapatra & Rajan, 2020) detailed in Section 2. 
Another recent approach that tries to get a more complete view of the Pareto front is Ma et al. This method extents a given Pareto optimal solution in its local neighborhood. In a concurrent work, Lin et al. propose extending Lin et al. for approximating the entire Pareto front, by utilizing hypernetworks. The proposed method is conceptually similar to our approach. Similarly, Dosovitskiy & Djolonga (2019) proposed learning a single model conditioning on the objective weight vector. The method uses feature-wise linear modulation Perez et al. and dynamically weighted LS loss criterion.", "parag_2": "In this area, leading approaches include NSGA-III (Deb & Jain, 2013) and MOEA/D (Zhang & Li, However, these gradient-free methods scale poorly with the number of parameters and are not suitable for training large-scale neural networks. Sener & Koltun (2018) proposed a gradient-based MOO algorithm for MTL, based on MDGA (Désidéri, 2012), suitable for training large-scale neural networks. Other recent works include (Lin et al., 2019; Mahapatra & Rajan, 2020) detailed in Section 2. Another recent approach that aims at a more complete view of the Pareto front is Ma et al. They extend a given Pareto optimal solution in its local neighborhood. In a concurrent work, Lin et al. (2020) extends Lin et al. for approximating the entire Pareto front. They train a single hypernetwork by constantly changing the reference directions in the PMTL objective. The proposed method is conceptually similar to our approach, but since it builds on PMTL, it may not produce an exact mapping between the preference and the corresponding solution. Similarly, Dosovitskiy & Djolonga (2019) proposed learning a single model conditioning on the objective weight vector. The method uses feature-wise linear modulation Perez et al. 
and dynamically weighted LS loss criterion.", "annot_1": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "cDJ_gFGag2.YuIURjVry.00", "parag_1": "For the first category of experiments, we averaged our results over 400 random trials. An array of 5 different ResNet architectures were trained in this category: ResNet10, 18, 34, 50, and 101, following a simple pipeline in the Pytorch library example 1 . It is worth noting that we deliberately did not engineer strong features for this category of experiments. The mini-ImageNet dataset was used for these experiments, and 16-way classification was performed on both the validation and novel classes. For this purpose, 16 out of 20 novel classes were chosen at random once and fixed for all the evaluations. We split the validation and novel classes into 90% training and 10% held-out for accuracy evaluation. The imbalanced data settings were chosen to either have 7.5 or 15 average number of shots per class. For the second category of experiments, we used the WideResNet-28pre-trained feature backbones from Mangla et al. ; Yang et al These experiments were conducted on three benchmarks, and each hyper-parameter configuration was averaged over 10, random trials. Additional details and hyper-parameters are covered in the subsequent sections.", "parag_2": "For the first category of experiments, we averaged our results over 400 random trials. An array of 5 different ResNet architectures were trained in this category: ResNet10, 18, 34, 50, and 101, following a simple pipeline in the Pytorch library example 1 . It is worth noting that we deliberately did not engineer strong features for this category of experiments. The mini-ImageNet dataset was used for these experiments, and 16-way classification was performed on both the validation and novel classes. 
For this purpose, 16 out of 20 novel classes were chosen at random once and fixed for all the evaluations. We split the validation and novel classes into 90% training and 10% held-out for accuracy evaluation. The imbalanced data settings were chosen to either have 7.5 or 15 average number of shots per class. For the second category of experiments, we used the WideResNet-28pre-trained feature backbones from Mangla et al. ; Yang et al. These experiments were conducted on three benchmarks, and each hyper-parameter configuration was averaged over 10, random trials. The backbone parameters and extracted features for all datasets are publicly available in the Illinois Data Bank repository (Saleh et al., 2022). Additional details and hyper-parameters are covered in the subsequent sections.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "IoTyuVEanE.Et-c0vQfeb.01", "parag_1": "Snuba [14] generates candidate rules using a collection of weak learner primitives (e.g. decision stumps, k-nearest neighbors) and then synthesizes and prunes this set of rules to generate final rules for labeling. ReGAL combines elements this framework to allow iterative communication betweendownstream classifiers and rule selectors to allow each to mutually enhance the other. This allows ReGAL to uniquely offer the ease of model-generated LFs while still gaining the nuanced insight ofhuman annotator input.", "parag_2": "Snuba [14] generates candidate rules using a collection of weak learner primitives (e.g. decision stumps, k-nearest neighbors) and then synthesizes and prunes this set of rules to generate final rules for labeling. ReGAL combines elements of this framework to allow iterative communication between downstream classifiers and rule selectors to allow each to mutually enhance the other. 
ReGAL differs from these by soliciting user feedback on rules, thus enabling it complement model-generated LFs with annotator guidance.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the last sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the last sentence to better convey the idea.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.09", "parag_1": "Recommender system is a natural application of the problem of varying action space where items to recommend change often. Specifically, action interdependence is apparent in the case of listwise actions. For example, a list of diverse ads is more likely to get a user click than ads from the same domain (Zhou et al., 2010). In this work, we experiment with the listwise metric of Complementary Product Recommendation (CPR) (Hao et al., 2020).", "parag_2": "Recommender system (RecSys) is a natural application of varying action space RL — for instance, news articles or videos to recommend are updated daily. Action interdependence is distinctly apparent in the case of listwise actions. A recommended list of diverse videos is more likely to get a user click than videos about the same thing (Zhou et al., 2010). In this work, we experiment with the listwise metric of Complementary Product Recommendation (CPR) (Hao et al., 2020).", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1-LZxvKX.rJ009I8RX.00", "parag_1": "Now, the question is: given a full model f , is it possible to find a more efficient reparameterization f ψ that, trained de novo , can generalize comparably well? 
The success of various model compression techniques suggests that such reparameterizations might exist for most well-known models, and a recent study (Frankle & Carbin, 2018) made successful post hoc identifications of sparse reparameterized small networks with precisely such properties. Nevertheless, all attempts of training small networks de novo have yielded results significantly underperforming those compressed from the original models (Zhu & Gupta, 2017). This has led to a commonly held belief that gross overparameterization is necessary for effective training, and it is further hypothesized that this necessity is due to the haphazard occurrences of good reparameterizations and their sensitivity to initialization (Frankle & Carbin, 2018). To date, a method capable of training a sparse network de novo to match the performance of a dense model compressed to the same size remains out of reach. In this study we present a dynamic sparse reparameterization technique that successfully overcame this challenge.", "parag_2": "Now, the question is: given a full model f , is it possible to find a more efficient reparameterization f ψ that, trained de novo , can generalize comparably well? The success of various model compression techniques suggests that such reparameterizations might exist for most well-known models, and a recent study (Frankle & Carbin, 2018) made successful post hoc identifications of sparse reparameterized small networks with precisely such properties. Nevertheless, attempts at training small networks de novo typically yield results significantly underperforming networks obtained by compressing larger models (Zhu & Gupta, 2017). This has led to a commonly held belief that gross overparameterization is necessary for effective training. 
Here we argue that this is not true by presenting a dynamic sparse reparameterization technique able to train sparse models de novo without the need to compress a large model, a desirable feature for training on memory- and power-constrained devices.", "annot_1": {"annotation": ["Concision", "Content_substitution"], "instruction": "Make this paragraph more concise by revising the last part and removing the reference to Frankle and Carbin (2018).", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "rJFzEaTlE.HkixjxSBE.00", "parag_1": "During the translation/transformation, some domain-specific attributes are changed, such as the colors, texture, and semantics of certain image regions (see e.g. the examples in Fig 1). Although there is no supervised information for these changes, certain consistency during the transformation is desirable. Inspired by graph-based semi-supervised learning (Zhu et al., 2003; Zhu, 2006), we introduce smoothness terms to unpaired image-to-image translation (Zhu et al., 2017a) by providing a stronger regularization for the translation/transformation between the source and target domains, aiming to exploit the “manifold structure” of the source and target domains. For a pair of similar samples (two different locations in an image; one can think of them as two patches although the receptive fields of CNN are quite large), we add a smoothness term to minimize a weighted distance of the corresponding locations in the target image. Note that two spatially distant samples might be neighbors in the feature space. We name our algorithm HarmonicGAN as it behaves harmonically along with the circularity and adversarial constraints to learn a pair of dual translations between the source and the target domains. 
Metrics defined on two alternative features are adopted: (1) a low-level soft RGB histograms; and (2) CNN (VGG) features with pre-trained semantics.", "parag_2": "During the translation/transformation, some domain-specific attributes are changed, such as the colors, texture, and semantics of certain image regions. Although there is no supervised information for these changes, certain consistency during the transformation is desirable, meaning that for image contents similar in the source space should also be similar in the target space. Inspired by graph-based semi-supervised learning (Zhu et al., 2003; Zhu, 2006), we introduce smoothness terms to unpaired image-to-image translation (Zhu et al., 2017a) by providing a stronger regularization for the translation/transformation between the source and target domains, aiming to exploit the “manifold structure” of the source and target domains. For a pair of similar samples (two different locations in an image; one can think of them as two patches although the receptive fields of CNN are quite large), we add the smoothness term to minimize a weighted distance of the corresponding locations in the target image. Note that two spatially distant samples might be neighbors in the feature space. We name our algorithm HarmonicGAN as it behaves harmonically along with the circularity and adversarial constraints to learn a pair of dual translations between the source and target domains, as shown in Fig. Distance metrics defined on two alternative features are adopted: (1) a low-level soft RGB histograms; and (2) CNN (VGG) features with pre-trained semantics.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "sK_VkqBv2X.rJUJNwFOc.03", "parag_1": "The experimental results are summarized in Table 5, where we take CIFAR- 10 as ID data and CIFAR- 100 as OOD data. 
Here, the common watermarklearning setup (common) can already lead to improved performance in near-OOD detection compared to the cases without watermarking (w/o watermark). Moreover, watermarking with shiftingaugmentations (perm and rotate) can further boost the detection power of the models, leading to atmost 8 . 60 and 4 . 70 improvements in FPR95 for the softmax and the free energy scoring.", "parag_2": "The near OOD experiments are summarized in Table 5, where we take CIFAR- 10 as ID data and CIFAR- 100 as OOD data. Here, the common learning setup (common) already leads to improved performance compared to the cases without watermarking (w/o watermark). Moreover, watermarking with shifting augmentations (permute and rotate) can further boost the detection power of the models, leading to at most 8 . 60 and 4 . 70 improvements in FPR95 for the softmax and the free energy scoring.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make expression concrete, correct typos.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}} {"id_paragraph": "3686sm4Cs.AJMXMDLVn.00", "parag_1": "SuperWeights, which are linear combination of templates which get reused by multiple layers . These SuperWeights capture a single operation on the input features ( e.g ., edge or texture detectors), and are themselves generated via a weighted combination of one or more templates of trainable parameters held by Weight Templates. 
Thus, to generate the weights for a single layer, we must first construct SuperWeights from the trainable parameters held by Weight Templates (discussed in Section 3.1), and then concatenate together all SuperWeights used by the layer to create its final weights (process illustrated in center-right column of Figure 2).", "parag_2": "SuperWeights, which are linear combination of templates and get reused by multiple layers and capture a single operation on the input features ( e.g ., edge or texture detectors). Then, to generate the weights for a single layer, we must first construct SuperWeights from the trainable parameters held by Weight Templates (discussed in Section 3.1), and then concatenate together all SuperWeights used by the layer to create its final weights (process illustrated in center-right column of Figure 2).", "annot_1": {"annotation": ["Concision"], "instruction": "Edit the first part of this paragraph for conciseness.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the first half of the paragraph shorter by merging the two sentences and removing the details about how superweights are generated.", "annotator": "annotator_07"}} {"id_paragraph": "S1AJ47hjr.B1E3IvnsH.00", "parag_1": "Results for K = 20 are depicted in Figure 2b. All approaches update the Q-function with a learning rate of 10 − 3 on the same fixed batch of 10 3 episodes with a percentage of 10% non-optimal transitions. For the multi-step approaches, we set rollout length n = 4 . Since there is no generalization among states in the tabular setting, we update the Shifted Q-function with a learning rate of 10 −and the Truncated Q-functions with a learning rate of 10 − 3 . Please note, however, that the Shifted Q-function bootstraps from the full Q-estimate.", "parag_2": "Results for K = 20 are depicted in Figure 2b. 
All approaches update the Q-function with a learning rate of 10 − 3 on the same fixed batch of 10 3 episodes with a percentage of 10% non-optimal transitions. For the multi-step approaches, we set rollout length n = 4 . Since there is no generalization among states in the tabular setting, we update the Shifted Q-function with a learning rate of 10 −and the Truncated Q-functions with a learning rate of 10 − 3 .", "annot_1": {"annotation": ["Content_deletion"], "instruction": "I want to remove the last sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Delete the last sentence.", "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.18", "parag_1": "The datasets for training the rotamer density estimator are derived from PDB-REDO (Joosten et al., 2014), a database containing refined X-ray structures in PDB. Structures with resolution worse than 3.5 ˚A are removed. All the protein chains are clustered by 50% sequence identity. This process leads to 38,413 chain clusters, and they are randomly divided into the training set, the validation set, and the test set by 95%/0.5%/4.5%. At training time, the data loader randomly selects a cluster and then randomly chooses a chain from the cluster. During training, the structure is cropped into a patch containing 128 amino acids. The patch is constructed by first choosing a seed amino acid, and then finding its 127 nearest neighbors according to C-beta distances. To emulate mutations, the rotamers of 10% of amino acids in the patch are masked. Noise is added to the rotamers of amino acids whose C-beta distance to the closest masked amino acid is less than 8 ˚A.", "parag_2": "The dataset for training the rotamer density estimator is derived from PDB-REDO (Joosten et al., 2014), which is a database containing refined X-ray structures in PDB. 
The protein chains are clustered based on 50% sequence identity, leading to 38,413 chain clusters, which are randomly divided into the training, validation, and test sets by 95%/0.5%/4.5% respectively. During training, the data loader randomly selects a cluster and then randomly chooses a chain from the cluster to ensure balanced sampling. We crop structures into patches containing 128 residues by first choosing a seed residue, and then selecting its 127 nearest neighbors based on C-beta distances. To simulate mutations, we masked the rotamers of 10% of residues in the patch, and we added noise to the rotamers of residues whose C-beta distance to the closest masked residue was less than 8 ˚A.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Please, remove unnecessary details of this paragraph", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium", "Content_deletion"], "instruction": "Delete unnecessary details. Improve the linking between ideas.", "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.13", "parag_1": "NVIDIA Dynamic Hand Gesture Dataset, or NVGesture in short (Molchanov et al., 2015), consists of 1,532 video clips (1,050 training and 482 test ones) of hand gestures in 25 classes. We sample 20% training samples as the validation set and use depth and RGB as two modalities. We adopt the configuration by Joze et al. (2020) for data preparation and use I3D Carreira & Zisserman (2017) as uni-modal branches and MMTMs as fusion modules in the six final inception modules.", "parag_2": "NVIDIA Dynamic Hand Gesture Dataset (or NVGesture (Molchanov et al., 2015)), consists of 1, video clips (1,050 training and 482 test ones) of hand gestures in 25 classes. We sample 20% training examples as the validation set and use depth and RGB as the two modalities. We adopt the data preparation steps used in Joze et al. 
and use the I3D architecture (Carreira & Zisserman, 2017) as uni-modal branches and MMTMs as fusion modules in the six final inception modules.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Check the citation mark format and rewrite", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Make the paragraph slightly more precise.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.06", "parag_1": "An on-calendar prescription visualization should support tasks that relate to reading calendar entries (e.g., reading calendar events and accessing events’ information) and to managing calendar entries (e.g., adding, editing and deleting calendar entries). Because such a system requires adding visual information about prescriptions on top of an existing calendar, the first step in the design process is to understand how to support tasks related to reading calendar entries - that include information about prescriptions and their potential conflicts. Below we introduce basic requirements for reading tasks related to reading calendar entries. A design that integrates prescription visualization to a calendar must be presented on a calendar. Therefore, such a design should satisfy the following basic usability requirement:", "parag_2": "An on-calendar prescription visualization should support tasks that relate to reading calendar entries (e.g., reading calendar events and accessing event information) and to managing calendar entries (e.g., adding, editing and deleting calendar entries). Representing medication prescriptions in general-purpose calendars requires adding visual information about these prescriptions on top of an existing planned schedule. This means that such a calendar must support tasks related to reading calendar entries (basic function of a calendar) as well as reading information about prescriptions and their potential conflicts. 
Tasks related to reading standard calendar entries have been discussed in previous work [63–65]. These include tasks related to retrieving temporal features and reading event-related information such as date, time, location and purpose. From this, we derive the following basic usability requirement:", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.00", "parag_1": "The usual approach to representing a set of vectors in memory is to store it as a matrix, for which we need to choose some arbitrary order for the elements. To avoid being affected by this arbitrary order, recent set prediction models make use of a property called permutation-equivariance (Zaheer et al., 2017): changing the order of the input elements should change the order of the output elements in the same way.", "parag_2": "The usual approach to representing a multiset or set of vectors in memory is to store it as a list in which the elements are placed in an arbitrary order. To still treat the list as if it were a multiset or set, recent set prediction models avoid being affected by this arbitrary order by using list-to-list functions that are enforced to be permutation-equivariant (Zaheer et al., 2017): changing the order of the input elements should change the order of the output elements in the same way.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.08", "parag_1": "As demonstrated in §3.2, the imbalance in conditional utilization rates is a sign of the model exploiting the connection between the target and only one of the input modalities, ignoring crossmodal information. 
Conditional utilization rate was however can be measured once training was done, making it difficult to use it in real-time during training. We instead derive a proxy metric, called conditional learning speed , that captures relative learning speed between modalities during training.", "parag_2": "As demonstrated in §3.2, the imbalance in conditional utilization rates is a sign of the model exploiting the connection between the target and only one of the input modalities, ignoring crossmodal information. However, conditional utilization rates are measured after training is done, making it expensive to use them in real-time during training. We instead derive a proxy metric, called conditional learning speed , that captures relative learning speed between modalities during training.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make expression concrete, add conjunction.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Revise the wording of the middle sentence in this paragraph.", "annotator": "annotator_02"}} {"id_paragraph": "pAdnbKIAaL.w-Mm4JV4h.01", "parag_1": "Theorem 1 indicates we can not identify the disentangled model because we can not tell whether the latent explanatory factors are entangled by h or not from the marginal distribution. Compared to the parameter-space identifiability in nonlinear ICA (Khemakhem et al., 2020), identifying a disentangled model is more challenging because even if we can identify the right distribution, the function h may still entangle the latent explanatory factors. In fact, recent generative models in disentangled representations always learn the ground-truth distribution, which is an isotropic Gaussian, via regularization (Kumar et al., 2017; Locatello et al., 2020).
Therefore, we formulate our identifiability in the function space:", "parag_2": "Theorem 1 indicates that we can not identify the disentangled model because we can not tell whether the latent explanatory factors are entangled by h or not from the marginal distribution. Compared to the parameter-space identifiability in nonlinear ICA (Khemakhem et al., 2020), identifying a disentangled model is more challenging because even if we can identify the right distribution, the function h may still entangle the latent explanatory factors. Another difference is that the groundtruth distribution of the latent variables in disentangled representations is assumed to be isotropic Gaussian, which diminishes the importance of the parameter-space identifiability. Therefore, we formulate our identifiability in the function space:", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "kBsx5htyKn.qV5njV8W5.01", "parag_1": "• Support Vector Machines (SVMs): on top of texts represented via sparse, TF-IDF bag-of-words (BoW) vectors. SVM is implemented using a linear kernel. • Convolutional Neural Networks (CNNs): the text is represented by a ℓ × d matrix, where ℓ is the sequence length (120) and d is the dimension of the Glove embeddings [Pennington et al., 2014]. For the convolution, we applied filter sizes of 3, 4, and 5, with 128 filters persize.• Bidirectional Long Short-Term Memory (BiLSTM): words are also represented as Glove embeddings, and the maximum sentence length is set such that it covers 90% of the documents [Lowell et al., 2019].", "parag_2": "• Support Vector Machines (SVMs): on top of texts represented via sparse, TF-IDF bag-of-words (BoW) vectors. SVM is implemented using a linear kernel.
• Convolutional Neural Networks (CNNs): the text is represented by a ℓ × d matrix, where ℓ is the sequence length (120) and d is the dimension of the Glove embeddings [Pennington et al., 2014]. For the convolution, we applied filter sizes of 3, 4, and 5, with 128 filters per size. • Bidirectional Long Short-Term Memory (BiLSTM): words are also represented as Glove embeddings, and the maximum sentence length is set such that it covers 90% of the documents [Lowell et al., 2019].", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.01", "parag_1": "Estimating individual treatment effect (ITE) with randomized controlled trials (RCTs) is a common practice in causal inference, which has been widely used in e-commerce (Betlei et al., 2021), education (Cordero et al., 2018) and health care (Schwab et al., 2020). For example,application managers would impose a marketing strategy on the randomly sampled users to see its potential benefit on the click-through rate; drug developers would conduct clinical A/B tests to evaluate the drug effects. Although RCT is the golden standard for causal inference (Judea & Dana, 2018), it is always too expensive to conduct randomized experiments. Hence, observational data that can be acquired without intervention has been a tempting shortcut. For example, drug developers tend to assess drug effects with post-marketing monitoring reports instead of the clinical A/B trials. With the growing access to observational data, estimating ITE from observational data has attracted intense research interest.", "parag_2": "Estimating individual treatment effect (ITE) with randomized controlled trials is a common practice in causal inference, which has been widely used in e-commerce (Betlei et al., 2021), education (Cordero et al., 2018), and health care (Schwab et al., 2020).
For example, drug developers would conduct clinical A/B tests to evaluate the drug effects. Although randomized controlled trials are the gold standard (Pearl & Mackenzie, 2018) for causal inference, it is often prohibitively expensive to conduct such experiments. Hence, observational data that can be acquired without intervention has been a tempting shortcut. For example, drug developers tend to assess drug effects with post-marketing monitoring reports instead of clinical A/B trials. With the growing access to observational data, estimating ITE from observational data has attracted intense research interest.", "annot_1": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove the part of the sentence that talks about application managers. Improve the english of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the sentence 2 more concise. Make the sentence 3 more formal.", "annotator": "annotator_07"}} {"id_paragraph": "LWsBC35BgW.r1AX7JdZ0.00", "parag_1": "The trend we observe from Lemma 3.2 is that activation functions whose Hermite coefficients decay quickly, such as ω σ , result in a faster decay of the NTK coefficients. We remark that analyzing the rates of decay for l ≥ 3 is challenging due to the calculation of F ( p, k, ¯ α l − 1 ) (4). In Appendix B. we provide some preliminary results in this direction, illustrating how, in a specific setting, depth can lead to slower coefficient decay. Additionally, in Appendix B.4 we show that the zeroth coefficient strictly increases with depth. Finally, we briefly pause here to highlight the potential for using a truncation of (5) in order to perform efficient numerical approximation of the infinite width NTK.", "parag_2": "The trend we observe from Lemma 3.2 is that activation functions whose Hermite coefficients decay quickly, such as ω σ , result in a faster decay of the NTK coefficients. 
We remark that analyzing the rates of decay for l ≥ 3 is challenging due to the calculation of F ( p, k, ¯ α l − 1 ) (4). In Appendix B. we provide preliminary results in this direction, upper bounding, in a very specific setting, the decay of the NTK coefficients for depths l ≥ 2 . Finally, we briefly pause here to highlight the potential for using a truncation of (5) in order to perform efficient numerical approximation of the infinite width NTK.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove the second last sentence ", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove unnecessary information, use accurate expression and evidence.", "annotator": "annotator_08"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.05", "parag_1": "According to Ethier et al. [58], a prescription is made of the follow- ing six building blocks which define the modalities of administration for a given drug: 1) drug prescription item (specifying actions re- lated to one or several drugs), 2) drug administration specification (specifying the drug product), 3) drug course specification (specify- ing duration, initiation, and termination), 4) drug dosage specification (specifying the dosage of a drug),5) drug dose administration specification (administration instructions), and6) drug dispensing specification (specifying the dispensing of a drug product). Kumar et al. [59] summarizes these building blocks as superscription (directive to take), inscription (name and dose), subscription (directions to the pharmacists), and signature (Instructions for Patient), and Fox [60] as drug name, drug dose, drug dose units, drug dose frequency, du- ration (comprising start and end date), and indication. Like others (e.g. [14,31]), we adopt the latter classification by Fox [60] because of the simplicity and directness of the naming convention.", "parag_2": "According to Ethier et al. 
[58], a prescription is made of the different parts which define the modalities of administration for a given drug. The underlying building block is the drug prescription item. The drug prescription item comprises the drug administration specification, healthcare objective specification, and drug distribution specification. The drug administration specification is the part that contains information that pertains to drug administration. This information includes the drug product specification (which indicates the name of the drug), drug dosage specification (which indicates the dosage, administration route, and dosing conditions), and drug course specification (which indicates the the starting condition and the duration of the prescription). Kumar et al. [59] summarizes these building blocks as superscription (directive to take), inscription (name and dose), subscription (directions to the pharmacists), and signature (instructions for Patient), and Fox [60] as drug name, drug dose, drug dose units, drug dose frequency, duration (comprising start and end date), and indication. Like others (e.g. [14,30]), we adopt the latter classification by Fox [60] because of the simplicity and directness of the naming convention.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the opening sentences of the paragraph to make them more explicit and clear.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Replace the listing by normal text to better incorporate the six building blocks into the paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "rEgi52I8H8.HZE5FXKX-5.00", "parag_1": "The task we want to address is the inference of the n A = n S − 1 unknown, ancestral sequences (Joy et al., 2016).
We show that our probabilistic model, called Draupnir, is about on par with or better than the accuracy of established ASR methods for a standard experimentally derived data set (Alieva et al., 2008; Randall et al., 2016) and several simulated data sets. ", "parag_2": "The task we want to address is the inference of the n A ≤ n S − 1 unknown, ancestral sequences (Joy et al., 2016). We show that our probabilistic model, called Draupnir, is about on par with or better than the accuracy of established ASR methods for a standard experimentally derived data set (Alieva et al., 2008; Randall et al., 2016) and several simulated data sets. In addition, we show that Draupnir is capable of capturing coevolution among sequence positions, unlike conventional ASR methods.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "g5N2H6sr7.6J3ec8Dl3p.01", "parag_1": "Meanwhile, a feature transformation can be applied to increase filter strength (Li et al., 2019b). Recap the input of GDN in Section 3.2.2 is the smoothed representations H′ and the recovered structure A′ , the inverse version of GCN can be written as:", "parag_2": "Following GCN (Kipf & Welling, 2017), a feature transformation is applied to increase filter strength. Recap the GDN in Section 3.2, the inverse version of GCN can be written as:", "annot_1": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make the text more concise by describing concepts more high-level.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make the text more direct and concise.", "annotator": "annotator_07"}} {"id_paragraph": "9T9ueD0PUu.7Pcj7508I2.01", "parag_1": "MIMO’s experiments build on the Uncertainty Baselines framework.
This framework allows us to benchmark the performance and to compare against high-quality, well-optimized implementations of baseline methods (see framework for further baselines than ones highlighted here). We looked at three model/dataset combinations: ResNet28-10/CIFAR10, ResNet28-10/CIFAR100, and ResNet50/ImageNet. MIMO’s code will be open-sourced.", "parag_2": "We described and analyzed MIMO. In this section, we compare MIMO on benchmarks building on Uncertainty Baselines. This framework allows us to benchmark the performance and to compare against high-quality, well-optimized implementations of baseline methods (see framework for further baselines than ones highlighted here). We looked at three model/dataset combinations: ResNet28-10/CIFAR10, ResNet28-10/CIFAR100, and ResNet50/ImageNet. MIMO’s code is open-sourced.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the first sentence to also explain the structure of the section.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the first sentence to better introduce the section.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.24", "parag_1": "Dataset Every set has a fixed size with n elements of dimensionality d . As previously mentioned, every element is sampled i.i.d. from N ( 0 , I ) . We assign 64,000 samples to the training set and 6, samples to the test set. Changing the random seed also changes the dataset. As loss and evaluation metric, we use Hungarian matching with Huber loss as pairwise loss. We always use the results for a model after the final epoch without any early-stopping (we did not observe overfitting) or selection based on best loss.", "parag_2": "Dataset. Every set has a fixed size with n elements of dimensionality d . Every element is sampled i.i.d. from N ( 0 , I ) . We assign 64,000 samples to the training set and 6,400 samples to the test set. 
Changing the random seed also changes the dataset. As loss and evaluation metric, we use Hungarian matching with the Huber loss (Huber, 1964) (quadratic for distances below 1, linear for distances above 1) as pairwise loss. We always use the results for a model after the final epoch without any early-stopping (we did not observe overfitting) or selection based on best loss.", "annot_1": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ryESgXktV.BJ4dKdWmr.02", "parag_1": "One remaining issue, however, is the ignorance of the cognitive effort required of the human for understanding an explanation. In previous work, the human is expected to understand any explanation providedbefore the task execution, regardless of how much information is present. In this work, we argue that explanations, especially complex ones, should be provided in an online fashion, which intertwines the communication of explanations with plan execution. In such a manner, an online explanation requires less cognitive effort at any specific point of time. The challenge here, however, is that the different parts of an explanation are dependent on each other, which must be taken into account when generating online explanations. The online explanation generation process spreads out the information to be communicated while ensuring that they do not introduce cognitive gaps so that the different parts of the information are perceived in a smooth fashion.", "parag_2": "One remaining issue, however, is the ignorance of the mental workload required of the human for understanding an explanation. In most earlier work on explanation generation, the human is expected to understand any explanation provided regardless of how much information is present and no discussion has been provided on the process for presenting the information. 
In this work, we argue that explanations, especially complex ones, should be provided in an online fashion, which intertwines the communication of explanations with plan execution. In such a manner, an online explanation requires less mental workload at any specific point of time. One of the main challenges here, however, is that the different parts of an explanation could be dependent on each other, which must be taken into account when generating online explanations. The online explanation generation process spreads out the information to be communicated while ensuring that they do not introduce cognitive dissonance so that the different parts of the information are perceived in a smooth fashion.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.03", "parag_1": "Even without exact duplicates in the input, very similar elements can still cause problems for set-equivariant models. For example, in order to learn the push apart function, it is necessary for a model to make a decision of which element is larger. This decision is fundamentally discontinuous. Since most models used in deep learning cannot represent discontinuous jumps, near the discontinuity (when the elements are similar) the modeling error will likely be high. ", "parag_2": "Even without exact duplicates in the input, very similar elements can still cause problems for set-equivariant models. For example, in order to learn the push apart function, it is necessary for a model to decide which element is larger. This decision is fundamentally discontinuous, but most models in deep learning cannot represent discontinuous jumps. A continuous model must approximate the discontinuity, which means that close to the discontinuity (where elements are similar) the modeling error will likely be high. This is a general problem when trying to map similar elements in the input (e.g. 
due to random initialization of Y 0 ) to dissimilar elements in the output.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.21", "parag_1": "Visual Comparisons. We further provide visual comparisons ( × 4) in Fig.4 for challenging cases. We can observe that most of the compared methods cannot recover structural details with proper directions and suffer from blurring artifacts. In contrast, our SRPN-L can better alleviate the blurring artifacts and recover more structures. These visual comparisons are consistent with the quantitative results, demonstrating the superiority of our SRP method.", "parag_2": "Visual Comparisons. The visual comparisons at × 4 scale are shown in Fig. It is easy to see that most of the comparison algorithms can barely recover the lost texture details in the correct direction; moreover, they suffer from blurring artifacts. In stark contrast, our SRPN-Lite effectively alleviates the blurring artifacts and recovers sharper structural fine textures in the right way, justifying the performance superiority of our SRP approach over other methods.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Please, review the following paragraph, rewrite it in a clearer way", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the text and change SRPN-L to SRPN-Lite", "annotator": "annotator_06"}} {"id_paragraph": "aTT-Sh8SIb.IZp_-D078x.00", "parag_1": "Architecture. Inspired by Carion et al. (2020), GPE employs a ConvNeXt (Liu et al., 2022) pretrained on ImageNet (Russakovsky et al., 2015) to extract spatial features of input video frames.", "parag_2": "Architecture. Inspired by Carion et al. (2020), GPE employs the combination of convolution- and attention-based models to extract features. 
Concretely, a ConvNeXt (Liu et al., 2022) pre-trained on ImageNet (Russakovsky et al., 2015) is used to extract spatial features of input video frames.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "YkiRt7L93m.jgDbnUD7s.04", "parag_1": "We have developed a projection method between sets of probability measures supported on R d based on the tangential structure of the 2-Wasserstein space. Our method seeks to best approximate some target distribution that is potentially multivariate, using some chosen set of control distributions. We provide an implementation which gives unique, interpretable weights in a setting of regular probability measures. For general probability measures, we construct our projection by first creating a regular tangent space through applying barycentric projection to optimal transport plans. Our application to evaluating the first- and second-order effects of Medicaid expansion in Montana via an extension of the synthetic controls estimator (Abadie & Gardeazabal, 2003, Abadie et al., 2010) demonstrates the method’s efficiency and the necessity to have a method that is applicable for general proabbility measures. The approach still works without restricting optimal weights to be in the unit simplex, which would allow for extrapolation beyond the convex hull of the control units, providing a notion of tangential regression. It can also be extended to a continuum of measures, using established consistency results of barycenters (e.g. Le Gouic & Loubes, 2017).", "parag_2": "We have developed a projection method between sets of probability measures supported on R d based on the tangent cone structure of the 2-Wasserstein space. Our method seeks to best approximate some general target measure using some chosen set of control measures. 
In particular, it provides a global (and in most cases unique) optimal solution. Our application to evaluating the first- and second-order effects of Medicaid expansion in Montana via an extension of the synthetic controls estimator (Abadie & Gardeazabal, 2003, Abadie et al., 2010) demonstrates the method’s utility in allowing for a method that is applicable for general probability measures. The method still works without restricting optimal weights to be in the unit simplex, which would allow for extrapolation beyond the convex hull of the control units, providing a notion of tangential regression. It can also be extended to a continuum of measures, using established consistency results of barycenters (e.g. Le Gouic & Loubes, 2017).", "annot_1": {"annotation": ["Concision"], "instruction": "Please, make this paragraph more concise, delete unnecessary details", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Combine sentences 3 and 4 into a really short one keeping only the main idea. Improve the choice of wording.", "annotator": "annotator_07"}} {"id_paragraph": "l1R3hsGaL.wDtYQAe21.00", "parag_1": "However, recent work has questioned the efficacy of current OOD methods and reported sobering experimental results under rigorous examination [4, 14]. Nagarajan et al. [12] analyzed OOD failure modes, and found that spurious correlations induce two kinds of skews in the data: geometric and statistical skew. Geometric skew occurs when there is an imbalance between groups of types of datapoints, and leads to misclassification when the balance of groups changes. This understanding hasmotivated to simply remove data from the training data to balance between groups of data points [2].", "parag_2": "However, recent work has questioned the efficacy of current OOD methods and reported sobering experimental results under rigorous examination [3, 10]. Nagarajan et al. 
[8] analyzed OOD failure modes, and found that spurious correlations induce two kinds of skews in the data: geometric and statistical skew. Geometric skew occurs when there is an imbalance between groups of types of data points (such as data points from different environments) which induces a spurious correlation, and leads to misclassification when the balance of groups changes. This understanding has motivated simply removing data points from the training data to balance between groups of data points [2].", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.09", "parag_1": "By applying the implicit function theorem on Equation 8, we obtain the Jacobians ∂ Y ∗ ( z , θ ) /∂ z and ∂ Y ∗ ( z , θ ) /∂ θ , which is all we need in order for iDSPN to fit into an autodiff framework. Appendix B contains the full details of how this works.", "parag_2": "The implicit function theorem allows us to differentiate Equation 8 and obtain the Jacobians ∂ Y ∗ ( z , θ ) /∂ z and ∂ Y ∗ ( z , θ ) /∂ θ , which is all we need in order for iDSPN to fit into an autodiff framework. Appendix B contains the full details on how this works.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Simplify the first sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the readability of those sentences.", "annotator": "annotator_07"}} {"id_paragraph": "NAxP0iFmBr.5QBuYp8GH.00", "parag_1": "We introduce a proactive multi-camera collaboration framework based on multi-agent reinforcement learning (MARL) for real-time distributive adjustments of multi-camera formation for 3D HPE in a human crowd. In our approach, multiple camera agents perform seamless collaboration for successful reconstructions of 3D human poses. 
Additionally, it is a decentralized framework which offers flexibility over the size of formation and eliminates dependency to a control hierarchy or a centralized entity. In regards of the first challenge, we argue the utmost importance of the model’s capability to anticipate human motions and future states of the environment. To model these properties into the state representations, we incorporate World Dynamics Learning into the training of our model, i.e ., learning with five auxiliary tasks to predict target’s position, pedestrians’ positions, self state, teammates’ states and team reward.", "parag_2": "In in work, we introduce a proactive multi-camera collaboration framework based on multi-agent reinforcement learning (MARL) for real-time distributive adjustments of multi-camera formation for 3D HPE in a human crowd. In our approach, multiple camera agents perform seamless collaboration for successful reconstructions of 3D human poses. Additionally, it is a decentralized framework that offers flexibility over the formation size and eliminates dependency on a control hierarchy or a centralized entity. For the first challenge, we argue that the model’s ability to predict human movements and environmental changes is crucial. Thus, we incorporate World Dynamics Learning (WDL) to train a state representation with these properties, i.e ., learning with five auxiliary tasks to predict the target’s position, pedestrians’ positions, self state, teammates’ states, and team reward.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rephrase the text", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Remove redundant words.", "annotator": "annotator_08"}} {"id_paragraph": "fSok4pZwtS.3uQUSQv-u.00", "parag_1": "We observe that after training, TR 2 -GPT2 exhibits an understanding of an optimal strategy to determine when to rotate or not. As shown in Fig. 
4, whenever the agent is in a chamber which permits rotation, the agent attends to positions between the next chamber or next next chamber, all of which are indicative of the orientation of the upcoming corner. Attending to these locations enables the agent to successfully bridge the high- to low domain gap in Couch Moving. A video of full trajectory attention analysis can be found in the supplementary materials and our project page .", "parag_2": "We observe that after training, TR 2 -GPT2 exhibits an understanding of an optimal strategy to determine when to rotate or not. As shown in Fig. 4, whenever the agent is in a chamber which permits rotation, the agent attends to positions between the next chamber or next next chamber, all of which are indicative of the orientation of the upcoming corner. Attending to these locations enables the agent to successfully bridge the high- to low domain gap in Couch Moving. Moreover, the agent learns to pay attention mostly to locations up ahead and learns that the past parts are uninformative, despite being given the full abstract trajectory to process at each timestep. A video of full trajectory attention analysis can be found in the supplementary materials and our project page .", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "dPpEo3v01UG.Sni2QzVBLHd.00", "parag_1": "In this paper, we first empirically show that winning tickets are actually more vulnerable to label noisesetting compared to the subnetwork created with the large learning rate; that is, the generalizationability of winning tickets is degraded due to the learning rate constraint. To explain this, we nextapply the PAC-Bayesian theory to LTH and show that it can explain the relationship between LTHand generalization behavior. 
We use the PAC-Bayes bound for a spike-and-slab distribution to analyze winning tickets, which is based on our experimental findings that reducing the expected sharpness restricted to an unpruned parameter space and adding the regularization of distance from the initial weights can enhance the test performance of winning tickets. Finally, we revisit existing algorithms such as IMP, continuous sparsification [34] from the point of view of the PAC-Bayes bound optimization. This consideration gives an interpretation of these methods as an approximation of bound optimization.", "parag_2": "In this paper, we first empirically show that winning tickets are actually more vulnerable to label noise setting compared to the subnetwork created with the large learning rate; that is, the generalization ability of winning tickets is degraded due to the learning rate constraint. In this connection, we then focus on the two concepts flatness and the distance from the initial weights of the winning tickets. We next apply the PAC-Bayesian theory to LTH on the basis of the flatness motivation and show that it can explain the relationship between LTH and generalization behavior. We use the PAC-Bayes bound for a spike-and-slab distribution to analyze winning tickets, which is based on our experimental findings that reducing the expected sharpness restricted to an unpruned parameter space and adding the regularization of distance from the initial weights can enhance the test performance of winning tickets. Finally, we revisit existing algorithms such as IMP, continuous sparsification [34] from the point of view of the PAC-Bayes bound optimization.
This consideration gives an interpretation ofthese methods as an approximation of bound optimization.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "33RNh69fYq.kMvWVl725x.01", "parag_1": "CIFAR-10 [20] is a classical image classification dataset with 10 categories. Each category has 5000images for training and 1000 images for testing. Existing methods [5, 21, 34] evaluate CIFAR-10 inthe separate case, where one class is viewed as normal samples, and others serve as anomalies. Incontrast, we propose the unified case, which is detailed in Sec. 4.4. ", "parag_2": "CIFAR-10 [22] is a classical image classification dataset with 10 categories. Existing methods [6, 23,36 ] evaluate CIFAR-10 mainly in the one-versus-many setting, where one class is viewed as normalsamples, and others serve as anomalies. Semantic AD [1, 9] proposes a many-versus-one setting,treating one class as anomalous and the remaining classes as normal. Different from both, we proposea unified case ( many-versus-many setting), which is detailed in Sec. 4.4. Metrics .", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "9ALnOEcGN_.4eEIRZ-dm.03", "parag_1": "MIS task. Another limitation is that applying DIMES to a broader ranges of NP-complete problemsthat variables can take multiple values, such as Mixed Integer Programming (MIP), is not trivial andneeds further understanding of the nature of the problems. Checklist For all authors...", "parag_2": "CO solvers. 
There do exist problems beyond this assumption, e.g., Mixed Integer Programming (MIP), where variables can take multiple integer values instead of binary values.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MnewiFDvHZ.iAYttXl-uH.01", "parag_1": "COCO-Soft with adversarial constraints: Adversarial constraints are more difficult to satisfy buthave been considered in the literature [24, 20, 16, 6, 22]. For COCO-Soft with adversarial constraints,the authors in [24] developed an online mirrored descent type algorithm that achieves O p? T q regretand O p T 3 { 4 q violation. Later, [6, 16] generalized the baseline in [24] and still achieve O p ? T q regret and O p T 3 { 4 q violation. With Slater’s condition, [20] presents an online gradient descent algorithmbased on the drift-plus-penalty method [19], which achieves O p? T q regret and O p? T q violation and[22] extended it to an online optimization with sub-modular losses. Note the key improvement inthese work is the Lyapunov drift technique that can provide a refined bound on virtual queues (ordual variables) with Slater’s condition, which thus achieves a smaller soft constraint violation. Itremains open that whether O p T 3 { 4 q violation can be reduced with adversarial constraints (soft orhard) while keeping O p? T q regret without Slater’s condition.", "parag_2": "COCO-Soft with adversarial constraints: Adversarial constraints are more difficult to satisfy buthave been considered in the literature [22, 18, 14, 5, 20]. For COCO-Soft with adversarial constraints,the authors in [22] developed an online mirrored descent type algorithm that achieves O p? T q regretand O p T 3 { 4 q violation. Later, [5, 14] generalized the baseline in [22] and still achieve O p ? T q regret and O p T 3 { 4 q violation. With Slater’s condition, [18] presents an online gradient descent algorithmbased on the drift-plus-penalty method [17], which achieves O p? 
T q regret and O p? T q violationand [20] extended it to an online optimization with sub-modular losses. It remains open that whether O p T 3 { 4 q violation can be reduced with adversarial constraints (soft or hard) while keeping O p? T qregret without Slater’ condition.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove the second last sentence", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove unnecessary sentence from this paragraph to make it shorter.", "annotator": "annotator_07"}} {"id_paragraph": "XwpokDSFR.0bM_dGxEwf.00", "parag_1": "Training flow-based encoders The next challenge is to design a training procedure for our newly proposed architecture. The main issue is that the statistical distance is not differentiable (as classifier µ ∗ is binary) so wewant to replace it with a differentiable proxy. Pinsker’s inequality guarantees that the statistical distance between Zand Z 1 can be bounded using symmetrized KL divergence between Z 0 and Z 1 (see Appendix A. for more detailed proof). We show a high-level description of our training procedure in Algorithm 1. In each step, we sample a batch of x 0 and xfrom the respective distributions and encode them to the representations z 0 and z 1 . We then estimate a symmetrized KL-divergence between distributions Z 0 and Z 1 , denoted as L 0 + L 1 , and combine it with a classification loss L clf using tradeoff parameter γ , and run a gradient step to minimize the joint loss. While we use a convex scalarization scheme to obtain the joint loss in Algorithm 1, our approach is independent of the concrete multi-objective optimization objective. We will demonstrate the compatibility with other scalarization schemes in Appendix C.", "parag_2": "Training flow-based encoders The next challenge is to design a training procedure for our newly proposed architecture. 
The main issue is that the statistical distance is not differentiable (as classifier µ ∗ is binary) so we replace it with a differentiable proxy based on symmetrized KL divergence, shown in Lemma 5.3 (proof is shown in ?? ). We show a high-level description of our training procedure in Algorithm 1. In each step, we sample a batch of x 0 and x 1 from the respective distributions and encode them to the representations zand z 1 . We then estimate a symmetrized KL divergence between distributions Z 0 and Z 1 , denoted as L 0 + L 1 , and combine it with a classification loss L clf using tradeoff parameter γ , and run a gradient step to minimize the joint loss. While we use a convex scalarization scheme to obtain the joint loss in Algorithm 1, our approach is independent of the concrete multi-objective optimization objective. We will demonstrate the compatibility with other scalarization schemes in Appendix C.", "annot_1": {"annotation": ["Concision"], "instruction": "Review this paragraph, remove unnecessary details", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the second sentence more concise and fitting to the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.16", "parag_1": "Representation-based methods mitigate the treatment selection bias and enhance the overall performance. In particular, CFR-WASS reaches an out-of-sample PEHE of 3.207 on ACIC, significantly outperforming most statistical methods. It also gets an AUUC of 0.715 on IHDP, exceeding all other baselines. However, MSE and UCE issues hinder the mitigation of treatment selection bias by these methods. The proposed ESCFR achieves significant improvement over most metrics compared with various state-of-the-art baselines 3 . 
Combined with aforementioned comparisons, we attribute its superiority to our design of the RMPR and PFOR regularizers, making it robust to MSE and UCE.", "parag_2": "Representation-based methods mitigate the treatment selection bias and enhance overall performance. In particular, CFR-WASS reaches an out-of-sample PEHE of 3.207 on ACIC, significantly outperforming most statistical methods. However, the MSE and UCE issues impede these methods from solving the treatment selection bias. The proposed ESCFR achieves significant improvement over most metrics compared with various prevalent baselines 4 . Combined with the comparisons above, we attribute its superiority to the proposed RMPR and PFOR regularizers, which makes it robust to MSE and UCE.", "annot_1": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove redundant information and use more scientific words.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Make this text simpler and more readable. Remove unnecessary details about AUUC.", "annotator": "annotator_07"}} {"id_paragraph": "9wfZbn73om.FhHH15YtKt.03", "parag_1": "For the dog images in Figure 2, although they are quite different at the pixel level, they contain similar semantic meanings. Meanwhile, they have a small augmented distance. Thus, the semantic distance can be partially characterized by the proposed augmented distance. Based on the augmented distance, we now introduce the ( σ, δ ) -augmentation to measure the concentration of augmented data.", "parag_2": "For the dog images in Figure 2 as an example, even though their pixel-level differences are significant, their semantic meanings are similar. Meanwhile, they also have a small augmented distance. Thus, the proposed augmented distance can partially capture the semantic distance. 
Based on the augmented distance, we now introduce the ( σ, δ ) -augmentation to measure the concentration of augmented data.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Use formal words.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve this paragraph (mostly the first sentence) to make it less confusing.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.20", "parag_1": "Figure 13(b) shows the result on both train and test actions. AGILE-Tuned, With pre-summrizer , and No twin-GAT showed the similar performance which is better than No target-q-change . The difference between AGILE-Tuned and No target-q-change shows thatsince the cascaded networkconsiders the intermediate list constructed in decision-making, in the computation of the target qvalues the agent also needs to refer to the q-value of the next position in the same list instead of another list from the future time-step.", "parag_2": "Figure 13(b) shows the result on both train and test actions. AGILE-Tuned, With pre-summrizer , and No twin-GAT showed the similar performance which is better than No target-q-change . The difference between AGILE-Tuned and No target-q-change is that that the cascaded network in AGILETuned uses the target q-value from the intermediate list constructed. 
This is a more accurate target q-value as compared to the target q-value from another list from a future time-step.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the last sentence, splitting it into two to make it easier to understand", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Simplify heavily the explanations in this paragraph keeping the main points.", "annotator": "annotator_03"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.21", "parag_1": "Sequence-based models do not predict ∆∆ G accurately for protein-protein binding in accordance with the discussion in Section 2.2. Figure 3 plots the distribution of per-complex correlation coefficients. Please refer to Section B of the appendix for more results and discussion.", "parag_2": "Sequence-based models do not accurately predict ∆∆ G for protein-protein binding, as discussed in Section 2.2. Figure 3 shows the distribution of per-complex correlation coefficients. Please refer to Section B of the appendix for more results and discussion.", "annot_1": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Simplify the English of this paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the English in this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "NAxP0iFmBr.5QBuYp8GH.01", "parag_1": "• We formulate active 3D human pose estimation in a human crowd problem as a Dec-POMDP, and proposed a novel multi-camera ( n ≥ 3) collaboration framework. • We propose CTCR to improve credit assignment in multi-camera collaboration and demonstrate notable improvements in reconstruction accuracy compared to both passive and active baselines.• We introduce five auxiliary tasks called to help the model learn environment dynamics, further enhancing the model’s ability to handle highly dynamic scenes. 
• We contribute high-fidelity environments built for simulating realistic- looking human crowds with authentic behaviors, along with visualization software for frame-by-frame video analysis.", "parag_2": "• We formulate the active multi-camera 3D human pose estimation problem as a Dec-POMDP, and proposed a novel multi-camera ( n ≥ 3) collaboration framework based on MARL. • We introduce five auxiliary tasks to help the model learn the dynamics of the environment, further enhancing the model’s ability to handle highly dynamic scenes. • We propose CTCR to address the credit assignment problem in MARL and demonstrate notable improvements in reconstruction accuracy compared to both passive and active baselines. • We contribute high-fidelity environments built for simulating realistic-looking human crowds with authentic behaviors, along with visualization software for frame-by-frame video analysis.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Switch the second and the third sentence and then rephrase the first three sentences", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Change the position of two points.", "annotator": "annotator_08"}} {"id_paragraph": "xTHtKjLGM2.2fh264u-HOf.00", "parag_1": "Attention in the policy network are as follows, d model = 64 , n head = 6 , d v = 32 , d k = 32 . The maximum number of search epochs N is limited to 300 (including NFS, DIFER and F ETCH ), and the number of sampling also parallelized workers per round, is W = 24 . The maximum feature order K is set by K = 3 . Other methods are limited to run for 5 hours respectively, which is the average running time of F ETCH . All methods take their default parameters wherever possible.", "parag_2": "Attention in the policy network are as follows, d model = 64 , n head = 6 , d v = 32 , d k = 32 . The maximum number of search epochs N is limited to 300 (including DIFER and F ETCH ). 
Due to the requirements of NFS in their paper, we set N to 100 epochs for it. The number of sampling also parallelized workers per round, is W = 24 . The maximum feature order K is set by K = 3 . Other methods are limited to run for 5 hours respectively, which is the average running time of F ETCH . All methods take their default parameters wherever possible.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "zzdwUcxTjWY.rVxmgW1FRK.03", "parag_1": "Jung et al. (2021) proposed a standardized max-logit approach for detecting outliers in semantic segmentation, which is a post-hoc approach. Zhao et al. ; Grcic et al. (2021) trained a generative model and synthesize outliers in the pixel space, which cannot be applied to object detection where a scene consists of both known and unknown objects. Their regularization terms are based on entropy maximization, which is different from VOS .", "parag_2": "Jung et al. (2021) proposed to detect outliers for semantic segmentation task. Jung et al. Grcic et al. (2021) trained a generative model and synthesize outliers in the pixel space, which cannot be applied to object detection where a scene consists of both known and unknown objects. The regularization is based on entropy maximization, which is different from VOS .", "annot_1": {"annotation": ["Concision"], "instruction": "Shorten this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the first sentence a lot shorter.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.11", "parag_1": "By default, we sample a random initial set Y 0 ∼ N ( 0 , I / 10) to start the optimization with. Similar to DSPN, we can also use a learned initial set Y 0 to start closer to a solution. Unlike DSPN however, implicit differentiation treats the optimizer of Equation 7 as a black box, so there is no gradient signal for Y 0 . 
We therefore need to include a regularizer in Equation 6 to give us a gradient for Y 0 , for example by adding the regularizer from Rajeswaran et al. (2019): λ || Y − Y \n || 2 . Wewill use this when comparing iDSPN to DSPN performance in Subsection 4.2.", "parag_2": "By default, we sample a random initial set Y 0 ∼ N ( 0 , I / 10) to start the optimization with. Similar to DSPN, we can also use a learned initial set Y 0 to start closer to a solution. However, implicit differentiation treats the optimizer of Equation 7 as a black box, so there is no gradient signal for Y 0 . We therefore need to include a regularizer in Equation 6 to give us a gradient for Y 0 , for example by adding the regularizer from Rajeswaran et al. λ 2 || Y − Y \n || 2 . We use this only in Section 4.2 to make the forward passes of iDSPN and DSPN the same so that we can compare them fairly.", "annot_1": {"annotation": ["Rewriting_medium", "Rewriting_light"], "instruction": "Clarify your last sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Make the last sentence more precise.", "annotator": "annotator_07"}} {"id_paragraph": "Sx6SnclSL.nQLOUHvx8n.03", "parag_1": "Shape Classification. We fine-tune Point-M2AE on two shape classification datasets: the widely adopted ModelNet40 [43] and the challenging ScanObjectNN [37]. We follow Point-BERT to usethe voting strategy [25] for fair comparison on ModelNet40, which tests the model for several timeswith different point cloud augmentation and ensembles the predictions. To handle the noisy spatial structures, we increase k of k -NN into {32, 16, 16} for ScanObjectNN to encode local patterns with larger receptive fields. As reported in Table 2, Point-M2AE achieves 94.0% accuracy on ModelNet40with 1024 points per sample, which surpasses Point-BERT fine-tuned with 1024 points by +0.8% and 8192 points by +0.2%. 
For ScanObjectNN in Table 3, our Point-M2AE outperforms the secondbest Point-BERT by a significant margin, +3.79%, +0.69% and +3.36%, respectively for the three splits, indicating our great advantages under complex circumstances by multi-scale encoding. As", "parag_2": "Shape Classification. We fine-tune Point-M2AE on two shape classification datasets: the widely adopted ModelNet40 [53] and the challenging ScanObjectNN [45]. For local spatial attention layers, we set the ball queries’ radii of 3-scale point clouds as {0.32, 0.64, 1.28}. We follow Point-BERT to use the voting strategy [29] for fair comparison on ModelNet40. To handle the noisy spatial structures, we increase k of k -NN into {32, 16, 16} for ScanObjectNN to encode local patterns with larger receptive fields. As reported in Table 2, Point-M2AE achieves 94.0% accuracy on ModelNet with 1024 points per sample, which surpasses Point-BERT fine-tuned with 1024 points by +0.8% and 8192 points by +0.2%. For ScanObjectNN in Table 3, our Point-M2AE outperforms the secondbest Point-BERT by a significant margin, +3.79%, +0.69% and +3.36%, respectively for the three splits, indicating our great advantages under complex circumstances by multi-scale encoding. As", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.05", "parag_1": "This section proposes the ESCFR approach to tackle the treatment selection bias. It is built upon the stochastic optimal transport framework for distribution discrepancy minimization across treatment groups. Based on the framework, we propose a relaxed mass preserving regularizer to address the sampling effect, and a proximal factual outcome regularizer to handle the unobserved confounders.", "parag_2": "In this section, we present the proposed Entire Space CounterFactual Regression (ESCFR) approach based on optimal transport to tackle the treatment selection bias. 
We first illustrate the stochastic optimal transport framework for distribution discrepancy minimization across treatment groups. Based on the framework, we then propose a relaxed mass-preserving regularizer to address the sampling effect, and a proximal factual outcome regularizer to handle the unobserved confounders.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1-LZxvKX.rJ009I8RX.04", "parag_1": "• Full dense : original large and dense model, with N parameters; • Thin dense : original model with less wide layers, such that it had (1 − s ) N parameters; • Static sparse : original model reparameterized to sparsity s , then trained with connectivity fixed; • Compressed sparse : state-of-the-art compression of the original model by iterative pruning and retraining the original model to target sparsity s (see Appendix A for details of implementation).", "parag_2": "• Full dense : original large and dense model, with N parameters; • Thin dense : original model with less wide layers, such that it had (1 − s ) N parameters; • Static sparse : original model reparameterized to sparsity s , then trained with connectivity fixed; • Compressed sparse : state-of-the-art compression of the original model by iterative pruning and retraining the original model to target sparsity s (Zhu & Gupta, 2017); • DeepR : sparse model trained by using Deep Rewiring (Bellec et al., 2017); • SET : sparse model trained by using Sparse Evolutionary Training (Mocanu et al., 2018). Note that compressed sparse is a compression method that starts training with a dense model, whereas DeepR and SET , like ours, are dynamic reparameterization techniques that maintain sparsity throughout training. 
See Appendix A for hyperparameters used in the experiments.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "JPHVd17f9N.nH2bI_9hXk.00", "parag_1": "The easiest way to ensure m n = m n +1 , such that the acceptance probability remains nonzero, is to set the time step h = 0 . This, however, makes the sampler useless. If we take the limit h → 0 instead, the Euler-Maruyama scheme gets arbitrarily close to the true SDE trajectory. Crucially however, per theorem 1, the acceptance probability remains 0 for any h > 0 . Thus, it is impossible to use the acceptance probability to monitor the discretisation error. The Euler-Maruyama scheme cannot satisfy detailed balance. Note that, so far, we have not considered stochastic gradients: this result is valid forany choice of ∇ θ U ( θ n ) .", "parag_2": "The easiest way to ensure m n = m n +1 , such that the acceptance probability remains nonzero, is to set the time step h = 0 . This, however, makes the sampler useless. If we take the limit h → 0 instead, the Euler-Maruyama scheme gets arbitrarily close to the true SDE trajectory. Crucially however, per theorem 1, the acceptance probability remains 0 for any h > 0 . Thus, it is impossible to use the acceptance probability to monitor the discretisation error. The Euler-Maruyama scheme cannot satisfy detailed balance. 
So far, we have not considered stochastic gradients, but this result includes them: it still holds if we substitute an arbitrary g n for ∇ θ U ( θ n ) .", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make the last sentence logical.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Make the last sentence more formal and academic.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.08", "parag_1": "Prescriptions come with constraints associated with drug dosage units, frequency and indications. Constraints can occur within the same medication and between different medications. An example of within-medication constraints is found in the the prescription Take 600 mg of Ibuprofen three times a day as needed with food , that has three constraints: 1) that 600mg should be taken at a given time; 2) that the maximum number of intakes per day is three; and 3) that the drug must be taken with food. But medications are often more complex. Consider for example the following prescription:", "parag_2": "Prescriptions come with constraints, e.g., drug dosage, adminis- tration frequency and other indications. Constraints may relate to a single medication or a set of different medications. An example of within-medication constraints is found in the the prescription Take 600 mg of Ibuprofen three times a day as needed with food , that has three constraints: 1) that 600mg should be taken at a given time; 2) that the maximum number of intakes per day is three; and 3) that the drug must be taken with food. But medications are often more complex. 
Consider for example the following prescription:", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of the first sentence of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Rephrase the two first sentences for better readability.", "annotator": "annotator_07"}} {"id_paragraph": "FNrdpf-6LM.K7SWBMXiWw.00", "parag_1": "The dice-enterprise produces unbiased ancestor variables. For details on efficiency and correctness, please refer to Section 3.1. Our proposed VSMC-PRC bound is constructed through a marginal likelihood estimator obtained by combining the SMC sampler with a PRC step and dice-enterprise . The variance of estimators obtained through SMC-PRC particle filter is usually low (Peters et al., 2012). Therefore, we expect VSMC-PRC to be a tighter bound compared to the standard SMC based bounds used in recent works (Maddison et al., 2017; Naesseth et al., 2017; Le et al., 2017).", "parag_2": "The dice-enterprise produces unbiased ancestor variables. Note that we can easily control the efficiency of the proposed dice-enterprise through hyper-parameter M (similar to Eq. 2) in contrast to existing Bernoulli factory algorithms (Flegal et al., 2012; Schmon et al., 2019). For details on efficiency and correctness, please refer to Section 3.1 and Section 3.3. Our proposed VSMC-PRC bound is constructed through a marginal likelihood estimator obtained by combining the SMC sampler with a PRC step and dice-enterprise . The variance of estimators obtained through SMC-PRC particle filter is usually low (Peters et al., 2012). 
Therefore, we expect VSMC-PRC to be a tighter bound compared to the standard SMC based bounds used in recent works (Maddison et al., 2017; Naesseth et al., 2017; Le et al., 2017).", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "zDDQ6YzcK8.rFbDzCD2Zq.00", "parag_1": "Attention mechanism in transformer is the key component which models relations between feature representations. In this section, we visualize the self-attentions mechanism of our Entroformer, focusing on a few points in the image. The attention map is rescaled for a better view. In Figure 8, we visualize self-attention heads from different points These attention heads exhibit different behaviours that seem related to structure, semantic and color respectively. These visualizations show how the Entroformer finds related context to support its distribution prediction for the current latent. This allows the Entroformer to capture richer dependencies of the latents and achieve better compression performance. In Figure 9, we visualize self-attentions heads separately from one point.", "parag_2": "Attention mechanism in transformer is the key component which models relations between feature representations. In this section, we visualize the self-attentions mechanism of our Entroformer, focusing on a few points in the image. In Figure 8, the attention maps from different points exhibit different behaviours that seem related to structure, semantic and color respectively. It show how the Entroformer finds related context to support its distribution prediction for the current latent. 
In Figure 9, we visualize self-attentions heads separately from one point.", "annot_1": {"annotation": ["Concision"], "instruction": "Revise this paragraph to be more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "This paragraph needs to be shorter, do it by removing details but don’t touch the first sentence.", "annotator": "annotator_07"}} {"id_paragraph": "3686sm4Cs.AJMXMDLVn.02", "parag_1": "SuperWeight Clustering approach described in Section 3.3. We provide two baselines: a manually defined heuristic, depth binning, which as used for our initialSuperWeight Clusters in Section 3. and prior work that proposed to clusters coefficients α in Eq. (1) to group layers (Plummer et al., 2022). Our results show that clustering based on the coefficients performs on par with the heuristic baseline. In contrast, our gradient analysis approach (Section 3.3) takes into account the direction of change rather than just the current value of the coefficients. As a result we obtain a 2% gain on individual models and a small boost to ensembling performance with our approach (Table 3). We show a visualization of SuperWeight cluster assignment in Appendix G.", "parag_2": "SuperWeight Clustering approach described in Section 3.3. We provide four baselines: Shared Coefficients , which learns SuperWeight Clusters, but shares coefficients between all layers ( i.e ., removing Section 3.2.2); Single SuperWeight Cluster , which allows layers to have their own coefficients, but does not learn clusters ( i.e ., removing Section 3.3); Depth-binning , a heuristic used for our initial SuperWeight Clusters in Section 3.3; and Coefficient Clustering (Plummer et al., 2022), which clusters coefficients α in Eq. (1) to group layers. Our results show that our approach outperforms all of our baselines. Notably, we find we that Coefficient Clustering performs in par or worse than other baselines. 
In contrast, our gradient analysis approach (Section 3.3) takes into account the direction of change rather than just the current value of the coefficients. Thus, we obtain a 2% gain on individual models and a small boost to ensembling performance with our approach (Table 3). We show a visualization of SuperWeight cluster assignment in Appendix C.6.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.09", "parag_1": "Methods for predicting mutational effects for single proteins are either structure-based or sequencebased (evolution-based). Structure-based approaches can also be divided into biophysical methods, statistical methods, and deep learning-based methods. They aim at predicting thermal stability or fitness of the protein rather than binding free energy between proteins (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017). The mutational effects of single proteins can also be predicted using only sequences via mining its evolutionary history. Traditionally, this is done by performing statistics on multiple sequence alignments (MSAs), which are constructed by searching from largescale sequence databases (Hopf et al., 2017; Riesselman et al., 2018; Rao et al., 2021; Luo et al., 2021; Frazer et al., 2021; Notin et al., 2022). Recent studies show that protein language models (PLMs)trained on large sequence databases are capable of evaluating mutations without MSAs (Meier et al., 2021).", "parag_2": "The prediction of mutational effects for single proteins can be achieved using either structure-based or sequence-based (evolution-based) approaches. 
Structure-based methods can be categorized into biophysical, statistical, and deep learning-based methods, which aim to predict the thermal stability or fitness of the protein rather than the binding free energy between proteins (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017; Lei et al., 2023). Sequence-based methods rely on the mining of evolutionary history, done by performing statistics on multiple sequence alignments (MSAs) constructed from large-scale sequence databases (Hopf et al., 2017; Riesselman et al., 2018; Rao et al., 2021; Luo et al., 2021; Frazer et al., 2021), or leveraging protein language models (PLMs) (Meier et al., 2021; Notin et al., 2022).", "annot_1": {"annotation": ["Rewriting_medium", "Content_addition"], "instruction": "Review the following paragraph", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make this paragraph shorter and more fitted to academic style.", "annotator": "annotator_07"}} {"id_paragraph": "XocwbW4QXb.lPaXZHF25y5.00", "parag_1": "Window based critical event prediction: Intervention is a binary decision. At each time-step adecision must be made to intervene or not. This is often modeled directly as a binary classification problem where we look for event occurrence within a specified future window. In window basedintervention (WBI), time-steps where an event is present within a lookahead period are labeled aspositive class examples and the rest are labeled as negative examples. A classifier may then be learntto predict the probability of the positive class. An optimal threshold on this probability is then tunedover a validation set that measures costs of triggered interventions directly to choose an optimal triggerthreshold. WBI policies are standard practice in predictive maintenance[33, 2, 22, 2, 38, 40, 13].", "parag_2": "Window based critical event prediction: Intervention is a binary decision. 
At each time-step adecision must be made to intervene or not. This is often modeled directly as a binary classification analysis approach. We may treat unobserved critical events as censored. See Section 4.1 for detailsa stopping time in probability theory is random a variable τ such that 1 ( τ = n ) is a function of X n . So we can determine if τ = j or not by only considering X j [29]. problem where we look for event occurrence within a specified future window. In window basedintervention (WBI), time-steps where an event is present within a lookahead period are labeled aspositive class examples and the rest are labeled as negative examples. A classifier may then be learntto predict the probability of the positive class. An optimal threshold on this probability is then tunedover a validation set that measures costs of triggered interventions directly to choose an optimal triggerthreshold. WBI policies are standard practice in predictive maintenance[33, 2, 22, 2, 38, 40, 13].", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.19", "parag_1": "Conflict overlays were easily identifiable on all the designs. The user-preferred way to represent such conflicts is to use indicators for the position of medication entries that are involved in the conflict. The connectors for conflicting pairs should avoid thick solid lines, that create clutter. Instead, thin or dotted lines should be employed. While lines are effective in connecting conflicting entries, employing line style to indicate the nature of the conflict is contrariwise. Dif- ferent line styles may appear similar at a distance and hence fail in communicating the intended message. Line style should be carefully employed to imply an action.", "parag_2": "Conflict overlays were easily identifiable on all the designs. Par- ticipants preferred the use of indicators for the position of medication entries that are involved in the conflict. 
The connectors (lines) for conflicting entries should use thin or dotted lines rather than thick solid lines. Participants found that different line styles may appear similar at a distance and hence fail in communicating the nature of a conflict.", "annot_1": {"annotation": ["Rewriting_heavy", "Concision"], "instruction": "Rewrite this paragraph to be considerably more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "I want to trim my paragraph so that the readers can read more easily.", "annotator": "annotator_09"}} {"id_paragraph": "jyac3IgQ44.f4au9jfat5.04", "parag_1": "Relative position encoding is necessary for transformer-based networks because fine-grained positioninformation may be lost in high-level features with the deepening of the network. To make better useof position information to facilitate multi-scale feature learning in our case, we adopt a scale-awareadaptive relative position encoding strategy inspired by [30, 46, 52], which can generate the positionalbias dynamically with scales for different head groups.", "parag_2": "The 3D point cloud feature generally contains the original coordinates information, which voxelswill inherit. However, the fine-grained location information may be blurred with the deepening ofthe network, so the relative position encoding is necessary. Furthermore, since MsSVT can extractmulti-scale features, the position-coding should differ at different scales, even for the same relativeposition. In light of these, we adopt an adaptive, scale-aware relative position encoding strategyinspired by [25, 38, 44].", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_heavy"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.04", "parag_1": "Graph Attention Network : The action graph G is input to a GAT. 
Since the graph is fullyconnected, we choose an attention-based graph architecture which can learn to focus on the most relevant actions in the given set. A similar insight was employed by Zambaldi et al. (2018) where the entities inferred from the visual observation are assumed to form a fully connected graph. To enable propagation of sufficient relational information between actions, we use two graph attention layers with an ELU activation in between (Clevert et al., 2015). We found a residual connection after the second GAT layer was crucial in experiments.", "parag_2": "Graph Attention Network : The action graph G is input to a GAT. Since the graph is fullyconnected, we choose an attention-based graph network that can learn to focus on the most relevant actions in the available action set. A similar insight was employed by Zambaldi et al. (2018) where the entities inferred from the visual observation are assumed to form a fully connected graph. To enable propagation of sufficient relational information between actions, we use two graph attention layers with an ELU activation in between (Clevert et al., 2015). We found a residual connection after the second GAT layer was crucial in experiments, while multi-headed attention did not help.", "annot_1": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}} {"id_paragraph": "l1D720s69O.vCKjjOP1ze.02", "parag_1": "Before we define the diffusion distance, we briefly introduce the intuition behind it: two nodes are considered similar when they diffuse in a similar way through the graph, and therefore when they influence the other nodes in a similar manner (Fouss et al., 2012). In other words, two nodes are close if they are in the same cluster which has a consistent local structure. 
More precisely, the diffusion distance at time K between nodes i and j is defined as follows:", "parag_2": "Two nodes are considered similar when they are diffused in a similar way through the graph, and therefore when they influence the other nodes in a similar manner (Fouss et al., 2012). In other words, two nodes are close if they are in the same cluster which has a consistent local structure. More precisely, the diffusion distance at time K between nodes i and j is defined as follows:", "annot_1": {"annotation": ["Concision"], "instruction": "Remove the ideas which are not particularly essential for the overall paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Delete the first part of the first sentence and adapt it in consequence.", "annotator": "annotator_07"}} {"id_paragraph": "txe2sPPkO.id6Xr1pUq.01", "parag_1": "Proof. Let A psoc be a real-world adversary that semi-honestly corrupts T out of N servers at the beginning of the protocol Π train . We now present the steps of the ideal-world adversary (simulator) S f for A psoc . Note that, in the semi-honest setting S f already posses the input of A psoc and the final output shares of b val . S f acts on behalf of N − T honest servers, sets their shares as random values in Z 2 ℓ and simulates each step of Π train protocol to the corrupt servers as follows:", "parag_2": "Proof. Given the training framework securely realizes each of the building block used in protocol Π train , we now argue Π train securely realizes functionality F pTrain . Let A psoc be a real-world adversary that semi-honestly corrupts T out of N servers at the beginning of the protocol Π train . We now present the steps of the ideal-world adversary (simulator) S f for A psoc . Note that, in the semi-honest setting S f already posses the input of A psoc and the final output shares of b val . 
S f acts on behalf of N − T honest servers, sets their shares as random values in Z 2 ℓ and simulates each step of Π train to the corrupt servers as follows:", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.15", "parag_1": "Data and Evaluation. We use DIV2K dataset (Timofte et al., 2017) as training data, following most recent works (Timofte et al., 2017; Lim et al., 2017; Zhang et al., 2018a; Haris et al., 2018). For testing, we use five standard benchmark datasets: Set5 (Bevilacqua et al., 2012), Set14 (Zeyde et al., 2010), B100 (Martin et al., 2001), Urban100 (Huang et al., 2015), and Manga109 (Matsui et al., 2017). The SR results are evaluated with PSNR and SSIM (Wang et al., 2004) on Y channel of transformed YCbCr space. We also provide model size and FLOPs (a.k.a. Mult-Adds) comparisons.", "parag_2": "Data and Evaluation. We use DIV2K dataset (Timofte et al., 2017) and Flickr2K Lim et al. (2017) as training data, following most recent works (Timofte et al., 2017; Lim et al., 2017; Zhang et al., 2018a; Haris et al., 2018). For testing, we use five standard benchmark datasets: Set5 (Bevilacqua et al., 2012), Set14 (Zeyde et al., 2010), B100 (Martin et al., 2001), Urban100 (Huang et al., 2015), and Manga109 (Matsui et al., 2017). The SR results are evaluated with PSNR and SSIM (Wang et al., 2004) on the Y channel in YCbCr space.", "annot_1": {"annotation": ["Development", "Content_deletion"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development", "Content_deletion"], "instruction": NaN, "annotator": "annotator_06"}} {"id_paragraph": "H1bNM3ctm.SJJNigQAX.00", "parag_1": "We have presented an analytical method to predict the precision required for partial sum accumulation in the three GEMM functions in deep learning training. 
Our results prove that our method is able to accurately pinpoint the minimum precision needed for the convergence of benchmark networks to the full-precision baseline. While we have demonstrated the applicability of our work to computer vision datasets, in principle, the theoretical concepts are application agnostic. On the practical side, this analysis is a useful tool for hardware designers implementing reduced-precision FPUs, who in the past have resorted to computationally prohibitive brute-force emulations. We believe this work addresses a critical missing link on the path to truly low-precision floating-point hardware for DNN training.", "parag_2": "We have presented an analytical method to predict the precision required for partial sum accumulation in the three GEMM functions in deep learning training. Our results prove that our method is able to accurately pinpoint the minimum precision needed for the convergence of benchmark networks to the full-precision baseline. Our theoretical concepts are application agnostic, and an interesting extension would be to consider recurrent architectures such as LSTMs. In particular, training via backpropagation in time could make the GRAD accumulation very large depending on the number of past time-steps used. In such a case, our analysis is of great relevance to training precision optimization. On the practical side, this analysis is a useful tool for hardware designers implementing reduced-precision FPUs, who in the past have resorted to computationally prohibitive brute-force emulations. We believe this work addresses a critical missing link on the path to truly low-precision floating-point hardware for DNN training.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "zzdwUcxTjWY.rVxmgW1FRK.01", "parag_1": "What makes OOD detection particularly challenging? 
To explain this, modern neural networks are commonly optimized on thein-distribution (ID) data only, and lack explicit knowledge of unknowns during training time. The resulting decision boundary, despite being useful on ID tasks such as classification, can undesirably cover OOD data. We illustrate this in Figure 1. The ID data (gray) consists of three class-conditional Gaussians, on which a three-way softmax classifier is trained. The resulting classifier is overconfident for regions far away from the ID data (see the red shade in Figure 1(b)). When such models are directly employed, the decision boundary can be ill-fated for OOD detection. Ideally, a model should learn a more compact decision boundary that produces low uncertainty for the ID data, with high OOD uncertainty elsewhere ( e.g. , Figure 1(c)). However, achieving this goal is non-trivial due to the lack of supervision signal of unknowns. It is hard to comprehensively anticipate unknown data in advance under a large space of OOD uncertainty. This prompts thefollowing question: How can we enable unknown-aware deep neural networks without explicit knowledge of the unknowns in advance?", "parag_2": "The vulnerability to OOD inputs arises due to the lack explicit knowledge of unknowns during training time. In particular, neural networks are typically optimized only on the in-distribution (ID) data. The resulting decision boundary, despite being useful on ID tasks such as classification, can be ill-fated for OOD detection. We illustrate this in Figure 1. The ID data (gray) consists of three class-conditional Gaussians, on which a three-way softmax classifier is trained. The resulting classifier is overconfident for regions far away from the ID data (see the red shade in Figure 1(b)), causing trouble for OOD detection. Ideally, a model should learn a more compact decision boundary that produces low uncertainty for the ID data, with high OOD uncertainty elsewhere ( e.g. , Figure 1(c)). 
However, achieving this goal is non-trivial due to the lack of supervision signal of unknowns. This motivates the question: Can we synthesize virtual outliers for effective model regularization?", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph to make it more concise and convincing.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_heavy", "Concision"], "instruction": "Rewrite this paragraph to make it more precise, clear and concise while fitting the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "LWsBC35BgW.r1AX7JdZ0.01", "parag_1": "In general, Theorem 3.1 does not provide a straightforward path to analyzing the decay of the NTK power series coefficients for depths greater than two. This is due to the difficulty in analyzing F ( p, k, ¯ α l − 1 ) , which we recall is the sum of all ordered products of k elements of ¯ α l − 1 whose indices sum to p , defined in (4). However, in the setting where the squares of the Hermite coefficients, and therefore the series ( α p, 2 ) ∞ p =0 , decay at an exponential rate, this quantity can be characterized and therefore an analysis of the impact of depth conducted. Although admittedly limited in scope, we highlight that this setting is relevant for the study of Gaussian activation functions and radial basis function (RBF) networks, which to the best of our knowledge have not been analyzed in prior works. Under an additional assumption that the activation function has zero bias, which helps simplify the analysis, the following lemma precisely describes the evolution of the coefficients of the related Gaussian Process kernel. We leave relaxing this zero bias assumption, which we expect to not be materially different from the non-zero bias setting, as well as only enforcing exponential decay asymptotically and exploring other decay patterns, to future work. 
", "parag_2": "In general, Theorem 3.1 does not provide a straightforward path to analyzing the decay of the NTK power series coefficients for depths greater than two. This is at least in part due to the difficulty of analyzing F ( p, k, ¯ α l − 1 ) , which recall is the sum of all ordered products of k elements of ¯ α l −whose indices sum to p , defined in (4). However, in the setting where the squares of the Hermite coefficients, and therefore the series ( α p, 2 ) ∞ p =0 , decay at an exponential rate, this quantity can be characterized and therefore an analysis, at least to a certain degree, of the impact of depth conducted. Although admittedly limited in scope, we highlight that this setting is relevant for the study of Gaussian activation functions and radial basis function (RBF) networks, which to the best of our knowledge have not been analyzed in prior works. We will also make the additional simplifying assumption that the activation function has zero bias, unfortunately this further reduces the applicability of the following results to any activation function commonly used in practice. We leave the study of relaxing this zero bias assumption, perhaps only enforcing exponential decay asymptotically, as well as a proper exploration of other decay patterns, to future work. The following lemma precisely describes, in the specific setting considered here, the evolution of the coefficients of the Gaussian Process kernel with depth.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Change the order of sentences to make the logic clearer.", "annotator": "annotator_08"}} {"id_paragraph": "IoTyuVEanE.Et-c0vQfeb.03", "parag_1": "• Majority voting of the labeling functions, where ties are broken by choosing the most common class as specified by the labling functions. 
• BERT [23] is a transformer-based language model that has shown exceptional performance on a wide-range of NLP tasks. We train BERT with the majority vote labels from our LFs.• MSWS [16] is a denoising method for multi-source weak supervision that co-trains cotrainsa rule denoiser with a neural classifier to learn optimal weightings for rules and labelunmatched samples. ", "parag_2": "• Majority voting of the labeling functions, where ties are broken by choosing the most common class as specified by the labeling functions. • BERT [23] is a transformer-based language model that has shown exceptional performance on a wide-range of NLP tasks. We train BERT with the majority vote labels from our LFs. • Epoxy [26] is a recent weak supervision paradigm that uses combined pretrained embeddings with anchored weakly-labeled examples to enable interactive model training. • MSWS [16] is a denoising method for multi-source weak supervision that co-trains a rule denoiser with a neural classifier to learn optimal weightings for rules and label unmatched samples. • Fully Supervised We train a BERT model with a fully connected layer to provide baseline performance for a model trained with ground-truth labels.", "annot_1": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.07", "parag_1": "The property of exclusive multiset-equivariance motivates us to study DSPN in greater detail. There are of course other factors that determine the quality of a set predictor, so we begin by pointing out some remaining problems of DSPN.", "parag_2": "Despite being exclusively multiset-equivariant, DSPN is outperformed by the set-equivariant Slot Attention (Locatello et al., 2020). 
We begin by highlighting some issues in DSPN that might be the cause of this, then propose approximate implicit differentiation as a solution.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "PcoXwm4jl.UxdreZBFz.00", "parag_1": "For all experiments we use an image size of 128 × 128 and a batch size of 12 to 16 depending on memory usage. We use the RMSProp (Tieleman & Hinton. (2012)) optimizer with a learning rate of 1 × 10 − 5 for the foreground module and Adam (Kingma & Ba (2014)) optimizer with a learning rate of 1 × 10 − 3 for the background module except for Figure 5, for which we use a learning rate of 1 × 10 − 4 as SPAIR to ensure fair comparison. We use gradient clipping with a maximum norm of 1.0. For Atari games, we find it beneficial to set α to be fixed for the first several thousand steps, and vary the actual value and number of steps for different games. This allows both the foreground as well as the background module to learn in the early stage of training.", "parag_2": "For all experiments we use an image size of 128 × 128 and a batch size of 12 to 16 depending on memory usage. For the foreground module, we use the RMSProp (Tieleman & Hinton. (2012)) optimizer with a learning rate of 1 × 10 − 5 except for Figure 5, for which we use a learning rate of 1 × 10 − 4 as SPAIR to ensure fair comparison. For the background module, we use the Adam (Kingma & Ba (2014)) optimizer with a learning rate of 1 × 10 − 3 . We use gradient clipping with a maximum norm of 1.0. For Atari games, we find it beneficial to set α to be fixed for the first several thousand steps, and vary the actual value and number of steps for different games. 
This allows both the foreground as well as the background module to learn in the early stage of training.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Split the long sentences into more concise sentences.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "The second sentence is too long, split it and make it more readable.", "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.23", "parag_1": "One block comprised a random ordering of 2 ( A ) × 2 ( W ) × 3 ( I ) × = 12 conditions. Participants first practiced 48 trials (4 blocks) to become familiar with the movement of Figure 9. Then, participants performed 20 blocks (240 trails) for data collection. Each participant took approximately 10 min for this experiment. We instructed participants the same instruction as in experiment 2.", "parag_2": "One set comprised a random ordering of 2 ( A ) × 2 ( W ) × 3 ( I ) =conditions. Participants first practiced 48 trials (4 sets) to become fa- miliar with the movement shown in Figure 9. Then, the participants performed 20 sets (240 trials) for data collection. Each participant took approximately 10 min to complete this experiment, and the same instruction as in Experiment 2 were provided. No participant performed a clutching action in this experiment as well.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.20", "parag_1": "Calibrated modality utilization We train multi-modal DNNs as described in §5.2, using each training algorithm. We set the imbalance parameter α to 0.01 and the re-balancing window size Q to 5 for ModelNet40, and α to 0.1 and Q to 5 for NVGesture.", "parag_2": "Calibrated modality utilization We train multi-modal DNNs as described in §5.2, using the guided, the random, and the conventional training algorithm (referred to as vanilla ). 
For ModelNet40, we set the imbalance tolerance parameter α to 0.01 and the re-balancing window size Q to 5. For NVGesture, we use α of 0.1 and Q of 5.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Make expression concrete, make format consistent.", "annotator": "annotator_08"}} {"id_paragraph": "H1NH-JS0X.r1_JuWar4.00", "parag_1": "Macro-intent labeling function. We extract weak macro-intent labels ˆ g kt for each player k as done in (Zheng et al., 2016). We segment the left half-court into a 10 × 9 grid of 5 ft × 5 ft boxes. The weak macro-intent ˆ g kt at time t is a 1-hot encoding of dimension 90 of the next box in which player k is stationary (speed (cid:107) x kt +1 − x kt (cid:107) 2 below a threshold). The shared global macro-intent g t is the concatenation of individual macro-intents. Figure 4 shows the distribution ofextracted macro-intents for each player and pseudocode can be found in appendix C.", "parag_2": "Macro-intent labeling function. We extract weak macro-intent labels ˆ g kt for each player k as done in (Zheng et al., 2016). We segment the left half-court into a 10 × 9 grid of 5 ft × 5 ft boxes. The weak macro-intent ˆ g kt at time t is a 1-hot encoding of dimension 90 of the next box in which player k is stationary (speed (cid:107) x kt +1 − x kt (cid:107) 2 below a threshold). The shared global macro-intent g t is the concatenation of individual macro-intents. Figure 4 shows the distribution of macro-intents for each player. 
We refer to this labeling function as LF-stationary (pseudocode in appendix D).", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "eYzycFMXwr.8-KFmZiCM.00", "parag_1": "To resolve the aforementioned problem, researchers and practitioners focused on the use of the model-parallel technique to partition large models over multiple accelerators/workers; this allows to further scale up model parameters significantly. However, in conventional model parallelism, computing resources are severely underutilized as only one accelerator can work at a time. Recently, some studies proposed the pipeline-parallel technique to accelerate conventional model parallelism.", "parag_2": "To resolve the aforementioned problem, researchers and practitioners focused on the use of the model-parallel technique to partition large models over multiple accelerators/workers; this allows to further scale up model parameters significantly. However, the conventional model parallelism, which includes inter-layer model parallelism and intra-layer model parallelism, either suffers from low resource utilization or high communication overhead Narayanan et al. Recently, some studies proposed the pipeline-parallel technique to accelerate conventional model-parallel training.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "x8CcXI4Ei.4yg90qT46L.02", "parag_1": "C1) Focusing on the relatively tractable linear models, we derive the excess risk forthe minimum- norm solution to overparameterized gradient-based meta learning including MAML andiMAML. 
Specifically, the excess risk upper bound adopts the following form Cross-task variance + Per-task variance + Bias where the cross-task variance quantifies the error caused by finite task number and thevariation of the ground truth task specific parameter, which is a unique term compared to", "parag_2": "C1) We derive the upper bound of the excess risk for overparameterized nested meta learningincluding MAML and iMAML, with their minimum norm solution. Specifically, the excess risk upper bound adopts the following formCross-task variance + Per-task variance + Bias where the cross-task variance quantifies the error caused by finite task number and thevariation of the ground truth task specific parameter, which is a unique term compared to", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make expression concise.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "NAxP0iFmBr.5QBuYp8GH.04", "parag_1": "Figure 3 is an example of how to plug numbers into Eq. 6 to compute CTCR for each of the three cameras. There is a breakdown of Eq. 6 just below it. But just in case if it seems too obscure, here is a more intuitive description of Eq. The CTCR is incentivized by the Shapley Value, so the main idea is that the overall optimality needs to also account for the optimality of every possible sub-formation. In the context of an active HPE task, for a camera agent to receive the highest CTCR possible, its current position and view must be optimal both in terms of its current formation and any sub-formation possible.", "parag_2": "Figure 3 is an example of using Eq. 6 to compute CTCR for each of the three cameras. The CTCR is incentivized by the Shapley Value. The main idea is that the overall optimality needs to also account for the optimality of every possible sub-formation. 
For a camera agent to receive the highest CTCR possible, its current position and view must be optimal both in terms of its current formation and any sub-formation possible.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove the second sentence and make the paragraph more concise", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove unnecessary information, use concise expression.", "annotator": "annotator_08"}} {"id_paragraph": "vrqwgu1o8-.9igvll21Xa.00", "parag_1": "While neuronal spiking occurs on a temporal scale of milliseconds, the behavior spans the timescales from milliseconds to hours and even days (Mathis et al., 2018). As a result, the recorded neuronal and behavioral variables may operate at different timescales and exhibit different statistics.", "parag_2": "While neuronal spiking occurs on a temporal scale of milliseconds, the behavior spans the timescales As a result, the recorded neuronal and behavioral variables may operate at different timescales and exhibit different statistics.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "8_oadXCaRE.Kt4-LpYuM.03", "parag_1": "Euclidean norm, and the plasticity rule we derived updated its weights and biases in an unsupervised manner. We used K = 2000 neurons. First we trained the network for 100 epochs, i.e. randomly ordered presentations of the 60000 training digits. In our validation testing we found that softmax with a base of 1000 (see Section 2.7) performed best. The learning rate η of Eq. 8 decreased linearly from 0.03 to 0 throughout training. Each training experiment we will describe was repeated five times with varying random initializations and input order. We will report the mean and standard deviation of accuracies. 
Inference of the input labels by the WTA network of 2000 neurons was performed in two different ways. The first approach is single-layer, where after training the network we assigned a label to each of the 2000 neurons, in a standard approach that is used in unsupervised clustering. Namely, for each neuron, we found the label of the training set that makes it win the WTA competition most often. In this single-layer approach, this is the only time when labels were used, and at no point were weights updated using labels. The second approach was two-layer and based on supervised training of a perceptron classifier on top of the WTA layer. The classifier layer was trained with the Adam optimizer and cross-entropy loss for 60 epochs, while the previously-trained WTA parameters were frozen. SoftHebb achieved an accuracy of (96 . 18 ± 0 . 06)% and (96 . 94 ± 0 . 02)% in its 1- and 2-layer form respectively. To confirm the strength of the soft WTA approach combined with training the priors through biases, which makes the network Bayesian, we also trained the weights of a network with a hard-WTA setup, i.e. where the strongest-activated neuron’s output y k is 1, and the other neurons are suppressed to 0, for each input. We found that an initial learning rate of 0.05 was best for the hard-WTA network. The SoftHebb model outperformed the hard WTA (Fig. 1A).", "parag_2": "Euclidean norm, and the plasticity rule we derived updated its weights and biases in an unsupervised manner. We used K = 2000 neurons. First we trained the network for 100 epochs, i.e. randomly ordered presentations of the 60000 training digits. Each training experiment we will describe was repeated five times with varying random initializations and input order. We will report the mean and standard deviation of accuracies. Inference of the input labels by the WTA network of neurons was performed in two different ways. 
The first approach is single-layer, where after training the network we assigned a label to each of the 2000 neurons, in a standard approach that is used in unsupervised clustering. Namely, for each neuron, we found the label of the training set that makes it win the WTA competition most often. In this single-layer approach, this is the only time when labels were used, and at no point were weights updated using labels. The second approach was two-layer and based on supervised training of a perceptron classifier on top of the WTA layer. The classifier layer was trained with the Adam optimizer and cross-entropy loss for 100 epochs, while the previously-trained WTA parameters were frozen. SoftHebb achieved an accuracy of (96 . 31 ± 0 . 06)% and (97 . 80 ± 0 . 02)% in its 1- and 2-layer form respectively. To test the strengths of the soft-WTA approach combined with training the priors through biases, which makes the network Bayesian, we also trained the weights of a hard-WTA network. The SoftHebb model slightly outperformed the hard WTA (Fig.", "annot_1": {"annotation": ["Content_deletion", "Concision"], "instruction": "Remove training details. Rewrite last paragraph to shorten.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove unnecessary details to make the paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.08", "parag_1": "Nevertheless, these pre-training strategies do not capture well the foundation of protein-protein interactions. More recently, the success of protein language models in has drawn interest in adopting the mask-predict(BERT) paradigm to protein 3D structures (Wang et al., 2018; Shroff et al., 2020; Jing et al., 2020; Yang et al., 2022; Zhang et al., 2022; Hsu et al., 2022). These methods partially mask amino acid types on a given protein backbone and use neural networks to recover the masked information. 
It has been reported that the difference in the probability of amino acid types before and after mutation show correlation to the change in binding free energy (Yang et al., 2022). Hence, they can serve as unsupervised predictors of the mutational effects on binding.", "parag_2": "However, most pre-training tasks are not designed to capture the foundation of protein-protein interactions. Unsupervised models adopt the mask-predict paradigm to protein 3D structures, partially masking amino acid types on a given protein backbone, and recovering the masked information using neural networks (Wang et al., 2018; Shroff et al., 2020; Jing et al., 2020; Yang et al., 2022; Hsu et al., 2022). These models can serve as unsupervised predictors of the mutational effects on binding, as the difference in the probability of amino acid types before and after mutation correlates mildly to the change in binding free energy.", "annot_1": {"annotation": ["Rewriting_heavy", "Content_deletion"], "instruction": "Please, rewrite this paragraph.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision", "Rewriting_heavy"], "instruction": "Rewrite this paragraph to make it shorter while keeping all the informations.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.18", "parag_1": "Visualization of Pruning Process . To figuratively understand how SRP works, in Fig. 3 we plot the mean L 1 -norm of filters in two layers of EDSR baseline during SRP training. The filters are split into two groups, pruned and kept. As seen, the mean L 1 -norm of the pruned filters goes down gradually because the penalty grows stronger and stronger, driving them towards zero. Interestingly, note the L 1 -norms of the kept filters arise themselves (we do not have any regularization term to encourage them to grow larger). It means the network learns to recover itself , akin to the compensation effect in human brain (Duffau et al., 2003). 
We provide more visualization results in the appendix.", "parag_2": "Visualization of Pruning Process . In Fig. 3, we visualize the pruning process by plotting the mean L 1 -norm of filters in two layers of EDSR baseline during SRP training. The filters are split into two groups, kept filters and pruned filters. As is shown in the figure, the mean L 1 -norm of the pruned filters goes down gradually because the penalty grows stronger and stronger, driving them towards zero. Interestingly, note the L 1 -norms of the kept filters arise themselves. Recall that there is no explicit regularization to promote them to grow. In other words, the network learns to recover by itself , akin to the compensation effect found in human brain (Duffau et al., 2003).", "annot_1": {"annotation": ["Rewriting_light", "Content_deletion"], "instruction": "Rewrite this paragraph in a more formal style and remove any unnecessary details.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_light", "Content_deletion"], "instruction": "Rewrite this paragraph in a more formal style and remove the last sentence", "annotator": "annotator_06"}} {"id_paragraph": "Z4wa1HedCf.I3bfjZFi15.00", "parag_1": "Although the theoretical understanding of the predictive capacity of high-dimensional ML models continues to advance rapidly, a parallel rigorous theory for UQ is comparatively lagging. The prominent heuristic in modern ML that larger models will typically perform better has become almost axiomatic. However, it is only more recently that this heuristic has become represented in statistical theory (Bartlett et al., 2020; Wang et al., 2021; Derezinski et al., 2020b). 
Typically, these arguments involve applications of random matrix theory (Edelman & Rao, 2005; Paul & Aue, 2014), most notably the Marchenko-Pastur law, concerning limits of spectral distributions under large data/large dimension regimes.", "parag_2": "Although the theoretical understanding of the predictive capacity of high-dimensional ML models continues to advance rapidly, a parallel rigorous theory for UQ is comparatively lagging. The prominent heuristic in modern ML that larger models will typically perform better has become almost axiomatic. However, it is only more recently that this heuristic has become represented in the theory through the characterisation of benign overfitting (Bartlett et al., 2020). In particular, the double descent curve extends the bias-variance tradeoff curve to account for improving performance with higher model complexity (Belkin et al., 2019; Wang et al., 2021; Derezinski et al., 2020b) (see Figure 1(right)). Typically, these arguments involve applications of random matrix theory (Edelman & Rao, 2005; Paul & Aue, 2014), notably the Marchenko-Pastur law, concerning limits of spectral distributions under large data/large dimension regimes.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "SYn-Ewh5STR.U0gqkwDXp.00", "parag_1": "In the previous section, we saw that reward hacking often leads to phase transitions in agent behaviour. Furthermore, in applications like traffic routing or COVID response, the true reward may be observed only sporadically or not at all. Blindly optimizing the proxy in these cases can lead to catastrophic failure.", "parag_2": "In the previous section, we saw that reward hacking often leads to phase transitions in agent behaviour. Furthermore, in applications like traffic routing or COVID response, the true reward may be observed only sporadically or not at all. 
Blindly optimizing the proxy in these cases can lead to catastrophic failure (Zhuang & Hadfield-Menell, 2020; Taylor, 2016).", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.10", "parag_1": "We denote O + as the set of segments among images with overlapping object categories. For example, if pixel i is labeled as sofa and another image also contains sofa, all the segments from that image are included in O + ; otherwise they are considered negative segments O − . This semantic context relationship does not require localization annotations or cues yet enhances the global and higher-level regularization on the pixel embedding.", "parag_2": "Let O + denote the set of segments in images with overlapping categories. For example, if pixel i is labeled as sofa and another image also contains sofa , all the segments from that image are included in O + ; otherwise they are considered negative segments in O − . This semantic context relationship does not require localized annotations yet imposes regularization on pixel feature learning.", "annot_1": {"annotation": ["Concision"], "instruction": "Rewrite this paragraph to be more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Make it more concise.", "annotator": "annotator_07"}} {"id_paragraph": "6TxW-2B74.psZBwLHkc2.00", "parag_1": "Result Analysis. We empirically evaluate this method on CIFAR-10 with ResNet-18 backbone (details in Appendix B.1). From Figure 2a, we can see that as we adopt a larger memory bank with a large aggregation step s , the linear probing accuracy continues to grow larger and larger. 
Although the overall accuracy is still relatively low, we can observe a clear benefit of multi-stage aggregation for preventing feature collapse, which previously has only been possible via asymmetric architectural designs. The success of applying multi-stage aggregation to contrastive learning not only suggests a new approach for negative-free contrastive learning, but also helps verify our established connection between contrastive learning and MP-GNNs. In Section D, we also show that the proposed multi-stage aggregation can also bring clear benefits on benchmark datasets.", "parag_2": "Result Analysis. From Figure 2a, we can see that when directly applying the multi-stage alignment objective, a larger number of aggregation epochs could indeed improve linear accuracy and avoid full feature collapse. Although the overall accuracy is still relatively low, we can observe a clear benefit of multi-stage aggregation (better than random guess when s ≥ 2 ), which was only possible using asymmetric modules, e.g., SimSiam’s predictor (Chen & He, 2021). Inspired by this observation, we further combine multi-stage aggregation with SimSiam to further alleviate feature collapse. As shown in Table 1a, the multi-stage mechanism indeed brings consistent and significant improvements over SimSiam on all three datasets, which clearly demonstrates the benefits of multi-stage aggregation on improving existing non-contrastive methods by further alleviating their feature collapse issues.", "annot_1": {"annotation": ["Rewriting_heavy", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aFzc_2nNz.WIdHkazOg.01", "parag_1": "One way to check for any correspondences is to simply group the validation samples into M equal-mass bins (henceforth called validation-bins) and compare the confidence with the training samples that fall into the same validation-bins. 
But first we clarify a few notations that are of interest.", "parag_2": "One way to check for any correspondences is to simply group the validation samples into M equal-mass bins (henceforth called validation-bins) and compare the confidence with the training samples that fall into the respective validation-bin boundaries. Before proceeding further, we first clarify a few notations of interest.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make the language of this paragraph more formal.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the writing in the linking of the two last sentences to make it more formal.", "annotator": "annotator_07"}} {"id_paragraph": "FeGECDygT.LtDlwD0vb3.00", "parag_1": "Rotated MNIST and Permuted MNIST. Since DPGrad is designed for linear regression, we provide two variants of DPGrad — one is a modification suitable for multi-class classification, the other is a modification suitable for non-linear featurizers. Detailed numbers and figures can be found in Appendix E. In brief, both algorithms alleviate catastrophic forgetting and perform much better than vanilla SGD. Furthermore, the performance of both is much more stable than OGD and the accuracy remains at a high level across tasks.", "parag_2": "Rotated MNIST and Permuted MNIST. Since DPGrad is designed specifically for linear regression, we provide two variants of DPGrad (without provable guarantees on their performance, of course) — one is a modification suitable for multi-class classification, the other is a modification suitable for non-linear featurizers. Detailed numbers and figures can be found in Appendix E. In brief, both algorithms alleviate catastrophic forgetting and perform much better than vanilla SGD.
Furthermore, the performance of both is much more stable than OGD and the accuracy remains at a high level across tasks.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nIRqrHmpIE.pL1E71anH.00", "parag_1": "To visualize the advantage of our approach, consider the semi-circle domain in Figure 1, adapted from [3]: a 2-dimensional agent must navigate to a goal, located somewhere on the semi-circle. A task therefore corresponds to the goal location, and the task distribution is uniform on the 1-dimensional semi-circle.", "parag_2": "To visualize the advantage of our approach, consider the HalfCircle domain in Figure 1, adapted from [3]: a 2-dimensional agent must navigate to a goal, located somewhere on the half-circle. A task therefore corresponds to the goal location, and the task distribution is uniform on the 1-dimensional half-circle.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Use \"half\" instead of \"semi.\"", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Replace all semi-circle by half-circle.", "annotator": "annotator_07"}} {"id_paragraph": "aikvsSUCXs.-mHlwLvqO.00", "parag_1": "In Table 1, we evaluate BiLAW using two relatively small networks on two datasets: MNIST (LeCun & Cortes, 2010) and Fashion MNIST (Xiao et al., 2017). Tiny-CNN is a convolutional network with convolutional and 2 dense layers. FC1 corresponds to a single hidden layer feedforward network with 1024 hidden units. The details of the architectures are given in the Appendix. We consider robustness with respect to ℓ ∞ distance. We use two criteria: clean test accuracy (TA) and robust test accuracy (PGD) for a given threshold ε . Robust test accuracy is computed using Projected Gradient Descent (PGD) (Madry et al., 2018) with 20 iterations. In all test cases, our method matches the performance of GAIRAT and out-performs the other methods. 
However, we note the overall distribution of both clean and robust accuracy is tight. We note a potential drawback of reweighting algorithms: the MNIST and Fashion-MNIST datasets contain a non-trivial number of misclassified samples which can influence performance (Müller & Markert, 2019). For algorithms which perform weighted training, possible large weights on outliers or mislabeled examples may influence classification performance. We will investigate this in the context of adversarial training in future work", "parag_2": "In Table 1, we evaluate BiLAW using two relatively small networks on two datasets: MNIST (LeCun & Cortes, 2010) and Fashion MNIST (Xiao et al., 2017). Tiny-CNN is a convolutional network with 2 convolutional and 2 dense layers. FC1 corresponds to a single hidden layer feedforward network with 1024 hidden units. The details of the architectures are given in the Appendix. We consider robustness with respect to ℓ ∞ distance. We use three criteria: clean test accuracy (clean), robust test accuracy (PGD) for a given threshold ε and AutoAttack (AA). Robust test accuracy is computed using Projected Gradient Descent (PGD) (Madry et al., 2018) with 20 iterations. In all test cases, our method matches the performance of GAIRAT and out-performs the other methods for clean and PGD accuracy and we out-perform all reweighting methods on AA accuracy. However, we note the overall distribution of both clean and robust accuracy is tight. We note a potential drawback of reweighting algorithms: the MNIST and F-MNIST datasets contain a non-trivial number of misclassified samples which can influence performance (Müller & Markert, 2019). For algorithms which perform weighted training, possible large weights on outliers or mislabeled examples may influence classification performance. 
We will investigate this in the context of adversarial training in future work", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.07", "parag_1": "The addition of medication entries in the calendar introduces new visual elements. Medication entries also communicate more details (D1 - D5) than the standard title, time, and location of standard calendar events. Therefore, a design that integrates prescription visualization to a calendar should satisfy the following prescription-related usability requirement:", "parag_2": "The addition of medication entries in the calendar introduces new visual elements. Medication entries communicate more details (D1 - D5) than the standard title, time, and location of regular calendar entries [66–68]. Integrating prescriptions in a general-purpose calendar involves integrating Personal Health Information (PHI) and related activities into calendars [69–71]. Therefore, a design that integrates prescription visualization to a calendar should satisfy the following prescription-related usability requirement:", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "jzQGmT-R1q.ugUt9B3XaO.01", "parag_1": "We are particularly interested in understanding why capacity loss occurs. Two possible causes are immediate: the effect of bootstrapping , and the effect of sequential training . The effect of bootstrapping on capacity has been studied in other contexts (Mobahi et al., 2020; Kumar et al., 2021). We aim to isolate the effect of sequential prediction tasks on capacity loss. To minimize the potential for confounding factors to influence our results, we construct a toy prediction problem on the MNIST data set. 
We first consider labels computed by a randomly initialized neural network f θ : we transform input-label pairs ( x, y ) from the canonical MNIST dataset to ( x, f θ ( x )) , where f θ ( x ) is the network output. To generate a new task, we simply reinitialize the network; our evaluations consist of 10 such iterations. We further consider a ‘sparse-reward’ version of MNIST: for each of 10 iterations i , we use the label ˆ y i = [ y < i ] . This mimics sparse-reward environments where the agent initially obtains no reward in the environment, then gradually improves its policy and thus increases its prediction targets over the course of training.", "parag_2": "We are particularly interested in understanding why capacity loss occurs. Two possible causes are immediate: the effect of bootstrapping , and the effect of sequential training . The effect of bootstrapping on capacity has been studied in other contexts (Mobahi et al., 2020; Kumar et al., 2021). We aim to isolate the effect of sequential prediction tasks on capacity loss. To minimize the potential for confounding factors to influence our results, we construct our toy iterative prediction problems on the MNIST data set, which consists of images of handwritten digits and corresponding labels, and manually construct a sequence of targets which the network must fit over the course of training. We first consider labels computed by a randomly initialized neural network f θ : we transform input-label pairs ( x, y ) from the canonical MNIST dataset to ( x, f θ ( x )) , where f θ ( x ) is the network output. To generate a new task, we simply reinitialize the network; our evaluations consist of 10 iterations of label-generation followed by a training period during which we run a gradient-based optimizer on the network starting from the parameters we obtained in the previous iteration. 
We further consider a ‘sparse-reward’ version of MNIST: for each of 10 iterations i , we use the label ˆ y i = [ y < i ] , where y is the true label of the image. For example, at the first iteration, all images are assigned label zero. At the second iteration, the images of the digit zero are assigned label one, while all other inputs retain the zero label. This continues until all inputs are assigned label one at the final iteration. We follow the same training procedure in both cases: optimizing the network for a fixed number of steps on one set of labels, then generating new labels and running the optimization algorithm again from the parameters obtained in the previous phase.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.26", "parag_1": "Note that we use a ResNet18 image encoder, while Slot Attention uses a simpler 4-layer convolutional neural network with 5 × 5 kernels. This difference is not so easily compared: we compress the image into a 512d vector while they operate on a feature map. Their final feature map for CLEVR has 32x spatial positions with 64d each, so it can be argued that their latent space is 65536-dimensional. It is natural that a tighter bottleneck requires more processing. ResNet18 also applies several strided convolutions early, so the amount of processing between it and the Slot Attention image encoder are not too dissimilar. This is reflected by the fact that the time taken to process a sample is similar between ResNet18 + iDSPN and CNN + Slot Attention.", "parag_2": "Note that we use a ResNet18 image encoder, while Slot Attention uses a simpler 4-layer convolutional neural network with 5 × 5 kernels. This difference is not so easily compared: we compress the image into a 512d vector while they operate on a feature map. 
Their final feature map for CLEVR has 32x spatial positions with 64d each, so it can be argued that their latent space is 65536-dimensional. It is natural that a tighter bottleneck requires more processing. ResNet18 also applies several strided convolutions early, so the amount of processing between it and the Slot Attention image encoder are not too dissimilar. This is reflected by the fact that the time taken to process a sample is similar between ResNet18 + iDSPN and CNN + Slot Attention, even though we use a smaller batch size and could gain a speed-up from better parallelization for higher batch sizes. A small difference between the default Slot Attention setup and our setup is that they normalize the 3d coordinates to be within [0, 1] while we use an interval of [-1, 1]. In Table 3, Slot Attention* uses an interval of [-1, 1] (the same as iDSPN) while Slot Attention † uses the interval of [-3, 3] (the default coordinate range for this dataset). This increases the weight on the coordinates versus the other attributes, so it improves AP for strict thresholds but trades off classification performance at a threshold of infinity.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "HkW3nTM6X.S1d278zJ4.01", "parag_1": "We propose an approach for learning non-parametric spatio-temporal drift and diffusion functions of stochastic differential equation (SDE) systems such that the resulting simulated state distributions match data. The experiment on a real world data set shows that our model can better fit complex dynamics than the spatial counterpart. An interesting future research direction is the study of various vector field kernels, such as divergence-free, curl-free or spectral kernels [12]. 
The model could be extended to have an observation model, e.g., GPLVM or deep neural network, rather than PCA.", "parag_2": "We propose an approach for learning non-parametric spatio-temporal drift and diffusion functions of stochastic differential equation (SDE) systems such that the resulting simulated state distributions match data. The experiment on a real world data set shows that our model can better fit complex dynamics than the spatial counterpart. This increase in model capacity, however, results in larger data set requirements and makes the model more vulnerable to overfitting, which could be better accounted for using e.g. variational inference. An interesting future research direction is the study of various vector field kernels, such as divergence-free, curl-free or spectral kernels [12]. The model could be extended to have an observation model, e.g., GPLVM or deep neural network, rather than PCA.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "xV0XmrSMtk.sYfR73R9z.00", "parag_1": "Our considerations are focused on invariances of typical combinatorial problems under specific transformations of the cost vector. These transformations usually manifest as projections or normalizations , e.g. as an immediate consequence of the linearity of the objective, the combinatorial solver is agnostic to normalization of the cost vector. Such invariances, if unattended, can hinder fast convergence when used in combination with adaptive optimizers, or can result in divergence and", "parag_2": "Our considerations are focused on invariances of typical combinatorial problems under specific transformations of the cost vector. These transformations usually manifest as projections or normalizations , e.g. 
as an immediate consequence of the linearity of the objective, the combinatorial solver is agnostic to normalization of the cost vector. Such invariances, if unattended, can hinder fast convergence due to the noise of spurious irrelevant updates, or can result in divergence and", "annot_1": {"annotation": ["Development", "Concision"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "ryESgXktV.BJ4dKdWmr.01", "parag_1": "In our prior work (Chakraborti et al. 2017), we encapsulate such inconsistencies as model differences , while considering the discrepancies between the human and its own model when generating explanations. An explanation then becomes a request to the human to adjust the model differences in his mind so that the robot’s behavior would make sense in the updated model, which captures the human’s expectation of the robot. The general decision-making process of an agent in the presence of such model differences is termed model reconciliation (Chakraborti et al. 2017; Zhang et al. 2017).", "parag_2": "To address this challenge, the agent should consider the discrepancies between the human and its own model while generating explanations. In our prior work [7], we encapsulate such inconsistencies as model differences . An explanation then becomes a request to the human to adjust the model differences in his mind so that the robot’s behavior would make sense in the updated model, which is used to produce the human’s expectation of the robot. 
The general decision-making process of an agent in the presence of such model differences is termed model reconciliation [7], [8].", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Revise the opening of this paragraph to make it more compelling.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the first sentence to make it more convincing.", "annotator": "annotator_07"}} {"id_paragraph": "D5US-p_bk.OfHW0CMHoN.00", "parag_1": "DDIM sampler) and for higher number of function evaluations performance is better than the DDIM sampler (benefit coming from the Probability Flow Momentum Sampler). That said, there is no spot in which the Predictor-Correct performs better than both DDIM and the Probability Flow Momentum Sampler. We encourage future research in identifying better pairs of Predictor-Correctors that might outperform both the Predictor and the Corrector in some regime.", "parag_2": "DDIM sampler) and for higher number of function evaluations performance is better than the DDIM sampler (benefit coming from the Probability Flow Momentum Sampler). There is a spot, at NFEs, where the Predictor-Corrector sampler is better than both the Predictor and the Corrector. We encourage future research in identifying even better pairs of Predictor-Corrector samplers.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Change negative form to positive form, remove unnecessary details.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.21", "parag_1": "Another limitation is that since the study was online, we did not have the privilege of observing participant’s full activity cycles. It is likely that remote sessions also lead to participants employing less think-aloud than when participating in person. 
We also constrained participation to people between the age of 35 and 65 who were either on multiple prescription medications or played the role of caregivers to others on multiple medications. While this allowed us to capture insights for that specific population, these insights do not necessarily generalize to other populations. Our calendar designs were also suited for relatively large screens such as laptops and tablets and were not evaluated on mobile devices. Finally, the study was only limited to tasks that relate to reading calendar entries. In the future, tasks such as adding and modifying medication entries should be included.", "parag_2": "Another limitation is that since the study was online, we did not have the privilege of observing participant’s full activity cycles. It is likely that remote sessions also lead to participants employing less think-aloud than when participating in person. We also constrained participation to people between the age of 35 and 65 who were either on multiple prescription medications or played the role of caregivers to others on multiple medications. While this allowed us to capture insights for that specific population, these insights do not necessarily generalize to other populations. Our calendar designs were suited for relatively large screens such as laptops and tablets and were not evaluated on mobile devices. Given the focus of our study on medication entries, we opted for assigning the same color to all non-medication calendar entries. However, events in real-life calendars are often of several colors. The added colors likely increase visual complexity and visual clutter that must be considered in future studies. Finally, the study was only limited to tasks that relate to reading calendar entries. 
In the future, tasks such as adding and modifying medication entries should be included.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_09"}} {"id_paragraph": "sIqSoZ9KiO.KLlOZMoJ9G.00", "parag_1": "Other notable modifications include: (a) gated residual instead of a sum residual in the ResNet block (Figure 4), for improved training stability; (b) more flexible observation model based on a discretized mixture of logistics instead of a discretized logistic distribution, for improved performance and (c) mixed-precision (Micikevicius et al., 2018), which reduced memory requirements thus allowing training with larger batches. Without SDNs, we refer to this architecture as IAF-VAE+.", "parag_2": "Other notable include Figure 4) training stability b more flexible based on a mixture of logistics instead of a discretized logistic distribution, for improved performance; and (c) mixed-precision (Micikevicius et al., 2018), which reduced memory requirements thus allowing training with larger batches. Without SDNs, we refer to this architecture as IAF-VAE+.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "oxDnGzBe8n.r-P4EFl_4.01", "parag_1": "Table 1 reports the results. We have the following observations. First, the ASR of E VAS is significantly higher than ResNet18 and the other two random arches. For instance, on CIFAR10, E VAS is 21.8%, 28.3%, and 34.5% more effective than ResNet18 and random arches, respectively. Second, E VAS has the highest ASR across all the datasets. Recall that we use the same arch throughout different datasets. This indicates that the attack vulnerability probably resides at the arch level and is agnostic to concrete datasets. Third, all the arches show higher ASR on simpler datasets such as CIFAR10. 
This may be explained by that more complex datasets ( e.g. , more classes, higher resolution) imply more intricate manifold structures, which may interfere with arch-level backdoors.", "parag_2": "Table 1 reports the results. We have the following observations. First, the ASR of E VAS is significantly higher than ResNet18 and the other two random arches. For instance, on CIFAR10, E VAS is 21.8%, 28.3%, and 34.5% more effective than ResNet18 and random arches, respectively. Second, E VAS has the highest ASR across all the datasets. Recall that we use the same arch throughout different datasets. This indicates that the attack vulnerability probably resides at the arch level and is insensitive to concrete datasets, which corroborates with prior work on NAS: one performant arch found on one dataset often transfers across different datasets (Liu et al., 2019). This may be explained as follows. An arch α essentially defines a function family F α , while a trained model f α ( · ; θ ) is an instance in F α , thereby carrying the characteristics of F α ( e.g. , effective to extract important features or exploitable by a trigger generator). Third, all the arches show higher ASR on simpler datasets such as CIFAR10. This may be explained by that more complex datasets ( e.g. , more classes, higher resolution) imply more intricate manifold structures, which may interfere with arch-level backdoors.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.02", "parag_1": "These fusion modules enable information flow from one branch to another. This network architecture employs intermediate fusion, according the categorization of multi-modal fusion strategy in the deep learning literature (Ngiam et al., 2011; Atrey et al., 2010; Baltrušaitis et al., 2018). 
It has demonstrated competitive performance against multi-modal DNNs with late fusion in many tasks (Perez et al., 2018; Joze et al., 2020; Anderson et al., 2018; Wang et al., 2020b).", "parag_2": "These fusion modules enable information flow from one branch to another. According to categorization of multi-modal fusion strategies in the deep learning literature, this is intermediate fusion (Ngiam et al., 2011; Atrey et al., 2010; Baltrušaitis et al., 2018). It has demonstrated competitive performance against multi-modal DNNs with late fusion in many tasks (Perez et al., 2018; Joze et al., 2020; Anderson et al., 2018; Wang et al., 2020b).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Explain the concept more clearly.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph to improve its clarity.", "annotator": "annotator_02"}} {"id_paragraph": "3686sm4Cs.AJMXMDLVn.03", "parag_1": "In this work, we introduced SuperWeight Ensembles, a method for parameter sharing in heterogeneous ensembles. SuperWeight Ensembles outperform existing works on the anytime prediction task by leveraging gradient information for effective parameter sharing. We find our automatic sharing improves single member performance by 2% compared to the baselines. SuperWeight Ensembles also match performance of efficient ensembles in the low-parameter regime, while adding flexibility to adjust parameters which prior work doesn’t have. When we add parameters, we outperform even standard ensembles on ImageNet with just 50% of the parameters. We believe that SuperWeight Ensembles represent a promising step forward in parameter-efficiency. Future work will include more deeply exploring architecture diversity; Gontijo-Lopes et al. 
(2021) show that model architecture heterogeneity can be a key contributor to ensemble diversity on challenging tasks.", "parag_2": "We introduce SuperWeight Ensembles, a method for parameter sharing in heterogeneous ensembles. SuperWeight Ensembles outperform existing anytime prediction work by leveraging gradient information for parameter sharing. Our automatic sharing improves single member performance by 2% compared to the baselines. SuperWeight Ensembles also match performance of efficient ensembles in the low-parameter regime, compared to prior work. When we add parameters, we outperform even deep ensembles on ImageNet with 50% of the parameters. We believe that SuperWeight Ensembles are a promising step forward in parameter-efficiency. Future work will include more deeply exploring architecture diversity; Gontijo-Lopes et al. (2021) show that model architecture heterogeneity can be a key contributor to ensemble diversity on challenging tasks.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Make this paragraph a bit more concise.", "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.12", "parag_1": "Datasets. Missing counterfactual makes it infeasible to evaluate the ground-truth PEHE over observational benchmarks. Following Liuyi et al. ; Uri et al. (2017), experiments are conducted on two semi-synthetic benchmarks. Specifically, the IHDP benchmark aims to estimate the effect of specialist home visits on infant’s future cognitive scores, with 747 observations and 25 covariates; the ACIC dataset comes from the collaborative perinatal project (Niswander & Gordon, 1972), with 4802 observations and 58 covariates.", "parag_2": "Datasets. Missing counterfactuals impedes the evaluation of PEHE with observational benchmarks. Following Liuyi et al. ; Shalit et al. 
(2017), experiments are conducted on two semi-synthetic benchmarks. Specifically, the IHDP benchmark aims to estimate the effect of specialist home visits on infants’ potential cognitive scores, with 747 observations and 25 covariates; the ACIC dataset comes from the collaborative perinatal project, with 4802 observations and 58 covariates.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": "Add a citation in the last sentence and modify the rest so that the total length of the paragraph remains the same. ", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the language in this text.", "annotator": "annotator_07"}} {"id_paragraph": "8HUuFjx8N.sQTrzfFgZx.00", "parag_1": "With a generative classifier, our GMMSeg handles anomaly segmentation naturally, without neither external datasets of outliers, nor additional image resynthesis models. It also greatly differs from most uncertainty estimation-based methods that are post-processing techniques adjusting the prediction scoresofsoftmax-basedsegmentationnetworks[17,18,41–43].Themostrelevantonesaremaybeafew densityestimation-basedmodels[56,108,109],whichdirectlymeasurethelikelihoodofsamplesw.r.t. the data distribution. However, they are either limited to pre-trained representation [56] or specialized for anomaly detection with simple data [108,109]. To our best knowledge, this is the first time to report promising results on both closed-set and open-world large-scale settings, through a single model instance without any change of network architecture as well as training and inference protocols.", "parag_2": "With a generative classifier, our GMMSeg handles anomaly segmentation naturally, without neither external datasets of outliers, nor additional image resynthesis models. 
It also greatly differs from most uncertainty estimation-based methods that are post-processing techniques adjusting the prediction scoresofsoftmax-basedsegmentationnetworks[17,18,41–43].Themostrelevantonesaremaybeafew densityestimation-basedmodels[56,109,110],whichdirectlymeasurethelikelihoodofsamplesw.r.t. the data distribution. However, they are either limited to pre-trained representation [56] or specialized for anomaly detection with simple data [109,110]. To our best knowledge, this is the first time to report promising results on both closed-set and open-world large-scale settings, through a single model instance without any change of network architecture as well as training and inference protocols.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.15", "parag_1": "Rotamers with d torsional angles can be viewed as points on the ddimensional torus. The D -dimensional torus is the product of D circles S 1 , i.e. T D = S 1 ×· · ·× S 1 . Therefore, we can model the distribution onT D by modeling the joint distribution of D variables on S 1 . In specific, we adopt the coupling layer technique to model the joint distribution (Dinh et al., 2016). On each coupling layer, we update one dimension using the bijective for S 1 , keeping the other D − 1 dimensions fixed and using them along with the hidden representation of the amino acid as the condition to parameterize the bijective (Figure 2B):", "parag_2": "Rotamers with D torsional angles can be viewed as points on the Ddimensional torus which is the product of D circles S 1 , i.e. T D = S 1 × · · · × S 1 . To model the distribution on T D , we adopt the coupling layer technique to model the joint distribution (Dinh et al., 2016). 
Each coupling layer updates one dimension using the bijective for S 1 , keeping the other D − 1 dimensions fixed, and uses the D − 1 dimensions along with the hidden representation of the residue as the conditioner to parameterize the bijective (Figure 2B):", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Please, remove unnecessary details of this paragraph", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "LC37_sQl_t.XlHDVLz97W.00", "parag_1": "Zero-shot recognition of novel concepts. To master a novel hierarchical concept and directly use it for classification and detection only given its relational graph structure, we need a way to compose the In this paper, we use i to index different concepts, j to index relations, and n to index example images. previous concept and relation energy based models. Here we introduce the hierarchical composition rule, using an English letter “F” as an illustrative example 5 .", "parag_2": "Zero-shot recognition of novel concepts. To master a novel hierarchical concept and directly use it for classification and detection only given its relational graph structure, we need a way to compose the previous concept and relation energy based models. Here we introduce the hierarchical composition rule, using an English letter “F” as an illustrative example 6 .", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision", "Unusable"], "instruction": "Remove unnecessary explanations.", "annotator": "annotator_08"}} {"id_paragraph": "ZcvguGK9Q.dBwWtd12-n.00", "parag_1": "Human Evaluation of KG Explanations We conduct a user study to measure KG-augmented models’ ability to give plausible explanations, using the original KG or RLRR perturbed KG. 
For both KGs, we sample 30 questions from the CSQA and OBQA test sets which were correctly answered by MHGRN. For each question, we retrieve the top-scoring path for each answer choice via MHGRN’s path decoder attention. We then ask three human subjects to rate each path for readability and usability, with ratings aggregated via majority voting. Readability (Read) is whether the path makes sense, usability (Use) is whether the path is relevant to the given question-answer pair, and both are measured on a [0 , 1] scale. We obtain a Fleiss’ κ of 0 . 1891 , indicating slight agreement between raters. To illustrate, we provide examples of explanation paths and their consensus ratings.", "parag_2": "Human Evaluation of KG Explanations We conduct a user study to measure the plausibility of KG-augmented models’ path-based explanations. For both the original KG and RL-RR perturbed KG, we sample 30 questions from the CSQA and OBQA test sets which were correctly answered by MHGRN. For each question, we retrieve the top-scoring path for each answer choice via MHGRN’s path decoder attention. We then ask three human subjects to rate each path for readability and usability, with ratings aggregated via majority voting. Readability (Read) is whether the path makes sense, usability (Use) is whether the path is relevant to the given question-answer pair, and both are measured on a [0 , 1] scale. We obtain a Fleiss’ κ of 0 . 1891 , indicating slight agreement between raters. To illustrate, we provide examples of explanation paths and their consensus ratings.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Review the following paragraph, only when necessary make modifications to make it easier to read", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Better balance the length of the first and second sentences.", "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.11", "parag_1": "Feature affinity. 
We consider propagating the labels from an annotated set to an unlabeled set by nearest neighbor search in the featurespace. We assume that semantic clustersemerge during training with sparse supervision, reinforced by aforementioned pixel-to-segment relationships. By propagating labels in the feature space, we reinforce the learning of semantic clusters.", "parag_2": "Feature affinity. Our goal is to learn a pixel-wise feature that indicates semantic segmentation. It is thus reasonable to assume that pixels and segments of the same semantics form a cluster in the feature space, and we reinforce such clusters with a featural smoothness prior: We find nearest neighbours in the feature space and propagate labels accordingly.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rephrase this paragraph to make its goal and explanations much more clear.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph to bring the argument through the idea that the goal is to learn a pixel-wise feature for semantic segmentation.", "annotator": "annotator_07"}} {"id_paragraph": "ByZyHzZC-.HktKf7-AW.04", "parag_1": "After deriving the theoretical relation between learning rate, batch size, gradient covariance and properties of the minima, such as loss and width, we experimentally verify that the controllable noise n c = η/S controls the width and height of the minima towards which SGD converges. We also show the impact of the controllable noise on the memorization phenomenon. We discussed the limitations of the theory and in what situations it breaks down, exemplified by when the learning rate gets too large. We also experimentally verify that η and S are exchangeable as long as the controllable noise η/S remains the same. This is true for both cyclic and constant schedules. 
In the cyclic case, we experimentally show hints that cyclical learning rates oscillate between sharp/deep and wide/shallow minima as long as the stage of increased noise is long enough to allow for mixing.", "parag_2": "Further, we experimentally verify that the controllable noise n c = η/S determines the width and height of the minima towards which SGD converges. We also show the impact of this controllable noise on the memorization phenomenon. We discussed the limitations of the theory and in what situations it breaks down, exemplified by when the learning rate gets too large. We also experimentally verify that η and S can vary in linear proportion as long as the controllable noise η/S remains the same. In addition, our experiments suggest that cyclical learning rates oscillate between sharp/deep and wide/shallow minima as long as the stage of increased noise is long enough to allow for mixing.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "I want to make my paragraph shorter and clearer.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Delete the context in the first sentence. Delete the second sentence from the end. Smooth out the writing.", "annotator": "annotator_07"}} {"id_paragraph": "8_oadXCaRE.Kt4-LpYuM.04", "parag_1": "Finally, we trained SoftHebb on a more difficult dataset, namely Fashion-MNIST (Xiao et al., 2017) which contains grey-scale images of clothing products. A supervised MLP of the same size achieved a test accuracy of (90 . 55 ± 0 . 04)% on this task. The SoftHebb model achieved an accuracy of (87 . 46 ± 0 . 05)% , whereas a hard WTA also reached a similar accuracy of (87 . 49 ± 0 . We did not test the speed advantages of SoftHebb on this dataset. SoftHebb’s generative interpolations (Fig. 
3B) are reconfirmed as are its robustness to attacks, whereas, with very small adversarial perturbations, the MLP drops to an accuracy lower than the SoftHebb model (dashed lines in Fig. 2). ", "parag_2": "Finally, we performed preliminary tests on two more difficult datasets, namely Fashion-MNIST (Xiao et al., 2017), which contains grey-scale images of clothing products, and CIFAR-10 (Krizhevsky et al., 2009), which contains RGB images of animals and vehicles. We did not tune the Hebbian networks’ hyper-parameters extensively, so accuracies on these tasks are not definitive but do give a good indication. On F-MNIST, the SoftHebb model achieved a top accuracy of 87 . 46% whereas a hard WTA reached a similar accuracy of 87 . A supervised MLP of the same size achieved a test accuracy of 90 . SoftHebb’s generative interpolations (Fig. 3B) are reconfirmed on the F-MNIST dataset, as are its robustness to attacks, whereas, with very small adversarial perturbations, the MLP drops to an accuracy lower than the SoftHebb model (dashed lines in Fig. 2). On CIFAR-10’s preliminary results, the hard WTA and SoftHebb achieved an accuracy of 49 . 78% and 50 . 27% respectively.", "annot_1": {"annotation": ["Content_addition", "Content_deletion"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.01", "parag_1": "The notch(a black area where no drawing is performed) located in the center of the top edge on theMacBook Pro (2021) display (Figure 1). Although the cursor can enter the notch area, it is partially or entirelyhidden by the notch when the cursor entersto the notch.", "parag_2": "A notch is used to position the web camera in a display for increasing the usable area of the display. For example, the MacBook Pro (2021) has a notch at the center of the top edge on the display (Figure 1). 
It is the black area on the display that cannot be used; however, the cursor can enter this area and is hidden partially or entirely when the cursor enters the notch.", "annot_1": {"annotation": ["Development", "Rewriting_heavy"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "PDvmJtmgQb.gGrpxbc7UI.00", "parag_1": "Learning Geometry with Mirror Maps: Common to most DP model training algorithms, including DP-SGD, DP-FTRL (Kairouz et al., 2021b), and our algorithm, is a DP estimator of the gradient of the loss ∇ θ L ( θ t ; D priv ) = (cid:80) d ∈ D priv ∇ θ (cid:96) ( θ t ; d ) generated by the private data set D priv at a given model state θ t ∈ R p . This DP estimator essentially adds isotropic Gaussian noise N (0 , σ 2 I p ) to ∇ θ L ( θ t ; D priv ) , where σ depends on the privacy parameters ( ε, δ ) and the maximum allowable value of (cid:107)∇ θ (cid:96) ( θ t ; d ) (cid:107) 2 (a.k.a. the clipping norm (Abadi et al., 2016)). 1 It is well known that for most learning tasks, the set of gradients vectors in L ( θ t ; D priv ) are seldom isotropic (Gur-Ari et al., 2018; Agarwal et al., 2019). Hence, it is natural to wonder if the Gaussian noise in the DP estimator can be made to respect the geometry of the gradients. Prior works (Zhou et al., 2020; Asi et al., 2021; Kairouz et al., 2021a) have used public data ( D pub ) to explicitly learn this geometry, mostly in the form of preconditioner matrices (Duchi et al., 2011) to be multiplied to the estimated noisy gradients. In this paper, we take an implicit approach towards respecting this geometry, by using the loss L ( θ ; D pub ) generated by the public data as the mirror map in classical mirror descent. As a first order approximation (formalized in Section 4), one can view it as doing DP-SGD on L ( θ ; D priv ) while using L ( θ ; D pub ) as a regularizer. 
This approach has the following advantages: (i) The information of the geometry is “free”, i.e., one does not need to learn the preconditioner explicitly from the public data, (ii) Unlike prior works (Zhou et al., 2020; Kairouz et al., 2021a), one does not need to assume that the gradients of L ( θ ; D priv ) lie in a fixed rank subspace, (iii) The achieved excess population risk guarantees have better dependence on n pub = | D pub | compared to prior results (Asi et al., 2021), and (iv) Because the geometry is implicitly defined, the implementation does not need to maintain an additional data structure for the preconditioner, and hence is much easier to implement. Empirically, under our best-effort comparison, our baseline algorithm improves over the state of the art (Asi et al., 2021). We note that differentially private mirror descent has been considered before by Talwar et al. (2014) and Wang et al. Their results are not directly comparable to ours because (i) they do not have access to in-distribution public data (ii) as shown in Bassily et al. (2014), without public data, it is impossible to achieve the dimension independent bounds we achieve (iii) in our experiments we solve unconstrained optimization problems, but those works choose the mirror map based on the constraint set rather than the data set. We note that the utility bounds we prove in this paper also apply to a public data-assisted variant of the accelerated mirror descent algorithm considered in Wang et al.", "parag_2": "Learning Geometry with Mirror Maps: Common to most DP model training algorithms, including DP-SGD, DPFTRL [24], and our algorithm, is a DP estimator of the gradient of the loss ∇ θ L ( θ t ; D priv ) = (cid:80) d ∈ D priv ∇ θ (cid:96) ( θ t ; d ) generated by the private data set D priv at a given model state θ t ∈ R p . 
This DP estimator essentially adds isotropic Gaussian noise N (0 , σ 2 I p ) to ∇ θ L ( θ t ; D priv ) , where σ depends on the privacy parameters ( ε, δ ) and the maximum allowable value of (cid:107)∇ θ (cid:96) ( θ t ; d ) (cid:107) 2 (a.k.a. the clipping norm [1]). 1 It is well known that for most learning tasks, the set of gradients vectors in L ( θ t ; D priv ) are seldom isotropic [2, 20]. Hence, it is natural to wonder if the Gaussian noise in the DP estimator can be made to respect the geometry of the gradients. Prior works [4, 23, 39] have used public data ( D pub ) to explicitly learn this geometry, mostly in the form of preconditioner matrices [15] to be multiplied to the estimated noisy gradients. In this paper, we take an implicit approach towards respecting this geometry, by using the loss L ( θ ; D pub ) generated by the public data as the mirror map in classical mirror descent. As a first order approximation (formalized in Section 4), one can view it as doing DP-SGD on L ( θ ; D priv ) while using L ( θ ; D pub ) as a regularizer. This approach has the following advantages: (i) The information of the geometry is “free”, i.e., one does not need to learn the preconditioner explicitly from the public data, (ii) Unlike prior works [23, 39], one does not need to assume that the gradients of L ( θ ; D priv ) lie in a fixed rank subspace, (iii) The achieved excess population risk guarantees have better dependence on n pub = | D pub | compared to prior results [4], and (iv) Because the geometry is implicitly defined, the implementation does not need to maintain an additional data structure for the preconditioner, and hence is much easier to implement. Empirically, under our best-effort comparison, our baseline algorithm improves over the state of the art [4]. We note that differentially private mirror descent has been considered before by [36] and [38]. 
Their results are not directly comparable to ours because (i) they do not have access to in-distribution public data (ii) as shown in [6], without public data, it is impossible to achieve the dimension independent bounds we achieve (iii) in our experiments we solve unconstrained optimization problems, but those works choose the mirror map based on the constraint set rather than the data set. We note that the utility bounds we prove in this paper also apply to a public data-assisted variant of the accelerated mirror descent algorithm considered in [38].", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "Skrry_KpQ.rJKpSniHN.00", "parag_1": "In this paper, we give generalization error bounds of deep ReLU networks for a Besov space and its variant with mixed smoothness , which includes the H ¨ older space, the Sobolev space, and the function class with total variation as special cases 1 . By doing so, (i) we show that deep learning achieves the minimax optimal rate on the Besov space and notably it outperforms any linear estimator such as the kernel ridge regression, and (ii) we show that deep learning can avoid the curse of dimensionality on the mixed smooth Besov space and achieves the minimax optimal rate. As related work, Mhaskar & Micchelli (1992); Mhaskar (1993); Chui et al. (1994); Mhaskar (1996); Pinkus (1999) also developed an approximation error analysis which essentially leads to analyses for Besov spaces. However, the ReLU activation is basically excluded and comprehensive analyses for the Besov space have not been given. Consequently, it has not been clear whether ReLU neural networks can outperform another representative methods such as kernel methods. 
As a summary, the contribution of this paper is listed as follows:", "parag_2": "In this paper, we give generalization error bounds of deep ReLU networks for a Besov space and its variant with mixed smoothness , which includes the H ¨ older space, the Sobolev space, and the function class with total variation as special cases. By doing so, (i) we show that deep learning achieves the minimax optimal rate on the Besov space and notably it outperforms any linear estimator such as the kernel ridge regression, and (ii) we show that deep learning can avoid the curse of dimensionality on the mixed smooth Besov space and achieves the minimax optimal rate. As related work, Mhaskar & Micchelli (1992); Mhaskar (1993); Chui et al. (1994); Mhaskar (1996); Pinkus (1999) also developed an approximation error analysis which essentially leads to analyses for Besov spaces. However, the ReLU activation is basically excluded and comprehensive analyses for the Besov space have not been given. As a summary, the contribution of this paper is listed as follows:", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove unnecessary details.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "The end of the paragraph is too long, remove the part that fit the less in it.", "annotator": "annotator_07"}} {"id_paragraph": "NvI7ejSHFe.ppieLd2M4a.02", "parag_1": "Linear Unit ( ELU ) (Clevert et al., 2015), the Gaussian Error Linear Unit ( GELU ) (Hendrycks & Gimpel, 2016) and the Swish function ( Swish ) (Ramachandran et al., 2017). The details of each activation function can refer to Appendix A. We also compare PIAC with standard activation functions with the layer-wise adaptive slopes (Jagtap et al., 2020a). PIAC setups. We employ the PIAC in a layer-wise manner by default. We set the candidate function set F as { sin , tanh , GELU , Swish , Softplus } . 
The learnable parameters { α i } Ni =1 are initialized as zeros and optimized jointly with the weights and biases of neural networks. The scaling factors { β i } Ni =1 are initialized as ones and can be fixed or learnable.", "parag_2": "Linear Unit ( GELU ) and the Swish function ( Swish ). We compare PIAC to other adaptive activation functions which could provide higher-order derivatives, including SLAF Goyal et al. PAU Molina et al. and ACON Ma et al. The details of each activation function can refer to Appendix A and Appendix B. We also compare PIAC with standard activation functions with the layer-wise adaptive slopes (Jagtap et al., 2020a). We employ the PIAC in a layer-wise manner by default. We set the candidate function set F as { sin , tanh , GELU , Swish , Softplus } . The learnable parameters { α i } Ni =1 are initialized as zeros and optimized jointly with the weights and biases of PINNs. The scaling factors { β i } Ni =1 are initialized as ones and can be fixed or learnable.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.24", "parag_1": "Correlation Between Estimated Entropies and B-factors B-factor is an experimental measurement of conformational flexibility. We collect the average b-factor of sidechain atoms of amino acids in the test split of the PDB-REDO dataset. Simultaneously, we use the RDE to estimate the conformational entropy for each amino acid in the testset. The average Pearson correlation coefficient between these two quantities is 0.4637 and the average Spearman coefficient is 0.4282. Detailed results are presented in Table 10 in the appendix. In summary, this supports that the entropy estimated by the RDE is correlated to experimentally determined conformational flexibility.", "parag_2": "Correlation Between Estimated Entropy and B-factors B-factor is an experimental measurement that quantifies the conformational flexibility. 
We calculate the average b-factor of sidechain atoms of residues in the test split of the PDB-REDO dataset. Then, we estimate the conformational entropy for each residue in the test split using the RDE. The average Pearson correlation coefficient between these two quantities is 0.4637, and the average Spearman coefficient is 0.4282. Detailed results are presented in Table 8 in the appendix. In summary, this indicates that there is a correlation between the entropy estimated by the RDE and experimentally determined conformational flexibility measured by B-factor.", "annot_1": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "F3z0hchpGy.xeuzrNJiNW.02", "parag_1": "The irregular structure of meshes leads to a variety of approaches to define convolutions. Closely related to our method are graph based methods which are often based on variations of graph convolutional networks Kipf & Welling (2017); Defferrard et al. GCNs have been applied on spherical meshes Perraudin et al. and cortical surfaces Cucurull et al. ; Zhao et al. Verma et al. (2018) augment GCNs with anisotropic kernels which are dynamically computed via an attention mechanism over graph neighbours.", "parag_2": "The irregular structure of meshes leads to a variety of approaches to define convolutions. Closely related to our method are graph based methods which are often based on variations of graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016). GCNs have been applied on spherical meshes (Perraudin et al., 2019) and cortical surfaces (Cucurull et al., 2018; Zhao et al., 2019a). Verma et al. 
(2018) augment GCNs with anisotropic kernels which are dynamically computed via an attention mechanism over graph neighbours.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Add brackets to the citations", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Put the references between parenthesis.", "annotator": "annotator_07"}} {"id_paragraph": "QWI1hAXHi.FqRTjnqvd.00", "parag_1": "Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensemble’s cost for both training and testing increases linearly with the number of networks.", "parag_2": "Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensemble’s cost for both training and testing increases linearly with the number of networks, which quickly becomes untenable.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CzTbgFKuy.hfDu8DsDq6.01", "parag_1": "We instead study the setting where all instances come with instance-specific features; this is a natural and practical assumption [27, 30] that encompasses numerical representations of the instance itself— e.g. bits representing a query or a graph—or other information such as weather or day of the week. These are passed to functions—e.g. linear predictors, neural nets, or trees—whose parameters can be learned from data. We study linear predictors, which are often amenable to similar analyses as above since the composition of a convex and affine function is convex. 
For example, it is straightforward to extend the matching results to bound the regret and sample complexity of learning a linear predictor of duals. Page migration is more challenging because the outputs must lie in the simplex, which we solve by restricting to matrices with columns in the simplex, i.e. rectangular stochastic matrices. Both sets of results are shown in Appendix C. Notably, for page migration our guarantees cover the auto-regressive setting where the server probabilities are determined by a fixed linear transform of past states.", "parag_2": "We instead study the setting where all instances come with instance-specific features, a natural and practical assumption [27, 30] that encompasses numerical representations of the instance itself— e.g. bits representing a query or a graph—or other information such as weather or day of the week. These are passed to functions—e.g. linear predictors, neural nets, or trees—whose parameters can be learned from data. We study linear predictors, which are often amenable to similar analyses as above since the composition of a convex and affine function is convex. For example, it is straightforward to extend the matching results to learning linear predictors of duals. OPM is more challenging because the outputs must lie in the simplex, which can be solved by learning rectangular stochastic matrices. Both sets of results are shown in Appendix C. Notably, for page migration our guarantees cover the auto-regressive setting where the server probabilities are determined by a fixed linear transform of past states.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Remove unnecessary details, use abbreviation.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph a bit more concise.", "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.20", "parag_1": "Figure 6 shows a schematic of the task. 
In experiment 1, we strictly set the starting position of the trial. In experiment 2, we did not set the starting position of the trial as a condition. The starting area was a rectangle, and the trials started by simply clicking an area once. Except for this change, the task was the same as in experiment 1.", "parag_2": "Figure 6 shows a schematic of the task. We did not set the starting position of the trial as a condition, unlike that in Experiment 1. The starting area was a rectangle, and the trials started by simply clicking an area once. Except for this change, the task was the same as that in Experiment 1.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Make the second sentence more concise", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make sentence 2 shorter.", "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.12", "parag_1": "In multiset-equivariant tasks, we want the output multiset to be ordered the same as the input multiset: the “first” element in the input corresponds to the “first” in the output, but the overall order is still irrelevant. We can make iDSPN multiset-equivariant to the input – not just to its initialization Y 0 – by concatenating the input set to the set being optimized. In general, when wealready know parts of the desired Y (e.g. certain elements or the values in certain dimensions), we can help the model by keeping those parts fixed and not optimizing them in the forward optimization. We use this in", "parag_2": "When we already know parts of the desired Y (e.g. specific elements in the set or the values along certain dimensions), we can help the model by keeping those parts fixed and not optimizing them in Equation 7. For example, in Section 4.1 we know that the first few dimensions in the output multiset should be the exactly same as in the input multiset. 
If we fix these dimensions during optimization, the model only needs to learn the remaining dimensions that we do not already know. We use this in", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Completely rewrite and reorder this paragraph to make it less confusing and more appropriate for a research paper.", "annotator": "annotator_07"}} {"id_paragraph": "fJhx73ErBg.NeKLbmOxG8.01", "parag_1": "Problem 1 defines a feature matching problem similar to many inverse reinforcement learning(IRL) [10] and IOC [11] formulations. We design a set of features f as well as a risk feature f φ ξthat capture thedriving and risk management preferences of the demonstrator. From the demonstration data we then learn the risk metric parameters ξ and the weights that combine the feature values. This combination gives us a cost model. Solving for a trajectory that optimizes this costmodel results in driving behaviors similar to the demonstrator (in the context of our defined featuresand risk models). Existing IOC approaches are able to imitate standard driving behaviors, wherethe probability P θ is assumed to be the exponential distribution. As such, the generated trajectories from the cost model are exponentially more preferred by the agent. In our case, the additionof the risk measure allows us to match drivers better under risky situations. While none of theseapproaches exactly mimic the demonstrator, they capture the driving style of the demonstrator andallow generalization to new risky scenarios.", "parag_2": "Problem 1 defines a feature matching problem similar to many inverse reinforcement learning(IRL) [10] and IOC [11] formulations. We design a set of features f to capture driving preferencesin non-risky situations, but also include a risk-based feature f φ ξ capturing the risk managementpreferences seen in demonstrations. From the demonstration data we then learn the risk metric parameters ξ and the weights that combine the feature values. 
This combination gives us a cost modelthat, when solved, yields driving behaviors similar to the demonstrator (in the context of our definedfeatures and risk models). While existing IOC approaches are able to imitate standard driving behaviors such that the generated trajectories from the cost model are exponentially more preferred bythe agent (under maximum-entropy formulations, e.g. [11]). In our case, the additional risk feature provides the capacity to generalize our model better under risky situations without hindering theability to replicate the driving style of human demonstrators in normal scenarios.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Please, paraphrase this paragraph.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium", "Concision"], "instruction": "This paragraph is confusing, rewrite the second sentence and the two last sentences for clarity. Smooth out the linking between sentences.", "annotator": "annotator_07"}} {"id_paragraph": "HEoVA49MN6.ULHGlZJXhY.00", "parag_1": "As we know that the performance of hand-designed LR schedule is very sensitive to the initial learning rates. To avoid carefully tuning the initial learning rates, we learn the LR schedule from an interval. We set γ = 1 for image tasks, and γ = 20 for text tasks to eliminate the influence of loss magnitude between two tasks.", "parag_2": "As we know that the performance of hand-designed LR schedules and HPO methods is very sensitive to the initial LR. To avoid carefully tuning the initial LR, we learn the LR schedules from an interval [0 , γ ] , and now the initial LR is determined by the output of the MLR-SNet. 
We set γ = 1 for image tasks, and γ = 40 for text tasks in all our experiments to eliminate the influence of loss magnitude between two different tasks.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.02", "parag_1": "This is processed by a graph attention network (GAT) (Veliˇckovi´c et al., 2017) which learns action relations. In a GAT, the attention weights are expected to be high for closely interdependent actions. The resultant nodes are pooled together to form a compact relational summary of the action space. This summaryvector and each action’s relational representation is used to compute an action’s utility, which is used as a Q-value oras a probability logit for policy gradient methods (Figure 2).", "parag_2": "A graph attention network (GAT) (Veliˇckovi´c et al., 2017) processes the action graph and learns action relations. The attention weights in the GAT would be high for closely related actions such as a nail and a hammer in Figure 1. A utility network uses the GAT’s resulting relational action representations, the state, and a pooled together action set summary to compute each action’s Qvalue or probability logit for policy gradient methods(Figure 2).", "annot_1": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Exclude unnecessary details, use clearer expression.", "annotator": "annotator_08"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.12", "parag_1": "To be specific, we assign a semantic label for each unlabeled segment by finding its nearest labeled segment in the feature space. 
We denote this new labeled set (together with the original labeled set) as ˆ C , where ˆ C + denotes all the same-category segments other than segment s in ˆ C and pixel i belongs to the segment s . Such a formulation is based on three assumptions: 1) The size of the original labeled set is large enough to cover the feature space, 2) the labeled segments are distributed uniformly in the feature space, and 3) the embedding already encodes certain semantic information. Therefore, we only apply this relationship to propagate the keypoint annotations in the DensePose dataset, where each body part is annotated by a point.", "parag_2": "Specifically, we assign a semantic label to each unlabeled segment by finding its nearest labeled segment in the feature space. We denote this expanded labeled set by ˆ C . For pixel i , we define its positive (negative) segment set ˆ C + ( ˆ C − ) according to whether a segment has the same label as i . Our feature affinity relationship works best when: 1) the original labeled set is large enough to cover the feature space, 2) the labeled segments are distributed uniformly in the feature space, and 3) the pixel-wise already encodes certain semantic information. We thus only apply to DensePose keypoint annotations in our experiments, where each body part in the training image is annotated by a point.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph to make it considerably clearer.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Revise this paragraph to improve readability and cohesiveness.", "annotator": "annotator_07"}} {"id_paragraph": "wNQ4_8Ym_c.1vd_qn2D93.00", "parag_1": "A critical problem in the literature on post hoc explanations is the lack of a common foundational goal among methods. 
For example, some methods are motivated by function approximation, some use game theoretic notions such as Shapley-Aumann values, and some are ad hoc, driven by the goal of obtaining clean visualizations. Such fragmentation of goals not only prevents a coherent conceptual understanding of post hoc explainability but also causes the practical challenge of not knowing which method to use when.", "parag_2": "A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when.", "annot_1": {"annotation": ["Concision"], "instruction": "Make as concise as possible the paragraph, removing any ideas that are not essential. Use a clearer word choice.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Concise this paragraph while improving the academic english.", "annotator": "annotator_07"}} {"id_paragraph": "MZYBK_Wp2X.HVFitLjAId.00", "parag_1": "There are a lot of measure benchmarking studies considering node classification and clustering for both generated graphs and datasets (Fouss et al., 2012; Sommer et al., 2016; 2017; Avrachenkov et al., 2017; Ivashkin & Chebotarev, 2016; Guex et al., 2018; 2019; Aynulin, 2019a;b; Courtain et al., 2020; Leleux et al., 2020), etc. Although a large number of experimental results, theoretical results still look unattainable. One of the most important theoretical results for graph measures is a work Luxburg et al. (2010), where problems of Commute Time on big graphs were shown theoretically, and a substantiated amendment was proposed to correct the problem. The paper shows how difficult such proves. 
Besides difficult proves, there is still no complex empirical understanding of what effects need to be proven. Our empirical work has two main advantages from previous ones. Firstly, we consider the vast amount of graph measures, which for the first time gives the full picture. Secondly, unlike these studies concluding with a global leaderboard, we are looking for the leading measures for each set of LFR parameters.", "parag_2": "There are a lot of measure benchmarking studies considering node classification and clustering for both generated graphs and real-world datasets (Fouss et al., 2012; Sommer et al., 2016; 2017; Avrachenkov et al., 2017; Ivashkin & Chebotarev, 2016; Guex et al., 2018; 2019; Aynulin, 2019a;b; Courtain et al., 2020; Leleux et al., 2020), etc. Despite a large number of experimental results, theoretical results are still a matter of the future. One of the most interesting theoretical results on graph measures is the work by Luxburg et al. (2010), where some unattractive features of the Commute Time distance on large graphs were explained theoretically, and a reasonable amendment was proposed to fix the problem. Beyond the complexity of such proofs, there is still very little empirical understanding of what effects need to be proven. Our empirical work has two main differences from the previous ones. First, we consider a large number of graph measures, which for the first time gives a fairly complete picture. Second, unlike these studies concluding with a global leaderboard, we are looking for the leading measures for each set of the LFR parameters.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the writing of this text", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Use accurate and scientific words.", "annotator": "annotator_08"}} {"id_paragraph": "sIqSoZ9KiO.KLlOZMoJ9G.03", "parag_1": "Limitations and future work. The main downside of SDN remains the computation time. 
However, we suspect that a more optimized implementation could substantially improve runtime performance. Another possible direction is to explore the applicability of SDN to other generative models such as generative adversarial networks (Goodfellow et al., 2014).", "parag_2": "Limitations and future work. The main downside of SDN remains the computation time. However, we suspect that a more optimized implementation could substantially improve the runtime performance of SDN. In the future, it would be beneficial to explore the applicability of SDN in other settings, e.g. to apply it to other generative models such as generative adversarial networks (Goodfellow et al., 2014) or to any other image processing task such as image super-resolution or image segmentation.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "rNl3dQMBZc.F8CTr2t1Jk.00", "parag_1": "{+Frontier 𝒪 2 love 3 this 4 show \n+} BS & DBS stand for beam search and its variant diverse beam search (Vijayakumar et al., 2016). In our configuration, we use Hamming distance as the diversity function and set the diversity strength to 1.5, following Vijayakumar et al.", "parag_2": "{+Frontier 𝒪 2 love 3 this 4 show \n+} I 5 like 6 this 7 show play musical performance p = 0. play musical amazing …… …… 0.2 0. p = 0. BS & DBS stand for beam search and its variant diverse beam search (Vijayakumar et al., 2018). In our configuration, we use Hamming distance as the diversity function and set the diversity strength to 1.5, following Vijayakumar et al.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Hk4uWvOCX.rke01q_Cm.00", "parag_1": "However, the implementation of Model-X requires the generation of the so called ’knockoffs’, which has limited existing methods. 
The goal of our paper is to fill in the gap by proposing a model-free method for generating knockoffs with relaxed assumptions for the data, leveraging the power of the recent development in deep generative models. Specifically, we propose a framework to generate knockoffs based on latent variable Z which captures the correlation structure of X . This provides a great generalization from the existing knockoff generation methods for hidden Markov model (HMM) and mixture Gaussian to (but not limited to) Hidden Markov Random Field. The contribution of this paper is in two folds: first, we tackles the problem of generating Model-X knockoffsfor any data set by proposing and justifying a framework based on latent variable Z ; second, the proposed framework is a novel application for variational auto encoder and its variants. Compared with other deep learning architecture to generate knockoff, the proposed framework has the advantage in less computational complexity and thus easier to be implemented by domain scientists.", "parag_2": "However, the implementation of Model-X requires the generation of the so called ’knockoffs’, which has limited existing methods. The goal of our paper is to fill in the gap by proposing a model-free method for generating knockoffs with relaxed assumptions for the data, leveraging the power of the recent development in deep generative models. Specifically, we propose a framework to generate knockoffs based on latent variable Z which captures the correlation structure of X . This provides a great generalization from the existing knockoff generation methods for hidden Markov model (HMM) and mixture Gaussian to (but not limited to) Hidden Markov Random Field. The contribution of this paper is it propose a framework to tackle the problem of generating Model-X knockoffs based on latent variable Z . 
We provide theoretical justification of the approach and also demonstrate the state of art variational auto encoder (VAE) can achieve promising results in the task of FDR controlled variable selection. This paper is among the first works for the new application of representative learning in generating model-X knockoffs. And it is a nature generalization of the simple latent variable models based methods in the literature and can be easily implemented with variational auto-encoder. Our discussion and preliminary results for goodness-of-fit also shed lights on future improvement of VAE for generating knockoffs.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the second half of the paragraph to make it more convincing.", "annotator": "annotator_07"}} {"id_paragraph": "r1BBY14iH.SyuY2uosS.00", "parag_1": "MCDP-EMD and MCDP-MMD are also obtained similarly. Note that they share the same E and P as the sample mean is the same. The D of q ∗ x is expected to be the best uncertainty measure that OPU can approach theoretically (when the amortization loss is zero, see Sec. 2.4). The baselines for SGLD are obtained in a similar way. Students use categorical entropy (for BDK) or D (for DPN) and P of the output distributions.", "parag_2": "MCDP-EMD and MCDP-MMD are also obtained similarly. Note that they share the same E and P as the sample mean is the same. The D of q ∗ x is expected to be the best uncertainty measure that OPU can approach theoretically (when the amortization loss is zero, see Sec. 2.4). For fairness, the same set of posterior particles is used for obtaining D. The baselines for SGLD are obtained in a similar way. 
Students use categorical entropy (for BDK) or D (for DPN) and P of the output distributions.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "ssjKKm0b5y.3wi5X8wrM_.03", "parag_1": "Hyperparameter tuning: For our PHN method we select hyperparameters based on the HV computed on a validation set. Selecting hyperparameters for the baselines is non-trivial as there is no clear criteria that is reasonable in terms of runtime; In order to select hyperparameters based on HV, each approach needs to be trained multiple times on all rays. We therefore select hyperparameters based on a single ray, and apply those for all rays. Our selection criterion is as follow: we collect all models trained using all hyperparameter configurations, and filter out the dominated solutions. Finally, we select the combination of hyperparametrs with the highest uniformity.", "parag_2": "Hyperparameter tuning: For PHN, we select hyperparameters based on the HV computed on a validation set. Selecting hyperparameters for the baselines is challenging because there are no clear criteria that can be computed quickly with many rays. Selecting HPs based on HV requires to train each baseline multiple times on all rays. Therefore, we select hyperparameters based on a single ray and apply the selected hyperparameters to all rays. Specifically, we collect all models trained using all hyperparameter configurations, and remove dominated solutions. 
Finally, we select the combination of HPs with the highest uniformity.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make the sentence correct, put conjunctions in front of sentences.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the English and the flow of this paragraph.", "annotator": "annotator_02"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.07", "parag_1": "We modify the grid world environment (Chevalier-Boisvert et al., 2018) where an agent navigates a 2D maze with two lava rivers to reach a goal. The agent always has access to 4 direction movements and a turn-left skill (Figure 5). However, two actions out of 4 special skills: turn-right, step-forward, dig-orange-lava and dig-pink-lava , are randomly sampled for the agent in every task instance. The agent can walk over lava for one timestep without dying, but it can remove the lava by using the matching dig-lava skill. Thus, when available, dig-lava skills can be used to build shortcut paths to the goal on the other end, and thus receive a higher reward. We use PPO to train all methods.", "parag_2": "We modify the grid world environment (Chevalier-Boisvert et al., 2018) where an agent navigates a 2D maze with two lava rivers to reach a goal. The agent always has access to four direction movements and a turn-left skill (Figure 5). There are two additional actions randomly sampled out of four special skills: turn-right, step-forward, dig-orange-lava and dig-pink-lava . If the agent enters the lava, it will die unless it uses the matching dig-lava skill to remove the lava in the immediately next timestep. Thus, when available, dig-lava skills can be used to create shortcut paths to the goal and receive a higher reward. 
We use PPO for all experiments in this environment.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite some sentences, making them more connected and using more formal language.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Use a slightly more academic logical language.", "annotator": "annotator_07"}} {"id_paragraph": "vokZIVWUXN.zMdXRtaisu.00", "parag_1": "To mitigate the burden of data labeling, we aim at improving data efficiency for both classification and regression setups in deep learning. However, the current focus is on classification problems while rare attention has been paid to deep regression, which usually requires more human effort to labeling. Further, due to the intrinsic difference between categorical and continuous label space, the common intuitions for classification, e.g. cluster assumptions or pseudo labeling strategies, cannot be naturally adapted into deep regression. To this end, we first delved into the existing data-efficient methods in deep learning and found that they either encourage invariance to data stochasticity ( e.g. , consistency regularization under different augmentations) or model stochasticity ( e.g. , difference penalty for predictions of models with different dropout). To take the power of both worlds, we propose a novel χ -model by simultaneously encouraging the invariance to data stochasticity and model stochasticity. Further, the χ -model plays a minimax game between the feature extractor and task-specific heads to further enhance the invariance to model stochasticity. 
Extensive experiments verify the superiority of the χ -model among various tasks, from a single-value prediction task of age estimation to a dense-value prediction task of keypoint localization, a 2D synthetic and a 3D realistic dataset, as well as a multi-category object recognition task.", "parag_2": "To mitigate the burden of data labeling, we aim at improving data efficiency for both classification and regression setups in deep learning. However, the current focus is on classification problems while rare attention has been paid to deep regression, which usually requires more human effort to labeling. Further, due to the intrinsic difference between categorical and continuous label space, the common intuitions for classification, e.g. cluster assumptions or pseudo labeling strategies, cannot be naturally adapted into deep regression. To this end, we first delved into the existing data-efficient methods in deep learning and found that they either encourage invariance to data stochasticity ( e.g. , consistency regularization under different augmentations) or model stochasticity ( e.g. , difference penalty for predictions of models with different dropout). To take the power of both worlds, we propose a novel χ -model by simultaneously encouraging the invariance to data stochasticity and model stochasticity. Extensive experiments verify the superiority of the χ -model among various tasks, from a single-value prediction task of age estimation to a dense-value prediction task of keypoint localization, a 2D synthetic and a 3D realistic dataset, as well as a multi-category object recognition task.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove the second-to-last sentence.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Delete the sentence about the minmax game.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.14", "parag_1": "Very Deep Network. 
To apply SRP to very deep networks, like over 400 Conv layers in RCAN, we revise RCAN(Zhang et al., 2018b) by removing all channel attention modules. We set the channel number in the revised RCAN as 96 and then prune it to 64. For × 2, we reduce the number of parameters from 34.5M to 15.3M and name the compressed model as SRPN.", "parag_2": "Deep Networks. To apply SRP to the very deep network RCAN (Zhang et al., 2018b), a representative top-performing deep SR network with over 400 Conv layers, we revise RCAN by removing all the channel attention modules (Zhang et al., 2018b). The channel number in the revised RCAN is chosen as 96 and then pruned to 64. For × 2 scale, we reduce the number of parameters from 34.5M to 15.3M and dub the compressed model as SRPN.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Modify the structure of this paragraph to make it clearer when needed", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}} {"id_paragraph": "S1BhqsOsB.1mgtDFRDc.02", "parag_1": "These models use an inference network that takes as input the full video frame sequences to predict the locations of 2D object bounding boxes, as well as frame-to-frame displacements, in order to minimize view prediction error in 2D. We were not able to produce meaningful results from their inference networks. The success of Hsieh et al. (2018) may partially depend on carefully selected priors for 2D object bounding box location and object size parameters that match the moving MNIST dataset statistics used in the paper, as suggested by the publicly available code. We do not assume knowledgeor existence ofsuch object location or size priors for our CARLA data.", "parag_2": "These models consume a video as input, and predict the locations of 2D object bounding boxes, as well as frame-to-frame displacements, in order to minimize a view regression error. 
We were not able to produce meaningful results from these models. The success of Hsieh et al. (2018) may partially depend on carefully selected priors for the 2D bounding box locations and sizes, to match the statistics of the “Moving MNIST” dataset used in that work (as suggested in the official code). For our CARLA experiments, we do not assume knowledge of priors for location or size.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the paragraph", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Make this paragraph more clear and precise.", "annotator": "annotator_07"}} {"id_paragraph": "zzdwUcxTjWY.rVxmgW1FRK.02", "parag_1": "We also compare with Outlier Exposure (Hendrycks et al., 2019) (OE), an approach that relies on the real outlier data for model regularization. For a fair comparison, we train the object detector on PASCAL-VOC using the same architecture ResNet-50, and use the OE objective for the classification branch. The real outliers for OE training are sampled from the OpenImages dataset (Kuznetsova et al., 2020). We perform careful deduplication to ensure there is no overlap between the outlier training data and PASCAL-VOC. Our method achieves OOD detection performance on COCO (AUROC: 88.44%) that favorably matches OE (AUROC: 89.41%), and does not impose strong data assumption.", "parag_2": "We also compare with Outlier Exposure (Hendrycks et al., 2019) (OE). OE serves as a strong baseline since it relies on the real outlier data. We train the object detector on PASCAL-VOC using the same architecture ResNet-50, and use the OE objective for the classification branch. The real outliers for OE training are sampled from the OpenImages dataset (Kuznetsova et al., 2020). We perform careful deduplication to ensure there is no overlap between the outlier training data and PASCAL-VOC. 
Our method achieves OOD detection performance on COCO (AUROC: 88.66%) that favorably matches OE (AUROC: 90.18%), and does not require external data.", "annot_1": {"annotation": ["Concision", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "yPI7Myjuxq.JaLOIf6WQo.00", "parag_1": "When domain labels d are available (not the case in latent domain learning) one strategy is to modulate networks by constraining the underlying generating function of residual networks (He et al., 2016) Φ( x ) = x + f ( x ) to allow at most a linear change V d per each domain from some pretrained mapping Φ 0 (with f 0 in every layer), whereby Φ( x ) − Φ 0 ( x ) = V d x . Note the slight abuse of notation here in letting x denote a feature map with channels C . Rearranging this yields:", "parag_2": "When domain labels d are available (not the case in latent domain learning) one strategy established by Rebuffi et al. (2017) is to modulate networks by constraining the layerwise transformation of residual networks (He et al., 2016) Φ( x ) = x + f ( x ) to allow at most a linear change V d per each domain from some pretrained mapping Φ 0 (with f 0 in every layer), whereby Φ( x ) − Φ 0 ( x ) = V d x . Note the slight abuse of notation here in letting x denote a feature map with channels C . Rearranging this yields:", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.13", "parag_1": "SegSort (Hwang et al., 2019) is an end-to-end image segmentation model that generates a pixel-wise embedding and a consequent over-segmentation. It assumes that each segment has an independent normal distribution on a hypersphere. Spherical K-Means clustering (Banerjee et al., 2005) is used for segmenting an image and learning the discriminative feature clustering jointly. It is worth noting that, such an assumption indicates homogeneous representations within each segment. 
To learn the embedding, SegSort formulates a maximum likelihood loss that maximizes the discrimination between segments. In addition, soft neighborhood assignment (Goldberger et al., 2005) is incorporated to enforce grouping of semantically similar segments. During inference, the segment labels are predicted by K-Nearest Neighbor retrievals.", "parag_2": "SegSort (Hwang et al., 2019) is an end-to-end segmentation model that generates a pixel-wise feature map and a resulting segmentation. Assuming independent normal distributions for individual segments, SegSort seeks a maximum likelihood estimation of the feature mapping, so that the feature induced partitioning in the image and clustering across images provide maximum discrimination among segments. During inference, the segment label is predicted by K-Nearest Neighbor retrievals.", "annot_1": {"annotation": ["Content_deletion", "Concision"], "instruction": "Reduce the explanations in this paragraph and just give a high level explanation, to keep it concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the explanation of SegSort shorter.", "annotator": "annotator_07"}} {"id_paragraph": "q4rMz7ZfFG.uyxGiQeMP.00", "parag_1": "Specially, we randomly sample 20% of nodes V s in data flow, mask direct edges connecting these sampled nodes by add an infinitely negative value in the mask matrix, and then predict these masked edges E mask . Taking the variable x 11 in Figure 2 for an example, we first mask edges ⟨ x 7 , x 11 ⟩ and ⟨ x 9 , x 11 ⟩ in the graph and then let the model to predict these edges.
Formally, the pre-training objective of the task is calculated as Equation 5, where E c = V s × V ∪ V × V s is a set of candidates for edge prediction, δ ( e ij ∈ E ) is 1 if ⟨ v i , v j ⟩ ∈ E otherwise 0, and the probability p e ij of existing an edge from i -th to j -th node is calculated by dot product following a sigmoid function using representations of two nodes from GraphCodeBERT. loss EdgePred", "parag_2": "Specially, we randomly sample 20% of nodes V s in data flow, mask direct edges connecting these sampled nodes by add an infinitely negative value in the mask matrix, and then predict these masked edges E mask . Taking the variable x 11 in Figure 2 for an example, we first mask edges ⟨ x 7 , x 11 ⟩ and ⟨ x 9 , x 11 ⟩ in the graph and then let the model to predict these edges. Formally, the pre-training objective of the task is calculated as Equation 7, where E c = V s × V ∪ V × V s is a set of candidates for edge prediction, δ ( e ij ∈ E ) is 1 if ⟨ v i , v j ⟩ ∈ E otherwise 0, and the probability p e ij of existing an edge from i -th to j -th node is calculated by dot product following a sigmoid function using representations of two nodes from GraphCodeBERT. To balance positive-negative ratio of examples, we sample negative and positive samples with the same number for E c . loss EdgePred", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "8_oadXCaRE.Kt4-LpYuM.02", "parag_1": "This is equivalent to Temperature Scaling (Hinton et al., 2015), a mechanism that also maintains the probabilistic interpretation of the output. It can also be implemented by a normalized layer of exponential neurons, and are compatible with our theoretical derivations and the optimization by the plasticity rule of Eq. 8.
Moreover, we show in the Appendix that soft WTA models can be constructed by rectified linear units (ReLU) or in general by neurons with any non-negative monotonically increasing activation function, and their weights are also optimized by the same plasticity rule.", "parag_2": "This is equivalent to Temperature Scaling (Hinton et al., 2015), a mechanism that also maintains the probabilistic interpretation of the output. It can also be implemented by a normalized layer of exponential neurons, and are compatible with our theoretical derivations and the optimization by the plasticity rule of Eq. 8. This allows us to integrate the hard WTA into the SoftHebb framework, as a special case with an infinite base. Therefore, the hard WTA, if used with the plasticity rule that we derived, is expected to have some similar properties to a soft WTA implementation. Moreover, we show in the Appendix that soft WTA models can be constructed by rectified linear units (ReLU) or in general by neurons with any non-negative monotonically increasing activation function, and their weights are also optimized by the same plasticity rule.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "8MRWETVKC.gKhhork8y.00", "parag_1": "Human eye gaze is a crucial non-verbal cue used in a wide variety of applications, such as gazecontingent rendering in virtual reality [1, 2, 3, 4, 5], gaze-based interaction [6, 7], gaze-assisted collaboration [8, 9], as well as eye movement-based task recognition [10, 11]. Given the importance of eye gaze, many researchers have focused on the problem of gaze estimation [12, 13, 14, 15], i.e. estimating gaze position or direction from eye images. However, the collection of the large amounts of eye images required to train deep learning models, or the exchange of such data across networks, can pose significant privacy risks. 
In addition, the heterogeneous data distribution across different users in real-world settings (in-the-wild settings) can significantly hinder the training process of gaze estimation methods [16, 17]. Therefore, preserving privacy and maintaining high performance for heterogeneous data become two main challenges for gaze estimation.", "parag_2": "Human eye gaze is a crucial non-verbal cue used in a wide variety of applications, such as gaze-contingent rendering in virtual reality Hu et al. (2019, 2020b, 2021) ; Hu (2020); Hu et al. (2020a) , gaze-based interaction Mardanbegi et al. ; Piumsomboon et al. (2017), gaze-assisted collaboration Higuch et al. ; Zhang et al. (2017c), as well as eye movementbased task recognition Hu et al. ; Coutrot et al. Given the importance of eye gaze, many researchers have focused on the problem of gaze estimation Baluja and Pomerleau (1993); Liang et al. ; Choi et al. ; Lu et al. (2014), i.e. estimating gaze position or direction from eye images. However, the collection of the large amounts of eye images required to train deep learning models, or the exchange of such data across networks, can pose significant privacy risks. In addition, the heterogeneous data distribution across different users in real-world settings (in-the-wild settings) can significantly hinder the training process of gaze estimation methods Zhang et al. Therefore, preserving privacy and maintaining high performance for heterogeneous data become two main challenges for gaze estimation.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hj_6iyUM7v.f-7mntqGH.00", "parag_1": "VSG VSG VSG Ƹ𝑟 3 ො𝑎ො𝑎ො𝑎ො𝑣ො𝑣ො𝑣Ƹ𝑠Ƹ𝑠(b) While training, the world model is learned with collected experience, the policy is improved on trajectories unrolled using the world model and new episodes are collected by deploying the policy in the environment. 
An initial set of episodes are collected using a random policy. As training progresses, new episodes are collected using the latest policy to further improve the world model.", "parag_2": "VSG VSG VSG Ƹ𝑟 3 ො𝑎ො𝑎ො𝑎ො𝑣ො𝑣ො𝑣Ƹ𝑠Ƹ𝑠(b) trajectories unrolled using the world model and new episodes are collected by deploying the policy in the environment. An initial set of episodes are collected using a random policy. As training progresses, new episodes are collected using the latest policy to further improve the world model.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "c_ttWJ9Eth.HK8lpOoAWq.00", "parag_1": "The code for PEPPP and experiments is in the anonymous GitHub repository at https://github. com/barterer/lp . We use QPyTorch (Zhang et al., 2019) to simulate low-precision formats on standard hardware. The 87 datasets and 99 low-precision configurations in experiments are listed in Appendix A. The datasets consist of natural and medical images from various domains. Apart from", "parag_2": "The code for PEPPP and experiments is in the GitHub repository at https://github.com/ chengrunyang/peppp . We use QPyTorch (Zhang et al., 2019) to simulate low-precision formats on standard hardware. The 87 datasets and 99 low-precision configurations in experiments are listed in Appendix A. The datasets consist of natural and medical images from various domains. Apart from", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "YkiRt7L93m.jgDbnUD7s.03", "parag_1": "In contrast to the regular case in Proposition 2.1, the optimal plans γ [-\n-] j transporting P 0 to P j need not be unique. 
However, the projection for fixed γ \n j is unique.", "parag_2": "The optimal plans γ [-\n-]0 j transporting P 0 to P j need not be unique if P j lies outside the cut locus of P 0 , i.e., when there is more than one optimal way to transport P 0 onto P j . However, the projection for fixed γ \n j is always unique by virtue of the linear regression.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.07", "parag_1": "P T = \n ψ ( r ) are the representation distributions under treated and untreated groups, respectively, induced by the map r = ψ ( x ) . The discrepancy could be minimized by updating the representation map ψ with gradient-based optimizers, as it is differentiable with respect to ψ (Flamary et al., 2021).", "parag_2": "P T = \n ψ ( r ) are the distributions of representations in treated and untreated groups, respectively, induced by the mapping r = ψ ( x ) . The discrepancy is differentiable with respect to ψ (Flamary et al., 2021), thus can be minimized by updating the representation mapping ψ with gradient-based optimizers.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Brush up the sentence for readability", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Reorganize the paragraph to make it more logical. Improve the language.", "annotator": "annotator_07"}} {"id_paragraph": "eOCMrOCzEa._VAJjrmN5.00", "parag_1": "The core idea of hypernetworks is to use one network to predict parameters for another network (Ha et al., 2017; Knyazev et al., 2021). The original goal of hypernetwork is to decrease the number of training parameters (Ha et al., 2017), by training a hypernetwork with a smaller size to generate the parameters of another network with a larger size.
Because of its promising performance, hypernetwork has been gradually applied to various tasks (Krueger et al., 2017; Zhang et al., 2019; von", "parag_2": "The original goal of hypernetwork proposed in (Ha et al., 2017) is to decrease the number of training parameters , by training a hypernetwork with a smaller size to generate the parameters of another network with a larger size on a fixed dataset. Because of its promising performance, hypernetwork has been gradually applied to various tasks (Krueger et al., 2017; Zhang et al., 2019; von", "annot_1": {"annotation": ["Content_deletion", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1qImCcFQ.Ske132uA7.00", "parag_1": "Similar to standard rSLDS, the dynamics are conditionally linear given a leaf node z t . It is intuitive to expect that nearby regions in latent space have similar dynamics. In the context of the tree-structured stick breaking partitions that share a common parent should have similar dynamics. We explicitly model this by enforcing a hierarchical tree-structured prior on the dynamics.", "parag_2": "Similar to standard rSLDS, the dynamics are conditionally linear given a leaf node z t . A priori, it is natural to expect that locally linear dynamics of nearby regions in the latent space are similar. Thus, in the context of tree-structured stick breaking, we impose that partitions that share a common parent should have similar dynamics. We explicitly model this by enforcing a hierarchical prior on the dynamics that respects the tree structure.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the paragraph", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Make the paragraph more formal.", "annotator": "annotator_07"}} {"id_paragraph": "W_s28lKw6.9a9zLuZDB.00", "parag_1": "Figure 5 reports the performance metrics for CIFAR10 and CIFAR100 under corruption. 
Tabulated results can be found in Section G in the Appendix. The general trend of these results is that the function-space prior frequently provides gains in OOD UQ with only a small decrease in (uncorrupted) test performance. Moreover, the shared function-space prior resulted in remarkable consistency across models, compared to the variety seen in weight-space priors. Due to the predictive prior, the ECE was significantly higher for corruption 0, but this underfitting provided superior ECE when OOD. This suggests the function-space prior should be fine-tuned (i.e. empirical Bayes) if superior ECE is desired in-distribution. For CIFAR100, the higher prior regularization due to higher dimensionality (see Section B.2), resulted in reduced benefit over weight-space models compared to CIFAR10, with improved performance only evident at stronger corruptions. Figure 7 summarizes the performance difference due to the function-space prior for log-likelihood.", "parag_2": "Figure 5 reports the performance metrics for CIFAR10 and CIFAR100 under corruption. Tabulated results can be found in Section G in the Appendix. The general trend of these results is that the function-space prior frequently provides gains in OOD UQ with only a small decrease in (uncorrupted) test performance. This trade-off between accuracy and robustness has been observed and discussed in the adversarial robustness setting (Tsipras et al., 2019; Yang et al., 2020), and remains an open problem if and how both qualities can be achieved in practice. Moreover, the shared function-space prior resulted in remarkable consistency across models, compared to the variety seen in weight-space priors. Due to the predictive prior, the ECE was significantly higher for corruption 0, but this underfitting provided superior ECE when OOD. This suggests the function-space prior should be fine-tuned (i.e. empirical Bayes) if superior ECE is desired in-distribution. 
For CIFAR100, the higher prior regularization due to higher dimensionality (see Section B.2), resulted in reduced benefit over weight-space models compared to CIFAR10, with improved performance only evident at stronger corruptions. Figure 7 summarizes the LLH difference due to the function-space prior.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ogHsB0aJsd.PzEritC2E6.00", "parag_1": "Le et al., 2019) can enjoy polynomial sample complexity under appropriate coverage assumptions, but the guarantee relies on the strong Bellman-completeness assumption on the function class; marginalized importance sampling (MIS) methods, which have gained significant attention recently (Liu et al., 2018; Xie et al., 2019; Uehara et al., 2020; Nachum et al., 2019a), use two function classes to simultaneously approximate the value and the density-ratio (or weight) function and optimize minimax objectives. Notably, it is the only family of methods known to produce accurate return estimates with a polynomial sample complexity, when the function classes only satisfy the relatively weak realizability assumptions (i.e., they contain the true value and weight functions).", "parag_2": "Le et al., 2019) can enjoy polynomial sample complexity under appropriate coverage assumptions, but the guarantee requires the function class to satisfy the strong Bellmancompleteness assumption, i.e. closure under the Bellman operator (Chen & Jiang, 2019; Xie et al., 2021). Marginalized importance sampling (MIS) methods, which have gained significant attention recently (Liu et al., 2018; Xie et al., 2019; Uehara et al., 2020; Nachum et al., 2019a), use two function classes to simultaneously approximate the value and the density-ratio (or weight) function and optimize minimax objectives. 
Notably, it is the only family of methods known to produce accurate return estimates with a polynomial sample complexity, when the function classes only satisfy the relatively weak realizability assumptions (i.e., they contain the true value and weight functions).", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.10", "parag_1": "Similar to Fung et al. , we find that this is not only faster to run and easier to implement (no need to solve a linear system involving H ), but also leads to better results than standard implicit differentiation. Hence, we apply this simplification in all of our experiments.", "parag_2": "Other motivations for this approximation are discussed in Fung et al. (2022), where they call this Jacobian-free backpropagation. Matching their results, we find that this is not only faster to run and easier to implement (no need to solve a linear system involving H ), but also leads to better results than standard implicit differentiation in preliminary experiments. Hence, we apply this approximation to differentiate Equation 7 with respect to z and θ in all of our experiments.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "kBsx5htyKn.qV5njV8W5.02", "parag_1": "We investigate this empirically as follows: we start by selecting 2 . 5% random data points and train a model on this subset. At line 7 of Alg. 1, we enrich the unlabelled pool U by 10% with datapoints from another dataset and select the 200 data point with the highest score given by uncertaintysampling. If the selection would be random, then 10% of the selected data points would come from the out-of-domain dataset.", "parag_2": "We investigate this empirically as follows: we start by selecting 2 . 5% random data points and train a model on this subset. 
At line 7 of Alg. 1, we enrich the unlabelled pool U by 10% with data points from another dataset. We perform three runs, where in each run we inject random data points from only one of the three other datasets. Afterwards the 200 data point with the highest score given by uncertainty sampling are selected. If the selection would be random, then 10% of the selected data points would come from the out-of-domain dataset.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "eSDLAo16hQ.ZFRJqXKAQ.00", "parag_1": "We characterize the expressiveness of the matrices used in our method. In particular, we prove that block butterfly retains the expressiveness of butterfly, and that flat butterfly can accurately approximate the residual form of butterfly. Moreover, flat block butterfly + low-rank (an instance of sparse + low-rank) is more expressive than sparse or low-rank matrices alone. Finally, we analyze the training convergence and generalization of networks with sparse weights. All proofs are in the Appendix.", "parag_2": "We characterize the expressiveness of the matrices used in our method. In particular, we prove that block butterfly retains the expressiveness of butterfly, and that flat butterfly can accurately approximate the residual form of butterfly. Moreover, flat block butterfly + low-rank (an instance of sparse + low-rank) is more expressive than sparse or low-rank matrices alone. Finally, we analyze the training convergence and generalization of networks with sparse weights. All proofs are in the Appendix.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nElUizYNh.CZNfAQwVJ.00", "parag_1": "MNIST dataset with 1000 training images. All external regularization schemes except learning rate decay and batch normalization have been turned off. 
We perform the following experiments : 1 ) Full-batch gradient descent with β = 0 (i.e., GD) for various learning rate h and the best test accuracy is noted (in Figure 2) to be 95 . Full-batch gradient descent with momentum (GD+M) performed for various β with a fixed step-size h − 0 . 1 and the best test-accuracy is noted (in Figure 3) to be 96 . Our observation is that the best performance of GD (across all learning rates) is worse than the best performance of (GD+M) (across all β ’s). (Cohen et al., 2021) showed that gradient descent (GD) has an overwhelming tendency to increase the sharpness 2 till it reaches 2 / h , called “the edge of stability”. And for (GD+M), the sharpness can reach up to 2(1+ β ) / h , hence allowing it to enter a sharper region before becoming unstable. As greater allowable sharpness for (GD+M) than that of (GD) may suggest a higher test accuracy for (GD), this is not what we observe from the above experiment. We think the implicit regularization for (GD+M) plays a part in it. We believe IGR for momentum outweighs the sharpness effect in achieving better test accuracy.", "parag_2": "MNIST dataset with 1000 training images. All external regularization schemes except learning rate decay and batch normalization have been turned off. We perform the following experiments : 1 ) Full-batch gradient descent with β = 0 (i.e., GD) for various learning rate h and the best test accuracy is noted (in Figure 2) to be 95 . Full-batch gradient descent with momentum (GD+M) performed for various β with a fixed step-size h = 0 . 1 and the best test-accuracy is noted (in Figure 3) to be 96 . Our observation is that the best performance of GD (across all learning rates) is worse than the best performance of (GD+M) (across all β ’s).
This observation failed to be explained by the known theory of edge of stability 3 but can be well-explained by our implicit regularization theory for (GD+M) as adding momentum increases the strength of the IGR.", "annot_1": {"annotation": ["Content_deletion", "Concision"], "instruction": "Summarize heavily the results and explanations obtained. Fix any typos.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Summarize the second half of the paragraph to make the paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.09", "parag_1": "This definition of µ ( θ ; · ) is inspired by discussion on the effective update of parameters in the literature (Van Laarhoven, 2017; Zhang et al., 2019; Brock et al., 2021). Previous studies found that when normalization techniques are applied, such as batch normalization (Ioffe & Szegedy, 2015), the update of the direction of θ , i.e., θ / || θ || 22 , reflects how much the update on θ changes the model f in order to fit the batch of samples.", "parag_2": "This definition of µ ( θ ; i ) is inspired by discussion on the effective update of parameters in the literature (Van Laarhoven, 2017; Zhang et al., 2019; Brock et al., 2021; Hoffer et al., 2018). When normalization techniques, such as batch normalization (Ioffe & Szegedy, 2015), are applied to the DNNs, the key property of the weight vector, θ , is its direction, i.e., θ / || θ || 22 . 
Thus, we measure the update on θ using the norm of its gradient normalized by its norm.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make expression concise, add conjunction, include all citations.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph to make it clearer.", "annotator": "annotator_02"}} {"id_paragraph": "jzQGmT-R1q.ugUt9B3XaO.03", "parag_1": "Having shown that networks do indeed lose some notion of target-fitting capacity, we now turn our attention to the interaction between capacity and performance. To study this phenomenon further, we will use a computationally cheaper measure of network capacity which we call the effective dimension. An approximation of the rank of a feature embedding, it is both more computationally efficient – as it does not require training a network for several thousand steps – and requires fewer hyperparameter design choices than the previous capacity metric. While this does not take into account the ability of the network to update early layers, Appendix B shows that it correlates reasonably well with target-fitting capacity while also being a cheap, low-variance estimator against which to compare an agent’s relatively noisy performance.", "parag_2": "Having shown that networks do indeed lose some notion of target-fitting capacity, we now turn our attention to the interaction between capacity and performance. To study this phenomenon further, we will use a computationally cheaper measure of network capacity which we call the effective dimension. An approximation of the rank of a feature embedding, it is both more computationally efficient – as it does not require training a network for several thousand steps – and requires fewer hyperparameter design choices than the previous capacity metric. 
This notion of capacity also measures more directly the ability of the network to distinguish input observations and so provides a better measure of how network updates will generalize to other states. This notion of state similarity is particularly relevant to sparse-reward environments, where policy improvement depends on the agent’s ability to distinguish a handful of rewarding states from the vast non-rewarding majority.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Hk6z2qY07.SJ0SHCKR7.00", "parag_1": "Prioritized Experience Replay (Schaul et al., 2015b) ( PER ) improves the performance of DQN (Mnih et al., 2015) by biasing sampling in favor of experiences that cause large temporal-difference (TD) errors. TD errors may signal rare events that would convey useful information to the learner, especially if the rewards are rare. Alternatively ER can be used to train transition models in planning-based RL (Pan et al., 2018), or to train off-policy learners on auxiliary tasks (Schaul et al., 2015a; Jaderberg et al., 2017) helping to shape the network features. Finally, when rewards are extremely rare, exploration may be guided by training the agent to reach rarely-visited states (Andrychowicz et al., 2017). de Bruin et al. (2015) proposes a modification to ER that increases the diversity of behaviors contained in the RM, which is the opposite of what ReF-ER achieves. The ideas proposed by de Bruin et al.", "parag_2": "Prioritized Experience Replay (Schaul et al., 2015b) ( PER ) improves the performance of DQN (Mnih et al., 2015) by biasing sampling in favor of experiences that cause large temporal-difference (TD) errors. TD errors may signal rare events that would convey useful information to the learner. 
ER can be used to train transition models in planning-based RL (Pan et al., 2018), or to train off-policy learners on auxiliary tasks (Schaul et al., 2015a; Jaderberg et al., 2017) helping to shape the network features. When rewards are very sparse, RL agents can be trained to repeat previous outcomes (Andrychowicz et al., 2017) or to reproduce successful states or episodes (Oh et al., 2018; Goyal et al., 2018). de Bruin et al. (2015) proposes a modification to ER that increases the diversity of behaviors contained in the RM, which is the opposite of what ReF-ER achieves. The ideas proposed by de Bruin et al.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.02", "parag_1": "Our contribution is twofold. First, we propose the design of calen- dars that support medication prescriptions. The calendars allow for the scheduling of medication alongside other everyday activities and provide a way for rendering and resolving scheduling conflicts when they are raised by unsafe drug interactions due to violation of admin- istration constraints specified in the prescription. Second, we present the results of a qualitative study with twelve participants interact- ing with the calendar designs. Results indicate that calendars can be designed to support the integration of medication prescriptions and that people would generally be favorable to using a calendar to manage their medications, if the design does not deviate from familiar calendars. Results also show that conflicts arising from unsafe rescheduling can be rendered within the calendars. These results inform five additional design goals that an integrated calendar should address: the use of familiar design (DG2), avoiding clutter (DG3), allowing for personalization (DG3), supporting personal reflection (DG5), and highlighting for user attention (DG6).", "parag_2": "Our contribution is twofold. 
First, we identify and discuss consid- erations for the design of calendars that support medication prescriptions. Such calendars allow for the scheduling of medication actions alongside other everyday activities and provide a way for rendering and resolving conflicts when they are raised by unsafe schedules (i.e., schedules that violate constraints specified in the prescriptions). Second, we present the results of a qualitative study with twelve participants interacting with alternative calendar designs. Results indicate the potential benefit of equipping electronic calendars that already in use by many patients with additional functionality to support the scheduling of medication prescriptions. Users are generally in favour of using such an integrated approach that leverages their familiarity with existing tools. Results also show that it is feasible to design calendars that effectively communicate unsafe medication schedules. These results inform five additional design goals that an integrated calendar should address: the use of familiar design (DG2), avoiding clutter (DG3), allowing for personalization (DG3), support- ing personal reflection (DG5), and highlighting for user attention (DG6).", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph for improved readability.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph to make it more readable and fitting to the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "8MRWETVKC.gKhhork8y.01", "parag_1": "Gaze Estimation Methods Gaze estimation methods can be generally categorised as either modelbased or appearance-based [24]. Model-based methods employ features detected from eye images to estimate gaze direction and can hardly obtain good performance in real-world settings because accurate eye feature detection relies on high-resolution images and homogeneous illumination [24, 25]. 
In contrast, appearance-based approaches directly regress gaze direction from eye images [12, 13, 14, 15] and can handle low-resolution images and different gaze ranges. However, appearance-based methods require a larger number of training eye images than model-based approaches to cover the significant variability in eye appearance [24], which poses serious privacy risks since eye images contain ample personal information, such as gender [26], identity [27], and personality traits [28]. Moreover, the heterogeneous data distribution across different users in real-world settings, which is caused by many factors including the differences in gaze range, head pose, illumination condition, and personal appearance [24, 17], can significantly hinder the training process of appearance-based gaze estimation methods. ", "parag_2": "Gaze estimation methods can be generally categorised as either model-based or appearance-based Zhang et al. (2017b). Model-based methods employ features detected from eye images to estimate gaze direction and can hardly obtain good performance in real-world settings because accurate eye feature detection relies on high-resolution images and homogeneous illumination Zhang et al. (2017b, 2019). In contrast, appearance-based approaches directly regress gaze direction from eye images Baluja and Pomerleau (1993); Liang et al. ; Choi et al. ; Lu et al. and can handle low-resolution images and different gaze ranges. However, appearance-based methods require a larger number of training eye images than model-based approaches to cover the significant variability in eye appearance Zhang et al. (2017b), which poses serious privacy risks since eye images contain ample personal information, such as gender Sammaknejad et al. (2017), identity Cantoni et al. and personality traits Hoppe et al. 
Moreover, the heterogeneous data distribution across different users in real-world settings, which is caused by many factors including the differences in gaze range, head pose, illumination condition, and personal appearance Zhang et al. (2017b, 2018), can significantly hinder the training process of appearance-based gaze estimation methods. Recent works Li et al. ; Bozkir et al.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.11", "parag_1": "Specifically, as shown in Figure 2 (b), existing methods (Yao et al., 2019) just assume unconfoundedness in Assumption A.1 to circumvent the UCE issue. Specifically, given two units r i ∈ P T = 1 ψ ( r ) and r j ∈ P T = 0 ψ ( r ) , the optimal transport in Definition 3.2 directly calculates the unit-wise distance as D ij ∶= ( r i − r j ) 2 . Given Assumption A.1 holds, this approach mitigates the treatment selection bias since it blocks the backdoor path X → T by balancing the distribution of the observed covariates in a latent space. However, Assumption A.1 is usually violated in real scenarios, which invalidates existing methods as the backdoor path X ′ → T is not blocked.", "parag_2": "CFR (Shalit et al., 2017), the unconfoundedness assumption A.1 (see Appendix A) is often taken to circumvent the UCE issue (Ma et al., 2022). Given two units r i ∈ P T = 1 ψ ( r ) and r j ∈ P T = 0 ψ ( r ) , for instance, optimal transport in Definition 3.2 calculates the unit-wise distance as D ij ∶= ∥ r i − r j ∥ 2 . If Assumption A.1 holds, this approach mitigates the treatment selection bias since it blocks the backdoor path X → T in Figure 3(a) by balancing the confounders across groups in a latent space. 
However, Assumption A.1 is usually violated in practice as per Figure 3(b), which hinder existing methods including OT from handling treatment selection bias since the backdoor path X ′ → T is not blocked.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Fluidify this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "x8CcXI4Ei.4yg90qT46L.00", "parag_1": "Meta learning, also referred to as “learning to learn”, usually learns a prior model from multiple tasksso that the learned model is able to quickly adapt to unseen tasks. Meta learning has been successfullyapplied to few-shot learning and multi-task learning [2,12,20,25,37]. While there are many exciting meta learning methods today, in this paper, we will study a representative meta learning settingwhere the goal is to learn a shared initial model that can quickly adapt to task-specific models. This adaptation may take an explicit form such as the output of one gradient descent step, which is referred to as the model agnostic meta learning (MAML) method [20]. Alternatively, the adaptation step may take an implicit form such as the solution of another optimization problem, which is referred to as the implicit MAML (iMAML) method [33]. Since both MAML and iMAML will solve a bileveloptimization problem, we term them the gradient-based meta learning thereafter. In many cases,overparameterized models are used as the initial models in meta learning for quick adaptation. Forexample, ResNets-based MAML models typically have around 6 million parameters, but are trainedon 1-3 million meta-training data [11]. 
Training such initial models is often difficult in meta learningbecause the number of training data is much smaller than the dimension of the model parameter.", "parag_2": "Meta learning, also referred to as “learning to learn”, usually learns a prior model from multipletasks so that the learned model is able to quickly adapt to unseen tasks. Meta learning has beensuccessfully applied to few-shot learning and multi-task learning [2, 11, 19, 24, 35]. While there are many exciting meta learning methods today, in this paper, we will study a representative metalearning setting where the goal is to learn a shared initial model that can quickly adapt to task-specific models. This adaptation may take an explicit form such as the output of one gradient descent step, which is referred to as the model agnostic meta learning (MAML) method [19]. Alternatively, the adaptation step may take an implicit form such as the solution of another optimization problem, which is referred to as the implicit MAML (iMAML) method [31]. Since both MAML and iMAML will solve a nested optimization problem, we term them the nested meta learning thereafter. In manycases, overparameterized models are used as the initial models in nested meta learning for quickadaptation. However, training such initial models is often difficult in meta learning because thenumber of training data is much smaller than the dimension of the model parameter.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Improve the English and remove the example sentence.", "annotator": "annotator_02"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.00", "parag_1": "Proteins rarely act alone but usually interact with other proteins to orchestrate biological processes (Alberts & Miake-Lye, 1992; Kastritis & Bonvin, 2013). 
For example, antibodies, a kind of immune system protein, recognize pathogens by binding to proteins on their surface and elicit immune responses by interacting with the receptor protein of immune cells (Lu et al., 2018). Since proteinprotein interactions determine a large number of biological functions, developing methods to modulate protein-protein interactions is critical. A typical way to modulate protein-protein interactions is to mutate amino acids on the interface — some mutations improve the strength of binding while others weaken and even disrupt the interaction (Gram et al., 1992; Barderas et al., 2008). Biologists choose either to increase or decrease the binding strength depending on specific goals. For example, if one would like to enhance the effect of a neutralizing antibody against a virus, it is usually necessary to increase the binding strength between the antibody and the viral protein. However, as the combinatorial space of amino acid mutations is large, it is not always feasible or affordable to conduct wet-lab assays to test all the viable mutations. Therefore, computational approaches are needed to guide the identification of desirable mutations via predicting the mutational effect on binding strength measured by the change in binding free energy ( ∆∆ G ).", "parag_2": "Proteins rarely act alone and usually interact with other proteins to perform a diverse range of biological functions (Alberts & Miake-Lye, 1992; Kastritis & Bonvin, 2013). For example, antibodies, a type of immune system protein, recognize and bind to proteins on pathogens’ surfaces, eliciting immune responses by interacting with the receptor protein of immune cells (Lu et al., 2018). Given the importance of protein-protein interactions in many biological processes, developing methods to modulate these interactions is critical. 
A common strategy to modulate protein-protein interactions is to mutate amino acids on the interface: some mutations enhance the strength of binding, while others weaken or even disrupt the interaction (Gram et al., 1992; Barderas et al., 2008). Biologists may choose to increase or decrease binding strength depending on their specific goals. For example, enhancing the effect of a neutralizing antibody against a virus usually requires increasing the binding strength between the antibody and the viral protein. However, the combinatorial space of amino acid mutations is large, so it is not always feasible or affordable to conduct wet-lab assays to test all viable mutations. Therefore, computational approaches are needed to guide the identification of desirable mutations by predicting their mutational effects on binding strength, typically measured by the change in binding free energy ( ∆∆ G ).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improve the English in this paragraph for better readability.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Lightly revise this paragraph to make it more clear and precise while keeping the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "CzTbgFKuy.hfDu8DsDq6.00", "parag_1": "Learning guarantees: Having established an upper bound, in Theorem 4.2 we again show how a learning-theoretic result follows from standard online learning. This time, instead of OGD we run exponentiated gradient (EG) [44], a classic method for learning from experts, on each of n simplices to learn the probabilities p [ j ] ∀ j ∈ [ n ] . EG updates by multiplying the current vector by the exponent of the negative gradient and is notable for having regret logarithmic in the size |K| of the simplices, which is important for large metric spaces. 
Note that as the relaxation is randomized, our algorithms output a dense probability vector; to obtain a prediction for OPM we sample ˆ s t [ j ] ∼ p [ j ] ∀ j ∈ [ n ] .", "parag_2": "Learning guarantees: Having established an upper bound, in Theorem 4.2 we again show how a learning-theoretic result follows from standard online learning. This time, instead of OGD we run exponentiated (sub)gradient ( EG ) [44], a classic method for learning from experts, on each of n simplices to learn the probabilities p [ j ] ∀ j ∈ [ n ] . The multiplicative update x t +1 ∝ x t (cid:12) exp( − α ∇ U t ( x t )) of EG is notable for yielding regret logarithmic in the size |K| of the simplices, which is important for large metric spaces. Note that as the relaxation is randomized, our algorithms output a dense probability vector; to obtain a prediction for OPM we sample ˆ s t [ j ] ∼ p [ j ] ∀ j ∈ [ n ] .", "annot_1": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "x8CcXI4Ei.4yg90qT46L.03", "parag_1": "This paper studies the generalization performance of the gradient-based meta learning with an overparameterized model. For a precise analysis, we focus on linear models where the total number of data from all tasks is smaller than the dimension of the model parameter. We show that when the data heterogeneity across tasks is relatively small, the per-task data covariance matrices with certain properties lead to benign overfitting for gradient-based meta learning with the minimum-normsolution. This explains why overparameterized meta learning models can generalize well in new data and new tasks. Furthermore, our theory shows that overfitting is more likely to happen in metalearning than in ERM, especially when the data heterogeneity across tasks is relatively high. Onelimitation of this work is that the analysis focuses on themeta linearregression case. 
While thisanalysis can capture practical cases where we reuse the feature extractor from pre-trained models andonly meta-train the parameters in the last linear layer, it is also promising to extend our analysis tononlinear cases via means of random features and neural tangent kernels in the future work.", "parag_2": "This paper studies the generalization performance of the nested meta learning with an overparameterized model. For a precise analysis, we focus on linear models where the total number of data from all tasks is smaller than the dimension of the model parameter. We show that when the data heterogeneity across tasks is relatively small, the per-task data covariance matrices with certain properties lead to benign overfitting for nested meta learning with the minimum norm solution. This explains why overparameterized meta learning models can generalize well in new data and new tasks. Furthermore, our theory shows that overfitting is more likely to happen in nested meta learning than in ERM, especially when the data heterogeneity across tasks is relatively high in meta learning. Though ourcurrent analysis is limited to the linear models and non-Bayesian estimate, it is possible to extendit for analyzing Bayesian meta learning with more general fully connected or convolutional neuralnetworks in the neural tangent kernel regime. We will pursue those in our future work.", "annot_1": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Exclude unnecessary reasoning, correct the typos.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the english of this paragraph, particularly the last part. 
Replace all mentions of \"gradient-based meta learning\" with \"nested meta learning\".", "annotator": "annotator_02"}} {"id_paragraph": "OzYyHKPyj7.O9Mk1uqXra.03", "parag_1": "In the original NS-RNN, the controller can read the probability distribution over the PDA’s current top stack symbol, but it cannot read the PDA’s current state at all. To see why this is a problem, consider the language { 𝑣𝑣 R } . While reading 𝑣 , the controller should predict the uniform distribution over { 0 , 1 } , but while reading 𝑣 R , it should predict based on the top stack symbol. The PDA is able to guess whether the current position is in 𝑣 or 𝑣 R , but this information is kept in the state, not the stack. Consequently, the controller has no way to know which prediction strategy to use. In the RNS-RNN, the stack WFA computes a joint distribution over top stack symbols and PDA states, making r 𝑡 a vector of size | 𝑄 || Γ | . Equation 5 becomes", "parag_2": "In the NS-RNN, the controller can read the distribution over the PDA’s current top stack symbol, but it cannot observe its current state. To see why this is a problem, consider the language { 𝑣𝑣 R } . While reading 𝑣 , the controller should predict the uniform distribution, but while reading 𝑣 R , it should predict based on the top stack symbol. A PDA with two states can nondeterministically guess whether the current position is in 𝑣 or 𝑣 R . The controller should interpolate the two distributions based on the weight of being in each state, but it cannot do this without input from the stack WFA, since the state is entangled with the stack contents. We solve this in the RNS-RNN by computing a joint distribution over top stack symbols and PDA states, making r 𝑡 a vector of size | 𝑄 || Γ | . 
Equation 5 becomes", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "MZYBK_Wp2X.HVFitLjAId.01", "parag_1": "We use Lancichinetti–Fortunato–Radicchi (LFR) graph generator. It generates non-weighted graphs with ground truth non-overlapped communities. The model has mandatory parameters: the number of nodes n ( n > 0 ), the power law exponent for the degree distribution τ 1 ( τ 1 > 1 ), the power law exponent for the community size distribution τ 2 ( τ 2 > 1 ), the fraction of intra-community edges incident to each node µ ( 0 ≤ µ ≤ 1 ), and either minimum degree (min degree) or average degree (avg degree). There are also extra parameters: maximum degree (max degree), minimum community size (min community), maximum community size (max community). Not all LFR parameters space correspondsreal world graphs, usually real graphs correspond to τ 1 ∈ [1 , 4] and µ < 0 . However, there is also separated interesting case of bisect graphs ( µ > 0 . 5 ). We still consider the entire parameter space to cover all the cases.", "parag_2": "We use Lancichinetti–Fortunato–Radicchi (LFR) graph generator. It generates non-weighted graphs with ground truth non-overlapping communities. The model has mandatory parameters: the number of nodes n ( n > 0 ), the power law exponent for the degree distribution τ 1 ( τ 1 > 1 ), the power law exponent for the community size distribution τ 2 ( τ 2 > 1 ), the fraction of intra-community edges incident to each node µ ( 0 ≤ µ ≤ 1 ), and either minimum degree (min degree) or average degree (avg degree). There are also extra parameters: maximum degree (max degree), minimum community size (min community), maximum community size (max community). Not the whole LFR parameter space corresponds to common real-world graphs; most of such graphs are described with τ 1 ∈ [1 , 4] and µ < 0 . 
However, there is also an interesting case of bipartite/multipartite-like graphs with µ > 0 . 5 . Our choice is to consider the entire parameter space to cover all theoretical and practical cases.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Correct the grammar error.", "annotator": "annotator_08"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.15", "parag_1": "Ablations : Figure 5 shows ablation results on test actions. Grid World : all ablations utilize the summary of action set as input, aggregated via different mechanisms. Thus, they can identify which dig-lava skills are available and guide each action decision accordingly. In such small action spaces with simple action relations, summary-ablations are on par with AGILE. This also holds true for RecSim and Real RecSys , where the information of the most common category is sufficient to select the same category’s items to maximize CPR (e.g. Figure 6(c)). Therefore, we observe only 5 − 20% gains of AGILE over the ablations. To test consistency of results, we further evaluate on two more RecSim tasks: (i) Direct CPR: task is still to maximize CPR, but agent receives additional CPR metric reward on top of click/no-click reward (Sec B.3, and (ii) Pairing environment: task is to recommend pairs of associated items based on predefined pairings. We reproduce the trend that AGILE > = ablations.", "parag_2": "Ablations : Figure 5 shows ablation results on test actions. Grid World : all ablations utilize the action set summary as input, aggregated via different mechanisms. Thus, they can identify which dig-lava skills are available and enter lava accordingly to create shortcuts. In such small action spaces with simple action relations, summary-ablations are on par with AGILE. 
This trend also holds for RecSim and Real RecSys , where the summary can find the most common category and its items are then selected to maximize CPR (e.g., Figure 6(c)). Therefore, we observe only 5 − 20% gains of AGILE over the ablations. To test the consistency of results, we further evaluate two more RecSim tasks. (i) Direct CPR: the agent receives additional explicit CPR metric reward on top of click/noclick reward (Sec. B.3), and (ii) pairing environment: the task is to recommend pairs of associated items based on predefined pairings (Sec. B.4). We reproduce the trend that AGILE > = ablations.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Correct some issues in the paragraph and replace certain words to improve it", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the overall clarity of the paragraph.", "annotator": "annotator_03"}} {"id_paragraph": "l1D720s69O.vCKjjOP1ze.04", "parag_1": "To convert text classification into the node classification on graph, there are two relationships considered when forming graphs: (i) the relation between documents and words and(ii) the connection between words. For the first type of relations, we build edges among word nodes and document nodes based on the word occurrence in documents. The weight of the edge between a document node and a word node is the Term Frequency-Inverse Document Frequency (Rajaraman & Ullman, 2011) (TF-IDF) of the word in the document applied to build the Docs-words graph. For the second type of relations, we build edges in graph among word co-occurrences across the whole corpus. To utilize the global word co-occurrence information, we use a fixed-size sliding window on all documents in the corpus to gather co-occurrence statistics. Point-wise Mutual Information (Church &", "parag_2": "For Text GCN, SGC, and our approach, the embedding size of the first convolution layer is 200 and the window size is 20. 
We set the learning rate to 0.02, dropout rate to 0.5 and the decay rate to 0. The 10% of training set is randomly selected for validation. Following (Kipf & Welling, 2016), we trained our method and Text GCN for a maximum of 200 epochs using the Adam (Kingma & Ba, 2014) optimizer, and we stop training if the validation loss does not decrease for 10 consecutive epochs. The text graph was built according to steps detailed in the supplementary material. To convert text classification into the node classification on graph, there are two relationships considered when forming graphs: (i) the relation between documents and words and (ii) the connection between words. For the first type of relations, we build edges among word nodes and document nodes based on the word occurrence in documents. The weight of the edge between a document node and a word node is the Term Frequency-Inverse Document Frequency (Rajaraman & Ullman, 2011) (TF-IDF) of the word in the document applied to build the Docs-words graph. For the second type of relations, we build edges in graph among word co-occurrences across the whole corpus. To utilize the global word co-occurrence information, we use a fixed-size sliding window on all documents in the corpus to gather co-occurrence statistics. Point-wise Mutual Information (Church &", "annot_1": {"annotation": ["Development", "Content_addition"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7k_XarZvZi.HEpO0DQe38.00", "parag_1": "X t,j is encoded by its user into a message Y t,j , and then transmitted to a central server. We assume that the communication with the server for each datapoint takes no more than b bits. This paper mainly addresses the communication bottleneck and does not involve privacy concerns.", "parag_2": "X t,j is encoded by its user into a message Y t,j , and then transmitted to a central server. 
We assumethe server knows which cluster t ∈ [ T ] each transmitted datapoint Y t,j belongs to, as well as thenumber of clusters T . We assume that the communication with the server for each datapoint takes no more than b bits. This paper mainly addresses the communication bottleneck and does not involve privacy concerns.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "fDUdAYCQqZy.0cNiGAHFml.00", "parag_1": "Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in (Wang et al., 2018; Peng et al., 2019; Chen et al., 2020). By doing so, they keep the whole learning procedure within the dataset’s support. However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. Learning optimal values within the dataset, on the other extreme, can lead to erroneously optimistic value estimates since data is limited and off-policy. To achieve a trade-off between imitation learning and optimal value learning, we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.", "parag_2": "Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in Wang et al. ; Peng et al. ; Chen et al. By doing so, they keep the value learning procedure completely within the dataset . However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. 
To achieve a tradeoff between imitation learning and optimal value learning while confines learning within the dataset , we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.", "annot_1": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove a redundant sentence. Correct citation format.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Make it shorter by removing what is not essential.", "annotator": "annotator_07"}} {"id_paragraph": "S1fwAltvB.BkzbibmoH.03", "parag_1": "N: to achieve a large explosive yield, a linear implosion weapon needs more material, about 13 kgs. Q: the mint ’s director at the time , nicolas peinado , was also an architect and made the initial plans. N: jetley ’s mother , kaushaliya rani , was the daughter of high court advocate shivram jhingan . Q: their first project is software that lets players connect the company ’s controller to their device N: the city offers a route-finding website that allows users to map personalized bike routes", "parag_2": "NT: to achieve a large explosive yield, a linear implosion weapon needs more material, aboutkgs. Q: the mint ’s director at the time , nicolas peinado , was also an architect and made the initial plans. N: the director is angry at crazy loop and glares at him , even trying to get a woman to kick crazy loop out of the show ( which goes unsuccessfully ) . NT: jetley ’s mother , kaushaliya rani , was the daughter of high court advocate shivram jhingan . Q: their first project is software that lets players connect the company ’s controller to their device N: you could try use norton safe web , which lets you enter a website and show whether there seems to be anything bad in it . 
NT: the city offers a route-finding website that allows users to map personalized bike routes", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.13", "parag_1": "Encoder Network The encoder network starts with two multi-layer perceptrons (MLPs) that generate embeddings for each single amino acid and each pair of amino acids respectively. The MLP for single amino acid encodes the amino acid type, backbone dihedral angles, and local atom coordinates of each amino acid into a vector e i ( i = 1 . . . n ). The other MLP for amino acid pairs mainly encodes the distance and relative position between two amino acids and we denote a pair embedding vector as z ij ( i, j = 1 . . . n ). Next, we use a self-attention-based network invariant to rotation and translation (Jumper et al., 2021) to transform the single embeddings{ e i} and pair embeddings{ z ij} into hidden representations{ h i } . The hidden representation h i is aimed at capturing the informa- tion of both the i -th amino acid itself and its structural environments. It serves as an encoding of the condition { a j , p j , O j , ˜ χ j } nj =1 for the probability density w.r.t. χ i .", "parag_2": "Encoder Network The encoder network starts with two multi-layer perceptrons (MLPs) that generate embeddings for each individual single residue and each pair of residues respectively. The MLP for single residues encodes the residue type, backbone dihedral angles, and local atom coordinates into a vector e i ( i = 1 . . . n ). The other MLP for residue pairs encodes the distance and the relative position between two residues. We denote a pair embedding vector as z ij ( i, j = 1 . . . n ). 
To transform the single embeddings e i and pair embeddings z ij into hidden representations h i , we use a self-attention-based network that is invariant to rotation and translation (Jumper et al., 2021). The hidden representation h i aims to capture both the information of the i -th residue itself and its structural environment. It serves as an encoding of the condition { a j , p j , O j , ˜ χ j } nj =1 for the probability density with respect to χ i .", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Replace every apparition of \"amino acids\" or \"amino acids in the protein complex\" by \"residues\"", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Replace all mentions of amino acid by 'residue'. Reorder sentences in a more logical order when needed.", "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.03", "parag_1": "However, as shown in Figure 2 (a), the treatment selection bias shifts covariate distributions across groups. As such, ϕ 1 and ϕ 0 would overfit the respective group’s properties and thus cannot generalize well to the entire population. Therefore, the resulting ˆ τ would be biased.", "parag_2": "However, according to Figure 1(a), the treatment selection bias causes a distribution shift of covariates across groups, which misleads ϕ 1 and ϕ 0 to overfit their respective group’s properties and generalize poorly to the entire population. Therefore, the ITE estimate ˆ τ by these methods would be biased.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improve the english of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Edit the paragraph to make it more formal and precise.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.13", "parag_1": "Median completion time was 4.0 seconds with Design A , 2.0 seconds with Design B , and 1.0 seconds with Design C . 
To complete this task with Design A , participants had to scroll to count all the times the medication appeared on a given day. Similar complaints about the need to scroll as for T MED ; day were made by P3, P5, and P8.", "parag_2": "All participants successfully completed T MED ; cycle with all de- signs. With Design A , they again had to scroll through the entire week, therefore took longer; with Design B they could rely on the daily summaries; and with Design C the information was readily accessible on the row headers. No comments were made regarding this task.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_09"}} {"id_paragraph": "S1-LZxvKX.rJ009I8RX.02", "parag_1": "Sparse reparameterization Successful training of sparse reparameterized networks invariably employed iterative pruning and retraining (e.g. Han et al. ; Narang et al. (2017); Zhu & Gupta (2017)). Training typically started with a large model and sparsity was scheduled to increase during the course of learning. Training a small, sparse model de novo always fared much worse than training a large one to begin with (Zhu & Gupta, 2017).", "parag_2": "Sparse reparameterization Successful training of sparse reparameterized networks usually employs iterative pruning and retraining, e.g. Han et al. ; Narang et al. (2017); Zhu & Gupta (2017) 3 . Training typically starts with a large pre-trained model and sparsity is gradually increased during the course of fine-tuning. 
Training a small, static, and sparse model de novo always fared much worse than training a large one to begin with (Zhu & Gupta, 2017).", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "BkVj6Z-AW.SytnTZWCZ.04", "parag_1": "One can see a qualitative comparison of acLSTM with a basic LSTM-3R in Figure 3, both trained on the Indian dance dataset. We find the performance of the vanilla network to be consistent with the results reported in Fragkiadaki et al. ; Jain et al. ; Bütepage et al. ; Martinez et al.", "parag_2": "One can see a qualitative comparison of acLSTM with a basic LSTM-3R in Figure 3, both trained on the Indian dance dataset. We find the performance of the vanilla network to be consistent with the results reported in (Fragkiadaki et al., 2015; Jain et al., 2016; Bütepage et al., 2017; Martinez et al., 2017), freezing at around 1000 ms.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.10", "parag_1": "This issue stems mainly from the mass preservation constraint. Specifically, units acting as outliers at a mini-batch level would be forced to be transported by this marginal constraint, as depicted in Figure 1. It hinders the transport of normal units andmakes the group discrepancy vulnerable to the sampling effect. Such issue would be exacerbated by the small batch size.", "parag_2": "This issue is attributed to the mass-preservation constraint in (5), which requires that all units in both groups match each other, regardless of the actual situation. 
Mini-batch outliers, for instance, would be compelled to be transported according to Figure 2, which impedes the transport of normal units and the computation of the actual group discrepancy. A small batch size would exacerbate this defect.", "annot_1": {"annotation": ["Development","Rewriting_medium"], "instruction": "Rewrite so that it looks more organized", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph to make it more easily readable.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.16", "parag_1": "Training Settings. Following (Zhang et al., 2018b), we perform data augmentation on the training images, which are randomly rotated by 90 ◦ , 180 ◦ , 270 ◦ and flipped horizontally. Each training batch consists of 16 LR color patches, whose sizeis 48 × 48. Our SRP model is trained by ADAM optimizer (Kingma & Ba, 2014) with β 1 =0.9, β 2 =0.999, and (cid:15) = 10 − 8 . We set the initial learning rate as 10 − 4 and then decrease it to half every 2 × 10 5 iterations of back-propagation. We use PyTorch (Paszke et al., 2017) to implement our models with a Tesla V100 GPU 2 .", "parag_2": "Training Settings. Following (Zhang et al., 2018b), data augmentation is used in training – training images are randomly rotated by 90 ◦ , 180 ◦ , 270 ◦ and flipped horizontally. Image patches (patch size 48 × 48) are cropped out to form each training batch. Adam optimizer (Kingma & Ba, 2014) is adopted for training with β 1 =0.9, β 2 =0.999, and (cid:15) = 10 − 8 . Initial learning rate is set to 10 − 4 and then decayed by factor 0 . 5 every 2 × 10 5 iterations. 
We use PyTorch (Paszke et al., 2017) to implement our models with a Tesla V100 GPU † .", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improved the writing and reformulate the third sentence", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improved the writing and reformulate the third sentence", "annotator": "annotator_06"}} {"id_paragraph": "_F_xxvP0sL.hgPlvA7CZ6.00", "parag_1": " The other advantage of message passing is that we can utilize the labels of retrieveddata instances to interact with featuresto generate label-enhanced messages, guide the messageaggregation process, and take advantage of label propagation at the same time. After message passing,the enhanced data instance representations are then used for prediction.", "parag_2": "Notice that (2-way) factorization machine (Rendle, 2010) solely modelling the second-order featureinteractions is widely employed in real-world scenarios and inspires a series of tabular predictors (Wuet al., 2021b; Qin et al., 2021). Our method leverages the hypergraph structure of the given tabulardata to capture high-order interactions beyond the second-order ones, and uses GNNs to representthese high-order features. The other advantage of message passing is that we can utilize the labels ofretrieved data instances to interact with features and take advantage of label propagation at the sametime. The enhanced data instance representations are then used for prediction.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.22", "parag_1": "Because avoid-strategy was shown to be desirable in experiment 2, changing the notch to an area where the cursor cannot enter can be considered. 
Experiment 3 was almost the same experiment as experiment 2, however, the notch changed to an area where the cursor cannot enter. The apparatuses, participants, task, and measurements were the same as in experiment 2.", "parag_2": "Changing the notch to an area where the cursor cannot enter can be considered as an effective approach as the avoid-strategy was found to be desirable in Experiment 2. Experiment 3 was almost the same experiment as Experiment 2; however, the notch was changed to an area where the cursor cannot enter. The apparatuses, participants, task, and measurements were the same as in Experiment 2.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Switch the two parts of the first sentence", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Reorder the first sentence.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.12", "parag_1": "When the pruning process is finished, we remove the unimportant filters (not zeroing out them only, but literally removing them from the model), which results in a small model. Then we finetune the small model to regain performance following the common practice in pruning (Reed, 1993).", "parag_2": "Upon finishing pruning, we take away the unimportant filters (not zeroing out them only, but literally removing them from the model), which will give us a compact SR model. Finally, the compact model will be finetuned to regain performance following the common practice in pruning (Reed, 1993).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Please rephrase my paragraph.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Revise this academic paragraph for readability.", "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.00", "parag_1": "These earlier negative findings compel us to ask, what prevents multi-modal DNNs from achieving better performance? 
In order to answer this question, we first introduce a metric, conditional utilization rate . For a multi-modal DNN trained with two modalities, m 0 and m 1 , the conditional utilization rate of m 1 given m 0 , denoted by u ( m 1 | m 0 ) , measures how important it is to use m 1 , given the presence of m 0 . We experiment on several multi-modal learning tasks and we consistently observe a significant difference of conditional utilization rates between modalities. For example, we observe u RGB | depth = 0 . 01 and u depth | RGB = 0 . 63 for a model trained to identify gestures in videos using the NVIDIA Dynamic Hand Gesture Dataset (NVGesture) (Molchanov et al., 2016). It indicates that the model relies on the depth modality to make predictions and does not pay attention to the RGB modality. This observation leads to a conjecture that the multi-modal learning process often results in models that under-utilize some of the input modalities.", "parag_2": "These earlier negative findings compel us to ask, what prevents multi-modal DNNs from achieving better performance? In order to answer this question, we first diagnose these DNNs as lacking utilization of all modalities by analyzing their conditional utilization rates. For a multi-modal DNN trained with two modalities, m 0 and m 1 , the conditional utilization rate of m 1 given m 0 , denoted by u ( m 1 | m 0 ) , measures how important it is to use m 1 , given the presence of m 0 . It is computed as the relative difference in accuracy between two derived models from the DNN, one using both modalities and the other using only one modality. In several multi-modal learning tasks, we consistently observe a significant imbalance in conditional utilization rates between modalities. For example, we observe u (RGB | depth) = 0 . 01 and u (depth | RGB) = 0 . 63 for a DNN trained to identify gestures in videos using the NVIDIA Dynamic Hand Gesture Dataset (NVGesture) (Molchanov et al., 2016). 
It indicates that the DNN relies on the depth modality to make predictions and does not pay attention to the RGB modality. These observations lead to a conjecture that the multi-modal learning process often results in models that under-utilize some of the input modalities.", "annot_1": {"annotation": ["Content_addition", "Development"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "PDvmJtmgQb.gGrpxbc7UI.03", "parag_1": "Consider the classic differentially private stochastic convex optimization (DP-SCO) (Chaudhuri et al., 2011; Bassily et al., 2014; 2019; 2020b) setting. Let τ be a distribution over a fixed domain D . Given a data set D ∈ D ∗ drawn i.i.d. from τ , and a loss function (cid:96) priv : R p × D → R , the objective is to approximately solve arg min θ ∈C", "parag_2": "Consider the classic differentially private stochastic convex optimization (DP-SCO) [6, 9, 11, 13] setting. Let τ be a distribution over a fixed domain D . Given a data set D ∈ D ∗ drawn i.i.d. from τ , and a loss function (cid:96) priv : R p × D → R , the objective is to approximately solve arg min θ ∈C", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": "I do not want parenthetical citations.", "annotator": "annotator_09"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.11", "parag_1": "(2) Regularization Form . Although L 1 regularization is well-known to induce sparsity in machine learning, it is hard to tune the coefficient to strike a good balance between sparsity and performance. Therefore, L 2 regularization is adopted in our method given its easy control. Specifically, given the loss function L of a neural network Θ , the total error function E with the proposed adaptive Lregularization term can be formulated as", "parag_2": "(2) Regularization Form . 
Although L 1 regularization is well-known to induce sparsity in machine learning, it is hard to tune the coefficient to realize a desired trade-off between sparsity and performance. Instead, L 2 regularization is thereby adopted in our method given its more tamable control over the sparsification process – note the gradient of L 2 regularization is proportion to the weight magnitude while the gradient of L 1 regularization is not. Specifically, given the loss function L of a neural network Θ , the total error function E with adaptive L 2 regularization term is formulated as", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.07", "parag_1": "We consider propagating semantic labels from labeled data C to unlabeled data U by exploiting the aforementioned relationships. We denote C and U as the sets of segment indices. Our label propagation is driven by grouping and separating data in a learned feature space. We now describe each pixel-to-segment semantic relationship for augmenting the setsof positive/negative segments using both labeled and unlabeled pixels.", "parag_2": "Our goal is to propagate known semantics from labeled data C to unlabeled data U with the aforementioned priors. C and U denote the sets of segment indices respectively. We detail how to augment positive / negative segment sets using both C and U for each type of relationships (Fig. 4).", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph considerably more concise. Remove any information that is not essential to this paragraph itself.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Delete the third sentence. 
Concise the first and last one.", "annotator": "annotator_07"}} {"id_paragraph": "tOMAf1V5dI.SNeLZ71pb5.01", "parag_1": "MLP-based Architectures. MLP-Mixer (Tolstikhin et al., 2021) has designed a very concise framework that utilizes matrix transposition and MLP to transmit information between spatial features. Resort to MLP, skip connection between layers and normalization layer, MLP-Mixer obtains promising experimental results. The concurrent work FF (Melas-Kyriazi, 2021) also applies a similar network architecture and reaches similar conclusions. Such experimental results are surprising, which shows that the MLP-based architecture also achieves comparable performance with CNN-based architectures and transformer-based architectures. Subsequently, Res-MLP (Touvron et al., 2021a) is proposed, which also obtains impressive performance with residual MLP only trained on ImageNet1K. gMLP (Liu et al., 2021a) and EA (Guo et al., 2021) introduce Spatial Gating Unit (SGU) and the external attention to improve the performance of the pure MLP-based architecture, respectively.", "parag_2": "MLP-based Architectures. MLP-Mixer (Tolstikhin et al., 2021) designs a very concise framework that utilizes matrix transposition and MLP to transmit information between spatial features, and obtains promising performance. The concurrent work FF (Melas-Kyriazi, 2021) also applies a similar network architecture and reaches similar conclusions. Subsequently, Res-MLP (Touvron et al., 2021a) is proposed, which also obtains impressive performance with residual MLP only trained on ImageNet-1K. 
gMLP (Liu et al., 2021a) and EA (Guo et al., 2021) introduce Spatial Gating Unit (SGU) and the external attention to improve the performance of the pure MLP-based architecture, respectively.", "annot_1": {"annotation": ["Concision"], "instruction": "Revise this paragraph to be more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter by deleting details.", "annotator": "annotator_07"}} {"id_paragraph": "YkiRt7L93m.jgDbnUD7s.00", "parag_1": "We develop a notion of projections between sets of probability measures using the geometric properties of the 2 -Wasserstein space. It is designed forgeneral multivariate probability measures, is computationally efficient to implement, and provides a unique solution in regular settings. The idea is to work on regular tangent cones of the Wasserstein space using generalized geodesics. Its structure and computational properties make the method applicable in a variety of settings, from causal inference to the analysis of object data. An application to estimating causal effects yields a generalization of the notion of synthetic controls for systems with general heterogeneity described via multivariate probability measures, as well as a way to estimate optimal weights jointly over all time periods.", "parag_2": "We develop a notion of projections between sets of probability measures using the geometric properties of the 2 -Wasserstein space. In contrast to existing methods, it is designed for multivariate probability measures that need not be regular, is computationally efficient to implement via a linear regression, and provides a unique solution in general. The idea is to work on tangent cones of the Wasserstein space using generalized geodesics. Its structure and computational properties make the method applicable in a variety of settings where probability measures need not be regular, from causal inference to the analysis of object data. 
An application to estimating causal effects yields a generalization of the synthetic controls method for systems with general heterogeneity described via multivariate probability measures, something that has been out of reach of existing approaches.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": "Please, make this paragraph more clear.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "_WDAgtb1S.HYGMjPqDyq.00", "parag_1": "The DINO-based high score images have various face angles while the low score images are mostly front-facing. In the third column of Figure 9, the baby faces appear in the low score images, unlike the other two models which have formal adult images in the low score. This can be explained from the previous study that the CLIP model (Radford et al., 2021), which is a self-supervised image-text multi-modal model, can better capture semantics compared to the classifiers. We conjecture that the model may consider the baby faces more similar between them than between the other images because baby faces are less distinctive and share a lot common properties compared to the adult faces. For the user preference, we present a user study on the feature extractors in Figure 24b in Appendix I. On the noise robustness, CLIP is more robust to the Gaussian noise or Gaussian blur compared to VGG16, which is summarized in Appendix J.", "parag_2": "The DINO-based high score images have various face angles while the low score images are mostly front-facing. In the third column of Figure 9, the baby faces appear in the low score images, unlike the other two models which have formal adult images in the low score. This can be explained from the previous study that the CLIP model (Radford et al., 2021), which is a self-supervised image-text multi-modal model, can better capture semantics compared to the classifiers. 
We conjecture that the model may consider the baby faces more similar between them than between the other images because baby faces are less distinctive and share a lot common properties compared to the adult faces. For the user preference, we present a user study on the feature extractors for FFHQ dataset and found that VGG16 is the most preferred feature extractor for FFHQ dataset. The detailed experimental setting and results are described in Appendix I. For the noise robustness, on the other hand, we found CLIP is more robust to the Gaussian noise or Gaussian blur compared to VGG for various noise levels. The details are described in Appendix J.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "45J3pSnJwb.Hsmw4GNVlK.00", "parag_1": "To summarize, our contribution is introducing a framework of the generalized probability kernel(GPK) which in general treats the function space of probability mass function as a subspace of universal-RKHS F . The resulting family of GPK [ F ] is valuable especially when they are equiped with a plugin-estimator. We introduce a series of techniques in analyzing bias of these plugin-estimators and illustrate a procedure of searching for members of GPK [ F ] family suitable forapplications such as the two-sample test. ", "parag_2": "To summarize, our contribution is introducing a framework of the generalized probability kernel(GPK) which treats the function space of probability mass function as a subspace of universalRKHS F . The resulting family of GPK[ F , φ, p , q ] is valuable especially when they are equipped with a plugin-estimator. We introduce a series of techniques in analyzing bias and convergence bounds of these plugin-estimators. Remarkably, a natural extension of MMD from the viewpoint of GPK, which we call power-MMD, could be used for two-sample test. 
We further argue that the two-sample test using power-MMD with large ρ in general performs better than MMD(which is with small ρ ).", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ryESgXktV.BJ4dKdWmr.00", "parag_1": "Abstract As Artificial Intelligence (AI) becomes an integral part of our life, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to explain its behavior is one of the key require- ments of an explainable agency. Prior work on explanation generation focuses on supporting the reasoning behind the robot’s behavior. These approaches, however, fail to consider the cognitive effort needed to understand the received expla- nation. In particular, the human teammate is expected to understand any explanation provided before the task execution, no matter how much information is presented in the explanation. In this work, we argue that an explanation, especially complex ones, should be made in an online fashion during the execution, which helps to spread out the information to be explained and thus reducing the cognitive load of humans. However, a challenge here is that the different parts of an ex- planation are dependent on each other, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented. We base our explanation generation method in a model reconciliation setting introduced in our prior work. 
Our approach is evaluated both with human subjects in a standard planning competition (IPC) domain, using NASA Task Load Index, as well as in simulation with ten different problems.", "parag_2": "Abstract —As Artificial Intelligence (AI) becomes an integral part of our life, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to explain its behavior is one of the key requirements of an explainable agency. Prior work on explanation generation focuses on supporting the reasoning behind the robot’s behavior. These approaches, however, fail to consider the mental workload needed to understand the received explanation. In other words, the human teammate is expected to understand any explanation provided, often before the task execution, no matter how much information is presented in the explanation. In this work, we argue that an explanation, especially complex ones, should be made in an online fashion during the execution, which helps spread out the information to be explained and thus reducing the mental workload of humans. However, a challenge here is that the different parts of an explanation are dependent on each other, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented along with three different implementations satisfying different online properties. We base our explanation generation method on a model reconciliation setting introduced in our prior work. 
Our approaches are evaluated both with human subjects in a standard planning competition (IPC) domain, using NASA Task Load Index (TLX), as well as in simulation with ten different problems across two IPC domains.", "annot_1": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.03", "parag_1": "On the other hand, neural network pruning is well-known as an effective technique to reduce model complexity (Reed, 1993; Sze et al., 2017). For acceleration, researchers mainly focus on filter pruning (a.k.a. structured pruning) (Li et al., 2017) rather than weight-element pruning (a.k.a. unstructured pruning) (Han et al., 2015; 2016b). Marrying filter pruning with image SR seems a plausible solution to strike a better performance-complexity trade-off. However, filter pruning methods in classification can hardly translate to SR networks directly. The main reason is that, residual connections are well-known hard to prune in structured pruning (Li et al., 2017) while they are extensively used in state-of-the-art SR networks (e.g., EDSR (Lim et al., 2017) has 32 residual blocks; RCAN (Zhang et al., 2018b) even has nested residual blocks).", "parag_2": "On the other hand, neural network pruning is well-known as an effective technique to reduce model complexity (Reed, 1993; Sze et al., 2017). For acceleration, filter pruning (a.k.a. structured pruning) (Li et al., 2017) attracts more attention than weight-element pruning (a.k.a. unstructured pruning) (Han et al., 2015; 2016b). Introducing filter pruning into image SR is a promising solution to achieve a good trade-off between performance and complexity. However, it is not easy to apply filter pruning methods to image SR networks directly. This is mainly because residual connections are well-known difficult to prune in structured pruning (Li et al., 2017). 
On the other hand, they are extensively used in state-of-the-art (SOTA) image SR methods (e.g., EDSR (Lim et al., 2017) hasresidual blocks; RCAN (Zhang et al., 2018b) even has nested residual blocks).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improve the paragraph.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the third sentence using more appropriate language.", "annotator": "annotator_04"}} {"id_paragraph": "wSf7BpyxTb.ZCPjX5OcL.03", "parag_1": "Results. In fig. 1, we plot the average loss against the epoch number based on 30 simulations (runs). The standard deviations of the runs are also illustrated around the average in lighter color as shaded regions. We observe that SAPD and SAPD-VR consistently outperforms over other algorithms. For a9a , gisette , sido0 datasets, the average training accuracy of SAPD are 84 . 43% , and of SAPD-VR are 84 . 46% , respectively. The best performance for a9a , gisette , sido0 among all the other algorithms are 75 . 43% , respectively. More importantly, we observe that as an accelerated method, SAPD-VR enjoys fast convergence properties while still being robust to gradient noise.", "parag_2": "Results. To fairly compare the performances of algorithms using different batch sizes, we plot loss against epochs in x-axis 5 . In fig. 1, we plot the average loss against the epoch number based on 30 simulations (runs). The standard deviations of the runs are also illustrated around the average in lighter color as shaded regions. We observe that SAPD+ and SAPD+VR consistently outperforms over other algorithms. For a9a , gisette , sido0 datasets, the average training accuracy of SAPD+ are 84 . 43% , and of SAPD+VR are 84 . 46% , respectively. The best performance for a9a , gisette , sido0 among all the other algorithms are 75 . 43% , respectively. 
More importantly, we observe that as an accelerated method, SAPD+VR enjoys fast convergence properties while still being robust to gradient noise.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "l1D720s69O.vCKjjOP1ze.03", "parag_1": "The effect of the linear projection Z ( K ) (filter) acting on spectrum as f ( λ ) = 1 K (cid:80) {+K k =\n+} [-Kk =\n-] λ k is plotted in Figure 1, from which we observe the following two properties: (i) Z ( K ) preserves leading (large) eigenvalues of T and (ii) the higher K is the stricter the low-pass filter becomes. In other words, as K grows, this filter tries to approximate a low-rank positive semi-definite matrix by keeping the largest positive eigenvalues. Note the relationship between the normalized Laplacian matrixL andthe corresponding normalized adjacent matrix Twhich is L = I − T , and thus keeping large positive eigenvalues for T equals keeping small eigenvalues for L .", "parag_2": "The effect of the linear projection Z ( K ) (filter) acting on spectrum as f ( λ ) = 1 K (cid:80) {+K k =\n+} [-Kk =\n-] λ k (we sum from 0 to include self-loops) is plotted in Figure 1, from which we observe the following properties: (i) Z ( K ) preserves leading (large) eigenvalues of T and (ii) the higher K is the stricter the low-pass filter becomes but the filter also preserves the high frequency. In other words, as K grows, this filter includes larger and larger neighborhood but also maintains the closest locality of nodes. Note that L = I − T where L is the normalized Laplacian matrix and T is the normalized adjacency matrix. 
Thus keeping large positive eigenvalues for T equals keeping small eigenvalues for L .", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "WFspCOzPdlZ.QOAU2p77wV.00", "parag_1": "Training MoEs is typically done end-to-end, with the experts and gate learning simultaneously, andsparse gradients based on the routing. We found this did not lead to good performance with SM O Es and tensor routing, particularly on regression tasks, as the gate did not learn well. Indeed, in suchcases, the gate rarely changed its routing decisions from initialization. We hypothesize that this is dueto a “mismatch” in the gradients on regression tasks, where they are informative for experts but not thegate (see Appendix B). To address this, we train the gate with a separate, self-supervised loss function, the routing classification (RC) loss. The key idea is to consider routing as a dense, multi-label classification task: selecting the “correct” experts at each point (cf. semantic segmentation). The RC loss does exactly this, and trains the gate by constructing appropriate labels.", "parag_2": "Training MoEs is typically done end-to-end, with the experts and gate learning simultaneously,and sparse gradients based on the routing. We found this did not lead to good performance with SM O Es and tensor routing, particularly on regression tasks, as the gate did not learn well: it rarelychanged its routing decisions from initialization. We hypothesize that this is due to a “mismatch” inthe gradients on regression tasks, where they are informative for experts but not the gate, because regression aims to make a continuous prediction over both positive and negative values, whereasselecting an expert requires a threshold (see §B). 
To address this, we train the gate with a separate, self-supervised loss function, the routing classification (RC) loss. The key idea is to consider routing as a dense, multi-label classification task: selecting the “correct” experts at each point (cf. semantic segmentation). The RC loss does exactly this, and trains the gate by constructing appropriate labels.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Sx6SnclSL.nQLOUHvx8n.02", "parag_1": "Local Spatial Self-Attention. For smaller i of shallower stages, we expect each token to mainly focus on finer-grained information and not to be disturbed by long-range signals. Thus, we modify the original self-attention layer by a local spatial constraint that only neighboring tokens within a ball query [29] would be available for attention calculation. As the point tokens are downsampled by stages, we set increasing radii { r_i }_{i=1}^{S} of multi-scale ball queries for gradually expanding the attention scopes, which fulfills the local-to-global feature aggregation scheme.", "parag_2": "Local Spatial Self-Attention. During pre-training, we expect point tokens in the multi-stage encoder to capture global cues for 3D shapes, which benefits the reconstruction of masked parts. However, when fine-tuning on downstream tasks without masked autoencoding, point tokens in the shallower stages are better to mainly focus on local information and not to be disturbed by long-range signals, referring to the inductive bias of 3D locality [36]. Thus, during fine-tuning, we modify the original self-attention layer in the encoder with a local spatial constraint that only neighboring tokens within a ball query would be available for attention calculation.
As the point tokens are downsampled by stages, we set increasing radii { r_i }_{i=1}^{S} of multi-scale ball queries for gradually expanding the attention scopes, which fulfills the local-to-global feature aggregation scheme.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.14", "parag_1": "Architecture, training and testing. For all the experiments on VOC, we base our architecture as DeepLab (Chen et al., 2017) with ResNet101 (He et al., 2016) as backbone network. For the experiments on DensePose dataset, we adopt PSPNet (Zhao et al., 2017) as backbone network. We only use models pre-trained on ImageNet (Deng et al., 2009) dataset. For training our models, we set λ I , λ C , λ O and λ A according to different types of annotations and datasets, which is shown in table 1. For inference, we follow SegSort (Hwang et al., 2019) to perform k-nearest neighbor retrievals. See Appendix for more detail of hyper-parameters and setting for training and testing.", "parag_2": "Architecture, training and testing. For all the experiments on PASCAL VOC, we base our architecture on DeepLab (Chen et al., 2017) with ResNet101 (He et al., 2016) as the backbone network. For the experiments on DensePose, we adopt PSPNet (Zhao et al., 2017) as the backbone network. Our models are pre-trained on ImageNet (Deng et al., 2009) dataset. See Appendix for details on our inference procedure and hyper-parameter selection for training and testing.
Correct the english in this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.09", "parag_1": "Owing to the pruned index constraint problem, many filter pruning methods in classification simply do not prune the last Conv layer in a residual blocks (Li et al., 2017; Wang et al., 2021), which can still deliver considerable speedup of practical interest. But, this naive solution cannot translate to the image SR networks. The fundamental reason is that image SR networks typically employ much more residual blocks and each block typically has only two Conv layers. In some top-performed SR networks (e.g., RCAN), there are even nested residual blocks. To see how serious this problem is, taking EDSR as an example, it has 32 residual blocks and each block has two Conv layers. If we do not prune the 2nd Conv layer in a residual block, it means half of the Conv layers are not pruned. Namely, at best , we can only achieve 2 × theoretical acceleration (measured by FLOPs reduction).", "parag_2": "Owing to the pruned index constraint problem, many filter pruning methods in image classification simply do not prune the last Conv layer in residual blocks (Li et al., 2017; Wang et al., 2021), which can still deliver considerable speedup of practical interest. Nevertheless, this doing-nothing solution can barely translate to the image SR networks if we target a considerable speedup. The root cause is that image SR networks typically employ many more residual blocks, and each block usually has only two Conv layers. In some top-performing SR networks (e.g., RCAN), there are even nested residual blocks. Taking EDSR for a concrete example, it has 32 residual blocks and each block has two Conv layers. If the 2nd Conv layer in a residual block is spared from pruning, half of the Conv layers will not be pruned. 
In other words, at best , we can only achieve 2 × theoretical acceleration measured by FLOPs reduction.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "vrqwgu1o8-.9igvll21Xa.01", "parag_1": "Fig. 2 demonstrates that the performance of the parametric Copula-GP model critically depends on the match between the true probability density and the best mixture of parametric copula elements. When the parametric distribution matches the true distribution (Fig. 2A), our Copula-GP framework predictably outperforms all non-parametric methods. Nonetheless, even when the exact reconstruction of the density is not possible (Figs. 2B-C), the mixtures of the copula models are still able to model the changes in tail dependencies, at least qualitatively. As a result, our method performs similarly to the neural-network based method (MINE) and still outperforms KSG-like methods.", "parag_2": "Fig. 2 demonstrates that the performance of the parametric Copula-GP model critically depends on the match between the true probability density and the best mixture of parametric copula elements. When the parametric distribution matches the true distribution (e.g. Fig. 2A or Fig. A5), our Copula-GP framework provides unbiased estimates and predictably outperforms all non-parametric methods. Nonetheless, even when the exact reconstruction of the density is not possible (Figs. 2BC), the mixtures of the copula models are still able to model the changes in tail dependencies, at least qualitatively. 
In those challenging examples, our method performs similarly to the neural-network based method (MINE) and still outperforms KSG-like methods.", "annot_1": {"annotation": ["Rewriting_light", "Content_addition"], "instruction": "Edit this paragraph for clarity and add a mention of Figure 5.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1fwAltvB.BkzbibmoH.02", "parag_1": "Examples Below are a few query words (Q) and their closest neighbours (N). Note the high structural similarity of the entire sentence, as well as the function of the word within it (Q1: last word of subject NP in a middle clause, Q2: possessed noun in sentence initial subject NP, Q3: head of relative clause of a direct object):", "parag_2": "Examples Below are a few query words (Q) and their closest neighbours before (N) and after (NT) the transformation. Note the high structural similarity of the entire sentence, as well as the function of the word within it (Q1: last word of subject NP in a middle clause, Q2: possessed noun in sentence initial subject NP, Q3: head of relative clause of a direct object):", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.17", "parag_1": "In experiment 1, we showed that the notch increases the pointing movement time in specific situations. Furthermore, when the notch is between the targets, participants moved the cursor with two main strategies: (i) to move the cursor along the edge (along-strategy) and (ii) to avoid the notch (avoid-strategy). 
In experiment 2, we investigated which of the above strategies is preferable in the current specification that allows the cursor to enter the notch.", "parag_2": "In Experiment 1, we showed that the notch increases the pointing movement time under specific scenarios. Further, we found that participants moved the cursor based on two main strategies when the notch is placed between the start area and the target: (i) to move the cursor along the edge (along-strategy) and (ii) to avoid the notch (avoid-strategy). In Experiment 2, we investigated which of the above strategies is preferable in the current specification that allows the cursor to enter the notch.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the second sentence. Replace some words for the better", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Make this paragraph more logical and precise.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.10", "parag_1": "We develop on the open-sourced simulator RecSim simulator ((Ie et al., 2019a)) which allows an agent to interact with simulated users by recommending a list of items. We have a base action set of 250 train items and 250 test items, and in each episode 20 items are given to the agent. It must recommend a list of size 6. We take user preference vector as the state (MDP) and ground truth action representations from the environment. We incorporate CPR by boosting up the probability of user clicking any item proportional to the CPR. This simulates one of the scenarios where user preference model is influenced by the entire list. An optimal agent should identify the most common category in the available action set and try to recommend most items in its list from category. This requires understanding the relation with the other candidate items from the same category.
We train CDQN-based models to maximize the number of clicks in a user session.", "parag_2": "We use RecSim (Ie et al., 2019a) to simulate user interactions and extend it to the listwise recommendation task. We have a base action set of 250 train and 250 test items, and 20 items are sampled as actions for the agent in each episode. The agent recommends a list-action of size six at each step. We assume a fully observable environment with the state as the user preference vector and the action representations as item characteristics. The objective implicitly incorporates CPR by boosting the probability of a user clicking any item proportional to the list CPR. The implicit CPR objective exemplifies realistic scenarios where the entire list influences user response. One way to optimize CPR is to identify the most common category in the available action set and recommend most items from that category. Such counting of categories requires relational reasoning over all items available in the action set. We train CDQN-based models to maximize the number of clicks in a user session.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the sentences, making them shorter and more connected.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the sentences of this paragraph for better readability and fitting to the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.12", "parag_1": "We evaluate AGILE against baselines from prior work in varying action space, that either assume a fixed action set or work without the knowledge of the other available actions. For ablations, we keep the action set summarizer, but replace the relational action features with raw action representations to compute action utility. The architectures of all methods are visually compared in Appendix C.1.
All methods share the same training framework (PPO or CDQN) depending on the environment.", "parag_2": "We evaluate AGILE against baselines from prior work in varying action space, which either assume a fixed action set or act independently of the other actions. We ablate the importance of relational action features by replacing them with the original action representations and compute the action set summary in different ways. The Appendix details the comparison (C.1) and visualization (Figure 17) of all baselines and ablations, hyperparameter-tuning (D.2, D.3) and network designs (C.3).", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the last 2 sentences favoring "we" forms and explaining the paper structure more concise.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "4cxEpKddZp.QPmWFhqQU6.00", "parag_1": "Recently, Ji et al. (2019) presented Action Genome, a new video dataset annotated by SGs. This dataset includes spatio-temporal SG annotations, where for each video, few individual frames were chosen and spatially annotated by SGs. Here, we use the Something-Something V2 (Goyal et al., 2017) dataset that is larger (200K vs. 10K videos) and more diverse since it includes basic human activities created by a large number of crowd workers. Finally, we propose the Action Graph representation, which we view as a temporal extension of SGs, and argue it is more natural for representing videos of actions.
10K videos) and more diverse since it includes basic human activities created by a large number of crowd workers. Finally, we propose the Action Graph representation, which we view as a temporal extension of SGs.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove the argument that Action Graph is a more natural representation.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Remove the less important details in the last sentence.", "annotator": "annotator_07"}} {"id_paragraph": "usz0l2mwO.5ie3V0GP-.04", "parag_1": "We propose to apply a VIB module to reduce over-fitting when fine-tuning large-scale pre-trained language models on low-resource datasets. VIB finds the simplest sentence embedding, predictive of the target labels, by removing task-irrelevant and task-redundant information. Our approach is model agnostic, simple to implement, and highly effective. Extensive experiments show that our method substantially improves transfer performance in low-resource scenarios, including a 2.97 point gain on STS-B and a 2.03 point gain on MRPC. Furthermore, we demonstrate that our model results in a better generalization to out-of-domain NLI datasets. Future work includes exploring incorporating VIB on multiple layers of pre-trained language models and using it to jointly learn relevant features and relevant layers.", "parag_2": "We propose VIBERT, an effective model to reduce over-fitting when fine-tuning large-scale pre-trained language models on low-resource datasets. By leveraging a VIB objective, VIBERT finds the simplest sentence embedding, predictive of the target labels, while removing task-irrelevant and redundant information. Our approach is model agnostic, simple to implement, and highly effective. Extensive experiments and analyses show that our method substantially improves transfer performance in low-resource scenarios. 
We demonstrate our obtained sentence embeddings are robust to biases and our model results in a substantially better generalization to out-of-domain NLI datasets. Future work includes exploring incorporating VIB on multiple layers of pretrained language models and using it to jointly learn relevant features and relevant layers.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.18", "parag_1": "Color should be made less dominant and should not be used as the primary identifier for medication entries. While solid dull color was effective in indicating busy slots, using color fill for medication entries was cluttering the designs. The amount of medication identification information which is visually present in the calendar entry should also be minimized. Medication labels, including name and dosage, tend to overflow the containing entry. Labels were a source of confusion as to which entry they referred to when multiple entries occupied the same cell. They should be abstracted from the overview and instead be made available as details on demand.", "parag_2": "Color should be made less dominant and should not be used as the primary identifier for medication entries. While solid fill color was effective in indicating busy slots, using color fill for medication entries was cluttering the designs. The amount of medication information shown (e.g., labels, including name and dosage) should also be minimized. Labels were a source of confusion as to which entry they referred to when multiple entries occupied the same cell.
They should be abstracted from the overview and instead be made available as details on demand.", "annot_1": {"annotation": ["Concision", "Content_deletion"], "instruction": "Remove unnecessary details and explanations.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "I want to restate my third sentence.", "annotator": "annotator_09"}} {"id_paragraph": "VyLwGx42v.ZwDBZyU4X.00", "parag_1": "First, we compare the proposed sampling method to several variants of the VAE such as the Wasserstein Autoencoder (Tolstikhin et al., 2018), Regularized Autoencoders (Ghosh et al., 2020), a vampprior VAE (Tomczak & Welling, 2018), a geometry-aware VAE (Chadebec et al., 2020) and a simple", "parag_2": "First, we compare the proposed sampling method to several variants of the VAE such as the Wasserstein Autoencoder (WAE) (Tolstikhin et al., 2018), Regularized Autoencoders (Ghosh et al., 2020) with either L2 decoder’s parameters regularization (RAE-L2), gradient penalty (RAE-GP), spectral normalization (RAE-SN) or simple L2 latent code regularization (RAE), a vamp-prior VAE (VAMP) (Tomczak & Welling, 2018), a geometry-aware VAE (RHVAE) (Chadebec et al., 2020) and a simple", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "jyac3IgQ44.f4au9jfat5.01", "parag_1": "Voxel transformer for 3D detection. VoTr [23] introduces a voxel-based transformer backbone that performs self-attention on sparse voxels with the local and the dilated attention mechanisms. Our work improves VoTr by introducing the window-based attention and optimizing the sparse operation. The recent SST [7] follows a single-stride design and the swin-transformer architecture, which performs well on small objects. Nevertheless, SST is implemented based on pillars.
The single window size is not conducive to capturing multi-scale features, resulting in unsatisfactory performance on Vehicle when simultaneously detecting multiple categories. In comparison, our MsSVT can capture mixed-scale information to boost detection of objects of various scales.", "parag_2": "Voxel transformer backbone in 3D detection. VoTr [20] introduces two kinds of sparse voxel attention, including local attention and extended attention. Each voxel serves as a query and attends with the neighbor voxels. We optimize the sparse operation and introduce window attention, which significantly improves efficiency and performance. The recent SST [6] adopts a single-stride design and swin-transformer structure, which performs well on small objects. However, SST is implemented based on a pillar. The single window size is not conducive to capturing multi-scale features, resulting in the low performance of vehicles when multiple categories are detected simultaneously. In contrast, our MsSVT can capture fine and global features and better performance on large and small objects.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_heavy", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.10", "parag_1": "Calendar entries are shown like in standard calendars: the height of rectangular entries indicates their duration, their color hue indicates their type, or category, as set by the user, and their name is conveyed with a label. In this design, we also represent medication entries with rectangles (or bars), whose vertical position and height indicates start and end of the allowed administration period for the medication. Medication entries have an embossed horizontal marker placed at some point along the bar to indicate the preferred administration period (at which point the reminder would trigger if programmed).
Preferred administration time of a medication entry is shown with higher opacity and allowed administrative time with lower opacity. Color hue encodes the type of medication.", "parag_2": "The height of rectangular entries indicates their duration, color is used to differentiate types or categories of entries (as set by the user), and their names are conveyed with textual labels. In this design, we also represent medication entries with rectangles (or bars), whose vertical position and height indicates start and end of the allowed administration period for the medication. Medication entries have an embossed horizontal marker placed at some point along the bar to indicate the planned administration time (at which point the reminder would trigger if programmed). Preferred administration time of a medication entry is shown with higher opacity and allowed administrative time with lower opacity. Color hue encodes the type of medication.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make the first sentence more concise and direct.", "annotator": "annotator_07"}} {"id_paragraph": "usz0l2mwO.5ie3V0GP-.00", "parag_1": "Large-scale pre-trained language models act as general-purpose feature extractors, but not all the features are relevant for a given target task. This can cause problems in low-resource scenarios, where fine-tuning such large-scale models often over-fits on the small training set. We propose to use the information bottleneck principle to improve generalization in this scenario. We apply the variational information bottleneck method to remove task-irrelevant and redundant features from sentence embeddings during the fine-tuning of BERT.
Evaluation on seven low-resource datasets for different tasks shows that our method significantly improves transfer learning in low-resource scenarios and obtains better generalization on 11 out of out-of-domain textual entailment datasets.", "parag_2": "While large-scale pretrained language models have obtained impressive results when fine-tuned on a wide variety of tasks, they still often suffer from overfitting in low-resource scenarios. Since such models are general-purpose feature extractors, many of these features are inevitably irrelevant for a given target task. We propose to use Variational Information Bottleneck (VIB) to suppress irrelevant features when fine-tuning on low-resource target tasks, and show that our method successfully reduces overfitting. Moreover, we show that our VIB model finds sentence representations that are more robust to biases in natural language inference datasets, and thereby obtains better generalization to out-of-domain datasets. Evaluation on seven low-resource datasets in different tasks shows that our method significantly improves transfer learning in low-resource scenarios, surpassing prior work. Moreover, it improves generalization on 13 out of 15 out-of-domain natural language inference benchmarks. Our code is publicly available in https://github.com/rabeehk/vibert .", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1qImCcFQ.Ske132uA7.03", "parag_1": "In this paper, we propose tree-structured recurrent switching linear dynamical systems (TrSLDS) which is an extension of Linderman et al. (2017) rSLDS. The tree-structured stick breaking removes the dependence on the permutation of the discrete latent states. The tree-structured stick breaking paradigm naturally lends itself to imposing a tree-structured hierarchical prior on the dynamics.
The structure of the prior allows for a multi-scale view of the system; one can query at different levels of the trees to see different scales of the resolution. We also developed a fully Bayesian approach to learning the parameters of the model. The analysis of the Graf data suggests that the method can also be used to analyze neural data.", "parag_2": "In this paper we propose tree-structured recurrent switching linear dynamical systems (TrSLDS) which is an extension of rSLDS (Linderman et al., 2017). The system relies on the use of tree-structured stick-breaking to partition the space. The tree-structured stick-breaking paradigm naturally lends itself to imposing a hierarchical prior on the dynamics that respects the tree structure. This tree-structured prior allows for a multi-scale view of the system where one can query at different levels of the tree to see different scales of the resolution. We also developed a fully Bayesian sampler, which leverages the Pólya-Gamma augmentation, to learn the parameters of the model and infer latent states. The two synthetic experiments show that TrSLDS can recover a multi-scale view of the system, where the resolution of the system increase as we delve deeper into the tree. The analysis on the real neural data verifies that TrSLDS can find a multi-scale structure. A PROOF OF THEOREM Proof.", "annot_1": {"annotation": ["Content_addition", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.08", "parag_1": "We use the CREATE environment (Jain et al., 2020) as a challenging environment with a large base action space and unseen actions for evaluation. The task is to sequentially select tools from a given toolbox and place them appropriately on the screen with the objective to push the red ball towards the green goal. We deal with the hybrid action space with an auxiliary network that takes the state and action set summary as input, following Jain et al.
We simulate the need of action relation learning by associating each tool (e.g. cannon) with an activator tool (e.g. fire). A tool functions only if in contact with the right activator, otherwise the objects pass through it. Thus the tool choice must now consider the current environment state as well as whether its activators are available. We borrow the action representations from the prior work and add extra dimensions to accommodate one-hot vectors for the activator tools.", "parag_2": "The CREATE environment (Jain et al., 2020) is a challenging physical reasoning benchmark with a large variety of tools as actions and also supports evaluation with unseen actions. The objective is to sequentially place tools to help push the red ball towards the green goal. An action is a hybrid of a discrete tool-selection from varying toolsets and a continuous ( x, y ) coordinate of tool-placement on the screen. An auxiliary policy network decides the tool-placement based on the effective state s′ as input, following Jain et al. To emphasize action relations, we augment the environment with special activator tools (e.g., fire) that general tools (e.g., cannon) need in contact for being functional. Thus, a general tool can be useful only if its activator is also available. Action representations of general tools encode their physical behavior (Jain et al., 2020), while those of activator tools are one-hot vectors. We train AGILE and the auxiliary policy jointly with PPO.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the paragraph using more appropriate language.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_heavy", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "BK860eucgq.L6k9BqiVs.00", "parag_1": "To apply the concept of regular cell complexes to graphs, Bodnar et al.
(2021a) define the concept of a cellular lifting map , a function f that transforms a graph to a regular cell complex such that two graphs G 1 , G 2 are isomorphic if and only if f ( G 1 ) , f ( G 2 ) are isomorphic. They prove that a class of lifting maps called skeleton preserving lifting maps together with CWL are at least as expressive as WL. Typically, such lifting maps create cells out of vertices, together with cells that encode other structures such as induced cycles or cliques. They define CW Networks which combine neural networks with cellular message passing, similar to graph neural networks with message passing.", "parag_2": "To apply the concept of regular cell complexes to graphs, Bodnar et al. (2021a) define the concept of a cellular lifting map , a function f that transforms a graph to a regular cell complex such that two graphs G 1 , G 2 are isomorphic if and only if f ( G 1 ) , f ( G 2 ) are isomorphic. They prove that a class of lifting maps called skeleton preserving lifting maps together with CWL are at least as expressive as WL. Typically, such lifting maps create cells out of vertices, together with cells that encode other structures such as induced cycles or cliques. Figure 1 shows an example of this, the original graph (left) is turned into a cell complex (right) where the vertices are 0-dimensional cells, edges are 1-dimensional cells, and cycles are 2-dimensional cells (blue). Bodnar et al. (2021a) define CW Networks which combine neural networks with cellular message passing, similar to graph neural networks with message passing.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "zZ1nBKIPj.D522Xnc6Xm.00", "parag_1": "For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] (b) Did you describe the limitations of your work? [N/A]", "parag_2": "For all authors... 
(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] (b) Did you describe the limitations of your work? [N/A] (c) Did you discuss any potential negative societal impacts of your work? [N/A]", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "82kcycVoyM.V4CEs3gijY.00", "parag_1": "HDNO. Specifically, a dialogue act in HDNO is seen as an option whereas each generated word from NLG becomes a primitive action. Accordingly, dialogue policy and NLG become the policy over option (i.e. high-level policy) and the intra-option policy (i.e. low-level policy) respectively. Distinguished from a conventional modular system, we additionally give a context to NLG to satisfy the option framework. Moreover, since the primitive action space (i.e. a vocabulary) comprises a termination symbol, NLG can take over the responsibility of termination. For this reason, termination policy is absorbed in the intra-option policy. HDNO is formally defined in Definition 1.", "parag_2": "HDNO. Specifically, a dialogue act in HDNO is seen as an option whereas each generated word from NLG is a primitive action. Accordingly, dialogue policy and NLG become the policy over option (i.e. high-level policy) and the intra-option policy (i.e. low-level policy) respectively. Distinguished from a conventional modular system, we additionally give a context to NLG to satisfy the conditions of the option framework. Moreover, since the primitive action space (i.e. a vocabulary) comprises a termination symbol, NLG can take over the responsibility of termination. For this reason, termination policy in the original option framework is absorbed into the intra-option policy. 
The formal definition of HDNO is shown in Definition 1.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rewrite some formulations to describe HDNO more as a state than a progress.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Make the last sentence more direct. Give slightly more context for better readability.", "annotator": "annotator_07"}} {"id_paragraph": "m_ZeyuR-w-.BLQRK37Yb5.00", "parag_1": "We also remark recent work on domain shift while controlling from pixels Hansen & Wang (2021); Kostrikov et al. In these papers the authors assume that some pixels may change from an episode to an episode (e.g., red color becomes blue), but the underlying dynamics stay the same. Besides the episodic nature of the context change, this setting differs from ours on two fronts. First, controlling from pictures constitutes a POMDP problem, where the true states (positions, velocities, accelerations) are observed through a proxy (through pixels) and hence need to be inferred. Second, the underlying dynamics stay the same (Hansen & Wang, 2021; Kostrikov et al., 2020) and only observations change. Our case is the opposite: the underlying a dynamics change, but the observation function stays the same. In future work we aim to extend our methods to control from pixels. Continual (lifelong) RL mainly adopts a context incremental setting, where the agent is exposed to a sequence of contexts (Delange et al., 2021). While the agent’s goal is still to adapt efficiently to the unseen contexts, the emphasis is on overcoming catastrophic forgetting, i.e., maintaining a good performance on old contexts while improving performance on the current one (Rolnick et al., 2019; Riemer et al., 2019).", "parag_2": "We also remark recent work on domain shift while controlling from pixels Hansen & Wang (2021); Kostrikov et al. 
In these papers the authors assume that some pixels may change from an episode to an episode (e.g., red color becomes blue), but the underlying dynamics stay the same. Besides the episodic nature of the context change, this setting differs from ours on two fronts. First, controlling from pictures constitutes a POMDP problem, where the true states (positions, velocities, accelerations) are observed through a proxy (through pixels) and hence need to be inferred. Second, the underlying dynamics stay the same (Hansen & Wang, 2021; Kostrikov et al., 2020) and only observations change. Our case is the opposite: the underlying dynamics change, but the observation a function stays the same. In future work we aim to extend our methods to control from pixels. We also note the work by Ball et al. (2021), who proposed to learn a contextual policy by augmenting an offline world model. In the online setting the algorithm had to learn only the current context thus improving efficiency of the learning procedure while delivering good performance. Continual (lifelong) RL mainly adopts a context incremental setting, where the agent is exposed to a sequence of contexts (Delange et al., 2021). While the agent’s goal is still to adapt efficiently to the unseen contexts, the emphasis is on overcoming catastrophic forgetting, i.e., maintaining a good performance on old contexts while improving performance on the current one (Rolnick et al., 2019; Riemer et al., 2019).", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "CswFOyPyhT.FUeqrAFby.01", "parag_1": "We presented Dyn-GFN, a method for Bayesian causal discovery from dynamics. In low dimensions we found that Dyn-GFN is able to better model the distribution over possible explanatory structures than baseline methods. 
As a proof of concept, we presented an example of learning the distributionover likely explanatory graphs from single-cell transcriptomic data where there are many possiblegraphs showing Dyn-GFN can better model the uncertainty over possible explanations of this datarather than capturing a single one.", "parag_2": "We presented Dyn-GFN, a method for Bayesian causal discovery from dynamics. In low dimensions we found that Dyn-GFN is able to better model the distribution over possible explanatory structures than baseline methods. As a proof of concept, we presented an example of learning the distribution over likely explanatory graphs from single-cell transcriptomic data where there are many possible graphs showing Dyn-GFN can better model the uncertainty over possible explanations of this data rather than capturing a single explanation. Limitations Although we have demonstrated a degree of efficacy when using Dyn-GFN for Bayesian causal discovery with observational data, a key limitation of Dyn-GFN is scaling to larger systems. To effectively model P ( G, θ, D ) , Dyn-GFN needs to search over an environment state space of possible graphs.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.19", "parag_1": "Representation-based methods, e . g ., BNN (Johansson et al., 2016) and CFR (Uri et al., 2017), balance the distributions in the latent space. Liuyi et al. and Hassanpour & Greiner (2020) further augment CFR with local similarity and non-confounding factors, respectively. Kallus (2020) and Yoon et al. (2018) propose to balance the distributions of representations with adversarial training. 
Representation learning has been the primary approach to mitigate the treatment selection bias, owing to its avoidance of the high variance issue and the suitability for large-scale scenarios.", "parag_2": "Begining with BNN (Johansson et al., 2016) and CFR (Shalit et al., 2017), representation-based methods minimize the group discrepancy in the latent space. Liuyi et al. and Hassanpour & Greiner (2020) further augment CFR with local similarity and non-confounding factors, respectively. Kallus (2020) and Yoon et al. (2018) propose to balance the distributions of representations with adversarial training. Due to its scalability and avoidance of the high variance issue, representationbased methods have been predominant for handling the treatment selection bias.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make the benefit clearer.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Reorder the elements in sentences to improve the readability.", "annotator": "annotator_07"}} {"id_paragraph": "ssjKKm0b5y.3wi5X8wrM_.00", "parag_1": "Mahapatra & Rajan (2020) presented Exact Pareto Optimal (EPO), an MOO optimization method that can converge to a desired ray in loss space. Given a preference ray r , EPO search for an exact Pareto optimal solution, i.e. a solution that is (i) Pareto optimal, and; (ii) Lies on the intersect of the Pareto front and the preference vector r . The EPO method balances two goals: Finding a descent direction and getting to the desired ray. EPO searches for a point in the convex hull of the gradients, known by Désidéri (2012) to include descent directions, that has maximal angle with a vector d bal which pulls the point to the desired ray. 
EPO combines gradient descent and controlled ascent enabling it to reach an exact Pareto optimal solution if one exists, or the closest Pareto optimal solution.", "parag_2": "Convergence to the desired ray in loss space can be achieved using Exact Pareto Optimal (EPO) (Mahapatra & Rajan, 2020). To find the intersection of the Pareto front with a given preference ray r , EPO balances two goals: Finding a descent direction towards the Pareto front and approaching the desired ray. EPO searches for a point in the convex hull of the gradients, known by Désidéri (2012) to include descent directions, that has a maximal angle with a vector d bal which pulls the point to the desired ray. EPO combines gradient descent and controlled ascent enabling it to reach an exact Pareto optimal solution if one exists, or the closest Pareto optimal solution.", "annot_1": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": "Exclude unnecessary ideas.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite the first half of this paragraph to make it clearer and easier to read.", "annotator": "annotator_02"}} {"id_paragraph": "hAi0PMz9T7.Ut8ESfYp1.02", "parag_1": "Branch Decider. Since the network context is not known during deployment, this creates the needfor a branch decider module. The branch decider reuses clusters labels from the training stage for a K Nearest Neighbours [45] classification. The light-weight distance based metric is used to classify the inference-time observation into one of the training groupings, and thereby executing the corresponding branch’s symbolic policy. Figure 3 illustrates our complete training and deployment techniques.", "parag_2": "Branch Decider: Since the network context is not known during deployment, one needs a branch decider module. The branch decider reuses cluster labels from the training stage for a K Nearest Neighbors [51] classification. 
The light-weight distance-based metric is used to classify the inference-time observation into one of the training groupings and thereby executing the corresponding branch’s symbolic policy. Figure 3 illustrates our complete training and deployment techniques.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rewrite some formulations, preferring shorter ones and fix typos.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the english of this text.", "annotator": "annotator_07"}} {"id_paragraph": "0u5XXGVu0.asnOE1HIe.00", "parag_1": "Metric function \" #$ - \" #$ - ! ! in a fine-grained classification setting). As a result, understanding and addressing the domain shift problem for few-shot classification is of great interest.", "parag_2": "Metric function \" #$ - \" #$ - ! ! the difficulty to construct large training datasets for rare classes (e.g., , recognizing rare bird species in a fine-grained classification setting). As a result, understanding and addressing the domain shift problem for few-shot classification is of great interest.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.15", "parag_1": "• Colored-and-gray-MNIST: we train multi-modal DNNs using SGD with the momentum coefficient of 0.9 and a batch size of 128. We sample uniformly 20 learning rates at random from the interval [10 − 5 , 1] on a logarithmic scale. We train each model four times while varying random seeds. In total, we train 80 models. ModelNet40: we use SGD without momentum and use minibatches of size 8. Werandomly select nine learning rates from 10 − 3 to 1 . We train each model three times and end up with 27 models. • NVGesture: we use a batch size of four, SGD with momentum of 0.9, and uniformly samplelearning rates at random from the interval [10 − 4 , 10 − 1 . 5 ] on a logarithmic scale. 
We train each model three times, resulting in 60 models in total.", "parag_2": "• Colored-and-gray-MNIST: we train multi-modal DNNs using SGD with the momentum coefficient of 0.9 and a batch size of 128. We sample 20 learning rates at random from the interval [10 − 5 , 1] on a logarithmic scale. We train the model four times using each of the learning rate and random initialization of the parameters. In total, we train 80 models. ModelNet40: we use SGD without momentum and use minibatches of eight examples. We select nine learning rates from 10 − 3 to 1 and train model using each learning rate for three times. This ends up with 27 models. • NVGesture: we use a batch size of four, SGD with momentum of 0.9, and uniformly samplelearning rates from the interval [10 − 4 , 10 − 1 . 5 ] on a logarithmic scale. We train the model three times using each learning rate, resulting in 60 models in total.", "annot_1": {"annotation": ["Content_substitution"], "instruction": "Rewrite. Try to fix it when there are some missing words.", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "W7OJo8Ahe.Ypr2DOlEPQ.00", "parag_1": "Until convergence 25: return θ 26: end function then we have successfully found a non-saturating CBF.", "parag_2": "Until convergence 25: return θ 26: end function It is a measure of the best-case saturation at a given state x . When L ( θ, x ) ≤ 0 , then no saturation occurs at x ; when L ( θ, x ) > 0 , it measures how severe the saturation is. Thus, our min-max problem (Eqn. 4) is to minimize the worst best-case saturation over the boundary.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.21", "parag_1": "We did not use Position and start from experiment 1 in experiment 2, and always placed the notch between the start area and the target. 
Instead, we added the condition of cursor movement strategy ( Strategy ). Strategy = along means moving the cursor along the top edge of the screen (Figure 7 (i)). Strategy = avoid means moving the cursor by avoiding the notch (Figure 7 (ii)). The target was a rectangle, with a height of 6 mm and a width ( W ) of 6 and 23 mm. The interval ( I ) between the notch and target was 0, 12, ∞ mm. To avoid increasing the workload on the participants, we chose the characteristic conditions from experiment 1 for W and I , and used almost equivalent values. The notch size and A were the same as in experiment 1.", "parag_2": "We did not use Position and Start from Experiment 1 in Experiment 2, and we always placed the notch between the start area and the target. Instead, we added the condition of cursor movement strategy ( Strategy ). Strategy = along means moving the cursor along the top edge of the screen (Figure 7 (i)). Strategy = avoid means moving the cursor by avoiding the notch (Figure 7 (ii)). The target was a rectangle with a height of 6 mm (22 pixels) and a width ( W ) of 6 and 23 mm (22 and 84 pixels). The interval ( I ) between the notch and target was 0, 12, ∞ mm (0, 44, ∞ pixels). We chose the characteristic conditions from Experiment 1 for W and I to avoid increasing the workload on the participants, and we used almost equivalent values. The notch size and A were the same as those in Experiment 1.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.16", "parag_1": "First, many models have high | d util | , especially when trained to use two distinct modalities. This confirms that the standard multi-modal learning process encourages the model to rely on one modality and ignore the other one, which is consistent with our hypothesis. 
We make this observation across all tasks, confirming that the conventional multi-modal learning process is greedy regardless of network architectures and tasks.", "parag_2": "First, many models have high | d util | . This confirms that the multi-modal learning process encourages the model to rely on one modality and ignore the other one, which is consistent with our hypothesis. We make this observation across all tasks, confirming that the conventional multi-modal learning process is greedy regardless of network architectures and tasks.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove the second part of the first sentence", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision"], "instruction": "Exclude unnecessary details.", "annotator": "annotator_08"}} {"id_paragraph": "uTur5gpEC.TOWMu718N.00", "parag_1": "To observe the consequence of having a discriminator trained on empirical distributions, we assume that the reward is represented with a optimal discriminator such that r ( s, a ) = log E ( s,a ) D ( s,a ) . As a result, objective (9) becomes:", "parag_2": "To observe the consequence of having a discriminator trained on empirical distributions, we assume that the reward is represented with an optimal discriminator such that r ( s, a ) = log E ( s,a ) D ( s,a ) . As a result of applying this to objective (10), it becomes:", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Be more specific when talking about the result.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve this text to fit a more academic style.", "annotator": "annotator_07"}} {"id_paragraph": "Sx6SnclSL.nQLOUHvx8n.00", "parag_1": "ConstantResolution To tackle this challenge, we propose M ulti-scale M asked autoencoders for learning the hierarchicalrepresentations of point clouds via self-supervised pre-training, termed as Point-M2AE. 
We representa point cloud as a set of point tokens depicting different spatial local regions, and inherit MAE’spipeline to first encode visible point tokens and then reconstruct the masked 3D coordinates. Differentfrom 2D images, masked autoencoding for 3D point clouds has three characteristics to be considered. Firstly, it is critical to understand the relations between local parts and the overall 3D shapes, whichhave strong geometric and semantic dependence. As examples, the network can recognize an airplanestarting from its wing, or segment the wing’s part from the airplane’s global feature. Therefore, weregard the standard transformer with the plain encoder and decoder is sub-optimal for capturingsuch local-global spatial relations in 3D, which directly downsamples the input into a low-resolutionrepresentation as shown in Figure 1 (Top). We modify both the encoder and decoder into multistage hierarchies for progressively encoding multi-scale features of point clouds, constructing anasymmetric U-Net [34] like architecture in Figure 1 (Bottom). In detail, the shallower stages of theencoder contain a larger number of point tokens to focus on local patterns, while the deeper stagesmerge spatially adjacent tokens to acquire global understanding. Secondly, as Point-M2AE encodesmulti-scale point clouds unlike the single-scale 2D images, the unmasked visible regions are requiredto be block-wise within one scale and consistent across scales, which are respectively for reservingmore complete local geometries and ensuring coherent feature learning for the network. For this, weintroduce a multi-scale masking strategy, which generates random masks at the final scale with ahigh ratio (e.g., 80%), and back-projects the unmasked positions to all preceding scales. 
Thirdly, tobetter capture the fine-grained 3D geometries, we adopt a local spatial self-attention mechanism withincreasing attention scopes for point tokens at different stages in the encoder, which refocus eachtoken within neighboring detailed structures.Also, we utilize skip connections to complement thedecoder with fine-grained information from the corresponding stages of the encoder.", "parag_2": "ConstantResolution top. Despite its superiority on grid-based 2D images, we ask the question: can MAE-style masked autoencoding be adapted to irregular point clouds as a powerful 3D representation learner? To tackle this challenge, we propose M ulti-scale M asked autoencoders for learning the hierarchical representations of point clouds via self-supervised pre-training, termed as Point-M2AE. We represent a point cloud as a set of point tokens depicting different spatial local regions, and inherit MAE’s pipeline to first encode visible point tokens and then reconstruct the masked 3D coordinates. Different from 2D images, masked autoencoding for 3D point clouds has three characteristics to be specially considered. Firstly, it is critical to understand the relations between local parts and the overall 3D shapes, which have strong geometric and semantic dependence. As examples, the network can recognize an airplane starting from its wing, or segment the wing’s part from the airplane’s global feature. Therefore, we regard the standard transformer with the plain encoder and decoder is sub-optimal for capturing such local-global spatial relations in 3D, which directly downsamples the input into a low-resolution representation as shown in Figure 1 (Top). We modify both the encoder and decoder into multi-stage hierarchies for progressively encoding multi-scale features of point clouds, constructing an asymmetric U-Net [41] like architecture in Figure 1 (Bottom). 
Secondly, as our Point-M2AE encodes multi-scale point clouds unlike the single-scale 2D images, the unmasked visible regions are required to be both block-wise within one scale and consistent across scales, which are respectively for reserving complete local geometries and ensuring coherent feature learning for the network. For this, we introduce a multi-scale masking strategy, which generates random masks at the final scale with a high ratio (e.g., 80%), and back-projects the unmasked positions to all preceding scales. Thirdly, to better reconstruct 3D geometries from a local-to-global perspective, we utilize skip connections to complement the decoder with fine-grained information from the corresponding stages of the encoder. During fine-tuning on downstream tasks, we also adopt a local spatial self-attention mechanism with increasing attention scopes for point tokens at different stages of the encoder, which refocus each token within neighboring detailed structures.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1fwAltvB.BkzbibmoH.01", "parag_1": "L triplet → 0 as dist ( V A ,V P ) dist ( V A ,V N ) → 0 , as expected. The triplet objective is optimized end-to-end using the Adam optimizer (Kingma & Ba, 2015), with mini-batches of size 500. We train forepochs with a mini-batch of size [-\n A large enough mini-batch is necessary to find challenging negative examples.-] , and take the last model as the final syntactic extractor. During training, the gradient backpropagates through the pair vectors to the parameters f of the Siamese model, to get representations of individual words that are similar for corresponding words in equivalent sentences. ", "parag_2": "L triplet → 0 as dist ( V A ,V P ) dist ( V A ,V N ) → 0 , as expected. The triplet objective is optimized end-to-end using the Adam optimizer (Kingma & Ba, 2015). 
We train for 5 epochs with a mini-batch of size [-\n A large enough mini-batch is necessary to find challenging negative examples.-] 500 6 , and take the last model as the final syntactic extractor. During training, the gradient backpropagates through the pair vectors to the parameters f of the Siamese model, to get representations of individual words that are similar for corresponding words in equivalent sentences. We note that we do not back-propagate the gradient to the contextualized vectors: we keep them intact, and only adjust the learned transformation.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "g5N2H6sr7.6J3ec8Dl3p.00", "parag_1": "With the proposed spectral-wavelet GDN, we further propose a graph autoencoder (GAE) framework that resembles the symmetric fashion of architectures (Noh et al., 2015). We then evaluate the effectiveness of the proposed GAE framework with two popular and important tasks: unsupervised graph-level representation (Sun et al., 2020) and social recommendation (Jamali & Ester, 2010). In the first task, the proposed GAE outperforms the state-of-the-arts on graph classification in an unsupervised fashion, along with a significant improvement on running time. In the second task, the performance of our proposed GAE is on par with the state-of-the-arts on the recommendation accuracy; at the meantime, the proposed GAE demonstrates strong robustness against rating noises and achieves the best recommendation diversification (Ziegler et al., 2005). ", "parag_2": "With the proposed spectral-wavelet GDN, we further propose a graph autoencoder (GAE) framework that resembles the symmetric fashion of architectures (Noh et al., 2015). 
We then evaluate the effectiveness of the proposed GAE framework with three popular and important tasks: unsupervised graph-level representation (Sun et al., 2020), social recommendation (Jamali & Ester, 2010) and graph generation. In the first task, the proposed GAE outperforms the state-of-the-arts on graph classification in an unsupervised fashion, along with a significant improvement on running time. In the second task, the performance of our proposed GAE is on par with the state-of-the-arts on the recommendation accuracy; at the meantime, the proposed GAE demonstrates strong robustness against rating noises and achieves the best recommendation diversification (Ziegler et al., 2005). In the third task, our proposed GDN can enhance the generation performance of popular variational autoencoder frameworks including VGAE (Kipf & Welling, 2016) and Graphite (Grover et al., 2019).", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "rJRSWbYPS.SyEoostiB.01", "parag_1": "Untargeted Attack Untargeted attack aims to generate adversarial examples that would be misclassified by the attacked model into any category different from the ground truth one. The overall results are shown in Table 1. Our method is competitive with previous attack methods in terms of adversarial perturbation and success rate, but our query number is reduced.", "parag_2": "Untargeted Attack Untargeted attack aims to generate adversarial examples that would be misclassified by the attacked model into any category different from the ground truth one. The overall results are shown in Table 1, in which, the Meta transfer denotes that meta attacker trained on one dataset is used to attack target models on another dataset and Meta guided denotes using the output of meta attacker to guide the update of original Zoo method. 
Our method is competitive with baselines in terms of adversarial perturbation and success rate, but our query number is reduced.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "mmGf2ncaE.CWugKbWqx.00", "parag_1": "Predict 3D positions With 3D spatial positional encoding and pair representation, the model can learn a good 3D representation. However, it still lacks the ability to directly output coordinates, which is essential in 3D spatial tasks. To this end, we introduce an SE(3)-equivariance head to predict the delta positions based on pair representation, denoted as", "parag_2": "Predict 3D positions With 3D spatial positional encoding and pair representation, the model can learn a good 3D representation. However, it still lacks the ability to directly output coordinates, which is essential in 3D spatial tasks. To this end, we introduce an SE(3)-equivariant head to predict the delta positions based on SE(3)-invariant pair representation and equivariant input x i − x j , denoted as", "annot_1": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "VWgazAa3VJ.iaVGYcsIw.01", "parag_1": "We examine the performance of AlgebraNet versions of ResNet-50 He et al. and MobileNetv1 Howard et al. (2017) on the ImageNet Russakovsky et al. (2015) dataset. We use a width multiplier on the channels to adjust model capacity. For all experiments we use SGD with momentum of 0 . 9 .", "parag_2": "We examine the performance of AlgebraNet versions of ResNet-50 (He et al., 2016) and MobileNetv1 (Howard et al., 2017) on the ImageNet (Russakovsky et al., 2015) dataset. We use a width multiplier on the channels to adjust model capacity. For all experiments we use SGD with momentum of 0 . 
9 .", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Review this paragraph.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.15", "parag_1": "The design of a calendar should not deviate from calendar interfaces that users are familiar with ( DG2 ) . This was observed in various aspects of the design such as layout, medication entries, and icons used to annotate entries such as those which should be taken with food. Over 80% of the participants preferred Design B because of its prob- able similitude to already existing calendars. This was surprising to us because Design A was the design that was intended to resemble ex- isting calendars. The sidelining of Design A can be attributed mainly to the height of medication entries which spanned the entire allowed administration period. This, according to participants, introduced too much clutter. Results indicate that the preferred layout should be vertically oriented with days of the week at the top and times of the day on the left. The dosage used on the medication entry should be one that users are familiar with. The unit used should also be consistent with the ones used in the prescriptions. It should show the actual quantity (e.g., milligrams) as opposed to relative classifications such as number of pills or tablets. This observation also applies to the icon used to denote medication that should be taken with food.Realistic icons that are related to food should be used instead of custom designed icons. In this case, bananas are more effective in indicating the take-with-food action than an icon resembling a utensil such as spoon. Such icon should be positioned together with the entry and not as part of medication summaries.", "parag_2": "The design of a calendar should not radically deviate from calendar interfaces that users are familiar with ( DG2 ) . 
This was observed in various aspects of the design such as layout, medication entries, and icons used to annotate entries such as those which should be taken with food. Over 80% of the participants preferred Design B because of its probable similarity to already existing calendars. This was surprising to us because Design A was the design that was intended to resemble existing calendars. The sidelining of Design A is attributed mainly to the height of medication entries which spanned the entire allowed administration period and introduced too much clutter. The results indicate that the preferred layout should be vertically oriented with days of the week at the top and times of the day on the left. The dosage used on the medication entry should be one that users are familiar with. The unit used should be consistent with the one used in the prescription. It should show the actual quantity (e.g., milligrams) as opposed to relative classifications such as number of pills or tablets. Similarly, realistic food-related icons (e.g., a banana) should be used to denote that medication that should be taken with food. Such icon should be positioned together with the entry and not as part of medication summaries.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Shorten my sentence related to realistic food-related icons.", "annotator": "annotator_09"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.20", "parag_1": "It is necessary to emphasize our difference with emerging causal inference approaches based on optimal transport. Dunipace (2021) augments the IPS method via a propensity score estimator based on optimal transport; however, it is limited by the aforementioned high variance issue. Torous et al. 
(2021) uses the push forward operator to improve change-in-change models; however, they are designed for multi-phase data which is not available in our case. Li et al. (2022) shares similar settings to us, while they focuses on variable decomposition in latent space and is identical to Uri et al. (2017) in terms of alignment technology. Our contribution lies in investigating the role and flexibility of optimal transport to augment CFR, mitigating the MSE and UCE issues that have been long circumvented in the literature as recent as this year.", "parag_2": "It is necessary to distinguish ourselves from emerging OT-based causal inference approaches. Dunipace (2021) augments the IPS method with a propensity score estimator based on OT; however, it is limited by the aforementioned high variance issue. Torous et al. (2021) uses the push-forward operator to improve change-in-change models; however, they are designed for multi-phase data which is not available in our case. Li et al. (2022) has a similar setup to to us, while it focuses on the decomposition of latent variables and is identical to Shalit et al. (2017) in terms of alignment technology. Our work is a new take on OT under the CFR framework, alleviating the MSE and UCE issues that have been long neglected by the causal inference community until this year.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Revise this paragraph for better clarity.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Make this paragraph more simple to read and concise phrases that are too long when possible.", "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.13", "parag_1": "Training protocol. A fully connected neural network with two hidden layers as 60-60 has been selected to realize the representation map ψ and the factual outcome map ϕ for ESCFR and other neural network based baselines. 
For fair comparison, all neural models are trained for 400 epochs with Adam Kingma & Ba (2015) optimizer, where both learning rate and weight decay are set to 1 × e − 3 . Other settings of optimizers follow Kingma & Ba (2015). We search hyperparameters within the range in Figure 5, checkpoint the validation performance every 2 epochs, and export the best model to evaluate its performance on the test dataset.", "parag_2": "Training protocol. A fully connected neural network with two 60-dimensional hidden layers is selected to instantiate the representation mapping ψ and the factual outcome mapping ϕ for ESCFR and other neural network based baselines. To ensure a fair comparison, all neural models are trained for 400 epochs with Adam optimizer, with the learning rate and weight decay both set to 0.001. Other settings of optimizers follow Kingma & Ba (2015). We fine-tune hyperparameters within the range in Figure 5, validate performance every two epochs, and save the optimal model for test.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rewrite for fluency (while keeping the original structure of the sentences)", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the language in this text and make it slightly more formal.", "annotator": "annotator_07"}} {"id_paragraph": "Rd7TGMaUy.dkY5HcKwZ1.03", "parag_1": "The shortcomings of the discussed methods led us to consider the use of an entropy-based objective. In principle, the proposal entropy objective could be optimized for the L2HMC sampler with variational inference (Poole et al., 2019; Song & Ermon, 2019), but our preliminary experiments using this idea were not promising. Therefore, we take inspiration from the Normalizing Flow model and investigated tractable ways to optimize the proposal entropy directly.", "parag_2": "The shortcomings of the discussed methods led us to consider the use of an entropy-based objective. 
However, L2HMC does not have tractable proposal probability p ( x (cid:48) | x ) , preventing the direct application of the entropy-based objective. In principle, the proposal entropy objective could be optimized for the L2HMC sampler with variational inference (Poole et al., 2019; Song & Ermon, 2019), but our preliminary experiments using this idea were not promising. Therefore, we designed our sampler that possess tractable proposal probability and investigated tractable optimization of the proposal entropy objective.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OzYyHKPyj7.O9Mk1uqXra.02", "parag_1": "Whereas the superposition stack can be viewed as calculating expectations over individual stack elements, the Nondeterministic Stack RNN (NS-RNN) model of DuSell & Chiang (2020) maintains a probability distribution over whole stacks, using a weighted PDA. It has cubic time complexity and quadratic space complexity with respect to input length, leading to higher wall-clock run time than other stack RNNs but often better task performance.", "parag_2": "The stack module in the Nondeterministic Stack RNN (NS-RNN) model of DuSell & Chiang (2020) maintains a probability distribution over whole stacks by simulating a weighted PDA. It has cubic time complexity and quadratic space complexity with respect to input length, leading to higher wall-clock run time than other stack RNNs, but often better task performance.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Concision"], "instruction": "Remove the unessential details from the paragraph.", "annotator": "annotator_03"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.14", "parag_1": "Study design For each task introduced in §5.1, we construct a dataset where we duplicate one of the modalities as the two input modalities, in addition to the original dataset. 
For example, we predict the digit class using two identical gray-scale images in the case of MNIST. We train a multi-modal DNN on each dataset of each task as explained below:", "parag_2": "Study design For each task introduced in §5.1, in addition to the original dataset, we construct a dataset with two identical input modalities by copying one of the modalities. For example, when using the colored-and-gray-MNIST dataset, we predict the digit class using two identical gray-scale images. We train a multi-modal DNN on these dataset as explained below for each task:", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Improve the understandability of the entire paragraph", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "The wording in this paragraph is confusing, please improve the readability.", "annotator": "annotator_07"}} {"id_paragraph": "a5LsN55zPt.xc-t-aeWL.00", "parag_1": "PDEs of the same family), and a challenging 2D mesh-based simulation of paper folding. In 1D, we show that our model outperforms state-of-the-art deep learning-based surrogate models in terms of long-term evolution error by up to 39.3%, and can adaptively tradeoff computation to improve long-term prediction error. On a 2D mesh-based simulation, our modelcan strategically choose appropriate edges to refine or coarsen, and outperforms a baseline without remeshing.", "parag_2": "PDEs of the same family), and a challenging 2D mesh-based simulation of paper folding. In 1D, we show that our model outperforms state-of-the-art deep learning-based surrogate models in terms of long-term evolution error by 33.7%, and can adaptively tradeoff computation to improve long-term prediction error. 
On a 2D mesh-based simulation, our model outperforms state-of-the-art MeshGraphNets + classical Adaptive Mesh Refinement (AMR) in 2D mesh-based simulations.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MZYBK_Wp2X.HVFitLjAId.02", "parag_1": "We range the measures by their ARI score on every graph of the dataset. The rank is defined as the position of the measure in this list, averaged over the dataset (see Table 2). It is important to note that the global leadership does not give a comprehensive advice on which measure is better to use, because for a particular graph, the global leader can perform worse than the others. Here we consider the entire LFR space, not just its zone corresponding to real graphs, so the ranking may differ from similar works.", "parag_2": "We range the measures by their ARI score on every graph of the dataset. The rank is defined as the position of the measure in this list, averaged over the dataset (see Table 2). It is important to note that the global leadership does not give a comprehensive advice on which measure is better to use, because for a particular graph, the global leader can perform worse than the others. Here we consider the entire LFR space, not just its zone corresponding to common real-world graphs, so the ranking may differ from those obtained for restricted settings.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "n49HyDdo13.U-SJnJdoJS.00", "parag_1": "The point cloud completion network is trained with Gridding Loss [44], which is a L1 distancebetween predicted G p = < V p , W p > and ground truth G gt = < V gt , W gt > 3D grids in N Gresolution. Gridding Loss bypasses the un-orderedness of point clouds and is evaluated on the 3D grid. 
The depth completion network is trained using log L 1 pair-wise loss which forces the pairs of pixels in the predicted depth to regress to similar values as the corresponding pairs in the ground truth depth [47]. Let G describe the set of pixels where the ground truth depth is non-zero, i and j are the pixel pairs, and y and y ∗ denote the ground truth and predicted depths, respectively. We express these two loss functions as:", "parag_2": "The point cloud completion network is trained with Gridding Loss [44], which is a L1 distancebetween predicted G p = < V p , W p > and ground truth G gt = < V gt , W gt > 3D grids in N Gresolution. V = { v i } N 3 G i =1 is collection of all vertices in 3D grid and W = { w i } N 3 G i =1 is the weightscorresponding to each vertex. Gridding Loss bypasses the un-orderedness of point clouds and is evaluated on the 3D grid. The depth completion network is trained using log L 1 pair-wise loss which forces the pairs of pixels in the predicted depth to regress to similar values as the corresponding pairs in the ground truth depth [47]. Let G describe the set of pixels where the ground truth depth is non-zero, i and j are the pixel pairs, and y and y ∗ denote the ground truth and predicted depths, respectively. We express these two loss functions as:", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "HkfoEihoH.QdgynxJz5.00", "parag_1": "This simple sampler tends to produce diverse batches similar to a k -DPP. As shown in Figure 1, switching between the two samplers does not affect the active learner’s statistical performance while greatly improving the computational performance. A thorough comparison on the running times and test accuracies of k - MEANS ++ and k -DPP basedgradient embedding samplingcan be found in Appendix G.", "parag_2": "This simple sampler tends to produce diverse batches similar to a k -DPP. 
As shown in Figure 1, switching between the two samplers does not affect the active learner’s statistical performance while greatly improving the computational performance. Appendix G compares run time and test accuracy for both k - MEANS ++ and k -DPP based sampling in our proposed gradient space.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Simplify the wording of this paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make the last sentence more concise.", "annotator": "annotator_07"}} {"id_paragraph": "IoTyuVEanE.Et-c0vQfeb.02", "parag_1": "Here, γ ∈ [0 , 1] is a parameter that controls how much our ReGAL’s RPN balances between the coverage of a token (i.e. how often it occurs) and its instance level importance. Low values of γ favor tokens with high coverage while high values of γ favor rules with high impact tokens without regard for coverage. We use γ = 0 . 6 for our experiments, which works well in practice.", "parag_2": "Here, γ ∈ [0 , 1] is a parameter that controls how much our ReGAL’s RPN balances between the coverage of a token (i.e. how often it occurs) and its instance level importance. Low values of γ favor tokens with high coverage while high values of γ favor rules with high impact tokens without regard for coverage. Since the types of rules needed may differ as training progresses, we allow users to choose γ for each round of proposed rules. In practice, we find that γ ∈ [0 . 5 , 0 . 9] tend to produce good rules.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "jyac3IgQ44.f4au9jfat5.03", "parag_1": "Let { s k | s k ∈ Z 3 } Mk =0 denote a series of window sizes, where s 0 is the size of the query windowand s 1 ,...,M are the sizes of M successively larger key windows. 
Let V = { v i | v i = ( x i , f i ) } |V| i =1 bethe input voxel set, with xyz coordinates x i ∈ Z 3 and feature vector f i ∈ R C for voxel i . We firstpartition the voxel set into non-overlapping 3D windowseach of size s 0, and find the non-emptyones as query windows with their centers denoted by { c i | c i ∈ Z 3 } Li =0 , where L is the total numberof query windows. To get query voxels V c i , s 0for the query window centered on c i, one can simplygather all the non-empty voxels within the window as the queries. While keeping efficiency in mind,we present a novel chessboard samplingstrategy, which will be detailed in Section 3.1.2.", "parag_2": "Given a series of window sizes { s k | s k ∈ Z 3 } Mk =0 , where s 0 denotes the query window size ands 1 ,...,M denote M different key window sizes from small to large. Let V = { v i | v i = ( x i , f i ) } |V| i = 1be the input voxel set, where x i ∈ Z 3 denotes voxel coordinates and f i ∈ R C denotes voxel features. We first partition the voxel set into non-overlap 3D windows s 0 by finding non-empty window centers{ c i | c i ∈ Z 3 } Li =0 , where L is the number of non-empty windows. To get query voxels, we can simplygather all the non-empty voxels V c i , s 0 centered on c i within query window s 0 to guarantee that everyvoxel can serve as a query and be updated after one attention layer. 
We also prsent another novelchessboard sampling strategy for the query voxel sampling which will be discussed in Section 3.2.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph for improved readability.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Improve the readability of this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.13", "parag_1": "• Mask-Output (No representations, No input action set): Assuming a fixed action space in the output, q-values or policy probabilities are masked out for unavailable actions. It represents prior SAS-MDP works - Boutilier et al. ; Chandak et al. (2020a), Huang & Ontañón (2020). • Mask-Input-Output (No representations): Augments the binary availability mask of given actions, 1s for available indices and 0s for unavailable, to the state input of Mask-Output . • Utility-Policy (No input action set): Proposed by Jain et al. (2020), action representations are used independently for computing each action’s utility, ignoring any interdependence. • Simple DQN (No cascade, No input action set): For listwise RL specifically, we include the DQN baseline that simply selects top-K items, instead of reasoning about the overall list. Thus, it ignores both action interdependences: (i) on other available actions, (ii) on other items in the list.", "parag_2": "• Mask-Output (No action representations, No input action set): Assumes a fixed action space output. Q-values or policy probabilities are masked out for unavailable actions. It represents prior work on SAS-MDP: Boutilier et al. ; Chandak et al. (2020a) ; Huang & Ontañón (2020). • Mask-Input-Output (No action representations): Augments Mask-Output with an extra input of action set via a binary availability vector: having 1s at available action indices and 0s otherwise. • Utility-Policy (No input action set): Jain et al. 
(2020) propose a parallel architecture to compute each action’s utility using action representations. But, it ignores any action interdependence. • Simple DQN (No cascade, No input action set): A non-cascaded DQN baseline for listwise RL that selects top-K items instead of reasoning about the entire list. Thus, it ignores two action interdependences: (i) on other items in the list and (ii) on other available actions.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the middle sentences, preferring active formulations over passive ones.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "\"No representations\" become \"No action representations\". Rewrite the \"Mask-Input-Output\" and \"Utility-Policy\" points in the list for better readability. Uniformise the language in the paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "p8yrWJS4W.eHA5NswPr.01", "parag_1": "• No modification ( p ). This is a baseline experiment where we keep the original strings as are • Text Truncation ( p short ). We truncate texts to 1 / 3 of their original length. • Article Removal ( p no art ). We remove all articles (i e., ‘a’, ‘an’ and ‘the’). • Stopwords Removal ( p no stop ). We remove all stopwords (e g., ‘that’ or ‘so’).• Sentence-level Permutation ( p swap ). We permute the first halves of texts (as delineated by sentences) across the entire corpus (i", "parag_2": "• No modification ( p ). This is a baseline experiment where we keep the original strings as are. • Text Truncation ( p short ). We truncate texts to 1 / 3 of their original length. This allows us to understand whether the divergence metrics pick up on differences in dataset length statistics. • Article Removal ( p no art ). We remove all articles (‘a’, ‘an’ and ‘the’) in the text. This allows us to understand whether the divergence metrics can distinguish between texts with or without basic levels of fluency and grammaticality. 
• Stopwords Removal ( p no stop ). We remove all stopwords (e.g., ‘that’ or ‘so’) in the text. This allows us to understand whether the divergence metrics can detect differing levels of syntactic coherence, rather than just focusing on content words.• Sentence-level Permutation ( p swap ). We permute the first halves of texts (as delineated by sentences) across the entire corpus (i", "annot_1": {"annotation": ["Content_addition", "Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "fJhx73ErBg.NeKLbmOxG8.00", "parag_1": "STL [7] offers a formalism for expressing and reasoning among a rich set of rules. The rules aredefined over predicates (inequalities of real-valued functions). In standard STL, the formulas areforward-looking, meaning that the formulas are evaluated by looking at future trajectories. In thispaper, we need to evaluate formulas using past trajectories (trajectories of road agents we havetracked over the past). Therefore, our rules will be encoded as a set of past time STL (ptSTL) [8] formulas with the following syntax", "parag_2": "STL [7] offers a formalism for expressing and reasoning among a rich set of rules. The rules aredefined over predicates (inequalities of real-valued functions). In standard STL, the formulas areforward-looking, meaning that the formulas are evaluated by looking at future trajectories. In thiswork, we need to evaluate formulas using past trajectories (trajectories of road agents we havetracked over the past), using this past information to compute a risk of violating these rules in thefuture. 
Therefore, our rules will be encoded as a set of past time STL (ptSTL) [8] formulas with the following syntax", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "_F_xxvP0sL.hgPlvA7CZ6.01", "parag_1": "GBDT (Chen and Guestrin, 2016) is the widely-used tree model. DeepFM (Guo et al., 2017) is theinner-product interaction-based model. FATE (Wu et al., 2021) and TabGNN (Guo et al., 2021b) arethe recent graph models. DIN (Zhou et al., 2018) and DIEN (Zhou et al., 2019) are the attentionbased sequential models. SIM (Pi et al., 2020), UBR (Qin et al., 2020), and RIM (Qin et al., 2021) are the retrieval-based models. On top-n recommendation tasks, we compare PET with six strongrecommendation models, including factorization-based FPMC (Rendle et al., 2010) and TransRec (He et al., 2017), and recently proposed DNN models NARM (Li et al., 2017), GRU4Rec (Hidasiet al., 2016), SASRec (Kang and McAuley, 2018), and RIM (Qin et al., 2021).", "parag_2": "GBDT (Chen and Guestrin, 2016) is the widely-used tree model. DeepFM (Guo et al., 2017) is theinner-product interaction-based model. DIN (Zhou et al., 2018) and DIEN (Zhou et al., 2019) arethe attention-based sequential models. SIM (Pi et al., 2020), UBR (Qin et al., 2020), and RIM (Qin et al., 2021) are the retrieval-based models. 
On top-n recommendation tasks, we compare PET withsix strong recommendation models, including factorization-based FPMC (Rendle et al., 2010) and TransRec (He et al., 2017), and recently proposed DNN models NARM (Li et al., 2017), GRU4Rec(Hidasi et al., 2016), SASRec (Kang and McAuley, 2018), and RIM (Qin et al., 2021).", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "8_oadXCaRE.Kt4-LpYuM.00", "parag_1": "In fact, this biological implausibility is also an important issue for machine intelligence. For their impressive performance, ANNs trade off other desired properties, which are present in biological systems. For example, ANN training often demands very large and labelled datasets. When labels are unavailable, self-supervised learning schemes exist, where supervisory error signals generated by the network itself are exploited and backpropagated from the output towards the input to update the network’s parameters (Goodfellow et al., 2014; Devlin et al., 2018; Chen et al., 2020). However, this global propagation of signals in deep networks introduces another limitation. Namely, it prevents the implementation of efficient distributed computing hardware that would be based on only local signals from neighbouring physical nodes in the network, and is in contrast to the local synaptic plasticity rules that are believed to govern biological learning. Several pieces of work have been addressing parts of the biological implausibility and drawbacks of backpropagation in ANNs (Bengio et al., 2015; Lillicrap et al., 2016; Guerguiev et al., 2017; Pfeiffer & Pfeil, 2018; Illing et al., 2019; Pogodin & Latham, 2020; Millidge et al., 2020; Pogodin et al., 2021). Recently, an approximation to backpropagation that is mostly Hebbian, i.e. 
relies on mostly pre- and post-synaptic activity of each synapse, has been achieved by reducing the global error requirements to 1-bit information (Pogodin & Latham, 2020). Two schemes that further localize the signal that is required for a weight update are Equilibrium Propagation (Scellier & Bengio, 2017) and Predictive Coding (Millidge et al., 2020).", "parag_2": "In fact, this biological implausibility is also an important issue for machine intelligence. For their impressive performance, ANNs trade off other desired properties, which are present in biological systems. For example, ANN training often demands very large and labelled datasets. When labels are unavailable, self-supervised learning schemes exist, where supervisory error signals generated by the network itself are exploited and backpropagated from the output towards the input to update the network’s parameters (Goodfellow et al., 2014; Devlin et al., 2018; Chen et al., 2020). However, this global propagation of signals in deep networks introduces another limitation. Namely, it prevents the implementation of efficient distributed computing hardware that would be based on only local signals from neighbouring physical nodes in the network, and is in contrast to local synaptic plasticity rules that partly govern biological learning. Several pieces of work have been addressing parts of the biological implausibility and hardware-inefficiency of backpropagation in ANNs (Bengio et al., 2015; Lillicrap et al., 2016; Guerguiev et al., 2017; Pfeiffer & Pfeil, 2018; Illing et al., 2019; Pogodin & Latham, 2020; Millidge et al., 2020; Pogodin et al., 2021). such as the need for exactly symmetric forward and backward weights or the waiting time caused by the network’s forward-backward pass between two training updates in a layer (weight transport and update-locking problems). Recently, an approximation to backpropagation that is mostly Hebbian, i.e. 
relies on mostly pre- and post- synaptic activity of each synapse, has been achieved by reducing the global error requirements to 1-bit information (Pogodin & Latham, 2020). Two schemes that further localize the signal that is required for a weight update are Equilibrium Propagation (Scellier & Bengio, 2017) and Predictive Coding (Millidge et al., 2020).", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nkOpNqg-ip.OwJsIhe_p.03", "parag_1": "Due to space limitations, we can only summarize the results here. The full results can be found in Table 2 in Sec. C of the appendix. The code (both Java and Python) to reproduce these results is publicly available 1 .", "parag_2": "Due to space limitations, Fig. 2 only summarizes the results very superficially through final ranks. More detailed results can be found in Table 2 and Fig. 4 in Sec. C of the appendix. The code (both Java and Python) to reproduce these results is publicly available 1 .", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.01", "parag_1": "Our goal is to design a policy framework that is optimal for any given action space by addressing the challenges in Sec. 3.2. To work with action representations, we build on the utility network proposed by Jain et al. Our key insight is to use graph neural networks for both, summarizing the list of given action representations and learning inter-action interdependence.", "parag_2": "Our goal is to design a policy framework that is optimal for any given action set by addressing the challenges in Sec. 3.2. We build on the utility network proposed by Jain et al. (2020) that acts in parallel on each action’s representation. 
Our central insight is to use graph neural networks for summarizing the set of action representations as a state component and learning action relations.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Use clearer expression, use accurate words.", "annotator": "annotator_08"}} {"id_paragraph": "NAxP0iFmBr.5QBuYp8GH.02", "parag_1": "Proactive Motion Capture Few previous works studied proactive motion capture with a single mobile camera (Zhou et al., 2018; Cheng et al., 2018; Kiciroglu et al., 2019). In comparison, more works studied the control of a multi-camera team. Among them, many are based on optimization with various system designs, including marker-based (Nägeli et al., 2018), RGBD-based (Xu et al., 2017), two-stage system (Saini et al., 2019; Tallamraju et al., 2019), hierarchical system (Ho et al., 2021), etc . It is important to note that all the above methods deal with static occlusion sources or clean landscapes. Also, the common similarity shared by these works is the use of hand-crafted optimization objectives and some fixed-form camera formations. These factors resulted in poor adaptability to dynamic scenes saturated with uncertainties. Recently, RL-based methods received more attention due to their potential for dynamic formation adjustments. These works focused on active 3D HPE in the Gazebo simulation (Tallamraju et al., 2020) or Panoptic dome (Joo et al., 2015; Pirinen et al., 2019; Gärtner et al., 2020) for active view selection. Among them, AirCapRL (Tallamraju et al., 2020) shares similarities with our work. However, it is restricted to coordinating between two cameras in clean landscapes without occlusion. We study the collaboration between more cameras and resolve the credit assignment issue with novel CTCR incentives. 
Meanwhile, we consider a more challenging scenario with multiple distracting humans as dynamic occlusions, which requires dedicated algorithms to handle.", "parag_2": "Proactive Motion Capture Few previous works studied proactive motion capture with a single mobile camera (Zhou et al., 2018; Cheng et al., 2018; Kiciroglu et al., 2019). In comparison, more works studied the control of a multi-camera team. Among them, many are based on optimization with various system designs, including marker-based (Nägeli et al., 2018), RGBD-based (Xu et al., 2017), two-stage system (Saini et al., 2019; Tallamraju et al., 2019), hierarchical system (Ho et al., 2021), etc . It is important to note that all the above methods deal with static occlusion sources or clean landscapes. Also, the majority of these works adopt hand-crafted optimization objectives and some forms of fixed camera formations. These factors resulted in poor adaptability to dynamic scenes saturated with uncertainties. Recently, RL-based methods receive more attentions due to their potentials on dynamic formation adjustments. These works studied active 3D HPE in the Gazebo simulation (Tallamraju et al., 2020) or Panoptic dome (Joo et al., 2015; Pirinen et al., 2019; Gärtner et al., 2020) for active view selection. Among them, AirCapRL (Tallamraju et al., 2020) shares similarities with our work. However, it is restricted to coordinating between two cameras in clean landscapes without occlusions. We study the collaborations between multiple cameras ( n ≥ 3) and resolve the credit assignment issue with our novel reward design (CTCR). 
Meanwhile, we study a more challenging scenario with multiple distracting humans served as the sources of dynamic occlusions, which requires a more sophisticated algorithms to handle.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the english", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Use accurate words and expression.", "annotator": "annotator_08"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.17", "parag_1": "Visual quality comparison and Ablation study. Fig.shows that our results are well aligned with image boundary and visually close to fully-supervised counterpart. Fig. 7 shows significant improvement of our results by additively introducing different relationships for more regularization. Please refer to the Appendix for more details and ablation studies.", "parag_2": "Visual quality and ablation study. Fig. 6 shows that our results are better aligned with region boundaries and visually closer to fully-supervised counterparts. Fig. 7 shows that our results improve significantly with different relationships for more regularization. See Appendix for more details and ablation studies.", "annot_1": {"annotation": ["Concision"], "instruction": "Edit this paragraph to be more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Improve the english of this paragraph and make it slightly shorter.", "annotator": "annotator_07"}} {"id_paragraph": "Sx6SnclSL.nQLOUHvx8n.01", "parag_1": "Self-supervised Learning for Point Clouds. 3D representation learning without annotations has been widely studied in recent years. Mainstream methods mainly build the pretext tasks to reconstruct the transformed input point cloud based on the encoded latent vectors, such as rotation [27], deformation [1], rearranged parts [35] and occlusion [39]. 
From another perspective, PointContrast [44]utilizes contrastive learning between features of the same points from different views to learn discriminative 3D representations. DepthContrast [50] further extends the contrast for depth maps of different augmentations. CrossPoint [2] conducts cross-modality contrastive learning between point clouds and their corresponding rendering images to acquire rich self-supervised signals. Point-BERT [48]first introduces BERT-style pre-training for 3D point clouds witha standard transformer network andperforms competitively on various downstream tasks. In this paper, we propose an MAE-style [18]pre-training framework, Point-M2AE, which reconstructs the highly masked 3D coordinates of theinput point cloud for self-supervised learning. Point-M2AE with a hierarchical architecture achievesstate-of-the-art downstream performance by learning the multi-scale representation of point clouds.", "parag_2": "Self-supervised Learning for Point Clouds. 3D representation learning without annotations has been widely studied in recent years. Mainstream methods mainly build the pretext tasks to reconstruct the transformed input point cloud based on the encoded latent vectors, such as rotation [34], deformation [1], rearranged parts [42] and occlusion [49]. From another perspective, PointContrast [55] utilizes contrastive learning between features of the same points from different views to learn discriminative 3D representations. DepthContrast [63] further extends the contrast for depth maps of different augmentations. CrossPoint [2] conducts cross-modality contrastive learning between point clouds and their corresponding rendering images to acquire rich self-supervised signals. 
Point-BERT [60] and Point-MAE [33] respectively introduce BERT-style [11] and MAE-style [20] pre-training schemes for 3D point clouds with standard transformer networks and performs competitively on various downstream tasks, but both of them can only encode point clouds with a single resolution and ignores the local-global relations between 3D shapes. In this paper, we propose Point-M2AE, an MAE-style framework with a hierarchical transformer for multi-scale point cloud pre-training. We achieve state-of-the-art downstream performance by learning the multi-scale representation of point clouds.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.04", "parag_1": "In this research, we focus on i) visualizing a single patient’s data, and ii) visualizing events on a calendar – a two-dimensional chart where one dimension shows days of a particular year and the other dimension shows time of day [43]. Therefore we then turn to work on on-calendar visualization.", "parag_2": "In this research, we focus on i) visualizing a single patient’s data, and ii) visualizing events on typical calendar layouts, which often consist of two-dimensional charts where one dimension shows days and the other dimension shows the time of day [43]. Alternative layouts exist and we discuss them in the next subsection.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_substitution", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.16", "parag_1": "We use the stochastic method to estimate the entropy. First, we sample a set of rotamers from the distribution using the inversion of the flows (Eq.1, 4). Then, we compute the negative log probability of the samples and take the average as an estimation of the entropy. 
These two steps can be efficiently done thanks to the capability of computing the exact likelihood of normalizing flows.", "parag_2": "To estimate the entropy, we use a stochastic method: First, we sample a set of rotamers from the distribution using the inverted flows (Eq.1, 4). Then, we compute the negative log probability of the samples and take their average as an estimate of the entropy. Computing these steps is efficient thanks to the ability to compute the exact likelihood of normalizing flows.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "ydudDS_QrK.dSEEUtAQ1H.00", "parag_1": "We have developed a novel influence analysis to understand the effects of graph elements on the parameter changes of GCNs without needing to retrain the GCNs. We chose Simple Graph Convolution due to its convexity and its competitive performance to non-linear GNNs on a variety of tasks. Our influence functions can be used to approximate the changes in model parameters caused by edge or node removals from an attributed graph. Moreover, we provided theoretical bounds on the estimation error of the edge and node influence on model parameters. We experimentally validated the accuracy and effectiveness of our influence functions by comparing its estimation with the actual influence obtained by model retraining. We showed in our experiments that our influence functions could be used to reliably identify edge and node with negative and positive influences on model performance. Finally, we demonstrated that our influence function could be applied to graph rectification and model attacks. A P ROOFS Lemma 3.1.", "parag_2": "We have developed a novel influence analysis to understand the effects of graph elements on the parameter changes of GCNs without needing to retrain the GCNs. 
We chose Simple Graph Convolution due to its convexity and its competitive performance to non-linear GNNs on a variety of tasks. Our influence functions can be used to approximate the changes in model parameters caused by edge or node removals from an attributed graph. Moreover, we provided theoretical bounds on the estimation error of the edge and node influence on model parameters. We experimentally validated the accuracy and effectiveness of our influence functions by comparing its estimation with the actual influence obtained by model retraining. We showed in our experiments that our influence functions could be used to reliably identify edge and node with negative and positive influences on model performance.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove the last sentence of this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Delete the last sentence of this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "nURqpCEj2G.oP-8Uo4uFl.00", "parag_1": "DETR [6] applies fixed positional encodings [26, 4] to the input of the transformer architecturefor object detection. Swin Transformer [24] adds relative position biases in similarity computation of self-attention, improving performance over models without these biases. LoFTR [35] uses the2D extension of the position encoding to produce position-dependent features for image matching.", "parag_2": "DETR [ fixed positional encodings [ the input of the transformer architecture position of self-attention, improving performance over models without these biases. LoFTR [35] uses the2D extension of the position encoding to produce position-dependent features for image matching.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.10", "parag_1": "The movement distance ( A ) from the starting position to the center of the target was 100 and 200 mm . 
We set the A values such that the interval between ID s in Eq. 1 was approximately constant. The interval ( I ) between the notch and the target was 0, 3.41, 6.28, 11.9, and ∞ mm . We set the I values with reference to the previous study [23]. Note that ∞ mm indicated a condition of no notch.", "parag_2": "The movement amplitude ( A ) from the starting position to the center of the target was 100 and 200 mm (364 and 729 pixels). We set the A values such that the interval between ID s in Eq. 1 was approximately constant. The interval ( I ) between the notch and the target was set as 0, 3.41, 6.28, 11.9, and ∞ mm (0, 12, 23, 44, and ∞ pixels). We set the I values with reference to the values of the previous study that investigated the interval between the distractor and the target [25]. Note that an interval of ∞ mm indicates a condition with no notch.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ogHsB0aJsd.PzEritC2E6.01", "parag_1": "In addition, these results only hold asymptotically, where the function of interest can be exactly identified in a pointwise manner. Such an overly strong guarantee is unrealistic in the finite-sample regime, where one can only hope to approximate the function well in an average sense under some distribution , i.e., finite-sample performance guarantees should ideally bound ∥ (cid:98) q − q π ∥ 2 ,ν for the learned (cid:98) q , where ∥·∥ 2 ,ν is ν -weighted 2-norm. Such fine-grained analyses are non-existent in MIS. Even in the broader literature, such results not only require Bellman-completeness-type assumptions (Uehara et al., 2021), they also come with some fixed ν (which is not necessarily d D ; see Section 2) and the user has no freedom in choosing ν . 
This creates a gap in the literature, as downstream learning algorithms that use off-policy function estimation as a subroutine often assume the estimation to be accurate under certain specific distributions (Kakade & Langford, 2002; Abbasi-Yadkori et al., 2019).", "parag_2": "In addition, these results only hold asymptotically, where the function of interest can be exactly identified in a pointwise manner. Such an overly strong guarantee is unrealistic in the finite-sample regime, where one can only hope to approximate the function well in an average sense under some distribution , i.e., finite-sample performance guarantees should ideally bound ∥ (cid:98) q − q π ∥ 2 ,ν for the learned (cid:98) q , where ∥·∥ 2 ,ν is ν -weighted 2-norm. Such fine-grained analyses are non-existent in MIS. Even in the broader literature, such results not only require Bellman-completeness-type assumptions (Uehara et al., 2021), they also come with some fixed ν (which is not necessarily d D ; see Section 2) and the user has no freedom in choosing ν . This creates a gap in the literature, as downstream learning algorithms that use offpolicy function estimation as a subroutine often assume the estimation to be accurate under certain specific distributions. For example, in the setting of online policy optimization, Abbasi-Yadkori et al. 
(2019) require value estimates to be accurate on the occupancy of the (unknown) optimal policy, and (Kakade & Langford, 2002) require them to be accurate on the occupancy of the learning policy at each iteration.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_04"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "NwOG107NKJ.0PPYM22rdB.00", "parag_1": "Social network models have been applied for many purposes to include: modeling an individual’s behavioral patterns to predict future nodal attributes (e.g., connections) over time McConnell et al.[2018] McAvoy et al. [2020], modeling interactions and cluster formation within online communities Fortunato and Hric [2016] Xu et al. Liu et al. [2018], and modeling how network characteristics (e.g., centrality) influences its users Qiu et al. Overall, these models attempt to characterize the relationship amongst network structure and information diffusion, decision making, and individual behavior. Jackson et al.", "parag_2": "Social network models have been applied for many purposes to include: modeling an individual’s behavioral patterns to predict future nodal attributes (e.g., connections) over time [McConnell et al., 2018] [McAvoy et al., 2020], modeling interactions and cluster formation within online communities [Fortunato and Hric, 2016] [Xu et al., 2020] [Liu et al., 2018], and modeling how network characteristics (e.g., centrality) influences its users [Qiu et al., 2017]. Overall, these models attempt to characterize the relationship amongst network structure and information diffusion, decision making, and individual behavior. 
[Jackson et al., 2017].", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Update the citations part in this paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "oxDnGzBe8n.r-P4EFl_4.00", "parag_1": "Intuitively, ˆΘ( ϑ ) measures the trigger generator’s trainability with respect to a randomly initialized target model. The generator’s trainability indicates the easiness of effectively generating input-aware triggers, implying the model’s vulnerability to input-aware backdoor attacks. To verify the hypothesis, on the CIFAR10 dataset with the generator configured as in Appendix § A, we measure ˆΘ( ϑ ) with respect to 900 randomly generated arches as well as the model accuracy (ACC) on clean inputs and the attack success rate (ASR) on trigger-embedded inputs, with the results shown in Figure 2. Observe that the conditional number of ˆΘ( ϑ ) has a strong negative correlation with ASR, with a smaller value indicating higher attack vulnerability; meanwhile, it has a limited correlation with ACC, with most of the arches having ACC within the range from 80% to 95%.", "parag_2": "Intuitively, ˆΘ( ϑ ) measures the trigger generator’s trainability with respect to a randomly initialized target model. The generator’s trainability indicates the easiness of effectively generating input-aware triggers, implying the model’s vulnerability to input-aware backdoor attacks. To verify the hypothesis, on the CIFAR10 dataset with the generator configured as in Appendix § A, we measure ˆΘ( ϑ ) with respect to 900 randomly generated arches as well as the model accuracy (ACC) on clean inputs and the attack success rate (ASR) on trigger-embedded inputs. Specifically, for each arch α , we first train the model f α to measure ACC and then train the trigger generator g with respect to f α on the same dataset to measure ASR, with results shown in Figure 2. 
Observe that the conditional number of ˆΘ( ϑ ) has a strong negative correlation with ASR, with a smaller value indicating higher attack vulnerability; meanwhile, it has a limited correlation with ACC, with most of the arches having ACC within the range from 80% to 95%.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "zwf3pzEK8m.baiUuz5EF.00", "parag_1": "[-Similarity − 0 .0 .0 .0 .0 .1 . \n P e r c en t-] ML-GCN Positive/Negative Link Scores PositiveLinks NegativeLinks Similarity We can see that while they behave similarly, the ML-GCN does a better job of ensuring that positive/negative links are well separated. These are computed on Amazon however, BGRL outperforms the ML-GCN (the strongest contrastive baseline) on 3/6 of the datasets and performs similarly on 1 other ( Cora ). It also outperforms GRACE across all of the datasets.", "parag_2": "[-Similarity − 0 .0 .0 .0 .0 .1 . \n P e r c en t-]We can see that while they behave similarly, the ML-GCN does a better job of ensuring that positive/negative links are well separated. These scores are computed on Amazon-Photos . perform poorly relative to the other methods. This is intuitive, as neither method was designed for link prediction and were only evaluated for node classification in their respective papers. Surprisingly, however, BGRL outperforms the ML-GCN (the strongest contrastive baseline) on 3/6 of the datasets and performs similarly on 1 other ( Cora ). It also outperforms GRACE across all of the datasets.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hAi0PMz9T7.Ut8ESfYp1.03", "parag_1": "Interpretability – a universal boon for ML? 
In the PCC domain, the model interpretability is linked to the wealth of domain knowledges. By distilling a blackbox neural network into a white boxsymbolic rule, the congestion rule is made easier for the networkcongestion practitioners to locatethe bug and modify/improve manually.", "parag_2": "Interpretability – a universal boon for ML? In the PCC domain, the model interpretability is linked to the wealth of domain knowledge. By distilling a black-box neural network into white-box symbolic rules, the resulting rules are easier for the network practitioners to digest and improve.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the last sentence and make it easier to understand.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the second part of the last sentence.", "annotator": "annotator_07"}} {"id_paragraph": "SJhg8CFRm.BkkucopBV.00", "parag_1": "Taken together, these results have several implications. First, we find that a highly expressive nonlinear decoder does not yield any increase in decoding accuracy, even as we scale up in MDFA complexity. From this, and the fact that we did extensive hyperparameter search in training decoders, we can conclude that the decoder models we chose are expressive enough for the decoding task. Second, we find that decoding accuracy for MDFA states is in general not very high. These two observations suggest the need for a different interpretation of the internal representation of the trained RNN.", "parag_2": "Taken together, these results have several implications. First, we find that a highly expressive nonlinear decoder does not yield any increase in decoding accuracy, even as we scale up in MDFA complexity. We can conclude from this finding and our extensive hyperparameter search for each decoder model that the decoder models we chose are expressive enough for the decoding task. 
Second, we find that decoding accuracy for MDFA states is in general not very high. These two observations suggest linear decoders are sufficient for the decoding task, but also suggests the need for a different interpretation of the internal representation of the trained RNN.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "wSf7BpyxTb.ZCPjX5OcL.00", "parag_1": "The existing work based on iPPM framework either require compactness, e.g., [39], or some special structure on L , e.g., [35]. This is also true for VR-based methods, e.g.,[15, 26, 37]. To our knowledge, ours is the first one to overcome this difficulty and strictly improve the complexity bounds for WCSC and WCMC settings without compactness assumption; moreover, the same idea also workssimultaneously with a variance reduction technique that will be discussed later (see section 3). Finally,same trick for removing compactness assumption for WCSC setting also helps removing compactnessassumption for the primal domain in WCMC setting as well (see section 4).", "parag_2": "The existing work based on iPPM framework either require compactness, e.g., [41], or some special structure on L , e.g., [37]. This is also true for VR-based methods, e.g.,[16, 27, 39]. To our knowledge, ours is the first one to overcome this difficulty and strictly improve the best known complexity bound for the WCSC setting without compactness assumption; moreover, the same idea also works simultaneously with a variance reduction technique that will be discussed later (see section 4). 
Finally, the same trick for removing compactness assumption for the WCSC setting also helps removing the compactness assumption for the primal domain in WCMC setting and we still improve the best known complexity for this setting as well (see section 5).", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "ByZyHzZC-.HktKf7-AW.03", "parag_1": "It has been observed that a cyclical learning rate schedule leads to better generalization (Smith, 2015). In Sec. 4.2 we demonstrated that one can exchange cyclic learning rate schedule (CLR) with batch size (CBS) and approximately preserve the practical benefit of CLR. This inspired us to hypothesize that CLR is related to switching between noise levels ( ηS ), switching between sharp/deep and wide/shallow minima. To validate that, we run VGG-11 on CIFAR10 using 4 training schedules: CLR with stepsize of 4 and 15 epochs in each stage, CBS with stepsize 4 and 15 epochs in each stage. Each run is repeated 8 times and for each variant we track evolution of sharpness and accuracy. We observe that CBS and CLR with longer stages lead to 89 . 9 ± 0 . 20% and 90 . 2 ± 0 . 10% test accuracy in both cases 0 . 2 − 0 . 3% above variance with shorter range. CBS and CLR with long stages times seem to be promising schedules for training DNNs. Finally, we validate that CBS and CLR switch between sharp/deep and wide/shallow minima, suggesting that CLR improves mixing time. Plot included in appendix.", "parag_2": "It has been observed that a cyclic learning rate (CLR) schedule leads to better generalization (Smith, 2015). In Sec. 4.2 we demonstrated that one can exchange cyclic learning rate schedule (CLR) with batch size (CBS) and approximately preserve the practical benefit of CLR. This inspired us to hypothesize that by changing between controllable noise levels ( ηS ) CLR switches between sharp/deep and wide/shallow minima. 
To validate that, we run VGG-11 on CIFAR10 using 4 training schedules: CLR with stepsize of 4 and 15 epochs in each stage, CBS with stepsize 4 and 15 epochs in each stage. Each run is repeated 8 times and for each variant we track evolution of sharpness and accuracy. We observe that CBS and CLR with longer stages lead to 89 . 9 ± 0 . 20% and 90 . 2 ± 0 . 10% test accuracy, in both cases 0 . 2 − 0 . 3% above variance with shorter range. CBS and CLR with long stepsize each seem to be promising schedules for training DNNs. Finally, we validate that CBS and CLR switch between sharp/deep and wide/shallow minima, suggesting that CLR improves convergence time to stationary distribution, as seen in Fig. 13 of Appendix G.4.", "annot_1": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_light", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "NvI7ejSHFe.ppieLd2M4a.01", "parag_1": "In this section, we investigate the influence of activation functions in PINNs for solving PDE/ODE systems. We evaluate and compare the effectiveness of several common activation functions on some simple problems with analytical solutions. The results show that the choice of activation functions is crucial for PINNs and depends on the problem. Motivated by this observation, we propose to learn specialized activation function for different PDE systems.", "parag_2": "In this section, we first investigate the influence of activation functions in PINNs for solving simple ODE systems with analytical solutions. The results show that the choice of activation functions is crucial for PINNs and depends on the problem. 
Motivated by this observation, we propose to learn specialized activation functions for different PDE systems.", "annot_1": {"annotation": ["Concision"], "instruction": "Remove the second sentence", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision"], "instruction": "Remove unnecessary details to make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "ryESgXktV.BJ4dKdWmr.03", "parag_1": "The above example demonstrates the importance of providing parts of an explanation in an online fashion. Mark gradually reveals the reasoning to maintain his plan as the execution unfolds so that it also becomes acceptable (and understandable) to Emma. The key point here is to explain minimally and only when necessary, as long as the next action becomes understandable. In this way, the information to be conveyed is spread out into the future so that there is less cognitive requirement at the current step–from Emma’s perspective, the interaction with Mark is simpler and requires less thought.", "parag_2": "The above example demonstrates the importance of providing an explanation in an online fashion. Mark gradually reveals the reasoning to maintain his plan as the execution unfolds so that it also becomes both acceptable and understandable to Emma, even though being subject to different values due to model differences (e.g., Mark values lunch break more than Emma thinks he does). The key point here is to explain minimally and only when necessary. 
In this way, the information to be conveyed is spread out throughout the plan execution, potentially with even a reduced amount of information, so that there is less mental workload requirement at the current step–from Emma’s perspective, the interaction with Mark is more straightforward.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "rJRSWbYPS.SyEoostiB.00", "parag_1": "Implementation Details For all the experiments, we use the same architecture for the meta attacker A , which consists of four convolutional layers and four deconvolutional layers. We use Reptile (Nichol et al., 2018) with 0 . 01 learning rate to train meta attacker. Fine-tuning parameters are set as m = 5 for MNIST and CIFAR10, and m = 3 for tiny-Imagenet. Top q = 128 coordinates are selected as part coordinates for attacker fine-tuning and model attacking on MNIST; and q =on CIFAR10 and tiny-Imagenet.", "parag_2": "Meta-training Details For all the experiments , we use the same architecture for the meta attacker A as shown in Table 6. We use Reptile (Nichol et al., 2018) with 0 . 01 learning rate to train meta attackers. We use 10000 randomly selected images from the training set to train the meta-attackers in three datasets. The proportion of the selected images to the whole training set are 16%, 20%, and 10% respectively. Fine-tuning parameters are set as m = 5 for MNIST and CIFAR10, and m =for tiny-Imagenet. 
Top q = 128 coordinates are selected as part coordinates for attacker fine-tuning and model attacking on MNIST; and q = 500 on CIFAR10 and tiny-Imagenet.", "annot_1": {"annotation": ["Content_substitution", "Content_addition"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "H42kh6pA-9.I_7n_H-nw.00", "parag_1": "We give some intuition as to why parameter-efficient methods seem to be more effective for private fine-tuning. For simplicity, we assume that the fine-tuning method is additive as in (3), such that the fine-tuned weights W FT = W PT + π ( θ ) . We can imagine that W FT lies on a manifold passing through W PT of very small dimension (equal to the dimension of θ ) compared to the dimension of W PT .", "parag_2": "We give some intuition as to why parameter-efficient methods can to be more effective for private fine-tuning, especially on smaller datasets. For simplicity, we assume that the fine-tuning method is additive as in (3), such that the fine-tuned weights W FT = W PT + π ( θ ) . We can imagine that W FT lies on a manifold passing through W PT of very small dimension (equal to the dimension of θ ) compared to the dimension of W PT .", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.17", "parag_1": "Regularization is known to impact the learning speed when training a deep neural net with SGD. Here, we investigate the effect of regularization on multi-modal learning. 
We demonstrate that, aspredicted in the last conjecture 3.2, strong regularization encourages the multi-modal learning process to be greedy.", "parag_2": "We investigate L1 regularization’s impact on multi-modal DNNs and demonstrate that, as the second conjecture in §3.2 says, strong regularization encourages greediness in multi-modal learning.", "annot_1": {"annotation": ["Concision", "Development"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Concision"], "instruction": "Exclude unnecessary details.", "annotator": "annotator_08"}} {"id_paragraph": "GMTWHrfodB.Ej59bqE_5P.00", "parag_1": "Jaccard index for segmentation. During perturbation generation, we set (cid:15) as 0.01 and stop at the 10-th iteration for all the attack methods. The α in Eq. 4 is set as 1. We provide more results including using different Gaussian kernels W and show the pseudo code in the supplementary files.", "parag_2": "Jaccard index for segmentation. During perturbation generation, we set (cid:15) as 0 . 05 , 0 . 01 , 5 × 10 −for lung segmentation, artefact detection and diabetic retinopathy grading respectively. We stop at the 10-th iteration for all the attack methods. The α in Eq. 4 is set as 1. We provide more results including using different Gaussian kernels W and show the pseudo code in the supplementary files.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "TFoRhVCpnb.yqo5NaW74.01", "parag_1": "MS COCO 2014 dataset: Table 2 illustrates the segmentation performance on MS COCO 2014 compared with other methods. Our method achieves 42.6% in terms of the mIoU values on validation set, surpassing 1.2% over the IRN [1], regarded as our baseline and outperforming the other recentcompetitive methods [8, 56, 49, 1] by a large margin. In particular, we reproduce different resultsof IRN [1] with CONTA [56], in which we achieve 41.4% mIoU values. 
Hence, we compare therelative improvements for comparison: CONTA reaches a 0.8% mIoU improvement compared with IRN (32.6 to 33.4), while our method achieves 1.2% mIoU improvement (41.4 to 42.6).", "parag_2": "MS COCO 2014 dataset: Table 2 illustrates the segmentation performance on MS COCO 2014 compared with other methods. Our method achieves 42.6% in terms of the mIoU values on validation set, surpassing 1.2% over the IRN [1], also regarded as our baseline and outperforming the other recent competitive methods [1, 9, 52, 61] by a large margin. We further compare the relative improvements for comparison: CONTA reaches a 0.8% mIoU improvement compared with IRN (32.6 to 33.4), while our method achieves 1.2% mIoU improvement (41.4% to 42.6%).", "annot_1": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove unnecessary details and make my numbers clear.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Concision"], "instruction": "Remove the sentence about reproduction", "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.18", "parag_1": "S UMMARY We propose a novel universal weakly-supervised semantic segmentation method via Semisupervised Pixel-wise Metric Learning. Four common types of pixel-to-segment attraction and repulsion relationships can be derived, whether the partial annotation is coarse image tags and bounding boxes, or sparse keypoints and scribbles. Our results on PASCAL VOC and DensePose show consistent and substantial gains over SOTA, especially for the sparsest keypoint supervision. ", "parag_2": "Summary. We propose a novel weakly-supervised semantic segmentation method via Semisupervised Pixel-wise Metric Learning, based on four common types of pixel-to-segment attraction and repulsion relationships. It is universally applicable to various weak supervision settings, whether the training images are coarsely annotated by image tags or bounding boxes, or sparsely annotated by keypoints or scribbles. 
Our results on PASCAL VOC and DensePose show consistent and substantial gains over SOTA, especially for the sparsest keypoint supervision. Acknowledgements.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite this paragraph to improve readability and make contributions more evident.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the logical flow of ideas in this text.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.09", "parag_1": "We started the design process with an ideation session with eight researchers with background in human-computer interaction and visualization. The goal of the session was to explore layout variations and features that a calendar that integrates medication prescriptions should have. We asked the researchers to sketch calendars that show all the data (D1 - D5) and satisfy the given requirements. The ideation session lasted 30 minutes. We identified several design dimensions and their variations from the different sketches that were produced: the layout of the calendar (linear or cyclic), the positioning of the days and times of the day (on the left or at the top), the shape of drug entries (rectangular, cylindrical, or circular), and the orientation of the calendar (vertical or horizontal). Various sizes, colors, and shapes were also used in the designs. Connecting lines were predominantly used to denote the presence (and absence) of conflicts. We created three designs ( Design A , Design B , Design C ) by con- sidering i) variations according to these design dimensions, ii) the constraint of compatibility with already existing calendars, and iii) the intention to remain as close as possible to the design of regular medication schedules. The three designs cover a range of design variations regarding layout, representation of medication entries, and representation of conflicts. 
This will allow us to assess the usefulness of design variations at a component level, rather than at an overall design level. Next, we structure the presentation of these three design variations according to the usability requirements.", "parag_2": "We started the design process with an ideation session involving eight researchers with background in human-computer interaction and visualization. The goal of the session was to explore layout variations and features that a calendar with integrated medication prescriptions should have. We asked the researchers to sketch calen- dars that show all the data (D1 - D5) and satisfy the given requirements. One ideation session was held for this task and it lasted for 30 minutes. Participants used pens, colored markers, pencils, and regular printing paper for their designs. We identified several design dimensions and their variations from the different sketches that were produced: the layout of the calendar (linear or cyclic), the positioning of the days and times of the day (on the left or at the top), the shape of drug entries (rectangular, cylindrical, or circular), and the orientation of the calendar (vertical or horizontal). Various sizes, colors, and shapes were also used in the designs. Connecting lines were predominantly used to denote the presence (and absence) of conflicts. We created three designs ( Design A , Design B , Design C ) by considering i) variations according to these design dimensions, ii) the constraint of compatibility with already existing calendars, and iii) the intention to remain as close as possible to the design of regular medication schedules. The three designs cover a range of design variations regarding layout, representation of medication entries, and representation of conflicts. This allowed us to assess the usefulness of design variations at a component level, rather than at an overall design level. 
The following subsections discuss the resulting three design variations according to our usability requirements.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "Z3g7qGrao.v3MXAzVXjk.00", "parag_1": "We choose for M a small constant number, since L D , [ X D , X M ] is a lower bound for the log marginal likelihood log p ( D ) for M > 1 , and we set r to a number of particle functions that can represent the posterior SP reasonably well. Thus, we are interested in estimating (cid:101) ∇ F L D , X ( Q [ T ] ) (cid:12)(cid:12) F =0 with mini-batches. In principle, an unbiased estimate of ℓ ( D , f iX D ) is n/s · ℓ ( D s , f i (cid:101) X ) , which suggests that λ = s/n. Although (in general) L D s , X is not a lower bound of log p ( D ) , we found in a practice setting that λ to s/n still results in reasonable performance. However, our theoretical framework gives the reassuring guarantee that if we use full-batch training, we would, in fact, maximize a lower bound of log p ( D ) . In the following, we present two algorithms, namely Stein functional variational NNs and Stein functional variational gradient boosting (A.3.1), based on the estimated Stein functional variational gradient – i.e., they depend on the score gradient of the functional prior evaluated at X . If there exists no analytical score gradient, we can use a score gradient estimator, as suggested in Sun et al. This only requires function samples of the prior process evaluated at X , but estimating the score gradient is usually computationally expensive (Zhou et al., 2020). 
Since our approach builds upon SVGD, there exists an additional approach in our framework based on a gradient-free SVGD (Han & Liu, 2018) that only requires the evaluation of the marginal densities of the prior process.", "parag_2": "We choose for M a small constant number, since L D , [ X D , X M ] is a lower bound for the log marginal likelihood log p ( D ) for M > 1 , and we set r to a number of particle functions that can represent the posterior SP reasonably well. Thus, we are interested in estimating (cid:101) ∇ F L D , X ( Q [ T ] ) (cid:12)(cid:12) F =0 with mini-batches. In principle, an unbiased estimate of ℓ ( D , f iX D ) is n/s · ℓ ( D s , f i (cid:101) X ) , which suggests that λ = s/n. Although (in general) L D s , X is not a lower bound of log p ( D ) , we found in a practice setting that λ to s/n still results in reasonable performance. In the following, we present two algorithms based on the estimated Stein functional variational gradient – i.e., they depend on the score gradient of the functional prior evaluated at X . If there exists no analytical score gradient, we can use a score gradient estimator, as suggested in Sun et al. This only requires function samples of the prior process evaluated at X , but estimating the score gradient is usually computationally expensive (Zhou et al., 2020). 
Since our approach builds upon SVGD, there exists an additional approach in our framework based on a gradient-free SVGD (Han & Liu, 2018) that only requires the evaluation of the marginal densities of the prior process.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove details about the theoretical framework and make this paragraph more concise.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Remove unnecessary content to make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.06", "parag_1": "Low-level image similarity : Positive and negative segments can be generated by selecting pixel i ’s own segment or all the other segments, respectively. Semantic annotation : Pseudo-labeled segments can be generated bytheir majority pixels’ annotations. Here, the annotations can be dense masks, scribbles, points, or the localization cues generated by Class Activation Maps (Zhou et al., 2016). Semantic co-occurrence : Positive (negative) segments can be generated by using other images that include (exclude) the same category asin pixel i ’s image. Feature affinity : Pseudo-labeled segments can be generated by propagating labels from known pixels to their nearest neighbor segments in the feature space, assuming that semantic clusters emerge during training and their territories are thus expanded.", "parag_2": "Low-level image similarity : We impose a spatial smoothness prior on the pixel-wise feature to keep pixels together in visually coherent regions. The segment pixel i belongs to based on low-level image cues is a positive segment to pixel i ; any other segments are negative ones. Semantic annotation : We expand the semantics from labeled points and scribbles to pseudolabels inferred from image- or box-wise CAM. The label of a segment can be estimated by majority vote among pixels; if it is the same as pixel i ’s, the segment is a positive segment to i . 
Semantic co-occurrence : We expand the semantics by assuming that pixels in similar semantic contexts tend to be grouped together. If a segment appears in an image that shares the same semantic classes as pixel i ’s image, it is a positive segment to i and otherwise a negative one. Feature affinity : We impose a featural smoothness prior assuming that pixels and segments of the same semantics form a cluster in the feature space. We propagate the semantics within and across images from pixel i to its closest segment s in the feature space.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "yisfWhlCl.VQ4udSyl4.00", "parag_1": "Split-CIFAR10 and Split-CIFAR100 . In Split-CIFAR10, we split CIFAR10 (Krizhevsky & Hinton, 2009) into five tasks in the same manner as Split-MNIST. For Split-CIFAR100, we buildtasks, each containing five classes according to the pre-defined superclasses in CIFAR100. The training sets of CIFAR10 and CIFAR100 consist of 50K examples each. To the best of our knowledge, we are first to report Split-CIFAR100 performance without using task information at test time. In Split-CIFAR100 experiments of all previous works (Rebuffi et al., 2017; Zenke et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019c; Chaudhry et al., 2019a) a distinct output head is used for each task, and the task information to select the corresponding output head is given at both training and test time. Knowing the right output head, however, the task reduces to 5-way classification. Therefore, our setting is far more difficult than the prior works since the model has to perform 100-way classification only from the given input.", "parag_2": "Split-CIFAR10 and Split-CIFAR100 . In Split-CIFAR10, we split CIFAR10 (Krizhevsky & Hinton, 2009) into five tasks in the same manner as Split-MNIST. 
For Split-CIFAR100, we buildtasks, each containing five classes according to the pre-defined superclasses in CIFAR100. The training sets of CIFAR10 and CIFAR100 consist of 50K examples each. Note that most of the previous works (Rebuffi et al., 2017; Zenke et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019c; Chaudhry et al., 2019a), except Maltoni & Lomonaco (2019), use task information at test time in Split-CIFAR100 experiments. They assign distinct output heads for each task and utilize the task identity to choose the responsible output head at both training and test time. Knowing the right output head, however, the task reduces to 5-way classification. Therefore, our setting is far more difficult than the prior works since the model has to perform 100-way classification only from the given input.", "annot_1": {"annotation": ["Content_substitution"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_medium", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "WlT4trVlEC.BCHxqci7k.00", "parag_1": "We also optimized LeNet-5, which is trained with the MNIST dataset. The optimization goal is togenerate an image that makes the LeNet-5 predict a target number with a maximum score (maximum [s = 0.999] 1 : [s = 0.959] 2 : [s = 0.999] 3 : [s = 0.999] 4 : [s = 0.999] prediction score = 1 . 0 ). The LeNet-5 is regarded as a non-differentiable black-box. After 50 , 000function calls, the final scores of generated images reach very close to the maximum score (Figure 6).", "parag_2": "We also optimized LeNet-5, which is trained with the MNIST dataset. The optimization goal is togenerate an image that makes the LeNet-5 predict a target number with a maximum score (maximumprediction score = 1 . 0 ). The LeNet-5 is regarded as a non-differentiable black-box. 
After 50 , 000function calls, the final scores of generated images reach very close to the maximum score (Figure 6).", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "SyF8k7bCW.HytIRPamf.02", "parag_1": "At every time step in training the RNN decoder, the probability of the next word is computed based on the left-context embedding, ground-truth current word embedding, and sentence representation. The existence of the ground-truth current word embedding potentially decreases the tendency for the decoder toexploit other information from the sentence representation.", "parag_2": "Finding II: The model with an autoregressive decoder works roughly the same as the model with a predict-all-words decoder. With Finding I, we noticed that the correct ground-truth input words to the autoregressive decoder is not necessary in terms of learning sentence representations.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "I want to modify my paragraph.", "annotator": "annotator_09"}} {"id_paragraph": "H42kh6pA-9.I_7n_H-nw.01", "parag_1": "Li et al. (2022) show the performance of fine-tuning the full model can be significantly improved with proper configuration. In this section, we re-evaluate the tasks in Table 3 and 4 under the configuration in Li et al. and show such a configuration also improves the performance of our methods.", "parag_2": "Li et al. (2022) show the performance of fine-tuning the full model can be significantly improved with proper configuration. In this section, we re-evaluate the tasks in Table 3 and 4 under the configuration in Li et al. 
and show such a configuration also improves the performance of our methods.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "X0Ili3hbB9.pHlFDwY24q.01", "parag_1": "To verify basic operations of our proposed method, we perform post-training weighted quantization using pre-trained models of BERT-base (Devlin et al., 2018) with a GLUE benchmark (Wang et al., 2018) on MNLI and MRPC dataset. In the case of BERT models, we quantize all weights except those of a segment embedding layer and a classification layer which show a tiny storage footprint. For conventional or weighted Alternating quantization methods, we conduct iterative refinements of α and B values 20 times over which no further noticeable quantization error improvement is recognized. Given a weight matrix or tensor, α and B are computed for each row, independently. Due to the space limit, see Appendix for additional experimental results with various models not included in this section.", "parag_2": "To verify basic operations of our proposed method, we perform post-training weighted quantization using fine-tuned models of BERT-base (Devlin et al., 2018) on MNLI and MRPC dataset within a GLUE benchmark (Wang et al., 2018). In the case of fine-tuned BERT models, we quantize all weights except those of a segment embedding layer and a classification layer which show a tiny storage footprint. For conventional or weighted Alternating quantization methods, we conduct iterative refinements of α and B values 20 times over which no further noticeable quantization error improvement is recognized. Given a weight matrix or tensor, α and B are computed for each row, independently (hence, we study row-wise quantization in this work). 
Due to the space limit, see Appendix for additional experimental results with various models not included in this section.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_01"}, "annot_2": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.15", "parag_1": "Matching methods, e . g ., PSM exhibit promising performance on ranking metric, which explains why they are favored by counterfactual ranking applications (Betlei et al., 2021) in practice. However, their poor performance on PEHE hinders their application in counterfactual estimation applications such as advertising systems where accuracy metrics are more critical.", "parag_2": "Matching methods, e . g ., PSM exhibit compelling ranking performance, which explains why they are favored in counterfactual ranking practice (Betlei et al., 2021). However, their poor performance on PEHE hinders their application in counterfactual estimation applications such as advertising systems that place more emphasis on the accuracy of treatment effect estimation.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make first half concise and second half precise.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the writing in this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "SRquLaHRM4.vI2x5N-YHC.02", "parag_1": "ImageNet dataset and can further ensemble them to obtain 60 . 38 % top 1 accuracy. In this section,we replace the cosine distance between the global visual feature and prompt ensemble with the OT distance between the feature map and all 7 prompts. However, without any learning, the OTdistance only obtains 58 . 78 % accuracy. It is a limitation of the PLOT to still need few-shot datafor optimization, which cannot be directly applied in the zero-shot setting. 
We argue there are two reasons why the OT distance does not work without learning: 1) prompt engineering selects prompts based on the global feature and cosine distance, instead of OT distance with feature map; 2) all these selected prompts are closed to the global feature and lack the complementarity.", "parag_2": "ImageNet dataset and can further ensemble them to obtain 60 . 38 % top 1 accuracy. In this section,we replace the cosine distance between the global visual feature and prompt ensemble with the OT distance between the feature map and all 7 prompts. However, without any learning, the OT distanceonly obtains 58 . 78 % accuracy. We argue there are two reasons why the OT distance does not work without learning: 1) prompt engineering selects prompts based on the global feature and cosine distance, instead of OT distance with feature map; 2) all these selected prompts are closed to the global feature and lack the complementarity.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove any information that is not essential to the main points of the paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "I do not want to mention a limitation.", "annotator": "annotator_09"}} {"id_paragraph": "CY59sJuayM.KvjXsfaNO4.00", "parag_1": "Highlight and free-text explanations are the most prominent explanation types for teaching NLP models (Wiegreffe and Marasovic, 2021). Highlight explanations ( HIGHLIGHT ) are subsets of input elements that are deemed relevant for a prediction. For text-based NLP tasks, they correspond to sets of words, phrases or sentences. Free-text explanations ( FREE - TEXT ) are texts in natural language that are not constrained to be grounded in the input elements. Some recent works rely on semistructured text explanations ( SEMI - STRUCTURED ) (Wiegreffe and Marasovic, 2021), which combine properties of both highlight and free-text explanations. 
They consist of text in natural language and contain an explicit indication of the input elements that the free-text applies to. If and how much a model can learn from such explanations depends on the amount of information contained in the explanation (§ 2.2), and to what extent this information can be integrated into the learning process (§ 2.1). User satisfaction is affected by the effort required to produce explanations and by the difficulty of the task, that might in turn affect explanation quality (§ 2.3). In the following, we discuss these factors in detail and where possible contrast them with respect to explanation type.", "parag_2": "Highlight and free-text explanations are the most prominent explanation types used to improve NLP models (Wiegreffe and Marasovic, 2021). Highlight explanations ( HIGHLIGHT ) are subsets of input elements that are deemed relevant for a prediction. For text-based NLP tasks, they correspond to sets of words, phrases or sentences. Free-text explanations ( FREE - TEXT ) are texts in natural language that are not constrained to be grounded in the input elements and contain implicit or explicit information about why an instance is assigned a specific label. Some recent works rely on semistructured text explanations ( SEMI - STRUCTURED ) (Wiegreffe and Marasovic, 2021), which combine properties of both highlight and free-text explanations. They consist of text in natural language and contain an explicit indication of the input elements that the free-text applies to. If and how much a model can be improved based on such explanations depends on the amount of information contained in the explanation (§ 2.2), and to what extent this information can be integrated into the learning process (§ 2.1). User satisfaction is affected by the effort required to produce explanations and by the difficulty of the task, that might in turn affect explanation quality (§ 2.3). 
In the following, we discuss these factors in detail and where possible contrast them with respect to explanation type.", "annot_1": {"annotation": ["Development", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "jzQGmT-R1q.ugUt9B3XaO.04", "parag_1": "Intriguingly, over the course of training we do not see a consistent downward trend across all games and update objectives, suggesting that while effective dimension is indeed effective at identifying representation collapse it does not perfectly correlate with network capacity.", "parag_2": "Unlike in target-fitting capacity, over the course of training we do not see a consistent downward trend across all games and update objectives, suggesting that while effective dimension is indeed effective at identifying representation collapse, it is measuring a subtly different notion of capacity than simply an agent’s ability to fit new target functions.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "kBsx5htyKn.qV5njV8W5.03", "parag_1": "Fig 2 plots this proportion for different datasets and models. Each bar is an averageover three runs,injecting in each run 10% random data points from – respectively – one of the three other datasets. The biggest influence on how much uncertainty sampling selects outliers is the dataset, with the biggest proportion for newsgroup , where 80% of the selected data points are outliers on average (for", "parag_2": "Fig 2 plots this proportion for different datasets and models. Each bar is the average of the three runs. 
The biggest influence on how much uncertainty sampling selects outliers is the dataset, with the biggest proportion for newsgroup , where 80% of the selected data points are outliers on average (for", "annot_1": {"annotation": ["Concision"], "instruction": "Shorten this paragraph by removing details about the figure.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Remove unnecessary details to make this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "jd7eZJSVj.YFwsqqBl_J.00", "parag_1": "We present the training curves of Algorithm 1 on AKI dataset in Figure 3 and 4. • Policy improvement during RL training. In Figure 3, we showed the performance of the RL policies evaluated on train, validation and test sets. We note that the three curves closely match one another, confirming the generalizability of the learned dynamic classification policy. • SM-DDPO learns the disease model. In end-to-end training, the diagnostic classifier is trained from the scratch. It maps any partially-observed patient state to a diagnosis/prediction. We evaluate this classifier on static data distributions, in order to eliminate the effect of dynamic test selection and focus on classification quality. Figure 4 shows that the classifier learns to make the right prediction with improved quality during RL, via training only on data selected by the RL algorithm.", "parag_2": "We present the training curves on AKI dataset in Figure 3. We refer more results to Appendix B. • SM-DDPO learns the disease model. In end-to-end training, the diagnostic classifier is trained from the scratch. It maps any partially-observed patient state to a diagnosis/prediction. We evaluate this classifier on static data distributions, in order to eliminate the effect of dynamic test selection and focus on classification quality. 
Figure 3 shows that the classifier learns to make the high-quality prediction with improved quality during RL, via training only on data selected by the RL algorithm.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Move the less important results to an appendix.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Replace less important results by a reference to Appendix B. Revise this paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "DS70t5j1I_.wzVcx2T0D.00", "parag_1": "VT. Furthermore, Kahn et al. (2018) design a self-supervised approach to model environments by reinforcement learning. A Bayesian relational memory is introduced by Wu et al. (2019) to explore the spatial layout among rooms rather than steering an agent to desired objects with least steps. Meanwhile, Shen et al. (2019) employ multiple visual representations to generate multiple actions and then fuse those actions to produce an effective one. However, requesting such a large number of visual representations may restrict the transferring ability of a navigation system and increases the difficulty of data labeling. Note that Fang et al. (2019) propose a transformer to select the embedded scene memory slot, while our VT is designed to learn expressive visual representations correlated with directional signals.", "parag_2": "VT. Furthermore, Kahn et al. (2018) design a self-supervised approach to model environments by reinforcement learning. Tang et al. (2021) customize a specialized network for visual navigation via an Auto-Navigator. A Bayesian relational memory is introduced by Wu et al. (2019) to explore the spatial layout among rooms rather than steering an agent to desired objects with least steps. Meanwhile, Shen et al. (2019) employ multiple visual representations to generate multiple actions and then fuse those actions to produce an effective one. 
However, requesting such a large number of visual representations may restrict the transferring ability of a navigation system and increases the difficulty of data labeling. Note that Fang et al. (2019) propose a transformer to select the embedded scene memory slot, while our VT is designed to learn expressive visual representations correlated with directional signals.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_08"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}} {"id_paragraph": "X50LVGSli.jqJzurpUu.00", "parag_1": "Moreover, recently, Angelini & Ricci-Tersenghi (2022) have shown that the learning-based method in (Schuetz et al., 2022) could not achieve comparable results with the degree-based greedy algorithm (DGA) (Angelini & Ricci-Tersenghi, 2019) in the max independent set (MIS) problem on large-scaled random-regular graphs (RRGs), which raises attentions from machine learning community. We observe the issues come from two aspects: (1) graph neural networks (GNNs) used to encode the regular graph suffer from the node ambiguity issue due to their limited expressive power ( ? ); (2) the model in (Schuetz et al., 2022) was not trained and learned from history. By addressing these two issues, Meta-EGN can consistently outperform DGA while maintaining the same time complexity to generate solutions. Fig. 1 show the results.", "parag_2": "Moreover, recently, Angelini & Ricci-Tersenghi (2022) have shown that the learning-based method in (Schuetz et al., 2022) could not achieve comparable results with the degree-based greedy algorithm (DGA) (Angelini & Ricci-Tersenghi, 2019) in the max independent set (MIS) problem on large-scaled random-regular graphs (RRGs), which raises attentions from machine learning community. 
We observe the issues come from two aspects: (1) graph neural networks (GNNs) used to encode the regular graph suffer from the node ambiguity issue due to their limited expressive power (Xu et al., 2019); (2) the model in (Schuetz et al., 2022) did not learn from history but was directly optimized over each testing case, which tends to be trapped into a local optimum. By addressing these two issues, Meta-EGN can consistently outperform DGA while maintaining the same time complexity to generate solutions. Fig. 1 show the results.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "atxti8SVk.3K9AmPwALM.05", "parag_1": "Under the supervised setting, we can define the positive and negative sets as the same- and differentcategory pixels with respect to pixel i , which are denoted as C + and C − . However, this idea is not applicable to weakly- or un-supervised setting, when the label at x i is unknown. Particularly for weakly-supervised segmentation, we only have sparsely labeled pixels in the image, resulting in much smaller sets of C + and C − and degraded learning efficiency.", "parag_2": "In the fully supervised setting, we can define pixel i ’s positive and negative sets, denoted by C + and C − respectively, as pixels in the same (different) category. However, this idea is not applicable to weakly- or un-supervised settings where the label is not available on every pixel. 
In the labeled points setting, C + and C − would only contain a few exemplars according to the sparse pixel labels.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph heavily more concise, keeping the main ideas.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy", "Concision"], "instruction": "Concise and improve this explanation to make it easier to understand.", "annotator": "annotator_07"}} {"id_paragraph": "usz0l2mwO.5ie3V0GP-.01", "parag_1": "NLP (Cherry et al., 2019). Much of the information in such sentence embeddings is irrelevant to the target task, and it can be difficult to distinguish relevant from irrelevant information when fine-tuning the languagemodels with a large number of parameters on a small amount of target task data, resulting in over-fitting. For many real-world applications, it can be difficult and expensive tosolve this problem by collecting sufficient annotated data for these large neural models to excel.", "parag_2": "If the amount of target task data is small, it can be hard for fine-tuning to distinguish relevant from irrelevant information, leading to overfitting on statistically spurious correlations between the irrelevant information and target labels. Learning low-resource tasks is an important topic in NLP (Cherry et al., 2019) because annotating more data can be very costly and time-consuming, and because in several tasks access to data is limited.", "annot_1": {"annotation": ["Rewriting_heavy"], "instruction": "Improve the readability of the text. 
Use more concise and straight-forward ideas.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "Rewrite this paragraph to better fit the academic writing style.", "annotator": "annotator_07"}} {"id_paragraph": "S1-LZxvKX.rJ009I8RX.01", "parag_1": "Ours is the first systematic method able to train sparse models directly without an increased parameter footprint during the entire course of training, and still achieve performance on par with post-training compression of dense models, the best result at a given sparsity.We described the first dynamic reparameterization method for training convolutional networks. We showed that ourdynamic sparse reparameterization significantly outperformed static ones. Our method not only outperformed existing dynamic sparse reparameterization techniques, but also incurred much lower computational costs.", "parag_2": "We showed that it is possible to train a small sparse network directly without a larger than inference-time parameter footprint at any stages of training, yet still achieving generalization performance at least on par with post-training iterative pruning of large dense models, yielding the most parameter efficient model at a given sparsity. We showed that our method is more scalable and efficient, and leads to significantly better accuracy than existing dynamic sparse reparameterization training techniques.", "annot_1": {"annotation": ["Concision"], "instruction": "Rewrite this paragraph, removing any redundant information for a more concise version.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Rewriting_heavy", "Concision"], "instruction": "Fully rewrite this paragraph in a more concise and direct way fitting the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "BkVj6Z-AW.SytnTZWCZ.00", "parag_1": "The synthesis of realistic human motion has recently seen increased interest Holden et al. Fragkiadaki et al. ; Jain et al. ; Bütepage et al. 
; Martinez et al. (2017) with applications beyond animation and video games. The simulation of human looking virtual agents is likely to become mainstream with the dramatic advancement of Artificial Intelligence and the democratization of Virtual Reality. A challenge for human motion synthesis is to automatically generate new variations of motions while preserving a certain style, e.g., generating large numbers of different Bollywood dances for hundreds of characters in an animated scene of an Indian party. Aided by the availability of large human-motion capture databases, many database-driven frameworks have been employed to this end, including motion graphs Kovar et al. ; Safonova & Hodgins (2007); Min & Chai (2012), as well as linear Safonova et al. ; Chai & Hodgins (2005); Tautges et al. and kernel methods Mukai (2011); Park et al. ; Levine et al. (2012); Grochow et al. ; Moeslund et al. ; Wang et al. (2008), which blend key-frame motions from a database. It is hard for these methods, however, to add new variations to existing motions in the database while keeping the style consistent. This is especially true for motions with a complex style such as dancing and martial arts. More recently, with the rapid development in deep learning, people have started to use neural networks to accomplish this task Holden et al. These works have shown promising results, demonstrating the ability of using high-level parameters (such as a walking-path) to synthesize locomotion tasks such as jumping, running, walking, balancing, etc. These networks do not generate new variations of complex motion, however, being instead limited to specific use cases.", "parag_2": "The synthesis of realistic human motion has recently seen increased interest (Holden et al., 2016; 2017; Fragkiadaki et al., 2015; Jain et al., 2016; Bütepage et al., 2017; Martinez et al., 2017) with applications beyond animation and video games. 
The simulation of human looking virtual agents is likely to become mainstream with the dramatic advancement of Artificial Intelligence and the democratization of Virtual Reality. A challenge for human motion synthesis is to automatically generate new variations of motions while preserving a certain style, e.g., generating large numbers of different Bollywood dances for hundreds of characters in an animated scene of an Indian party. Aided by the availability of large human-motion capture databases, many database-driven frameworks have been employed to this end, including motion graphs (Kovar et al., 2002; Safonova & Hodgins, 2007; Min & Chai, 2012), as well as linear (Safonova et al., 2004; Chai & Hodgins, 2005; Tautges et al., 2011) and kernel methods (Mukai, 2011; Park et al., 2002; Levine et al., 2012; Grochow et al., 2004; Moeslund et al., 2006; Wang et al., 2008), which blend key-frame motions from a database. It is hard for these methods, however, to add new variations to existing motions in the database while keeping the style consistent. This is especially true for motions with a complex style such as dancing and martial arts. More recently, with the rapid development in deep learning, people have started to use neural networks to accomplish this task (Holden et al., 2017; 2016; 2015). These works have shown promising results, demonstrating the ability of using high-level parameters (such as a walking-path) to synthesize locomotion tasks such as jumping, running, walking, balancing, etc. These networks do not generate new variations of complex motion, however, being instead limited to specific use cases.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "I-N2JgVIgy.BNpWofyXgi.00", "parag_1": "In this subsection, we conduct ablation study on the superiority of thelow-dimensional contrastive embedding ( i.e. 
, our method) over the traditional contrastive embedding ( i.e. , thebaseline method). We use the STL-10and CIFAR-10 datasets to train the baseline SimCLR [7] and two implementations of CLLR, i.e. , the ℓ 2 , 1 -normbased regularization and nuclear-normbased regularization. We train all models with 100 and 400 epochs with the same batch size and learning rate, respectively, and we record the test accuracy of all methods by finetuning a linear softmax . The baseline method learns contrastive embeddings in the high-dimensionalspace (dimension = 2048 , 3072 , and 4096 ) and the simply fixed low-dimensional space (dimension =256 and 512 ). We also include the baseline results that do not use the ℓ 2 , 1 -norm and nuclear normconstraints ( i.e. , α = 0 ). Our method learns embeddings in low-dimensional space, where we use theregularizer to maintain the corresponding non-zero columns in the projection matrix L .", "parag_2": "In this subsection, we conduct ablation study on the superiority of thelow-dimensional contrastive embedding ( i.e. , our method) over the traditional contrastive embedding ( i.e. , thebaseline method). We use the STL-10and CIFAR-10 datasets to train the baseline SimCLR [7] and two implementations of CLLR, i.e. , the (cid:96) 2 , 1 -norm basedregularization and nuclear-norm basedregularization. We train all models with 100 and 400 epochs with the same batch size and learning rate, respectively, and we record the test accuracy of all methods by fine-tuning a linear softmax . The baseline method learns contrastive embeddings in the high-dimensional space (where a commonsetting is 2048 -dimension) and the simply fixed low-dimensional space ( 256 -dimension and 512dimension). 
Our method learns embeddings in low-dimensional space, where we use the regularizerto maintain the corresponding non-zero columns in the projection matrix L .", "annot_1": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove details about the baseline results and improve the readability.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_deletion", "Rewriting_light"], "instruction": "Remove sentences that are unnecessary here. Simplify this text a bit.", "annotator": "annotator_07"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.18", "parag_1": "We compare the version of AGILE where the GAT only receives action features as input and no state. Thus, the decision-choice is still aware of other actions, but the learned relations are fixed, not dependent on the state. Figure 7 shows a drop in performance for Grid World and CREATE, where the relevant action relations change based on the state. However, there is no drop in RecSim because CPR task only requires knowing the most common category, which is independent of user state.", "parag_2": "We evaluate a version of AGILE where the GAT only receives action representations as input and no state. Thus, the action relations are inferred independently of the state. Figure 7 shows a drop in performance for Grid World and CREATE, where the relevant action relations change based on the state. 
However, this effect is less apparent on RecSim because CPR requires only knowing the most common category, independent of user state.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the second sentence of the paragraph and improve the English in the remainder", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make this paragraph more concise, keeping the main points of each sentence.", "annotator": "annotator_03"}} {"id_paragraph": "wSf7BpyxTb.ZCPjX5OcL.02", "parag_1": "Therefore, to optimize the performance of SREDA further, we tune q, m from a grid search over { 10 , 100 , 200 } . For methods without variance reduction, i.e., for SAPD, SMDA and PASGDA, wetune the batch size from { 10 , 100 , 200 } as well. For SAPD and SAPD-VR, we tune the momentum θ from { 0 . 8 , 0 . 85 , 0 . 9 } and let the inner iteration numbers N = ln(265)ln( 1 θ ) according to eq.", "parag_2": "Therefore, to optimize the performance of SREDA further, we tune q, m from a grid search over { 10 , 100 , 200 } . For methods without variance reduction, i.e., for SAPD+ , SMDA and PASGDA , we also use mini-batch to estimate the gradients and tune the batch size from { 10 , 100 , 200 } as well. For SAPD+ and SAPD+VR , we tune the momentum θ from { 0 . 8 , 0 . 85 , 0 . 9 } and the inner iteration number from N = { 10 , 50 , 100 } .", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "F3z0hchpGy.xeuzrNJiNW.01", "parag_1": "Vectors and scalars are not the only kind of geometric features that can be inputs and outputs of a GEM-CNN layer. In general, the coefficients of a geometric feature of C dimensions changes by a linear transformation ρ ( − g ) ∈ R C × C if the gauge is rotated by angle g . The map ρ : [0 , 2 π ) → R C × C is called the type of the geometric quantity and is formally known as a group representation of the planar rotation group SO(2) . 
From the theory of group representations, we know that any feature type can be composed from “irreducible representations” (irreps). For SO(2) , these are the one dimensional invariant scalar representation ρ 0 and for all n ∈ N > 0 , a two dimensional representation ρ n ,", "parag_2": "Vectors and scalars are not the only type of geometric features that can be inputs and outputs of a GEM-CNN layer. In general, the coefficients of a geometric feature of C dimensions changes by an invertible linear transformation ρ ( − g ) ∈ R C × C if the gauge is rotated by angle g . The map ρ : [0 , 2 π ) → R C × C is called the type of the geometric quantity and is formally known as a group representation of the planar rotation group SO(2) . Group representations have the property that ρ ( g + h ) = ρ ( g ) ρ ( h ) (they are group homomorphisms), which implies in particular that ρ (0) = and ρ ( − g ) = ρ ( g ) − 1 . For more background on group representation theory, we refer the reader to (Serre, 1977) and, specifically in the context of equivariant deep learning, to (Lang & Weiler, 2020). From the theory of group representations, we know that any feature type can be composed from “irreducible representations” (irreps). For SO(2) , these are the one dimensional invariant scalar representation ρ 0 and for all n ∈ N > 0 , a two dimensional representation ρ n ,", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "rSY5h1VyMd.2RqWouzq_W.00", "parag_1": "BERT in natural settings. They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses. This paper adopted psycholinguistic datasets to probe language models’ commonsense reasoning. Findings suggest that DistillBERT had some understanding of the (implied) intent that’s shared among most people. Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions. 
Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning.", "parag_2": "BERT in natural settings. They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses. This paper adopted psycholinguistic datasets to probe language models’ commonsense reasoning. Findings suggest that GPT-3’s performance was mostly at chance in the psycholinguistic tasks. We also showed that DistillBERT had some understanding of the (implied) intent that’s shared among most people. Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions. Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "hcpw8dPgCX.1UO51C7swt.00", "parag_1": "Hofbauer & Weibull (1996) showed that in a class of learning dynamics which includes replicator dynamics — the continuous-time variant of FTRL, all iteratively strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. (2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions.", "parag_2": "For this equivalence to hold, we need to allow dominance by mixed strategies, and correlated beliefs when there are more than two players. These conditions are met in the setting of this work. strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. 
(2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "nCTSF9BQJ.DGhBYSP_sR.19", "parag_1": "Results According to Table 1, our RDE-Network outperforms all the baselines. Notably,RDENetwork improves per-structure correlations by a large margin, which implies that it is significantly more reliable for practical applications. The advantage of RDE-Network over MIF-Network shows that representations obtained by fitting rotamer densities are more effective than those from masked inverse folding because protein binding is driven by atomic interactions which RDE captures well by modeling the conformation of sidechain atoms.", "parag_2": "Results According to Table 1, our RDE-Network outperforms all the baselines. Notably, it demonstrates a significant improvement in per-structure correlations, indicating its greater reliability for practical applications. The superior performance of RDE-Network over MIF-Network suggests that representations derived from fitting rotamer densities are more effective than those from masked inverse folding, as RDE captures atomic interactions well by modeling the conformation of sidechain atoms.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Paraphrase this paragraph using formal language", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Revise this paragraph in a more academic style.", "annotator": "annotator_07"}} {"id_paragraph": "7_CwM-IzWd.zcm6f5HDI.06", "parag_1": "In this section, we introduce the greedy learner hypothesis to explain challenges observed in training multi-modal DNNs. 
Before describing our hypothesis, we start by discussing some assumptions on the multi-modal data and preliminary observations made in the literature on multi-modal learning.", "parag_2": "In this section, we introduce the greedy learner hypothesis to explain challenges observed in training multi-modal DNNs. Before describing our hypothesis, we discuss some assumptions on the multi-modal data and preliminary observations made in the multi-modal learning literature.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Make expression concise.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02"}} {"id_paragraph": "oH-CV7Qprn.l7-CEr3ki.00", "parag_1": "Compared with ESTR in [17], (cid:15) -FALB in [15] and LowESTR in [ ], our algorithms are designedfor nonlinear reward framework. Compared with LowGLOC in [26], our algorithms achieve a betterregret bound, can work with varying action sets and are computationally feasible. For G-ESTT,we extend the GLM-UCB algorithms [11] via a regularization technique along with some noveltechniques. Our proposed G-ESTS is simple and could be easily implemented based on anystate-of-the-art generalized linear bandit algorithms. In particular, when we combine G-ESTS withsome efficient algorithms (e.g. SGD-TS [9]), the total time complexity after a warm-up stage scalesas O ( Tr ( d 1 + d 2 )) . We verify that G-ESTT and G-ESTS are the first two algorithms to attain the˜ O (( d 1 + d 2 ) r √ T ) optimal regret bound of low-rank matrix bandit problems up to logarithmic terms.", "parag_2": "Compared with ESTR in [17], (cid:15) -FALB in [15] and LowESTR in [26], our algorithms are proposed forthe nonlinear reward framework with arbitrary action matrices. Compared with LowGLOC in [26], our algorithms not only achieve a better regret bound in theory, but also are computationally feasiblein practice. 
For G-ESTT, we extend the GLM-UCB algorithms [11] via a novel regularizationtechnique. Our proposed G-ESTS is simple and could be easily implemented based on anystate-of-the-art generalized linear bandit algorithms. In particular, when we combine G-ESTS withsome efficient algorithms (e.g. SGD-TS [9]), the total time complexity after a warm-up stage scalesas O ( Tr ( d 1 + d 2 )) . We verify that G-ESTT and G-ESTS are the first two algorithms to attain the˜ O (( d 1 + d 2 ) r √ T ) optimal regret bound of low-rank matrix bandit problems up to logarithmic terms.", "annot_1": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rewrite the first part of the paragraph to make it more convincing.", "annotator": "annotator_07"}} {"id_paragraph": "u9NaukzyJ-.hh0KECXQLv.12", "parag_1": "A and Design B , and 7.0 seconds with Design C . To complete this task with Design A , participants had tovertically scroll to the end of the day to see all entries. Three participants (P3, P5, and P7) complained about this. For example, P3 said “I find it’s a lot of scrolling down. It would be helpful if there was a way to condense it or to make it possible to see the entire calendar available in terms of morning, afternoon, and evening.” , and P8 said “The time frames are a bit big. So it makes like I said, it really makes it scroll off that you can’t see it all in one consolidated view” . With Design B , participants were expected to use the daily medica- tion summaries provided at the top. Three participants (P1, P6, and P9) found the daily summaries helpful in performing this task. For example, P9 said “Yeah, I like the idea of having the first row on the calendar dedicated only for the medications that needs to be taken. I think it brings an overall idea [of] what should be taken during that day.” . 
Three participants (P3, P4, and P5) also complained about the lines demarcating days not being clear. For example, P4 said “I have a harder time differentiating the calendar component the days, because there’s not a strong border between the days of the week.” .", "parag_2": "Three participants (P3, P5, and P7) complained about the need to scroll to the end of the day with Design A . For example, P3 said “I find it’s a lot of scrolling down. It would be helpful if there was a way to condense it or to make it possible to see the entire calendar available in terms of morning, afternoon, and evening.” , and P8 said “The time frames are a bit big. So it makes like I said, it really makes it scroll off that you can’t see it all in one consolidated view” . With Design B , participants were expected to use the daily medication summaries provided at the top. Three participants (P1, P6, and P9) found the daily summaries helpful in performing this task. For example, P9 said “Yeah, I like the idea of having the first row on the calendar dedicated only for the medica- tions that needs to be taken. I think it brings an overall idea [of] what should be taken during that day.” . Three participants (P3, P4, and P5) also complained about the lines demarcating days not being clear. For example, P4 said “I have a harder time differentiating the calendar component the days, because there’s not a strong border between the days of the week.” .", "annot_1": {"annotation": ["Concision", "Content_deletion"], "instruction": "Remove unnecessary details for the paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "The first sentence is a bit unclear.", "annotator": "annotator_09"}} {"id_paragraph": "hAi0PMz9T7.Ut8ESfYp1.00", "parag_1": "However, these “black box\" policies lack interpretability, and reliability and, moreimportantly, cannot operate under the TCP datapath’s ultra-contingent latencyand computational constraints. 
This paper proposes a novel two-stage solutionto achieve the best of both worlds: first to train a deep RL agent, then distill its(over-)parameterized NN policy into white-box, light-weight rules in the formof symbolic expressions that are much easier to understand and to implementin constrained environments. At the core of our proposal is a novel symbolicbranching algorithm that allows the rule to be “context-aware” of various networkconditions, eventually converting the NN policy into a symbolic tree. The distilledsymbolic rules preserve and often improve performance over state-of-the-art NNpolicies while being faster and simpler than a standard neural network. We validatethe performance of our distilled symbolic rules on both simulation and emulationnetwork systems. Our code will be released upon acceptance.", "parag_2": "However, such “black-box” policies lack interpretability and reliability, and often, they need to operate outside the traditional TCP datapath due to the use of complex NNs. This paper proposes a novel two-stage solution to achieve the best of both worlds: first to train a deep RL agent, then distill its (over-)parameterized NN policy into white-box, light-weight rules in the form of symbolic expressions that are much easier to understand and to implement in constrained environments. At the core of our proposal is a novel symbolic branching algorithm that enables the rule to be aware of the context in terms of various network conditions, eventually converting the NN policy into a symbolic tree. The distilled symbolic rules preserve and often improve performance over state-of-the-art NN policies while being faster and simpler than a standard neural network. We validate the performance of our distilled symbolic rules on both simulation and emulation environments. 
Our code is available at https://github.com/VITA-Group/SymbolicPCC .", "annot_1": {"annotation": ["Rewriting_medium", "Content_substitution"], "instruction": "Review the following paragraph, update if possible, delete unnecessary details.", "annotator": "annotator_01"}, "annot_2": {"annotation": ["Rewriting_light", "Content_substitution"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "S1fwAltvB.BkzbibmoH.00", "parag_1": "We choose f to be a simple model: a single linear layer that maps from dimensionalityto 75. We use triplet loss (Schroff et al., 2015) to move the representation of the anchor vector V A closer to the representation of the positive vector V P and farther apart from the representation of the negative vector V N . Following Hoffer & Ailon (2015), we calculate the softmax version of the triplet loss:", "parag_2": "We choose f to be a simple model: a single linear layer that maps from dimensionalityto 75. The dimensional of the transformation was chosen according to development set performance. We use triplet loss (Schroff et al., 2015) to move the representation of the anchor vector V A closer to the representation of the positive vector V P and farther apart from the representation of the negative vector V N . Following Hoffer & Ailon (2015), we calculate the softmax version of the triplet loss:", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "yxeD_Ju-SM.p9Au1Sb-uj.00", "parag_1": "KG, D RAGON consists of a cross-modal encoder (GreaseLM) that fuses the input text-KG pair bidirectionally (§2.2), and a pretraining objective that performs bidirectional self-supervision on the text-KG input (§2.3). 
Our pretraining objective unifies masked language modeling (MLM) and KG link prediction (LinkPred) to make text and KG mutually inform each other and learn joint reasoning over them. Finally, we describe how we finetune the pretrained D RAGON model for downstream tasks (§2.4). While the individual piece of our approach (GreaseLM, MLM, LinkPred) is not new in itself, our contribution is that we are the first to bring these pieces together, present how to unify them effectively and show that this produces a significantly performant pretrained model (§3, §4).", "parag_2": "KG, D RAGON consists of a cross-modal encoder (GreaseLM) that fuses the input text-KG pair bidirectionally (§2.2), and a pretraining objective that performs bidirectional self-supervision on the text-KG input (§2.3). Our pretraining objective unifies masked language modeling (MLM) and KG link prediction (LinkPred) to make text and KG mutually inform each other and learn joint reasoning over them. Finally, we describe how we finetune the pretrained D RAGON model for downstream tasks (§2.4). While each individual piece of our approach (GreaseLM, MLM, LinkPred) is not new in itself, we are the first to bring them together effectively and demonstrate that the resulting model has strong empirical results.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph more concise.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the last sentence more concise.", "annotator": "annotator_07"}} {"id_paragraph": "aomiOZE_m2.rxb2TiQ6bq.08", "parag_1": "Pruned Index Constraint . Filter pruning in residual networks is well-known tricky because the add operators in residual blocks demand the pruned filter indices must be aligned. Filter pruning within a residual block is shown in Fig. 2(b). A typical residual block (e.g., in EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b)) is made up of two convolutional layers. 
All the convolutional layers can be categorized into two groups based on their connection relationship among one another. One group comprises the layers that can be pruned without any constraint , dubbed free Conv layers in this work; the other comprises layers in which the filters must be pruned at the same indices , dubbed constrained Conv layers . For a concrete example, in Fig. 2(b), the layer W ( i ) is a free Conv layer and layer W ( i +1) is a constrained Conv layer.", "parag_2": "Pruned Index Constraint . Pruning filters in residual networks is well-known non-trivial as the Add operators in residual blocks require the pruned filter indices across different residual blocks must be aligned. A figurative illustration of filter pruning within a residual block is shown in Fig. 2(b). A typical residual block (e.g., in EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b)) consists of two convolutional layers. According to the mutual connection relationship, the convolutional layers can be categorized into two groups. One group is made up with the layers that can be pruned without any constraint , dubbed free Conv layers in this work; the other comprises Conv layers whose filters must be pruned at the same indices , dubbed constrained Conv layers . Concretely, the layer W ( i ) in Fig. 2(b) is a free Conv layer, while the layer W ( i +1) is a constrained one.", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Improve the language to make it more fitting to the academic style.", "annotator": "annotator_07"}} {"id_paragraph": "vrvf56Ug_C.PgzrILJ_er.00", "parag_1": "Another interesting future direction is to improve upper bounds on the pseudo-dimension by restricting heuristic functions to some classes, as mentioned in Section 2. 
Further study of this direction will be important, particularly when applying GBFS/A* with learned heuristics to path-finding instances with extremely many vertices. In Appendix D, we present an illustrative example where we can achieve polylog( n ) upper bounds on the pseudo-dimension by assuming that heuristic functions with much fewer tunable parameters than n can be designed in an instance-specific manner.", "parag_2": "Another interesting future direction is to improve upper bounds on the pseudo-dimension by restricting heuristic functions to some classes. Appendix D will present an illustrative example where we can achieve polylog( n ) upper bounds on the pseudo-dimension by assuming that heuristic functions with much fewer tunable parameters than n can be designed in an instance-specific manner.", "annot_1": {"annotation": ["Concision"], "instruction": "Make this paragraph shorter by eliminating details about further work.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_deletion"], "instruction": "Delete the sentence about further study and the reference to section 2.", "annotator": "annotator_07"}} {"id_paragraph": "OV5v_wBMHk.bw4cqlpLh.14", "parag_1": "Statistical estimators exhibit competitive performance on the PEHE metric. In particular, neuralnetwork estimators outperformthe linear and random forest methods because they can depict the nonlinearity in data. TARNet obtains better overall performance than other statistic estimators by absorbing the advantages (R et al., 2019) ofboth T-learner and S-learner. However, the treatment selection bias makes these estimators biased, leading to sub-optimal performance.", "parag_2": "Statistical estimators exhibit competitive performance on the PEHE metric. Due to the superiority to depict non-linearity, neural estimators outperform linear and random forest methods. 
In particular, TARNet that absorbs the advantage of T-learner and S-learner achieves the best overall performance in statistic estimators. However, the circumvention to treatment selection bias leads to inferior performance.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "The second sentence is too complicated. Make it more understandable. Also brush up the rest.", "annotator": "annotator_05"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Reorganize the ideas in the sentences to improve the logical flow of the text.", "annotator": "annotator_07"}} {"id_paragraph": "c-9Hob6rd2.H4aN8Z9LDS.00", "parag_1": "We then propose to divide states into several groups and incorporate them with an attention mechanism to select an appropriate state as a goal to encourage further exploration. In order tohelp RL agents learn efficiently, we update goal generation hindsight and value estimation with related trajectories.", "parag_2": "Specifically, we first divide states into several groups according to their uncertainty and locations in the graph. We then adopt an attention mechanism to select an appropriate group and assign the state with the highest value in the graph as a goal to encourage further exploration. We also propose to update goal generation hindsightly and value estimation with related trajectories, to help RL agents learn efficiently.", "annot_1": {"annotation": ["Development", "Rewriting_heavy"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Rewriting_heavy"], "instruction": "I need more detailed explanations.", "annotator": "annotator_09"}} {"id_paragraph": "MXi6uEx-hp.rdZfFcGyf9.19", "parag_1": "• AGILE-Tuned without sync-freq-change: In Mnih et al. (2015), the authors used the periodic syncing between the target and the main networks to alleviate the issue of the frequently moving target in updating the Q-network. 
In this work, we compared two extreme cases of the frequency period in syncing the networks; 10 ( Sync-freq=10 in Fig. 13 (a)) and 500 ( AGILE-Tuned ). • AGILE-Tuned without graph-dim-change: In order to understand the difficulty in expressing the action relation through the compact representation, we compared the big and the small representations in the action graph, i e., the node-features are encoded in 32 ( Graph-dim=32 ) or 64( AGILE-Tuned ) dimensions.", "parag_2": "• AGILE-Tuned without sync-freq-change: In Mnih et al. (2015), the authors used the periodic syncing between the target and the main networks to alleviate the issue of frequently moving Qvalue targets. In this work, we compare two extreme cases of the sync frequency: 10 depicted by Sync-freq=10 in Fig. 13 (a) and 500 depicted by AGILE-Tuned . • AGILE-Tuned without graph-dim-change: To understand the difficulty in expressing the action relations through a compact representation, we compare two hidden dimension sizes. The node-features are encoded in 32 ( Graph-dim=32 ) or 64( AGILE-Tuned ) dimensions.", "annot_1": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Improve the English of this paragraph and make it shorter.", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "Make the wording of this paragraph much more straight forward, to be more concise.", "annotator": "annotator_03"}} {"id_paragraph": "t0ArcyG8Tb.rF5n2PkfMW.00", "parag_1": "Cross-Modal Alignment Objective Functions Most previous methods adopt triplet loss as a major objective function for video-language modeling. CGMSCD [13] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning. More recent works [40, 17, 18] propose to apply the InfoNCE contrastiveloss [46, 37, 6] to enhance representation learning. 
Particularly, COTS [31] introduces a momentummechanism [14] to maintain more negative samples for image-text contrastive learning. Following COTS, we propose momentum video-level contrastive learning for video-text global alignment. Note that MIL-NCE [34] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [35]. In this paper, we thus propose momentum frame-level MSL-contrastive learning to assist in addressing the misaligned frame problem.", "parag_2": "Functions Most previous methods adopt triplet loss as a major objective function for video-language modeling. CGMSCD [14] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning. More recent works [41, 18, 19] propose to apply the InfoNCE contrastive loss [47, 38, 6] to enhance representation learning. Particularly, COTS [32] and BriVL [12] introduce a momentum mechanism [15] to maintain more negative samples for image-text contrastive learning. Following these two state-of-the-art models, we propose momentum video-level contrastive learning for video-text global alignment in this paper. Note that MIL-NCE [35] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [36]. In this work, we thus propose momentum frame-level MSL-contrastive learning to assist in addressing the misaligned frame problem.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "CVRUl83zah.I75TtW0V7.08", "parag_1": "Problems in DSPN Increasing the number of optimization steps in DSPN generally results in a better solution, but requires significantly more memory and computation time. 
To be able to backpropagate through the inner optimization, the activations of every intermediate step have to be kept in memory. Furthermore, each additional optimization step requires backpropagating that step in the backward pass as well. These issues limit the number of iterations (10 in DSPN) that are computationally feasible, which can have a negative effect on the modeling capacity. We aim to address these problems in the following.", "parag_2": "Problems in DSPN. Increasing the number of optimization steps for solving Equation 7 generally results in a better solution (Zhang et al., 2019), but requires significantly more memory and computation time and can lead to training problems (Belanger et al., 2017). To be able to backpropagate through Equation 7, the activations of every intermediate gradient descent step have to be kept in memory. Each additional optimization step in the forward pass also requires backpropagating that step in the backward pass. These issues limit the number of iterations that are computationally feasible (DSPN uses only 10 steps), which can have a negative effect on the modeling capacity due to insufficient minimization of Equation 7. We aim to address these problems in the following.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "xCwJIwby8o.8vT6si6OaEQ.00", "parag_1": "AdaptFormer is only employed in recognition tasks in this work, it’s unclear whether it can workwell in tasks beyond recognition, e.g. , object detection and semantic segmentation. We leave it forthe future exploration. Since our method is specially designed for efficient fine-tuning, we do notforesee obvious undesirable ethical/social impacts at this moment. ", "parag_2": "AdaptFormer is only employed in recognition tasks in this work, it’s unclear whether it can workwell in tasks beyond recognition, e.g. , object detection and semantic segmentation. We leave it forthe future exploration. 
Since our method is specially designed for efficient fine-tuning, we do notforesee obvious undesirable ethical/social impacts at this moment. Checklist For all authors...", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "fB-ZoDze-Q.ZgK6YOyT9W.00", "parag_1": "Reproducibility Statement. For theory, we provide proof and additional results in the Appendix. For empirical results, we provide implementation and environment details and hyperparameters in the Appendix. We also submit anonymous code in the supplemental materials.", "parag_2": "R EPRODUCIBILITY S TATEMENT For theory, we provide proof and additional results in the Appendix. For empirical results, we provide implementation and environment details and hyperparameters in the Appendix. We also submit anonymous code in the supplemental materials.", "annot_1": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_03"}, "annot_2": {"annotation": ["Unusable"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "pAdnbKIAaL.w-Mm4JV4h.00", "parag_1": "Functional Causal Model In functional causal model (FCM), the relationships between variables are expressed through deterministic, functional equations: x i = f i ( pa i , u i ) , i = 1 , ..., N . The uncertainty in FCM is introduced via the assumption that variables u i , i = 1 , ..., N , are not observed (Pearl et al., 2000). If each function in FCM represents an autonomous mechanism, such FCM is called a structural model. Moreover, if each mechanism determines the value of one and only one variable, then the model is called a structural causal model (SCM). 
Taking the view from the SCM’s perspective, we want to learn a mixture of causal models whose inputs are pure latent variables and whose output is a single high-dimensional variable that describes complex data such as images.", "parag_2": "Functional Causal Model In functional causal model (FCM), the relationships between variables are expressed through deterministic, functional equations: x i = f i ( pa i , u i ) , i = 1 , ..., N . The uncertainty in FCM is introduced via the assumption that variables u i , i = 1 , ..., N , are not observed (Pearl et al., 2000). If each function in FCM represents an autonomous mechanism, such FCM is called a structural model. Moreover, if each mechanism determines the value of one and only one variable, then the model is called a structural causal model (SCM). The SCMs form the basis for many statistical methods (Mooij & Heskes, 2013; Mooij et al., 2016) that aim at inferring knowledge of the underlying causal structure from data (Bongers et al., 2016). Taking the view from the SCM’s perspective, we want to learn a mixture of causal models whose inputs are pure latent variables and whose output is a single high-dimensional variable that describes complex data such as images.", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "Sx6SnclSL.nQLOUHvx8n.04", "parag_1": "Hierarchical Modules. As reported in Table 7, on top of our final solution of Point-M2AE in the first row, we respectively experiment with removing the hierarchical encoder, hierarchical decoder,skip connections, and local spatial self-attention layers from our framework. Specifically, we replace our encoder and decoder with 1-stage plain architectures similar to MAE, which contains 15 and 2blocks of vanilla self-attention layers, respectively. 
We observe the absence of multi-stage structures either in encoder or decoder would hurt the performance, and the hierarchical encoder plays a better role than the decoder. Also, the skip connectionsand local spatial attention can well benefit thenetwork by providing complementary information and local inductive bias.", "parag_2": "Hierarchical Modules. As reported in Table 7, on top of our final solution, Point-M2AE, in the first row, we respectively experiment with removing the hierarchical encoder, hierarchical decoder, and skip connections from our framework. Specifically, we replace our encoder and decoder with 1-stage plain architectures similar to MAE, which contains 15 and 2 vanilla transformer blocks, respectively. We observe the absence of multi-stage structures either in encoder or decoder hurts the performance, and the hierarchical encoder plays a better role than the decoder. Also, the skip connections well benefits the accuracy by providing complementary information for the decoder.", "annot_1": {"annotation": ["Rewriting_light", "Concision"], "instruction": "Rewrite the last sentence to make it more concise.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "hAi0PMz9T7.Ut8ESfYp1.01", "parag_1": "Conventional TCP CC adopts a heuristic-based approach where the heuristic functions are manually crafted to adjust the traffic rate in a deterministic manner. Some proposals use packet loss as a signal for network congestion, e.g., Cubic [4], Reno [20],and NewReno [6], and others rely onthe variation of delay, e.g., Vegas [5]. Other CC designs combine packet lossand delay [21, 22]. 
Recently, different CC techniques specialized for data-center networks are alsoproposed [2, 3, 23].", "parag_2": "Conventional TCP CC adopts a heuristic-based approach where the heuristic functions are manually crafted to adjust the traffic rate in a deterministic manner. Some proposals use packet loss as a signal for network congestion, e.g., Cubic [4], Reno [24], and NewReno [6]; while others rely on the variation of delay, e.g., Vegas [5], or combine packet loss and delay [25, 26]. Different CC techniques specialized for datacenter networks are also proposed [3, 27].", "annot_1": {"annotation": ["Concision"], "instruction": "Make the last sentence slightly shorter.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the paragraph slightly shorter.", "annotator": "annotator_07"}} {"id_paragraph": "BkxG1CvhWf.wcpE7maMLZ4.01", "parag_1": "In this work we try to address that gap, and study the suitability of different state space topological properties as completeness thresholds for cost optimal planning with actions with 0-cost. We identify the sublist diameter as a completeness threshold, which has the advantage of being practically bounded. We also identify a new topological property, the subset diameter , as a completeness threshold and show that no tighter completeness threshold can be computed for a given problem without exploiting cost information, the ini- tial state, or the goal. To test the practical utility of the completeness thresholds we found, we devise a SAT compilation for cost optimal planning, and use that in an any-time planning as satisfiability algorithm, where the horizon is fixed from the beginning to the completeness threshold. This algorithm starts with an upper bound on the total cost and improves that cost upper bound every iteration. 
Experiments show that the algorithm is able to compute plans with costs better than the initial costs, and in many cases it can compute plans whose cost matches the optimal cost. Furthermore, the algorithm is able to prove the optimality of certain costs for a number of instances, some of which could not be proven optimal by the widely used LM-cut (Pommerening and Helmert 2012) planning heuristic.", "parag_2": "In this work we try to address that gap, and study the suitability of different state space topological properties for being completeness thresholds for cost optimal planning with actions with 0-cost. We identify a completeness threshold that can be practically bounded, and show that no tighter completeness threshold can be computed for a given problem without exploiting cost information, the initial state, or the goal. To test the practical utility of this completeness threshold, we devise a SAT compilation for cost optimal planning, and use that in an any-time planning as satisfiabil- ity algorithm, where the horizon is fixed from the beginning to the completeness threshold. This algorithm starts with an upper bound on the total cost and improves that cost upper bound every iteration. Experiments show that the algorithm is able to compute plans with costs better than the initial costs, and in many cases it can compute plans whose cost matches the optimal cost. Furthermore, the algorithm is able to prove the optimality of certain costs for a number of instances, some of which could not be proven optimal by the widely used LM-cut planning heuristic.", "annot_1": {"annotation": ["Concision"], "instruction": "Shorten this paragraph.", "annotator": "annotator_02"}, "annot_2": {"annotation": ["Concision"], "instruction": "Make the beginning of this paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "33RNh69fYq.kMvWVl725x.03", "parag_1": "Discussion . In this work, different kinds of objects are handled without being distinguished. 
We havenot used the category labels that may help the model better fit multi-class data. How to incorporatethe unified model with category labels should be further studied. In practical applications, the normalsamples are not as consistent as those in MVTec-AD. Therefore, the ability to deal with the scenarioswhere the normal samples share some diversity is important. Our UniAD is capable of handling all15 categories in MVTec-AD, hence would be more suitable for real scenes. ", "parag_2": "Discussion . In this work, different kinds of objects are handled without being distinguished. We havenot used the category labels that may help the model better fit multi-class data. How to incorporatethe unified model with category labels should be further studied. In practical uses, normal samples arenot as consistent as those in MVTec-AD, often manifest themselves in some diversity. Our UniADcould handle all 15 categories in MVTec-AD, hence would be more suitable for real scenes. However,anomaly detection may be used for video surveillance, which may infringe personal privacy.", "annot_1": {"annotation": ["Development", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "WldWha1MT.LL2ZsGpJga.01", "parag_1": "We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute TopoMatch for images. We show that TopoMatch is an interpretable metric to evaluate the topological correctness of segmentations. Moreover,we demonstrate how induced matchings can be used to train segmentation networks and improve the topological correctness of the segmentations across all 6 baseline datasets while preserving volumetricsegmentation performance.", "parag_2": "We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute TopoMatch for images. 
We show that TopoMatch is an interpretable metric to evaluate the topological correctness of segmentations, which is more sensitive than the well-established Betti number error. Moreover, the differentiability of the TopoMatch loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance.", "annot_1": {"annotation": ["Rewriting_medium", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "9wfZbn73om.FhHH15YtKt.01", "parag_1": "In summary, our contributions include: 1) proposing a novel ( σ, δ ) -measure to quantify the data augmentation; 2) proposing a theoretical framework for contrastive SSL, which suggests that alignment, divergence, and concentration are key factors of generalization ability; 3) provably verifying that not only the InfoNCE loss but also the cross-correlation loss satisfy the alignment and divergence; 4) empirically showing that the concentration w.r.t. the proposed augmented distance is highly related to the downstream performance.", "parag_2": "In summary, our contributions include: 1) proposing a novel ( σ, δ ) -measure to quantify data augmentation; 2) presenting a theoretical framework for contrastive SSL that highlights alignment, divergence, and concentration as key factors for generalization ability; provably verifying that not only the InfoNCE loss but also the cross-correlation loss satisfy alignment and divergence; 4) showing a strong correlation between downstream performance and concentration of augmented data.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Make the sentence precise.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Improve english in this text.", "annotator": "annotator_07"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.13", "parag_1": "We observed the main effect of W ( F 2 , 22 = 25 . 3, p < 0 . 
001, η 2 p = 0 . 967) (Figure 4 (i)). Pair-wise comparisons showed that the error rates increased as W decreased. The other parameters did not show the main effects . No significant interaction was observed.", "parag_2": "We observed the main effect of W ( F 2 , 22 = 25 . 3, p < 0 . 001, η 2 p = 0 . 967) (Figure 4 (i)). The pair-wise comparisons showed that error rates increased with a decrease in W . The other parameters did not show the main effects. No significant interaction was observed.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Replace some words in the paragraph", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Slightly revise for readability.", "annotator": "annotator_07"}} {"id_paragraph": "F3z0hchpGy.xeuzrNJiNW.03", "parag_1": "As all figures in the FAUST data set are similarly meshed and oriented, breaking the gauge equivariance in higher layers can actually be beneficial. As shown in Weiler & Cesa (2019), symmetry can be broken by treating non-invariant features as invariant features as input to the final 1 × 1 convolution. Such architectures are equivariant on lower levels, while allowing orientation sensitivity at higher layers.", "parag_2": "As all meshes in the FAUST dataset share the same topology, breaking the gauge equivariance in higher layers can actually be beneficial. As shown in (Weiler & Cesa, 2019), symmetry can be broken by treating non-invariant features as invariant features as input to the final 1 × 1 convolution. 
Such architectures are equivariant on lower levels, while allowing orientation sensitivity at higher layers.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Rephrase the paragraph", "annotator": "annotator_06"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Rephrase the first sentence.", "annotator": "annotator_07"}} {"id_paragraph": "r1bMvE4aE.Sy4sJiGkr.00", "parag_1": "In order to evaluate the feasibility of our suggested approach for deriving diverse sets of plans according to various existing metrics, we have implemented our approach on top of the Fast Downward planning system (Helmert 2006). The code can be made available upon request. Further, we implemented an external component, that given a set of plans and a metric returns the score of the set under that metric (Katz and Sohrabi 2019).", "parag_2": "In order to evaluate the feasibility of our suggested approach for deriving diverse sets of plans according to various existing metrics, we have implemented our approach on top of the Fast Downward planning system (Helmert 2006). Our planners, ForbidIterative (FI) diverse planners are available as part of the collection of ForbidIterative planners (Katz, Sohrabi, and Udrea 2019a). Further, we implemented an external component, that given a set of plans and a metric returns the score of the set under that metric (Katz and Sohrabi 2019).", "annot_1": {"annotation": ["Content_substitution", "Development"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "p8yrWJS4W.eHA5NswPr.00", "parag_1": "In practice, we do not have access to the true distribution p w . Rather, we are typically given a corpus { w p w n } N n =1 , whose instances we assume to be sampled i.i.d. from p w . The common approach to address this shortcoming is (when possible) to derive a statistical estimator (cid:98) ∆ that uses this corpus to approximate ∆ . 
There are two common strategies for building such estimators: Monte Carlo estimation and plug-in estimation. 3.2. M ONTE C ARLO E STIMATION Our i.i.d. assumption w.r.t. samples w p w ∼ p w allows us to derive a Monte Carlo estimator for certain divergences. We start with the forward KL divergence—present in both ∆ → and ∆ exp :", "parag_2": "In practice, we do not have access to the true distribution p w . Rather, we are typically given a corpus { w p w n } N n =1 , whose instances we assume to be sampled i.i.d. from p w . The common approach to address this issue is thus to derive a statistical estimator (cid:98) ∆ that uses this corpus to approximate ∆ . There are two common strategies for building such estimators: Monte Carlo and plug-in estimation. Monte Carlo Estimation. Our i.i.d. assumption w.r.t. samples in { w p w n } Nn =1 allows us to derive a Monte Carlo estimator for certain divergences. We start with the forward KL divergence:", "annot_1": {"annotation": ["Concision"], "instruction": "Fix formatting issues and simplify the wording of the paragraph.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Fix the caplocks problem. Slightly shorten the paragraph.", "annotator": "annotator_07"}} {"id_paragraph": "SRquLaHRM4.vI2x5N-YHC.03", "parag_1": "Q: What is the extra computation time cost of PLOT over CoOp baseline? A: Around 10 %inference speed and 5 % training time . Despite the performance improvement, the extra computationcost is still a limitation of PLOT. Please see the detailed analysis in the supplementary materials.", "parag_2": "Q: What is the extra computation time cost of PLOT over CoOp baseline? A: Around 10 %inference speed and 5 % training time . 
Please see the detailed comparisons and analysis in thesupplementary materials.", "annot_1": {"annotation": ["Content_deletion"], "instruction": "Remove any redundant information that is not essential for the research question answered.", "annotator": "annotator_03"}, "annot_2": {"annotation": ["Content_deletion", "Development"], "instruction": NaN, "annotator": "annotator_09"}} {"id_paragraph": "nkOpNqg-ip.OwJsIhe_p.00", "parag_1": "Surprisingly, the baselines used to assess the performance of AutoML tools are typically only other AutoML tools but no “simple” baselines. For example, a very simple baseline would be to imitate the steps a human data scientist would take, so that such an approach should be at least considered as a baseline. Without such baselines, we do not learn how AutoML tools improve upon ad-hoc techniques but only how they compare relatively to each other. To our knowledge, the only work accounting for such baselines is Thornton et al. (2013) using the Exhaustive-Default (“Ex-def”) baseline, which is to take the default parametrized model that is best in a cross-validation. They also discuss a grid search, which is however not applicable in practice.", "parag_2": "The baselines used to assess the performance of AutoML tools are often other AutoML tools or random search. A simple but perhaps more sensible baseline than random search would be to imitate the steps a human data scientist would take. Without such baselines, we do not learn how AutoML tools improve upon ad-hoc techniques but only how they compare relatively to each other. To our knowledge, the only work accounting for such baselines is (Thornton et al., 2013), using the Exhaustive-Default (“Ex-def”) baseline which is to take the default parametrized model that is best in a cross-validation. 
They also discuss a grid search, which is however not applicable in practice.", "annot_1": {"annotation": ["Rewriting_light"], "instruction": "Edit some formulations to sound more neutral.", "annotator": "annotator_04"}, "annot_2": {"annotation": ["Concision", "Rewriting_light"], "instruction": "Make the beginning of the paragraph shorter.", "annotator": "annotator_07"}} {"id_paragraph": "Iw0CmVAYR5.JQTOJMtn3t.00", "parag_1": "Quantization effect In Appendix 4.6, we also study how the performance of DAT is robust to gradient quantization. We find that when the number of bits is reduced from 32 to 8 , the resulting TA and RA becomes slightly worse than the best 32 -bit case. For example, in the worst case of CIFAR-10, TA drops 0 . 91% and 6 . 33% for DAT-PGD and DAT-FGSM, respectively. And RA drops 4 . 73% and 5 . 22% , respectively. However, the use of quantization reduces the amount of data transmission per iteration. We also show that if a high performance computing cluster of nodes (with NVLink high-speed GPU interconnect (Foley & Danskin, 2017)) is used, the communication cost can be further reduced.", "parag_2": "Quantization effect In Appendix 4.6, we also study how the performance of DAT is affected by gradient quantization. We find that when the number of bits is reduced from 32 to 8 , the resulting TA and RA becomes worse than the best 32 -bit case. For example, in the worst case (8-bit 2-sided quantization) of CIFAR-10, TA drops 1 . 52% and 6 . 32% for DAT-PGD and DAT-FGSM, respectively. And RA drops 4 . 74% and 5 . 58% , respectively. Note that our main communication configuration is given by Ring-AllReduce that calls for 1-sided (rather than 2-sided) quantization. We also observe that DAT-FGSM is more sensitive to effect of gradient quantization than DAT-PGD. Even in the centralized setting, the use of 8-bit quantization can lead to a non-trivial drop in TA (see Table A5). 
However, the use of quantization reduces the amount of data transmission per iteration. We also show that if a high performance computing cluster of nodes (with NVLink high-speed GPU interconnect (Foley & Danskin, 2017)) is used, the communication cost can be further reduced.", "annot_1": {"annotation": ["Content_addition", "Rewriting_light"], "instruction": NaN, "annotator": "annotator_06"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_08"}} {"id_paragraph": "hegI87bI5S.fL6Q48sfx8.14", "parag_1": "There was a main effect in Position, and Position = Inside has a longer movement time than Position = Outside . We observed a significant interaction of I × Position . At I = 0 , the movement time increased compared to the condition no notch by approxi- mately 11.8% in Position = Inside , and approximately 4.93% in Position = Outside .", "parag_2": "Another effect was observed, in that Position = Inside had a longer movement time than that for Position = Outside . We observed a significant interaction of I × Position . At I = 0 , the movement time increased compared to that for the condition of no notch by approximately 11.8% in Position = Inside , and by approximately 4.93% in Position = Outside .", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Modify the first sentence", "annotator": "annotator_10"}, "annot_2": {"annotation": ["Rewriting_light"], "instruction": "Revise this text to make it more readable and direct.", "annotator": "annotator_07"}} {"id_paragraph": "r1aSglP6b.Bk3yZFv6-.00", "parag_1": "• The shape of the loss function after spontaneous symmetry breaking has the same shape observed by Goodfellow et al. (2014) towards the end of training, see Figure 1. • A cyclical learning rate (Smith & Topin, 2017) helps to get to the new minimum faster, see Section 2.5. • Stochasticity in gradient descent juggles the loss function such that the weights are no longer at the local maximum of Figure 1. 
A gradient descent step is taken to further take the weights towards the local minimum. Stochasticity helps the network to generalize better. • When the learning rate is too small to move away from A in Figure 1. Non-linearities move the weight away from A , this corresponds to breaking the symmetry explicitly in Theorem 1. PReLU’s (He et al., 2015b)performance could be related to the optimization of this process. • Results from Shwartz-Ziv & Tishby (2017) are due to spontaneous symmetry breaking, see Section 4. • Identity mapping outperforms other skip connections (He et al., 2016) is a result of the residual unit’s output being small. Then the residual units can be decoupled leading to a small λ and so it is easier for spontaneous symmetry breaking to occur, from m 2 = − µ 2 + 14 λη 2 . • Skip connection across residual units breaks additional symmetry. Suppose now an identity skip connection connects x 1 and the output of F 2 . Now perform a symmetry transformation on x 1 and x 2 , Q 1 and Q 2 ∈ G , respectively. Then the output after two residual untis is Q x = Q 1 x 1 + Q 2 x 2 + Q 2 F 2 . Neither Q = Q 1 nor Q = Q 2 can satisfies the covariance under G . This is observed by Orhan & Pitkow (2017). • The shattered gradient problem (Balduzzi et al., 2017). It is observed that the gradient in deep (non-residual) networks is very close to white noise. This is reflected in the exponential in Equation (6). This effect on ResNet is reduced because of the decoupling limit λ → 0 . This leads to the weight eigenvalues m 2 being larger in non-residual networks owing to m 2 = − µ 2 + 14 λη 2 . And so a higher oscillation frequency in the correlation function. • In recurrent neural networks, multiplicative gating (Yuhuai et al., 2016) combines the input x and the hidden state h by an element-wise product. Their method outperforms the method with an addition x + h because the multiplication gating breaks the covariance of the output. 
A transformation Q x ∗ Q h (cid:54) = Q ( x ∗ h ) , whereas for addition the output remains covariant Q x + Q h = Q ( x + h ) .", "parag_2": "• The shape of the loss function after spontaneous symmetry breaking has the same shape observed by Goodfellow et al. (2014) towards the end of training, see Figure 1. • The training error typically drops drastically when learning rate is decreased. This occurs when the learning rate drops below η c , forcing a phase transition so that new minima develop. See Figure 1. • A cyclical learning rate (Smith & Topin, 2017) helps to get to the new minimum faster, see Section 2.5. • Stochasticity in gradient descent juggles the loss function such that the weights are no longer at the local maximum of Figure 1. A gradient descent step is taken to further take the weights towards the local minimum. Stochasticity helps the network to generalize better. • When the learning rate is too small to move away from A in Figure 1. PReLU’s (He et al., 2015b) could move the weight away from A through the training of the non-linearity. This corresponds to breaking the symmetry explicitly in Theorem 1. • Results from Shwartz-Ziv & Tishby (2017) are due to spontaneous symmetry breaking, see Section 4. • Identity mapping outperforms other skip connections (He et al., 2016) is a result of the residual unit’s output being small. Then the residual units can be decoupled leading to a small λ and so it is easier for spontaneous symmetry breaking to occur, from m 2 = − µ 2 + 14 λη 2 . • Skip connection across residual units breaks additional symmetry. Suppose now an identity skip connection connects x 1 and the output of F 2 . Now perform a symmetry transformation on x 1 and x 2 , Q 1 and Q 2 ∈ G , respectively. Then the output after two residual untis is Q x = Q 1 x 1 + Q 2 x 2 + Q 2 F 2 . Neither Q = Q 1 nor Q = Q 2 can satisfies the covariance under G . This is observed by Orhan & Pitkow (2017). • The shattered gradient problem (Balduzzi et al., 2017). 
It is observed that the gradient in deep (non-residual) networks is very close to white noise. This is reflected in the exponential in Equation (6). This effect on ResNet is reduced because of the decoupling limit λ → 0 . This leads to the weight eigenvalues m 2 being larger in non-residual networks owing to m 2 = − µ 2 + 14 λη 2 . And so a higher oscillation frequency in the correlation function. • In recurrent neural networks, multiplicative gating (Yuhuai et al., 2016) combines the input x and the hidden state h by an element-wise product. Their method outperforms the method with an addition x + h because the multiplication gating breaks the covariance of the output. A transformation Q x ∗ Q h (cid:54) = Q ( x ∗ h ) , whereas for addition the output remains covariant Q x + Q h = Q ( x + h ) .", "annot_1": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_02"}, "annot_2": {"annotation": ["Content_addition", "Rewriting_medium"], "instruction": NaN, "annotator": "annotator_07"}} {"id_paragraph": "BkxG1CvhWf.wcpE7maMLZ4.03", "parag_1": "Practically, the existing methods to compute the recurrence diameter have a doubly exponential worst case running time (Kroening and Strichman 2003; Abdulaziz and Berger 2021), and they are only useful when applied to small abstractions in the context of compositionally computing upper bounds on other topological properties. Furthermore, there is not a compositional algorithm that can compute upper bounds on the recurrence diameter using abstractions’ recurrence diameters (Abdulaziz 2017)[Chapter 3, Theorem 2]. 
Accordingly, due to this absence of a practical way to compute it or tightly bound it, the recurrence diameter cannot be practically used as a completeness threshold.", "parag_2": "Practically, the existing methods to compute the recurrence diameter have a doubly exponential worst case running time (Kroening and Strichman 2003; Abdulaziz and Berger 2021), and they are only useful when applied to small abstractions in the context of compositionally computing upper bounds on other topological properties. Furthermore, there is not a compositional algorithm that can compute upper bounds on the recurrence diameter using abstractions recurrence diameter. Accordingly, the recurrence diameter cannot be practically used as a completeness threshold due to the absence of a practical way to compute it or tightly bound it.", "annot_1": {"annotation": ["Concision", "Rewriting_medium"], "instruction": "I do not need references in the in second sentence. Rephrase the last sentence.", "annotator": "annotator_09"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Remove the references in the second half of the paragraph. Reorder the last sentence to improve readability.", "annotator": "annotator_07"}} {"id_paragraph": "OzYyHKPyj7.O9Mk1uqXra.00", "parag_1": "To remedy this, some previous work has investigated the addition of differentiable stack data structures to RNNs (Sun et al., 1995; Grefenstette et al., 2015; Joulin & Mikolov, 2015; DuSell & Chiang, 2020). Just as adding a stack to a finite state machine, which makes it a pushdown automaton (PDA), enables it to recognize context-free languages (CFLs), the hope is that adding stacks to RNNs will increase the range of problems on which they can be used effectively. 
We also expect stacks to aid training by introducing an inductive bias for learning hierarchical patterns, and to increase generalization power by structuring the model’s memory in a way that better predicts held-out hierarchical data.", "parag_2": "To remedy this, some previous work has investigated the addition of differentiable stack data structures to RNNs (Sun et al., 1995; Grefenstette et al., 2015; Joulin & Mikolov, 2015; DuSell & Chiang, 2020), which is closely related to work on neural networks that model shift-reduce parsers (Bowman et al., 2016; Dyer et al., 2016; Shen et al., 2019a). Just as adding a stack to a finite state machine, which makes it a pushdown automaton (PDA), enables it to recognize context-free languages (CFLs), the hope is that adding stacks to RNNs will increase the range of problems on which they can be used effectively. We also expect stacks to aid training by introducing an inductive bias for learning hierarchical patterns, and to increase generalization power by structuring the model’s memory in a way that better predicts held-out hierarchical data.", "annot_1": {"annotation": ["Development"], "instruction": NaN, "annotator": "annotator_10"}, "annot_2": {"annotation": ["Content_addition"], "instruction": NaN, "annotator": "annotator_03"}} {"id_paragraph": "Byyb66j52G.hR5KKRfhQm.14", "parag_1": "Delayed augmentation . We experiment on the generalization when we start to use augmentationlately as 10M, 20M. As shown in Figure 2(d) and Figure 2(e), the generalization rapidly increases after using augmentation at 10M and 20M. Although we use augmentation lately, the augmentation helps the generalization regardless of the usage timing. Golatkar et al . [9] shows that delayed augmentation cannot achieve as much as using augmentation during whole training in supervised-learning. However, (10, 25) improves the generalization comparable with (0, 25), which use augmentation throughouttraining, unlike supervised learning. 
However, when augmentation noticeably helps the training, suchas Figure 2(e), delayed augmentation struggles to follow earlier one in Figure 2(f), because the RLgradually improves the policy and trajectory by Markov property. Furthermore, RL has a limitednumber of samples unlike supervised learning, so using augmentation from the initial time is morecritical than supervised learning if augmentation helps the training.", "parag_2": "Delayed augmentation . To determine when we start to use augmentation, we delayed its use until after 10M or 20M steps. The generalization rapidly increases after using augmentation at 10M and 20M (Figre 2(d), 2(e)). Although we impose augmentation late, the augmentation helps the generalization regardless of the start timing. In SL, delayed augmentation cannot achieve as much as using augmentation during whole training [9]. However, (10, 25) improves the generalization to be comparable with that of (0, 25), which use augmentation throughout training; this result differs from the case of supervised learning. However, when augmentation noticeably helps the training, the performance achieved using delayed augmentation may not catch up (Figure 2(e)) to the performance achieved using early augmentation (Figure 2(f)), because the RL gradually improves the policy and trajectory, as a result of its Markov property. Furthermore, the number of samples is limited for RL, but not for supervised learning, so using augmentation from the initial time is more critical than supervised learning if augmentation helps the training.", "annot_1": {"annotation": ["Rewriting_medium"], "instruction": "Use clearer expression, use concise words.", "annotator": "annotator_08"}, "annot_2": {"annotation": ["Rewriting_medium"], "instruction": "Revise this paragraph to use clearer and more precise words.", "annotator": "annotator_07"}}