shuffled_text (string, 267–4.47k chars) | A (6 classes) | B (6 classes) | C (6 classes) | D (6 classes) | label (4 classes)
---|---|---|---|---|---|
**A**: The LGO generating set offers a variety of advantages**B**: Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in the LGO generators. Moreover, the LGO generators can be used directly to verify representations of classical groups [12].
**C**: In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA | CAB | ACB | BAC | CAB | Selection 2 |
**A**: We note that the idea of performing global static condensation goes back to the Variational Multiscale Finite Element Method–VMS [MR1660141, MR2300286]. Recently variations of the VMS**B**:
It is essential for the performance of the method that the static condensation is done efficiently**C**: The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements | ACB | ACB | CAB | ACB | Selection 3 |
**A**: Alg-CM uses an involved subroutine (far more complicated than ours given in Algorithm 1) to update the coordinates in each iteration, which accumulates the inaccuracy of coordinates. Even worse, this subroutine computes three angles and selects the smallest to decide how to proceed each time, and due to floating-point issues it is possible to select a wrong angle when angles are close, which causes the subroutine to perform incorrectly.**B**: These coordinates are computed somehow and their true values can differ from their values stored in the computer**C**: Moreover, Alg-A is more stable than the alternatives.
During the iterations of Alg-CM, the coordinates of three corners and two midpoints of a P-stable triangle (see Figure 37) are maintained | ABC | ABC | BAC | CBA | Selection 4 |
**A**: We trade off this by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a)**B**: It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news event). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see the curve of the Munich shooting event is also close to the curve of average news, indicating the event is more news-related.**C**:
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments | BCA | CAB | CBA | BAC | Selection 1 |
**A**: We should not rely on plateauing of the training loss or on the loss (logistic or exp or cross-entropy) evaluated on validation data as measures to decide when to stop**B**: We might improve the validation and test errors even when the decrease in the training loss is tiny and even when the validation loss itself increases.
**C**: Instead, we should look at the 0–1 error on the validation dataset | ABC | CAB | BAC | ACB | Selection 4 |
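To make the stopping rule in this row concrete, here is a minimal sketch of early stopping driven by the validation 0–1 error rather than the validation loss; `train_one_epoch` and the patience value are illustrative assumptions, not from the quoted text.

```python
import numpy as np

def zero_one_error(model, X_val, y_val) -> float:
    """0-1 error on the validation set: fraction of misclassified points."""
    return float(np.mean(model.predict(X_val) != y_val))

def train_with_01_stopping(model, train_one_epoch, X_val, y_val,
                           patience: int = 5, max_epochs: int = 1000):
    """Keep training while the validation 0-1 error improves, even if the
    training loss barely moves or the validation *loss* increases."""
    best_err, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        err = zero_one_error(model, X_val, y_val)
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break  # 0-1 error stopped improving
    return model, best_err
```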
**A**: In contrast to mere sentiment features, this approach is more tailored to the rumor context (a difference not evaluated in (liu2015real, )). We simplified and generalized the “dictionary” by keeping only a set of carefully curated negative words. We call them “debunking words”, e.g., hoax, rumor or not true**B**: CrowdWisdom. Similar to (liu2015real, ), the core idea is to leverage the public’s common sense for rumor detection: if there are more people denying or doubting the truth of an event, this event is more likely to be a rumor. For this purpose, (liu2015real, ) use an extensive list of bipolar sentiments with a set of combinational rules**C**: Our intuition is that the attitude of doubting or denying events is in essence sufficient to distinguish rumors from news. What is more, this generalization augments the size of the crowd (covers more ‘voting’ tweets), which is crucial, and thus contributes to the quality of the crowd wisdom. In our experiments, “debunking words” is a high-impact feature, but it needs substantial time to “warm up”; that is explainable as the crowd is typically sparse at an early stage.
| BCA | BAC | CBA | CBA | Selection 2 |
**A**: Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming an independent loss function, which does not consider the correlation and overlap between models**B**: We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear model that minimizes the number of discordant pairs in the training data**C**: We modified the objective function of RankSVM following our global loss function, which takes into account the temporal feature specificities of event entities. The temporal and type-dependent ranking model is learned by minimizing the following objective function:
| ACB | BAC | ABC | CAB | Selection 3 |
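Since the row above describes modifying RankSVM's pairwise objective but the formula itself is cut off at the cell boundary, the following is a hedged sketch of a RankSVM-style hinge objective with an optional per-pair weight standing in for the temporal/type-dependent global loss; the weighting `omega` is an assumption, not the paper's exact form.

```python
import numpy as np

def ranksvm_loss(w, pairs, X, C=1.0, omega=None):
    """RankSVM-style objective: L2 regularizer plus a hinge penalty for
    each ordered pair (i, j) with item i ranked above item j."""
    loss = 0.5 * np.dot(w, w)
    for idx, (i, j) in enumerate(pairs):
        margin = np.dot(w, X[i] - X[j])
        weight = 1.0 if omega is None else omega[idx]  # hypothetical per-pair weight
        loss += C * weight * max(0.0, 1.0 - margin)    # hinge on pair order
    return loss
```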
**A**: The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients**B**: The only difference happens for patients 10 and 12, whose intakes are earlier in the day.
Further, patient 12 takes approx**C**: 3 times the average insulin dose of others in the morning. | CBA | ABC | CBA | BCA | Selection 2 |
**A**: A quantitative comparison of results on independent test datasets was carried out to characterize how well our proposed network generalizes to unseen images**B**: The final outcome for the 2017 release of the SALICON dataset is therefore not reported in this work, but our model results can be viewed on the public leaderboard (https://competitions.codalab.org/competitions/17136) under the user name akroner.
**C**: Here, we were mainly interested in estimating human eye movements and regarded mouse tracking measurements merely as a substitute for attention | BCA | CBA | CAB | ACB | Selection 4 |
**A**:
We call a marking sequence $\sigma$ for a word $\alpha$ block-extending if every symbol that is marked, except the first one, has at least one block-extending occurrence**B**: We answer this question in the negative.**C**: This definition leads to the general combinatorial question of whether every word has an optimal marking sequence that is block-extending, or whether the seemingly bad choice of marking a symbol that has only isolated occurrences (and that is not the first symbol) is necessary for optimal marking sequences | CAB | ABC | BAC | ACB | Selection 4 |
**A**: Notable exceptions are the works of
Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al**B**: Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this method does not actually aim to model or predict future frames, and achieves clear but relatively modest gains in efficiency.**C**: (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018) | BAC | ACB | ABC | CBA | Selection 2 |
**A**: We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim**B**: In this section, we explore the autonomous locomotion mode transition of the Cricket robot**C**: This design facilitates the decision-making process when transitioning between the robot’s rolling and walking locomotion modes. Through energy consumption analyses during step negotiations of varied heights, we establish energy criterion thresholds that guide the robot’s transition from rolling to walking mode. Our simulation studies reveal that the Cricket robot can autonomously switch to the most suitable locomotion mode based on the height of the steps encountered.
| CBA | ACB | BAC | ACB | Selection 3 |
**A**: For problems such as bin packing,**B**: While our work addresses issues similar to [24] and [29], in that trusted advice is related to consistency whereas untrusted advice is related to robustness, it differs in two significant aspects: First, our ideal objective is to identify an optimal
family of algorithms, and we show that in some cases (ski rental, online bidding), this is indeed possible; when this is not easy or possible, we can still provide approximations**C**: Note that finding a Pareto-optimal family of algorithms presupposes that the exact competitiveness of the online problem with no advice is known | ABC | ACB | CAB | BAC | Selection 3 |
**A**: From the previous analysis, it is clear that useful information can be obtained from the study of those cases where our approach was not able to correctly predict a class**B**: In Figure 8 we exemplify each case with one subject from the test set, described in more detail below:
**C**: With this goal in mind, we also carried out an error analysis and identified four common error cases which could be divided into two groups: those that arise from bad labeling of the test set and those that arise from bad classifier performance | CAB | CAB | ACB | BAC | Selection 3 |
**A**: Furthermore, to enhance the convergence performance when using more aggressive sparsification compressors (e.g., RBGS), we extend GMC to GMC+. We prove the convergence of GMC and GMC+ theoretically. Empirical results verify the superiority of global momentum and show that GMC and GMC+ can outperform other baselines to achieve state-of-the-art performance.**B**:
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning**C**: To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD | CBA | BAC | CAB | BAC | Selection 3 |
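The row above motivates combining global momentum with sparse communication. Below is a minimal sketch of the generic ingredients (momentum accumulation, top-k sparsification, and an error-feedback residual); it illustrates the general idea only and is not the authors' exact GMC/GMC+ update.

```python
import numpy as np

def topk_sparsify(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def local_step(grad, momentum, residual, k, beta=0.9):
    """One worker step of sparse communication with momentum (hedged sketch)."""
    momentum = beta * momentum + grad   # momentum accumulation
    update = residual + momentum        # add back previously uncommunicated mass
    sparse = topk_sparsify(update, k)   # communicate only k entries
    residual = update - sparse          # remember what was dropped (error feedback)
    return sparse, momentum, residual
```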
**A**: Using cross-correlation would produce the same results and would not require flipping the kernels during visualization**B**: operation.**C**:
, where $*$ is the convolution (footnote 3: we use convolution instead of cross-correlation only as a matter of compatibility with previous literature and computational frameworks) | ACB | BCA | CBA | ABC | Selection 2 |
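The point about convolution versus cross-correlation in this row is easy to verify numerically: convolving with a kernel equals cross-correlating with the kernel flipped along both axes. A small check with SciPy:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

img = np.random.rand(5, 5)
k = np.random.rand(3, 3)

conv = convolve2d(img, k, mode="same")
# Cross-correlation with the kernel flipped along both axes gives the same result.
corr_flipped = correlate2d(img, k[::-1, ::-1], mode="same")

assert np.allclose(conv, corr_flipped)
```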
**A**:
Coverage is another factor which determines the performance of each UAV. As presented in Fig. 1 (c), the altitude of a UAV plays an important role in adjusting coverage**B**: The higher the altitude, the larger the coverage area of a UAV. A large coverage area means a substantial opportunity to support more users, but a higher SNR will be needed**C**: Furthermore, the turbulence of upper air disrupts the stability of UAVs, causing more energy consumption. Thus, a suitable height is essential to determine the coverage area. | CBA | BAC | ABC | ACB | Selection 3 |
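As a hedged illustration of the altitude/coverage trade-off described in this row, a simple cone antenna model gives a coverage radius of r = h·tan(θ) for half-beamwidth θ; the model and parameter names are assumptions, not taken from the quoted text.

```python
import math

def coverage_radius(altitude_m: float, half_beamwidth_deg: float) -> float:
    """Ground coverage radius under a simple cone model: r = h * tan(theta).
    Illustrative model only, not from the quoted text."""
    return altitude_m * math.tan(math.radians(half_beamwidth_deg))

# Doubling the altitude doubles the coverage radius under this model.
print(coverage_radius(100, 30))  # ~57.7 m
print(coverage_radius(200, 30))  # ~115.5 m
```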
**A**: e., in the experiment,**B**: and the simulation was run until around $220\,\upmu$s**C**: Note that $\widetilde{I}_{lev}<1$
at $t=0$, because $t_{lev}=-50\,\upmu$s, i.e., | ABC | CBA | CAB | ABC | Selection 3 |
**A**: In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on Variance before applying Dropout (DQN) and after applying Dropout (Dropout methods DQN)**B**: There was a statistically significant decrease in Variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore, one of the Dropout methods outperformed the DQN score.**C**:
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates | BCA | CAB | ACB | ABC | Selection 1 |
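The statistical test mentioned in this row is available in SciPy as `scipy.stats.wilcoxon`; a minimal paired-comparison sketch follows, with made-up variance values since the quoted text does not give the raw data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-seed return variances before (DQN) and after (DQN+Dropout).
var_dqn     = np.array([120.4, 98.2, 135.1, 110.7, 101.3])
var_dropout = np.array([ 88.9, 75.0, 102.6,  81.4,  79.8])

# Paired, non-parametric test for a shift in variance between the two variants.
stat, p = wilcoxon(var_dqn, var_dropout)
print(f"W={stat:.1f}, p={p:.3f}")
```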
**A**: Moreover, Figure 2 shows a high-level overview of the deep semantic segmentation pipeline, and where each of the categories mentioned in Figure 1 belong in the pipeline.**B**:
We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization function based improvements, data synthesis based improvements, weakly supervised models, sequenced models, and multi-task models**C**: Figure 1 indicates the categories we cover in this review, along with a timeline of the most influential papers in the respective categories | ABC | ACB | CAB | BAC | Selection 3 |
**A**: The evaluation is performed on all nine datasets, and results for different numbers of training examples are shown (increasing from left to right)**B**: The overall performance of each method is summarized in the last column.
For neural random forest imitation, a network architecture with 128 neurons in both hidden layers is used. From the analysis, we can make the following observations:**C**: Here, we additionally include decision trees, support vector machines, random forests, and neural networks in the comparison | CAB | CAB | ACB | BCA | Selection 4 |
**A**:
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al**B**: In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.**C**: (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019) | CBA | BAC | ACB | BCA | Selection 3 |
**A**: They are not suited to execute generic compressed models and are therefore not included in the following experiments.
**B**: Furthermore, in particular for the TPU, experimentation is often hindered by limitations in the tool chain, which is not flexible enough to support such optimizations**C**: While domain-specific accelerators, such as Google’s TPU, excel in their specific performance, they are usually limited to a set of specific operations and are neither flexible in terms of data types nor capable of sparse calculations | ACB | ABC | CBA | ABC | Selection 3 |
**A**: Moreover, we consider a generalization of the filling radius and also define a strong notion of filling radius which is akin to the so-called maximal persistence in the realm of topological data analysis.**B**:
In this section, we recall the notions of spread and filling radius, as well as their relationship**C**: In particular, we prove a number of statements about the filling radius of a closed connected manifold | ABC | CAB | ACB | BCA | Selection 2 |
**A**: First, they were shown a video tutorial which discussed t-SNE itself and the main features of the tool (cf**B**: Study Design
Each participant took part individually (i.e., the study was performed asynchronously for each subject, in a silent room), using the same hardware, and the study was organized into four main steps, which were identical for both groups except that each interacted with the corresponding group’s tool (GEP or t-viSNE)**C**: supplemental material of this work). An illustrated transcription of this tutorial was available at all times in the form of a printout. | ABC | CBA | BCA | BAC | Selection 4 |
**A**: This paper develops and applies a test to known algorithms, including Grey Wolf Optimizer, Whale Optimization, and Harris Hawk, which fail this test. However, algorithms such as DE, GA, and PSO pass the test. This test is a useful tool to solve the centre-bias problem that has already been studied in [25].
**B**: A Simple statistical test against origin-biased metaheuristics - 2024 [31]: The authors have developed a test to determine algorithm bias. The test is based on the idea that an unbiased algorithm can choose either direction for one of two different local optima in a function**C**: If there is a difference in behavior between independent runs, then the algorithm is likely biased. Algorithms that are biased in terms of the fitness function can lead to undesired behavior | CAB | ACB | CBA | BAC | Selection 1 |
**A**: All code is downloaded from the authors’ homepages.
**B**: Besides, four GAE-based methods are used, including GAE [20], MGAE [21], GALA [32], and SDCN [31]**C**: Three deep clustering methods for general data, DEC [8], DFKM [9], and SpectralNet [7], also serve as important baselines | ACB | BCA | CAB | CBA | Selection 4 |
**A**: In general, tests against Web servers have a higher applicability rate than the tests with Email or DNS servers, regardless of which technique was used (IPID or PMTUD). The number of Web servers is much larger than the others**B**: Furthermore, we find that when a Web server is not available (“N/A”), both Email and DNS servers cannot be tested, either. This also results in much higher N/A outcomes for tests against Email and DNS servers as opposed to Web servers.
**C**: It is much easier to set up a Web server than an Email server or DNS server. Considering that DNS servers and Email servers are more likely to be hosted by providers, they also have a higher probability of getting new system updates | ABC | BCA | ACB | ABC | Selection 3 |
**A**: This design introduces variation in training inputs, which makes it harder to learn consistent context patterns. For this task, semisupervised learning techniques, such as self-labeled samples, may help**B**:
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer**C**: If the context layer can process unlabeled data, then it is no longer necessary to include every class in every batch. The full six-gas sensor drift dataset can be used, as well as other unbalanced and therefore realistic datasets. | ABC | CBA | CBA | BAC | Selection 4 |
**A**:
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]**B**: In fact, the construction to generate these semigroups is quite simple [4, Proposition 4.1] (compare also to 3). The same construction can also be used to generate free monoids as automaton semigroups or monoids. Here, the main difference is that the free monoid in one generator can indeed be generated by an automaton: it is generated by the adding machine (see 1), which also generates the free group of rank one if inverses are added. On a side note, it is also worthwhile to point out that – although there does not seem to be much research on the topic – there are examples to generate the free inverse semigroup of rank one as a subsemigroup of an automaton semigroup [14, Theorem 25] and an adaption to present the free inverse monoid of rank one as an automaton semigroup [6, Example 2] (see also [8, Example 23]).**C**: While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much simpler. While it is known that the free semigroup of rank one is not an automaton semigroup [4, Proposition 4.3], the free semigroups of higher rank can be generated by an automaton [4, Proposition 4.1] | ACB | BCA | CAB | BCA | Selection 1 |
**A**: We hypothesize that degrading performance on the train set helps forget linguistic biases, which in turn helps accuracy on VQA-CPv2’s test set but hurts accuracy on VQAv2’s val set.
**B**: As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0–14.0% and 3.3–10.5% drops in the training accuracy on VQA-CPv2 and VQAv2, respectively**C**: We compare the training accuracies to analyze the regularization effects | BCA | CBA | BCA | ACB | Selection 2 |
**A**:
Prior research on readability, based on small corpora of privacy policies, had found that they were generally hard to understand for the average internet user**B**: Our large-scale analysis using the Flesch-Kincaid readability metric was consistent with prior findings**C**: We found that on average about 14.87 years of education, or roughly two years of U.S. college, was required to understand a privacy policy. | BCA | BAC | ABC | BCA | Selection 3 |
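The Flesch-Kincaid grade level mentioned in this row has a standard published formula, reproduced here as a small helper; the example counts are made up.

```python
def flesch_kincaid_grade(total_words: int, total_sentences: int,
                         total_syllables: int) -> float:
    """Flesch-Kincaid grade level (standard published formula)."""
    return (0.39 * total_words / total_sentences
            + 11.8 * total_syllables / total_words
            - 15.59)

# A policy needing ~14.87 years of education corresponds to a grade of ~14.87.
print(flesch_kincaid_grade(total_words=2000, total_sentences=80,
                           total_syllables=3800))  # ~16.6
```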
**A**: The data set is a binary classification problem and contains 165 diseased and 138 healthy patients.
Hence, we choose micro-average to weight the importance of the largest class, even though the impact is low because of the lack of any significant imbalance in the dependent variable**B**: The dice glyphs visible on the right-hand side of the StackGenVis interface (a) are static and only used to indicate that specific views do not use all pre-selected metrics. For instance, the performance comparison view (c) only uses four metrics. After this initial tuning of the metrics, we press the Confirm button to move further to the exploration of algorithms.**C**: Weighted-average calculates the metrics for each label and finds their average weighted by support (the number of true instances for each label) | CBA | BAC | BCA | ABC | Selection 3 |
**A**: FewRel is a relation classification dataset with 65/5/10 tasks for meta-training/meta-validation/meta-testing.**B**:
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]**C**: They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples | ABC | CBA | CBA | CAB | Selection 4 |
**A**:
Activated Subarray with Limited DREs: As shown in Fig. 1, given a certain azimuth angle, there are limited DREs that can be activated**B**: If an inappropriate subarray is activated, the beam angle may go beyond the radiation range of certain subarray elements, degrading the beam gain and SE.**C**: Due to the directivity, the DREs of the CCA subarray at different positions are anisotropic, and this phenomenon is different from the UPA | ABC | BAC | ABC | ACB | Selection 4 |
**A**: This will be bootstrapped to the multi-color case in later sections**B**: We**C**: Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on
the left must be connected, via the unique edge relation, to every node on the right – regardless of the matrix | ABC | BAC | BAC | ACB | Selection 4 |
**A**: The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013)**B**: The evolution of such a population distribution is characterized by a partial differential equation (PDE) known as the continuity equation. In particular, we develop a generalized notion of one-point monotonicity (Harker and Pang, 1990), which is tailored to the Wasserstein space, especially the first variation formula therein (Ambrosio et al., 2008), to characterize the evolution of such a PDE solution, which, by a discretization argument, further quantifies the evolution of the induced feature representation.
**C**: Specifically, by exploiting the permutation invariance of the parameter, we associate the neural network and its induced feature representation with an empirical distribution, which, at the infinite-width limit, further corresponds to a population distribution | ABC | ACB | CBA | CBA | Selection 2 |
**A**: Yu et al. (2018) suggest that skip connections are “shallow” themselves, and only fuse by simple, one-step operations, and therefore Yu et al. (2018) augment standard architectures with deeper aggregation to better fuse information across layers to improve recognition and resolution**B**: (2018) propose a multi-layer representation fusion approach to learning a better representation from the layer stack. Dou et al. (2018) simultaneously expose all layer representations with layer aggregation. Dou et al. (2019) propose to use routing-by-agreement strategies to aggregate layers dynamically.
**C**: Shen et al. (2018) propose a densely connected NMT architecture to create new features with dense connections. Wang et al | ACB | CBA | BCA | BAC | Selection 1 |
**A**: pre-spectral space**B**: We are going to exhibit
a surjective map $f$ from $Y$ to the logical sum $X$ of**C**: Recall that $\langle Y,\uptau_{Y},\mathcal{K}^{\circ}(Y)\rangle$ is a lpps | ACB | CAB | CAB | BAC | Selection 1 |
**A**: The rectification results on the synthesized and real-world scenarios also demonstrated our approach’s superiority compared with the state-of-the-art methods**B**: Like most of the assumptions in the other works [21, 23, 8, 11, 12, 14], our approach has two main limitations when extended to more complicated applications.**C**:
In this work, we presented a new learning representation for the deep distortion rectification and implemented a standard and widely-used camera model to validate its effectiveness | CBA | BCA | CBA | CBA | Selection 2 |
**A**: We adopt the linear learning rate decay strategy as default in the Transformers framework.
Table 5 shows the test accuracy results of the methods with different batch sizes**B**: We don’t use training tricks such as warm-up [7]**C**: SNGM achieves the best performance for almost all batch size settings. | CBA | BAC | CBA | CBA | Selection 2 |
**A**: We use the suffixes BB and Poly to distinguish these settings. For example, 2S-Sup-BB is the previously defined 2S-Sup in the black-box model.
**B**: The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$**C**: We also consider the polynomial-scenarios model [23, 15, 21, 10], where the distribution $\mathcal{D}$ is listed explicitly | CBA | BAC | CAB | BAC | Selection 3 |
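A hedged sketch of how the two access models in this row differ in code: under the black-box model only sampling is possible, while the polynomial-scenarios model permits exact expectations. All names are illustrative assumptions.

```python
from typing import Callable, FrozenSet, List, Sequence, Tuple

Scenario = FrozenSet[int]

# Black-box model: only a sampling oracle for scenarios A ~ D is available.
BlackBoxOracle = Callable[[], Scenario]

def sample_scenarios(oracle: BlackBoxOracle, n: int) -> List[Scenario]:
    """All the black-box model allows: draw n independent samples from D."""
    return [oracle() for _ in range(n)]

# Polynomial-scenarios model: D is listed explicitly as (scenario, prob) pairs.
PolyScenarios = Sequence[Tuple[Scenario, float]]

def expected_cost(scenarios: PolyScenarios, cost: Callable[[Scenario], float]) -> float:
    """With an explicit list, expectations over D can be computed exactly."""
    return sum(p * cost(A) for A, p in scenarios)
```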
**A**: Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (Lemma 3.3)**B**: (Lemma 3.1).
To this end, we first estimate the upper bound of the mean square increasing rate of the local optimizers’ states (Lemma 3.2)**C**: Further, the estimations of these rates are substituted into the recursive inequality of the conditional mean square error between the states and the global optimal solution. Finally, by properly choosing the step sizes, we prove that the states of all local optimizers converge to the same global optimal solution almost surely by the non-negative supermartingale convergence theorem. The key is that the algorithm step sizes should be chosen carefully to eliminate the possible increasing effect caused by the linear growth of the subgradients and to balance the rates between achieving consensus and seeking the optimal solution. | BCA | BCA | ABC | BAC | Selection 4 |
**A**: We use the US Census data [29], eliminate the tuples with missing values, and randomly select 40,152 tuples with eight attributes. The QI attributes are gender, age, relationship, marital status, race, education, and hours per week, and the sensitive attribute is salary. Table 1 describes the attributes in detail.**B**: We apply Mondrian [14], which is one of the most effective generalization approaches, and Anatomy [33], which always preserves the best information utility, as the baselines**C**:
This section evaluates the effectiveness of the proposed MuCo algorithm | BCA | BAC | BCA | CBA | Selection 4 |
**A**: Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from the default 28 to 70 during inference, we gain another 1.1 mAP at no extra training cost. Large Backbone**B**: The P6 level of FPN is also added for both the coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our split validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020).
In the following, we refer to the model in the last row (74.3 mAP) of Table 2 as the PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on the validation and testing sets, respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on the validation set. We believe that PointRend’s iterative rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as the ensemble candidate for the final submission.**C**: X101-64x4d Xie et al. (2017) is then used as the large backbone and it gains 6 mAP over ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from the original 14 to 26 for the coarse prediction head, and from 14 to 24 for the fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOv2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and lower memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training | CBA | ABC | CAB | ACB | Selection 4 |
**A**: For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture**B**: [KKLMS] establishes a weaker version of the conjecture**C**: Its introduction is also a good source of information on the problem.
| ABC | CBA | BCA | BAC | Selection 1 |
**A**:
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume the environment is stationary, which is insufficient to model problems with time-varying dynamics. For example, consider online advertising**B**: The instantaneous reward is the payoff when viewers are redirected to an advertiser, and the state is defined as the details of the advertisement and user contexts. If the target users’ preferences are time-varying, time-invariant reward and transition functions are unable to capture the dynamics**C**: In general, nonstationary random processes naturally occur in many settings and are able to characterize larger classes of problems of interest (Cover & Pombra, 1989). Can one design a theoretically sound algorithm for large-scale nonstationary MDPs? In general it is impossible to design an algorithm that achieves sublinear regret for MDPs with non-oblivious adversarial reward and transition functions in the worst case (Yu et al., 2009). Then what is the maximum nonstationarity a learner can tolerate to adapt to the time-varying dynamics of an MDP with a potentially infinite number of states? This paper addresses these two questions.
**A**: The rising attention of fake news in the local scene has motivated various research including studies on the perceptions and motivations of fake news sharing (Chen et al., 2015) and responses to fake news (Edson C Tandoc et al., 2020). Although there are parallels between these studies and ours, we want to highlight that our study explores fake news in general media instead of solely social media, examining both usage and trust. Furthermore, we investigate more broadly the attitudes and behaviors on news sharing and fake news.
**B**: As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Government to more directly address falsehoods that hurt the public interest**C**: Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019) | ABC | CBA | CAB | ABC | Selection 2 |
**A**: Overall, DAN exhibits significantly better performance than GCN, GAT, or their combination. The decentralized attention, which considers neighbors as queries, consistently outperforms the centralized GAT across varying entity degrees.**B**:
The results on the ZH-EN dataset are depicted in Figure 7. For entities with only a few neighbors, the advantage of leveraging DAN is not significant**C**: However, as the degree increases, incorporating DAN yields more performance gain. This upward trend does not halt until the degree exceeds 20 | ABC | CAB | BAC | ACB | Selection 2 |
**A**: Recall that we have “Common” modules and “VDM-specific” modules according to Tab. II**B**: Common modules are used for policy optimization rather than exploration; all compared methods use the same common modules, which are not tuned. “VDM-specific” modules include hyper-parameters for the proposed VDM, and these hyper-parameters are tuned through grid search [53, 54, 55] for better performance.
**C**: In this section, we present the results of the ablation study of VDM | CAB | BAC | ABC | BCA | Selection 4 |
**A**: The observations made in 2D remain valid**B**: However, Floater-Hormann becomes indistinguishable from $5^{th}$-order splines.
Further, when considering the number of coefficients/nodes required to determine the interpolant, plotted in the right panel (with logarithmic scales on both axes)**C**: The polynomial convergence rates of Floater-Hormann and all | CBA | CAB | CAB | ABC | Selection 4 |
**A**: the disentangled factors) and correlated components $Z$, a.k.a. nuisance variables, which encode the detail information not stored in the independent components. A series of works starting from [beta] aims to achieve that by regularizing the models, up-weighting certain terms in the ELBO formulation which penalize the (aggregate) posterior to be factorized over all or some of the latent dimensions [kumar2017variational, factor, mig].
I think I would make what these methods are doing clearer**B**: They aren’t really separating into nuisance and independent only; they are also throwing away nuisance.**C**: Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e | CBA | ABC | CAB | BCA | Selection 4 |
**A**: In short, the direction of current, which is the flow of electricity, is determined only by the height of the potential, not by the structure or shape of the circuit.**B**:
Exploration based on previous experiments and graph theory found errors in structural computers with electricity as a medium**C**: The cause of these errors is the basic nature of electric charges: ‘flowing from high potential to low’ | CAB | ABC | ACB | ACB | Selection 1 |
**A**: There has been extensive study about a family of polynomial maps defined through a parameter $a\in\mathbb{F}$ over finite fields**B**: Conditions for such families of maps to define a permutation of the field $\mathbb{F}$ are well studied and established for special classes like Dickson polynomials [20], linearized polynomials [21] and a few other specific forms [13, 14], to name a few.
**C**: Some well-studied families of polynomials include the Dickson polynomials and reverse Dickson polynomials, to name a few | BCA | ACB | BAC | ABC | Selection 2 |
**A**: Its high FPR in view selection appeared to negatively influence its test accuracy, as there was generally at least one sparser model with better accuracy in both our simulations and real data examples. Although nonnegative ridge regression shows that the nonnegativity constraints alone already cause many coefficients to be set to zero, if one assumes the true underlying model to be sparse, one should probably choose one of the meta-learners specifically aimed at view selection.**B**: This is not surprising considering it performs view selection only through its nonnegativity constraints**C**:
Excluding the interpolating predictor, nonnegative ridge regression produced the least sparse models | CBA | BCA | ABC | ACB | Selection 1 |
**A**: The normal dependency pattern is represented by the expected value of a variable given the values of its relevant variables, while the observed value of the variable along with the values of its relevant variables constitutes the observed pattern. This comparison facilitates a comprehensive understanding of the anomaly’s characteristics and the factors contributing to its detection.**B**:
To interpret an anomaly detected by DepAD, we begin by identifying variables with substantial dependency deviations. This is achieved by comparing the observed values of variables with their corresponding expected values**C**: A larger deviation indicates a higher contribution of that variable to the anomaly. Furthermore, we gain insights into how the anomaly differs from normal behaviors by contrasting the observed dependency pattern with the normal dependency pattern between a variable and its relevant variables | BCA | BCA | BCA | CAB | Selection 4 |
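A minimal sketch of the interpretation step described in this row: score each variable by the gap between its observed value and the value predicted from its relevant variables, then rank. The `predictors` mapping and its sklearn-style interface are assumptions, not the paper's exact implementation.

```python
import numpy as np

def deviation_contributions(x_obs: np.ndarray, predictors: dict) -> dict:
    """Rank variables by dependency deviation.

    `predictors` maps a variable index j to (model, relevant_idx): a fitted
    regressor predicting variable j from its relevant variables.
    """
    contributions = {}
    for j, (model, relevant_idx) in predictors.items():
        expected = model.predict(x_obs[relevant_idx].reshape(1, -1))[0]
        contributions[j] = abs(x_obs[j] - expected)  # larger => more anomalous
    # Variables with the largest deviation contribute most to the anomaly.
    return dict(sorted(contributions.items(), key=lambda kv: -kv[1]))
```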
**A**:
Comparison with Amani & Thrampoulidis [2021]. While the authors in Amani & Thrampoulidis [2021] also extend the algorithms of Faury et al**B**: They model various click-types for the same advertisement (action) via the multinomial distribution. Further, they consider actions played at each round to be non-combinatorial, i.e., a single action as opposed to a bundle of actions, which differs from the assortment optimization setting in this work. Therefore, their approach and technical analysis are different from ours.**C**: [2020] to a multinomial problem, their setting is materially different from ours | CBA | BCA | ACB | CBA | Selection 3 |
**A**: Following FPN, some methods have been proposed to further improve the architecture for higher efficiency and better accuracy, such as PANet [25], NAS-FPN [12], and BiFPN [34]. Our proposed cross-scale graph pyramid (xGPN) adopts the idea of FPN and builds a pyramid of video features in the temporal domain instead of images in the spatial domain. Moreover, we embed cross-scale graph networks in the pyramid levels.**B**: FPN has become a popular base architecture for many object detection methods in recent years (e.g., [30, 35, 37, 47])**C**:
A representative work for object scale invariance is the feature pyramid network (FPN) [22], which generates multi-scale features using an architecture of encoder and decoder pyramids | ACB | ACB | CAB | CBA | Selection 4 |
**A**: The color-encoding diverges from purple to green for negative to positive difference.
In case the K-means clustering functionality is active, we use bar charts to depict the distribution of instances in the 100 individual cells (see VisEvol (g), bottom)**B**: Afterwards, the predictive power for every cell is computed on average from all the instances that belong to it.**C**: Each cell of the grid (as shown in Figure 2(d.2–d.4)) then presents the computed difference in predictive power for all its instances (from −100% to +100%) for the selected against all models | BAC | ACB | BCA | BAC | Selection 3 |
**A**: The paper is organized as follows**B**: The decentralized state-dependent Markov matrix synthesis (DSMC) algorithm is introduced in Section III.
Section IV introduces the probabilistic swarm guidance problem formulation, and presents numerical simulations of swarms converging to desired distributions. The paper is concluded in Section V.**C**: Section II presents the consensus protocol with state-dependent weights | ACB | BCA | CBA | BAC | Selection 1 |
**A**: Moreover, for general non-rigid settings learning these basis functions has also been proposed [43].
A wide variety of extensions to make functional maps more robust or more flexible have been developed. This includes orientation-preservation [56], image co-segmentation [75], denoising [23, 55], partiality [58], and non-isometries [22].**B**: The functional mapping is represented as a low-dimensional matrix for suitably chosen basis functions**C**: The classic choice are the eigenfunctions of the LBO, which are invariant under isometries and predestined for this setting | BAC | CBA | CAB | CBA | Selection 3 |
**A**: Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest in itself**B**: Thus, these two graph classes can now be recognized in the same way both theoretically and algorithmically.
**C**: We presented the first recognition algorithm for both path graphs and directed path graphs | ABC | BCA | ACB | CAB | Selection 2 |
**A**: For the four datasets, the true labels are suggested by the original authors, and they are regarded as the “ground truth” to investigate the performances of Mixed-SLIM methods in this paper.**B**: The four datasets can be downloaded from
http://www-personal.umich.edu/~mejn/netdata/**C**: In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection | BAC | ACB | CBA | CAB | Selection 3 |
**A**: See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al**B**: (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018); Sato et al. (2019); Zhou et al. (2019); Weber and Sra (2019) and the references therein.
Also see recent reviews (Ferreira et al., 2020; Hosseini and Sra, 2020)**C**: (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al | BCA | BCA | ACB | ABC | Selection 3 |
**A**: The results of MetaVIM are superior to CoLight on each scenario and configuration, resulting in a mean improvement of 43**B**: 4) The neighbors’ information is modeled in CoLight and it performs well. It indicates that modeling neighbors is critical for coordination**C**: Compared to CoLight, MetaVIM proposes an intrinsic reward to help the policies learn stably, and uses a latent variable to better trade off exploration and exploitation.
In addition, CoLight needs the agents’ communications in testing, which is unnecessary in MetaVIM. This makes MetaVIM easy to deploy. | BAC | CBA | CAB | ACB | Selection 1 |
**A**: The algorithm builds on the concept of a profile set, which serves as an approximation of the items that are expected to appear in the sequence, given the frequency predictions**B**:
We first present and analyze an algorithm called ProfilePacking, that achieves optimal consistency, and is also efficient if the prediction error is relatively small**C**: This is a natural concept that, perhaps surprisingly, has not been exploited in the long history of competitive analysis of bin packing, and which can be readily applicable to other online packing problems, such as multi-dimensional packing (?) and vector packing (?), as we discuss in Section 7. | CAB | BCA | CBA | BAC | Selection 4 |
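To make the profile-set idea in this row concrete, here is a hedged sketch: turn predicted item-size frequencies into a multiset of the items expected over the next m arrivals. The names and the rounding rule are illustrative, not the paper's exact construction.

```python
from collections import Counter

def profile_set(predicted_freq: dict, m: int) -> Counter:
    """Multiset approximating the items expected in the next m arrivals,
    given predicted item-size frequencies (size -> predicted fraction)."""
    profile = Counter()
    for size, freq in predicted_freq.items():
        profile[size] = round(freq * m)
    return profile

# With predictions {3: 0.5, 7: 0.5} and window m=10, the profile contains five
# items of size 3 and five of size 7, which can be pre-packed into bins of
# capacity 10 ahead of time.
print(profile_set({3: 0.5, 7: 0.5}, 10))  # Counter({3: 5, 7: 5})
```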
**A**: Throughout all experiments, we train models with Chamfer distance. We also set $\lambda=0.0001$. We denote LoCondA-HC when HyperCloud is used as the autoencoder architecture (Part A in Fig. 1) and LoCondA-HF for the HyperFlow version.
**B**: In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model**C**: Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods | CAB | BAC | CBA | ACB | Selection 1 |
**A**: As a result, we get a common saddle point problem that includes both primal and dual variables. After that, we employ the Mirror-Prox algorithm and bound the norms of dual variables at solution to assist the theoretical analysis. Finally, we demonstrate the effectiveness of our approach on the problem of computing Wasserstein barycenters (both theoretically and numerically).**B**: Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers**C**:
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm | BAC | CAB | CBA | BCA | Selection 3 |
**A**: In this section we present some experimental results to reinforce
Conjecture 14**B**: In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of graphs by random sampling instances.**C**: We proceed by trying to find a counterexample based on our previous observations | ACB | BCA | CAB | ABC | Selection 1 |
**A**: A major part of this paper, all of Sections 3 and 4, is devoted to adapting it to handle the $k$-partite structure of colorful intersection patterns.**B**: This technique, which we briefly outline here, was specifically designed for complete intersection patterns**C**:
The proof of Theorem 2.1 is quite involved and builds on the method of constrained chain maps developed in [18, 35] to study intersection patterns via homological minors [37] | CBA | BCA | ABC | ABC | Selection 1 |
**A**: Ground truth versus per class predicted probability.
In Section 4.1, we explain that the data space view presents the predicted probabilities for the ground truth class**B**: Although this idea appears valuable and straightforward for binary classification problems, it would not scale well with the multiclass problems addressed by our VA system. It would be tough to present confusion between classes when more than a couple of class labels are available, which is typical in multiclass problems. Moreover, the limited amount of space in that view is another reason we abandoned this idea, despite it being a valid alternative.**C**: A different approach could have been to visualize the predicted probability for every class | ACB | BAC | BCA | CBA | Selection 1 |
**A**: Using Bayesian optimization-based tuning for enhanced performance has been further demonstrated for cascade controllers of linear axis drives, where data-driven performance metrics have been used to specifically increase the traversal time and the tracking accuracy while reducing vibrations in the systems [11, 12]. The approach has been successfully applied to linear and rotational axis embedded in grinding machines and shown to standardize and automate tuning of multiple parameters [13].
**B**: In MPC, closed-loop performance is pushed to the limits only if the plant under control is accurately modeled, alternatively, the performance degrades due to imposed robustness constraints. Instead of adapting the controller for the worst case scenarios, the prediction model can be selected to provide the best closed-loop performance by tuning the parameters in the MPC optimization objective for maximum performance [8, 9, 10]**C**: MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following various optimization methods, including MPC, feed-forward PID control strategies, or iterative-learning control [6, 7], where friction or vibration-induced disturbances can be corrected | CAB | ABC | BCA | CBA | Selection 4 |
**A**: Rather, it is ideal if the methods can generalize without being tuned on the test distribution and we study this ability by comparing models selected through varying tuning distributions**B**: To control the tuning distribution, we define a generalization of the mean per group accuracy (MPG) metric, that can interpolate within as well as extrapolate beyond the train and test distributions:
**C**: Assuming access to the test distribution for model selection is unrealistic and can result in models being right for the wrong reasons [64] | BCA | ACB | ABC | ACB | Selection 1 |
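As a hedged sketch of the generalized mean-per-group accuracy (MPG) described in this row: compute per-group accuracies and average them under a chosen group-weighting, which can mimic the train distribution, the test distribution, or anything in between. The exact generalization in the paper may differ.

```python
import numpy as np

def mean_per_group_accuracy(y_true, y_pred, groups, weights=None):
    """Weighted mean of per-group accuracies.

    With uniform weights this is plain MPG; non-uniform weights let the metric
    interpolate between (or extrapolate beyond) group distributions. The
    weighting scheme here is an assumption, not from the quoted text.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    uniq = np.unique(groups)
    accs = np.array([np.mean(y_pred[groups == g] == y_true[groups == g])
                     for g in uniq])
    if weights is None:
        weights = np.ones(len(uniq)) / len(uniq)
    return float(np.dot(weights, accs))
```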
**A**: In [127], they estimate the general visual attention and humans’ gaze directions in images at the same time. Kellnhofer et al. propose a temporal 3D gaze network [43]. They use a bi-LSTM [128] to process a sequence of 7 frames to estimate not only gaze directions but also gaze uncertainty.**B**: Also, visual saliency shows a strong correlation with human gaze in scene images [125, 126]**C**: Recasens et al. present an approach for following gaze in video by predicting where a person (in the video) is looking, even when the object is in a different frame [124].
They build a CNN to predict the gaze location in each frame and the probability of each frame containing the gazed object | BCA | ACB | CBA | CAB | Selection 3 |
**A**: Experimental results are carried out on the Real-world Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region**B**: To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (i.e. forehead and eyes)**C**: Next, we describe the selected regions using a pre-trained deep learning model as a feature extractor. This strategy is more suitable in real-world applications compared to restoration approaches. Recently, some works have applied supervised learning on the missing region to restore it, such as in din2020novel . This strategy, however, is a difficult and highly time-consuming process.
| BCA | CAB | ABC | CBA | Selection 3 |
**A**: Relatedly, refer to Das and Pfenning [DP20a] for a proof of type safety for a session type system with arithmetic refinements**B**: Now, we are ready to prove termination**C**: In contrast to the termination proof for base SAX [DPP20], we explicitly construct a model of SAX in sets of terminating configurations, also known as semantic typing [App01, HLKB21]. This leaves open several possibilities—for example, we could reason about programs that fail to syntactically typecheck [JJKD17, DTK+19] or analyze fixed points of semantic type constructors. Our approach mirrors that for natural deduction:
| BCA | ABC | ABC | BAC | Selection 4 |
**A**: [3]-I and [3]-II represent the first scheme and the second scheme in [3], respectively. Compared with [3]-II, the main advantage of FairCMS-II is that it solves the problem that users can escape traceability by generating two different fingerprints, as discussed in the third last paragraph of Section V-A.
**B**: ‘✓∖’ means that the privacy of cloud media is protected, but that protection is not IND-CPA secure**C**: ‘−’ indicates that the property is not scored because the involvement of the cloud is not considered | CAB | BCA | BAC | CBA | Selection 4 |
**A**: As a consequence, we can model only these beneficial interactions with the next interaction aggregation component**B**: To check the necessity of this component, we remove it, so that all pairs of feature interactions are modeled as a fully-connected graph.**C**:
GraphFM(-S): interaction selection is the first component in each layer of GraphFM, which selects only the beneficial feature interactions and treats them as edges | BAC | BCA | BAC | ACB | Selection 2 |
**A**: We note that the LBTFW-GSC algorithm from Dvurechensky et al**B**: [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refer to as the Frank-Wolfe algorithm with Backtrack (B-FW) for simplicity.
**C**: [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al | CAB | ACB | ABC | CAB | Selection 2 |
**A**: Nevertheless, we show how to set parameters so that putting on hold DFS over large trees increases the number of passes only by a $\operatorname{poly}(1/\varepsilon)$ factor.
**B**: Note that pausing DFS execution of some search trees increases the time required to explore the entire graph**C**: Our algorithm “puts on hold” (or pauses) DFS over search trees that become too large | ABC | BAC | CBA | BCA | Selection 3 |
**A**: In the second part of this paper, we propose a broadcast-like CPP algorithm (B-CPP) that allows for asynchronous updates of the agents: at every iteration of the algorithm, only a subset of the agents wake up to perform prescribed updates**B**: Thus, B-CPP is more flexible, and due to its broadcast nature, it can further save communication over CPP in certain scenarios [63]**C**: We show that B-CPP also achieves linear convergence for minimizing strongly convex and smooth objectives.
| CBA | ACB | ACB | ABC | Selection 4 |
**A**: We make a detailed comparison with them in Appendix C. Due to the fact that we consider a personalized setting, we can have a significant gain in communications. For example, when $\lambda=0$ or small enough in (1), the importance of local models increases and we may communicate less frequently.
We now outline the main contributions of our work as follows (please also refer to Table 1 for an overview of the results):**B**: In the literature, there are works on general (non-personalized) SPPs**C**: To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms, and derive the computational and communication lower bounds for this setting | CBA | BAC | BCA | CAB | Selection 1 |
**A**: multi-agent problem. These tools are amenable to scaling approaches, including reinforcement learning, function approximation, and online solution solvers; however, we leave this to future work.
**B**: We provide a tractable approach to select from the space of (C)CEs (MG), and a novel training framework that converges to this solution (JPSRO). The result is a set of tools for theoretically solving any complete-information (payoffs for all players are required for the correlation device)**C**: In this work we propose using correlated equilibrium (CE) (Aumann, 1974) and coarse correlated equilibrium (CCE) as a suitable target equilibrium space for n-player, general-sum games (we mean games, also called environments, in a very general sense: extensive form games, multi-agent MDPs and POMDPs (stochastic games), and imperfect information games are all solvable with this approach). The (C)CE solution concept has two main benefits over NE: firstly, it provides a mechanism for players to correlate their actions to arrive at mutually higher payoffs and, secondly, it is computationally tractable to compute solutions for n-player, general-sum games (Daskalakis et al., 2009) | CAB | CBA | BCA | BAC | Selection 2 |
**A**:
Differential privacy essentially provides the optimal asymptotic generalization guarantees given adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015)**B**: However, its optimality is for worst-case adaptive queries, and the guarantees that it offers only beat the naive intervention—of splitting a dataset so that each query gets fresh data—when the input dataset is quite huge (Jung et al., 2020)**C**: A worst-case approach makes sense for privacy, but for statistical guarantees like generalization, we only need statements that hold with high probability with respect to the sampled dataset, and only on the actual queries issued. | CBA | ABC | BAC | BAC | Selection 2 |
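The "naive intervention" mentioned above is easy to state in code; a minimal sketch, assuming each query is a callable statistic and accepting that accuracy degrades as the per-query fold shrinks, which is why this only beats DP on huge datasets.

```python
import numpy as np

def fresh_data_answers(data, queries, seed=0):
    """Split the dataset so each adaptive query is answered on fresh,
    disjoint data; with k queries each answer sees only n/k samples."""
    rng = np.random.default_rng(seed)
    data = rng.permutation(data)
    folds = np.array_split(data, len(queries))
    return [q(fold) for q, fold in zip(queries, folds)]
```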
**A**: We use reduction steps inspired by the kernelization algorithms [12, 46] for Feedback Vertex Set to bound the size of 𝖺𝗇𝗍𝗅𝖾𝗋 in the size of 𝗁𝖾𝖺𝖽, by analyzing an intermediate structure called feedback vertex cut**B**:
Our algorithmic results are based on a combination of graph reduction and color coding [6] (more precisely, its derandomization via the notion of universal sets)**C**: After such reduction steps, the size of the entire structure we are trying to find can be bounded in terms of the parameter k. We then use color coding [6] to identify antler structures. A significant amount of effort goes into proving that the reduction steps preserve antler structures and the optimal solution size. | ACB | CAB | BAC | ACB | Selection 3 |
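A generic sketch of the randomized color-coding outer loop referenced above; `find_colorful` is a hypothetical problem-specific subroutine (here it would search for a colorful antler), and derandomization would replace the random colorings with a universal set of colorings.

```python
import math
import random

def color_coding(vertices, k, find_colorful, trials=None, seed=0):
    """Randomly k-color the vertices and look for a 'colorful' copy of the
    target structure. A fixed size-k target is colorful with probability
    k!/k^k >= e^{-k}, so on the order of e^k trials suffice w.h.p."""
    rng = random.Random(seed)
    trials = trials or int(10 * math.e ** k)
    for _ in range(trials):
        coloring = {v: rng.randrange(k) for v in vertices}
        hit = find_colorful(coloring)
        if hit is not None:
            return hit
    return None
```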
**A**: In contrast, a few methods [2, 190, 147] attempt to address unreasonable occlusion when it occurs. Specifically, they first estimate the relative depth relation between the foreground object and the surrounding background objects**B**: Then, they remove the occluded part of the foreground object. In this way, they are able to generate composite images with reasonable inter-object occlusions.**C**:
In the end, we briefly discuss the occlusion issue. Most of the above methods seek reasonable placements to avoid the occurrence of occlusion, i.e., the inserted foreground is not occluded by background objects | CAB | ABC | BCA | CBA | Selection 3 |
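A minimal sketch of the depth-based occlusion handling described in the excerpt, assuming aligned H×W depth maps and a boolean foreground mask (all hypothetical inputs; the cited methods learn these estimates).

```python
import numpy as np

def composite_with_occlusion(bg, fg, fg_mask, bg_depth, fg_depth):
    """Paste the foreground only where it is nearer to the camera than the
    background, so occluded foreground pixels are removed."""
    visible = fg_mask & (fg_depth < bg_depth)  # keep non-occluded fg pixels
    out = bg.copy()
    out[visible] = fg[visible]
    return out
```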
**A**: Transfer learning: Firstly, it can serve as an ideal testbed for transfer learning algorithms, including meta-learning [5], AutoML [23], and transfer learning on spatio-temporal graphs under homogeneous or heterogeneous representations**B**: In the field of urban computing, it is highly probable that the knowledge required for different tasks, cities, or time intervals is correlated**C**: By leveraging this transferable knowledge across domains with this multi-city, multi-task data, CityNet can help researchers alleviate the data scarcity problems that arise in newly-built or under-developed cities.
| ACB | CAB | CAB | ABC | Selection 4 |
**A**: The benefit of working with models that are built upon or include a point predictor is that one also gets a direct estimate of the response variable. Since this is important in many situations, the R²-coefficients are reported (as noted in the section on quantile regression, this method cannot estimate the conditional mean; instead, the conditional median is estimated).**B**: In the experiments only the predictive power is considered**C**:
Aside from the above quality measures, some other quantities might be of importance depending on the application | CAB | BCA | CBA | BCA | Selection 3 |
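For reference, the R² coefficient reported above has the standard definition below; a quantile regressor would plug in its conditional-median estimate here, since it has no conditional-mean estimate.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```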
**A**: We can further differentiate Bar(new) and Bar(cont), representing respectively the beginning of a new bar and a continuation of the current bar, and always have one of them before a Sub-bar token. This way, the tokens would always occur in groups of four for MIDI scores.
For MIDI performances, six tokens would be grouped together, including Velocity and Tempo. Following the logic of Bar, if there is no tempo change, we simply repeat the tempo value.**B**: 1(a) shows that, except for Bar, the other tokens in a REMI sequence always occur consecutively in groups, in the order of Sub-bar, Pitch, Duration**C**: Fig | BCA | BAC | ACB | CBA | Selection 4 |
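A small sketch of the fixed-size grouping the excerpt describes, assuming the token stream has already been regularized as stated (token names are illustrative).

```python
def group_remi_tokens(tokens, performance=False):
    """Group a regularized token stream into fixed-size events: four tokens
    (Bar, Sub-bar, Pitch, Duration) for MIDI scores, six for MIDI
    performances (adding Velocity and Tempo)."""
    size = 6 if performance else 4
    assert len(tokens) % size == 0, "stream must be a whole number of groups"
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

# e.g. score stream: Bar(new), Sub-bar, Pitch, Duration, Bar(cont), Sub-bar, ...
```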
**A**: This description draws a comparison e.g**B**: [10] for a survey), where the colors of any two adjacent vertices have to differ by at least k and the colors of any two vertices within distance 2 have to be distinct.
**C**: to the L(k,1)-labeling problem (see e.g | BAC | ABC | ACB | CAB | Selection 3 |
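The two L(k,1) conditions stated in the excerpt are easy to check directly; a minimal verifier, assuming `adj` maps each vertex to its neighbors and `labels` maps vertices to integers.

```python
def is_valid_Lk1_labeling(adj, labels, k):
    """Adjacent vertices must get labels differing by at least k, and
    vertices within distance two must get distinct labels."""
    for u in adj:
        for v in adj[u]:
            if abs(labels[u] - labels[v]) < k:        # distance-1 condition
                return False
            for w in adj[v]:                          # distance-2 condition
                if w != u and labels[w] == labels[u]:
                    return False
    return True
```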
**A**: Based on JSCC, an image transmission system integrating channel output feedback can improve image reconstruction [15]. Similar to text transmission, IoT applications for image transmission have been carried out**B**:
Recently, there have also been investigations on semantic communications for other transmission contents, such as image and speech. A DL-enabled semantic communication system for image transmission, named JSCC, has been developed in [14]**C**: Particularly, a joint image transmission-recognition system has been developed in [16] to achieve high recognition accuracy. A deep joint source-channel coding architecture, named DeepJSCC, has been investigated in [17] to process images with low computational complexity. | ACB | BCA | BAC | ACB | Selection 3 |
**A**: Then, we randomly label 10% of the points in each class for the sampled input point clouds. The final predictions will be back-projected to the original point clouds**B**: Therefore, only 10% of the network input training data and only 0.4% of the original point cloud data are labeled. We also perform experiments with fewer labels, where only 1% of the input points are labeled.**C**:
Weak labels: We follow [13] to annotate only 10% of the points. We first sample 4% of the points from the original data as the network inputs | CAB | BCA | CBA | BAC | Selection 2 |
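The two-stage sampling above (4% of points as input, then 10% labeled per class, i.e. 0.4% of the original cloud) can be sketched as follows; array shapes and the `-1` unlabeled marker are our assumptions.

```python
import numpy as np

def make_weak_labels(points, labels, seed=0):
    """Sample 4% of the original points as network input, then keep labels
    for 10% of the input points per class; the rest are marked unlabeled."""
    rng = np.random.default_rng(seed)
    n = len(points)
    input_idx = rng.choice(n, size=int(0.04 * n), replace=False)
    weak = np.full(len(input_idx), -1)                 # -1 marks "unlabeled"
    sub = labels[input_idx]
    for c in np.unique(sub):
        cls = np.where(sub == c)[0]
        keep = rng.choice(cls, size=max(1, int(0.10 * len(cls))), replace=False)
        weak[keep] = c
    return points[input_idx], weak
```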
**A**: Table 4 shows more depth estimation results on the KITTI val set, comparing the enhanced baseline and our method. Specifically, we evaluate the depth estimation by computing the Scale Invariant Logarithmic (SILog) error, squared Relative (sqRel) error, absolute Relative (absRel) error, and Root Mean Squared Error of the inverse depth (iRMSE)**B**: The depth estimation results clearly demonstrate the effectiveness of our proposed idea of using geometry-guided representation learning to boost depth estimation from monocular images for advancing monocular 3D object detection.
**C**: Our method outperforms the enhanced baseline by large margins on all these evaluation metrics | ACB | BCA | ABC | CBA | Selection 1 |
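For reference, the four metrics named above have standard KITTI-style definitions; the formulas below follow those common conventions, which we assume match the paper's evaluation, over positive ground-truth and predicted depth arrays.

```python
import numpy as np

def depth_metrics(gt, pred, eps=1e-8):
    """SILog, sqRel, absRel, and iRMSE in the usual KITTI conventions."""
    d = np.log(pred + eps) - np.log(gt + eps)
    silog = np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2) * 100   # SILog
    sq_rel = np.mean(((gt - pred) ** 2) / gt)                  # sqRel
    abs_rel = np.mean(np.abs(gt - pred) / gt)                  # absRel
    irmse = np.sqrt(np.mean((1.0 / gt - 1.0 / pred) ** 2))     # iRMSE
    return silog, sq_rel, abs_rel, irmse
```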
**A**: FPNS (Node) rectifies false detections by measuring attributes of the text segments in local graph structures and upgrades GCNs to a multi-task network rather than one for linkage reasoning only, modifications which support each other**B**: This explains why the overall performance is often further improved when both FPNS (Node) and FPNS (GGTR) are applied. The performance improvements reflect that our FPNS strategies can suppress false detections while not overly affecting true detections.
**C**: Both node classification and link prediction utilize the same relational features and boost each other’s performance | ABC | BCA | ABC | ACB | Selection 4 |
**A**: A hash table is an effective method for collecting the statistics of IP addresses [Sanders2015HS]. It uses a hash function to compute a hash code into an array of buckets holding the statistical results**B**: The hash function assigns the key of each IP address to a unique bucket. Unfortunately, the hash function can generate the same hash code for more than one IP address**C**: With the increase in the generation of big data, millions or tens of millions of records have become ubiquitous in network traffic. Therefore, this approach could cause numerous hash collisions, especially for a large number of IP addresses. Although many strategies can be employed to avoid collisions, such as linear probing, quadratic probing, and double hashing, they require extra storage space and computation.
| CAB | ABC | CBA | CBA | Selection 2 |
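To illustrate the collision-resolution cost the excerpt refers to, here is a minimal open-addressing counter with linear probing; the fixed capacity (no resizing, load factor assumed below 1) is a simplifying assumption.

```python
class IPCounter:
    """Hash table of per-IP counts; each extra probe after a collision is
    exactly the additional computation the text describes."""
    def __init__(self, capacity=1 << 20):
        self.keys = [None] * capacity
        self.counts = [0] * capacity
        self.capacity = capacity

    def add(self, ip: str):
        i = hash(ip) % self.capacity
        while self.keys[i] is not None and self.keys[i] != ip:
            i = (i + 1) % self.capacity   # linear probing: try next bucket
        self.keys[i] = ip
        self.counts[i] += 1
```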
**A**: KKT system or saddle point system**B**: Usually, D is assumed to be symmetric and positive semi-definite**C**: In this paper, we only make some assumptions
that can guarantee the invertibility of 𝒜. We assume that A and the Schur complement Schur(𝒜) | ABC | BAC | CBA | CBA | Selection 1 |
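The excerpt names only A, D, and Schur(𝒜); a minimal sketch, assuming the standard 2×2 block form of a saddle point system (the paper's exact blocks may differ):

```latex
\[
\mathcal{A} =
\begin{pmatrix}
A & B^{\top} \\
B & -D
\end{pmatrix},
\qquad
\operatorname{Schur}(\mathcal{A}) = -D - B A^{-1} B^{\top}.
\]
% If A and Schur(A) are both invertible, so is the whole block matrix,
% via the block LDU factorization of \mathcal{A}.
```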
**A**: For each Q and K, we let TDCD train for 5,000 iterations for CIFAR-10, 10,000 iterations for MIMIC-III, and 4,000 iterations for ModelNet40, and pick the learning rate with the lowest training loss.**B**:
In each experiment, for each value of Q, we choose the learning rate using a grid search**C**: For CIFAR-10 we search for a learning rate in the range [0.0001, 0.00001], for MIMIC-III we search in the range [0.1, 0.001], and for ModelNet40 we search in the range [0.001, 0.00005] | ACB | BCA | CAB | BAC | Selection 3 |
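The selection rule above amounts to a one-dimensional grid search; a small sketch where `train_fn` is a hypothetical stand-in for the real TDCD training loop returning the final training loss.

```python
import numpy as np

def pick_learning_rate(train_fn, lrs, iters):
    """Train for a fixed budget at each candidate rate and keep the one
    with the lowest final training loss."""
    losses = {lr: train_fn(lr, iters) for lr in lrs}
    return min(losses, key=losses.get)

# e.g. CIFAR-10-style grid: np.geomspace(1e-4, 1e-5, num=5), budget 5000 iters
```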
**A**: Given the significance of pseudospectra in solving matrix problems, we aim to extend this tool to tensors based on the theoretical analysis in Subsection 4.1.**B**:
The study of spectra and pseudospectra in matrix cases indicates that while eigenvalues are successful tools for solving mathematical problems in various fields, they may not always provide a satisfactory answer for questions that mainly depend on the spectra, especially for nonnormal matrices**C**: As an alternative, pseudospectra attempt to provide approximate solutions by offering reasonably tight bounds and engaging geometric interpretations [trefethen2005spectra] | BAC | CAB | BAC | BAC | Selection 2 |
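For reference, the matrix notion being extended here is the standard ε-pseudospectrum of [trefethen2005spectra]; we assume the paper uses this usual definition.

```latex
\[
\Lambda_{\varepsilon}(A)
  = \{\, z \in \mathbb{C} : \|(zI - A)^{-1}\| \ge \varepsilon^{-1} \,\}
  = \{\, z \in \mathbb{C} : z \in \Lambda(A + E)
        \text{ for some } \|E\| \le \varepsilon \,\}.
\]
```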
**A**: Correspondingly, a two-branch discriminator is developed to estimate the performance of this generation, which supervises the model to synthesize realistic pixels and sharp edges simultaneously for global optimization. In addition, we introduce a novel Bi-directional Gated Feature Fusion (Bi-GFF) module to integrate the rebuilt structure and texture feature maps to enhance their consistency, along with a Contextual Feature Aggregation (CFA) module to highlight the clues from distant spatial locations to render finer details. Due to the dual generation network as well as the specifically designed modules, our approach is able to achieve more visually convincing structures and textures (see Figure 1, zoom in for a better view).**B**: In this way, the two parallel-coupled streams are individually modeled and combined to complement each other**C**:
In this paper, we propose a novel two-stream network which casts image inpainting into two collaborative subtasks, i.e., structure-constrained texture synthesis and texture-guided structure reconstruction | CBA | BAC | ABC | BAC | Selection 1 |
**A**: The concept of BEC was first introduced by Elias in 1955 [InfThe]**B**: Together with the binary symmetric channel (BSC), they are frequently used in coding theory and information theory because they are among the simplest channel models, and many problems in communication theory can be reduced to problems in a BEC. Here we consider more generally a q-ary erasure channel in which a q-ary symbol is either received correctly, or totally erased with probability ε.
**C**: In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability ε | BAC | CBA | ABC | BCA | Selection 4 |
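The q-ary erasure channel above is simple to simulate; a minimal sketch in which the erasure marker (`None`) is our choice, not the paper's notation.

```python
import random

def q_ary_erasure_channel(symbols, eps, erasure=None, rng=None):
    """Deliver each q-ary symbol intact with probability 1 - eps and
    replace it with an erasure marker with probability eps."""
    rng = rng or random.Random(0)
    return [erasure if rng.random() < eps else s for s in symbols]

# e.g. q_ary_erasure_channel([0, 3, 1, 2], eps=0.2) -> [0, None, 1, 2]
```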