| Column | Type | Details |
| --- | --- | --- |
| text_with_holes | string | lengths 220 – 2.18k |
| text_candidates | string | lengths 217 – 742 |
| A | string | 6 classes |
| B | string | 6 classes |
| C | string | 6 classes |
| D | string | 6 classes |
| label | string | 4 classes |
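The card does not name the dataset repository, so the dataset ID below is a placeholder. This is a minimal sketch, assuming the standard Hugging Face `datasets` workflow, of loading the corpus and inspecting one record with the columns listed above:

```python
from datasets import load_dataset

# Hypothetical repository path -- substitute the actual dataset ID.
ds = load_dataset("user/sentence-hole-filling", split="train")

record = ds[0]
print(record["text_with_holes"])   # passage containing <|MaskedSetence|> markers
print(record["text_candidates"])   # the **A**/**B**/**C** candidate sentences
# Columns A-D each hold one candidate ordering (e.g., "BAC");
# `label` names the correct column as "Selection 1" ... "Selection 4".
for option in "ABCD":
    print(option, record[option])
print(record["label"])
```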
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients can slow down the exponential decay of the solutions, making the methods impractical. In this paper, in the presence of rough coefficients, we employ spectral techniques to overcome this hurdle: by solving local eigenvalue problems we define a space in which the exponential decay of solutions is insensitive to high-contrast coefficients. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends only weakly on the log of the contrast. **B**: Additionally, the spectral techniques remove macro-element corner singularities that occur in LOD methods based on mixed finite elements. **C**: Here, we propose eigenvalue problems based on edges of macro elements, removing the dependence.
BAC
BAC
BAC
CAB
Selection 1
<|MaskedSetence|> For the credibility model we, therefore, leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain too little context to support a conclusion. However, content aggregation is problematic for hierarchical events, especially at an early stage, when tweets are likely to convey doubtful and contradictory perspectives. Thus, a mechanism that carefully weighs the ‘vote’ of individual tweets is required. <|MaskedSetence|> <|MaskedSetence|>
**A**: In this work, we overcome the restrictions (e.g., semantic sparsity) of traditional text representation methods (e.g., bag of words) in handling short text by learning low-dimensional tweet embeddings. **B**: In this way, we achieve a rich hidden semantic representation for a more effective classification. **C**: Early in an event, the volume of related tweets is scanty and there is no clear propagation pattern yet.
CAB
CAB
CBA
CAB
Selection 2
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> First we split the features into 7 categories as in Table 1: Tweet_Feature, User_Feature, Text_Feature, CreditScore, SpikeM Features, Epidemiological Features, CrowdWisdom, and the BestSet. The BestSet is a combination of the top 9 most important features, described in the paragraph below. The results over 48 hours are shown in Figure 9.
**A**: 4.5.1. **B**: Feature Analysis Over Time. Here we present the performance of features over time. **C**: We use RF permutation-based importance (which accounts for possible feature correlations) to measure feature importance.
ABC
BCA
ABC
ABC
Selection 3
<|MaskedSetence|> <|MaskedSetence|> We adapted the L2R RankSVM [12]. <|MaskedSetence|> We modified the objective function of RankSVM to follow our global loss function, which takes into account the temporal feature specificities of event entities. The temporal and type-dependent ranking model is learned by minimizing the following objective function:
**A**: Multi-Criteria Learning. **B**: The goal of RankSVM is to learn a linear model that minimizes the number of discordant pairs in the training data. **C**: Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions, which do not consider the correlation and overlap between models.
ACB
ACB
ACB
ABC
Selection 2
To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. <|MaskedSetence|> (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architecture, similar to the model by Pan et al. (2017), results in better approximations. Here we employed three upsampling blocks consisting of a bilinear scaling operation, which doubled the number of rows and columns, and a subsequent convolutional layer with kernel size 3×3. This setup has previously been shown to prevent checkerboard artifacts in the upsampled image space, in contrast to deconvolution Odena et al. <|MaskedSetence|> Besides the increase of resolution throughout the decoder, the number of channels was halved in each block to yield 32 feature maps. <|MaskedSetence|> The outputs of all but the last linear layer were modified via rectified linear units. Figure 2 visualizes the overall architecture design as described in this section.
**A**: Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. **B**: (2016). **C**: Our last network layer transformed activations into a continuous saliency distribution by applying a final 3×3 convolution.
ABC
ABC
ABC
ACB
Selection 1
Our work advances the state-of-the-art in model-based reinforcement learning by introducing a system that, to our knowledge, is the first to successfully handle a variety of challenging games in the ALE benchmark. <|MaskedSetence|> We present an approach, called Simulated Policy Learning (SimPLe), that utilizes these video prediction techniques and trains a policy to play the game within the learned model. With several iterations of dataset aggregation, where the policy is deployed to collect more data in the original game, we learn a policy that, for many games, successfully plays the game in the real environment (see videos on the project webpage https://goo.gl/itykP8). In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low-data regime of 100k samples, on more than half of the games, our method achieves a score that Rainbow requires at least twice as many samples to reach. In the best case of Freeway, our method is more than 10x more sample-efficient; see Figure 3. Since the publication of the first preprint of this work, it has been shown in van Hasselt et al. <|MaskedSetence|> <|MaskedSetence|> (2019) compares with the results of our first preprint, later improved).
**A**: (2019); Kielak (2020) that Rainbow can be tuned to have better results in the low-data regime. **B**: The results are on a par with SimPLe – both of the model-free methods are better in 13 games, while SimPLe is better in the other 13 out of the total 26 games tested (note that in Section 4.2 van Hasselt et al. **C**: To that end, we experiment with several stochastic video prediction techniques, including a novel model based on discrete latent variables.
CAB
CAB
CAB
CAB
Selection 1
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Despite the noticeable improvement in energy consumption, the robot takes more time to tackle a 2h step after transitioning to the rear body climbing gait.
**A**: The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. **B**: Following the preparation phase, the robot switches to the rear body climbing gait. **C**: After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step.
ACB
ACB
ACB
ABC
Selection 1
<|MaskedSetence|> Advice bits, like all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution. <|MaskedSetence|> For a very simple example, consider the well-known ski rental problem: a simple yet fundamental resource allocation problem in which we must decide ahead of time whether to rent or buy equipment without knowing the time horizon in advance. In the traditional advice model, one bit suffices to be optimal: 0 for renting throughout the horizon, 1 for buying right away. <|MaskedSetence|> In contrast, an online algorithm that does not use advice at all has competitive ratio at most 2, i.e., its output can be at most twice as costly as the optimal one.
**A**: It should be fairly clear that such assumptions are very unrealistic or undesirable. **B**: Last, and perhaps more significantly, a malicious entity that takes control of the advice oracle can have a catastrophic impact. **C**: However, if this bit is wrong, then the online algorithm has unbounded competitive ratio, i.e., can perform extremely badly.
ABC
BAC
ABC
ABC
Selection 4
We organize this paper as follows. In section II, we introduce the related work. In section III, we first introduce the UAV’s power control in the multi-channel communication and coverage problems, and then formulate a system model for highly dynamic scenarios. In section IV, we formulate our work as an aggregative game and prove the existence of the NE. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: Finally, section VII concludes the study. **B**: In section V, we propose two algorithms for approaching the NE. **C**: Section VI presents the simulation results and discussions.
BCA
BCA
BCA
BCA
Selection 4
multiplication (e. <|MaskedSetence|> , $(\overline{a}\,\overline{b})_{i}=a_{i}b_{i}$), and the symbol $/$ represents Hadamard division, piecewise element-by-element division (e. <|MaskedSetence|> , $(\overline{a}/\overline{b})_{i}=a_{i}/b_{i}$). The symbol $*$ between two matrices (e.g. <|MaskedSetence|> The superscript $T$ implies.
**A**: g. **B**: g. **C**: , $\overline{\overline{C}}=\overline{\overline{A}}\,*\,\overline{\overline{B}}$) implies regular matrix multiplication.
ABC
ACB
ABC
ABC
Selection 1
<|MaskedSetence|> Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady-state policy. For the experiments, a fully connected neural network architecture was used. <|MaskedSetence|> <|MaskedSetence|>
**A**: It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. **B**: To minimize the DQN loss, the ADAM optimizer was used [25]. **C**: To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs.
CAB
CAB
CAB
ABC
Selection 1
3.5 Sequenced Models The Recurrent Neural Network (RNN) was designed for handling sequences. <|MaskedSetence|> In the medical image analysis domain, RNNs have been used to model the temporal dependency in image sequences. Bai et al. (2018) proposed an image sequence segmentation algorithm by combining a fully convolutional network with a recurrent neural network, which incorporates both spatial and temporal information into the segmentation task. Similarly, Gao et al. <|MaskedSetence|> (2019a) applied U-Net to obtain initial segmentation probability maps and further improved them using an LSTM for pancreas segmentation from 3D CT scans. <|MaskedSetence|>
**A**: Similarly, other works have also applied RNNs (LSTMs) (Alom et al., 2019; Chakravarty and Sivaswamy, 2018; Yang et al., 2017b; Zhao and Hamarneh, 2019a, b) to medical image segmentation. **B**: The long short-term memory (LSTM) network is a type of RNN that introduces self-loops to enable gradient flow over long durations (Hochreiter and Schmidhuber, 1997). **C**: (2018) applied LSTM and CNN to model temporal relationships in brain MRI slices to improve segmentation performance in 4D volumes. Li et al.
BCA
CAB
BCA
BCA
Selection 1
<|MaskedSetence|> Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. <|MaskedSetence|> (2018) evaluate the performance of modern neural networks using the same test strategy as Fernández-Delgado et al. <|MaskedSetence|>
**A**: Neural networks are universal function approximators. Their generalization performance has been widely studied. **B**: (2020) analyze the performance across different dataset sizes. Olson et al. **C**: (2014) and find that neural networks achieve good results but are not as strong as random forests.
ABC
ABC
ABC
ABC
Selection 3
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. <|MaskedSetence|> (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). <|MaskedSetence|> <|MaskedSetence|> (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
**A**: It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). **B**: Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. **C**: (2020); Zhou et al.
CBA
CAB
CAB
CAB
Selection 4
<|MaskedSetence|> <|MaskedSetence|> Isbell [52] proved that every metric space admits a smallest hyperconvex hull (cf. the definition of tight span below). Dress rediscovered this concept in [31] and subsequent work provided much development in the context of phylogenetics [77, 32]. <|MaskedSetence|>
**A**: These were studied by Aronszajn and Panitchpakdi [8], who showed that every hyperconvex space is an absolute 1-Lipschitz retract. **B**: More recently, in [53], Joharinad and Jost considered relaxations of hyperconvexity and related them to a certain notion of curvature applicable to general metric spaces. **C**: 2.2 Injective (Hyperconvex) metric spaces A hyperconvex metric space is one in which any collection of balls with non-empty pairwise intersections has non-empty total intersection.
BCA
CAB
CAB
CAB
Selection 2
She decides, then, to use t-SNE to explore the Breast Cancer Wisconsin data set, which she downloaded from the UCI machine learning repository [58]. The data set contains measurements for 699 breast cancer cases, labeled as benign or malignant. <|MaskedSetence|> However, she read on the Internet that t-SNE is a complex algorithm, and most of its decisions are hidden from the user's perspective. <|MaskedSetence|> After the execution, she sees several projections that accurately separate the two classes. <|MaskedSetence|> After the resulting scatterplot is loaded in the main view, she starts to investigate the overall quality by looking at the Shepard Heatmap, see Figure 6(b). Most values are situated along the diagonal of the heatmap, which—as she learned from the documentation of the tool—suggests that it is a rather accurate projection. Also, by examining the distribution of points by color in the overview (Figure 6(a)), she gets the impression that the points are mostly correctly arranged into two classes (malignant cancer cases on the left and benign cancer cases on the right). Since labels are not used by t-SNE (it is an unsupervised technique), this further supports her initial assumption that the produced results are accurate.
**A**: The nine dimensions included in this data set are cytological characteristics rated from 1 to 10 (higher means closer to malignant) when the instances were collected. **B**: After finding that t-viSNE allows her to interpret and assess t-SNE’s results, she decides to use it. Overall Accuracy. Anna loads the data into t-viSNE and starts the hyper-parameter exploration with a grid search. **C**: As she does not have any special preference, she selects the top-left projection, because the projections are sorted from best to worst based on the average of all the provided quality metrics.
ABC
BCA
ABC
ABC
Selection 1
<|MaskedSetence|> The clearest example can be found in the different animal species, which have developed very specialized capabilities over generations through evolutionary mechanisms. Indeed, evolution has allowed animals to adapt to harsh environments, to forage, to solve very difficult orientation tasks, and to resiliently withstand radical climatic changes, among other threats. <|MaskedSetence|> This renowned success of biological organisms has inspired all kinds of solvers for optimization problems, which have so far been referred to as bio-inspired optimization algorithms. <|MaskedSetence|>
**A**: This family of optimization methods simulates biological processes such as natural evolution, where solutions are represented by individuals that reproduce and mutate to generate new, potentially improved candidate solutions for the problem at hand. **B**: In this context, complexity is not unusual in Nature: a plethora of complex systems, processes and behaviors have shown a surprising ability to efficiently address intricate optimization tasks. **C**: Animals, when organized in independent systems, groups, swarms or colonies (systems quite complex on their own), have managed to colonize the Earth completely, and eventually achieve a global equilibrium that has permitted them to endure for thousands of years.
BCA
BAC
BCA
BCA
Selection 3
<|MaskedSetence|> Since a large proportion of clustering methods are based on graphs, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend graph auto-encoders to common scenarios. The main contributions are listed as follows: (1) By extending generative graph models to general data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for decoders. (2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. <|MaskedSetence|> We further propose a simple but effective strategy to avoid it. (3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while other deep clustering models usually fail when the training set is not large enough. <|MaskedSetence|>
**A**: However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. **B**: We analyze the degeneration theoretically and experimentally to understand the phenomenon. **C**: Besides, it is insensitive to different initializations of parameters and needs no pretraining.
ABC
ABC
CAB
ABC
Selection 4
<|MaskedSetence|> The measurement community has provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 2018), or by identifying spoofed packets using offline analysis of traffic, e.g., (Lone et al., 2017; Luckie et al., 2019). <|MaskedSetence|> Therefore it is not clear how representative these statistics are. Unfortunately, this limitation to a small set of networks creates a bias in the assessments of the overall number of spoofable networks. <|MaskedSetence|> As we show, the fraction of spoofable networks is above 72%, which is significantly higher than what was previously believed.
**A**: Limitations of filtering studies. **B**: The need to install agents on networks, or the ability to obtain traces only from some networks, limits the studies to non-uniform coverage of the Internet. **C**: The extrapolation from the small set of networks to the entire Internet typically results in the assessment that at least 30% of the Internet's networks do not filter spoofed packets (Luckie et al., 2019; Man et al., 2020).
ABC
BCA
ABC
ABC
Selection 4
III-A Dataset description Experiments in this paper used the gas sensor drift array dataset [7]. <|MaskedSetence|> Every batch contains between 161 and 3,600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal oxide-based gas sensors. <|MaskedSetence|> The experiments used six gases: ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene, presented in arbitrary order and at variable concentrations. Chemical interferents were also presented to the sensors between batches, and the time between presentations varied, both of which contributed to further sensor variability. <|MaskedSetence|>
**A**: The dataset thus exemplifies sensor variance due to contamination and variable odor concentration in a controlled setting. **B**: These features, summarizing the time-series sensor responses, are the raw and normalized steady-state features and the exponential moving averages of the increasing and decaying transients taken at three different alpha values. **C**: The data consists of 10 sequential collection periods, called batches.
CBA
CBA
CBA
ABC
Selection 2
<|MaskedSetence|> This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. <|MaskedSetence|> While it is known that the free semigroup of rank one is not an automaton semigroup [4, Proposition 4.3], the free semigroups of higher rank can be generated by an automaton [4, Proposition 4.1]. In fact, the construction to generate these semigroups is quite simple [4, Proposition 4.1] (compare also to 3). The same construction can also be used to generate free monoids as automaton semigroups or monoids. Here, the main difference is that the free monoid in one generator can indeed be generated by an automaton: it is generated by the adding machine (see 1), which also generates the free group of rank one if inverses are added. <|MaskedSetence|>
**A**: While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much simpler. **B**: There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). **C**: On a side note, it is also worthwhile to point out that – although there does not seem to be much research on the topic – there are examples to generate the free inverse semigroup of rank one as a subsemigroup of an automaton semigroup [14, Theorem 25] and an adaptation to present the free inverse monoid of rank one as an automaton semigroup [6, Example 2] (see also [8, Example 23]).
BAC
CBA
BAC
BAC
Selection 4
2.2.2 Enhancing Visual Sensitivities Both Human Importance Aware Network Tuning (HINT) Selvaraju et al. (2019) and Self Critical Reasoning (SCR) Wu and Mooney (2019) train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores. HINT proposes a ranking loss between human-based importance scores Das et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: In contrast, SCR does not require exact saliency ranks. **B**: Instead, it penalizes the model if correct answers are more sensitive to non-important regions than to important regions, and if incorrect answers are more sensitive to important regions than correct answers are. **C**: (2016) and the gradient-based sensitivities.
CAB
CAB
CAB
ACB
Selection 3
<|MaskedSetence|> Privacy policies in this corpus have a mean length of about 1,871 words, ranging from a minimum of 143 words to a maximum of 16,980 words. The corpus contains policies from over 800 different top-level domains (TLDs). .com, .org, and .net make up a major share of the corpus, covering 63%, 5%, and 3% respectively. <|MaskedSetence|> <|MaskedSetence|> Moreover, Common Crawl releases statistics estimating the representativeness of monthly crawls, which support the claim that monthly crawl archives, and in turn the PrivaSeer Corpus, are a representative sample of the web. In addition to monthly crawl dumps, Common Crawl releases web graphs with PageRanks of the domains in a crawl. The PageRank values were calculated from the web graph using the Gauss-Seidel algorithm (Arasu et al., 2002). PageRank values can be used as a substitute for popularity, where higher values suggest more popular domains.
**A**: The distribution of popular TLDs (.com, .org, .net) roughly matches internet TLD trends suggesting that the corpus contains a random sample of internet web domains. **B**: The PrivaSeer Corpus consists of 1,005,380 privacy policies from 995,475 different web domains. **C**: Country-level domains like .uk, .au, .ca and .du show the geographic variety of the sources of the corpus covering 12%, 4%, and 2% respectively.
BCA
BCA
ABC
BCA
Selection 1
Hence, we want to further investigate cases that cause problems (i.e., we have to look for large points). The parallel coordinates plot in Figure 3(b) is used to investigate the features of the data set in detail. The Ca attribute, for example, has a range of 0–3, but by selection we can see five points with Ca values of ‘4’; see Figure 3(b). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> One of those is rather large, which negatively affects the prediction accuracy of our classification (see Figure 3(c.1), upper right corner). In Figure 3(c.2), we select the point with our lasso interaction. We then have several options to manipulate this point, as shown in Figure 3(c.3): we can remove the point’s instance entirely from the data set, or merge a set of points into a new one, which receives either their mean or median values per feature.
**A**: These values can be considered as unknown and should be further examined. **B**: One of these points belongs to the healthy class (due to the olive color) but is very small in Figure 3(c.1)—meaning that it does not reduce the accuracy. **C**: Four points are part of the diseased class.
ABC
ABC
ABC
ACB
Selection 3
Other works use MAML for multi-domain and low-resource language generation, such as few-shot dialogue systems [Mi et al., 2019, Madotto et al., 2019, Qian and Yu, 2019, Song et al., 2020] and low-resource machine translation [Gu et al., 2018]. When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Few works have thoroughly studied these impact factors.
**A**: For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. **B**: Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. **C**: This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML.
ACB
BAC
BAC
BAC
Selection 4
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing a new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Due to the directivity, the DREs of the CCA subarray at different positions are anisotropic, which differs from the UPA case. If an inappropriate subarray is activated, the beam angle may go beyond the radiation range of certain subarray elements, degrading the beam gain and SE.
**A**: Recall that several efficient codebook-based beam training and tracking schemes have been proposed for conventional mmWave networks with ULAs and UPAs [22, 23]. **B**: These prior works inspire us to propose a specialized new codebook design and the corresponding codeword selection/processing strategy that can drive the CCA to achieve fast beam tracking in the highly dynamic UAV mmWave network. **C**: To this end, the properties of the CCA should be exploited in the design of the codebook; these are briefly discussed as follows. Activated Subarray with Limited DREs: As shown in Fig. 1, given a certain azimuth angle, there are limited DREs that can be activated.
ABC
ABC
ABC
ABC
Selection 3
In contrast to Mei et al. <|MaskedSetence|> <|MaskedSetence|> We defer the detailed discussion of the approximation analysis to §B. <|MaskedSetence|> We are interested in the evolution of the feature representation.
**A**: (2018, 2019), the PDE in (3.4) cannot be cast as a gradient flow, since there does not exist a corresponding energy functional. **B**: Proposition 3.1 allows us to convert the TD dynamics over the finite-dimensional parameter space to its counterpart over the infinite-dimensional Wasserstein space, where the infinitely wide neural network $Q(\cdot;\rho)$ in (3.2) is linear in the distribution $\rho$. Feature Representation. **C**: Thus, their analysis is not directly applicable to our setting.
ACB
BAC
ACB
ACB
Selection 4
2.1. Depth-Wise LSTM The computation of depth-wise LSTM is the same as the conventional LSTM except that depth-wise LSTM connects stacked Transformer layers instead of tokens in a token sequence as in conventional LSTMs. <|MaskedSetence|> In our work, we regard the outputs of stacked layers as a “vertical” sequence, and utilize the same gate mechanisms to selectively aggregate information from stacked Transformer layer outputs and to address the gradient vanishing issue of deep Transformers. <|MaskedSetence|> In a sense, the layer-by-layer computations in Transformer encoder and decoder stacks are just such sequences where information from a Transformer layer $n-1$ is passed on to layer $n$. Our depth-wise LSTMs connect layers of multi-head attention information instead of token embeddings. <|MaskedSetence|>
**A**: The gate mechanisms in the original LSTM are to enhance its ability in capturing long-distance relations and to address the gradient vanishing/exploding issue in sequence modeling. **B**: LSTMs are able to capture long-distance relationships: they can learn to selectively use the representations of distant tokens in the processing of a current input token in a sequence. **C**: Because of the different types of attention (self, cross and masked), we develop tailored ways of connecting (sub-) layers in encoder stacks and decoder stacks with depth-wise LSTMs.
ABC
ABC
BAC
ABC
Selection 2
<|MaskedSetence|> Apply [33, Corollary 5.14] to $A$ and $B$. Then $\widetilde{A}\models\varphi$ because $A\to\widetilde{A}$ and $\varphi$ is closed under homomorphisms. <|MaskedSetence|> Finally, $\widetilde{B}\to B\to C$, thus $C\models\varphi$ because $\varphi$ is closed under homomorphisms. <|MaskedSetence|>
**A**: Therefore $\widetilde{B}\models\varphi$ because $\widetilde{A}$ and $\widetilde{B}$ are $n$-elementary equivalent. **B**: furthermore $B\to C$. **C**: This shows that $\varphi$ is closed.
BAC
BAC
CBA
BAC
Selection 4
Quantitative Evaluation: To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion rectification on the test dataset of 2,000 distorted images. For the PSNR and SSIM, we compute these two metrics using the pixel difference between each rectified image and the ground truth image. For the MDLD, we first exploit the estimated distortion parameters to obtain all distortion levels of the test distorted image based on Eq. 5. Then, the value of MDLD can be calculated by the difference between estimated distortion levels and the ground truth distortion levels based on Eq. <|MaskedSetence|> Note that the generation-based methods such as Li [11] and Liao [12] directly learn the transformation manner of the pixel mapping instead of estimating the distortion parameters, so we only evaluate these two methods in terms of the PSNR and SSIM. As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, achieving the highest PSNR and SSIM as well as the lowest MDLD. Specifically, compared with the traditional methods [23, 24] based on hand-crafted features, our approach overcomes the scene limitation and simple camera model assumption, showing more promising generality and flexibility. Compared with the learning distortion rectification methods [8][11][12], which omit the prior knowledge of the distortion, our approach transforms the heterogeneous estimation problem into a homogeneous one, replacing the implicit relationship between image features and predicted values with a more explicit expression. Benefiting from the effective ordinal supervision and the guidance of distortion information during the learning process, our approach outperforms Liao [12] by a significant margin, with approximately 23% improvement on PSNR and 17% improvement on SSIM. <|MaskedSetence|> <|MaskedSetence|>
**A**: However, the generation-based methods [11][12] mainly focus on the pixel reconstruction of a rectified image and ignore the parameter estimation. **B**: Besides the high quality of the rectified image, our approach can obtain accurate distortion parameters of a distorted image, which is crucial for subsequent tasks such as camera calibration. **C**: 21.
CBA
CBA
CBA
BCA
Selection 2
<|MaskedSetence|> As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. <|MaskedSetence|> <|MaskedSetence|> This overall methodology is called Sample Average Approximation (SAA).
**A**: Finally, we extrapolate the solution to the original black-box problem. **B**: 1.2 Our Generalization Scheme and Comparison with Previous Results Our main goal is to develop algorithms for the black-box setting. **C**: Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms to (approximately) solve the problems on them.
BCA
BCA
BCA
CBA
Selection 2
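Each record is thus a four-way multiple-choice instance: the permutation strings in columns A–D propose orders for slotting the **A**/**B**/**C** sentences into the `<|MaskedSetence|>` holes, and `label` names the correct column. The sketch below reconstructs the full passage from a record; the helper name and regular expression are our own illustration, not part of the dataset:

```python
import re

def reconstruct(record: dict) -> str:
    """Fill the <|MaskedSetence|> holes using the ordering named by `label`."""
    # "Selection 1" -> column "A", "Selection 2" -> column "B", etc.
    column = "ABCD"[int(record["label"].split()[-1]) - 1]
    order = record[column]  # e.g., "BAC"

    # Split "**A**: ... **B**: ... **C**: ..." into a {letter: sentence} map.
    parts = re.split(r"\*\*([A-C])\*\*:", record["text_candidates"])
    candidates = {parts[i]: parts[i + 1].strip() for i in range(1, len(parts), 2)}

    text = record["text_with_holes"]
    for letter in order:  # fill holes left to right in the chosen order
        text = text.replace("<|MaskedSetence|>", candidates[letter], 1)
    return text
```

A model's accuracy on the task is then simply how often it picks the same column as `label`.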