context
stringlengths
250
4.63k
A
stringlengths
250
6.41k
B
stringlengths
250
5.14k
C
stringlengths
250
3.8k
D
stringlengths
250
8.2k
label
stringclasses
4 values
that adds the results of 1+(n−m)/21𝑛𝑚21+(n-m)/21 + ( italic_n - italic_m ) / 2 Gaussian integrations for moments xD−1+n−2⁢ssuperscript𝑥𝐷1𝑛2𝑠x^{D-1+n-2s}italic_x start_POSTSUPERSCRIPT italic_D - 1 + italic_n - 2 italic_s end_POSTSUPERSCRIPT. The disadvantage
{n,n^{\prime}}.∫ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT italic_x start_POSTSUPERSCRIPT italic_D - 1 end_POSTSUPERSCRIPT italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) italic_R start_POSTSUBSCRIPT italic_n start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) italic_d italic_x = divide start_ARG 1 end_ARG start_ARG 2 italic_n + italic_D end_ARG italic_δ start_POSTSUBSCRIPT italic_n , italic_n start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT end_POSTSUBSCRIPT .
Gaussian integration rules for integrals ∫01xD−1⁢Rnm⁢(x)⁢f⁢(x)⁢𝑑xsuperscriptsubscript01superscript𝑥𝐷1superscriptsubscript𝑅𝑛𝑚𝑥𝑓𝑥differential-d𝑥\int_{0}^{1}x^{D-1}R_{n}^{m}(x)f(x)dx∫ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT italic_x start_POSTSUPERSCRIPT italic_D - 1 end_POSTSUPERSCRIPT italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) italic_f ( italic_x ) italic_d italic_x
+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).start_ROW start_CELL italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT - 1 ) divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) = [ italic_n italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( italic_n + italic_D ) - italic_m ( italic_D - 2 + italic_m ) ] italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) end_CELL end_ROW start_ROW start_CELL + italic_x [ italic_D - 1 - ( italic_D + 1 ) italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ] divide start_ARG italic_d end_ARG start_ARG italic_d italic_x end_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) . end_CELL end_ROW
rules for the lifted integrals ∫01xD−1⁢[1+Rnm⁢(x)]⁢f⁢(x)⁢𝑑xsuperscriptsubscript01superscript𝑥𝐷1delimited-[]1superscriptsubscript𝑅𝑛𝑚𝑥𝑓𝑥differential-d𝑥\int_{0}^{1}x^{D-1}[1+R_{n}^{m}(x)]f(x)dx∫ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT italic_x start_POSTSUPERSCRIPT italic_D - 1 end_POSTSUPERSCRIPT [ 1 + italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) ] italic_f ( italic_x ) italic_d italic_x
D
On the other hand, if the instruction Itsubscript𝐼𝑡I_{t}italic_I start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT was Show⁡(A)Show𝐴\operatorname{Show}(A)roman_Show ( italic_A ) then Eval⁡(S,M,s,t)Eval𝑆𝑀𝑠𝑡\operatorname{Eval}(S,M,s,t)roman_Eval ( italic_S , italic_M , italic_s , italic_t ) is defined to be the list of elements stored in memory slots M⁢[i]𝑀delimited-[]𝑖M[i]italic_M [ italic_i ]
This adds only one extra MSLP instruction, in order to form and store the element x⁢v−1𝑥superscript𝑣1xv^{-1}italic_x italic_v start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quota; recall also that x𝑥xitalic_x is no longer the identity when d𝑑ditalic_d is odd). Observe that the formula (1) differs from the d𝑑ditalic_d odd case only in the sense that v𝑣vitalic_v is replaced by x−1superscript𝑥1x^{-1}italic_x start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT, and hence the initial computation of T2subscript𝑇2T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT requires the same number of instructions and memory slots as before.
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, the result of SlotUsagePattern improves the memory usage but it is not necessarily optimized overall and, hence, the number of slots can still be greater than the number of slots of a carefully computed MSLP. It should also be mentioned that in some cases the number of slots can even be smaller than that of a constructed MSLP but it is not possible to predict this without a careful analysis which would result in an MSLP construction as in this paper.
Instruction type (i) above simply copies an element already in memory to a different memory slot. These instructions can arguably be disregarded for the purpose of determining the length of an MSLP, because in a practical implementation they could be handled via relabelling.
For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements −gi⁢c⁢gr⁢c−1subscript𝑔𝑖𝑐superscriptsubscript𝑔𝑟𝑐1-g_{ic}g_{rc}^{-1}- italic_g start_POSTSUBSCRIPT italic_i italic_c end_POSTSUBSCRIPT italic_g start_POSTSUBSCRIPT italic_r italic_c end_POSTSUBSCRIPT start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT in (11) (and similarly in (12)) are given to us as polynomials of degree at most f−1𝑓1f-1italic_f - 1 in the primitive element ω𝜔\omegaitalic_ω, where q=pf𝑞superscript𝑝𝑓q=p^{f}italic_q = italic_p start_POSTSUPERSCRIPT italic_f end_POSTSUPERSCRIPT for some prime p𝑝pitalic_p.
C
where Ω⊂ℝdΩsuperscriptℝ𝑑\Omega\subset\mathbb{R}^{d}roman_Ω ⊂ blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT with d=2𝑑2d=2italic_d = 2 or 3333 for simplicity, and is an open bounded domain with polyhedral boundary ∂ΩΩ\partial\Omega∂ roman_Ω, the symmetric tensor 𝒜∈[L∞⁢(Ω)]symd×d𝒜superscriptsubscriptdelimited-[]superscript𝐿Ωsym𝑑𝑑\mathcal{A}\in[L^{\infty}(\Omega)]_{\text{sym}}^{d\times d}caligraphic_A ∈ [ italic_L start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT ( roman_Ω ) ] start_POSTSUBSCRIPT sym end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_d × italic_d end_POSTSUPERSCRIPT is uniformly positive definite and bounded, and g𝑔gitalic_g is part of the given data.
In [MR2718268] is shown that the number of eigenvalues that are very large is related to the number of connected sub-regions on τ¯∪τ¯′¯𝜏superscript¯𝜏′\bar{\tau}\cup{\bar{\tau}}^{\prime}over¯ start_ARG italic_τ end_ARG ∪ over¯ start_ARG italic_τ end_ARG start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT with large coefficients surrounded by regions with small coefficients. Generalized eigenvalue problems also have been used on overlapping domain decomposition solvers [MR2718268, MR2916377, MR3175183, MR3033238]. The design of robust discretizations with respect to coefficients using domain decomposition ideas have been studied in [MR2666649, MR1642758, MR3350765] assuming some regularity on the solution, and in [MR2718268] for a class of problems when the weighted Poincaré constant [MR3047947, MR3013465, MR2867661] is not large, otherwise the exponential decay of the multiscale functions deteriorates. See also [MR2753343, MR3109775] where a priori error estimates are obtained in terms of spectral norms.
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems.
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method not so practical. Here in this paper, in the presence of rough coefficients, spectral techniques are employed to overcome such hurdle, and by solving local eigenvalue problems we define a space where the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove macro-elements corner singularities that occur in LOD methods based on
It is hard to approximate such problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergent proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85, MR1979846, MR2058933, HMV, MR1642758, MR3584539, MR2030161, MR2383203, vs1, vs2, MR2740478]. Some methods work even considering that the solution has low regularity [MR2801210, MR2753343, MR3225627, MR3177856, MR2861254] but are based on ideas that differ considerably from what we advocate here
D
On the contrary, we may need to use a function θ𝜃\thetaitalic_θ of variable (b,c)𝑏𝑐(b,c)( italic_b , italic_c ); see the description of 𝖪𝗂𝗅𝗅Fsubscript𝖪𝗂𝗅𝗅𝐹\mathsf{Kill}_{F}sansserif_Kill start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT in subsection 3.1 for an example. As such, the flow of Rotate-and-Kill is different from RC.
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on P𝑃Pitalic_P’s vertices and (2) searching the next candidate from a given one is much easier – the code length for this is 1:7 in Alg-A and in Alg-CM.
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases.
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity O⁢(n)𝑂𝑛O(n)italic_O ( italic_n ) than the others:
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
C
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet level and let each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets are low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see the curve of Munich shooting event is also close to the curve of average news, indicating the event is more news-related.
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on the time series approach and train the classifier with features from diffent high-level contexts (i.e., users, Twitter and propagation) in a cascaded manner. In this section, we first detail the employed Dynamic Series-Time Structure, then describe the high and low-level ensemble features used for learning in this pipeline step.
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analysis on the wide range of features change during diffusion time. Ma et al. [19] used Recurrent Neural Networks for rumor detection, they batch tweets into time intervals and model the time series as a RNN sequence. Without any other handcrafted features, they got almost 90% accuracy for events reported in Snope.com. As the same disadvantage of all other deep learning models, the process of learning is a black box, so we cannot envisage the cause of the good performance based only on content features. The model performance is also dependent on the tweet retrieval mechanism, of which quality is uncertain for stream-based trending sub-events.
In this work, we propose an effective cascaded rumor detection approach using deep neural networks at tweet level in the first stage and wisdom of the “machines”, together with a variety of other features in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed approach outperforms state of the
at an early stage. Our fully automatic, cascading rumor detection method follows the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumors.  [7, 19] also use RNN for rumor debunking. However, in their work, RNN is used at event-level. The classification leverages only the deep data representations of aggregated tweet contents of the whole event, while ignoring exploiting other –in latter stage–effective features such as user-based features and propagation features. Although, tweet contents are merely the only reliable source of clue at early stage, they are also likely to have doubtful perspectives and different stands in this specific moment. In addition, they could relate to rumorous sub-events (see e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be of noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution, however, detecting sub-events at real-time over Twitter stream is a challenging task [22], which increases latency and complexity. In this work, we address this issue by deep neural modeling only at single tweet level. Our intuition is to leverage the “wisdom of the crowd” theory; such that even a certain portion of tweets at a moment (mostly early stage) are weakly predicted (because of these noisy factors), the ensemble of them would attribute to a stronger prediction.
C
The convergence of the direction of gradient descent updates to the maximum L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT margin solution, however is very slow compared to the convergence of training loss, which explains why it is worthwhile continuing to optimize long after we have zero training error, and
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is also independent of the step-size
Let ℓℓ\ellroman_ℓ be the logistic loss, and 𝒱𝒱\mathcal{V}caligraphic_V be an independent validation set, for which ∃𝐱∈𝒱𝐱𝒱\exists\mathbf{x}\in\mathcal{V}∃ bold_x ∈ caligraphic_V such that 𝐱⊤⁢𝐰^<0superscript𝐱top^𝐰0\mathbf{x}^{\top}\hat{\mathbf{w}}<0bold_x start_POSTSUPERSCRIPT ⊤ end_POSTSUPERSCRIPT over^ start_ARG bold_w end_ARG < 0. Then the validation loss increases as
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the result for exponential loss are independent of initialization and with only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects thereby making the analysis simpler compared to squared loss.
We should not rely on plateauing of the training loss or on the loss (logistic or exp or cross-entropy) evaluated on a validation data, as measures to decide when to stop. Instead, we should look at the 00–1111 error on the validation dataset. We might improve the validation and test errors even when when the decrease in the training loss is tiny and even when the validation loss itself increases.
D
To overcome this issue, we set up a threshold 72 hours. We only consider the first candidate within 72 hours before or after the beginning time of the event as timestamp of human confirming rumors. On average the human editors of Snopes need 25.49 hours to verify the rumors and post it. Our system already achieves 87% accuracy in 25 hours. We illustrate two examples here in Figures 12(a) and 12(b). Figure 12(a) is a rumor about ‘Okra curing diabetes’ 161616http://www.snopes.com/medical/homecure/okra.asp which we detected the beginning time is 01.31.2014 04:00. Snope debunked it at 01.28.2014 21:00, 55 hours earlier than our study time period. However, Snopes does not provide any information regarding how they detect the rumor. Figure 12(b) depicts another example, showing that human detect it 71 hour after the event starts, which is the latest detection in our study. Despite those issues, we show the comparision results in Table 12.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet level and let each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 13(a). It can be seen that although the credibility of some tweets are low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 13(b). We can see the curve of Munich shooting event is also close to the curve of average news, indicating the event is more news-related.
the idea on focusing on early rumor signals on text contents; which is the most reliable source before the rumors widely spread. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, that could capture more hidden meaningful signal than only enquiries to debunk rumors.  (madetecting, ) also use RNN for rumor debunking. However, in their work, RNN is used at event-level. The classification leverages only the deep data representations of aggregated tweet contents of the whole event, while ignoring exploiting other –in latter stage–effective features such as user-based features and propagation features. Although, tweet contents are merely the only reliable source of clue at early stage, they are also likely to have doubtful perspectives and different stands in this specific moment. In addition, they could relate to rumorous sub-events (see e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be of noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution, however, detecting sub-events at real-time over Twitter stream is a challenging task (meladianos2015degeneracy, ), which increases latency and complexity. In this work, we address this issue by deep neural modeling only at single tweet level. Our intuition is to leverage the “wisdom of the crowd” theory; such that even a certain portion of tweets at a moment (mostly early stage) are weakly predicted (because of these noisy factors), the ensemble of them would attribute to a stronger prediction.
At 18:22 CEST, the first tweet was posted. There might be some certain delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is ”Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”.
The time period of a rumor event is sometimes fuzzy and hard to define. One reason is a rumor may have been triggered for a long time and kept existing, but it did not attract public attention. However it can be triggered by other events after a uncertain time and suddenly spreads as a bursty event. E.g., a rumor999http://www.snopes.com/robert-byrd-kkk-photo/ claimed that Robert Byrd was member of KKK. This rumor has been circulating in Twitter for a while. As shown in Figure 7(a) that almost every day there were several tweets talking about this rumor. But this rumor was triggered by a picture about Robert Byrd kissing Hillary Clinton in 2016 101010http://www.snopes.com/clinton-byrd-photo-klan/ and Twitter users suddenly noticed this rumor and it was spreaded burstily. In this work, what we are really interested in is the tweets which are posted in hours around the bursty peak. We defined the hour with the most tweets’ volume as tm⁢a⁢xsubscript𝑡𝑚𝑎𝑥t_{max}italic_t start_POSTSUBSCRIPT italic_m italic_a italic_x end_POSTSUBSCRIPT and we want to detect the rumor event as soon as possible before its burst, so we define the time of the first tweet before tm⁢a⁢xsubscript𝑡𝑚𝑎𝑥t_{max}italic_t start_POSTSUBSCRIPT italic_m italic_a italic_x end_POSTSUBSCRIPT within 48 hours as the beginning of this rumor event, marked as t0subscript𝑡0t_{0}italic_t start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. And the end time of the event is defined as te⁢n⁢d=t0+48subscript𝑡𝑒𝑛𝑑subscript𝑡048t_{end}=t_{0}+48italic_t start_POSTSUBSCRIPT italic_e italic_n italic_d end_POSTSUBSCRIPT = italic_t start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT + 48. We show the tweet volumes in Figure 7(b) of the above rumor example.
C
\mathcal{C}_{k}|a,t)\sum\limits_{l=1}^{m}P(\mathcal{T}_{l}|a,t,\mathcal{C}_{k}% )\hat{y_{a}},y_{a})sansserif_f start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = roman_arg roman_min start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT ∑ start_POSTSUBSCRIPT ∀ italic_a end_POSTSUBSCRIPT caligraphic_L ( ∑ start_POSTSUBSCRIPT italic_k = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT italic_P ( caligraphic_C start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT | italic_a , italic_t ) ∑ start_POSTSUBSCRIPT italic_l = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT italic_P ( caligraphic_T start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT | italic_a , italic_t , caligraphic_C start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) over^ start_ARG italic_y start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT end_ARG , italic_y start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT )
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event driven by a great variety of multiple factors. We address two major factors that are assumed to have the most influence on the dynamics of events at aspect-level, i.e., time and event type. Thus, we propose an adaptive approach based on the ensemble of multiple ranking models learned from training data, which is partitioned by entities’ temporal and type aspects. In more detail, we learn multiple models, which are co-trained using data soft partitioning / clustering method in Section 4.2, and finally combine the ranking results of different models in an ensemble manner. This approach allows sub-models to learn for different types and times (where feature sets can perform differently), without hurting each other. The adaptive global loss then co-optimizes all sub-models in a unified framework. We describe in details as follows.
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The results are shown in Table 3-bottom, showing that our cascaded model, with features inherited from the performance of SVM in previous task, substantially improves the single model. However, the overall modest results show the difficulty of this multi-class classification task.
For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and types (Breaking and Anticipate) set of entity-bearing queries. This allows us to evaluate the feature performance i.e., salience and timeliness, with time and type specification (RQ2). We then evaluate our ensemble ranking model (results from the cascaded evaluation) and show it robustly improves the baselines for all studied cases (RQ3). Notice that, we do not use the learned classifier in Section 5.2 for our ensemble model, since they both use the same time period for training, but opt for the on-the-fly ranking-sensitive clustering technique, described in Section 4.2.
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming the independent loss function, that does not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear model that minimizes the number of discordant pairs in the training data. We modified the objective function of RankSVM following our global loss function, which takes into account the temporal feature specificities of event entities. The temporal and type-dependent ranking model is learned by minimizing the following objective function:
D
In this case, the agent must sequentially learn both the underlying dynamics (La,Σa;∀asubscript𝐿𝑎subscriptΣ𝑎for-all𝑎L_{a},\Sigma_{a};\forall aitalic_L start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT , roman_Σ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT ; ∀ italic_a) and the conditional reward function’s variance (σa2,∀a)\sigma_{a}^{2},\forall a)italic_σ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT , ∀ italic_a ),
We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
We now describe in detail how to use the SMC-based posterior random measure pM⁢(θt+1,a|ℋ1:t)subscript𝑝𝑀conditionalsubscript𝜃𝑡1𝑎subscriptℋ:1𝑡p_{M}(\theta_{t+1,a}|\mathcal{H}_{1:t})italic_p start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT ( italic_θ start_POSTSUBSCRIPT italic_t + 1 , italic_a end_POSTSUBSCRIPT | caligraphic_H start_POSTSUBSCRIPT 1 : italic_t end_POSTSUBSCRIPT ) for both Thompson sampling and Bayes-UCB policies: i.e., which are the specific instructions to execute in steps 5 and 7 of Algorithm 1.
For the more interesting case of unknown parameters, we marginalize parameters Lasubscript𝐿𝑎L_{a}italic_L start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT and ΣasubscriptΣ𝑎\Sigma_{a}roman_Σ start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT of the transition distributions
If the support of q⁢(⋅)𝑞⋅q(\cdot)italic_q ( ⋅ ) includes the support of the distribution of interest p⁢(⋅)𝑝⋅p(\cdot)italic_p ( ⋅ ), one computes the IS estimator of a test function based on the normalized weights w(m)superscript𝑤𝑚w^{(m)}italic_w start_POSTSUPERSCRIPT ( italic_m ) end_POSTSUPERSCRIPT,
A
Table 2 gives an overview of the number of different measurements that are available for each patient.111For patient 9, no data is available. The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only two glucose measurements per day on average and measured glucose within 4 hours or less after a meal only 5 out of 54 times.
Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10 minute intervals with at least 10 steps tracked by the google fit app.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements match the number carbohydrate entries – it is possible the discrepancy is a result of missing (glucose and carbohydrate) measurements.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14.
D
Our proposed encoder-decoder model clearly demonstrated competitive performance on two datasets towards visual saliency prediction. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantitatively on five eye tracking datasets. This suggests that convolutional layers with large receptive fields at different dilation factors can enable a more holistic estimation of salient image regions in complex scenes. Moreover, our approach is computationally lightweight compared to prior state-of-the-art approaches and could thus be implemented in (virtual) robotic systems that require computational efficiency. It also outperformed all other networks defined with a pre-trained VGG16 backbone as calculated by the cumulative rank on a subset of evaluation metrics to resolve some of the inconsistencies in ranking models by a single measure or a set of correlated ones Riche et al. (2013); Bylinskii et al. (2018).
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation metrics. Table 1 summarizes our results on the test dataset of MIT1003, namely MIT300 Judd et al. (2012), in the context of previous approaches. The evaluation shows that our model only marginally failed to achieve state-of-the-art performance on any of the individual metrics. When computing the cumulative rank (i.e. the sum of ranks according to the standard competition ranking procedure) on a subset of weakly correlated measures (sAUC, CC, KLD) Riche et al. (2013); Bylinskii et al. (2018), we ranked third behind the two architectures DenseSal and DPNSal from Oyama and Yamanaka (2018). However, their approaches were based on a pre-trained Densely Connected Convolutional Network with 161 layers Huang et al. (2017) and Dual Path Network with 131 layers Chen et al. (2017) respectively, both of which are computationally far more expensive than the VGG16 model used in this work (see Table 5 by Oyama and Yamanaka (2018) for a comparison of the computational efficiency). Furthermore, DenseSal and DPNSal implemented a multi-path design where two images of different resolutions are simultaneously fed to the network, which substantially reduces the execution speed compared to single-stream architectures. Among all entries of the MIT300 benchmark with a VGG16 backbone Cornia et al. (2016); Huang et al. (2015); Cornia et al. (2018); Kruthiventi et al. (2017), our model clearly achieved the highest performance.
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer et al. (2014) and II Kümmerer et al. (2016) employed a pre-trained classification model to read out salient image locations from a small subset of encoding layers. This is similar to the network by Cornia et al. (2016) which utilizes the output at three stages of the hierarchy. Oyama and Yamanaka (2018) demonstrated that classification performance of pre-trained architectures strongly correlates with the accuracy of saliency predictions, highlighting the importance of object information. Related approaches also focused on the potential benefits of incorporating activation from both coarse and fine image resolutions Huang et al. (2015), and recurrent connections to capture long-range spatial dependencies in convolutional feature maps Cornia et al. (2018); Liu and Han (2018). Our model explicitly combines semantic representations at multiple spatial scales to include contextual information in the predictive process. For a more complete account of existing saliency architectures, we refer the interested reader to a comprehensive review by Borji (2018).
Further improvements of benchmark results could potentially be achieved by a number of additions to the processing pipeline. Our model demonstrates a learned preference for predicting fixations in central regions of images, but we expect performance gains from modeling the central bias in scene viewing explicitly Kümmerer et al. (2014, 2016); Cornia et al. (2016, 2018); Kruthiventi et al. (2017). Additionally, Bylinskii et al. (2015) summarized open problems for correctly assigning saliency in natural images, such as robustness in detecting semantic features, implied gaze and motion, and importance weighting of multiple salient regions. While the latter was addressed in this study, Figure 4 indicates that the remaining obstacles still persist for our proposed model.
For related visual tasks such as semantic segmentation, information distributed over convolutional layers at different levels of the hierarchy can aid the preservation of fine spatial details Hariharan et al. (2015); Long et al. (2015). The prediction of fixation density maps does not require accurate class boundaries but still benefits from combined mid- to high-level feature responses Kümmerer et al. (2014, 2016); Cornia et al. (2016). Hence, we adapted the multi-level design proposed by Cornia et al. (2016) and concatenated the output from layers 10, 14, and 18 into a common tensor with 1,280 activation maps.
C
Finally, we have to show that in this pd-marking scheme, the maximum number of activeactive\operatorname{\texttt{active}}act positions is bounded by 2⁢k+12𝑘12k+12 italic_k + 1. This is obviously true at step p1subscript𝑝1p_{1}italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. Now let s𝑠sitalic_s with 1≤s≤|α|−11𝑠𝛼11\leq s\leq|\alpha|-11 ≤ italic_s ≤ | italic_α | - 1 be arbitrary. Since the total number of activeactive\operatorname{\texttt{active}}act positions at step pssubscript𝑝𝑠p_{s}italic_p start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT and ps+1subscript𝑝𝑠1p_{s+1}italic_p start_POSTSUBSCRIPT italic_s + 1 end_POSTSUBSCRIPT are bounded by 2⁢k2𝑘2k2 italic_k, we only have to show that the maximum number of activeactive\operatorname{\texttt{active}}act positions in the marking scheme transforming pssubscript𝑝𝑠p_{s}italic_p start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT into ps+1subscript𝑝𝑠1p_{s+1}italic_p start_POSTSUBSCRIPT italic_s + 1 end_POSTSUBSCRIPT is bounded by 2⁢k+12𝑘12k+12 italic_k + 1. Let us assume that at stage s𝑠sitalic_s and s+1𝑠1s+1italic_s + 1 of σ𝜎\sigmaitalic_σ, there are kssubscript𝑘𝑠k_{s}italic_k start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT (ks+1subscript𝑘𝑠1k_{s+1}italic_k start_POSTSUBSCRIPT italic_s + 1 end_POSTSUBSCRIPT, respectively) marked blocks, and exactly ks,1subscript𝑘𝑠1k_{s,1}italic_k start_POSTSUBSCRIPT italic_s , 1 end_POSTSUBSCRIPT (ks+1,1subscript𝑘𝑠11k_{s+1,1}italic_k start_POSTSUBSCRIPT italic_s + 1 , 1 end_POSTSUBSCRIPT, respectively) blocks have size 1111; note that this means that at step pssubscript𝑝𝑠p_{s}italic_p start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT there are ks,1+2⁢(ks−ks,1)subscript𝑘𝑠12subscript𝑘𝑠subscript𝑘𝑠1k_{s,1}+2(k_{s}-k_{s,1})italic_k start_POSTSUBSCRIPT italic_s , 1 end_POSTSUBSCRIPT + 2 ( italic_k start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT - italic_k start_POSTSUBSCRIPT italic_s , 1 end_POSTSUBSCRIPT ) activeactive\operatorname{\texttt{active}}act positions.
j𝑗jitalic_j joins two blocks of size 1111: the number of activeactive\operatorname{\texttt{active}}act positions increases by 1111. This is due to the fact that by setting j𝑗jitalic_j to activeactive\operatorname{\texttt{active}}act, we do not create any internal activeactive\operatorname{\texttt{active}}act positions that could be set to closedclosed\operatorname{\texttt{closed}}closed.
We first prove pw⁡(Gα)≤2⁢loc⁡(α)pwsubscript𝐺𝛼2loc𝛼\operatorname{\textsf{pw}}(G_{\alpha})\leq 2\operatorname{\textsf{loc}}(\alpha)pathwidth ( italic_G start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT ) ≤ 2 loc ( italic_α ). Intuitively speaking, we will translate the stages of a marking sequence σ𝜎\sigmaitalic_σ for α𝛼\alphaitalic_α into steps of a pd-marking scheme for Gαsubscript𝐺𝛼G_{\alpha}italic_G start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT in a natural way: each marked block α[s..t]\alpha[s..t]italic_α [ italic_s . . italic_t ] is represented by letting the border positions s𝑠sitalic_s and t𝑡titalic_t be activeactive\operatorname{\texttt{active}}act, the internal position s+1,s+2,…,t−1𝑠1𝑠2…𝑡1s+1,s+2,\ldots,t-1italic_s + 1 , italic_s + 2 , … , italic_t - 1 closedclosed\operatorname{\texttt{closed}}closed, and all other positions openopen\operatorname{\texttt{open}}open. In particular, this means that each stage of the marking sequence with k𝑘kitalic_k marked blocks is represented by at most 2⁢k2𝑘2k2 italic_k activeactive\operatorname{\texttt{active}}act positions in the corresponding step of the pd-marking scheme (note that marked blocks of size 1111 are represented by only one activeactive\operatorname{\texttt{active}}act position). The difficulty will be to show that in the process of transforming one such step of the pd-marking scheme into the next one, we do not produce more than 2⁢πσ⁢(α)+12subscript𝜋𝜎𝛼12\pi_{\sigma}(\alpha)+12 italic_π start_POSTSUBSCRIPT italic_σ end_POSTSUBSCRIPT ( italic_α ) + 1 activeactive\operatorname{\texttt{active}}act positions. This is non-trivial, since due to the cover-property of the pd-marking scheme, we must first set all positions to activeactive\operatorname{\texttt{active}}act that correspond to occurrences of the next symbol to be marked by σ𝜎\sigmaitalic_σ before we can set them from activeactive\operatorname{\texttt{active}}act to closedclosed\operatorname{\texttt{closed}}closed.
This completes the definition of the marking scheme. Figure 7 contains an example of how step ps+1subscript𝑝𝑠1p_{s+1}italic_p start_POSTSUBSCRIPT italic_s + 1 end_POSTSUBSCRIPT is obtained from step pssubscript𝑝𝑠p_{s}italic_p start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT. In this example, we first set extending positions to activeactive\operatorname{\texttt{active}}act that do not join marked blocks, and then we set the remaining extending positions to activeactive\operatorname{\texttt{active}}act. This is done for illustrational reasons (recall that we have not restricted the order in which we set extending positions to activeactive\operatorname{\texttt{active}}act).
In the first phase of the marking scheme, i. e., the phase where we only set extending positions to activeactive\operatorname{\texttt{active}}act, the following different situations can arise, whenever we set some position j𝑗jitalic_j to activeactive\operatorname{\texttt{active}}act (see Figure 7 for an illustration):
D
Zubair et al.[75] detected the R-peak using a non-linear transformation and formed a beat segment around it. Then, they used the segments to train a three layer 1D CNN with variable learning rate depending on the mean square error and achieved better results than previous state-of-the-art.
In their article Kiranyaz et al.[77] trained patient-specific CNNs that can be used to classify long ECG data stream or for real-time ECG monitoring and early alert system on a wearable device. The CNN consisted of three layers of an adaptive implementation of 1D convolution layers.
Taji et al.[91] trained a DBN to classify acceptable from unacceptable ECG segments to reduce the false alarm rate caused by poor quality ECG during AF detection. Eight different levels of ECG quality are provided by contaminating ECG with motion artifact from the NSTDB for validation.
Another three models were trained using the signals as 1D. The first model was a FNN with dropout, the second a three layer 1D CNN and the third a 2D CNN same as the first but trained with a stacked version of the signal (also trained with data augmentation).
Experiments by the authors showed that the three layer 1D CNN created better and more stable results. In[101] the authors trained a network with an one convolutional layer with dropout followed by two RNNs to identify stress using short-term ECG data.
A
Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator and directly applies model-free policy learning to acquire the policy. However, we could use the model for planning. Also, since our model is differentiable, the additional information contained in its gradients could be incorporated into the reinforcement learning process. Finally, the representation learned by the predictive model is likely be more meaningful by itself than the raw pixel observations from the environment. Incorporating this representation into the policy could further accelerate and improve the reinforcement learning process.
The results in these figures are generated by averaging 5555 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5555 runs was often significantly better. For 6666 of the games, it exceeds the average human score (as reported in Table 3 of Pohlen et al. (2018)). This suggests that further stabilizing SimPLe should improve its performance, indicating an important direction for future work. In some cases during training we observed high variance of the results during each step of the loop. There are a number of possible reasons, such as mutual interactions of the policy training and the supervised training or domain mismatch between the model and the real environment. We present detailed numerical results, including best scores and standard deviations, in Appendix D.
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in low data regime of 100100100100k samples, on more than half of the games, our method achieves a score which Rainbow requires at least twice as many samples. In the best case of Freeway, our method is more than 10x more sample-efficient, see Figure 3. Since the publication of the first preprint of this work, it has been shown in van Hasselt et al. (2019); Kielak (2020) that Rainbow can be tuned to have better results in low data regime. The results are on a par with SimPLe – both of the model-free methods are better in 13 games, while SimPLe is better in the other 13 out of the total 26 games tested (note that in Section 4.2 van Hasselt et al. (2019) compares with the results of our first preprint, later improved).
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, and PPO (Schulman et al., 2017), a model-free policy gradient algorithm (see Appendix E for details of tuning of Rainbow and PPO). The results of the comparison are presented in Figure 3. For each game, we plot the number of time steps needed for either Rainbow or PPO to reach the same score that our method reaches after 100100100100K interaction steps. The red line indicates 100100100100K steps: any bar larger than this indicates a game where the model-free method required more steps. SimPLe outperforms the model-free algorithms in terms of learning speed on nearly all of the games, and in the case of a few games, does so by over an order of magnitude. For some games, it reaches the same performance that our PPO implementation reaches at 10101010M steps. This indicates that model-based reinforcement learning provides an effective approach to learning Atari games, at a fraction of the sample complexity.
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an important direction for future work. Another, less obvious limitation is that the performance of our method generally varied substantially between different runs on the same game. The complex interactions between the model, policy, and data collection were likely responsible for this. In future work, models that capture uncertainty via Bayesian parameter posteriors or ensembles (Kurutach et al., 2018; Chua et al., 2018) may improve robustness.
D
However more work needs to be done for full replacing non-trainable S2Is, not only from the scope of achieving higher accuracy results but also increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level features of the ‘base models’ might not be suitable for spectrogram-like images such as those created by S2Is.
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable parameters such as convolutional and linear layers or it is non-trainable such as traditional time-frequency methods.
Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin Response.
This is achieved with the use of multilayer networks, that consist of million parameters [1], trained with backpropagation [2] on large amount of data. Although deep learning is mainly used in biomedical images there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for diagnosis and prediction problems.
However more work needs to be done for full replacing non-trainable S2Is, not only from the scope of achieving higher accuracy results but also increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch based on the hypothesis that pretrained low level features of the ‘base models’ might not be suitable for spectrogram-like images such as those created by S2Is.
B
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised control of locomotion mode transition hinges on constant operator-robot interaction, which might not always be feasible or reliable, especially in confined and complex environments typical in search and rescue missions [12]. In such situations, operators might struggle to maintain absolute situational awareness. To address the locomotion mode transition conundrum, various solutions have been proposed. These include adopting specialized mechanical designs [13, 14] and applying pre-programmed solutions [15]. Although these methods have enhanced the autonomy of locomotion mode transitions, universally applicable autonomous solutions remain in the early stages of development. In fact, most locomotion mode transitions in hybrid robots are currently achieved via high-level human operator control. This applies to cutting-edge wheel/track-legged robots, including DRC-HUBO, CHIMP, Momaro, and RoboSimian, depicted in Fig. 1, which were four of the top five robot designs crafted for the DARPA Robotics Challenge [1].
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that determine the best mode—either rolling or walking—based on the robot’s environmental interactions and internal states [7, 8]. In addressing the first challenge, the dynamics of rolling locomotion are well understood and are similar to those of traditional wheeled/tracked robots. However, despite extensive research on the walking dynamics of standard legged robots, focused studies on the walking patterns specific to wheel/track-legged robots are limited [9]. Transition control between these locomotion modes for wheel/track-legged robots also requires more exploration [6]. In this study, we focus on the second challenge to develop efficient decision-making algorithms for transitioning between locomotion modes. This remains a very less explored area [3], but is essential to achieve an autonomous locomotion transition in hybrid robots. Building upon our prior work, we employ two climbing gaits to ensure smooth walking locomotion for wheel/track-legged robots, particularly when navigating steps [10].
The Cricket robot, as referenced in [20], forms the basis of this study, being a fully autonomous track-legged quadruped robot. Its design specificity lies in embodying fully autonomous behaviors, and its locomotion system showcases a unique combination of four rotational joints in each leg, which can be seen in Fig. 3. Moreover, every leg is equipped with a drivable track that circumnavigates the outermost leg segment. This design enables the robot to steer in a manner reminiscent of traditional tank robots. However, unlike its contemporaries, the Cricket robot possesses the ability to conduct intricate movements, such as navigating through uneven terrain, in its walking locomotion mode [21]. The two primary forms of the robot’s movement are rolling, which leverages tracks for efficient movement across semi-flat terrains, and walking, which is primarily used for maneuvering across challenging and uneven terrains. In this paper, these modes will be referred to as rolling and walking, respectively. Similar to many other hybrid robots, the default locomotion mode of the Cricket robot is rolling. This mode is preferred on flat and rigid surfaces due to its efficiency in terms of time and energy consumption. In the rolling locomotion mode, the robot maintains its home configuration, where all joints are positioned at their central positions as illustrated in Fig. 3.
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised control of locomotion mode transition hinges on constant operator-robot interaction, which might not always be feasible or reliable, especially in confined and complex environments typical in search and rescue missions [12]. In such situations, operators might struggle to maintain absolute situational awareness. To address the locomotion mode transition conundrum, various solutions have been proposed. These include adopting specialized mechanical designs [13, 14] and applying pre-programmed solutions [15]. Although these methods have enhanced the autonomy of locomotion mode transitions, universally applicable autonomous solutions remain in the early stages of development. In fact, most locomotion mode transitions in hybrid robots are currently achieved via high-level human operator control. This applies to cutting-edge wheel/track-legged robots, including DRC-HUBO, CHIMP, Momaro, and RoboSimian, depicted in Fig. 1, which were four of the top five robot designs crafted for the DARPA Robotics Challenge [1].
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measurements of soil attributes prior to robot deployment [9]. Moreover, it’s important to consider that these terramechanics models, striving to predict robot-terrain interactions, often involve substantial computational costs due to their complexity [16]. Therefore, terramechanics methods are unsuitable for use in autonomous locomotion mode transition control directly, particularly in scenarios where robots need to move at high speeds, for example in search and rescue missions. To bypass the limitations of terramechanics methods, researchers have probed into alternative strategies for accomplishing autonomous locomotion transition. For example, certain studies have utilized energy consumption as a metric for evaluating the transverse-ability of different locomotion modes in wheel/track-legged robots [8]. By scrutinizing the energy expenditure for different locomotion modes, researchers can evaluate their efficiency in navigating various terrains. Additionally, other general parameters like stability margin and motion efficiency have been examined in the quest to achieve autonomous locomotion transition [2].
D
For paid exchanges at the beginning of the phase, Tog incurs a cost that is less than m2superscript𝑚2m^{2}italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. Before serving the last request σℓsubscript𝜎ℓ\sigma_{\ell}italic_σ start_POSTSUBSCRIPT roman_ℓ end_POSTSUBSCRIPT of the phase, the access cost of Tog is less than m3superscript𝑚3m^{3}italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT by definition, and the access cost to σℓsubscript𝜎ℓ\sigma_{\ell}italic_σ start_POSTSUBSCRIPT roman_ℓ end_POSTSUBSCRIPT is at most m𝑚mitalic_m. ∎
In an ignoring phase, the cost of Tog for the phase is in the range (β⁢m3,β⁢m3⁢(1+1/m2))𝛽superscript𝑚3𝛽superscript𝑚311superscript𝑚2(\beta m^{3},\beta m^{3}(1+1/m^{2}))( italic_β italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT , italic_β italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ( 1 + 1 / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) ) (excluding the last phase).
The worst-case ratio between the costs of Tog and Mtf2 is maximized when the last phase is an ignoring phase. In this case, we have k𝑘kitalic_k trusting phases and k𝑘kitalic_k ignoring phases. The total cost of Mtf2 is at least k⁢m3+k⁢(β⁢m3/2−m2)=k⁢m3⁢(1+β/2−1/m)𝑘superscript𝑚3𝑘𝛽superscript𝑚32superscript𝑚2𝑘superscript𝑚31𝛽21𝑚km^{3}+k(\beta m^{3}/2-m^{2})=km^{3}(1+\beta/2-1/m)italic_k italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT + italic_k ( italic_β italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT / 2 - italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) = italic_k italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ( 1 + italic_β / 2 - 1 / italic_m ). By Lemma 21, the cost of Tog is at most k⁢m3⁢(1+β+3/m)𝑘superscript𝑚31𝛽3𝑚km^{3}(1+\beta+3/m)italic_k italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ( 1 + italic_β + 3 / italic_m ). The ratio between the two algorithms will be less than
For a trusting phase, the cost of Tog is in the range (m3,m3⁢(1+1/m+1/m2))superscript𝑚3superscript𝑚311𝑚1superscript𝑚2(m^{3},m^{3}(1+1/m+1/m^{2}))( italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT , italic_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ( 1 + 1 / italic_m + 1 / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) )
Similar arguments apply for an ignoring phase with the exception that the threshold is β⋅m2⋅𝛽superscript𝑚2\beta\cdot m^{2}italic_β ⋅ italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT and there are no paid exchanges performed by Tog. So, we can observe the following.
D
Regarding the support that SS3 provides for early classification we can say that, even though the rules we used are very simple, they are more effective than more elaborated and complex mechanisms used in the pilot task. For instance, some mechanisms to stop reading and classifying a subject included complex decision mechanisms based on specific rules for different chunks [Villegas et al., 2017]. These rules take into account the decisions of different classifiers, the probability that each classifier assigned to its prediction, “white lists” containing the words with the highest information gain, and other sources of information. Another approach that showed a good performance relied on hand-crafted rules specifically designed for this problem [Trotzek et al., 2017], of the form: “if output ≥αnabsentsubscript𝛼𝑛\geq\alpha_{n}≥ italic_α start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT and number of writings ≥nabsent𝑛\geq n≥ italic_n, then classify as positive”, “if output ≤βnabsentsubscript𝛽𝑛\leq\beta_{n}≤ italic_β start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT and the number of writings ≥nabsent𝑛\geq n≥ italic_n, then classify as non-depressed”, etc.
Regarding document representations some research groups used simple features like standard Bag of Words [Trotzek et al., 2017, Villegas et al., 2017, Farıas-Anzaldúa et al., 2017], bigrams and trigrams [Villegas et al., 2017, Almeida et al., 2017, Farıas-Anzaldúa et al., 2017], while others used more elaborated and domain-specific ones like lexicon-based features555Such as emotion words from WordNet, sentiment words from Vader, and preexisting depression-related dictionaries.[Malam et al., 2017, Trotzek et al., 2017, Sadeque et al., 2017, Almeida et al., 2017], LIWIC features [Trotzek et al., 2017, Villegas et al., 2017], Part-of-Speech tags [Almeida et al., 2017], statistical features666Such as the average number of posts, the average number of words per post, post timestamps, etc.[Malam et al., 2017, Almeida et al., 2017, Farıas-Anzaldúa et al., 2017] or even hand-crafted features [Trotzek et al., 2017]. Some other groups made use of more sophisticated features such as Latent Semantic Analysis [Trotzek et al., 2017], Concise Semantic Analysis [Villegas et al., 2017], Doc2Vec [Trotzek et al., 2017] or even graph-based representations [Villatoro-Tello et al., 2017].
Most research groups [Malam et al., 2017, Trotzek et al., 2017, Sadeque et al., 2017, Villatoro-Tello et al., 2017, Villegas et al., 2017, Almeida et al., 2017] applied a simple policy in which, the same way as in [Losada & Crestani, 2016], a subject is classified as depressed when the classifier outputs a value greater than a fixed threshold. Some other groups [Farıas-Anzaldúa et al., 2017] applied no policy at all and no early classification was performed, i.e. their classifiers made their predictions only after seeing the entire subject’s history888Note that this is not a realistic approach, usually there is no such thing as a subject’s “last writing” in real life since subjects are able to create new writings over time.. It is worth mentioning that some groups [Malam et al., 2017, Trotzek et al., 2017, Villegas et al., 2017] added extra conditions to the given policy, for instance [Trotzek et al., 2017] used a list of manually-crafted rules of the form: “if output ≥αnabsentsubscript𝛼𝑛\geq\alpha_{n}≥ italic_α start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT and the number of writings ≥nabsent𝑛\geq n≥ italic_n, then classify as positive”, “if output ≤βnabsentsubscript𝛽𝑛\leq\beta_{n}≤ italic_β start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT and the number of writings ≥nabsent𝑛\geq n≥ italic_n, then classify as non-depressed”, etc.
Regarding classification models, some groups used standard classifiers777Such as Multinomial Naive Bayes(MNB), Logistic Regression (LOGREG), Support Vector Machine(SVM), Random Forest, Decision Trees, etc.[Malam et al., 2017, Trotzek et al., 2017, Sadeque et al., 2017, Villegas et al., 2017, Almeida et al., 2017, Farıas-Anzaldúa et al., 2017] while others made use of more complex methods such as different types of Recurrent Neural Networks [Trotzek et al., 2017, Sadeque et al., 2017], graph-based models [Villatoro-Tello et al., 2017], or even combinations or ensemble of different classifiers [Trotzek et al., 2017, Sadeque et al., 2017, Villegas et al., 2017, Almeida et al., 2017].
It is true that more elaborated methods that simultaneously learn the classification model and the policy to stop reading could have been used, such as in [Dulac-Arnold et al., 2011, Yu et al., 2017]. However, for the moment it is clear that this very simple approach is effective enough to outperform the remainder methods, leaving for future work the use of more elaborated approaches.
D
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1). In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In practice, MSGD often outperforms SGD (Krizhevsky et al., 2012; Sutskever et al., 2013). Many machine learning platforms, such as TensorFlow, PyTorch and MXNet, include MSGD as one of their optimization methods.
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model training.
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-reduce framework.
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and all-reduce frameworks.
GMC can be easily implemented on the all-reduce distributed framework in which each worker sends the sparsified vector 𝒞⁢(𝐞t+12,k)𝒞subscript𝐞𝑡12𝑘\mathcal{C}({\bf e}_{t+\frac{1}{2},k})caligraphic_C ( bold_e start_POSTSUBSCRIPT italic_t + divide start_ARG 1 end_ARG start_ARG 2 end_ARG , italic_k end_POSTSUBSCRIPT ) to all the other workers, then each worker updates 𝐰t+1subscript𝐰𝑡1{\bf w}_{t+1}bold_w start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT after receiving the sparsified vectors from all the other workers.
C
Olshausen et al. [43] presented an objective function that considers subjective measures of sparseness of the activation maps, however in this work we use the direct measure of compression ratio. Previous work by [44] have used a weighted combination of the number of neurons, percentage root-mean-squared difference and a correlation coefficient for the optimization function of a FNN as a metric but without taking consideration the number of non-zero activations.
The increased number of weights and non-zero activations make DNNs more complex, and thus more difficult to use in problems that require corresponding causality of the output with a specific set of neurons. The majority of domains where machine learning is applied, including critical areas such as healthcare [26], require models to be interpretable and explainable before considering them as a solution.
A limitation of SANs is the use of varying amplitude-only kernels, which are not sufficient for more complex data and also do not fully utilize the compressibility of the data. A possible solution would be using a grid sampler [45] on the kernel allowing it to learn more general transformations (such as scale) than simple amplitude variability.
It is interesting to note that in some cases SANs reconstructions, such as for the Extrema-Pool indices, performed even better than the original data. This suggests the overwhelming presence of redundant information that resides in the raw pixels of the original data and further indicates that SANs extract the most representative features of the data.
The φ𝜑\varphiitalic_φ metric is also related to the rate-distortion theory [40], in which the maximum distortion is defined according to human perception, which however inevitably introduces a bias. There is also relation with the field of Compressed Sensing [41] in which the sparsity of the data is exploited allowing us to reconstruct it with fewer samples than the Nyquist-Shannon theorem requires and the field of Robust Feature Extraction [42] where robust features are generated with the aim to characterize the data.
B
The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of two strategies and τ𝜏\tauitalic_τ. UAV prefers to select the power and altitude which provide higher utility. Nevertheless, highly dynamic scenarios will cause UAVs to make mistakes and pick the worse strategy. The dynamic degree index τ𝜏\tauitalic_τ determines the dynamic degree of the situation and UAV’s performance. Small τ𝜏\tauitalic_τ means less dynamic scenarios and fewer mistakes when UAVs are making decisions. When τ→0→𝜏0\tau\rightarrow 0italic_τ → 0 which equals to stabilization, UAV will always select the power and altitude with higher utility; when τ→∞→𝜏\tau\rightarrow\inftyitalic_τ → ∞ where exists sever dynamics, UAV will choose them randomly. However, PBLLA has its limitations that PBLLA is only one single UAV is allowed for altering strategies in one iteration. We will propose a new algorithm in the next section to overcome the restrictions.
Since PBLLA only allows one single UAV to alter strategies in one iteration, such defect would cause computation time to grow exponentially in large-scale UAVs systems. In terms of large-scale UAVs ad-hoc networks with a number of UAVs denoted as M𝑀Mitalic_M, M2superscript𝑀2M^{2}italic_M start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT times of exchange messages will be needed to coordinate and guarantee that only one UAV changes strategy in each iteration. Such a process not only consumes large energy but also prolongs convergence time. Algorithms that can improve the learning rate and reduce messages exchange is urgently needed. Thus, we propose the Synchronous Payoff-based Binary Log-linear Learning Algorithm (SPBLLA), which permits each UAV altering their strategies synchronously and learning with no message exchange.
Compared with other algorithms, novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in the UAV networks in search of the optimal channel selection [31][29], such as stochastic learning algorithm [30]. The most widely seen algorithm–LLA is an ideal method for NE approaching [9][32]. The BLLA has been employed by [33], which is modified from LLA to update strategies in each iteration to converge to the NE. However, only a single agent is allowed to alter strategies in one iteration. In large-scale scenarios, more iterations are required, which makes BLLA inefficient. It is obvious that more UAVs altering strategies in one iteration would be more efficient. To achieve it, the works in [34] and [35] have provided a novel synchronous algorithm. However, there exist superabundant restrictions that make the algorithm impractical in most scenarios. Compared with the formers, SPBLLA has fewer constraints and can achieve synchronous operation, which can significantly improve the computational efficiency.
Fig. 15 presents the learning rate of PBLLA and SPBLLA when τ=0.01𝜏0.01\tau=0.01italic_τ = 0.01. As m𝑚mitalic_m increases the learning rate of SPBLLA decreases, which has been shown in Fig. 15. However, when m𝑚mitalic_m is small, SPBLLA’s learning rate is about 3 times that of PBLLA showing the great advantage of synchronous learning. When τ=0.015𝜏0.015\tau=0.015italic_τ = 0.015 and τ=0.02𝜏0.02\tau=0.02italic_τ = 0.02 as shown in Fig. 15, such phenomenon also exists. Since PBLLA merely permits a single UAV to alter strategies in one iteration, SPBLLA’s synchronous learning rate will much larger than PBLLA. Moreover, in the large-scale UAV network with high dynamic, PBLLA needs information exchange to decide the update order, which would severely prolong the learning time. PBLLA’s learning time might be four times as long as that of SPBLLA. Thus we can make the conclusion that in the same condition (the same τ𝜏\tauitalic_τ and other indexes), SPBLLA performs better and is more suitable for large-scale highly dynamic environment than PBLLA, and SPBLLA can improve the learning rate several times. With larger altering strategies probability, SPBLLA will be even more powerful.
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on current game state, and then another UAV changes strategy in the next iteration based on the new game state. It means that UAVs are not permitted to update strategies at the same time. Besides, to determine which UAV to update strategy, the coordinating process will occupy plenty of channel capacities and require more time between two iterations [15]. If the algorithm can learn synchronously, more than one UAV can update strategies based on the current game state in one iteration. Thus, the algorithm can be more efficient. To sum up, synchronous update algorithms which can learn from previous experiences are desirable, but only a little research investigated on it.
A
=Σej⁢Be⁢se3absentsubscript𝑒𝑗absentΣsuperscript𝐵𝑒superscript𝑠𝑒3\displaystyle=\overset{e_{j}}{\underset{}{\Sigma}}\,B^{e}\frac{s^{e}}{3}= start_OVERACCENT italic_e start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_OVERACCENT start_ARG start_UNDERACCENT end_UNDERACCENT start_ARG roman_Σ end_ARG end_ARG italic_B start_POSTSUPERSCRIPT italic_e end_POSTSUPERSCRIPT divide start_ARG italic_s start_POSTSUPERSCRIPT italic_e end_POSTSUPERSCRIPT end_ARG start_ARG 3 end_ARG
U¯r′superscriptsubscript¯𝑈𝑟′\displaystyle\overline{U}_{r}^{\prime}over¯ start_ARG italic_U end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT =D⁢r¯¯∗U¯absent¯¯𝐷𝑟¯𝑈\displaystyle=\overline{\overline{Dr}}*\overline{U}= over¯ start_ARG over¯ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_U end_ARG
U^r′superscriptsubscript^𝑈𝑟′\displaystyle\widehat{U}_{r}^{\prime}over^ start_ARG italic_U end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT =D⁢r^¯∗U¯absent¯^𝐷𝑟¯𝑈\displaystyle=\overline{\widehat{Dr}}*\overline{U}= over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_U end_ARG
=S¯¯−1∗(M^¯T∗S^^∗D⁢r^¯)absentsuperscript¯¯𝑆1superscript¯^𝑀𝑇^^𝑆¯^𝐷𝑟\displaystyle=\overline{\overline{S}}^{-1}*\left(\overline{\widehat{M}}^{T}*% \widehat{\widehat{S}}*\overline{\widehat{Dr}}\right)= over¯ start_ARG over¯ start_ARG italic_S end_ARG end_ARG start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ∗ ( over¯ start_ARG over^ start_ARG italic_M end_ARG end_ARG start_POSTSUPERSCRIPT italic_T end_POSTSUPERSCRIPT ∗ over^ start_ARG over^ start_ARG italic_S end_ARG end_ARG ∗ over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG )
U¯r′superscriptsubscript¯𝑈𝑟′\displaystyle\overline{U}_{r}^{\prime}over¯ start_ARG italic_U end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT =(S¯¯−1∗(M^¯T∗S^^∗D⁢r^¯))∗U¯absentsuperscript¯¯𝑆1superscript¯^𝑀𝑇^^𝑆¯^𝐷𝑟¯𝑈\displaystyle=\left(\overline{\overline{S}}^{-1}*\left(\overline{\widehat{M}}^%
D
When using the framework, one can further require reflexivity on the comparability functions, i.e. f⁢(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, in order to get a semantic of comparability closer to equality. Even more, it could be possible to make the functions reflexive on all values but null where some freedom is allowed.
Intuitively, if an abstract value xAsubscript𝑥𝐴x_{A}italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT of ℒAsubscriptℒ𝐴\mathcal{L}_{A}caligraphic_L start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT is interpreted as 1111 (i.e., equality) by hAsubscriptℎ𝐴h_{A}italic_h start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, any value yA≥AxAsubscript𝐴subscript𝑦𝐴subscript𝑥𝐴y_{A}\geq_{A}x_{A}italic_y start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ≥ start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT must be set to 1111 since it is closer to
When using the framework, one can further require reflexivity on the comparability functions, i.e. f⁢(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, in order to get a semantic of comparability closer to equality. Even more, it could be possible to make the functions reflexive on all values but null where some freedom is allowed.
fA⁢(u,v)=fB⁢(u,v)={1if ⁢u=v≠nullaif ⁢u≠null,v≠null and ⁢u≠vbif ⁢u=v=null0otherwise.subscript𝑓𝐴𝑢𝑣subscript𝑓𝐵𝑢𝑣cases1if 𝑢𝑣null𝑎formulae-sequenceif 𝑢null𝑣null and 𝑢𝑣𝑏if 𝑢𝑣null0otherwise.f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},v\neq\texttt{null}\text{ and }u\neq v\\
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows to consider absent values as possibly
D
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optimal value function can be computed exactly.
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow that could lead to solve the task on the environment [1][2][3]. Most of Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy in the form of sequence of actions that can be followed by the agent to solve the task. The process of choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning for that reason many algorithms have been introduced over a course of time, e.g, Q-learning[4], SARSA[5], and policy gradient methods[6]. These methods use linear function approximation techniques to estimate action value, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. The recent advances in deep neural networks helped to develop artificial agent named deep Q-network(DQN)[8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and the huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as ”overestimation phenomenon,” was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values. They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance.
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trails and averaged them. We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Classic Control Environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady state policy.
The sources of DQN variance are Approximation Gradient Error(AGE)[23] and Target Approximation Error(TAE)[24]. In Approximation Gradient Error, the error in gradient direction estimation of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes because of the unseen state transitions and the finite size of experience reply buffer. This type of variance leads to converging to sub-optimal policies and brutally hurts DQN performance. The second source of variance Target Approximation Error which is the error coming from the inexact minimization of DQN parameters. Many of the proposed extensions focus on minimizing the variance that comes from AGE by finding methods to optimize the learning trajectory or from TAE by using methods like averaging to exact DQN parameters. Dropout methods have the ability to assemble these two solutions which minimize different source of variance. Dropout methods can achieve a consistence learning trajectory and exact DQN parameters with averaging, which comes inherently with Dropout methods.
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optimal value function can be computed exactly.
B
As one of the first high impact CNN-based segmentation models, Long et al. (2015) proposed fully convolutional networks for pixel-wise labeling. They proposed up-sampling (deconvolving) the output activation maps from which the pixel-wise output can be calculated. The overall architecture of the network is visualized in Figure 3.
As one of the first high impact CNN-based segmentation models, Long et al. (2015) proposed fully convolutional networks for pixel-wise labeling. They proposed up-sampling (deconvolving) the output activation maps from which the pixel-wise output can be calculated. The overall architecture of the network is visualized in Figure 3.
Several modified versions (e.g. deeper/shallower, adding extra attention blocks) of encoder-decoder networks have been applied to semantic segmentation (Amirul Islam et al., 2017; Fu et al., 2019b; Lin et al., 2017a; Peng et al., 2017; Pohlen et al., 2017; Wojna et al., 2017; Zhang et al., 2018d). Recently in 2018, DeepLabV3+ (Chen et al., 2018b) has outperformed many state-of-the-art segmentation networks on PASCAL VOC 2012 (Everingham et al., 2015) and Cityscapes (Cordts et al., 2016) datasets. Zhao et al. (2017b) modified the feature fusing operation proposed by Long et al. (2015) using a spatial pyramid pooling module or encode-decoder structure (Figure 10) are used in deep neural networks for semantic segmentation tasks. The spatial pyramid networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information.
In order to preserve the contextual spatial information within an image as the filtered input data progresses deeper into the network, Long et al. (2015) proposed to fuse the output with shallower layers’ output. The fusion step is visualized in Figure 4.
Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image based framework to transform an input image with object of interest (presence domain) like a tumor to an image without the tumor (absence domain) i.e. translate diseased image to healthy; next, their model learns to add the removed tumor to the new healthy image. This results in capturing detailed structure from the object, which improves the segmentation of the object. Zhou et al. (2018) proposed a rewiring method for the long skip connections used in U-Net and tested their method on nodule segmentation in the low-dose CT scans of the chest, nuclei segmentation in the microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos.
C
Interestingly, the Dense architecture achieves the best performance on MUTAG, indicating that in this case, the connectivity of the graps does not carry useful information for the classification task. The performance of the Flat baseline indicates that in Enzymes and COLLAB pooling operations are not necessary to improve the classification accuracy.
Contrarily to graph classification, DiffPool and TopK𝐾Kitalic_K fail to solve this task and achieve an accuracy comparable to random guessing. On the contrary, the topological pooling methods obtain an accuracy close to a classical CNN, with NDP significantly outperforming the other two techniques.
When compared to other methods for graph pooling, NDP performs significantly better than other techniques that pre-compute the topology of the coarsened graphs, while it achieves a comparable performance with respect to state-of-the-art feature-based pooling methods.
In Fig. 7, we report the training time for the five different pooling methods. As expected, GNNs configured with GRACLUS, NMF, and NDP are much faster to train compared to those based on DiffPool and TopK𝐾Kitalic_K, with NDP being slightly faster than the other two topological methods.
Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
C
where wD∈ℝnTsuperscript𝑤𝐷superscriptℝsubscript𝑛𝑇w^{D}\in\mathbb{R}^{n_{T}}italic_w start_POSTSUPERSCRIPT italic_D end_POSTSUPERSCRIPT ∈ blackboard_R start_POSTSUPERSCRIPT italic_n start_POSTSUBSCRIPT italic_T end_POSTSUBSCRIPT end_POSTSUPERSCRIPT. This optimization finds a weighting of the number of decision trees so that the generated confidences cover the full range equally. For that, the number of samples per bin hijsubscriptsuperscriptℎ𝑗𝑖h^{j}_{i}italic_h start_POSTSUPERSCRIPT italic_j end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is summed up, weighted over all numbers of decision trees. After determining wDsuperscript𝑤𝐷w^{D}italic_w start_POSTSUPERSCRIPT italic_D end_POSTSUPERSCRIPT, the number of decision trees can be sampled depending on wjDsubscriptsuperscript𝑤𝐷𝑗w^{D}_{j}italic_w start_POSTSUPERSCRIPT italic_D end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT.
The proposed method generates data from a random forest and trains a neural network that imitates the random forest. The goal is that the neural network approximates the same function as the random forest. This also implies that the network reaches the same accuracy if successful.
Our proposed approach, called Neural Random Forest Imitation (NRFI), implicitly transforms random forests into neural networks. The main concept includes (1) generating training data from decision trees and random forests, (2) adding strategies for reducing conflicts and increasing the variety of the generated examples, and (3) training a neural network that imitates the random forest by learning the decision boundaries.
Finally, a neural network that imitates the random forest is trained. The network learns the decision boundaries from the generated data and approximates the same function as the random forest. The network architecture is based on a fully connected network with one or multiple hidden layers.
NRFI with and without the original data is shown for different network architectures. The smallest architecture has 2222 neurons in both hidden layers and the largest 128128128128. For NRFI (gen-ori), we can see that a network with 16161616 neurons in both hidden layers (NN-16-16) is already sufficient to learn the decision boundaries of the random forest and achieve the same accuracy. When fewer training samples are available, NN-8-8 already has the required capacity. In the following, we will further analyze the accuracy and number of network parameters.
C
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy optimization remain rather limited from both computational and statistical perspectives. More specifically, from the computational perspective, it remains unclear until recently whether policy optimization converges to the globally optimal policy in a finite number of iterations, even given infinite data. Meanwhile, from the statistical perspective, it still remains unclear how to attain the globally optimal policy with a finite regret or sample complexity.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a d2⁢H3⁢Tsuperscript𝑑2superscript𝐻3𝑇\sqrt{d^{2}H^{3}T}square-root start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_H start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT italic_T end_ARG-regret up to logarithmic factors.
for any function f:𝒮→ℝ:𝑓→𝒮ℝf:{\mathcal{S}}\rightarrow\mathbb{R}italic_f : caligraphic_S → blackboard_R. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), where the reward function is fixed across all the episodes.
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same T𝑇\sqrt{T}square-root start_ARG italic_T end_ARG-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
A
In experiments, we demonstrated on two benchmark data sets the difficulty of finding a good trade-off among prediction quality, representational efficiency and computational efficiency. Considering three embedded hardware platforms, we showed that massive parallelism is required for inference efficiency and that quantization as well as structured pruning map well onto these accelerators.
We furthermore point out that hardware properties and the corresponding computational efficiency form a large fraction of resource efficiency. This highlights the need to consider particular hardware targets when searching for resource-efficient machine learning models.
In experiments, we demonstrated on two benchmark data sets the difficulty of finding a good trade-off among prediction quality, representational efficiency and computational efficiency. Considering three embedded hardware platforms, we showed that massive parallelism is required for inference efficiency and that quantization as well as structured pruning map well onto these accelerators.
The computational cost of performing inference should match the (usually limited) resources in deployed systems and exploit the available hardware optimally in terms of time and energy. Computational efficiency, in particular, also includes mapping the representational efficiency to available hardware structures.
In this regard, resource-efficient neural networks for embedded systems are concerned with the trade-off between prediction quality and resource efficiency (i.e., representational efficiency and computational efficiency). This is highlighted in Figure 1. Note that this requires observing overall constraints such as prediction quality as well as inference latency and/or throughput, chip area and power consumption.
A
In Section 7, we prove a number of results concerning the homotopy types of Vietoris-Rips filtrations of spheres and complex projective spaces. Also, we fully compute the homotopy types of Vietoris-Rips filtration of spheres with ℓ∞superscriptℓ\ell^{\infty}roman_ℓ start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT-norm.
In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence barcode by the hyperbolicity of the underlying space.
The simplicial complex nowadays referred to as the Vietoris-Rips complex was originally introduced by Leopold Vietoris in the early 1900s in order to build a homology theory for metric spaces [79]. Later, Eliyahu Rips and Mikhail Gromov [47] both utilized the Vietoris-Rips complex in their study of hyperbolic groups.
Of central interest in topological data analysis has been the question of providing a complete characterization of the Vietoris-Rips persistence barcodes of spheres of different dimensions. Despite the existence of a complete answer to the question for the case of 𝕊1superscript𝕊1\mathbb{S}^{1}blackboard_S start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT [4] due to Adams and Adamaszek, relatively little is known for higher dimensional spheres. In [5] the authors consider a variant of the Vietoris-Rips filtration, which they call Vietoris-Rips metric thickening. The authors are able to obtain information about the succesive homotopy types of this filtration on spheres of different dimension (see Section 5 of [5]) for a certain range of values of the scale parameter.
In Section 7, we prove a number of results concerning the homotopy types of Vietoris-Rips filtrations of spheres and complex projective spaces. Also, we fully compute the homotopy types of Vietoris-Rips filtration of spheres with ℓ∞superscriptℓ\ell^{\infty}roman_ℓ start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT-norm.
A
One way to obtain an indication of a projection’s quality is to compute a single scalar value, equivalent to a final score. Examples are Normalized Stress [7], Trustworthiness and Continuity [24], and Distance Consistency (DSC) [25]. More recently, ClustMe [26] was proposed as a perception-based measure that ranks scatterplots based on cluster-related patterns. While this might be useful for quick overviews or automatic selection of projections, a single score fails to capture more intricate details, such as where and why a projection is good or bad [27]. In contrast, local measures such as the projection precision score (pps) [18] describe the quality for each individual point of the projection, which can then be visualized as an extra layer on top of the scatterplot itself. These measures usually focus on the preservation of neighborhoods [28, 29, 30] or distances [27, 31, 32].
We present a Neighborhood Preservation plot (Figure 1(g)) that shows an overview of the preservation of neighborhoods of different sizes (k𝑘kitalic_k) in both the entire projection and the current selection, based on the Jaccard distance between sets:
we present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light some of the hidden internal workings of the algorithm which, when visualized, may provide important insights about the high-dimensional data set under analysis. Our proposed solution is composed of a set of coordinated views that work together in order to fulfill four main goals: (G1) facilitate the choice of hyper-parameters through visual exploration and the use of quality metrics; (G2) provide a quick overview of the accuracy of the projection, to support the decision of either moving forward with the analysis or repeating the process of hyper-parameter exploration; (G3) provide the means to investigate quality further, differentiating between the trustworthiness of different regions of the projection; and (G4) allow the interpretation of different visible patterns of the projection in terms of the original data set’s dimensions.
The difference line plot (d), on the other hand, builds on the standard plot by highlighting the differences between the selection and the global average, shown as positive and negative values around the 0 value of the y-axis. It provides a clearer overall picture of the difference in preservation among all the shown scales, but compromises the precision and simplicity of interpretation of the y-axis (where the exact percentage of Neighborhood Preservation was previously shown). The difference bar chart (b) is a combination of the designs (a) and (d). Similar to (d), the interpretation of the y-values might be misleading.
As an example, the set difference from Martins et al. [33] uses the Jaccard set-distance between the two sets of neighbors of a point in low- and high-dimensional space in order to compute a measure of Neighborhood Preservation. We have chosen to adopt it in our work, in contrast to others, because of its intuitive interpretation, simple computation, and straightforward adaptation for displaying the preservation of neighborhoods of different scales.
D
Similarity in metaheuristics: A gentle step towards a comparison methodology - 2022 [27]: This paper uses a pool template as a framework for decomposing and analyzing metaheuristics, inspired by another previous work. This template works as a framework for decomposing and analyzing metaheuristics based on these concepts explained in such work: generation method, pool of solutions, archive of solutions, selected pool of solutions, updating mechanism, updated pool, and the archiving and output functions. The authors provide some measures and methodologies to identify their similarities and novelties based on the updating mechanism component, similar to our second taxonomy. They review 15 metaheuristics and their insights confirm that many metaheuristics are special cases of others.
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable neighborhood search), and population-based heuristics (memetic algorithms, biased random-key genetic algorithms, scatter search, and path relinking). Each category presents its core characteristics and the description of the mentioned algorithms. This review presents metaheuristic frameworks to guide the design of heuristic optimization algorithms during the last 50 years. It discusses the role of the journal in which it is published in introducing solid heuristic papers. This work also recalls the maturity of the field, which leads to solving very complex problems, with a growing number of researchers applying them, as shown in the numerous conferences and related events. Also, they criticize the fragmentation as each group of research usually applies the same methods regardless of the type of problem being solved, the lack of theoretical foundations, the limited analytical understanding of novel proposals, the problem-specific tuning of metaheuristics, the lack of standardized benchmarking protocols and the absence of general guidelines. Several research directions are also annotated for researchers to be applied in the future.
Good practices for designing metaheuristics: It gathers several works that are guidelines for good practices related to research orientation to measure novelty [26], to measure similarity in metaheuristics [27], Metaheuristics “In the Large” (to support the development, analysis, and comparison of new approaches) [28], to design manual or automatic new metaheuristics [29], to guide the learning strategy in design and improvement of metaheuristics [30], to use statistical test in metaheuristics [31], and to detect the novelties in metaphor-based algorithms [32].
Metaheuristics “In the Large” - 2022 [28]: The objective of this work is to provide a useful tool for researchers. To address the lack of novelty, the authors propose a new infrastructure to support the development, analysis, and comparison of new approaches. This framework is based on (1) the use of algorithm templates for reuse without modification, (2) white box problem descriptions that provide generic support for the injection of domain-specific knowledge, and (3) remotely accessible frameworks, components, and problems. This can be considered as a step towards the improvement of the reproducibility of results.
The constant evolution of the field leads to a significant issue: the lack of novelty in metaheuristics. However, researchers recognize the need to address this problem and have proposed methods to evaluate the novelty of new algorithms. This section shows different studies and guidelines to measure novelty, to design new metaheuristics, and to perform statistical tests between metaheuristics. We list these approaches as follows:
C
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity, k𝑘kitalic_k. With the new graph, we re-train the GAE. These steps are repeated until the convergence.
As well as the well-known k𝑘kitalic_k-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information so that they are available for the non-Euclidean type data, which is not provided by k𝑘kitalic_k-means. Therefore, they are widely used in practice. Due to the success of deep learning, how to combine neural networks and traditional clustering models has been studied a lot [7, 8, 9]. In particular, CNN-based clustering models have been extensively investigated [10, 11, 12]. However, the convolution operation may be unavailable on other kinds of datasets, e.g., text, social network, signal, data mining, etc.
(1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for decoders. (2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it.
In recent years, GCNs have been studied a lot to extend neural networks to graph type data. How to design a graph convolution operator is a key issue and has attracted a mass of attention. Most of them can be classified into 2 categories, spectral methods [24] and spatial methods[25].
However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend graph auto-encoder into common scenarios. The main contributions are listed as follows:
D
We also want to understand the types of networks that we could test via domains-wide scans. To derive the business types we use the PeeringDB. We classify the ASes according to the following business types: content, enterprise, Network Service Provider (NSP), Cable/DSL/ISP, non-profit, educational/research, route server at Internet Exchange Point (IXP)111A route server directs traffic among Border Gateway Protocol (BGP) routers. We plot the networks that do not enforce ingress filtering according to business types in Figure 12. According to our study enterprise and non-profit networks enforce ingress filtering more than other networks. In contrast, NSPs contain the most networks that do not enforce ingress filtering.
There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger the network the more services it hosts. This means that we have more possibilities to test if spoofing is possible: for instance, we can identify a higher fraction of servers with a globally incremental IPID counters, which are not “load balanced”. In Figure 14 we plot the statistics of the tested networks according to their size and type. The results show a correlation between the size of the network and its type. For instance, most NSP networks are large, with CIDR/6. This is aligned with our finding that among NSP networks there was the highest number of spoofable networks.
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, we call the Spoofing Mapper (SMap). We apply SMap for scanning ingress-filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that more than 80% of the tested ASes do not enforce ingress filtering (i.e., 72.4% of all the ASes in the routing system), in contrast to 2.4% identified by the latest measurement of the Spoofer Project (Luckie et al., 2019). The reason for this significant difference is the limitation of the previous studies of ingress filtering to a small set of networks.
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of the ASes in the Internet, see Figure 1. Furthermore, there is a correlation between fraction of scanned domains and ASes. Essentially the more domains are scanned, the more ASes are covered, and more spoofable ASes are discovered; see Figure 7. This result is of independent interest as it implies that one can avoid scanning the IPv4 and instead opt for domains-scan, obtaining a good enough approximation. This not only reduces the volume of traffic needed to carry out studies but also makes the study much more efficient.
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping and using requests/responses to Name servers and we apply the suitable test depending on the server that we identify on the tested network. If the responses contain globally incremental IPID values - we use the service for ingress filtering measurement with IPID technique. We located globally incremental IPID in 63.27%percent63.2763.27\%63.27 % of the measured networks. There are certainly more hosts on networks that support globally incremental IPID values, yet our goal was to validate our measurement techniques while keeping the measurement traffic low - hence we avoided scanning the networks for additional hosts and only checked for Web, Email or Name servers with globally incremental IPID counters via queries to the tested domain.
A
Machine learning applications frequently deal with data-generating processes that change over time. Applications in such nonstationary environments include power use forecasting, recommendation systems, and environmental sensors [9]. Semisupervised learning, which has received a lot of attention in the sensor community, is characterised by the combined use of easily attainable unlabeled data in addition to the initial labeled dataset [10, 11, 12]. Extreme learning machines are also frequently deployed in these settings to efficiently reconfigure neural networks based on the new data [13, 14, 15]. Within the standard backpropagation framework, ensembles have been used successfully in this setting; therefore that is what we compare with in this paper [7].
Biology frequently deals with drift [16]. For instance olfactory systems are constantly adapting, predominantly through feedback mechanisms. This section details some such models from computer science and neuroscience [17]. One example is the KIII model, a dynamic network resembling the olfactory bulb and feedforward and feedback connections to and from the higher-level anterior olfactory nucleus and piriform cortex [18]. Applied to an odor recognition task, KIII performed better than an artificial neural network under sensor drift and variable concentrations, a similar setting to the one in this paper.
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal to deploy an artificial nose in a dynamic environment without recalibration.
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this paper introduced an approach based on continual adaptation. A recurrent neural network uses a sequence of previously seen gas recordings to form a representation of the current state of the sensors. It then modulates the skill of odor recognition with this context, allowing the system to adapt to sensor drift. Context models can thus play a useful role in lifelong adaptation to changing environments in artificial systems.
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regions than from the nose [20]. In computational modeling, this principle has been taken into account by the piriform cortical region that recognizes familiar background odors through associative memory [21]. It projects this information to the olfactory bulb to improve odor recognition when there are background odors. Following this same principle, the neural network classifier in this paper integrates context that is outside the immediate input signal.
A
The goal would be to obtain an algorithm with running time 2O⁢(f⁢(δ)⁢n)superscript2𝑂𝑓𝛿𝑛2^{O(f(\delta)\sqrt{n})}2 start_POSTSUPERSCRIPT italic_O ( italic_f ( italic_δ ) square-root start_ARG italic_n end_ARG ) end_POSTSUPERSCRIPT, where f⁢(n)=O⁢(n1/6)𝑓𝑛𝑂superscript𝑛16f(n)=O(n^{1/6})italic_f ( italic_n ) = italic_O ( italic_n start_POSTSUPERSCRIPT 1 / 6 end_POSTSUPERSCRIPT ). Such a running time becomes 2O⁢(n)superscript2𝑂𝑛2^{O(\sqrt{n})}2 start_POSTSUPERSCRIPT italic_O ( square-root start_ARG italic_n end_ARG ) end_POSTSUPERSCRIPT for constant δ𝛿\deltaitalic_δ (which is optimal for TSP in ℝ2superscriptℝ2\mathbb{R}^{2}blackboard_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, under ETH), and it becomes 2O⁢(n2/3)superscript2𝑂superscript𝑛232^{O(n^{2/3})}2 start_POSTSUPERSCRIPT italic_O ( italic_n start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT ) end_POSTSUPERSCRIPT for δ=n𝛿𝑛\delta=nitalic_δ = italic_n (which is optimal for TSP in ℝ3superscriptℝ3\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT, assuming ETH).
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the x𝑥xitalic_x-coordinates of the points need not be integer, as long as the difference between x𝑥xitalic_x-coordinates of any two consecutive points is at least 1.
First of all, the ΔisubscriptΔ𝑖\Delta_{i}roman_Δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are now independent. Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this way.
We believe that our algorithm can serve as the basis of an algorithm solving such a problem, under the assumption that the point sets are dense enough to ensure that the solution will generally follow these curves / segments. Making this precise, and investigating how the running time depends on the number of line segments, would be interesting.
In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings. In the third step, we will explain which changes are made to the algorithm.
C
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the element on the (full) subtree rooted at the node is the same as that of a (possibly different) element on the entire tree (i. e. at the root). The idea for the name here is that the action on a full subtree is similar to the action of the group or semigroup on the entire tree. An important special case of such a self-similar presentation occurs when there is a finite set of generators such that the action of any generator on the subtree below any node is the same as the action of some (potentially different) generator at the root. By identifying the nodes of the infinite regular tree with the strings over an appropriate finite alphabet, we can describe such an action using a finite automaton (more precisely, a finite-state letter-to-letter – or synchronous – transducer), which leads to the class of automaton semigroups and automaton groups (also often called ‘automata groups’). If we relax the finite-state requirement and also consider infinite automata, we can even describe any self-similar action in this way. This is the approach we will take in this paper.
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, loose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]).
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing the self-similarity property and that the analogous statement for automaton semigroups holds as well. The version for automaton semigroups does not follow directly from 8, as the free monogenic semigroup is not a complete automaton semigroup [4, Proposition 4.3] or even a (partial) automaton semigroup (see [8, Theorem 18] or [20, Theorem 1.2.1.4]).
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much simpler. While it is known that the free semigroup of rank one is not an automaton semigroup [4, Proposition 4.3], the free semigroups of higher rank can be generated by an automaton [4, Proposition 4.1]. In fact, the construction to generate these semigroups is quite simple [4, Proposition 4.1] (compare also to 3). The same construction can also be used to generate free monoids as automaton semigroups or monoids. Here, the main difference is that the free monoid in one generator can indeed be generated by an automaton: it is generated by the adding machine (see 1), which also generates the free group of rank one if inverses are added. On a side note, it is also worthwhile to point out that – although there does not seem to be much research on the topic – there are examples to generate the free inverse semigroup of rank one as a subsemigroup of an automaton semigroup [14, Theorem 25] and an adaption to present the free inverse monoid of rank one as an automaton semigroup [6, Example 2] (see also [8, Example 23]).
from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups111Note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for S𝑆Sitalic_S and for T𝑇Titalic_T are finite. Therefore, these constructions also work for self-similar semigroups, although this is not explicitly stated there. (Theorem 6) but observe that the constructed generating automaton for S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is finite (and/or complete) if this was the case for the original two automata generating S𝑆Sitalic_S and T𝑇Titalic_T. The existence of a homomorphism from S𝑆Sitalic_S to T𝑇Titalic_T (or vice-versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or being an automaton semigroup; Theorem 14). This is noteworthy because – as mentioned above – the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata, see [8, Theorem 19] and [20, Theorem 1.2.1.4]).
C
SCR divides the region proposals into influential and non-influential regions and penalizes the model if: 1) 𝒮⁢(ag⁢t)𝒮subscript𝑎𝑔𝑡\mathcal{S}(a_{gt})caligraphic_S ( italic_a start_POSTSUBSCRIPT italic_g italic_t end_POSTSUBSCRIPT ) of a non-influential region is higher than an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers.
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating the performance on VQA-CPv2’s train split (Sec. 4.5) and the behavior on a dataset without changing priors (Sec. 2). We present a new metric to assess visual grounding in Sec. 4.7 and describe our regularization method in Sec. 5.
We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to 1−100%1percent1001-100\%1 - 100 % of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance occurring when our loss is applied to 1%percent11\%1 % of the training instances. These results support our claims that it is possible to improve performance without actually performing visual grounding.
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the performance on VQAv2 drops continuously during the course of the training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.
B
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020), and it surpasses the aggregate of unique websites represented in all other publicly available web privacy policy corpora combined. We describe the corpus creation pipeline, with stages including a web crawler, language detection, document classification, duplicate and near-duplication removal, and content extraction. We then analyse the lengths and top level distribution of the privacy policies in the corpus and use topic modelling to explore the component topics. Subsequently, we pretrain PrivBERT, a transformer-based language model, using the corpus and evaluate it on data practice classification and question answering tasks. We release the corpus, a search engine for the corpus (Srinath et al., 2021), the document collection pipeline, and a language model to support further research in the privacy domain.111All artifacts are available at https://privaseer.ist.psu.edu/.
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used dataset of annotated privacy policies in the research community. The OPP-115 Corpus contains paragraph-sized segments annotated according to one or more of the twelve coarse-grained categories of data practices. We fine-tuned PrivBERT on the OPP-115 Corpus to predict the coarse-grained categories of data practices. We divided the corpus in the ratio 3:1:1 for training, validation and testing respectively. Since each segment in the corpus could belong to more than one category and there are twelve categories in total, we treated the problem as a multi-class, multi-label classification problem. After manually tuning hyperparameters, we trained the model with a dropout of 0.15 and a learning rate of 2.5e-5.
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categories. The corpus was used to train models to extract opt-out choices from privacy policies (Sathyendra et al., 2016), to automatically identify policies on websites and find compliance issues (Story et al., 2019), and to classify privacy practices and answer privacy related non-factoid questions (Harkous et al., 2018).
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague words and sentences in privacy policies and studied automatic vagueness detection. Sathyendra et al. (2017) presented a dataset and developed a model to automatically identify and label opt-out choices offered in privacy policies. Similarly, Zimmeck et al. (2019) released a set of over 400k URLs to Android app privacy policy pages collected by crawling the Google Play store. Amos et al. (2020) collected privacy policies from around 130,000 websites from over two decades and analysed the evolution of the online privacy landscape. Finally, Nokhbeh Zaeem and Barber (2021) collected a corpus of around 100k privacy policies using the domains from DMOZ, a website which maintained categories of websites on the internet.
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert annotated corpora of a few hundred or a few thousand privacy policies Wilson et al. (2016); Zimmeck et al. (2019); Ramanath et al. (2014), but issues of accuracy, scalability and generalization remain. More importantly, annotations in the privacy policy domain are expensive. Privacy policies are difficult to understand and many tasks such as privacy practice classification (Wilson et al., 2016), privacy question answering (Ravichander et al., 2019), vague sentence detection (Lebanoff and Liu, 2018), and detection of compliance issues (Zimmeck et al., 2019) require skilled legal experts to annotate the dataset. In contrast, approaches involving large amounts of unlabeled privacy policies remain relatively unexplored.
B
Figure 1: Knowledge generation model for ensemble learning with VA derived from the model by Sacha et al. [44]. On the left, it illustrates how a VA system can enable the exploration of the data and the models with the use of visualization. On the right, a number of design goals assist the human in the exploration, verification, and knowledge generation for ensemble learning.
In a bucket of models, the best model for a specific problem is automatically chosen from a set of available options. This strategy is conceptually different to the ideas of bagging, boosting, and stacking, but still related to ensemble learning. Chen et al. [6] utilize a bucket of latent Dirichlet allocation (LDA) models for combining topics based on criteria such as distinctiveness and coverage of the set of actions performed.
The rest of this paper is organized as follows. In the next section, we discuss the literature related to visualization of ensemble learning. Afterwards, we describe the knowledge generation model for ensemble learning with VA, design goals, and analytical tasks for attaching VA to ensemble learning.
Visualization systems have been developed for the exploration of diverse aspects of bagging, boosting, and further strategies such as “bucket of models”. Stacking, however, has so far not received comparable attention by the InfoVis/VA communities: actually, we have not found any literature describing the construction and improvement of stacking ensemble learning with the use of VA.
Figure 1: Knowledge generation model for ensemble learning with VA derived from the model by Sacha et al. [44]. On the left, it illustrates how a VA system can enable the exploration of the data and the models with the use of visualization. On the right, a number of design goals assist the human in the exploration, verification, and knowledge generation for ensemble learning.
C
We thus have 3333 cases, depending on the value of the tuple (p⁢(v,[010]),p⁢(v,[323]),p⁢(v,[313]),p⁢(v,[003]))𝑝𝑣delimited-[]010𝑝𝑣delimited-[]323𝑝𝑣delimited-[]313𝑝𝑣delimited-[]003(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))( italic_p ( italic_v , [ 010 ] ) , italic_p ( italic_v , [ 323 ] ) , italic_p ( italic_v , [ 313 ] ) , italic_p ( italic_v , [ 003 ] ) ):
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
{0¯,1¯,2¯,3¯,[013],[010],[323],[313],[112],[003],[113]}.¯0¯1¯2¯3delimited-[]013delimited-[]010delimited-[]323delimited-[]313delimited-[]112delimited-[]003delimited-[]113\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],% [112],[003],[113]\}.{ over¯ start_ARG 0 end_ARG , over¯ start_ARG 1 end_ARG , over¯ start_ARG 2 end_ARG , over¯ start_ARG 3 end_ARG , [ 013 ] , [ 010 ] , [ 323 ] , [ 313 ] , [ 112 ] , [ 003 ] , [ 113 ] } .
p⁢(v,[013])=p⁢(v,[313])=p⁢(v,[113])=1𝑝𝑣delimited-[]013𝑝𝑣delimited-[]313𝑝𝑣delimited-[]1131p(v,[013])=p(v,[313])=p(v,[113])=1italic_p ( italic_v , [ 013 ] ) = italic_p ( italic_v , [ 313 ] ) = italic_p ( italic_v , [ 113 ] ) = 1. Similarly, when f=[112]𝑓delimited-[]112f=[112]italic_f = [ 112 ],
B
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors.
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization learned by MAML can be seen as a general language model of training tasks, when the training and testing tasks have different data distributions, how can the general language model training affect the model’s task-specific adaptation ability?
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the meta-testing set before fine-tuning, using the quality performance (accuracy for classification and BLEU for generation) to
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparative poorly in task-specific adaptation. Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the language model becomes too “general”, it will lose the ability of adapting to specific tasks. It is noteworthy that the ”too general” problem is not the same as over-fitting, since the ”too general” model performs well before fine-tuning, which means it does not over-fit to the training data.
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors.
A
In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not involved. In particular, each UAV is equipped with a cylindrical conformal array (CCA), and a novel-codebook-based mmWave beam tracking scheme is proposed for such a highly dynamic UAV network. More specifically, the codebook consists of the codewords corresponding to various subarray patterns and beam patterns. Based on the joint UAV position-attitude prediction, an efficient codeword selection scheme is further developed with tracking error (TE) awareness, which achieves fast subarray activation/partition and array weighting vector selection. It is verified that our proposed scheme achieves a higher spectrum efficiency, lower outage probability and stronger robustness for inter-UAV mmWave communications. In summary, the key contributions of this paper are listed as follows.
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV data transmission for mission-driven UAV networking. To the best of our knowledge, this is the first work on the beam tracking framework for CA-enabled UAV mmWave networks.
The specialized codebook design of the DRE-covered CCA for multi-UAV mobile mmWave communications. Under the guidance of the proposed framework, a novel hierarchical codebook is designed to encompass both the subarray patterns and beam patterns. The newly proposed CA codebook can fully exploit the potentials of the DRE-covered CCA to offer full spatial coverage. Moreover, the corresponding codeword selection scheme is also carefully designed to facilitate fast multi-UAV beam tracking/communication in the considered CA-enabled UAV mmWave network.
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduced to UAV communications. A CA is usually in a shape of cylindrical or spherical conforming to a predefined surface, e.g., a part of an airplane or UAV, and can reap full spatial coverage with proper array designs. Compared with surface-mounted multiple UPAs, a CA, conforming to the surface of a UAV, can compact the UAV design, reduce the extra drag and fuel consumption, and also facilitate an array of a larger size [16]. Furthermore, directional radiating elements (DREs) are commonly integrated with antenna array to enhance the beamforming ability [16, 17, 18]. In such a case, the coverage capability of CA is far stronger than that of UPA and ULA via proper array designs, due to the exploitation of size and shape. Specifically, a CA can enable the potential to enlarge (roll up) the surface of antenna array. This advantage not only achieves a larger array gain to combat path-loss but also sustains full-spatial transmitting/receiving to facilitate fast beam tracking for mobile UAV mmWave networks [19]. Note that in mission-driven UAV networks, agile and robust beam tracking is very challenging yet critical for inter-UAV mmWave communications [10], because UAV position and attitude may vary very fast. By carefully exploiting the CA’s full spatial transmission/reception property, the stringent constraints on beam tracking for highly dynamic moving UAVs can be relieved considerably. So far, however, the CA-enabled UAV mmWave network is almost untouched in the literature. Regarding the mmWave CA, there are only a few recent works on the radiation patterns and beam scanning characteristics [20] and the performance evaluation of CA-based beamforming for static mmWave cellular networks [21]. These works validate the potential advantage of CA in the static mmWave networks, which are not applicable to mobile UAV mmWave networks.
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam tracking and channel estimation methods. For example, considering the ULA with omnidirectional radiating elements (REs), the hierarchical-codebook-based subarray and antenna deactivating strategies are proposed to achieve efficient beam training for single-user scenarios [12, 24]. The multiuser downlink beam training algorithms regarding the ULA are proposed with the multi-resolution codebook designs for partially-connected [25] and fully-connected [15] hybrid structures, respectively. However, extending the aforementioned works to the CA is not straightforward. The reasons are as follows: When the commonly-adopted DRE is integrated with CA, the limited radiation range of DREs is no longer the same and each is affected by the DRE’s location on CA, as the DRE-covered array plane is rolled up. The determined radiation direction of CA is only within a part of DREs’ radiation range. This observation indicates that only a part of the DREs or some specific subarrays need to be activated with reference to the AOA or angle of departure (AOD) of transceivers. Therefore, the dynamic subarray localization and activation are very coupled and critical for the efficient utilization of the DRE-covered CA. Note that conventional ULA/UPA-oriented codebook designs mainly focus on the beam direction/width controlling via the random-like subarray activation/deactivation without specific subarray localization. In contrast, the codebook design for DRE-covered CA should emphasize the location of the activated subarray to achieve the promise of full-spatial coverage of the CA in UAV networks. Nevertheless, such work is still missing now in the literature. These points mentioned above motivate us to study a new beam tracking framework with the well-tailored codebook for CA-enabled UAV mmWave networks.
A
Thus, a¯|b¯conditional¯𝑎¯𝑏\bar{a}|\bar{b}over¯ start_ARG italic_a end_ARG | over¯ start_ARG italic_b end_ARG-regular digraphs with size M¯¯𝑀\bar{M}over¯ start_ARG italic_M end_ARG can be characterized as a¯|b¯conditional¯𝑎¯𝑏\bar{a}|\bar{b}over¯ start_ARG italic_a end_ARG | over¯ start_ARG italic_b end_ARG-biregular graphs with size M¯|M¯conditional¯𝑀¯𝑀\bar{M}|\bar{M}over¯ start_ARG italic_M end_ARG | over¯ start_ARG italic_M end_ARG
We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will also be used as the base cases in inductive constructions for the case with arbitrary colors.
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
This will be bootstrapped to the multi-color case in later sections. Note that the 1111-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the right – regardless of the matrix. We
To conclude this section, we stress that although the 1111-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
D
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). In particular, we aim to characterize how an overparameterized two-layer neural network and its induced feature representation evolve in TD and Q-learning, especially their rate of convergence and global optimality. A fundamental obstacle, however, is that such an evolving feature representation possibly leads to the divergence of TD and Q-learning. For example, TD converges when the value function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997).
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature representation is able to deviate from the initial one and subsequently evolve into the globally optimal one, which corresponds to the global minimizer of the MSPBE. We further extend our analysis to soft Q-learning, which is connected to policy gradient.
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal.
corresponding to θ(m)⁢(k)=(θ1⁢(k),…,θm⁢(k))∈ℝD×msuperscript𝜃𝑚𝑘subscript𝜃1𝑘…subscript𝜃𝑚𝑘superscriptℝ𝐷𝑚\theta^{(m)}(k)=(\theta_{1}(k),\ldots,\theta_{m}(k))\in\mathbb{R}^{D\times m}italic_θ start_POSTSUPERSCRIPT ( italic_m ) end_POSTSUPERSCRIPT ( italic_k ) = ( italic_θ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( italic_k ) , … , italic_θ start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_k ) ) ∈ blackboard_R start_POSTSUPERSCRIPT italic_D × italic_m end_POSTSUPERSCRIPT. Such a feature representation is used to analyze the TD dynamics θ(m)⁢(k)superscript𝜃𝑚𝑘\theta^{(m)}(k)italic_θ start_POSTSUPERSCRIPT ( italic_m ) end_POSTSUPERSCRIPT ( italic_k ) in (3.3) in the NTK regime (Cai et al., 2019), which corresponds to setting α=m𝛼𝑚\alpha=\sqrt{m}italic_α = square-root start_ARG italic_m end_ARG in (3.1). Meanwhile, the nonlinear gradient TD dynamics (Bhatnagar et al., 2009) explicitly uses such a feature representation at each iteration to locally linearize the Q-function. Moreover, up to a rescaling, such a feature representation corresponds to the kernel
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear whether the attained solution is globally optimal. On the other hand, when the value function approximator in TD is an overparameterized multi-layer neural network, which is required to be properly scaled, such a feature representation stabilizes at the initial one (Cai et al., 2019), making the explicit local linearization in nonlinear gradient TD unnecessary. Moreover, the implicit local linearization enabled by overparameterization allows TD (and Q-learning) to converge to the globally optimal solution. However, such a required scaling, also known as the neural tangent kernel (NTK) regime (Jacot et al., 2018), effectively constrains the evolution of the induced feature presentation to an infinitesimal neighborhood of the initial one, which is not data-dependent.
D
Table 4 shows that, even though this is counter-intuitive, element-wise addition (with fewer parameters) empirically results in slightly higher BLEU than the concatenation operation. Furthermore, even though using 2 depth-wise LSTM sub-layers connecting cross- and masked self-attention sub-layers leads to the highest BLEU score, showing the advantage of fully replacing residual connections with depth-wise LSTMs, it also introduces more parameters and increases the decoder depth in terms of sub-layers. For fair comparison, we use the simpler element-wise addition operation in our experiments by default.
Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the newly introduced LSTM unit, which only introduces one LSTM unit per layer, and the parameters of the LSTM can be shared across layers.
Table 5 shows that: 1) Sharing parameters for the computation (Equation 6) of the depth-wise LSTM hidden state significantly hampers performance, which is consistent with our conjecture. 2) Sharing parameters for the computation of gates (Equations 2, 3, 4) leads to slightly higher BLEU with fewer parameters introduced than without sharing them (“None” in Table 5). Thus, in the other experiments, we bind parameters for the computation of LSTM gates across stacked layers by default.
In our approach (“with depth-wise LSTM”), we used the 2-layer neural network for the computation of the LSTM hidden state (Equation 6) and shared LSTM parameters across stacked encoder layers and different shared parameters across decoder layers for computing the LSTM gates (Equations 2, 3, 4). Details are provided in our ablation study.
As the number of Transformer layers is pre-specified, the parameters of the depth-wise LSTM can either be shared across layers or be independent. Table 3 documents the importance of the capacity of the module for the hidden state computation, and sharing the module is likely to hurt its capacity. We additionally study to share only parameters for gate computation (Equations 2, 3, 4) and to share all parameters (i.e. parameters for both the computation of gates and of the hidden state). Results are shown in Table 5.
D
the corresponding Alexandroff topologies: X≜⟨X,τ→,𝖥𝖮⁢[σ]⟩≜𝑋𝑋subscriptτ→𝖥𝖮delimited-[]σX\triangleq\left\langle X,\uptau_{\to},\mathsf{FO}[\upsigma]\right\rangleitalic_X ≜ ⟨ italic_X , roman_τ start_POSTSUBSCRIPT → end_POSTSUBSCRIPT , sansserif_FO [ roman_σ ] ⟩ and for n∈ℕ𝑛ℕn\in\mathbb{N}italic_n ∈ blackboard_N, let Xn≜⟨X,τ→n,𝖥𝖮⁢[σ]⟩≜subscript𝑋𝑛𝑋subscriptτsubscript→𝑛𝖥𝖮delimited-[]σX_{n}\triangleq\left\langle X,\uptau_{\to_{n}},\mathsf{FO}[\upsigma]\right\rangleitalic_X start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ≜ ⟨ italic_X , roman_τ start_POSTSUBSCRIPT → start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT end_POSTSUBSCRIPT , sansserif_FO [ roman_σ ] ⟩.
For A∈Fin⁡(σ)𝐴FinσA\in\operatorname{Fin}(\upsigma)italic_A ∈ roman_Fin ( roman_σ ) and n≥1𝑛1n\geq 1italic_n ≥ 1, there exists a structure Coren⁡(A)superscriptCore𝑛𝐴\operatorname{Core}^{n}(A)roman_Core start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_A ) of tree-depth at most n𝑛nitalic_n such that
A→nCoren⁡(A)subscript→𝑛𝐴superscriptCore𝑛𝐴A\to_{n}\operatorname{Core}^{n}(A)italic_A → start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT roman_Core start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_A ), Coren⁡(A)→nAsubscript→𝑛superscriptCore𝑛𝐴𝐴\operatorname{Core}^{n}(A)\to_{n}Aroman_Core start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_A ) → start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT italic_A, and furthermore A→nBsubscript→𝑛𝐴𝐵A\to_{n}Bitalic_A → start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT italic_B if and only if Coren⁡(A)→B→superscriptCore𝑛𝐴𝐵\operatorname{Core}^{n}(A)\to Broman_Core start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_A ) → italic_B [33, definitions 3.6 and 3.10 and Lemma 3.11]. Notice that for
For all A∈Fin⁡(σ)𝐴FinσA\in\operatorname{Fin}(\upsigma)italic_A ∈ roman_Fin ( roman_σ ), let ψA𝖤𝖥𝖮superscriptsubscript𝜓𝐴𝖤𝖥𝖮\psi_{A}^{\mathsf{EFO}}italic_ψ start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT start_POSTSUPERSCRIPT sansserif_EFO end_POSTSUPERSCRIPT be the diagram sentence such that ⟦ψA𝖤𝖥𝖮⟧Struct⁡(σ)\llbracket\psi^{\mathsf{EFO}}_{A}\rrbracket_{\operatorname{Struct}(\upsigma)}⟦ italic_ψ start_POSTSUPERSCRIPT sansserif_EFO end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ⟧ start_POSTSUBSCRIPT roman_Struct ( roman_σ ) end_POSTSUBSCRIPT
all n≥1𝑛1n\geq 1italic_n ≥ 1, if A∈X𝐴𝑋A\in Xitalic_A ∈ italic_X then Coren⁡(A)∈XsuperscriptCore𝑛𝐴𝑋\operatorname{Core}^{n}(A)\in Xroman_Core start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_A ) ∈ italic_X since X𝑋Xitalic_X is downwards closed.
A
Qualitative Comparison: To qualitatively show the performance of different learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and these two schemes in Fig. 8, in which each pixel value of the distortion distribution map represents the distortion level. Since the ordinal distortion estimation pays more attention to the realistic distortion perception and reasonable learning strategy, our scheme achieves results much closer to the ground truth 3D DDM. Due to implicit learning, the distortion parameter estimation generates inferior reconstructed results, such as the under-fitting (left) and over-fitting (right) on the global distribution approximation as shown in Fig. 8.
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left to right.
We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scenes. The indoor and outdoor scenes are shown in Fig. 11, and the people and challenging scenes are shown in Fig. 12. Our approach performs well on all scenes, while the traditional methods [23, 24] show inferior corrected results under the scene that lacks sufficient hand-crafted features, especially in the people and challenging scenes. On the other hand, the learning methods [8, 11, 12] lag behind in the sufficient distortion perception and cannot easily adapt to scenes with strong geometric distortion. For example, the results obtained by Rong [8] show coarse rectified structures, which are induced by the implicit learning of distortion and simple model assumption. Li [11] leveraged the estimated distortion flow to generate the rectified images. However, the accuracy of the pixel-wise reconstruction heavily relies on the performance of scene analysis, leading to some stronger distortion results under complex scenes. Although Liao [12] generated better rectified images than the above learning methods in terms of global distribution; the results display unpleasant blur local appearances due to the used adversarial learning manner. In contrast, our results achieve the best performance on global distribution and local appearance, which benefit from the proposed learning-friendly representation and the effective learning model.
Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left to right.
Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left to right.
A
Apart from these empirical findings, there have been some theoretical studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training.
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the four baselines. Furthermore, it achieves faster convergence rates than LARS for the small and large batch sizes, which is consistent with our convergence analysis for the block-wise update strategy.
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD. However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
C
{\mathcal{F}}roman_support ( caligraphic_D ) ⊆ 2 start_POSTSUPERSCRIPT caligraphic_C end_POSTSUPERSCRIPT × blackboard_R start_POSTSUPERSCRIPT caligraphic_F end_POSTSUPERSCRIPT and, in the black-box setting, |𝒟|𝒟|\mathcal{D}|| caligraphic_D | may be uncountably infinite.
The most general way to represent the scenario distribution 𝒟𝒟\mathcal{D}caligraphic_D is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios A𝐴Aitalic_A according to 𝒟𝒟\mathcal{D}caligraphic_D. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the distribution 𝒟𝒟\mathcal{D}caligraphic_D is listed explicitly. We use the suffixes BB and Poly to distinguish these settings. For example, 2S-Sup-BB is the previously defined 2S-Sup in the black-box model.
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unknown but we have the ability to sample or simulate from the distribution. To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic k𝑘kitalic_k-center variant, where points arrive independently and each point only needs to get covered with some given probability. 2S-Sup is the natural two-stage counterpart of the well-known Knapsack-Supplier problem, which has a well-known 3333-approximation [14].
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any ρ𝜌\rhoitalic_ρ-approximation algorithm for the robust outlier problem into a (ρ+2)𝜌2(\rho+2)( italic_ρ + 2 )-approximation algorithm for the corresponding two-stage stochastic problem. This is similar to a robust supplier problem considered in [3] under the name priority center, and many of the approximation algorithms of [3] can be adapted to our setting.
Stochastic optimization, first introduced in the work of Beale [4] and Dantzig [8], provides a way to model uncertainty in the realization of the input data. In this paper, we give approximation algorithms for a family of problems in stochastic optimization, and more precisely in the 2222-stage recourse model [27]. Our formal problem definitions follow.
B
Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d. In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments depend on the states of the local optimizers. The random graph sequences in [12]-[15] are i.i.d. with connected and undirected mean graphs. In addition, additive communication noises are considered in [14]-[15].
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error. This leads the nonnegative supermartingale convergence theorem not to be applied directly
such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost functions are used in many distributed optimization algorithms. However, it is difficult to get accurate (sub)gradients in many practical applications. For example, in distributed statistical machine learning ([3]), the local loss functions are the mathematical expectations of random functions so that the local optimizers can only obtain the measurement of the (sub)gradients with random noises. The influence of (sub)gradient measurement noises has been considered for distributed optimization algorithms in [4]-[7].
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows.
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed. In the most of existing works on the distributed convex optimization, it is assumed that the subgradients are bounded if the local cost
D
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or mostly identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, which can be used to re-identify the record owner when taken together; and (3) Sensitive Attribute (SA), such as salary and disease, which contains the confidential information of individuals. According to the work of Sweeney [31], even with all EI attributes being removed, the record owners can still be re-identified by matching the combination of QI values.
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar tuples to cover for each other at the minimal cost. Finally, MuCo generates anonymized microdata by replacing the original QI values with random values according to the random output tables. For instance, for the original table in Figure 1(a), MuCo partitions the records into four groups and calculates random output tables on age as shown in Figure 3. In the random output tables, the rows correspond to the records, and the columns correspond to the ranges of age values. Every entry value denotes the probability that the record carries the column value in the anonymized table. For example, we can observe that Helen is covered with Daphne and Dean, and her age outputs 28 with a probability of 0.7129 and outputs 29 with a probability of 0.2871. Then, MuCo generates an anonymized table in which the original QI values are replaced by the random values according to the random output tables.
Generalization [8, 26] is one of the most widely used privacy-preserving techniques. It transforms the values on QI attributes into general forms, and the tuples with equally generalized values constitute an equivalence group. In this way, records in the same equivalence group are indistinguishable. k𝑘kitalic_k-Anonymity [31, 28] ensures that the probability of identity disclosure is at most 1/k1𝑘1/k1 / italic_k. For instance, Figure 1(b) is a generalized table of Figure 1(a) that complies with 2-anonymity, and the adversary has to acquire at least two different tuples by matching the age value of any person.
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, causing that the equivalence groups only preserve the ranges of QI values and the number of records. Consequently, the distributions of QI values are hardly maintained and the information utility is reduced significantly. For instance, as shown in Figure 2, the red polyline and the magenta polyline represent the distributions on age in Figure 1(a) and Figure 1(c), respectively. We can observe that the original distribution is barely preserved in the generalized table. On the other hand, the partition of equivalence groups also increases the information loss of anonymized table because the results of query statements are always the matching equivalence groups rather than the specific matching tuples. For example, if we want to select the tuples whose age values are more than 30 in Figure 1(c), both equivalence groups are considered as the results.
Although the generalization for k𝑘kitalic_k-anonymity provides enough protection for identities, it is vulnerable to the attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by matching his age without re-identifying his exact record. To prevent such disclosure, many effective principles have been proposed, such as l𝑙litalic_l-diversity [23] and t𝑡titalic_t-closeness [19]. For example, Figure 1(c) is the generalized version of Figure 1(a) complying with 5-diversity, such that the proportion of each sensitive value inside the equivalence group is no more than 1/5151/51 / 5. Thus, for any individual, the adversary has to obtain at least five different sensitive values by matching the age value.
B
In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRend Kirillov et al. (2020). Most of these detectors focus on an overall performance on public datasets like COCO, which contains much smaller instances than 3D-FUTURE, while paying less attention to large objects segmentation. As illustrated in Figure 1, the size distribution of bounding boxes in 3D-FUTURE and COCO indicates that the former contains much larger objects while the latter is dominated by smaller instances. Thus, the prominent methods used in COCO, like MaskRCNN He et al. (2017) and HTC, may generate blurry contours for large instances. Their mask heads output segmentation from a limited small feature size (e.g., 14×14141414\times 1414 × 14), which is dramatically insufficient to represent large objects. All of these motivate us to segment large instances in a fine-grained and high-quality manner. SOLOv2 builds an efficient single-shot framework with strong performance and dynamically generates predictions with much larger mask size (e.g., 1/4 scale of input size) than HTC. PointRend iteratively renders the output mask over adaptively sampled uncertain points in a coarse-to-fine fashion, which is naturally suitable for generating smooth and fine-grained instance boundaries. By conducting extensive experiments on HTC, SOLOv2 and PointRend, PointRend succeeds in producing finer mask boundaries and significantly outperforms other methods by a large margin. Our step-by-step modifications adopted on PointRend finally achieves state-of-the-art performance on 3D-FUTURE dataset, which yields 79.2 mAP and 77.38 mAP on validation and test set respectively. The final submission is an ensemble of 5 PointRend models with slightly different settings, reaching the 1st place in this competition.
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from original 14 to 26 for coarse prediction head, and from 14 to 24 for fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our splitted validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020). In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission.
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with Res2NetR101 backbone yields 74.58 mAP on validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (2020) on COCO. In SOLOv2, the unified mask feature branch is dynamically convoluted by learned kernels, and the adaptively generated mask for each location benefits from the whole image view instead of cropped region proposals like HTC. Using ResNeXt101-64x4d plugined with DCN and GC block, SOLOv2 achieves 75.29 mAP on validation set (see Table 1). It’s worth noting that other attempts, including NASFPN, data augmentation and Mask Scoring, bring little improvement in our experiments.
C
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
We denote by εi:{−1,1}n→{−1,1}:subscript𝜀𝑖→superscript11𝑛11\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 } the projection onto the i𝑖iitalic_i-s coordinate: εi⁢(δ1,…,δn)=δisubscript𝜀𝑖subscript𝛿1…subscript𝛿𝑛subscript𝛿𝑖\varepsilon_{i}(\delta_{1},\dots,\delta_{n})=\delta_{i}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_δ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_δ start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ) = italic_δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT. For a subset A𝐴Aitalic_A of [n]:={1,…,n}assigndelimited-[]𝑛1…𝑛[n]:=\{1,\dots,n\}[ italic_n ] := { 1 , … , italic_n } we denote WA=∏i∈Aεisubscript𝑊𝐴subscriptproduct𝑖𝐴subscript𝜀𝑖W_{A}=\prod_{i\in A}\varepsilon_{i}italic_W start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT = ∏ start_POSTSUBSCRIPT italic_i ∈ italic_A end_POSTSUBSCRIPT italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, WA:{−1,1}n→{−1,1}:subscript𝑊𝐴→superscript11𝑛11W_{A}:\{-1,1\}^{n}\to\{-1,1\}italic_W start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 }. The WAsubscript𝑊𝐴W_{A}italic_W start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT-s are the characters of the Cantor group {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT (with coordintewise multiplication) and form an orthonormal basis in L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT of the Cantor group equipped with the normalized counting measure. In this note we shall be concerned with functions from {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT into the complex plane, ℂℂ\mathbb{C}blackboard_C. These can also be considered as a couple of real functions. Each such function f:{−1,1}n→ℂ:𝑓→superscript11𝑛ℂf:\{-1,1\}^{n}\to\mathbb{C}italic_f : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → blackboard_C has a unique expansion
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
C
Figure 1: Comparisons of different methods on cumulative reward under two different environments. The results are averaged over 10 trials and the error bars show the standard deviations. The environment changes abruptly in the left subfigure, whereas the environment changes gradually in the right subfigure.
For the case when the environment changes abruptly L𝐿Litalic_L times, our algorithm enjoys an O~⁢(L1/3⁢T2/3)~𝑂superscript𝐿13superscript𝑇23\tilde{O}(L^{1/3}T^{2/3})over~ start_ARG italic_O end_ARG ( italic_L start_POSTSUPERSCRIPT 1 / 3 end_POSTSUPERSCRIPT italic_T start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT ) dynamic regret bound, which is sub-optimal compared to Wei & Luo (2021). The reason is that periodic restart is not a suitable strategy to handle abrupt changes since the passive nature indicates that we cannot guarantee detecting the abrupt environment change within a reasonably short delay. Wei & Luo (2021) overcome this issue by running two tests on top of multiple base instances with different scales to detect the environmental change. Similar ideas have also been used in the piecewise-stationary bandit literature (Besson & Kaufmann, 2019), where a change detection subroutine is run to detect the environmental change, so the regret incurred by the environmental drift can be better controlled.
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much less compared with MASTER, OPT-WLSVI, LSVI-UCB, Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and thus have much smaller computational burden since it does not need to use the entire history to compute the current policy at each time step. The running time of LSVI-UCB-Unknown is larger than LSVI-UCB-restart since the epoch larger is larger due to the lack of the knowlege of total variation B𝐵Bitalic_B, but it still does not use the entire history to compute its policy. Although Random-Exploration takes the least time, it cannot find the near-optimal policy. This result further demonstrates that our algorithms are not only sample-efficient, but also computationally tractable.
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting. The reason is that the algorithms designed to explore in stationary MDPs are generally insensitive to abrupt change in the environment. For example, UCB-type exploration does not have incentive to take actions other than the one with the largest upper confidence bound of Q𝑄Qitalic_Q-value, and if it has collected sufficient number of samples, it very likely never explores the new optimal action thereby taking the former optimal action forever. On the other hand, in gradually-changing environment, LSVI-UCB and Epsilon-Greedy can perform well in the beginning when the drift of environment is small. However, when the change of environment is greater, they no longer yield satisfactory performance since their Q𝑄Qitalic_Q function estimate is quite off. This also explains why LSVI-UCB and Epsilon-Greedy outperform ADA-LSVI-UCB at the beginning in the gradually-changing environment, as shown in Figure 1.
From Figure 1, we see LSVI-UCB-Restart with the knowledge of global variation drastically outperforms all other methods designed for stationary environments , in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the Q𝑄Qitalic_Q function with knowledge of the total variations. Ada-LSVI-UCB-Restart also outperforms the baselines because it also takes the nonstationarity into account by periodically updating the epoch size for restart. In addition, Ada-LSVI-UCB-Restart has a huge gain compared to LSVI-UCB-Unknown, which agrees with our theoretical analysis. This suggests that Ada-LSVI-UCB-Restart works well when the knowledge of global variation is unavailable. Our proposed algorithms not only perform systemic exploration, but also adapt to the environment change.
C
README.md exists but content is empty.
Downloads last month
31