that adds the results of $1+(n-m)/2$ Gaussian integrations for moments $x^{D-1+n-2s}$. The disadvantage
$$\int_{0}^{1}x^{D-1}R_{n}^{m}(x)R_{n'}^{m}(x)\,dx=\frac{1}{2n+D}\,\delta_{n,n'}.$$
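As a concrete check, the orthogonality relation can be verified numerically in the classical case $D=2$, where $R_n^m$ reduces to the standard Zernike radial polynomial. The closed-form coefficients below are the standard $D=2$ formula, used here purely as an illustrative sketch; the weighted inner product is integrated term by term, so no quadrature error enters.

```python
# Exact check of ∫_0^1 x^{D-1} R_n^m(x) R_{n'}^m(x) dx = δ_{n,n'}/(2n+D)
# for D = 2, using the classical Zernike radial polynomials.
from math import factorial

def zernike_radial_coeffs(n, m):
    """Coefficients c[k] of x^k for the Zernike radial polynomial R_n^m."""
    coeffs = [0.0] * (n + 1)
    for s in range((n - m) // 2 + 1):
        coeffs[n - 2 * s] += ((-1) ** s * factorial(n - s)
                              / (factorial(s)
                                 * factorial((n + m) // 2 - s)
                                 * factorial((n - m) // 2 - s)))
    return coeffs

def weighted_inner_product(n, n2, m, D=2):
    """Exact ∫_0^1 x^{D-1} R_n^m(x) R_{n2}^m(x) dx via term-by-term integration."""
    a, b = zernike_radial_coeffs(n, m), zernike_radial_coeffs(n2, m)
    total = 0.0
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            total += ai * bj / (i + j + D)   # ∫_0^1 x^{i+j+D-1} dx
    return total
```

For example, `weighted_inner_product(2, 2, 0)` gives $1/6 = 1/(2\cdot 2+2)$, while the cross term `weighted_inner_product(2, 0, 0)` vanishes.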
Gaussian integration rules for integrals $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)f(x)\,dx$
$$x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\left[nx^{2}(n+D)-m(D-2+m)\right]R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$$
rules for the lifted integrals $\int_{0}^{1}x^{D-1}[1+R_{n}^{m}(x)]f(x)\,dx$
On the other hand, if the instruction $I_{t}$ was $\operatorname{Show}(A)$ then $\operatorname{Eval}(S,M,s,t)$ is defined to be the list of elements stored in memory slots $M[i]$
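A minimal, hypothetical sketch of such an evaluator may help fix ideas: memory slots hold elements, and a `show` instruction returns the elements in the listed slots. The instruction encoding and names below are our own illustration (with integer multiplication standing in for the group operation), not the paper's formal definition of $\operatorname{Eval}$.

```python
# Toy MSLP evaluator: M is the memory, initialised with the generators.
#   ('mul', i, j, k): M[k] <- M[i] * M[j]
#   ('copy', i, k):   M[k] <- M[i]
#   ('show', slots):  return [M[i] for i in slots]
def eval_mslp(generators, instructions):
    M = list(generators)
    for inst in instructions:
        if inst[0] == 'mul':
            _, i, j, k = inst
            M[k] = M[i] * M[j]
        elif inst[0] == 'copy':
            _, i, k = inst
            M[k] = M[i]
        elif inst[0] == 'show':
            return [M[i] for i in inst[1]]
    return M
```

For instance, `eval_mslp([2, 3, 1], [('mul', 0, 1, 2), ('show', [2])])` returns `[6]`.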
This adds only one extra MSLP instruction, in order to form and store the element $xv^{-1}$ needed in the conjugate on the right-hand side of (2) (this element can later be overwritten and so does not add to the overall maximum memory quota; recall also that $x$ is no longer the identity when $d$ is odd). Observe that the formula (1) differs from the $d$ odd case only in the sense that $v$ is replaced by $x^{-1}$, and hence the initial computation of $T_{2}$ requires the same number of instructions and memory slots as before.
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, SlotUsagePattern improves the memory usage but is not necessarily optimal overall; hence the number of slots can still exceed that of a carefully constructed MSLP. It should also be mentioned that in some cases the number of slots can even be smaller than that of a constructed MSLP, but predicting this requires a careful analysis that would itself amount to an MSLP construction as in this paper.
Instruction type (i) above simply copies an element already in memory to a different memory slot. These instructions can arguably be disregarded for the purpose of determining the length of an MSLP, because in a practical implementation they could be handled via relabelling.
For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements $-g_{ic}g_{rc}^{-1}$ in (11) (and similarly in (12)) are given to us as polynomials of degree at most $f-1$ in the primitive element $\omega$, where $q=p^{f}$ for some prime $p$.
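As a toy illustration of this representation, consider $p=2$, $f=2$: an element of $\mathrm{GF}(4)$ is a coefficient pair $(c_0,c_1)$ meaning $c_0+c_1\omega$, and multiplication reduces via a minimal polynomial. The relation $\omega^2=\omega+1$ below is our own choice of minimal polynomial for the example, not taken from the source.

```python
# Multiplication in GF(4) with elements stored as polynomials of degree
# at most f-1 = 1 in the primitive element ω, assuming ω² = ω + 1.
def gf4_mul(a, b):
    """a, b: (c0, c1) meaning c0 + c1*ω over GF(2)."""
    a0, a1 = a
    b0, b1 = b
    # (a0 + a1 ω)(b0 + b1 ω) = a0 b0 + (a0 b1 + a1 b0) ω + a1 b1 ω²,
    # and ω² = 1 + ω contributes to both coefficients.
    c0 = (a0 * b0 + a1 * b1) % 2
    c1 = (a0 * b1 + a1 * b0 + a1 * b1) % 2
    return (c0, c1)
```

For example, $\omega\cdot\omega=\omega+1$, i.e. `gf4_mul((0, 1), (0, 1))` returns `(1, 1)`.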
where $\Omega\subset\mathbb{R}^{d}$, with $d=2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]_{\mathrm{sym}}^{d\times d}$ is uniformly positive definite and bounded, and $g$ is part of the given data.
In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions of $\bar{\tau}\cup\bar{\tau}'$ with large coefficients surrounded by regions with small coefficients. Generalized eigenvalue problems have also been used in overlapping domain decomposition solvers [MR2718268, MR2916377, MR3175183, MR3033238]. The design of discretizations that are robust with respect to coefficients using domain decomposition ideas has been studied in [MR2666649, MR1642758, MR3350765] assuming some regularity of the solution, and in [MR2718268] for a class of problems where the weighted Poincaré constant [MR3047947, MR3013465, MR2867661] is not large; otherwise the exponential decay of the multiscale functions deteriorates. See also [MR2753343, MR3109775], where a priori error estimates are obtained in terms of spectral norms.
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems.
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the methods less practical. In this paper, in the presence of rough coefficients, spectral techniques are employed to overcome this hurdle: by solving local eigenvalue problems we define a space in which the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove the macro-element corner singularities that occur in LOD methods based on
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85, MR1979846, MR2058933, HMV, MR1642758, MR3584539, MR2030161, MR2383203, vs1, vs2, MR2740478]. Some methods work even when the solution has low regularity [MR2801210, MR2753343, MR3225627, MR3177856, MR2861254] but are based on ideas that differ considerably from what we advocate here
On the contrary, we may need to use a function $\theta$ of the variable $(b,c)$; see the description of $\mathsf{Kill}_{F}$ in Subsection 3.1 for an example. As such, the flow of Rotate-and-Kill is different from that of RC.
Alg-A has simpler primitives because (1) the candidate triangles considered in it all have corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the code length for this is in a 1:7 ratio between Alg-A and Alg-CM.
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
We think Alg-A is better in almost every aspect, because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity $O(n)$ than the others:
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We mitigate this by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of the Munich shooting higher than the average of news events (hence, close to a news event). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the Munich shooting event in Figure 5(b). The curve of the Munich shooting event is also close to the curve of average news, indicating that the event is more news-related.
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on the time series approach and train the classifier with features from different high-level contexts (i.e., users, Twitter and propagation) in a cascaded manner. In this section, we first detail the employed Dynamic Series-Time Structure, and then describe the high- and low-level ensemble features used for learning in this pipeline step.
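A DSTS-style representation can be sketched as follows: the event's observation window is split into equal intervals, each feature is aggregated per interval, and the per-interval values are concatenated into one vector. The interval count, the mean aggregator, and all names below are our own assumptions for illustration, not the exact construction of [20].

```python
# Sketch of a time-series (DSTS-style) feature vector: one aggregated
# value per (feature, interval), concatenated into a flat vector.
def dsts_vector(tweets, t0, t_end, n_intervals, feature_fns):
    """tweets: list of (timestamp, tweet) pairs; feature_fns: list of
    functions tweet -> float. Returns a flat list of length
    n_intervals * len(feature_fns)."""
    width = (t_end - t0) / n_intervals
    buckets = [[] for _ in range(n_intervals)]
    for ts, tw in tweets:
        if t0 <= ts < t_end:
            buckets[min(int((ts - t0) / width), n_intervals - 1)].append(tw)
    vec = []
    for f in feature_fns:
        for b in buckets:
            vec.append(sum(f(t) for t in b) / len(b) if b else 0.0)
    return vec
```

Empty intervals are filled with 0.0 so that every event maps to a vector of the same fixed length, as a classifier requires.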
Most relevant for our work is [20], where a time series model captures the time-based variation of social-content features. We build upon the idea of their Series-Time Structure in designing our approach for early rumor detection with our extended dataset, and we provide a deep analysis of how a wide range of features change during diffusion time. Ma et al. [19] used Recurrent Neural Networks for rumor detection: they batch tweets into time intervals and model the time series as an RNN sequence. Without any other handcrafted features, they achieved almost 90% accuracy for events reported on Snopes.com. As with other deep learning models, the learning process is a black box, so we cannot trace the cause of the good performance based only on content features. The model performance also depends on the tweet retrieval mechanism, whose quality is uncertain for stream-based trending sub-events.
In this work, we propose an effective cascaded rumor detection approach that uses deep neural networks at the tweet level in the first stage and the wisdom of the “machines”, together with a variety of other features, in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed approach outperforms state of the
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks that can capture more hidden meaningful signals than enquiries alone to debunk rumors. [7, 19] also use RNNs for rumor debunking. However, in their work, the RNN is used at the event level: the classification leverages only the deep representations of the aggregated tweet contents of the whole event, while ignoring other features that are effective in a later stage, such as user-based and propagation features. Although tweet contents are the only reliable source of clues at an early stage, they are also likely to carry doubtful perspectives and different stances at this specific moment. In addition, they could relate to rumorous sub-events (see, e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution; however, detecting sub-events in real time over the Twitter stream is a challenging task [22], which increases latency and complexity. In this work, we address this issue by deep neural modeling only at the single-tweet level. Our intuition is to leverage the “wisdom of the crowd” theory: even if a certain portion of tweets at a moment (mostly the early stage) are weakly predicted (because of these noisy factors), their ensemble contributes to a stronger prediction.
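The tweet-level voting can be sketched minimally: each tweet receives a credibility score from a tweet-level classifier (stubbed out here; in the paper this is the CNN output), and the event-level score is the average of the votes. Function names are our own illustration.

```python
# Sketch of "wisdom of the crowd" aggregation: the event-level
# CreditScore is the mean of per-tweet credibility votes, so a few
# noisy, weakly-predicted tweets cannot flip the event label.
def event_credit_score(tweet_scores):
    """tweet_scores: per-tweet credibility in [0, 1]."""
    if not tweet_scores:
        raise ValueError("no tweets to aggregate")
    return sum(tweet_scores) / len(tweet_scores)
```

With votes `[0.9, 0.8, 0.2]`, one low (rumor-like) tweet still leaves the event score above 0.5, i.e. closer to a news event.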
The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile to continue optimizing long after we have zero training error, and
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is also independent of the step-size
Let $\ell$ be the logistic loss, and $\mathcal{V}$ be an independent validation set for which there exists $\mathbf{x}\in\mathcal{V}$ such that $\mathbf{x}^{\top}\hat{\mathbf{w}}<0$. Then the validation loss increases as
The follow-up paper (Gunasekar et al., 2018) studied the same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the results for exponential loss are independent of initialization and require only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects, thereby making the analysis simpler than for squared loss.
We should not rely on the plateauing of the training loss, or on the loss (logistic, exp, or cross-entropy) evaluated on validation data, as a measure of when to stop. Instead, we should look at the $0$–$1$ error on the validation dataset. We might improve the validation and test errors even when the decrease in the training loss is tiny and even when the validation loss itself increases.
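The stopping rule suggested above can be sketched directly: track the validation $0$–$1$ error per epoch and keep the iterate minimizing it, even if the validation loss is rising. The per-epoch margin histories below are toy inputs of our own for illustration.

```python
# Early stopping on the validation 0-1 error rather than validation loss.
def zero_one_error(margins):
    """margins: y_i * <w, x_i> for each validation point;
    the 0-1 error is the fraction of non-positive margins."""
    return sum(1 for m in margins if m <= 0) / len(margins)

def best_epoch(per_epoch_val_margins):
    """Pick the (first) epoch minimising the validation 0-1 error."""
    errs = [zero_one_error(m) for m in per_epoch_val_margins]
    return min(range(len(errs)), key=lambda i: errs[i])
```

Note that a point with negative margin makes the logistic validation loss grow as the weights' norm grows, yet the $0$–$1$ error can still decrease across epochs, which is exactly why the two criteria disagree.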
To overcome this issue, we set a threshold of 72 hours. We only consider the first candidate within 72 hours before or after the beginning time of the event as the timestamp of humans confirming rumors. On average, the human editors of Snopes need 25.49 hours to verify a rumor and post it. Our system already achieves 87% accuracy within 25 hours. We illustrate two examples in Figures 12(a) and 12(b). Figure 12(a) is a rumor about ‘Okra curing diabetes’ (http://www.snopes.com/medical/homecure/okra.asp), for which we detected the beginning time as 01.31.2014 04:00. Snopes debunked it at 01.28.2014 21:00, 55 hours earlier than our study time period. However, Snopes does not provide any information regarding how they detect rumors. Figure 12(b) depicts another example, in which humans detected the rumor 71 hours after the event started, the latest detection in our study. Despite those issues, we show the comparison results in Table 12.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We mitigate this by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 13(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of the Munich shooting higher than the average of news events (hence, close to a news event). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the Munich shooting event in Figure 13(b). The curve of the Munich shooting event is also close to the curve of average news, indicating that the event is more news-related.
the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks that can capture more hidden meaningful signals than enquiries alone to debunk rumors. (madetecting) also use RNNs for rumor debunking. However, in their work, the RNN is used at the event level: the classification leverages only the deep representations of the aggregated tweet contents of the whole event, while ignoring other features that are effective in a later stage, such as user-based and propagation features. Although tweet contents are the only reliable source of clues at an early stage, they are also likely to carry doubtful perspectives and different stances at this specific moment. In addition, they could relate to rumorous sub-events (see, e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution; however, detecting sub-events in real time over the Twitter stream is a challenging task (meladianos2015degeneracy), which increases latency and complexity. In this work, we address this issue by deep neural modeling only at the single-tweet level. Our intuition is to leverage the “wisdom of the crowd” theory: even if a certain portion of tweets at a moment (mostly the early stage) are weakly predicted (because of these noisy factors), their ensemble contributes to a stronger prediction.
At 18:22 CEST, the first tweet was posted. There might be some delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet reads: ”Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”.
The time period of a rumor event is sometimes fuzzy and hard to define. One reason is that a rumor may have been triggered long ago and kept existing, but did not attract public attention. It can then be re-triggered by other events after an uncertain time and suddenly spread as a bursty event. E.g., a rumor (http://www.snopes.com/robert-byrd-kkk-photo/) claimed that Robert Byrd was a member of the KKK. This rumor had been circulating on Twitter for a while: as shown in Figure 7(a), almost every day there were several tweets talking about it. But it was re-triggered by a picture of Robert Byrd kissing Hillary Clinton in 2016 (http://www.snopes.com/clinton-byrd-photo-klan/), whereupon Twitter users suddenly noticed the rumor and it spread burstily. In this work, what we are really interested in are the tweets posted in the hours around the bursty peak. We define the hour with the largest tweet volume as $t_{max}$. Since we want to detect the rumor event as soon as possible before its burst, we define the time of the first tweet within 48 hours before $t_{max}$ as the beginning of the rumor event, marked as $t_{0}$, and the end time of the event as $t_{end}=t_{0}+48$. We show the tweet volumes of the above rumor example in Figure 7(b).
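The window definition above can be sketched with hour indices standing in for real timestamps: find the busiest hour $t_{max}$, take the earliest tweet in the 48 hours up to $t_{max}$ as $t_0$, and set $t_{end}=t_0+48$. All names are our own illustration.

```python
# Sketch of the rumor-event window: t_max is the hour with the largest
# tweet volume, t0 the first tweet hour in [t_max - 48, t_max], and
# t_end = t0 + 48 (hours).
from collections import Counter

def event_window(tweet_hours):
    """tweet_hours: hour index (int) of each tweet. Returns (t0, t_end)."""
    volume = Counter(tweet_hours)
    t_max = max(volume, key=lambda h: (volume[h], -h))  # busiest hour
    in_window = [h for h in tweet_hours if t_max - 48 <= h <= t_max]
    t0 = min(in_window)
    return t0, t0 + 48
```

Earlier, sporadic tweets (the long pre-burst circulation) fall outside the 48-hour look-back and so do not stretch the window.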
$$\mathsf{f}^{*}=\arg\min_{f}\sum_{\forall a}\mathcal{L}\left(\sum_{k=1}^{n}P(\mathcal{C}_{k}|a,t)\sum_{l=1}^{m}P(\mathcal{T}_{l}|a,t,\mathcal{C}_{k})\,\hat{y}_{a},\;y_{a}\right)$$
Learning a single model for ranking event entity aspects is not effective due to the dynamic nature of a real-world event, which is driven by a great variety of factors. We address the two major factors assumed to have the most influence on the dynamics of events at the aspect level, i.e., time and event type. Thus, we propose an adaptive approach based on an ensemble of multiple ranking models learned from training data partitioned by entities' temporal and type aspects. In more detail, we learn multiple models, which are co-trained using the data soft partitioning / clustering method in Section 4.2, and finally combine the ranking results of the different models in an ensemble manner. This approach allows sub-models to learn for different types and times (where feature sets can perform differently) without hurting each other. The adaptive global loss then co-optimizes all sub-models in a unified framework. We describe the details as follows.
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with non-cascaded logistic regression. The results are shown in Table 3 (bottom): our cascaded model, with features inherited from the performance of the SVM in the previous task, substantially improves on the single model. However, the overall modest results show the difficulty of this multi-class classification task.
For this part, we first focus on evaluating the performance of single L2R models learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type specification (RQ2). We then evaluate our ensemble ranking model (results from the cascaded evaluation) and show that it robustly improves the baselines for all studied cases (RQ3). Notice that we do not use the learned classifier from Section 5.2 for our ensemble model, since both use the same time period for training; instead we opt for the on-the-fly ranking-sensitive clustering technique described in Section 4.2.
Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming independent loss functions that do not consider the correlation and overlap between models. We adapted the L2R method RankSVM [12], whose goal is to learn a linear model that minimizes the number of discordant pairs in the training data. We modified the objective function of RankSVM following our global loss function, which takes into account the temporal feature specificities of event entities. The temporal and type-dependent ranking model is learned by minimizing the following objective function:
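The ensemble prediction inside the global loss can be sketched as a doubly weighted combination: each (type $k$, time $l$) sub-model produces a score for an entity, weighted by the soft type membership $P(\mathcal{C}_k|a,t)$ and time membership $P(\mathcal{T}_l|a,t,\mathcal{C}_k)$. All probabilities and scores below are toy inputs of our own for illustration.

```python
# Sketch of the weighted ensemble prediction that the global loss
# compares against the gold label y_a.
def ensemble_score(p_type, p_time, y_hat):
    """p_type: list of n type probabilities P(C_k | a, t);
    p_time: n x m time probabilities P(T_l | a, t, C_k);
    y_hat: n x m sub-model scores. Returns the combined prediction."""
    return sum(p_type[k] * sum(p_time[k][l] * y_hat[k][l]
                               for l in range(len(y_hat[k])))
               for k in range(len(y_hat)))
```

Because the memberships are soft, every sub-model contributes to every entity's prediction, which is what couples the sub-models in the unified objective.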
In this case, the agent must sequentially learn both the underlying dynamics ($L_{a},\Sigma_{a};\ \forall a$) and the conditional reward function’s variance ($\sigma_{a}^{2},\ \forall a$),
We observe noticeable (almost linear) regret increases when the dynamics of the parameters swap the identity of the optimal arm. However, SMC-based Thompson sampling and Bayes-UCB agents are able to learn the evolution of the dynamic latent parameters,
We now describe in detail how to use the SMC-based posterior random measure $p_{M}(\theta_{t+1,a}|\mathcal{H}_{1:t})$ for both the Thompson sampling and Bayes-UCB policies: i.e., the specific instructions to execute in steps 5 and 7 of Algorithm 1.
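A hypothetical sketch of how the particle approximation feeds the two policies: Thompson sampling draws one particle per arm and plays the argmax, while Bayes-UCB plays the arm with the largest posterior quantile of the particle values. Particle values standing directly for expected rewards, and the quantile level, are our own simplifying assumptions, not the paper's algorithm.

```python
# Toy use of per-arm weighted particles approximating p(θ_{t+1,a} | H_{1:t}).
import random

def thompson_arm(particles, weights, rng):
    """Draw one posterior sample per arm; play the arm with the largest draw."""
    draws = [rng.choices(particles[a], weights=weights[a])[0]
             for a in range(len(particles))]
    return max(range(len(draws)), key=lambda a: draws[a])

def bayes_ucb_arm(particles, weights, quantile=0.95):
    """Play the arm with the largest weighted-particle quantile."""
    scores = []
    for a in range(len(particles)):
        pairs = sorted(zip(particles[a], weights[a]))
        total, acc, q = sum(weights[a]), 0.0, pairs[-1][0]
        for val, w in pairs:
            acc += w
            if acc >= quantile * total:
                q = val
                break
        scores.append(q)
    return max(range(len(scores)), key=lambda a: scores[a])
```

Both policies read only the particle set, so they adapt automatically as SMC re-weights and propagates particles through the latent dynamics.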
For the more interesting case of unknown parameters, we marginalize the parameters $L_{a}$ and $\Sigma_{a}$ of the transition distributions
If the support of $q(\cdot)$ includes the support of the distribution of interest $p(\cdot)$, one computes the IS estimator of a test function based on the normalized weights $w^{(m)}$,
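The self-normalized estimator can be sketched directly: draw from a proposal $q$ whose support covers $p$, weight each draw by $p/q$ (here via unnormalized log-densities, with a max-shift for numerical stability), normalize the weights $w^{(m)}$, and average the test function. The particular target, proposal and test function in the usage below are toy choices of ours.

```python
# Self-normalised importance sampling (SNIS) estimator of E_p[f(x)].
import math, random

def snis_estimate(test_fn, log_p, log_q, draws):
    """log_p, log_q may be unnormalised: the normalising constants
    cancel in the self-normalised weights."""
    logw = [log_p(x) - log_q(x) for x in draws]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]        # stabilised raw weights
    total = sum(w)
    wn = [wi / total for wi in w]                 # normalised weights w^(m)
    return sum(wi * test_fn(x) for wi, x in zip(wn, draws))
```

For instance, estimating $E_p[x^2]$ for $p=\mathcal{N}(0,1)$ with proposal $q=\mathcal{N}(0,2^2)$ should return a value near 1.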
Table 2 gives an overview of the number of different measurements available for each patient (for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days for patient 8 to 33 days for patient 14.
These are also the patients who log glucose most often, 5 to 7 times per day on average, compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17), at least a part of the glucose measurements after meals falls within this range, while patient 12 has only two glucose measurements per day on average and measured glucose within 4 hours or less after a meal only 5 out of 54 times.
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
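The activity measure reduces to a simple count over consecutive intervals; the sketch below uses toy per-interval step counts and does not assume anything about the Google Fit export format.

```python
# Count the 10-minute intervals with at least 10 tracked steps.
def active_intervals(steps_per_10min, min_steps=10):
    """steps_per_10min: step count per consecutive 10-minute interval."""
    return sum(1 for s in steps_per_10min if s >= min_steps)
```

E.g., `active_intervals([0, 12, 9, 10, 300])` counts 3 active intervals.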
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it is possible that the discrepancy is a result of missing (glucose and carbohydrate) measurements.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units varies across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
Our proposed encoder-decoder model clearly demonstrated competitive performance on two datasets for visual saliency prediction. The ASPP module incorporated multi-scale information and global context based on semantic feature representations, which significantly improved the results both qualitatively and quantitatively on five eye tracking datasets. This suggests that convolutional layers with large receptive fields at different dilation factors can enable a more holistic estimation of salient image regions in complex scenes. Moreover, our approach is computationally lightweight compared to prior state-of-the-art approaches and could thus be implemented in (virtual) robotic systems that require computational efficiency. It also outperformed all other networks defined with a pre-trained VGG16 backbone, as calculated by the cumulative rank on a subset of evaluation metrics chosen to resolve some of the inconsistencies of ranking models by a single measure or a set of correlated ones Riche et al. (2013); Bylinskii et al. (2018).
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation metrics. Table 1 summarizes our results on the test dataset of MIT1003, namely MIT300 Judd et al. (2012), in the context of previous approaches. The evaluation shows that our model only marginally failed to achieve state-of-the-art performance on any of the individual metrics. When computing the cumulative rank (i.e. the sum of ranks according to the standard competition ranking procedure) on a subset of weakly correlated measures (sAUC, CC, KLD) Riche et al. (2013); Bylinskii et al. (2018), we ranked third behind the two architectures DenseSal and DPNSal from Oyama and Yamanaka (2018). However, their approaches were based on a pre-trained Densely Connected Convolutional Network with 161 layers Huang et al. (2017) and Dual Path Network with 131 layers Chen et al. (2017) respectively, both of which are computationally far more expensive than the VGG16 model used in this work (see Table 5 by Oyama and Yamanaka (2018) for a comparison of the computational efficiency). Furthermore, DenseSal and DPNSal implemented a multi-path design where two images of different resolutions are simultaneously fed to the network, which substantially reduces the execution speed compared to single-stream architectures. Among all entries of the MIT300 benchmark with a VGG16 backbone Cornia et al. (2016); Huang et al. (2015); Cornia et al. (2018); Kruthiventi et al. (2017), our model clearly achieved the highest performance.
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer et al. (2014) and II Kümmerer et al. (2016) employed a pre-trained classification model to read out salient image locations from a small subset of encoding layers. This is similar to the network by Cornia et al. (2016) which utilizes the output at three stages of the hierarchy. Oyama and Yamanaka (2018) demonstrated that classification performance of pre-trained architectures strongly correlates with the accuracy of saliency predictions, highlighting the importance of object information. Related approaches also focused on the potential benefits of incorporating activation from both coarse and fine image resolutions Huang et al. (2015), and recurrent connections to capture long-range spatial dependencies in convolutional feature maps Cornia et al. (2018); Liu and Han (2018). Our model explicitly combines semantic representations at multiple spatial scales to include contextual information in the predictive process. For a more complete account of existing saliency architectures, we refer the interested reader to a comprehensive review by Borji (2018).
Further improvements of benchmark results could potentially be achieved by a number of additions to the processing pipeline. Our model demonstrates a learned preference for predicting fixations in central regions of images, but we expect performance gains from modeling the central bias in scene viewing explicitly Kümmerer et al. (2014, 2016); Cornia et al. (2016, 2018); Kruthiventi et al. (2017). Additionally, Bylinskii et al. (2015) summarized open problems for correctly assigning saliency in natural images, such as robustness in detecting semantic features, implied gaze and motion, and importance weighting of multiple salient regions. While the latter was addressed in this study, Figure 4 indicates that the remaining obstacles still persist for our proposed model.
For related visual tasks such as semantic segmentation, information distributed over convolutional layers at different levels of the hierarchy can aid the preservation of fine spatial details Hariharan et al. (2015); Long et al. (2015). The prediction of fixation density maps does not require accurate class boundaries but still benefits from combined mid- to high-level feature responses Kümmerer et al. (2014, 2016); Cornia et al. (2016). Hence, we adapted the multi-level design proposed by Cornia et al. (2016) and concatenated the output from layers 10, 14, and 18 into a common tensor with 1,280 activation maps.
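A minimal sketch of this multi-level concatenation, representing feature maps as plain nested lists; the channel widths (256, 512, 512, summing to 1,280) are assumed from VGG16's convolutional stages, and the spatial size is illustrative:

```python
def concat_channels(*feature_maps):
    """Concatenate feature maps of shape (channels, height, width),
    given as nested lists, along the channel axis."""
    h, w = len(feature_maps[0][0]), len(feature_maps[0][0][0])
    # All inputs must share the same spatial resolution.
    assert all(len(f[0]) == h and len(f[0][0]) == w for f in feature_maps)
    return [ch for f in feature_maps for ch in f]

def zeros(c, h, w):
    return [[[0.0] * w for _ in range(h)] for _ in range(c)]

# Assumed channel widths at the three read-out layers: 256, 512, 512.
layer_10 = zeros(256, 4, 4)
layer_14 = zeros(512, 4, 4)
layer_18 = zeros(512, 4, 4)
fused = concat_channels(layer_10, layer_14, layer_18)
print(len(fused))  # 1280 activation maps
```

In a deep learning framework this is a single channel-axis concatenation; the sketch only makes the tensor bookkeeping explicit.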
Finally, we have to show that in this pd-marking scheme, the maximum number of $\texttt{active}$ positions is bounded by $2k+1$. This is obviously true at step $p_1$. Now let $s$ with $1 \leq s \leq |\alpha|-1$ be arbitrary. Since the total numbers of $\texttt{active}$ positions at steps $p_s$ and $p_{s+1}$ are bounded by $2k$, we only have to show that the maximum number of $\texttt{active}$ positions in the marking scheme transforming $p_s$ into $p_{s+1}$ is bounded by $2k+1$. Let us assume that at stages $s$ and $s+1$ of $\sigma$, there are $k_s$ ($k_{s+1}$, respectively) marked blocks, and exactly $k_{s,1}$ ($k_{s+1,1}$, respectively) blocks have size $1$; note that this means that at step $p_s$ there are $k_{s,1}+2(k_s-k_{s,1})$ $\texttt{active}$ positions.
j𝑗jitalic_j joins two blocks of size 1111: the number of activeactive\operatorname{\texttt{active}}act positions increases by 1111. This is due to the fact that by setting j𝑗jitalic_j to activeactive\operatorname{\texttt{active}}act, we do not create any internal activeactive\operatorname{\texttt{active}}act positions that could be set to closedclosed\operatorname{\texttt{closed}}closed.
We first prove $\operatorname{pw}(G_{\alpha}) \leq 2\operatorname{loc}(\alpha)$. Intuitively speaking, we will translate the stages of a marking sequence $\sigma$ for $\alpha$ into steps of a pd-marking scheme for $G_{\alpha}$ in a natural way: each marked block $\alpha[s..t]$ is represented by letting the border positions $s$ and $t$ be $\texttt{active}$, the internal positions $s+1, s+2, \ldots, t-1$ $\texttt{closed}$, and all other positions $\texttt{open}$. In particular, this means that each stage of the marking sequence with $k$ marked blocks is represented by at most $2k$ $\texttt{active}$ positions in the corresponding step of the pd-marking scheme (note that marked blocks of size $1$ are represented by only one $\texttt{active}$ position). The difficulty will be to show that in the process of transforming one such step of the pd-marking scheme into the next one, we do not produce more than $2\pi_{\sigma}(\alpha)+1$ $\texttt{active}$ positions. This is non-trivial since, due to the cover-property of the pd-marking scheme, we must first set all positions to $\texttt{active}$ that correspond to occurrences of the next symbol to be marked by $\sigma$ before we can set them from $\texttt{active}$ to $\texttt{closed}$.
This completes the definition of the marking scheme. Figure 7 contains an example of how step $p_{s+1}$ is obtained from step $p_s$. In this example, we first set those extending positions to $\texttt{active}$ that do not join marked blocks, and then we set the remaining extending positions to $\texttt{active}$. This is done for illustrative purposes (recall that we have not restricted the order in which we set extending positions to $\texttt{active}$).
In the first phase of the marking scheme, i.e., the phase where we only set extending positions to $\texttt{active}$, the following situations can arise whenever we set some position $j$ to $\texttt{active}$ (see Figure 7 for an illustration):
Zubair et al. [75] detected the R-peak using a non-linear transformation and formed a beat segment around it. They then used the segments to train a three-layer 1D CNN with a variable learning rate depending on the mean square error, and achieved better results than the previous state of the art.
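A minimal sketch of the beat-segmentation step described above, assuming R-peak indices have already been detected; the window lengths here are toy values (real beat segments span a few hundred samples):

```python
def segment_beats(signal, r_peaks, pre=2, post=3):
    """Cut a fixed-length window around each detected R-peak,
    skipping peaks too close to the record's edges."""
    beats = []
    for p in r_peaks:
        if p - pre >= 0 and p + post <= len(signal):
            beats.append(signal[p - pre : p + post])
    return beats

ecg = [0, 1, 9, 1, 0, 0, 1, 8, 1, 0]   # toy trace with two "R-peaks"
print(segment_beats(ecg, [2, 7]))       # two 5-sample beat windows
```

Each resulting window can then serve as one training example for a 1D CNN classifier.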
In their article, Kiranyaz et al. [77] trained patient-specific CNNs that can be used to classify long ECG data streams or serve as a real-time ECG monitoring and early-alert system on a wearable device. The CNN consisted of three layers of an adaptive 1D convolution implementation.
Taji et al. [91] trained a DBN to distinguish acceptable from unacceptable ECG segments in order to reduce the false alarm rate caused by poor-quality ECG during AF detection. For validation, eight different levels of ECG quality were produced by contaminating ECG with motion artifacts from the NSTDB.
Another three models were trained using the signals in 1D form. The first model was an FNN with dropout, the second a three-layer 1D CNN, and the third a 2D CNN, the same as the first but trained with a stacked version of the signal (also trained with data augmentation).
Experiments by the authors showed that the three-layer 1D CNN produced better and more stable results. In [101], the authors trained a network with one convolutional layer with dropout, followed by two RNNs, to identify stress using short-term ECG data.
Our predictive model has stochastic latent variables, so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator and directly applies model-free policy learning to acquire the policy. However, we could also use the model for planning. Moreover, since our model is differentiable, the additional information contained in its gradients could be incorporated into the reinforcement learning process. Finally, the representation learned by the predictive model is likely to be more meaningful by itself than the raw pixel observations from the environment. Incorporating this representation into the policy could further accelerate and improve the reinforcement learning process.
The results in these figures are generated by averaging 5 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (as reported in Table 3 of Pohlen et al. (2018)). This suggests that further stabilizing SimPLe should improve its performance, indicating an important direction for future work. In some cases during training, we observed high variance of the results during each step of the loop. There are a number of possible reasons, such as mutual interactions of the policy training and the supervised training, or domain mismatch between the model and the real environment. We present detailed numerical results, including best scores and standard deviations, in Appendix D.
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low-data regime of 100k samples, on more than half of the games our method achieves a score that Rainbow requires at least twice as many samples to reach. In the best case, Freeway, our method is more than 10x more sample-efficient; see Figure 3. Since the publication of the first preprint of this work, it has been shown in van Hasselt et al. (2019); Kielak (2020) that Rainbow can be tuned to achieve better results in the low-data regime. Those results are on a par with SimPLe: both of the model-free methods are better in 13 games, while SimPLe is better in the other 13 of the 26 games tested (note that Section 4.2 of van Hasselt et al. (2019) compares with the results of our first preprint, which were later improved).
The primary evaluation in our experiments studies the sample efficiency of SimPLe in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, and PPO (Schulman et al., 2017), a model-free policy gradient algorithm (see Appendix E for details of the tuning of Rainbow and PPO). The results of the comparison are presented in Figure 3. For each game, we plot the number of time steps needed for either Rainbow or PPO to reach the same score that our method reaches after 100K interaction steps. The red line indicates 100K steps: any bar larger than this indicates a game where the model-free method required more steps. SimPLe outperforms the model-free algorithms in terms of learning speed on nearly all of the games, and in the case of a few games, does so by over an order of magnitude. For some games, it reaches the same performance that our PPO implementation reaches at 10M steps. This indicates that model-based reinforcement learning provides an effective approach to learning Atari games, at a fraction of the sample complexity.
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than those of the best state-of-the-art model-free methods. This gap is generally common with model-based RL algorithms; it can be narrowed with better dynamics models, which suggests an important direction for future work. Another, less obvious limitation is that the performance of our method generally varied substantially between different runs on the same game. The complex interactions between the model, the policy, and data collection were likely responsible for this. In future work, models that capture uncertainty via Bayesian parameter posteriors or ensembles (Kurutach et al., 2018; Chua et al., 2018) may improve robustness.
However, more work needs to be done to fully replace non-trainable S2Is, not only in terms of achieving higher accuracy but also of increasing the interpretability of the model. Another point of reference is that the combined models were trained from scratch, based on the hypothesis that the pretrained low-level features of the ‘base models’ might not be suitable for spectrogram-like images such as those created by S2Is.
For the purposes of this paper, and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems. An important property of an S2I is whether it contains trainable parameters, such as convolutional and linear layers, or is non-trainable, such as traditional time-frequency methods.
Future work could include testing this hypothesis by initializing a ‘base model’ using transfer learning or other initialization methods. Moreover, trainable S2Is and 1D ‘base model’ variations could also be used for other physiological signals besides EEG such as Electrocardiography, Electromyography and Galvanic Skin Response.
This is achieved with the use of multilayer networks, consisting of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly applied to biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for diagnosis and prediction problems.
Hybrid robots typically transition between locomotion modes either by “supervised autonomy” [11], where human operators make the switch decisions, or the autonomous locomotion mode transition approach, where robots autonomously swap the modes predicated on pre-set criteria [8]. However, the execution of supervised control of locomotion mode transition hinges on constant operator-robot interaction, which might not always be feasible or reliable, especially in confined and complex environments typical in search and rescue missions [12]. In such situations, operators might struggle to maintain absolute situational awareness. To address the locomotion mode transition conundrum, various solutions have been proposed. These include adopting specialized mechanical designs [13, 14] and applying pre-programmed solutions [15]. Although these methods have enhanced the autonomy of locomotion mode transitions, universally applicable autonomous solutions remain in the early stages of development. In fact, most locomotion mode transitions in hybrid robots are currently achieved via high-level human operator control. This applies to cutting-edge wheel/track-legged robots, including DRC-HUBO, CHIMP, Momaro, and RoboSimian, depicted in Fig. 1, which were four of the top five robot designs crafted for the DARPA Robotics Challenge [1].
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there is a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and to effectively handle the transitions between them [6]. Second, it is essential to develop decision-making frameworks that determine the best mode (rolling or walking) based on the robot’s environmental interactions and internal states [7, 8]. Regarding the first challenge, the dynamics of rolling locomotion are well understood and are similar to those of traditional wheeled/tracked robots. However, despite extensive research on the walking dynamics of standard legged robots, focused studies on the walking patterns specific to wheel/track-legged robots are limited [9]. Transition control between these locomotion modes for wheel/track-legged robots also requires more exploration [6]. In this study, we focus on the second challenge: developing efficient decision-making algorithms for transitioning between locomotion modes. This remains a largely unexplored area [3], but it is essential for achieving autonomous locomotion transition in hybrid robots. Building upon our prior work, we employ two climbing gaits to ensure smooth walking locomotion for wheel/track-legged robots, particularly when navigating steps [10].
The Cricket robot, as referenced in [20], forms the basis of this study; it is a fully autonomous track-legged quadruped robot. Its design embodies fully autonomous behaviors, and its locomotion system features a distinctive combination of four rotational joints in each leg, as shown in Fig. 3. Moreover, every leg is equipped with a drivable track that wraps around the outermost leg segment. This design enables the robot to steer in a manner reminiscent of traditional tank robots. However, unlike its contemporaries, the Cricket robot can conduct intricate movements, such as navigating uneven terrain, in its walking locomotion mode [21]. The robot’s two primary forms of movement are rolling, which leverages tracks for efficient movement across semi-flat terrains, and walking, which is primarily used for maneuvering across challenging and uneven terrains. In this paper, these modes will be referred to as rolling and walking, respectively. As with many other hybrid robots, the default locomotion mode of the Cricket robot is rolling. This mode is preferred on flat and rigid surfaces due to its efficiency in terms of time and energy consumption. In the rolling locomotion mode, the robot maintains its home configuration, where all joints are positioned at their central positions, as illustrated in Fig. 3.
A major obstacle to achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. Terramechanics methods generally involve performing comprehensive on-site measurements of soil attributes prior to robot deployment [9]. Moreover, terramechanics models that strive to predict robot-terrain interactions often incur substantial computational costs due to their complexity [16]. Therefore, terramechanics methods are unsuitable for direct use in autonomous locomotion mode transition control, particularly in scenarios where robots need to move at high speeds, for example in search and rescue missions. To bypass these limitations, researchers have probed alternative strategies for accomplishing autonomous locomotion transition. For example, certain studies have utilized energy consumption as a metric for evaluating the traversability of different locomotion modes in wheel/track-legged robots [8]. By scrutinizing the energy expenditure of different locomotion modes, researchers can evaluate their efficiency in navigating various terrains. Additionally, other general parameters like stability margin and motion efficiency have been examined in the quest to achieve autonomous locomotion transition [2].
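As an illustration of an energy-based transition criterion of the kind cited above, the following hypothetical decision rule compares estimated per-mode energy costs; the hysteresis margin and all numbers are our own assumptions for illustration, not a published criterion:

```python
# Hypothetical decision rule choosing a locomotion mode by estimated
# energy cost; thresholds and cost values are illustrative.
def choose_mode(energy_rolling, energy_walking, hysteresis=1.1):
    """Prefer rolling (the default mode) unless walking is cheaper by
    a margin; the hysteresis factor avoids rapid mode flapping."""
    if energy_walking * hysteresis < energy_rolling:
        return "walking"
    return "rolling"

print(choose_mode(energy_rolling=50.0, energy_walking=80.0))   # rolling
print(choose_mode(energy_rolling=120.0, energy_walking=80.0))  # walking
```

In practice such a rule would consume online estimates of energy expenditure rather than fixed values, and could be combined with stability-margin and motion-efficiency criteria.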
For paid exchanges at the beginning of the phase, Tog incurs a cost that is less than $m^2$. Before serving the last request $\sigma_\ell$ of the phase, the access cost of Tog is less than $m^3$ by definition, and the access cost to $\sigma_\ell$ is at most $m$. ∎
In an ignoring phase, the cost of Tog for the phase is in the range $(\beta m^3, \beta m^3(1+1/m^2))$ (excluding the last phase).
The worst-case ratio between the costs of Tog and Mtf2 is maximized when the last phase is an ignoring phase. In this case, we have $k$ trusting phases and $k$ ignoring phases. The total cost of Mtf2 is at least $km^3 + k(\beta m^3/2 - m^2) = km^3(1+\beta/2-1/m)$. By Lemma 21, the cost of Tog is at most $km^3(1+\beta+3/m)$. The ratio between the two algorithms will be less than
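The expression truncated above can be completed from the two stated costs; this is a reconstruction under those bounds, not the paper's verbatim formula:

```latex
\frac{\mathrm{cost}(\textsc{Tog})}{\mathrm{cost}(\textsc{Mtf2})}
  \le \frac{km^{3}\,(1+\beta+3/m)}{km^{3}\,(1+\beta/2-1/m)}
  = \frac{1+\beta+3/m}{1+\beta/2-1/m}
  \;\xrightarrow{\;m\to\infty\;}\; \frac{2(1+\beta)}{2+\beta}.
```

The $km^3$ factors cancel, and the $3/m$ and $1/m$ terms vanish for large $m$, leaving a constant depending only on $\beta$.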
For a trusting phase, the cost of Tog is in the range $(m^3, m^3(1+1/m+1/m^2))$
Similar arguments apply for an ignoring phase with the exception that the threshold is $\beta \cdot m^2$ and there are no paid exchanges performed by Tog. So, we can observe the following.
Regarding the support that SS3 provides for early classification, we can say that, even though the rules we used are very simple, they are more effective than the more elaborate and complex mechanisms used in the pilot task. For instance, some mechanisms to stop reading and classify a subject included complex decision mechanisms based on specific rules for different chunks [Villegas et al., 2017]. These rules take into account the decisions of different classifiers, the probability that each classifier assigned to its prediction, “white lists” containing the words with the highest information gain, and other sources of information. Another approach that showed a good performance relied on hand-crafted rules specifically designed for this problem [Trotzek et al., 2017], of the form: “if output $\geq \alpha_n$ and the number of writings $\geq n$, then classify as positive”, “if output $\leq \beta_n$ and the number of writings $\geq n$, then classify as non-depressed”, etc.
Regarding document representations, some research groups used simple features like standard Bag of Words [Trotzek et al., 2017, Villegas et al., 2017, Farıas-Anzaldúa et al., 2017] and bigrams and trigrams [Villegas et al., 2017, Almeida et al., 2017, Farıas-Anzaldúa et al., 2017], while others used more elaborate and domain-specific ones like lexicon-based features (such as emotion words from WordNet, sentiment words from Vader, and preexisting depression-related dictionaries) [Malam et al., 2017, Trotzek et al., 2017, Sadeque et al., 2017, Almeida et al., 2017], LIWC features [Trotzek et al., 2017, Villegas et al., 2017], Part-of-Speech tags [Almeida et al., 2017], statistical features (such as the average number of posts, the average number of words per post, post timestamps, etc.) [Malam et al., 2017, Almeida et al., 2017, Farıas-Anzaldúa et al., 2017], or even hand-crafted features [Trotzek et al., 2017]. Some other groups made use of more sophisticated representations such as Latent Semantic Analysis [Trotzek et al., 2017], Concise Semantic Analysis [Villegas et al., 2017], Doc2Vec [Trotzek et al., 2017], or even graph-based representations [Villatoro-Tello et al., 2017].
Most research groups [Malam et al., 2017, Trotzek et al., 2017, Sadeque et al., 2017, Villatoro-Tello et al., 2017, Villegas et al., 2017, Almeida et al., 2017] applied a simple policy in which, the same way as in [Losada & Crestani, 2016], a subject is classified as depressed when the classifier outputs a value greater than a fixed threshold. Some other groups [Farıas-Anzaldúa et al., 2017] applied no policy at all, and no early classification was performed, i.e. their classifiers made their predictions only after seeing the entire subject’s history (note that this is not a realistic approach; there is usually no such thing as a subject’s “last writing” in real life, since subjects are able to create new writings over time). It is worth mentioning that some groups [Malam et al., 2017, Trotzek et al., 2017, Villegas et al., 2017] added extra conditions to the given policy; for instance, [Trotzek et al., 2017] used a list of manually-crafted rules of the form: “if output $\geq \alpha_n$ and the number of writings $\geq n$, then classify as positive”, “if output $\leq \beta_n$ and the number of writings $\geq n$, then classify as non-depressed”, etc.
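The threshold rules quoted above admit a direct implementation; the following sketch uses a hypothetical (alpha_n, beta_n) schedule, not the published thresholds:

```python
# Hypothetical schedule: number of writings n -> (alpha_n, beta_n).
RULES = {
    10: (0.9, 0.1),
    50: (0.7, 0.3),
}

def decide(output, n_writings):
    """Return 'positive', 'non-depressed', or None (keep reading).
    The most demanding applicable rule (largest n) is used."""
    for n, (alpha, beta) in sorted(RULES.items(), reverse=True):
        if n_writings >= n:
            if output >= alpha:
                return "positive"
            if output <= beta:
                return "non-depressed"
            return None  # thresholds for this n not met; keep reading
    return None

print(decide(0.95, 12))  # positive
print(decide(0.05, 60))  # non-depressed
print(decide(0.50, 60))  # None: undecided, read more writings
```

The schedule lets the thresholds relax as more writings are observed, trading off decision earliness against confidence.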
Regarding classification models, some groups used standard classifiers (such as Multinomial Naive Bayes (MNB), Logistic Regression (LOGREG), Support Vector Machines (SVM), Random Forests, Decision Trees, etc.) [Malam et al., 2017, Trotzek et al., 2017, Sadeque et al., 2017, Villegas et al., 2017, Almeida et al., 2017, Farıas-Anzaldúa et al., 2017], while others made use of more complex methods such as different types of Recurrent Neural Networks [Trotzek et al., 2017, Sadeque et al., 2017], graph-based models [Villatoro-Tello et al., 2017], or even combinations or ensembles of different classifiers [Trotzek et al., 2017, Sadeque et al., 2017, Villegas et al., 2017, Almeida et al., 2017].
It is true that more elaborate methods that simultaneously learn the classification model and the policy to stop reading could have been used, as in [Dulac-Arnold et al., 2011, Yu et al., 2017]. However, for the moment it is clear that this very simple approach is effective enough to outperform the remaining methods, leaving the use of more elaborate approaches for future work.
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1). In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In practice, MSGD often outperforms SGD (Krizhevsky et al., 2012; Sutskever et al., 2013). Many machine learning platforms, such as TensorFlow, PyTorch and MXNet, include MSGD as one of their optimization methods.
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model training.
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-reduce framework.
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and all-reduce frameworks.
GMC can be easily implemented on the all-reduce distributed framework, in which each worker sends the sparsified vector $\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$ to all the other workers; then each worker updates $\mathbf{w}_{t+1}$ after receiving the sparsified vectors from all the other workers.
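A toy sketch of this all-reduce exchange, with top-k magnitude sparsification standing in for the compressor C; the update rule and learning rate are our own assumptions for illustration, not GMC's exact update:

```python
def top_k_sparsify(vec, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    keep = sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k]
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def all_reduce_step(weights, worker_vectors, k, lr=1.0):
    """Each worker sparsifies its local vector and broadcasts it; every
    worker sums the received vectors and applies the same update, so
    all workers end up with identical weights."""
    sparsified = [top_k_sparsify(e, k) for e in worker_vectors]
    total = [sum(col) for col in zip(*sparsified)]
    return [w - lr * g for w, g in zip(weights, total)]

w = [0.0, 0.0, 0.0, 0.0]
workers = [[0.5, -0.1, 0.0, 2.0], [-1.0, 0.2, 0.1, 0.0]]
print(all_reduce_step(w, workers, k=2))  # [0.5, -0.2, 0.0, -2.0]
```

Only the k surviving entries per worker need to be communicated, which is the source of the bandwidth savings.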
Olshausen et al. [43] presented an objective function that considers subjective measures of sparseness of the activation maps; in this work, however, we use the direct measure of compression ratio. Previous work [44] has used a weighted combination of the number of neurons, the percentage root-mean-squared difference, and a correlation coefficient as the optimization metric for an FNN, but without taking into consideration the number of non-zero activations.
The increased number of weights and non-zero activations make DNNs more complex, and thus more difficult to use in problems that require attributing the output to a specific set of neurons. The majority of domains where machine learning is applied, including critical areas such as healthcare [26], require models to be interpretable and explainable before they are considered as a solution.
A limitation of SANs is the use of varying amplitude-only kernels, which are not sufficient for more complex data and also do not fully utilize the compressibility of the data. A possible solution would be using a grid sampler [45] on the kernel allowing it to learn more general transformations (such as scale) than simple amplitude variability.
It is interesting to note that in some cases SANs reconstructions, such as for the Extrema-Pool indices, performed even better than the original data. This suggests the overwhelming presence of redundant information that resides in the raw pixels of the original data and further indicates that SANs extract the most representative features of the data.
The $\varphi$ metric is also related to rate-distortion theory [40], in which the maximum distortion is defined according to human perception, which however inevitably introduces a bias. There is also a relation with the field of Compressed Sensing [41], in which the sparsity of the data is exploited to reconstruct it from fewer samples than the Nyquist-Shannon theorem requires, and with the field of Robust Feature Extraction [42], where robust features are generated with the aim of characterizing the data.
The essence of PBLLA is to select one UAV at random in each iteration and improve its utility by altering power and altitude with a certain probability, determined by the utilities of the two strategies and $\tau$. A UAV prefers to select the power and altitude that provide higher utility. Nevertheless, highly dynamic scenarios will cause UAVs to make mistakes and pick the worse strategy. The index $\tau$ quantifies the dynamics of the situation and the UAV's performance: a small $\tau$ means a less dynamic scenario and fewer mistakes when UAVs make decisions. When $\tau\rightarrow 0$, which corresponds to a static environment, a UAV always selects the power and altitude with higher utility; when $\tau\rightarrow\infty$, where severe dynamics exist, a UAV chooses them at random. However, PBLLA has the limitation that only a single UAV is allowed to alter its strategy in each iteration. We will propose a new algorithm in the next section to overcome this restriction.
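The probabilistic choice described above follows the standard log-linear (Boltzmann) rule, in which the temperature $\tau$ interpolates between greedy and uniformly random selection. The sketch below illustrates this rule for two strategies; the utility values are hypothetical, and the paper's exact form may differ.

```python
import math

# Log-linear choice between the current and a trial strategy: the trial
# strategy is picked with probability exp(u_trial/tau) /
# (exp(u_current/tau) + exp(u_trial/tau)).

def loglinear_prob(u_current, u_trial, tau):
    a = math.exp(u_current / tau)
    b = math.exp(u_trial / tau)
    return b / (a + b)

# Small tau: the better strategy is chosen almost deterministically.
p_small_tau = loglinear_prob(1.0, 2.0, tau=0.05)
# Large tau: the choice is close to uniformly random.
p_large_tau = loglinear_prob(1.0, 2.0, tau=1e6)
```

This matches the limiting behavior in the text: $\tau\rightarrow 0$ yields the higher-utility strategy with probability approaching 1, while $\tau\rightarrow\infty$ yields probability approaching 1/2.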
Since PBLLA allows only a single UAV to alter its strategy in each iteration, this defect causes computation time to grow rapidly in large-scale UAV systems. In a large-scale UAV ad-hoc network with $M$ UAVs, $M^{2}$ message exchanges are needed to coordinate and guarantee that only one UAV changes strategy in each iteration. Such a process not only consumes substantial energy but also prolongs convergence time. Algorithms that improve the learning rate and reduce message exchange are urgently needed. Thus, we propose the Synchronous Payoff-based Binary Log-linear Learning Algorithm (SPBLLA), which permits all UAVs to alter their strategies synchronously and to learn with no message exchange.
Compared with other algorithms, the novel SPBLLA has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely used, LLA, is an ideal method for approaching the NE [9][32]. BLLA, modified from LLA so that strategies are updated in each iteration to converge to the NE, has been employed by [33]. However, only a single agent is allowed to alter its strategy in one iteration; in large-scale scenarios more iterations are required, which makes BLLA inefficient. Clearly, having more UAVs alter strategies in one iteration would be more efficient. To achieve this, the works in [34] and [35] provided a novel synchronous algorithm, but it carries so many restrictions that it is impractical in most scenarios. Compared with these earlier methods, SPBLLA has fewer constraints and achieves synchronous operation, which significantly improves computational efficiency.
Fig. 15 presents the learning rates of PBLLA and SPBLLA when $\tau=0.01$. As $m$ increases, the learning rate of SPBLLA decreases, as shown in Fig. 15. However, when $m$ is small, SPBLLA's learning rate is about 3 times that of PBLLA, showing the great advantage of synchronous learning. The same phenomenon also appears for $\tau=0.015$ and $\tau=0.02$, as shown in Fig. 15. Since PBLLA permits only a single UAV to alter its strategy in one iteration, SPBLLA's synchronous learning rate is much larger than PBLLA's. Moreover, in a large-scale UAV network with high dynamics, PBLLA needs information exchange to decide the update order, which severely prolongs the learning time; PBLLA's learning time might be four times as long as SPBLLA's. Thus we conclude that under the same conditions (the same $\tau$ and other indexes), SPBLLA performs better and is more suitable for large-scale, highly dynamic environments than PBLLA, improving the learning rate several times over. With a larger strategy-altering probability, SPBLLA is even more powerful.
The learning rate of the extant algorithm is also not desirable [13]. Recently, a fast algorithm called the binary log-linear learning algorithm (BLLA) was proposed by [14]. However, in this algorithm only one UAV is allowed to change strategy in each iteration based on the current game state; another UAV then changes strategy in the next iteration based on the new game state. This means UAVs are not permitted to update strategies at the same time. Besides, the coordination needed to determine which UAV updates its strategy occupies a good deal of channel capacity and requires extra time between two iterations [15]. If the algorithm could learn synchronously, more than one UAV could update strategies based on the current game state in one iteration, making it more efficient. To sum up, synchronous update algorithms that can learn from previous experience are desirable, but little research has investigated them.
\[
= \sum_{e_j} B^{e}\,\frac{s^{e}}{3}
\]
\[
\overline{U}_{r}^{\prime} = \overline{\overline{Dr}} * \overline{U}
\]
\[
\widehat{U}_{r}^{\prime} = \overline{\widehat{Dr}} * \overline{U}
\]
\[
= \overline{\overline{S}}^{-1} * \left( \overline{\widehat{M}}^{T} * \widehat{\widehat{S}} * \overline{\widehat{Dr}} \right)
\]
\[
\overline{U}_{r}^{\prime} = \left( \overline{\overline{S}}^{-1} * \left( \overline{\widehat{M}}^{T} * \widehat{\widehat{S}} * \overline{\widehat{Dr}} \right) \right) * \overline{U}
\]
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A}, x_{A}) = 1_{A}$, in order to get a semantics of comparability closer to equality. Even more, it is possible to make the functions reflexive on all values but null, where some freedom is allowed.
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$, any value $y_{A} \geq_{A} x_{A}$ must be set to $1$, since it is closer to
\[
f_{A}(u,v) = f_{B}(u,v) =
\begin{cases}
1 & \text{if } u = v \neq \texttt{null}\\
a & \text{if } u \neq \texttt{null},\ v \neq \texttt{null} \text{ and } u \neq v\\
b & \text{if } u = v = \texttt{null}\\
0 & \text{otherwise.}
\end{cases}
\]
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
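Assuming Python's `None` plays the role of null and the literals `"a"` and `"b"` stand for the intermediate truth values, the case-defined comparability function $f_A = f_B$ can be transcribed directly:

```python
# Direct transcription of the case-defined comparability function, with
# None as null and "a"/"b" as the intermediate truth values.

def comparability(u, v, a="a", b="b"):
    if u is not None and u == v:
        return 1          # equal, non-null values
    if u is not None and v is not None:
        return a          # distinct non-null values
    if u is None and v is None:
        return b          # both null: possibly comparable
    return 0              # exactly one value is null
```

Note that reflexivity holds on every non-null value (`comparability(x, x) == 1`), while the null/null case is left to the intermediate value `b`, matching the relaxation discussed above.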
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to assess the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment, because in such an environment the optimal value function can be computed exactly.
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow to solve the task in the environment [1][2][3]. Most Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy, in the form of a sequence of actions the agent can follow to solve the task. Choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning, and for that reason many algorithms have been introduced over the course of time, e.g., Q-learning [4], SARSA [5], and policy gradient methods [6]. These methods use linear function approximation techniques to estimate action values, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. The recent advances in deep neural networks helped to develop an artificial agent named the deep Q-network (DQN) [8], which can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and the huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as the “overestimation phenomenon,” was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values.
They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance.
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control environment. The game of CARTPOLE was selected due to its widespread use and the ease with which DQN can achieve a steady-state policy.
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely varying predictions along the learning trajectory across episodes, because of unseen state transitions and the finite size of the experience replay buffer. This type of variance leads to convergence to sub-optimal policies and severely hurts DQN performance. The second source of variance, Target Approximation Error, is the error coming from the inexact minimization of the DQN parameters. Many of the proposed extensions focus on minimizing the variance that comes from AGE, by finding methods to optimize the learning trajectory, or from TAE, by using methods such as averaging to obtain more exact DQN parameters. Dropout methods can combine these two solutions, which minimize different sources of variance: they can achieve a consistent learning trajectory and more exact DQN parameters through the averaging that comes inherently with Dropout.
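The variance-reduction effect of averaging, which Dropout provides inherently by averaging over random masks, can be illustrated with a toy experiment; the noise model below is an assumption for illustration, not the paper's setup.

```python
import random

random.seed(0)

# Toy stand-in for a noisy value estimate: true value 1.0 plus Gaussian noise,
# analogous to one forward pass under a single random dropout mask.
def noisy_estimate():
    return 1.0 + random.gauss(0.0, 0.5)

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

# Single-mask estimates vs. estimates averaged over 10 independent masks.
single = [noisy_estimate() for _ in range(2000)]
averaged = [sum(noisy_estimate() for _ in range(10)) / 10 for _ in range(2000)]
```

Averaging $k$ independent estimates shrinks the variance by roughly a factor of $k$, which is the mechanism the text credits for stabler target values.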
As one of the first high impact CNN-based segmentation models, Long et al. (2015) proposed fully convolutional networks for pixel-wise labeling. They proposed up-sampling (deconvolving) the output activation maps from which the pixel-wise output can be calculated. The overall architecture of the network is visualized in Figure 3.
Several modified versions (e.g. deeper/shallower, adding extra attention blocks) of encoder-decoder networks have been applied to semantic segmentation (Amirul Islam et al., 2017; Fu et al., 2019b; Lin et al., 2017a; Peng et al., 2017; Pohlen et al., 2017; Wojna et al., 2017; Zhang et al., 2018d). Recently in 2018, DeepLabV3+ (Chen et al., 2018b) outperformed many state-of-the-art segmentation networks on the PASCAL VOC 2012 (Everingham et al., 2015) and Cityscapes (Cordts et al., 2016) datasets. Zhao et al. (2017b) modified the feature fusing operation proposed by Long et al. (2015) using a spatial pyramid pooling module. Both spatial pyramid pooling modules and encoder-decoder structures (Figure 10) are used in deep neural networks for semantic segmentation tasks: spatial pyramid networks encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while encoder-decoder networks capture sharper object boundaries by gradually recovering the spatial information.
In order to preserve the contextual spatial information within an image as the filtered input data progresses deeper into the network, Long et al. (2015) proposed to fuse the output with shallower layers’ output. The fusion step is visualized in Figure 4.
Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image based framework to transform an input image with object of interest (presence domain) like a tumor to an image without the tumor (absence domain) i.e. translate diseased image to healthy; next, their model learns to add the removed tumor to the new healthy image. This results in capturing detailed structure from the object, which improves the segmentation of the object. Zhou et al. (2018) proposed a rewiring method for the long skip connections used in U-Net and tested their method on nodule segmentation in the low-dose CT scans of the chest, nuclei segmentation in the microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos.
Interestingly, the Dense architecture achieves the best performance on MUTAG, indicating that in this case the connectivity of the graphs does not carry useful information for the classification task. The performance of the Flat baseline indicates that in Enzymes and COLLAB pooling operations are not necessary to improve the classification accuracy.
Contrary to graph classification, DiffPool and Top-$K$ fail to solve this task and achieve an accuracy comparable to random guessing. In contrast, the topological pooling methods obtain an accuracy close to that of a classical CNN, with NDP significantly outperforming the other two techniques.
When compared to other methods for graph pooling, NDP performs significantly better than other techniques that pre-compute the topology of the coarsened graphs, while it achieves a comparable performance with respect to state-of-the-art feature-based pooling methods.
In Fig. 7, we report the training time for the five different pooling methods. As expected, GNNs configured with GRACLUS, NMF, and NDP are much faster to train than those based on DiffPool and Top-$K$, with NDP being slightly faster than the other two topological methods.
Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
where $w^{D}\in\mathbb{R}^{n_{T}}$. This optimization finds a weighting of the number of decision trees so that the generated confidences cover the full range equally. For that, the number of samples per bin $h_{i}^{j}$ is summed up, weighted over all numbers of decision trees. After determining $w^{D}$, the number of decision trees can be sampled according to $w_{j}^{D}$.
The proposed method generates data from a random forest and trains a neural network that imitates the random forest. The goal is that the neural network approximates the same function as the random forest. This also implies that the network reaches the same accuracy if successful.
Our proposed approach, called Neural Random Forest Imitation (NRFI), implicitly transforms random forests into neural networks. The main concept includes (1) generating training data from decision trees and random forests, (2) adding strategies for reducing conflicts and increasing the variety of the generated examples, and (3) training a neural network that imitates the random forest by learning the decision boundaries.
Finally, a neural network that imitates the random forest is trained. The network learns the decision boundaries from the generated data and approximates the same function as the random forest. The network architecture is based on a fully connected network with one or multiple hidden layers.
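A heavily simplified sketch of the data-generation idea: walk a toy decision tree along a random root-to-leaf path, accumulate the split constraints, and sample a feature vector inside the resulting region, labeled with the leaf's class. The tree encoding, feature ranges, and the epsilon handling of strict inequalities are all illustrative assumptions, not NRFI's actual implementation.

```python
import random

random.seed(1)

# Each internal node is (feature_index, threshold, left_subtree, right_subtree);
# each leaf is ("leaf", class_label). Features are assumed to lie in [0, 1].
TREE = (0, 0.5,
        ("leaf", 0),
        (1, 0.3, ("leaf", 0), ("leaf", 1)))

def sample_from_tree(node, n_features=2, eps=1e-9):
    """Pick a random root-to-leaf path, then sample a point in its region."""
    lo = [0.0] * n_features
    hi = [1.0] * n_features
    while node[0] != "leaf":
        feat, thr, left, right = node
        if random.random() < 0.5:             # go left: x[feat] <= thr
            hi[feat] = min(hi[feat], thr)
            node = left
        else:                                  # go right: x[feat] > thr (strict)
            lo[feat] = max(lo[feat], thr + eps)
            node = right
    x = [random.uniform(l, h) for l, h in zip(lo, hi)]
    return x, node[1]

x, y = sample_from_tree(TREE)   # a labeled example the imitation network trains on
```

By construction, classifying the sampled point with the same tree reproduces the generated label, so the generated set is consistent with the tree's decision boundaries.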
NRFI with and without the original data is shown for different network architectures. The smallest architecture has 2 neurons in both hidden layers and the largest 128. For NRFI (gen-ori), we can see that a network with 16 neurons in both hidden layers (NN-16-16) is already sufficient to learn the decision boundaries of the random forest and achieve the same accuracy. When fewer training samples are available, NN-8-8 already has the required capacity. In the following, we will further analyze the accuracy and number of network parameters.
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy optimization remain rather limited from both computational and statistical perspectives. More specifically, from the computational perspective, it remains unclear until recently whether policy optimization converges to the globally optimal policy in a finite number of iterations, even given infinite data. Meanwhile, from the statistical perspective, it still remains unclear how to attain the globally optimal policy with a finite regret or sample complexity.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a $\sqrt{d^{2}H^{3}T}$-regret up to logarithmic factors.
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), where the reward function is fixed across all the episodes.
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting.
Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
In experiments, we demonstrated on two benchmark data sets the difficulty of finding a good trade-off among prediction quality, representational efficiency and computational efficiency. Considering three embedded hardware platforms, we showed that massive parallelism is required for inference efficiency and that quantization as well as structured pruning map well onto these accelerators.
We furthermore point out that hardware properties and the corresponding computational efficiency form a large fraction of resource efficiency. This highlights the need to consider particular hardware targets when searching for resource-efficient machine learning models.
The computational cost of performing inference should match the (usually limited) resources in deployed systems and exploit the available hardware optimally in terms of time and energy. Computational efficiency, in particular, also includes mapping the representational efficiency to available hardware structures.
In this regard, resource-efficient neural networks for embedded systems are concerned with the trade-off between prediction quality and resource efficiency (i.e., representational efficiency and computational efficiency). This is highlighted in Figure 1. Note that this requires observing overall constraints such as prediction quality as well as inference latency and/or throughput, chip area and power consumption.
In Section 7, we prove a number of results concerning the homotopy types of Vietoris-Rips filtrations of spheres and complex projective spaces. We also fully compute the homotopy types of the Vietoris-Rips filtration of spheres with the $\ell^{\infty}$-norm.
In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence barcode by the hyperbolicity of the underlying space.
The simplicial complex nowadays referred to as the Vietoris-Rips complex was originally introduced by Leopold Vietoris in the early 1900s in order to build a homology theory for metric spaces [79]. Later, Eliyahu Rips and Mikhail Gromov [47] both utilized the Vietoris-Rips complex in their study of hyperbolic groups.
Of central interest in topological data analysis has been the question of providing a complete characterization of the Vietoris-Rips persistence barcodes of spheres of different dimensions. Despite the existence of a complete answer for the case of $\mathbb{S}^1$ [4] due to Adams and Adamaszek, relatively little is known for higher-dimensional spheres. In [5] the authors consider a variant of the Vietoris-Rips filtration, which they call the Vietoris-Rips metric thickening. The authors are able to obtain information about the successive homotopy types of this filtration on spheres of different dimensions (see Section 5 of [5]) for a certain range of values of the scale parameter.
One way to obtain an indication of a projection’s quality is to compute a single scalar value, equivalent to a final score. Examples are Normalized Stress [7], Trustworthiness and Continuity [24], and Distance Consistency (DSC) [25]. More recently, ClustMe [26] was proposed as a perception-based measure that ranks scatterplots based on cluster-related patterns. While this might be useful for quick overviews or automatic selection of projections, a single score fails to capture more intricate details, such as where and why a projection is good or bad [27]. In contrast, local measures such as the projection precision score (pps) [18] describe the quality for each individual point of the projection, which can then be visualized as an extra layer on top of the scatterplot itself. These measures usually focus on the preservation of neighborhoods [28, 29, 30] or distances [27, 31, 32].
We present a Neighborhood Preservation plot (Figure 1(g)) that shows an overview of the preservation of neighborhoods of different sizes ($k$) in both the entire projection and the current selection, based on the Jaccard distance between the two sets of neighbors of each point.
We present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light some of the hidden internal workings of the algorithm which, when visualized, may provide important insights about the high-dimensional data set under analysis. Our proposed solution is composed of a set of coordinated views that work together in order to fulfill four main goals: (G1) facilitate the choice of hyper-parameters through visual exploration and the use of quality metrics; (G2) provide a quick overview of the accuracy of the projection, to support the decision of either moving forward with the analysis or repeating the process of hyper-parameter exploration; (G3) provide the means to investigate quality further, differentiating between the trustworthiness of different regions of the projection; and (G4) allow the interpretation of different visible patterns of the projection in terms of the original data set's dimensions.
The difference line plot (d), on the other hand, builds on the standard plot by highlighting the differences between the selection and the global average, shown as positive and negative values around the 0 value of the y-axis. It provides a clearer overall picture of the difference in preservation among all the shown scales, but compromises the precision and simplicity of interpretation of the y-axis (where the exact percentage of Neighborhood Preservation was previously shown). The difference bar chart (b) is a combination of the designs (a) and (d). Similar to (d), the interpretation of the y-values might be misleading.
As an example, the set difference from Martins et al. [33] uses the Jaccard set-distance between the two sets of neighbors of a point in low- and high-dimensional space in order to compute a measure of Neighborhood Preservation. We have chosen to adopt it in our work, in contrast to others, because of its intuitive interpretation, simple computation, and straightforward adaptation for displaying the preservation of neighborhoods of different scales.
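This Jaccard-based measure can be sketched in a few lines of plain Python (our own illustrative code, not the authors'; `high_nbrs[i]` and `low_nbrs[i]` are assumed to hold point `i`'s $k$ nearest neighbors in the high- and low-dimensional space, respectively):

```python
def jaccard_distance(a, b):
    """Jaccard set-distance: 1 - |a intersect b| / |a union b|."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def neighborhood_preservation(high_nbrs, low_nbrs):
    """Mean preservation over all points: 1 - Jaccard distance between each
    point's neighbor set in high- and low-dimensional space."""
    scores = [1.0 - jaccard_distance(h, l) for h, l in zip(high_nbrs, low_nbrs)]
    return sum(scores) / len(scores)
```

Because the same computation applies to any neighborhood size, sweeping $k$ yields the multi-scale overview shown in the Neighborhood Preservation plot.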
Similarity in metaheuristics: A gentle step towards a comparison methodology - 2022 [27]: This paper uses a pool template, inspired by a previous work, as a framework for decomposing and analyzing metaheuristics. The template decomposes a metaheuristic into the following components: generation method, pool of solutions, archive of solutions, selected pool of solutions, updating mechanism, updated pool, and the archiving and output functions. The authors provide measures and methodologies to identify similarities and novelties based on the updating-mechanism component, similar to our second taxonomy. They review 15 metaheuristics, and their insights confirm that many metaheuristics are special cases of others.
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The review encompasses constructive methods (GRASP and ACO), local search (iterated local search, tabu search, variable neighborhood search), and population-based heuristics (memetic algorithms, biased random-key genetic algorithms, scatter search, and path relinking); for each category, it presents the core characteristics and a description of the mentioned algorithms. The overview covers the metaheuristic frameworks that have guided the design of heuristic optimization algorithms over the last 50 years and discusses the role of the journal in which it is published in introducing solid heuristic papers. It also notes the maturity of the field, which now tackles very complex problems and attracts a growing number of researchers, as shown by the numerous conferences and related events. At the same time, the authors criticize the fragmentation of the field, where each research group tends to apply the same methods regardless of the type of problem being solved, as well as the lack of theoretical foundations, the limited analytical understanding of novel proposals, the problem-specific tuning of metaheuristics, the lack of standardized benchmarking protocols, and the absence of general guidelines. Several research directions are also suggested for future work.
Good practices for designing metaheuristics: This category gathers several works providing guidelines for good practice: measuring novelty [26], measuring similarity between metaheuristics [27], Metaheuristics “In the Large” (supporting the development, analysis, and comparison of new approaches) [28], designing new metaheuristics manually or automatically [29], guiding the learning strategy in the design and improvement of metaheuristics [30], using statistical tests in metaheuristics [31], and detecting novelty in metaphor-based algorithms [32].
Metaheuristics “In the Large” - 2022 [28]: The objective of this work is to provide a useful tool for researchers. To address the lack of novelty, the authors propose a new infrastructure to support the development, analysis, and comparison of new approaches. This framework is based on (1) the use of algorithm templates for reuse without modification, (2) white box problem descriptions that provide generic support for the injection of domain-specific knowledge, and (3) remotely accessible frameworks, components, and problems. This can be considered as a step towards the improvement of the reproducibility of results.
The constant evolution of the field leads to a significant issue: the lack of novelty in metaheuristics. However, researchers recognize the need to address this problem and have proposed methods to evaluate the novelty of new algorithms. This section shows different studies and guidelines to measure novelty, to design new metaheuristics, and to perform statistical tests between metaheuristics. We list these approaches as follows:
Figure 1: Framework of AdaGAE. $k_0$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is used to train a GAE designed for weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity $k$. With the new graph, we re-train the GAE. These steps are repeated until convergence.
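The alternation described in this caption can be sketched in plain Python (a minimal sketch under our own assumptions: `train_gae` is a hypothetical stand-in for training the weighted-graph GAE, and a simple $k$-NN graph stands in for the generative model of Eq. (7)):

```python
def knn_graph(points, k):
    """Binary k-nearest-neighbor adjacency (sparsity k) from raw coordinates."""
    n = len(points)
    A = [[0.0] * n for _ in range(n)]
    for i, p in enumerate(points):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        for _, j in dists[:k]:
            A[i][j] = 1.0
    return A

def adagae_loop(points, k0, k_step, rounds, train_gae):
    # train_gae(A, emb) -> new embedding: hypothetical stand-in for GAE training.
    k, emb = k0, points
    for _ in range(rounds):
        A = knn_graph(emb, k)    # step 1: sparse graph at current sparsity
        emb = train_gae(A, emb)  # step 2: (re-)train the GAE on this graph
        k += k_step              # step 3: enlarge sparsity for the next round
    return emb, A
```

The key point of the design is that the graph is rebuilt from the *learned embedding* each round, with gradually increasing sparsity, rather than fixed once from the raw data.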
Alongside the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is another representative kind of clustering method. Graph-based clustering methods can capture manifold information and therefore handle non-Euclidean data, which $k$-means cannot, so they are widely used in practice. Due to the success of deep learning, how to combine neural networks with traditional clustering models has been studied extensively [7, 8, 9]. In particular, CNN-based clustering models have been investigated at length [10, 11, 12]. However, the convolution operation may be unavailable for other kinds of data arising in data mining, e.g., text, social networks, and signals.
(1) By extending generative graph models to general data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for decoders. (2) As we utilize GAE to exploit high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze this degeneration theoretically and experimentally to understand the phenomenon, and we further propose a simple but effective strategy to avoid it.
In recent years, GCNs have been studied extensively to extend neural networks to graph-structured data. How to design a graph convolution operator is a key issue and has attracted considerable attention. Most approaches fall into two categories: spectral methods [24] and spatial methods [25].
However, existing methods are limited to graph-structured data, whereas no graph is provided in general data clustering. Since a large proportion of clustering methods are graph-based, it is reasonable to consider how GCNs could promote the performance of graph-based clustering. In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend the graph auto-encoder to common scenarios. The main contributions are listed as follows:
We also want to understand the types of networks that we could test via domain-wide scans. To derive the business types we use PeeringDB. We classify the ASes according to the following business types: content, enterprise, Network Service Provider (NSP), Cable/DSL/ISP, non-profit, educational/research, and route server at an Internet Exchange Point (IXP). (A route server directs traffic among Border Gateway Protocol (BGP) routers.) We plot the networks that do not enforce ingress filtering according to business type in Figure 12. According to our study, enterprise and non-profit networks enforce ingress filtering more than other networks. In contrast, NSPs contain the most networks that do not enforce ingress filtering.
There is a strong correlation between the AS size and the enforcement of ingress filtering, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger the network, the more services it hosts. This means that we have more possibilities to test whether spoofing is possible: for instance, we can identify a higher fraction of servers with globally incremental IPID counters that are not load-balanced. In Figure 14 we plot the statistics of the tested networks according to their size and type. The results show a correlation between the size of a network and its type. For instance, most NSP networks are large, with CIDR /6 prefixes. This is aligned with our finding that NSP networks contained the highest number of spoofable networks.
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap to scan ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that more than 80% of the tested ASes do not enforce ingress filtering (i.e., 72.4% of all the ASes in the routing system), in contrast to 2.4% identified by the latest measurement of the Spoofer Project (Luckie et al., 2019). The reason for this significant difference is that previous studies of ingress filtering were limited to a small set of networks.
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of the ASes in the Internet, see Figure 1. Furthermore, there is a correlation between fraction of scanned domains and ASes. Essentially the more domains are scanned, the more ASes are covered, and more spoofable ASes are discovered; see Figure 7. This result is of independent interest as it implies that one can avoid scanning the IPv4 and instead opt for domains-scan, obtaining a good enough approximation. This not only reduces the volume of traffic needed to carry out studies but also makes the study much more efficient.
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping, and requests/responses to name servers, and we apply the suitable test depending on the server that we identify on the tested network. If the responses contain globally incremental IPID values, we use the service for ingress-filtering measurement with the IPID technique. We located globally incremental IPIDs in 63.27% of the measured networks. There are certainly more hosts on networks that support globally incremental IPID values, yet our goal was to validate our measurement techniques while keeping the measurement traffic low; hence we avoided scanning the networks for additional hosts and only checked for web, email, or name servers with globally incremental IPID counters via queries to the tested domain.
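The per-server decision can be illustrated with a small heuristic check on the IPID samples collected by the two probing hosts (a sketch under our own assumptions; the `max_gap` threshold is illustrative and not a parameter from the paper):

```python
def is_globally_incremental(ipids, max_gap=10):
    """Heuristic: do successive IPID samples (interleaved from both probing
    hosts, in send order) increase by a small step, allowing for the 16-bit
    counter wrapping around at 65536?"""
    for prev, cur in zip(ipids, ipids[1:]):
        diff = (cur - prev) % 65536  # wraparound-safe difference
        if not (0 < diff <= max_gap):
            return False  # per-host, random, or load-balanced counters
    return True
```

A server whose interleaved samples pass this check is using one shared counter for all destinations, which is what makes it usable as a side channel for the ingress-filtering measurement.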
Machine learning applications frequently deal with data-generating processes that change over time. Applications in such nonstationary environments include power-use forecasting, recommendation systems, and environmental sensors [9]. Semisupervised learning, which has received a lot of attention in the sensor community, is characterised by the combined use of easily attainable unlabeled data in addition to the initial labeled dataset [10, 11, 12]. Extreme learning machines are also frequently deployed in these settings to efficiently reconfigure neural networks based on the new data [13, 14, 15]. Within the standard backpropagation framework, ensembles have been used successfully in this setting; they therefore serve as the comparison baseline in this paper [7].
Biology frequently deals with drift [16]. For instance, olfactory systems are constantly adapting, predominantly through feedback mechanisms. This section details some such models from computer science and neuroscience [17]. One example is the KIII model, a dynamic network resembling the olfactory bulb, with feedforward and feedback connections to and from the higher-level anterior olfactory nucleus and piriform cortex [18]. Applied to an odor recognition task, KIII performed better than an artificial neural network under sensor drift and variable concentrations, a setting similar to the one in this paper.
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal to deploy an artificial nose in a dynamic environment without recalibration.
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this paper introduced an approach based on continual adaptation. A recurrent neural network uses a sequence of previously seen gas recordings to form a representation of the current state of the sensors. It then modulates the skill of odor recognition with this context, allowing the system to adapt to sensor drift. Context models can thus play a useful role in lifelong adaptation to changing environments in artificial systems.
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regions than from the nose [20]. In computational modeling, this principle has been taken into account by the piriform cortical region that recognizes familiar background odors through associative memory [21]. It projects this information to the olfactory bulb to improve odor recognition when there are background odors. Following this same principle, the neural network classifier in this paper integrates context that is outside the immediate input signal.
The goal would be to obtain an algorithm with running time $2^{O(f(\delta)\sqrt{n})}$, where $f(n)=O(n^{1/6})$. Such a running time becomes $2^{O(\sqrt{n})}$ for constant $\delta$ (which is optimal for TSP in $\mathbb{R}^2$, under ETH), and it becomes $2^{O(n^{2/3})}$ for $\delta=n$ (which is optimal for TSP in $\mathbb{R}^3$, assuming ETH).
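For concreteness, both regimes follow by substituting $f(\delta)=O(\delta^{1/6})$ into the exponent:

```latex
f(\delta)\sqrt{n} =
\begin{cases}
O(1)\cdot n^{1/2} = O(\sqrt{n}), & \text{for } \delta = O(1),\\[2pt]
O(n^{1/6})\cdot n^{1/2} = O(n^{2/3}), & \text{for } \delta = n.
\end{cases}
```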
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets in which the $x$-coordinates of the points need not be integer, as long as the difference between the $x$-coordinates of any two consecutive points is at least 1.
First of all, the $\Delta_i$ are now independent. Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this way.
We believe that our algorithm can serve as the basis of an algorithm solving such a problem, under the assumption that the point sets are dense enough to ensure that the solution will generally follow these curves / segments. Making this precise, and investigating how the running time depends on the number of line segments, would be interesting.
In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings. In the third step, we will explain which changes are made to the algorithm.
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the element on the (full) subtree rooted at the node is the same as that of a (possibly different) element on the entire tree (i.e., at the root). The idea for the name here is that the action on a full subtree is similar to the action of the group or semigroup on the entire tree. An important special case of such a self-similar presentation occurs when there is a finite set of generators such that the action of any generator on the subtree below any node is the same as the action of some (potentially different) generator at the root. By identifying the nodes of the infinite regular tree with the strings over an appropriate finite alphabet, we can describe such an action using a finite automaton (more precisely, a finite-state letter-to-letter – or synchronous – transducer), which leads to the class of automaton semigroups and automaton groups (also often called ‘automata groups’). If we relax the finite-state requirement and also consider infinite automata, we can even describe any self-similar action in this way. This is the approach we will take in this paper.
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview), but many of them present the product as a subgroup of an automaton/self-similar group and, thus, lose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton, which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]).
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing the self-similarity property and that the analogous statement for automaton semigroups holds as well. The version for automaton semigroups does not follow directly from 8, as the free monogenic semigroup is not a complete automaton semigroup [4, Proposition 4.3] or even a (partial) automaton semigroup (see [8, Theorem 18] or [20, Theorem 1.2.1.4]).
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much simpler. While it is known that the free semigroup of rank one is not an automaton semigroup [4, Proposition 4.3], the free semigroups of higher rank can be generated by an automaton [4, Proposition 4.1]. In fact, the construction to generate these semigroups is quite simple [4, Proposition 4.1] (compare also to 3). The same construction can also be used to generate free monoids as automaton semigroups or monoids. Here, the main difference is that the free monoid in one generator can indeed be generated by an automaton: it is generated by the adding machine (see 1), which also generates the free group of rank one if inverses are added. On a side note, it is also worthwhile to point out that – although there does not seem to be much research on the topic – there are examples to generate the free inverse semigroup of rank one as a subsemigroup of an automaton semigroup [14, Theorem 25] and an adaptation to present the free inverse monoid of rank one as an automaton semigroup [6, Example 2] (see also [8, Example 23]).
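The adding machine mentioned above admits a very compact description as a letter-to-letter transducer over the binary alphabet. The following sketch (our own illustrative encoding, reading words least-significant digit first) shows its action on binary strings: state `a` adds one (propagating a carry while it reads 1s), and state `e` acts as the identity.

```python
# transitions[state][letter] = (output_letter, next_state)
ADDING_MACHINE = {
    "a": {0: (1, "e"), 1: (0, "a")},  # 'a' adds one: 0->1 done, 1->0 carry
    "e": {0: (0, "e"), 1: (1, "e")},  # 'e' copies its input (identity)
}

def act(state, word, machine=ADDING_MACHINE):
    """Run the transducer from `state` on `word`, returning the output word."""
    out = []
    for letter in word:
        letter_out, state = machine[state][letter]
        out.append(letter_out)
    return out

def to_int(word):
    # interpret a binary word, least-significant digit first
    return sum(digit << i for i, digit in enumerate(word))
```

On every level of the binary tree, the state reached after reading a prefix again acts like `a` or `e` at the root, which is exactly the self-similarity property discussed above; iterating `act("a", ...)` realizes the free monoid of rank one.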
from one to the other, then their free product $S \star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (Theorem 6), but observe that the constructed generating automaton for $S \star T$ is finite (and/or complete) if this was the case for the original two automata generating $S$ and $T$. (Note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for $S$ and for $T$ are finite; therefore, these constructions also work for self-similar semigroups, although this is not explicitly stated there.) The existence of a homomorphism from $S$ to $T$ (or vice-versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or being an automaton semigroup; Theorem 14). This is noteworthy because – as mentioned above – the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata, see [8, Theorem 19] and [20, Theorem 1.2.1.4]).
SCR divides the region proposals into influential and non-influential regions and penalizes the model if: 1) $\mathcal{S}(a_{gt})$ of a non-influential region is higher than that of an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers.
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating the performance on VQA-CPv2’s train split (Sec. 4.5) and the behavior on a dataset without changing priors (Sec. 2). We present a new metric to assess visual grounding in Sec. 4.7 and describe our regularization method in Sec. 5.
We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to between 1% and 100% of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance occurring when our loss is applied to 1% of the training instances. These results support our claims that it is possible to improve performance without actually performing visual grounding.
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the performance on VQAv2 drops continuously during the course of the training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020), and it surpasses the aggregate of unique websites represented in all other publicly available web privacy policy corpora combined. We describe the corpus creation pipeline, with stages including a web crawler, language detection, document classification, duplicate and near-duplication removal, and content extraction. We then analyse the lengths and top level distribution of the privacy policies in the corpus and use topic modelling to explore the component topics. Subsequently, we pretrain PrivBERT, a transformer-based language model, using the corpus and evaluate it on data practice classification and question answering tasks. We release the corpus, a search engine for the corpus (Srinath et al., 2021), the document collection pipeline, and a language model to support further research in the privacy domain.111All artifacts are available at https://privaseer.ist.psu.edu/.
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used dataset of annotated privacy policies in the research community. The OPP-115 Corpus contains paragraph-sized segments annotated according to one or more of the twelve coarse-grained categories of data practices. We fine-tuned PrivBERT on the OPP-115 Corpus to predict the coarse-grained categories of data practices. We divided the corpus in the ratio 3:1:1 for training, validation and testing respectively. Since each segment in the corpus could belong to more than one category and there are twelve categories in total, we treated the problem as a multi-class, multi-label classification problem. After manually tuning hyperparameters, we trained the model with a dropout of 0.15 and a learning rate of 2.5e-5.
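The multi-class, multi-label treatment described here amounts to an independent sigmoid and binary cross-entropy term per category, rather than a single softmax over the twelve categories. A minimal sketch of that formulation (our own illustrative code, not the authors' training pipeline):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def multilabel_bce(logits, targets):
    """Mean binary cross-entropy: one independent binary term per category,
    so a segment can carry several data-practice labels at once."""
    total = 0.0
    for z, y in zip(logits, targets):
        p = sigmoid(z)
        total -= y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return total / len(logits)

def predict(logits, threshold=0.5):
    """Assign every label whose sigmoid score clears the threshold."""
    return [int(sigmoid(z) >= threshold) for z in logits]
```

In a real fine-tuning run, `logits` would be the twelve-dimensional output of a classification head on top of PrivBERT, and the loss would be minimized with the stated dropout (0.15) and learning rate (2.5e-5).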
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categories. The corpus was used to train models to extract opt-out choices from privacy policies (Sathyendra et al., 2016), to automatically identify policies on websites and find compliance issues (Story et al., 2019), and to classify privacy practices and answer privacy related non-factoid questions (Harkous et al., 2018).
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague words and sentences in privacy policies and studied automatic vagueness detection. Sathyendra et al. (2017) presented a dataset and developed a model to automatically identify and label opt-out choices offered in privacy policies. Similarly, Zimmeck et al. (2019) released a set of over 400k URLs to Android app privacy policy pages collected by crawling the Google Play store. Amos et al. (2020) collected privacy policies from around 130,000 websites from over two decades and analysed the evolution of the online privacy landscape. Finally, Nokhbeh Zaeem and Barber (2021) collected a corpus of around 100k privacy policies using the domains from DMOZ, a website which maintained categories of websites on the internet.
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert annotated corpora of a few hundred or a few thousand privacy policies Wilson et al. (2016); Zimmeck et al. (2019); Ramanath et al. (2014), but issues of accuracy, scalability and generalization remain. More importantly, annotations in the privacy policy domain are expensive. Privacy policies are difficult to understand and many tasks such as privacy practice classification (Wilson et al., 2016), privacy question answering (Ravichander et al., 2019), vague sentence detection (Lebanoff and Liu, 2018), and detection of compliance issues (Zimmeck et al., 2019) require skilled legal experts to annotate the dataset. In contrast, approaches involving large amounts of unlabeled privacy policies remain relatively unexplored.
Figure 1: Knowledge generation model for ensemble learning with VA derived from the model by Sacha et al. [44]. On the left, it illustrates how a VA system can enable the exploration of the data and the models with the use of visualization. On the right, a number of design goals assist the human in the exploration, verification, and knowledge generation for ensemble learning.
In a bucket of models, the best model for a specific problem is automatically chosen from a set of available options. This strategy is conceptually different from bagging, boosting, and stacking, but still related to ensemble learning. Chen et al. [6] utilize a bucket of latent Dirichlet allocation (LDA) models for combining topics based on criteria such as distinctiveness and coverage of the set of actions performed.
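A minimal sketch of the bucket-of-models strategy (generic validation-based model selection, not the LDA-specific criteria of Chen et al.; all names are illustrative):

```python
def bucket_select(models, score_fn, valid):
    """Pick the single best model from a fixed bucket by validation score.
    Unlike bagging/boosting/stacking, exactly one candidate is kept."""
    best = max(models, key=lambda m: score_fn(m, valid))
    return best, score_fn(best, valid)

def neg_sq_err(model, data):
    # Higher is better: negative mean squared error on held-out pairs.
    return -sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Toy bucket: three candidate slopes for data generated by y = 2x.
valid = [(1, 2), (2, 4), (3, 6)]
candidates = [lambda x: x, lambda x: 2 * x, lambda x: 3 * x]
best, score = bucket_select(candidates, neg_sq_err, valid)
```

The design choice is that the bucket never combines predictions; it only routes the problem to whichever member generalizes best on held-out data.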
The rest of this paper is organized as follows. In the next section, we discuss the literature related to visualization of ensemble learning. Afterwards, we describe the knowledge generation model for ensemble learning with VA, design goals, and analytical tasks for attaching VA to ensemble learning.
Visualization systems have been developed for the exploration of diverse aspects of bagging, boosting, and further strategies such as “bucket of models”. Stacking, however, has so far not received comparable attention by the InfoVis/VA communities: actually, we have not found any literature describing the construction and improvement of stacking ensemble learning with the use of VA.
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))$:
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
$\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}.$
$p(v,[013])=p(v,[313])=p(v,[113])=1$. Similarly, when $f=[112]$,
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors.
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization learned by MAML can be seen as a general language model of training tasks, when the training and testing tasks have different data distributions, how can the general language model training affect the model’s task-specific adaptation ability?
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the meta-testing set before fine-tuning, using the quality performance (accuracy for classification and BLEU for generation) to
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation. Although in the early training stage the performance improves, benefiting from the pre-trained general language model, if the language model becomes too “general”, it will lose the ability to adapt to specific tasks. It is noteworthy that the “too general” problem is not the same as over-fitting, since the “too general” model performs well before fine-tuning, which means it does not over-fit to the training data.
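The initialization-vs-adaptation dynamic discussed above can be illustrated on a toy problem. Below is a first-order MAML sketch on a 1-D linear regression family, where each task is a true slope; all numbers and names are illustrative, not the paper's setup:

```python
import random

def mse_grad(w, slope, xs):
    """d/dw of mean squared error for the model y_hat = w * x
    on a task whose true relation is y = slope * x."""
    return sum(2 * (w - slope) * x * x for x in xs) / len(xs)

def maml_init(tasks, steps=300, alpha=0.1, beta=0.05, seed=0):
    """First-order MAML: meta-learn an initialization w0 that adapts
    to a sampled task after one inner-loop gradient step."""
    rng = random.Random(seed)
    xs = [0.5, 1.0, 1.5]
    w0 = 0.0
    for _ in range(steps):
        slope = rng.choice(tasks)
        w_adapted = w0 - alpha * mse_grad(w0, slope, xs)   # inner loop
        w0 -= beta * mse_grad(w_adapted, slope, xs)        # outer update
    return w0
```

With tasks `[1.0, 3.0]`, the learned initialization settles between the two task slopes: it is "general" across tasks rather than optimal for any single one, and a task-specific inner step is what produces the final adapted model.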
In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not involved. In particular, each UAV is equipped with a cylindrical conformal array (CCA), and a novel-codebook-based mmWave beam tracking scheme is proposed for such a highly dynamic UAV network. More specifically, the codebook consists of the codewords corresponding to various subarray patterns and beam patterns. Based on the joint UAV position-attitude prediction, an efficient codeword selection scheme is further developed with tracking error (TE) awareness, which achieves fast subarray activation/partition and array weighting vector selection. It is verified that our proposed scheme achieves a higher spectrum efficiency, lower outage probability and stronger robustness for inter-UAV mmWave communications. In summary, the key contributions of this paper are listed as follows.
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV data transmission for mission-driven UAV networking. To the best of our knowledge, this is the first work on the beam tracking framework for CA-enabled UAV mmWave networks.
The specialized codebook design of the DRE-covered CCA for multi-UAV mobile mmWave communications. Under the guidance of the proposed framework, a novel hierarchical codebook is designed to encompass both the subarray patterns and beam patterns. The newly proposed CA codebook can fully exploit the potentials of the DRE-covered CCA to offer full spatial coverage. Moreover, the corresponding codeword selection scheme is also carefully designed to facilitate fast multi-UAV beam tracking/communication in the considered CA-enabled UAV mmWave network.
When considering UAV communications with UPA or ULA, a UAV is typically modeled as a point in space without considering its size and shape. Actually, the size and shape can be utilized to support more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduced to UAV communications. A CA is usually in a shape of cylindrical or spherical conforming to a predefined surface, e.g., a part of an airplane or UAV, and can reap full spatial coverage with proper array designs. Compared with surface-mounted multiple UPAs, a CA, conforming to the surface of a UAV, can compact the UAV design, reduce the extra drag and fuel consumption, and also facilitate an array of a larger size [16]. Furthermore, directional radiating elements (DREs) are commonly integrated with antenna array to enhance the beamforming ability [16, 17, 18]. In such a case, the coverage capability of CA is far stronger than that of UPA and ULA via proper array designs, due to the exploitation of size and shape. Specifically, a CA can enable the potential to enlarge (roll up) the surface of antenna array. This advantage not only achieves a larger array gain to combat path-loss but also sustains full-spatial transmitting/receiving to facilitate fast beam tracking for mobile UAV mmWave networks [19]. Note that in mission-driven UAV networks, agile and robust beam tracking is very challenging yet critical for inter-UAV mmWave communications [10], because UAV position and attitude may vary very fast. By carefully exploiting the CA’s full spatial transmission/reception property, the stringent constraints on beam tracking for highly dynamic moving UAVs can be relieved considerably. So far, however, the CA-enabled UAV mmWave network is almost untouched in the literature. 
Regarding the mmWave CA, there are only a few recent works on the radiation patterns and beam scanning characteristics [20] and the performance evaluation of CA-based beamforming for static mmWave cellular networks [21]. These works validate the potential advantage of CA in the static mmWave networks, which are not applicable to mobile UAV mmWave networks.
For both static and mobile mmWave networks, codebook design is of vital importance to empower feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include codebook-based beam tracking and channel estimation methods. For example, considering the ULA with omnidirectional radiating elements (REs), hierarchical-codebook-based subarray and antenna deactivating strategies are proposed to achieve efficient beam training for single-user scenarios [12, 24]. Multiuser downlink beam training algorithms for the ULA are proposed with multi-resolution codebook designs for partially-connected [25] and fully-connected [15] hybrid structures, respectively. However, extending the aforementioned works to the CA is not straightforward. The reasons are as follows: when the commonly-adopted DREs are integrated with a CA, the radiation ranges of the DREs are no longer identical; each depends on the DRE's location on the CA, since the DRE-covered array plane is rolled up. A given radiation direction of the CA lies within the radiation range of only a subset of the DREs. This observation indicates that only a part of the DREs, or some specific subarrays, need to be activated with reference to the AOA or angle of departure (AOD) of transceivers. Therefore, dynamic subarray localization and activation are tightly coupled and critical for the efficient utilization of the DRE-covered CA. Note that conventional ULA/UPA-oriented codebook designs mainly focus on beam direction/width control via random-like subarray activation/deactivation without specific subarray localization. In contrast, the codebook design for the DRE-covered CA should emphasize the location of the activated subarray to achieve the promise of full-spatial coverage of the CA in UAV networks. Nevertheless, such work is still missing in the literature.
These points mentioned above motivate us to study a new beam tracking framework with the well-tailored codebook for CA-enabled UAV mmWave networks.
Thus, $\bar{a}|\bar{b}$-regular digraphs with size $\bar{M}$ can be characterized as $\bar{a}|\bar{b}$-biregular graphs with size $\bar{M}|\bar{M}$
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will also be used as the base cases in inductive constructions for the case with arbitrary colors.
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the right – regardless of the matrix. We
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). In particular, we aim to characterize how an overparameterized two-layer neural network and its induced feature representation evolve in TD and Q-learning, especially their rate of convergence and global optimality. A fundamental obstacle, however, is that such an evolving feature representation possibly leads to the divergence of TD and Q-learning. For example, TD converges when the value function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997).
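The fixed-feature linear setting in which TD provably converges can be sketched as follows. This is a generic TD(0) illustration with a hand-chosen feature map, not the paper's algorithm; all names are illustrative:

```python
def td0_linear(phi, episodes, alpha=0.1, gamma=0.9):
    """TD(0) with a linear value function V(s) = w . phi(s) over a
    fixed feature map phi -- the classical setting in which TD
    converges (Tsitsiklis and Van Roy, 1997)."""
    d = len(next(iter(phi.values())))
    w = [0.0] * d
    for episode in episodes:
        for s, r, s_next in episode:
            v = sum(wi * fi for wi, fi in zip(w, phi[s]))
            v_next = 0.0 if s_next is None else sum(
                wi * fi for wi, fi in zip(w, phi[s_next]))
            delta = r + gamma * v_next - v          # TD error
            w = [wi + alpha * delta * fi for wi, fi in zip(w, phi[s])]
    return w

# Toy chain: s0 --(reward 1)--> terminal; V(s0) should approach 1.
phi = {"s0": [1.0]}
w = td0_linear(phi, [[("s0", 1.0, None)]] * 200)
```

The key point the paper builds on is that here `phi` never changes during learning; when the approximator is a neural network, the induced features themselves evolve, which is where divergence can arise.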
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature representation is able to deviate from the initial one and subsequently evolve into the globally optimal one, which corresponds to the global minimizer of the MSPBE. We further extend our analysis to soft Q-learning, which is connected to policy gradient.
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal.
corresponding to $\theta^{(m)}(k)=(\theta_1(k),\ldots,\theta_m(k))\in\mathbb{R}^{D\times m}$. Such a feature representation is used to analyze the TD dynamics $\theta^{(m)}(k)$ in (3.3) in the NTK regime (Cai et al., 2019), which corresponds to setting $\alpha=\sqrt{m}$ in (3.1). Meanwhile, the nonlinear gradient TD dynamics (Bhatnagar et al., 2009) explicitly uses such a feature representation at each iteration to locally linearize the Q-function. Moreover, up to a rescaling, such a feature representation corresponds to the kernel
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear whether the attained solution is globally optimal. On the other hand, when the value function approximator in TD is an overparameterized multi-layer neural network, which is required to be properly scaled, such a feature representation stabilizes at the initial one (Cai et al., 2019), making the explicit local linearization in nonlinear gradient TD unnecessary. Moreover, the implicit local linearization enabled by overparameterization allows TD (and Q-learning) to converge to the globally optimal solution. However, such a required scaling, also known as the neural tangent kernel (NTK) regime (Jacot et al., 2018), effectively constrains the evolution of the induced feature presentation to an infinitesimal neighborhood of the initial one, which is not data-dependent.
Table 4 shows that, even though this is counter-intuitive, element-wise addition (with fewer parameters) empirically results in slightly higher BLEU than the concatenation operation. Furthermore, even though using 2 depth-wise LSTM sub-layers connecting cross- and masked self-attention sub-layers leads to the highest BLEU score, showing the advantage of fully replacing residual connections with depth-wise LSTMs, it also introduces more parameters and increases the decoder depth in terms of sub-layers. For fair comparison, we use the simpler element-wise addition operation in our experiments by default.
Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the newly introduced LSTM unit, which only introduces one LSTM unit per layer, and the parameters of the LSTM can be shared across layers.
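An illustrative sketch of this idea, assuming a single LSTM run over depth with weights shared across layers. This is not the paper's exact implementation; dimensions, the toy sub-layers, and all names are hypothetical:

```python
import numpy as np

def lstm_step(x, h, c, W):
    """One LSTM step; input/forget/output gates and the candidate state
    are computed from the concatenation [x; h] with one weight matrix W."""
    d = h.size
    z = np.concatenate([x, h]) @ W                  # shape (4*d,)
    i = 1.0 / (1.0 + np.exp(-z[0:d]))               # input gate
    f = 1.0 / (1.0 + np.exp(-z[d:2*d]))             # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*d:3*d]))           # output gate
    g = np.tanh(z[3*d:4*d])                         # candidate state
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

def depthwise_lstm_stack(x0, sublayers, W):
    """Connect stacked sub-layer outputs with an LSTM running over depth:
    layer i's output is the LSTM input at step i, and the LSTM hidden
    state (rather than a residual sum) feeds the next layer."""
    h = np.zeros_like(x0)
    c = np.zeros_like(x0)
    x = x0
    for sublayer in sublayers:
        h, c = lstm_step(sublayer(x), h, c, W)      # shared W across depth
        x = h
    return x

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 16))              # d = 4, [x; h] has size 8
out = depthwise_lstm_stack(np.ones(4), [np.tanh, np.tanh], W)
```

Because one `W` serves every depth step, the per-layer parameter cost is what the paper exploits when it shares the gate parameters across stacked layers.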
Table 5 shows that: 1) Sharing parameters for the computation (Equation 6) of the depth-wise LSTM hidden state significantly hampers performance, which is consistent with our conjecture. 2) Sharing parameters for the computation of gates (Equations 2, 3, 4) leads to slightly higher BLEU with fewer parameters introduced than without sharing them (“None” in Table 5). Thus, in the other experiments, we bind parameters for the computation of LSTM gates across stacked layers by default.
In our approach (“with depth-wise LSTM”), we used the 2-layer neural network for the computation of the LSTM hidden state (Equation 6) and shared LSTM parameters across stacked encoder layers and different shared parameters across decoder layers for computing the LSTM gates (Equations 2, 3, 4). Details are provided in our ablation study.
As the number of Transformer layers is pre-specified, the parameters of the depth-wise LSTM can either be shared across layers or be independent. Table 3 documents the importance of the capacity of the module for the hidden state computation, and sharing the module is likely to hurt its capacity. We additionally study to share only parameters for gate computation (Equations 2, 3, 4) and to share all parameters (i.e. parameters for both the computation of gates and of the hidden state). Results are shown in Table 5.
the corresponding Alexandroff topologies: $X\triangleq\langle X,\uptau_{\to},\mathsf{FO}[\upsigma]\rangle$ and for $n\in\mathbb{N}$, let $X_n\triangleq\langle X,\uptau_{\to_n},\mathsf{FO}[\upsigma]\rangle$.
For $A\in\operatorname{Fin}(\upsigma)$ and $n\geq 1$, there exists a structure $\operatorname{Core}^n(A)$ of tree-depth at most $n$ such that
$A\to_n\operatorname{Core}^n(A)$, $\operatorname{Core}^n(A)\to_n A$, and furthermore $A\to_n B$ if and only if $\operatorname{Core}^n(A)\to B$ [33, Definitions 3.6 and 3.10 and Lemma 3.11]. Notice that for
For all $A\in\operatorname{Fin}(\upsigma)$, let $\psi_A^{\mathsf{EFO}}$ be the diagram sentence such that $\llbracket\psi_A^{\mathsf{EFO}}\rrbracket_{\operatorname{Struct}(\upsigma)}$
all $n\geq 1$, if $A\in X$ then $\operatorname{Core}^n(A)\in X$ since $X$ is downwards closed.
Qualitative Comparison: To qualitatively show the performance of different learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and these two schemes in Fig. 8, in which each pixel value of the distortion distribution map represents the distortion level. Since the ordinal distortion estimation pays more attention to the realistic distortion perception and reasonable learning strategy, our scheme achieves results much closer to the ground truth 3D DDM. Due to implicit learning, the distortion parameter estimation generates inferior reconstructed results, such as the under-fitting (left) and over-fitting (right) on the global distribution approximation as shown in Fig. 8.
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left to right.
We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scenes. The indoor and outdoor scenes are shown in Fig. 11, and the people and challenging scenes are shown in Fig. 12. Our approach performs well on all scenes, while the traditional methods [23, 24] show inferior corrected results under the scene that lacks sufficient hand-crafted features, especially in the people and challenging scenes. On the other hand, the learning methods [8, 11, 12] lag behind in the sufficient distortion perception and cannot easily adapt to scenes with strong geometric distortion. For example, the results obtained by Rong [8] show coarse rectified structures, which are induced by the implicit learning of distortion and simple model assumption. Li [11] leveraged the estimated distortion flow to generate the rectified images. However, the accuracy of the pixel-wise reconstruction heavily relies on the performance of scene analysis, leading to some stronger distortion results under complex scenes. Although Liao [12] generated better rectified images than the above learning methods in terms of global distribution; the results display unpleasant blur local appearances due to the used adversarial learning manner. In contrast, our results achieve the best performance on global distribution and local appearance, which benefit from the proposed learning-friendly representation and the effective learning model.
Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left to right.
Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left to right.
Apart from these empirical findings, there have been some theoretical studies on large-batch training. For example, the convergence analyses of LARS have been reported in [34]. The work in [37] analyzed the inconsistency bias in decentralized momentum SGD and proposed DecentLaM for decentralized large-batch training.
We do not use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy, the default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
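A linear decay schedule without warm-up can be sketched as follows (an illustrative helper, not the Transformers framework's exact implementation):

```python
def linear_decay_lr(step, total_steps, base_lr):
    """Linear learning-rate decay from base_lr down to 0 over training,
    with no warm-up phase at the start."""
    return base_lr * max(0.0, 1.0 - step / float(total_steps))
```

The rate starts at `base_lr` on step 0 and reaches 0 at `total_steps`, in contrast to warm-up schedules that first ramp the rate up from 0.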
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the four baselines. Furthermore, it achieves faster convergence rates than LARS for the small and large batch sizes, which is consistent with our convergence analysis for the block-wise update strategy.
Furthermore, researchers in [19] argued that the extrapolation technique is suitable for large-batch training and proposed EXTRAP-SGD. However, experimental implementations of these methods still require additional training tricks, such as warm-up, which may make the results inconsistent with the theory.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
$\mathrm{support}(\mathcal{D})\subseteq 2^{\mathcal{C}}\times\mathbb{R}^{\mathcal{F}}$ and, in the black-box setting, $|\mathcal{D}|$ may be uncountably infinite.
The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the distribution $\mathcal{D}$ is listed explicitly. We use the suffixes BB and Poly to distinguish these settings. For example, 2S-Sup-BB is the previously defined 2S-Sup in the black-box model.
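A minimal sketch of what black-box access means in practice: the only available operation is drawing scenarios from an oracle, from which an empirical distribution can be built (a generic sample-average step; the oracle and its two-scenario distribution below are hypothetical, not from the paper).

```python
import random

def sample_scenarios(oracle, n):
    """Draw n scenarios from a black-box distribution oracle.

    `oracle` is any zero-argument callable returning one scenario
    (here: a frozenset of client points); the empirical frequencies
    stand in for the unknown distribution D.
    """
    counts = {}
    for _ in range(n):
        s = oracle()
        counts[s] = counts.get(s, 0) + 1
    return {s: c / n for s, c in counts.items()}

# toy oracle over two scenarios (an assumption for illustration)
random.seed(0)
def oracle():
    return frozenset({1, 2}) if random.random() < 0.7 else frozenset({3})

empirical = sample_scenarios(oracle, 10_000)
```

With enough samples the empirical frequencies concentrate around the true probabilities, which is the usual first step before running any deterministic subroutine on the sampled scenarios.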
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and is of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unavailable but we have the ability to sample or simulate from the distribution. To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic $k$-center variant, where points arrive independently and each point only needs to get covered with some given probability. 2S-Sup is the natural two-stage counterpart of the well-known Knapsack-Supplier problem, which has a well-known $3$-approximation [14].
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, we convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage stochastic problem. This is similar to a robust supplier problem considered in [3] under the name priority center, and many of the approximation algorithms of [3] can be adapted to our setting.
Stochastic optimization, first introduced in the work of Beale [4] and Dantzig [8], provides a way to model uncertainty in the realization of the input data. In this paper, we give approximation algorithms for a family of problems in stochastic optimization, and more precisely in the $2$-stage recourse model [27]. Our formal problem definitions follow.
Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d. In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments depend on the states of the local optimizers. The random graph sequences in [12]-[15] are i.i.d. with connected and undirected mean graphs. In addition, additive communication noises are considered in [14]-[15].
I. The local cost functions in this paper are not required to be differentiable, and the subgradients only satisfy a linear growth condition. The inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably appears in the recursive inequality of the conditional mean square error. This prevents the nonnegative supermartingale convergence theorem from being applied directly.
such as economic dispatch in power grids ([1]) and traffic flow control in intelligent transportation networks ([2]), etc. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost functions are used in many distributed optimization algorithms. However, it is difficult to obtain accurate (sub)gradients in many practical applications. For example, in distributed statistical machine learning ([3]), the local loss functions are mathematical expectations of random functions, so the local optimizers can only obtain measurements of the (sub)gradients corrupted by random noises. The influence of (sub)gradient measurement noises has been considered for distributed optimization algorithms in [4]-[7].
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows.
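A toy sketch of this algorithm class, assuming a fixed complete graph with uniform doubly stochastic weights instead of the paper's time-varying random digraphs, and only additive subgradient measurement noise (the costs, step sizes, and noise level are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# local costs f_i(x) = |x - c_i|; the minimizer of sum_i f_i is the median of c
c = np.array([0.0, 1.0, 2.0])
x = np.array([5.0, -3.0, 4.0])      # initial local optimizer states
W = np.full((3, 3), 1.0 / 3.0)      # doubly stochastic weights (complete graph)

for k in range(2000):
    subgrad = np.sign(x - c)                         # a subgradient of |x - c_i|
    noisy = subgrad + 0.1 * rng.standard_normal(3)   # noisy measurement
    alpha = 1.0 / (k + 1) ** 0.6                     # diminishing step sizes
    x = W @ x - alpha * noisy                        # consensus + subgradient step

disagreement = float(np.max(x) - np.min(x))
mean_state = float(np.mean(x))
```

The consensus term pulls the local states together while the noisy subgradient term drives their average toward the global optimum (here, the median 1); diminishing step sizes average out the measurement noise.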
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed. In the most of existing works on the distributed convex optimization, it is assumed that the subgradients are bounded if the local cost
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or almost uniquely identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, which can be used to re-identify the record owner when taken together; and (3) Sensitive Attribute (SA), such as salary and disease, which contains the confidential information of individuals. According to the work of Sweeney [31], even with all EI attributes removed, record owners can still be re-identified by matching the combination of QI values.
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups, assigning similar records to the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated so that similar tuples cover for each other at minimal cost. Finally, MuCo generates anonymized microdata by replacing the original QI values with random values drawn according to the random output tables. For instance, for the original table in Figure 1(a), MuCo partitions the records into four groups and calculates random output tables on age as shown in Figure 3. In the random output tables, the rows correspond to the records and the columns correspond to the ranges of age values. Each entry denotes the probability that the record carries the column value in the anonymized table. For example, we can observe that Helen is covered with Daphne and Dean, and her age outputs 28 with probability 0.7129 and 29 with probability 0.2871. MuCo then generates an anonymized table in which the original QI values are replaced by random values drawn according to the random output tables.
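A minimal sketch of the final replacement step, assuming a per-record random output table mapping candidate ages to probabilities (Helen's probabilities follow the example above; the other records' numbers are purely illustrative, not the paper's values):

```python
import random

def anonymize(records, output_tables, rng):
    """Replace each record's QI value by a draw from its random output table.

    `output_tables[name]` maps candidate age values to output probabilities;
    each anonymized value is sampled independently per record.
    """
    out = {}
    for name in records:
        table = output_tables[name]
        values = list(table)
        probs = [table[v] for v in values]
        out[name] = rng.choices(values, weights=probs, k=1)[0]
    return out

records = {"Helen": 28, "Daphne": 29, "Dean": 28}
output_tables = {
    "Helen":  {28: 0.7129, 29: 0.2871},   # from the example in the text
    "Daphne": {28: 0.30, 29: 0.70},       # illustrative assumption
    "Dean":   {28: 0.55, 29: 0.45},       # illustrative assumption
}
rng = random.Random(0)
anon = anonymize(records, output_tables, rng)
```

Because each record outputs a value drawn from overlapping supports, similar records become mutually indistinguishable without coarsening the values into ranges.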
Generalization [8, 26] is one of the most widely used privacy-preserving techniques. It transforms the values on QI attributes into general forms, and the tuples with equally generalized values constitute an equivalence group. In this way, records in the same equivalence group are indistinguishable. $k$-Anonymity [31, 28] ensures that the probability of identity disclosure is at most $1/k$. For instance, Figure 1(b) is a generalized table of Figure 1(a) that complies with 2-anonymity, and the adversary has to acquire at least two different tuples by matching the age value of any person.
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined only by the maximum and minimum QI values in each equivalence group, so the equivalence groups preserve only the ranges of QI values and the number of records. Consequently, the distributions of QI values are hardly maintained, and information utility is reduced significantly. For instance, as shown in Figure 2, the red polyline and the magenta polyline represent the distributions on age in Figure 1(a) and Figure 1(c), respectively. We can observe that the original distribution is barely preserved in the generalized table. On the other hand, the partition into equivalence groups also increases the information loss of the anonymized table because the results of query statements are always the matching equivalence groups rather than the specific matching tuples. For example, if we want to select the tuples whose age values are more than 30 in Figure 1(c), both equivalence groups are returned as results.
Although generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by matching his age without re-identifying his exact record. To prevent such disclosure, many effective principles have been proposed, such as $l$-diversity [23] and $t$-closeness [19]. For example, Figure 1(c) is the generalized version of Figure 1(a) complying with 5-diversity, such that the proportion of each sensitive value inside an equivalence group is no more than $1/5$. Thus, for any individual, the adversary has to obtain at least five different sensitive values by matching the age value.
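The frequency-based reading of $l$-diversity used in this example can be sketched as a simple check (a hypothetical helper for illustration, not code from the cited works):

```python
from collections import Counter

def satisfies_l_diversity(groups, l):
    """Check the frequency-based l-diversity condition used here:
    within every equivalence group, no sensitive value may account for
    more than a 1/l fraction of the records."""
    for group in groups:
        counts = Counter(group)
        if any(c / len(group) > 1.0 / l for c in counts.values()):
            return False
    return True

# toy equivalence groups over the sensitive attribute "disease"
ok_groups = [["flu", "pneumonia", "gastritis", "bronchitis", "dyspepsia"]]
bad_groups = [["pneumonia", "pneumonia"]]   # one value dominates the group
```

The second group fails even 2-diversity, mirroring the "both pneumonia" attack described above.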
In this section, we introduce our practice on three competitive segmentation methods including HTC, SOLOv2 and PointRend. We show step-by-step modifications adopted on PointRend, which achieves better performance and outputs much smoother instance boundaries than other methods.
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRend Kirillov et al. (2020). Most of these detectors focus on overall performance on public datasets like COCO, which contains much smaller instances than 3D-FUTURE, while paying less attention to large-object segmentation. As illustrated in Figure 1, the size distributions of bounding boxes in 3D-FUTURE and COCO indicate that the former contains much larger objects while the latter is dominated by smaller instances. Thus, the prominent methods used on COCO, like MaskRCNN He et al. (2017) and HTC, may generate blurry contours for large instances. Their mask heads output segmentation from a small feature size (e.g., $14\times 14$), which is dramatically insufficient to represent large objects. All of this motivates us to segment large instances in a fine-grained and high-quality manner. SOLOv2 builds an efficient single-shot framework with strong performance and dynamically generates predictions with a much larger mask size (e.g., 1/4 scale of the input size) than HTC. PointRend iteratively renders the output mask over adaptively sampled uncertain points in a coarse-to-fine fashion, which is naturally suitable for generating smooth and fine-grained instance boundaries. In extensive experiments on HTC, SOLOv2 and PointRend, PointRend succeeds in producing finer mask boundaries and significantly outperforms the other methods. Our step-by-step modifications adopted on PointRend finally achieve state-of-the-art performance on the 3D-FUTURE dataset, yielding 79.2 mAP and 77.38 mAP on the validation and test sets respectively.
The final submission is an ensemble of 5 PointRend models with slightly different settings, reaching the 1st place in this competition.
Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except that we extract both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from the default 28 to 70 during inference, we gain another 1.1 mAP at no extra training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as a large backbone and boosts performance by 6 mAP over ResNet50. DCN and More Points Train. We adopt more interpolated points during training, increasing the number of sampled points from 14 to 26 for the coarse prediction head and from 14 to 24 for the fine-grained point head. By further adopting DCN Dai et al. (2017), we reach 71.6 mAP, which already outperforms HTC and SOLOv2 in our offline observation. Large Resolution and P6 Feature. Due to PointRend's lightweight segmentation head and lower memory consumption compared to HTC, the input resolution can be further increased from the range [800, 1000] to [1200, 1400] during multi-scale training. The P6 level of FPN is also added for both the coarse prediction head and the fine-grained point head, which finally yields 74.3 mAP on our split validation set. Other tricks we tried on PointRend give little improvement, including the MaskScoring head, GC Block and DoubleHead Wu et al. (2020). In the following, we refer to the model in the last row (74.3 mAP) of Table 2 as the PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on the validation and testing sets respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP for small, medium and large instances respectively on the validation set.
We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission.
HTC is known as a competitive method on COCO and OpenImages. By enlarging the RoI size of both the box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP over the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with a Res2NetR101 backbone yields 74.58 mAP on the validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
Due to the limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger masks to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (2020) on COCO. In SOLOv2, the unified mask feature branch is dynamically convolved with learned kernels, and the adaptively generated mask for each location benefits from the whole image view instead of cropped region proposals as in HTC. Using ResNeXt101-64x4d equipped with DCN and GC block, SOLOv2 achieves 75.29 mAP on the validation set (see Table 1). It is worth noting that other attempts, including NASFPN, data augmentation and Mask Scoring, bring little improvement in our experiments.
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
We denote by $\varepsilon_i:\{-1,1\}^n\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_i(\delta_1,\dots,\delta_n)=\delta_i$. For a subset $A$ of $[n]:=\{1,\dots,n\}$ we denote $W_A=\prod_{i\in A}\varepsilon_i$, $W_A:\{-1,1\}^n\to\{-1,1\}$. The $W_A$-s are the characters of the Cantor group $\{-1,1\}^n$ (with coordinatewise multiplication) and form an orthonormal basis in $L_2$ of the Cantor group equipped with the normalized counting measure. In this note we shall be concerned with functions from $\{-1,1\}^n$ into the complex plane, $\mathbb{C}$. These can also be considered as couples of real functions.
Each such function $f:\{-1,1\}^n\to\mathbb{C}$ has a unique expansion $f=\sum_{A\subseteq[n]}\hat{f}(A)W_A$.
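A numerical sketch of this Fourier–Walsh expansion on $\{-1,1\}^3$: the code below computes the coefficients of a complex-valued function and checks Parseval's identity, which is exactly the orthonormality of the $W_A$ under the normalized counting measure (the particular function $f$ is an arbitrary assumption for illustration).

```python
from itertools import product

n = 3
cube = list(product([-1, 1], repeat=n))

def W(A, x):
    """Walsh character W_A(x) = prod_{i in A} x_i (empty product = 1)."""
    out = 1
    for i in A:
        out *= x[i]
    return out

# an arbitrary complex-valued function on {-1,1}^3
f = {x: complex(x[0] + 2 * x[1] * x[2], x[0] * x[1]) for x in cube}

# Fourier-Walsh coefficients: f_hat(A) = average of f * W_A over the cube
subsets = [tuple(i for i in range(n) if (mask >> i) & 1) for mask in range(2 ** n)]
fhat = {A: sum(f[x] * W(A, x) for x in cube) / len(cube) for A in subsets}

# Parseval: sum |f_hat(A)|^2 equals the mean of |f|^2 (orthonormality of the W_A)
lhs = sum(abs(v) ** 2 for v in fhat.values())
rhs = sum(abs(f[x]) ** 2 for x in cube) / len(cube)
```

For the $f$ chosen here the only nonzero coefficients are $\hat f(\{0\})=1$, $\hat f(\{1,2\})=2$ and $\hat f(\{0,1\})=i$, so both sides of Parseval equal $6$.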
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known.
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^n$ which have modulus $1$ fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved
Figure 1: Comparisons of different methods on cumulative reward under two different environments. The results are averaged over 10 trials and the error bars show the standard deviations. The environment changes abruptly in the left subfigure, whereas the environment changes gradually in the right subfigure.
For the case when the environment changes abruptly $L$ times, our algorithm enjoys an $\tilde{O}(L^{1/3}T^{2/3})$ dynamic regret bound, which is sub-optimal compared to Wei & Luo (2021). The reason is that periodic restart is not a suitable strategy for handling abrupt changes: its passive nature means we cannot guarantee detecting an abrupt environment change within a reasonably short delay. Wei & Luo (2021) overcome this issue by running two tests on top of multiple base instances with different scales to detect the environmental change. Similar ideas have also been used in the piecewise-stationary bandit literature (Besson & Kaufmann, 2019), where a change-detection subroutine is run to detect the environmental change, so the regret incurred by the environmental drift can be better controlled.
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. They are much smaller than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy. This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart restart automatically according to the variation of the environment and thus carry a much smaller computational burden, since they do not need the entire history to compute the current policy at each time step. The running time of LSVI-UCB-Unknown is larger than that of LSVI-UCB-Restart since its epoch length is larger due to the lack of knowledge of the total variation $B$, but it still does not use the entire history to compute its policy. Although Random-Exploration takes the least time, it cannot find a near-optimal policy. This result further demonstrates that our algorithms are not only sample-efficient but also computationally tractable.
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting. The reason is that algorithms designed to explore in stationary MDPs are generally insensitive to abrupt changes in the environment. For example, UCB-type exploration has no incentive to take actions other than the one with the largest upper confidence bound of the $Q$-value, and if it has collected a sufficient number of samples, it very likely never explores the new optimal action, thereby taking the former optimal action forever. On the other hand, in a gradually-changing environment, LSVI-UCB and Epsilon-Greedy can perform well in the beginning when the drift of the environment is small. However, when the change of the environment grows, they no longer yield satisfactory performance since their $Q$-function estimates are quite off. This also explains why LSVI-UCB and Epsilon-Greedy outperform ADA-LSVI-UCB at the beginning in the gradually-changing environment, as shown in Figure 1.
From Figure 1, we see LSVI-UCB-Restart with knowledge of the global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variation. Ada-LSVI-UCB-Restart also outperforms the baselines because it also takes the nonstationarity into account by periodically updating the epoch size for restart. In addition, Ada-LSVI-UCB-Restart has a huge gain compared to LSVI-UCB-Unknown, which agrees with our theoretical analysis. This suggests that Ada-LSVI-UCB-Restart works well when knowledge of the global variation is unavailable. Our proposed algorithms not only perform systematic exploration, but also adapt to the environment change.
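To make the restart idea concrete, here is a hypothetical epoch schedule: choosing the restart interval $W \approx (T^2/L)^{1/3}$ gives about $(LT)^{1/3}$ epochs, which matches the $\tilde{O}(L^{1/3}T^{2/3})$ scaling up to constants and log factors. The function names and constants are illustrative assumptions, not the paper's pseudocode.

```python
import math

def epoch_length(T, L):
    """Restart interval W ~ (T^2 / L)^(1/3), so the number of epochs is
    T / W ~ (L * T)^(1/3); combining this with per-epoch regret yields the
    L^(1/3) T^(2/3) dynamic-regret scaling (constants and logs omitted)."""
    return max(1, math.ceil((T * T / L) ** (1.0 / 3.0)))

def restart_times(T, L):
    """Time steps at which the Q-function estimates are reset."""
    W = epoch_length(T, L)
    return list(range(0, T, W))

W = epoch_length(1000, 10)   # (10^5)^(1/3) ~ 46.4, rounded up
```

The wrapper around the base learner then simply discards its collected data at each time in `restart_times(T, L)`.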
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2) which is statistically significant ($r(9)=-0.81$, $p<.005$). Trust is built on transparency and truthfulness, and the presence of fake news, which is deceptive and usually meant to serve hidden agendas, may erode trust. It is worthwhile to consider whether the trust in media items is due to people's own encounters with fake news, or due to secondary factors. In Singapore, there have been active efforts through campaigns from various organizations (e.g., S.U.R.E. (Board, [n.d.]), Better Internet (Council, [n.d.]), VacciNationSG (Lai, 2021)) to raise awareness of misinformation, disinformation and fake news. If it is through exposure to the messages of these campaigns that people's trust in media items has been influenced, especially for those who might not have personally encountered fake news, this suggests the importance of media literacy education in addressing fake news, particularly when secondary effects such as practicing greater caution due to a lack of trust come into play.
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Government to more directly address falsehoods that hurt the public interest. The rising attention of fake news in the local scene has motivated various research including studies on the perceptions and motivations of fake news sharing (Chen et al., 2015) and responses to fake news (Edson C Tandoc et al., 2020). Although there are parallels between these studies and ours, we want to highlight that our study explores fake news in general media instead of solely social media, examining both usage and trust. Furthermore, we investigate more broadly the attitudes and behaviors on news sharing and fake news.
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms, and post corrections and warnings when they encounter fake news. That respondents show strong trust and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts in making transparent government decisions and the readiness of the government in addressing public concerns through online forums and dialogues (REACH, [n.d.]). There is opportunity here for the government to launch programs such as campaigns, call-to-actions and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions.
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this to be of concern to the CSCW community. As global efforts addressing fake news take off, we aim to understand what the perceptions and practices of news sharing and fake news are in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
Consider the instance of encoding the relational information of the entity W3C into an embedding. All relevant information is structured in the form of triplets, such as $(\textit{RDF}, \textit{developer}, \textit{W3C})$. Removing the self-entity W3C does not compromise the integrity of the information. One might argue that W3C carries useful information like images and attributes. However, multi-modal KG embedding methods often encode different modalities of information separately and then merge the outputs through a fusion layer [14, 15, 16, 17, 18]. Hence, excluding the self-entity when encoding relational information appears reasonable.
Now, let's consider a scenario where DAN is responsible for generating embeddings for the neighbors of W3C, specifically $\mathbf{g}_{\text{Tim Berners-Lee}}$, $\mathbf{g}_{\text{RDF}}$, and $\mathbf{g}_{\text{XML Schema}}$. In this context, $\mathbf{e}_{\text{W3C}}$ is employed as one of the input embeddings (Figure 3b). Consequently, if W3C is a known entity in the training set, $\mathbf{e}_{\text{W3C}}$ can assimilate some information about its neighborhood through back-propagation. We anticipate that $\mathbf{g}_{\text{W3C}}$ should, at the very least, encapsulate the information embedded in $\mathbf{e}_{\text{W3C}}$ to more effectively represent W3C. In the case where W3C is an unknown entity, it becomes even more critical for DAN to learn how to encode this entity using its neighbor embeddings as input.
Drawing inspiration from the CBOW schema, we propose Decentralized Attention Network (DAN) to distribute the relational information of an entity exclusively over its neighbors. DAN retains complete relational information and empowers the induction of embeddings for new entities. For example, if W3C is a new entity, its embedding can be computed based on its established relationships with existing entities, such as Tim Berners-Lee, RDF, and XML Schema. In contrast, the existing methods additionally rely on the embedding of W3C, thus constraining their capacity to generate embeddings for new entities.
Although $\mathbf{e}_{\text{W3C}}$ does not directly contribute to its output embedding $\mathbf{g}_{\text{W3C}}$, it plays a pivotal role in learning the embeddings of its neighbors, such as $\mathbf{g}_{\text{Tim Berners-Lee}}$ and $\mathbf{g}_{\text{RDF}}$. Hence, $\mathbf{e}_{\text{W3C}}$ may encapsulate valuable knowledge about its neighbors, which we would like to distill into $\mathbf{g}_{\text{W3C}}$ by maximizing their mutual information (MI) [26]. This process not only helps DAN to encode the relational information of known entities, but also trains DAN to produce desired embeddings for new entities by effectively encoding the neighbors to align with the output embeddings of known entities. Theoretical proofs are provided to substantiate this concept.
To gain a deeper understanding of self-distillation, it is essential to analyze the relationship between the input embedding and the decentralized output embedding. Let’s consider the example of the entity W3C, denoted as $\mathbf{e}_{\text{W3C}}$ for the input embedding and $\mathbf{g}_{\text{W3C}}$ for the decentralized embedding. As W3C has several neighbors, such as Tim Berners-Lee, RDF, and XML Schema, DAN uses the embeddings $\mathbf{e}_{\text{Tim Berners-Lee}}$, $\mathbf{e}_{\text{RDF}}$, and so forth, as input to generate $\mathbf{g}_{\text{W3C}}$ (Figure 3a).
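To make the decentralized idea concrete — an entity's output embedding computed purely from its neighbors' input embeddings — the following NumPy sketch aggregates neighbors with a simple softmax attention. The attention form, the query vector, and the dimensions are our assumptions for illustration, not DAN's exact architecture.

```python
import numpy as np

def decentralized_embedding(neighbor_embs, query):
    """Build an entity's output embedding ONLY from its neighbors' input
    embeddings, so it can also be computed for entities unseen in training.
    The softmax attention here is a simplification, not DAN's exact model."""
    scores = neighbor_embs @ query                # one score per neighbor
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax attention weights
    return weights @ neighbor_embs                # weighted neighbor average

rng = np.random.default_rng(0)
# Hypothetical 8-dimensional input embeddings of W3C's three neighbors.
neighbors = rng.normal(size=(3, 8))  # Tim Berners-Lee, RDF, XML Schema
query = rng.normal(size=8)           # a learned query vector (assumed)
g_w3c = decentralized_embedding(neighbors, query)
```

Because no embedding of W3C itself appears on the input side, the same computation applies unchanged when W3C is a new entity.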
To evaluate adaptability, we further adapt the policies learned in Level 1 to the other levels. More specifically, for each method, we first save the last policy obtained when training in Level 1, and then fine-tune this policy in Levels 2 and 3. Since the VDM and RFM methods perform best in Level 1, we conduct adaptability experiments exclusively on VDM and RFM. We illustrate the results in Fig. 8(c) and Fig. 8(d). We observe that VDM performs similarly to RFM in Level 2, while performing much better than RFM when transferring to Level 3. As a result, we conclude that the policy learned by VDM demonstrates better adaptability to novel environments.
To further investigate the capability of our method in coping with highly stochastic environments, we conduct experiments on games where both the agent and its opponent are controlled by self-supervised exploratory policies. The stochasticity of the transition dynamics is much higher for both sides of the game, since the opponent's evolving policy changes the agent's transition. In contrast, when playing Atari games, the opponent is controlled by a hardcoded policy, which yields relatively stable transitions. We use the Two-player Pong game for this experiment. The extrinsic reward is not appropriate for evaluating different methods in this experiment, since both sides are controlled by policies that evolve together. Instead, we use the length of the episode as the evaluation criterion. This criterion is appropriate since, on the one hand, to maximize the intrinsic rewards, both the agent and its opponent aim to beat each other while avoiding the dead ball that terminates the episode. On the other hand, as the policies evolve, both sides of the game eventually adopt approximately the same policy, which yields long episodes.
We illustrate the results in Fig. 9. We observe that, as anticipated, the episode length grows over training time with the intrinsic reward estimated by VDM. Our method reaches an episode length of $10^4$ with the fewest iteration steps. After reaching the maximum episode length, the game rallies eventually get so long that they break our Atari emulator, causing the colors to change radically, which crashes the policy. The two observation images in Fig. 9 illustrate this change of the emulator. RFM achieves similar results with twice the training steps of VDM. In conclusion, the pure-exploratory policy learned by VDM enables both sides to improve their policies, reaching the Nash equilibrium in the Two-player Pong game more efficiently than the baseline methods.
We observe that our method performs best in most of the games, in both sample efficiency and the performance of the best policy. The reason our method outperforms the baselines is the multimodality that the dynamics of Atari games usually exhibit. Such multimodality is typically caused by other objects that are beyond the agent's control. Affected by these objects, taking the same action may yield different outcomes. For example, in MsPacman, the ghosts choose directions freely at each fork of the maze, which is beyond the control of the agent. Similar to the different image classes in the Noisy-MNIST example, different ghost behaviors lead to different modes in the transition dynamics. VDM captures the multimodality of the dynamics when measuring the novelty of transitions, which leads to better intrinsic rewards for exploration. Moreover, in VDM, the features encoding multimodality and stochasticity are contained in the posterior and prior networks, separate from the reconstruction features in the generative network. Hence, VDM prevents the features of multimodality and stochasticity from being corrupted during the training of the generative model.
The complete procedure of self-supervised exploration with VDM is summarized in Algorithm 1. In each episode, the agent interacts with the environment to collect the transitions $(s_t, a_t, s_{t+1})$. A random CNN is then used to extract state features, and the embedded transition is fed into VDM to estimate the intrinsic reward following (15). After the end of an episode, we use the collected $T$ transitions to update the parameters of the policy by following the PPO gradient defined in (2) and (5), together with generalized advantage estimation [19]. Meanwhile, we update the posterior network, prior network, and generative network of VDM based on (12) and (13). We then use the updated policy and VDM for interaction in the next episode.
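The intrinsic-reward step above can be illustrated schematically. The sketch below scores the novelty of a transition as the negative log of a kernel density estimate built from samples drawn from a learned dynamics model; this is our stand-in for the reward of Eq. (15), whose exact variational form we do not reproduce, and the unit-bandwidth Gaussian kernel is an assumption.

```python
import numpy as np

def intrinsic_reward(model_samples, next_state):
    """Novelty of a transition: negative log of a kernel density estimate
    over samples drawn from the (learned) dynamics model. A schematic
    stand-in for the paper's Eq. (15), not its exact form."""
    diffs = model_samples - next_state            # (n_samples, state_dim)
    sq = np.sum(diffs ** 2, axis=1)
    density = np.mean(np.exp(-0.5 * sq))          # unit-bandwidth Gaussian KDE
    return -np.log(density + 1e-12)

rng = np.random.default_rng(1)
samples = rng.normal(size=(64, 4))                # draws from the model
r_near = intrinsic_reward(samples, np.zeros(4))      # well-predicted state
r_far = intrinsic_reward(samples, 10 * np.ones(4))   # surprising state
```

A well-predicted next state gets a low reward, while a state far from all model samples gets a high one, which is the qualitative behavior exploration requires.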
If we were to add nodes to make the grid symmetric or tensorial, then the number of nodes of the resulting (sparse) tensorial grid would scale exponentially, $\mathcal{O}(n^m)$, with the space dimension $m \in \mathbb{N}$. In contrast, our proposed interpolation nodes scale sub-exponentially, $o(n^m)$, and
We complement the established notion of unisolvent nodes with the dual notion of unisolvence. That is: for given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that $P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amos Ron [28, 29] and answer their question from our perspective.
for a given polynomial space $\Pi$ and a set of nodes $P \subseteq \mathbb{R}^m$ that is not unisolvent with respect to $\Pi$, find a maximum subset $P_0 \subseteq P$ and a polynomial subspace $\Pi_{P_0} \subseteq \Pi$,
We realize the algorithm of Carl de Boor and Amos Ron [28, 29] in terms of Corollary 6.5 in the case of the torus $M = \mathbb{T}^2_{R,r}$. That is, we consider
Here, we answer Questions 1–2. To do so, we generalize the notion of unisolvent nodes $P_A$, $A \subseteq \mathbb{N}^m$, to non-tensorial grids. This allows us to extend Newton (NI) and Lagrange (LI) interpolation to arbitrary-dimensional spaces such that:
In the second case, the distributions $\mu$ and $\nu$ are both $d$-dimensional Gaussian distributions with the same mean vector but different covariance matrices, where $d \in \{30, 60\}$. More specifically, $\mu = \mathcal{N}(0, I_d)$ and $\nu = \mathcal{N}(0, \Sigma)$ with $\Sigma = \mathrm{diag}(4, 4, 1, \ldots, 1)$.
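This covariance-shift pair is easy to reproduce; the sketch below samples it with NumPy (sample sizes and the seed are ours). Only the first two coordinates of $\nu$ have variance 4, which is exactly what makes the shift hard to detect in high dimensions.

```python
import numpy as np

def sample_pair(n, d, seed=0):
    """Draw n points each from mu = N(0, I_d) and
    nu = N(0, diag(4, 4, 1, ..., 1)), as described in the text."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))          # from mu
    scale = np.ones(d)
    scale[:2] = 2.0                      # sqrt of the variance 4
    Y = rng.normal(size=(n, d)) * scale  # from nu
    return X, Y

X, Y = sample_pair(n=75, d=60)
```

The two clouds agree in mean and in 58 of the 60 marginal variances, so any test must pick up the inflated spread of just the first two coordinates.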
Several data-efficient two-sample tests [20, 21, 22] are constructed based on Maximum Mean Discrepancy (MMD), which quantifies the distance between two distributions by introducing test functions in a Reproducing Kernel Hilbert Space (RKHS). However, it is pointed out in [23] that when the bandwidth is chosen based on the median heuristic, the MMD tests suffer from decaying power in high dimensions.
However, the two-sample tests based on concentration inequalities in Section III give conservative results in practice. We examine the two-sample tests using the projected Wasserstein distance via the permutation approach. Specifically, we permute the collected data points $N_p = 100$ times, and the $p$-value of the proposed test is computed as the fraction of times that the projected Wasserstein distance under permuted samples exceeds the projected Wasserstein distance under the original empirical samples.
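The permutation procedure described above can be sketched as follows. For simplicity the statistic here is a one-dimensional Wasserstein-1 distance, a stand-in for the projected Wasserstein distance of the paper; the add-one correction and the shifted-Gaussian example are our choices.

```python
import numpy as np

def permutation_pvalue(stat, X, Y, n_perm=100, seed=0):
    """Permutation p-value: the fraction of permuted-split statistics that
    reach or exceed the statistic on the original split."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([X, Y])
    n = len(X)
    observed = stat(X, Y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)         # random relabeling of the pool
        count += stat(perm[:n], perm[n:]) >= observed
    return (count + 1) / (n_perm + 1)          # add-one correction (valid test)

def w1(X, Y):
    """Wasserstein-1 distance between equal-size 1-D empirical samples."""
    return np.mean(np.abs(np.sort(X) - np.sort(Y)))

rng = np.random.default_rng(2)
X = rng.normal(0, 1, 100)
Y = rng.normal(2, 1, 100)    # clearly shifted alternative
p = permutation_pvalue(w1, X, Y)
```

Under a clear mean shift the observed distance dominates all permuted distances, so the p-value sits near its lower bound $1/(N_p + 1)$.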
In other words, we only scale the first two diagonal entries of the covariance matrix of $\nu$ to make the hypothesis testing problem difficult. We compare the performance of the PW test with the MMD test discussed in [20], where the kernel function is chosen to be the standard Gaussian kernel with bandwidth equal to the empirical median of the pairwise distances between data points.
The last two plots correspond to covariance-shifted Gaussian distributions: Fig. 1c) examines the power for different $n$ with fixed $d = 60$, and Fig. 1d) examines the power for different $d$ with fixed $n = 75$. We can see that the power of all methods increases as the sample size increases, and the power of the PW test exceeds that of the MMD test, especially in high dimensions.
VAE-type DGMs use amortized variational inference to learn an approximate posterior $q_\phi(H \mid x)$ by maximizing an evidence lower bound (ELBO) on the log-marginal likelihood of the data under the model, $p_\theta(X)$.
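For the common choice of a standard-normal prior $p(H)$ and a diagonal-Gaussian posterior $q_\phi(H \mid x)$, the ELBO's regularization term has a closed form, sketched below. The reconstruction term (the expected log-likelihood under the decoder) is model-specific and omitted; this is a generic VAE identity, not code from the paper.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ): the closed-form
    regularization term of the ELBO for a diagonal-Gaussian variational
    posterior and a standard-normal prior."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

kl_at_prior = gaussian_kl(np.zeros(4), np.zeros(4))  # posterior == prior
kl_shifted = gaussian_kl(np.ones(4), np.zeros(4))    # mean moved off prior
```

The term vanishes exactly when the posterior equals the prior and grows as the posterior moves away, which is what lets the ELBO trade reconstruction quality against a simple latent code.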
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do so by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or flow-based DGMs, supervised, semi-supervised or unsupervised; in the Appendix we present such implementations), where we significantly constrain the capacity of the learned representation and heavily regularize the model to produce independent factors. As we explained above, such a model will likely learn a good disentangled representation; however, its reconstruction will be of low quality, as it will only be able to generate the information captured by the disentangled factors while averaging out the details. For example, in Figure 1, the model uses $\beta$-TCVAE [mig] to retrieve the pose of the model as a latent factor. In the reconstruction, the rest of the details are averaged, resulting in a blurry image (1b). The goal of the second part of the model is to add the details while maintaining the semantic information retrieved in the first stage. In Figure 1, that means transforming Image 1b (the output of the first stage) to be as similar as possible to Image 1a (the target observation). We can view this as a style transfer task and use a technique from [adaIN] to achieve our goal.
Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of $X$. Thus, after jointly training the encoder and decoder, a VAE model can perform two complementary tasks: extract a low-dimensional representation of a given observation $x$, as well as reconstruct an observation from its low-dimensional representation.
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$ while maintaining the semantic information captured in $C$ to obtain the final reconstruction (Image 1d in our example).
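The shift-and-scale normalization described above follows the adaptive instance normalization (AdaIN) pattern, sketched here with NumPy. The feature shapes and the way the style statistics are supplied are illustrative assumptions; in the model they would be predicted from $Z$.

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization: normalize the content features
    channel-wise, then shift and scale them with externally supplied
    statistics (in the model, predicted from the nuisance variables Z)."""
    # content: (C, H, W) feature map, e.g. from the blurry first-stage image
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return style_std[:, None, None] * normalized + style_mean[:, None, None]

rng = np.random.default_rng(3)
feat = rng.normal(2.0, 5.0, size=(8, 16, 16))  # hypothetical content features
out = adain(feat, style_mean=np.ones(8), style_std=0.5 * np.ones(8))
```

After the operation, each channel carries the supplied mean and standard deviation while its spatial pattern — the semantic content — is preserved.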
Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high-dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data-generating process. These models rest on the underlying assumption that the high-dimensional observations $X \in \mathbb{R}^D$ can be meaningfully described by a small set of low-dimensional latent factors $H \in \mathbb{R}^K$, where $K < D$. More precisely, the observation $(X = x)$ is assumed to be generated by first sampling a set of low-dimensional factors $h$ from a simple prior distribution $p(H)$ and then sampling $x \sim p_\theta(X \mid h)$. DGMs realize $p_\theta$ through a deep neural network, also known as the decoder or the generative network.
The structural computer used an inverted signal pair to implement the reversal of a signal (the NOT operation) as a structural transformation, i.e., a twist, and four pins were used for the AND and OR operations since series and parallel connections were required. However, one may ask whether the four-pin design is the minimum number of pins required by structural computers. In other words, operating a structural computer with minimal leads is also a task to be addressed by this study, because one of the most important factors in computer hardware design is aggregation. Let us look at the role of the four pins that transmit signals in a 4-pin based signal system. The four pins are grouped into two pairs, each representing/delivering the true and inverted values as a connection state. When checking the output, a voltage is placed on one of the two wires in a pair while the other is grounded. From this, we inferred that, of the four wires, the two wires acting as ground can be replaced by a single wire; based on this reasoning, the 4-pin signal system can be described as an equivalent 3-pin signal system. As mentioned above, a 3-pin based logic consists of a ground line in the center and two signal lines, representing the true and inverted values, above and below it, and is capable of performing the NOT, AND, and OR operations through the structural transformations shown below.
Fig. 3 shows the AND and OR gates built from 3-pin based logic; it also shows the connection status of the output pin when A=0, B=1 is entered into the AND gate. When A=0 and B=1, the inverted line of A and the line of B are connected, so output C is connected only to the lower two pins; this is the correct result for the AND operation.
Optical logic aggregates can be designed in the same way as in "Implementation of a Structural Computer Using Mirrors and Translucent Mirrors". For the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the numbering shown in Fig. 5 can be applied to the labels of the window operators to express the AND gate as shown below, which is referred to as the matrix representation of the optical logic. Fig. 7 shows, however, that some rays of light can be counted on the lower beta signal, which can interfere with the operation of other gates. Thus, a black-body gate was implemented using i cells to turn every input into the NULL state. Including this, the functions derived from the properties of light that are only available in structure-based optical computing can be modularized with window operators, which can be organized into the following seven categories: AND (logical AND in Boolean algebra), OR (logical OR in Boolean algebra), CROS (vertical reflection/crossing of two logics), CNOT (vertical reflection/crossing of two logics in which only the intersecting logics are NOT-operated), INVS (transmittance of two logics), COPY (cloning of a logic), and BLAK (absorption of a logic, making it all NULL).
The NOT gate can perform logical negation through one "twist", as in the 4-pin design. To be exact, the position of the middle ground pin is fixed, and the structural transformation exchanges the positions of the remaining two pins, carrying the true and false values.
Any permutation polynomial $f(x)$ decomposes the finite field $\mathbb{F}_q$ into sets containing mutually exclusive orbits, with the cardinality of each set being equal to the cycle length of the elements in that set. The cycle structure $\Sigma_f$ of a permutation polynomial $f(x)$ is the set of all cycle lengths of the permutation. (In general, the cycle structure of a permutation is the set containing information about cycle (or orbit) lengths along with their multiplicities; in this work, we use the term to denote only the orbit lengths of distinct cycles, without considering their multiplicity.) Computing the cycle structure of permutations represented by PPs is an important problem encountered in cryptography, coding theory, and communication systems [15, 16], with no known efficient algorithm for a general class of PPs. Computing the cycle structure of specific forms of PPs is a well-studied problem in the theory of finite fields [17, 18, 19].
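The definition of $\Sigma_f$ just given — distinct orbit lengths, multiplicities ignored — can be computed directly by following orbits, as the brute-force sketch below does for a prime field. This $O(q)$ enumeration is only an illustration of the definition, not the efficient algorithm the text says is lacking.

```python
def cycle_structure(f, q):
    """Distinct cycle lengths of the permutation of {0, ..., q-1} induced
    by f (which must be a bijection), following the definition above."""
    seen = [False] * q
    lengths = set()
    for start in range(q):
        if seen[start]:
            continue
        length, x = 0, start
        while not seen[x]:          # walk the orbit of `start` under f
            seen[x] = True
            x = f(x)
            length += 1
        lengths.add(length)
    return lengths

# x^3 is a permutation polynomial of F_5 since gcd(3, 5-1) = 1;
# its orbits are {0}, {1}, {4}, and {2, 3}.
sigma = cycle_structure(lambda x: pow(x, 3, 5), 5)
```

For $x \mapsto x^3$ over $\mathbb{F}_5$ this yields $\Sigma_f = \{1, 2\}$: three fixed points and one 2-cycle, with the multiplicity of the fixed points discarded as per the convention above.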
Univariate polynomials $f(x): \mathbb{F} \to \mathbb{F}$ that induce a bijection of the field $\mathbb{F}$ are called permutation polynomials (in short, PPs) and have been studied extensively in the literature. For instance, given a general polynomial $f(x)$ over $\mathbb{F}$, deciding whether it is a PP is a well-researched problem [10]. Although computationally verifying that a given polynomial $f(x)$ is a PP takes time polynomial in its degree $d$, conditions for a polynomial to be a PP are well understood only for certain polynomials with specific structures, such as monomials, linearized polynomials, and Dickson polynomials, to name a few.
There has been extensive study of families of polynomial maps defined through a parameter $a \in \mathbb{F}$ over finite fields. Well-studied families include the Dickson polynomials and the reverse Dickson polynomials. Conditions for such families of maps to define a permutation of the field $\mathbb{F}$ are established for special classes such as Dickson polynomials [20], linearized polynomials [21], and a few other specific forms [13, 14].
Given an $n$-dimensional vector space $\mathbb{F}^n$ over a finite field $\mathbb{F}$, maps $F: \mathbb{F}^n \to \mathbb{F}^n$ are ubiquitous in the representation of finite automata [1, 2], recurrence sequences through feedback shift registers (FSRs) [3, 4], mathematical models of stream ciphers [5], and state updates of genetic networks [6, 7, 8, 9], to name a few. In these applications, computations of compositions and inverses of such maps, as well as their representations as polynomials over the finite field $\mathbb{F}$, play an important role.
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is a vast literature on the invertibility of polynomials and the construction of inverses of permutation polynomials over $\mathbb{F}$, this paper explores a completely new approach using the Koopman operator defined by the iterates of the map. This helps define the linear representation of non-linear maps, which translates non-linear compositions of the map into matrix multiplications. This linear representation naturally defines a notion of linear complexity for non-linear maps, which can be viewed as a measure of the computational complexity associated with computations involving such maps. The framework of linear representation is then extended to parameter-dependent maps over $\mathbb{F}$, and conditions for the parametric invertibility of such maps are established, leading to a construction of the parametric inverse map (under composition). It is shown that the framework extends to multivariate maps over $\mathbb{F}^n$; conditions are established for the invertibility of such maps, and the inverse is constructed using the linear representation. Further, the problem of linear representation of the group generated by a finite set of permutation maps over $\mathbb{F}^n$ under composition is also solved by extending the theory of linear representation of a single map. This leads to a notion of complexity for a group of permutation maps under composition.
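The core idea — a linear representation in which composition of non-linear maps becomes matrix multiplication — can be illustrated in a toy form: lift each state of $\mathbb{F}_2^n$ to a one-hot vector, so any map becomes a $2^n \times 2^n$ transition matrix. This brute-force lifting is our illustration of the general principle only; the paper's Koopman construction works with polynomial iterates and yields far smaller matrices.

```python
import numpy as np
from itertools import product

def transition_matrix(F, n):
    """Matrix acting on one-hot encodings of F_2^n states so that
    M_F @ onehot(s) = onehot(F(s)); composition of maps then corresponds
    to multiplication of their matrices. A toy linear representation."""
    states = [tuple(s) for s in product([0, 1], repeat=n)]
    idx = {s: i for i, s in enumerate(states)}
    M = np.zeros((2**n, 2**n), dtype=int)
    for s in states:
        M[idx[F(s)], idx[s]] = 1
    return M

# Example: right rotation of 3 bits, a permutation of F_2^3 of order 3.
rot = lambda s: (s[-1],) + s[:-1]
M = transition_matrix(rot, 3)
```

Since the rotation has order 3, its matrix satisfies $M^3 = I$, i.e. iterating the non-linear map reduces to taking matrix powers.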
In this study we only considered different meta-learners within the MVS framework. Of course, many other algorithms for training classifiers exist. Some of those classifiers may be expected to perform better in terms of classification performance than the classifiers presented here, but not many have the embedded view selection properties of MVS-based methods. For example, a random forest would probably perform very well in terms of classification, but the resulting classifier is hard to interpret and does not automatically select the most important views for prediction. One non-MVS method which does automatically select views is the group lasso (M. Yuan & Lin, 2007), but we did not include it here, as an extensive comparison between StaPLR/MVS and the group lasso has already been performed elsewhere (Van Loon et al., 2020).
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of views, the interpolating predictor often had the lowest TPR in view selection, as well as the lowest test accuracy, particularly when there was no correlation between the different views. When the sample size was smaller than the number of views, the interpolating predictor had a FPR in view selection that was considerably higher than that of all other meta-learners. In terms of accuracy it performed very well in the breast cancer data, but less so in the colitis data. However, in both cases it produced very dense models, which additionally had low view selection stability. The fact that its behavior varied considerably across our experimental conditions, combined with its tendency to select very dense models when the meta-learning problem is high-dimensional, suggests that the interpolating predictor should not be used when view selection is among the goals of the study under consideration. However, it may have some use when its interpretation as a weighted mean of the view-specific models is of particular importance.
For each experimental condition, we simulate 100 multi-view training data sets. For each such data set, we randomly select 10 views. In 5 of those views, we determine all of the features to have a relationship with the outcome. In the other 5 views, we randomly determine 50% of the features to have a relationship with the outcome. The relationship between features and response is determined by a logistic regression model, where each feature related to the outcome is given a regression weight. In the setting with 30 views, we use the same regression weight as a similar simulation study in Van Loon et al. (2020). This regression weight is either $0.04$ or $-0.04$, each with probability 0.5. In the setting with 300 views, the number of features per view is reduced by a factor of 10. To compensate for the reduction in the number of features, the aforementioned regression weights are multiplied by $\sqrt{10}$ in this setting.
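The $\sqrt{10}$ compensation can be checked numerically: for independent, unit-variance features, the variance of the linear predictor $\sum_j w_j X_j$ equals $\sum_j w_j^2$, so dividing the feature count by 10 while multiplying each weight by $\sqrt{10}$ leaves it unchanged. The sketch below is our illustration (with a hypothetical feature count), not code from the study.

```python
import numpy as np

p, w = 1000, 0.04    # hypothetical number of signal features and their weight

# Variance of sum_j w_j X_j for independent unit-variance features is
# simply the sum of squared weights:
var_many_features = p * w**2                          # 30-view setting
var_few_features = (p // 10) * (w * np.sqrt(10))**2   # 300-view setting
```

The two quantities agree, so the signal strength of the simulated logistic model is comparable across the 30-view and 300-view settings.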
Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expression data sets, stability selection also produced the sparsest models, but it also had the worst classification accuracy of all meta-learners. In applying stability selection, one has to specify several parameters. We calculated the values of these parameters in part by specifying a desired bound on the PFER (in our case 1.5). This kind of error control is much less strict than the typical family-wise error rate (FWER) or FDR control one would apply when doing statistical inference. In fact, one can observe in Figures 3 and 4 that although stability selection has a low FPR, for a sample size of 200 its FDR is still much higher than one would typically consider acceptable when doing inference (common FDR control levels are 0.05 or 0.1). Additionally, we gave the meta-learner information about the number of views containing signal in the data (parameter $q$), which the other meta-learners did not have access to. It is also worth noting that the sets of views selected by stability selection in both gene expression data sets had low view selection stability. Ideally, selecting views based on their stability would lead to a set of selected views that is itself highly stable, but evidently this is not the case. It follows that stability selection may produce a set of selected views which is neither particularly useful for prediction nor for inference. One could add additional assumptions (Shah & Samworth, 2013), which may increase predictive performance but may also increase the FDR. Or one could opt for stricter error control, but this would likely reduce classification performance even further.
This implies that performing view selection for both the aims of prediction and inference using a single procedure may produce poor results, since the resulting set of selected views may not be suitable for either purpose.
Any simulation study is limited by its choice of experimental factors. In particular, in our simulations we assumed that all features corresponding to signal have the same regression weight, and that all views contain an equal number of features. The correlation structures we used are likely simpler than those encountered in real data sets. Additionally, we defined the view selection problem in such a way that we want to select any view which contains at least some (in our simulations at least 50%) features truly related to the outcome. In practice, the amount of signal present in a view may be lower, leading to considerations of exactly how much signal should be present in a view for it to be considered worth selecting. Additionally, we only considered settings where views are mutually exclusive, but in practice views may overlap (L. Yuan et al., 2011; Park et al., 2015), meaning that a single feature may correspond to multiple views. In general, the MVS algorithm can handle overlapping views by simply ‘copying’ a feature for each additional view in which it occurs. However, an exploration of the implications of overlapping views for view selection, both in MVS and in general, would make an interesting topic for future research. We also did not include the possibility of missing data. In multi-view data, it is quite likely that if missing data occur, all features within a view will be missing simultaneously. Future work may focus on developing optimal strategies for handling missing data in the multi-view context.
To study the impact of the anomaly percentage, we randomly select a certain percentage of objects and 10% of variables in each dataset to inject anomalous values. The percentage of anomalies ranges from 1% to 10%. The anomalies are generated by adding a perturbation to the original values while ensuring that the perturbed values remain within the observed range of the corresponding variables. Specifically, for a selected variable $X_j$, the perturbation $d$ is calculated as $d = (\max(X_j) - \min(X_j))/2$, where $\min(X_j)$ and $\max(X_j)$ represent the minimum and maximum observed values of $X_j$, respectively.
Then, for an object $\mathbf{x}_{i*}$, its anomalous value on variable $X_j$ is computed as $(x_{ij}+d) \bmod \max(X_j) + \lfloor \frac{\min(X_j)\,(x_{ij}+d)}{\max(X_j)} \rfloor$, where $\bmod$ is the modulus operator.
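The injection scheme above can be sketched in a few lines (a minimal illustration of the formula, assuming the data is a NumPy array with objects as rows and variables as columns; `inject_anomalies` is our own helper name, not code from the paper):

```python
import numpy as np

def inject_anomalies(X, row_idx, col_idx):
    """Perturb the selected cells of X following the wrap-around scheme:
    d = (max - min)/2, then value -> (x + d) mod max + floor(min*(x + d)/max)."""
    X = X.copy()
    for j in col_idx:
        lo, hi = X[:, j].min(), X[:, j].max()   # observed range of variable j
        d = (hi - lo) / 2.0                     # perturbation magnitude
        for i in row_idx:
            v = X[i, j] + d
            X[i, j] = v % hi + np.floor(lo * v / hi)
    return X
```

The modulus wraps the perturbed value back toward the observed range rather than letting it escape as an obvious out-of-range outlier.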
Sensitivity Experiments: DepAD algorithms are not sensitive to the average correlation, sparseness, or dimensionality of datasets. DepAD methods exhibit stability when data contains noisy variables. However, the percentage of anomalies can negatively affect their effectiveness.
Figure 11(b) demonstrates the impact of the number of noisy variables, ranging from 0 to 20, accounting for 0% to 18% of the original variables, with a fixed percentage of anomalies at 10%. FBED-CART-PS exhibits low sensitivity to noisy variables in terms of both ROC AUC and AP. This behavior can be attributed to the relevant variable selection step within the DepAD framework, which effectively excludes noisy variables from the prediction models, ensuring they do not compromise the accuracy of expected value predictions.
Regarding the experiments on noisy variables, we introduce noisy variables into the synthetic datasets following the process in the existing literature [28]. Specifically, to ensure minimal dependency between the noisy and the original variables, the values of the noisy variables are drawn from a uniform distribution between 0 and 1, and their correlation with the original variables is kept below 0.1. The experimental results are presented in Figure 11.
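The noise-generation step can be sketched as follows (an illustrative reading of the description above, assuming rejection sampling is used to keep correlations below 0.1; the function name is ours):

```python
import numpy as np

def add_noisy_variables(X, n_noisy, max_corr=0.1, seed=None):
    """Append uniform(0,1) noise columns whose absolute Pearson correlation
    with every original variable stays below max_corr (resample otherwise)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    noisy = []
    while len(noisy) < n_noisy:
        z = rng.uniform(0.0, 1.0, size=n)
        corrs = [abs(np.corrcoef(z, X[:, j])[0, 1]) for j in range(X.shape[1])]
        if max(corrs) < max_corr:   # accept only weakly correlated noise
            noisy.append(z)
    return np.column_stack([X] + noisy)
```

With a few hundred observations, an independent uniform sample almost always passes the 0.1 threshold, so the rejection loop terminates quickly.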
The number of noisy variables: noisy variables are variables that are unrelated to the data-generation process. Research [86, 87, 28] has shown that these variables can hide the characteristics of anomalies, making anomaly detection more challenging.
$\Delta^{\text{pred}}(\mathbf{X}_{\mathcal{Q}_t}, \theta)$ represents the difference in perceived rewards due to the inaccuracy in the estimation of the parameter $\theta_*$.
Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. The technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
CB-MNL enforces optimism via an optimistic parameter search (e.g., as in Abbasi-Yadkori et al. [2011]), in contrast to the use of an exploration bonus as seen in Faury et al. [2020], Filippi et al. [2010]. Optimistic parameter search provides a cleaner description of the learning strategy. In non-linear reward models, the two approaches may not follow similar trajectories but may have overlapping analysis styles (see Filippi et al. [2010] for a short discussion).
Comparison with Faury et al. [2020]. Faury et al. [2020] use a bonus term for optimization in each round, and their algorithm performs non-trivial projections on the admissible log-odds. While we do reuse the Bernstein-style concentration inequality they proposed, their results do not seem to extend directly to the MNL setting without significantly more work. Further, our algorithm CB-MNL performs an optimistic parameter search for making decisions instead of using a bonus term, which allows for a cleaner and shorter analysis.
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of uncertainty approaches) [Abbasi-Yadkori et al., 2011, Abeille et al., 2021]. We use Bernstein-style concentration for self-normalized martingales, previously proposed in the context of scalar logistic bandits in Faury et al. [2020], to define our confidence set over the true parameter, taking into account the effects of the local curvature of the reward function. We show that the performance of CB-MNL (as measured by regret) is bounded as $\tilde{\mathrm{O}}(d\sqrt{T}+\kappa)$, significantly improving the theoretical performance over existing algorithms, in which $\kappa$ appears as a multiplicative factor in the leading term. We also leverage a self-concordance-like relation [Bach, 2010] for the multinomial logit reward function [Zhang & Lin, 2015], which helps us limit the effect of $\kappa$ on the final regret upper bound to only the higher-order terms. Finally, we propose a different convex confidence set for the optimization problem in the decision set of CB-MNL, which reduces the optimization problem to a constrained convex problem.
We use an input sequence length $L=1280$, a channel dimension $C=256$ throughout the network, and a short factor $\gamma=0.4$. We have 5 levels in the encoder and decoder pyramids respectively, with lengths $L/2^{(l+1)}$, where $1 \leq l \leq 5$ is the level index. For each level, we have 2 different anchor sizes $\{s_1 \times 2^{(l-1)}, s_2 \times 2^{(l-1)}\}$, where $s_1, s_2$ are 4, 6 for THUMOS and 32, 48 for ActivityNet. The number of edges for each node is $K=10$, and the gap is $G=30$. $\lambda_{cls}=\lambda_{adj}=\lambda_{scr}=0.2$ for THUMOS and $\lambda_{cls}=\lambda_{adj}=\lambda_{scr}=1$ for ActivityNet. All these hyper-parameters are empirically selected.
Datasets and evaluation metrics. We present our experimental results on two representative datasets, THUMOS-14 (THUMOS for short) [15] and ActivityNet-v1.3 (ActivityNet for short) [7]. THUMOS-14 contains 413 temporally annotated untrimmed videos with 20 action categories, in which 200 videos are for training and 213 videos for validation (the training and validation sets of THUMOS are temporally annotated videos from the validation and testing sets of UCF101 [33], respectively). ActivityNet-v1.3 has 19994 temporally annotated untrimmed videos in 200 action categories, which are split into training, validation and testing sets by the ratio of 2:1:1. For both datasets, we use mean Average Precision (mAP) at different tIoU thresholds as the evaluation metric. On THUMOS-14, we use tIoU thresholds $\{0.3, 0.4, 0.5, 0.6, 0.7\}$; on ActivityNet-v1.3, we choose 10 values in the range $[0.5, 0.95]$ with a step size of 0.05 as tIoU thresholds, following the official evaluation practice.
The training batch size is 32 for both datasets. We train 10 epochs at learning rate 0.00005 for THUMOS and 15 epochs at learning rate 0.0001 for ActivityNet. We directly predict the 20 action categories for THUMOS; we conduct binary classification and then fuse our prediction scores with video-level classification scores from [41] for ActivityNet following [21]. In post-processing, we apply soft-NMS [6] to suppress redundant predictions, keeping 200 predictions for THUMOS and 100 predictions for ActivityNet for final evaluation.
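The soft-NMS post-processing step mentioned above can be illustrated for 1-D temporal segments (a sketch of the standard Gaussian soft-NMS of [6] adapted to `[start, end]` segments, not the authors' implementation; `sigma` and `top_k` are placeholder values):

```python
import numpy as np

def temporal_iou(seg, segs):
    """tIoU between one [start, end] segment and an array of segments."""
    inter = np.maximum(0.0, np.minimum(seg[1], segs[:, 1]) - np.maximum(seg[0], segs[:, 0]))
    union = (seg[1] - seg[0]) + (segs[:, 1] - segs[:, 0]) - inter
    return inter / np.maximum(union, 1e-8)

def soft_nms(segments, scores, sigma=0.5, top_k=200):
    """Gaussian soft-NMS: decay the scores of overlapping segments
    instead of discarding them outright."""
    segments, scores = segments.copy(), scores.copy()
    keep_segs, keep_scores = [], []
    while len(keep_segs) < top_k and scores.size > 0:
        i = int(np.argmax(scores))                 # highest-scoring remaining segment
        keep_segs.append(segments[i]); keep_scores.append(scores[i])
        segments = np.delete(segments, i, axis=0)
        scores = np.delete(scores, i)
        if scores.size:
            iou = temporal_iou(keep_segs[-1], segments)
            scores = scores * np.exp(-(iou ** 2) / sigma)   # Gaussian decay
    return np.array(keep_segs), np.array(keep_scores)
```

Heavily overlapping duplicates thus fall to the bottom of the ranking rather than being removed, which is gentler at high tIoU evaluation thresholds.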
Implementation Details. In order to achieve higher performance, some works directly process video frames and learn features for the task of temporal action localization (TAL) in an end-to-end fashion [24, 42]. However, this places enormous demands on GPU memory and computational capability. Instead, we follow the practice of using off-the-shelf pre-extracted features, without further finetuning on the target TAL task [3, 19, 21, 44]. For THUMOS, we sample at the original frame rate of each video and pre-extract features using the two-stream network TSN [41] trained on Kinetics [16]. For ActivityNet, we evaluate on two different types of features: TSN features at 5 snippets per second and I3D [8] features at 1.5 snippets per second (both networks are trained on Kinetics [16]).
Specifically, we propose a Video self-Stitching Graph Network (VSGN) to improve performance on short actions in the TAL problem. Our VSGN is a multi-level cross-scale framework that contains two major components: video self-stitching (VSS) and a cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period of a video and magnify it along the temporal dimension to obtain a larger scale. Then, using our self-stitching strategy, we piece together the original-scale clip and its magnified counterpart into one single sequence as the network input. In xGPN, we progressively aggregate features across scales as well as within the same scale via a pyramid of cross-scale graph networks. Hence, we enable direct information flow between the two feature scales. Compared to simply using one scale, our VSGN adaptively rectifies distorted features in either scale using information from the other by learning to localize actions, and is therefore able to retain more information for the localization task. In addition to enhancing the features, our VSGN augments the datasets with more short actions to mitigate the bias toward long actions during the learning process, and enables more anchors, even those with large scales, to predict short actions.
There are relevant works that involve the human in interpreting, debugging, refining, and comparing ensembles of models [DCCE19, LXL∗18, NP20, SJS∗18, XXM∗19, ZWLC19]. These papers use bagging [Bre01] and boosting [CG16, FSA99, KMF∗17] techniques for ranking and identifying the best combination of models in different application scenarios. StackGenVis [CMKK21] is a VA system for composing powerful and diverse stacking ensembles [Wol92] from a pool of pre-trained models. On the one hand, we also enable the user to assess the various models and build his/her own ensemble of models. On the other hand, we support the process of generating new models through genetic algorithms and highlight the necessity for the best and most diverse models in the simplest possible voting ensemble. Finally, our approach is model-agnostic and generalizable, since we use both bagging and boosting techniques along with both NNs and simpler models [LXL∗18, NP20, ZWLC19].
To provide a holistic view on the performance of the models for the selected validation metrics, we use a UMAP [MHM18] projection, as seen in Figure 2(a), that consists of the 500 randomly-sampled models (MDS [Kru64] and t-SNE [vdMH08] are also available). Each model uses a set of particular hyperparameters, and it is projected from the space of validation metric values (here 4 dimensions, but could be more).
Moreover, the ranking of models is often based on a single validation metric, leading to the risks discussed in Section 1. The aforementioned works that make use of genetic algorithms contain similar mechanisms as in VisEvol, but without VA support for (1) the exploration of the interconnected hyperparameters, and (2) the selection of the proper number of models that should crossover and mutate.
In the Sankey diagram (see Figure 3(a)), the user tracks the progress of the evolutionary process and is able to limit the number of models that will be generated through crossover and mutation for each algorithm (Step 4 in Figure 1). The default here is defined as the user-selected random search value / 2 for each algorithm, to sustain the vertical symmetry in the Sankey diagram, as shown in Figure 3(a), left. For $S_1$, we choose to keep the default values for crossover and mutation, but an analyst with prior knowledge and experience could fine-tune this process. While moving toward $S_2$, we notice from Figure 3(b) that KNN and MLP perform similarly. The output of $S_1$ in Figure 3(a) becomes the input for $S_2$, assisting us in the selection of appropriate numbers of generated models for $S_2$. When we hover over a path of the Sankey diagram, we see how many models perform better or worse than the already-explored models for each particular algorithm. The color-encoding is the same as in Figure 2(d.2–d.4), and it is measured as the number of overperforming models compared to the initial models / total crossover or mutation models for each algorithm. If there are no overperforming models, then we show the number of underperforming models compared to the initial models / total crossover or mutation models for each algorithm. This approach primarily allows the user to identify how many models are improved by each transformation (crossover or mutation), but it also highlights cases with very bad results from crossover or mutation, where no better ML models could be found.
In our example, KNN mutation produced bad results; hence, we set the subsequent KNN and MLP mutations (due to the previously-discussed similarity in Figure 3(a)) to lower values than the default (10 vs. the default of 25). The visualization reduces the width of each path line in the Sankey diagram accordingly when the values are smaller than the maximum permitted. Next, we apply an equivalent procedure for all the algorithms. Finally, an analysis is conducted in a similar way to previous sections, with the selection of points in Figure 3(c) as an outcome.
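For intuition, the crossover and mutation operators applied to hyperparameter sets can be sketched as follows (a generic genetic-algorithm illustration under our own assumptions about the representation, not VisEvol's actual code; the search-space values are hypothetical):

```python
import random

def crossover(parent_a, parent_b, rng=random):
    """Uniform crossover: each hyperparameter is copied from either parent."""
    return {k: (parent_a[k] if rng.random() < 0.5 else parent_b[k]) for k in parent_a}

def mutate(params, space, rate=0.2, rng=random):
    """With probability `rate`, resample a hyperparameter from its search space."""
    return {k: (rng.choice(space[k]) if rng.random() < rate else v)
            for k, v in params.items()}
```

New candidate models produced this way are then trained and compared against the already-explored models, which is what the hover counts in the Sankey diagram summarize.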
VA tools have also been developed to visualize buckets of models [CAA∗19, TLKT09, ZWM∗19], where the best model for a specific problem is automatically chosen from a set of available options. These works feature exploration of the space in search for a final model, but the best model might not be the optimal when compared to a set of models (i.e., multiple hyperparameters) from several algorithms. Additionally, the models are already generated before the exploration, and there is no involvement of an optimization method.
The fundamental idea underlying MCMC algorithms is to synthesize a Markov chain that converges to a specified steady-state distribution. Random sampling of a large state space while adhering to a predefined probability distribution is the predominant use of MCMC algorithms.
The current literature covers a broad spectrum of methodologies for Markov chain synthesis, incorporating both heuristic approaches and optimization-based techniques [4, 5, 6]. Each method provides specialized algorithms tailored to the synthesis of Markov chains in alignment with specific objectives or constraints. Markov chain synthesis plays a central role in probabilistic swarm guidance, which has led to the development of various algorithms incorporating additional transition and safety constraints [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state. The probabilistic guidance algorithm led to the development of numerous Markov chain synthesis algorithms involving specific objectives and constraints [8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
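The Metropolis-Hastings construction referenced above can be sketched for a finite state space (a standard textbook construction, assuming a uniform proposal over allowed transitions; this is an illustration, not the specific guidance algorithm of [7]):

```python
import numpy as np

def mh_markov_matrix(pi, A):
    """Synthesize a row-stochastic Markov matrix M with stationary
    distribution `pi` via Metropolis-Hastings. A[i, j] = 1 means the
    transition i -> j is allowed; A should be symmetric with self-loops."""
    n = len(pi)
    K = A / A.sum(axis=1, keepdims=True)   # uniform proposal over neighbours
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j]:
                # acceptance ratio enforces detailed balance: pi_i M_ij = pi_j M_ji
                M[i, j] = K[i, j] * min(1.0, (pi[j] * K[j, i]) / (pi[i] * K[i, j]))
        M[i, i] = 1.0 - M[i].sum()          # remaining mass stays in place
    return M
```

Because detailed balance holds, `pi` is exactly stationary, and repeated application of `M` drives any initial density distribution toward it.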
Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transitions, which eventually converge to zero as the probability distribution converges to the desired steady-state distribution. Whereas previous time-inhomogeneous Markov chain synthesis algorithms in [14, 15] only provide asymptotic convergence, the DSMC algorithm provides an exponential convergence rate guarantee.
In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
Other learning methods rely on a given template for each class [25] or local neighbourhood encoding to learn a compact representation [39]. The recently conducted SHREC correspondence contest on isometric and non-isometric 3D shapes [20] revealed that there is still room for improvement in both fields.
A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we confirm experimentally in Sec. 5. One exception is the recent work on spectral map synchronisation [31], which builds upon functional maps and is, in principle, well-suited for isometric multi-shape matching. However, although the authors take cycle consistency into account, the respective penalties are only imposed on pairwise functional maps, rather than on the point-wise correspondences. In Sec. 5 we demonstrate that this leads to multi-matchings with large cycle errors.
The multi-matching problem is relatively well-studied for generic settings, e.g. for matching multiple graphs [79, 78, 65, 6, 69, 77], or matching keypoints in image collections [76, 72, 42]. A desirable property of multi-matchings is cycle consistency (which we will formally define in Sec. 3.1).
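When matchings are represented as permutation matrices, cycle consistency over a triplet of shapes can be checked directly (a minimal sketch under that representation; the function names are ours):

```python
import numpy as np

def is_cycle_consistent(P_ab, P_bc, P_ca):
    """Check cycle consistency of point-wise matchings: composing the maps
    around the cycle A -> B -> C -> A must yield the identity.
    P_xy is a permutation matrix mapping points of shape X to shape Y."""
    n = P_ab.shape[0]
    return np.array_equal(P_ca @ P_bc @ P_ab, np.eye(n, dtype=P_ab.dtype))
```

Synchronisation methods enforce exactly this property by factoring each pairwise matching through a common "universe" of points, so every cycle collapses to the identity by construction.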
Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5. From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle consistency is assured already when the multi-matchings are computed. Although some approaches fit into this category [18, 9], none of the existing methods are tailored explicitly towards isometric multi-shape matching in order to take full advantage in this setting.
There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to large problems and only sparse correspondences are obtained. In [18], a game-theoretic formulation for establishing multi-matchings is introduced. Due to the use of a sparse modelling approach, the method also has the disadvantage that only a few points per shape are matched, see Fig. 1. In [29], tensor maps are introduced for synchronising heterogeneous shape collections using a low-rank tensor decomposition formulation. The work [26] presents a self-supervised learning approach for finding surface deformations. A higher-order projected power iteration approach was presented in [9], which was applied to various multi-matching settings, such as multi-image matching or multi-shape matching.
If there exists a polynomial algorithm that tests if a graph G𝐺Gitalic_G is a path graph and returns a clique path tree of G𝐺Gitalic_G when the answer is “yes”, then there exists an algorithm with the same complexity to test if a graph is a directed path graph.
In this section we introduce some results and notation from [1], which give a new characterization of path graphs, summarized in Theorem 6. Indirectly, some of these results allow us to efficiently recognize directed path graphs too (see Section 5 and Theorem 9).
The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prove its correctness, we report some implementation details and we compute its time complexity. Finally, in Section 5 we provide a similar analysis for directed path graphs.
interval graphs $\subset$ rooted path graphs $\subset$ directed path graphs $\subset$ path graphs $\subset$ chordal graphs.
On the side of directed path graphs, to the best of our knowledge, our algorithm is the only one that does not use the results in [4], which give a linear-time algorithm to establish whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary to implement two algorithms to recognize directed path graphs, while we obtain our recognition algorithm for directed path graphs by slightly modifying the recognition algorithm for path graphs.
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the original authors, and they are regarded as the “ground truth” to investigate the performances of Mixed-SLIM methods in this paper.
The ego-networks dataset contains more than 1000 ego-networks from Facebook, Twitter, and GooglePlus. In an ego-network, all the nodes are friends of one central user, and the friendship groups or circles (depending on the platform) set by this user can be used as ground-truth communities. The SNAP ego-networks are open to the public and can be downloaded from http://snap.stanford.edu/data/. They were applied to test the performance of OCCAM (OCCAM, ) after some preprocessing. We obtained the SNAP ego-networks parsed by Yuan Zhang (the first author of the OCCAM method (OCCAM, )). The parsed SNAP ego-networks are slightly different from those used in OCCAM. To get a better sense of what the different social networks look like and how different characteristics potentially affect the performance of our Mixed-SLIM, we report the following summary statistics for each network: the number of nodes $n$, the number of communities $K$, and the proportion of overlapping nodes $r_o$, i.e., $r_o = \frac{\text{number of nodes with mixed membership}}{n}$. We report the means and standard deviations of these measures for each of the social networks in Table 3.
Dolphins: this network consists of frequent associations between 62 dolphins in a community living off Doubtful Sound. In the Dolphins network, a node denotes a dolphin, and an edge stands for companionship dolphins0 ; dolphins1 ; dolphins2 . The network splits naturally into two large groups, females and males dolphins1 ; dolphinnewman , which are seen as the ground truth in our analysis.
The development of the Internet not only changes people’s lifestyles but also produces and records a large amount of network-structured data. Networks are therefore often associated with our life, such as friendship networks and social networks, and they are also essential in science, such as biological networks (2002Food, ), information networks (Newman2004, ) and social networks pizzuti2008ga ; Scoot2014 . To analyze networks, many researchers present them in the form of a graph in which subjects/individuals are represented by nodes, and relationships are measured by the edges, the directions of edges, and the weights fortunato2010community ; fortunato2016community . Some authors consider ‘pure’ networks, in which each node belongs to at most one community/cluster, and in each community the nodes that have similar properties or functions are more likely to be linked with each other than random pairs of nodes (RSC, ; SCORE, ). However, few real-life networks can be deemed ‘pure’. In a network, if some nodes potentially belong to two or more communities at a time, the network is known as a ‘mixed membership’ network (mixedSCORE, ; OCCAM, ; SPACL, ). Compared with pure networks, mixed membership networks are more realistic. In this paper, we focus on the problem of community detection for mixed membership networks.
These works utilize the property that the diffusion process associated with Langevin dynamics in $\mathcal{X}$ corresponds to the Wasserstein gradient flow of the KL-divergence in $\mathcal{P}_2(\mathcal{X})$ (Jordan et al., 1998), and the methods proposed in these works apply various time-discretization techniques.
artifacts adopted only for theoretical analysis. We present the details of such a modified algorithm in Algorithm 2 in §A. Without these modifications, Algorithm 2 reduces to the general method proposed in Algorithm 1, a deterministic particle-based algorithm, which is more advisable for
In addition to gradient-based MCMC, variational transport also shares similarity with Stein variational gradient descent (SVGD) (Liu and Wang, 2016), which is a more recent particle-based algorithm for Bayesian inference. Variants of SVGD have been subsequently proposed. See, e.g.,
In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle. The variational transport algorithm can be viewed as a forward discretization of the Wasserstein gradient flow (Santambrogio, 2017)
Our Contribution. Our contribution is two fold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation. In each iteration, variational transport first solves the variational problem associated with the objective to obtain an estimator of the Wasserstein gradient and then approximately implements Wasserstein gradient descent by pushing the particles.
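The particle-push idea can be illustrated on the simplest case, a linear functional $F(\mu) = \mathbb{E}_\mu[V]$, where the Wasserstein gradient reduces to $\nabla V$ and each particle is pushed independently (a toy sketch of the forward discretization only, not the full variational transport algorithm, which estimates the gradient by solving the dual variational problem):

```python
import numpy as np

def particle_gradient_descent(particles, grad_V, step=0.1, iters=100):
    """Particle approximation of Wasserstein gradient descent for
    F(mu) = E_mu[V]: each particle is pushed along -grad V, i.e. an
    explicit-Euler (forward) discretization of the gradient flow."""
    x = particles.copy()
    for _ in range(iters):
        x = x - step * grad_V(x)   # pushforward by the negative Wasserstein gradient
    return x
```

For a quadratic potential the empirical measure of the particles contracts onto the minimizer, mirroring how the continuous flow decreases the objective functional.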
, i.e., each agent makes decisions on its own. This type of method is usually easy to scale, but may have difficulty achieving globally optimal performance due to the lack of collaboration. To address the problem, another way is to jointly model the actions among learning agents with centralized optimization [16, 15]. However, as the number of agents increases, joint optimization usually leads to dimensional explosion, which has inhibited the widespread adoption of such methods for large-scale traffic signal control. To overcome this difficulty, a third type of method is implemented in a decentralized manner. For example, the methods proposed in [32, 44] directly add neighboring information into the states, and the neighbors’ hidden features are merged into the states in [45, 46, 47, 3]. Compared with them, our method uses neighbor information to form an intrinsic motivation rather than as an additional input to the policy. This makes our method easy to transfer to a new scenario whose neighbor topology may differ from that of the training scenario. In addition, the neighborhood travel time is optimized in [48] as an additional reward. However, simple concatenation of neighboring information is not reasonable enough because the influence of neighboring intersections is not balanced.
To make the policy transferable, traffic signal control is also modeled as a meta-learning problem in [14, 49, 36]. Specifically, the method in [14] performs meta-learning on multiple independent MDPs and ignores the influence of neighbor agents. A data augmentation method is proposed in [49] to generate diverse traffic flows to enhance meta-RL; it also regards agents as independent individuals, without explicitly considering neighbors. In addition, a model-based RL method is proposed in [36] for high data efficiency. However, it may introduce cumulative errors due to errors in the learned environment model, and it is hard to achieve the asymptotic performance of model-free methods. Our method also belongs to the meta-RL paradigm, and its main advantages are two-fold. First, we consider neighbor information during meta-learning, which is critical for multi-agent coordination. Second, our method learns a latent variable to represent task-specific information, which can not only balance exploration and exploitation [50], but also help to learn the shared structures of reward and transition across tasks. As far as we know, our work is the first to propose an intrinsic motivation to enhance the robustness of the policy in traffic signal control. See Appendix F for a brief overview of the above methods.
2) The performances of Individual RL and PressLight drop by 38% and 41% when the model is transferred. This shows that the models learned by the regular RL algorithms indeed rely on the training scenario. MetaLight is more robust to various scenarios than Individual RL and PressLight, which indicates the advantage of the meta-learning framework: it could help to learn a task-shared model. Overall, MetaVIM achieves state-of-the-art performance and only drops by 9% when the model is transferred. The main reason is that the task-specific information is modeled by the latent variable in our method, and the learned policy function can adapt to diverse latent variables. That is, given a novel or unseen task, the task-specific information is represented by the latent variable rather than acting as a distractor. Hence, the latent variable helps to learn the across-task shared policy function better.
We can obtain the following findings: 1) Among these 5 models, the performance of Baseline is the worst. The reason is that it is hard to learn an effective decentralized policy independently in the multi-agent traffic signal control task, where one agent's reward and transition are affected by its neighbors. 2) Compared with the baseline, the improvement of Baseline + $m$ demonstrates the effectiveness of the latent variable $m$. The latent variable not only identifies the POMDP-specific information and helps to learn a POMDP-shared policy network, but also trades off exploration and exploitation during the RL procedure. 3) tran_RS and rew_RS are both effective because each of them encourages stable policy learning. Compared to them, the superiority of MetaVIM indicates that tran_RS and rew_RS are complementary to each other. 4) Overall, all of the proposed components contribute positively to the final model.
In this paper, we propose a novel meta-RL method, MetaVIM, for multi-intersection traffic signal control, which makes the policy learned from a training scenario generalizable to new unseen scenarios. MetaVIM learns a decentralized policy for each intersection that considers neighbor information in a latent way. We conduct extensive experiments and demonstrate the superior performance of our method over the state of the art. We have collected and released more complex scenarios containing different structures (https://github.com/zhuliwen/RoadnetSZ), and will improve the method based on these scenarios in the future. In addition, the use of latent variables in model-based RL for traffic signal control will also be explored to improve sample efficiency.
a curve $\{y=x^{2},\,z=x^{3}\}$ along with three lines, and a surface $\{x^{2}+y^{2}+z^{2}=1\}$ are of dimensions $0$, $1$ and
Let $\mathbf{x}=(x_{1},x_{2},x_{3},x_{4})$ and the mapping
$\{x_{1}=-x_{3},\ x_{2}=-x_{4},\ x_{3}x_{4}=\pm 1,\ t=1\}.$
the cyclic-4 system in $\mathbf{x}=(x_{1},x_{2},x_{3},x_{4})\in\mathbbm{C}^{4}$ with a parameter $t\in\mathbbm{C}$:
obtains a solution $(x_{1},x_{2},x_{3},x_{4},t)$ as
While the standard online framework assumes that the algorithm has no information on the input sequence, a recently emerged and very active direction in machine learning seeks to leverage predictions on the input. More precisely, the algorithm has access to some machine-learned information on the input, which, however, may be erroneous; namely, there is a prediction error $\eta$ associated with it. The objective is to design algorithms that perform well if the prediction is accurate, maintain an efficient competitive ratio if the prediction is highly erroneous (i.e., adversarial), and also exhibit a gentle degradation of the competitive performance as a function of the prediction error.
We first present and analyze an algorithm called ProfilePacking, which achieves optimal consistency and is also efficient when the prediction error is relatively small. The algorithm builds on the concept of a profile set, which serves as an approximation of the items that are expected to appear in the sequence, given the frequency predictions. This is a natural concept that, perhaps surprisingly, has not been exploited in the long history of competitive analysis of bin packing, and which can be readily applied to other online packing problems, such as multi-dimensional packing (?) and vector packing (?), as we discuss in Section 7.
Online bin packing was recently studied under an extension of the advice complexity model in which the advice may be untrusted (?). Here, the algorithm's performance is evaluated only in the extreme cases in which the advice is either error-free or adversarially generated, namely with respect to its consistency and its robustness, respectively. The objective is to find Pareto-efficient algorithms with respect to these two metrics, as a function of the advice size. However, this model is not concerned with the algorithm's performance in typical cases in which the prediction falls into neither of the two extremes, does not incorporate the prediction error into the analysis, and does not consider the learnability aspects of the advice. In particular, even with error-free predictions, the algorithm of (?) has a competitive ratio as large as 1.5, whereas a single bit of error may result in a competitive ratio as large as 6.
Our analysis of ProfilePacking, as stated in Theorem 3, in conjunction with the PAC-learnability of frequency predictions, can help obtain a sampling-based algorithm with an efficient tradeoff between the number of sampled items and its attained competitive ratio. More precisely, consider the setting in which the online algorithm is allowed to observe $s$ items of the request sequence, and we would like to express its (asymptotic) competitive ratio as a function of $s$. Similar types of sampling-based competitive analysis have recently attracted attention in the context of other online problems such as ski rental and prophet inequalities (?), matching (?), and network optimization problems (?).
Following the influential work (?), we refer to the competitive ratio of an algorithm with an error-free prediction as the consistency of the algorithm, and to the competitive ratio with an adversarial prediction as its robustness. Several online optimization problems have been studied in this learning-augmented setting, including caching (?, ?), ski rental and non-clairvoyant scheduling (?, ?), makespan scheduling (?), rent-or-buy problems (?, ?, ?),
Finally, we empirically show that the proposed framework produces high-fidelity and watertight meshes. This means it solves the initial problem of disjoint patches occurring in the original AtlasNet (Groueix et al., 2018). To evaluate the continuity of output surfaces, we propose the following metric.
In this experiment, we set $N=10^{5}$. Using more rays had a negligible effect on the output value of $WT$ but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFlow (HF). We show the obtained results in Table 3. Note that AtlasNet cannot produce watertight meshes for any of the classes, limiting its applicability. On the other hand, LoCondA creates meshes where all sampled rays pass the test.
The above formulation alone causes many of the produced patches to have unnecessarily long edges, which the network folds so that the patch fits the surface of an object. To mitigate this issue, we add an edge length regularization motivated by (Wang et al., 2018). If we assume that the reconstructed mesh has the form of a graph $M=(V,E)$ with edges $E$, then the term is defined as follows:
To leverage that knowledge, we express watertightness as the ratio of rays that pass the parity test to the total number of cast rays. Firstly, we sample $N$ points $p\in\hat{S}$ from all triangles of the reconstructed object $\hat{S}$. Since each point is associated with the triangle it was sampled from, we use the corresponding normal $\hat{n}$ of its triangle and negate it to obtain the direction of a ray $R(\hat{S})\ni r=-\hat{n}p$ towards the object. Then, we calculate the number of crossings $c(r)$ with all triangles. For each ray, we set 1 if it passes the test and 0 otherwise. We sum the test results over all rays and divide by the number of rays to obtain the watertightness ($WT$) measure, which we formulate as:
Watertightness. Typically, a mesh is referred to as being either watertight or not watertight. Since this is a true-or-false statement, there is no well-established measure of the degree of discontinuities in the object's surface. To fill this gap, we propose a metric based on a simple, approximate check of whether a mesh is watertight: the parity test. The test says that any ray cast from infinity towards the object has to enter and leave the object. It is realized by checking whether the number of a ray's crossings with all triangles in the mesh is an even number. If so, the ray is said to pass the parity test. The mesh is watertight if all rays pass the test.
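The parity test and the resulting $WT$ measure can be sketched as follows (a minimal pure-Python sketch with a hand-picked ray; the paper casts rays sampled from the reconstructed surface, and all helper names are ours):

```python
def _sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def _cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def _dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection test.
    v0, v1, v2 = tri
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    h = _cross(direction, e2)
    a = _dot(e1, h)
    if abs(a) < eps:
        return False              # ray parallel to the triangle plane
    f = 1.0 / a
    s = _sub(origin, v0)
    u = f * _dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = _cross(s, e1)
    v = f * _dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * _dot(e2, q) > eps  # hit lies in front of the origin

def watertightness(triangles, rays):
    # WT = fraction of rays whose crossing count with the mesh is even.
    passed = sum(
        sum(ray_hits_triangle(o, d, t) for t in triangles) % 2 == 0
        for o, d in rays)
    return passed / len(rays)

# Closed tetrahedron: a ray through it crosses twice (even); removing a
# face leaves a single crossing, so the parity test fails.
A, B, C, D = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
closed = [(A, B, C), (A, B, D), (A, C, D), (B, C, D)]
ray = ((0.2, 0.2, -5.0), (0.0, 0.0, 1.0))
```

Here `watertightness(closed, [ray])` gives 1.0, while dropping the last face gives 0.0, matching the intuition that a hole makes the mesh non-watertight.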
The Mirror-Prox algorithm can be performed in a decentralized manner; however, it was not known whether its optimality is preserved. In this paper, we prove that Mirror-Prox remains optimal even in the decentralized case with respect to the dependence on the desired accuracy $\varepsilon$ and the condition number $\chi$ of the communication network, provided we split the communication and oracle complexities via the Chebyshev acceleration trick (see, e.g., [37]).
We demonstrate the performance of the DMP algorithm on network architectures with different condition numbers $\chi$: a complete graph, a star graph, a cycle graph, and Erdős-Rényi random graphs with edge-creation probabilities $p=0.5$ and $p=0.4$ (random seed 10). As the true barycenter of Gaussian measures can be calculated theoretically [14], we use it to study the convergence of DMP in terms of the non-optimality gap.
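For reference, in the one-dimensional case the true Wasserstein-2 barycenter of Gaussians has a simple closed form, $\mathcal{N}\big(\sum_i w_i m_i,\ (\sum_i w_i\sigma_i)^2\big)$; a minimal sketch of this scalar case (the multivariate barycenter used in such experiments instead requires a fixed-point iteration on the covariances):

```python
def gaussian_barycenter_1d(means, stds, weights):
    """Closed-form W2 barycenter of 1-D Gaussians: N(sum w*m, (sum w*s)^2).

    Scalar case only; a sketch for sanity-checking convergence plots.
    """
    m = sum(w * mu for w, mu in zip(weights, means))
    s = sum(w * sd for w, sd in zip(weights, stds))
    return m, s

# Equal-weight barycenter of N(0, 1) and N(2, 9): mean 1.0, std 2.0.
mb, sb = gaussian_barycenter_1d([0.0, 2.0], [1.0, 3.0], [0.5, 0.5])
```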
Paper organization. This paper is organized as follows. Section 2 presents the saddle point problem of interest along with its decentralized reformulation. In Section 3, we provide the main algorithm of the paper for solving such problems. In Section 4, we present lower complexity bounds for saddle point problems without individual variables. Finally, in Section 5, we show how the proposed algorithm can be applied to the problem of computing Wasserstein barycenters.
Finally, we show how the proposed method can be applied to the prominent problem of computing Wasserstein barycenters, tackling the instability of regularization-based approaches under small values of the regularization parameter. The idea is based on the saddle point reformulation of the Wasserstein barycenter problem (see [17]). Wasserstein barycenters, which define the mean of objects that can be modeled as probability measures on a metric space (images, texts, videos), are used in many fields including Bayesian computations [55], texture mixing [50], clustering ($k$-means for probability measures) [13], shape interpolation and color transfer [53], statistical estimation of template models [10], and neuroimaging [25].
We proposed a decentralized method for saddle point problems based on the non-Euclidean Mirror-Prox algorithm. Our reformulation moves the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. After that, we employ the Mirror-Prox algorithm and bound the norms of the dual variables at the solution to support the theoretical analysis. Finally, we demonstrate the effectiveness of our approach on the problem of computing Wasserstein barycenters (both theoretically and numerically).
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the strictly fundamental class context. In more concrete terms this problem is equivalent to finding the cycle basis with the sparsest cycle matrix. In [5] a unified perspective of the problem is presented. The authors show that the MCB problem is different in nature for each class. For example in [10] a remarkable reduction is constructed to prove that the MCB problem is NP-hard for the strictly fundamental class, while in [11] a polynomial time algorithm is given to solve the problem for the undirected class. Some applications of the MCB problem are described in [5, 11, 10, 12].
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of interesting properties, and a conjecture in the slightly general case of a graph (not necessarily complete) that admits a star spanning tree. Section 5 explores programmatically the space of spanning trees to provide evidence that the conjecture is well posed. Section 6 collects the conclusions of the article.
where $\hat{L}=\hat{D}^{t}\hat{D}$ is the lower-right $(|V|-1)\times(|V|-1)$ submatrix of the Laplacian matrix of $G$ and $\hat{\Gamma}=\Gamma^{t}\Gamma$ is the cycle intersection matrix of $B$. The same question can be formulated in this setting: how can we choose $B$ such that its corresponding cycle intersection matrix $\hat{\Gamma}$ is as sparse as possible? In the particular case where the cycle basis is a strictly fundamental cycle basis, namely a cycle basis induced by a spanning tree, this is precisely the MSTCI problem.
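A concrete sketch of the cycle intersection matrix (our own illustration with unsigned incidence; the actual $\Gamma$ is a signed cycle-edge matrix): for $K_4$ with the star spanning tree centered at vertex 0, each non-tree edge closes a triangle through 0, and $\hat{\Gamma}$ records cycle lengths on the diagonal and shared tree edges off it.

```python
from itertools import combinations

def fundamental_cycles(n, tree_edges):
    """Fundamental cycles of K_n w.r.t. a spanning tree, as edge sets.

    Sketch: assumes the tree is a star centered at vertex 0, so each
    non-tree edge (u, v) closes the triangle {(0, u), (0, v), (u, v)}.
    """
    all_edges = set(combinations(range(n), 2))
    return [{(0, u), (0, v), (u, v)}
            for (u, v) in sorted(all_edges - set(tree_edges))]

def intersection_matrix(cycles):
    # Unsigned analogue of Gamma^t * Gamma: entry (i, j) counts shared edges.
    return [[len(ci & cj) for cj in cycles] for ci in cycles]

star = [(0, 1), (0, 2), (0, 3)]  # star spanning tree of K_4
gamma_hat = intersection_matrix(fundamental_cycles(4, star))
# diagonal entries = cycle lengths (3), off-diagonal = one shared tree edge
```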
$(m+1)$-tuples of $\mathcal{F}$ with nonempty intersection. In other words, $\pi_{m+1}(\mathcal{F})$ is at least $\delta'\stackrel{\text{def}}{=}\rho/\binom{mt}{m+1}$, where $\rho$ depends only on $m$, $t$, and $\delta$, that is, on $m$, $b$, $K$ and $\delta$. That concludes the proof.
a positive fraction of the $m$-tuples to have a nonempty intersection, where for $\dim K>1$, $m$ is some hypergraph Ramsey number depending on $b$ and $K$. So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the $(\mu(K)+1)$-tuples intersect, then a positive fraction of the $m$-tuples intersect. This follows from successive applications of Theorem 1.2. (Note that [35, Theorem 2.3] still needs to be proven independently to provide a stopping point for the successive applications of Theorem 1.2; also, the implicit bound given by the proof of [35, Theorem 2.3] on the constant $\beta$ changes in the process.)
Lemma 4.6 assumes that the $m$-colored family $\mathcal{F}$ has the property that for $0\leq j<\dim K$ and for every colorful subfamily $\mathcal{G}$ of $\mathcal{F}$, the $j$th reduced Betti number $\tilde{\beta}_{j}\left(\bigcap_{F\in\mathcal{G}}F\right)$ is strictly less than $b$. A careful inspection of the proof reveals that this assumption is only used in the induction step, for the definition of the labeling $h$ in Equation (7). When proving that $(P_{\ell})$ implies $(P_{\ell+1})$, the face $\sigma$ appearing in Equation (7) is $(\ell+1)$-dimensional, so
If we use Lemma 4.8 in place of Lemma 4.6 in the proof of Theorem 2.1, the hypothesis on the $m$-colored family $\mathcal{F}$ can be weakened. This “improved” Theorem 2.1 can in turn be applied in the proof of Theorem 1.2, yielding the following:
The rest of Section 4.1 is devoted to the proof of Lemma 4.2. The proof first handles the case $k=m$, and then uses it to prove the case $k<m$. Note that for $k>m$ the lemma is trivial, as the chain group contains only the trivial chain and we can take $N=\ell$.
Figure 1: Selecting important features, transforming them, and generating new features with FeatureEnVi: (a) the horizontal beeswarm plot for manually slicing the data space (which is sorted by predicted probabilities) and continuously checking the migration of data instances throughout the process; (b) the table heatmap view for the selection of features according to feature importances calculated from automatic techniques; (c) the radial tree providing an overview of the features with statistical measures for the different groups of instances, as set by the user-defined data slices; (d) the graph visualization for the detailed exploration of features, their transformation, and comparison between two or three features for feature generation purposes; and (e) the punchcard for tracking the steps of the process and the grouped bar chart for comparing the current vs. the best predictive performance based on three validation metrics.
Automatic feature transformation has been examined within the ML community with positive results in reinforcement learning. In the work by Khurana et al. [1], the authors conduct a performance-driven exploration of a transformation graph which systematically enumerates the space of given options. A single “best” measurement is not possible, however, since the options might conflict with each other. In contrast, FeatureEnVi focuses on classification problems and presents users with various statistics about each feature in four different slices of the data space (the ones considered in Section 2.1, along with the variance inflation factor and per-class correlation). This is similar to ExplainExplore [37] for classification and HyperMoVal [38] for regression. The authors of these tools work with the probabilities of instances belonging to different classes or clusters depending on the features. In our case, we take into account the ground truth values of the target variable and compute the probability (0% to 100%) of the ML model classifying each instance correctly.
Workflow. All experts commented that the workflow of FeatureEnVi is straightforward, because it is mainly linear despite involving optional iterative steps. E2 stated that feature engineering is usually very time-consuming, especially without the support of a system like ours. E3 also agreed that the features have an important influence on the quality of the predictive model and affect the generalization ability of the final ML model. Furthermore, he noticed that it is difficult to judge how each feature should be engineered when there is contradicting statistical evidence. Without FeatureEnVi, it would have been risky to make a deterministic decision. The connection between the features present in the radial tree visualization and the instances’ reallocation in the data space at the top of FeatureEnVi helps to identify impactful features (as highlighted by E1). Hence, it is up to users to understand which features matter more for the subspaces (locally) and the entire data space (globally), which provides transparency and enhances the trustworthiness of the feature engineering process, as outlined by E1 and E2. Although FeatureEnVi works better with a limited number of features (e.g., 41 for the case study in Section 5), E2 suggested that a prephase with an AutoML system [100] or a DR algorithm [101] could be a solution, if used to set specific boundaries by investigating the relations between features.
In machine learning (ML), classification is a type of supervised learning where the primary goal is to predict the dependent variable—also known as the target or class label—of every data instance (e.g., rows in a table) given independent features of the data (e.g., columns in a table). Feature engineering is the process of converting raw data into a set of features that better expresses the underlying problem, resulting in powerful ML models with enhanced predictive performance based on validation metrics [1]. In practice, most of the time spent in ML is in preparing this set of features, which should be as concise as possible while retaining vital information about the data set [2, 3]. If complications occur in this step and remain undetected, they can spoil the later phases of the ML pipeline, according to the classic “garbage in, garbage out” principle. Another important reason why feature engineering is essential in real-world problems is that it increases the transparency and trustworthiness of the data and, in consequence, the ML process in general [4]. This matter has been drawing attention recently with, for example, the new European General Data Protection Regulation (GDPR) instructions [5]. Furthermore, domain experts are increasingly requesting clear evidence in order to trust in ML [6].
Figure 3: Exploration of features with FeatureEnVi. The default slicing thresholds for the data space separate the instances into four quadrants that represent intervals of 25% predicted probability (see (a.1–a.4)). View (b) presents a table heatmap with five different feature selection techniques and their average value per feature. We exclude the less contributing features, as shown in the duplicated view (c). In the radial tree, the paths from (d.1) to (d.4) are the features for the groups formed at (a.1–a.4), respectively, while the features’ impact for the entire data set is shown in the red box. The whole data space is displayed with even more details in the graph visualization in (e), where additional metrics’ results are reported. A summary of the meaning of the visual encodings for these metrics is visible in the top-left corner in (e). More details about these views are described in the text.
As expected, adding the global tracking error constraint increases the traversal time, but keeps the maximal deviation within the bounds (see the table in 5). This tracking error constraint results in a dramatic 5-fold decrease of the maximum deviation $\|\hat{e}_{c}\|_{\infty}$, at the cost of increasing the traversal time by only 10%, and the traversal time is still better than the one achieved with a manually tuned MPCC. The constraints on the jerk successfully avoid large jumps in the acceleration.
For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using a Latin hypercube design of experiments. The BO progress is shown in Figure 5, right panel, for the optimization with constraints on the jerk and on the tracking error. After the initial learning phase, the algorithm quickly finds the region where the simulation is feasible with respect to the constraints. The confidence interval in the cost prediction narrows for the infinity-shaped trajectory, which is likely due to a clearer minimum in the cost for this geometry. The optimization stops after a fixed number of iterations is reached, and the parameters are set to those corresponding to the best observed cost.
To reduce the number of times this experimental “oracle” is invoked, we employ Bayesian optimization (BO) [16, 17], which is an effective method for controller tuning [13, 18, 19] and optimization of industrial processes [20]. The constrained Bayesian optimization samples and learns both the objective function and the constraints online and finds the global optimum iteratively. For this, it uses Gaussian process regression [21] to build a surrogate for the objective and the constraints and to quantify the uncertainty. We explain the details of this iterative procedure in the remainder of this section.
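The iterative procedure can be sketched as follows (a toy one-dimensional sketch under our own simplifications: GP surrogates for both cost and constraint, with a lower-confidence-bound acquisition restricted to points the constraint surrogate predicts feasible, rather than the full constrained-acquisition machinery):

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # RBF (squared-exponential) kernel matrix between 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-4):
    # Standard GP regression posterior mean / std with an RBF kernel.
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    mu = Ks.T @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def constrained_bo(f, g, n_init=5, n_iter=15, seed=0):
    # Surrogates for the cost f and the constraint g(x) <= 0; each step
    # queries the point minimizing a lower confidence bound over the
    # predicted-feasible part of a candidate grid.
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)
    X = rng.uniform(0.0, 1.0, n_init)              # initial design
    for _ in range(n_iter):
        mu_f, sd_f = gp_posterior(X, f(X), grid)
        mu_g, _ = gp_posterior(X, g(X), grid)
        acq = np.where(mu_g <= 0.0, mu_f - 2.0 * sd_f, np.inf)
        X = np.append(X, grid[np.argmin(acq)])     # query the "oracle"
    feas = g(X) <= 0.0
    return X[feas][np.argmin(f(X)[feas])]

# Toy problem: minimize (x - 0.3)^2 subject to x >= 0.25.
best = constrained_bo(lambda x: (x - 0.3) ** 2, lambda x: 0.25 - x)
```

The returned parameter is the best feasible point observed, mirroring the stop-and-pick-best rule described above.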
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combination of the identified system model with the contouring terms. In our approach the tracking error is coupled with the progression along the path through the cost function. The automated tuning of the parameters is performed using a cost that accounts for the global performance over the whole trajectory. Additional constraints in the Bayesian optimization algorithm allow for balancing traversal time, accuracy, and minimization of oscillations, according to the specific crucial requirements of the application. We demonstrate enhanced performance in simulation for a 2-axis gantry, for geometries of different nature.
MPC accounts for the real behavior of the machine, and the axis drive dynamics can be excited to compensate for the contour error to a large extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following various optimization methods, including MPC, feed-forward PID control strategies, or iterative learning control [6, 7], where friction- or vibration-induced disturbances can be corrected. In MPC, closed-loop performance is pushed to the limits only if the plant under control is accurately modeled; otherwise, the performance degrades due to imposed robustness constraints. Instead of adapting the controller for the worst-case scenarios, the prediction model can be selected to provide the best closed-loop performance by tuning the parameters in the MPC optimization objective for maximum performance [8, 9, 10]. Using Bayesian optimization-based tuning for enhanced performance has been further demonstrated for cascade controllers of linear axis drives, where data-driven performance metrics have been used to specifically improve the traversal time and the tracking accuracy while reducing vibrations in the system [11, 12]. The approach has been successfully applied to linear and rotational axes embedded in grinding machines and shown to standardize and automate the tuning of multiple parameters [13].
Our study demonstrates that systems are highly sensitive to the tuning distribution, that explicit methods cannot handle multiple bias sources, and that more rigorous analysis of bias mitigation algorithms is critical for future progress. Based on our results, we argue that the community should focus on implicit methods rather than explicit ones, not only because explicit methods require additional annotations, but also because they perform worse in realistic settings.
Interestingly, MMD was low for digit position. We hypothesize this is because CNNs are unable to use position information for inference [42]. To confirm this, we add CoordConv layers [42] before and after the maxpooling layer in CNN to enable usage of position information. This resulted in methods exploiting digit position too, showing larger MMD values of 11.1%-25.6% as compared to the 2.2%-8.7% without the CoordConv layers. Such inductive biases affect whether or not methods exploit certain dataset biases, and we discuss this in Sec. 7.
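The CoordConv idea referenced here amounts to augmenting the input with explicit coordinate channels; a minimal numpy sketch of that augmentation (our own illustration; the published layer also offers an optional radial channel):

```python
import numpy as np

def add_coord_channels(x):
    """Append normalized row/column coordinate channels to an image batch.

    x: array of shape (batch, channels, height, width).  Sketch of the
    coordinate-channel augmentation behind CoordConv [42].
    """
    b, _, h, w = x.shape
    rows = np.linspace(-1.0, 1.0, h)[None, None, :, None]  # (1, 1, h, 1)
    cols = np.linspace(-1.0, 1.0, w)[None, None, None, :]  # (1, 1, 1, w)
    row_ch = np.broadcast_to(rows, (b, 1, h, w))
    col_ch = np.broadcast_to(cols, (b, 1, h, w))
    return np.concatenate([x, row_ch, col_ch], axis=1)

out = add_coord_channels(np.zeros((2, 3, 8, 8)))  # two extra channels
```

A convolution applied after this augmentation can read off absolute position, which is exactly why the augmented models became able to exploit the digit-position bias.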
Methods are typically highly sensitive to hyperparameter choices, and papers report numbers on systems in which the hyperparameters were tuned using the test set distribution [18, 50, 64]. In the real world, biases may stem from multiple factors and may change in different environments, making this setup unrealistic. Furthermore, tuning on the test distribution can lead to methods that are right for the wrong reasons. When this is done, systems can perform well just by exploiting the biases they are supposed to overcome [62, 64], and they will then fail once deployed because they have not really learned to solve the task.
Results. In Fig. 3(a), we present the MMD boxplots for all bias variables, comparing cases where the label of the variable is either explicitly specified (explicit bias) or kept hidden (implicit bias) from the methods. Barring digit position, we observe that the MMD values are higher when the variables are not explicitly labeled for the methods, indicating that the explicit methods in general fail to mitigate implicit biases. Fig. 3(b) breaks down the exploitation of explicit and implicit biases for each method. UpWt, GDRO and RUBi have low MMD values for explicit biases, but high MMD values for implicit biases, showing that they mitigate the explicit biases to some extent, but are not robust to the implicit biases. LNL and IRMv1 seem to be equally affected by both explicit and implicit biases, and thus fail to improve upon the baseline as previously shown in Table 1. LFF has a relatively low range of MMDs and, as shown by the improvements in Table 1, the method outperforms others on Biased MNIST.
An interesting observation is that a weaker architecture, the plain CNN, was able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias, resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue for future work is to incorporate appropriate inductive biases into the architectures, perhaps endowing them with the ability to choose the minimal computational power needed for a task so that they are less sensitive to unwanted biases. This would essentially let algorithms apply Occam's razor, determining the minimal capabilities required for a task and thereby reducing their ability to exploit biases.
Some works decompose gaze into multiple related features and construct multi-task CNNs to estimate these features. Yu et al. introduce a constrained landmark-gaze model for jointly modeling eye landmark locations and gaze directions [119]. As shown in Fig. 9, they build a multi-task CNN to estimate the coefficients of the landmark-gaze model as well as the scale and translation information needed to align eye landmarks. Finally, the landmark-gaze model serves as a decoder to compute gaze from the estimated parameters. Deng et al. decompose the gaze direction into eyeball movement and head pose [76]. They design a multi-task CNN to estimate the eyeball movement from eye images and the head pose from facial images; the gaze direction is then computed from the two via a geometric transformation. Wu et al. propose a multi-task CNN that simultaneously segments the eye parts, detects the IR LED glints, and estimates the pupil and cornea centers [123].
Different types of input have been explored for feature extraction. Kellnhofer et al. directly extract features from facial images [43]. Zhou et al. combine features extracted from facial and eye images [84]. Palmero et al. use facial images, binocular images, and facial landmarks to generate the feature vectors [79]. Different RNN structures have also been explored, such as GRUs [77] in [79], LSTMs [130] in [84], and bi-LSTMs [128] in [43]. Cheng et al. leverage a recurrent CNN to improve gaze estimation from static images rather than videos [56]. They generalize gaze estimation as a sequential coarse-to-fine process and use a GRU to relate the basic gaze direction estimated from facial images to the gaze residual estimated from eye images.
Recasens et al. present an approach for following gaze in video by predicting where a person (in the video) is looking, even when the object is in a different frame [124]. They build a CNN to predict the gaze location in each frame, along with the probability that each frame contains the gazed-at object. Visual saliency also shows a strong correlation with human gaze in scene images [125, 126]. In [127], the authors estimate general visual attention and individuals' gaze directions in images at the same time. Kellnhofer et al. propose a temporal 3D gaze network [43]: they use a bi-LSTM [128] to process a sequence of 7 frames to estimate not only gaze direction but also gaze uncertainty.
Temporal information from videos also contributes to better gaze estimates. Recurrent neural networks (RNNs), such as long short-term memory (LSTM) networks, have been widely used for video processing [43, 84]. As shown in Fig. 5, these methods usually use a CNN to extract features from the face image at each frame and then feed the per-frame features into an RNN.
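The CNN-then-RNN pattern of Fig. 5 can be sketched with a single minimal GRU cell applied to a sequence of pre-extracted per-frame feature vectors. This is an illustrative numpy sketch only; the feature dimension, hidden size, and the final linear head to a (yaw, pitch) output are all assumptions, not the cited architectures.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, W, U, b):
    # Standard GRU update; W, U, b hold update/reset/candidate parameters.
    z = sigmoid(x @ W["z"] + h @ U["z"] + b["z"])        # update gate
    r = sigmoid(x @ W["r"] + h @ U["r"] + b["r"])        # reset gate
    n = np.tanh(x @ W["n"] + (r * h) @ U["n"] + b["n"])  # candidate state
    return (1 - z) * h + z * n

rng = np.random.default_rng(0)
feat_dim, hid = 8, 4  # per-frame CNN feature size and hidden size (illustrative)
W = {k: rng.normal(0, 0.1, (feat_dim, hid)) for k in "zrn"}
U = {k: rng.normal(0, 0.1, (hid, hid)) for k in "zrn"}
b = {k: np.zeros(hid) for k in "zrn"}

h = np.zeros(hid)
for frame_feat in rng.normal(0, 1, (7, feat_dim)):  # a 7-frame sequence, as in [43]
    h = gru_step(h, frame_feat, W, U, b)
gaze = h @ rng.normal(0, 0.1, (hid, 2))  # linear head to a 2D gaze angle
```

In practice the CNN features, the recurrent cell, and the regression head are trained jointly end-to-end; the sketch only shows the data flow.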
Human gaze has a strong correlation with eye appearance: even a minor change in gaze direction results in noticeable changes in eye appearance. For instance, when the eyeball rotates, the position of the iris and the shape of the eyelid change accordingly. This relationship enables gaze estimation based on the visual features of the eyes. Conventional methods typically estimate gaze from high-dimensional raw image features [21, 51], obtained by raster-scanning all the pixels of the eye images; the resulting representation contains a significant amount of redundancy. Moreover, such features are highly sensitive to environmental changes, which makes accurate gaze estimation challenging.
This deep quantization technique has several advantages. It ensures a lightweight representation that makes real-world masked face recognition feasible. Moreover, the masked regions vary from one face to another, which leads to informative images of different sizes; the proposed deep quantization can classify images of different sizes and thus handles this issue. Besides, the Deep BoF approach uses a differentiable quantization scheme that enables simultaneous training of the quantizer and the rest of the network, instead of using fixed quantization merely to minimize the model size passalis2017learning . It is worth noting that our proposed method does not need to be trained on the missing region after removing the mask; it instead improves the generalization of face recognition in the presence of masks during the coronavirus pandemic.
The next step is to apply a cropping filter in order to extract only the non-masked region. To do so, we first normalize all face images to 240 × 240 pixels. Next, we partition the face into blocks: the image is divided into 100 fixed-size square blocks (24 × 24 pixels in our case). We then keep only the blocks covering the non-masked region (blocks 1 to 50) and discard the rest, as presented in Fig. 3.
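The block-cropping step above can be sketched in a few lines of numpy. The assumption here, consistent with the description, is that blocks are numbered row-major, so blocks 1–50 correspond to the top half of the normalized face (forehead and eyes); the function name is illustrative.

```python
import numpy as np

def crop_unmasked(face, block=24, keep=50):
    # Split a normalized 240x240 face into 100 blocks of 24x24 (row-major
    # order) and keep only the first `keep` blocks, i.e. the top half.
    h, w = face.shape[:2]
    blocks = [face[r:r + block, c:c + block]
              for r in range(0, h, block)
              for c in range(0, w, block)]
    return blocks[:keep]

face = np.arange(240 * 240, dtype=float).reshape(240, 240)
kept = crop_unmasked(face)
assert len(kept) == 50 and kept[0].shape == (24, 24)
```

Reassembling the 50 kept blocks recovers exactly the top 120 rows of the image, which is the region fed to the feature extractor.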
Experimental results are carried out on the Real-world Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region: we apply a cropping filter to retain only the informative regions of the masked face (i.e., forehead and eyes). Next, we describe the selected regions using a pre-trained deep learning model as a feature extractor. This strategy is more suitable for real-world applications than restoration approaches. Recently, some works have applied supervised learning to restore the missing regions, as in din2020novel ; this, however, is a difficult and highly time-consuming process.
To tackle these problems, we distinguish two different tasks, namely face mask recognition and masked face recognition. The first checks whether a person is wearing a mask or not; it can be applied in public places where masks are compulsory. Masked face recognition, on the other hand, aims to recognize a face with a mask based on the eye and forehead regions. In this paper, we handle the second task using a deep learning-based method: we use a pre-trained deep model to extract features from the unmasked facial regions (outside the mask region). It is worth noting that the occlusions in our case occur in only one predictable facial region (the nose and mouth), which is a useful constraint for handling the problem efficiently.
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source library introduced in king2009dlib . According to the eye locations, we apply a 2D rotation to make them horizontal as presented in Fig. 2.
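Once the eye landmarks are available from Dlib, the rotation correction reduces to rotating about the midpoint between the eyes by the angle of the inter-eye vector. A minimal numpy sketch of that rotation step follows; the eye coordinates are hypothetical, and obtaining the 68 landmarks themselves is left to Dlib as in the text.

```python
import numpy as np

def align_by_eyes(left_eye, right_eye, points):
    # Rotate points about the eye midpoint so the eyes become horizontal.
    dx, dy = np.subtract(right_eye, left_eye)
    theta = -np.arctan2(dy, dx)              # angle needed to undo the tilt
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    center = (np.asarray(left_eye) + np.asarray(right_eye)) / 2
    return (np.asarray(points) - center) @ R.T + center

left, right = (80.0, 100.0), (160.0, 120.0)  # tilted eye centers (hypothetical)
rotated = align_by_eyes(left, right, [left, right])
assert abs(rotated[0][1] - rotated[1][1]) < 1e-9  # eyes now share a y-coordinate
```

The same rotation would be applied to the full image (e.g. with an affine warp) before the block cropping of Fig. 3.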
$F\in[\langle *,x\rangle \Rightarrow P(x) : \phi \Rightarrow \mathscr{A}] \triangleq$ if $\cdot;\cdot\vdash\phi$, then $F,\operatorname{proc} a\,(P(a))\in\llbracket a:\mathscr{A}\rrbracket$ for $a$ fresh in $F$.
The first rule for →→\to→ corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned P𝑃Pitalic_P. Concurrently, Q𝑄Qitalic_Q may read from said new cell, which blocks if it is not yet populated. The third and fourth rules resolve principal cuts by passing a value to a continuation, whereas the fifth one resolves definition calls. Lastly, the final two rules perform the action of writing to a cell.
Positive semantic types are defined by intension—the contents of a particular cell—whereas negative semantic types are defined by extension—how interacting with a continuation produces the desired result. Analogously for the λ𝜆\lambdaitalic_λ-calculus, the semantic positive product is defined as containing pairs of terminating terms, whereas the semantic function space contains all terms that terminate under application [GTL89, AP16]. Now, to state the compatibility lemmas, we need to define the semantic typing judgment.
For space, we omit the process terms. Of importance is the instance of the call rule for the recursive call to eat: the check $i-1<i$ verifies that the process terminates, and the loop $[(i-1)/i][z/x]D$ “ties the knot” on the typechecking process. Mutually recursive programs, then, are checked by circular typing derivations that are mutually recursive in the metatheory.
With these compatibility lemmas in hand, we are almost ready to construct a correspondence between the syntactic typing of processes and configuration objects with the semantic typing thereof. First, we need a semantic interpretation of (syntactic) types.
First, the owner requires that the cloud not be able to obtain the plaintext about the media content and the LUTs, and that access to the media content is controlled by his/her authorization. Second, the owner asks for significant overhead savings from cloud media sharing. Third, the owner demands traitor tracing of users who violate copyright.
The threats considered in this paper come from three entities: the users, the owner, and the cloud. First, users are assumed to be malicious: they could illegally redistribute the owner’s media content in the hope that this behavior goes undetected. Second, the owner is also assumed to be malicious: he/she may try to obtain the users’ fingerprints and maliciously embed them into media content to frame honest users for copyright infringement. Third, the cloud is assumed to be honest-but-curious, as in other privacy-preserving cloud media sharing schemes based on ABE or PRE [7, 21, 23, 29, 15]. Although the honest-but-curious cloud faithfully performs its assigned duties, it could try to steal the plaintext of the owner’s media content, and it is also curious about other information it encounters, including the users’ fingerprints and the LUTs. Finally, as in [28], we assume that there may be collusion among individual users and between the owner and the cloud, but no collusion between users and the cloud.
Implement privacy-preserving access control. On the one hand, the cloud should be prevented from obtaining the private plaintext of the data it encounters, including the owner’s media content, the users’ fingerprints, and the LUTs. On the other hand, only users authorized by the owner can access the media content.
Users. Users want to access the owner’s media content. To this end, users request authorization from the owner, for example by paying for purchases. If successful, users can get the desired shared media content from the cloud. Users require that the plaintext of their fingerprints not be accessed by the owner or the cloud, to prevent malicious framing by the owner.
This section presents an empirical investigation of the performance of GraphFM on two CTR benchmark datasets and a recommender system dataset. The experimental settings are described, followed by comparisons with other state-of-the-art methods. An ablation study is also conducted to verify the importance of each component of the model and evaluate its performance under different hyperparameter settings. Finally, the question of whether GraphFM can provide interpretable explanations for its predictions is examined.
Our proposed GraphFM achieves the best performance among all four classes of methods on the three datasets. The improvement of GraphFM over the first three classes of methods (A, B, C) is especially significant, above the $\mathbf{0.01}$ level. The aggregation-based methods, including InterHAt, AutoInt, Fi-GNN, and our GraphFM, consistently outperform the other three classes of models, which demonstrates the strength of the aggregation strategy in capturing high-order relations. Compared with the strong aggregation-based baselines AutoInt and Fi-GNN, GraphFM still advances the performance by a large margin, especially on the MovieLens-1M dataset. The improvements on the other two datasets are at the $\mathbf{0.001}$ level, which can be regarded as significant for the CTR prediction task Cheng et al. (2016); Guo et al. (2017); Song et al. (2019). Such improvement can be attributed to the combination with FM, which introduces feature interaction operations, and to the interaction selection mechanism, which selects and models only the beneficial feature interactions. GraphFM outperforms the compared baselines by the largest margin on MovieLens-1M, whose feature size is the smallest among the three datasets; we conjecture this is because the feature embedding size is not large enough for the other two datasets.
Since our proposed approach selects the beneficial feature interactions and models them in an explicit manner, it has high efficiency in analyzing high-order feature interactions and thus provides rationales for the model outcome. Through extensive experiments conducted on CTR benchmark and recommender system datasets, we verify the rationality, effectiveness, and interpretability of our proposed approach.
Our experiments are conducted on three real-world datasets: two CTR benchmark datasets and one recommender system dataset. Details of these datasets are given in Table 1. The data preparation follows the strategy in Tian et al. (2023): we randomly split all instances 8:1:1 for training, validation, and testing. We adopt the two most popular metrics, AUC and Logloss, to measure how far the predicted probabilities diverge from the ground truth.
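For reference, both metrics have short closed forms: AUC is the probability that a randomly chosen positive is scored above a randomly chosen negative, and Logloss is the negative mean log-likelihood of the labels. A minimal numpy sketch (the toy labels and scores are illustrative):

```python
import numpy as np

def auc(y_true, y_score):
    # Rank-statistic form of AUC: P(score of a positive > score of a
    # negative), counting ties as 1/2.
    pos = y_score[y_true == 1][:, None]
    neg = y_score[y_true == 0][None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def logloss(y_true, y_prob, eps=1e-15):
    # Negative mean log-likelihood; probabilities clipped for stability.
    p = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 0, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.4])
assert auc(y, p) == 1.0  # every positive outranks every negative
assert logloss(y, p) < logloss(y, 1 - p)
```

Higher AUC and lower Logloss are better; the $\mathbf{0.001}$-level gains discussed above are measured on these two quantities.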
(2) By treating features as nodes and their pairwise feature interactions as edges, we bridge the gap between GNN and FM, and make it feasible to leverage the strength of GNN to solve the problem of FM. (3) Extensive experiments are conducted on CTR benchmark and recommender system datasets to evaluate the effectiveness and interpretability of our proposed method. We show that GraphFM can provide persuasive rationales for the feature interaction modeling and prediction-making process.
$$h(\mathbf{x}_{t})\leq h(\mathbf{x}_{0})\left(1-\frac{\mu_{f}^{\mathcal{L}_{0}}\delta^{2}}{4\tilde{L}D^{2}}\right)^{\left\lceil(t-1)/2\right\rceil}.$$
We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG Tsuji et al. [2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes when minimizing generalized self-concordant functions.
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is very similar to the one in Jaggi [2013]. In a nutshell, as the primal progress per iteration is directly related to the step size times the Frank-Wolfe gap, the gap cannot remain above a given value indefinitely, as otherwise we would accumulate so much primal progress that the primal gap would become negative. This is formalized in Theorem 2.6.
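The decay of the minimum Frank-Wolfe gap can be observed on a toy instance. Below is a hedged numpy sketch of vanilla Frank-Wolfe with the standard $2/(t+2)$ step size on a quadratic over the probability simplex; the objective and the vertex-returning linear minimization oracle (LMO) are illustrative, not the paper's generalized self-concordant setting.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=500):
    # Vanilla FW; records the FW gap <grad(x), x - v> at every iteration.
    x, gaps = x0, []
    for t in range(steps):
        g = grad(x)
        v = lmo(g)                       # linear minimization oracle
        gaps.append(g @ (x - v))         # Frank-Wolfe gap
        x = x + 2.0 / (t + 2) * (v - x)  # standard step size, stays feasible
    return x, gaps

# Quadratic f(x) = 0.5*||x - target||^2 over the probability simplex;
# the LMO over the simplex returns the vertex minimizing the linear form.
target = np.array([0.6, 0.3, 0.1])
grad = lambda x: x - target
lmo = lambda g: np.eye(3)[np.argmin(g)]
x, gaps = frank_wolfe(grad, lmo, np.eye(3)[0])
```

Consistent with the $\mathcal{O}(1/t)$ bound, the minimum recorded gap after a few hundred iterations is small while the iterate remains on the simplex.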
We can make use of the proof of convergence in primal gap to prove linear convergence in Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015] as:
When the domain $\mathcal{X}$ is a polytope, one can obtain linear convergence in primal gap for a generalized self-concordant function using the well-known Away-step Frank-Wolfe (AFW) algorithm [Guélat & Marcotte, 1986, Lacoste-Julien & Jaggi, 2015] shown in Algorithm 5.
In particular, it is desirable that the number of passes be independent of the input graph size. We call an algorithm a $k$-pass algorithm if it makes $k$ passes over the edge stream, possibly each time in a different order [MP80, FKM+05].
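As a concrete baseline for this model (a classical folklore algorithm, not one of the cited contributions), a single pass of greedy maximal matching over the edge stream uses only $O(n)$ space and yields a $1/2$-approximation, since any maximal matching is at least half the size of a maximum one:

```python
def greedy_stream_matching(edge_stream):
    # One-pass greedy maximal matching: take an edge iff both endpoints
    # are still unmatched. Space is O(n), independent of the stream length.
    matched, M = set(), []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            M.append((u, v))
    return M

stream = [(1, 2), (2, 3), (3, 4), (5, 6)]
M = greedy_stream_matching(stream)
assert M == [(1, 2), (3, 4), (5, 6)]  # (2,3) is rejected: vertex 2 is taken
```

Beating this $1/2$ factor with few passes is precisely what the approximation literature discussed below is after.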
This model is not only interesting for massive data sets but also whenever there is no random access to the input, for instance, if the input is only defined implicitly. Moreover, many insights and techniques from this model naturally carry over to a variety of areas in theoretical computer science, including communication complexity and approximation algorithms.
The problem of finding an arbitrarily good approximation has been studied in the streaming model [Ber88, ALT21, KN21] on bipartite graphs as well as various related models that deal with non-random access to the input. For instance, there are works in the setting of dynamic streams where edges can be added and removed [Kon15, AKLY16, CCE+16], in the random streaming model where edges or vertices arrive in a random order [AB21, Ber20, GKMS19, KMM12, MY11], and in models with vertex (instead of edge) arrival [KVV90, ELSW13, CTV15, BST19, GKM+19].
Maximum Matching is one of the most fundamental problems in combinatorial optimization and has been extensively studied in the classic centralized model of computation for almost half a century. We refer to [Sch03] for an overview. In particular, several exact polynomial-time deterministic maximum matching algorithms are known [Edm65a, HK73, MV80, Gab90]. Due to the quickly growing data sets naturally arising in many real-world applications (see [DH03] for an overview), there has been an increasing interest in algorithm design for huge inputs.
For massive graphs the classical matching algorithms are not only prohibitively slow, but also space complexity becomes a concern. If a graph is too large to fit into the memory of a single machine, all the classical algorithms—which assume random access to the input—are not applicable. This demand for a more realistic model for processing modern data sets has led to the proposal of several different computing models that address this shortcoming.
When $b=6$ or $k=20$, the trajectories of CPP are very close to those of exact Push-Pull/$\mathcal{AB}$, which indicates that when the compression errors are small, they are no longer the bottleneck of convergence.
Figure 3: Performance of Push-Pull/$\mathcal{AB}$, CPP, and B-CPP against the number of transmitted bits: the left column shows the results with quantization ($b=2,4,6$) and the right column shows the results with Rand-$k$ ($k=5,10,20$).
To see why CPP outperforms Push-Pull/$\mathcal{AB}$, note that the vectors sent in CPP have been compressed, and hence the number of bits transmitted per iteration is greatly reduced compared to Push-Pull/$\mathcal{AB}$.
Figure 1: Linear convergence of Push-Pull/$\mathcal{AB}$, CPP, and B-CPP with $b$-bit quantization ($b=2,4,6$) and Rand-$k$ ($k=5,10,20$) compressors.
We can see from all the sub-figures of Fig. 3 that, to reach a high accuracy of about $10^{-15}$, the number of transmitted bits required by these methods is ranked: B-CPP $<$ CPP $<$ Push-Pull/$\mathcal{AB}$.
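The two compressor families used in the figures can be illustrated with generic numpy versions. These are hedged sketches of standard $b$-bit stochastic quantization and Rand-$k$ sparsification, not necessarily the exact operators used in CPP/B-CPP:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, b=2):
    # Uniform b-bit stochastic quantization onto 2**b levels of [min, max];
    # each entry then needs only b bits (plus the two scalars lo, hi).
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2**b - 1) or 1.0
    q = np.floor((x - lo) / scale + rng.random(x.shape))  # stochastic rounding
    return lo + q * scale

def rand_k(x, k=5):
    # Rand-k sparsification: keep k random coordinates, rescaled by n/k
    # so the compressed vector is an unbiased estimate of x.
    mask = np.zeros_like(x)
    mask[rng.choice(x.size, k, replace=False)] = x.size / k
    return x * mask

x = rng.normal(size=20)
assert np.abs(quantize(x, b=4) - x).max() <= (x.max() - x.min()) / (2**4 - 1)
assert np.count_nonzero(rand_k(x, k=5)) <= 5
```

The bit savings in Fig. 3 come directly from sending $b$ bits per entry (or $k$ entries) instead of full-precision vectors.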
where $x_{1},\ldots,x_{M}$ and $y_{1},\ldots,y_{M}$ are interpreted as local models on the nodes, grouped into matrices $X:=[x_{1},\ldots,x_{M}]^{T}$ and $Y:=[y_{1},\ldots,y_{M}]^{T}$. $W$ is the gossip matrix reflecting the properties of the communication graph between the nodes, and $\lambda>0$ is the key regularization parameter, which controls the degree of personalization of the models.
Unlike (2), the formulation (1) penalizes not the difference from the global average but the disagreement between connected local nodes. Thereby a decentralized case can be artificially created even in a centralized architecture, e.g., by designing the network and the matrix $W$ to connect only some clients based on their location, age, and other metadata. The regularization parameter $\lambda$ controls the importance of this disagreement. For example, with $\lambda=0$ the problem (1) decomposes into $M$ separable problems, and each node $m\in M$ independently trains just a local model. As $\lambda$ increases, local models begin to use information from their neighbours, since the regularization terms gain importance. The idea of using this type of penalty is not new and has been used in the literature in several contexts, in particular for classical decentralized minimization [23, 24] with large $\lambda$ and for multitask PFL [25, 26] with small $\lambda$.
Note that the proposed formulation (1) covers both the centralized and the decentralized case. In the decentralized setting, all nodes are connected within a network, and each node can communicate and exchange information only with its neighbors. In the centralized architecture, a master server is connected to all devices, which communicate with the central server. In theory, however, the centralized case is equivalent to the decentralized case with a complete communication graph: if we set $W$ to the Laplacian of a complete graph, it is easy to verify that we obtain the following centralized PF SPP:
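The complete-graph reduction is easy to check numerically. For the Laplacian of the complete graph on $M$ nodes, $W = MI - J$, the pairwise penalty $\operatorname{tr}(X^{T}WX)=\sum_{i<j}\|x_i-x_j\|^2$ equals $M\sum_i\|x_i-\bar{x}\|^2$, i.e., the centralized "distance to the average" penalty. A small numpy check of this identity (dimensions are illustrative):

```python
import numpy as np

# Laplacian of the complete graph on M nodes: W = M*I - J.
M, d = 5, 3
W = M * np.eye(M) - np.ones((M, M))

rng = np.random.default_rng(0)
X = rng.normal(size=(M, d))    # rows are local models x_1, ..., x_M
xbar = X.mean(axis=0)

# tr(X^T W X) = sum_{i<j} ||x_i - x_j||^2 = M * sum_i ||x_i - xbar||^2
lhs = np.trace(X.T @ W @ X)
rhs = M * np.sum((X - xbar) ** 2)
assert np.isclose(lhs, rhs)
```

So up to the constant factor $M$ absorbed into $\lambda$, the complete-graph instance of (1) is the centralized formulation.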
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the lower bounds both on the communication and the number of local oracle calls required to solve problem (1). Furthermore, we have developed the novel methods (Algorithm 1, Algorithm 2, Algorithm 3) for this problem that are optimal up to logarithmic factor in certain scenarios (see Table 1). These algorithms are based on sliding or variance reduction techniques. The theoretical analysis and experimental evidence corroborate our methods. Moreover, we have customized our approach for neural network training.
To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms, and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs; we make a detailed comparison with them in Appendix C. Because we consider a personalized setting, we can gain significantly in communication: for example, when $\lambda=0$ or small enough in (1), the importance of the local models increases and we may communicate less frequently. We now outline the main contributions of our work as follows (see also Table 1 for an overview of the results):
A (C)CE MS provides a distribution that is in equilibrium over the set of joint policies found so far, $\Pi^{0:t}$. For the algorithm to have converged, the distribution also needs to be in equilibrium over the set of all possible joint policies, $\Pi^{*}$. This is the case when the BR fails to find a novel policy with a nonzero gap; policies that have been found before, by definition of (C)CE, have zero gap. Every behavioural policy can be expressed as a mixture of deterministic policies, and since there are finitely many deterministic policies, the algorithm converges.
PSRO consists of a response oracle that estimates the best response (BR) to a joint distribution over policies. Commonly the response oracle is either a reinforcement learning (RL) agent or a method that computes the exact BR. The component that determines the distribution of policies the oracle responds to is called the meta-solver (MS). The MS operates on the meta-game (MG), a payoff tensor estimated by measuring the expected return (ER) of policies against one another. This is an NF game, but instead of strategies corresponding to actions $a$, they correspond to policies $\pi$. The set of deterministic policies can be huge and that of stochastic policies is infinite, so PSRO only considers a subset of game policies: the ones found by the BR over all iterations so far. Different MSs result in different algorithms: the uniform distribution yields FSP, and using the NE distribution yields an extension of DO.
We evaluate a number of (C)CE MSs in JPSRO on pure competition, pure cooperation, and general-sum games (Section H). All games used are available in OpenSpiel (Lanctot et al., 2019). More thorough descriptions of the games used can be found in Section F. We use an exact BR oracle, and exactly evaluate policies in the meta-game by traversing the game tree to precisely isolate the MS’s contribution to the algorithm.
We have shown that JPSRO converges to an NF(C)CE over joint policies in extensive form and stochastic games. Furthermore, there is empirical evidence that some MSs also result in high value equilibria over a variety of games. We argue that (C)CEs are an important concept in evaluating policies in n-player, general-sum games and thoroughly evaluate several MSs. Finally, we believe that both MG(C)CE and JPSRO can scale to large problems, by using stochastic online MSs for the former and exploiting function approximation and RL for the latter.
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini (Coarse) Correlated Equilibrium (MG(C)CE) and in Section 4 we thoroughly explore its properties including tractability, scalability, invariance, and a parameterized family of solutions. In Section 5 we propose a novel training algorithm, Joint Policy-Space Response Oracles (JPSRO), to train policies on n-player, general-sum extensive form games. JPSRO requires the solution of a meta-game, and we propose using MG(C)CE as a meta-solver. We prove that the resulting algorithm converges to a normal form (C)CE in the extensive form game. In Section 6 we conduct an empirical study and show convergence rates and social welfare across a variety of games including n-player, general-sum, and common-payoff games.
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bayesian perspective, by restricting some distance measure between some prior distribution and some posterior distribution induced by the mechanism’s behavior (Dwork et al., 2006; Kasiviswanathan and Smith, 2014). This perspective was used Shenfeld and Ligett (2019) to propose a stability notion which is both necessary and sufficient for adaptive generalization under several assumptions. Unfortunately, these definitions have at best extremely limited adaptive composition guarantees.  Bassily and Freund (2016) connect this Bayesian intuition to statistical validity via typical stability, an approach that discards “unlikely” databases that do not obey a differential privacy guarantee, but their results require a sample size that grows linearly with the number of queries even for iid distributions. Triastcyn and Faltings (2020) propose the notion of Bayesian differential privacy which leverages the underlying distribution to improve generalization guarantees, but their results still scale with the range in the general case.
One cluster of works that steps away from this worst-case perspective focuses on giving privacy guarantees that are tailored to the dataset at hand (Nissim et al., 2007; Ghosh and Roth, 2011; Ebadi et al., 2015; Wang, 2019). In  Feldman and Zrnic (2021) in particular, the authors elegantly manage to track the individual privacy loss of the elements in the dataset. However, their results do not enjoy a dependence on the standard deviation in place of the range of the queries. Several truncation-based specialized mechanisms have been proposed, both to provide differential privacy guarantees for Gaussian and sub-Gaussian queries even in the case of multivariate distribution with unknown covariance (Karwa and Vadhan, 2018; Ashtiani and Liaw, 2022; Duchi et al., 2023) and, remarkably, design specialized algorithms that achieve adaptive data analysis guarantees that scale like the standard deviation of the queries (Feldman and Steinke, 2017). Recently, Blanc (2023) proved that randomized rounding followed by sub-sampling provides accuracy guarantees that scale with the queries’ variance. But none of these results apply to simple noise addition mechanisms.
Differential privacy essentially provides the optimal asymptotic generalization guarantees for adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015). However, its optimality holds for worst-case adaptive queries, and the guarantees it offers only beat the naive intervention—of splitting a dataset so that each query gets fresh data—when the input dataset is quite large (Jung et al., 2020). A worst-case approach makes sense for privacy, but for statistical guarantees like generalization, we only need statements that hold with high probability with respect to the sampled dataset, and only on the actual queries issued.
An alternative route for avoiding the dependence on worst-case queries and datasets uses expectation-based stability notions such as mutual information and KL stability (Russo and Zou, 2016; Bassily et al., 2021; Steinke and Zakynthinou, 2020). Using these methods, Feldman and Steinke (2018) presented a natural noise-addition mechanism which adds noise that scales with the empirical variance when responding to queries with known range and unknown variance. Unfortunately, in the general case, the accuracy guarantees provided by these methods hold only for the expected error rather than with high probability.
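To make the flavor of such a mechanism concrete, here is a hedged sketch (not the exact mechanism of Feldman and Steinke (2018); the scaling constant and function name are illustrative) that adds Gaussian noise calibrated to the empirical standard deviation rather than to the worst-case range:

```python
import numpy as np

def variance_scaled_answer(sample, query, scale=1.0, rng=None):
    """Answer a statistical query with its empirical mean plus Gaussian
    noise proportional to the empirical standard deviation of the query
    values (a sketch of the variance-adaptive noise-addition idea)."""
    rng = np.random.default_rng(rng)
    values = np.array([query(x) for x in sample])
    empirical_std = values.std(ddof=1)
    # Noise scales with the empirical std instead of the query's range.
    noise = rng.normal(0.0, scale * empirical_std / np.sqrt(len(values)))
    return values.mean() + noise
```

For low-variance queries this adds far less noise than a range-calibrated mechanism would, which is exactly the benefit the expectation-based analyses capture.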
We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an $\mathsf{NP}$-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the technical results concerning antler structures for Feedback Vertex Set and their algorithmic properties, we consider the conceptual message of this research direction an important contribution of our theoretical work on understanding the power of preprocessing and the structure of solutions to $\mathsf{NP}$-hard problems.
As the first step of our proposed research program into parameter reduction (and thereby, search space reduction) by a preprocessing phase, we present a graph decomposition for Feedback Vertex Set which can identify vertices $S$ that belong to an optimal solution, and which therefore facilitate a reduction from finding a solution of size $k$ in graph $G$ to finding a solution of size $k-|S|$ in $G-S$. While there has been a significant amount of work on kernelization for Feedback Vertex Set [12, 14, 35, 37, 46], the corresponding preprocessing algorithms do not succeed in finding vertices that belong to an optimal solution, other than those for which there is a self-loop or those which form the center of a flower (consisting of $k+1$ otherwise vertex-disjoint cycles) [12, 14, 46], or a technical relaxation of this notion [35]. In particular, apart from the trivial self-loop rule, earlier preprocessing algorithms can only conclude that a vertex $v$ belongs to all optimal solutions (of a size $k$ which must be given in advance) if they find a suitable packing of cycles witnessing that solutions without $v$ must have size larger than $k$. In contrast, our argumentation will be based on local exchange arguments, which can be applied independently of the global solution size $k$.
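The reduction from $(G, k)$ to $(G-S, k-|S|)$ once such vertices $S$ are identified is mechanical and can be sketched directly (a minimal sketch; detecting $S$ via antler structures is the technical content of the paper and is not shown here):

```python
def reduce_instance(adj, k, solution_vertices):
    """Given a graph as an adjacency dict {vertex: set(neighbors)},
    a budget k, and a set S of vertices known to lie in some optimal
    feedback vertex set, return the reduced instance (G - S, k - |S|)."""
    s = set(solution_vertices)
    reduced = {v: {u for u in nbrs if u not in s}
               for v, nbrs in adj.items() if v not in s}
    return reduced, k - len(s)
```

Any follow-up (exact or parameterized) solver then runs on the smaller graph with the smaller budget, which is where the running-time savings come from.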
This line of investigation opens up a host of opportunities for future research. For combinatorial problems such as Vertex Cover, Odd Cycle Transversal, and Directed Feedback Vertex Set, which kinds of substructures in inputs allow parts of an optimal solution to be identified by an efficient preprocessing phase? Is it possible to give preprocessing guarantees not in terms of the size of an optimal solution, but in terms of measures of the stability [7, 8, 18] of optimal solutions under small perturbations? A recent work shows that preprocessing guarantees in terms of which vertices are essential for obtaining a constant-factor approximation are often possible [13].
The goal of this paper is to open up a new research direction aimed at understanding the power of preprocessing in speeding up algorithms that solve NP-hard problems exactly [26, 31]. In a nutshell, this new direction can be summarized as: how can an algorithm identify part of an optimal solution in an efficient preprocessing phase? We explore this direction for the classic [38] Feedback Vertex Set problem on undirected graphs, leading to a new graph structure called antler which reveals vertices that belong to an optimal feedback vertex set.
Painterly image harmonization: In standard image harmonization, both foreground and background come from realistic images. However, in certain application scenarios the background is an artistic image while the foreground comes from a realistic image, in which case standard image harmonization models may not work well. To address this, painterly image harmonization [104] has been studied, which harmonizes the realistic foreground according to the artistic background to obtain a uniformly stylized composite image.
For example, Luan et al. [104] proposed to optimize the input image with two passes, in which the first pass aims at robust coarse harmonization and the second pass targets high-quality refinement. Feed-forward methods send the input image through the model to output the harmonized result. For example, Peng et al. [119] applied adaptive instance normalization to match the means and variances between the feature map of the composite image and that of the artistic background. Cao et al. [10] performed painterly image harmonization in both the frequency domain and the spatial domain, considering that artistic paintings often have periodic textures and patterns which appear regularly. Lu et al. [99] introduced a diffusion model to painterly image harmonization, which can significantly outperform GAN-based methods when the background has dense textures or an abstract style. Niu et al. [115] divided styles into low-level styles (e.g., color, simple pattern) and high-level styles (e.g., complex pattern), and devised a progressive network which harmonizes a composite image from low-level styles to high-level styles progressively. Niu et al. [114] proposed style-level supervision based on pairs of artistic objects and photographic objects, considering that it is hard to obtain pixel-wise supervision based on pairs of artistic images and photographic images. Niu et al. [114] also contributed an artistic object dataset which contains the segmentation masks and similar photographic objects for artistic objects.
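The mean/variance matching used by the AdaIN-style approach of Peng et al. [119] can be illustrated on raw feature maps (a minimal NumPy sketch; the actual method operates on deep network features inside a harmonization model):

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: match the per-channel mean and
    std of the content (foreground) features to those of the style
    (background) features. Arrays have shape (channels, height, width)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content statistics, then re-scale to style statistics.
    return (content_feat - c_mean) / c_std * s_std + s_mean
```

After this operation the transformed features carry the first- and second-order statistics of the background style, which is what drives the color/texture alignment in such feed-forward methods.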
Image harmonization is closely related to style transfer. Note that both artistic style transfer [37, 56, 118] and photorealistic style transfer [103, 82] belong to style transfer. Image harmonization is closer to photorealistic style transfer, which transfers the style of a reference photo to another input photo. There are two main differences between image harmonization and photorealistic style transfer. 1) Firstly, image harmonization adjusts the foreground appearance according to the background, which requires taking the foreground location into consideration due to the locality property. In contrast, photorealistic style transfer adjusts the appearance of a whole input image according to another whole reference image. 2) Secondly, the definition of “style” in photorealistic style transfer is unclear and depends coarsely on the employed style loss (e.g., Gram matrix loss [37], AdaIN loss [56]). Differently, the goal of image harmonization is clearly to adjust the illumination statistics of the foreground, so that the resultant foreground looks like the same object captured under the background illumination condition.
Painterly image harmonization is more challenging because multiple levels of styles (i.e., color, simple texture, complex texture) [115] need to be transferred from background to foreground, while standard image harmonization only needs to transfer low-level style (i.e., illumination). Painterly image harmonization is also referred to as cross-domain image composition [47, 101, 178].
Transfer learning: Firstly, it can serve as an ideal testbed for transfer learning algorithms, including meta-learning [5], AutoML [23], and transfer learning on spatio-temporal graphs under homogeneous or heterogeneous representations. In the field of urban computing, it is highly probable that the knowledge required for different tasks, cities, or time intervals is correlated. By leveraging this transferable knowledge across domains with this multi-city, multi-task data, CityNet can help researchers alleviate the data scarcity problems that arise in newly-built or under-developed cities.
In the present study, we have introduced CityNet, a multi-modal dataset specifically designed for urban computing in smart cities, which incorporates spatio-temporally aligned urban data from multiple cities and diverse tasks. To the best of our knowledge, CityNet is the first dataset of its kind, which provides a comprehensive and integrated view of urban data from various sources. Through the use of data mining and visualization tools, we have demonstrated the significance of multi-modal urban data and have highlighted the connections between service and context data. Furthermore, we have presented extensive experimental results on spatio-temporal predictions, transfer learning, and reinforcement learning, which demonstrate the potential of CityNet as a versatile benchmark for various research topics.
As depicted in Table V, deep learning models can generate highly accurate predictions when provided with ample data. However, the level of digitization varies significantly among cities, and it is likely that many cities may not be able to construct accurate deep learning prediction models due to a lack of data. One effective solution to this problem is transfer learning [20], which leverages knowledge from a source domain with abundant data to a target domain with limited data. In our case, this involves transferring knowledge from one city to another. Therefore, we conduct transfer learning experiments on CityNet to demonstrate that inter-city connections can facilitate positive knowledge transfer and to establish benchmarks for future research on inter-city transfer learning.
To the best of our knowledge, CityNet is the first multi-modal urban dataset that aggregates and aligns sub-datasets from various tasks and cities. Using CityNet, we have provided a wide range of benchmarking results to inspire further research in areas such as spatio-temporal predictions, transfer learning, reinforcement learning, and federated learning in the field of urban computing.
Federated learning: Secondly, CityNet is an appropriate dataset to investigate various federated learning topics under different settings, with each party holding data from one source or one city. Urban data is usually generated by a multitude of human activities and stored by diverse stakeholders, such as organizations, companies, and the government. However, due to data privacy regulations or the need to protect commercial interests, collaborations between these stakeholders should be conducted in a privacy-preserving manner. Federated learning (FL) [24] could provide an effective solution for enabling privacy-preserving multi-party collaboration and can be specifically investigated using CityNet, with its data from multiple cities and sources.
\[
\mathcal{C}(\Gamma,P):=\mathrm{E}_{(\mathbf{x},y)\sim P}\big[\mathbbm{1}\big(y\in\Gamma(\mathbf{x})\big)\big]=\text{Prob}\{y\in\Gamma(\mathbf{x})\},
\]
where $\mathbbm{1}$ denotes the indicator function and $P$ denotes the joint distribution on $\mathcal{Z}$, a significance or confidence level $\alpha$ is chosen \cite{faulkenberry1973method,fraser1956tolerance} such that
where $\mathbbm{1}_{y\leq y^{*}}$ denotes the indicator function of the set $\{y\in\mathbb{R}: y\leq y^{*}\}$. The algorithm starts by building an ordinary random forest and then extracts the relevant weights to estimate the conditional cumulative distribution $\hat{F}(y^{*}\,|\,\mathbf{x}^{*})$. The (conditional) quantiles are inferred through their defining equation
By differentiating the argument on the right-hand side with respect to $q$ and equating it to 0, one obtains definition (19) of the $\alpha$-quantile. The pinball loss (26) is then simply the loss function for the sample $\alpha$-quantile, i.e. the $\alpha$-quantile of the empirical distribution function. An intuition for this loss function can be gained from considering the example of the sample median ($\alpha=0.5$). The median $\xi$ is defined as the data point for which there are as many positive as negative residuals $r_{i}:=y_{i}-\xi$. From an optimization point of view, this definition is equivalent to that of the minimizer of the average of all absolute residuals. The pinball loss estimates other quantiles by replacing this average with a weighted average.
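This minimization view can be sketched numerically: the pinball loss weights positive and negative residuals asymmetrically, and minimizing its average over candidate values recovers a sample quantile (equation numbers (19) and (26) refer to the text; the grid search over the data points is for illustration only):

```python
import numpy as np

def pinball_loss(residual, alpha):
    """Pinball (quantile) loss for residual r = y - q at level alpha:
    alpha * r for r >= 0, (alpha - 1) * r for r < 0."""
    return np.where(residual >= 0, alpha * residual, (alpha - 1) * residual)

def sample_quantile(ys, alpha):
    """The alpha-quantile as the minimizer of the average pinball loss,
    searched over the data points themselves."""
    ys = np.asarray(ys, dtype=float)
    losses = [pinball_loss(ys - q, alpha).mean() for q in ys]
    return ys[int(np.argmin(losses))]
```

For $\alpha=0.5$ the loss reduces to half the absolute residual, so the minimizer is the sample median, matching the intuition above.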
In this section the models that predict the lower and upper bounds of prediction intervals are considered, for example the $\alpha/2$- and $(1-\alpha/2)$-quantile estimates for a given significance level $\alpha$. For this class of estimators a reasonable choice of nonconformity measure is
Despite the fame of BERT, we are aware of only two publications that employ BERT-like PTMs for symbolic music classification \parencite{tsai20ismir,musicbert}. The first work \parencite{tsai20ismir} deals with optically scanned sheet music, while we use MIDI inputs.
Throughout this article, we refer to note-level classification tasks as tasks that perform a prediction for each individual note in a music sequence and sequence-level tasks as tasks that require a single prediction for an entire music sequence. We consider two note-level tasks and two sequence-level tasks in our experiments, as elaborated below.
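The distinction between the two task types can be made concrete: a note-level task makes one prediction per token of the sequence, while a sequence-level task first pools the token representations into a single vector (a minimal NumPy sketch; the mean-pooled linear head is illustrative and not necessarily the classifier used in the experiments):

```python
import numpy as np

def note_level_logits(token_embs, w):
    """One prediction per note: (seq_len, dim) @ (dim, n_classes)."""
    return token_embs @ w

def sequence_level_logits(token_embs, w):
    """One prediction per sequence: mean-pool tokens, then classify."""
    return token_embs.mean(axis=0) @ w
```

With a linear head, the sequence-level logits equal the average of the note-level logits; nonlinear heads or attention pooling break this equivalence.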
Machine learning has been applied to music in symbolic formats such as MIDI. Exemplary tasks include symbolic-domain music genre classification \parencite{correa16survey,ferraro18}, composer classification \parencite{lee20ismirLBD,kong2020largescale}, and melody note identification \parencite{simonettaCNW19,note-affinity}.
Table 2: The testing classification accuracy (in %) of different combinations of MIDI token representations and models for four downstream tasks: three-class melody classification, velocity prediction, style classification and emotion classification. “CNN” represents the ResNet50 model used by \textcite{lee20ismirLBD}, which only supports sequence-level tasks. “RNN” denotes the baseline models introduced in Section 5, representing the Bi-LSTM model for the first two (note-level) tasks and the Bi-LSTM-Attn model \parencite{lin2017structured} for the last two (sequence-level) tasks.
We evaluate PTMs on four piano music classification tasks. These include two note-level classification tasks, i.e., melody extraction \parencite{simonettaCNW19,note-affinity} and velocity prediction \parencite{widmer94aaai,jeongKKLN19ismir,jeongKKN19icml}, and two sequence-level classification tasks, i.e., style classification \parencite{lee20ismirLBD,kong2020largescale} and emotion classification \parencite{grekow2009detecting,lin2013exploration,panda2013multi,panda2018}.
Otherwise, $F$ has a leaf $v\in A$ with a neighbor $u\in B$. We can assign $c(v)=a_{2}$, $c(u)=b_{2}$ and invoke a subproblem for $F'=F-\{u,v\}$, $A'=A\setminus\{v\}$, $B'=B\setminus\{u\}$ with the same coloring $c$ and color intervals $[a_{1},a_{2}-1]$ and $[b_{1},b_{2}-1]$. The solution for $F'$ would be consistent with the coloring of $u$ and $v$, since all other neighbors of $u$ in $F$ would get colors at most $a_{2}-1\leq b_{2}-1-\lambda<c(u)-\lambda$.
To obtain the total running time we first note that each of the initial steps – obtaining $(R,B,Y)$ from Corollary 2.11 (e.g. using Algorithm 1), contraction of $F$ into $F'$, and finding both $Y_{1}$ and $Y_{2}$ – requires only linear time. Coloring $Y_{1}\cup R_{1}\cup B_{1}$ also requires $O(n)$ time, since we need to traverse each edge between these vertices only once to ensure the proper distances between the colors, and it is sufficient to use bucket sort to order vertices within $B_{1}$ and $R_{1}$. The same argument follows symmetrically for $Y_{2}\cup R_{2}\cup B_{2}$.
The linear running time follows directly from the fact that we compute $c$ only once and that we can additionally pass through the recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hence the claim follows.
Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the next iteration we start at exactly the neighbor of the previous central vertex, there can be only $O(n)$ such jumps in total.
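The jump-based search for a central vertex can be sketched as follows (a sketch assuming the tree is given as an adjacency dict; component sizes are recomputed per jump here for clarity, whereas the argument above amortizes the jumps by maintaining them incrementally):

```python
def centroid(adj, n, start=0):
    """Find a central vertex of an n-vertex tree: repeatedly jump to
    the neighbor whose side of the tree holds more than n/2 vertices,
    until no such neighbor exists."""
    def component_size(v, banned):
        # Size of the component of v in the tree with `banned` removed.
        seen, stack, count = {banned, v}, [v], 0
        while stack:
            u = stack.pop()
            count += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return count

    v = start
    while True:
        heavy = next((u for u in adj[v]
                      if component_size(u, v) > n // 2), None)
        if heavy is None:
            return v       # every remaining component has size <= n/2
        v = heavy          # jump toward the heavy side
```

Each jump strictly shrinks the largest remaining component, which is the monotonicity the $O(n)$ bound on the total number of jumps relies on.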
Now, observe that if the block to the left is also of type A, then the respective block from $Z(S)$ is $(0,1,0)$ – and when we add the backward carry $(0,0,1)$ to it, we obtain the forward carry to the rightmost block. And regardless of the value of the appropriate block of $Z(S_{2})$, the total sum of the blocks and the backward carry cannot generate any further backward carry.