Dataset columns: context (string, 250–4.88k chars), A (string, 250–4.73k), B (string, 250–3.79k), C (string, 250–8.2k), D (string, 250–4.17k), label (4 classes).
This is $f''(x)/f'(x)$ of the generic formula, and can be quickly
${R_n^m}''/{R_n^m}'$ and $R_n^m/{R_n^m}'$ with relay to the generic formulas of associated, terminating hypergeometric
computed from $R_n^m(x)/{R_n^m}'(x) = f(x)/f'(x)$ of the lower order.
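For illustration only (this is the standard Newton step, not a formula quoted from the source), the generic root-finding iteration that consumes such a ratio is
\[
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)},
\]
so supplying $R_n^m/{R_n^m}'$ (and $f''/f'$ for higher-order corrections such as Halley's method) is all the iteration needs.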
$\ldots + 12(1+m)\,x^{m+1}F'' + 8\,x^{m+3}F''',$
$\int_0^1 x^{D-1}\,R_n^m(x)\,R_{n'}^m(x)\,dx = \frac{1}{2n+D}\,\delta_{n,n'}.$
B
The lower-unitriangular matrices $u_1$ and $u_2$ are returned as words in the Leedham-Green–O'Brien standard generators [11] for $\mathrm{SL}(d,q)$ defined in Section 3.1 below. In the rest of this section we explain how to express an arbitrary monomial matrix $w \in \mathrm{SL}(d,q)$ as a word in the Leedham-Green–O'Brien standard generators, yielding an MSLP from the standard generators to $w$.
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set, which has become important in algorithms and applications over the last 10-15 years, is the Leedham-Green and O'Brien standard generating set, in the following called the LGO generating set. These generators are defined for all classical groups in odd characteristic in [11] and in even characteristic in [10].
Therefore, we decided to base the procedures we present on a set of generators very close to the LGO standard generators. Note that the choice of the generating set has no impact on the results, as it is always possible to determine an MSLP which computes the LGO standard generators from an arbitrary generating set and to preface an MSLP for another application by this MSLP.
Note that a small variation of these standard generators for $\mathrm{SL}(d,q)$ is used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12]; only the generator $v$ is slightly different in the two scenarios when $d$ is even.
The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements of classical groups as words in the LGO generators. Moreover, the LGO generators can be used directly to verify presentations of classical groups [12].
A
where $\Omega \subset \mathbb{R}^d$, with $d=2$ or $3$ for simplicity, is an open bounded domain with polyhedral boundary $\partial\Omega$, the symmetric tensor $\mathcal{A} \in [L^\infty(\Omega)]^{d\times d}_{\mathrm{sym}}$ is uniformly positive definite and bounded, and $g$ is part of the given data.
In [MR2718268] it is shown that the number of very large eigenvalues is related to the number of connected sub-regions on $\bar\tau \cup \bar\tau'$ with large coefficients surrounded by regions with small coefficients. Generalized eigenvalue problems have also been used in overlapping domain decomposition solvers [MR2718268, MR2916377, MR3175183, MR3033238]. The design of discretizations that are robust with respect to the coefficients using domain decomposition ideas has been studied in [MR2666649, MR1642758, MR3350765] assuming some regularity of the solution, and in [MR2718268] for a class of problems where the weighted Poincaré constant [MR3047947, MR3013465, MR2867661] is not large, since otherwise the exponential decay of the multiscale functions deteriorates. See also [MR2753343, MR3109775], where a priori error estimates are obtained in terms of spectral norms.
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the methods less practical. In this paper, in the presence of rough coefficients, spectral techniques are employed to overcome this hurdle: by solving local eigenvalue problems we define a space in which the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove the macro-element corner singularities that occur in LOD methods based on
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85, MR1979846, MR2058933, HMV, MR1642758, MR3584539, MR2030161, MR2383203, vs1, vs2, MR2740478]. Some methods work even when the solution has low regularity [MR2801210, MR2753343, MR3225627, MR3177856, MR2861254], but they are based on ideas that differ considerably from what we advocate here.
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems.
C
We think Alg-A is better in almost every aspect. This is because it is essentially simpler. Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity $O(n)$ than the others:
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases.
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the code length for this part is in the ratio 1:7 between Alg-A and Alg-CM.
Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), as does Alg-K. (By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.)
D
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We address this by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of the Munich shooting higher than the average of news events (hence, close to news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see that the curve of the Munich shooting event is also close to the curve of average news, indicating that the event is more news-related.
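A minimal sketch of the per-tweet voting idea described above; the function and variable names are placeholders, and the actual credibility features in the paper are more involved:

```python
from statistics import mean

def credit_score(tweet_credibilities):
    """Aggregate per-tweet credibility predictions into an event-level score.

    Each tweet casts a 'vote' (its predicted credibility in [0, 1]);
    averaging keeps a few rumor-looking tweets from dominating the event score.
    """
    return mean(tweet_credibilities)

# Hypothetical example: most tweets look credible, a few look rumor-related.
votes = [0.9, 0.85, 0.2, 0.8, 0.75, 0.3, 0.9]
print(credit_score(votes))  # ~0.67, still closer to the news-like range
```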
As observed in [19, 20], rumor features are very prone to change during an event's development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on the time-series approach and train the classifier with features from different high-level contexts (i.e., users, Twitter and propagation) in a cascaded manner. In this section, we first detail the employed Dynamic Series-Time Structure, then describe the high- and low-level ensemble features used for learning in this pipeline step.
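The time-series representation can be sketched as follows; the interval length, the feature set, and the helper functions are illustrative assumptions, not the exact DSTS formulation of [20]:

```python
from collections import defaultdict

def dsts_vector(tweets, feature_fns, t0, interval=3600, n_intervals=48):
    """Bucket tweets into fixed time intervals and compute each feature per bucket,
    concatenating the per-interval values into one time-series feature vector."""
    buckets = defaultdict(list)
    for tw in tweets:
        idx = int((tw["timestamp"] - t0) // interval)
        if 0 <= idx < n_intervals:
            buckets[idx].append(tw)
    vector = []
    for i in range(n_intervals):
        for fn in feature_fns:
            vector.append(fn(buckets[i]) if buckets[i] else 0.0)
    return vector

# Hypothetical feature functions operating on the tweets of one interval.
avg_sentiment = lambda tws: sum(t.get("sentiment", 0.0) for t in tws) / len(tws)
url_ratio = lambda tws: sum(1 for t in tws if t.get("has_url")) / len(tws)
```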
In this work, we propose an effective cascaded rumor detection approach using deep neural networks at tweet level in the first stage and wisdom of the “machines”, together with a variety of other features in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed approach outperforms state of the
Most relevant for our work is the work presented in [20], where a time series model is used to capture the time-based variation of social-content features. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analysis of how a wide range of features changes during diffusion time. Ma et al. [19] used Recurrent Neural Networks for rumor detection; they batch tweets into time intervals and model the time series as an RNN sequence. Without any other handcrafted features, they got almost 90% accuracy for events reported on Snopes.com. As with other deep learning models, the learning process is a black box, so we cannot pinpoint the cause of the good performance based only on content features. The model performance is also dependent on the tweet retrieval mechanism, whose quality is uncertain for stream-based trending sub-events.
at an early stage. Our fully automatic, cascading rumor detection method follows the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks that can capture more hidden meaningful signals than enquiries alone to debunk rumors. [7, 19] also use RNNs for rumor debunking. However, in their work, the RNN is used at the event level. The classification leverages only the deep data representations of the aggregated tweet contents of the whole event, while ignoring other features, such as user-based and propagation features, that become effective at a later stage. Although tweet contents are the only reliable source of clues at an early stage, they are also likely to carry doubtful perspectives and different stands at this specific moment. In addition, they could relate to rumorous sub-events (see, e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution; however, detecting sub-events in real time over the Twitter stream is a challenging task [22], which increases latency and complexity. In this work, we address this issue by deep neural modeling only at the single-tweet level. Our intuition is to leverage the “wisdom of the crowd” theory: even if a certain portion of tweets at a given moment (mostly in the early stage) is weakly predicted (because of these noisy factors), their ensemble contributes to a stronger prediction.
B
In a follow-up work Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which $\ell'(u)$ is of the form $\exp(-u^\nu)$ with $\nu > 0.25$. They then conjectured, based on heuristic analysis, that the exponential tail is optimal among all possible tails. Furthermore, they demonstrated that polynomial or heavier tails do not converge to the max margin solution. Lastly, for the exponential loss they proposed a normalized gradient scheme which can significantly improve convergence rate, achieving $O(\log(t)/\sqrt{t})$.
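To make the setting concrete, here is a small numpy sketch (ours, not from the cited works) of gradient descent on the exponential loss for linearly separable data; the data, step size, and iteration count are arbitrary choices, and the direction $w/\|w\|$ is the quantity whose convergence the cited results characterize:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2)) + np.array([3.0, 3.0])   # separable from the origin
y = np.ones(50)                                        # all labels +1 for simplicity

w = np.zeros(2)
eta = 0.1
for t in range(10000):
    margins = y * (X @ w)
    grad = -(X * (y * np.exp(-margins))[:, None]).mean(axis=0)  # gradient of the exp-loss
    w -= eta * grad

print(w / np.linalg.norm(w))   # the direction drifts (slowly) toward the max-margin direction
```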
Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large $L_1$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on the exponential loss of a linear model, these results can be interpreted as analyzing the bias of coordinate descent, rather than gradient descent, on a monotone decreasing loss with an exact exponential tail. Indeed, with small enough step sizes, such a coordinate descent procedure does converge precisely to the maximum $L_1$-margin solution (Zhang et al., 2005; Telgarsky, 2013). In fact, Telgarsky (2013) also generalizes these results to other losses with tight exponential tails, similar to the class of losses we consider here.
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour on separable problems. This implies that the non-tail part does not affect the bias. The bias is also independent of the step size
The convergence of the direction of the gradient descent updates to the maximum $L_2$-margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile continuing to optimize long after we have zero training error, and
The follow-up paper (Gunasekar et al., 2018) studied the same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the results for exponential loss are independent of initialization and require only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects, thereby making the analysis simpler compared to squared loss.
D
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news events is -0.066 and the average for rumors is -0.1393, showing that rumor-related messages tend to contain more negative sentiment. Furthermore, we would expect that verified users are less involved in rumor spreading. However, this feature appears near the bottom of the ranked list, indicating that it is not as reliable as expected. Also interestingly, the feature “IsRetweet” is not as good a feature as expected, which means the probability of people retweeting rumors or true news is similar (both appear near the bottom of the ranked feature list). It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event.
To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4,300 stories from these websites. From the story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with the highest impact. Our approach to query construction mainly follows (gupta2014tweetcred). For the news event instances (non-rumor examples), we make use of the corpus from McMinn et al. (mcminn2013building), which covers 500 real-world events. They crawled tweets via the streaming API from the 10th of October 2012 to the 7th of November 2012. The involved events have been manually verified and related to tweets with relevance judgments, which has resulted in a high-quality corpus. From the 500 events, we select the top 230 events with the highest tweet volumes (as a criterion for event impact). Furthermore, we have added 40 other news events which happened around the time periods of our rumors. This results in a dataset of 270 rumors and 270 events. The dataset details are shown in Table 2. We then construct two distinct datasets for (1) single tweet and (2) rumor classification.
We use the same dataset described in Section 4.1. In total – after cutting off 180 events for pre-training the single-tweet model – our dataset contains 360 events, 180 of which are labeled as rumors. As a rumor often has a long circulating story (friggeri2014rumor), this results in a rather long time span. In this work, we develop an event identification strategy that focuses on the first 48 hours after the rumor peaks. We also extract 11,038 domains which are contained in tweets in this 48-hour time range.
Training data for single tweet classification. An event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking events without such sub-events from the above dataset. In the end, we used 90 rumors and 90 news events, associated with 72,452 tweets in total. This results in a highly reliable ground truth of tweets labelled as news-related and rumor-related, respectively. Note that the label of a tweet is inherited from the event label, and thus the labeling can be considered a semi-automatic process.
The time period of a rumor event is sometimes fuzzy and hard to define. One reason is that a rumor may have been triggered long before and kept existing without attracting public attention; it can then be re-triggered by other events after an uncertain time and suddenly spread as a bursty event. For example, a rumor (http://www.snopes.com/robert-byrd-kkk-photo/) claimed that Robert Byrd was a member of the KKK. This rumor had been circulating on Twitter for a while; as shown in Figure 7(a), almost every day there were several tweets talking about it. But the rumor was re-triggered in 2016 by a picture of Robert Byrd kissing Hillary Clinton (http://www.snopes.com/clinton-byrd-photo-klan/), and Twitter users suddenly noticed it and it spread burstily. In this work, what we are really interested in are the tweets posted in the hours around the bursty peak. We define the hour with the largest tweet volume as $t_{max}$, and since we want to detect the rumor event as soon as possible before its burst, we define the time of the first tweet before $t_{max}$ within 48 hours as the beginning of the rumor event, marked as $t_0$. The end time of the event is defined as $t_{end} = t_0 + 48$. We show the tweet volumes of the above rumor example in Figure 7(b).
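A small sketch of the event-window definition described above; timestamps are assumed to be given in hours, and the field names are hypothetical:

```python
from collections import Counter

def rumor_event_window(tweet_hours):
    """Given tweet timestamps (in hours), find the peak hour t_max and define the
    event window as [t0, t0 + 48], where t0 is the first tweet within the 48 hours
    preceding t_max."""
    volume = Counter(int(h) for h in tweet_hours)
    t_max = max(volume, key=volume.get)            # hour with the largest tweet volume
    candidates = [h for h in tweet_hours if t_max - 48 <= h <= t_max]
    t0 = min(candidates)
    return t0, t0 + 48

print(rumor_event_window([1.0, 30.5, 70.2, 71.1, 71.3, 71.8, 72.4, 90.0]))
```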
B
Evaluating methodology. For RQ1, given an event entity e at time t, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 instances of the anticipated class, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances, respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials, with each of the last 4 bins used for testing (using the history bins for training on a rolling basis), and report the results as the average over the trials.
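The rolling evaluation can be sketched as below (our illustration; the bin count and the instance fields are assumptions):

```python
def rolling_trials(instances, n_bins=10, n_trials=4):
    """Sort instances chronologically, split them into n_bins, and for each of the
    last n_trials bins use all earlier bins for training and that bin for testing."""
    instances = sorted(instances, key=lambda x: x["time"])
    size = len(instances) // n_bins
    bins = [instances[i * size:(i + 1) * size] for i in range(n_bins)]
    trials = []
    for test_idx in range(n_bins - n_trials, n_bins):
        train = [x for b in bins[:test_idx] for x in b]   # history bins only
        test = bins[test_idx]
        trials.append((train, test))
    return trials  # metrics are reported as the average over these trials
```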
RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The rightmost three models for each metric are the models proposed in this work. The overall results show that the performance of these models, even when better than the baselines (for at least one of the three), varies greatly among the cases. In general, $SVM_{salience}$ performs well at the before stage of breaking events, and badly at the after stage of the same event type, whereas $SVM_{timeliness}$ shows the opposite behavior for these cases. For anticipated events, $SVM_{timeliness}$ performs well at the before and after stages, but gives a rather low performance at the during stage. For this event type, $SVM_{salience}$ generally performs worse than $SVM_{timeliness}$. Overall, $SVM_{all}$ with all features combined gives a good and stable performance, but in most cases it is not better than the best-performing L2R model trained on a single feature set. In general, these results support our assumption that salience and timeliness should be traded off for different event types at different event times. For feature importance, we observe consistently stable performance of same-group features across these cases. Salience features from knowledge bases tend to perform better than those from query logs for short-duration or less popular events. We leave a more in-depth analysis of this part for future work.
We further investigate the identification of the event time, which is learned on top of the event-type classification. For the gold labels, we derive them from the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with a non-cascaded logistic regression. The results are shown in Table 3 (bottom) and demonstrate that our cascaded model, with features inherited from the performance of the SVM in the previous task, substantially improves on the single model. However, the overall modest results show the difficulty of this multi-class classification task.
RQ3. We present the results of the single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance on both NDCG and Recall, improving over the baseline, yet not significantly. Our Ensemble model, which is learned to trade off salience and timeliness, achieves the best results for all metrics and outperforms the baseline significantly. As the testing entity queries in this experiment cover all event times and all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of the adapted QAC methods. One reason, as mentioned, is that QACs, even time-aware ones, generally favor already salient queries, following the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes such as entities often encounter sparse query-volume traffic, which also contributes to the low results.
Results. The baseline and the best results of our first-stage event-type classification are shown in Table 3 (top). The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower on weighted F1. Our learned model achieves a marginally better result on the F1 metric.
D
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] to hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
SMC weights are updated based on the likelihood of the observed rewards: $w_{t,a}^{(m)} \propto p_a(y_t \mid x_t, \theta_{t,a}^{(m)})$, Step (9.c) in Algorithm 1; and
The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits. The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making.
we propagate forward the sequential random measure $p_M(\theta_{t,a} \mid \mathcal{H}_{1:t})$ by drawing new samples from the transition density, conditioned on resampled particles, i.e.,
the fundamental operation in the proposed SMC-based MAB Algorithm 1 is to sequentially update the random measure $p_M(\theta_{t,a} \mid \mathcal{H}_{1:t})$
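A minimal particle-filter sketch of this per-arm update, with placeholder densities; the likelihood and transition are whatever the bandit model specifies, and the number of particles, the proposal, and the multinomial resampling used here are illustrative assumptions rather than the exact scheme of Algorithm 1:

```python
import numpy as np

def smc_update(particles, x_t, y_t, likelihood, transition, rng):
    """One sequential Monte Carlo step for a single arm:
    (i)  weight each particle by the likelihood of the observed reward,
    (ii) resample particles proportionally to the weights,
    (iii) propagate the resampled particles through the transition density."""
    w = np.array([likelihood(y_t, x_t, theta) for theta in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)   # resampling
    resampled = [particles[i] for i in idx]
    return [transition(theta, rng) for theta in resampled]       # propagate forward
```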
B
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients. For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17), at least a part of the glucose measurements after meals is within this range, while patient 12 has only two glucose measurements per day on average and measured glucose within 4 hours or less after a meal only 5 out of 54 times.
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal. Overall, patients measure blood glucose within 10 minutes before meals most of the time – for more than 2/3 of the meals for most patients.
In order to have a broad overview of different patients’ patterns over the one month period, we first show the figures illustrating measurements aggregated by days-in-week. For consistency, we only consider the data recorded from 01/03/17 to 31/03/17 where the observations are most stable.
Table 2 gives an overview of the number of different measurements that are available for each patient (for patient 9, no data is available). The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14.
Insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day. Further, patient 12 takes approximately 3 times the average insulin dose of the others in the morning.
B
We propose a new CNN architecture with modules adapted from the semantic segmentation literature to predict fixation density maps of the same image resolution as the input. Our approach is based on a large body of research regarding saliency models that leverage object-specific features and functionally replicate human behavior under free-viewing conditions. In the following sections, we describe our contributions to this challenging task.
To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architecture, similar to the model by Pan et al. (2017), results in better approximations. Here we employed three upsampling blocks consisting of a bilinear scaling operation, which doubled the number of rows and columns, and a subsequent convolutional layer with kernel size $3\times 3$. This setup has previously been shown to prevent checkerboard artifacts in the upsampled image space in contrast to deconvolution Odena et al. (2016). Besides an increase of resolution throughout the decoder, the amount of channels was halved in each block to yield 32 feature maps. Our last network layer transformed activations into a continuous saliency distribution by applying a final $3\times 3$ convolution. The outputs of all but the last linear layer were modified via rectified linear units. Figure 2 visualizes the overall architecture design as described in this section.
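A PyTorch sketch of such a decoder (our rendering of the description above, not the authors' code); the starting channel count of 256 and the single-channel output are assumptions, while the bilinear 2x upsampling, 3x3 convolutions, ReLUs, and the halving down to 32 feature maps follow the text:

```python
import torch.nn as nn

def upsample_block(in_channels, out_channels):
    """Bilinear 2x upscaling followed by a 3x3 convolution and ReLU,
    halving the channel count as described for each decoder block."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

decoder = nn.Sequential(
    upsample_block(256, 128),
    upsample_block(128, 64),
    upsample_block(64, 32),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),  # final layer: continuous saliency map, no ReLU
)
```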
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer et al. (2014) and II Kümmerer et al. (2016) employed a pre-trained classification model to read out salient image locations from a small subset of encoding layers. This is similar to the network by Cornia et al. (2016) which utilizes the output at three stages of the hierarchy. Oyama and Yamanaka (2018) demonstrated that classification performance of pre-trained architectures strongly correlates with the accuracy of saliency predictions, highlighting the importance of object information. Related approaches also focused on the potential benefits of incorporating activation from both coarse and fine image resolutions Huang et al. (2015), and recurrent connections to capture long-range spatial dependencies in convolutional feature maps Cornia et al. (2018); Liu and Han (2018). Our model explicitly combines semantic representations at multiple spatial scales to include contextual information in the predictive process. For a more complete account of existing saliency architectures, we refer the interested reader to a comprehensive review by Borji (2018).
Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture Simonyan and Zisserman (2014) as an image encoder by reusing the pre-trained convolutional layers to extract increasingly complex features along its hierarchy. Striding in the last two pooling layers was removed, which yields spatial representations at 1/8 of their original input size. All subsequent convolutional encoding layers were then dilated at a rate of 2 by expanding their kernel, and thereby increased the receptive field to compensate for the higher resolution Yu and Koltun (2015). This modification still allowed us to initialize the model with pre-trained weights since the number of trainable parameters remained unchanged. Prior work has shown the effectiveness of this approach in the context of saliency prediction problems Cornia et al. (2018); Liu and Han (2018).
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which captured information at different spatial scales in parallel. Finally, the input image dimensions were restored via the decoder network. Subscripts beneath convolutional layers denote the corresponding number of feature maps.
C
The procedure which, for each vertex $v \in V$, constructs $\alpha_e$ for some $e \in E$ adjacent to $v$ in $O(h)$, runs $\mathcal{A}$ in $O(f(|\alpha_e|)) = O(f(h))$, checks the resulting linear arrangement in $O(h)$, and returns the best linear arrangement among all $v \in V$, yields an $r(\mathsf{opt}, h)$-approximation for MinCutwidth on multigraphs in $O(n(f(h)+h))$. ∎
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way in which a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into graphs. The main difference is that the reduction from Section 4 turns every symbol of the alphabet into an individual vertex of the graph (thus producing a graph with $O(|\Sigma|)$ vertices), while the reduction to pathwidth will use a vertex per position of the word $\alpha$, i.e., $|\alpha|$ individual vertices. In the reduction from Section 4, the information about the actual occurrences of the symbols in the word is encoded by the edges (in particular, the length $|\alpha|$ is represented by the number of edges), while in the following reduction the alphabet is encoded by connecting the vertices that correspond to positions of the same symbol into cliques in the graph (in particular, the number of edges may range between $|\alpha|$ and $|\alpha|^2$). We proceed with a formal definition and an example.
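A small sketch of this construction (ours, for illustration): one vertex per position of the word, with all positions carrying the same symbol joined into a clique.

```python
from itertools import combinations

def word_to_clique_graph(word):
    """Vertices are the positions 0..|word|-1; positions carrying the same symbol
    are pairwise connected, so every symbol contributes a clique."""
    positions = {}
    for i, symbol in enumerate(word):
        positions.setdefault(symbol, []).append(i)
    edges = set()
    for occ in positions.values():
        edges.update(combinations(occ, 2))
    return set(range(len(word))), edges

vertices, edges = word_to_clique_graph("abacaba")
print(len(vertices), len(edges))  # 7 vertices; 'a' contributes a 4-clique (6 edges), 'b' one edge
```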
In the following, we discuss the lower and upper complexity bounds that we obtain from the reductions provided above. We first note that since Cutwidth is NP-complete, so is Loc. In particular, note that this answers one of the main questions left open in [15].
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the locality number; furthermore, we investigate the performance of direct greedy strategies for approximating the locality number. Finally, since we consider this of high importance independent of the locality number, we provide a direct reduction from cutwidth to pathwidth in Section 6.
In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several upper and lower complexity bounds for Loc. We also discuss the approximation-preserving properties of our reductions, which shall be important later on.
B
The first network is a six layer CNN that detects the slice located within heart limits, and segments the thoracic and epicardial-paracardial masks. The second network is a five layer CNN that detects the pericardium line from the CT scan in cylindrical coordinates.
The literature phrase search is the combined presence of each one of the cardiology terms indicated by (*) in Table I with each one of the deep learning terms related to architecture, indicated by (+) in Table II, using Google Scholar (https://scholar.google.com), Pubmed (https://ncbi.nlm.nih.gov/pubmed/) and Scopus (https://www.scopus.com/search/form.uri?=display=basic). Results are then curated to match the selection criteria of the review and summarized along two main axes: neural network architecture and the type of data that was used for training/validation/testing.
First, optimal paths in a computed flow field are found and then a CNN classifier is used for removing extraneous paths in the detected centerlines. The method was enhanced using a model-based detection of coronary specific territories and main branches to constrain the search space.
These predictions formed a vector field which was then used for evolving the contour using the Sobolev active contour framework. Anh et al.[130] created a non-rigid segmentation method based on the distance regularized level set method that was initialized and constrained by the results of a structured inference using a DBN.
A graph was then constructed from the retinal vascular network where the nodes are defined as the vessel branches and each edge gets associated to a cost that evaluates whether the two branches should have the same label. The CNN classification was propagated through the minimum spanning tree of the graph.
B
This demonstrates that SimPLe excels in a low data regime, but its advantage disappears with a bigger amount of data. Such behavior, with fast growth at the beginning of training but lower asymptotic performance, is commonly observed when comparing model-based and model-free methods (Wang et al. (2019)). As observed in Section 6.4, assigning a bigger computational budget helps in the 100K setting. We suspect that the gains would be even bigger for settings with more samples.
The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good policies could be learned very early. While this might have been due to the high variability of training, it does suggest the possibility of much faster training (i.e. in fewer steps than 100K) with more directed exploration policies. In Figure 9 in the Appendix we present the cumulative distribution plot for the (first) point during learning when the maximum score for the run was achieved in the main training loop of Algorithm 1.
Finally, we verified whether a model obtained with SimPLe using 100K steps is a useful initialization for model-free PPO training. Based on the results depicted in Figure 5 (b), we can answer this conjecture positively. The lower asymptotic performance is probably due to worse exploration: a policy pre-trained with SimPLe was meant to obtain the best performance at 100K steps, at which point its entropy is very low, thus hindering further PPO training.
We focused our work on learning games with 100K interaction steps with the environment. In this section we present additional results for settings with 20K, 50K, 200K, 500K and 1M interactions; see Figure 5 (a). Our results are poor with 20K interactions. For 50K they are already almost as good as with 100K interactions. From there the results improve until 500K samples – it is also the point at which they are on par with model-free PPO. Detailed per game results can be found in Appendix F.
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, and PPO (Schulman et al., 2017), a model-free policy gradient algorithm (see Appendix E for details of tuning of Rainbow and PPO). The results of the comparison are presented in Figure 3. For each game, we plot the number of time steps needed for either Rainbow or PPO to reach the same score that our method reaches after 100K interaction steps. The red line indicates 100K steps: any bar larger than this indicates a game where the model-free method required more steps. SimPLe outperforms the model-free algorithms in terms of learning speed on nearly all of the games, and in the case of a few games, does so by over an order of magnitude. For some games, it reaches the same performance that our PPO implementation reaches at 10M steps. This indicates that model-based reinforcement learning provides an effective approach to learning Atari games, at a fraction of the sample complexity.
B
Here we also refer to a CNN as a neural network consisting of alternating convolutional layers, each followed by a Rectified Linear Unit (ReLU) and a max-pooling layer, with a fully connected layer at the end; the term ‘layer’ denotes the number of convolutional layers.
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11,500 EEG signals.
A high level overview of these combined methods is shown in Fig. 1. Although we choose the EEG epileptic seizure recognition dataset from University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized in any kind of signal classification problem.
For the spectrogram module, which is used for visualizing the change of the frequency content of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 samples to convert $x_i$ into the time-frequency domain. The resulting spectrogram, which represents the magnitude of the power spectral density ($V^2/Hz$) of $x_i$, was then upsampled to $178\times 178$ using bilinear pixel interpolation.
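A sketch of this preprocessing step using SciPy; the sampling frequency and the particular interpolation routine are assumptions, while the window, segment length, overlap, and FFT size follow the values stated above:

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import zoom

def eeg_to_spectrogram_image(x_i, fs=178.0, out_size=(178, 178)):
    """Convert a single EEG segment into a 178x178 time-frequency image:
    Tukey(0.25) window, 8-sample segments, 4-sample overlap, 64-point FFT,
    then bilinear upsampling of the power spectral density."""
    f, t, sxx = spectrogram(x_i, fs=fs, window=("tukey", 0.25),
                            nperseg=8, noverlap=4, nfft=64)
    scale = (out_size[0] / sxx.shape[0], out_size[1] / sxx.shape[1])
    return zoom(sxx, scale, order=1)  # order=1 gives bilinear interpolation

img = eeg_to_spectrogram_image(np.random.randn(178))
print(img.shape)  # (178, 178)
```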
This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as electroencephalography (EEG), that are used for diagnosis and prediction problems.
A
A major obstacle to achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measurements of soil attributes prior to robot deployment [9]. Moreover, it is important to consider that these terramechanics models, striving to predict robot-terrain interactions, often involve substantial computational costs due to their complexity [16]. Therefore, terramechanics methods are unsuitable for direct use in autonomous locomotion mode transition control, particularly in scenarios where robots need to move at high speeds, for example in search and rescue missions. To bypass the limitations of terramechanics methods, researchers have probed alternative strategies for accomplishing autonomous locomotion transition. For example, certain studies have utilized energy consumption as a metric for evaluating the traversability of different locomotion modes in wheel/track-legged robots [8]. By scrutinizing the energy expenditure of different locomotion modes, researchers can evaluate their efficiency in navigating various terrains. Additionally, other general parameters such as stability margin and motion efficiency have been examined in the quest to achieve autonomous locomotion transition [2].
In the literature, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously using criteria based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking. The rolking mode, a combination of rolling and walking, empowered WorkPartner to navigate with enhanced agility. This was accomplished through criteria that took into account energy utilization, wheel slip percentage, and the dynamics between the wheels and the demanding terrain. However, it is noteworthy that Gorilla has only walking locomotion modes and does not fit into the wheel/track-legged hybrid robot category, and the approach introduced for WorkPartner is tailored specifically to it: the threshold values for the locomotion transition criteria were established empirically through prior experimental evaluations on the target terrains. A critical aspect that deserves emphasis is that the prevailing criteria proposed for locomotion mode transitions have primarily concentrated on the robot's internal states, neglecting the integration of external environmental information into the decision-making process. This oversight underscores the need for future developments that incorporate a more comprehensive understanding of the external context and environmental factors, enabling robots like WorkPartner to make informed decisions based on a holistic assessment of both internal and external conditions.
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and walking locomotion modes. Through energy consumption analyses during step negotiations of varied heights, we establish energy criterion thresholds that guide the robot’s transition from rolling to walking mode. Our simulation studies reveal that the Cricket robot can autonomously switch to the most suitable locomotion mode based on the height of the steps encountered.
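A simplified sketch of the decision rule described above (our illustration; the threshold values and the energy-integration details are placeholders standing in for the pre-studied step-negotiation experiments):

```python
def should_switch_to_walking(accumulated_energy, step_height, thresholds):
    """Trigger the rolling-to-walking transition once the online accumulated
    energy consumption exceeds the pre-studied threshold for this step height."""
    return accumulated_energy > thresholds[step_height]

# Hypothetical thresholds (in joules) established offline for step heights h and 2h.
thresholds = {"h": 120.0, "2h": 310.0}

accumulated = 0.0
for power, dt in [(50.0, 2.0), (65.0, 2.0), (80.0, 2.0)]:   # sampled power draw of the rear legs
    accumulated += power * dt
    if should_switch_to_walking(accumulated, "2h", thresholds):
        print("switching to walking (rear body climbing gait)")
        break
```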
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there is a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and to effectively handle the transitions between them [6]. Second, it is essential to develop decision-making frameworks that determine the best mode, either rolling or walking, based on the robot's environmental interactions and internal states [7, 8]. Regarding the first challenge, the dynamics of rolling locomotion are well understood and are similar to those of traditional wheeled/tracked robots. However, despite extensive research on the walking dynamics of standard legged robots, focused studies on the walking patterns specific to wheel/track-legged robots are limited [9]. Transition control between these locomotion modes for wheel/track-legged robots also requires more exploration [6]. In this study, we focus on the second challenge, developing efficient decision-making algorithms for transitioning between locomotion modes. This remains a largely unexplored area [3], but it is essential for achieving autonomous locomotion transition in hybrid robots. Building upon our prior work, we employ two climbing gaits to ensure smooth walking locomotion for wheel/track-legged robots, particularly when negotiating steps [10].
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step.
A
As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online algorithms with advice can be of practical interest in settings in which it is feasible to run multiple algorithms and output the best solution (see [20] about obtaining improved data compression algorithms by means of list update algorithms with advice); and the first complexity classes for online computation have been based on advice complexity [10].
Notwithstanding such interesting attributes, the known advice model has certain drawbacks. The advice is always assumed to be some error-free information that may be used to encode some property often explicitly connected to the optimal solution. In many settings, one can argue that such information cannot be readily available, which implies that the resulting algorithms are often impractical.
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of advice bits. The objective is thus to identify the exact trade-offs between the size of the advice and the performance of the algorithm. This is meant to provide a smooth transition between the purely online world (nothing is known about the input) and the purely “offline” world (everything is known about the input).
In future work, we would like to expand the model so as to incorporate the concept of advice error into the analysis. More specifically, given an advice string of size $k$, let $\eta$ denote the number of erroneous bits (which may not be known to the algorithm). In this setting, the objective would be to study the power and limitations of online algorithms, i.e., from the point of view of both upper and lower bounds on the competitive ratio. A first approach in this direction was made recently in the context of problems such as contract
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, like all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution. Last, and perhaps most significantly, a malicious entity that takes control of the advice oracle can have a catastrophic impact. For a very simple example, consider the well-known ski rental problem: this is a simple, yet fundamental, resource allocation problem in which we have to decide ahead of time whether to rent or buy equipment without knowing the time horizon in advance. In the traditional advice model, one bit suffices to be optimal: 0 for renting throughout the horizon, 1 for buying right away. However, if this bit is wrong, then the online algorithm has unbounded competitive ratio, i.e., it can perform extremely badly. In contrast, an online algorithm that does not use advice at all has competitive ratio at most 2, i.e., its output can be at most twice as costly as the optimal one.
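The ski rental example can be made concrete with a small sketch (ours; costs are normalized so that renting costs 1 per day and buying costs B):

```python
def cost_with_advice(advice_bit, horizon, B):
    """Follow a single advice bit: 0 = rent every day, 1 = buy immediately."""
    return horizon if advice_bit == 0 else B

def cost_break_even(horizon, B):
    """Classical strategy without advice: rent for B-1 days, then buy; competitive ratio at most 2."""
    return horizon if horizon < B else (B - 1) + B

B, horizon = 10, 1000
opt = min(horizon, B)
print(cost_with_advice(1, horizon, B) / opt)   # correct advice: ratio 1.0
print(cost_with_advice(0, horizon, B) / opt)   # wrong advice: ratio 100, grows unboundedly with the horizon
print(cost_break_even(horizon, B) / opt)       # no advice: at most ~2
```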
A
Another (more elaborate) policy could have taken into account how fast the positive value grows (the slope) in relation to the negative one and, if a given threshold was exceeded, classified the subject as depressed; in such a case our subject could have been classified as depressed, for instance, after reading his/her 92nd writing. Note that we could also combine multiple policies, as we will see in Section 5.
This brief subsection describes the training process, which is trivial. Only a dictionary of term-frequency pairs is needed for each category. Then, during training, dictionaries are updated as new documents are processed —i.e. unseen terms are added and frequencies of already seen terms are updated.
Otherwise, it can be omitted since, during classification, $gv$ can be dynamically computed based on the frequencies stored in the dictionaries. It is worth mentioning that this algorithm could be easily parallelized by following the MapReduce model as well: for instance, all training documents could be split into batches, then frequencies locally calculated within each batch, and finally, all these local frequencies summed up to obtain the total frequencies.
In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actually computed. As we will see, these values are the basis of the entire classification process.
Note that with this simple training method there is no need either to store all documents or to re-train from scratch every time a new training document is added, making the training incremental (even new categories could be dynamically added). Additionally, there is no need to compute the document-term matrix because, during classification, $gv$ can be dynamically computed based on the frequencies stored in the dictionaries; although, in case we are working in an offline fashion and to speed up classification, it is still possible to create the document-term matrix holding the $gv$ value for each term. Finally, also note that the training computation is very cheap since it involves only updating term frequencies, i.e., only one addition operation is needed.
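As a rough illustration of the incremental training described above, the following sketch keeps one term-frequency dictionary per category and updates it as documents arrive; the class, method, and category names are illustrative placeholders, not the SS3 authors' implementation:

```python
# Minimal sketch (illustrative, not the authors' code) of incremental training
# with one term-frequency dictionary per category, updated in place.
from collections import defaultdict

class IncrementalTrainer:
    def __init__(self):
        # category -> {term: frequency}
        self.freqs = defaultdict(lambda: defaultdict(int))

    def train(self, document: str, category: str) -> None:
        # Only term frequencies are updated: one addition per term, no stored
        # document collection and no re-training from scratch.
        for term in document.lower().split():
            self.freqs[category][term] += 1

trainer = IncrementalTrainer()
trainer.train("feeling sad and tired again", "depressed")
trainer.train("great day at the beach", "control")
trainer.train("another tiring and sad day", "depressed")  # new categories could be added the same way
print(trainer.freqs["depressed"]["sad"])  # 2
```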
A
We run DMSGD, DGC (w/ mfm), DGC (w/o mfm) and GMC respectively to solve the optimization problem: $\min_{\mathbf{w}\in\mathbb{R}^d} F(\mathbf{w})$. The momentum coefficient $\beta$ is set as 0.9 and the learning rate is set as 0.005. We use top-$s$ as the sparsification compressor.
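For concreteness, a minimal sketch of a top-$s$ sparsification compressor of the kind mentioned above is given below; the error-feedback residual and all names are illustrative assumptions, not the exact compressor used in these experiments:

```python
# Illustrative top-s sparsification sketch: only the s largest-magnitude
# coordinates of a local update are transmitted; the rest are kept in a
# local residual for later rounds (an assumed error-feedback variant).
import numpy as np

def top_s_compress(vector: np.ndarray, s: int):
    idx = np.argsort(np.abs(vector))[-s:]     # indices of the s largest entries
    values = vector[idx]
    residual = vector.copy()
    residual[idx] = 0.0                       # kept locally, added to the next update
    return idx, values, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.normal(size=20)                   # e.g. a local update with d = 20
    idx, vals, res = top_s_compress(g, s=4)
    sparse = np.zeros_like(g)
    sparse[idx] = vals                        # what is actually communicated
    print(np.linalg.norm(g - sparse - res))   # 0.0: the residual stores exactly what was dropped
```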
Table 2 and Figure 4 show the performance under non-IID data distribution. We can find that GMC can achieve much better test accuracy and faster convergence speed compared to other methods. Furthermore, we can find that the momentum factor masking trick will severely impair the performance of DGC under non-IID data distribution.
Figure 2(b), 2(c) and 2(d) show the distances to the global optimal point when using different $s$ for the case when $d=20$. We can find that, compared with the local momentum methods, the global momentum method GMC converges faster and more stably.
process. As for global momentum, the momentum term $-(\mathbf{w}_t-\mathbf{w}_{t-1})/\eta$ contains global information from all the workers. Since we are optimizing the objective function $F(\mathbf{w})$, $\mathbf{w}_t-\mathbf{w}_{t-1}$ denotes the descent direction of $F(\mathbf{w})$ with high probability in the next iteration, which will help the parameter to converge to the global optimal point. If no sparse communication is adopted, DGC (w/o mfm) will degenerate to DMSGD, and DGC (w/ mfm) will degenerate to DSGD.
We can find that after a sufficient number of iterations, the parameter in DGC (w/o mfm) can only oscillate within a relatively large neighborhood of the optimal point. Compared with DGC (w/o mfm), the parameter in GMC converges closer to the optimal point and then remains stable. Figure 2(a) shows the distances to the global optimal point during the optimization process. We can find that although the momentum factor masking trick can make the convergence trajectory appear more stable, it also slows down the convergence.
D
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals in total.
During supervised learning the weights of the kernels are frozen and a one layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs. The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function.
We use one signal from each of 15 signal datasets from Physionet listed in the first column of Table I. Each signal consists of 12000 samples which in turn is split in 12 signals of 1000 samples each, to create the training (6 signals), validation (2 signals) and test datasets (4 signals).
We then split the 11500 signals into 76%, 12% and 12% (8740, 1380, 1380 signals) as training, validation and test data respectively and normalize in the range [0, 1] using the global max and min. For the SANs, we used two kernels ($q=2$) with a varying size in the range of [8, 15] and trained for 5 epochs with a batch size of 64.
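The split and global min-max normalization described above could look roughly as follows; the code is an illustrative sketch with random data, not the authors' preprocessing script, and the exact ordering or stratification of the split may differ:

```python
# Sketch of the split and global min-max normalization described above
# (illustrative; random placeholder data instead of the real EEG segments).
import numpy as np

signals = np.random.randn(11500, 178)               # 11500 EEG segments, 178 samples each

n = len(signals)
n_train, n_val = int(0.76 * n), int(0.12 * n)        # 8740 / 1380 / 1380
train = signals[:n_train]
val = signals[n_train:n_train + n_val]
test = signals[n_train + n_val:]

g_min, g_max = signals.min(), signals.max()          # global min and max, as in the text
train, val, test = [(x - g_min) / (g_max - g_min) for x in (train, val, test)]
print(train.shape, val.shape, test.shape)            # (8740, 178) (1380, 178) (1380, 178)
```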
The first two fully connected layers are followed by a ReLU while the last one produces the predictions. The CNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function.
C
A new algorithm which can learn from previous experiences is required, and an algorithm with a faster learning speed is more desirable. Existing algorithms learn by prediction: a UAV knows its current strategy with the corresponding payoff, randomly selects another strategy, calculates its payoff, and then compares the two payoffs. If the payoff of the new strategy is larger, the current strategy is replaced by the new strategy; if the payoff of the current strategy is larger, the UAV remains with the current strategy. However, under highly dynamic scenarios, complicated network conditions make it hard for UAVs to calculate their expected payoffs, so they can only learn from previous experiences [11]. In this situation, a UAV merely knows the current and the last strategy with the corresponding payoffs, and it can only learn from these. In this case, if the two strategies are different, the UAV chooses the strategy with the larger payoff as the current strategy in the next iteration; if the two strategies are the same, a new strategy is randomly selected as the current strategy of the next iteration. To sum up, the difference between the existing and the required algorithms is that the existing algorithms calculate the payoff of a new strategy (which is equivalent to prediction) and choose it based on that prediction, whereas the required algorithm must work when a UAV only knows a strategy's payoff if that strategy has been selected, deciding whether to stick to the current strategy or return to the past strategy by comparing their payoffs. Therefore, an algorithm which can learn from previous experiences is required.
A new algorithm which can learn from previous experiences is required, and an algorithm with a faster learning speed is more desirable. Existing algorithms learn by prediction: a UAV knows its current strategy with the corresponding payoff, randomly selects another strategy, calculates its payoff, and then compares the two payoffs. If the payoff of the new strategy is larger, the current strategy is replaced by the new strategy; if the payoff of the current strategy is larger, the UAV remains with the current strategy. However, under highly dynamic scenarios, complicated network conditions make it hard for UAVs to calculate their expected payoffs, so they can only learn from previous experiences [11]. In this situation, a UAV merely knows the current and the last strategy with the corresponding payoffs, and it can only learn from these. In this case, if the two strategies are different, the UAV chooses the strategy with the larger payoff as the current strategy in the next iteration; if the two strategies are the same, a new strategy is randomly selected as the current strategy of the next iteration. To sum up, the difference between the existing and the required algorithms is that the existing algorithms calculate the payoff of a new strategy (which is equivalent to prediction) and choose it based on that prediction, whereas the required algorithm must work when a UAV only knows a strategy's payoff if that strategy has been selected, deciding whether to stick to the current strategy or return to the past strategy by comparing their payoffs. Therefore, an algorithm which can learn from previous experiences is required.
Compared with other algorithms, the novel SPBLLA algorithm has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely used algorithm, LLA, is an ideal method for approaching the NE [9][32]. BLLA, which is modified from LLA to update strategies in each iteration so as to converge to the NE, has been employed by [33]. However, only a single agent is allowed to alter its strategy in one iteration. In large-scale scenarios, more iterations are required, which makes BLLA inefficient. It is obvious that letting more UAVs alter strategies in one iteration would be more efficient. To achieve this, the works in [34] and [35] have provided a novel synchronous algorithm. However, there exist superabundant restrictions that make that algorithm impractical in most scenarios. Compared with the former algorithms, SPBLLA has fewer constraints and can achieve synchronous operation, which can significantly improve computational efficiency.
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change its strategy in one iteration based on the current game state, and then another UAV changes its strategy in the next iteration based on the new game state. This means that UAVs are not permitted to update strategies at the same time. Besides, determining which UAV updates its strategy requires a coordinating process that occupies plenty of channel capacity and requires more time between two iterations [15]. If the algorithm could learn synchronously, more than one UAV could update strategies based on the current game state in one iteration, and the algorithm would be more efficient. To sum up, synchronous update algorithms which can learn from previous experiences are desirable, but only a little research has investigated them.
In the literature, most works search for a PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations to this algorithm. In BLLA, each UAV can calculate and predict its utility for any $s_i \in S_i$ in the complete strategy set. While in UAV ad-hoc networks, UAVs have access only to the constrained strategy set and corresponding utilities in the last two decision periods. Thus, conventional BLLA is no longer suitable for the scenario we consider here, and we propose a revised algorithm based on BLLA, called the Payoff-based Binary Log-linear Learning Algorithm (PBLLA), to resolve the issue.
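As a hedged illustration of the experience-only decision rule described above, the following sketch compares the payoffs of the last two selected strategies; it is a deliberate simplification for illustration, not the exact PBLLA/SPBLLA update, which is log-linear (probabilistic) rather than greedy:

```python
# Illustrative simplification of the experience-based update described above:
# a UAV only knows the payoffs of the strategies it actually selected in the
# last two decision periods (not the exact PBLLA/SPBLLA rule).
import random

def next_strategy(prev, curr, payoff_prev, payoff_curr, strategy_set):
    if prev == curr:
        # Same strategy twice: nothing to compare, so explore a new one.
        return random.choice([s for s in strategy_set if s != curr])
    # Different strategies: keep whichever of the two observed payoffs is larger.
    return curr if payoff_curr >= payoff_prev else prev

channels = [0, 1, 2, 3]   # e.g. available channels as strategies
print(next_strategy(prev=1, curr=3, payoff_prev=0.4, payoff_curr=0.7, strategy_set=channels))  # 3
print(next_strategy(prev=2, curr=2, payoff_prev=0.5, payoff_curr=0.5, strategy_set=channels))  # a random other channel
```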
C
resistivity, $\eta\,[\mathrm{m}^2/\mathrm{s}] = \eta^{\prime}/\mu_0$ is magnetic diffusivity, and the adiabatic index $\gamma = \frac{5}{3}$ is the
$\mathbf{v}|_{\Gamma}=\mathbf{0},\ \mathbf{q}_{i\perp}|_{\Gamma}=\mathbf{q}_{e\perp}|_{\Gamma}=\mathbf{0},\ (\nabla_{\perp}\psi)|_{\Gamma}=0 \text{ and } (\nabla_{\perp}f)|_{\Gamma}=0$,
are standard. The boundary conditions and closure for this model (namely, definitions of thermal fluxes $\mathbf{q}_i$ and $\mathbf{q}_e$,
With reference to the definitions of the discrete forms for the thermal flux $\widehat{\overline{\nabla}}\cdot\widehat{\mathbf{q}}_{\alpha}$
terms $\widehat{\mathbf{q}}_i$ and $\widehat{\mathbf{q}}_e$. For the viscous terms, we use, for simplicity, the unmagnetised version
B
Let $r$ be the relation on $\mathcal{C}_R$ given to the left of Figure 12. Its abstract lattice $\mathcal{L}_r$ is represented to the right.
The tuples $t_1$, $t_4$ represent a counter-example to $BC \rightarrow A$ for $g_1$ and $g_3$.
First, remark that both $A \rightarrow B$ and $B \rightarrow A$ are possible. Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r \models_g A \rightarrow B$ as there are no counter-examples in the resulting closure system.
If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use $\leq, \land, \lor$ instead of $\leq_R, \land_R, \lor_R$, respectively.
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to $A \rightarrow B$ or $B \rightarrow A$.
D
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes, because of unseen state transitions and the finite size of the experience replay buffer. This type of variance leads to convergence to sub-optimal policies and severely hurts DQN performance. The second source of variance is Target Approximation Error, which is the error coming from the inexact minimization of DQN parameters. Many of the proposed extensions focus on minimizing the variance that comes from AGE by finding methods to optimize the learning trajectory, or from TAE by using methods like averaging to obtain exact DQN parameters. Dropout methods have the ability to combine these two solutions, which minimize different sources of variance: they can achieve a consistent learning trajectory and exact DQN parameters through the averaging that comes inherently with Dropout.
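A minimal sketch of how Dropout can be inserted into a DQN-style Q-network is shown below; it assumes PyTorch, a CartPole-sized state, and illustrative layer sizes and dropout rates, and is not the exact architecture used in these experiments:

```python
# Minimal sketch (assumptions: PyTorch, 4-dimensional state, 2 actions) of a
# DQN-style Q-network with Dropout layers, as discussed above.
import torch
import torch.nn as nn

class DropoutQNetwork(nn.Module):
    def __init__(self, state_dim=4, n_actions=2, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Dropout(p=p_drop),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p=p_drop),
            nn.Linear(128, n_actions),
        )

    def forward(self, state):
        return self.net(state)

q_net = DropoutQNetwork()
state = torch.randn(32, 4)     # a batch of states
q_net.train()                  # dropout active during learning
q_values = q_net(state)
q_net.eval()                   # at evaluation time dropout acts as an implicit average
print(q_values.shape)          # torch.Size([32, 2])
```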
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as the reduced standard deviation between the variants indicates. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and after applying Dropout (Dropout methods DQN). There was a statistically significant decrease in variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore, one of the Dropout methods outperformed the DQN score.
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optimal value function can be computed exactly.
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We have evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control Environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady state policy.
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow that could lead to solving the task on the environment [1][2][3]. Most Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy, in the form of a sequence of actions that can be followed by the agent to solve the task. The process of choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning, and for that reason many algorithms have been introduced over the course of time, e.g., Q-learning [4], SARSA [5], and policy gradient methods [6]. These methods use linear function approximation techniques to estimate action values, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. The recent advances in deep neural networks helped to develop an artificial agent named deep Q-network (DQN) [8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and the huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as the “overestimation phenomenon,” was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values. They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance.
B
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, and was robust to changes in lighting, skin tone, and pose. He et al. (2019) trained a U-Net (Ronneberger et al., 2015)-like encoder-decoder architecture to simultaneously segment thoracic organs from CT scans and perform global slice classification. Ke et al. (2019) trained a multi-task U-Net architecture to solve three tasks - separating wrongly connected objects, detecting class instances, and pixelwise labeling for each object, and evaluated it on a food microscopy image dataset. Other multi-task models have also been proposed for segmentation and classification for detecting manipulated faces in images and video (Nguyen et al., 2019) and diagnosis of breast biopsy images (Mehta et al., 2018) and mammograms (Le et al., 2019).
V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images.  Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Similarly, other papers (Lian et al., 2018; Isensee et al., 2019; Li et al., 2019b; Ni et al., 2019; Oktay et al., 2018; Schlemper et al., 2019) have leveraged the attention concept into medical image segmentation as well.
Bischke et al. (2019) proposed a cascaded multi-task loss to preserve boundary information from segmentation masks for segmenting building footprints and achieved state-of-the-art performance on an aerial image labeling task. He et al. (2017) extended Faster R-CNN (Ren et al., 2015) by adding a new branch to predict the object mask along with a class label and a bounding box, and the proposed model was called Mask R-CNN. Mask R-CNN has been used extensively for multi-task segmentation models for a wide range of application areas (Abdulla, 2017), such as adding sports fields to OpenStreetMap (Remillard, 2018), detection and segmentation for surgery robots (SUYEgit, 2018), understanding climate change patterns from aerial imagery of the Arctic (Zhang et al., 2018a), converting satellite imagery to maps (Mohanty, 2018), detecting image forgeries (Wang et al., 2019d), and segmenting tree canopy (Zhao et al., 2018).
Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al., 2019), detecting and segmenting nuclei from histological and microscopic images (Johnson, 2018; Vuola et al., 2019; Wang et al., 2019a, b), detecting and segmenting oral diseases (Anantharaman et al., 2018), segmenting neuropathic ulcers (Gamage et al., 2019), and labeling and segmenting ribs in chest X-rays (Wessel et al., 2019). Mask R-CNN has also been extended to work with 3D volumes and has been evaluated on lung nodule detection and segmentation from CT scans and breast lesion detection and categorization on diffusion MR images (Jaeger et al., 2018; Kopelowitz and Engelhard, 2019).
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, and was robust to changes in lighting, skin tone, and pose. He et al. (2019) trained a U-Net (Ronneberger et al., 2015)-like encoder-decoder architecture to simultaneously segment thoracic organs from CT scans and perform global slice classification. Ke et al. (2019) trained a multi-task U-Net architecture to solve three tasks - separating wrongly connected objects, detecting class instances, and pixelwise labeling for each object, and evaluated it on a food microscopy image dataset. Other multi-task models have also been proposed for segmentation and classification for detecting manipulated faces in images and video (Nguyen et al., 2019) and diagnosis of breast biopsy images (Mehta et al., 2018) and mammograms (Le et al., 2019).
C
The red line indicates the number of edges that remain in $\bar{\mathbf{A}}$ after sparsification. It is possible to see that for small increments of $\epsilon$ the spectral distance increases linearly, while the number of edges in the graph drops exponentially.
We notice that the coarsened graphs are pre-computed before training the GNN. Therefore, the computational time of graph coarsening is much lower compared to training the GNN for several epochs, since each MP operation in the GNN has a cost $\mathcal{O}(N^2)$.
The proposed spectral algorithm is not designed to handle very dense graphs; an intuitive explanation is that $\mathbf{v}^{s}_{\text{max}}$ can be interpreted as the graph signal with the highest frequency, since its sign oscillates as much as possible when transiting from a node to one of its neighbors.
The GNN is then trained to fit its node representations to these pre-determined structures. Pre-computing graph coarsening not only makes training much faster, by avoiding graph reduction at every forward pass, but it also provides a strong inductive bias that prevents degenerate solutions, such as entire graphs collapsing into a single node or entire graph sections being discarded.
The reason can be once again attributed to the low information content of the individual node features and in the sparsity of the graph signal (most node features are 0), which makes it difficult for the feature-based pooling methods to infer global properties of the graph by looking at local sub-structures.
C
The following analyses are shown exemplarily on the Soybean dataset. This dataset has 35 features and 19 classes. First, we analyze the generated data with a fixed number of decision trees, i.e., the number of sampled decision trees in $RF_{\text{sub}}$. The resulting confidence distributions for different numbers of decision trees are shown in the first column of Figure 5. When adapting the data sample to only a few decision trees, the confidence of the generated samples is lower (around 0.2 for 5 samples per class).
Probability distribution of the predicted confidences for different data generation settings on Soybean with 5 (top) and 50 (bottom) samples per class. Generating data with different numbers of decision trees is visualized in the left column. Additionally, a comparison between random sampling (red), NRFI uniform (orange), and NRFI dynamic (green) is shown in the right column. By optimizing the decision tree sampling, NRFI dynamic automatically balances the confidences and generates the most diverse and evenly distributed data.
This shows that neural random forest imitation is able to generate significantly better data samples based on the knowledge in the random forest. NRFI dynamic improves the performance by automatically optimizing the decision tree sampling and generating the largest variation in the data.
The analysis shows that random data samples and uniform sampling have a bias to generate data samples that are classified with high confidence. NRFI dynamic automatically balances the number of decision trees and achieves an evenly distributed data distribution, i.e., generates the most diverse data samples.
NRFI uniform and NRFI dynamic sample the number of decision trees for each data point uniformly and optimized via the automatic confidence distribution, respectively (see Section 4.1.4). The confidence distributions for both sampling modes are visualized in the second column of Figure 5. Additionally, sampling random data points without generating data from the random forest is included as a baseline.
D
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy optimization remain rather limited from both computational and statistical perspectives. More specifically, from the computational perspective, it remains unclear until recently whether policy optimization converges to the globally optimal policy in a finite number of iterations, even given infinite data. Meanwhile, from the statistical perspective, it still remains unclear how to attain the globally optimal policy with a finite regret or sample complexity.
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), where the reward function is fixed across all the episodes.
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a $\sqrt{d^2H^3T}$-regret up to logarithmic factors.
B
Both training and inference have extremely high demands on their targeted platform and certain hardware requirements can be the deciding factor whether an application can be realized. This section briefly introduces the most important hardware for deep learning and discusses their potentials and limitations.
Jacob et al. (2018) proposed a quantization scheme that accurately approximates floating-point operations using only integer arithmetic to speed up computation. During training, their forward pass simulates the quantization step to keep the performance of the quantized DNN close to the performance of using single-precision.
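A small sketch in the spirit of that scheme is shown below: the forward pass applies a quantize-dequantize step so training sees the effect of integer quantization. The bit width, rounding details, and function names are assumptions for illustration, not the exact procedure of Jacob et al. (2018):

```python
# Illustrative fake-quantization sketch: weights are rounded onto an integer
# grid and mapped back to floats, simulating quantization during the forward pass.
import numpy as np

def fake_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)           # affine range mapping
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)   # integer grid
    return (q - zero_point) * scale                        # back to float for the forward pass

w = np.random.randn(4, 4).astype(np.float32)
w_q = fake_quantize(w, num_bits=8)
print(np.abs(w - w_q).max())   # small quantization error seen during training
```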
In Huang and Wang (2018), the outputs of different structures are scaled with individual trainable scaling factors. By using a sparsity enforcing $\ell^1$-norm regularizer on these scaling factors, the outputs of the corresponding structures are driven to zero and can be pruned.
Quantized DNNs with 1-bit weights and activations are the worst performing models, which is due to the severe implications of extreme quantization on prediction performance. As can be seen, however, the overall performance of the quantized models increases considerably when the bit width of activations is increased to 2–3 bit whilst the bit width of the weights is kept low.
CPUs were originally designed to optimize single-thread performance in order to execute an individual computation within the shortest possible latency. Unfortunately, single-thread performance is stagnating since the end of Dennard scaling (Dennard et al., 1974), and now performance scaling usually requires parallelization.
D
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori not obvious. We address this question in Section 3 by introducing a suitable category structure.
One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding it into a larger space and then considering the persistent homology of the filtration obtained by considering the resulting system of nested neighborhoods of the original space inside this ambient space. These neighborhoods, being also metric (and thus topological) spaces, permit giving a short proof of the Künneth formula for Vietoris-Rips persistent homology.
In Section 4, we show that the Vietoris-Rips filtration can be (categorically) seen as a special case of persistent homology obtained through metric embeddings via the isomorphism theorem (Theorem 1). In this section, we also establish the stability of the filtration obtained via metric embeddings.
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori not obvious. We address this question in Section 3 by introducing a suitable category structure.
In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence barcode by the hyperbolicity of the underlying space.
B
The difference line plot (d), on the other hand, builds on the standard plot by highlighting the differences between the selection and the global average, shown as positive and negative values around the 0 value of the y-axis. It provides a clearer overall picture of the difference in preservation among all the shown scales, but compromises the precision and simplicity of interpretation of the y-axis (where the exact percentage of Neighborhood Preservation was previously shown). The difference bar chart (b) is a combination of the designs (a) and (d). Similar to (d), the interpretation of the y-values might be misleading.
After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main view (Figure 1(e)), and the projection can be switched at any time if the user is not satisfied with the initial choice. We also provide the mechanism for a selection-based ranking of the representatives. During the exploration of the projection, if the user finds a certain pattern of interest (i.e., cluster, shape, etc.), one possible question might be whether this specific pattern is better visible or better represented in another projection. After selecting these points, the list of top representatives can be ranked again to contain the projections with the best quality regarding the selection (as opposed to the best global quality, which is the default). The way this “selection-based quality” is computed is by adapting the global quality measures we used, taking advantage of the fact that they all work by aggregating a measure-specific quality computation over all the points of the projection. In the case of the selection-based quality, we aggregate only over the selected points to reach the final value of the quality measure, which is then used to re-rank the representatives.
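A possible sketch of this selection-based re-ranking is shown below, assuming each quality measure exposes per-point scores that can be re-aggregated over the selection (an assumption for illustration; the tool's actual measures may aggregate differently):

```python
# Illustrative sketch of selection-based re-ranking of projection representatives:
# a per-point quality score is re-aggregated over the selected points only.
import numpy as np

def rank_representatives(per_point_quality: dict, selected_idx: np.ndarray):
    # per_point_quality: projection name -> array of per-point quality scores
    scores = {name: q[selected_idx].mean() for name, q in per_point_quality.items()}
    return sorted(scores, key=scores.get, reverse=True)

qualities = {"proj_a": np.random.rand(100), "proj_b": np.random.rand(100)}
selection = np.array([3, 10, 42, 57])                 # indices of the user's lasso selection
print(rank_representatives(qualities, selection))     # best projection for this selection first
```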
Adaptive PCP vs. PCP   Although it is not uncommon to find tools that use PCP views together with DR-based scatterplots (e.g., iPCA [69]) with various schemes for re-ordering and prioritizing the axes (e.g., [70, 71]), the arrangement and presentation of these PCP’s are usually static in order to reflect attributes of the data (or the projection) as a whole. In our proposed Adaptive PCP, the arrangement of the axes is dynamically updated every time the user makes a new selection (using a local PCA); this way, the PCP only shows, at any given time, the most relevant dimensions for the user’s current focus, which may differ significantly from the global aspects of the projection as a whole. Coupled with the Dimension Correlation view, this provides a highly-customized toolset for inspecting and interpreting the meanings of specific neighborhoods of data points.
Apart from the adaptive filtering and re-ordering of the axes, we maintained a rather standard visual presentation of the PCP plot, to make sure it is as easy and natural as possible for users to inspect it. The colors reflect the labels of the data with the same colors as in the overview (Subsection 4.2), when available, and the rest of the instances of the data—which are not selected—are shown with high transparency. Each axis maps the entire range of each dimension, from bottom to top. A simple example is given in Figure 4(b), where we can see that the dimensions of the selected points roughly appear at the intersection between two species, versicolor (brown) and virginica (orange).
Adaptive Parallel Coordinates Plot   Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time, to avoid clutter. The shown axes (and their order) are, however, not fixed, as is the usual case. Instead, they are adapted to the selection in the following way. First, a Principal Component Analysis (PCA) [1] is performed using only the selected points, but with all dimensions. That yields two results: (1) a set of eigenvectors that represent a new base that best explains the variance of the selected points, and (2) a set of eigenvalues that represent how much variance is explained by each eigenvector. Simulating a reduction of the dimensions of the selected points to 1-dimensional space using PCA, we pick the eigenvector with the largest eigenvalue, i.e., the most representative one. This $N$-D vector can be seen as a sequence $w$ of $N$ weights, one per original dimension, where the value of $w_j$ indicates the importance of dimension $j$ in explaining the variance of the user-selected subset of the data. Finally, we sort $w$ in descending order, then pick the dimensions that correspond to the first (up to) 8 values of the sorted $w$. These are the (up to) 8 dimensions shown in the PCP axes, in the same descending order (from left to right).
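A small sketch of this axis-selection step is given below; it assumes scikit-learn's PCA and takes absolute component weights as the importances (an interpretation, since the text does not spell out how negative weights are handled):

```python
# Sketch of the PCP axis selection described above (illustrative, not the tool's code):
# PCA on the selected points only, then the dimensions with the largest weights
# in the first principal component become the (up to) 8 PCP axes.
import numpy as np
from sklearn.decomposition import PCA

def pcp_axes(selected_points: np.ndarray, dim_names, max_axes: int = 8):
    pca = PCA(n_components=1).fit(selected_points)   # eigenvector with the largest eigenvalue
    weights = np.abs(pca.components_[0])             # importance of each original dimension
    order = np.argsort(weights)[::-1][:max_axes]     # descending order, keep up to 8 dimensions
    return [dim_names[j] for j in order]

rng = np.random.default_rng(1)
X_sel = rng.normal(size=(40, 12))                    # a lasso selection with 12 dimensions
print(pcp_axes(X_sel, [f"dim{j}" for j in range(12)]))
```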
B
The correct design of a bio-inspired algorithm involves the execution of a series of steps in a conscientious and organized manner, both at the time of algorithm development and during subsequent experimentation and application to real-world optimization problems. In [5], a complete tutorial on the design of new bio-inspired algorithms is presented, and in this work, we make a brief introduction to the phases that are necessary for quality research.
In such work, an analysis is conducted from a critical yet constructive point of view, aiming to correct misconceptions and bad methodological habits. Each phase of the analysis includes the prescription of application guidelines and recommendations intended for adoption by the community. These guidelines are intended to promote actionable metaheuristics designed and tested in a principled manner, to achieve valuable research results and ensure their practical use in real-world applications.
The correct design of a bio-inspired algorithm involves the execution of a series of steps in a conscientious and organized manner, both at the time of algorithm development and during subsequent experimentation and application to real-world optimization problems. In [5], a complete tutorial on the design of new bio-inspired algorithms is presented, and in this work, we make a brief introduction to the phases that are necessary for quality research.
The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy based on the behavior of the algorithm. In Section 5, we analyze similarities and differences found between both taxonomies, ultimately identifying the most influential algorithms in our reviewed papers. In Section 6, we report several lessons learned and recommendations as the result of the previous analysis. In addition, as novel contributions of this version over its preceding ones, Section 7 provides an extended critical analysis of the state of the art in the field, highlighting the aforementioned good, the bad, and the ugly in the metaheuristic landscape [2]. In Section 8, we discuss future directions in bio-inspired optimization algorithms, and prescribe potential solutions and analysis toward ensuring good practices and correct experimental procedures with these algorithms. Section 9 shows studies and guidelines for good practices, together with recent studies including taxonomies, overviews, and general approaches related to metaheuristics. Finally, in Section 10, we summarize our current main conclusions and reflections on the field, which build upon a five-year reflection and literature study.
As we have mentioned in the introduction, we revisit evolutionary and bio-inspired algorithms from a triple perspective: where we stand and what is next (a perspective published in 2020, but still valid in terms of the need to address important problems and challenges in optimization for EAs and population-based optimization models), a prescription of methodological guidelines for comparing bio-inspired optimization algorithms, and a tutorial on the design, experimentation, and application of metaheuristic algorithms to real-world optimization problems.
A
Figure 1: Framework of AdaGAE. $k_0$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity, $k$. With the new graph, we re-train the GAE. These steps are repeated until convergence.
In recent years, GCNs have been studied a lot to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted a great deal of attention. Most of these methods can be classified into two categories: spectral methods [24] and spatial methods [25].
However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend graph auto-encoder into common scenarios. The main contributions are listed as follows:
Along with the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information and are thus applicable to non-Euclidean data, which is not the case for $k$-means. Therefore, they are widely used in practice. Due to the success of deep learning, how to combine neural networks and traditional clustering models has been studied a lot [7, 8, 9]. In particular, CNN-based clustering models have been extensively investigated [10, 11, 12]. However, the convolution operation may be unavailable on other kinds of datasets, e.g., text, social networks, signals, data mining, etc.
(1) Via extending the generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for decoders. (2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it.
B
Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing, their importance is immense, as they facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studies using these infrastructures, e.g., (Huz et al., 2015; Luckie et al., 2019), point out the problems with the representativeness of the collected data of the larger Internet. Both projects (the Spoofer and the Open Resolver) acknowledged the need to increase the coverage of the measurements, as well as the challenges for obtaining better coverage and stable vantage points.
Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 2018), or by identifying spoofed packets using offline analysis of traffic, e.g., (Lone et al., 2017; Luckie et al., 2019). The need to install agents on networks or the ability to obtain traces only from some networks limits the studies to non-uniform coverage of the Internet. Therefore it is not clear how representative these statistics are. Unfortunately, this limitation to a small set of networks creates a bias in the assessments of the overall number of spoofable networks. The extrapolation from the small set of networks to the entire Internet typically results in the assessment that at least 30% of the Internet networks do not filter spoofed packets (Luckie et al., 2019; Man et al., 2020). As we show, the number of spoofable networks is above 72%, which is significantly higher than what was previously believed.
Network Traces. To overcome the dependency on vantage points for running the tests, researchers explored alternatives for inferring filtering of spoofed packets. A recent work used loops in traceroute to infer the ability to send packets from spoofed IP addresses (Lone et al., 2017).
(Lichtblau et al., 2017) developed a methodology to passively detect spoofed packets in traces recorded at a European IXP connecting 700 networks. The limitation of this approach is that it requires cooperation of the IXP to perform the analysis over the traffic and applies only to networks connected to the IXP. While it allows identifying spoofing that de facto took place, the approach proposed in (Lichtblau et al., 2017) misses out on the networks which do not enforce filtering but which did not receive packets from spoofed IP addresses (at least during the time frame in which the traces were collected).
Vantage Points. Measurement of networks which do not perform egress filtering of packets with spoofed IP addresses was first presented by the Spoofer Project in 2005 (Beverly and Bauer, 2005). The idea behind the Spoofer Project is to craft packets with spoofed IP addresses and check receipt thereof on the vantage points operated by the volunteers, i.e., participants who run a “spoofer” software provided by the authors. Based on the data collected by the Spoofer Project many reports were published providing statistics on the deployment of egress filtering in the Internet (Beverly et al., 2009, 2013; Lone et al., 2018; Luckie et al., 2019); we list the statistics in Table 1.
B
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a broad range of odors. The arrangement is called an artificial nose since it resembles the multiplicity of sensory neuron types in the nasal epithelium. However, while metal oxide-based sensors are economical and flexible, they are unstable over time. Changes to the response properties of sensors make it difficult to detect and identify odors in the long term, and sensors have to be recalibrated to compensate [3]. Recalibration requires collecting and labeling new samples, which is costly because a skilled operator is needed, and challenging because the experimental conditions need to be controlled precisely [3]. Recalibrating a model with unlabeled examples, called semisupervised learning, is a possible alternative but difficult to establish in practice.
More specifically, natural odors consist of complex and variable mixtures of molecules present at variable concentrations [4]. Sensor variance arises from environmental dynamics of temperature, humidity, and background chemicals, all contributing to concept drift [5], as well as sensor drift arising from modification of the sensing device. The hard problem of olfaction in nature calls for the learning of new odor associations [6]. In an attempt to capture much of this complexity, Vergara et al. [7] developed a publicly available benchmark dataset demonstrating sensor drift over a period of 36 months. This dataset offers a controlled testbed for sensor drift mitigation algorithms and thus defines the scope of this paper.
An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
While natural systems cope well with changing environments and embodiments, such changes pose a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to remain accurate in a changing physical environment. Drawing motivation from nature, this paper introduced an approach based on continual adaptation. A recurrent neural network uses a sequence of previously seen gas recordings to form a representation of the current state of the sensors. It then modulates the skill of odor recognition with this context, allowing the system to adapt to sensor drift. Context models can thus play a useful role in lifelong adaptation to changing environments in artificial systems.
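To make the mechanism concrete, the following is a minimal PyTorch sketch of a context-modulated classifier in the spirit of the description above; the layer sizes, the GRU choice, and the multiplicative gating are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ContextualOdorClassifier(nn.Module):
    """Sketch: a recurrent network summarizes recent recordings into a context
    vector, which then modulates a feed-forward odor classifier."""
    def __init__(self, n_features=128, n_classes=6, context_dim=32, hidden=64):
        super().__init__()
        self.context_rnn = nn.GRU(n_features, context_dim, batch_first=True)
        self.gate = nn.Linear(context_dim, hidden)   # context -> multiplicative gate
        self.encoder = nn.Linear(n_features, hidden)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, history, current):
        # history: (batch, time, n_features) previously seen gas recordings
        # current: (batch, n_features) sample to classify under the current drift
        _, h = self.context_rnn(history)             # h: (1, batch, context_dim)
        gate = torch.sigmoid(self.gate(h.squeeze(0)))
        features = torch.relu(self.encoder(current)) * gate
        return self.classifier(features)
```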
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal to deploy an artificial nose in a dynamic environment without recalibration.
B
Now we can define the tables $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ that our algorithm uses. Recall that for Algorithm 1, we used
$A^{(2)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B \in \mathcal{B}_i^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$.
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$.
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B \in \mathcal{B}_i$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$.
$A^{(1)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B \in \mathcal{B}_i^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$.
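As a reading aid only, the table entries described above can be pictured as a mapping from (index, boundary set) to a small dictionary of representative (matching, length) pairs; the Python names below are hypothetical and the sketch omits the actual representative-set computation.

```python
from typing import Dict, FrozenSet, Tuple

Matching = FrozenSet[Tuple[int, int]]   # a perfect matching on the boundary set B
Entry = Dict[Matching, float]           # matching -> min total path-cover length

# A[(i, B)] plays the role of the tables A, A^(1), A^(2) described above.
A: Dict[Tuple[int, FrozenSet[int]], Entry] = {}

def update(i: int, B: FrozenSet[int], M: Matching, length: float) -> None:
    """Keep only the cheapest length seen so far for each matching M on (i, B)."""
    entry = A.setdefault((i, B), {})
    if M not in entry or length < entry[M]:
        entry[M] = length
```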
C
Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S \star T$ is (completely) self-similar. If furthermore $S$ is a (complete) automaton semigroup, then so is $S \star T$.
While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there is a different construction for the free product $S \star T$ of two self-similar or automaton semigroups without the requirement of a homomorphism from one to the other, and it is also possible that there is a pair of self-similar (or automaton) semigroups such that $S \star T$ is not a self-similar (or an automaton) semigroup. In this case, however, no homomorphism $S \to T$ or $T \to S$ can exist. Thus, to make progress in either direction (towards a better construction or towards a counter-example), we need to look at pairs $S, T$ of self-similar (or even automaton) semigroups without a homomorphism from one to the other. However, it turns out that finding such a pair is not easy. In particular, neither $S$ nor $T$ may contain an idempotent. Thus, we have to consider idempotent-free semigroups here. We will show, however, that we cannot find a pair of such semigroups in the class of finitely generated simple semigroups. More precisely, using results by Jones on idempotent-free semigroups [11], we show that finitely generated simple (or $0$-simple) idempotent-free semigroups are not residually finite (Theorem 21) and, thus, not self-similar (and, in particular, not automaton semigroups; 22). We then conclude the paper with an example (the authors would like to thank Emanuele Rodaro for his help in finding this example) of a finitely generated residually finite semigroup (23) which has no homomorphism to its opposite semigroup (25). While this comes close to the sought pair $S, T$, it is not clear whether the given semigroup is self-similar (26).
By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a new construction), or provide a pair $S, T$ of self-similar semigroups such that $S \star T$ is not self-similar. It turns out that we can reduce the class of potential candidates even further: we will show next that no finitely generated simple or $0$-simple idempotent-free semigroup is self-similar (and, thus, that no simple or $0$-simple idempotent-free semigroup is an automaton semigroup).
from one to the other, then their free product $S \star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (Theorem 6); note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for $S$ and for $T$ are finite, so these constructions also work for self-similar semigroups, although this is not explicitly stated there. We observe that the constructed generating automaton for $S \star T$ is finite (and/or complete) if this was the case for the original two automata generating $S$ and $T$. The existence of a homomorphism from $S$ to $T$ (or vice-versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or being an automaton semigroup; Theorem 14). This is noteworthy because – as mentioned above – the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata, see [8, Theorem 19] and [20, Theorem 1.2.1.4]).
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing the self-similarity property and that the analogous statement for automaton semigroups holds as well. The version for automaton semigroups does not follow directly from 8, as the free monogenic semigroup is not a complete automaton semigroup [4, Proposition 4.3] or even a (partial) automaton semigroup (see [8, Theorem 18] or [20, Theorem 1.2.1.4]).
B
SCR divides the region proposals into influential and non-influential regions and penalizes the model if: 1) $\mathcal{S}(a_{gt})$ of a non-influential region is higher than an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers.
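A schematic PyTorch rendering of those two penalties might look as follows; the margin, the tensor shapes, and the way the sensitivities $\mathcal{S}(\cdot)$ are obtained are assumptions for illustration, not the exact SCR formulation.

```python
import torch

def scr_style_loss(sens_gt_influential, sens_gt_noninfluential,
                   sens_wrong_influential, margin=0.0):
    """Schematic version of the two penalties described above.
    sens_gt_influential:    S(a_gt) on influential proposals       (batch, k1)
    sens_gt_noninfluential: S(a_gt) on non-influential proposals   (batch, k2)
    sens_wrong_influential: S(a_wrong) on the most influential region (batch,)
    """
    # 1) non-influential regions should not be more sensitive than influential ones
    gap = sens_gt_noninfluential.unsqueeze(2) - sens_gt_influential.unsqueeze(1)
    loss1 = torch.clamp(gap + margin, min=0).mean()
    # 2) the most influential region should favor the correct answer over wrong ones
    best_influential = sens_gt_influential.max(dim=1).values
    loss2 = torch.clamp(sens_wrong_influential - best_influential + margin, min=0).mean()
    return loss1 + loss2
```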
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we compare against the improvements on VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then the performance on VQAv2 drops continuously during the course of training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.
We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to 1–100% of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance occurring when our loss is applied to 1% of the training instances. These results support our claims that it is possible to improve performance without actually performing visual grounding.
We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating the performance on VQA-CPv2’s train split (Sec. 4.5) and the behavior on a dataset without changing priors (Sec. 2). We present a new metric to assess visual grounding in Sec. 4.7 and describe our regularization method in Sec. 5.
D
A privacy policy is a legal document that an organisation uses to disclose how they collect, analyze, share, and protect users’ personal information. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users, and laws such as General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) place specific expectations upon privacy policies. However, although many internet users have concerns about their privacy Madden (2017), most fail to understand privacy policies Meiselwitz (2013). Studies show that privacy policies require a considerable investment in time to read Obar and Oeldorf-Hirsch (2018) and estimate that it would require approximately 200 hours to read all the privacy policies that an average person would come across every year McDonald and Cranor (2008).
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application from the Google Play Store, legal experts were recruited to identify relevant evidence within respective privacy policies that answered the question asked by the crowdworkers. The goal of the question answering task is to identify a set of sentences in the privacy policy that has information relevant to the question. Ravichander et al. (2019) divided the corpus into 1,350 questions for training and validation and 400 questions for testing where each question in the test set is annotated by at least three experts. We fine-tuned PrivBERT on the training set as a binary classification task on each question-answer sentence pair to identify if the sentence is evidence for the question or not. We trained the model with a dropout of 0.2 and a learning rate of 3e-6 with the positive and negative classes weighted in the ratio 8:1 during training. We used sentence level F1 as the evaluation metric as described by Ravichander et al. (2019), where precision and recall are calculated by measuring the overlap between the predicted sentences and gold standard sentences.
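A minimal sketch of such a pair-classification fine-tuning step is shown below, assuming a RoBERTa-style checkpoint stands in for PrivBERT and that the 8:1 weighting maps the evidence class to weight 8; the checkpoint name, class ordering, and single-example batching are illustrative assumptions.

```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "roberta-base" is a stand-in; the PrivBERT checkpoint would be substituted here.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2, hidden_dropout_prob=0.2)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)
# positive (evidence) vs. negative class weighted 8:1, as described above
loss_fn = CrossEntropyLoss(weight=torch.tensor([1.0, 8.0]))

def training_step(question: str, sentence: str, label: int) -> float:
    enc = tokenizer(question, sentence, truncation=True,
                    padding=True, return_tensors="pt")
    logits = model(**enc).logits                 # shape (1, 2)
    loss = loss_fn(logits, torch.tensor([label]))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```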
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categories. The corpus was used to train models to extract opt-out choices from privacy policies (Sathyendra et al., 2016), to automatically identify policies on websites and find compliance issues (Story et al., 2019), and to classify privacy practices and answer privacy related non-factoid questions (Harkous et al., 2018).
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague words and sentences in privacy policies and studied automatic vagueness detection. Sathyendra et al. (2017) presented a dataset and developed a model to automatically identify and label opt-out choices offered in privacy policies. Similarly, Zimmeck et al. (2019) released a set of over 400k URLs to Android app privacy policy pages collected by crawling the Google Play store. Amos et al. (2020) collected privacy policies from around 130,000 websites from over two decades and analysed the evolution of the online privacy landscape. Finally, Nokhbeh Zaeem and Barber (2021) collected a corpus of around 100k privacy policies using the domains from DMOZ, a website which maintained categories of websites on the internet.
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert annotated corpora of a few hundred or a few thousand privacy policies Wilson et al. (2016); Zimmeck et al. (2019); Ramanath et al. (2014), but issues of accuracy, scalability and generalization remain. More importantly, annotations in the privacy policy domain are expensive. Privacy policies are difficult to understand and many tasks such as privacy practice classification (Wilson et al., 2016), privacy question answering (Ravichander et al., 2019), vague sentence detection (Lebanoff and Liu, 2018), and detection of compliance issues (Zimmeck et al., 2019) require skilled legal experts to annotate the dataset. In contrast, approaches involving large amounts of unlabeled privacy policies remain relatively unexplored.
D
We answered that the per-class performance is also a very important component, and that exploratory visualization can assist in the selection process, as seen in Figure 2(b and c.1). The expert acknowledged the importance of using visualization in that situation, compared to not using it.
Interpretability and explainability are another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the results of stacking ensembles. This is an advantage identified by E2. In the first use case we presented to him, he noted that: “if you are interested in the fairness of the results, you could show with the history preservation view of the system how you reached to these predictions without removing the age or sex features, consequently, not leading to discrimination against patients, for example”. The visual exploration of stacking methods that use multiple layers [28] mentioned by E1 is set as another future work goal.
Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense. They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data.
Figure 4: Our feature selection view that provides three different feature selection techniques. The y-axis of the table heatmap depicts the data set’s features, and the x-axis depicts the selected models in the currently stored stack. Univariate-, permutation-, and accuracy-based feature selection are available, along with any combination of them (a). (b) displays the normalized importance color legend. The per-model feature accuracy is depicted in (c), and (d) presents the user’s interaction to disable specific features for all the models (only seven features are shown here). This could also happen on an individual basis for every model.
Another positive opinion from E3 was that, with a few adaptations to the performance metrics, StackGenVis could work with regression or even ranking problems. E3 also mentioned that supporting feature generation in the feature selection phase might be helpful. Finally, E1 suggested that the circular barcharts could only show the positive or negative difference compared to the first stored stack. To avoid an asymmetric design and retain a lower complexity level for StackGenVis, we omitted his proposal for the time being, but we consider implementing both methods in the future.
D
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]), p(v,[323]), p(v,[313]), p(v,[003]))$:
$p(v,[013]) = p(v,[313]) = p(v,[113]) = 1$. Similarly, when $f = [112]$,
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
$\{\overline{0}, \overline{1}, \overline{2}, \overline{3}, [013], [010], [323], [313], [112], [003], [113]\}$.
C
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/5/10 tasks for meta-training/meta-validation/meta-testing.
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personalized dialogue dataset collected from Weibo conversations with 371/40/38 users for meta-training/meta-validation/meta-testing. Each user has 1200 utterances on average.
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/5/10 tasks for meta-training/meta-validation/meta-testing.
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances on average in Persona and Weibo respectively. We train and evaluate Transformer-F and MAML in this setting (Table 2). When tasks are similar to each other, MAML performs comparatively poorly. In Persona and Weibo, the performance of MAML is similar to that of Transformer-F, while MAML performs significantly better than Transformer-F when tasks are different. A possible explanation is that if there is no clear distinction between tasks, the meta-learning setting can be viewed as a transfer learning setting, which only has a source domain and a target domain, and fine-tuning performs well in transfer learning. So if the tasks are similar to each other, we can simply use Transformer-F rather than MAML.
In meta-learning, we have multiple tasks $T$ sampled from a distribution $p(\mathcal{T})$ [Ravi and Larochelle, 2017, Andrychowicz et al., 2016, Santoro et al., 2016]. For each task $T_i$, we train a base model $f_i^{\theta}$ with parameters $\theta_i$ on its training corpus $D_i^{train}$, which only has a few samples, and evaluate the model on the testing corpus $D_i^{valid}$. We divide the tasks into meta-training, meta-validation, and meta-testing. The goal of meta-learning is that after training on meta-training, we can quickly find $f_i^{\theta}$ via fine-tuning (adaptation) with $D_i^{train}$ for each task $T_i$ in meta-testing.
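The train/adapt/evaluate loop above can be sketched as a first-order MAML-style meta-update; the `task.loss_on(split, model)` interface is a hypothetical stand-in, and the second-order terms of full MAML are deliberately omitted.

```python
import copy
import torch

def first_order_maml_step(model, tasks, inner_lr=1e-3, outer_lr=1e-4, inner_steps=1):
    """One meta-update over a batch of tasks (first-order approximation)."""
    meta_opt = torch.optim.SGD(model.parameters(), lr=outer_lr)
    meta_opt.zero_grad()
    for task in tasks:
        adapted = copy.deepcopy(model)                   # theta_i starts from theta
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                     # adapt on D_i^train
            inner_opt.zero_grad()
            task.loss_on("train", adapted).backward()
            inner_opt.step()
        inner_opt.zero_grad()                            # clear adaptation gradients
        task.loss_on("valid", adapted).backward()        # evaluate on D_i^valid
        # first-order MAML: copy the adapted gradients back onto the meta-parameters
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.grad = q.grad.clone() if p.grad is None else p.grad + q.grad
    meta_opt.step()
```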
A
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Section IV. Simulation results are given in Section V, and finally Section VI concludes this paper.
In addition, the AOAs and AODs should be tracked in the highly dynamic UAV mmWave network. To this end, in Section IV we will further propose a novel predictive AOA/AOD tracking scheme in conjunction with tracking error treatment to address the high mobility challenge, then we integrate these operations into the codebook-based SPAS to achieve reliable beam-tracking for the considered UAV mmWave network.
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing a new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-based beam training and tracking schemes have been proposed for conventional mmWave networks with uniform ULA and UPA [22, 23]. These prior works inspire us to propose a specialized new codebook design and the corresponding codeword selection/processing strategy that can drive the CCA to achieve fast beam tracking in the highly dynamic UAV mmWave network. To this end, the properties of the CCA should be exploited in the design of the codebook, which are briefly discussed as follows.
A CCA-enabled UAV mmWave network is considered in this paper. Here, we first establish the DRE-covered CCA model in Section II-A. Then the system setup of the considered UAV mmWave network is described in Section II-B. Finally, the beam tracking problem for the CA-enabled UAV mmWave network is modeled in Section II-C.
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Section IV. Simulation results are given in Section V, and finally Section VI concludes this paper.
C
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
The case of 1-color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging.
After the merging, the total degree of each vertex increases by $t\delta(A_0,B_0)^2$. We perform the “edge swapping” to get rid of the parallel edges without affecting the degree of each vertex.
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
D
Deep reinforcement learning achieves phenomenal empirical successes, especially in challenging applications where an agent acts upon rich observations, e.g., images and texts. Examples include video gaming (Mnih et al., 2015), visuomotor manipulation (Levine et al., 2016), and language generation (He et al., 2015). Such empirical successes are empowered by expressive nonlinear function approximators such as neural networks, which are used to parameterize both policies (actors) and value functions (critics) (Konda and Tsitsiklis, 2000). In particular, the neural network learned from interacting with the environment induces a data-dependent feature representation, which embeds rich observations into a latent space encoding semantic structures (Hinton, 1986; Bengio, 2012; Yosinski et al., 2014; LeCun et al., 2015). In contrast, classical reinforcement learning mostly relies on a handcrafted feature representation that is fixed throughout learning (Sutton and Barto, 2018).
Moreover, soft Q-learning is equivalent to a variant of policy gradient (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). Hence, Proposition 6.4 also characterizes the global optimality and convergence of such a variant of policy gradient.
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). In particular, we aim to characterize how an overparameterized two-layer neural network and its induced feature representation evolve in TD and Q-learning, especially their rate of convergence and global optimality. A fundamental obstacle, however, is that such an evolving feature representation possibly leads to the divergence of TD and Q-learning. For example, TD converges when the value function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997).
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal.
B
Regarding parameter efficiency for NMT, Wu et al. (2019a) present lightweight and dynamic convolutions. Ma et al. (2021) approximate softmax attention with two nested linear attention functions. These methods are orthogonal to our work and it should be possible to combine them with our approach.
In this paper, we replace residual connections of the Transformer with depth-wise LSTMs, to selectively manage the representation aggregation of layers benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-forward networks with the depth-wise LSTM for the Transformer.
We use depth-wise LSTM rather than a depth-wise multi-head attention network Dou et al. (2018) with which we can build the NMT model solely based on the attention mechanism for two reasons: 1) we have to compute the stacking of Transformer layers sequentially as in sequential token-by-token decoding, and compared to the use of depth-wise LSTM of $O(n)$ complexity, depth-wise multi-head attention networks suffer from $O(n^2)$ complexity and they cannot be parallelized at the depth level. 2) the attention mechanism linearly combines representations with attention weights. Thus, it lacks the ability to provide the non-linearity compared to the LSTM, which we suggest is important.
We suggest that selectively aggregating different layer representations of the Transformer may improve the performance, and propose to use depth-wise LSTMs to connect stacked (sub-) layers of Transformers. We show how Transformer layer normalization and feed-forward sub-layers can be absorbed by depth-wise LSTMs, while connecting pure Transformer attention layers by depth-wise LSTMs (for Transformer encoder and decoder blocks), replacing residual connections.
Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the newly introduced LSTM unit, which only introduces one LSTM unit per layer, and the parameters of the LSTM can be shared across layers.
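As a rough illustration of the idea (not the exact integration with layer normalization and the feed-forward sub-layer described in the paper), the sketch below connects stacked attention layers through a single shared LSTM cell that runs across depth rather than across time; the dimensions, the initialization of the depth-wise states, and the sharing choice are assumptions.

```python
import torch
import torch.nn as nn

class DepthWiseLSTMEncoder(nn.Module):
    """Sketch: sub-layer outputs are aggregated across depth by an LSTMCell
    instead of residual additions; the cell is shared across layers."""
    def __init__(self, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        self.attn_layers = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_layers))
        self.depth_cell = nn.LSTMCell(d_model, d_model)  # replaces residual + FFN

    def forward(self, x):
        b, t, d = x.shape
        h = x.reshape(b * t, d)                  # depth-wise hidden state (assumed init)
        c = torch.zeros_like(h)                  # depth-wise cell state
        for attn in self.attn_layers:
            y, _ = attn(x, x, x)                 # attention output at this depth
            h, c = self.depth_cell(y.reshape(b * t, d), (h, c))
            x = h.reshape(b, t, d)               # feeds the next layer
        return x
```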
A
define compact sets in $X$ for the topology generated by $\mathcal{L}^{\prime}$. We usually instantiate the theorem with $X \subseteq \operatorname{Struct}(\upsigma)$, $\mathcal{L} = \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X}$
$\mathcal{O} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X} = \llbracket\mathsf{F}\rrbracket_{X}$.
instantiated with $\mathcal{L} = \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X}$ and $\mathcal{L}^{\prime} = \llbracket\mathsf{F}\rrbracket_{X}$ for a fragment $\mathsf{F}$ of $\mathsf{FO}[\upsigma]$.
and $\mathcal{L}^{\prime} = \llbracket\mathsf{F}\rrbracket_{X}$ where $\mathsf{F}$ is a fragment of $\mathsf{FO}[\upsigma]$.
that $\llbracket\mathsf{F}\rrbracket_{X}$ is a base of $\langle\uptau_{\leq} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X}\rangle$.
C
Relationship to Distortion Distribution: We first emphasize the relationship between the two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and relate the errors of the estimated results to the distortion reprojection error. As shown in Fig. 5, we visualize the scatter diagram of the two learning representations using 1,000 test distorted images. For the distortion parameter, its relationship to the distortion distribution is ambiguous and similar parameter errors are related to quite different reprojection errors, which indicates that optimizing the parameter error would confuse the learning of neural networks. In contrast, the ordinal distortion error displays an evident positive correlation to the distortion distribution error, and thus the learning model gains intuitive distortion perception. Therefore, the proposed representation helps to decrease the error of distortion estimation.
Relationship to Distortion Distribution: We first emphasize the relationship between the two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and relate the errors of the estimated results to the distortion reprojection error. As shown in Fig. 5, we visualize the scatter diagram of the two learning representations using 1,000 test distorted images. For the distortion parameter, its relationship to the distortion distribution is ambiguous and similar parameter errors are related to quite different reprojection errors, which indicates that optimizing the parameter error would confuse the learning of neural networks. In contrast, the ordinal distortion error displays an evident positive correlation to the distortion distribution error, and thus the learning model gains intuitive distortion perception. Therefore, the proposed representation helps to decrease the error of distortion estimation.
To exhibit the performance fairly, we employ three common network architectures VGG16, ResNet50, and InceptionV3 as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement for distortion distribution. To be specific, we visualize the error and convergence epoch when estimating two representations under the same number of training data in Fig. 6, which is sampled with 20%, 40%, 60%, 80%, and 100% from the entire training data. Besides, the training and validation loss curves of two learning representations are shown in Fig. 7, in which the distortion parameters are processed without (top) and with (middle) the normalization of magnitude. From these learning evaluations, we can observe:
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed out earlier, the proposed ordinal distortion is explicit to the image feature and is observable from a distorted image; thus it boosts the neural networks’ learning ability. On the other hand, the performance of the distortion parameter estimation drops as the amount of training data decreases. In contrast, our ordinal distortion estimation performs more consistently due to the homogeneity of the learning representation.
Distortion Learning Evaluation: Then, we introduce three key elements for evaluating the learning representation: training data, convergence, and error. Assuming that settings such as the network architecture and optimizer are the same, a better learning representation can be characterized by requiring less training data, converging faster, and reaching a lower error. For example, a student is able to achieve the highest test grade (the lowest error) with the fastest learning speed and the least homework, meaning that he grasps the best learning strategy compared with other students. In terms of the above description, we evaluate the distortion parameter and ordinal distortion as shown in Fig. 6 and Fig. 7.
D
Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models. Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate for each iteration, which requires more training time and computing resources.
First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8 \geq 128$, we will use the gradient accumulation [28] with the batch size being 128. We train the model with 160 epochs (i.e., pass through the dataset 160 times). The cosine annealing learning rate [24] (without restarts) is adopted for the five methods. In the $m$-th epoch, the learning rate is $\eta_m = \eta_0 \cdot 0.5(1 + \cos(m\pi/160))$, $m = 0, 1, \ldots, 159$.
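For reference, the schedule above amounts to the following small helper (a straightforward transcription; the base learning rate 0.1 in the usage line is only an example value):

```python
import math

def cosine_lr(eta0: float, epoch: int, total_epochs: int = 160) -> float:
    """Cosine annealing without restarts: eta_m = eta0 * 0.5 * (1 + cos(m*pi/160))."""
    return eta0 * 0.5 * (1.0 + math.cos(epoch * math.pi / total_epochs))

# example: the per-epoch learning rates for a run with eta0 = 0.1
lrs = [cosine_lr(0.1, m) for m in range(160)]
```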
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy.
Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the $\epsilon$-stationary point. Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods.
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
B
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and is of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unknown but we have the ability to sample or simulate from the distribution. To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic $k$-center variant, where points arrive independently and each point only needs to get covered with some given probability. 2S-Sup is the natural two-stage counterpart of the well-known Knapsack-Supplier problem, which has a well-known 3-approximation [14].
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions. To continue this example, there may be further constraints on $F_I$, irrespective of the stage-II decisions, which cannot be directly reduced to the budget $B$. For instance, there may be a limited number of personnel available prior to the disease outbreak, assuming that facility $i$ requires $f_i$ people to keep it operational during the waiting period. (These additional stage-I constraints have not been previously considered in the two-stage stochastic regime.)
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, $\mathcal{F}$ and $\mathcal{C}$ correspond to such locations and the population affected by the outbreak, and needing services, respectively.
We are given a set of clients $\mathcal{C}$ and a set of facilities $\mathcal{F}$, in a metric space with a distance function $d$. We let $n = |\mathcal{C}|$ and $m = |\mathcal{F}|$. Our paradigm unfolds in two stages. First, in stage-I, each $i \in \mathcal{F}$ has a cost $c^{I}_{i}$, but at that time we do not know which clients from $\mathcal{C}$ will need service. At this point, we can proactively open a set of facilities $F_I$. After committing to $F_I$, a scenario $A$ is sampled from some underlying distribution $\mathcal{D}$, which specifies some subset of clients $\mathcal{C}^{A}$ needing service; each $i \in \mathcal{F}$ now has a cost $c^{A}_{i}$ (which may vary across scenarios $A \in \mathcal{D}$). When this scenario $A$ arrives we can augment the solution by opening some additional stage-II facilities $F_A$.
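To make the two-stage setup concrete, a sampling-based check of a candidate solution could look like the sketch below; the `two_stage_solution` and scenario objects are hypothetical stand-ins, and the radius/cost bookkeeping simply mirrors the definitions above.

```python
def evaluate(two_stage_solution, scenarios, dist, cost_I, cost_II):
    """For each sampled scenario A, open F_I together with the recourse set F_A,
    and record the worst client distance (radius) and the combined opening cost."""
    F_I = two_stage_solution.stage_one                 # facilities opened up front
    results = []
    for A in scenarios:                                # A.clients, A.id are assumed fields
        F_A = two_stage_solution.stage_two(A)          # stage-II facilities for scenario A
        open_facilities = F_I | F_A
        radius = max(min(dist[c][f] for f in open_facilities) for c in A.clients)
        total_cost = (sum(cost_I[f] for f in F_I)
                      + sum(cost_II[A.id][f] for f in F_A))
        results.append((radius, total_cost))
    return results
```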
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$ as small as possible. Indeed, one of the major contributions of this work is to show that effective bounds on $|\mathcal{S}|$ are possible for sophisticated approximation algorithms using complex LP rounding.
B
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed. In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost
Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d. In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments depend on the states of the local optimizers. The random graph sequences in [12]-[15] are i.i.d. with connected and undirected mean graphs. In addition, additive communication noises are considered in [14]-[15].
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows.
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than as i.i.d. graph sequences as in [12]-[15], and additive and multiplicative communication noises may co-exist in communication links ([21]).
However, a variety of random factors may co-exist in practical environment. In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the distributed optimization with multiple uncertain factors ([11]-[15]).
C
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or mostly identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, which can be used to re-identify the record owner when taken together; and (3) Sensitive Attribute (SA), such as salary and disease, which contains the confidential information of individuals. According to the work of Sweeney [31], even with all EI attributes being removed, the record owners can still be re-identified by matching the combination of QI values.
Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar tuples to cover for each other at the minimal cost. Finally, MuCo generates anonymized microdata by replacing the original QI values with random values according to the random output tables. For instance, for the original table in Figure 1(a), MuCo partitions the records into four groups and calculates random output tables on age as shown in Figure 3. In the random output tables, the rows correspond to the records, and the columns correspond to the ranges of age values. Every entry value denotes the probability that the record carries the column value in the anonymized table. For example, we can observe that Helen is covered with Daphne and Dean, and her age outputs 28 with a probability of 0.7129 and outputs 29 with a probability of 0.2871. Then, MuCo generates an anonymized table in which the original QI values are replaced by the random values according to the random output tables.
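The last step described above (drawing the published value from a record's row of the random output table) is straightforward to picture; the helper below is an illustrative sketch that uses the probabilities quoted for Helen, not part of the MuCo implementation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def anonymize_age(candidate_values, probabilities):
    """candidate_values: possible output ages for this record (table columns);
    probabilities: the record's row in the random output table (sums to 1)."""
    return rng.choice(candidate_values, p=probabilities)

# e.g. Helen's row: output 28 with probability 0.7129, 29 with probability 0.2871
helen_age = anonymize_age([28, 29], [0.7129, 0.2871])
```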
Generalization [8, 26] is one of the most widely used privacy-preserving techniques. It transforms the values on QI attributes into general forms, and the tuples with equally generalized values constitute an equivalence group. In this way, records in the same equivalence group are indistinguishable. $k$-Anonymity [31, 28] ensures that the probability of identity disclosure is at most $1/k$. For instance, Figure 1(b) is a generalized table of Figure 1(a) that complies with 2-anonymity, and the adversary has to acquire at least two different tuples by matching the age value of any person.
Although the generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to the attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by matching his age without re-identifying his exact record. To prevent such disclosure, many effective principles have been proposed, such as $l$-diversity [23] and $t$-closeness [19]. For example, Figure 1(c) is the generalized version of Figure 1(a) complying with 5-diversity, such that the proportion of each sensitive value inside the equivalence group is no more than $1/5$. Thus, for any individual, the adversary has to obtain at least five different sensitive values by matching the age value.
However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in the equivalence groups, so the equivalence groups preserve only the ranges of QI values and the number of records. Consequently, the distributions of QI values are hardly maintained and the information utility is reduced significantly. For instance, as shown in Figure 2, the red polyline and the magenta polyline represent the distributions on age in Figure 1(a) and Figure 1(c), respectively. We can observe that the original distribution is barely preserved in the generalized table. On the other hand, the partition of equivalence groups also increases the information loss of the anonymized table because the results of query statements are always the matching equivalence groups rather than the specific matching tuples. For example, if we want to select the tuples whose age values are more than 30 in Figure 1(c), both equivalence groups are considered as the results.
B
Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from the default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as a large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from the original 14 to 26 for the coarse prediction head, and from 14 to 24 for the fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. The P6 level of FPN is also added for both the coarse prediction head and the fine-grained point head, which finally yields 74.3 mAP on our split validation set. Other tricks we tried on PointRend give little improvement, including the MaskScoring head, GC Block and DoubleHead Wu et al. (2020). In the following, we refer to the model in the last row (74.3 mAP) of Table 2 as the PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on the validation and testing sets respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on the validation set. We believe that PointRend’s iterative rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission.
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both the box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with the Res2Net101 backbone yields 74.58 mAP on the validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches have demonstrated outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRend Kirillov et al. (2020). Most of these detectors focus on overall performance on public datasets like COCO, which contains much smaller instances than 3D-FUTURE, while paying less attention to large-object segmentation. As illustrated in Figure 1, the size distribution of bounding boxes in 3D-FUTURE and COCO indicates that the former contains much larger objects while the latter is dominated by smaller instances. Thus, the prominent methods used on COCO, like MaskRCNN He et al. (2017) and HTC, may generate blurry contours for large instances. Their mask heads output segmentation from a small feature size (e.g., $14\times 14$), which is dramatically insufficient to represent large objects. All of this motivates us to segment large instances in a fine-grained and high-quality manner. SOLOv2 builds an efficient single-shot framework with strong performance and dynamically generates predictions with a much larger mask size (e.g., 1/4 the scale of the input size) than HTC. PointRend iteratively renders the output mask over adaptively sampled uncertain points in a coarse-to-fine fashion, which is naturally suitable for generating smooth and fine-grained instance boundaries. In extensive experiments on HTC, SOLOv2 and PointRend, PointRend succeeds in producing finer mask boundaries and outperforms the other methods by a large margin. Our step-by-step modifications to PointRend finally achieve state-of-the-art performance on the 3D-FUTURE dataset, yielding 79.2 mAP and 77.38 mAP on the validation and test sets respectively. The final submission is an ensemble of 5 PointRend models with slightly different settings, reaching 1st place in this competition.
We implement PointRend using MMDetection Chen et al. (2019b) and adopt the modifications and tricks mentioned in Section 3.3. Both X101-64x4d and Res2Net101 Gao et al. (2019) are used as our backbones, pretrained on ImageNet only. SGD with momentum 0.9 and weight decay 1e-4 is adopted. The initial learning rate is set to 0.01 for Res2Net101 and 0.02 for X101-64x4d by default, and decayed by a factor of 0.1 at epoch 32. During training, the batch size is 8 (one image per GPU) and all BN statistics are frozen. Mixed-precision training is used to reduce GPU memory consumption. The input images are randomly resized to $n\times n$, where $n$ is uniformly sampled from the range $[1200, 1400]$. All models are trained for 44 epochs. For inference, images are resized to $1400\times 1400$ and horizontal flip is used.
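A minimal sketch of this schedule in generic PyTorch (placeholder model; the actual MMDetection configuration is not reproduced here):

```python
import random
import torch
import torch.nn as nn

# Placeholder module standing in for the PointRend detector.
model = nn.Conv2d(3, 8, kernel_size=3)

# SGD with momentum 0.9 and weight decay 1e-4; lr 0.01 (Res2Net101 setting),
# decayed by a factor of 0.1 at epoch 32, for 44 epochs in total.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[32], gamma=0.1)

def sample_train_size():
    # Multi-scale training: square size uniformly sampled from [1200, 1400].
    n = random.randint(1200, 1400)
    return (n, n)

for epoch in range(44):
    # ... one training epoch over images resized to sample_train_size(),
    #     with the usual optimizer.zero_grad() / loss.backward() / optimizer.step() ...
    scheduler.step()
```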
Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020), except that we extract both coarse and fine-grained features from the P2-P5 levels of FPN rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from the default 28 to 70 during inference, we gain another 1.1 mAP at no additional training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as a large backbone and boosts mAP by 6 points over ResNet50. DCN and More Points Train. We adopt more interpolated points during training, increasing the number of sampled points from the original 14 to 26 for the coarse prediction head, and from 14 to 24 for the fine-grained point head. By further adopting DCN Dai et al. (2017), we reach 71.6 mAP, which already outperforms HTC and SOLOv2 in our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and lower memory consumption compared to HTC, the input resolution can be further increased from the range [800,1000] to [1200,1400] during multi-scale training. The P6 level of FPN is also added for both the coarse prediction head and the fine-grained point head, which finally yields 74.3 mAP on our split validation set. Other tricks we tried on PointRend give little improvement, including the MaskScoring head, GC Block and DoubleHead Wu et al. (2020). In the following, we refer to the model in the last row (74.3 mAP) of Table 2 as the PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on the validation and testing sets respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP for small, medium and large sizes respectively on the validation set. We believe that PointRend’s iterative rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we choose only PointRend models as ensemble candidates for the final submission.
C
$$I(f)<1,\qquad\text{and}\qquad H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.$$
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known.
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ sums up to $1$, and thus this is the usual definition of the entropy of this probability distribution.
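As a small numerical illustration of this entropy (hypothetical Fourier weights, not taken from the paper):

```python
import math

def spectral_entropy(fourier_weights):
    """Entropy (base 2) of the distribution {|f_hat(A)|^2}; assumes the
    weights sum to 1, i.e. f has L2 norm 1.  0*log(0) is treated as 0."""
    return -sum(p * math.log2(p) for p in fourier_weights if p > 0)

# Example: weight spread uniformly over 8 Fourier coefficients gives entropy 3.
print(spectral_entropy([1 / 8] * 8))  # 3.0
```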
C
$$\bm{w}_{h}^{k}=\operatorname*{arg\,min}_{\bm{w}}\sum_{l=\tau}^{k-1}\left[r_{h,l}(s_{h}^{l},a_{h}^{l})+\max_{a\in\mathcal{A}}Q_{h+1}^{k-1}(s_{h+1}^{l},a)-\langle\bm{\phi}(s_{h}^{l},a_{h}^{l}),\bm{w}\rangle\right]^{2}+\left\lVert\bm{w}\right\rVert_{2}.$$
In practice, the transition function $\mathbb{P}$ is unknown, and the state space might be so large that it is impossible for the learner to fully explore all states. If we parametrize the action-value function in a linear form as $\langle\bm{\phi}(\cdot,\cdot),\bm{w}\rangle$, it is natural to solve a regularized least-squares problem using the collected data, inspired by classical value iteration. Specifically, the update formula of $\bm{w}_{h}^{k}$ in Algorithm 1 (line 8) is the analytic solution of the following regularized least-squares problem:
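A closed-form sketch of this update in NumPy (we use the standard squared $\ell_2$ ridge penalty, whereas the displayed objective writes the norm without the square; variable names are ours, not the paper's):

```python
import numpy as np

def ridge_q_update(phi, targets, lam=1.0):
    """Solve min_w sum_l (targets_l - <phi_l, w>)^2 + lam * ||w||^2 in closed form.
    phi: (L, d) feature matrix of the pairs (s_h^l, a_h^l);
    targets: (L,) vector of r_{h,l} + max_a Q_{h+1}^{k-1}(s_{h+1}^l, a)."""
    d = phi.shape[1]
    A = phi.T @ phi + lam * np.eye(d)            # regularized Gram matrix
    return np.linalg.solve(A, phi.T @ targets)   # w_h^k

# Toy example with 100 transitions and 5-dimensional features.
rng = np.random.default_rng(0)
phi = rng.normal(size=(100, 5))
targets = phi @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + 0.1 * rng.normal(size=100)
print(ridge_q_update(phi, targets).round(2))
```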
Finally, we use an epoch restart strategy to adapt to the drifting environment, which achieves near-optimal dynamic regret notwithstanding its simplicity. Specifically, we restart the estimation of $\bm{w}$ every $W/H$ episodes, as illustrated in the outer loop of Algorithm 1. Note that in general the epoch size $W$ can vary across epochs, but we find that a fixed length is sufficient to achieve near-optimal performance.
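A schematic of this restart schedule (generic Python with placeholder estimation and episode routines; not the authors' implementation):

```python
def run_with_restarts(total_episodes, W, H, new_estimator, run_episode):
    """Restart the parameter estimation every W/H episodes (the epoch length
    used by the restart strategy); `new_estimator` and `run_episode` are
    placeholders for the estimation state and the per-episode interaction."""
    epoch_len = max(1, W // H)
    estimator = new_estimator()
    for episode in range(total_episodes):
        if episode > 0 and episode % epoch_len == 0:
            estimator = new_estimator()   # forget stale data from the drifted MDP
        run_episode(estimator)

# Example usage with trivial placeholders.
run_with_restarts(total_episodes=20, W=10, H=2,
                  new_estimator=lambda: {"data": []},
                  run_episode=lambda est: est["data"].append(1))
```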
From Figure 1, we see that LSVI-UCB-Restart with knowledge of the global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$-function with knowledge of the total variation. Ada-LSVI-UCB-Restart also outperforms the baselines because it takes the nonstationarity into account by periodically updating the epoch size for restarts. In addition, Ada-LSVI-UCB-Restart has a large gain over LSVI-UCB-Unknown, which agrees with our theoretical analysis. This suggests that Ada-LSVI-UCB-Restart works well when knowledge of the global variation is unavailable. Our proposed algorithms not only perform systematic exploration, but also adapt to environment change.
One might be skeptical since simply applying the least-squares method to estimate $\bm{w}$ does not take the distribution drift in $\mathbb{P}$ and $r$ into account and hence may lead to non-trivial estimation error. However, we show that the estimation error gracefully adapts to the nonstationarity, and it suffices to restart the estimation periodically to achieve good dynamic regret.
D
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et al., 2017). The usage of fake news ranges from self-serving purposes like clickbait for moneymaking (Geçkil et al., 2018) to agendas on a national scale like political manipulation (Allcott and Gentzkow, 2017) and terrorism (Fang, 2021). With the rapid and extensive adoption of social platforms, fake news has come to be more closely integrated with daily life, resulting in rising social costs due to people making poorly justified and unwarranted choices based on inaccurate knowledge (Duffy et al., 2020). This has spurred CSCW research on areas like attitudes towards news (Wang and Mark, 2013), news transmission (Liao and Shi, 2013), and forms of innovative countermeasures (Bhuiyan et al., 2018; Mitra et al., 2017), revealing the breadth of interests in this issue.
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Government to more directly address falsehoods that hurt the public interest. The rising attention of fake news in the local scene has motivated various research including studies on the perceptions and motivations of fake news sharing (Chen et al., 2015) and responses to fake news (Edson C Tandoc et al., 2020). Although there are parallels between these studies and ours, we want to highlight that our study explores fake news in general media instead of solely social media, examining both usage and trust. Furthermore, we investigate more broadly the attitudes and behaviors on news sharing and fake news.
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this to be of concern to the CSCW community. As global efforts addressing fake news take off, we aim to understand what the perceptions and practices of news sharing and fake news are in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms, and post corrections and warnings when they encounter fake news. That respondents show strong trust and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts in making transparent government decisions and the readiness of the government in addressing public concerns through online forums and dialogues (REACH, [n.d.]). There is opportunity here for the government to launch programs such as campaigns, call-to-actions and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions.
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et al., 2017). The usage of fake news ranges from self-serving purposes like clickbait for moneymaking (Geçkil et al., 2018) to agendas on a national scale like political manipulation (Allcott and Gentzkow, 2017) and terrorism (Fang, 2021). With the rapid and extensive adoption of social platforms, fake news has come to be more closely integrated with daily life, resulting in rising social costs due to people making poorly justified and unwarranted choices based on inaccurate knowledge (Duffy et al., 2020). This has spurred CSCW research on areas like attitudes towards news (Wang and Mark, 2013), news transmission (Liao and Shi, 2013), and forms of innovative countermeasures (Bhuiyan et al., 2018; Mitra et al., 2017), revealing the breadth of interests in this issue.
A
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attempt to address entity alignment by introducing a new relation, the results often demonstrate poor performance, as evidenced in [2, 27].
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different downstream tasks on datasets that encompass both existing and new entities.
In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct comprehensive experiments to evaluate its performance on entity alignment and entity prediction, considering scenarios with and without new entities. Our experimental results demonstrate state-of-the-art performance of the proposed method on conventional and open-world benchmarks for both entity alignment and entity prediction tasks. Our method not only provides a solution for knowledge graph representation learning but also offers valuable insights into the potential of decentralized attention mechanisms for other graph-based applications.
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attempt to address entity alignment by introducing a new relation, the results often demonstrate poor performance, as evidenced in [2, 27].
We conduct experiments to explore the impact of the numbers of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For example, when only 20% of entities are unseen, decentRL outperforms AliNet on Hits@1 by 9.2%, while this margin extends to 35.9% when 80% of entities are unseen. Overall, decentRL demonstrates significant advantages as new entities are added to KGs.
A
Reinforcement learning (RL) [1] achieves promising results in solving a wide range of problems, including human-level performance on Atari games [2, 3], the board game Go [4], the strategy game StarCraft II [5], and challenging robotic tasks [6, 7, 8]. Most of the successes in RL rely on a well-defined extrinsic reward function from the environment, e.g., a running score from video games. However, in real-world applications, such an extrinsic reward function is sparse or not available, making efficient exploration in such applications a tricky problem.
Conducting exploration without extrinsic rewards is called self-supervised exploration. From the perspective of human cognition, the learning style of children can inspire us to solve such problems. Children often employ goal-less exploration to learn skills that will be useful in the future. Developmental psychologists consider intrinsic motivation the primary driver in the early stages of development [9]. Extending this idea to the RL domain, ‘intrinsic’ rewards are used in RL to incentivize exploration. Previous formulations of intrinsic rewards used in self-supervised exploration typically utilize ‘curiosity’, corresponding to the prediction error of an environment model [10, 11], or Bayesian uncertainty estimation with ensemble-based [12] environment models [13]. Both of these formulations require modeling dynamics models of the corresponding environments.
To validate the effectiveness of our method, we compare it with the following self-supervised exploration baselines: (i) VDM, the proposed self-supervised exploration method. (ii) ICM [10]. ICM first builds an inverse dynamics model to capture information related to the actions taken by the agent while ignoring other side information, and utilizes the prediction error of the forward dynamics as the intrinsic reward; ICM is robust to the stochasticity of the environment. (iii) RFM [11]. Similar to ICM, RFM uses the prediction error as the intrinsic reward for self-supervised exploration, but uses a fixed CNN to extract state features; RFM achieves performance comparable to ICM in challenging tasks and is more computationally efficient. (iv) Disagreement [13]. The disagreement method uses an ensemble of environment models to evaluate the uncertainty in exploration. The agent is encouraged to explore the area with maximum disagreement among the predictions of the ensemble models. Such disagreement can be cast as a Bayesian measure of model uncertainty [50] and prevents the agent from getting stuck in local minima of exploration.
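For reference, the disagreement-style intrinsic reward used by baseline (iv) can be sketched as the variance of next-state predictions across the ensemble (hypothetical shapes; not the exact implementation of [13]):

```python
import numpy as np

def disagreement_reward(ensemble_predictions):
    """ensemble_predictions: (E, d) array of next-state feature predictions
    from E ensemble models for a single (s, a) pair.  The intrinsic reward is
    the prediction variance across models, averaged over feature dimensions."""
    return np.var(ensemble_predictions, axis=0).mean()

preds = np.random.default_rng(0).normal(size=(5, 16))  # 5 models, 16-dim features
print(disagreement_reward(preds))
```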
In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-error of dynamic [10, 25, 11] and the Bayesian uncertainty estimation using ensemble-based environment models [26, 13] or ensemble Q-functions [27]. Since the agent does pure exploration, the intrinsic motivation becomes the only driving force of the whole learning process. Meanwhile, because the influence of extrinsic rewards is eliminated, the effectiveness of intrinsic rewards can be evaluated independently. After training the pure-exploratory policy with intrinsic rewards, there are several ways to combine the intrinsic policy with extrinsic policies. Scheduled intrinsic drive [28] uses a high-level scheduler that periodically selects to follow either the extrinsic or the intrinsic policy to gather experiences. MuleX [29] learns several policies independently and uses a random heuristic to decide which one to use in each time step. Such policy combination methods perform better than the policy obtained from the linear combination of extrinsic and intrinsic rewards. We focus on developing the pure-exploratory agent and leave the study of policy combination in the future.
We first evaluate our method on standard Atari games. Since different methods utilize different intrinsic rewards, the intrinsic rewards are not suitable for measuring the performance of the trained purely exploratory agents. As an alternative, we follow [11, 13] and use the extrinsic rewards given by the environment to measure performance. We highlight that the extrinsic rewards are only used for evaluation, not for training. We illustrate the evaluation curves of 18 common Atari games in Fig. 6, where the first 6 games are hard exploration tasks. We draw each curve with five distinct random seeds. For each method, the solid line indicates the mean episodic reward over all five seeds, and the shaded area shows the confidence interval (i.e., $\pm$ the standard deviation of episodic rewards among all seeds) of the performance. The result shows that self-supervised exploration enables the agent to obtain higher extrinsic rewards by learning based on intrinsic rewards. More specifically, maximizing the intrinsic rewards encourages the agent to explore the complicated parts of the environment, which typically correspond to significant changes in the scenario and lead to large extrinsic rewards.
A
If we were to add nodes to make the grid symmetric or tensorial, then the number of nodes of the resulting (sparse) tensorial grid would scale exponentially, $\mathcal{O}(n^{m})$, with the space dimension $m\in\mathbb{N}$. In contrast, our proposed interpolation nodes scale sub-exponentially, $o(n^{m})$, and
We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: for given arbitrary nodes $P$, determine the polynomial space $\Pi$ such that $P$ is unisolvent with respect to $\Pi$. In doing so, we revisit earlier results by Carl de Boor and Amos Ron [28, 29] and answer their question from our perspective.
for a given polynomial space $\Pi$ and a set of nodes $P\subseteq\mathbb{R}^{m}$ that is not unisolvent with respect to $\Pi$, find a maximum subset $P_{0}\subseteq P$ and a polynomial subspace $\Pi_{P_{0}}\subseteq\Pi$,
We realize the algorithm of Carl de Boor and Amos Ron [28, 29] in terms of Corollary 6.5 in the case of the torus $M=\mathbb{T}^{2}_{R,r}$. That is, we consider
Here, we answer Questions 1–2. To do so, we generalize the notion of unisolvent nodes $P_{A}$, $A\subseteq\mathbb{N}^{m}$, to non-tensorial grids. This allows us to extend Newton (NI) and Lagrange (LI) interpolation to arbitrary-dimensional spaces such that:
A
On the one hand, it should be rich enough to claim $\mu=\nu$ if the metric vanishes. On the other hand, to control the type-I error, the function space should also be relatively small so that the empirical estimate of the IPM decays quickly to zero.
While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings. We propose the projected Wasserstein distance to address this issue.
The finite-sample convergence of general IPMs between two empirical distributions was established. Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality.
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized. The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by considering $k$-dimensional projection mappings, and we discuss the finite-sample convergence rate of the projected Wasserstein distance so that two-sample tests can be designed.
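To make the construction concrete, the sketch below shows the $k=1$ (max-sliced) special case, approximating the worst-case projection by random search over unit directions; the function name and the sampling strategy are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def projected_wasserstein_1d(x, y, n_directions=200, seed=0):
    """Worst-case one-dimensional projection (the k = 1 special case of the
    projected Wasserstein distance), approximated by searching over random
    unit directions rather than solving the exact maximization."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_directions):
        u = rng.normal(size=x.shape[1])
        u /= np.linalg.norm(u)
        best = max(best, wasserstein_distance(x @ u, y @ u))
    return best

# Two 50-dimensional samples whose means differ along one coordinate.
rng = np.random.default_rng(1)
x = rng.normal(size=(500, 50))
y = rng.normal(size=(500, 50)); y[:, 0] += 1.0
print(projected_wasserstein_1d(x, y))
```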
The Wasserstein distance, as a particular case of IPM, is popular in many machine learning applications. However, a significant challenge in utilizing the Wasserstein distance for two-sample tests is that the empirical Wasserstein distance converges at a slow rate due to the complexity of the associated function space. Thus, its performance suffers from the curse of dimensionality.
D
Learning disentangled factors $h\sim q_{\phi}(H|x)$ that are semantically meaningful representations of the observation $x$ is highly desirable, because such interpretable representations can arguably [icmlbest] be advantageous for a variety of downstream tasks, including classification, detection, reinforcement learning, and transfer learning [bengio2013representation, lecun2015deep, lake2017building, van2019disentangled]. While a formal definition of disentangled representation (DR) remains elusive, we understand it to mean that by manipulating only one of the factors while holding the rest constant, only one semantically meaningful aspect of the observation, e.g. the pose of an object in an image, changes. Such a capability can be highly useful for data generation tasks such as image synthesis from textual descriptions [DBLP:conf/icml/ReedAYLSL16, DBLP:journals/corr/ZhangXLZHWM16]. For this reason there has been extensive research towards developing DGMs that learn DR while generating data points of high quality, i.e. that are indistinguishable from the data being modeled. Of particular interest are models that can achieve this without supervision.
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised-trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervised, semi-supervised or unsupervised; in the Appendix we present such implementations), where we significantly constrain the capacity of the learned representation and heavily regularize the model to produce independent factors. As we explained above, such a model will likely learn a good disentangled representation; however, its reconstruction will be of low quality, as it will only be able to generate the information captured by the disentangled factors while averaging out the details. For example, in Figure 1, the model uses $\beta$-TCVAE [mig] to retrieve the pose of the model as a latent factor. In the reconstruction, the rest of the details are averaged, resulting in a blurry image (1b). The goal of the second part of the model is to add the details while maintaining the semantic information retrieved in the first stage. In Figure 1 that means transforming Image 1b (the output of the first stage) to be as similar as possible to Image 1a (the target observation). We can view this as a style transfer task and use a technique from [adaIN] to achieve our goal.
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$ while maintaining the semantic information captured in $C$ to obtain the final reconstruction (Image 1d in our example).
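A minimal sketch of such a normalization layer in PyTorch (the module name, dimensions and its exact placement in the decoder are assumptions for illustration, not the authors' architecture):

```python
import torch
import torch.nn as nn

class DetailModulation(nn.Module):
    """AdaIN-style layer: a nuisance code z produces per-channel scale and
    shift applied to normalized decoder features (a sketch, not the paper's
    exact design)."""
    def __init__(self, z_dim, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_scale = nn.Linear(z_dim, num_channels)
        self.to_shift = nn.Linear(z_dim, num_channels)

    def forward(self, features, z):
        h = self.norm(features)                            # (B, C, H, W)
        scale = self.to_scale(z).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(z).unsqueeze(-1).unsqueeze(-1)
        return (1 + scale) * h + shift

# Example: modulate 64-channel feature maps with a 16-dimensional nuisance code.
layer = DetailModulation(z_dim=16, num_channels=64)
out = layer(torch.randn(2, 64, 32, 32), torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```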
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and correlated components $Z$, a.k.a. nuisance variables, which encode the detail information not stored in the independent components. A series of works starting from [beta] aims to achieve this by regularizing the models via up-weighting certain terms in the ELBO formulation which penalize the (aggregate) posterior to be factorized over all or some of the latent dimensions [kumar2017variational, factor, mig].
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, if the unconstrained nuisance variables have enough capacity, the model can use them to achieve a high-quality reconstruction while ignoring the latent variables related to the disentangled factors. This phenomenon is sometimes called the "shortcut problem" and has been discussed in previous works [DBLP:conf/iclr/SzaboHPZF18].
C
This window operator computes the connection between the pie pin and alpha or beta at A and B and transfers it to the result side (A AND B). For output, the result can be measured by firing a laser onto the pie pin on the result side and checking whether it returns to alpha or to beta. The figure shows how the connection status of the result is determined for each input.
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations since series and parallel connections were required. However, one can ask whether the four-pin design uses the minimum number of pins required by structural computers. In other words, operating a structural computer with a minimal number of leads is also a task to be addressed by this study, because one of the most important factors in computer hardware design is the level of integration. Consider the role of the four pins that transmit signals in a 4-pin based signal system. The four pins are grouped into two pairs, each representing/delivering the true and inverted values as a connection state. When checking the output, a voltage is placed on one of the two wires in a pair and the other is grounded. In this case, the study inferred that, of the four wires, the two wires acting as ground can be replaced by a single wire; based on this reasoning, the 4-pin signal system can be described as an equivalent 3-pin signal system. As mentioned above, 3-pin based logic consists of a ground wire in the center and two signal lines representing the true and inverted values above and below it, and it can perform NOT, AND and OR operations through the structural transformations shown below.
Optical logic aggregates can be designed in the same way as in the implementation of a structural computer using mirrors and translucent mirrors, and for convenience of expression and the exploration of mathematical properties (especially their association with matrices), the numbering shown in Fig. 5 can be applied to the labels of the window operator to express the AND gate as shown below, which is referred to as the matrix representation of the optical logic. Fig. 7 shows, however, that some rays of light can be counted on the lower beta signal, which can interfere with the operation of other gates. Thus, a black-body gate was implemented using i cells to force every input into the NULL state. Including this, the functions derived from the properties of light that are only available in structure-based optical computing can be modularized with window operators, which can be organized into the following seven categories: AND (logic in Boolean algebra), OR (logic in Boolean algebra), CROS (vertical reflection/crossing of two logics), CNOT (vertical reflection/crossing of two logics that only intersect, with both logics NOT-operated), INVS (transmittance of two logics), COPY (cloning a logic), and BLAK (absorption of a logic, making it all NULL).
The structure-based computer described in this paper is based on Boolean algebra, the system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic as 1 and 0, and mathematically describes digital electrical signals. The concept of logical aggregates defined in Boolean algebra has become the basis for hardware devices such as the ALU, CLU, RAM, and so on. The structure-based computer in this paper was also designed to perform logical operations using digital signals of 1 and 0. Logic circuits are the units in which logical operations are performed; they include AND, OR, and NOT gates. Of these, the NOT gate in the computers we use today is based on transistors. The advantage of transistors is that they can differentiate between signal and power and perform switching and amplification at the same time. On the other hand, more heat is generated than when passing through a conductor of the same length, which causes semiconductors to age and limits the clock rate. To address these problems of semiconductors, this paper introduces the concepts of the "Reverse-Logic pair of digital signals" and "double-pair (4-pin) based logic operation" techniques on which structure-based computer hardware is built. The Reverse-Logic pair of digital signals [7] is a method for addressing the heating, aging, and computation-speed problems of NOT operations. Expressing 1 as an inverted signal pair, it appears as an ordered pair of two auxiliary signals, each carrying a signal of one or zero, as in (1,0). Similarly, 0 is expressed as the ordered pair (0,1).
The NOT gate can perform logical negation through a single 'twist', as in the 4-pin case. To be exact, the position of the middle ground pin is fixed, and the structural transformation exchanges the positions of the remaining two pins carrying the true and false signals.
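For intuition, the inverted-signal-pair encoding and the operations described above can be mimicked in software as follows (a plain-Python sketch; the mapping of AND/OR to series/parallel connections on the two lines is our reading of the description, not a circuit-level specification):

```python
# A logical value is carried as an ordered pair (signal, inverted signal):
# logical 1 -> (1, 0), logical 0 -> (0, 1).
def encode(bit):
    return (bit, 1 - bit)

def NOT(pair):
    # Structural "twist": swap the two lines; no switching element is needed.
    s, s_inv = pair
    return (s_inv, s)

def AND(a, b):
    # Series connection on the true lines, parallel on the inverted lines.
    return (a[0] & b[0], a[1] | b[1])

def OR(a, b):
    return (a[0] | b[0], a[1] & b[1])

for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(encode(x), encode(y)), OR(encode(x), encode(y)))
```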
B
Hence any function $x^{n}$ with $\gcd(n,q-1)\neq 1$, under the action of $\mathbf{K}$, settles down to the function $x^{q-1}$. Further, $m$ is the least such integer with $n^{m}\bmod (q-1)=0$, as any smaller $m_{1}$ such that $x^{n^{m_{1}}}=x^{q-1}$ would contradict the assumption that $m$ is the index of nilpotence of $n$ in the nilradical of $\mathbb{Z}_{q-1}$. ∎
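A quick numerical check of this behaviour, assuming the action of $\mathbf{K}$ amounts to repeated composition $x \mapsto x^{n}$ so that exponents multiply modulo $q-1$ (illustrative values $q=13$, $n=6$):

```python
def index_of_nilpotence(n, q):
    """Least m with n^m ≡ 0 (mod q-1), i.e. the index of nilpotence of n in
    the nilradical of Z_{q-1} (returns None if n is not nilpotent)."""
    power, m = n % (q - 1), 1
    while power != 0:
        power = (power * n) % (q - 1)
        m += 1
        if m > q:          # n is not nilpotent modulo q-1
            return None
    return m

q, n = 13, 6               # gcd(6, 12) = 6 != 1, and 6^2 = 36 ≡ 0 (mod 12)
m = index_of_nilpotence(n, q)
print(m)                   # 2
# Iterating x -> x^n m times sends every power function to x^(q-1):
print(all(pow(x, n ** m, q) == pow(x, q - 1, q) for x in range(q)))  # True
```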
The paper is organized as follows. Section 2 focuses on the linear representation of maps over finite fields $\mathbb{F}$, develops conditions for invertibility, computes the compositional inverse of such maps and estimates the cycle structure of permutation polynomials. In Section 3, this linear representation is extended to a family of parametric maps, studying its invertibility and the computation of the parametric inverse. The extension of the theory of linear representation to multivariate maps (maps over $\mathbb{F}^{n}$) is discussed in Section 4 and, finally, a linear representation of the group generated by a finite set of invertible maps over $\mathbb{F}^{n}$ is addressed in Section 5.
In this section, we aim to compute the possible cycle lengths of the PP through the linear representation defined in (10). As discussed in Section 1.3, given a polynomial $f(x)$, we associate a dynamical system through a difference equation of the form
The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial $f$ by constructing a matrix $A(f)$, of dimension $q\times q$, through the coefficients of the (algebraic) powers of $f^{k}$, $k=0,1,\dots,q-1$, and computing the multiplicative order of the eigenvalues of this matrix $A(f)$ over a suitable field extension. In our work, to compute the cycle structure of the permutation polynomial, we have to compute the solutions of the associated linear dynamical system (19). This computation amounts to computing the multiplicative order of the eigenvalues of the matrix $M$ over a suitable field extension [24]. From the table, we see that the dimension of the matrix $M$, which is used to compute the cycle lengths, is not necessarily $q$. Hence, this approach does not necessarily involve matrices of dimension $q$ in all cases.
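For a brute-force comparison, the cycle structure of a given permutation polynomial can also be read off by direct iteration over $\mathbb{F}_q$ (a sketch for small prime $q$; the example polynomial $x^{5}$ over $\mathbb{F}_{13}$ is illustrative):

```python
def cycle_lengths(f, q):
    """Orbit (cycle) lengths of the permutation x -> f(x) of F_q = {0,...,q-1},
    obtained by direct iteration (a check to compare against the lengths
    predicted via the linear representation)."""
    seen, lengths = set(), []
    for start in range(q):
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = f(x)
            length += 1
        lengths.append(length)
    return sorted(lengths)

q = 13
f = lambda x: pow(x, 5, q)          # x^5 is a PP of F_13 since gcd(5, 12) = 1
print(cycle_lengths(f, q))
```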
In this section, we provide examples of estimating the possible orbit lengths of permutation polynomials in the form of Dickson polynomials $D_{n}(x,\alpha)$ [10] of degree $n$ through the linear representation approach. The Dickson polynomial $D_{n}(x,\alpha)$ is of the form
B
Any simulation study is limited by its choice of experimental factors. In particular, in our simulations we assumed that all features corresponding to signal have the same regression weight, and that all views contain an equal number of features. The correlation structures we used are likely simpler than those encountered in real data sets. Additionally, we defined the view selection problem in such a way that we want to select any view which contains at least some (in our simulations at least 50%) features truly related to the outcome. In practice, the amount of signal present in a view may be lower, leading to considerations of exactly how much signal should be present in a view in order for it to be considered worth selecting. Additionally, we only considered settings where views are mutually exclusive, but in practice views may overlap (L. Yuan et al., 2011; Park et al., 2015), meaning that a single feature may correspond to multiple views. In general, the MVS algorithm can handle overlapping views by simply ‘copying’ a feature for each additional view in which it occurs. However, an exploration of the implications of overlapping views for view selection, both in MVS and in general, would make an interesting topic for future research. We also did not include the possibility of missing data. In multi-view data, it is quite likely that if missing data occurs, all features within a view will be simultaneously missing. Future work may focus on developing optimal strategies for handling missing data in the multi-view context.
Our implementation of the nonnegative adaptive lasso produced slightly sparser models than the regular nonnegative lasso. This did not appear to substantially reduce classification accuracy in our simulations, although there were some minor reductions in some low sample size cases. In both gene expression data sets the adaptive lasso performed worse on average than the lasso in all three classification metrics, but the observed differences were small. The main difference between these two meta-learners appears to be that the regular lasso slightly favors classification performance, whereas the adaptive lasso slightly favors sparsity. Note that the adaptive lasso is a flexible method, and one can change the way in which its weights are initialized, which will likely affect performance. Additionally, one could consider a larger set of possible values for the tuning parameter $\gamma$. However, this flexibility also means the method is less straightforward to use than the regular lasso.
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (1996), (2) nonnegative ridge regression (Hoerl & Kennard, 1970; Le Cessie & Van Houwelingen, 1992), (3) the nonnegative elastic net (Zou & Hastie, 2005), (4) the nonnegative lasso (Tibshirani, 1996), (5) the nonnegative adaptive lasso (Zou, 2006), (6) stability selection with the nonnegative lasso (Hofner et al., 2015), and (7) nonnegative forward selection. All of these meta-learners provide models with nonnegative coefficients. In addition, they can all set some coefficients to zero, thus potentially obtaining sparse models and performing view selection. Although not an exhaustive comparison of all possible meta-learners, six of these are popular feature selection methods in their own right, and would most likely end up high on many researchers’ list of candidate meta-learners. A likely exception to this is nonnegative ridge regression, since ridge regression without nonnegativity constraints would not set any coefficients to zero. However, this method is included because it provides an indication of the view selection effect of just the addition of nonnegativity constraints on the meta-learner. Each of the seven candidate meta-learners is described in more detail below.
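The sketch below illustrates how such a view-selecting meta-learner can be fit once out-of-fold view-specific predictions are available (scikit-learn, with hypothetical data; an illustration, not the authors' implementation):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, n_views = 200, 6
y = rng.integers(0, 2, size=n)

# Hypothetical out-of-fold predicted probabilities from 6 view-specific learners;
# only the first two views carry signal.
Z = rng.uniform(size=(n, n_views))
Z[:, 0] = 0.7 * y + 0.3 * rng.uniform(size=n)
Z[:, 1] = 0.6 * y + 0.4 * rng.uniform(size=n)

# Nonnegative lasso meta-learner: views with a zero coefficient are deselected.
meta = Lasso(alpha=0.01, positive=True)
meta.fit(Z, y)
print(meta.coef_.round(2))   # nonzero weights indicate selected views
```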
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of views, the interpolating predictor often had the lowest TPR in view selection, as well as the lowest test accuracy, particularly when there was no correlation between the different views. When the sample size was smaller than the number of views, the interpolating predictor had a FPR in view selection that was considerably higher than that of all other meta-learners. In terms of accuracy it performed very well in the breast cancer data, but less so in the colitis data. However, in both cases it produced very dense models, which additionally had low view selection stability. The fact that its behavior varied considerably across our experimental conditions, combined with its tendency to select very dense models when the meta-learning problem is high-dimensional, suggests that the interpolating predictor should not be used when view selection is among the goals of the study under consideration. However, it may have some use when its interpretation as a weighted mean of the view-specific models is of particular importance.
In this study, we evaluated the performance of the different meta-learners across a variety of settings, including high-dimensional and highly correlated settings. Most of these settings were not easy problems, as evident by the absolute accuracy values obtained by the meta-learners. Additionally we considered two real data examples, one considerably harder than the other. Across all our experiments, the relative performance of the nonnegative lasso, nonnegative adaptive lasso and nonnegative elastic net remained remarkably stable. Our results show that MVS can be used with one of these meta-learners to obtain models which are substantially sparser at the view level than those obtained with other meta-learners, without incurring a major penalty in classification accuracy.
D
According to Figure 7 and Table 8, the two DepAD algorithms are significantly better than all benchmark methods except for wkNN and iForest in terms of ROC AUC. With wkNN, the results are similar. With iForest, the $p$-values are very close to 0.05. In terms of AP, the two DepAD algorithms yield significantly better results than all benchmark methods except for wkNN, iForest and COMBN, as shown in Figure 8 and Table 8. With wkNN, the $p$-value is around 0.5, which indicates similar performance. The $p$-values with iForest and COMBN are close to 0.05. Furthermore, the two DepAD methods significantly outperform ALSO, which is attributed to the inclusion of relevant-variable selection. In summary, the two DepAD algorithms outperform most of the benchmark methods, including both proximity-based methods and existing dependency-based methods.
Figure 8: Comparison of two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, with benchmark methods in terms of AP. The X axis stands for the AP of a comparison method, and the Y axis represents the AP of FBED-CART-PS (circle) or FBED-CART-Sum (plus). A dot (or plus) represents a comparison of FBED-CART-PS (or FBED-CART-Sum) with a method named in each sub figure. A dot or plus falling in the top left of the diagonal line indicates FBED-CART-PS or FBED-CART-Sum performs better than the comparison method.
In summary, the DepAD methods FBED-CART-RZPS, FBED-CART-PS, and FBED-CART-Sum generally demonstrate good performance in terms of ROC AUC. Among them, FBED-CART-PS and FBED-CART-Sum are considered good choices as they exhibit favorable performance in both ROC AUC and AP. It is noteworthy that FBED-CART-PS is the same algorithm proposed in [4].
As FBED-CART-PS and FBED-CART-Sum show similar results as wkNN, in this section, we explain the performance difference between DepAD algorithms and wkNN. The following analysis is conducted with both FBED-CART-PS and FBED-CART-Sum, and the results are very similar. We only present the analysis based on FBED-CART-PS in the paper to save space.
Figure 7: Comparison of two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, with benchmark methods in terms of ROC AUC. The X axis stands for the ROC AUC of a comparison method, and the Y axis represents the ROC AUC of FBED-CART-PS (circle) or FBED-CART-Sum (plus). A dot (or plus) represents a comparison of FBED-CART-PS (or FBED-CART-Sum) with a method named in each sub figure. A dot or plus falling in the top left of the diagonal line indicates FBED-CART-PS or FBED-CART-Sum performs better than the comparison method.
C
$\Delta^{\text{pred}}(\mathbf{X}_{\mathcal{Q}_{t}},\theta)$ represents the difference in perceived rewards due to the inaccuracy in the estimation of the parameter $\theta_{*}$.
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of uncertainty approaches) [Abbasi-Yadkori et al., 2011, Abeille et al., 2021]. We use Bernstein-style concentration for self-normalized martingales, previously proposed in the context of scalar logistic bandits in Faury et al. [2020], to define our confidence set over the true parameter, taking into account the effects of the local curvature of the reward function. We show that the performance of CB-MNL (as measured by regret) is bounded as $\tilde{\mathrm{O}}(d\sqrt{T}+\kappa)$, significantly improving the theoretical performance over existing algorithms where $\kappa$ appears as a multiplicative factor in the leading term. We also leverage a self-concordance-like relation [Bach, 2010] for the multinomial logit reward function [Zhang & Lin, 2015], which helps us limit the effect of $\kappa$ on the final regret upper bound to only the higher-order terms. Finally, we propose a different convex confidence set for the optimization problem in the decision set of CB-MNL, which reduces the optimization problem to a constrained convex problem.
Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. The technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
Comparison with Faury et al. [2020]. Faury et al. [2020] use a bonus term for optimization in each round, and their algorithm performs non-trivial projections on the admissible log-odds. While we do reuse the Bernstein-style concentration inequality proposed by them, their results do not seem to extend directly to the MNL setting without significantly more work. Further, our algorithm CB-MNL performs an optimistic parameter search for making decisions instead of using a bonus term, which allows for a cleaner and shorter analysis.
CB-MNL enforces optimism via an optimistic parameter search (e.g. in Abbasi-Yadkori et al. [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. [2020], Filippi et al. [2010]. Optimistic parameter search provides a cleaner description of the learning strategy. In non-linear reward models, both approaches may not follow similar trajectory but may have overlapping analysis styles (see Filippi et al. [2010] for a short discussion).
D
Datasets and evaluation metrics. We present our experimental results on two representative datasets, THUMOS-14 (THUMOS for short) [15] and ActivityNet-v1.3 (ActivityNet for short) [7]. THUMOS-14 contains 413 temporally annotated untrimmed videos with 20 action categories, in which 200 videos are for training and 213 videos for validation (the training and validation sets of THUMOS are temporally annotated videos from the validation and testing sets of UCF101 [33], respectively). ActivityNet-v1.3 has 19994 temporally annotated untrimmed videos in 200 action categories, which are split into training, validation and testing sets in the ratio 2:1:1. For both datasets, we use mean Average Precision (mAP) at different tIoU thresholds as the evaluation metric. On THUMOS-14, we use tIoU thresholds $\{0.3, 0.4, 0.5, 0.6, 0.7\}$; on ActivityNet-v1.3, we choose 10 values in the range $[0.5, 0.95]$ with a step size of 0.05 as tIoU thresholds, following the official evaluation practice.
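For reference, the ActivityNet-style average mAP is just the mean of the per-threshold mAPs over these ten tIoU values (hypothetical numbers below):

```python
import numpy as np

tious = np.arange(0.5, 0.951, 0.05)                     # 0.50, 0.55, ..., 0.95
map_at_tiou = np.linspace(0.52, 0.08, num=len(tious))   # hypothetical per-tIoU mAPs
print(len(tious), "thresholds, average mAP =", round(map_at_tiou.mean(), 4))
```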
Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, significantly outperforms all other methods that use the same pre-extracted features. It is even on par with concurrent methods that finetune the features on ActivityNet for TAL end to end.
We compare the inference time of different methods on the ActivityNet validation set on a 1080ti GPU in Table 8. Compared to end-to-end frameworks such as PBRNet, methods using pre-extracted features such as BMN, G-TAD and VSGN can re-use the features extracted for other tasks, and they do not introduce complex 3D convolutions in the TAL architecture; therefore, they have clearly lower inference time. Our VSGN has negligible computation in VSS, and has a similar cost in xGPN to the GNNs in G-TAD. Additionally, it uses fewer anchors (1240 vs 4950) and does not have an ROIAlign stage, so it runs faster than G-TAD.
Implementation Details. In order to achieve higher performance, some works directly process video frames and learn features for the task of temporal action localization (TAL) in an end-to-end fashion [24, 42]. However, this places enormous requirements on GPU memory and computational capability. Instead, we follow the practice of using off-the-shelf pre-extracted features, without further finetuning on the target TAL task [3, 19, 21, 44]. For THUMOS, we sample at the original frame rate of each video and pre-extract features using the two-stream network TSN [41] trained on Kinetics [16]. For ActivityNet, we evaluate on two different types of features: TSN features at 5 snippets per second and I3D [8] features at 1.5 snippets per second (both networks are trained on Kinetics [16]).
We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outperforms all other methods that use the same features. More remarkably, our VSGN, which uses pre-extracted features without further finetuning, is on par with and even better than concurrent methods that finetune features end to end for TAL.
C
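For reference, the temporal IoU between a predicted and a ground-truth segment, which underlies the mAP@tIoU metric mentioned in the excerpt above, can be computed as in the following sketch (function name and example segments are illustrative):

```python
def temporal_iou(pred, gt):
    # tIoU between two segments given as (start, end) in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

thumos_thresholds = [0.3, 0.4, 0.5, 0.6, 0.7]
activitynet_thresholds = [0.5 + 0.05 * i for i in range(10)]   # 0.50, 0.55, ..., 0.95
print(temporal_iou((10.0, 25.0), (12.0, 24.0)))                # 0.8
```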
With crossover, random pairs of underperforming models (originating from the same algorithm) are picked and their hyperparameters are fused with the goal of creating a better model. As a result, internal regions of the solution space are further explored, and better local optima are investigated. On the other hand, mutation randomly generates new values for the hyperparameters to substitute old values. It facilitates scanning for external regions of the solution space to discover additional local optima. These unexplored areas of the hyperparameter space may offer a fresh start to the search for hyperparameters. The synergy of combining both techniques can be beneficial in finding distinctive local optima that generalize to a better result in the end. Hence, the problem of getting stuck in local optima of the hyperparameter space is addressed. However, one question that emerges is: (RQ1) how to choose which models (and algorithms) should crossover and/or mutate, and to what extent, considering we have limited computational resources?
In this paper, we presented VisEvol, a VA tool with the aim to support hyperparameter search through evolutionary optimization. With the utilization of multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring the impact of the addition and removal of algorithms and models in a majority-voting ensemble from different perspectives and tracking the crossover and mutation process enables users to be sure how to proceed with the selection of hyperparameters for a single model or complex ensembles that require a combination of the most performant and diverse models. The effectiveness of VisEvol was examined with use cases using real-world data that demonstrated the advancement of the methods behind achieving performance improvement. Our tool’s workflow and visual metaphors received positive feedback from three ML experts, who even identified limitations of VisEvol. These limitations pose future research directions for us.
The authors of a recent survey [SR18] state that users should understand how to tune models and, in extension, choose hyperparameters for selecting the appropriate ML ensemble. Consequently, another open question is: (RQ2) how to find which particular hyperparameter set is suitable for each model in a majority-voting ensemble of diverse models?
Various automatic ML methods [FH19] and practical frameworks [Com, NNI] have been proposed to deal with the challenge of hyperparameter search. However, their output is usually a single model, which is frequently underpowered when compared to an ensemble of ML models [SR18]. Ensemble methods—such as bagging and boosting—could be combined in a majority-voting ensemble [CGW13] with a democratic voting system that summarizes the decisions among models.
Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with the exception of more general visualization approaches such as EAVis [KE05, Ker06] and interactive evolutionary computation (IEC) [Tak01]. To the best of our knowledge, there is no literature describing the use of VA in hyperparameter tuning of evolutionary optimization (as defined in Section 1) with the improvement of performance based on majority-voting ensembles. In this section, we review prior work on automatic approaches, visual hyperparameter search, and tools with which users may tune ML ensembles. Finally, we discuss the differences of such systems when compared to VisEvol in order to clarify the novelty of our tool.
C
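A minimal sketch of the crossover and mutation operators described in the excerpt above, applied to hyperparameter dictionaries of a single algorithm; the search space, mutation rate, and function names are illustrative assumptions rather than VisEvol's actual implementation.

```python
import random

def crossover(hp_a, hp_b):
    # Fuse two hyperparameter sets of the same algorithm: each value is inherited
    # from one of the two parents uniformly at random.
    return {k: random.choice([hp_a[k], hp_b[k]]) for k in hp_a}

def mutate(hp, search_space, rate=0.3):
    # Replace each hyperparameter with a freshly sampled value with probability `rate`.
    return {k: (random.choice(search_space[k]) if random.random() < rate else v)
            for k, v in hp.items()}

space = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, 10], "learning_rate": [0.01, 0.1, 0.3]}
parent_a = {"n_estimators": 50, "max_depth": 10, "learning_rate": 0.1}
parent_b = {"n_estimators": 200, "max_depth": 3, "learning_rate": 0.01}
child = mutate(crossover(parent_a, parent_b), space)
print(child)
```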
A comprehensive review of the broader category of multi-agent algorithms is presented in [33], while a survey specifically focusing on aerial swarm robotics is provided in [34]. Additionally, [35] offers an overview of existing swarm robotic applications. For swarm guidance purposes, certain deterministic algorithms have been developed in [36, 37, 38, 39, 40, 41]. However, these algorithms may become computationally infeasible when dealing with swarms that comprise hundreds to thousands of agents.
and a complex communication architecture is not required for the estimation of the distribution. By presenting numerical evidence within the context of the probabilistic swarm guidance problem, we demonstrate that the convergence rate of the swarm distribution to the desired steady-state distribution is substantially faster when compared to previous methodologies.
the performance of the algorithm drops significantly if the current density distribution of the swarm cannot be estimated accurately. The time-inhomogeneous Markov chain approach to the probabilistic swarm guidance problem (PSG-IMC algorithm) is developed in [14] to minimize the number of state transitions. This algorithm is computationally efficient and yields reasonable results with low estimation errors.
In the context of addressing the guidance problem for a large number of agents, considering the spatial distribution of swarm agents and directing it towards a desired steady-state distribution offers a computationally efficient approach. In this regard, both probabilistic and deterministic swarm guidance algorithms are presented in [42, 43, 44, 45, 46, 47, 48] for continuous state spaces. For discrete state spaces, a probabilistic guidance algorithm is introduced in [7].
This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state. The probabilistic guidance algorithm led to the development of numerous Markov chain synthesis algorithms involving specific objectives and constraints [8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
C
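The Metropolis-Hastings synthesis mentioned in the excerpt above can be sketched as follows. This is a generic M-H construction on a small bin graph with a uniform proposal over neighbouring bins, not the specific synthesis algorithms of the cited works; the example graph and desired density are illustrative.

```python
import numpy as np

def mh_transition_matrix(A, pi):
    # Metropolis-Hastings synthesis: a row-stochastic Markov matrix on the bin graph
    # (adjacency A) whose stationary distribution is the desired swarm density pi.
    n = A.shape[0]
    deg = A.sum(axis=1)
    Q = A / deg[:, None]                                  # proposal: uniform over neighbours
    P = np.zeros_like(Q)
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j] > 0:
                P[i, j] = Q[i, j] * min(1.0, (pi[j] * Q[j, i]) / (pi[i] * Q[i, j]))
        P[i, i] = 1.0 - P[i].sum()                        # remaining mass: stay in bin i
    return P

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)                 # 4 bins arranged on a cycle
pi = np.array([0.4, 0.1, 0.4, 0.1])                       # desired steady-state density
P = mh_transition_matrix(A, pi)
print(np.allclose(pi @ P, pi))                            # pi is stationary -> True
```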
Moreover, when assuming (near)-isometries between shapes, efficient and powerful spectral approaches can be leveraged for shape matching [51]. Isometries describe classes of deformable shapes of the same type but in different poses, e.g. humans or animals who are able to adopt a variety of poses. Potential applications for isometric shape matching include AR/VR or template matching.
In principle, any pairwise shape matching method can be used for matching a shape collection. To do so, one can select one of the shapes as reference, and then solve a sequence of pairwise shape matching problems between each of the remaining shapes and the reference. However, a major disadvantage is that such an approach has a strong bias due to the choice of the reference.
While (near)-isometric shape matching has been studied extensively for the case of matching a pair of shapes, the isometric multi-shape matching problem, where an entire collection of (near-isometric) shapes is to be matched, is less explored. Important applications of isometric multi-shape matching include learning low-dimensional shape space representations [84], motion tracking and reconstruction.
Alternatively, one could solve pairwise shape matching problems between all pairs of shapes in the shape collection. Although this way there is no bias, in general the resulting correspondences are not cycle-consistent. As such, matching shape A via shape B to shape C, may lead to a different correspondence than matching shape A directly to C.
Moreover, when assuming (near)-isometries between shapes, efficient and powerful spectral approaches can be leveraged for shape matching [51]. Isometries describe classes of deformable shapes of the same type but in different poses, e.g. humans or animals who are able to adopt a variety of poses. Potential applications for isometric shape matching include AR/VR or template matching.
B
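The cycle-consistency issue described in the excerpt above can be made concrete with a small sketch that composes pairwise correspondences (stored as index maps) and measures how often the composed map A→B→C disagrees with the direct map A→C; the function names and the random example are illustrative.

```python
import numpy as np

def compose(corr_ab, corr_bc):
    # Compose vertex-to-vertex correspondences stored as index maps
    # (vertex i of shape A is matched to vertex corr_ab[i] of shape B).
    return corr_bc[corr_ab]

def cycle_inconsistency(corr_ab, corr_bc, corr_ac):
    # Fraction of vertices of A where matching A -> B -> C disagrees with A -> C.
    return float(np.mean(compose(corr_ab, corr_bc) != corr_ac))

rng = np.random.default_rng(0)
n = 1000
corr_ab, corr_bc, corr_ac = (rng.permutation(n) for _ in range(3))  # independent pairwise matchings
print(cycle_inconsistency(corr_ab, corr_bc, corr_ac))               # close to 1.0: not cycle-consistent
```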
Convert the coloring $f:\Gamma_{C}/\!\sim\;\rightarrow\{0,1\}$ into a directed clique path tree of $\Gamma_{C}$.
On the side of directed path graphs, prior to this paper it was necessary to implement two algorithms to recognize them: a recognition algorithm for path graphs as in [3, 22], and the algorithm in [4] that in linear time is able to determine whether a path graph is also a directed path graph. Our algorithm directly recognizes directed path graphs with the same time complexity, and the simplification is clear.
On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], which give a linear time algorithm able to establish whether a path graph is a directed path graph too (see Theorem 5 for further details). Thus, prior to this paper it was necessary to implement two algorithms to recognize directed path graphs, while we obtain our recognition algorithm for directed path graphs by slightly modifying the recognition algorithm for path graphs.
We presented the first recognition algorithm for both path graphs and directed path graphs. Both graph classes are characterized very similarly in [18], and we extended the simpler characterization of path graphs in [1] to include directed path graphs as well; this result can be of interest in itself. Thus, these two graph classes can now be recognized in the same way both theoretically and algorithmically.
Directed path graphs are characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has $O(n^{4})$ time complexity. In the above cited article, Monma and Wei [18] give the second characterization of directed path graphs, which yields a recognition algorithm with $O(n^{2}m)$ time complexity. Chaplick et al. [4] present a linear time algorithm able to establish whether a path graph is a directed path graph (actually, their algorithm requires the clique path tree of the input graph; we refer to Section 2 for further details). This implies that the algorithms in [3, 22] can be used to obtain a recognition algorithm for directed path graphs with the same time complexity. At the state of the art, this technique leads to the fastest algorithms.
C
In experiments 1(a) and 1(b), we study how the fraction of pure nodes affects the behaviors of these mixed membership community detection methods under MMSB and DCMM, respectively. We fix $(x,\rho)=(0.4,0.1)$ and let $n_{0}$ range in $\{40,60,80,100,120,140,160\}$. In Experiment 1(a), we generate $\theta$ as $\theta(i)=0.4$ for all $1\leq i\leq n$, that is, under the MMSB model. In Experiment 1(b), we generate $\theta$ as $\theta(i)=0.2+0.8(i/n)^{2}$ for all $1\leq i\leq n$, i.e., under the DCMM model.
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Hamming error rate of Mixed-SLIM decreases as $\rho$ decreases, while the performances of the other three approaches remain unsatisfactory.
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE, while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests that Mixed-SLIM significantly outperforms Mixed-SCORE, OCCAM, and GeoNMF under the DCMM setting. It is interesting to find that only Mixed-SLIM enjoys better performances as the fraction of pure nodes increases under the DCMM setting.
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly, and both approaches perform better than OCCAM and GeoNMF under the MMSB setting. Meanwhile, Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
The numerical results are given in the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
B
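A minimal sketch of the degree parameters used in Experiments 1(a)/1(b) above and of a mixed Hamming error computed up to a label permutation; the exact error definition used in the paper may differ, and the function name and toy membership matrix are illustrative.

```python
import numpy as np
from itertools import permutations

def mixed_hamming_error(Pi_hat, Pi):
    # l1 membership error per node, minimised over community label permutations.
    n, K = Pi.shape
    return min(np.abs(Pi_hat[:, list(p)] - Pi).sum() / n for p in permutations(range(K)))

n, K = 600, 3
theta_mmsb = np.full(n, 0.4)                                 # Experiment 1(a): MMSB degrees
theta_dcmm = 0.2 + 0.8 * (np.arange(1, n + 1) / n) ** 2      # Experiment 1(b): DCMM degrees

Pi = np.eye(K)[np.random.default_rng(0).integers(0, K, n)]   # pure-node memberships, for illustration
print(mixed_hamming_error(Pi[:, [2, 0, 1]], Pi))             # 0.0 once columns are re-permuted
```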
In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle. The variational transport algorithm can be viewed as a forward discretization of the Wasserstein gradient flow (Santambrogio, 2017)
Our Contribution. Our contribution is twofold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation. In each iteration, variational transport first solves the variational problem associated with the objective to obtain an estimator of the Wasserstein gradient and then approximately implements Wasserstein gradient descent by pushing the particles.
Compared with existing methods, variational transport features a unified algorithmic framework that enjoys the following advantages. First, by considering functionals with a variational form, the algorithm can be applied to a broad class of objective functionals.
In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle. The variational transport algorithm can be viewed as a forward discretization of the Wasserstein gradient flow (Santambrogio, 2017)
To showcase these advantages, we consider an instantiation of variational transport where the objective functional $F$ satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with $F$ is solved via kernel methods. In this case, we prove that variational transport generates a sequence of probability distributions that converges linearly to a global minimizer of $F$ up to some statistical error.
B
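A toy illustration of the particle-based forward discretization described above, for the simple potential-energy functional $F(p)=\mathbb{E}_p[V(x)]$ whose Wasserstein gradient direction is $\nabla V$; in variational transport this direction would instead come from solving the dual/variational problem, so the closed-form gradient here is purely an assumption for illustration.

```python
import numpy as np

def grad_V(x):
    # Gradient of the potential V(x) = ||x||^2 / 2; for F(p) = E_p[V(x)] the
    # Wasserstein gradient flow pushes mass along -grad V.
    return x

rng = np.random.default_rng(0)
particles = rng.normal(size=(500, 2))      # particle approximation of the current measure
eta = 0.1                                  # step size of the forward discretization
for _ in range(50):
    particles = particles - eta * grad_V(particles)    # push every particle
print(np.linalg.norm(particles, axis=1).mean())         # mass concentrates near the minimizer 0
```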
For an intersection, the incoming lanes refer to the lanes where the vehicles are about to enter the intersection. In the real world, most intersections are equipped with 4-way entering approaches, but some are 3-way or 5-way intersections. A standard 4-way intersection is shown in Fig. 2, which consists of four approaches, i.e., "east", "south", "west" and "north". Each approach consists of three types of lanes, representing "left-turn", "straight" and "right-turn" directions from inner to outer. The outgoing lanes refer to the lanes where the vehicles are about to leave the intersection. Note that vehicles on the incoming lanes are affected directly by the traffic signal at the current intersection. Therefore, we adopt the traffic information on the incoming lanes as part of the observation, which is the same as most existing works [46, 13, 41, 14].
Observation. Each agent has its own local observation, including the number of vehicles on each incoming lane and the current phase of the intersection, where phase is the part of the signal cycle allocated to any combination of traffic movements, as explained in Section 3.1. The observation of agent $i$ is defined by
where $M$ is the total number of incoming lanes, $\mathcal{V}_{M}$ denotes the number of vehicles in the $M^{th}$ incoming lane, and $\mathtt{p}$ is the current phase, represented as a one-hot vector.
For an intersection, the incoming lanes refer to the lanes where the vehicles are about to enter the intersection. In the real world, most intersections are equipped with 4-way entering approaches, but some are 3-way or 5-way intersections. A standard 4-way intersection is shown in Fig. 2, which consists of four approaches, i.e., "east", "south", "west" and "north". Each approach consists of three types of lanes, representing "left-turn", "straight" and "right-turn" directions from inner to outer. The outgoing lanes refer to the lanes where the vehicles are about to leave the intersection. Note that vehicles on the incoming lanes are affected directly by the traffic signal at the current intersection. Therefore, we adopt the traffic information on the incoming lanes as part of the observation, which is the same as most existing works [46, 13, 41, 14].
Phase is a controller timing unit associated with the control of one or more movements, representing the permutation and combination of different traffic flows. At each phase, vehicles in the specific lanes can continue to drive. The 4-phase setting is the most common configuration in reality, but the number of phases can vary due to different intersection topologies (3-way, 5-way intersections, etc.). Fig. 2 illustrates a standard 4-phase setting: "north-south-straight", "north-south-left", "east-west-straight" and "east-west-left"; "north-south-straight" means that the signal on the corresponding lanes is green. Note that the signal on the right-turn lanes is always green for consistency with the real world.
D
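The local observation described above (vehicle counts on the incoming lanes plus a one-hot phase) can be assembled as in this sketch; the lane ordering, number of phases, and helper name are assumptions.

```python
import numpy as np

def build_observation(vehicle_counts, current_phase, num_phases=4):
    # Local observation of one intersection agent: vehicle counts on each incoming
    # lane concatenated with a one-hot encoding of the current phase.
    phase_onehot = np.eye(num_phases)[current_phase]
    return np.concatenate([np.asarray(vehicle_counts, dtype=float), phase_onehot])

# 12 incoming lanes (4 approaches x 3 lane types), current phase 2 ("east-west-straight")
obs = build_observation([3, 0, 1, 5, 2, 0, 4, 1, 0, 2, 2, 1], current_phase=2)
print(obs.shape)   # (16,)
```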
(partial) Jacobians with respect to $\mathbf{x}$ and $\mathbf{y}$ respectively at $(\mathbf{x}_{0},\mathbf{y}_{0})$.
at a particular parameter value $\mathbf{y}\in\Sigma\subset\mathbb{C}^{n}$ or $\mathbb{R}^{n}$
for a smooth mapping $\mathbf{f}$ from an open domain in $\mathbb{R}^{m}$ or $\mathbb{C}^{m}$
Let $J(\mathbf{x})$ be the Jacobian of a smooth mapping $\mathbf{f}$ at any $\mathbf{x}$ in its open domain $\Omega$ in $\mathbb{C}^{n}$ or $\mathbb{R}^{n}$.
is holomorphic, where the domain $\Omega$ is an open subset of $\mathbb{C}^{n}$ or $\mathbb{R}^{n}$.
C
Last, we show that our algorithms are applicable in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum number of VMs per physical machine, in typical scenarios. Furthermore, we show that our analysis of ProfilePacking has a direct application in the sampling-based setting, in which the algorithm can access a small sample of the input, and the objective is to obtain an online algorithm that performs efficiently as a function of the number of sampled input items. Thus, our online algorithms can also serve as fast approximations to the offline problem, since frequency prediction with bounded error can be attained with a small sample size.
Last, we show that our algorithms are applicable in other settings. Specifically, we show an application of our algorithms in the context of Virtual Machine (VM) placement in large data centers (?): here, we obtain a more refined competitive analysis in terms of the consolidation ratio, which reflects the maximum number of VMs per physical machine, in typical scenarios. Furthermore, we show that our analysis of ProfilePacking has a direct application in the sampling-based setting, in which the algorithm can access a small sample of the input, and the objective is to obtain an online algorithm that performs efficiently as a function of the number of sampled input items. Thus, our online algorithms can also serve as fast approximations to the offline problem, since frequency prediction with bounded error can be attained with a small sample size.
We first present and analyze an algorithm called ProfilePacking, that achieves optimal consistency, and is also efficient if the prediction error is relatively small. The algorithm builds on the concept of a profile set, which serves as an approximation of the items that are expected to appear in the sequence, given the frequency predictions. This is a natural concept that, perhaps surprisingly, has not been exploited in the long history of competitive analysis of bin packing, and which can be readily applicable to other online packing problems, such as multi-dimensional packing (?) and vector packing (?), as we discuss in Section 7.
In terms of analysis techniques, we note that the theoretical analysis of the algorithms we present is specific to the setting at hand and treats items “collectively”. In contrast, almost all known online bin packing algorithms are analyzed using a weighting technique (?), which treats each bin “individually” and independently from the others (by assigning weights to items and independently comparing a bin’s weight in the online algorithm and the optimal offline solution). In terms of the experimental analysis, in our experiments, the prediction error is a natural byproduct of the learning phase, and predictions are obtained by observing a small prefix of the input sequence. This is in contrast to several works in learning-enhanced algorithms, in which a perfect prediction is first generated by a very powerful oracle, then some random error is applied in order to simulate the imperfect prediction.
We give the first theoretical and experimental study of online bin packing with machine-learned predictions. Previous work on this problem has assumed ideal and error-free predictions that must be provided by a very powerful oracle, without any learnability considerations, as we discuss in more detail in Section 1.2. In contrast, our algorithms exploit natural, and PAC-learnable, predictions concerning the frequency at which item sizes occur in the input, and our analysis incorporates the prediction error into the performance guarantee. As in other AI-motivated works on bin packing, namely (?, ?, ?, ?), we assume a discrete model in which item sizes are integers in $[1,k]$ for some constant $k$.
C
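A rough sketch of the profile-set idea described above: predicted frequencies are converted into a profile multiset of m items, which is then packed (here simply with first-fit-decreasing). This is only an approximation of ProfilePacking, which additionally reserves bin space for arriving items according to the packing of the profile set; the frequencies and sizes below are illustrative.

```python
from collections import Counter

def profile_set(freqs, m, k):
    # Profile multiset of m items built from predicted frequencies over sizes 1..k.
    return Counter({size: round(freqs.get(size, 0.0) * m) for size in range(1, k + 1)})

def first_fit_decreasing(items, capacity):
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

freqs = {2: 0.5, 3: 0.3, 7: 0.2}            # predicted frequency of each item size
profile = profile_set(freqs, m=20, k=10)    # expected composition of the next 20 items
packing = first_fit_decreasing(list(profile.elements()), capacity=10)
print(packing)
```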
For the point cloud representation, the crucial step is to define a reconstruction loss that can be used in the autoencoding framework. In the literature, two distance measures are commonly applied: Earth Mover's (Wasserstein) Distance (Rubner et al., 2000) and Chamfer pseudo-distance (Tran, 2013).
In the literature, there exists a huge variety of 3D shape reconstruction models. The most popular ones are dense, pixel-wise depth maps, or normal maps (Eigen et al., 2014; Bansal et al., 2016; Bednarik et al., 2018; Tsoli et al., 2019; Zeng et al., 2019), point clouds (Fan et al., 2017; Qi et al., 2017b; Yang et al., 2018b), meshes (Wang et al., 2018; Gundogdu et al., 2019; Yao et al., 2020; Yifan et al., 2020), implicit functions (Chen & Zhang, 2019; Mescheder et al., 2019; Park et al., 2019; Xu et al., 2019; Atzmon & Lipman, 2020), voxels (Choy et al., 2016; Häne et al., 2017), shape primitives (Chen et al., 2020b; Deng et al., 2020a; Smirnov et al., 2020; Paschalidou et al., 2020), parametric mappings (Yang et al., 2018b; Groueix et al., 2018; Williams et al., 2019; Deprelle et al., 2019; Bednarik et al., 2020) or combinations of some of these (Muralikrishnan et al., 2019; Poursaeed et al., 2020). All of the above representations have their pros and cons based on memory requirements and surface fitting precision.
Therefore, we use a hypernetwork that produces parameters of a small neural network that performs 1, and the conditioning of that neural network with a point $p$ to realize 2. The transformation $\phi$ is a fully connected network and is formulated as:
The transformation $\phi$ is modeled as a target network, represented as an MLP with weights $W_{\phi}$ produced by the hypernetwork $T_{\phi}$. Therefore, we can create an individual $\phi$ function for each of the 3D shapes and significantly reduce the number of parameters of the function by eliminating the need to share the parameters among the shapes. For $T_{\phi}$ we use an architecture analogous to $T$, but we train it with a different cost function. The new target network does not directly transform the uniform distribution on $U$ but uses conditioning as follows.
Hypernetworks (Ha et al., 2016) are defined as neural models that generate weights for a separate target network solving a specific task. The authors aim to reduce the number of trainable parameters by designing a hypernetwork with fewer parameters than the original network. Making an analogy between hypernetworks and generative models, Sheikh et al. (2017) use that
D
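A minimal PyTorch sketch of a hypernetwork that emits the weights of a small target MLP $\phi$, which is then applied point-wise to a cloud of points; layer sizes and names are illustrative assumptions, not the paper's architecture.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    # Maps a shape embedding z to the flattened weights of a small target MLP phi.
    def __init__(self, z_dim=128, in_dim=3, hidden=64, out_dim=3):
        super().__init__()
        self.shapes = [(hidden, in_dim), (hidden,), (out_dim, hidden), (out_dim,)]
        n_params = sum(math.prod(s) for s in self.shapes)
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, n_params))

    def forward(self, z):
        flat, weights, i = self.net(z), [], 0
        for s in self.shapes:
            n = math.prod(s)
            weights.append(flat[i:i + n].view(s))
            i += n
        return weights                                     # [W1, b1, W2, b2]

def target_phi(points, weights):
    # Target network phi applied point-wise; its weights come from the hypernetwork.
    W1, b1, W2, b2 = weights
    return F.linear(F.relu(F.linear(points, W1, b1)), W2, b2)

z = torch.randn(128)                                       # embedding of one 3D shape
phi_weights = HyperNet()(z)
cloud = torch.rand(1024, 3)                                # e.g. points sampled from the prior U
out = target_phi(cloud, phi_weights)                       # (1024, 3) transformed points
```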
\[
\max_{\substack{y\in\bar{\mathcal{Y}},\\ \mathbf{q}\in\mathcal{Q}}}\ \frac{1}{m}\sum_{i=1}^{m}f_{i}\bigl(\widehat{x}_{av}^{N},\widehat{p}_{i}^{N},y,q_{i}\bigr)\;-\;\min_{\substack{x\in\bar{\mathcal{X}},\\ \mathbf{p}\in\mathcal{P}}}\ \frac{1}{m}\sum_{i=1}^{m}f_{i}\bigl(x,p_{i},\widehat{y}_{av}^{N},\widehat{q}_{i}^{N}\bigr)\;\leq\;\varepsilon.
\]
This fact leads to the main idea of the proof. At the initial moment of time $T=0$, all coordinates in the global output are zero, since the starting points $x_{0},y_{0}$ are equal to $0$. Using only local iterations (at least 2), we can achieve that for the nodes $B_{\rho}$ only the first coordinates of $x$ and $y$ can be non-zero, while the remaining coordinates are strictly zero. For the rest of the nodes, all coordinates remain strictly zero. Without communications, the situation does not change. Therefore, we need to make at least $\rho$ communications in order to have non-zero first coordinates in some node from $B$ (transfer of information from $B_{\rho}$ to $B$). Using (77), by local iterations (at least 1) at a node of the set $B$, one can achieve non-zero first and second coordinates. Next, the process continues with respect to (77).
The main idea is to use reformulation (54) and apply the mirror prox algorithm [45] for its solution. This requires careful analysis in two aspects. First, the Lagrange multipliers $\mathbf{z},\mathbf{s}$ are not constrained, while the convergence rate result for the classical Mirror-Prox algorithm [45] is proved for problems on compact sets. Second, we need to show that the updates can be organized via only local communications between the nodes in the network.
If $B_{\rho}\neq\varnothing$, in the global output of any procedure that satisfies Assumption 4.1, after $T$ units of time only the first $k=\left\lfloor\frac{T-2t}{t+\rho\tau}\right\rfloor+2$ coordinates can be non-zero; the rest of the $d-k$ coordinates are strictly equal to zero.
To describe this class of first-order methods, we use a similar definition of a Black-Box procedure as in [51]. We assume that one local iteration costs $t$ time units, and a communication round costs $\tau$ time units. Additionally, information can be transmitted only along the undirected edges of the network. Communications and local updates can take place in parallel and asynchronously. More formally, it can be described as follows.
D
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is called a cycle basis, and its dimension is the cyclomatic number $\nu=|E|-|V|+|CC|$, where $E$, $V$ and $CC$ are the sets of edges, vertices and connected components of the graph, respectively. Given a cycle basis $B$ we can define its cycle matrix $\Gamma\in K^{|E|\times\nu}$, where $K$ is the scalar field (i.e., $\mathbb{Z}_{2}$ or $\mathbb{Q}$), as the matrix that has the cycles of $B$ as columns.
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class.
In the case that we can find some non-star spanning tree $T$ of $G$ such that $\cap(T)<\cap(T_{s})$, then we can “simplify” the instance by removing the interbranch cycle-edges with respect to $T$ in $G$ without affecting the inequality (see Lemma 18).
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the strictly fundamental class context. In more concrete terms this problem is equivalent to finding the cycle basis with the sparsest cycle matrix. In [5] a unified perspective of the problem is presented. The authors show that the MCB problem is different in nature for each class. For example in [10] a remarkable reduction is constructed to prove that the MCB problem is NP-hard for the strictly fundamental class, while in [11] a polynomial time algorithm is given to solve the problem for the undirected class. Some applications of the MCB problem are described in [5, 11, 10, 12].
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6].
A
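The quantities defined in the excerpt above (the cyclomatic number $\nu$ and a cycle matrix $\Gamma$ over $\mathbb{Z}_2$) can be computed for a small example with networkx; the example graph and the (not necessarily minimum) basis returned by `cycle_basis` are purely illustrative.

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()                       # example undirected graph
V, E = G.number_of_nodes(), G.number_of_edges()
CC = nx.number_connected_components(G)
nu = E - V + CC                               # cyclomatic number = dimension of the cycle space

basis = nx.cycle_basis(G)                     # one (not necessarily minimum) cycle basis
assert len(basis) == nu

edge_index = {frozenset(e): i for i, e in enumerate(G.edges())}
Gamma = np.zeros((E, nu), dtype=int)          # cycle matrix over Z_2: cycles as columns
for j, cycle in enumerate(basis):
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        Gamma[edge_index[frozenset((u, v))], j] = 1
print(nu, Gamma.sum(axis=0))                  # lengths of the basis cycles
```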
$N=N(b,k,m,\ell)$ such that for every $n\geq N$ and any group homomorphism $h:C_{k}(G[n]^{m})\to(\mathbb{Z}_{2})^{b}$, there exists a subgrid $\gamma$ of size $\ell$ in $G[N]^{m}$ that lies in the kernel of $h$.
1111111111111111001111001111111100111111110011110000111111110011110011110000111100111100001111111111111111001111001111000000001111111111110000000000001111111111111111001111111111110011110011111111001111111100000000000000000011111111111111110000000011110000111111111111001111001111111100001111000000000011111111111100111111110000111100
Two central problems in this line of research are to identify the weakest possible assumptions under which the classical theorems generalize, and to determine their key parameters, for instance the Helly number ($d+1$ for convex sets in $\mathbb{R}^{d}$) or the range for which the $(p,q)$-theorem holds (every $p\geq q\geq d+1$ for convex sets in $\mathbb{R}^{d}$).
In this respect, the case of convex lattice sets, that is, sets of the form $C\cap\mathbb{Z}^{d}$ where $C$ is a convex set in $\mathbb{R}^{d}$, showcases an interesting phenomenon: the Helly number is $2^{d}$ [14, 36], an exponential dependency on the dimension that contributes to the computational intractability of integer programming [12, §6], but a $(p,q)$-theorem holds for every $p\geq q\geq d+1$ [7]; in the words of Bárány and Matoušek [7, §1], “… this large Helly number can be regarded as a ‘local anomaly’ and that the relevant number for other, more global Helly-type properties is only $d+1$.”
In this paper we are concerned with generalizations of Helly's theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated $(p,q)$-theorem [3], which asserts that for a finite family of convex sets in $\mathbb{R}^{d}$, if the intersecting subfamilies are evenly distributed, in the sense that among every $p$ members some $q$ intersect, then a constant number of points suffices to intersect all the convex sets. The crucial part here is that the constant depends only on $p$, $q$ and $d$, but not on the size of the family.
A
Fig. 3(b) is a table heatmap view with five automatic feature selection techniques, their Average contribution, and an # Action # button to exclude any number of features. As we originally train our ML algorithm with all features, the yellow color (one of the standard colors used for highlighting [77]) in the last column symbolizes that all features are included in the current phase (if excluded, then B/W stripe patterns appear). The first technique is Univariate FS (Feature Selection) [78], which uses the ANOVA $F$-value test for selecting the $k$ best features. The value $k$ is always set to the maximum in order to retrieve scores for all features, since we want to avoid removing features automatically (instead, we visualize the scores for the user to decide). The Impurity-based FI (Feature Importance) method [79] is connected to the intrinsic ability of ensemble algorithms to export feature importance scores after their training. Hence, we extract the feature importance scores from the best model we found. Permutation FI [80] is another technique in which the decrease in a model's score is monitored while a single feature value is randomly shuffled [81]. The last two techniques are rather similar, with one difference: the former method can be biased toward high-cardinality features (many unique values) over low-cardinality features such as binary features. In Accuracy-based FI [82], we also fit the ML model using one feature at a time and compute the accuracy to evaluate every feature's performance.
Figure 1: Selecting important features, transforming them, and generating new features with FeatureEnVi: (a) the horizontal beeswarm plot for manually slicing the data space (which is sorted by predicted probabilities) and continuously checking the migration of data instances throughout the process; (b) the table heatmap view for the selection of features according to feature importances calculated from automatic techniques; (c) the radial tree providing an overview of the features with statistical measures for the different groups of instances, as set by the user-defined data slices; (d) the graph visualization for the detailed exploration of features, their transformation, and comparison between two or three features for feature generation purposes; and (e) the punchcard for tracking the steps of the process and the grouped bar chart for comparing the current vs. the best predictive performance based on three validation metrics.
Fig. 3(b) is a table heatmap view with five automatic feature selection techniques, their Average contribution, and an # Action # button to exclude any number of features. As we originally train our ML algorithm with all features, the yellow color (one of the standard colors used for highlighting [77]) in the last column symbolizes that all features are included in the current phase (if excluded, then B/W stripe patterns appear). The first technique is Univariate FS (Feature Selection) [78], which uses the ANOVA $F$-value test for selecting the $k$ best features. The value $k$ is always set to the maximum in order to retrieve scores for all features, since we want to avoid removing features automatically (instead, we visualize the scores for the user to decide). The Impurity-based FI (Feature Importance) method [79] is connected to the intrinsic ability of ensemble algorithms to export feature importance scores after their training. Hence, we extract the feature importance scores from the best model we found. Permutation FI [80] is another technique in which the decrease in a model's score is monitored while a single feature value is randomly shuffled [81]. The last two techniques are rather similar, with one difference: the former method can be biased toward high-cardinality features (many unique values) over low-cardinality features such as binary features. In Accuracy-based FI [82], we also fit the ML model using one feature at a time and compute the accuracy to evaluate every feature's performance.
Next, as XGBoost [29] is a nonlinear ML algorithm, we also train a linear classifier (a logistic regression [83] model with Scikit-learn's default hyperparameters [84]) to compute the coefficient matrix, and then use Recursive Feature Elimination (RFE) [40] to rank the features from best to worst in terms of contribution. This technique is referred to as Ranking-based FS [85] in our VA system. We would like to include further techniques in the future; however, the current selection is specifically assembled to contain one candidate for each of the high-level categories of feature selection methods introduced in Section 1. For every method, we normalize the output from 0 to 1 to set a common ground for the user to compare them, as indicated in the legend of Fig. 1(b). Hence, their average is calculated and displayed in the penultimate column. Following the design guidelines from the conventions introduced by prior works [86, 87], we choose red and green colors for the table heatmap. This view also automatically extends for the newly-generated features from combinations of already existing features (cf. Fig. 1(b)). The original features used for the creation of new features are depicted in dark gray in the last column of the table heatmap view. The table is automatically sorted based on the average; however, Impurity-based FI is selected by the user in the Fig. 1(b) scenario. Due to this selection, the table heatmap re-sorts the features from the highest to the lowest importance only according to the XGBoost model's inherent feature importance. More details can be found in Section 4.4.
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train XGBoost [29] on the original pool of features, and then divide the data space into four groups automatically (i.e., Worst, Bad, Good, and Best). The vertical black line is the stable threshold anchored precisely at 50% predictive probability, which separates the correctly from the wrongly classified instances. The other two thresholds partition the prior subspace into their half areas for the default option. However, the user can alter the vertical gray lines as indicated in Fig. 5(a.1–a.4), with a degree of freedom set to ±20% from the defaults. The vertical positioning of the instances is purely used to avoid—as much as possible—overlapping/cluttering issues via jittering. The data space will always be divided into four parts conveying extra information to the user. If no instances belong to a slice of the data space, the system works normally, but there will be no values for the statistical measures (see Section 4.3). Overall, the user's goal is to move as many instances as possible from the left side (Worst and Bad subspaces) to the right side (Good and Best subspaces) while avoiding the opposite. Nevertheless, the primary purpose of this view is to provide better local and global explainability of the impact of features according to the user-defined slices. In the future, we plan to enable users to choose the number of slices themselves (cf. Section 7.1).
C
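The automatic scores listed above (Univariate FS, Impurity-based FI, Permutation FI, Ranking-based FS) can be reproduced in spirit with scikit-learn; in this sketch a random forest stands in for the XGBoost model, and the dataset, hyperparameters, and normalization are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import minmax_scale

X, y = load_breast_cancer(return_X_y=True)

# Univariate FS: ANOVA F-values for every feature (k="all" keeps them all).
univariate = SelectKBest(f_classif, k="all").fit(X, y).scores_

# Impurity-based FI from a trained ensemble model (stand-in for the best model found).
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
impurity = forest.feature_importances_

# Permutation FI: drop in score when a single feature is shuffled.
permutation = permutation_importance(forest, X, y, n_repeats=5, random_state=0).importances_mean

# Ranking-based FS: RFE over a linear model's coefficients (rank 1 = best, so negate it).
ranking = RFE(LogisticRegression(max_iter=5000)).fit(X, y).ranking_

# Normalise each score vector to [0, 1] and average them, as in the table heatmap view.
scores = np.vstack([minmax_scale(s.astype(float)) for s in (univariate, impurity, permutation, -ranking)])
average_contribution = scores.mean(axis=0)
```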
$[x_{\text{ref}},\dot{x}_{\text{ref}}]^{\mathsf{T}}$ and $[y_{\text{ref}},\dot{y}_{\text{ref}}]^{\mathsf{T}}$, and $\zeta^{(x)}_{k}$ and $\zeta^{(y)}_{k}$ respectively denote the state vectors in the corresponding discrete-time dynamics.
To explore these trade-offs, we formulate a high-level optimization problem with a cost function and constraints defined on the entire position and velocity trajectory, which indicate respectively the overall performance of the control scheme and the operation limits.
which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combination of the identified system model with the contouring terms. In our approach the tracking error is coupled with the progression along the path through the cost function. The automated tuning of the parameters is performed using a cost that accounts for the global performance over the whole trajectory. Additional constraints in the Bayesian optimization algorithm allow for balancing traversal time, accuracy, and minimization of oscillations, according to the specific crucial requirements of the application. We demonstrate enhanced performance in simulation for a 2-axis gantry, for geometries of different nature.
This paper demonstrated a hierarchical contour control implementation for the increase of productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive number of iterations. We propose a sample-efficient joint tuning algorithm, where the performance metrics associated with the full geometry traversal are modelled as Gaussian processes, and used to form the cost and the constraints in a constrained Bayesian optimization algorithm, where they enable the trade-off between fast traversal, high tracking accuracy, and suppression of vibrations in the system. Data-driven tuning of all the parameters compensates for model imperfections and results in improved performance. Our numerical results demonstrate that tuning the parameters of the MPCC stage achieves the best performance in terms of time and tracking accuracy.
Model predictive contouring control (MPCC) is a control scheme based on the minimisation of a cost function which trades off the competing objectives of tracking accuracy and traversal time by adjusting the corresponding weights in the cost function. We now introduce the main ingredients of an MPCC formulation.
D
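A single stage of a contouring cost of the kind described above, trading contour and lag error against progress along the path, might look like the following sketch; the weights, helper name, and linearised error decomposition are assumptions rather than the paper's formulation.

```python
import numpy as np

def mpcc_stage_cost(pos, p_ref, tangent, progress, q_c=100.0, q_l=100.0, rho=1.0):
    # One stage of a contouring cost: penalise the contour error (orthogonal to the
    # path tangent) and the lag error (along it), and reward progress along the path.
    t = np.asarray(tangent, dtype=float)
    t = t / np.linalg.norm(t)
    err = np.asarray(pos, dtype=float) - np.asarray(p_ref, dtype=float)
    e_lag = float(t @ err)
    e_con = float(np.linalg.norm(err - e_lag * t))
    return q_c * e_con ** 2 + q_l * e_lag ** 2 - rho * progress

print(mpcc_stage_cost(pos=[1.02, 0.01], p_ref=[1.0, 0.0], tangent=[1.0, 0.0], progress=0.5))
```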
Group Upweighting (Up Wt) [55] attempts to mitigate the correlations between $y$ and $b_{expl.}$ by upweighting the minority patterns. Specifically, each sample $(x,y)$ is assigned to a group $g=(y,b_{1},b_{2},\ldots,b_{E})$, where $E$ is the total number of variables contained in $b_{expl.}$, and the loss is scaled by $\frac{1}{N_{g}}$, where $N_{g}$ is the number of instances in group $g$. Up Wt requires the models to be sufficiently regularized, i.e., be trained with low learning rates and/or high weight decays, to be robust to the minority groups.
Hyperparameters for each method were chosen using a grid search with unbiased accuracy on each dataset's validation set. To make this tractable, we first ran a grid search for the learning rate over $\{10^{-3},10^{-4},10^{-5}\}$ and weight decay over $\{0.1,10^{-3},10^{-5},0\}$. After the best values were chosen, we searched for method-specific hyperparameters. Due to the size of GQA-OOD, hyperparameter search was performed by training on only 10% of instances, and then the best selected hyperparameters were used with the full training dataset. The exact values for the hyperparameters are specified in the Appendix.
Distributionally Robust Optimization (DRO): DRO [22] minimizes the worst-case expected loss over potential test distributions. Often, such distributions are approximated by sampling from a uniform divergence ball around the train distribution [10, 23, 47]. However, this lacks structured priors about the potential shifts, and instead hurts generalization [32].
Group DRO (GDRO) [55] provides DRO with the necessary prior that it must generalize to all groups. Similar to Up Wt, GDRO also uses $y$ and $b_{expl.}$ to create groups and has been shown to work well with sufficiently regularized models. However, unlike Up Wt, it performs weighted sampling from each group and has an optimization procedure to minimize the loss over the worst-case group.
Assuming access to the test distribution for model selection is unrealistic and can result in models being right for the wrong reasons [64]. Rather, it is ideal if the methods can generalize without being tuned on the test distribution and we study this ability by comparing models selected through varying tuning distributions. To control the tuning distribution, we define a generalization of the mean per group accuracy (MPG) metric, that can interpolate within as well as extrapolate beyond the train and test distributions:
B
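The $1/N_g$ scaling described for Up Wt above can be sketched as a weighted loss; the group encoding and the final reduction (mean over weighted per-sample losses) are reasonable but assumed choices, since normalization details vary across implementations.

```python
import torch
import torch.nn.functional as F
from collections import Counter

def group_upweighted_loss(logits, y, groups):
    # `groups[i]` is the group id of sample i, e.g. the tuple (y_i, b_1, ..., b_E).
    counts = Counter(groups)
    weights = torch.tensor([1.0 / counts[g] for g in groups], dtype=logits.dtype)
    per_sample = F.cross_entropy(logits, y, reduction="none")
    return (weights * per_sample).mean()

logits = torch.randn(8, 2)
y = torch.tensor([0, 0, 0, 0, 1, 1, 1, 0])
groups = [(int(t), b) for t, b in zip(y, [0, 0, 0, 1, 1, 1, 0, 0])]   # (label, bias variable)
print(group_upweighted_loss(logits, y, groups))
```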
Feature extraction plays a crucial role in most learning-based tasks. It is challenging to effectively extract features from complex eye appearance due to identity, illumination, etc. The quality of the extracted features determines the gaze estimation accuracy. In this section, we summarize feature extraction mechanisms according to the types of input into the deep neural network, including eye images, face images and videos.
Figure 2: From intrusive skin electrodes [16] to off-the-shelf web cameras [17], gaze estimation has become more flexible. Gaze estimation methods are also updated with the change of devices. We illustrate five kinds of gaze estimation methods. (1) Attached sensor-based methods. The method samples the electrical signal of skin electrodes. The signal indicates the eye movement of subjects [18]. (2) 3D eye model recovery methods. The method usually builds a geometric eye model to calculate the visual axis, i.e., gaze directions. The eye model is fitted based on the light reflection. (3) 2D eye feature regression methods. The method relies on IR cameras to detect geometric eye features such as pupil center and glints, and directly regresses the PoG from these features. (4) Conventional appearance-based methods. The method uses entire images as features and directly regresses human gaze from them. Some feature reduction methods are also used for extracting low-dimensional features. For example, Lu et al. divide eye images into 15 subregions and sum the pixel intensities in each subregion as features [19]. (5) Appearance-based gaze estimation with deep learning, which is the recent hotspot. Face or eye images are directly input into a designed neural network to learn latent feature representations, and human gaze is regressed from the feature representation.
Human gaze has a strong correlation with eye appearance. Even a minor perturbation in gaze direction can result in noticeable changes in eye appearance. For instance, when the eyeball rotates, the position of the iris and the shape of the eyelid undergo alterations, leading to corresponding changes in gaze direction. This relationship between gaze and eye appearance enables the gaze estimation based on the visual feature of eyes. Conventional methods typically estimate gaze using high-dimensional raw image features [21, 51]. These features are obtained by raster scanning all the pixels in eye images, resulting in a representation that contains a significant amount of redundancy. Moreover, these features are highly sensitive to environmental changes, which can pose challenges in achieving accurate gaze estimation.
Recently, deep learning-based methods have gained popularity as they offer several advantages over conventional appearance-based methods. These methods use convolution layers or transformers [22] to automatically extract high-level gaze features from images. Deep learning models are also highly non-linear and can fit the mapping function from eye appearance to gaze direction even with large head motion. These advantages make deep learning-based methods more accurate and robust than conventional methods. Deep learning-based methods also improve cross-subject gaze estimation performance significantly, reducing the need for time-consuming person calibration. These improvements expand the application range of appearance-based gaze estimation.
Recasens et al. present an approach for following gaze in video by predicting where a person (in the video) is looking, even when the object is in a different frame [124]. They build a CNN to predict the gaze location in each frame and the probability that each frame contains the gazed object. Also, visual saliency shows a strong correlation with human gaze in scene images [125, 126]. In [127], they estimate the general visual attention and humans' gaze directions in images at the same time. Kellnhofer et al. propose a temporal 3D gaze network [43]. They use a bi-LSTM [128] to process a sequence of 7 frames to estimate not only gaze directions but also gaze uncertainty.
B
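The conventional appearance-based pipeline described above is easy to prototype. Below is a minimal sketch of the subregion intensity-sum feature attributed to Lu et al. [19]; the 3×5 grid layout and the image size are illustrative assumptions, not details taken from the source.

```python
import numpy as np

def subregion_intensity_feature(eye_image, grid=(3, 5)):
    """Split a grayscale eye crop into grid cells (3x5 = 15 here) and sum the
    pixel intensities in each cell, giving a low-dimensional appearance feature."""
    h, w = eye_image.shape
    rows, cols = grid
    feat = []
    for r in range(rows):
        for c in range(cols):
            cell = eye_image[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            feat.append(cell.sum())
    return np.asarray(feat, dtype=float)

feature = subregion_intensity_feature(np.random.rand(36, 60))  # dummy 36x60 eye crop
```

Such a 15-dimensional vector would then be fed to a regressor mapping appearance to gaze, in contrast to the deep models that learn the feature extraction end to end.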
Table 1 reports the classification rates on the RMFRD dataset using four different codebook sizes (i.e. numbers of codewords in the RBF layer: 50, 60, 70, and 100 term vectors per image). The best recognition rate, 91.3%, is obtained using the third FMs of the last convolutional layer of VGG-16 with 60 codewords. The second FMs achieved 90.8% with 50 codewords and outperformed the first FMs over all four codeword sizes. AlexNet, on the other hand, reached 86.6% with 100 codewords, while the best recognition rate achieved by ResNet-50 was 89.5% with 70 codewords. In this experiment, it is clear that VGG-16 outperformed the AlexNet and ResNet-50 models.
Table 1 reports the classification rates on the RMFRD dataset using four different codebook sizes (i.e. numbers of codewords in the RBF layer: 50, 60, 70, and 100 term vectors per image). The best recognition rate, 91.3%, is obtained using the third FMs of the last convolutional layer of VGG-16 with 60 codewords. The second FMs achieved 90.8% with 50 codewords and outperformed the first FMs over all four codeword sizes. AlexNet, on the other hand, reached 86.6% with 100 codewords, while the best recognition rate achieved by ResNet-50 was 89.5% with 70 codewords. In this experiment, it is clear that VGG-16 outperformed the AlexNet and ResNet-50 models.
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. This behavior can be explained by the fact that VGG-16 features fail to ensure a high discriminative power compared to the DRF features, which remain relatively steady compared to their results on the real masked faces. Among other state-of-the-art recognizers, one applied the same pre-trained models with a different strategy. The proposed method outperformed the TL-based method using the same pre-trained models. This performance is explained by the fact that the fc layers of the pre-trained models yield more dataset-specific features (generally pre-trained on the ImageNet dataset, which is a very different dataset), so this strategy is not always suitable for our task. Moreover, the proposed method outperformed previous methods in terms of training time. The achieved performance further confirms that the BoF paradigm is a lightweight representation that further reinforces the high discriminative power of the deep features fed to a machine learning-based classifier.
Another efficient face recognition method using the same pre-trained models (AlexNet and ResNet-50) is proposed in [almabdy2019deep] and achieved a high recognition rate on various datasets. Nevertheless, the pre-trained models are employed in a different manner: a TL technique is applied to fine-tune the pre-trained models to the problem of masked face recognition using an SVM classifier. We tested this strategy on the masked faces. The results in Table 3 further demonstrate the efficiency of the BoF paradigm compared to directly using a machine learning-based classifier.
Table 2 reports the classification rates on the SMFRD dataset. The highest recognition rate, 88.9%, is achieved by ResNet-50 through the quantization of DRF features. This performance is obtained using 70 codewords that feed an MLP classifier. The AlexNet model achieved good recognition rates compared to the VGG-16 model (86.0% vs. 85.6% as the highest rates).
D
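The BoF step evaluated above (quantizing deep feature maps against a codebook of codewords before an MLP classifier) can be sketched as follows. The feature-map size, the RBF width, and the use of k-means to build the codebook are illustrative assumptions rather than the exact settings of the cited method.

```python
import numpy as np
from sklearn.cluster import KMeans

def bof_histogram(feature_map, codebook, gamma=0.1):
    """Soft-quantize a conv feature map (H, W, C) against RBF codewords and average,
    producing a fixed-length bag-of-deep-features vector."""
    descs = feature_map.reshape(-1, feature_map.shape[-1])          # one descriptor per spatial location
    d2 = ((descs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # squared distances to codewords
    sim = np.exp(-gamma * d2)                                       # RBF similarities
    sim /= sim.sum(axis=1, keepdims=True)                           # soft assignment per descriptor
    return sim.mean(axis=0)                                         # histogram over codewords

# Hypothetical sizes: a 14x14x512 feature map and 60 codewords learned by k-means.
fmap = np.random.rand(14, 14, 512)
codebook = KMeans(n_clusters=60, n_init=4, random_state=0).fit(fmap.reshape(-1, 512)).cluster_centers_
hist = bof_histogram(fmap, codebook)   # 60-dimensional input to the MLP classifier
```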
We define $\text{SAX}^{\infty}$, which extends the semi-axiomatic sequent calculus (SAX) [DPP20] with arithmetic refinements, recursion, and infinitely deep typing derivations (Section 2). Then, we define an auxiliary type system called $\text{SAX}^{\omega}$ which has infinitely wide but finitely deep derivations to which we translate the derivations of $\text{SAX}^{\infty}$ (Section 3). Then, we show that all $\text{SAX}^{\omega}$-typed programs terminate by a novel logical relations argument over configurations of processes that capture the state of a concurrent computation (Section 4).
We define $\text{SAX}^{\infty}$, which extends the semi-axiomatic sequent calculus (SAX) [DPP20] with arithmetic refinements, recursion, and infinitely deep typing derivations (Section 2). Then, we define an auxiliary type system called $\text{SAX}^{\omega}$ which has infinitely wide but finitely deep derivations to which we translate the derivations of $\text{SAX}^{\infty}$ (Section 3). Then, we show that all $\text{SAX}^{\omega}$-typed programs terminate by a novel logical relations argument over configurations of processes that capture the state of a concurrent computation (Section 4).
As we mentioned in the introduction, we can make the $\text{SAX}^{\infty}$ judgment arbitrarily rich to support more complex patterns of recursion. As long as derivations in that system can be translated to $\text{SAX}^{\omega}$, the logical relations argument over $\text{SAX}^{\omega}$ typing that we detail in Section 4 does not change. For example, consider the following additions.
In this section, we extend SAX [DPP20] with recursion and arithmetic refinements in the style of Das and Pfenning [DP20b]. SAX is a logic-based formalism and subsuming paradigm [Lev04] for concurrent functional programming that conceives call-by-need and call-by-value strategies as particular concurrent schedules [PP20]. Concurrency and parallelism devices like fork/join, futures [Hal85], and SILL-style [TCP13] monadic concurrency can all be encoded and used side-by-side in SAX [PP20].
Most importantly, the call rule does not refer to a coinductively-defined auxiliary judgment, because in the absence of free arithmetic variables, the tracked size arguments decrease from some $\overline{n}$ to $\overline{n'}$ and so on. Since the lexicographic order on fixed-length natural number vectors is well-founded, this sequence necessarily terminates. To rephrase: the exact number of recursive calls is known. While this system is impractical for type checking, we can translate arithmetically closed $\text{SAX}^{\infty}$ derivations to $\text{SAX}^{\omega}$ derivations. In fact, any $\text{SAX}^{\infty}$ derivation can be made arithmetically closed by substituting each of its free arithmetic variables for numbers that validate (and therefore discharge) its constraints. By trading infinitely deep derivations for infinitely wide but finitely deep ones, we may complete a logical relations argument by induction over a $\text{SAX}^{\omega}$ derivation. Thus, let us examine the translation theorem.
C
Rial et al. [13] proposed a provably secure anonymous AFP scheme based on the ideal-world/real-world paradigm. Poh et al. [25] designed an innovative user-side AFP scheme based on the symmetric Chameleon encryption technique, which achieves significant gains in owner-side computing and communication efficiency.
Afterwards, Bianchi et al. [10] proposed a LUT-based AFP scheme without involving a Trusted Third Party (TTP) based on homomorphic encryption, which also implements AFP within the user-side framework. Despite the fact that Problems 2 and 3 are solved in these works, Problem 1 is not mentioned.
In this paper, facing these problems and challenges, we set out to solve them. First, to achieve data protection and access control, we adopt the lifted-ElGamal based PRE scheme, as discussed in [16, 17, 18, 19, 20], whose most prominent characteristic is that it satisfies the property of additive homomorphism. Then this homomorphism property is fully exploited to facilitate the integration with the Look-Up Table (LUT) based AFP scheme put forward by Bianchi et al. [10]. In this way, the cloud is successfully introduced to participate in the AFP solution, and the combination of the two technologies provides an approach to solve Problems 1, 2, and 3 simultaneously.
Thirdly, there are also studies that deal with both privacy-protected access control and traitor tracing. Xia et al. [26] introduced the watermarking technique to privacy-protected content-based ciphertext image retrieval in the cloud, which can prevent the user from illegally distributing the retrieved images. However, the fairness of the traitor tracing is only realized by the involvement of a TTP in the scheme. Moreover, the encryption of image features in the scheme is not IND-CPA secure. Zheng et al. [27] aimed to achieve differential access control and access history hiding on the cloud while enabling fair redistribution tracing by embedding watermarks homomorphically. However, the computing overhead on the cloud side would be onerous due to the need to perform re-encryption operations and homomorphic operations on the media content. Additionally, a TTP is still required to generate and encrypt watermarks for every user. Frattolillo et al. [28] proposed a multi-party watermarking scheme for the cloud environment, which is able to solve the aforementioned three problems simultaneously. However, IND-CPA security is not satisfied in the scheme due to the adoption of a commutative cryptosystem. Zhang et al. [3] combined PRE and fair watermarking to realize privacy-protected access control and combat content redistribution in the cloud, which also solves all three problems successfully. For one thing, compared with the first scheme of Zhang et al., neither of our schemes requires the participation of a TTP. For another, compared with the second scheme of Zhang et al., which does not require a TTP, in our proposed scheme FairCMS-I the cloud only needs to perform homomorphic operations and re-encryption operations on the encrypted LUT and fingerprint instead of the encrypted media content. As LUTs and fingerprints are far shorter than the media content itself, FairCMS-I consumes far fewer cloud resources than [3] (the cloud-side overhead of the two schemes in [3] is the same). Furthermore, in the second scheme of Zhang et al., the user can escape traceability by generating two different fingerprints (we discuss this in detail in the third-to-last paragraph of Section V-A), and both FairCMS-I and FairCMS-II solve this problem.
Rial et al. [13] proposed a provably secure anonymous AFP scheme based on the ideal-world/real-world paradigm. Poh et al. [25] designed an innovative user-side AFP scheme based on the symmetric Chameleon encryption technique, which achieves significant gains in owner-side computing and communication efficiency.
A
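The additive homomorphism of lifted ElGamal, which the works above exploit so that the cloud can combine encrypted LUTs and fingerprints, can be illustrated with a toy sketch. The parameters (p = 23, g = 5) are deliberately tiny and insecure, and the brute-force discrete log in decryption only works for small plaintexts such as fingerprint bits; this is a didactic sketch, not the schemes' actual instantiation.

```python
import random

p, g = 23, 5   # toy group: 5 is a primitive root modulo 23 (NOT secure parameters)

def keygen():
    x = random.randrange(1, p - 1)
    return x, pow(g, x, p)                 # secret key x, public key h = g^x

def enc(pk, m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p   # (g^r, g^m * h^r)

def add(c1, c2):
    # Component-wise product of ciphertexts encrypts the sum of the plaintexts.
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def dec(sk, c):
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p   # recover g^m = c2 / c1^x
    for m in range(p):                           # brute-force discrete log (small m only)
        if pow(g, m, p) == gm:
            return m

sk, pk = keygen()
assert dec(sk, add(enc(pk, 1), enc(pk, 1))) == 2   # Enc(1) * Enc(1) decrypts to 1 + 1
```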
This section presents an empirical investigation of the performance of GraphFM on two CTR benchmark datasets and a recommender system dataset. The experimental settings are described, followed by comparisons with other state-of-the-art methods. An ablation study is also conducted to verify the importance of each component of the model and evaluate its performance under different hyperparameter settings. Finally, the question of whether GraphFM can provide interpretable explanations for its predictions is examined.
(2) By treating features as nodes and their pairwise feature interactions as edges, we bridge the gap between GNN and FM, and make it feasible to leverage the strength of GNN to solve the problem of FM. (3) Extensive experiments are conducted on CTR benchmark and recommender system datasets to evaluate the effectiveness and interpretability of our proposed method. We show that GraphFM can provide persuasive rationales for the feature interaction modeling and prediction-making process.
Our proposed GraphFM achieves the best performance among all these four classes of methods on the three datasets. The performance improvement of GraphFM compared with the three classes of methods (A, B, C) is especially significant, above the $\mathbf{0.01}$-level. The aggregation-based methods, including InterHAt, AutoInt, Fi-GNN and our GraphFM, consistently outperform the other three classes of models, which demonstrates the strength of the aggregation strategy in capturing high-order relations. Compared with the strong aggregation-based baselines AutoInt and Fi-GNN, GraphFM still advances the performance by a large margin, especially on the MovieLens-1M dataset. The performance improvement on the other two datasets is also at the $\mathbf{0.001}$-level, which can be regarded as significant for the CTR prediction task Cheng et al. (2016); Guo et al. (2017); Song et al. (2019). Such improvement can be attributed to its combination with FM, which introduces feature interaction operations, and also to the interaction selection mechanism, which selects and models only the beneficial feature interactions. GraphFM outperforms the compared baselines by the largest margin on the MovieLens-1M dataset, whose feature size is the smallest among the three datasets. We suppose this is because the feature embedding size is not large enough for the other two datasets.
Our experiments are conducted on three real-world datasets: two CTR benchmark datasets and one recommender system dataset. Details of these datasets are given in Table 1. The data preparation follows the strategy in Tian et al. (2023). We randomly split all instances 8:1:1 for training, validation, and testing. We adopt the two most popular metrics, AUC and Logloss, to measure how much the predictions diverge from the ground truth.
Since our proposed approach selects the beneficial feature interactions and models them in an explicit manner, it has high efficiency in analyzing high-order feature interactions and thus provides rationales for the model outcome. Through extensive experiments conducted on CTR benchmark and recommender system datasets, we verify the rationality, effectiveness, and interpretability of our proposed approach.
C
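As a rough illustration of the idea above — features as nodes and selected pairwise interactions as edges — the sketch below evaluates a plain second-order FM score restricted to a chosen edge set. It is a simplification of GraphFM's neighborhood aggregation, and all names and sizes are illustrative.

```python
import numpy as np

def fm_graph_score(x, w0, w, V, edges):
    """Second-order FM score where only the selected feature pairs (graph edges) interact.
    x: (d,) feature values; w0: bias; w: (d,) linear weights;
    V: (d, k) latent embeddings; edges: iterable of (i, j) selected interactions."""
    score = w0 + w @ x
    for i, j in edges:
        score += (V[i] @ V[j]) * x[i] * x[j]   # interaction strength = inner product of embeddings
    return score

rng = np.random.default_rng(0)
d, k = 6, 4
print(fm_graph_score(rng.random(d), 0.1, rng.random(d), rng.random((d, k)), [(0, 1), (2, 5)]))
```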
Figure 3: Portfolio optimization: Convergence of $h(\mathbf{x}_{t})$ and $g(\mathbf{x}_{t})$ vs. $t$ and wall-clock time.
Logistic regression. One of the motivating examples for the development of a theory of generalized self-concordant function is the logistic loss function, as it does not match the definition of a standard self-concordant function but shares many of its characteristics.
The FOO and LMO oracles are standard in the FW literature. The ZOO oracle is often implicitly assumed to be included with the FOO oracle; we make this explicit here for clarity. Finally, the DO oracle is motivated by the properties of generalized self-concordant functions. It is reasonable to assume the availability of the DO oracle: following the definition of the function codomain, one could simply evaluate $f$ at $\mathbf{x}$ and assert $f(\mathbf{x})<+\infty$, thereby combining the DO and ZOO oracles into one oracle.
Self-concordant functions have received strong interest in recent years due to the attractive properties that they allow to prove for many statistical estimation settings [Marteau-Ferey et al., 2019, Ostrovskii & Bach, 2021]. The original definition of self-concordance has been expanded and generalized since its inception, as many objective functions of interest have self-concordant-like properties without satisfying the strict definition of self-concordance. For example, the logistic loss function used in logistic regression is not strictly self-concordant, but it fits into a class of pseudo-self-concordant functions, which allows one to obtain similar properties and bounds as those obtained for self-concordant functions [Bach, 2010]. This was also the case in Ostrovskii & Bach [2021] and Tran-Dinh et al. [2015], in which more general properties of these pseudo-self-concordant functions were established. This was fully formalized in Sun & Tran-Dinh [2019], in which the concept of generalized self-concordant functions was introduced, along with key bounds, properties, and variants of Newton methods for the unconstrained setting which make use of this property.
In the classical analysis of Newton’s method, when the Hessian of $f$ is assumed to be Lipschitz continuous and the function is strongly convex, one arrives at a convergence rate for the algorithm that depends on the Euclidean structure of $\mathbb{R}^{n}$, despite the fact that the algorithm is affine-invariant. This motivated the introduction of self-concordant functions in Nesterov & Nemirovskii [1994], functions for which the third derivative is bounded by the second-order derivative, with which one can obtain an affine-invariant convergence rate for the aforementioned algorithm. More importantly, many of the barrier functions used in interior-point methods are self-concordant, which extends the use of polynomial-time interior-point methods to many settings of interest.
A
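As a concrete check of the claim above that the logistic loss behaves like a self-concordant function without meeting the strict definition, the following elementary computation can be carried out (a standard calculation shown for orientation; the exact generalized self-concordance parameters used in the cited works may differ):

```latex
% Logistic loss \varphi(t) = \log(1 + e^t), with \sigma(t) = e^t / (1 + e^t):
\varphi'(t)   = \sigma(t), \qquad
\varphi''(t)  = \sigma(t)\bigl(1 - \sigma(t)\bigr), \qquad
\varphi'''(t) = \varphi''(t)\bigl(1 - 2\sigma(t)\bigr).
% Since |1 - 2\sigma(t)| \le 1, we get |\varphi'''(t)| \le \varphi''(t) for all t:
% the third derivative is bounded by the second derivative itself, rather than by
% \varphi''(t)^{3/2} as the strict definition of self-concordance would require.
```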
In this section, we give a brief outline of our approach and discuss the challenges we overcome. As the basic building block, we follow the classic approach by Hopcroft and Karp [HK73] of iteratively finding short augmenting paths to improve a $2$-approximate matching that can easily be found by a greedy algorithm.
Let $P$ be an alternating path belonging to $\mathcal{S}_{\alpha}$. A convenient property of having $P$ settled is that, once $P$ becomes settled, we show that all the arcs in $P$ at any point belong to the same structure (this is made formal in Lemma 4.4). Notice that invoking the method Include-Unmatched-Edges ensures that our algorithm, in its memory, also stores unmatched arcs belonging to $P$. All of this enables us to think of $P$ as an odd cycle. In particular, if any free node $\beta\neq\alpha$ reaches $P$, then the algorithm augments between $\alpha$ and $\beta$, which makes progress. Crucially, and deviating from standard approaches to finding augmenting paths, we allow our augmenting paths to be much longer than $1/\varepsilon$. (In fact, we do not guarantee that augmenting via settled paths will result in augmenting paths of length at most $1/\varepsilon$.) Nevertheless, in Section 5 we show how to handle these longer augmenting paths by executing additional phases.
If no DFS search other than by $\alpha$ is performed over $P$, $\alpha$ will eventually find $\beta$ and we have found a short augmenting path as desired. However, it can be the case that the DFS search by another free vertex $\gamma$ has already scanned over an edge $a_{i}$.
The basic building block in the search for augmenting paths is to find semi-matchings between the vertices and their matched neighbors such that each vertex has a small number of neighbors in the semi-matching. In the case of bipartite graphs, they show that their method of searching for augmenting paths in a graph defined by the semi-matchings finds a significant, but only exponentially small in $\varepsilon$, fraction of all possible augmenting paths.
To find these augmenting paths, we perform a depth first search (DFS) style truncated search from each free vertex in parallel and, once a sufficient number of disjoint augmenting paths has been found, we augment the current matching over these augmenting paths. The search scans over alternating paths of length roughly $1/\varepsilon$ starting from the free (i.e., unmatched) vertices.
D
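To make the truncated DFS idea above concrete, here is a minimal sequential sketch that looks for one augmenting path of bounded length starting from a free vertex. It omits the parallelism, the structures, and the settling logic of the actual algorithm, and all identifiers are illustrative.

```python
def find_short_augmenting_path(adj, matching, max_len):
    """Truncated DFS for one augmenting path of at most max_len edges.
    adj: dict vertex -> list of neighbors; matching: dict vertex -> matched partner
    (both endpoints present as keys). Returns the path as a list of vertices, or None."""
    for start in (v for v in adj if v not in matching):   # start only from free vertices
        path, used = [start], {start}

        def dfs(u, length):
            if length >= max_len:
                return False
            for w in adj[u]:
                if w in used:
                    continue
                if w not in matching:            # reached another free vertex: augmenting path
                    path.append(w)
                    return True
                partner = matching[w]
                if partner in used:
                    continue
                used.update((w, partner))
                path.extend((w, partner))        # unmatched edge u-w, then matched edge w-partner
                if dfs(partner, length + 2):
                    return True
                path.pop(); path.pop()           # backtrack the path (vertices stay marked)
            return False

        if dfs(start, 0):
            return path
    return None
```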
Figure 3: Performance of Push-Pull/$\mathcal{A}\mathcal{B}$, CPP, B-CPP against the number of transmitted bits: the left column shows the results with quantization ($b=2,4,6$) and the right column shows the results with Rand-$k$ ($k=5,10,20$).
We can see from all of the sub-figures of Fig. 3 that, to reach a high accuracy of about $10^{-15}$, the number of transmitted bits required by these methods follows the ranking: B-CPP $<$ CPP $<$ Push-Pull/$\mathcal{A}\mathcal{B}$.
In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/$\mathcal{A}\mathcal{B}$ with a general class of unbiased compression operators. CPP enjoys large flexibility in both the compression method and the network topology. We show that CPP achieves a linear convergence rate under strongly convex and smooth objective functions.
In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/$\mathcal{A}\mathcal{B}$ method [24, 25]. In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies.
To see why CPP outperforms Push-Pull/$\mathcal{A}\mathcal{B}$, note that the vectors sent in CPP have been compressed, and hence the transmitted bits at each iteration are greatly reduced compared to Push-Pull/$\mathcal{A}\mathcal{B}$.
D
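The Rand-$k$ operator used in the experiments above is a standard unbiased sparsifier: keep $k$ uniformly chosen coordinates and rescale by $d/k$ so the compressed vector equals the original in expectation. A minimal sketch:

```python
import numpy as np

def rand_k(x, k, rng=np.random.default_rng()):
    """Rand-k compression: zero out all but k random coordinates, scale by d/k so E[C(x)] = x."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out

x = np.arange(20.0)
print(rand_k(x, 5))   # only 5 nonzero entries need to be transmitted
```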
We propose lower bounds on both the communication and the number of local oracle calls for a general class of algorithms (those satisfying Assumption 3). The bounds naturally depend on the communication matrix $W$ (as in the minimization problem), but our results apply to SPPs (see the "Lower" rows in Table 1 for various settings of the SPP PFL formulations).
We adapt the proposed algorithm for training neural networks. We compare our algorithms: the sliding type (Algorithm 1) and the local-method type (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler settings, such as regression problems [31, 29]. Our experiments confirm the robustness of our methods on the problem of training a classifier with adversarial noise.
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide lower bounds on both the communication and the number of local oracle calls required to solve problem (1). Furthermore, we have developed novel methods (Algorithm 1, Algorithm 2, Algorithm 3) for this problem that are optimal up to a logarithmic factor in certain scenarios (see Table 1). These algorithms are based on sliding or variance reduction techniques. The theoretical analysis and experimental evidence corroborate our methods. Moreover, we have customized our approach for neural network training.
Discussions. We compare algorithms based on the balance of the local and global models, i.e. if an algorithm is able to train both the local and the global models well, then we say it finds the FL balance. The results show that the Local SGD technique (Algorithm 3) outperformed Algorithm 1 only with fairly frequent device communication (Figure 5 (a)). In the other cases (Figure 5 (b), (c)), Algorithm 3 was unable to train the global model, although it maintained good quality for the local models. It turns out that the technique of Algorithm 1 can be considered robust for Federated Learning, even in the case of neural networks.
We develop multiple novel algorithms to solve decentralized personalized federated saddle-point problems. These methods (Algorithm 1 and Algorithm 2) are based on the recent sliding technique [27, 28, 29] adapted to SPPs in decentralized PFL. In addition, we present Algorithm 3, which uses the randomized local method from [30]. This algorithm is used to compare Algorithm 1 with local randomized methods (like Algorithm 3) in practice.
D
Since the $\epsilon$ is deterministically known for the $\max(Ab)\epsilon$-MG(C)CE, $\frac{1}{2}\max(Ab)\epsilon$-MG(C)CE and MG(C)CE solutions, we can solve for these using the standard solvers discussed in Section 3. For the $\min\epsilon$-MG(C)CE we can tweak our optimization procedure to solve for this case directly by simply including a $c\epsilon$ term to minimize, where $c>1$. We use bisection search to find the full-$\epsilon$-MG(C)CE.
This means that neither NEs nor (C)CEs can be directly used prescriptively in n-player, general-sum games. These solution concepts specify which subsets of joint strategies are in equilibrium, but do not specify how decentralized agents should select among these. Furthermore, the presence of a correlation device does not make (C)CEs prescriptive because the agents still need a mechanism to agree on the distribution the correlation device samples from (this holds if the correlation device is not considered part of the game; if it were part of the game, for example traffic lights at a junction, the solution concept could appear prescriptive). This coordination problem can be cast as one that is more computational in nature: what rules allow an equilibrium to be uniquely (and perhaps de-centrally) selected?
There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE), which is defined as the CE that maximises the sum of all players' payoffs. An MWCE can be obtained by solving a linear program; however, the MWCE may not be unique and therefore does not fully solve the equilibrium selection problem (e.g. constant-sum game solutions all have equal payoff). The second such concept is Maximum Entropy Correlated Equilibrium (MECE) (Ortiz et al., 2007), which maximises Shannon's entropy (Shannon, 1948) as an objective. MECE also shares some interesting properties with MGCE, such as computational scalability when the solution is full-support (positive probability mass everywhere). Drawbacks of this approach are that the literature does not provide algorithms when the solution is general-support (non-negative probability), and maximising Shannon's entropy can be complex.
An important concept in decision theory, called cardinal utility (Mas-Colell et al., 1995), is that offset and positive scale of each player’s payoff does not change the properties of the game. A notable solution concept that does not have this property is MW(C)CE.
This highlights the main drawback of MW(C)CE: it does not select unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is the maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018); however, outside of the two-player constant-sum setting, these are generally not easy to compute (Daskalakis et al., 2009). CEs exist in a convex polytope, so any convex function can select among them. Maximum entropy correlated equilibrium (MECE) (Ortiz et al., 2007) is limited to full-support solutions, which may not exist when $\epsilon=0$, and can be hard to solve in practice. Therefore, there is a gap in the literature for a computationally tractable, unique solution concept, and this work proposes MG(C)CE to fill this gap.
C
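The bisection search mentioned earlier for the full-$\epsilon$ solution can be sketched generically as below; the feasibility oracle and the monotonicity assumption stand in for the paper's actual criterion, which is not spelled out here.

```python
def bisect_epsilon(feasible, lo=0.0, hi=1.0, tol=1e-6):
    """Return (approximately) the smallest epsilon in [lo, hi] with feasible(epsilon) True,
    assuming feasibility is monotone in epsilon and feasible(hi) holds."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy usage with a hypothetical threshold standing in for the equilibrium feasibility test:
print(bisect_epsilon(lambda eps: eps >= 0.37))
```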
One small extension of the present work would be to consider queries with range $\mathbb{R}^{d}$. It would also be interesting to extend our results to handle arbitrary normed spaces, using appropriate noise such as perhaps the Laplace mechanism. It might also be possible to relax our assumption that data elements are drawn iid to a weaker independence requirement. Furthermore, it would be interesting to explore an extension from linear queries to general low-sensitivity queries.
The dependence of our PC notion on the actual adaptively chosen queries places it in the so-called fully-adaptive setting (Rogers et al., 2016; Whitehouse et al., 2023), which requires a fairly subtle analysis involving a set of tools and concepts that may be of independent interest. In particular, we establish a series of “dissimilarity” notions in Appendix B, which generalize the notion of divergence, replacing the scalar bound with a function. Our main stability notion (Definition 4.2) can be viewed as an instance-tailored variant of zero-concentrated differential privacy (Bun and Steinke, 2016), and we also make use of a similar extension of the classical max-divergence-based differential privacy definition (B.8).
We hope that the mathematical toolkit that we establish in Appendix B to analyze our stability notion may find additional applications, perhaps also in context of privacy accounting. Furthermore, the max divergence can be generalized analogously to the “dynamic” generalization of Rényi divergence proposed in this paper (B.9), perhaps suggesting that this approach may be useful in analyzing other mechanisms as well.
The contribution of this paper is two-fold. In Section 3, we provide a tight measure of the level of overfitting of some query with respect to previous responses. In Sections 4 and 5, we demonstrate a toolkit to utilize this measure, and use it to prove new generalization properties of fundamental noise-addition mechanisms. The novelty of the PC definition stems from replacing the fixed parameters that appear in the differential privacy definition with a function of the datasets and the query. The definition presented in this paper provides a generalization of zero-concentrated differential privacy, and future work could study similar generalizations of other privacy notions, as discussed in Section B.4.
We note that the first part of this definition can be viewed as a refined version of zCDP (Definition B.18), where the bound on the Rényi divergence (Definition B.5) is a function of the sample sets and the query. As for the second part, since the bound depends on the queries, which themselves are random variables, it should be viewed as a bound on the Rényi dissimilarity notion that we introduce in the appendix (Definition B.9). This kind of extension is not limited to Rényi divergence, as discussed in Appendix B.
B
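For orientation, the standard definitions referenced above — Rényi divergence and zero-concentrated differential privacy — are recalled below in their textbook form; the paper's refinement replaces the fixed bound with a function of the datasets and the query.

```latex
% Renyi divergence of order \alpha > 1 between distributions P and Q:
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[ \Bigl( \tfrac{P(x)}{Q(x)} \Bigr)^{\alpha} \right].
% A mechanism M satisfies \rho-zCDP (Bun and Steinke, 2016) if, for all neighboring
% datasets S, S' and all \alpha > 1:
D_{\alpha}\bigl( M(S) \,\|\, M(S') \bigr) \;\le\; \rho\,\alpha .
```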
All $z$-antlers $(\hat{C},\hat{F})$ that are $z$-properly colored by $\chi$ prior to executing the algorithm are also $z$-properly colored by $\chi$ after termination of the algorithm.
All $z$-antlers $(\hat{C},\hat{F})$ that are $z$-properly colored by $\chi$ prior to executing the algorithm are also $z$-properly colored by $\chi$ after termination of the algorithm.
We show first that any $z$-properly colored antler prior to executing the algorithm remains $z$-properly colored after termination. Afterwards we argue that in Item 5, the pair $(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))$ is a $z$-antler in $G$. Since $(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))$ contains all properly colored antlers this proves correctness.
We now show that a $z$-antler can be obtained from a suitable coloring $\chi$ of the graph. The algorithm we give updates the coloring $\chi$ and recolors any vertex or edge that is not part of a $z$-properly colored antler to color $\mathsf{\dot{R}}$. We show that after repeatedly refining the coloring, the coloring that we arrive at identifies a suitable antler.
To show the algorithm preserves properness of the coloring, we show that every individual recoloring preserves properness, that is, if an arbitrary $z$-antler is $z$-properly colored prior to the recoloring, it is also $z$-properly colored after the recoloring.
D
Object placement [2, 24, 65, 154, 197] seeks a reasonable location, size, and shape by predicting the foreground transformation to avoid the abovementioned inconsistencies. Previous object placement methods [197, 154] mainly predict a simple form of spatial transformation, that is, shifting and scaling the foreground to achieve a reasonable location and size. Some other methods [65, 88] predict more general forms of spatial transformation (e.g., affine transformation, perspective transformation, thin plate spline transformation) to warp the foreground. For more advanced geometric transformations like view synthesis and pose transfer, we should resort to generative approaches [183, 141] to change the viewpoint/pose of the foreground. When placing the object on the background, unreasonable occlusion may occur. Most previous methods seek a reasonable placement to avoid unreasonable occlusions, while some methods [2, 190, 147] aim to fix unreasonable occlusion by removing the occluded regions of the foreground based on estimated depth information.
After compositing a new image with foreground and background, there exist many issues that could make the composite image unrealistic and thus significantly degrade its quality. These issues can be summarized as the inconsistency between foreground and background, which can be divided into appearance inconsistency, geometric inconsistency, and semantic inconsistency. Each type of inconsistency involves a number of issues to be solved. Image composition task could be decomposed into multiple sub-tasks, in which each sub-task targets at one or more issues. Next, we will introduce each type of inconsistency one by one.
The semantic inconsistency includes but is not limited to: 1) the foreground appears at a semantically unreasonable place (e.g., a zebra is placed in the living room); 2) the foreground has unreasonable interactions with other objects or people (e.g., a person is riding a motorbike, but the person and the motorbike are facing in opposite directions); 3) the background may have a semantic impact on the foreground appearance. The semantic inconsistency is judged based on commonsense knowledge, so the cases of semantic inconsistency may be arguable according to subjective judgement. For example, when a car is placed in the water, it can be argued that a car is sinking into the water after a car accident. However, such an event has a rather low probability compared with commonly seen cases, so we can claim that the car appears at an unreasonable place, which belongs to semantic inconsistency. A partial solution to semantic inconsistency falls into the scope of object placement. To be exact, by predicting a suitable spatial transformation for the foreground, we can relocate the foreground to a reasonable place or adjust the pose of the foreground to make its interactions with the environment more convincing. Additionally, the appearance of the foreground object may be affected by the background semantically, which is different from low-level appearance inconsistency (illumination, shadow). For example, a car placed on the snowy ground may be covered by snow. A student inserted into a group of students wearing school uniforms should wear the same school uniform. Such semantic appearance variation is very flexible and challenging, and will not be fully discussed in this survey.
Object placement aims to paste the foreground on the background with suitable location, size, and shape. As shown in Fig. 4, the cases of unreasonable object placement include but are not limited to: a) the foreground object has an inappropriate size (e.g., the dog is too large); b) the foreground object has unreasonable occlusion with background objects (e.g., the fences are unreasonably occluded by the giraffe); c) the foreground object does not have a reasonable force condition (e.g., the suitcase is floating in the air); d) the foreground object appears at a semantically unreasonable place (e.g., the boat appears on the land); e) inconsistent perspectives between foreground and background (e.g., the car and the bus have inconsistent perspectives). Taking all the above factors into consideration, object placement is a very challenging task.
Figure 2: The quality of composite image is degraded by the appearance inconsistency, geometric inconsistency, and semantic inconsistency. Each type of inconsistency involves a number of issues. Each sub-task targets at addressing one or more issues.
B
We utilized the k-means algorithm to cluster regions within city $c$ based on their POI vectors ($\mathbf{x}_{c}^{poi}$). Additionally, we clustered regions based on the average daily pattern of taxi mobility data to obtain $\mathbb{C}_{c,in}$, $\mathbb{C}_{c,out}$, $\mathbb{C}_{c,pickup}$, $\mathbb{C}_{c,idle}$. To measure the similarity between the cluster assignments based on POIs ($\mathbb{C}_{c,poi}$) and those based on taxi mobility data ($\mathbb{C}_{c,in}$, etc.), we employed the adjusted Rand index (ARI) and the adjusted mutual information (AMI). The results, presented in Table III, indicate positive correlations between the cluster assignments based on POIs and those based on taxi mobility data, thereby confirming the relationship between the two datasets.
TABLE III: Adjusted Rand index (ARI) and adjusted mutual information (AMI) to evaluate the clustering assignments based on POIs and taxi mobility data. Both metrics range from -1 to 1, with larger values indicating a higher degree of coincidence between the two clusters.
Comprehensiveness: Fig. 1(a) illustrates that CityNet comprises three types of raw data (mobility data, geographical data, and meteorological data) collected from seven different cities. Furthermore, we have processed the raw data into several sub-datasets (as shown in Fig. 1(b)) to capture a wider range of urban phenomena. For instance, we have transformed raw mobility data of taxi movements into region-based measurements such as taxi flows, pickups, and idle driving time. These measurements are crucial in revealing the state of the transportation market and citizen activities.
Interrelationship: We have classified the sub-datasets into two categories: service data and context data, as depicted in Fig. 1(c). Service data pertains to the status of urban service providers (e.g. taxi companies), while context data refers to the urban environment (e.g. weather). Based on this categorization, we have formulated and tested three types of correlations, as shown in Fig. 1(c), correlations (1) among mobility services, (2) among context, such as urban geography, and (3) between contexts and services.
The average regional daily patterns of taxi mobility data from each POI-based cluster in Beijing, Chengdu, and Xi’an are plotted in Fig. 2. As shown in the Beijing panel of Fig. 2, taxi mobility patterns in Beijing exhibit a high level of cohesion within each POI-based cluster, while remaining distinguishable across clusters. Conversely, the Chengdu/Xi’an panel of Fig. 2 illustrates that clusters with higher inflow/outflow/pick-up values in Xi’an and Chengdu, two cities with relatively low ARI and AMI scores as reported in Table III, demonstrate significant overlaps between adjacent clusters, which may be attributed to the limited number of regions in these cities. Nevertheless, this panel still enables us to identify distinct clusters.
D
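A minimal sketch of the clustering-agreement check described above, using scikit-learn with random placeholder arrays in place of the real POI vectors and taxi mobility patterns:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

rng = np.random.default_rng(0)
poi_vectors = rng.random((200, 10))    # placeholder: one POI vector per region
daily_inflow = rng.random((200, 24))   # placeholder: average hourly inflow per region

poi_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(poi_vectors)
flow_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(daily_inflow)

print("ARI:", adjusted_rand_score(poi_labels, flow_labels))
print("AMI:", adjusted_mutual_info_score(poi_labels, flow_labels))
```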
Prediction intervals are constructed as in the previous section, i.e. a (conditionally) normal distribution is assumed and the intervals are given by Eq. (22). It was observed that this architecture shows improved modelling capabilities and robustness for uncertainty estimation. In [fort2019deep] the improved performance is attributed to the multimodal behaviour in model space. Fort et al. argue that most networks tend to have the property that they only work in one specific subspace of the model space, while deep ensembles (due to random initialization) can model different modes.
Every ensemble allows for a naive construction of a prediction interval [heskes1997practical] when the aggregation strategy in Algorithm 2 is given by the arithmetic mean. By treating the predictions of the individual models in the ensemble as elements of a data sample, one can calculate the empirical mean and variance and use these as moment estimators for a normal distribution:
The idea behind deep ensembles [lakshminarayanan2017simple] is the same as for any ensemble technique: training multiple models to obtain a better and more robust prediction. The loss functions of most (deep) models have multiple local minima and by aggregating multiple models one hopes to take into account all these minima. From this point of view the approach by Lakshminarayanan et al. [lakshminarayanan2017simple] is very similar to that of [kendallgal]. However, the underlying philosophy is slightly different. First of all, although the same loss function is used, it is not obtained from a Bayesian framework, but rather chosen as a proper scoring rule [gneiting2007strictly], i.e. a loss function for distributional forecasts for which a model can never obtain a lower loss than the true distribution (it is said to be strictly proper if it has a unique minimum). By training a model with respect to a (strictly) proper scoring rule, the model is encouraged to approximate the true probability distribution. A disadvantage of these scoring rules is that they are only proper relative to a certain class of distributions, hence they still introduce distributional assumptions in the model. For example, the scoring rule (20) is only proper w.r.t. probability distributions with a finite second moment and strictly proper w.r.t. distributions that are determined by their first two moments. Secondly, instead of constructing an ensemble through MC sampling from the prior distribution, multiple models are independently trained and diversity is induced by using different initial parameters. This is motivated by a prior observation that random initialization often leads to superior performance when compared to other ensemble techniques [lee2015m].
Without the adversarial training, this model is similar to the one introduced by Khosravi et al. [khosravi2014constructing]. However, instead of training an ensemble of mean-variance estimators, an ensemble of point estimators is trained to predict $y$ and in a second step a separate estimator $\hat{\sigma}$ for the data noise is trained using loss function (20), where the ensemble estimator is kept fixed.
The class of direct interval estimators consists of all methods that are trained to directly output a prediction interval. Instead of modelling a distribution or extracting uncertainty from an ensemble, they are trained using a loss function that is specifically tailored to the construction of prediction intervals. The general structure of this approach is summarized in Algorithm 3. Because these methods are specifically made for estimating uncertainty, they can be expected to perform better than modified point estimators. However, this is also immediately their main disadvantage: they do not always produce a point estimate. Another disadvantage is that they are in general also specifically constructed for a predetermined confidence level $\alpha$. Choosing a different value for $\alpha$ requires the model to be retrained.
C
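The naive ensemble interval described above — empirical mean and variance of the member predictions used as moments of a normal predictive distribution — can be sketched as follows; the confidence level and the member predictions are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ensemble_prediction_interval(member_preds, alpha=0.05):
    """member_preds: (n_members, n_points) array of point predictions.
    Returns a (lower, upper) normal interval per point from the ensemble's empirical moments."""
    mu = member_preds.mean(axis=0)
    sigma = member_preds.std(axis=0, ddof=1)
    z = norm.ppf(1 - alpha / 2)
    return mu - z * sigma, mu + z * sigma

preds = np.array([[1.0, 2.1, 0.9],   # hypothetical predictions of 4 ensemble members
                  [1.2, 2.0, 1.1],
                  [0.9, 2.2, 1.0],
                  [1.1, 1.9, 0.8]])
lower, upper = ensemble_prediction_interval(preds)
```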
In this article, we presented a large-scale pre-trained model for musical data in the MIDI format. We employed five public-domain piano MIDI datasets for BERT-like masking-based pre-training and evaluated the pre-trained model on four challenging downstream symbolic music classification tasks, most with less than 1K labelled MIDI pieces. Our experiments validate the effectiveness of pre-training for both note-level and sequence-level classification tasks.
To our best knowledge, the work of Tsai et al. [tsai20ismir] represents the first attempt to use PTMs for symbolic-domain music classification. They showed that either a RoBERTa-based Transformer encoder PTM [roberta] or a GPT2-based Transformer encoder PTM [gpt2] outperforms non-pre-trained baselines for a 9-class symbolic-domain composer classification task. Pre-training boosts the classification accuracy for the GPT2 model greatly from 46% to 70%. However, the symbolic data format considered in their work is “sheet music image” [tsai20ismir], i.e., images of musical scores. This data format has been much less used than MIDI in the literature.
In this article, we presented a large-scale pre-trained model for musical data in the MIDI format. We employed five public-domain piano MIDI datasets for BERT-like masking-based pre-training and evaluated the pre-trained model on four challenging downstream symbolic music classification tasks, most with less than 1K labelled MIDI pieces. Our experiments validate the effectiveness of pre-training for both note-level and sequence-level classification tasks.
Instead of feeding the token embedding of each of them individually to the Transformer, we can combine the token embedding of either the four tokens for MIDI scores or six tokens for MIDI performances in a group by concatenation and let the Transformer model process them jointly, as depicted in Fig. 1(b). We can also modify the output layer of the Transformer so that it predicts multiple tokens at once with different heads.
This work can be extended in many ways. First, to employ other pre-training strategies or architectures [han2021pretrained]. Second, to employ Transformer models with linear computational complexity [choromanskiRethinkingAttentionPerformers2020a, liutkus21icml], so as to use whole MIDI pieces (instead of segments) at pre-training. (We note that the use of linear Transformers for symbolic music generation has been attempted before [hsiao21aaai].)
D
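The grouped-token idea mentioned two paragraphs above — concatenating the embeddings of a compound token's fields and predicting each field with its own output head — could be sketched roughly as below; the vocabulary sizes and dimensions are placeholders, not the model's actual configuration.

```python
import torch
import torch.nn as nn

class GroupedTokenEmbedding(nn.Module):
    """Embed each field of a compound token, concatenate, and project to the model dimension."""
    def __init__(self, vocab_sizes, emb_dim=64, d_model=256):
        super().__init__()
        self.embs = nn.ModuleList([nn.Embedding(v, emb_dim) for v in vocab_sizes])
        self.proj = nn.Linear(emb_dim * len(vocab_sizes), d_model)

    def forward(self, tokens):                       # tokens: (batch, seq, n_fields) of int ids
        parts = [emb(tokens[..., i]) for i, emb in enumerate(self.embs)]
        return self.proj(torch.cat(parts, dim=-1))   # (batch, seq, d_model)

class MultiHeadOutput(nn.Module):
    """Predict all fields of the next compound token at once, one linear head per field."""
    def __init__(self, vocab_sizes, d_model=256):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(d_model, v) for v in vocab_sizes])

    def forward(self, hidden):                       # hidden: (batch, seq, d_model)
        return [head(hidden) for head in self.heads]

# Placeholder vocabularies for four token fields (e.g. bar, position, pitch, duration).
embed = GroupedTokenEmbedding([64, 32, 128, 64])
logits = MultiHeadOutput([64, 32, 128, 64])(embed(torch.zeros(2, 16, 4, dtype=torch.long)))
```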
Otherwise, $F$ has a leaf $v\in A$ with a neighbor $u\in B$. We can assign $c(v)=a_{2}$, $c(u)=b_{2}$ and invoke a subproblem for $F^{\prime}=F-\{u,v\}$, $A^{\prime}=A\setminus\{v\}$, $B^{\prime}=B\setminus\{u\}$ with the same coloring $c$ and color intervals $[a_{1},a_{2}-1]$ and $[b_{1},b_{2}-1]$. The solution for $F^{\prime}$ would be consistent with the coloring of $u$ and $v$, since all other neighbors of $u$ in $F$ would get colors at most $a_{2}-1\leq b_{2}-1-\lambda<c(u)-\lambda$.
Now, observe that if the block to the left is also of type A, then a respective block from $Z(S)$ is $(0,1,0)$ – and when we add the backward carry $(0,0,1)$ to it, we obtain the forward carry to the rightmost block. And regardless of the value of the appropriate block of $Z(S_{2})$, the total sum of the blocks and the backward carry cannot generate any further backward carry.
Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the next iteration we start at exactly the neighbor of the previous central vertex, there can be only $O(n)$ such jumps in total.
The linear running time follows directly from the fact that we compute $c$ only once and we can additionally pass through the recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hence the claim follows.
To obtain the total running time we first note that each of the initial steps – obtaining $(R,B,Y)$ from Corollary 2.11 (e.g. using Algorithm 1), contraction of $F$ into $F^{\prime}$, and finding both $Y_{1}$ and $Y_{2}$ – requires only linear time. Coloring $Y_{1}\cup R_{1}\cup B_{1}$ also requires $O(n)$ time, since we need to traverse each edge between these vertices only once to ensure the proper distances between the colors, and it is sufficient to use bucket sort to order vertices within $B_{1}$ and $R_{1}$. The same argument follows symmetrically for $Y_{2}\cup R_{2}\cup B_{2}$.
C