context (string, 250–7.19k chars) | A (string, 250–4.12k chars) | B (string, 250–8.2k chars) | C (string, 250–5.47k chars) | D (string, 250–3.94k chars) | label (4 classes)
---|---|---|---|---|---
$(-1)^{a}\binom{b-1}{-a}\Big[\frac{d}{dx}x^{m}F(a,b;c;z)+x^{m}\frac{d}{dx}F(a,b;c;z)\Big];$ | $\frac{d^{2}}{dx^{2}}F(a,b;c;z)=$ | $\frac{{R_{n}^{m}}^{\prime\prime}(x)}{{R_{n}^{m}}^{\prime}(x)}=\frac{1}{x^{2}-1}\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}^{\prime}(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].$ | $\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)=$ | $\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)$ | D
Now let $d$ be even. The same results for the transvections $t_{21}(\omega^{\ell})$ and $t_{12}(\omega^{\ell})$ as for $d$ odd can be obtained by replacing $v$ by $x$ in the formula for $t_{21}(\omega^{\ell})$. It remains to compute $t_{32}(\omega^{\ell})$ and $t_{23}(\omega^{\ell})$ which can be done using Lemmas 3.2 and 3.6. First, we compute $xv^{-1}$ and store it in the slot $p[2,3,1]$ for $t_{23}(\omega^{0})$ which takes one operation. Then we compute $t_{32}(\omega^{\ell})=(xv^{-1})t_{21}(\omega^{\ell})(xv^{-1})^{-1}$ for $0\leq\ell<f$ which needs three operations per transvection, and hence $3f$ operations overall. Lastly we compute $s_{1}=vsv^{-1}$ and store it in slot $p[2,3,f-1]$ which needs two operations and $t_{23}(\omega^{\ell})$ for $0\leq\ell<f$ which needs $3f$ operations overall. This requires at most $16f+7$ operations. | Finally, we construct a second MSLP, described in Section 3.5, that writes a diagonal matrix $h\in\textnormal{SL}(d,q)$ as a word in the standard generators of $\textnormal{SL}(d,q)$ (when evaluated with these generators as input). Combining the constructions in Sections 3.4 and 3.5 yields, as required, the monomial matrix | The first step of the algorithm is the one-off computation of $T_{2}$ from the LGO standard generators of $\textnormal{SL}(d,q)$. The length and memory requirement of an MSLP for this step is as follows. | We now compute upper bounds for the length and memory quota of an MSLP for expressing an arbitrary diagonal matrix $h\in\textnormal{SL}(d,q)$ as a word in the LGO generators, i.e. the computation phase of the algorithm. | Our aim is to determine the length and memory quota for an MSLP for the Bruhat decomposition of an arbitrary matrix $g\in\textnormal{SL}(d,q)$ via the above method, with the matrices $u_{1}$, $u_{2}$, $w$ returned as words in the LGO generators $s,t,v,\delta,x$ of $\textnormal{SL}(d,q)$ given in Section 3.1. | C
where $\Omega\subset\mathbb{R}^{d}$ with $d=2$ or $3$ for simplicity, and is an open bounded domain with polyhedral boundary $\partial\Omega$, the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]_{\text{sym}}^{d\times d}$ is uniformly positive definite and bounded, and $g$ is part of the given data. | As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems. | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method not so practical. Here in this paper, in the presence of rough coefficients, spectral techniques are employed to overcome such hurdle, and by solving local eigenvalue problems we define a space where the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove macro-elements corner singularities that occur in LOD methods based on | It is hard to approximate such problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergent proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85, MR1979846, MR2058933, HMV, MR1642758, MR3584539, MR2030161, MR2383203, vs1, vs2, MR2740478]. Some methods work even considering that the solution has low regularity [MR2801210, MR2753343, MR3225627, MR3177856, MR2861254] but are based on ideas that differ considerably from what we advocate here | In [MR2718268] is shown that the number of eigenvalues that are very large is related to the number of connected sub-regions on $\bar{\tau}\cup\bar{\tau}^{\prime}$ with large coefficients surrounded by regions with small coefficients. Generalized eigenvalue problems also have been used on overlapping domain decomposition solvers [MR2718268, MR2916377, MR3175183, MR3033238]. The design of robust discretizations with respect to coefficients using domain decomposition ideas have been studied in [MR2666649, MR1642758, MR3350765] assuming some regularity on the solution, and in [MR2718268] for a class of problems when the weighted Poincaré constant [MR3047947, MR3013465, MR2867661] is not large, otherwise the exponential decay of the multiscale functions deteriorates. See also [MR2753343, MR3109775] where a priori error estimates are obtained in terms of spectral norms. | C
Moreover, (iii) A back-stable edge (e.g. the one at $e_{r}$) remains back-stable when we change another edge (e.g. the one at $e_{s}$ or $e_{t}$) forwardly (e.g. $s\leftarrow s+1$ or $t\leftarrow t+1$). | Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases. | It is easy to compute one 3-stable triangle in $O(n)$ time; we show how to do this in section 4 (footnote: Alg-DS fails to find one 3-stable triangle and so we introduce the algorithm in section 4. This algorithm in section 4 is not the same as and does not originate from Alg-DS; see appendix A.2). Denote the computed 3-stable triangle by $\triangle v_{r}v_{s}v_{t}$ and assume $r,s,t$ are given in the following. | Our algorithm given in section 4 (denoted by Alg-One) is different from Alg-DS. First, step 1 of Alg-One sets the initial value of $(r,s,t)$ differently from the initial value $(1,2,3)$ used by Alg-DS. | D
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analysis on the wide range of features change during diffusion time. Ma et al. [19] used Recurrent Neural Networks for rumor detection; they batch tweets into time intervals and model the time series as a RNN sequence. Without any other handcrafted features, they got almost 90% accuracy for events reported in Snope.com. As the same disadvantage of all other deep learning models, the process of learning is a black box, so we cannot envisage the cause of the good performance based only on content features. The model performance is also dependent on the tweet retrieval mechanism, of which quality is uncertain for stream-based trending sub-events. | As shown in Table 5, CreditScore is the best feature overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit after 16-20 hours, but it is not significant. CrowdWisdom is also a good feature which can get 75.8% accuracy as a single feature. But its performance is poor (less than 70%) in the first 32 hours, getting better over time (see Table 5). Table 5 also shows the performance of the sentiment feature (PolarityScores), which is generally low. This demonstrates the effectiveness of our curated approach over the sentiments, yet the crowd needs time to unify their views toward the event while absorbing different kinds of information. | The processing pipeline of our classification approach is shown in Figure 2. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (CreditScore). | In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification. | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet level and let each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets are low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see the curve of Munich shooting event is also close to the curve of average news, indicating the event is more news-related. | B
In a follow-up work Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which $\ell^{\prime}(u)$ is of the form $\exp(-u^{\nu})$ with $\nu>0.25$. They then conjectured, based on heuristic analysis, that the exponential tail is optimal among all possible tails. Furthermore, they demonstrated that polynomial or heavier tails do not converge to the max margin solution. Lastly, for the exponential loss they proposed a normalized gradient scheme which can significantly improve convergence rate, achieving $O(\log(t)/\sqrt{t})$. | The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of training loss, which explains why it is worthwhile continuing to optimize long after we have zero training error, and | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the results for exponential loss are independent of initialization and with only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects, thereby making the analysis simpler compared to squared loss. | Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large $L_{1}$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on the exponential loss of a linear model, these results can be interpreted as analyzing the bias of coordinate descent, rather than gradient descent, on a monotone decreasing loss with an exact exponential tail. Indeed, with small enough step sizes, such a coordinate descent procedure does converge precisely to the maximum $L_{1}$-margin solution (Zhang et al., 2005; Telgarsky, 2013). In fact, Telgarsky (2013) also generalizes these results to other losses with tight exponential tails, similar to the class of losses we consider here. | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is also independent of the step-size | B
The effective cascaded model that engages both low and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to rumor detection: | We investigate how the performance of different types of low and high-level features changes over time (during the spreading of rumors); improving the understanding of feature impact and model design for rumor detection at different points in time. | In this work, we present a deep analysis on the feature variants over 48 hours for the rumor detection task. The results show that the low-level hidden representation of tweets feature is at least the second best feature over time. We also derive explanations on the low performance of supposed-to-be-strong high-level features at an early stage. The study also indicates that there is still considerable room to improve the effectiveness of the neural network-based rumor detection methods, e.g., by leveraging the embeddings from different sources rather than only text contents. | The performance of user features is similar to the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays and it is the best feature overall in the first 4 hours, but its rank decreases with time going by. Other user-based features like UserReputationScore and UserJoinDate also have a better performance in the first few hours. That means the sources (the posters in the first few hours) of news and rumors are quite different from each other. But with more and more users joining in the discussion, the bias of the two groups of users becomes less. After 6 hours, it seems that we can better distinguish the rumors based on the tweet contents (text features), rather than relying on the features of users. | We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet level and let each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 13(a). It can be seen that although the credibility of some tweets are low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 13(b). We can see the curve of Munich shooting event is also close to the curve of average news, indicating the event is more news-related. | A
Evaluating methodology. For RQ1, given an event entity e, at time t, we need to classify them into either the Breaking or Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 instances of anticipated, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials with each of the last 4 bins (using the history bins for training on a rolling basis) for testing; and report the results as the average of the trials. | We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The results are shown in Table 3-bottom, showing that our cascaded model, with features inherited from the performance of SVM in the previous task, substantially improves the single model. However, the overall modest results show the difficulty of this multi-class classification task. | RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The rightmost three models in each metric are the models proposed in this work. The overall results show that the performances of these models, even better than the baselines (for at least one of the three), vary greatly among the cases. In general, $SVM_{salience}$ performs well at the before stage of breaking events, and badly at the after stage of the same event type. Whereas $SVM_{timeliness}$ gives a contradictory performance for the cases. For anticipated events, $SVM_{timeliness}$ performs well at the before and after stages, but gives a rather low performance at the during stage. For this event type, $SVM_{salience}$ generally performs worse than $SVM_{timeliness}$. Overall, the $SVM_{all}$ with all features combined gives a good and stable performance, but for most cases is not better than the well-performed single set of features L2R model. In general, these results prove our assumption that salience and timeliness should be traded-off for different event types, at different event times. For feature importances, we observe regular, stable performances of same-group features across these cases. Salience features from knowledge bases tend to perform better than from query logs for short-duration or less popular events. We leave the more in-depth analysis of this part for future work. | Results. The baseline and the best results of our $1^{st}$ stage event-type classification are shown in Table 3-top. The accuracy for basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achieves a marginally better result at the F1 metric. | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall, improving the baseline, yet not significantly. Our Ensemble model, that is learned to trade-off between salience and timeliness, achieves the best results for all metrics, outperforming the baseline significantly. As the testing entity queries in this experiment are at all event times and with all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of adapted QAC methods. One reason is, as mentioned, that QACs, even time-aware ones, generally favor already salient queries, following the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes like entities often encounter sparse traffic of query volumes, which also contributes to the low results. | C
$R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\}\;,$ | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | one uses $p(\theta_{t}\mid\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(A\mid x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}\left(A=a_{t+1}^{*}\mid x_{t+1},\theta_{t},\mathcal{H}_{1:t}\right)$ | Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018]. | the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016]. | C
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14. | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only two glucose measurements per day on average and measured glucose within 4 hours or less after a meal only 5 out of 54 times. | Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10 minute intervals with at least 10 steps tracked by the google fit app. | The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference happens to patients 10 and 12 whose intakes are earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of others in the morning. | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14. | B
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones based on a pre-trained VGG16 classification network (Cornia et al., 2018; Kruthiventi et al., 2017). Our final evaluation results for both the MIT300 and CAT2000 datasets can be viewed on the MIT saliency benchmark under the model name MSI-Net, representing our multi-scale information network. Qualitatively, the proposed architecture successfully captures semantically meaningful image features such as faces and text towards the prediction of saliency, as can be seen in Figure 1. Unfortunately, a visual comparison with the results from prior work was not possible since most models are not openly available. | Table 6: A summary of the quantitative results for the models with $\oplus$ and without $\ominus$ an ASPP module. The evaluation was carried out on five eye tracking datasets respectively. Each network was independently trained 10 times, resulting in a distribution of values characterized by the mean $\mu$ and standard deviation $\sigma$. The star * denotes a significant increase of performance between the two conditions according to a one-sided paired t-test. Arrows indicate whether the metrics assess similarity | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that resulted in 1,280 activation maps. This representation was then forwarded to a $1\times 1$ convolutional layer with 256 channels. While the total number of feature maps stayed constant, the amount of trainable parameters increased in this ablation setting. Table 6 summarizes the results according to validation instances of five eye tracking datasets for the model with and without an ASPP module. It can be seen that our multi-scale architecture reached significantly higher performance (one-tailed paired t-test) on most metrics and is therefore able to leverage the information captured by convolutional layers with different receptive field sizes. An ablation analysis of the multi-level component adapted from Cornia et al. (2016) can be viewed in Appendix A. | Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone) from shallow networks and other machine learning methods. Entries between the second and third lines are models based on theoretical considerations and define a baseline rather than competitive performance. Arrows indicate whether the metrics assess similarity | Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript † represents models with a VGG16 backbone) from shallow networks and other machine learning methods. Entries between the second and the third line are models based on theoretical considerations and define a baseline rather than competitive performance. Arrows indicate whether the metrics assess similarity | A
For example, the path decomposition $(\{u,w,x\},\{u,v,x\},\{v,y,z\})$ for graph $H$ can be represented as a pd-marking scheme as illustrated in Figure 3 (for convenience, we omit the vertex labels; see also Figure 2 for an illustration of $H$). | In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into graphs. The main difference is that the reduction from Section 4 turns every symbol from the alphabet into an individual vertex of the graph (thus, producing a graph with $\operatorname{O}(\lvert\Sigma\rvert)$ vertices), while the reduction to pathwidth will use a vertex per position of the word $\alpha$, i.e., $\lvert\alpha\rvert$ individual vertices. In the reduction from Section 4 the information of the actual occurrences of the symbols in the word is encoded by the edges (in particular, the length $\lvert\alpha\rvert$ is represented by the number of edges), while in the following reduction the alphabet is encoded by connecting the vertices that correspond to positions of the same symbol to cliques in the graph (in particular, the number of edges may range between $\lvert\alpha\rvert$ and $\lvert\alpha\rvert^{2}$). We proceed with a formal definition and an example. | The locality number is rather new and we shall discuss it in more detail. A word is $k$-local if there exists an order of its symbols such that, if we mark the symbols in the respective order (which is called a marking sequence), at each stage there are at most $k$ contiguous blocks of marked symbols in the word. This $k$ is called the marking number of that marking sequence. The locality number of a word is the smallest $k$ for which that word is $k$-local, or, in other words, the minimum marking number over all marking sequences. For example, the marking sequence $\sigma=(\mathtt{x},\mathtt{y},\mathtt{z})$ marks $\alpha=\mathtt{xyxyzxz}$ as follows (marked blocks are illustrated by overlines): | Both the locality number of a word and the pathwidth of a graph is defined via markings. In order to avoid confusion, we therefore use different terminology to distinguish between these two concepts (see also the terminology defined in Section 2.2): The markings for words are called marking sequences, while the markings for graphs are called pd-marking schemes; the versions of a word during a marking sequence are called the stages (of the marking sequence), while the different marked version of a graph during a pd-marking scheme are called the steps (of the pd-marking scheme). | We use $G_{\alpha}$ as a unique graph representation for words and whenever we talk about a path decomposition for $\alpha$, we actually refer to a path decomposition of $G_{\alpha}$. Recall that we consider path-decompositions as certain marking schemes, which we called pd-marking schemes (see Section 2.3 and Figure 3). Since $G_{\alpha}$ has the positions of $\alpha$ as its vertices, the pd-marking scheme behind a path decomposition (and its respective terminology) directly translates to a marking scheme of the positions of $\alpha$. | C
In [128] the authors created a recurrent u-net that learns image representations from a stack of 2D slices and has the ability to leverage inter-slice spatial dependencies through internal memory units. It combines anatomical detection and segmentation into a single end-to-end architecture, achieving comparable results with other non end-to-end methods, outperforming the baselines DBN, recurrent DBN and FCN in terms of Dice. | Tan et al. [135] parameterize all short axis slices and phases of the LV segmentation task in terms of the radial distances between the LV center-point and the endocardial and epicardial contours in polar space. Then, they train a CNN regression on STA11 to infer these parameters and test the generalizability of the method on DS16 with good results. | Other papers combined deep learning methods with level set for LV segmentation. Rupprecht et al. [129] trained a class-specific four layer CNN which predicts a vector pointing from the respective point on the evolving contour towards the closest point on the boundary of the object of interest. | These predictions formed a vector field which was then used for evolving the contour using the Sobolev active contour framework. Anh et al. [130] created a non-rigid segmentation method based on the distance regularized level set method that was initialized and constrained by the results of a structured inference using a DBN. | For this task they introduce marginal space deep learning which provides high run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. Given the object localization, they propose a combined deep learning active shape model to estimate the non-rigid object boundary. | B
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, and PPO (Schulman et al., 2017), a model-free policy gradient algorithm (see Appendix E for details of tuning of Rainbow and PPO). The results of the comparison are presented in Figure 3. For each game, we plot the number of time steps needed for either Rainbow or PPO to reach the same score that our method reaches after 100K interaction steps. The red line indicates 100K steps: any bar larger than this indicates a game where the model-free method required more steps. SimPLe outperforms the model-free algorithms in terms of learning speed on nearly all of the games, and in the case of a few games, does so by over an order of magnitude. For some games, it reaches the same performance that our PPO implementation reaches at 10M steps. This indicates that model-based reinforcement learning provides an effective approach to learning Atari games, at a fraction of the sample complexity. | While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an important direction for future work. Another, less obvious limitation is that the performance of our method generally varied substantially between different runs on the same game. The complex interactions between the model, policy, and data collection were likely responsible for this. In future work, models that capture uncertainty via Bayesian parameter posteriors or ensembles (Kurutach et al., 2018; Chua et al., 2018) may improve robustness. | The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good policies could be learned very early. While this might have been due to the high variability of training, it does suggest the possibility of much faster training (i.e. in fewer steps than 100k) with more directed exploration policies. In Figure 9 in the Appendix we present the cumulative distribution plot for the (first) point during learning when the maximum score for the run was achieved in the main training loop of Algorithm 1. | Figure 1: Main loop of SimPLe. 1) the agent starts interacting with the real environment following the latest policy (initialized to random). 2) the collected observations will be used to train (update) the current world model. 3) the agent updates the policy by acting inside the world model. The new policy will be evaluated to measure the performance of the agent as well as collecting more data (back to 1). Note that world model training is self-supervised for the observed states and supervised for the reward. | The results in these figures are generated by averaging 5 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (as reported in Table 3 of Pohlen et al. (2018)). This suggests that further stabilizing SimPLe should improve its performance, indicating an important direction for future work. In some cases during training we observed high variance of the results during each step of the loop. There are a number of possible reasons, such as mutual interactions of the policy training and the supervised training or domain mismatch between the model and the real environment. We present detailed numerical results, including best scores and standard deviations, in Appendix D. | D
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke. | The spectrogram S2I results are in contrary with the expectation that the interpretable time-frequency representation would help in finding good features for classification. We hypothesize that the spectrogram S2I was hindered by its lack of non-trainable parameters. | Figure 1: High level overview of a feed-forward pass of the combined methods. $x_{i}$ is the input, $m$ is the Signal2Image module, $b_{d}$ is the 1D or 2D architecture ‘base model’ for $d=1,2$ respectively and $\hat{y_{i}}$ is the predicted output. | The names of the classes are depicted at the right along with the predictions for this example signal. The image between $m$ and $b_{d}$ depicts the output of the one layer CNN Signal2Image module, while the ‘signal as image’ and spectrogram have intermediate images as those depicted at the second and third row of Fig. 2. | For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of an S2I is whether it consists of trainable parameters such as convolutional and linear layers or it is non-trainable such as traditional time-frequency methods. | D
While the study of legged locomotion gaits has been a topic of research for several decades, the investigation of locomotion in wheel-legged robots is a relatively recent area of study [9]. Hybrid ground robots, equipped with highly articulated legs with more than three degrees-of-freedom, present unique challenges in gait development. Our study contributes to this growing field by suggesting two novel climbing gaits to surmount steps of different dimensions (h, 2h, and 3h, where h represents the track height as displayed in Fig. 3). We term these the whole-body climbing gait and the rear-body climbing gait [10], demonstrated in Fig. 5 and Fig. 6, respectively. | The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful design of the climbing gaits. These gaits incorporate identical desired joint accelerations, leg stride length, and forward movement height, as highlighted in [4]. Consequently, variations in energy consumption during different step negotiations primarily stem from negotiation time and body movements. In order to establish the threshold values ($T_{wb}$ and $T_{rb}$) for the energy criterion, they were equated to the energy expenditure of the walking locomotion mode, utilizing the whole-body climbing and rear-body climbing gaits, respectively. To identify the threshold values ($T_{wb}$ and $T_{rb}$) for the energy criterion, they were set equal to the energy expenditure of the walking locomotion mode using the whole body climbing and rear body climbing gaits, respectively. Unlike other methods that use empirical values [2, 8], the threshold values in this study were decided upon based on a novel rule that evaluates the alternative locomotion mode. Moreover, these threshold values are not fixed and are determined based on the terrain profiles the robot is negotiating. | Fig. 7 illustrates the hierarchical control design for the autonomous locomotion mode transition. The decision-making process for this transition is accomplished in MATLAB, whereas the control of each separate locomotion mode is enacted in CoppeliaSim. The connection between MATLAB and the physical robot model in CoppeliaSim is facilitated through the use of the remote API function available in the CoppeliaSim environment. Within CoppeliaSim, control is applied to rolling locomotion in order to maintain the required vehicle speed and home configuration. As for walking locomotion, the climbing gaits created from the step height data, as discussed in Sec. 2.2, are employed. In order to facilitate motion control in both locomotion modes, all the necessary kinematics and dynamics calculations are carried out within the CoppeliaSim simulation environment. This includes computing torques and angular velocities for each joint. The simulation outputs, along with these calculated values, are then sent back to MATLAB for further data analysis and energy usage calculations. During the step negotiation simulations, a timestep of 2 milliseconds is employed to simulate real-time dynamics accurately. | The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To assure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constraints: initial and final position, velocity, and acceleration [23]. The Reflexxes Motion Library IV [24] was utilized to perform the inverse kinematics calculation. | Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step. | C
As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online algorithms with advice can be of practical interest in settings in which it is feasible to run multiple algorithms and output the best solution (see [20] about obtaining improved data compression algorithms by means of list update algorithms with advice); and the first complexity classes for online computation have been based on advice complexity [10]. | Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of advice bits. The objective is thus to identify the exact trade-offs between the size of the advice and the performance of the algorithm. This is meant to provide a smooth transition between the purely online world (nothing is known about the input) and the purely “offline” world (everything is known about the input). | In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size $k$, let $\eta$ denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would be to study the power and limitations of online algorithms, i.e., from the point of view of both upper and lower bounds on the competitive ratio. A first approach towards this direction was made recently in the context of problems such as contract | It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution. Last, and perhaps more significantly, a malicious entity that takes control of the advice oracle can have a catastrophic impact. For a very simple example, consider the well-known ski rental problem: this is a simple, yet fundamental resource allocation, in which we have to decide ahead of time whether to rent or buy equipment without knowing the time horizon in advance. In the traditional advice model, one bit suffices to be optimal: 0 for renting throughout the horizon, 1 for buying right away. However, if this bit is wrong, then the online algorithm has unbounded competitive ratio, i.e., can perform extremely badly. In contrast, an online algorithm that does not use advice at all has competitive ratio at most 2, i.e., its output can be at most twice as costly as the optimal one. | Notwithstanding such interesting attributes, the known advice model has certain drawbacks. The advice is always assumed to be some error-free information that may be used to encode some property often explicitly connected to the optimal solution. In many settings, one can argue that such information cannot be readily available, which implies that the resulting algorithms are often impractical. | D
With the aim of avoiding cases of misclassification like in (d), we decided to implement the second classifier, SS3Δ, whose policy also takes into account the changes in both slopes.
As it can be seen from Algorithm 3 and as mentioned before, SS3Δ additionally classifies a subject as positive if the positive slope changes, at least, four times faster than the other one. | the accumulated negative confidence value starts being greater than the positive one, but as more chunks are read (specifically starting after reading the 3rd chunk), the positive value starts and stays growing until it exceeds the other one. In this case, this subject is classified as depressed after reading the 6th chunk.
|
the subject is misclassified as positive since the positive accumulated exceeded the negative one. When we manually analyzed cases like these we often found out that the classifier was correctly accumulating positive evidence since the users were, in fact, apparently depressed. | This problem can be detected in this subject by seeing the blue dotted peek at around the 60th writing, indicating that “the positive slope changed around five times faster than the negative” there, and therefore misclassifying the subject as positive. However, note that this positive change was in fact really small (less than 1).
| Figure 7 shows subject 1914 again, this time including information about the changes in the slopes.
Note that this subject was previously misclassified as not depressed because the accumulated positive value never exceeded the negative one, but by adding this new extra policy, this time it is correctly classified as positive after reading the 8th chunk (footnote 26: note the peak in the blue dotted line pointing out that, at this point, the positive value has grown around 11 times faster than the negative one). | D |
There are some other ways to combine momentum and error feedback. For example, we can put the momentum term on the server. However, these ways lead to worse performance than the way adopted in this paper. More discussions can be found in Appendix A.
| We can find that both local momentum and global momentum implementations of DMSGD are equivalent to the serial MSGD if no sparse communication is adopted. However, when it comes to adopting sparse communication, things become different. In the later sections, we will demonstrate that global momentum is better than local momentum when implementing sparse communication in DMSGD.
| GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum.
To the best of our knowledge, this is the first work to introduce global momentum into sparse communication methods. | However, the theory about the convergence of DGC is still lacking. Furthermore, although DGC combines momentum and error feedback, the momentum in DGC only accumulates stochastic gradients computed by each worker locally. Therefore, the momentum in DGC is a local momentum without global information.
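As a rough illustration of how momentum, error feedback, and top-k sparsification interact in such methods, here is a schematic single-worker Python sketch (names and constants are illustrative, not the exact GMC or DGC update). With local momentum the buffer tracks only the worker's own gradients, whereas a global-momentum variant would instead apply the momentum to the aggregated update shared by all workers.

```python
import numpy as np

def topk_sparsify(v, k):
    # keep the k largest-magnitude entries, zero out the rest
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class SparseWorker:
    def __init__(self, dim, lr=0.01, beta=0.9, k=10):
        self.residual = np.zeros(dim)   # error-feedback memory
        self.momentum = np.zeros(dim)   # momentum buffer (local-momentum variant)
        self.lr, self.beta, self.k = lr, beta, k

    def step(self, grad):
        self.momentum = self.beta * self.momentum + grad   # accumulate momentum
        update = self.residual + self.lr * self.momentum   # re-add what was not sent before
        sparse = topk_sparsify(update, self.k)             # only these entries are communicated
        self.residual = update - sparse                    # remember the rest (error feedback)
        return sparse                                      # aggregated (e.g. summed) on the server
```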
| We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is adopted.
| D |
$\bar{\varphi}$ is non-differentiable due to the presence of the $\ell_{0}$ pseudo-norm in Eq. 3.
A way to overcome this is using $\mathcal{L}$ as the differentiable optimization function during training and $\bar{\varphi}$ as the metric for model selection during validation, on which hyperparameter value decisions (such as kernel size) are made. | We set $med=m^{(i)}$ to allow a fair comparison between the sparse activation functions.
Specifically, for the Extrema activation function we introduce a ‘border tolerance’ parameter to allow neuron activation within another neuron's activated area. | The Extrema-Pool indices activation function (defined in Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size $m^{(i)}$, and zeros out the rest.
It consists of a max-pooling layer followed by a max-unpooling layer with the same parameters, while the sparsity parameter $d^{(i)}$ in this case is set to $d^{(i)}=m^{(i)}<n\in\mathbb{N}$. | We then pass $\bm{s}^{(i)}$ and a sparsity parameter $d^{(i)}$ to the sparse activation function $\phi$, resulting in the activation map $\bm{\alpha}^{(i)}$:
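The activation-map equation itself is not reproduced in this excerpt. As a minimal sketch of the Extrema-Pool indices activation described above (assuming PyTorch and a 1-D input of shape (batch, channels, length) whose length is a multiple of the kernel size), the max-pool/max-unpool pair keeps one extremum per grid cell:

```python
import torch
import torch.nn as nn

def extrema_pool_indices(s, kernel_size):
    # Keep, in each non-overlapping window of length kernel_size, only the single
    # largest-magnitude activation; zero out everything else.
    pool = nn.MaxPool1d(kernel_size, return_indices=True)
    unpool = nn.MaxUnpool1d(kernel_size)
    _, idx = pool(s.abs())                      # positions of the per-window extrema
    kept = torch.gather(s, 2, idx)              # original signed values at those positions
    return unpool(kept, idx, output_size=s.size())

x = torch.randn(1, 1, 12)
print(extrema_pool_indices(x, kernel_size=4))   # at most 3 non-zero entries remain
```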
|
We choose values of $d^{(i)}$ for each activation function in such a way as to have approximately the same number of activations, for a fair comparison of the sparse activation functions. | D |
The essence of PBLLA is to select an alternative UAV randomly in one iteration and improve its utility by altering power and altitude with a certain probability, which is determined by the utilities of the two strategies and $\tau$. A UAV prefers to select the power and altitude that provide higher utility. Nevertheless, highly dynamic scenarios will cause UAVs to make mistakes and pick the worse strategy. The dynamic degree index $\tau$ determines the dynamic degree of the situation and the UAV's performance. Small $\tau$ means less dynamic scenarios and fewer mistakes when UAVs are making decisions. When $\tau\rightarrow 0$, which corresponds to a stable environment, the UAV will always select the power and altitude with higher utility; when $\tau\rightarrow\infty$, where severe dynamics exist, the UAV will choose them randomly. However, PBLLA has the limitation that only a single UAV is allowed to alter its strategy in one iteration. We will propose a new algorithm in the next section to overcome this restriction. |
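For illustration, a standard binary log-linear choice rule with the behaviour just described can be sketched as follows (a hedged reconstruction of the selection probability, not the paper's exact formula); as $\tau\rightarrow 0$ the better strategy is chosen almost surely, and as $\tau\rightarrow\infty$ the choice becomes uniformly random.

```python
import numpy as np

def choose_strategy(u_current, u_trial, tau, rng=np.random.default_rng()):
    # Boltzmann (log-linear) probability of adopting the trial strategy
    m = max(u_current, u_trial)                  # subtract the max for numerical stability
    w_trial = np.exp((u_trial - m) / tau)
    w_current = np.exp((u_current - m) / tau)
    p_trial = w_trial / (w_trial + w_current)
    return "trial" if rng.random() < p_trial else "current"
```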
Compared with other algorithms, the novel SPBLLA algorithm has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely used algorithm, LLA, is an ideal method for approaching the NE [9][32]. BLLA has been employed by [33]; it is modified from LLA to update strategies in each iteration so as to converge to the NE. However, only a single agent is allowed to alter strategies in one iteration. In large-scale scenarios, more iterations are required, which makes BLLA inefficient. It is obvious that more UAVs altering strategies in one iteration would be more efficient. To achieve this, the works in [34] and [35] have provided a novel synchronous algorithm. However, there exist superabundant restrictions that make the algorithm impractical in most scenarios. Compared with the former approaches, SPBLLA has fewer constraints and can achieve synchronous operation, which can significantly improve the computational efficiency. |
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called the binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on the current game state, and then another UAV changes strategy in the next iteration based on the new game state. This means that UAVs are not permitted to update strategies at the same time. Besides, determining which UAV should update its strategy requires a coordination process that occupies plenty of channel capacity and requires more time between two iterations [15]. If the algorithm can learn synchronously, more than one UAV can update strategies based on the current game state in one iteration. Thus, the algorithm can be more efficient. To sum up, synchronous update algorithms which can learn from previous experiences are desirable, but only little research has investigated them. |
Since PBLLA only allows a single UAV to alter its strategy in one iteration, this defect would cause the computation time to grow quickly in large-scale UAV systems. In large-scale UAV ad-hoc networks with $M$ UAVs, $M^{2}$ message exchanges will be needed to coordinate and guarantee that only one UAV changes strategy in each iteration. Such a process not only consumes a large amount of energy but also prolongs convergence time. Algorithms that can improve the learning rate and reduce message exchange are urgently needed. Thus, we propose the Synchronous Payoff-based Binary Log-linear Learning Algorithm (SPBLLA), which permits UAVs to alter their strategies synchronously and to learn with no message exchange. | Fig. 15 presents the learning rate of PBLLA and SPBLLA when $\tau=0.01$. As $m$ increases, the learning rate of SPBLLA decreases, as shown in Fig. 15. However, when $m$ is small, SPBLLA's learning rate is about 3 times that of PBLLA, showing the great advantage of synchronous learning. When $\tau=0.015$ and $\tau=0.02$, as shown in Fig. 15, the same phenomenon also exists. Since PBLLA merely permits a single UAV to alter strategies in one iteration, SPBLLA's synchronous learning rate will be much larger than PBLLA's. Moreover, in large-scale UAV networks with high dynamics, PBLLA needs information exchange to decide the update order, which would severely prolong the learning time. PBLLA's learning time might be four times as long as that of SPBLLA. Thus we can conclude that under the same conditions (the same $\tau$ and other indexes), SPBLLA performs better and is more suitable for large-scale, highly dynamic environments than PBLLA, and SPBLLA can improve the learning rate several times over. With a larger strategy-altering probability, SPBLLA will be even more powerful.
| C |
$+\left[\frac{1}{\mu_{0}}\omega\mathbf{B}\cdot\nabla f+\frac{1}{\mu_{0}}f\nabla\cdot\big(\omega\mathbf{B}\big)\right]$ | with Poynting flux. Note that the terms $+\frac{(\mathbf{v}\cdot\nabla\psi)}{\mu_{0}r^{2}}\nabla\psi$
and $-\frac{\eta\Delta^{*}\psi}{\mu_{0}r^{2}}\nabla\psi$ in the final | $+\left[\frac{\eta(\Delta^{*}\psi)^{2}}{\mu_{0}r^{2}}+\frac{1}{\mu_{0}r^{2}}\nabla\psi\cdot\nabla(\eta\Delta^{*}\psi)\right]$ | $+\left[\frac{\eta(\nabla f)^{2}}{\mu_{0}r^{2}}+\frac{f}{\mu_{0}}\nabla\cdot\big(\tfrac{\eta}{r^{2}}\nabla f\big)\right]$ | $-\left[\frac{1}{\mu_{0}r^{2}}\Delta^{*}\psi(\mathbf{v}\cdot\nabla\psi)+\frac{1}{\mu_{0}r^{2}}\nabla\psi\cdot\nabla(\mathbf{v}\cdot\nabla\psi)\right]$ | B |
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12.
Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible.
Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow B$ as there are no counter-examples in the resulting closure system. | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$ and
$g_{3}$. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use
$\leq,\wedge,\vee$ instead of $\leq_{R},\wedge_{R},\vee_{R}$, respectively. | C |
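In the crisp special case, a counter-example to a functional dependency is simply a pair of tuples that agree on the left-hand side but disagree on the right-hand side; a small Python check of this classical notion (the graded, lattice-valued setting discussed here refines it) could look like:

```python
from itertools import combinations

def find_counterexample(relation, lhs, rhs):
    """relation: list of dicts mapping attribute names to values.
    Returns a pair of tuples violating lhs -> rhs, or None if the FD holds."""
    for t1, t2 in combinations(relation, 2):
        if all(t1[a] == t2[a] for a in lhs) and any(t1[b] != t2[b] for b in rhs):
            return t1, t2
    return None

r = [{"A": 1, "B": 2, "C": 0}, {"A": 1, "B": 3, "C": 0}]
print(find_counterexample(r, lhs=["A"], rhs=["B"]))   # these two tuples violate A -> B
```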
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation between the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on Variance before applying Dropout (DQN) and after applying Dropout (Dropout methods DQN). There was a statistically significant decrease in Variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore, one of the Dropout methods outperformed the DQN score. | Q-learning is among the most widely used reinforcement learning (RL) algorithms [4]. It is based on an incremental dynamic programming technique because of the step-by-step look-up table representation in which it determines the optimal policy [22]. The Q-learning algorithm employs a table to estimate the optimal action value function, $Q^{*}$. This table encompasses all states and actions within the environment and utilizes the value function to assess the quality (Q-function) of state-action pairs. It then updates using the following rule:
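The update rule referred to here is not reproduced in the excerpt; the standard tabular Q-learning update consistent with the symbol description given later in this row is (a reconstruction, not a quotation):

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \Big[ r + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Big]
```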
|
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow that could lead to solve the task on the environment [1][2][3]. Most of Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy in the form of sequence of actions that can be followed by the agent to solve the task. The process of choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning for that reason many algorithms have been introduced over a course of time, e.g, Q-learning[4], SARSA[5], and policy gradient methods[6]. These methods use linear function approximation techniques to estimate action value, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. The recent advances in deep neural networks helped to develop artificial agent named deep Q-network(DQN)[8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and the huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as ”overestimation phenomenon,” was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values. They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance. | The Gridworld problem (Figure 4) is a common RL benchmark. Its relatively small state space permits the Experience Replay (ER) buffer to store all possible state-action pairs. Moreover, this setup allows for the precise computation of the optimal action value function.
| where $s_{t+1}$ is the resulting state after applying action $a$ in the state $s$, $r$ is the immediate reward observed for action $a$ at state $s$, $\gamma$ is the discount factor, and $\alpha$ is the learning rate.
| C |
Weakly supervised segmentation uses image-level labels versus a few images with segmentation annotations. Most new weakly supervised localization methods apply attention maps or region proposals in multiple instance learning formulations. While attention maps can be noisy, leading to erroneously highlighted regions, it is not simple to decide on an optimal window or bag size for multiple instance learning approaches. |
We provide comprehensive coverage of research contributions in the field of semantic segmentation of natural and medical images. In terms of medical imaging modalities, we cover the literature pertaining to both 2D (RGB and grayscale) as well as volumetric medical images. | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important processing step in natural images for scene understanding and medical image analysis, for image-guided interventions, radiotherapy, or improved radiological diagnostics, etc. Image segmentation is formally defined as “the partition of an image into a set of nonoverlapping
regions whose union is the entire image” (Haralick and Shapiro, 1992). A plethora of deep learning approaches for medical image segmentation have been introduced in the literature for different medical imaging modalities, including X-ray, visible-light imaging (e.g. colour dermoscopic images), magnetic resonance imaging (MRI), positron emission tomography (PET), computerized tomography (CT), and ultrasound (e.g. echocardiographic scans). Deep architectural improvement has been a focus of many researchers for different purposes, e.g., tackling gradient vanishing and exploding of deep models, model compression for efficient small yet accurate models, while other works have tried to improve the performance of deep networks by introducing new optimization functions. |
Because of the large number of imaging modalities, the significant signal noise present in imaging modalities such as PET and ultrasound, and the limited amount of medical imaging data mainly because of high acquisition cost compounded by legal, ethical, and privacy issues, it is difficult to develop universal solutions that yield acceptable performances across various imaging modalities. Therefore, a proper research direction would be along the work of Raghu et al. (2019) on image classification models, studying the risks of using non-medical pre-trained models for medical image segmentation. |
While most deep segmentation models for medical image analysis rely on only clinical images for their predictions, there is often multi-modal patient data in the form of other imaging modalities as well as patient metadata that can provide valuable information, which most deep segmentation models do not use. Therefore, a valuable research direction for improving segmentation performance of medical images would be to develop models which are able to leverage multi-modal patient data. | D |
Black line: the threshold from [28] indicating the value of $\lambda^{s}_{\max}/2$ below which one should switch to the random cut to obtain a solution $\geq 0.53$ MAXCUT.
The x-axis indicates the density of the graph connectivity, which increases by randomly adding edges. | Fig. 4 illustrates how the size of the cut $\gamma(\mathbf{z})$ induced by the spectral partition $\mathbf{z}$ changes as more edges are added and the original structure of the graph is corrupted (blue line). The figure also reports the size of the random cut (orange line) and the MAXCUT upper bound from Eq. (12) (green line). The black line indicates the threshold from [28], i.e., the value of $\lambda^{2}_{\max}/2$ below which the spectral cut is no longer guaranteed to be larger than the random cut.
The graph used to generate the figure is a regular grid; however, similar results hold also for other families of random graphs and are reported in the supplementary material. | We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges.
Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yielded by the random partition; in green the MAXCUT upper bound; in black the theoretical threshold that indicates when to switch to the random partition to obtain a cut with size $\geq 0.53$ MAXCUT. | Black line: the threshold from [28] indicating the value of $\lambda^{s}_{\max}/2$ below which one should switch to the random cut to obtain a solution $\geq 0.53$ MAXCUT.
The x-axis indicates the density of the graph connectivity, which increases by randomly adding edges. | We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges.
Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yielded by the random partition; in green the MAXCUT upper bound; in black the theoretical threshold that indicates when to switch to the random partition to obtain a cut with size ≥0.53absent0.53\geq 0.53≥ 0.53 MAXCUT. | B |
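A minimal NumPy sketch of the comparison described above (spectral cut from the top Laplacian eigenvector versus a random cut, with half the largest eigenvalue as the reference threshold; this is the generic spectral heuristic, not necessarily the paper's exact method):

```python
import numpy as np

def spectral_vs_random_cut(A, seed=0):
    """A: symmetric 0/1 adjacency matrix of an undirected graph."""
    L = np.diag(A.sum(axis=1)) - A                      # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    z_spec = np.where(eigvecs[:, -1] >= 0, 1.0, -1.0)   # sign of the top eigenvector
    z_rand = np.random.default_rng(seed).choice([-1.0, 1.0], size=A.shape[0])
    cut_size = lambda z: 0.25 * z @ L @ z               # number of edges crossing the cut
    return cut_size(z_spec), cut_size(z_rand), eigvals[-1] / 2.0
```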
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples.
Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014). | Decision trees learn rules by splitting the data. The rules are easy to interpret and additionally provide an importance score of the features.
Random forests (Breiman, 2001) are an ensemble method consisting of multiple decision trees, with each decision tree being trained using a random subset of samples and features. | While the generated decision rules are simple and interpretable, the orthogonal separation of the feature space can also be disadvantageous on other datasets, especially with correlated features (Menze et al., 2011).
Additionally, random forests are not differentiable and cannot be fine-tuned with gradient-based optimization. | The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees.
Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to simple random forests. | (1) We enable the generation of neural networks with very few training examples.
(2) The resulting network can be used as a warm start, is fully differentiable, and allows further end-to-end fine-tuning. (3) The generated network can be easily integrated into any trainable pipeline (e.g., jointly with feature extraction) and existing high-performance deep learning frameworks can be used directly. This accelerates the process and enables parallelization via GPUs. | B |
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a d2H3Tsuperscript𝑑2superscript𝐻3𝑇\sqrt{d^{2}H^{3}T}square-root start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_H start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT italic_T end_ARG-regret up to logarithmic factors.
|
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice. |
Our work is closely related to another line of work (Even-Dar et al., 2009; Yu et al., 2009; Neu et al., 2010a, b; Zimin and Neu, 2013; Neu et al., 2012; Rosenberg and Mansour, 2019a, b) on online MDPs with adversarially chosen reward functions, which mostly focuses on the tabular setting. | Assuming the transition dynamics are known but only the bandit feedback of the received rewards is available, the work of Neu et al. (2010a, b); Zimin and Neu (2013) establishes an $H^{2}\sqrt{|\mathcal{A}|T}/\beta$-regret (Neu et al., 2010b), a $T^{2/3}$-regret (Neu et al., 2010a), and a $\sqrt{H|\mathcal{S}||\mathcal{A}|T}$-regret (Zimin and Neu, 2013), respectively, all up to logarithmic factors. Here $\mathcal{S}$ is the state space and $|\mathcal{S}|$ is its cardinality. In particular, it is assumed by Neu et al. (2010b) that, with probability at least $\beta$, any state is reachable under any policy.
|
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same T𝑇\sqrt{T}square-root start_ARG italic_T end_ARG-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions. | B |
On the contrary, GPUs feature large register files and aim to hide memory latency by leveraging parallel slackness.
Another critical aspect of loop-back architectures is low compute utilization, which can potentially occur if certain layer or operation types do not fit the static compute array (i.e., if operation size is too low). | The results reveal that quantization does not provide throughput improvements on this processor.
This is mainly due to the efficient floating-point units within the CPU in combination with fast on-chip memory and the high overhead resulting from performing low-bit-width computations. | The advantage of their approach is that weight assignments need not be stored explicitly since they are given implicitly by the hashing function.
The authors show a memory footprint reduction by a factor of 10 while keeping the prediction quality essentially unaffected. | The advantage of such a generic compute architecture is that they allow arbitrary operations in combination with productive code generation since the hardware does not need to be optimized for a certain task.
Continuous improvements in semi-conductor and processor technology are the main improvement factor of such inference engines. | While domain-specific accelerators, such as Google’s TPU, excel in their specific performance, they are usually limited to a set of specific operations and are neither flexible in terms of data types nor sparse calculations. Furthermore, in particular for the TPU, experimentation is often hindered due to limitations in the tool chain which is not flexible enough to support such optimizations. They are not suited to execute generic compressed models and are therefore not included in the following experiments.
| C |
$\{v_{0},v_{27}\}+\{v_{27},v_{28}\}+\{v_{28},v_{14}\}+\{v_{14},v_{29}\}+\{v_{29},v_{23}\}+\{v_{23},v_{30}\}+\{v_{30},v_{31}\}+\{v_{31},v_{0}\},$ | $\omega_{1}$ is the degree-1 homology class induced by
|
$\omega_{0}$ is the degree-1 homology class induced by | and seeks the infimal $r>0$ such that the map induced by $\iota_{r}$ at the $n$-th homology level annihilates the fundamental class $[M]$ of $M$. This infimal value defines $\mathrm{FillRad}(M)$, the filling radius of $M$.
|
ω2 is the degree-1 homology class induced bysubscript𝜔2 is the degree-1 homology class induced by\displaystyle\omega_{2}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the degree-1 homology class induced by | A |
A DR method is an algorithm that projects a high-dimensional data set to a low-dimensional representation, preserving the structure of the original data as much as possible.
Most of these algorithms have some (or many) hyper-parameters that may considerably affect their results, but setting them correctly is not a trivial task. In Subsection 2.1, we briefly describe techniques that try to solve this problem, and discuss the differences to our tool’s functionality. The resulting projection is usually visualized with scatterplots, which support tasks such as finding groups of similar points, correlations, and outliers [16]. However, a scatterplot is simply the first step in analyzing a high-dimensional data set through a projection: questions regarding the quality of the results (see Subsection 2.2) and how to interpret them (see Subsection 2.3) are pervasive in the literature on the subject. | Overall Accuracy
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and none of them appears to have a clear advantage over the others, we pick one with good values for all the rest of the quality metrics (i.e., greater than 40%). The overview in Figure 7(a) shows the selected projection with three clear clusters of varying sizes (marked with C1, C2, and C3). However, the labels seem to be mixed in all of them. That means either the projections are not very good, or the labels are simply very hard to separate. By analyzing the Shepard Heatmap (Figure 7(b)), it seems that there is a distortion in how the projection represents the original N-D distances: the darker cells of the heatmap are above the diagonal and concentrated near the origin, which means that the lowest N-D distances (up to 30% of the maximum) have been represented in the projection with a wide range of 2-D distances (up to 60% of the maximum). While it may be argued that the data is too spread in the projection, we must always consider that t-SNE’s goal is not to preserve all pairwise distances, but only close neighborhoods. The projection has used most of its available 2-D space to represent (as best as possible) the smallest N-D distances, which can be considered a good trade-off for this specific objective. In the following paragraphs, we concentrate on some of the goals described in Subsection 4.3 and Subsection 4.4 for each of the three clusters. | Fujiwara et al. [44] proposed the contrasting clusters in PCA (ccPCA) method to find which dimensions contributed more to the formation of a selected cluster and why it differs from the rest of the dataset, based on information on separation and internal vs. external variability. We have similar goals, but approach them with different methods. For exploring clusters and selections in general, we use PCA to filter and order a local PCP plot; this could be easily adapted to use ccPCA instead as an underlying method for choosing which dimensions to filter and how to re-order the axes, without affecting the overall proposed analytical flow of the tool. On the other hand, ccPCA does not deal with the analysis of shapes, which we support with our proposed Dimension Correlation.
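As an illustration of how such quality numbers can be computed in practice (a sketch using scikit-learn and SciPy, not the tool's own implementation; trustworthiness is used here as a stand-in for neighbourhood-preservation metrics such as continuity, and a rank correlation summarizes the Shepard diagram):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.manifold import TSNE, trustworthiness

def project_and_score(X, perplexity=30, random_state=0):
    # X: (n_samples, n_features) with n_samples > perplexity
    Y = TSNE(n_components=2, perplexity=perplexity,
             random_state=random_state).fit_transform(X)
    trust = trustworthiness(X, Y, n_neighbors=7)             # neighbourhood preservation
    shepard_rho = spearmanr(pdist(X), pdist(Y)).correlation   # distance-rank agreement
    return Y, trust, shepard_rho
```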
Other recent approaches include DimReader [45], where the authors create so-called generalized axes for non-linear DR methods, but besides explaining a single dimension at a time, it is currently unclear how exactly it can be used in an interactive exploration scenario; and | A DR method is an algorithm that projects a high-dimensional data set to a low-dimensional representation, preserving the structure of the original data as much as possible.
Most of these algorithms have some (or many) hyper-parameters that may considerably affect their results, but setting them correctly is not a trivial task. In Subsection 2.1, we briefly describe techniques that try to solve this problem, and discuss the differences to our tool’s functionality. The resulting projection is usually visualized with scatterplots, which support tasks such as finding groups of similar points, correlations, and outliers [16]. However, a scatterplot is simply the first step in analyzing a high-dimensional data set through a projection: questions regarding the quality of the results (see Subsection 2.2) and how to interpret them (see Subsection 2.3) are pervasive in the literature on the subject. | A few other tools have been proposed throughout the years that incorporate these techniques to deal with the problem of supporting the exploration of multidimensional data with DR. In Subsection 2.4, we discuss their goals and trade-offs, and compare them with t-viSNE.
| D |
Topologies: A promising research direction is to jointly consider topologies and ensemble strategies to leverage the superior explorative/exploitative powers of ensembles and also topologies for population-based metaheuristics to achieve better solutions than other solvers. | We should pause and reflect on which research directions should be pursued in the future in regard to bio-inspired optimization and related areas, as there are other remarkable fields to be noted as direct applications for bio-inspired optimization. In [3], the authors show a full discussion of the status of the field from both descriptive (where we stand) and prescriptive (what’s next) points of view. Here, we describe the areas in which bio-inspired optimization algorithms are used, and research niches related to them, as shown in Figure 7. The areas and their main aspects that can be studied as promising research lines are:
|
Surrogate model-assisted optimization: This area has promising research lines of investigation with highly dimensional search spaces and DL models, where there is a need to alleviate high computational efforts, with evaluation times that range from hours to days per experiment. |
From a design perspective, nature- and bio-inspired optimization algorithms are usually conceived after observing a natural process or the behavioral patterns of biological organisms, which are then converted into a computational optimization algorithm. New discoveries in Nature and the undoubted increase of worldwide investigation efforts have ignited the interest of the research community in biological processes and their extrapolation to computational problems. As a result, many new bio-inspired meta-heuristics have appeared in the literature, increasing the outbreak of proposals and applications every year. Nowadays, every natural process can be thought to be adaptable and emulated to produce a new meta-heuristic approach, yet with different capabilities of reaching global optimum solutions to optimization problems. |
Going deeper into the creation of Machine Learning (ML) and Deep Learning (DL) models: Although most algorithms have been developed in recent years, the impact of EAs, a classical family of algorithms, has risen in the last few years. Their use in ML has been widely studied both for the design of models [615] and also as a support for the optimization of those models [616]. These algorithms have gained momentum under the evidence reported around their usage to evolve and improve other AI techniques: most notably, the optimization of the structure and training parameters of deep neural networks [8], or the creation of new data-based models from scratch (i.e. by evolving very essential data processing primitives) that has been presented in the groundbreaking work by Google [617]. With this ongoing development, the research trend of Neural Architecture Search has emerged as another important area full of EAs applications [618], which mainly focuses on the construction of the DL model via the evolution of block of layers [619, 14, 620]. Recently, we have witnessed the use of EAs to model more AI models, as in the case of POET [621] where more environments are generated to learn from the diversity created, with the merging of EAs with Large Language Model (LLM) [622], and with other areas such as Automated Machine Learning [623], Reinforcement Learning and robotics [624], and Multi-task Learning [625]. In recent years, an interesting synergy between bio-inspired optimization and modern ML systems has been observed in the literature, in particular General-Purpose Artificial Intelligence Systems (GPAIS), as we will highlight later in the report. | B |
where $\varphi(\cdot)$ is a certain activation function, $\hat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}$, $\widetilde{A}=A+I$, $\widetilde{D}$ denotes the degree matrix ($\widetilde{D}_{ii}=\sum_{j=1}^{n}\widetilde{A}_{ij}$), and $W$ denotes the parameters of the GCN. It should be pointed out that $\widetilde{A}$ is a graph with a self-loop for each node and $\hat{A}$ is the normalized adjacency matrix. More importantly, $\hat{A}X$ is equivalent to computing weighted means for each node with its first-order neighbors from the spatial aspect. To improve the performance, MixHop [26] aims to mix information from different order neighbors and SGC [27] tries to utilize higher-order neighbors.
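A minimal NumPy sketch of this propagation rule (a direct transcription of the formula above, with ReLU as the assumed activation):

```python
import numpy as np

def gcn_layer(A, X, W, activation=lambda t: np.maximum(t, 0.0)):
    # H = phi( D^{-1/2} (A + I) D^{-1/2} X W )
    A_tilde = A + np.eye(A.shape[0])              # add self-loops
    d = A_tilde.sum(axis=1)                       # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # normalized adjacency
    return activation(A_hat @ X @ W)              # weighted neighbour averaging + linear map
```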
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity, $k$. With the new graph, we re-train the GAE. These steps are repeated until convergence.
The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction. | To apply graph convolution on unsupervised learning, GAE is proposed [20].
GAE firstly transforms each node into latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via decoder while GAEs developed in [21] attempt to reconstruct the content. The difference is which extra mechanism (such as attention, adversarial learning, graph sharpness, etc.) is used. | (1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for decoders.
(2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it. | C |
• Traffic load. Network scans, such as (Lyon, 2009; Durumeric et al., 2013; Kührer et al., 2014), require exchanging packets with a large number of Internet networks as well as IP addresses inside the networks. To avoid scanning the Internet, we periodically download a dataset of a full scan of the Internet done by Sonar.
| Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 2018), or by identifying spoofed packets using offline analysis of traffic, e.g., (Lone et al., 2017; Luckie et al., 2019). The need to install agents on networks or the ability to obtain traces only from some networks limits the studies to non-uniform coverage of the Internet. Therefore it is not clear how representative these statistics are.
Unfortunately, this limitation to a small set of networks creates a bias in the assessments of the overall number of spoofable networks. The extrapolation from the small set of networks to the entire Internet typically result in assessment that at least 30% of the Internet networks do not filter spoofed packets (Luckie et al., 2019; Man et al., 2020). As we show, the number of spoofable networks is above 72% which is significantly higher than what was previous believed. |
∙∙\bullet∙ Consent of the scanned. It is often impossible to request permission from owners of all the tested networks in advance, this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies, (Durumeric et al., 2013, 2014), we provide an option to opt out of our scans. To opt out the network has to provide either its network block (in CIDR notation), domain or ASN through the contact page at https://smap.cad.sit.fraunhofer.de. Performing security scans is important - the networks that do not enforce filtering of spoofed packets pose a hazard not only to their operators but also to their users, customers and services, as well as other networks. Due to the importance of identifying such networks, in their recent study (Luckie et al., 2019) even make public the (“name-and-shame”) lists of providers with missing or misconfigured filtering of spoofed packets; (Luckie et al., 2019) also discuss stronger measures against spoofable networks, including liability for damages, and various types of regulation. Inevitably, due to the risks that such networks pose to the Internet ecosystem, it is of public interest to know who those networks are. We do not make the identity of the networks, that do not filter spoofed packets, publicly available, but inform the general public on the fraction of such networks and provide their characterisation (i.e., size, geo-location, business type) in Section 5. | How widespread is the ability to spoof? There are significant research and operational efforts to understand the extent and the scope of (ingress and egress)-filtering enforcement and to characterise the networks which do not filter spoofed packets; we discuss these in Related Work, Section 2. Although the existing studies and tools, such as the Open Resolver (Mauch, 2013) and the Spoofer (Beverly and Bauer, 2005; Beverly et al., 2009, 2013; Lone et al., 2018; Luckie et al., 2019) projects, provide a valuable contribution for inferring networks which do not enforce spoofing, they are nevertheless insufficient: they provide a meager (often non-uniform) coverage of the Internet networks and are limited in their applicability as well as effectiveness.
| ∙∙\bullet∙ Traffic load. Network scans, such as (Lyon, 2009; Durumeric et al., 2013; Kührer et al., 2014), require exchanging packets with a large number of Internet networks as well as IP addresses inside the networks. To avoid scanning the Internet we periodically download a dataset of a full scan of the Internet done by Sonar.
| B |
Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161 and 3,600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal oxide-based gas sensors. These features summarizing the time series sensor responses are the raw and normalized steady-state features and the exponential moving average of the increasing and decaying transients taken at three different alpha values. The experiments used six gases, ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene, presented in arbitrary order and at variable concentrations. Chemical interferents were also presented to the sensors between batches, and the time between presentations varied, both of which contributed to further sensor variability. The dataset thus exemplifies sensor variance due to contamination and variable odor concentration in a controlled setting.
| Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161161161161 to 3,60036003{,}6003 , 600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal oxide-based gas sensors. These features summarizing the time series sensor responses are the raw and normalized steady-state features and the exponential moving average of the increasing and decaying transients taken at three different alpha values. The experiments used six gases, ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene, presented in arbitrary order and at variable concentrations. Chemical interferents were also presented to the sensors between batches, and the time between presentations varied, both of which contributed to further sensor variability. The dataset thus exemplifies sensor variance due to contamination and variable odor concentration in a controlled setting.
|
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design introduces variation in training inputs, which makes it harder to learn consistent context patterns. For this task, semisupervised learning techniques, such as self-labeled samples, may help. If the context layer can process unlabeled data, then it is no longer necessary to include every class in every batch. The full six-gas sensor drift dataset can be used, as well as other unbalanced and therefore realistic datasets. | Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first T−1𝑇1T-1italic_T - 1 batches are used for training, while the next unseen batch T𝑇Titalic_T is used for evaluation. When training the context network, subsequences of the training data are selected to be processed recurrently, indicated by the labels s𝑠sitalic_s through p𝑝pitalic_p. In all cases, training data is obtained only from the first T−1𝑇1T-1italic_T - 1 batches of data. (B.) A feature vector is input to a collection of SVMs, one trained on each prior batch. Each SVM output is weighted by its corresponding coefficient, β𝛽\betaitalic_β, and the weighted sum of the output class predictions is taken to be the output, 𝐲^normal-^𝐲\hat{\mathbf{y}}over^ start_ARG bold_y end_ARG, of the ensemble. (C.) A schematic of the skill model shows feedforward progression of input through two hidden layers 𝐬𝐬\mathbf{s}bold_s and 𝐝𝐝\mathbf{d}bold_d followed by the output layer 𝐲^normal-^𝐲\hat{\mathbf{y}}over^ start_ARG bold_y end_ARG. (D.) A schematic of the context+skill model introduces a sequential processing of prior samples as a separate processing pathway. For each context batch from s𝑠sitalic_s through p−1𝑝1p-1italic_p - 1, one sample per odor class is chosen as a representative. The context information is then utilized by the “decision-making” layer 𝐝𝐝\mathbf{d}bold_d and is thus integrated into the feedforward pathway.
|
Two processing steps were applied to the data used by all models included in this paper. The first preprocessing step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5. Data was too incomplete for drawing meaningful conclusions. Also, with such data missing it was not possible to construct contexts from odor samples from each class in previous batches. The second preprocessing step normalized each feature so that all values corresponding to any feature dimension of the 128 total have zero mean and unit variance as is standard practice in deep learning. | D |
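The normalization step described above is simple to reproduce; the following is a minimal sketch (not the authors' code), assuming the 128-dimensional feature vectors are stored in NumPy arrays:

```python
import numpy as np

def standardize_features(X_fit, X_apply):
    # Scale each of the 128 feature dimensions to zero mean and unit variance,
    # using statistics computed from X_fit (e.g., the training batches).
    mean = X_fit.mean(axis=0)
    std = X_fit.std(axis=0) + 1e-8   # guard against constant features
    return (X_fit - mean) / std, (X_apply - mean) / std
```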
The goal would be to obtain an algorithm with running time $2^{O(f(\delta)\sqrt{n})}$, where $f(n)=O(n^{1/6})$.
Such a running time becomes $2^{O(\sqrt{n})}$ for constant $\delta$ (which is optimal for TSP in $\mathbb{R}^{2}$, under ETH), and it becomes $2^{O(n^{2/3})}$ for $\delta=n$ (which is optimal for TSP in $\mathbb{R}^{3}$, assuming ETH). | First of all, the $\Delta_{i}$ are now independent.
Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this way. | In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings.
In the third step, we will explain which changes are made to the algorithm. | It would be interesting to see whether a direct proof can be given for this fundamental result.
We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the $x$-coordinates of the points need not be integer, as long as the difference between $x$-coordinates of any two consecutive points is at least 1. | We believe that our algorithm can serve as the basis of an algorithm solving such a problem, under the assumption that the point sets are dense enough to ensure that the solution will generally follow these curves / segments. Making this precise, and investigating how the running time depends on the number of line segments, would be interesting.
| D |
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the element on the (full) subtree rooted at the node is the same as that of a (possibly different) element on the entire tree (i. e. at the root). The idea for the name here is that the action on a full subtree is similar to the action of the group or semigroup on the entire tree. An important special case of such a self-similar presentation occurs when there is a finite set of generators such that the action of any generator on the subtree below any node is the same as the action of some (potentially different) generator at the root. By identifying the nodes of the infinite regular tree with the strings over an appropriate finite alphabet, we can describe such an action using a finite automaton (more precisely, a finite-state letter-to-letter – or synchronous – transducer), which leads to the class of automaton semigroups and automaton groups (also often called ‘automata groups’). If we relax the finite-state requirement and also consider infinite automata, we can even describe any self-similar action in this way. This is the approach we will take in this paper.
|
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much simpler. While it is known that the free semigroup of rank one is not an automaton semigroup [4, Proposition 4.3], the free semigroups of higher rank can be generated by an automaton [4, Proposition 4.1]. In fact, the construction to generate these semigroups is quite simple [4, Proposition 4.1] (compare also to 3). The same construction can also be used to generate free monoids as automaton semigroups or monoids. Here, the main difference is that the free monoid in one generator can indeed be generated by an automaton: it is generated by the adding machine (see 1), which also generates the free group of rank one if inverses are added. On a side note, it is also worthwhile to point out that – although there does not seem to be much research on the topic – there are examples to generate the free inverse semigroup of rank one as a subsemigroup of an automaton semigroup [14, Theorem 25] and an adaption to present the free inverse monoid of rank one as an automaton semigroup [6, Example 2] (see also [8, Example 23]). |
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, lose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]). | from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups (Theorem 6; note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for $S$ and for $T$ are finite, so these constructions also work for self-similar semigroups, although this is not explicitly stated there), but observe that the constructed generating automaton for $S\star T$ is finite (and/or complete) if this was the case for the original two automata generating $S$ and $T$. The existence of a homomorphism from $S$ to $T$ (or vice-versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or being an automaton semigroup; Theorem 14). This is noteworthy because – as mentioned above – the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata, see [8, Theorem 19] and [20, Theorem 1.2.1.4]). | The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing the self-similarity property and that the analogous statement for automaton semigroups holds as well. The version for automaton semigroups does not follow directly from 8, as the free monogenic semigroup is not a complete automaton semigroup [4, Proposition 4.3] or even a (partial) automaton semigroup (see [8, Theorem 18] or [20, Theorem 1.2.1.4]).
| A |
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
|
The usage of visual cues and sensitivities in existing methods is superfluous because the results indicate that performance improves through degradation of training accuracy. We hypothesize that simple regularization that does not rely on cues or sensitivities can also achieve large performance gains for VQA-CP. To test this hypothesis, we devise a simple loss function which continuously degrades the training accuracy by training the network to always predict a score of zero for all possible answers, i.e., produce a zero vector ($\mathbf{0}$). The overall loss function can be written as: |
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the predictions are correct or incorrect. We find that this approach also achieves near state-of-the-art performance (48.9% on VQA-CPv2), providing further support for our claims. | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
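As a rough illustration of the zeroing-out regularizer described above (not the authors' exact formulation; the weighting between the two terms is an assumption), the extra term simply uses an all-zeros vector as the target score for every answer:

```python
import torch
import torch.nn.functional as F

def loss_with_zero_regularizer(answer_logits, target_scores, reg_weight=1.0):
    # Standard VQA term: multi-label BCE against the ground-truth answer scores.
    vqa_loss = F.binary_cross_entropy_with_logits(answer_logits, target_scores)
    # Regularizer: the target is the zero vector, so the model is penalized
    # whether its predictions are correct or incorrect.
    reg_loss = F.binary_cross_entropy_with_logits(answer_logits,
                                                  torch.zeros_like(answer_logits))
    return vqa_loss + reg_weight * reg_loss
```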
| It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons.
| B |
To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents. Since RoBERTa accepts a maximum of 512 tokens as input, only the first 512 tokens of text from the documents were used for training while the rest was discarded. As shown in the analysis section, the average length of a privacy policy in terms of the number of words is 1,871. Thus 512 tokens would take into account about a fourth of an average privacy policy.
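A minimal sketch of this setup with the HuggingFace API (the checkpoint name and the two-class head are illustrative assumptions, not the released code):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def encode(document_text):
    # Keep only the first 512 tokens; the rest of the document is discarded.
    return tokenizer(document_text, truncation=True, max_length=512,
                     padding="max_length", return_tensors="pt")

outputs = model(**encode("Example privacy policy text ..."))
logits = outputs.logits   # scores for policy vs. not-policy
```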
|
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application from the Google Play Store, legal experts were recruited to identify relevant evidence within respective privacy policies that answered the question asked by the crowdworkers. The goal of the question answering task is to identify a set sentences in the privacy policy that has information relevant to the question. Ravichander et al. (2019) divided the corpus into 1,350 questions for training and validation and 400 questions for testing where each question in the test set is annotated by at least three experts. We fine-tuned PrivBERT on the training set as a binary classification task on each question-answer sentence pair to identify if the sentence is evidence for the question or not. We trained the model with a dropout of 0.2 and a learning rate of 3e-6 with the positive and negative classes weighted in the ratio 8:1 during training. We used sentence level F1 as the evaluation metric as described by Ravichander et al. (2019), where precision and recall are calculated by measuring the overlap between the predicted sentences and gold standard sentences. |
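The question-sentence pairs can be encoded jointly and trained with a class-weighted loss; a hedged sketch using a generic RoBERTa checkpoint as a stand-in for PrivBERT (the 8:1 weighting mirrors the ratio mentioned above, everything else is an assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Encode a (question, candidate sentence) pair as one sequence.
enc = tokenizer("Does the app share my location?",
                "We may share device location with advertising partners.",
                truncation=True, max_length=512, return_tensors="pt")
logits = model(**enc).logits

# Weight the positive (evidence) class 8x relative to the negative class.
loss_fn = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 8.0]))
loss = loss_fn(logits, torch.tensor([1]))   # label 1 = sentence is evidence
```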
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. Due to its size, it was possible for the held-out test set to have a biased sample. Thus we repeated the sampling and training processes with a 5-fold cross-validation approach. Table 1 shows the performance of the models after the results from the test sets were averaged. Since the transformer-based model had the best results, we ran it on all the candidate privacy policies. Out of 2.1 million English candidate privacy policies, 1.54 million were classified as privacy policies and the rest were discarded. | Document Classification. Some of the web pages in the English language candidate document set may not have been privacy policies and instead simply satisfied our URL selection criteria. To separate privacy policies from other web documents we used a supervised machine learning approach. Two researchers in the team labeled 1,600 randomly selected candidate documents based on a preset scheme in consultation with a privacy expert. While both the researchers had substantial prior experience with privacy policies, the privacy expert was consulted to eliminate uncertainty in the annotations of a few documents. Lack of agreement in the annotations occurred for six documents, which were settled by discussion with the expert.
Out of 1,600 documents, 1,145 were privacy policies and 455 were not privacy policies. | The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
| B |
The second expert (E2) is a senior researcher in software engineering and applied ML working in a government research institute and as an adjunct professor. He has worked with ML for the past 7 years, and 2 years with stacking ensemble learning. The third expert (E3) is the head of applied ML in a large multinational corporation, working with recommendation systems. She has approximately 7 years of experience with ML, of which 1.5 years are related to stacking ensemble learning. All three experts have a PhD in computer science and none of them reported any colorblindness issues.
The process was as follows: (1) we presented the main goals of our system, (2) we explained the process of improving the heart disease data set results (see section 4), and (3) after that, we gave them a couple of minutes to interact with the VA system by using the simple iris data set. | Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense.
They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data. | Another positive opinion from E3 was that, with a few adaptations to the performance metrics, StackGenVis could work with regression or even ranking problems.
E3 also mentioned that supporting feature generation in the feature selection phase might be helpful. Finally, E1 suggested that the circular barcharts could only show the positive or negative difference compared to the first stored stack. To avoid an asymmetric design and retain a lower complexity level for StackGenVis, we omitted his proposal for the time being, but we consider implementing both methods in the future. | (ii) in the next algorithm exploration phase, we compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models;
(iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model exploration allows us to reduce the size of the stacking ensemble, discard any unnecessary models, and observe the predictions of the models collectively (panel (d) of the StackGenVis overview figure); | Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand which instances should be wrangled”, said E3. E2 stated that having an evaluation metric from early on is important for benchmarking purposes to choose the best strategy while data scientists and domain experts are collaborating. He also noted that flexibility of the workflow—not forcing the user to use all parts of the VA system for every problem—is an extra benefit.
| A |
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$,
$(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$. | cannot be adjacent to $\overline{2}$ nor $\overline{3}$,
and so $f^{\prime}$ is $[013]$ or $[010]$. | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | D |
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML:
Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transformer/CNN on each meta-testing task. | Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on plenty of tasks (i.e. small data sets) to get a parameter initialization which is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML is successfully employed in different NLP applications.
Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020]. | To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML:
Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transformer/CNN on each meta-testing task. | Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances on average in Persona and Weibo respectively. We train and evaluate Transformer-F and MAML on this setting. (Table 2).
When tasks are similar to each other, MAML performs comparatively poorly. In Persona and Weibo, the performance of MAML is similar to that of Transformer-F, while MAML performs significantly better than Transformer-F when tasks are different. A possible explanation is that if there is no clear distinction between tasks, the meta-learning setting can be viewed as a transfer learning setting, which only has a source domain and a target domain, and fine-tuning performs well in transfer learning. So if the tasks are similar to each other, we can simply use Transformer-F rather than MAML. | Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2).
When the data quantity is small, the advantage of MAML is more significant. In Persona, the C Score and BLEU of MAML outperform baselines on 50-shot and 100-shot settings, but on 120-shot setting, the BLEU of MAML is lower than Transformer-F. In Weibo, FewRel and Amazon, the percentages that MAML outperforms the baselines by also decrease as the data quantity increasing. This finding is in line with the mechanism of MAML. MAML finds a sensitive parameter initialization that can adapt with few data samples [Finn et al., 2017]. | D |
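The inner/outer loop that gives MAML this behaviour can be sketched on a toy regression problem (this is not the dialogue model from the experiments; the linear model and the sample_tasks helper below are purely illustrative):

```python
import torch

def sample_tasks(n_tasks=4, dim=8, shots=5):
    # Hypothetical task sampler: each task is a random linear-regression problem
    # with a support set (for adaptation) and a query set (for the meta-loss).
    for _ in range(n_tasks):
        true_w = torch.randn(dim, 1)
        x_s, x_q = torch.randn(shots, dim), torch.randn(shots, dim)
        yield x_s, x_s @ true_w, x_q, x_q @ true_w

def task_loss(w, x, y):
    return ((x @ w - y) ** 2).mean()

w = torch.zeros(8, 1, requires_grad=True)       # meta-initialization being learned
meta_opt = torch.optim.Adam([w], lr=1e-3)
inner_lr = 0.05

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for x_s, y_s, x_q, y_q in sample_tasks():
        # Inner loop: one gradient step on the support set, kept differentiable.
        g = torch.autograd.grad(task_loss(w, x_s, y_s), w, create_graph=True)[0]
        w_adapted = w - inner_lr * g
        # Outer loss: evaluate the adapted parameters on the query set.
        meta_loss = meta_loss + task_loss(w_adapted, x_q, y_q)
    meta_loss.backward()                          # backprop through the inner step
    meta_opt.step()
```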
As $\alpha_{i}$ and $\beta_{j}$ are the quantizations of the azimuth angle and the elevation angle, respectively, the indexes of the optimal codewords $i_{k}^{*}$ and $j_{k}^{*}$ in the given layer of the codebook according to (42) are given by
$i_{k}^{*}=\left\lceil\frac{\alpha_{t,k}(t)}{BW_{a}}\right\rceil$, and $j_{k}^{*}=\left\lceil\frac{\beta_{t,k}(t)}{BW_{e}}\right\rceil$. | Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element cannot be contained in different subarrays, the problem of activated CCA subarray partition arises at the r-UAV side for fast multi-UAV beam tracking. The dynamic CCA subarray partition can be considered as dynamic antenna resource allocation for multiple t-UAVs, which has a strong impact on the sum SE of the UAV mmWave network.
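In code, this codeword index selection reduces to two ceiling operations (a sketch with illustrative variable names, not the paper's implementation):

```python
import math

def optimal_codeword_indices(alpha, beta, bw_azimuth, bw_elevation):
    # i_k* = ceil(alpha_{t,k}(t) / BW_a), j_k* = ceil(beta_{t,k}(t) / BW_e)
    return math.ceil(alpha / bw_azimuth), math.ceil(beta / bw_elevation)
```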
|
Figure 6: The subarray patterns on the cylinder and the corresponding expanded cylinder. (a) The t-UAV subarray partition pattern. (b) The r-UAV subarray partition pattern with conflict. (c) The r-UAV subarray partition pattern without conflict. (d) The t-UAV subarray partition pattern with beamwidth selection. | The t-UAV needs to select an appropriate codeword $\boldsymbol{v}(i,j,\mathcal{S})$ from our proposed codebook $\mathcal{V}_{k}$ to solve the subarray partition and AWV selection problem in (35). Note that after the codeword $\boldsymbol{v}(i,j,\mathcal{S})$ is selected, the beam pattern and the subarray pattern are determined.
Given AODs, the maximum size of the activated subarray should be selected and the quantization error between the AODs and the beam angles in the codeword should be minimized to maximize the beam gain of the beamforming vector of the $k$-th t-UAV. Therefore, the optimal codeword $\boldsymbol{v}(i_{k}^{*},j_{k}^{*},\mathcal{S}(m_{s,k}^{*},n_{s,k}^{*},\boldsymbol{p}_{c,k}(i_{k}^{*})))$ | According to (20), the codeword $\boldsymbol{v}(i,j,\mathcal{S})$ includes both the beam pattern information and the subarray pattern information. The beam pattern information mainly includes the beam angle $(\alpha_{i},\beta_{j})$ and the beam width determined by the size of $\mathcal{S}$; the subarray pattern information includes the subarray location and size determined by $\mathcal{S}$.
| B |
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will
also be used as the base cases in inductive constructions for the case with arbitrary colors. | We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will
also be used as the base cases in inductive constructions for the case with arbitrary colors. | The requirement that $\bar{M}|\bar{N}$ is extra big enough ensures that we have enough edges to perform the edge swapping.
This completes the proof for case 2 when the assumptions (a1) and (a2) hold. | This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on
the left must be connected, via the unique edge relation, to every node on the right – regardless of the matrix. We | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | C |
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear whether the attained solution is globally optimal. On the other hand, when the value function approximator in TD is an overparameterized multi-layer neural network, which is required to be properly scaled, such a feature representation stabilizes at the initial one (Cai et al., 2019), making the explicit local linearization in nonlinear gradient TD unnecessary. Moreover, the implicit local linearization enabled by overparameterization allows TD (and Q-learning) to converge to the globally optimal solution. However, such a required scaling, also known as the neural tangent kernel (NTK) regime (Jacot et al., 2018), effectively constrains the evolution of the induced feature presentation to an infinitesimal neighborhood of the initial one, which is not data-dependent.
|
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature representation is able to deviate from the initial one and subsequently evolve into the globally optimal one, which corresponds to the global minimizer of the MSPBE. We further extend our analysis to soft Q-learning, which is connected to policy gradient. | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear whether the attained solution is globally optimal. On the other hand, when the value function approximator in TD is an overparameterized multi-layer neural network, which is required to be properly scaled, such a feature representation stabilizes at the initial one (Cai et al., 2019), making the explicit local linearization in nonlinear gradient TD unnecessary. Moreover, the implicit local linearization enabled by overparameterization allows TD (and Q-learning) to converge to the globally optimal solution. However, such a required scaling, also known as the neural tangent kernel (NTK) regime (Jacot et al., 2018), effectively constrains the evolution of the induced feature presentation to an infinitesimal neighborhood of the initial one, which is not data-dependent.
| Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and
Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal. | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
| A |
In this paper, we replace the residual connections of the Transformer with depth-wise LSTMs to selectively manage the aggregation of layer representations, benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-forward networks with the depth-wise LSTM for the Transformer. | We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wise LSTM already performs at the level of the 24-layer vanilla Transformer.
|
Notably, on the En-De task, the 12-layer Transformer with depth-wise LSTM already outperforms the 24-layer vanilla Transformer, suggesting efficient use of layer parameters. On the Cs-En task, the 12-layer model with depth-wise LSTM performs on a par with the 24-layer baseline. Unlike in the En-De task, increasing depth over the 12-layer Transformer can still achieve some BLEU improvements, with the 18-layer model resulting in the best performance. We conjecture that this is probably because the data set of the Cs-En task ($\sim$15M) is larger than that of the En-De task ($\sim$4.5M), and increasing the depth of the model for the Cs-En task also increases its number of parameters and capacity. For the En-De task, the 12-layer Transformer with depth-wise LSTM may already provide both sufficient complexity and capacity for the data set.
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transformer with the depth-wise RNN is able to converge, but its performance is much worse than the model with the depth-wise LSTM (and also much worse than the vanilla Transformer) with depth-wise LSTM outperforming the vanilla Transformer, suggesting the importance of the gating mechanisms of the depth-wise LSTM. The decoding speed of our baseline vanilla Transformer implementation (750.58750.58750.58750.58 sentences/s) is quite fast, and is 1.121.121.121.12 times as fast as the depth-wise LSTM approach, but our approach leads to a higher BLEU score than the baseline, and as shown in Table 6, our approach indeed requires fewer parameters and brings about faster decoding speed than the vanilla Transformer for a comparable BLEU score. | Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the depth-wise LSTM approach ensures that deep Transformers with up to 24242424 layers converge, 2) the 12-layer Transformer using depth-wise LSTM already performs on a par with the 24-layer vanilla Transformer, suggesting more efficient usage of per-layer parameters with our depth-wise LSTM approach than the baseline.
| D |
$\mathcal{K}^{\circ}(Y)\supseteq\{U\cap Y\mid U\in\mathcal{K}^{\circ}(X)\}$.
Note that this stronger property is preserved | on $\langle\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\mathcal{D}_{\leq 2}}\cap\uptau_{\subseteq_{i}}\rangle$ is a pre-spectral space, | $\langle\operatorname{Fin}(\upsigma),\uptau_{\leq},\mathsf{FO}[\upsigma]\rangle$ is a lpps.
| $\langle\mathcal{D}_{\leq 2},\uptau_{\subseteq_{i}},\mathsf{FO}[\upsigma]\rangle$ is
a lpps by Remark 3.5 and the fact that | $\langle\operatorname{Struct}(\upsigma),\uptau_{\subseteq_{i}},\mathsf{FO}[\upsigma]\rangle$ is a lpps by Claim 2.2. | C |
In the training stage, we crop each distorted image into four distortion elements and learn the parameters of the neural network using all of the data. Note that this training process is data-independent: each part of the entire image is fed into the network one by one, without exploiting the correlation between parts. In the test stage, we only need one distortion element, i.e., 1/4 of an image, to estimate the ordinal distortion. For a clear presentation of our approach, we give the detailed algorithm schemes of the training process and the test process in Algorithm 1 and Algorithm 2, respectively. | To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion rectification on the test dataset including 2,000 distorted images. For the PSNR and SSIM, we compute these two metrics using the pixel difference between each rectified image and the ground truth image. For the MDLD, we first exploit the estimated distortion parameters to obtain all distortion levels of the test distorted image based on Eq. 5. Then, the value of MDLD can be calculated by the difference between the estimated distortion levels and the ground truth distortion levels based on Eq. 21. Note that the generation-based methods such as Li [11] and Liao [12] directly learn the transformation manner of the pixel mapping instead of estimating the distortion parameters, so we only evaluate these two methods in terms of the PSNR and SSIM.
| Evaluation Metrics: Crucially, evaluating the performance of different methods with reasonable metrics benefits experimental comparisons. In the distortion rectification problem, the corrected image can be evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the evaluation of the estimated distortion label, it is straightforward to employ the root mean square error (RMSE) between the estimated coefficients $\hat{\mathcal{K}}$ and ground truth coefficients $\mathcal{K}$:
| In contrast to RMSE, MDLD is more suitable for parameter evaluation due to the uniqueness of the distortion distribution. Moreover, RMSE fails to evaluate the different numbers and attributes of estimated parameters for different camera models. Thanks to the objective description of the distortion, MDLD is capable of evaluating different distortion estimation methods using different camera models.
|
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene limitation and simple camera model assumption, showing more promising generality and flexibility. Compared with the learning distortion rectification methods [8][11][12], which omit the prior knowledge of the distortion, our approach transfers the heterogeneous estimation problem into a homogeneous one, eliminating the implicit relationship between image features and predicted values in a more explicit expression. As benefits of the effective ordinal supervision and guidance of distortion information during the learning process, our approach outperforms Liao [12] by a significant margin, with approximately 23% improvement on PSNR and 17% improvement on SSIM. Besides the high quality of the rectified image, our approach can obtain the accurate distortion parameters of a distorted image, which is crucial for the subsequent tasks such as the camera calibration. However, the generation-based methods [11][12] mainly focus on the pixel reconstruction of a rectified image and ignore the parameter estimation. | B |
We use a pre-trained ViT model (https://huggingface.co/google/vit-base-patch16-224-in21k) [4] and fine-tune it on the CIFAR-10/CIFAR-100 datasets.
The experiments are implemented based on the Transformers framework (https://github.com/huggingface/transformers). We fine-tune the model for 20 epochs. | Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33]
proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11] | We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework.
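A minimal sketch of this fine-tuning setup with the HuggingFace Trainer (the batch size and output directory are assumptions, and the optimizers compared in the paper, e.g. SNGM and MSGD, are not reproduced here; the Trainer default is used):

```python
import torch
from datasets import load_dataset
from transformers import (ViTImageProcessor, ViTForImageClassification,
                          TrainingArguments, Trainer)

checkpoint = "google/vit-base-patch16-224-in21k"
processor = ViTImageProcessor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(checkpoint, num_labels=10)  # CIFAR-10

ds = load_dataset("cifar10")

def transform(batch):
    # The processor resizes the 32x32 CIFAR images to the 224x224 ViT input.
    inputs = processor([img for img in batch["img"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

ds = ds.with_transform(transform)

def collate(examples):
    return {"pixel_values": torch.stack([e["pixel_values"] for e in examples]),
            "labels": torch.tensor([e["labels"] for e in examples])}

args = TrainingArguments(output_dir="vit-cifar10",
                         num_train_epochs=20,             # 20 epochs, as above
                         lr_scheduler_type="linear",      # linear decay, no warm-up
                         warmup_steps=0,
                         per_device_train_batch_size=128,
                         remove_unused_columns=False)
Trainer(model=model, args=args, data_collator=collate,
        train_dataset=ds["train"], eval_dataset=ds["test"]).train()
```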
Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings. | Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD.
In large-batch training, SNGM achieves better training loss and test accuracy than the four baselines. Furthermore, it achieves faster convergence rates than LARS for the small and large batch sizes, which is consistent with our convergence analysis for the block-wise update strategy. | Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different batch sizes.
| B |
When the algorithm terminates with $C_{s}=\emptyset$, Lemma 5.2 ensures the solution $z^{\text{final}}$ is integral. By Lemma 5.5, any client $j$ with $d(j,S)>9R_{j}$ must have $j\in C^{\text{final}}_{0}$. Hence, $\sum_{j:d(j,S)>9R_{j}}v_{j}\leq\sum_{j\in C_{0}}v_{j}$. For the facility costs, we have $\sum_{i\in S}w_{i}=\sum_{i}z_{i}^{\text{final}}w_{i}$. Finally, by Lemma 5.3, and noting that $C_{s}^{\text{final}}=\emptyset$, we have $\sum_{i}z_{i}^{\text{final}}w_{i}+\sum_{j\in C_{0}}v_{j}\leq V$.
|
do $F_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ | For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here,
$\mathcal{F}$ and $\mathcal{C}$ correspond to such locations and the population affected by the outbreak, and needing services, respectively. | $F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}\mid j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$ | A |
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than being i.i.d. graph sequences as in [12]-[15],
and additive and multiplicative communication noises may co-exist in communication links ([21]). |
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independency with identical distribution, Markovian switching, or stationarity, etc. The edge weights are also not required to be nonnegative at every time instant. By introducing the concept of conditional digraphs and developing the stochastic Lyapunov method for distributed optimization over non-stationary randomly time-varying networks, uniformly conditionally joint connectivity condition is established to ensure the convergence of the distributed stochastic optimization algorithms. | We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions.
We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then proper algorithm step sizes can be designed so that all nodes’ states converge to the global optimal solution almost surely. |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows. | I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition.
The inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably appears in the recursive inequality of the conditional mean square error. This prevents the nonnegative supermartingale convergence theorem from being applied directly | C |
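One iteration of the algorithm class discussed here (consensus over a weighted digraph followed by a step along a noisy subgradient measurement) can be sketched as follows; the weight matrix, step size, and noise model are placeholders rather than the paper's exact assumptions:

```python
import numpy as np

def distributed_subgradient_step(X, W, subgrads, step_size, noise_std=0.0, rng=None):
    # X: (n_nodes, dim) current states; W: (n_nodes, n_nodes) row-stochastic weights;
    # subgrads: (n_nodes, dim) local subgradients evaluated at the current states.
    rng = rng or np.random.default_rng()
    noisy = subgrads + noise_std * rng.standard_normal(subgrads.shape)
    return W @ X - step_size * noisy
```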
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces the contradiction between privacy protection and data analysis [9]. For instance, a smaller $\epsilon$ for $\epsilon$-differential privacy provides better protection but worse information utility.
| The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution in the original data. Second, the anonymization of MuCo is a “black box” process for recipients because the only difference between the original data and the anonymized data is that some original QI values are replaced with random values. Thus, the adversary cannot determine which QI values are altered or the ranges of the variations, so the matching tuples are more likely to be wrong, or may not even exist, when the adversary uses more QI values to match, whereas the adversary obtains many more matching records if the combination of QI values used is not large enough. For the recipient, on the other hand, the results of query statements are specific records rather than groups. Accordingly, the results are more accurate. The conducted extensive experiments also illustrate the effectiveness of the proposed method.
|
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to an untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users' statistics without violating their privacy. Inspired by local differential privacy, this paper uses the method of randomized response to perturb original QI values before release to prevent the disclosure caused by matching the combination of QI values. | Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the original microdata and publish an anonymized version of the microdata. Therefore, differential privacy is inapplicable to the scenario we address in this paper.
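The perturbation step can be as simple as the following sketch (the keep-probability is an assumed parameter; the paper's exact mechanism may differ):

```python
import random

def randomized_response(value, domain, p_keep=0.75):
    # With probability p_keep report the true QI value; otherwise report a value
    # drawn uniformly at random from the attribute's domain.
    if random.random() < p_keep:
        return value
    return random.choice(domain)
```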
| Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces the contradiction between privacy protection and data analysis [9]. For instance, a smaller $\epsilon$ for $\epsilon$-differential privacy provides better protection but worse information utility.
| B |
The 3D-FUTURE dataset is a recently released, publicly available large-scale indoor dataset with 34 categories. Following the official splits, we adopt 12,144 images for training, 2,024 for validation and 6,072 for testing. From the size distribution of bounding boxes in 3D-FUTURE and COCO shown in Figure 1, the median object size of 3D-FUTURE is about 250 while roughly 50 for COCO, indicating that 3D-FUTURE contains many more large instances (following the official 3D-FUTURE setting, we refer to area $<113\times113$ as small, $113\times113\sim256\times256$ as medium, and $>256\times256$ as large, compared to $32\times32$ and $96\times96$ defined in COCO). This distribution divergence motivates us to explore fine-grained large object segmentation methods like PointRend.
| Table 2: PointRend's step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to the default P2-P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “FP16” means mixed precision training.
| Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from original 14 to 26 for coarse prediction head, and from 14 to 24 for fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our splitted validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020).
In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission. | Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhance the feature representation of coarse mask head and point head by increasing the number of fully-connected layers or its hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess that our PointRend baseline already achieves promising performance (77.38 mAP).
| PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared to HTC’s mask head, PointRend’s lightweight segmentation head alleviates both memory and computation costs dramatically, thus enables larger input image resolutions during training and testing, which further improves the segmentation quality.
To fully understand which components contribute to PointRend’s performance, we construct our own validation set by randomly selecting 3000 images from original training data to evaluate offline. We will show the step-by-step improvements adopted on PointRend. | A |
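The small/medium/large thresholds quoted above translate directly into a helper for bucketing instances by area. This is only an illustrative sketch of that bookkeeping, not code from the challenge entry.

```python
def size_bucket(area, dataset="3dfuture"):
    """Bucket an instance by its pixel area using the thresholds cited above."""
    small, large = ((113 * 113, 256 * 256) if dataset == "3dfuture"
                    else (32 * 32, 96 * 96))        # COCO thresholds
    if area < small:
        return "small"
    return "medium" if area <= large else "large"

print(size_bucket(200 * 200))           # medium under the 3D-FUTURE convention
print(size_bucket(200 * 200, "coco"))   # large under the COCO convention
```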
I(f)<1, \quad \text{and} \quad H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.
| For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved |
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known. | (with the convention $0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ sums up to $1$ and thus this is the usual definition of entropy of this probability distribution.
| C |
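For readers who want to experiment with the quantities $I(f)$ and $H(|\hat{f}|^{2})$ on small examples, the brute-force sketch below enumerates the Fourier–Walsh coefficients of a Boolean function on $\{-1,1\}^{n}$. It is a naive $O(4^{n})$ illustration, not an efficient implementation.

```python
from itertools import product, combinations
from math import log2, prod

def fourier_coefficients(f, n):
    """Walsh-Fourier coefficients f_hat(A) = E_x[f(x) * prod_{i in A} x_i]."""
    pts = list(product([-1, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for A in combinations(range(n), k):
            coeffs[A] = sum(f(x) * prod(x[i] for i in A) for x in pts) / len(pts)
    return coeffs

def influence_and_entropy(f, n):
    c = fourier_coefficients(f, n)
    influence = sum(len(A) * w * w for A, w in c.items())
    entropy = -sum(w * w * log2(w * w) for w in c.values() if w != 0.0)
    return influence, entropy

# Majority on 3 bits as a toy example.
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(influence_and_entropy(maj3, 3))
```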
The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogeneous linear MDPs. Optimizing the length $N$ of each piecewise-stationary segment and the variation magnitude between consecutive segments (subject to the constraints of the total variation budget) leads to our lower bound.
∎ |
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 2020) to propose the LSVI-UCB-Restart algorithm with low dynamic regret when the total variations are known. We then designed a parameter-free algorithm Ada-LSVI-UCB-Restart that enjoys a slightly worse dynamic regret bound without knowing the total variations. We derived a minimax regret lower bound for nonstationary linear MDPs to demonstrate that our proposed algorithms are near-optimal. Specifically, when the local variations are known, LSVI-UCB-Restart is near order-optimal except for the dependency on feature dimension d𝑑ditalic_d, planning horizon H𝐻Hitalic_H, and some poly-logarithmic factors. Numerical experiments demonstrates the effectiveness of our algorithms. |
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inhomogeneous setting. |
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experiment results. Section 7 concludes the paper and discusses some future directions. All detailed proofs can be found in Appendices. | In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both inhomogeneous and homogeneous settings, which quantify the fundamental difficulty when measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider inhomogeneous setting in this paper, where the transition function Phksuperscriptsubscript𝑃ℎ𝑘P_{h}^{k}italic_P start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT (as introduced in Section 1) can be different for different hℎhitalic_h. In contrast, for the homogeneous setting, the transition function Phksuperscriptsubscript𝑃ℎ𝑘P_{h}^{k}italic_P start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT will be the same within an episode, i.e., for any k𝑘kitalic_k, Phk≡Pksuperscriptsubscript𝑃ℎ𝑘superscript𝑃𝑘P_{h}^{k}\equiv P^{k}italic_P start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT ≡ italic_P start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT for any h={1,…,H}ℎ1…𝐻h=\{1,\ldots,H\}italic_h = { 1 , … , italic_H }. All of the detailed proofs for this section are in Appendix A.
| B |
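The restart idea behind LSVI-UCB-Restart can be summarized as: run the base algorithm in epochs of a fixed number of episodes and re-initialize its statistics at every epoch boundary. The sketch below only shows this scheduling shell; the learner object is a placeholder interface, and the epoch length would be tuned from the (known or assumed) variation budget as described in the paper.

```python
def run_with_restarts(make_learner, num_episodes, epoch_length):
    """Re-initialize the base learner every `epoch_length` episodes."""
    learner, returns = make_learner(), []
    for k in range(num_episodes):
        if k > 0 and k % epoch_length == 0:
            learner = make_learner()            # restart: discard stale statistics
        returns.append(learner.run_episode())   # placeholder learner interface
    return returns
```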
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on instant messaging apps compared to social media, and have reported the least trust in them. They have also rated the sharing of fake news to be a greater problem than its creation. These suggest that, in Singapore, communication with personal contacts such as through the forwarding of messages, rather than with the public such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one’s social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013), and stronger preferences towards group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
|
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms, and post corrections and warnings when they encounter fake news. That respondents show strong trust and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts in making transparent government decisions and the readiness of the government in addressing public concerns through online forums and dialogues (REACH, [n.d.]). There is opportunity here for the government to launch programs such as campaigns, call-to-actions and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions. | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on instant messaging apps compared to social media, and have reported the least trust in them. They have also rated the sharing of fake news to be a greater problem than its creation. These suggest that, in Singapore, communication with personal contacts such as through the forwarding of messages, rather than with the public such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one’s social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013), and stronger preferences towards group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
|
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2) which is statistically significant (r(9)=−0.81𝑟90.81r(9)=-0.81italic_r ( 9 ) = - 0.81, p<.005𝑝.005p<.005italic_p < .005). Trust is built on transparency and truthfulness, and the presence of fake news, which is deceptive and usually meant to serve hidden agendas, may erode trust. It is worthwhile to consider whether the trust in media items is due to people’s own encounters with fake news, or because of secondary factors. In Singapore, there have been active efforts through campaigns from various organizations (e.g., S.U.R.E. (Board, [n.d.]), Better Internet (Council, [n.d.]), VacciNationSG (Lai, 2021)) to raise awareness on misinformation, disinformation and fake news. If it is through the exposure to the messages of these campaigns that people’s trust in media items have been influenced, especially those who might not have personally encountered fake news, this suggests the importance of media literacy education in addressing fake news, particularly when secondary effects such as practicing greater caution due to a lack of trust comes into play. | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this to be of concern to the CSCW community. As global efforts addressing fake news take off, we aim to understand what the perceptions and practices of news sharing and fake news are in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
| C |
Table 4 presents the results of conventional entity alignment. decentRL achieves state-of-the-art performance, surpassing all others in Hits@1 and MRR. AliNet [39], a hybrid method combining GCN and GAT, performs better than the methods solely based on GAT or GCN on many metrics. Nonetheless, across most metrics and datasets, decentRL consistently outperforms AliNet, highlighting the robustness of the proposed decentralized attention mechanism.
| GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity’s neighbors. It is worth noting that these GNN models are regarded as inductive models in graph representation learning where nodes possess self-features. In relational KG embedding, entities do not have such features, which restricts their capacity to induce embeddings for new entities.
| In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct comprehensive experiments to evaluate its performance on entity alignment and entity prediction, considering scenarios with and without new entities. Our experimental results demonstrate state-of-the-art performance of the proposed method on conventional and open-world benchmarks for both entity alignment and entity prediction tasks. Our method not only provides a solution for knowledge graph representation learning but also offers valuable insights into the potential of decentralized attention mechanisms for other graph-based applications.
|
Although GCN and GAT are generally regarded as inductive models for graph representation learning, our analysis in previous sections suggests their limited applicability on relational KG embedding. In further validation of this, we compare the performance of decentRL with AliNet and GAT on datasets containing new entities. The existing inductive KG embedding methods, such as LAN [21], are unsuitable for adaptation to this task as they are tailored for entity prediction. | Figure 4 shows the experimental results. decentRL outperforms both GAT and AliNet across all metrics. While its performance slightly decreases compared to conventional datasets, the other methods experience even greater performance drops in this context. AliNet also outperforms GAT, as it combines GCN and GAT to aggregate different levels of neighbors. The reduced reliance (with GCN) on self-entity embedding contributes to its more resilient performance on datasets with new entities. We also provide more detailed results on ZH-EN in Table 5, where decentRL surpasses AliNet by a larger margin for the new entities on all metrics.
| C |
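A minimal NumPy sketch of the kind of neighbor-only ("decentralized") aggregation discussed here: an entity's new embedding is an attention-weighted average of its neighbors' embeddings, without using the entity's own embedding as an input feature. The scoring function is a simple dot-product stand-in, not the exact attention used by decentRL.

```python
import numpy as np

def decentralized_attention(entity_ids, neighbors, emb):
    """emb: (num_entities, dim); neighbors[e] lists the neighbor ids of entity e."""
    out = np.zeros_like(emb)
    for e in entity_ids:
        nb = np.asarray(neighbors[e])
        scores = emb[nb] @ emb[nb].mean(axis=0)   # stand-in attention scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[e] = weights @ emb[nb]                # no self-embedding is used
    return out

emb = np.random.randn(4, 8)
print(decentralized_attention([0], {0: [1, 2, 3]}, emb).shape)
```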
In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-error of dynamic [10, 25, 11] and the Bayesian uncertainty estimation using ensemble-based environment models [26, 13] or ensemble Q-functions [27]. Since the agent does pure exploration, the intrinsic motivation becomes the only driving force of the whole learning process. Meanwhile, because the influence of extrinsic rewards is eliminated, the effectiveness of intrinsic rewards can be evaluated independently. After training the pure-exploratory policy with intrinsic rewards, there are several ways to combine the intrinsic policy with extrinsic policies. Scheduled intrinsic drive [28] uses a high-level scheduler that periodically selects to follow either the extrinsic or the intrinsic policy to gather experiences. MuleX [29] learns several policies independently and uses a random heuristic to decide which one to use in each time step. Such policy combination methods perform better than the policy obtained from the linear combination of extrinsic and intrinsic rewards. We focus on developing the pure-exploratory agent and leave the study of policy combination in the future.
|
The related exploration methods aim to remove the stochasticity of the dynamics rather than modeling it. For example, Inverse Dynamics [10], Random Features [11], and EMI [30] learn a feature space to remove the task-irrelevant information in feature space such as white-noise. Curiosity-Bottleneck [31] and Dynamic Bottleneck [32] measure reward-relevant novelty through the information bottleneck principle. Contingency awareness [33] builds an attentive model to locate the agent and computes the pseudo-count based on regions around the agent. These methods remove the stochastic part of dynamics to ensure the stability of the intrinsic rewards. In contrast, we propose a novel principle by capturing the multimodality and stochasticity directly through latent space, and measuring the intrinsic reward through sampling latent variables to obtain a tighter upper bound of the true likelihood of dynamics. To the best of our knowledge, a similar problem was only addressed by ensemble-based dynamics in exploration [13]. We analyze the ensemble model in Noisy-Mnist and use it as a baseline in experiments. |
An ordinary encoder-decoder based dynamics model that makes deterministic predictions often fails to capture the multimodality and stochasticity in dynamics and outputs an averaged prediction. An intuitive example is given in Fig. 1, there are two roads (one from the left, and the other from the right) to reach the goal, an ordinary dynamics model will output one pass through the middle. Obviously, the averaged prediction does not reflect the real situation of MDP thus will not generate a reasonable intrinsic reward for the RL agent. However, if we consider the multimodality and stochasticity of the dynamics explicitly through modeling the latent variables (i.e., z1subscript𝑧1z_{1}italic_z start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and z2subscript𝑧2z_{2}italic_z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT), then we have a better understanding of the dynamics and lead a better performance in exploration. |
In this paper, we propose the Variational Dynamic Model (VDM), which models the multimodality and stochasticity of the dynamics explicitly based on conditional variational inference. VDM considers the environmental state-action transition as a conditional generative process by generating the next-state prediction under the condition of the current state, action, and latent variable. The latent variable is sampled from a Gaussian distribution to encode the multimodality and stochasticity of the dynamics in a latent space. To conduct efficient exploration based on VDM, we iteratively fit VDM by maximizing the conditional log-likelihood of transitions collected by the agent. To this end, we propose a variational learning objective, which we solve by using stochastic variational inference [14, 15]. The learning of useful latent variables is automatic in VDM training. Through maximizing the learning objective, the latent variables will encode information of multimodality and stochasticity of the underlying dynamics to maximize the log-likelihood of next-state prediction. We do not need to select the latent variables manually, and VDM can be applied in various RL environments and real-world applications. | We observe that our method performs the best in most of the games, in both the sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are beyond the agent’s control. Affected by these objects, taking the same action may yield different outcomes. For example, in MsPacman, the ghosts choose directions at each fork of the maze freely, which is beyond the control of the agent. Similar to different image classes in the Noisy-Mnist example, different behavior of ghosts leads to the different modes in the transition dynamics. VDM captures the multimodality of the dynamic when measuring the novelty of transitions, which leads to better intrinsic rewards for exploration. Moreover, in VDM, the features encoding multimodality and stochasticity are contained in posterior and prior networks separated from the reconstruction features in the generative network. Hence, VDM prevents the features of multimodality and stochasticity from being ruined in the training of the generative model.
| A |
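A compact PyTorch sketch of a conditional latent-variable dynamics model in the spirit of VDM: a posterior network q(z | s, a, s'), a prior network p(z | s, a), and a generative network p(s' | s, a, z), trained by maximizing an ELBO (reconstruction minus KL). Layer sizes and architectures are placeholder assumptions, not the paper's.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class VDMSketch(nn.Module):
    def __init__(self, s_dim, a_dim, z_dim=8, h=64):
        super().__init__()
        self.post = nn.Sequential(nn.Linear(2 * s_dim + a_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
        self.prior = nn.Sequential(nn.Linear(s_dim + a_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(s_dim + a_dim + z_dim, h), nn.ReLU(), nn.Linear(h, s_dim))

    def elbo(self, s, a, s_next):
        mu_q, log_sig_q = self.post(torch.cat([s, a, s_next], -1)).chunk(2, -1)
        mu_p, log_sig_p = self.prior(torch.cat([s, a], -1)).chunk(2, -1)
        q, p = Normal(mu_q, log_sig_q.exp()), Normal(mu_p, log_sig_p.exp())
        z = q.rsample()                                  # reparameterized latent sample
        recon = self.dec(torch.cat([s, a, z], -1))
        rec_ll = -((recon - s_next) ** 2).sum(-1)        # Gaussian log-likelihood up to a constant
        kl = kl_divergence(q, p).sum(-1)
        return (rec_ll - kl).mean()

model = VDMSketch(s_dim=4, a_dim=2)
s, a, s2 = torch.randn(16, 4), torch.randn(16, 2), torch.randn(16, 4)
loss = -model.elbo(s, a, s2)   # negate the ELBO to obtain a training loss
loss.backward()
```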
|f(x)-Q_{f,A}(x)| \leq \frac{|f^{(n+1)}(\xi_{x})|}{2^{n}(n+1)!} \leq \frac{\|f^{(n+1)}\|_{C^{0}(\Omega)}}{2^{n}(n+1)!}\,, \qquad P_{A}=\mathrm{Cheb}_{n}^{1\mathrm{st}}\,. | Our result in Eq. (7.8) provides a similar bound on the approximation error in $m$D whenever the $k$-th derivatives of $f$ are known or bounded.
However, usually these bounds are unknown. By validating the proposed Trefethen approximation rates in the next section, we nevertheless provide a potential | Recently, Lloyd N. Trefethen [83] proposed a way of delivering a potential solution to the problem: For continuous functions $f:\Omega\longrightarrow\mathbb{R}$
that are analytic in the unbounded Trefethen domain (a generalization of a Bernstein ellipse) $N_{m,\rho}\subsetneq\Omega=[-1,1]^{m}$ of radius $\rho>1$, an upper bound on the convergence rate applies: | This result states that any sufficiently smooth function $f$ can be approximated by piecewise polynomial functions, which allows one to approximate $f$ by Hermite or spline interpolation.
Generalizations of this result rely on this fact and are formulated in a similar manner [23, 24, 26]. | Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to
scale sub-exponential with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are able to lift the curse of dimensionality, which requires | A |
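A quick numerical check of a Chebyshev interpolation error bound of the type above, assuming a function whose derivatives can be bounded by hand (here f = exp on [-1,1], so the sup of every derivative is at most e). It uses NumPy's Chebyshev utilities and is meant only to illustrate how such a bound can be validated empirically.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from math import e, factorial

f = np.exp
n = 8
coeffs = C.chebinterpolate(f, n)            # degree-n interpolant at Chebyshev points
xs = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(f(xs) - C.chebval(xs, coeffs)))
bound = e / (2 ** n * factorial(n + 1))     # sup|f^(n+1)| / (2^n (n+1)!)
print(f"max error {err:.2e} <= bound {bound:.2e}: {err <= bound}")
```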
In the second case, the distributions $\mu$ and $\nu$ are both $d$-dimensional Gaussian distributions with the same mean vector but different covariance matrices, where $d\in\{30,60\}$.
More specifically, $\mu=\mathcal{N}(0,I_{d})$ and $\nu=\mathcal{N}(0,\Sigma)$ with $\Sigma=\mathrm{diag}(4,4,1,\ldots,1)$. | In other words, we only scale the first two diagonal entries in the covariance matrix of $\nu$ to make the hypothesis testing problem difficult to perform.
We compare the performance of the PW test with the MMD test discussed in [20], where the kernel function is chosen to be the standard Gaussian kernel with bandwidth being the empirical median of data points. | Several data-efficient two-sample tests [20, 21, 22] are constructed based on Maximum Mean Discrepancy (MMD), which quantifies the distance between two distributions by introducing test functions in a Reproducing Kernel Hilbert Space (RKHS).
However, it is pointed out in [23] that when the bandwidth is chosen based on the median heuristic, the MMD tests suffer from decaying power in high dimensions. | However, the two-sample tests based on concentration inequalities in Section III give conservative results in practice. We examine the two-sample tests using the projected Wasserstein distance via the permutation approach.
Specifically, we permute the collected data points $N_{p}=100$ times, and the $p$-value of the proposed test can be computed as the fraction of times that the projected Wasserstein distances under permuted samples are greater than the projected Wasserstein distance under the original empirical samples. | The last two plots correspond to covariance-shifted Gaussian distributions, where Fig. 1c) examines the power for different $n$ with fixed $d=60$, and Fig. 1d) examines the power for different $d$ with fixed $n=75$.
We can see that the power of all methods increases when the sample size increases, and the power of the PW test is greater than the MMD test especially in high dimensions. | A |
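The permutation procedure described here is generic: pool the two samples, recompute the test statistic under random relabelings, and report the fraction of permuted statistics exceeding the observed one. The sketch below uses a one-dimensional Wasserstein distance along a single random projection as a stand-in statistic; the actual PW test optimizes the projection, which is omitted here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def projected_stat(x, y, direction):
    return wasserstein_distance(x @ direction, y @ direction)

def permutation_pvalue(x, y, n_perm=100, seed=0):
    rng = np.random.default_rng(seed)
    direction = rng.standard_normal(x.shape[1])
    direction /= np.linalg.norm(direction)
    observed = projected_stat(x, y, direction)
    pooled, n = np.vstack([x, y]), len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        count += projected_stat(pooled[idx[:n]], pooled[idx[n:]], direction) >= observed
    return (count + 1) / (n_perm + 1)

x = np.random.default_rng(1).normal(size=(75, 60))
y = np.random.default_rng(2).normal(size=(75, 60)) @ np.diag([2, 2] + [1] * 58)
print(permutation_pvalue(x, y))
```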
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and correlated components Z𝑍Zitalic_Z, a.k.a as nuisance variables, which encode the details information not stored in the independent components. A series of works starting from [beta] aims to achieve that via regularizing the models by up-weighting certain terms in the ELBO formulation which penalize the (aggregate) posterior to be factorized over all or some of the latent dimensions [kumar2017variational, factor, mig].
I think I would make what these methods doing clearer. They aren’t really separating into nuisance and independent only.. they are also throwing away nuisance. | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, if the unconstrained nuisance variables have enough capacity, the model can use them to achieve a high quality reconstruction while ignoring the latent variables related to the disentangled factors. This phenomena is sometimes called the "shortcut problem" and has been discussed in previous works [DBLP:conf/iclr/SzaboHPZF18].
| Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Zitalic_Z while maintaining the semantic information captured in C𝐶Citalic_C to obtain the final reconstruction (Image 1d in our example).
| Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and correlated components Z𝑍Zitalic_Z, a.k.a as nuisance variables, which encode the details information not stored in the independent components. A series of works starting from [beta] aims to achieve that via regularizing the models by up-weighting certain terms in the ELBO formulation which penalize the (aggregate) posterior to be factorized over all or some of the latent dimensions [kumar2017variational, factor, mig].
I think I would make what these methods doing clearer. They aren’t really separating into nuisance and independent only.. they are also throwing away nuisance. |
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervised, semi-supervised or unsupervised. In the Appendix we present such implementations. where we significantly constrain the capacity of the learned representation and heavily regularize the model to produce independent factors. As we explained above, such a model will likely learn a good disentangled representation, however, its reconstruction will be of low quality as it will only be able to generate the information captured by the disentangled factors while averaging the details. For example, in Figure 1, the model uses β𝛽\betaitalic_β-TCVAE [mig] to retrieve the pose of the model as a latent factor. In the reconstruction, the rest of the details are averaged, resulting in a blurry image (1b). The goal of the second part of the model, is to add the details while maintaining the semantic information retrieved in the first stage. In Figure 1 that means to transform Image 1b (the output of the first stage) to be as similar as possible to Image 1a (the target observation). We can view this as a style transfer task and use a technique from [adaIN] to achieve our goal. | A |
Furthermore, we propose a Simulation Metric based on depth-first search (DFS) that enables easy implementation and testing of complex structural computer circuits. We confirmed the feasibility of this study in an experiment based on an XOR gate produced by combining NAND, AND and OR gates.
|
And it is expected that this research can be applied to the development of artificial intelligence technologies such as deep learning in the future. In other words, it is expected that the idea of structural computers will be applied to semiconductors that generate a lot of heat, such as Computer Vision task[8][9][10][11][12][13][14][15][16] that require GPU processing of large amounts of data, to drastically reduce heat generation, reduce electricity use, and improve performance more than before. | We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Table. 1. The result of moving from the K2 peak to the K1 peak is the same as that of the XNOR, and the result of moving from the K2 peak to the K3 peak is the same as that of the XOR, it is possible to confirm that this study is feasible.
| Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the label of the window operator to express the AND gate as shown below, which is referred to as the matrix representation of the optical logic. Fig. 7 shows, however, that some rays of light can be counted on the lower beta signal, which can interfere with the operation of other Thus, a black body gate was implemented using i cells to make input everywhere into NULL state. Including this, functions derived from the properties of light that are only available in structural-based optical computing can be modularized with window operators, which can be organized into the following seven categories. 222AND- Logic in Boolean algebra, OR- Logic in Boolean algebra, CROS- Vertical Reflection/Crossing of Two Logics, CNOT- Vertical Reflection/Crossing of Two Logics, Only Intersects and Both Logics are NOT-operated. INVS- Transmittance of Two Logics, COPY- Cloning Logic, BLAK- Absorption of logic (to make it all NULL)
| The structure-based computer mentioned in this paper are based on Boolean Algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic 1 and 0, and mathematically describes digital electrical signals. The concept of logical aggregates defined in Boolean algebra has become the basis for hardware devices such as ALU, CLU, RAM, and so on. Structure-based computer in this paper was also designed to perform logical operations using digital signals of 1 and 0. Logic circuits are the units in which logical operations are performed, and there are AND, OR, and NOT gates. Of these, the NOT gate in the computer we use today is based on transistors. The advantage of transistors is that they can differentiate between signal and power and perform switching and amplification at the same time. On the other hand, more heat is generated compared to passing through a conductor of the same length, which causes semiconductors to age and limits the number of clocks. To solve the various problems of the semiconductor mentioned above, this paper shows the concept of ”Reverse-Logic pair of digital signals” and ”double-pair(4-pin)-based logic operation” techniques on which Structure-based computer hardware is. This paper shows the concept of Reverse-Logic pair[7] of digital signals, which is a method for solving the problem of heating, aging, and computation speed of NOT operations. Expressing 1 as an inverted signal pair, it appears as an ordered pair of two auxiliary signals, each with a signal of one or zero, as shown in (1,0). Similarly, zeros are expressed in sequence pairs (0,1).
| A |
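The feasibility check described here (building XOR from NAND, AND and OR and verifying it over the truth table by traversing the circuit) can be mimicked with a small gate-graph evaluator. The circuit encoding below is a hypothetical simplification of the paper's pin-level representation.

```python
GATES = {"AND": lambda a, b: a & b,
         "OR":  lambda a, b: a | b,
         "NAND": lambda a, b: 1 - (a & b)}

# XOR(a, b) = AND(NAND(a, b), OR(a, b)); nodes are evaluated by depth-first recursion.
CIRCUIT = {"n1": ("NAND", "a", "b"), "n2": ("OR", "a", "b"), "out": ("AND", "n1", "n2")}

def evaluate(node, inputs, circuit=CIRCUIT):
    if node in inputs:                        # reached a primary input
        return inputs[node]
    gate, x, y = circuit[node]
    return GATES[gate](evaluate(x, inputs), evaluate(y, inputs))

for a in (0, 1):
    for b in (0, 1):
        assert evaluate("out", {"a": a, "b": b}) == a ^ b
print("XOR circuit verified on all input combinations")
```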
Given a polynomial function $f(x)$ over a finite field $\mathbb{F}$ (or $\mathbb{F}^{n}$), determine if it is a permutation over $\mathbb{F}$ ($\mathbb{F}^{n}$), and if it is, compute its compositional inverse.
| The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽𝔽\mathbb{F}blackboard_F, this paper explores a completely new approach using the Koopman operator defined by the iterates of the map. This helps define the linear representation of non-linear maps, which translates non-linear compositions of the map to matrix multiplications. This linear representation naturally defines a notion of linear complexity for non-linear maps, which can be viewed as a measure of computational complexity associated with computations involving such maps. The framework of linear representation is then extended to parameter dependent maps over 𝔽𝔽\mathbb{F}blackboard_F, and the conditions on parametric invertibility of such maps are established, leading to a construction of the parametric inverse map (under composition). It is shown that the framework can be extended to multivariate maps over 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT, and the conditions are established for invertibility of such maps, and the inverse is constructed using the linear representation. Further, the problem of linear representation of the group generated by a finite set of permutation maps over 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT under composition is also solved by extending the theory of linear representation of a single map. This leads to the notion of complexity of a group of permutation maps under composition.
| Given a polynomial function $f(x)$ over a finite field $\mathbb{F}$ (or $\mathbb{F}^{n}$), determine if it is a permutation over $\mathbb{F}$ ($\mathbb{F}^{n}$), and if it is, compute its compositional inverse.
| Given a $1$-parameter family of maps over $\mathbb{F}$, determine if it is parametrically invertible over $\mathbb{F}$. It is also shown in this paper that the compositional inverse of a $1$-parameter family of permutation polynomials is also a $1$-parameter family of permutation polynomials in the same parameter, and an explicit construction of the same is given.
| We developed a linear representation theory for functions over $\mathbb{F}$ in the previous section. This section extends the idea to a family of functions over $\mathbb{F}$ defined through an $\mathbb{F}$-valued parameter. The well-known Dickson polynomial is one such motivating example for this section. Consider a parameter dependent function $F_{\lambda}:\mathbb{F}\to\mathbb{F}$, where $\lambda$ is an $\mathbb{F}$-valued parameter. Any parametric function can also be viewed as a function $F(\lambda,x):\mathbb{F}^{2}\to\mathbb{F}$. In this section, we develop a linear representation for such functions, explore the parameter dependent invertibility of $F_{\lambda}$, and construct the parametric inverse of $F_{\lambda}$.
| C |
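For small prime fields the two tasks above can be checked by brute force: test whether x -> f(x) is a bijection of F_p and, if so, recover the compositional inverse as a value table. This sketch is purely illustrative and works for prime fields only; the linear-representation construction discussed above is of course a very different (and far more structured) approach.

```python
def is_permutation_poly(coeffs, p):
    """coeffs[i] is the coefficient of x**i; returns the value list if f permutes F_p."""
    f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    images = [f(x) for x in range(p)]
    return images if len(set(images)) == p else None

def compositional_inverse_table(coeffs, p):
    images = is_permutation_poly(coeffs, p)
    if images is None:
        raise ValueError("not a permutation of F_p")
    return {y: x for x, y in enumerate(images)}   # g with g(f(x)) = x

# x**3 permutes F_11 because gcd(3, 11 - 1) = 1.
print(compositional_inverse_table([0, 0, 0, 1], 11))
```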
Stacked penalized logistic regression (StaPLR) (Van Loon et al., 2020) is a method specifically developed to tackle the joint classification and view selection problem. Compared with a variant of the lasso for selecting groups of features (the so-called group lasso (M. Yuan & Lin, 2007)), StaPLR was empirically shown to be more accurate in view selection, producing sparser models with an often comparable classification accuracy, and offering computational advantages (Van Loon et al., 2020).
StaPLR is a special case of a more general framework called multi-view stacking (MVS) (Van Loon \BOthers., \APACyear2020; R. Li \BOthers., \APACyear2011; Garcia-Ceja \BOthers., \APACyear2018). In MVS, a learning algorithm (the base-learner) is trained on each view separately, and another algorithm (the meta-learner) is then trained on the cross-validated predictions of the view-specific models. The meta-learner thus learns how to best combine the predictions of the individual views. If the meta-learner is chosen to be an algorithm that returns sparse models, MVS performs view selection. This is the case as proposed by Van Loon \BOthers. (\APACyear2020), where the meta-learner was chosen to be a nonnegative logistic lasso. | For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu \BOthers., \APACyear2012).
An example of the trade-off between sparsity and interpretability of the set of selected views occurs when different views, or combinations of views, contain the same information. If the primary concern is sparsity, a researcher may be satisfied with just one of these combinations being selected, preferably the smallest set which contains the relevant information. But if there is also a desire to interpret the relationships between the views and the outcome, it may be more desirable to identify all of these combinations, even if this includes some redundant information. If one wants to go even further and perform formal statistical inference on the set of selected views, one may additionally be interested in theoretically controlling, say, the family-wise error rate (FWER) or false discovery rate (FDR) of the set of selected views. However, strict control of such an error rate could end up harming the predictive performance of the model, thus leading to a trade-off between the interpretability of the set of selected views and classification accuracy. | In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of views, the interpolating predictor often had the lowest TPR in view selection, as well as the lowest test accuracy, particularly when there was no correlation between the different views. When the sample size was smaller than the number of views, the interpolating predictor had a FPR in view selection that was considerably higher than that of all other meta-learners. In terms of accuracy it performed very well in the breast cancer data, but less so in the colitis data. However, in both cases it produced very dense models, which additionally had low view selection stability. The fact that its behavior varied considerably across our experimental conditions, combined with its tendency to select very dense models when the meta-learning problem is high-dimensional, suggests that the interpolating predictor should not be used when view selection is among the goals of the study under consideration. However, it may have some use when its interpretation as a weighted mean of the view-specific models is of particular importance.
| In high-dimensional biomedical studies, a common goal is to create an accurate classification model using only a subset of the features (Y. Li \BOthers., \APACyear2018). A popular approach to this type of joint classification and feature selection problem is to apply penalized methods such as the lasso (Tibshirani, \APACyear1996). These methods promote sparsity by imposing a penalty on the coefficient vector so that, for a sufficiently large value of the tuning parameter(s), some coefficients will be set to zero during the model fitting process. The tuning parameter decides on the relative importance of the penalty term, and is typically chosen by minimizing the cross-validation error (Friedman \BOthers., \APACyear2009).
However, biomedical features are often naturally grouped into distinct feature sets. In genomics, for example, genes may be grouped into gene sets or genetic pathways (K. Wang \BOthers., \APACyear2010), while in neuroimaging, different sets of anatomical markers may be calculated from MRI scans (De Vos \BOthers., \APACyear2016). Features may also be grouped at a higher level, for example because they correspond to a certain imaging modality or data source (Fratello \BOthers., \APACyear2017). Such naturally occurring groups of features describing the same set of objects are known as different views of the data, and integrating the information in these different views through machine learning methods is known as multi-view learning (Zhao \BOthers., \APACyear2017; Sun \BOthers., \APACyear2019). In a multi-view setting, it is often more desirable to select or discard entire views rather than individual features, turning the feature selection problem into a view selection problem. | A particular challenge of the aforementioned joint classification and view selection problem is its inherent trade-off between accuracy and sparsity. For example, the most accurate model may not perform the best in terms of view selection. In fact, the prediction-optimal amount of regularization causes the lasso to select superfluous features even when the sample size goes to infinity (Meinshausen \BBA Bühlmann, \APACyear2006; Benner \BOthers., \APACyear2010). This leads to a consideration of how much predictive accuracy a researcher is prepared to sacrifice for increased sparsity.
| D |
Compared to other methods, IEPC exhibits a notably lower reduction rate, which, we believe, contributes to its unstable performance. The experimental results in Figure 3 indicate that when considering only linear prediction models, IEPC performs better with regularization techniques such as LASSO and Ridge, as opposed to general linear regression without regularization. This observation suggests the possibility of irrelevant or redundant variables being included in the set of relevant variables selected by IEPC.
|
It is worth noting that the key difference between the two DepAD methods (FBED-CART-PS and FBED-CAR-Sum) and ALSO lies in their relevant variable selection phase. The two DepAD methods learn and use the MB of a variable as its relevant variables, while ALSO, for each variable, uses all other variables as the variable’s relevant variables. Thus, the evaluation demonstrates the impact of the relevant variable selection phase. | In conclusion, the relevant variable selection phase of the DepAD framework is crucial for identifying optimal predictors for the target variable in anomaly detection. Striking a balance between selecting too many or too few variables is essential for maintaining prediction accuracy. When the ground-truth relevant variable set is unavailable, the Markov blanket (MB) represents a theoretically optimal choice. Our experiments have further validated that HITON-PC and FBED outperform the other techniques, and achieve superior results in both ROC AUC and AP and the highest variable reduction rates.
|
Table 6 presents the reduction rates achieved by each of the five techniques. The reduction rate is computed as 1 minus the ratio of the number of relevant variables selected to the total number of variables in a dataset. The results reveal substantial variations in reduction rates among the different techniques for the same dataset. For instance, for the dataset Libras, the reduction rate achieved by IEPC is 2.2%, while the other techniques achieve rates below 93%. On average, HITON-PC exhibits the highest reduction rate of 84.61%, while IEPC shows the lowest reduction rate at 40.28%. FBED, DC, and MI achieve relatively similar reduction rates, hovering around 76%. Notably, FBED and HITON-PC display a similar trend, with HITON-PC consistently achieving a higher reduction rate than FBED due to the PC set of a variable being a subset of its Markov blanket. |
Among the filter feature selection methods, causal feature selection methods [31, 29, 32] are recommended choices as they identify the causal factors of a target variable, offering better interpretability. These methods select the parents and children (PC) or Markov blanket (MB) of a target variable in a Bayesian network (BN) as predictors for the target. The MB is considered an optimal choice for relevant variables as it contains the complete dependency information of a variable. Efficient methods exist for learning the PC or MB of a variable from data without learning a complete BN [31, 32]. | B |
At the start of the interaction, when no contexts have been observed, θ^tsubscript^𝜃𝑡\hat{\theta}_{t}over^ start_ARG italic_θ end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT is well-defined by Eq (5) when λt>0subscript𝜆𝑡0\lambda_{t}>0italic_λ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT > 0. Therefore, the regularization parameter λtsubscript𝜆𝑡\lambda_{t}italic_λ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT makes CB-MNL burn-in period free, in contrast to some previous works, e.g. Filippi et al. [2010]. | where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that θ∗∈Ct(δ)subscript𝜃subscript𝐶𝑡𝛿\theta_{*}\in C_{t}(\delta)italic_θ start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ∈ italic_C start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) (see Eq (12)), pessimism is non-positive, for all rounds. Thus, the regret is upper bounded by the sum of the prediction error for T𝑇Titalic_T rounds. In Section 4.1 we derive an the expression for prediction error upper bound for a single round t𝑡titalic_t. We also contrast with the previous works Filippi et al. [2010], Li et al. [2017], Oh & Iyengar [2021] and point out specific technical differences which allow us to use Bernstein-like tail concentration inequality and therefore, achieve stronger regret guarantees. In Section 4.2, we describe the additional steps leading to the statement of Theorem 1. The style of the arguments is simpler and shorter than that in Faury et al. [2020]. Finally, in Section 4.3, we discuss the relationship between two confidence sets Ct(δ)subscript𝐶𝑡𝛿C_{t}(\delta)italic_C start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) and Et(δ)subscript𝐸𝑡𝛿E_{t}(\delta)italic_E start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) and show that even using Et(δ)subscript𝐸𝑡𝛿E_{t}(\delta)italic_E start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) in place of Ct(δ)subscript𝐶𝑡𝛿C_{t}(\delta)italic_C start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ), we get the regret upper bounds with same parameter dependence as in Corollary 2.
Lemma 3 gives the expression for an upper bound on the prediction error. |
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a multiplicative κ𝜅\kappaitalic_κ factor in the bound. | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of uncertainty approaches) [Abbasi-Yadkori et al., 2011, Abeille et al., 2021]. We use Bernstein-style concentration for self-normalized martingales, which were previously proposed in the context of scalar logistic bandits in Faury et al. [2020], to define our confidence set over the true parameter, taking into account the effects of the local curvature of the reward function. We show that the performance of CB-MNL (as measured by regret) is bounded as O~\deldT+κ~O\del𝑑𝑇𝜅\tilde{\mathrm{O}}\del{d\sqrt{T}+\kappa}over~ start_ARG roman_O end_ARG italic_d square-root start_ARG italic_T end_ARG + italic_κ, significantly improving the theoretical performance over existing algorithms where κ𝜅\kappaitalic_κ appears as a multiplicative factor in the leading term. We also leverage a self-concordance [Bach, 2010] like relation for the multinomial logit reward function [Zhang & Lin, 2015], which helps us limit the effect of κ𝜅\kappaitalic_κ on the final regret upper bound to only the higher-order terms. Finally, we propose a different convex confidence set for the optimization problem in the decision set of CB-MNL, which reduces the optimization problem to a constrained convex problem.
| Algorithm 1 follows the template of optimism in the face of uncertainty (OFU) strategies [Auer et al., 2002, Filippi et al., 2010, Faury et al., 2020]. Technical analysis of OFU algorithms relies on two key factors: the design of the confidence set and the ease of choosing an action using the confidence set.
| D |
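For concreteness, the multinomial logit quantities that the reward model is built on can be computed as follows; the feature and parameter shapes and the revenue vector are illustrative, not the paper's notation.

```python
import numpy as np

def mnl_choice_probabilities(theta, X):
    """X: (K, d) features of the K offered items; returns P(item) and P(no purchase)."""
    utilities = X @ theta
    expu = np.exp(utilities - utilities.max())
    denom = np.exp(-utilities.max()) + expu.sum()     # outside option has utility 0
    return expu / denom, np.exp(-utilities.max()) / denom

def expected_revenue(theta, X, revenues):
    probs, _ = mnl_choice_probabilities(theta, X)
    return float(probs @ revenues)

theta = np.array([0.5, -0.2, 0.1])
X = np.random.default_rng(0).normal(size=(4, 3))
print(expected_revenue(theta, X, np.array([1.0, 2.0, 1.5, 0.5])))
```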
Up-scaling a video could transform a short action into a long one, but may lose important information for localization. Thus both the original scale and the enlarged scale have their limitations and advantages. The original video scale contains the original intact information, while the enlarged one is easier for the network to detect. In contrast to other works that either use the original-scale video or a down-scaled video, in this paper, we use both to take advantage of their complementary properties and mutually enhance their feature representations. | Specifically, we propose a Video self-Stitching Graph Network (VSGN) for improving performance of short actions in the TAL problem. Our VSGN is a multi-level cross-scale framework that contains two major components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period of a video and magnify it along the temporal dimension to obtain a larger scale. Then using our self-stitching strategy, we piece together both the original-scale clip and its magnified counterpart into one single sequence as the network input. In xGPN, we progressively aggregate features from cross scales as well as from the same scale via a pyramid of cross-scale graph networks. Hence, we enable direct information pass between the two feature scales. Compared to simply using one scale, our VSGN adaptively rectifies distorted features in either scales from one another by learning to localize actions, therefore, it is able to retain more information for the localization task.
In addition to enhancing the features, our VSGN augments the datasets with more short actions to mitigate the bias towards long actions during the learning process, and enables more anchors, even those with large scales, to predict short actions. |
Multi-scale input. The magnification process may inevitably impair the information in the clip, thus the original video clip, which contains the original intact information, is also necessary. To take advantage of the complementary properties of both scales, we design a video stitching technique to piece them together as one single network input (VSS in Fig. 2, see Sec. 3.2 for details). This strategy enables the network to process both scales in one single pass, and the clip to have more positive anchors of different scales. It is also an effective way to augment the dataset. |
2) We propose a novel temporal action localization framework VSGN, which features two key components: video self-stitching (VSS); cross-scale graph pyramid network (xGPN). For effective feature aggregation, we design a cross-scale graph network for each level in xGPN with a hybrid module of a temporal branch and a graph branch. |
In this paper, to tackle the challenging problem of large action scale variation in the temporal action localization (TAL) problem, we target short actions and propose a multi-level cross-scale solution called video self-stitching graph network (VSGN). It contains a video self-stitching (VSS) component that generates a larger-scale clip and stitches it with the original-scale clip to utilize the complementary properties of different scales. It has a cross-scale graph pyramid network (xGPN) to aggregate features from across different scales as well as from the same scale. This is the first work to focus on the problem of short actions in TAL, and has achieved significant improvement on short action performance as well as overall performance. | A |
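The VSS idea of magnifying a clip along time and stitching it to the original can be sketched in a few lines on a per-clip feature sequence; the linear interpolation and the 2x magnification factor are assumptions for illustration.

```python
import numpy as np

def self_stitch(features, factor=2):
    """features: (T, C) frame-level features; returns a (factor*T + T, C) stitched input."""
    T, C = features.shape
    src = np.linspace(0, T - 1, factor * T)             # temporal up-sampling grid
    upscaled = np.stack([np.interp(src, np.arange(T), features[:, c]) for c in range(C)], axis=1)
    return np.concatenate([upscaled, features], axis=0)  # single stitched network input

clip = np.random.randn(100, 256)
print(self_stitch(clip).shape)   # (300, 256)
```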
Hyperparameter optimization (also called hyperparameter tuning) is the process of selecting appropriate values of hyperparameters for machine learning (ML) models, often independently for each data set, to achieve their best possible results.
Although time consuming, this process is required for the vast majority of ML models before their deployment into production [vRH17, vRH18]. | Important contributions of this research include the formalization of primary concepts [CDM15], the identification of methods for assessing hyperparameter importance [JWXY16, PBB19, vRH17, HHLB13, HHLB14, vRH18], and resulting libraries and frameworks for specific hyperparameter optimization methods [KGG∗18, THHLB13]. Indeed, several packages exist that focus on automatically optimizing Bayesian methods with the use of a single performance measurement [Bay, HHLB11, HHLBS09, SSW∗16], and there are popular commercial platforms developed for hyperparameter optimization [Dat, Aut]. This widespread automation does not stop in supervised classification problems, but also includes dimensionality reduction (DR) algorithms (e.g., t-SNE) [BCA∗19, KB19].
| Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with the exception of more general visualization approaches such as EAVis [KE05, Ker06] and interactive evolutionary computation (IEC) [Tak01]. To the best of our knowledge, there is no literature describing the use of VA in hyperparameter tuning of evolutionary optimization (as defined in Section 1) with the improvement of performance based on majority-voting ensembles.
In this section, we review prior work on automatic approaches, visual hyperparameter search, and tools with which users may tune ML ensembles. Finally, we discuss the differences of such systems when compared to VisEvol in order to clarify the novelty of our tool. | One common focus of related work is the hyperparameter search for deep learning models. HyperTuner [LCW∗18] is an interactive VA system that enables hyperparameter search by using a multi-class confusion matrix for summarizing the predictions and setting user-defined ranges for multiple validation metrics to filter out and evaluate the hyperparameters. Users can intervene in the running procedure to anchor a few hyperparameters and modify others. However, this could be hard to generalize for more than one algorithm at the same time. In our case, we combine the power of diverse algorithms, with one of them being a neural network (NN). HyperTendril [PNKC21] is a visualization tool that supports random search, population-based training [JDO∗17], Bayesian optimization, HyperBand [LJD∗17], and the last two methods joined together [FKH18]. It enables the users to set an initial budget, search the space for the best configuration, and select suitable algorithms. However, its effectiveness is only tested in scenarios specifically designed for NNs.
Other examples of publications which work explicitly with deep learning only, and do not support evolutionary optimization, are VisualHyperTuner [PKK∗19], Jönsson et al. [JES∗20], and Hamid et al. [HDK∗19]. | Numerous techniques exist that try to solve this challenge, such as the well-known grid search, random search [BB12], and Bayesian optimization that belong to the generic type of sequential-based methods [BBBK11, SSW∗16]. Other proposed methods include bandit-based approaches [FKH18, LJD∗17], population-based methods [JDO∗17], and evolutionary optimization [DDF∗18, YRK∗15], which is our focus in this paper.
| D |
The fundamental idea underlying MCMC algorithms is to synthesize a Markov chain that converges to a specified steady-state distribution.
Random sampling of a large state space while adhering to a predefined probability distribution is the predominant use of MCMC algorithms. | The current literature covers a broad spectrum of methodologies for Markov chain synthesis, incorporating both heuristic approaches and optimization-based techniques [4, 5, 6]. Each method provides specialized algorithms tailored to the synthesis of Markov chains in alignment with specific objectives or constraints.
Markov chain synthesis plays a central role in probabilistic swarm guidance, which has led to the development of various algorithms incorporating additional transition and safety constraints [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. | This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state.
The probabilistic guidance algorithm led to the development of numerous Markov chain synthesis algorithms involving specific objectives and constraints [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. | Unlike the homogeneous Markov chain synthesis algorithms in [4, 7, 5, 6, 8, 9], the Markov matrix, synthesized by our algorithm, approaches the identity matrix as the probability distribution converges to the desired steady-state distribution. Hence the proposed algorithm attempts to minimize the number of state transitions, which eventually converge to zero as the probability distribution converges to the desired steady-state distribution.
Whereas previous time-inhomogeneous Markov chain synthesis algorithms in [14, 15] only provide asymptotic convergence, the DSMC algorithm provides an exponential convergence rate guarantee. | In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
| A |
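The Metropolis–Hastings construction mentioned in this row (synthesizing a Markov matrix whose stationary distribution equals a desired density) can be sketched as follows; the symmetric random-walk proposal and the small example graph are assumptions for illustration, and this is not the DSMC algorithm itself.

```python
import numpy as np

def metropolis_hastings_chain(v: np.ndarray, adjacency: np.ndarray) -> np.ndarray:
    """Synthesize a Markov matrix M with stationary distribution v (v > 0).

    Uses a symmetric random-walk proposal restricted to the transitions allowed
    by `adjacency` (1 = allowed), with the M-H acceptance rule min(1, v_j / v_i).
    Returns a row-stochastic matrix M satisfying v M = v.
    """
    n = len(v)
    deg_max = adjacency.sum(axis=1).max()
    K = adjacency / deg_max                      # symmetric proposal kernel
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                M[i, j] = K[i, j] * min(1.0, v[j] / v[i])
        M[i, i] = 1.0 - M[i].sum()               # remaining mass stays at state i
    return M

# Toy usage: 4 states on a cycle, non-uniform target density.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
v = np.array([0.4, 0.3, 0.2, 0.1])
M = metropolis_hastings_chain(v, A)
print(np.allclose(v @ M, v))                     # True: v is stationary
```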
We have proven that the IsoMuSh algorithm is convergent in the objective $f(\cdot,\cdot)$. However, we did not establish convergence of the variables $U$ and $Q$. In this context, we note that there are equivalence classes of $U$ and $Q$ that lead to the same objective value. To be more specific, for any (full) $d\times d$ permutation matrix $P$, and any $\mathcal{C}\in\mathbb{O}_{b}$ we have $(UP)\in\mathbb{P}$, $(Q\mathcal{C})\in\mathbb{O}$, and $f(U,Q)=f(UP,Q\mathcal{C})$. The latter can be verified by plugging $UP$ and $Q\mathcal{C}$ into $f$ while making use of the orthogonality of $P$ and $\mathcal{C}$. Although the IsoMuSh algorithm is convergent, and we have empirically verified that it improves upon the state-of-the-art for the isometric multi-shape matching problem, the investigation of stronger convergence results is an interesting direction for future work.
|
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both for the shape-to-universe matchings, as well as for the shape-to-universe functional maps. This contrasts with the recent ConsistentZoomOut [31] method, which does not obtain cycle-consistent multi-matchings. Our algorithm is efficient, straightforward to implement, and monotonically increases the objective function. Experimentally we have demonstrated that our method outperforms recent state-of-the-art techniques in terms of matching quality, while producing cycle-consistent results and being efficient. | In contrast, HiPPI and our method require shape-to-universe representations. To obtain these, we use synchronisation to extract the shape-to-universe representation from the pairwise transformations. By doing so, we obtain the initial $U$ and $Q$. We refer to this method of synchronising the ZoomOut results as ZoomOut+Sync, which directly serves as initialisation for HiPPI and our method. Throughout this section we also report results of the initialisation methods ZoomOut and ZoomOut+Sync. Further details can be found in the supplementary material.
| Similar to the previous section, we want to impose cycle consistency on the pairwise functional maps $\mathcal{C}_{ij}$.
We do so by defining a shape-to-universe functional map $\mathcal{C}_{i}$ from $\mathcal{X}_{i}$ to a (virtual) universe shape. We achieve cycle consistency by composing each pairwise functional map using shape-to-universe functional maps, i.e. | A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisation [31], which builds upon functional maps and is, in principle, well-suited for isometric multi-shape matching. However, although the authors take into account cycle consistency, respective penalties are only imposed on pairwise functional maps, rather than on the point-wise correspondences. In Sec. 5 we demonstrate that it leads to multi-matchings that have large cycle errors.
| A |
If there exists a polynomial algorithm that tests if a graph $G$ is a path graph and returns a clique path tree of $G$ when the answer is “yes”, then there exists an algorithm with the same complexity to test if a graph is a directed path graph. |
In this section we introduce some results and notations from [1], which give a new characterization of path graphs, summarized in Theorem 6. Indirectly, some of these results allow us to efficiently recognize directed path graphs too (see Section 5 and Theorem 9). | On the side of directed path graphs, at the state of the art, our algorithm is the only one that does not use the results in [4], in which a linear-time algorithm is given that establishes whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary to implement two algorithms to recognize directed path graphs, while we obtain our recognition algorithm for directed path graphs by slightly modifying the recognition algorithm for path graphs.
| The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prove its correctness, we report some implementation details and we compute its time complexity. Finally, in Section 5 we provide a similar analysis for directed path graphs.
| interval graphs $\subset$ rooted path graphs $\subset$ directed path graphs $\subset$ path graphs $\subset$ chordal graphs. | A
In experiments 1(a) and 1(b), we study how the fraction of pure nodes affects the behaviors of these mixed membership community detection methods under MMSB and DCMM, respectively. We fix $(x,\rho)=(0.4,0.1)$ and let $n_{0}$ range in $\{40,60,80,100,120,140,160\}$. In Experiment 1(a), we generate $\theta$ as $\theta(i)=0.4$ for all $1\leq i\leq n$, that is, it is under the MMSB model. In Experiment 1(b), we generate $\theta$ as $\theta(i)=0.2+0.8(i/n)^{2}$ for all $1\leq i\leq n$, i.e., it is under the DCMM model.
|
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly, and both approaches perform better than OCCAM and GeoNMF under the MMSB setting. Meanwhile, Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. |
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as $\rho$ increases they all perform worse. Under the DCMM model, the mixed Hamming error rate of Mixed-SLIM decreases as $\rho$ decreases, while the performances of the other three approaches are still unsatisfactory. |
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE, while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests that Mixed-SLIM significantly outperforms Mixed-SCORE, OCCAM, and GeoNMF under the DCMM setting. It is interesting to find that only Mixed-SLIM enjoys better performances as the fraction of pure nodes increases under the DCMM setting. |
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. | C
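A small sketch of the degree-parameter generation quoted in this row's setup (Experiments 1(a) and 1(b)); the total node count derived from $n_{0}$ is an illustrative assumption, since the row only specifies $\theta$, $(x,\rho)$ and the $n_{0}$ grid.

```python
import numpy as np

def make_theta(n: int, setting: str) -> np.ndarray:
    """Degree parameters used in Experiments 1(a)/1(b) of the row above."""
    i = np.arange(1, n + 1)
    if setting == "MMSB":            # Experiment 1(a): theta(i) = 0.4
        return np.full(n, 0.4)
    if setting == "DCMM":            # Experiment 1(b): theta(i) = 0.2 + 0.8 (i/n)^2
        return 0.2 + 0.8 * (i / n) ** 2
    raise ValueError(setting)

x, rho = 0.4, 0.1                    # fixed as in the row above
n0_grid = [40, 60, 80, 100, 120, 140, 160]   # number of pure nodes per community

for n0 in n0_grid:
    n = 8 * n0                       # illustrative total node count (assumption)
    theta = make_theta(n, "DCMM")
    print(n0, float(theta.min()), float(theta.max()))
```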
For any functional $F\colon\mathcal{M}\rightarrow\mathbb{R}$, we let $\operatorname{grad}F$ denote the functional gradient of $F$ with respect to the Riemannian metric $g$.
| To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilbert space.
| Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity.
Therefore, in this scenario, variational transport provably enjoys both computational efficiency and global optimality. | Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical error that converges to zero as the number of particles goes to infinity.
To the best of our knowledge, we seem to propose the first particle-based algorithm for general distributional optimization problems with both global convergence and global optimality guarantees. | we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the number of particles goes to infinity.
| A |
To make the policy transferable, traffic signal control is also modeled as a meta-learning problem in [14, 49, 36]. Specifically, the method in [14] performs meta-learning on multiple independent MDPs and ignores the influences of neighbor agents. A data augmentation method is proposed in [49] to generate diverse traffic flows to enhance meta-RL, and it also regards agents as independent individuals, without explicitly considering neighbors. In addition, a model-based RL method is proposed in [36] for high data efficiency. However, it may introduce cumulative errors due to errors in the learned environment model, and it is hard to achieve the asymptotic performance of model-free methods. Our method also belongs to the meta-RL paradigm; its main advantages lie in two aspects. Firstly, we consider the neighbour information during the meta-learning, which is critical for the multi-agent coordination. Secondly, our method learns a latent variable to represent task-specific information, which can not only balance exploration and exploitation [50], but also help to learn the shared structures of reward and transition across tasks. As far as we know, our work is the first to propose an intrinsic motivation to enhance the robustness of the policy on traffic signal control. See Appendix F for a brief overview of the above methods.
| Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which make the policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards and observation transitions because of neighbor agents’ different actions. In this case, the received rewards and observation transitions of the current agent could not be well predicted only conditioned on its own or partial neighbors’ observations and performed actions. To avoid this situation, four decoders are introduced to predict the next observations and rewards without neighbor agents’ policies or with partially neighbor agents, respectively. In addition, an intrinsic reward is designed to reduce the bias among different predictions and enhance learning stability. In other words, the design of the decoders and intrinsic reward is similar to the law of contra-positive. The unstable learning will cause the predicted rewards and observation transitions unstable in a decentralized way, while our decoders and intrinsic reward encourage the prediction convergent. In addition, from the perspective of information theory, the intrinsic reward design makes the policy of each agent robust to neighbours’ polices, which could make the learned policy easy to transfer.
| Intrinsic motivation methods have been widely studied in the literature, such as handling the difficult-to-learn dilemma in sparse reward environments [51] or trading off the exploration and exploitation in non-sparse reward scenarios [50]. Most of the intrinsic reward approaches can be classified into two classes. The first class is counted-based paradigm, where agents are incentivized to reach infrequently visited states by maintaining state visitation counts [52, 53] or density estimators [54, 55]. However, this paradigm is challenging in continuous or high-dimensional state space. The second is curiosity-based paradigm, in which agents are rewarded for high prediction error in a learned reward [56, 17] or inverse dynamics model [55, 57]. The uncertainty of the agent’s assessment of its behavior can be measured as a curiosity for environmental exploration.
|
Besides the above two classes, other intrinsic reward methods are mainly task-oriented and for a specific purpose. For example, the method in [19] uses the discrepancy between the marginal policy and the conditional policy as the intrinsic reward for encouraging agents to have a greater social impact on others. The errors between the joint cooperative behaviors and the individual actions are defined in [58] as an intrinsic reward, which is suitable for agent-pair tasks that rely heavily on collaboration, such as dual-arm robot tasks. Similar with them, the proposed intrinsic reward is specially designed | Before formulating the problem, we firstly design the learning paradigm by analyzing the characteristics of the traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant and difficult to obtain in realistic deployment, but also likely suffers from dimensional explosion. Moreover, once the policy function relies on the global state information or neighbors on execution, it is hard to transfer the policy from the training scenario to other unseen scenarios containing different road networks. Hence, it is natural to resort to the decentralized policy, which controls each signal only conditioned on its own history. However, the fully decentralized learning ignores the coordination. If agents are behaved independently, agents maximize their own rewards and may sacrifice the interests of others, it is difficult for the entire system to reach the optimum. Therefore, we model the task as Decentralized Partially Observable Markov Decision Process (Dec-POMDP) [67]. The neighbors’ information is considered, all agents’ policies are optimized synchronously in training, while only the agent’s observation history is used in the execution.
| B |
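The intrinsic-reward idea described in this row — decoders that predict rewards with and without neighbour information, and a bonus that penalizes the discrepancy between their predictions — could be sketched roughly as below; the number of decoders shown, the network sizes, and the squared-error form of the discrepancy are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class RewardDecoders(nn.Module):
    """Two reward decoders: one conditioned only on the agent's own
    observation/action, one additionally on neighbour actions."""
    def __init__(self, obs_dim, act_dim, nbr_dim, hidden=64):
        super().__init__()
        self.own = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
        self.with_nbrs = nn.Sequential(nn.Linear(obs_dim + act_dim + nbr_dim, hidden),
                                       nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, obs, act, nbr_acts):
        r_own = self.own(torch.cat([obs, act], dim=-1))
        r_nbr = self.with_nbrs(torch.cat([obs, act, nbr_acts], dim=-1))
        return r_own, r_nbr

def intrinsic_reward(r_own, r_nbr):
    # Penalize disagreement between the two predictions, so that the agent's
    # own history suffices to predict its outcome (smaller gap -> larger bonus).
    return -((r_own - r_nbr) ** 2).squeeze(-1)
```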
Then, for every $\mathbf{x}_{0}\in S_{\tau}(\mathbf{x}_{*})$, we have
| $\|\mathbf{x}_{k+1}-\mathbf{x}_{0}\|_{2}$ | $\leq\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\|_{2}+\cdots+\|\mathbf{x}_{1}-\mathbf{x}_{0}\|_{2}$ | $\mathbf{x}_{1}=\mathbf{x}_{0}-A^{\dagger}(A\,\mathbf{x}_{0}-\mathbf{b})=A^{\dagger}\,\mathbf{b}+(I-A^{\dagger}A)\,\mathbf{x}_{0}$ | $\|\mathbf{x}_{1}-\mathbf{x}_{*}\|\leq\|\mathbf{x}_{1}-\mathbf{x}_{0}\|_{2}+\|\mathbf{x}_{0}-\mathbf{x}_{*}\|_{2}$ | D
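The identity in the third option above, $\mathbf{x}_{1}=\mathbf{x}_{0}-A^{\dagger}(A\mathbf{x}_{0}-\mathbf{b})$, can be checked numerically with a short sketch; the matrix dimensions and random data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # possibly rectangular system
b = rng.standard_normal(5)
x0 = rng.standard_normal(3)

A_pinv = np.linalg.pinv(A)

# One pseudo-inverse correction step, as in the identity quoted above.
x1 = x0 - A_pinv @ (A @ x0 - b)

# Equivalent closed form: x1 = A^+ b + (I - A^+ A) x0.
x1_alt = A_pinv @ b + (np.eye(3) - A_pinv @ A) @ x0
print(np.allclose(x1, x1_alt))           # True
```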
The second type of benchmarks is generated from the BPPLIB library (?), a collection of bin packing benchmarks used in various works on (offline) algorithms for bin packing. In particular, we report results on the benchmarks “GI” (?), “Schwerin” (?), “Randomly_Generated” (?), “Schoenfield_Hard28” (?) and “Wäscher” (?).
| As illustrated in Figure 3(c), the smaller the parameter $\lambda$, the better the performance of Hybrid($\lambda$); in particular, ProfilePacking performs the best, which suggests that for inputs from a small set of item sizes, it is beneficial to choose a small value of $\lambda$. This can be explained by the fact that the prediction error is relatively smaller for these types of inputs.
This finding can be useful in the context of applications such as VM placement: this is because there is only a small number of different VMs that can be assigned to any given physical machine, as we discussed in Section 5.1. |
In this work, we focus on the online variant of bin packing, in which the set of items is not known in advance but is rather revealed in the form of a sequence. Upon the arrival of a new item, the online algorithm must either place it into one of the currently open bins, as long as this action does not violate the bin’s capacity, or into a new bin. The online model has several applications related to dynamic resource management, such as virtual machine placement for server consolidation (?, ?) and memory allocation in data centers (?). Online bin packing has a long history of study; in Section 1.2 we discuss, in more detail, some of the most significant known results in this setting. |
In this section, we present an experimental evaluation of the performance of our algorithms (the code on which the experiments are based is available at https://github.com/shahink84/BinPackingPredictions). Specifically, in Section 6.1 we describe the benchmarks and the input generation model; in Section 6.2, we expand on the predictions and error measurement; and in Section 6.3, we present and discuss the main experimental results. In addition, in Section 6.4 we report further experiments on the profile size, and in Section 6.5 we provide further methodology for reporting the average performance of our algorithms over multiple runs. Last, in Section 6.6, we study the performance of our algorithms in dynamic settings in which the input is generated from an evolving distribution. | We set the bin capacity to $k=100$, and we also scale down each item to the closest integer in $[1,k]$.
This choice is relevant for applications such as Virtual Machine placement, as explained in Section 5.1. We generate two classes of input sequences. | D
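To make the experimental setting above concrete (capacity $k=100$, integer item sizes in $[1,k]$, inputs drawn from a small set of sizes), here is a toy input generator together with the classical First Fit online heuristic as a baseline; this is only a standard reference algorithm, not the ProfilePacking or Hybrid($\lambda$) algorithms discussed in the row.

```python
import random

K = 100  # bin capacity, as in the experimental setup above

def make_sequence(n_items=1000, sizes=(12, 25, 37, 50), seed=0):
    """Toy input generator: items drawn from a small set of sizes in [1, K]."""
    rng = random.Random(seed)
    return [rng.choice(sizes) for _ in range(n_items)]

def first_fit(sequence, capacity=K):
    """Standard online First Fit: place each item into the first bin that fits."""
    bins = []                                  # remaining capacities of open bins
    for item in sequence:
        for i, free in enumerate(bins):
            if item <= free:
                bins[i] -= item
                break
        else:
            bins.append(capacity - item)       # open a new bin
    return len(bins)

print(first_fit(make_sequence()))
```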
Efficient 3D object representations are fundamental building blocks of many computer vision and machine learning applications, ranging from robotic manipulation (Kehoe et al., 2015) to autonomous driving (Yang et al., 2018a). Contemporary 3D registration devices, such as LIDARs and depth cameras, generate these representations in the form of unordered sets of 3D points sampled sparsely on object surfaces, called point clouds. Although a single point cloud (Qi et al., 2017a, b) can be used to regenerate an object’s surface details (Fan et al., 2017), it does not contain enough information about 3D points’ neighborhood structure to successfully reconstruct a smooth, high-fidelity manifold of the entire surface of an object. This shortcoming limits point clouds’ applicability since surface reconstructions provide an intuitive and efficient object representation, comprehensible for both humans and machines.
|
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperCloud(M) and HyperFlow(M) variants, that are capable of generating the meshes from the unit sphere. |
Patch-based approaches (Yang et al., 2018b; Groueix et al., 2018; Bednarik et al., 2020; Deng et al., 2020b) are much more flexible and enable modeling virtually any surfaces, including those with a non-disk topology. It is achieved using parametric mappings to transform 2D patches into a set of 3D shapes. The first deep neural network which uses 2D manifold into 3D space was FoldingNet (Yang et al., 2018b). FoldingNet uses a single patch to model the surface of an object. | In literature, there exist a huge variety of 3D shape reconstruction models. The most popular ones are dense, pixel-wise depth maps, or normal maps (Eigen et al., 2014; Bansal et al., 2016; Bednarik et al., 2018; Tsoli et al., 2019; Zeng et al., 2019), point clouds (Fan et al., 2017; Qi et al., 2017b; Yang et al., 2018b), meshes (Wang et al., 2018; Gundogdu et al., 2019; Yao et al., 2020; Yifan et al., 2020), implicit functions (Chen & Zhang, 2019; Mescheder et al., 2019; Park et al., 2019; Xu et al., 2019; Atzmon & Lipman, 2020), voxels (Choy et al., 2016; Häne et al., 2017), shape primitives (Chen et al., 2020b; Deng et al., 2020a; Smirnov et al., 2020; Paschalidou et al., 2020), parametric mappings (Yang et al., 2018b; Groueix et al., 2018; Williams et al., 2019; Deprelle et al., 2019; Bednarik et al., 2020) or combinations of some of these (Muralikrishnan et al., 2019; Poursaeed et al., 2020).
All of the above representations have their pros and cons based on memory requirements and surface fitting precision. |
Recently proposed object representations address this pitfall of point clouds by modeling object surfaces with polygonal meshes (Wang et al., 2018; Groueix et al., 2018; Yang et al., 2018b; Spurek et al., 2020a, b). They define a mesh as a set of vertices that are joined with edges in triangles. These triangles create the surface of an object. The resulting representation is efficient and easy-to-render, while at the same time it offers additional benefits, e.g. the possibility of sampling the surface at the desired resolution, and straightforward texturing in any 3D computer graphics software. To obtain such a representation, state-of-the-art approaches leverage deep learning models based on the autoencoder architecture (Wang et al., 2018; Spurek et al., 2020a, b) or based on an ensemble of parametric mappings from 2D rectangular patches to 3D primitives, often referred to as an atlas (Groueix et al., 2018; Yang et al., 2018b; Bednarik et al., 2020; Deng et al., 2020b). The former methods are limited by the topology of the autoencoder latent space distribution, e.g., they cannot model complex structures with a nonspherical topology (Spurek et al., 2020a, b; Wang et al., 2018). Atlas-based approaches, on the other hand, are much more flexible and enable modeling virtually any surface. However, since individual mappings’ consistency is not guaranteed, those methods often yield discontinuities of the reconstructed shapes and their deformation. | D |
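A minimal FoldingNet-style decoder, illustrating the patch-based parametric mappings discussed above (a 2D patch concatenated with a latent shape code is folded into a 3D point set); the layer widths, grid resolution, and latent size are assumptions.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Fold a 2D patch into a 3D point set conditioned on a latent shape code."""
    def __init__(self, code_dim=128, grid_size=32):
        super().__init__()
        u = torch.linspace(-1, 1, grid_size)
        gu, gv = torch.meshgrid(u, u, indexing="ij")
        self.register_buffer("grid", torch.stack([gu.reshape(-1), gv.reshape(-1)], 1))
        self.fold = nn.Sequential(
            nn.Linear(code_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, code):                          # code: (B, code_dim)
        n = self.grid.shape[0]
        grid = self.grid.unsqueeze(0).expand(code.shape[0], -1, -1)
        code = code.unsqueeze(1).expand(-1, n, -1)
        return self.fold(torch.cat([grid, code], dim=-1))   # (B, n, 3) points

points = FoldingDecoder()(torch.randn(4, 128))
print(points.shape)   # torch.Size([4, 1024, 3])
```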
$O\left(\frac{n^{2}}{\varepsilon}\sqrt{n\ln n}\max_{i,j}C_{ij}^{2}\chi\right).$
| parameter $\gamma$ to solve the WB problem.
We ran the IBP and the ADCWB algorithms with different values of the regularization parameter $\gamma$, starting from $\gamma=0.1$ and gradually decreasing its value to $\gamma=10^{-4}$. The number of iterations was taken proportionally to $1/\gamma$ in the IBP and proportionally to $1/\sqrt{\gamma}$ in the ADCWB, according to the theoretical bounds. Figure 2 shows that for a certain value of $\gamma$ (depending on the experiment set and the number of method iterations) the regularized algorithms diverge. Our unregularized DMP algorithm is capable of achieving any accuracy: the more iterations, the better the accuracy. We ran it to achieve about $10^{-8}$ accuracy, probably the machine accuracy. |
We demonstrate the performance of the DMP algorithm on different network architectures with different condition number $\chi$: a complete graph, a star graph, a cycle graph, and Erdős-Rényi random graphs with probability of edge creation $p=0.5$ and $p=0.4$ under the random seed $=10$. As the true barycenter of Gaussian measures can be calculated theoretically [14], we use them to study the convergence of the DMP to the non-optimality gap. | We comment on the complexity of the DMP algorithm compared to the existing state-of-the-art methods: the iterative Bregman projections (IBP) algorithm, its accelerated versions, and the primal-dual algorithm (ADCWB); see Table 1. All of these methods use entropic regularization of the Wasserstein metric with a parameter $\gamma$ which must be taken proportionally to the accuracy $\varepsilon$.
| Finally, we show how the proposed method can be applied to the prominent problem of computing Wasserstein barycenters, to tackle the instability of regularization-based approaches under a small value of the regularizing parameter. The idea is based on the saddle point reformulation of the Wasserstein barycenter problem (see [17]). Wasserstein barycenters, which define the mean of objects that can be modeled as probability measures on a metric space (images, texts, videos), are used in many fields including Bayesian computations [55], texture mixing [50], clustering ($k$-means for probability measures) [13], shape interpolation and color transferring [53], statistical estimation of template models [10] and neuroimaging [25].
| C |
The set of cycles of a graph has a vector space structure over $\mathbb{Z}_{2}$, in the case of undirected graphs, and over $\mathbb{Q}$, in the case of directed graphs [5]. A basis of such a vector space is denoted a cycle basis, and its dimension is the cyclomatic number $\nu=|E|-|V|+|CC|$, where $E$, $V$ and $CC$ are the set of edges, vertices and connected components of the graph, resp. Given a cycle basis $B$ we can define its cycle matrix $\Gamma\in K^{|E|\times\nu}$, where $K$ is the scalar field (i.e.: $\mathbb{Z}_{2}$ or $\mathbb{Q}$), as the matrix that has the cycles of $B$ as columns. | In the case that we can find some non-star spanning tree $T$ of $G$ such that $\cap(T)<\cap(T_{s})$, then we can “simplify” the instance by removing the interbranch cycle-edges with respect to $T$ in $G$ without affecting the inequality (see Lemma 18). |
In the introduction of this article we mentioned that the MSTCI problem is a particular case of finding a cycle basis with sparsest cycle intersection matrix. Another possible analysis would be to consider this in the context of the cycle basis classes described in [6]. |
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the strictly fundamental class context. In more concrete terms this problem is equivalent to finding the cycle basis with the sparsest cycle matrix. In [5] a unified perspective of the problem is presented. The authors show that the MCB problem is different in nature for each class. For example in [10] a remarkable reduction is constructed to prove that the MCB problem is NP-hard for the strictly fundamental class, while in [11] a polynomial time algorithm is given to solve the problem for the undirected class. Some applications of the MCB problem are described in [5, 11, 10, 12]. |
Different classes of cycle bases can be considered. In [6] the authors characterize them in terms of their corresponding cycle matrices and present a Venn diagram that shows their inclusion relations. Among these classes we can find the strictly fundamental class. | D |
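A short sketch of the definitions in this row: the cyclomatic number $\nu=|E|-|V|+|CC|$ and a cycle basis with its cycle matrix, computed here with networkx for illustration.

```python
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)])

# Cyclomatic number: nu = |E| - |V| + |CC|, the dimension of the cycle space.
nu = G.number_of_edges() - G.number_of_nodes() + nx.number_connected_components(G)

basis = nx.cycle_basis(G)        # one cycle basis (over Z_2 for undirected graphs)
print(nu, basis)                 # 2  [[2, 3, 1], [4, 5, 3]] (order may vary)

# The cycle matrix Gamma has |E| rows and nu columns: Gamma[e, c] = 1 iff edge e
# lies on basis cycle c.
edges = list(G.edges())
edge_index = {frozenset(e): i for i, e in enumerate(edges)}
Gamma = [[0] * nu for _ in edges]
for c, cycle in enumerate(basis):
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        Gamma[edge_index[frozenset((u, v))]][c] = 1
```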
In this respect, the case of convex lattice sets, that is, sets of the form $C\cap\mathbb{Z}^{d}$ where $C$ is a convex set in $\mathbb{R}^{d}$, showcases an interesting phenomenon: the Helly number is $2^{d}$ [14, 36], an exponential dependency on the dimension that contributes to the computational intractability of integer programming [12, §6], but a $(p,q)$-theorem holds for every $p\geq q\geq d+1$ [7];
in the words of Bárány and Matoušek [7, §1], “… this large Helly number can be regarded as a ‘local anomaly’ and that the relevant number for other, more global Helly-type properties is only $d+1$.”. | We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor.
The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the method of constrained chain maps and forbidden homological minors to study colorful intersection patterns. We conclude with the proof of Theorem 1.2 in Section 5. | The support of a chain $\sigma$, denoted $\operatorname{supp}(\sigma)$, in a simplicial complex is the set of simplices with nonzero coefficients in $\sigma$. We say that two chains $\sigma$ and $\tau$ have overlapping supports if there exists a simplex in the support of $\sigma$ that intersects a simplex in the support of $\tau$; if no such pair of simplices exists we say that $\sigma$ and $\tau$ have nonoverlapping supports. A chain map $f_{\bullet}\colon C_{\bullet}(K)\to C_{\bullet}(\mathcal{U})$ is nontrivial if the image of every vertex of $K$ is a $0$-chain of $\mathcal{U}$ supported on an odd number of vertices. The simplicial complex $K$ is a homological minor of $\mathcal{U}$, written $K\prec_{H}\mathcal{U}$, if there exists a nontrivial chain map $f_{\bullet}\colon C_{\bullet}(K)\to C_{\bullet}(\mathcal{U})$ such that disjoint simplices are mapped to chains with nonoverlapping supports. If no such chain map exists we say that $K$ is a forbidden minor of $\mathcal{U}$, and write $K\not\prec_{H}\mathcal{U}$. (The notion of homological minor readily extends to any triangulable space: $K$ is a forbidden homological minor of a space $X$ if $K\not\prec_{H}T$ for every triangulation $T$ of $X$. For instance, it can be shown (see [18, Corollary 13]) that the complete graph on 5 vertices (viewed as a 1-dimensional simplicial complex) is a forbidden homological minor of every triangulation of a disk.)
| In this paper, we show that the gap observed for convex lattice sets occurs in the broad topological setting of triangulable spaces with a forbidden homological minor, a notion introduced by Wagner [37] as a higher-dimensional analogue of the familiar notion of graph minors [34].
| Theorem 1.1
depends on $p$, $q$, $K$ and $b$ (but, as usual, is independent of the size of the cover). Moreover, while the Helly number of a $(K,b)$-free cover can grow with $b$ (it is at least $(b-1)(\mu(K)+2)$ [18, Example 2]), the range for which the $(p,q)$-theorem holds is independent of $b$, thus displaying a similar gap as observed for convex lattice sets. | C
Feature transformation usually denotes less sophisticated modifications over the features [14]. Some of the standard transformations also supported by our approach are: (1) rounding, (2) binning, (3) scaling, (4) logarithmic transformations, (5) exponential transformations, and (6) power functions. In this scenario, ML experts and practitioners usually perform exploratory data analysis (EDA), sometimes utilizing visualization of the data distribution to understand which transformation they should apply [15]. Another option is to use automated feature transformation approaches such as a transformation graph [1], which can, however, result in overfitting [2]. Unfortunately, such methods were only employed in regression and reinforcement learning problems [1]. This directs us to an additional open question: (RQ2) which features should we transform, and how can we understand their impact on the final outcome when using a specific data set?
| There is a rather large body of existing work on automatic feature selection techniques [16, 19, 17]. However, one limitation is that features can be redundant if there is a strong correlation among them, and the correlation coefficient is unable to characterize nonlinear relationships. Thus, this is a problem where the feature selection techniques struggle to find a solution because of multiple parameters they have to optimize simultaneously. Guyon and Elisseeff [16] performed a survey including an extensive description of automatic feature selection pitfalls. The authors stress the general problem of finding the smallest possible subset of features for a given data set. They suggest that an automated method cannot be expected to find the best feature subset in all cases by itself. Other methods that face the same challenge are wrappers that use regression or classification models to find an ideal feature subset by iteratively including or excluding features. The combination of learning models (e.g., SVM [39]) and wrapper methods (e.g., RFE [40]) is a commonly used approach for automatic feature selection [20, 21]. Also, metric-based ranking followed by the selection of the k𝑘kitalic_k best features [23, 24] and more complex metrics—such as those used in genetic algorithms [41]—have been examined in the past. But they suffer from the same issues as described before. In our VA system, we implement several alternative feature selection techniques belonging to different types, and we allow users to decide if their aggregation is ideal or they want to focus on one of them.
| Next, as XGBoost [29] is a nonlinear ML algorithm, we also train a linear classifier (a logistic regression [83] model with the default Scikit-learn’s hyperparameters [84]) to compute the coefficients matrix and then use Recursive Feature Elimination (RFE) [40] to rank the features from the best to the worst in terms of contribution.
This technique is referred to as Ranking-based FS [85] in our VA system. We would like to include further techniques in the future, however, the current selection is specifically assembled to contain one candidate for each of the high-level categories of feature selection methods introduced in Section 1. For every method, we normalize the output from 0 to 1 to set a common ground for the user to compare them, as indicated in the legend of Fig. 1(b). Hence, their average is calculated and displayed in the penultimate column. Following the design guidelines from the conventions introduced by prior works [86, 87], we choose red and green colors for the table heatmap. This view also automatically extends for the newly-generated features from combinations of already existing features (cf. Fig. 1(b)). The original features used for the creation of new features are depicted in dark gray in the last column of the table heatmap view. The table is automatically sorted based on the average; however, Impurity-based FI is selected by the user for the Fig. 1(b)) scenario. Due to this selection, the table heatmap resorts the features from the highest to the lowest importance only according to the XGBoost model’s inherent feature importance. More details can be found in Section 4.4. |
Various visualization techniques have been proposed for the task of feature selection, including correlation matrices [42, 43], radial visualizations [44, 45, 46], scatterplots [47], scatterplot matrices [48], feature ranking [49, 50, 51, 52, 53, 54, 55, 56], feature clustering [57], and dimensionality reduction (DR) [53, 58, 59]. The category of techniques more related to our work is feature ranking, since we use automatic feature selection techniques to rank the importance of the different features. For example, a VA tool called INFUSE [50] was designed to aid users in understanding how features are being ranked by the automated feature selection techniques. It presents an aggregated view of results produced by automatic techniques, assisting the user in learning how these work and compare their results with multiple algorithms. Similarly, Klemm et al. [60] propose an approach that performs regression analysis exhaustively between independent features and the target class. These approaches take into account the user’s ability to identify patterns from analyzing the data (e.g., with the colors in a heatmap representation) or choose the feature subset by some quantitative metric. A few other VA systems have leveraged a balanced blending between automatic and visual feature selection techniques. RegressionExplorer [61] is one example for examining logistic regression models. Additionally, the exploration of linear relationships among features was studied by Barlowe et al. [62]. FeatureEnVi offers rather similar characteristics with the tools analyzed above. However, we combine several automatic feature selection techniques and statistical heuristics in cohesive visualizations for evaluating feature selection and feature extraction concurrently. |
Feature selection is about choosing a subset of features from the pool of features available by that time. Feature selection methods can be generally divided into four high-level categories: (1) filter methods, (2) wrapper methods, (3) embedded methods, and (4) hybrid methods [16, 17, 18]. Our feature selection strategy belongs to the last category, as we incorporate techniques from all the other categories. Also, instead of appending features progressively (called forward selection) or considering all features and then discarding some (known as backward elimination), we choose a stepwise selection approach. Therefore, we start with all features, but we can add or remove any number of features at different stages. For feature selection, the vast majority of the existing literature is on automatic feature selection techniques. They may, however, lack transparency without the assistance of visualizations [19, 16, 17, 20, 21]. Furthermore, there is an opportunity to select features from a candidate set, which can be time-consuming if this set is large [22, 23, 24]. Even though a series of analytical tools and systems have been developed to address such challenges with the use of visualizations [25, 26, 27], a remaining open question is: (RQ3) which feature selection technique should we follow when they present diverging results, and how can we verify their effectiveness for particular problems? | D |
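A sketch of the ranking-based selection and importance averaging described above (logistic regression with RFE, plus an impurity-based importance, both normalized to $[0,1]$ and averaged); the example dataset and the use of a random forest as a stand-in for XGBoost are assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Ranking-based FS: RFE on a linear model; rank 1 = most important feature.
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=1).fit(X, y)
rfe_score = 1.0 - (rfe.ranking_ - 1) / (rfe.ranking_.max() - 1)   # map ranks to [0, 1]

# Impurity-based FI from a tree ensemble (stand-in for XGBoost's importances).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
imp_score = rf.feature_importances_ / rf.feature_importances_.max()

average = (rfe_score + imp_score) / 2
print(np.argsort(average)[::-1][:5])    # indices of the five best features
```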
The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system.
We leverage the repeatability of the system, which is higher than the integrated encoder error of $3\,\mu m$, | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combination of the identified system model with the contouring terms. In our approach the tracking error is coupled with the progression along the path through the cost function. The automated tuning of the parameters is performed using a cost that accounts for the global performance over the whole trajectory. Additional constraints in the Bayesian optimization algorithm allow for balancing traversal time, accuracy, and minimization of oscillations, according to the specific crucial requirements of the application. We demonstrate enhanced performance in simulation for a 2-axis gantry, for geometries of different nature.
| MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following various optimization methods, including MPC, feed-forward PID control strategies, or iterative-learning control [6, 7], where friction or vibration-induced disturbances can be corrected. In MPC, closed-loop performance is pushed to the limits only if the plant under control is accurately modeled, alternatively, the performance degrades due to imposed robustness constraints. Instead of adapting the controller for the worst case scenarios, the prediction model can be selected to provide the best closed-loop performance by tuning the parameters in the MPC optimization objective for maximum performance [8, 9, 10]. Using Bayesian optimization-based tuning for enhanced performance has been further demonstrated for cascade controllers of linear axis drives, where data-driven performance metrics have been used to specifically increase the traversal time and the tracking accuracy while reducing vibrations in the systems [11, 12]. The approach has been successfully applied to linear and rotational axis embedded in grinding machines and shown to standardize and automate tuning of multiple parameters [13].
| To bring the model close to the real system, we unify the terms required for the contour control formulation with the velocity and acceleration for each axis from the identified, discretized state-space model from (4).
Also, we include the path progress $s_{k}$ and the two error terms $\hat{e}_{l,k}$ and $\hat{e}_{c,k}$. Here, the velocities and accelerations correspond to the identified system dynamics (4). | The physical system is a 2-axis gantry stage for $(x,y)$ positioning with industrial grade actuators and sensors [14].
The plant can be modeled as a mass-spring-damper system with two masses linked with a damper and a spring for capturing imperfection and friction in the transmitting movement [15]. | D |
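The contouring and lag error terms $\hat{e}_{c,k}$, $\hat{e}_{l,k}$ mentioned above are commonly obtained by projecting the position error onto the local tangent of the reference path; the linearized form below is the standard MPCC-style approximation, shown only to make the terms concrete — it is not taken from the paper.

```python
import numpy as np

def contour_lag_errors(pos, ref_point, ref_tangent_angle):
    """Linearized contouring and lag errors at path parameter s_k.

    pos, ref_point: (x, y) of the tool and of the reference path at s_k.
    ref_tangent_angle: orientation phi of the path tangent at s_k.
    """
    dx, dy = pos[0] - ref_point[0], pos[1] - ref_point[1]
    e_c = np.sin(ref_tangent_angle) * dx - np.cos(ref_tangent_angle) * dy   # contour error
    e_l = -np.cos(ref_tangent_angle) * dx - np.sin(ref_tangent_angle) * dy  # lag error
    return e_c, e_l

# Toy usage: path moving along +x, tool slightly above and behind the reference.
print(contour_lag_errors(pos=(0.9, 0.1), ref_point=(1.0, 0.0), ref_tangent_angle=0.0))
# -> (-0.1, 0.1): a 0.1 contouring offset (sign by convention) and 0.1 of lag
```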
An interesting observation was that a weaker architecture, the CNN, was able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias, resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue for future work is to incorporate appropriate inductive biases into the architectures, perhaps endowing them with the ability to choose the minimal computational power to do a task so that they are less sensitive to unwanted biases. This will essentially enable the algorithms to use Occam’s razor to determine the minimal capabilities required to do a task to reduce their ability to utilize biases. | We have pointed to issues with the existing bias mitigation approaches, which alter the loss or use resampling. An orthogonal avenue for attacking bias mitigation is to use alternative architectures. Neuro-symbolic and graph-based systems could be created that focus on learning and grounding predictions on structured concepts, which have shown promising generalization capabilities [68, 44, 34, 24, 60]. Causality is another relevant line of research, where the goal is to uncover the underlying causal mechanisms [49, 45, 9, 2]. Discovery and usage of causal concepts is a promising direction for building robust systems. These areas have not been explicitly studied for their ability to overcome dataset bias.
| Deep learning systems are trained to minimize their loss on a training dataset. However, datasets often contain spurious correlations and hidden biases which result in systems that have low loss on the training data distribution, but then fail to work appropriately on minority groups because they exploit and even amplify these spurious correlations [71, 35].
| Without bias mitigation mechanisms, standard models (StdM) often use spurious bias variables for inference, rather than developing invariance to them, which often results in their inability to perform well on minority patterns [27, 11, 3, 61]. To address this, several bias mitigation mechanisms have been proposed, and they can be categorized into two groups: 1) methods that access explicit bias labels during training, and 2) methods that do not assume such access. We briefly review methods from these categories, with an emphasis on the methods assessed in our studies.
|
An interesting observation was that a weaker architecture, the CNN, was able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias, resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue for future work is to incorporate appropriate inductive biases into the architectures, perhaps endowing them with the ability to choose the minimal computational power to do a task so that they are less sensitive to unwanted biases. This will essentially enable the algorithms to use Occam’s razor to determine the minimal capabilities required to do a task to reduce their ability to utilize biases. | A
The first two types of methods estimate gaze based on geometric features such as contours, reflection and eye corners. The geometric features can be accurately extracted with the assistance of dedicated devices, e.g., infrared cameras.
More concretely, the 2D eye feature regression method learns a mapping function from geometric feature to point of gaze, e.g., the polynomials [25, 26] and the neural networks [27]. The 3D eye model recovery method builds subject-specific geometric eye models to estimate human gaze directions. | The 3D eye model recovery-based methods usually require personal calibration to recover person-specific parameters such as iris radius and kappa angle.
While these methods often achieve high accuracy, they require dedicated devices such as infrared cameras. |
It is non-trivial to learn an accurate and universal gaze estimation model. Conventional 3D eye model recovery methods usually build a unified gaze model including subject-specific parameters such as eyeball radius [28]. They perform a personal calibration to estimate these subject-specific parameters. In the field of deep learning-based gaze estimation, | The eye model is fitted with geometric features, such as the infrared corneal reflections [28, 29], pupil center [30] and iris contours [31]. However, they usually require a personal calibration process for each subject, since the eye model contains subject-specific parameters such as cornea radius, kappa angles.
| The first two types of methods estimate gaze based on geometric features such as contours, reflection and eye corners. The geometric features can be accurately extracted with the assistance of dedicated devices, e.g., infrared cameras.
More concretely, the 2D eye feature regression method learns a mapping function from geometric features to the point of gaze, e.g., polynomials [25, 26] or neural networks [27]. The 3D eye model recovery method builds subject-specific geometric eye models to estimate human gaze directions. | C
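A minimal sketch of the 2D eye-feature regression idea described above: fit a low-degree polynomial that maps a geometric eye feature to an on-screen gaze point. The pupil-glint displacement vector used as the feature and the second-degree polynomial are illustrative assumptions, not the exact setup of the cited methods.

```python
import numpy as np

def poly_features(v):
    # Second-degree bivariate polynomial terms of a 2-D eye feature (vx, vy)
    vx, vy = v[:, 0], v[:, 1]
    return np.stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2], axis=1)

def fit_gaze_mapping(eye_features, screen_points):
    # Least-squares fit of the polynomial coefficients, one column per screen axis
    A = poly_features(eye_features)
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def predict_gaze(coeffs, eye_features):
    return poly_features(eye_features) @ coeffs

# Toy calibration with nine targets and synthetic eye features
rng = np.random.default_rng(0)
features = rng.uniform(-1.0, 1.0, size=(9, 2))
targets = rng.uniform(0.0, 1.0, size=(9, 2))
coeffs = fit_gaze_mapping(features, targets)
print(predict_gaze(coeffs, features[:3]))
```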
Inspired by the high performance of CNN-based methods that have strong robustness to illumination, facial expression, and facial occlusion changes, we propose in this paper an occlusion removal approach and a deep CNN-based model to address the problem of masked face recognition during the COVID-19 pandemic. Motivations and more details about the proposed method are presented in the following sections. | Experimental results are carried out on the Real-world Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (i.e., forehead and eyes). Next, we describe the selected regions using a pre-trained deep learning model as a feature extractor. This strategy is more suitable in real-world applications compared to restoration approaches. Recently, some works have applied supervised learning on the missing regions to restore them, such as in din2020novel . This strategy, however, is a difficult and highly time-consuming process.
|
Real-World-Masked-Face-Dataset wang2020masked is a masked face dataset devoted mainly to improving the recognition performance of existing face recognition technology on masked faces during the COVID-19 pandemic. It contains three datasets, namely the Masked Face Detection Dataset (MFDD), the Real-world Masked Face Recognition Dataset (RMFRD), and the Simulated Masked Face Recognition Dataset (SMFRD). In this paper, we focus on the last two datasets, described in the following. | The obtained high accuracy compared to other face recognizers is achieved due to the best features extracted from the last convolutional layers of the pre-trained models, and the high efficiency of the proposed BoF paradigm that gives a lightweight and more discriminative power compared to a classical CNN with a softmax function. Moreover, by dealing with only the unmasked regions, the high generalization of the proposed method makes it applicable in real-time applications. Other methods, however, aim to unmask the masked face using generative networks, such as in din2020novel . This strategy is a greedy task and not preferable for real-world applications.
| To tackle these problems, we distinguish two different tasks, namely face mask recognition and masked face recognition. The first one checks whether the person is wearing a mask or not. This can be applied in public places where the mask is compulsory. Masked face recognition, on the other hand, aims to recognize a face with a mask based on the eyes and the forehead regions. In this paper, we handle the second task using a deep learning-based method. We use a pre-trained deep learning-based model in order to extract features from the unmasked face regions (out of the mask region). It is worth stating that the occlusions in our case can occur in only one predictable facial region (the nose and mouth regions), which can be a good guide to handle this problem efficiently.
| A |
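A small sketch of the pipeline described above: crop the unmasked band (forehead and eyes) of an aligned face and embed it with a pre-trained CNN used purely as a feature extractor. The ResNet-50 backbone (assuming a recent torchvision) and the 45% crop ratio are illustrative assumptions; the paper's exact backbone and cropping filter may differ.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pre-trained backbone as a frozen feature extractor (classification head removed)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def unmasked_region_features(face_img: Image.Image, keep_ratio: float = 0.45) -> torch.Tensor:
    """Crop the forehead/eye band of an aligned face image and return a 2048-d embedding."""
    w, h = face_img.size
    upper = face_img.crop((0, 0, w, int(h * keep_ratio)))  # drop the (masked) nose/mouth area
    with torch.no_grad():
        return backbone(preprocess(upper).unsqueeze(0)).squeeze(0)
```

Recognition can then be done by comparing these embeddings (e.g., with cosine similarity) against a gallery of identities.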
recursive definitions of the form $y \leftarrow f\,\overline{i}\,\overline{x} = P_f(\overline{i}, \overline{x}, y)$ | There are eight kinds of processes: two for the structural rules (identity and cut), one for each combination of type polarity (positive or negative) and rule type (left or right), one for definition calls, and one for unreachable code.
{defi}[Process] |
The first two kinds of processes correspond to the identity and cut rules. Values $V$ and continuations $K$ are specified on a per-type-and-rule basis in the following two tables. Note the address variable $x$ distinguished by each rule. |
where the arithmetic variables in $\mathcal{V}$ are free in the constraints (arithmetic formulas) in $\mathcal{C}$, the types in $\Gamma$, the process $P$, and type $C$; moreover, the address variables in $\Gamma$, which are free in $P$, stand for addresses of memory cells representing futures. In particular, $P$ reads from $x, y, \ldots$ (sources) and writes to $z$ (a destination) according to the protocols specified by $A, B, \ldots$ and $C$, respectively. $z$ is written to exactly once, corresponding to the population of a future [Hal85]. Lastly, the vector (indicated by the overline) of arithmetic expressions $\overline{e}$ will be used to track the sizes encountered at each recursive call as mentioned in the introduction. Now, let us examine the definitions of types and processes. For our purposes, detailed syntaxes for expressions $e$ and formulas $\phi$ are unnecessary. |
The first rule for $\to$ corresponds to the identity rule and copies the contents of one cell into another. The second rule, which is for cut, models computing with futures [Hal85]: it allocates a new cell to be populated by the newly spawned $P$. Concurrently, $Q$ may read from said new cell, which blocks if it is not yet populated. The third and fourth rules resolve principal cuts by passing a value to a continuation, whereas the fifth one resolves definition calls. Lastly, the final two rules perform the action of writing to a cell. | B
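The futures reading of the cut rule can be illustrated with a small, hedged analogy in Python (an informal sketch using threads, not the paper's formal semantics): a cut allocates a fresh write-once cell, spawns the producer to populate it exactly once, and a consumer that reads the cell blocks until it has been written.

```python
import threading

class Cell:
    """A write-once cell (future): one writer populates it; readers block until then."""
    def __init__(self):
        self._event = threading.Event()
        self._value = None

    def write(self, value):      # performed exactly once by the producer
        self._value = value
        self._event.set()

    def read(self):              # blocks if the cell is not yet populated
        self._event.wait()
        return self._value

def cut(producer, consumer):
    """Allocate a new cell, spawn the producer to fill it, and run the consumer concurrently."""
    cell = Cell()
    threading.Thread(target=producer, args=(cell,)).start()
    return consumer(cell)

# Toy usage: the producer writes 42; the consumer reads it (waiting if necessary).
print(cut(lambda c: c.write(42), lambda c: c.read() + 1))  # 43
```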
Figure 13: The comparison of cloud-side computational efficiency between FairCMS-I and FairCMS-II. The bars and polyline correspond to the left and right Y-axes, respectively. The time consumed by FairCMS-II is 100 times the reading on the Y-axis. (a) Efficiency comparison under different numbers of users. (b) Efficiency comparison under different image pixels. | The owner-side efficiency and scalability performance of FairCMS-II are directly inherited from FairCMS-I, and the achievement of the three security goals of FairCMS-II is also shown in Section VI. Compared with FairCMS-I, it is easy to see that in FairCMS-II the cloud’s overhead is increased considerably due to the adoption of re-encryption operations and homomorphic operations on the ciphertext of the media content, which means it will cost the owner more to rent the cloud’s resources. We regard this as the trade-off between security and cost. In actual use, the two proposed schemes can be selected according to different security requirements. The flexibility of choice in cloud-side efficiency also constitutes one of the prominent advantages of our work.
| Finally, the comparison between the two proposed schemes and the existing relevant schemes is summarized in Table I. As can be seen therein, the two proposed schemes FairCMS-I and FairCMS-II have advantages over the existing works. In addition, the two proposed schemes offer owners the flexibility to choose. If the security requirements for the media content are not excessively rigorous and the size of the media content is small (e.g., images with a moderate pixel count), the owner can choose FairCMS-I to minimize the cost of renting cloud resources; otherwise, the owner can choose FairCMS-II. There is no fixed security requirement or content size threshold to guide the selection between these two options. Instead, it is up to the owner to make a decision based on the objective application scenario and his/her subjective considerations. In Section VIII, we conduct a comparative experiment on the cloud-side efficiency of FairCMS-I and FairCMS-II to provide a quantitative reference for the owner’s decision-making.
|
This paper solves the three problems faced by cloud media sharing and proposes two schemes, FairCMS-I and FairCMS-II. FairCMS-I gives a method to transfer the management of LUTs to the cloud, enabling the calculation of each user’s D-LUT in the ciphertext domain and its subsequent distribution. However, utilizing the single-value alteration method for masking the original media content does not achieve IND-CPA security. Then FairCMS-II offers an enhanced privacy solution by replacing the encryption method with the lifted-ElGamal based PRE scheme, albeit at the cost of increased cloud overhead. Notably, both FairCMS-I and FairCMS-II fulfill the scalability and owner-side efficiency requirements. In summary, the two proposed schemes can facilitate the media sharing of owners while simultaneously ensuring the joint protection of copyright and users’ rights, ultimately promoting the sustainable growth of the media sharing industry. | Second, we compare the cloud-side efficiency of FairCMS-I and FairCMS-II, and the results are presented in Fig. 13. As shown therein, the cloud-side efficiency of FairCMS-I is significantly higher than that of FairCMS-II, thus validating our analysis in Section VII. The main reason for the cloud-side efficiency gain of FairCMS-I lies in the use of the lightweight single-value alteration method to encrypt the media content, as shown in Fig. 14. This is the key to ensuring that the system is efficient when the size of the media content being shared (e.g., video) is large. The time cost of encrypting a video using the single-value alteration method is depicted in Table V, and it is shown to be acceptably low. Therefore, we suggest that owners select FairCMS-I when the media content size is large and the security requirements are not excessively rigorous. In spite of this, there are no fixed thresholds for media content size and security requirements that can be used as a basis for recommendations regarding scheme selection.
| D |
It should be noted that $f_s$ is invariant to the order of its input, i.e., $f_s(\mathbf{e}_i, \mathbf{e}_j) = f_s(\mathbf{e}_j, \mathbf{e}_i)$. Therefore, the estimated edge weights are identical for the same pair of nodes.
Such continuous modeling of graph structure enables backpropagation of the gradients. | To overcome this limitation, we replace the edge set $E$ with the weighted adjacency $\mathbf{P}$, where $p_{ij}$ is interpreted as the probability of $(v_i, v_j) \in E$, which also reflects how beneficial their interaction is.
It should be noted that we learn different graph structures $\mathbf{P}^{(k)}$ at each $k$-th layer.
From the estimated edge weight matrix $\mathbf{P}^{(k)}$ at each layer, we then sample the beneficial feature interactions, which is also to sample the neighborhood for each feature field. | where $\mathbf{P}^{(k)}[i,:]$ denotes the $i$-th column of matrix $\mathbf{P}^{(k)}$ at the $k$-th layer, and $\mathbf{P}^{(k)}[i, -\text{idx}_i]$ contains a subset of columns of $\mathbf{P}^{(k)}$ that are not indexed by $\text{idx}_i$.
$\text{argtop}_{m_k}$ is an operator that selects the $m_k$ most important nodes for the query node $i$ to attend to. We only keep these $m_k$ feature nodes, and the others are masked. | At each layer of GraphFM, we select the beneficial feature interactions and treat them as edges in a graph. Then we utilize a neighborhood/interaction aggregation operation to encode the interactions into feature representations.
By design, the highest order of feature interaction increases at each layer and is determined by layer depth, and thus the feature interactions of order up to the highest can be learned. | B |
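A toy sketch of the layer-wise selection-and-aggregation step described above: score all field pairs symmetrically, keep the top-$m_k$ neighbors per field, and aggregate the selected neighbors' embeddings. The dot-product scoring, sigmoid, and weighted-mean aggregation are illustrative assumptions rather than GraphFM's exact architecture.

```python
import torch

def select_and_aggregate(E: torch.Tensor, m_k: int) -> torch.Tensor:
    """E: (num_fields, dim) field embeddings; returns aggregated representations."""
    scores = E @ E.t()
    P = torch.sigmoid(0.5 * (scores + scores.t()))   # symmetric, order-invariant weights
    P.fill_diagonal_(0.0)

    # argtop_{m_k}: keep the m_k most beneficial neighbors per field, mask the rest
    topk = torch.topk(P, m_k, dim=1)
    mask = torch.zeros_like(P).scatter_(1, topk.indices, 1.0)
    P = P * mask

    # Neighborhood/interaction aggregation: weighted mean over the selected neighbors
    weights = P / P.sum(dim=1, keepdim=True).clamp_min(1e-9)
    return weights @ E

E = torch.randn(10, 16)                       # 10 feature fields, 16-dimensional embeddings
print(select_and_aggregate(E, m_k=3).shape)   # torch.Size([10, 16])
```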
which is reminiscent of the $\mathcal{O}(L_f^{\mathcal{X}} D^2 / t)$ rate of the original Frank-Wolfe algorithm for the smooth and convex case.
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of $\mathcal{O}(1/t)$. The idea of the proof is very similar to the one in Jaggi [2013]. In a nutshell, as the primal progress per iteration is directly related to the step size times the Frank-Wolfe gap, we know that the Frank-Wolfe gap cannot remain indefinitely above a given value, as otherwise we would obtain a large amount of primal progress, which would make the primal gap become negative. This is formalized in Theorem 2.6. | For AFW, we can see that the algorithm either chooses to perform what is known as a Frank-Wolfe step in Line 7 of Algorithm 5
if the Frank-Wolfe gap $g(\mathbf{x})$ is greater than the away gap $\langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{x}_t \rangle$, or an Away step in Line 9 of Algorithm 5 otherwise. | We can make use of the proof of convergence in the primal gap to prove linear convergence in the Frank-Wolfe gap. In order to do so, we recall a quantity formally defined in Kerdreux et al. [2019] but already implicitly used earlier in Lacoste-Julien & Jaggi [2015] as:
| Moreover, as the upper bound on the Bregman divergence holds for $\nu = 2$ regardless of the value of $d_2(\mathbf{x}, \mathbf{y})$, we can modify the proof of Theorem 2.4 to obtain a convergence rate of the form:
| A |
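A minimal vanilla Frank-Wolfe loop over the probability simplex, tracking the Frank-Wolfe gap; the quadratic objective, the simplex feasible set, and the standard step size $\gamma_t = 2/(t+2)$ are illustrative assumptions rather than the exact algorithm variants discussed above.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=200):
    """Vanilla Frank-Wolfe on the probability simplex; returns the iterate and the min FW gap."""
    x, min_gap = x0.copy(), np.inf
    for t in range(iters):
        g = grad(x)
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0             # linear minimization oracle over the simplex
        gap = g @ (x - v)                 # Frank-Wolfe gap <grad f(x), x - v>
        min_gap = min(min_gap, gap)
        x += 2.0 / (t + 2.0) * (v - x)    # agnostic step size gamma_t = 2/(t+2)
    return x, min_gap

# Toy smooth convex objective f(x) = 0.5 * ||x - b||^2 with b in the simplex
b = np.array([0.2, 0.5, 0.3])
x, gap = frank_wolfe_simplex(lambda x: x - b, np.array([1.0, 0.0, 0.0]))
print(x, gap)
```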
In the semi-streaming model, which is the most commonly established variant of the graph stream model, the algorithm is given $\widetilde{O}(n)$ space for input graphs with $n$ nodes (the $\widetilde{O}$ hides poly-logarithmic terms, thus $\widetilde{O}(n) = n \cdot \operatorname{poly}(\log n)$). This has turned out to be the sweet spot since even basic graph problems such as connectivity become intractable with less space [FKM+05]. Moreover, note that often even just storing a solution requires $\Omega(n \log n)$ memory.
|
In the first pass, we apply a simple greedy algorithm to find a maximal matching, hence a 2-approximation. This 2-approximate maximum matching is our starting matching. The rest of our algorithm is divided into multiple phases. In each phase, we iteratively improve the approximation ratio of our current matching $M$ by finding a set of disjoint $M$-augmenting paths (and performing the augmentations accordingly). We stop the algorithm after a certain number of phases to be fixed later (see Algorithm 1). |
It is known that finding an exact matching requires linear space in the size of the graph and hence it is not possible to find an exact maximum matching in the semi-streaming model [FKM+04], at least for sufficiently dense graphs. Nevertheless, this result does not apply to computing a good approximation to the maximum matching in this model. We call an algorithm an $\alpha$-approximation if the matching has a size at least $1/\alpha$ times the optimum matching. |
Given a graph on $n$ vertices, there is a deterministic $(1+\varepsilon)$-approximation algorithm for maximum matching that runs in $\operatorname{poly}(1/\varepsilon)$ passes in the semi-streaming model. |
Let $\Delta$ be an upper bound on the structure size and $h(\varepsilon)|M|$ be the maximum number of active nodes at the end of a phase. Fix a phase. Let $k \geq 3$ be an integer parameter. Let $M$ be the matching at the beginning of that phase, and assume that $M$ is not a $(1+2/k)$-approximation of a maximum matching. Then, by the end of the same phase the matching size will increase by factor | B
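The first-pass greedy step mentioned above, sketched as a single pass over an edge stream: an edge is kept whenever both endpoints are still unmatched, which yields a maximal matching and hence a 2-approximation.

```python
def greedy_maximal_matching(edge_stream):
    """One pass over the stream of edges; returns a maximal matching (a 2-approximation)."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Toy stream: the edges of a path 0-1-2-3-4 arriving one by one
print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3), (3, 4)]))  # [(0, 1), (2, 3)]
```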
$\min_{\bm{x} \in \mathbb{R}^p} f(\bm{x}) = \frac{1}{n}\sum_{i=1}^{n} f_i(\bm{x}),$
|
The communication between all the agents is modeled by directed graphs. Given a strongly connected graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$ with $\mathcal{E} \subset \mathcal{N} \times \mathcal{N}$ being the edge set, agent $i$ can receive information from agent $j$ if and only if $(i, j) \in \mathcal{E}$. There are two $n$-by-$n$ nonnegative matrices $\bm{R}$ and $\bm{C}$. | Many methods have been proposed to solve the problem (1) under various settings on the optimization objectives, network topologies, and communication protocols.
The paper [10] developed a decentralized subgradient descent method (DGD) with diminishing stepsizes to reach the optimum for convex objective functions over an undirected network topology. | The $n$ agents are connected through a general directed network and only communicate directly with their immediate neighbors.
The problem (1) has received much attention in recent years due to its wide applications in distributed machine learning [1, 2, 3], multi-agent target seeking [4, 5], and wireless networks [6, 7, 8], among many others. | For example, the rapid development of distributed machine learning involves data whose size is getting increasingly large and which are usually stored across multiple computing agents that are spatially distributed. Centralizing large amounts of data is often undesirable due to limited communication resources and/or privacy concerns,
and decentralized optimization serves as an important tool to solve such large-scale distributed learning problems due to its scalability, sparse communication, and better protection for data privacy [9]. | C |
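A toy decentralized gradient descent (DGD) iteration in the spirit of the method mentioned above: each agent mixes its neighbors' iterates through a stochastic weight matrix and then takes a local gradient step with a diminishing stepsize. The quadratic local objectives, the ring topology, and the specific stepsize are illustrative assumptions.

```python
import numpy as np

n, p = 4, 3
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, p))      # agent i holds f_i(x) = 0.5 * ||x - targets[i]||^2

W = np.zeros((n, n))                   # doubly stochastic mixing matrix on a ring
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

X = np.zeros((n, p))                   # one local iterate per agent (rows)
for t in range(1, 501):
    grads = X - targets                          # local gradients
    X = W @ X - (1.0 / np.sqrt(t)) * grads       # mix with neighbors, then descend

x_star = targets.mean(axis=0)                    # minimizer of the average objective
print(np.linalg.norm(X - x_star, axis=1))        # per-agent distance to the optimum
```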
$\left\{\sum_{m=1}^{M} f_m(x_m, y_m) + \tfrac{\lambda}{2}\|\sqrt{W}X\|^{2} - \tfrac{\lambda}{2}\|\sqrt{W}Y\|^{2}\right\}$ as a whole, without taking into account its composite structure. As a basic method we may consider the classical method for smooth saddle point problems – the Extra Step Method [40] (or Mirror Prox [41]). Then the numbers of oracle calls for the saddle function and for the composites are the same.
Note that in the problem (1) the step along the gradient of the regularizer ($\tfrac{\lambda}{2}\|\sqrt{W}X\|^{2} - \tfrac{\lambda}{2}\|\sqrt{W}Y\|^{2}$) requires communication with neighboring nodes (due to multiplication by the matrix $W$). Meanwhile, for the gradient calculation of $\sum_{m=1}^{M} f_m(x_m, y_m)$ it is enough to calculate all the local gradients of $f_m$ and not exchange information at all. | Certainly, we want to reduce the number of communications (or calls to the regularizer gradient) as much as possible.
This is especially important when the problem (1) is fairly personalized ($\lambda \ll L$) and information from other nodes is not significant. To solve this problem and separate the oracle complexities for the saddle function and the composites, we base our method on the sliding technique [27]. | To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, to propose optimal algorithms, and to derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detailed comparison with them in Appendix C. Since we consider a personalized setting, we can have a significant gain in communications. For example, when $\lambda = 0$ or small enough in (1), the importance of local models increases and we may communicate less frequently.
We now outline the main contributions of our work as follows (please also refer to Table 1 for an overview of the results): |
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide lower bounds on both the number of communications and the number of local oracle calls required to solve problem (1). Furthermore, we have developed novel methods (Algorithm 1, Algorithm 2, Algorithm 3) for this problem that are optimal up to logarithmic factors in certain scenarios (see Table 1). These algorithms are based on sliding or variance reduction techniques. The theoretical analysis and experimental evidence corroborate our methods. Moreover, we have customized our approach for neural network training. | or open-source codes). This definition comes from the standard result that for smooth functions the stepsize is $\sim \frac{1}{L}$. We do not say that this is a good definition of $L$, but we only need it for the intuition. We also mention that we are interested in the case when $\lambda \ll L$.
| A |
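A minimal extragradient (Extra Step / Mirror Prox) iteration for a toy smooth saddle-point problem $\min_x \max_y f(x, y)$; the bilinear-plus-quadratic objective and the constant stepsize are illustrative assumptions, not the schemes analyzed above.

```python
import numpy as np

# Toy saddle problem: f(x, y) = 0.5*||x||^2 + x^T A y - 0.5*||y||^2, saddle point at (0, 0)
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))

grad_x = lambda x, y: x + A @ y
grad_y = lambda x, y: A.T @ x - y

x, y, eta = np.ones(3), np.ones(3), 0.1
for _ in range(2000):
    # Extrapolation (extra step) to a midpoint
    xh = x - eta * grad_x(x, y)
    yh = y + eta * grad_y(x, y)
    # Update using gradients evaluated at the midpoint
    x = x - eta * grad_x(xh, yh)
    y = y + eta * grad_y(xh, yh)

print(np.linalg.norm(x), np.linalg.norm(y))   # both norms approach 0
```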
This highlights the main drawback of MW(C)CE, which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is the maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018); however, outside of the two-player constant-sum setting, these are generally not easy to compute (Daskalakis et al., 2009). CEs exist in a convex polytope, so any convex function can select among them. Maximum entropy correlated equilibrium (MECE) (Ortiz et al., 2007) is limited to full-support solutions, which may not exist when $\epsilon = 0$, and can be hard to solve in practice. Therefore, there is a gap in the literature for a computationally tractable, unique solution concept, and this work proposes MG(C)CE to fill this gap. |
The new solution concept MG(C)CE is rooted in the powerful principles of entropy and margin maximisation. Therefore it is a simple solution that makes limited assumptions, and is robust to many possible counter strategies (Jaynes, 1957). The MG(C)CE defines a family of unique solutions parameterized by $\epsilon$, which can control for the properties of the distribution. We have compared it to other NE, CE, and $\alpha$-Rank solutions, and have shown it has several advantages over these approaches, and performs very well across a variety of games. | There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE), which is defined as the CE that maximises the sum of all players’ payoffs. An MWCE can be obtained by solving a linear program; however, the MWCE may not be unique and therefore does not fully solve the equilibrium selection problem (e.g. constant-sum game solutions all have equal payoff). The second such concept is Maximum Entropy Correlated Equilibrium (MECE) (Ortiz et al., 2007), which maximises Shannon’s entropy (Shannon, 1948) as an objective. MECE also shares some interesting properties with MGCE, such as computational scalability when the solution is full-support (positive probability mass everywhere). Drawbacks of this approach are that the literature does not provide algorithms when the solution is general-support (non-negative probability) and that maximising Shannon’s entropy can be complex.
| MG(C)CE, however, is the solution to a quadratic program, and therefore can be solved in polynomial time. Furthermore, if the assumption is made that the solution is full-support, the algorithm’s variables scale better than the number of σ𝜎\sigmaitalic_σ parameters.
| MG(C)CE can provide solutions in general-support and, similar to MECE, MG(C)CE permits a scalable representation when the solution is full-support. Under this scenario, the distribution inequality constraint variables, β𝛽\betaitalic_β, are inactive, are equal to zero, can be dropped, and the α𝛼\alphaitalic_α variables can fully parameterize the solution.
| D |
Given $\eta > 0$ and a query $q$, the Gaussian mechanism with noise parameter $\eta$ returns its empirical mean $q(s)$ after adding a random value, sampled from an unbiased Gaussian distribution with variance $\eta^2$. Formally, $M(s, q) \sim \mathcal{N}(q(s), \eta^2)$. (In the case of an adaptive process, one can also consider the case where the $\eta_i$ are adaptively chosen by the analyst and provided to the mechanism as the auxiliary parameter $\theta_i$.)
| Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes stability definition requires be bounded. Simply put, the Bayes factor $K(\cdot, \cdot)$ (defined in the lemma below) represents the amount of information leaked about the dataset during the interaction with an analyst, by moving from the prior distribution over
data elements to the posterior induced by some view $v$. The degree to which a query $q$ overfits to the dataset is expressed by the correlation between the query and that Bayes factor. This simple lemma is at the heart of the progress that we make in this paper, both in our intuitive understanding of adaptive data analysis, and in the concrete results we show in subsequent sections. Its corresponding version for arbitrary queries is presented in Section C.2. | In this section, we give a clean, new characterization of the harms of adaptivity. Our goal is to bound the distribution error of a mechanism that responds to queries generated by an adaptive analyst.
This bound will be achieved via a triangle inequality, by bounding both the posterior accuracy and the Bayes stability (Definition 3.3). Missing proofs from this section appear in Appendix C. |
In order to leverage Lemma 3.5, we need a stability notion that implies Bayes stability of query responses in a manner that depends on the actual datasets and the actual queries (not just the worst case). In this section we propose such a notion and prove several key properties of it. Missing proofs from this section can be found in Appendix D. | Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between a specific $q$ and $K(\cdot, v)$, as discussed in Section 6. The second part of this lemma implies that bounding the appropriate divergence is necessary and sufficient for bounding the Bayes stability of the worst query in the corresponding family, which is how the main theorems of this paper are all achieved, using the next corollary.
| B |
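The Gaussian mechanism described above, as a short sketch: answer a statistical query with its empirical mean over the sample plus $\mathcal{N}(0, \eta^2)$ noise. The bounded query and the dataset here are toy assumptions.

```python
import numpy as np

def gaussian_mechanism(sample, query, eta, rng=None):
    """Return the empirical mean of `query` over `sample`, perturbed by N(0, eta^2) noise."""
    rng = np.random.default_rng() if rng is None else rng
    empirical_mean = np.mean([query(x) for x in sample])
    return empirical_mean + rng.normal(0.0, eta)

# Toy usage: a bounded query on a dataset of scalars
s = np.random.default_rng(0).uniform(0.0, 1.0, size=1000)
print(gaussian_mechanism(s, lambda x: float(x > 0.5), eta=0.05))
```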
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter k𝑘kitalic_k; but the running time of modern parameterized algorithms for NP-hard problems is not exponential in the total input size. Instead, fixed-parameter tractable (FPT) algorithms have a running time that scales polynomially with the input size, and which only depends exponentially on a problem parameter such as the solution size or treewidth. Hence an exponential speed-up of such algorithms cannot be explained by merely a decrease in input size, but only by a decrease in the parameter! |
However, we argue that these results on kernelization do not explain the often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying effective preprocessing steps to non-trivial algorithms. Why not? A kernelization algorithm guarantees that the input size is reduced to a function of the parameter k𝑘kitalic_k; but the running time of modern parameterized algorithms for NP-hard problems is not exponential in the total input size. Instead, fixed-parameter tractable (FPT) algorithms have a running time that scales polynomially with the input size, and which only depends exponentially on a problem parameter such as the solution size or treewidth. Hence an exponential speed-up of such algorithms cannot be explained by merely a decrease in input size, but only by a decrease in the parameter! |
We start by motivating the need for a new direction in the theoretical analysis of preprocessing. The use of preprocessing, often via the repeated application of reduction rules, has long been known [3, 4, 44] to speed up the solution of algorithmic tasks in practice. The introduction of the framework of parameterized complexity [21] in the 1990s made it possible to also analyze the power of preprocessing theoretically, through the notion of kernelization. It applies to parameterized decision problems $\Pi \subseteq \Sigma^* \times \mathbb{N}$, in which every instance $x \in \Sigma^*$ has an associated integer parameter $k$ which captures one dimension of its complexity. For Feedback Vertex Set, typical choices for the parameter include the size of the desired solution or structural measures of the complexity of the input graph. A kernelization for a parameterized problem $\Pi$ is then a polynomial-time algorithm that reduces any instance with parameter value $k$ to an equivalent instance, of the same problem, whose total size is bounded by $f(k)$ for some computable function $f$ of the parameter alone. The function $f$ is the size of the kernelization.
| We have taken the first steps into a new direction for preprocessing which aims to investigate how and when a preprocessing phase can guarantee to identify parts of an optimal solution to an 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problem, thereby reducing the running time of the follow-up algorithm. Aside from the technical results concerning antler structures for Feedback Vertex Set and their algorithmic properties, we consider the conceptual message of this research direction an important contribution of our theoretical work on understanding the power of preprocessing and the structure of solutions to 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP-hard problems.
| C |
Object placement [2, 24, 65, 154, 197] seeks a reasonable location, size, and shape by predicting the foreground transformation to avoid the abovementioned inconsistencies. Previous object placement methods [197, 154] mainly predict a simple form of spatial transformation, that is, shifting and scaling the foreground to achieve a reasonable location and size. Some other methods [65, 88] predict a more general form of spatial transformation (e.g., affine transformation, perspective transformation, thin plate spline transformation) to warp the foreground. For more advanced geometric transformations like view synthesis and pose transfer, we should resort to generative approaches [183, 141] to change the viewpoint/pose of the foreground.
When placing the object on the background, unreasonable occlusion may occur. Most previous methods seek a reasonable placement to avoid unreasonable occlusions, while some methods [2, 190, 147] aim to fix unreasonable occlusion by removing the occluded regions of the foreground based on the estimated depth information. | After compositing a new image with foreground and background, there exist many issues that could make the composite image unrealistic and thus significantly degrade its quality. These issues can be summarized as the inconsistency between foreground and background, which can be divided into appearance inconsistency, geometric inconsistency, and semantic inconsistency. Each type of inconsistency involves a number of issues to be solved. The image composition task can be decomposed into multiple sub-tasks, in which each sub-task targets one or more issues. Next, we will introduce each type of inconsistency one by one. | Figure 2: The quality of the composite image is degraded by appearance inconsistency, geometric inconsistency, and semantic inconsistency. Each type of inconsistency involves a number of issues. Each sub-task addresses one or more issues.
| The semantic inconsistency includes but is not limited to: 1) the foreground appears at a semantically unreasonable place (e.g., a zebra is placed in the living room); 2) the foreground has unreasonable interactions with other objects or people (e.g., a person is riding a motorbike, but the person and the motorbike are facing towards opposite directions); 3) the background may have a semantic impact on the foreground appearance. The semantic inconsistency is judged based on commonsense knowledge, so the cases of semantic inconsistency may be arguable according to subjective judgement. For example, when a car is placed in the water, it can be argued that a car is sinking into the water after a car accident. However, such an event has rather low probability compared with commonly seen cases, so we can claim that the car appears at an unreasonable place, which belongs to semantic inconsistency.
A partial solution to semantic inconsistency falls into the scope of object placement. To be exact, by predicting a suitable spatial transformation for the foreground, we can relocate the foreground to a reasonable place or adjust the pose of the foreground to make its interactions with the environment more convincing. Additionally, the appearance of the foreground object may be affected by the background semantically, which is different from low-level appearance inconsistency (illumination, shadow). For example, a car placed on the snowy ground may be covered by snow. A student inserted into a group of students wearing school uniforms should wear the same school uniform. Such semantic appearance variation is very flexible and challenging, and will not be fully discussed in this survey. | Object placement aims to paste the foreground on the background with a suitable location, size, and shape. As shown in Fig. 4, the cases of unreasonable object placement include but are not limited to: a) the foreground object has an inappropriate size (e.g., the dog is too large); b) the foreground object has unreasonable occlusion with background objects (e.g., the fences are unreasonably occluded by the giraffe); c) the foreground object does not have a reasonable force condition (e.g., the suitcase is floating in the air); d) the foreground object appears at a semantically unreasonable place (e.g., the boat appears on the land); e) inconsistent perspectives between foreground and background (e.g., the car and the bus have inconsistent perspectives).
By taking all the above factors into consideration, object placement is a very challenging task. | C |
Table VII presents the results of our inter-city transfer learning experiments. Specifically, we report the results obtained by training our models using both full and 3-day target data, which correspond to the lower and upper bounds of errors, respectively. Furthermore, we also include the results of fine-tuning and RegionTrans methods. Based on the results, we obtain the following observations:
|
Degradation under data scarcity. Our findings reveal that when only 3-day training data are available, non-deep learning models such as LR achieve similar performances as compared to using full data, whereas LSTM models suffer from an increased error rate of 50%, as observed in the case of Chengdu. This suggests that deep learning models exhibit greater sensitivity to the amount of training data as compared to non-deep learning models. |
As depicted in Table V, deep learning models can generate highly accurate predictions when provided with ample data. However, the level of digitization varies significantly among cities, and it is likely that many cities may not be able to construct accurate deep learning prediction models due to a lack of data. One effective solution to this problem is transfer learning [20], which leverages knowledge from a source domain with abundant data to a target domain with limited data. In our case, this involves transferring knowledge from one city to another. Therefore, we conduct transfer learning experiments on CityNet to demonstrate that inter-city connections can facilitate positive knowledge transfer and to establish benchmarks for future research on inter-city transfer learning. |
Deep Learning or Not: When provided with ample data (e.g., 10 days, 1-2 months), deep learning models such as CNN, LSTM, GCN, and GAT exhibit superior performance compared to traditional time-series forecasting methods such as HA and LR. This highlights the potency of deep learning in spatio-temporal predictions and the benefits of utilizing information in both Euclidean and non-Euclidean spaces. | Among the methods studied, HA and LR are traditional time series forecasting models, while CNN and LSTM are deep learning models designed for Euclidean structures (such as grid networks). By contrast, GCN and GAT are graph deep learning models that leverage additional region-wise connections, such as POI and road connectivity.
| A |
Most of the data sets were obtained from the UCI repository Dua2019 . Specific references are given in Table 2. This table also shows the number of data points and (used) features and the skewness and (Pearson) kurtosis of the response variable. All data sets were standardized (both features and target variables) before training. The data sets blog and fb1 were also analysed after first taking a log transform of the response variable because these data sets are extremely skewed, which is reflected in the high skewness and kurtosis, as shown in the fourth column of Table 2, and are believed to follow a power law distribution. This strongly improved the $R^2$-coefficient of the various models, but did not improve the prediction intervals, and therefore these results are not included. The crime data set comes in two versions: the original data set consists of integer-valued data (count data), while the version used here was preprocessed using an unsupervised standardization algorithm redmond2002data . Although standardized, the data set retains (some of) its count data properties. The traffic data set, aside from being very small, is also extremely sparse (on average 14 features are zero). It should be noted that all of the data sets used in this study were considered as ordinary (static) data sets. Even though some of them could be considered in a time series context, no autoregressive features were additionally extracted. The main reason to exclude autoregressive features is that most, if not all, methods considered in this study assume the data to be i.i.d. (or exchangeable), a property that is generically not valid for autoregressive data.
| Neural network: A standard neural network point predictor was chosen as a baseline model. Early stopping as for the ensemble methods was used as a regularization method. Furthermore, dropout with $p = 0.1$ and $L^2$-regularization with $\lambda = 10^{-6}$ were applied at training time.
| Deep ensembles: The ensemble consisted of 5 estimators and the adversarial step size equalled 0.01 times the range of the corresponding dimension (cf. lakshminarayanan2017simple ). At training time $L^2$-regularization with $\lambda = 10^{-6}$ was applied.
| Neural network quantile regression: The same softening factor $w = 2$ as in romano2019conformalized ; sesia2020comparison was used. The early stopping criterion for neural networks was also modified to work with the average length and coverage degree of the prediction intervals instead of the loss function of the network. Dropout with $p = 0.1$ and $L^2$-regularization with $\lambda = 10^{-6}$ were applied at training time.
|
All neural networks were constructed using the default implementations from PyTorch pytorch . The general architecture for all neural-network-based models was fixed. The Adam optimizer was used for weight optimization with a fixed learning rate of $5 \times 10^{-4}$, in accordance with romano2019conformalized . The number of epochs was limited to 100, unless stated otherwise. All neural networks contained only a single hidden layer with 64 neurons. The activation functions after the first layer and the hidden layer were of the ReLU type, while the activation function at the output node was simply a linear function. | D
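The architecture and training configuration described above, sketched in PyTorch; data loading and early stopping are omitted, the input dimension is a placeholder, and weight decay is used as a stand-in for the $L^2$ penalty.

```python
import torch
import torch.nn as nn

# Single hidden layer (64 units, ReLU), dropout p=0.1, linear output node.
model = nn.Sequential(
    nn.Linear(in_features=10, out_features=64),  # input dimension is dataset-dependent
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

# Adam with fixed learning rate 5e-4; weight_decay approximates the L2 penalty lambda=1e-6.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-6)
loss_fn = nn.MSELoss()

def train(model, loader, epochs=100):
    model.train()
    for _ in range(epochs):          # early stopping on a validation set is omitted here
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
```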
The results show that MusicBERT achieves a testing accuracy of 37.25% for style classification and 77.78% for emotion classification. Specifically, in the style classification task, MusicBERT exhibits clear signs of overfitting and falls short in performance when compared to our model (81.75%). This outcome can be attributed to the limited size of the Pianist8 dataset, comprising only 411 songs. Conversely, in the emotion classification task, MusicBERT demonstrates impressive performance, surpassing our model (70.64%) by a significant margin. This finding is intriguing and suggests that the application of large-scale pre-training may yield substantial benefits in classifying the emotional content of a MIDI piece.
|
To train Transformers, it is required that all input sequences have the same length. For both REMI and CP, we divide the token sequence for each entire piece into a number of shorter sequences with equal sequence length 512, zero-padding those at the end of a piece to 512 with an appropriate number of Pad tokens. |
Tab. 2 also shows that “our model (performance)+CP” outperforms “our model (score)+CP” greatly for the two sequence-level tasks, style classification and emotion classification. This matches our intuition as the two tasks are highly related to performance styles and expressions of the piano pieces. | To study whether the accuracy gain comes simply from a longer musical context enjoyed by CP, we also train “our model (performance)+CP” with a sequence of length 128, obtaining 95.43, 80.32 and 64.04 accuracies for three-class melody classification, style classification and emotion classification, respectively.
We note a sequence of length 512 for REMI | In particular, the combination of our model (score) and CP, referred to as “our model (score)+CP” hereafter, exhibits the highest accuracy in the two note-level tasks. Additionally, the combination of our model (performance) and CP, denoted as “our model (performance)+CP”, achieves the best result in the style classification task, while demonstrating a notable improvement in accuracy compared to REMI for emotion classification.
We also observe that our models outperform Bi-LSTM+CP with just 1 or 2 epochs of fine-tuning, validating the strength of PTMs on symbolic-domain music classification tasks. | C
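The fixed-length batching step described above (splitting each piece's token sequence into length-512 chunks and padding the final chunk), as a small sketch; the Pad token id of 0 is an assumption.

```python
def chunk_tokens(tokens, seq_len=512, pad_id=0):
    """Split one piece's token list into equal-length chunks, padding the final chunk."""
    chunks = []
    for start in range(0, len(tokens), seq_len):
        chunk = tokens[start:start + seq_len]
        chunk = chunk + [pad_id] * (seq_len - len(chunk))  # zero-pad the tail of the piece
        chunks.append(chunk)
    return chunks

piece = list(range(1, 1301))              # a toy 1300-token piece
chunks = chunk_tokens(piece)
print(len(chunks), len(chunks[-1]))       # 3 chunks, each of length 512
```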
Otherwise, $F$ has a leaf $v \in A$ with a neighbor $u \in B$. We can assign $c(v) = a_2$, $c(u) = b_2$ and invoke a subproblem for $F' = F - \{u, v\}$, $A' = A \setminus \{v\}$, $B' = B \setminus \{u\}$ with the same coloring $c$ and color intervals $[a_1, a_2 - 1]$ and $[b_1, b_2 - 1]$. The solution for $F'$ would be consistent with coloring of $u$ and $v$, since all other neighbors of $u$ in $F$ would get colors at most $a_2 - 1 \leq b_2 - 1 - \lambda < c(u) - \lambda$. | The linear running time follows directly from the fact that we compute $c$ only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hence the claim follows.
| Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the next iteration we start at exactly the neighbor of the previous central vertex, there can be only O(n)𝑂𝑛O(n)italic_O ( italic_n ) such jumps in total.
|
Now, observe that if the block to the left is also of type A, then a respective block from Z(S)𝑍𝑆Z(S)italic_Z ( italic_S ) is (0,1,0)010(0,1,0)( 0 , 1 , 0 ) – and when we add the backward carry (0,0,1)001(0,0,1)( 0 , 0 , 1 ) to it, we obtain the forward carry to the rightmost block. And regardless of the value of the appropriate block of Z(S2)𝑍subscript𝑆2Z(S_{2})italic_Z ( italic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ), the total sum of the blocks and the backward carry cannot generate any further backward carry. | To obtain the total running time we first note that each of the initial steps – obtaining (R,B,Y)𝑅𝐵𝑌(R,B,Y)( italic_R , italic_B , italic_Y ) from Corollary 2.11 (e.g. using Algorithm 1), contraction of F𝐹Fitalic_F into F′superscript𝐹normal-′F^{\prime}italic_F start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT, and finding both Y1subscript𝑌1Y_{1}italic_Y start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and Y2subscript𝑌2Y_{2}italic_Y start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT – requires only linear time.
Coloring Y1∪R1∪B1subscript𝑌1subscript𝑅1subscript𝐵1Y_{1}\cup R_{1}\cup B_{1}italic_Y start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ∪ italic_R start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ∪ italic_B start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT also requires O(n)𝑂𝑛O(n)italic_O ( italic_n ) time, since we need to traverse each edge between these vertices only once to ensure the proper distances between the colors, and it is sufficient to use bucket sort to order vertices within B1subscript𝐵1B_{1}italic_B start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and R1subscript𝑅1R_{1}italic_R start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. The same argument follows symmetrically for Y2∪R2∪B2subscript𝑌2subscript𝑅2subscript𝐵2Y_{2}\cup R_{2}\cup B_{2}italic_Y start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ∪ italic_R start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ∪ italic_B start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. | A |