context
stringlengths
250
7.19k
A
stringlengths
250
4.12k
B
stringlengths
250
8.2k
C
stringlengths
250
5.47k
D
stringlengths
250
3.94k
label
stringclasses
4 values
(−1)a⁢(b−1−a)⁢[dd⁢x⁢xm⁢F⁢(a,b;c;z)+xm⁢dd⁢x⁢F⁢(a,b;c;z)];superscript1𝑎binomial𝑏1𝑎delimited-[]𝑑𝑑𝑥superscript𝑥𝑚𝐹𝑎𝑏𝑐𝑧superscript𝑥𝑚𝑑𝑑𝑥𝐹𝑎𝑏𝑐𝑧\displaystyle(-1)^{a}{b-1\choose-a}\Big{[}\frac{d}{dx}x^{m}F(a,b;c;z)+x^{m}% \frac{d}{dx}F(a,b;c;z)\Big{]};( - 1 ) start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( binomial start_ARG italic_b - 1 end_ARG start_ARG - italic_a end_ARG ) [ divide start_ARG italic_d end_ARG start_ARG italic_d italic_x end_ARG italic_x start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT italic_F ( italic_a , italic_b ; italic_c ; italic_z ) + italic_x start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT divide start_ARG italic_d end_ARG start_ARG italic_d italic_x end_ARG italic_F ( italic_a , italic_b ; italic_c ; italic_z ) ] ;
d2d⁢x2⁢F⁢(a,b;c;z)superscript𝑑2𝑑superscript𝑥2𝐹𝑎𝑏𝑐𝑧\displaystyle\frac{d^{2}}{dx^{2}}F(a,b;c;z)divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG italic_F ( italic_a , italic_b ; italic_c ; italic_z ) =\displaystyle==
}\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m% }}^{\prime}(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].divide start_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT start_POSTSUPERSCRIPT ′ ′ end_POSTSUPERSCRIPT ( italic_x ) end_ARG start_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ) end_ARG = divide start_ARG 1 end_ARG start_ARG italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT - 1 end_ARG [ ( italic_n ( italic_n + italic_D ) - divide start_ARG italic_m ( italic_D - 2 + italic_m ) end_ARG start_ARG italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG ) divide start_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) end_ARG start_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_x ) end_ARG + divide start_ARG italic_D - 1 - ( italic_D + 1 ) italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_x end_ARG ] .
d3d⁢x3⁢Rnm⁢(x)superscript𝑑3𝑑superscript𝑥3superscriptsubscript𝑅𝑛𝑚𝑥\displaystyle\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)divide start_ARG italic_d start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT end_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x ) =\displaystyle==
d2d⁢x2⁢Rnm⁢(x)superscript𝑑2𝑑superscript𝑥2superscriptsubscript𝑅𝑛𝑚𝑥\displaystyle\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)divide start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_d italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG italic_R start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ( italic_x )
D
Now let d𝑑ditalic_d be even. The same results for the transvections t21⁢(ωℓ)subscript𝑡21superscript𝜔ℓt_{21}(\omega^{\ell})italic_t start_POSTSUBSCRIPT 21 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ) and t12⁢(ωℓ)subscript𝑡12superscript𝜔ℓt_{12}(\omega^{\ell})italic_t start_POSTSUBSCRIPT 12 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ) as for d𝑑ditalic_d odd can be obtained by replacing v𝑣vitalic_v by x𝑥xitalic_x in the formula for t21⁢(ωℓ)subscript𝑡21superscript𝜔ℓt_{21}(\omega^{\ell})italic_t start_POSTSUBSCRIPT 21 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ). It remains to compute t32⁢(ωℓ)subscript𝑡32superscript𝜔ℓt_{32}(\omega^{\ell})italic_t start_POSTSUBSCRIPT 32 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ) and t23⁢(ωℓ)subscript𝑡23superscript𝜔ℓt_{23}(\omega^{\ell})italic_t start_POSTSUBSCRIPT 23 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ) which can be done using Lemmas 3.2 and 3.6. First, we compute x⁢v−1𝑥superscript𝑣1xv^{-1}italic_x italic_v start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT and store it in the slot p⁢[2,3,1]𝑝231p[2,3,1]italic_p [ 2 , 3 , 1 ] for t23⁢(ω0)subscript𝑡23superscript𝜔0t_{23}(\omega^{0})italic_t start_POSTSUBSCRIPT 23 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT 0 end_POSTSUPERSCRIPT ) which takes one operation. Then we compute t32⁢(ωℓ)=(x⁢v−1)⁢t21⁢(ωℓ)⁢(x⁢v−1)−1subscript𝑡32superscript𝜔ℓ𝑥superscript𝑣1subscript𝑡21superscript𝜔ℓsuperscript𝑥superscript𝑣11t_{32}(\omega^{\ell})=(xv^{-1})t_{21}(\omega^{\ell})(xv^{-1})^{-1}italic_t start_POSTSUBSCRIPT 32 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ) = ( italic_x italic_v start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ) italic_t start_POSTSUBSCRIPT 21 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ) ( italic_x italic_v start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ) start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT for 0≤ℓ<f0ℓ𝑓0\leq\ell<f0 ≤ roman_ℓ < italic_f which needs three operations per transvection, and hence 3⁢f3𝑓3f3 italic_f operations overall. Lastly we compute s1=v⁢s⁢v−1subscript𝑠1𝑣𝑠superscript𝑣1s_{1}=vsv^{-1}italic_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = italic_v italic_s italic_v start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT and store it in slot p⁢[2,3,f−1]𝑝23𝑓1p[2,3,f-1]italic_p [ 2 , 3 , italic_f - 1 ] which needs two operations and t23⁢(ωℓ)subscript𝑡23superscript𝜔ℓt_{23}(\omega^{\ell})italic_t start_POSTSUBSCRIPT 23 end_POSTSUBSCRIPT ( italic_ω start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT ) for 0≤ℓ<f0ℓ𝑓0\leq\ell<f0 ≤ roman_ℓ < italic_f which needs 3⁢f3𝑓3f3 italic_f operations overall. This requires at most 16⁢f+716𝑓716f+716 italic_f + 7 operations.
Finally, we construct a second MSLP, described in Section 3.5, that writes a diagonal matrix h∈SL⁢(d,q)ℎSL𝑑𝑞h\in\textnormal{SL}(d,q)italic_h ∈ SL ( italic_d , italic_q ) as a word in the standard generators of SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ) (when evaluated with these generators as input). Combining the constructions in Sections 3.4 and 3.5 yields, as required, the monomial matrix
The first step of the algorithm is the one-off computation of T2subscript𝑇2T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT from the LGO standard generators of SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ). The length and memory requirement of an MSLP for this step is as follows.
We now compute upper bounds for the length and memory quota of an MSLP for expressing an arbitrary diagonal matrix h∈SL⁢(d,q)ℎSL𝑑𝑞h\in\textnormal{SL}(d,q)italic_h ∈ SL ( italic_d , italic_q ) as a word in the LGO generators, i.e. the computation phase of the algorithm.
Our aim is to determine the length and memory quota for an MSLP for the Bruhat decomposition of an arbitrary matrix g∈SL⁢(d,q)𝑔SL𝑑𝑞g\in\textnormal{SL}(d,q)italic_g ∈ SL ( italic_d , italic_q ) via the above method, with the matrices u1subscript𝑢1u_{1}italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, u2subscript𝑢2u_{2}italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, w𝑤witalic_w returned as words in the LGO generators s,t,v,δ,x𝑠𝑡𝑣𝛿𝑥s,t,v,\delta,xitalic_s , italic_t , italic_v , italic_δ , italic_x of SL⁢(d,q)SL𝑑𝑞\textnormal{SL}(d,q)SL ( italic_d , italic_q ) given in Section 3.1.
C
where Ω⊂ℝdΩsuperscriptℝ𝑑\Omega\subset\mathbb{R}^{d}roman_Ω ⊂ blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT with d=2𝑑2d=2italic_d = 2 or 3333 for simplicity, and is an open bounded domain with polyhedral boundary ∂ΩΩ\partial\Omega∂ roman_Ω, the symmetric tensor 𝒜∈[L∞⁢(Ω)]symd×d𝒜superscriptsubscriptdelimited-[]superscript𝐿Ωsym𝑑𝑑\mathcal{A}\in[L^{\infty}(\Omega)]_{\text{sym}}^{d\times d}caligraphic_A ∈ [ italic_L start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT ( roman_Ω ) ] start_POSTSUBSCRIPT sym end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_d × italic_d end_POSTSUPERSCRIPT is uniformly positive definite and bounded, and g𝑔gitalic_g is part of the given data.
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems.
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method not so practical. Here in this paper, in the presence of rough coefficients, spectral techniques are employed to overcome such hurdle, and by solving local eigenvalue problems we define a space where the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove macro-elements corner singularities that occur in LOD methods based on
It is hard to approximate such problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergent proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85, MR1979846, MR2058933, HMV, MR1642758, MR3584539, MR2030161, MR2383203, vs1, vs2, MR2740478]. Some methods work even considering that the solution has low regularity [MR2801210, MR2753343, MR3225627, MR3177856, MR2861254] but are based on ideas that differ considerably from what we advocate here
In [MR2718268] is shown that the number of eigenvalues that are very large is related to the number of connected sub-regions on τ¯∪τ¯′¯𝜏superscript¯𝜏′\bar{\tau}\cup{\bar{\tau}}^{\prime}over¯ start_ARG italic_τ end_ARG ∪ over¯ start_ARG italic_τ end_ARG start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT with large coefficients surrounded by regions with small coefficients. Generalized eigenvalue problems also have been used on overlapping domain decomposition solvers [MR2718268, MR2916377, MR3175183, MR3033238]. The design of robust discretizations with respect to coefficients using domain decomposition ideas have been studied in [MR2666649, MR1642758, MR3350765] assuming some regularity on the solution, and in [MR2718268] for a class of problems when the weighted Poincaré constant [MR3047947, MR3013465, MR2867661] is not large, otherwise the exponential decay of the multiscale functions deteriorates. See also [MR2753343, MR3109775] where a priori error estimates are obtained in terms of spectral norms.
C
Moreover, (iii) A back-stable edge (e.g. the one at ersubscript𝑒𝑟e_{r}italic_e start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT) remains back-stable when we change another edge (e.g. the one at essubscript𝑒𝑠e_{s}italic_e start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT or etsubscript𝑒𝑡e_{t}italic_e start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT) forwardly (e.g. s←s+1←𝑠𝑠1s\leftarrow s+1italic_s ← italic_s + 1 or t←t+1←𝑡𝑡1t\leftarrow t+1italic_t ← italic_t + 1).
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is claimed “involved” by its authors as it contains complicated subroutines for handling many subcases.
It is easy to compute one 3-stable triangle in O⁢(n)𝑂𝑛O(n)italic_O ( italic_n ) time; we show how to do this in section 4111Alg-DS fails to find one 3-stable triangle and so we introduce the algorithm in section 4. This algorithm in section 4 is not the same as and does not originate from Alg-DS (see appendix A.2).. Denote the computed 3-stable triangle by △⁢vr⁢vs⁢vt△subscript𝑣𝑟subscript𝑣𝑠subscript𝑣𝑡\triangle v_{r}v_{s}v_{t}△ italic_v start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT and assume r,s,t𝑟𝑠𝑡r,s,titalic_r , italic_s , italic_t are given in the following.
Our algorithm given in section 4 (denoted by Alg-One) is different from Alg-DS. First, step 1 of Alg-One sets the initial value of (r,s,t)𝑟𝑠𝑡(r,s,t)( italic_r , italic_s , italic_t ) differently from the initial value (1,2,3)123(1,2,3)( 1 , 2 , 3 ) used by Alg-DS.
D
Most relevant for our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure, when building our approach for early rumor detection with our extended dataset, and we provide a deep analysis on the wide range of features change during diffusion time. Ma et al. [19] used Recurrent Neural Networks for rumor detection, they batch tweets into time intervals and model the time series as a RNN sequence. Without any other handcrafted features, they got almost 90% accuracy for events reported in Snope.com. As the same disadvantage of all other deep learning models, the process of learning is a black box, so we cannot envisage the cause of the good performance based only on content features. The model performance is also dependent on the tweet retrieval mechanism, of which quality is uncertain for stream-based trending sub-events.
. As shown in Table 5, CreditScore is the best feature in overall. In Figure 4 we show the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, especially for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit after 16-20 hours, but it is not significant. CrowdWisdom is also a good feature which can get 75.8% accuracy as a single feature. But its performance is poor (less than 70%) in the first 32 hours getting better over time (see Table 5). Table 5 also shows the performance of sentiment feature (PolarityScores), which is generally low. This demonstrates the effectiveness of our curated approach over the sentiments, yet the crowd needs time to unify their views toward the event while absorbing different kinds of information.
The processing pipeline of our classification approach is shown in Figure 2. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibilty with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (CreditScore).
In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet level and let each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets are low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see the curve of Munich shooting event is also close to the curve of average news, indicating the event is more news-related.
B
In a follow-up work Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which ℓ′⁢(u)superscriptℓ′𝑢\ell^{\prime}(u)roman_ℓ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_u ) is of the form exp⁡(−uν)superscript𝑢𝜈\exp(-u^{\nu})roman_exp ( - italic_u start_POSTSUPERSCRIPT italic_ν end_POSTSUPERSCRIPT ) with ν>0.25𝜈0.25\nu>0.25italic_ν > 0.25. They then conjectured, based on heuristic analysis, that the exponential tail is optimal among all possible tails. Furthermore, they demonstrated that polynomial or heavier tails do not converge to the max margin solution. Lastly, for the exponential loss they proposed a normalized gradient scheme which can significantly improve convergence rate, achieving O⁢(log⁡(t)/t)𝑂𝑡𝑡O(\log(t)/\sqrt{t})italic_O ( roman_log ( italic_t ) / square-root start_ARG italic_t end_ARG ).
The convergence of the direction of gradient descent updates to the maximum L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT margin solution, however is very slow compared to the convergence of training loss, which explains why it is worthwhile continuing to optimize long after we have zero training error, and
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the result for exponential loss are independent of initialization and with only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects thereby making the analysis simpler compared to squared loss.
Perhaps most similar to our study is the line of work on understanding AdaBoost in terms its implicit bias toward large L1subscript𝐿1L_{1}italic_L start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on the exponential loss of a linear model, these results can be interpreted as analyzing the bias of coordinate descent, rather then gradient descent, on a monotone decreasing loss with an exact exponential tail. Indeed, with small enough step sizes, such a coordinate descent procedure does converge precisely to the maximum L1subscript𝐿1L_{1}italic_L start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-margin solution (Zhang et al., 2005; Telgarsky, 2013). In fact, Telgarsky (2013) also generalizes these results to other losses with tight exponential tails, similar to the class of losses we consider here.
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very different on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is also independent of the step-size
B
The effective cascaded model that engages both low and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402, ). The model uses time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to rumor detection:
We investigate how the performance of different types of low and high-level features changes over time (during the spreading of rumors); improving the understanding of feature impact and model design for rumor detection at different points in time.
In this work, we present a deep analysis on the feature variants over 48 hours for the rumor detection task. The results show that the low-level hidden representation of tweets feature is at least the second best features over time. We also derive explanations on the low performance of supposed-to-be-strong high-level features at early stage. The study also indicates that, there is still considerable room to improve the effectiveness of the neural network-based rumor detection methods, e.g., by leveragining the embeddings from different sources rather than only text contents.
The performance of user features is similar with the Twitter features, they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours of the user feature group is UserTweetsPerDays and it is the best feature overall in the first 4 hours, but its rank decreases with time going by. Others user-based features like UserReputationScore and UserJoinDate also have a better performance in the first fews hours. That means the sources (the posters in the first few hours) of news and rumors are quite different with each other. But with more and more users joining in the discussion, the bias of two groups of users becomes less. After 6 hours, it seems that we can better distinguish the rumors based on the tweet contents (text features), rather than relying on the features of users.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade-off this by debunking at single tweet level and let each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 13(a). It can be seen that although the credibility of some tweets are low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 13(b). We can see the curve of Munich shooting event is also close to the curve of average news, indicating the event is more news-related.
A
Evaluating methodology. For RQ1, given an event entity e, at time t, we need to classify them into either Breaking or Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of breaking class and 3,050 instances of anticipated, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials with each of the last 4 bins (using the history bins for training in a rolling basic) for testing; and report the results as average of the trials.
We further investigate the identification of event time, that is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regards to the event times that is previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The results are shown in Table 3-bottom, showing that our cascaded model, with features inherited from the performance of SVM in previous task, substantially improves the single model. However, the overall modest results show the difficulty of this multi-class classification task.
RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The most right three models in each metric are the models proposed in this work. The overall results show that, the performances of these models, even better than the baselines (for at least one of the three), vary greatly among the cases. In general, S⁢V⁢Ms⁢a⁢l⁢i⁢e⁢n⁢c⁢e𝑆𝑉subscript𝑀𝑠𝑎𝑙𝑖𝑒𝑛𝑐𝑒SVM_{salience}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_s italic_a italic_l italic_i italic_e italic_n italic_c italic_e end_POSTSUBSCRIPT performs well at the before stage of breaking events, and badly at the after stage of the same event type. Whereas S⁢V⁢Mt⁢i⁢m⁢e⁢l⁢i⁢n⁢e⁢s⁢s𝑆𝑉subscript𝑀𝑡𝑖𝑚𝑒𝑙𝑖𝑛𝑒𝑠𝑠SVM_{timeliness}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_t italic_i italic_m italic_e italic_l italic_i italic_n italic_e italic_s italic_s end_POSTSUBSCRIPT gives a contradictory performance for the cases. For anticipated events, S⁢V⁢Mt⁢i⁢m⁢e⁢l⁢i⁢n⁢e⁢s⁢s𝑆𝑉subscript𝑀𝑡𝑖𝑚𝑒𝑙𝑖𝑛𝑒𝑠𝑠SVM_{timeliness}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_t italic_i italic_m italic_e italic_l italic_i italic_n italic_e italic_s italic_s end_POSTSUBSCRIPT performs well at the before and after stages, but gives a rather low performance at the during stage. For this event type, S⁢V⁢Ms⁢a⁢l⁢i⁢e⁢n⁢c⁢e𝑆𝑉subscript𝑀𝑠𝑎𝑙𝑖𝑒𝑛𝑐𝑒SVM_{salience}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_s italic_a italic_l italic_i italic_e italic_n italic_c italic_e end_POSTSUBSCRIPT generally performs worse than S⁢V⁢Mt⁢i⁢m⁢e⁢l⁢i⁢n⁢e⁢s⁢s𝑆𝑉subscript𝑀𝑡𝑖𝑚𝑒𝑙𝑖𝑛𝑒𝑠𝑠SVM_{timeliness}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_t italic_i italic_m italic_e italic_l italic_i italic_n italic_e italic_s italic_s end_POSTSUBSCRIPT. Overall, The S⁢V⁢Ma⁢l⁢l𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT with all features combined gives a good and stable performance, but for most cases, are not better than the well-performed single set of features L2R model. In general, these results prove our assumption that salience and timeliness should be traded-off for different event types, at different event times. For feature importances, we observe regularly, stable performances of same-group features across these cases. Salience features from knowledge bases tend to perform better than from query logs for short-duration or less popular events. We leave the more in-depth analysis of this part for future work.
Results. The baseline and the best results of our 1s⁢tsuperscript1𝑠𝑡1^{st}1 start_POSTSUPERSCRIPT italic_s italic_t end_POSTSUPERSCRIPT stage event-type classification is shown in Table 3-top. The accuracy for basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achieves marginally better result at F1 metric.
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, S⁢V⁢Ma⁢l⁢l𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall, improved the baseline, yet not significantly. Our Ensemble model, that is learned to trade-off between salience and timeliness achieves the best results for all metrics, outperforms the baseline significantly. As the testing entity queries in this experiment are at all event times and with all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of adapted QAC methods. One reason is as mentioned, QACs, even time-aware generally favor already salient queries as follows the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes like entities often encounter sparse traffic of query volumes, that also contributes to the low results.
C
RT=𝔼⁢{∑t=1TYt,at∗−Yt,At},subscript𝑅𝑇𝔼superscriptsubscript𝑡1𝑇subscript𝑌𝑡subscriptsuperscript𝑎𝑡subscript𝑌𝑡subscript𝐴𝑡R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\}\;,italic_R start_POSTSUBSCRIPT italic_T end_POSTSUBSCRIPT = blackboard_E { ∑ start_POSTSUBSCRIPT italic_t = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_T end_POSTSUPERSCRIPT italic_Y start_POSTSUBSCRIPT italic_t , italic_a start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT end_POSTSUBSCRIPT - italic_Y start_POSTSUBSCRIPT italic_t , italic_A start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT end_POSTSUBSCRIPT } ,
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
one uses p⁢(θt|ℋ1:t)𝑝conditionalsubscript𝜃𝑡subscriptℋ:1𝑡p(\theta_{t}|\mathcal{H}_{1:t})italic_p ( italic_θ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT | caligraphic_H start_POSTSUBSCRIPT 1 : italic_t end_POSTSUBSCRIPT ) to compute the probability of an arm being optimal, i.e., π⁢(A|xt+1,ℋ1:t)=ℙ⁢(A=at+1∗|xt+1,θt,ℋ1:t)𝜋conditional𝐴subscript𝑥𝑡1subscriptℋ:1𝑡ℙ𝐴conditionalsuperscriptsubscript𝑎𝑡1subscript𝑥𝑡1subscript𝜃𝑡subscriptℋ:1𝑡\pi(A|x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}\left(A=a_{t+1}^{*}|x_{t+1},\theta_%
Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many. TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018].
the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016].
C
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measuremtents after the meals is within this range, while patient 12 has only two glucose measurements per day on average and measured glucose within 4 hours or less after a meal only 5 out of 54 times.
Median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10 minute intervals with at least 10 steps tracked by the google fit app.
The insulin intakes tend to be more in the evening, when basal insulin is used by most of the patients. The only difference happens to patient 10 and 12 whose intakes are earlier at day. Further, patient 12 takse approx. 3 times the average insulin dose of others in the morning.
Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries vary between 2 per day for patient 10 and 5 per day for patient 14.
B
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones based on a pre-trained VGG16 classification network (Cornia et al., 2018; Kruthiventi et al., 2017). Our final evaluation results for both the MIT300 and CAT2000 datasets can be viewed on the MIT saliency benchmark under the model name MSI-Net, representing our multi-scale information network. Qualitatively, the proposed architecture successfully captures semantically meaningful image features such as faces and text towards the prediction of saliency, as can be seen in Figure 1. Unfortunately, a visual comparison with the results from prior work was not possible since most models are not openly available.
Table 6: A summary of the quantitative results for the models with ⊕direct-sum\oplus⊕ and without ⊖symmetric-difference\ominus⊖ an ASPP module. The evaluation was carried out on five eye tracking datasets respectively. Each network was independently trained 10 times resulting in a distribution of values characterized by the mean μ𝜇\muitalic_μ and standard deviation σ𝜎\sigmaitalic_σ. The star * denotes a significant increase of performance between the two conditions according to a one sided paired t-test. Arrows indicate whether the metrics assess similarity
To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single 3×3333\times 33 × 3 convolutional operation that resulted in 1,280 activation maps. This representation was then forwarded to a 1×1111\times 11 × 1 convolutional layer with 256 channels. While the total number of feature maps stayed constant, the amount of trainable parameters increased in this ablation setting. Table 6 summarizes the results according to validation instances of five eye tracking datasets for the model with and without an ASPP module. It can be seen that our multi-scale architecture reached significantly higher performance (one tailed paired t-test) on most metrics and is therefore able to leverage the information captured by convolutional layers with different receptive field sizes. An ablation analysis of the multi-level component adapted from Cornia et al. (2016) can be viewed in the A.
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript ††{}^{\dagger}start_FLOATSUPERSCRIPT † end_FLOATSUPERSCRIPT represents models with a VGG16 backbone) from shallow networks and other machine learning methods. Entries between the second and third lines are models based on theoretical considerations and define a baseline rather than competitive performance. Arrows indicate whether the metrics assess similarity
Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript ††{}^{\dagger}start_FLOATSUPERSCRIPT † end_FLOATSUPERSCRIPT represents models with a VGG16 backbone) from shallow networks and other machine learning methods. Entries between the second and the third line are models based on theoretical considerations and define a baseline rather than competitive performance. Arrows indicate whether the metrics assess similarity
A
For example, the path decomposition ({u,w,x},{u,v,x},{v,y,z})𝑢𝑤𝑥𝑢𝑣𝑥𝑣𝑦𝑧(\{u,w,x\},\{u,v,x\},\{v,y,z\})( { italic_u , italic_w , italic_x } , { italic_u , italic_v , italic_x } , { italic_v , italic_y , italic_z } ) for graph H𝐻Hitalic_H can be represented as a pd-marking scheme as illustrated in Figure 3 (for convenience, we omit the vertex labels; see also Figure 2 for an illustration of H𝐻Hitalic_H).
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into graphs. The main difference is that the reduction from Section 4 turns every symbol from the alphabet into an individual vertex of the graph (thus, producing a graph with O⁡(|Σ|)OΣ\operatorname{O}(|\Sigma|)roman_O ( | roman_Σ | ) vertices), while the reduction to pathwidth will use a vertex per position of the word α𝛼\alphaitalic_α, i. e., |α|𝛼|\alpha|| italic_α | individual vertices. In the reduction from Section 4 the information of the actual occurrences of the symbols in the word is encoded by the edges (in particular, the length |α|𝛼|\alpha|| italic_α | is represented by the number of edges), while in the following reduction the alphabet is encoded by connecting the vertices that correspond to positions of the same symbol to cliques in the graph (in particular, the number of edges may range between |α|𝛼|\alpha|| italic_α | and |α|2superscript𝛼2|\alpha|^{2}| italic_α | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT). We proceed with a formal definition and an example.
The locality number is rather new and we shall discuss it in more detail. A word is k𝑘kitalic_k-local if there exists an order of its symbols such that, if we mark the symbols in the respective order (which is called a marking sequence), at each stage there are at most k𝑘kitalic_k contiguous blocks of marked symbols in the word. This k𝑘kitalic_k is called the marking number of that marking sequence. The locality number of a word is the smallest k𝑘kitalic_k for which that word is k𝑘kitalic_k-local, or, in other words, the minimum marking number over all marking sequences. For example, the marking sequence σ=(𝚡,𝚢,𝚣)𝜎𝚡𝚢𝚣\sigma=(\mathtt{x},\mathtt{y},\mathtt{z})italic_σ = ( typewriter_x , typewriter_y , typewriter_z ) marks α=𝚡𝚢𝚡𝚢𝚣𝚡𝚣𝛼𝚡𝚢𝚡𝚢𝚣𝚡𝚣\alpha=\mathtt{x}\mathtt{y}\mathtt{x}\mathtt{y}\mathtt{z}\mathtt{x}\mathtt{z}italic_α = typewriter_xyxyzxz as follows (marked blocks are illustrated by overlines):
Both the locality number of a word and the pathwidth of a graph is defined via markings. In order to avoid confusion, we therefore use different terminology to distinguish between these two concepts (see also the terminology defined in Section 2.2): The markings for words are called marking sequences, while the markings for graphs are called pd-marking schemes; the versions of a word during a marking sequence are called the stages (of the marking sequence), while the different marked version of a graph during a pd-marking scheme are called the steps (of the pd-marking scheme).
We use Gαsubscript𝐺𝛼G_{\alpha}italic_G start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT as a unique graph representation for words and whenever we talk about a path decomposition for α𝛼\alphaitalic_α, we actually refer to a path decomposition of Gαsubscript𝐺𝛼G_{\alpha}italic_G start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT. Recall that we consider path-decompositions as certain marking schemes, which we called pd-marking schemes (see Section 2.3 and Figure 3). Since Gαsubscript𝐺𝛼G_{\alpha}italic_G start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT has the positions of α𝛼\alphaitalic_α as its vertices, the pd-marking scheme behind a path decomposition (and its respective terminology) directly translates to a marking scheme of the positions of α𝛼\alphaitalic_α.
C
In[128] the authors created a recurrent u-net that learns image representations from a stack of 2D slices and has the ability to leverage inter-slice spatial dependencies through internal memory units. It combines anatomical detection and segmentation into a single end-to-end architecture, achieving comparable results with other non end-to-end methods, outperforming the baselines DBN, recurrent DBN and FCN in terms of Dice.
Tan et al.[135] parameterize all short axis slices and phases of the LV segmentation task in terms of the radial distances between the LV center-point and the endocardial and epicardial contours in polar space. Then, they train a CNN regression on STA11 to infer these parameters and test the generalizability of the method on DS16 with good results.
Other papers combined deep learning methods with level set for LV segmentation. Rupprecht et al.[129] trained a class-specific four layer CNN which predicts a vector pointing from the respective point on the evolving contour towards the closest point on the boundary of the object of interest.
These predictions formed a vector field which was then used for evolving the contour using the Sobolev active contour framework. Anh et al.[130] created a non-rigid segmentation method based on the distance regularized level set method that was initialized and constrained by the results of a structured inference using a DBN.
For this task they introduce marginal space deep learning which provides high run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. Given the object localization, they propose a combined deep learning active shape model to estimate the non-rigid object boundary.
B
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, and PPO (Schulman et al., 2017), a model-free policy gradient algorithm (see Appendix E for details of tuning of Rainbow and PPO). The results of the comparison are presented in Figure 3. For each game, we plot the number of time steps needed for either Rainbow or PPO to reach the same score that our method reaches after 100100100100K interaction steps. The red line indicates 100100100100K steps: any bar larger than this indicates a game where the model-free method required more steps. SimPLe outperforms the model-free algorithms in terms of learning speed on nearly all of the games, and in the case of a few games, does so by over an order of magnitude. For some games, it reaches the same performance that our PPO implementation reaches at 10101010M steps. This indicates that model-based reinforcement learning provides an effective approach to learning Atari games, at a fraction of the sample complexity.
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an important direction for future work. Another, less obvious limitation is that the performance of our method generally varied substantially between different runs on the same game. The complex interactions between the model, policy, and data collection were likely responsible for this. In future work, models that capture uncertainty via Bayesian parameter posteriors or ensembles (Kurutach et al., 2018; Chua et al., 2018) may improve robustness.
The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good policies could be learned very early. While this might have been due to the high variability of training, it does suggest the possibility of much faster training (i.e. in fewer step than 100k) with more directed exploration policies. In Figure 9 in the Appendix we present the cumulative distribution plot for the (first) point during learning when the maximum score for the run was achieved in the main training loop of Algorithm 1.
Figure 1: Main loop of SimPLe. 1) the agent starts interacting with the real environment following the latest policy (initialized to random). 2) the collected observations will be used to train (update) the current world model. 3) the agent updates the policy by acting inside the world model. The new policy will be evaluated to measure the performance of the agent as well as collecting more data (back to 1). Note that world model training is self-supervised for the observed states and supervised for the reward.
The results in these figures are generated by averaging 5555 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5555 runs was often significantly better. For 6666 of the games, it exceeds the average human score (as reported in Table 3 of Pohlen et al. (2018)). This suggests that further stabilizing SimPLe should improve its performance, indicating an important direction for future work. In some cases during training we observed high variance of the results during each step of the loop. There are a number of possible reasons, such as mutual interactions of the policy training and the supervised training or domain mismatch between the model and the real environment. We present detailed numerical results, including best scores and standard deviations, in Appendix D.
D
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from frequency domain to train an 1D CNN with two layers to identify ischemic stroke.
The spectrogram S2I results are in contrary with the expectation that the interpretable time-frequency representation would help in finding good features for classification. We hypothesize that the spectrogram S2I was hindered by its lack of non-trainable parameters.
Figure 1: High level overview of a feed-forward pass of the combined methods. xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is the input, m𝑚mitalic_m is the Signal2Image module, bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT is the 1D or 2D architecture ‘base model’ for d=1,2𝑑12d=1,2italic_d = 1 , 2 respectively and yi^^subscript𝑦𝑖\hat{y_{i}}over^ start_ARG italic_y start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_ARG is the predicted output.
The names of the classes are depicted at the right along with the predictions for this example signal. The image between m𝑚mitalic_m and bdsubscript𝑏𝑑b_{d}italic_b start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT depicts the output of the one layer CNN Signal2Image module, while the ‘signal as image’ and spectrogram have intermediate images as those depicted at the second and third row of Fig. 2.
For the purposes of this paper and for easier future reference we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’ which is usually an established architecture for imaging problems. An important property of a S2I is whether it consists of trainable parameters such as convolutional and linear layers or it is non-trainable such as traditional time-frequency methods.
D
While the study of legged locomotion gaits has been a topic of research for several decades, the investigation of locomotion in wheel-legged robots is a relatively recent area of study [9]. Hybrid ground robots, equipped with highly articulated legs with more than three degrees-of-freedom, present unique challenges in gait development. Our study contributes to this growing field by suggesting two novel climbing gaits to surmount steps of different dimensions (h, 2h, and 3h, where h represents the track height as displayed in Fig. 3). We term these the whole-body climbing gait and the rear-body climbing gait [10], demonstrated in Fig. 5 and Fig. 6, respectively.
The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful design of the climbing gaits. These gaits incorporate identical desired joint accelerations, leg stride length, and forward movement height, as highlighted in [4]. Consequently, variations in energy consumption during different step negotiations primarily stem from negotiation time and body movements. In order to establish the threshold values (Tw⁢bsubscript𝑇𝑤𝑏T_{wb}italic_T start_POSTSUBSCRIPT italic_w italic_b end_POSTSUBSCRIPT and Tr⁢bsubscript𝑇𝑟𝑏T_{rb}italic_T start_POSTSUBSCRIPT italic_r italic_b end_POSTSUBSCRIPT) for the energy criterion, they were equated to the energy expenditure of the walking locomotion mode, utilizing the whole-body climbing and rear-body climbing gaits, respectively. To identify the threshold values (Tw⁢bsubscript𝑇𝑤𝑏T_{wb}italic_T start_POSTSUBSCRIPT italic_w italic_b end_POSTSUBSCRIPT and Tr⁢bsubscript𝑇𝑟𝑏T_{rb}italic_T start_POSTSUBSCRIPT italic_r italic_b end_POSTSUBSCRIPT) for the energy criterion, they were set equal to the energy expenditure of the walking locomotion mode using the whole body climbing and rear body climbing gaits, respectively. Unlike other methods that use empirical values [2, 8], the threshold values in this study were decided upon based on a novel rule that evaluates the alternative locomotion mode. Moreover, these threshold values are not fixed and are determined based on the terrain profiles the robot is negotiating.
Fig. 7 illustrates the hierarchical control design for the autonomous locomotion mode transition. The decision-making process for this transition is accomplished in MATLAB, whereas the control of each separate locomotion mode is enacted in CoppeliaSim. The connection between MATLAB and the physical robot model in CoppeliaSim is facilitated through the use of the remote API function available in the CoppeliaSim environment. Within CoppeliaSim, control is applied to rolling locomotion in order to maintain the required vehicle speed and home configuration. As for walking locomotion, the climbing gaits created from the step height data, as discussed in Sec. 2.2, are employed. In order to facilitate motion control in both locomotion modes, all the necessary kinematics and dynamics calculations are carried out within the CoppeliaSim simulation environment. This includes computing torques and angular velocities for each joint. The simulation outputs, along with these calculated values, are then sent back to MATLAB for further data analysis and energy usage calculations. During the step negotiation simulations, a timestep of 2 milliseconds is employed to simulate real-time dynamics accurately.
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To assure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constraints: initial and final position, velocity, and acceleration [23]. The Reflexxes Motion Library IV [24] was utilized to perform the inverse kinematics calculation.
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step.
C
As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation. Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online algorithms with advice can be of practical interest in settings in which it is feasible to run multiple algorithms and output the best solution (see [20] about obtaining improved data compression algorithms by means of list update algorithms with advice); and the first complexity classes for online computation have been based on advice complexity [10].
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of advice bits. The objective is thus to identify the exact trade-offs between the size of the advice and the performance of the algorithm. This is meant to provide a smooth transition between the purely online world (nothing is known about the input) and the purely “offline” world (everything is known about the input).
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size k𝑘kitalic_k, let η𝜂\etaitalic_η denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would be to study the power and limitations of online algorithms, i.e., from the point of view of both upper and lower bounds on the competitive ratio. A first approach towards this direction was made recently in the context of problems such as contract
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution. Last, and perhaps more significantly, a malicious entity that takes control of the advice oracle can have a catastrophic impact. For a very simple example, consider the well-known ski rental problem: this is a simple, yet fundamental resource allocation, in which we have to decide ahead of time whether to rent or buy equipment without knowing the time horizon in advance. In the traditional advice model, one bit suffices to be optimal: 0 for renting throughout the horizon, 1 for buying right away. However, if this bit is wrong, then the online algorithm has unbounded competitive ratio, i.e., can perform extremely badly. In contrast, an online algorithm that does not use advice at all has competitive ratio at most 2, i.e., its output can be at most twice as costly as the optimal one.
Notwithstanding such interesting attributes, the known advice model has certain drawbacks. The advice is always assumed to be some error-free information that may be used to encode some property often explicitly connected to the optimal solution. In many settings, one can argue that such information cannot be readily available, which implies that the resulting algorithms are often impractical.
D
With the aim of avoiding cases of misclassification like in (d), we decided to implement the second classifier, SS3Δ, whose policy also takes into account the changes in both slopes. As it can be seen from Algorithm 3 and as mentioned before, SS3Δ additionally classifies a subject as positive if the positive slope changes, at least, four times faster than the other one.
the accumulated negative confidence value starts being greater than the positive one, but as more chunks are read (specifically starting after reading the 3rd chunk), the positive value starts and stays growing until it exceeds the other one. In this case, this subject is classified as depressed after reading the 6th chunk.
the subject is misclassified as positive since the positive accumulated exceeded the negative one. When we manually analyzed cases like these we often found out that the classifier was correctly accumulating positive evidence since the users were, in fact, apparently depressed.
This problem can be detected in this subject by seeing the blue dotted peek at around the 60th writing, indicating that “the positive slope changed around five times faster than the negative” there, and therefore misclassifying the subject as positive. However, note that this positive change was in fact really small (less than 1).
In Figure 7 is shown again the subject 1914, this time including information about the changes in the slopes. Note that this subject was previously misclassified as not depressed because the accumulated positive value never exceeded the negative one, but by adding this new extra policy, this time it is correctly classified as positive after reading the 8th chunk262626Note the peek in the blue dotted line pointing out that, at this point, the positive value has grown around 11 times faster than the negative one..
D
There are some other ways to combine momentum and error feedback. For example, we can put the momentum term on the server. However, these ways lead to worse performance than the way adopted in this paper. More discussions can be found in Appendix A.
We can find that both local momentum and global momentum implementations of DMSGD are equivalent to the serial MSGD if no sparse communication is adopted. However, when it comes to adopting sparse communication, things become different. In the later sections, we will demonstrate that global momentum is better than local momentum when implementing sparse communication in DMSGD.
GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum. To the best of our knowledge, this is the first work to introduce global momentum into sparse communication methods.
However, the theory about the convergence of DGC is still lacking. Furthermore, although DGC combines momentum and error feedback, the momentum in DGC only accumulates stochastic gradients computed by each worker locally. Therefore, the momentum in DGC is a local momentum without global information.
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is adopted.
D
φ¯¯𝜑\bar{\varphi}over¯ start_ARG italic_φ end_ARG is non-differentiable due to the presence of the ℓ0subscriptℓ0\ell_{0}roman_ℓ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT pseudo-norm in Eq. 3. A way to overcome this is using ℒℒ\mathcal{L}caligraphic_L as the differentiable optimization function during training and φ¯¯𝜑\bar{\varphi}over¯ start_ARG italic_φ end_ARG as the metric for model selection during validation on which hyperparameter value decisions (such as kernel size) are made.
We set m⁢e⁢d=m(i)𝑚𝑒𝑑superscript𝑚𝑖med=m^{(i)}italic_m italic_e italic_d = italic_m start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT for utilizing fair comparison between the sparse activation functions. Specifically for Extrema activation function we introduce a ‘border tolerance’ parameter to allow neuron activation within another neuron activated area.
The Extrema-Pool indices activation function (defined at Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size m(i)superscript𝑚𝑖m^{(i)}italic_m start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and zeros out the rest. It consists of a max-pooling layer followed by a max-unpooling layer with the same parameters while the sparsity parameter d(i)superscript𝑑𝑖d^{(i)}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT in this case is set d(i)=m(i)<n∈ℕsuperscript𝑑𝑖superscript𝑚𝑖𝑛ℕd^{(i)}=m^{(i)}<n\in\mathbb{N}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT = italic_m start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT < italic_n ∈ blackboard_N.
We then pass 𝒔(i)superscript𝒔𝑖\bm{s}^{(i)}bold_italic_s start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT and a sparsity parameter d(i)superscript𝑑𝑖d^{(i)}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT in the sparse activation function ϕitalic-ϕ\phiitalic_ϕ resulting in the activation map 𝜶(i)superscript𝜶𝑖\bm{\alpha}^{(i)}bold_italic_α start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT:
We choose values for d(i)superscript𝑑𝑖d^{(i)}italic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT for each activation function in such as way, to approximately have the same number of activations for fair comparison of the sparse activation functions.
D
The essence of PBLLA is selecting an alternative UAV randomly in one iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of two strategies and τ𝜏\tauitalic_τ. UAV prefers to select the power and altitude which provide higher utility. Nevertheless, highly dynamic scenarios will cause UAVs to make mistakes and pick the worse strategy. The dynamic degree index τ𝜏\tauitalic_τ determines the dynamic degree of the situation and UAV’s performance. Small τ𝜏\tauitalic_τ means less dynamic scenarios and fewer mistakes when UAVs are making decisions. When τ→0→𝜏0\tau\rightarrow 0italic_τ → 0 which equals to stabilization, UAV will always select the power and altitude with higher utility; when τ→∞→𝜏\tau\rightarrow\inftyitalic_τ → ∞ where exists sever dynamics, UAV will choose them randomly. However, PBLLA has its limitations that PBLLA is only one single UAV is allowed for altering strategies in one iteration. We will propose a new algorithm in the next section to overcome the restrictions.
Compared with other algorithms, novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in the UAV networks in search of the optimal channel selection [31][29], such as stochastic learning algorithm [30]. The most widely seen algorithm–LLA is an ideal method for NE approaching [9][32]. The BLLA has been employed by [33], which is modified from LLA to update strategies in each iteration to converge to the NE. However, only a single agent is allowed to alter strategies in one iteration. In large-scale scenarios, more iterations are required, which makes BLLA inefficient. It is obvious that more UAVs altering strategies in one iteration would be more efficient. To achieve it, the works in [34] and [35] have provided a novel synchronous algorithm. However, there exist superabundant restrictions that make the algorithm impractical in most scenarios. Compared with the formers, SPBLLA has fewer constraints and can achieve synchronous operation, which can significantly improve the computational efficiency.
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on current game state, and then another UAV changes strategy in the next iteration based on the new game state. It means that UAVs are not permitted to update strategies at the same time. Besides, to determine which UAV to update strategy, the coordinating process will occupy plenty of channel capacities and require more time between two iterations [15]. If the algorithm can learn synchronously, more than one UAV can update strategies based on the current game state in one iteration. Thus, the algorithm can be more efficient. To sum up, synchronous update algorithms which can learn from previous experiences are desirable, but only a little research investigated on it.
Since PBLLA only allows one single UAV to alter strategies in one iteration, such defect would cause computation time to grow exponentially in large-scale UAVs systems. In terms of large-scale UAVs ad-hoc networks with a number of UAVs denoted as M𝑀Mitalic_M, M2superscript𝑀2M^{2}italic_M start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT times of exchange messages will be needed to coordinate and guarantee that only one UAV changes strategy in each iteration. Such a process not only consumes large energy but also prolongs convergence time. Algorithms that can improve the learning rate and reduce messages exchange is urgently needed. Thus, we propose the Synchronous Payoff-based Binary Log-linear Learning Algorithm (SPBLLA), which permits each UAV altering their strategies synchronously and learning with no message exchange.
Fig. 15 presents the learning rate of PBLLA and SPBLLA when τ=0.01𝜏0.01\tau=0.01italic_τ = 0.01. As m𝑚mitalic_m increases the learning rate of SPBLLA decreases, which has been shown in Fig. 15. However, when m𝑚mitalic_m is small, SPBLLA’s learning rate is about 3 times that of PBLLA showing the great advantage of synchronous learning. When τ=0.015𝜏0.015\tau=0.015italic_τ = 0.015 and τ=0.02𝜏0.02\tau=0.02italic_τ = 0.02 as shown in Fig. 15, such phenomenon also exists. Since PBLLA merely permits a single UAV to alter strategies in one iteration, SPBLLA’s synchronous learning rate will much larger than PBLLA. Moreover, in the large-scale UAV network with high dynamic, PBLLA needs information exchange to decide the update order, which would severely prolong the learning time. PBLLA’s learning time might be four times as long as that of SPBLLA. Thus we can make the conclusion that in the same condition (the same τ𝜏\tauitalic_τ and other indexes), SPBLLA performs better and is more suitable for large-scale highly dynamic environment than PBLLA, and SPBLLA can improve the learning rate several times. With larger altering strategies probability, SPBLLA will be even more powerful.
C
+[1μ0⁢ω⁢𝐁⋅∇f+1μ0⁢f⁢∇⋅(ω⁢𝐁)]delimited-[]⋅1subscript𝜇0𝜔𝐁∇𝑓⋅1subscript𝜇0𝑓∇𝜔𝐁\displaystyle+\left[\frac{1}{\mu_{0}}\omega\mathbf{B}\cdot\nabla f+\frac{1}{% \mu_{0}}f\nabla\cdot\bigg{(}\omega\mathbf{B}\bigg{)}\right]+ [ divide start_ARG 1 end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_ARG italic_ω bold_B ⋅ ∇ italic_f + divide start_ARG 1 end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_ARG italic_f ∇ ⋅ ( italic_ω bold_B ) ]
with Poynting flux. Note that the terms +(𝐯⋅∇ψ)μ0⁢r2⁢∇ψ⋅𝐯∇𝜓subscript𝜇0superscript𝑟2∇𝜓+\frac{(\mathbf{v}\cdot\nabla\psi)}{\mu_{0}r^{2}}\nabla\psi+ divide start_ARG ( bold_v ⋅ ∇ italic_ψ ) end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG ∇ italic_ψ and −η⁢Δ∗⁢ψμ0⁢r2⁢∇ψ𝜂superscriptΔ𝜓subscript𝜇0superscript𝑟2∇𝜓-\frac{\eta\Delta^{*}\psi}{\mu_{0}r^{2}}\nabla\psi- divide start_ARG italic_η roman_Δ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT italic_ψ end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG ∇ italic_ψ in the final
+[η⁢(Δ∗⁢ψ)2μ0⁢r2+1μ0⁢r2⁢∇ψ⋅∇(η⁢Δ∗⁢ψ)]delimited-[]𝜂superscriptsuperscriptΔ𝜓2subscript𝜇0superscript𝑟2⋅1subscript𝜇0superscript𝑟2∇𝜓∇𝜂superscriptΔ𝜓\displaystyle+\left[\frac{\eta(\Delta^{*}\psi)^{2}}{\mu_{0}r^{2}}+\frac{1}{\mu% _{0}r^{2}}\nabla\psi\cdot\nabla(\eta\Delta^{*}\psi)\right]+ [ divide start_ARG italic_η ( roman_Δ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT italic_ψ ) start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG + divide start_ARG 1 end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG ∇ italic_ψ ⋅ ∇ ( italic_η roman_Δ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT italic_ψ ) ]
+[η⁢(∇f)2μ0⁢r2+fμ0⁢∇⋅(ηr2⁢∇f)]delimited-[]𝜂superscript∇𝑓2subscript𝜇0superscript𝑟2⋅𝑓subscript𝜇0∇𝜂superscript𝑟2∇𝑓\displaystyle+\left[\frac{\eta(\nabla f)^{2}}{\mu_{0}r^{2}}+\frac{f}{\mu_{0}}% \nabla\cdot\bigg{(}\frac{\eta}{r^{2}}\nabla f\bigg{)}\right]+ [ divide start_ARG italic_η ( ∇ italic_f ) start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG + divide start_ARG italic_f end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_ARG ∇ ⋅ ( divide start_ARG italic_η end_ARG start_ARG italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG ∇ italic_f ) ]
−[1μ0⁢r2⁢Δ∗⁢ψ⁢(𝐯⋅∇ψ)+1μ0⁢r2⁢∇ψ⋅∇(𝐯⋅∇ψ)]delimited-[]1subscript𝜇0superscript𝑟2superscriptΔ𝜓⋅𝐯∇𝜓⋅1subscript𝜇0superscript𝑟2∇𝜓∇⋅𝐯∇𝜓\displaystyle-\left[\frac{1}{\mu_{0}r^{2}}\Delta^{*}\psi(\mathbf{v}\cdot\nabla% \psi)+\frac{1}{\mu_{0}r^{2}}\nabla\psi\cdot\nabla(\mathbf{v}\cdot\nabla\psi)\right]- [ divide start_ARG 1 end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG roman_Δ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT italic_ψ ( bold_v ⋅ ∇ italic_ψ ) + divide start_ARG 1 end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG ∇ italic_ψ ⋅ ∇ ( bold_v ⋅ ∇ italic_ψ ) ]
B
Let r𝑟ritalic_r be the relation on 𝒞Rsubscript𝒞𝑅\mathcal{C}_{R}caligraphic_C start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT given to the left of Figure 12. Its abstract lattice ℒrsubscriptℒ𝑟\mathcal{L}_{r}caligraphic_L start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is represented to the right.
First, remark that both A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B and B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A are possible. Indeed, if we set g=⟨b,a⟩𝑔𝑏𝑎g=\langle b,a\rangleitalic_g = ⟨ italic_b , italic_a ⟩ or g=⟨a,1⟩𝑔𝑎1g=\langle a,1\rangleitalic_g = ⟨ italic_a , 1 ⟩, then r⊧gA⁢→⁡Bsubscriptmodels𝑔𝑟𝐴→𝐵r\models_{g}A\operatorname{\rightarrow}Bitalic_r ⊧ start_POSTSUBSCRIPT italic_g end_POSTSUBSCRIPT italic_A → italic_B as there are no counter-examples in the resulting closure system.
The tuples t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, t4subscript𝑡4t_{4}italic_t start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT represent a counter-example to B⁢C⁢→⁡A𝐵𝐶→𝐴BC\operatorname{\rightarrow}Aitalic_B italic_C → italic_A for g1subscript𝑔1g_{1}italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and g3subscript𝑔3g_{3}italic_g start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT.
For convenience we give in Table 7 the list of all possible realities along with the abstract tuples which will be interpreted as counter-examples to A⁢→⁡B𝐴→𝐵A\operatorname{\rightarrow}Bitalic_A → italic_B or B⁢→⁡A𝐵→𝐴B\operatorname{\rightarrow}Aitalic_B → italic_A.
If no confusion is possible, the subscript R𝑅Ritalic_R will be omitted, i.e., we will use ≤,∧,∨\leq,\operatorname{\land},\operatorname{\lor}≤ , ∧ , ∨ instead of ≤R,∧R,∨Rsubscript𝑅subscript𝑅subscript𝑅\leq_{R},\operatorname{\land}_{R},\operatorname{\lor}_{R}≤ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∧ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT , ∨ start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT, respectively.
C
The results in Figure 3 show that using DQN with different Dropout methods result in better-preforming policies and less variability as the reduced standard deviation between the variants indicate to. In table 1, Wilcoxon Sign-Ranked test was used to analyze the effect of Variance before applying Dropout (DQN) and after applying Dropout (Dropout methods DQN). There was a statistically significant decrease in Variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore one of the Dropout methods outperformed DQN score.
Q-learning is among the most widely used reinforcement learning (RL) algorithms[4]. It’s based on an incremental dynamic programming technique because of the step by step look-up table representation in which it determines the optimal policy[22]. The Q-learning algorithm employs a table to estimate the optimal action value function, Q∗superscript𝑄Q^{*}italic_Q start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT. This table encompasses all states and actions within the environment and utilizes the value function to assess the quality (Q-function) of state-action pairs. It then updates using the following rule:
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow that could lead to solve the task on the environment [1][2][3]. Most of Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy in the form of sequence of actions that can be followed by the agent to solve the task. The process of choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning for that reason many algorithms have been introduced over a course of time, e.g, Q-learning[4], SARSA[5], and policy gradient methods[6]. These methods use linear function approximation techniques to estimate action value, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. The recent advances in deep neural networks helped to develop artificial agent named deep Q-network(DQN)[8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and the huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as ”overestimation phenomenon,” was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values. They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance.
The Gridworld problem (Figure 4) is a common RL benchmark. Its relatively small state space permits the Experience Replay (ER) buffer to store all possible state-action pairs. Moreover, this setup allows for the precise computation of the optimal action value function.
where st+1subscript𝑠𝑡1s_{t+1}italic_s start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT is the resulting state after applying action a in the state s, r is the immediate reward observed for action a at state s, γ𝛾\gammaitalic_γ is the discount factor, and α𝛼\alphaitalic_α is learning rate.
C
Weakly supervised segmentation using image-level labels versus a few images with segmentation annotations. Most new weakly supervised localization methods apply attention maps or region proposals in a multiple instance learning formulations. While attention maps can be noisy, leading to erroneously highlighted regions, it is not simple to decide on an optimal window or bag size for multiple instance learning approaches.
We provide comprehensive coverage of research contributions in the field of semantic segmentation of natural and medical images. In terms of medical imaging modalities, we cover the literature pertaining to both 2D (RGB and grayscale) as well as volumetric medical images.
Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important processing step in natural images for scene understanding and medical image analysis, for image-guided interventions, radiotherapy, or improved radiological diagnostics, etc. Image segmentation is formally defined as “the partition of an image into a set of nonoverlapping regions whose union is the entire image” (Haralick and Shapiro, 1992). A plethora of deep learning approaches for medical image segmentation have been introduced in the literature for different medical imaging modalities, including X-ray, visible-light imaging (e.g. colour dermoscopic images), magnetic resonance imaging (MRI), positron emission tomography (PET), computerized tomography (CT), and ultrasound (e.g. echocardiographic scans). Deep architectural improvement has been a focus of many researchers for different purposes, e.g., tackling gradient vanishing and exploding of deep models, model compression for efficient small yet accurate models, while other works have tried to improve the performance of deep networks by introducing new optimization functions.
Because of the large number of imaging modalities, the significant signal noise present in imaging modalities such as PET and ultrasound, and the limited amount of medical imaging data mainly because of high acquisition cost compounded by legal, ethical, and privacy issues, it is difficult to develop universal solutions that yield acceptable performances across various imaging modalities. Therefore, a proper research direction would be along the work of  Raghu et al. (2019) on image classification models, studying the risks of using non-medical pre-trained models for medical image segmentation.
While most deep segmentation models for medical image analysis rely on only clinical images for their predictions, there is often multi-modal patient data in the form of other imaging modalities as well as patient metadata that can provide valuable information, which most deep segmentation models do not use. Therefore, a valuable research direction for improving segmentation performance of medical images would be to develop models which are able to leverage multi-modal patient data.
D
Black line: the threshold from [28] indicating the value of λmaxs/2subscriptsuperscript𝜆𝑠max2\lambda^{s}_{\text{max}}/2italic_λ start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT max end_POSTSUBSCRIPT / 2 below which one should switch to the random cut to obtain a solution ≥0.53absent0.53\geq 0.53≥ 0.53 MAXCUT. The x-axis indicates the density of the graph connectivity, which increases by randomly adding edges.
Fig. 4 illustrates how the size of the cut γ⁢(𝐳)𝛾𝐳\gamma({\mathbf{z}})italic_γ ( bold_z ) induced by the spectral partition 𝐳𝐳{\mathbf{z}}bold_z changes as more edges are added and the original structure of the graph is corrupted (blue line). The figure also reports the size of the random cut (orange line) and the MAXCUT upper bound from Eq. (12) (green line). The black line indicates the threshold from [28], i.e., the value of λmax2/2subscriptsuperscript𝜆2max2\lambda^{2}_{\text{max}}/2italic_λ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT max end_POSTSUBSCRIPT / 2 below which the spectral cut is no longer guaranteed to be larger than the random cut. The graph used to generate the figure is a regular grid; however, similar results hold also for other families of random graphs and are reported in the supplementary material.
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yielded by the random partition; in green the MAXCUT upper bound; in black the theoretical threshold that indicates when to switch to the random partition to obtain a cut with size ≥0.53absent0.53\geq 0.53≥ 0.53 MAXCUT.
Black line: the threshold from [28] indicating the value of λmaxs/2subscriptsuperscript𝜆𝑠max2\lambda^{s}_{\text{max}}/2italic_λ start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT max end_POSTSUBSCRIPT / 2 below which one should switch to the random cut to obtain a solution ≥0.53absent0.53\geq 0.53≥ 0.53 MAXCUT. The x-axis indicates the density of the graph connectivity, which increases by randomly adding edges.
We replicate for each graph type the experiment in Sect. IV-B, which illustrates how the size of the cut obtained with the proposed algorithm changes as we randomly add edges. Fig. 11 reports in blue the size of the cut associated with the partition yielded by the spectral algorithm; in orange the size of the cut yielded by the random partition; in green the MAXCUT upper bound; in black the theoretical threshold that indicates when to switch to the random partition to obtain a cut with size ≥0.53absent0.53\geq 0.53≥ 0.53 MAXCUT.
B
In contrast to neural networks, random forests are very robust to overfitting due to their ensemble of multiple decision trees. Each decision tree is trained on randomly selected features and samples. Random forests have demonstrated remarkable performance in many domains (Fernández-Delgado et al., 2014).
Decision trees learn rules by splitting the data. The rules are easy to interpret and additionally provide an importance score of the features. Random forests (Breiman, 2001) are an ensemble method consisting of multiple decision trees, with each decision tree being trained using a random subset of samples and features.
While the generated decision rules are simple and interpretable, the orthogonal separation of the feature space can also be disadvantageous on other datasets, especially with correlated features (Menze et al., 2011). Additionally, random forests are not differentiable and cannot be fine-tuned with gradient-based optimization.
The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees. Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to simple random forests.
(1) We enable the generation of neural networks with very few training examples. (2) The resulting network can be used as a warm start, is fully differentiable, and allows further end-to-end fine-tuning. (3) The generated network can be easily integrated into any trainable pipeline (e.g., jointly with feature extraction) and existing high-performance deep learning frameworks can be used directly. This accelerates the process and enables parallelization via GPUs.
B
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a d2⁢H3⁢Tsuperscript𝑑2superscript𝐻3𝑇\sqrt{d^{2}H^{3}T}square-root start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_H start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT italic_T end_ARG-regret up to logarithmic factors.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
Our work is closely related to another line of work (Even-Dar et al., 2009; Yu et al., 2009; Neu et al., 2010a, b; Zimin and Neu, 2013; Neu et al., 2012; Rosenberg and Mansour, 2019a, b) on online MDPs with adversarially chosen reward functions, which mostly focuses on the tabular setting.
Assuming the transition dynamics are known but only the bandit feedback of the received rewards is available, the work of Neu et al. (2010a, b); Zimin and Neu (2013) establishes an H2⁢|𝒜|⁢T/βsuperscript𝐻2𝒜𝑇𝛽H^{2}\sqrt{|\mathcal{A}|T}/\betaitalic_H start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT square-root start_ARG | caligraphic_A | italic_T end_ARG / italic_β-regret (Neu et al., 2010b), a T2/3superscript𝑇23T^{2/3}italic_T start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT-regret (Neu et al., 2010a), and a H⁢|𝒮|⁢|𝒜|⁢T𝐻𝒮𝒜𝑇\sqrt{H|{\mathcal{S}}||\mathcal{A}|T}square-root start_ARG italic_H | caligraphic_S | | caligraphic_A | italic_T end_ARG-regret (Zimin and Neu, 2013), respectively, all up to logarithmic factors. Here 𝒮𝒮{\mathcal{S}}caligraphic_S is the state space and |𝒮|𝒮|{\mathcal{S}}|| caligraphic_S | is its cardinality. In particular, it is assumed by Neu et al. (2010b) that, with probability at least β𝛽\betaitalic_β, any state is reachable under any policy.
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same T𝑇\sqrt{T}square-root start_ARG italic_T end_ARG-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
B
On the contrary, GPUs feature large register files and aim to hide memory latency by leveraging parallel slackness. Another critical aspect of loop-back architectures is low compute utilization, which can potentially occur if certain layer or operation types do not fit the static compute array (i.e., if operation size is too low).
The results reveal that quantization does not provide throughput improvements on this processor. This is mainly due to the efficient floating-point units within the CPU in combination with fast on-chip memory and the high overhead resulting from performing low-bit-width computations.
The advantage of their approach is that weight assignments need not be stored explicitly since they are given implicitly by the hashing function. The authors show a memory footprint reduction by a factor of 10 while keeping the prediction quality essentially unaffected.
The advantage of such a generic compute architecture is that they allow arbitrary operations in combination with productive code generation since the hardware does not need to be optimized for a certain task. Continuous improvements in semi-conductor and processor technology are the main improvement factor of such inference engines.
While domain-specific accelerators, such as Google’s TPU, excel in their specific performance, they are usually limited to a set of specific operations and are neither flexible in terms of data types nor sparse calculations. Furthermore, in particular for the TPU, experimentation is often hindered due to limitations in the tool chain which is not flexible enough to support such optimizations. They are not suited to execute generic compressed models and are therefore not included in the following experiments.
C
{v0,v27}+{v27,v28}+{v28,v14}+{v14⁢v29}+{v29,v23}+{v23,v30}+{v30,v31}+{v31,v0},subscript𝑣0subscript𝑣27subscript𝑣27subscript𝑣28subscript𝑣28subscript𝑣14subscript𝑣14subscript𝑣29subscript𝑣29subscript𝑣23subscript𝑣23subscript𝑣30subscript𝑣30subscript𝑣31subscript𝑣31subscript𝑣0\displaystyle\quad\{v_{0},v_{27}\}+\{v_{27},v_{28}\}+\{v_{28},v_{14}\}+\{v_{14% }v_{29}\}+\{v_{29},v_{23}\}+\{v_{23},v_{30}\}+\{v_{30},v_{31}\}+\{v_{31},v_{0}\},{ italic_v start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 27 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 27 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 28 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 28 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 14 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 14 end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT 29 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 29 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 23 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 23 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 31 end_POSTSUBSCRIPT } + { italic_v start_POSTSUBSCRIPT 31 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT } ,
ω1⁢ is the degree-1 homology class induced bysubscript𝜔1 is the degree-1 homology class induced by\displaystyle\omega_{1}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the degree-1 homology class induced by
ω0⁢ is the degree-1 homology class induced bysubscript𝜔0 is the degree-1 homology class induced by\displaystyle\omega_{0}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the degree-1 homology class induced by
and seeks the infimal r>0𝑟0r>0italic_r > 0 such that the map induced by ιrsubscript𝜄𝑟\iota_{r}italic_ι start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at n𝑛nitalic_n-th homology level annihilates the fundamental class [M]delimited-[]𝑀[M][ italic_M ] of M𝑀Mitalic_M. This infimal value defines FillRad⁢(M)FillRad𝑀\mathrm{FillRad}(M)roman_FillRad ( italic_M ), the filling radius of M𝑀Mitalic_M.
ω2⁢ is the degree-1 homology class induced bysubscript𝜔2 is the degree-1 homology class induced by\displaystyle\omega_{2}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the degree-1 homology class induced by
A
A DR method is an algorithm that projects a high-dimensional data set to a low-dimensional representation, preserving the structure of the original data as much as possible. Most of these algorithms have some (or many) hyper-parameters that may considerably affect their results, but setting them correctly is not a trivial task. In Subsection 2.1, we briefly describe techniques that try to solve this problem, and discuss the differences to our tool’s functionality. The resulting projection is usually visualized with scatterplots, which support tasks such as finding groups of similar points, correlations, and outliers [16]. However, a scatterplot is simply the first step in analyzing a high-dimensional data set through a projection: questions regarding the quality of the results (see Subsection 2.2) and how to interpret them (see Subsection 2.3) are pervasive in the literature on the subject.
Overall Accuracy   We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and none of them appears to have a clear advantage over the others, we pick one with good values for all the rest of the quality metrics (i.e., greater than 40%). The overview in Figure 7(a) shows the selected projection with three clear clusters of varying sizes (marked with C1, C2, and C3). However, the labels seem to be mixed in all of them. That means either the projections are not very good, or the labels are simply very hard to separate. By analyzing the Shepard Heatmap (Figure 7(b)), it seems that there is a distortion in how the projection represents the original N-D distances: the darker cells of the heatmap are above the diagonal and concentrated near the origin, which means that the lowest N-D distances (up to 30% of the maximum) have been represented in the projection with a wide range of 2-D distances (up to 60% of the maximum). While it may be argued that the data is too spread in the projection, we must always consider that t-SNE’s goal is not to preserve all pairwise distances, but only close neighborhoods. The projection has used most of its available 2-D space to represent (as best as possible) the smallest N-D distances, which can be considered a good trade-off for this specific objective. In the following paragraphs, we concentrate on some of the goals described in Subsection 4.3 and Subsection 4.4 for each of the three clusters.
Fujiwara et al. [44] proposed the contrasting clusters in PCA (ccPCA) method to find which dimensions contributed more to the formation of a selected cluster and why it differs from the rest of the dataset, based on information on separation and internal vs. external variability. We have similar goals, but approach them with different methods. For exploring clusters and selections in general, we use PCA to filter and order a local PCP plot; this could be easily adapted to use ccPCA instead as an underlying method for choosing which dimensions to filter and how to re-order the axes, without affecting the overall proposed analytical flow of the tool. On the other hand, ccPCA does not deal with the analysis of shapes, which we support with our proposed Dimension Correlation. Other recent approaches include DimReader [45], where the authors create so-called generalized axes for non-linear DR methods, but besides explaining a single dimension at a time, it is currently unclear how exactly it can be used in an interactive exploration scenario; and
A DR method is an algorithm that projects a high-dimensional data set to a low-dimensional representation, preserving the structure of the original data as much as possible. Most of these algorithms have some (or many) hyper-parameters that may considerably affect their results, but setting them correctly is not a trivial task. In Subsection 2.1, we briefly describe techniques that try to solve this problem, and discuss the differences to our tool’s functionality. The resulting projection is usually visualized with scatterplots, which support tasks such as finding groups of similar points, correlations, and outliers [16]. However, a scatterplot is simply the first step in analyzing a high-dimensional data set through a projection: questions regarding the quality of the results (see Subsection 2.2) and how to interpret them (see Subsection 2.3) are pervasive in the literature on the subject.
A few other tools have been proposed throughout the years that incorporate these techniques to deal with the problem of supporting the exploration of multidimensional data with DR. In Subsection 2.4, we discuss their goals and trade-offs, and compare them with t-viSNE.
D
Topologies: A promising research direction is to jointly consider topologies and ensemble strategies to leverage the superior explorative/exploitative powers of ensembles and also topologies for population-based metaheuristics to achieve better solutions than other solvers.
We should pause and reflect on which research directions should be pursued in the future in regard to bio-inspired optimization and related areas, as there are other remarkable fields to be noted as direct applications for bio-inspired optimization. In [3], the authors show a full discussion of the status of the field from both descriptive (where we stand) and prescriptive (what’s next) points of view. Here, we describe the areas in which bio-inspired optimization algorithms are used, and research niches related to them, as shown in Figure 7. The areas and their main aspects that can be studied as promising research lines are:
Surrogate model-assisted optimization: This area has promising research lines of investigation with highly dimensional search spaces and DL models, where there is a need to alleviate high computational efforts, with evaluation times that range from hours to days per experiment.
From a design perspective, nature- and bio-inspired optimization algorithms are usually conceived after observing a natural process or the behavioral patterns of biological organisms, which are then converted into a computational optimization algorithm. New discoveries in Nature and the undoubted increase of worldwide investigation efforts have ignited the interest of the research community in biological processes and their extrapolation to computational problems. As a result, many new bio-inspired meta-heuristics have appeared in the literature, increasing the outbreak of proposals and applications every year. Nowadays, every natural process can be thought to be adaptable and emulated to produce a new meta-heuristic approach, yet with different capabilities of reaching global optimum solutions to optimization problems.
Going deeper into the creation of Machine Learning (ML) and Deep Learning (DL) models: Although most algorithms have been developed in recent years, the impact of EAs, a classical family of algorithms, has risen in the last few years. Their use in ML has been widely studied both for the design of models [615] and also as a support for the optimization of those models [616]. These algorithms have gained momentum under the evidence reported around their usage to evolve and improve other AI techniques: most notably, the optimization of the structure and training parameters of deep neural networks [8], or the creation of new data-based models from scratch (i.e. by evolving very essential data processing primitives) that has been presented in the groundbreaking work by Google [617]. With this ongoing development, the research trend of Neural Architecture Search has emerged as another important area full of EAs applications [618], which mainly focuses on the construction of the DL model via the evolution of block of layers [619, 14, 620]. Recently, we have witnessed the use of EAs to model more AI models, as in the case of POET [621] where more environments are generated to learn from the diversity created, with the merging of EAs with Large Language Model (LLM) [622], and with other areas such as Automated Machine Learning [623], Reinforcement Learning and robotics [624], and Multi-task Learning [625]. In recent years, an interesting synergy between bio-inspired optimization and modern ML systems has been observed in the literature, in particular General-Purpose Artificial Intelligence Systems (GPAIS), as we will highlight later in the report.
B
where φ⁢(⋅)𝜑⋅\varphi(\cdot)italic_φ ( ⋅ ) is certain activation function, A^=D~−12⁢A~⁢D~−12^𝐴superscript~𝐷12~𝐴superscript~𝐷12\hat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}over^ start_ARG italic_A end_ARG = over~ start_ARG italic_D end_ARG start_POSTSUPERSCRIPT - divide start_ARG 1 end_ARG start_ARG 2 end_ARG end_POSTSUPERSCRIPT over~ start_ARG italic_A end_ARG over~ start_ARG italic_D end_ARG start_POSTSUPERSCRIPT - divide start_ARG 1 end_ARG start_ARG 2 end_ARG end_POSTSUPERSCRIPT, A~=A+I~𝐴𝐴𝐼\widetilde{A}=A+Iover~ start_ARG italic_A end_ARG = italic_A + italic_I, D~~𝐷\widetilde{D}over~ start_ARG italic_D end_ARG denotes the degree matrix (D~i⁢i=∑j=1nA~i⁢jsubscript~𝐷𝑖𝑖superscriptsubscript𝑗1𝑛subscript~𝐴𝑖𝑗\widetilde{D}_{ii}=\sum_{j=1}^{n}\widetilde{A}_{ij}over~ start_ARG italic_D end_ARG start_POSTSUBSCRIPT italic_i italic_i end_POSTSUBSCRIPT = ∑ start_POSTSUBSCRIPT italic_j = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT over~ start_ARG italic_A end_ARG start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT), and W𝑊Witalic_W denotes the parameters of GCN. It should be pointed out that A~~𝐴\widetilde{A}over~ start_ARG italic_A end_ARG is a graph with self-loop for each node and A^^𝐴\hat{A}over^ start_ARG italic_A end_ARG is the normalized adjacency matrix. More importantly, A^⁢X^𝐴𝑋\hat{A}Xover^ start_ARG italic_A end_ARG italic_X is equivalent to compute weighted means for each node with its first-order neighbors from the spatial aspect. To improve the performance, MixHop [26] aims to mix information from different order neighbors and SGC [27] tries to utilize higher-order neighbors.
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity, k𝑘kitalic_k. With the new graph, we re-train the GAE. These steps are repeated until the convergence.
Network embedding is a fundamental task for graph type data such as recommendation systems, social networks, etc. The goal is to map nodes of a given graph into latent features (namely embedding) such that the learned embedding can be utilized on node classification, node clustering, and link prediction.
To apply graph convolution on unsupervised learning, GAE is proposed [20]. GAE firstly transforms each node into latent representation (i.e., embedding) via GCN, and then aims to reconstruct some part of the input. GAEs proposed in [20, 29, 22] intend to reconstruct the adjacency via decoder while GAEs developed in [21] attempt to reconstruct the content. The difference is which extra mechanism (such as attention, adversarial learning, graph sharpness, etc.) is used.
(1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for decoders. (2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it.
C
∙∙\bullet∙ Traffic load. Network scans, such as (Lyon, 2009; Durumeric et al., 2013; Kührer et al., 2014), require exchanging packets with a large number of Internet networks as well as IP addresses inside the networks. To avoid scanning the Internet we periodically download a dataset of a full scan of the Internet done by Sonar.
Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 2018), or by identifying spoofed packets using offline analysis of traffic, e.g., (Lone et al., 2017; Luckie et al., 2019). The need to install agents on networks or the ability to obtain traces only from some networks limits the studies to non-uniform coverage of the Internet. Therefore it is not clear how representative these statistics are. Unfortunately, this limitation to a small set of networks creates a bias in the assessments of the overall number of spoofable networks. The extrapolation from the small set of networks to the entire Internet typically result in assessment that at least 30% of the Internet networks do not filter spoofed packets (Luckie et al., 2019; Man et al., 2020). As we show, the number of spoofable networks is above 72% which is significantly higher than what was previous believed.
∙∙\bullet∙ Consent of the scanned. It is often impossible to request permission from owners of all the tested networks in advance, this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies, (Durumeric et al., 2013, 2014), we provide an option to opt out of our scans. To opt out the network has to provide either its network block (in CIDR notation), domain or ASN through the contact page at https://smap.cad.sit.fraunhofer.de. Performing security scans is important - the networks that do not enforce filtering of spoofed packets pose a hazard not only to their operators but also to their users, customers and services, as well as other networks. Due to the importance of identifying such networks, in their recent study (Luckie et al., 2019) even make public the (“name-and-shame”) lists of providers with missing or misconfigured filtering of spoofed packets; (Luckie et al., 2019) also discuss stronger measures against spoofable networks, including liability for damages, and various types of regulation. Inevitably, due to the risks that such networks pose to the Internet ecosystem, it is of public interest to know who those networks are. We do not make the identity of the networks, that do not filter spoofed packets, publicly available, but inform the general public on the fraction of such networks and provide their characterisation (i.e., size, geo-location, business type) in Section 5.
How widespread is the ability to spoof? There are significant research and operational efforts to understand the extent and the scope of (ingress and egress)-filtering enforcement and to characterise the networks which do not filter spoofed packets; we discuss these in Related Work, Section 2. Although the existing studies and tools, such as the Open Resolver (Mauch, 2013) and the Spoofer (Beverly and Bauer, 2005; Beverly et al., 2009, 2013; Lone et al., 2018; Luckie et al., 2019) projects, provide a valuable contribution for inferring networks which do not enforce spoofing, they are nevertheless insufficient: they provide a meager (often non-uniform) coverage of the Internet networks and are limited in their applicability as well as effectiveness.
∙∙\bullet∙ Traffic load. Network scans, such as (Lyon, 2009; Durumeric et al., 2013; Kührer et al., 2014), require exchanging packets with a large number of Internet networks as well as IP addresses inside the networks. To avoid scanning the Internet we periodically download a dataset of a full scan of the Internet done by Sonar.
B
Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161161161161 to 3,60036003{,}6003 , 600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal oxide-based gas sensors. These features summarizing the time series sensor responses are the raw and normalized steady-state features and the exponential moving average of the increasing and decaying transients taken at three different alpha values. The experiments used six gases, ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene, presented in arbitrary order and at variable concentrations. Chemical interferents were also presented to the sensors between batches, and the time between presentations varied, both of which contributed to further sensor variability. The dataset thus exemplifies sensor variance due to contamination and variable odor concentration in a controlled setting.
Experiments in this paper used the gas sensor drift array dataset [7]. The data consists of 10 sequential collection periods, called batches. Every batch contains between 161161161161 to 3,60036003{,}6003 , 600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal oxide-based gas sensors. These features summarizing the time series sensor responses are the raw and normalized steady-state features and the exponential moving average of the increasing and decaying transients taken at three different alpha values. The experiments used six gases, ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene, presented in arbitrary order and at variable concentrations. Chemical interferents were also presented to the sensors between batches, and the time between presentations varied, both of which contributed to further sensor variability. The dataset thus exemplifies sensor variance due to contamination and variable odor concentration in a controlled setting.
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design introduces variation in training inputs, which makes it harder to learn consistent context patterns. For this task, semisupervised learning techniques, such as self-labeled samples, may help. If the context layer can process unlabeled data, then it is no longer necessary to include every class in every batch. The full six-gas sensor drift dataset can be used, as well as other unbalanced and therefore realistic datasets.
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first T−1𝑇1T-1italic_T - 1 batches are used for training, while the next unseen batch T𝑇Titalic_T is used for evaluation. When training the context network, subsequences of the training data are selected to be processed recurrently, indicated by the labels s𝑠sitalic_s through p𝑝pitalic_p. In all cases, training data is obtained only from the first T−1𝑇1T-1italic_T - 1 batches of data. (B.) A feature vector is input to a collection of SVMs, one trained on each prior batch. Each SVM output is weighted by its corresponding coefficient, β𝛽\betaitalic_β, and the weighted sum of the output class predictions is taken to be the output, 𝐲^normal-^𝐲\hat{\mathbf{y}}over^ start_ARG bold_y end_ARG, of the ensemble. (C.) A schematic of the skill model shows feedforward progression of input through two hidden layers 𝐬𝐬\mathbf{s}bold_s and 𝐝𝐝\mathbf{d}bold_d followed by the output layer 𝐲^normal-^𝐲\hat{\mathbf{y}}over^ start_ARG bold_y end_ARG. (D.) A schematic of the context+skill model introduces a sequential processing of prior samples as a separate processing pathway. For each context batch from s𝑠sitalic_s through p−1𝑝1p-1italic_p - 1, one sample per odor class is chosen as a representative. The context information is then utilized by the “decision-making” layer 𝐝𝐝\mathbf{d}bold_d and is thus integrated into the feedforward pathway.
Two processing steps were applied to the data used by all models included in this paper. The first preprocessing step was to remove all samples taken for gas 6, toluene, because there were no toluene samples in batches 3, 4, and 5. Data was too incomplete for drawing meaningful conclusions. Also, with such data missing it was not possible to construct contexts from odor samples from each class in previous batches. The second preprocessing step normalized each feature so that all values corresponding to any feature dimension of the 128 total have zero mean and unit variance as is standard practice in deep learning.
D
The goal would be to obtain an algorithm with running time 2O⁢(f⁢(δ)⁢n)superscript2𝑂𝑓𝛿𝑛2^{O(f(\delta)\sqrt{n})}2 start_POSTSUPERSCRIPT italic_O ( italic_f ( italic_δ ) square-root start_ARG italic_n end_ARG ) end_POSTSUPERSCRIPT, where f⁢(n)=O⁢(n1/6)𝑓𝑛𝑂superscript𝑛16f(n)=O(n^{1/6})italic_f ( italic_n ) = italic_O ( italic_n start_POSTSUPERSCRIPT 1 / 6 end_POSTSUPERSCRIPT ). Such a running time becomes 2O⁢(n)superscript2𝑂𝑛2^{O(\sqrt{n})}2 start_POSTSUPERSCRIPT italic_O ( square-root start_ARG italic_n end_ARG ) end_POSTSUPERSCRIPT for constant δ𝛿\deltaitalic_δ (which is optimal for TSP in ℝ2superscriptℝ2\mathbb{R}^{2}blackboard_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, under ETH), and it becomes 2O⁢(n2/3)superscript2𝑂superscript𝑛232^{O(n^{2/3})}2 start_POSTSUPERSCRIPT italic_O ( italic_n start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT ) end_POSTSUPERSCRIPT for δ=n𝛿𝑛\delta=nitalic_δ = italic_n (which is optimal for TSP in ℝ3superscriptℝ3\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT, assuming ETH).
First of all, the ΔisubscriptΔ𝑖\Delta_{i}roman_Δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are now independent. Second, as we will prove next, the expected running time of an algorithm on a uniformly distributed point set can be bounded by the expected running time of that algorithm on a point set generated this way.
In the second step, we therefore describe a method to generate the random point set in a different way, and we show how to relate the expected running times in these two settings. In the third step, we will explain which changes are made to the algorithm.
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the x𝑥xitalic_x-coordinates of the points need not be integer, as long as the difference between x𝑥xitalic_x-coordinates of any two consecutive points is at least 1.
We believe that our algorithm can serve as the basis of an algorithm solving such a problem, under the assumption that the point sets are dense enough to ensure that the solution will generally follow these curves / segments. Making this precise, and investigating how the running time depends on the number of line segments, would be interesting.
D
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the element on the (full) subtree rooted at the node is the same as that of a (possibly different) element on the entire tree (i. e. at the root). The idea for the name here is that the action on a full subtree is similar to the action of the group or semigroup on the entire tree. An important special case of such a self-similar presentation occurs when there is a finite set of generators such that the action of any generator on the subtree below any node is the same as the action of some (potentially different) generator at the root. By identifying the nodes of the infinite regular tree with the strings over an appropriate finite alphabet, we can describe such an action using a finite automaton (more precisely, a finite-state letter-to-letter – or synchronous – transducer), which leads to the class of automaton semigroups and automaton groups (also often called ‘automata groups’). If we relax the finite-state requirement and also consider infinite automata, we can even describe any self-similar action in this way. This is the approach we will take in this paper.
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much simpler. While it is known that the free semigroup of rank one is not an automaton semigroup [4, Proposition 4.3], the free semigroups of higher rank can be generated by an automaton [4, Proposition 4.1]. In fact, the construction to generate these semigroups is quite simple [4, Proposition 4.1] (compare also to 3). The same construction can also be used to generate free monoids as automaton semigroups or monoids. Here, the main difference is that the free monoid in one generator can indeed be generated by an automaton: it is generated by the adding machine (see 1), which also generates the free group of rank one if inverses are added. On a side note, it is also worthwhile to point out that – although there does not seem to be much research on the topic – there are examples to generate the free inverse semigroup of rank one as a subsemigroup of an automaton semigroup [14, Theorem 25] and an adaption to present the free inverse monoid of rank one as an automaton semigroup [6, Example 2] (see also [8, Example 23]).
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, loose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]).
from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups111Note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for S𝑆Sitalic_S and for T𝑇Titalic_T are finite. Therefore, these constructions also work for self-similar semigroups, although this is not explicitly stated there. (Theorem 6) but observe that the constructed generating automaton for S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is finite (and/or complete) if this was the case for the original two automata generating S𝑆Sitalic_S and T𝑇Titalic_T. The existence of a homomorphism from S𝑆Sitalic_S to T𝑇Titalic_T (or vice-versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or being an automaton semigroup; Theorem 14). This is noteworthy because – as mentioned above – the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata, see [8, Theorem 19] and [20, Theorem 1.2.1.4]).
The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing the self-similarity property and that the analogous statement for automaton semigroups holds as well. The version for automaton semigroups does not follow directly from 8, as the free monogenic semigroup is not a complete automaton semigroup [4, Proposition 4.3] or even a (partial) automaton semigroup (see [8, Theorem 18] or [20, Theorem 1.2.1.4]).
A
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
The usage of visual cues and sensitivities in existing methods is superfluous because the results indicate that performance improves through degradation of training accuracy. We hypothesize that simple regularization that does not rely on cues or sensitivities can also achieve large performance gains for VQA-CP. To test this hypothesis, we devise a simple loss function which continuously degrades the training accuracy by training the network to always predict a score of zero for all possible answers i.e. produce a zero vector (𝟎0\mathbf{0}bold_0). The overall loss function can be written as:
Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the predictions are correct or incorrect. We find that this approach also achieves near state-of-the-art performance (48.9%percent48.948.9\%48.9 % on VQA-CPv2), providing further support for our claims.
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons.
B
To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019). We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents. Since Roberta accepts a maximum of 512 tokens as input, only the first 512 tokens of text from the documents were used for training while the rest was discarded. As shown in the analysis section, the average length of a privacy policy in terms of the number of words is 1,871. Thus 512 tokens would take into account about a fourth of an average privacy policy.
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application from the Google Play Store, legal experts were recruited to identify relevant evidence within respective privacy policies that answered the question asked by the crowdworkers. The goal of the question answering task is to identify a set sentences in the privacy policy that has information relevant to the question. Ravichander et al. (2019) divided the corpus into 1,350 questions for training and validation and 400 questions for testing where each question in the test set is annotated by at least three experts. We fine-tuned PrivBERT on the training set as a binary classification task on each question-answer sentence pair to identify if the sentence is evidence for the question or not. We trained the model with a dropout of 0.2 and a learning rate of 3e-6 with the positive and negative classes weighted in the ratio 8:1 during training. We used sentence level F1 as the evaluation metric as described by Ravichander et al. (2019), where precision and recall are calculated by measuring the overlap between the predicted sentences and gold standard sentences.
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. Due to its size, it was possible for the held out test set to have a biased sample. Thus we repeated the sampling and training processes with a 5-fold cross-validation approach. Table 1 shows performance of the models after the results from test sets were averaged. Since the transformer based model had the best results, we ran it on all the the candidate privacy policies. Out of 2.1 million English candidate privacy polices, 1.54 million were classified as privacy policies and the rest were discarded.
Document Classification. Some of the web pages in the English language candidate document set may not have been privacy policies and instead simply satisfied our URL selection criteria. To separate privacy policies from other web documents we used a supervised machine learning approach. Two researchers in the team labeled 1,600 randomly selected candidate documents based on a preset scheme in consultation with a privacy expert. While both the researchers had substantial prior experience with privacy policies, the privacy expert was consulted to eliminate uncertainty in the annotations of a few documents. Lack of agreement in the annotations occurred for six documents, which were settled by discussion with the expert. Out of 1,600 documents, 1,145 were privacy policies and 455 were not privacy policies.
The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
B
The second expert (E2) is a senior researcher in software engineering and applied ML working in a government research institute and as an adjunct professor. He has worked with ML for the past 7 years, and 2 years with stacking ensemble learning. The third expert (E3) is the head of applied ML in a large multinational corporation, working with recommendation systems. She has approximately 7 years of experience with ML, of which 1.5 years are related to stacking ensemble learning. All three experts have a PhD in computer science and none of them reported any colorblindness issues. The process was as follows: (1) we presented the main goals of our system, (2) we explained the process of improving the heart disease data set results (see section 4), and (3) after that, we gave them a couple of minutes to interact with the VA system by using the simple iris data set.
Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense. They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data.
Another positive opinion from E3 was that, with a few adaptations to the performance metrics, StackGenVis could work with regression or even ranking problems. E3 also mentioned that supporting feature generation in the feature selection phase might be helpful. Finally, E1 suggested that the circular barcharts could only show the positive or negative difference compared to the first stored stack. To avoid an asymmetric design and retain a lower complexity level for StackGenVis, we omitted his proposal for the time being, but we consider implementing both methods in the future.
(ii) in the next algorithm exploration phase, we compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models; (iii) during the data wrangling phase, we manipulate the instances and features with two different views for each of them; (iv) model exploration allows us to reduce the size of the stacking ensemble, discard any unnecessary models, and observe the predictions of the models collectively (StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(d));
Thus, it is considered an iterative process: the expert might start with the algorithms’ exploration and move to the data wrangling, or vice versa. “The former approach is even more suitable for your VA system, because you use the accuracy of the base ML models as feedback/guidance to the expert in order to understand which instances should be wrangled”, said E3. E2 stated that having an evaluation metric from early on is important for benchmarking purposes to choose the best strategy while data scientists and domain experts are collaborating. He also noted that flexibility of the workflow—not forcing the user to use all parts of the VA system for every problem—is an extra benefit.
A
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and (v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these
(E𝐂,(2¯,(u2,[013])))superscript𝐸𝐂¯2subscript𝑢2delimited-[]013(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( over¯ start_ARG 2 end_ARG , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 013 ] ) ) ), (E𝐂,((u1,[112]),(u2,[010])))superscript𝐸𝐂subscript𝑢1delimited-[]112subscript𝑢2delimited-[]010(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( ( italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , [ 112 ] ) , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 010 ] ) ) ).
cannot be adjacent to 2¯¯2\overline{2}over¯ start_ARG 2 end_ARG nor 3¯¯3\overline{3}over¯ start_ARG 3 end_ARG, and so f′superscript𝑓′f^{\prime}italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is [013]delimited-[]013[013][ 013 ] or [010]delimited-[]010[010][ 010 ].
Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of (v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that
D
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML : Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transformer/CNN on each meta-testing task.
Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on plenty of tasks (i.e. small data sets) to get a parameter initialization which is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML is successfully employed in different NLP applications. Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020].
To answer RQ3, we conduct experiments on different data quantity and task similarity settings. We compare two baselines with MAML : Transformer/CNN, which pre-trains the base model (Transformer/CNN) on the meta-training set and evaluates directly on the meta-testing set, and Transformer/CNN-F, which fine-tunes Transformer/CNN on each meta-testing task.
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances on average in Persona and Weibo respectively. We train and evaluate Transformer-F and MAML on this setting. (Table 2). When tasks are similar to each other, MAML performs comparatively poorly. In Persona and Weibo, the performance of MAML is similar to that of Transformer-F, while MAML performs significantly better than Transformer-F when tasks are different. A possible explanation is that if there is no clear distinction between tasks, the meta-learning setting can be viewed as a transfer learning setting, which only has a source domain and a target domain, and fine-tuning performs well in transfer learning. So if the tasks are similar to each other, we can simply use Transformer-F rather than MAML.
Data Quantity. In Persona, we evaluate Transformer/CNN, Transformer/CNN-F and MAML on 3 data quantity settings: 50/100/120-shot (each task has 50, 100, 120 utterances on average). In Weibo, FewRel and Amazon, the settings are 500/1000/1500-shot, 3/4/5-shot and 3/4/5-shot respectively (Table 2). When the data quantity is small, the advantage of MAML is more significant. In Persona, the C Score and BLEU of MAML outperform baselines on 50-shot and 100-shot settings, but on 120-shot setting, the BLEU of MAML is lower than Transformer-F. In Weibo, FewRel and Amazon, the percentages that MAML outperforms the baselines by also decrease as the data quantity increasing. This finding is in line with the mechanism of MAML. MAML finds a sensitive parameter initialization that can adapt with few data samples [Finn et al., 2017].
D
As αisubscript𝛼𝑖\alpha_{i}italic_α start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and βjsubscript𝛽𝑗\beta_{j}italic_β start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT is the quantization of azimuth angle and elevation angle, respectively, the indexes of the optimal codewords ik*superscriptsubscript𝑖𝑘i_{k}^{*}italic_i start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT and jk*superscriptsubscript𝑗𝑘j_{k}^{*}italic_j start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT in the given layer of the codebook according to (42) are given by ik*=⌈αt,k⁢(t)B⁢Wa⌉superscriptsubscript𝑖𝑘subscript𝛼𝑡𝑘𝑡𝐵subscript𝑊𝑎i_{k}^{*}=\biggl{\lceil}{\frac{\alpha_{t,k}(t)}{BW_{a}}}\biggr{\rceil}italic_i start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = ⌈ divide start_ARG italic_α start_POSTSUBSCRIPT italic_t , italic_k end_POSTSUBSCRIPT ( italic_t ) end_ARG start_ARG italic_B italic_W start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT end_ARG ⌉, and jk*=⌈βt,k⁢(t)B⁢We⌉superscriptsubscript𝑗𝑘subscript𝛽𝑡𝑘𝑡𝐵subscript𝑊𝑒j_{k}^{*}=\biggl{\lceil}{\frac{\beta_{t,k}(t)}{BW_{e}}}\biggr{\rceil}italic_j start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = ⌈ divide start_ARG italic_β start_POSTSUBSCRIPT italic_t , italic_k end_POSTSUBSCRIPT ( italic_t ) end_ARG start_ARG italic_B italic_W start_POSTSUBSCRIPT italic_e end_POSTSUBSCRIPT end_ARG ⌉.
Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element can not be contained in different subarrays, then the problem of activated CCA subarray partition rises at the r-UAV side for the fast multi-UAV beam tracking. The dynamic CCA subarray partition can be considered as the dynamic antenna resource allocation for multiple t-UAVs, which has strong impact on the sum SE of the UAV mmWave network.
Figure 6: The subarray patterns on the cylinder and the corresponding expanded cylinder. (a) The t-UAV subarray partition pattern. (b) The r-UAV subarray partition pattern with conflict. (c) The r-UAV subarray partition pattern without conflict. (d) The t-UAV subarray partition pattern with beamwidth selection.
The t-UAV needs to select an appropriate codeword 𝒗⁢(i,j,𝒮)𝒗𝑖𝑗𝒮\boldsymbol{v}(i,j,\mathcal{S})bold_italic_v ( italic_i , italic_j , caligraphic_S ) from our proposed codebook 𝒱ksubscript𝒱𝑘\mathcal{V}_{k}caligraphic_V start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT to solve the subarray partition and AWV selection problem in (35). Note that after the codeword 𝒗⁢(i,j,𝒮)𝒗𝑖𝑗𝒮\boldsymbol{v}(i,j,\mathcal{S})bold_italic_v ( italic_i , italic_j , caligraphic_S ) is selected, the beam pattern and the subarray pattern are determined. Given AODs, the maximum size of the activated subarray should be selected and the quantization error between the AODs and the beam angles in the codeword should be minimized to maximize the beam gain of the beamforming vector of the k𝑘kitalic_k-th t-UAV. Therefore, the optimal codeword 𝒗⁢(ik*,jk*,𝒮⁢(ms,k*,ns,k*,𝒑c,k⁢(ik*)))𝒗superscriptsubscript𝑖𝑘superscriptsubscript𝑗𝑘𝒮superscriptsubscript𝑚𝑠𝑘superscriptsubscript𝑛𝑠𝑘subscript𝒑𝑐𝑘superscriptsubscript𝑖𝑘\boldsymbol{v}\left(i_{k}^{*},j_{k}^{*},\mathcal{S}\left(m_{s,k}^{*},n_{s,k}^{%
According to (20), the codeword 𝒗⁢(i,j,𝒮)𝒗𝑖𝑗𝒮\boldsymbol{v}(i,j,\mathcal{S})bold_italic_v ( italic_i , italic_j , caligraphic_S ) includes both the beam pattern information and the subarray pattern information. The beam pattern information mainly includes the beam angle (αi,βj)subscript𝛼𝑖subscript𝛽𝑗(\alpha_{i},\beta_{j})( italic_α start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_β start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) and the beam width determined by the size of 𝒮𝒮\mathcal{S}caligraphic_S; the subarray pattern information includes the subarray location and size determined by 𝒮𝒮\mathcal{S}caligraphic_S.
B
We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will also be used as the base cases in inductive constructions for the case with arbitrary colors.
We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will also be used as the base cases in inductive constructions for the case with arbitrary colors.
The requirement that M¯|N¯conditional¯𝑀¯𝑁\bar{M}|\bar{N}over¯ start_ARG italic_M end_ARG | over¯ start_ARG italic_N end_ARG is extra big enough ensures that we have enough edges to perform the edge swapping. This completes the proof for case 2 when the assumptions (a1) and (a2) hold.
This will be bootstrapped to the multi-color case in later sections. Note that the 1111-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the right – regardless of the matrix. We
To conclude this section, we stress that although the 1111-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
C
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear whether the attained solution is globally optimal. On the other hand, when the value function approximator in TD is an overparameterized multi-layer neural network, which is required to be properly scaled, such a feature representation stabilizes at the initial one (Cai et al., 2019), making the explicit local linearization in nonlinear gradient TD unnecessary. Moreover, the implicit local linearization enabled by overparameterization allows TD (and Q-learning) to converge to the globally optimal solution. However, such a required scaling, also known as the neural tangent kernel (NTK) regime (Jacot et al., 2018), effectively constrains the evolution of the induced feature presentation to an infinitesimal neighborhood of the initial one, which is not data-dependent.
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature representation is able to deviate from the initial one and subsequently evolve into the globally optimal one, which corresponds to the global minimizer of the MSPBE. We further extend our analysis to soft Q-learning, which is connected to policy gradient.
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear whether the attained solution is globally optimal. On the other hand, when the value function approximator in TD is an overparameterized multi-layer neural network, which is required to be properly scaled, such a feature representation stabilizes at the initial one (Cai et al., 2019), making the explicit local linearization in nonlinear gradient TD unnecessary. Moreover, the implicit local linearization enabled by overparameterization allows TD (and Q-learning) to converge to the globally optimal solution. However, such a required scaling, also known as the neural tangent kernel (NTK) regime (Jacot et al., 2018), effectively constrains the evolution of the induced feature presentation to an infinitesimal neighborhood of the initial one, which is not data-dependent.
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal.
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
A
In this paper, we replace residual connections of the Transformer with depth-wise LSTMs, to selectively manage the representation aggregation of layers benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-forward networks with the depth-wise LSTM for the Transformer.
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24242424 layers, and that the 12-layer Transformer using depth-wise LSTM already performs at the level of the 24-layer vanilla Transformer.
Notably, on the En-De task, the 12-layer Transformer with depth-wise LSTM already outperforms the 24-layer vanilla Transformer, suggesting efficient use of layer parameters. On the Cs-En task, the 12-layer model with depth-wise LSTM performs on a par with the 24-layer baseline. Unlike in the En-De task, increasing depth over the 12-layer Transformer can still achieve some BLEU improvements, with the 18-layer model resulting in the best performance. We conjecture that this is probably because the data set of the Cs-En task (∼similar-to\sim∼15151515M) is larger than that of the En-De task (∼similar-to\sim∼4.54.54.54.5M), and increasing the depth of the model for the Cs-En task also increases its number of parameters and capacity. For the En-De task, the 12-layer Transformer with depth-wise LSTM may already provide both sufficient complexity and capacity for the data set.
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections but using the concatenation of the input to the encoder/decoder layer with the output(s) of attention layer(s) as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transformer with the depth-wise RNN is able to converge, but its performance is much worse than the model with the depth-wise LSTM (and also much worse than the vanilla Transformer) with depth-wise LSTM outperforming the vanilla Transformer, suggesting the importance of the gating mechanisms of the depth-wise LSTM. The decoding speed of our baseline vanilla Transformer implementation (750.58750.58750.58750.58 sentences/s) is quite fast, and is 1.121.121.121.12 times as fast as the depth-wise LSTM approach, but our approach leads to a higher BLEU score than the baseline, and as shown in Table 6, our approach indeed requires fewer parameters and brings about faster decoding speed than the vanilla Transformer for a comparable BLEU score.
Our experiments with the 6-layer Transformer show that our approach using depth-wise LSTM can achieve significant BLEU improvements in both WMT news translation tasks and the very challenging OPUS-100 many-to-many multilingual translation task over baselines. Our deep Transformer experiments demonstrate that: 1) the depth-wise LSTM approach ensures that deep Transformers with up to 24242424 layers converge, 2) the 12-layer Transformer using depth-wise LSTM already performs on a par with the 24-layer vanilla Transformer, suggesting more efficient usage of per-layer parameters with our depth-wise LSTM approach than the baseline.
D
^{\circ}\!\left(X\right)\right\}caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( italic_Y ) ⊇ { italic_U ∩ italic_Y ∣ italic_U ∈ caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( italic_X ) }. Note that this stronger property is preserved
on ⟨⟦𝖥𝖮[σ]⟧𝒟≤2∩τ⊆i⟩\langle\llbracket\mathsf{FO}[\upsigma]\rrbracket_{\mathcal{D}_{\leq 2}}\cap% \uptau_{\subseteq_{i}}\rangle⟨ ⟦ sansserif_FO [ roman_σ ] ⟧ start_POSTSUBSCRIPT caligraphic_D start_POSTSUBSCRIPT ≤ 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT ∩ roman_τ start_POSTSUBSCRIPT ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT ⟩ is a pre-spectral space,
⟨Fin⁡(σ),τ≤,𝖥𝖮⁢[σ]⟩Finσsubscriptτ𝖥𝖮delimited-[]σ\left\langle\operatorname{Fin}(\upsigma),\uptau_{\leq},\mathsf{FO}[\upsigma]\right\rangle⟨ roman_Fin ( roman_σ ) , roman_τ start_POSTSUBSCRIPT ≤ end_POSTSUBSCRIPT , sansserif_FO [ roman_σ ] ⟩ is a lpps.
⟨𝒟≤2,τ⊆i,𝖥𝖮⁢[σ]⟩subscript𝒟absent2subscriptτsubscript𝑖𝖥𝖮delimited-[]σ\left\langle\mathcal{D}_{\leq 2},\uptau_{\subseteq_{i}},\mathsf{FO}[\upsigma]\right\rangle⟨ caligraphic_D start_POSTSUBSCRIPT ≤ 2 end_POSTSUBSCRIPT , roman_τ start_POSTSUBSCRIPT ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT , sansserif_FO [ roman_σ ] ⟩ is a lpps by Remark 3.5 and the fact that
⟨Struct⁡(σ),τ⊆i,𝖥𝖮⁢[σ]⟩Structσsubscriptτsubscript𝑖𝖥𝖮delimited-[]σ\left\langle\operatorname{Struct}(\upsigma),\uptau_{\subseteq_{i}},\mathsf{FO}% [\upsigma]\right\rangle⟨ roman_Struct ( roman_σ ) , roman_τ start_POSTSUBSCRIPT ⊆ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT , sansserif_FO [ roman_σ ] ⟩ is a lpps by Claim 2.2.
C
In the training stage, we crop each distorted image into four distortion elements and learn the parameters of the neural network using all data. Note that this training process is data-independent, where each part of the entire image is fed into the network one by one without the data correlation. In the test stage, we only need one distortion element, i.e., 1/4 of an image, to estimate the ordinal distortion. For a clear exhibition of our approach, we draw the detailed algorithm schemes of the training process and test process as listed in Algorithm 1 and Algorithm 2, respectively.
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion rectification on the test dataset including 2,000 distorted images. For the PSNR and SSIM, we compute these two metrics using the pixel difference between each rectified image and the ground truth image. For the MDLD, we first exploit the estimated distortion parameters to obtain all distortion levels of the test distorted image based on Eq. 5. Then, the value of MDLD can be calculated by the difference between estimated distortion levels and the ground truth distortion levels based on Eq. 21. Note that the generated-based methods such as Li [11] and Liao [12] directly learn the transformation manner of the pixel mapping instead of estimating the distortion parameters, so we only evaluate these two methods in terms of the PSNR and SSIM.
Evaluation Metrics: Crucially, evaluating the performance of different methods with reasonable metrics benefits experimental comparisons. In the distortion rectification problem, the corrected image can be evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the evaluation of the estimated distortion label, it is straightforward to employ the root mean square error (RMSE) between the estimated coefficients 𝒦^^𝒦\hat{\mathcal{K}}over^ start_ARG caligraphic_K end_ARG and ground truth coefficients 𝒦𝒦\mathcal{K}caligraphic_K:
In contrast to RMSE, MDLD is more suitable for parameter evaluation due to the uniqueness of the distortion distribution. Moreover, RMSE fails to evaluate the different numbers and attributes of estimated parameters for different camera models. Thanks to the objective description of the distortion, MDLD is capable of evaluating different distortion estimation methods using different camera models.
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene limitation and simple camera model assumption, showing more promising generality and flexibility. Compared with the learning distortion rectification methods [8][11][12], which omit the prior knowledge of the distortion, our approach transfers the heterogeneous estimation problem into a homogeneous one, eliminating the implicit relationship between image features and predicted values in a more explicit expression. As benefits of the effective ordinal supervision and guidance of distortion information during the learning process, our approach outperforms Liao [12] by a significant margin, with approximately 23% improvement on PSNR and 17% improvement on SSIM. Besides the high quality of the rectified image, our approach can obtain the accurate distortion parameters of a distorted image, which is crucial for the subsequent tasks such as the camera calibration. However, the generation-based methods [11][12] mainly focus on the pixel reconstruction of a rectified image and ignore the parameter estimation.
B
We use a pre-trained ViT 555https://huggingface.co/google/vit-base-patch16-224-in21k [4] model and fine-tune it on the CIFAR-10/CIFAR-100 datasets. The experiments are implemented based on the Transformers 666https://github.com/huggingface/transformers framework. We fine-tune the model with 20 epochs.
Many methods have been proposed for improving the performance of SGD with large batch sizes. The works in [7, 33] proposed several tricks, such as warm-up and learning rate scaling schemes, to bridge the generalization gap under large-batch training settings. Researchers in [11]
We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework. Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings.
Figure 2 shows the learning curves of the five methods. We can observe that in the small-batch training, SNGM and other large-batch training methods achieve similar performance in terms of training loss and test accuracy as MSGD. In large-batch training, SNGM achieves better training loss and test accuracy than the four baselines. Furthermore, it achieves faster convergence rates than LARS for the small and large batch sizes, which is consistent with our convergence analysis for the block-wise update strategy.
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different batch sizes.
B
When the algorithm terminates with Cs=∅subscript𝐶𝑠C_{s}=\emptysetitalic_C start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = ∅, Lemma 5.2 ensure the solution zfinalsuperscript𝑧finalz^{\text{final}}italic_z start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT is integral. By Lemma 5.5, any client j𝑗jitalic_j with d⁢(j,S)>9⁢Rj𝑑𝑗𝑆9subscript𝑅𝑗d(j,S)>9R_{j}italic_d ( italic_j , italic_S ) > 9 italic_R start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT must have j∈C0final𝑗subscriptsuperscript𝐶final0j\in C^{\text{final}}_{0}italic_j ∈ italic_C start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. Hence, ∑j:d⁢(j,S)>9⁢Rjvj≤∑j∈C0vjsubscript:𝑗𝑑𝑗𝑆9subscript𝑅𝑗subscript𝑣𝑗subscript𝑗subscript𝐶0subscript𝑣𝑗\sum_{j:d(j,S)>9R_{j}}v_{j}\leq\sum_{j\in C_{0}}v_{j}∑ start_POSTSUBSCRIPT italic_j : italic_d ( italic_j , italic_S ) > 9 italic_R start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ≤ ∑ start_POSTSUBSCRIPT italic_j ∈ italic_C start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT. For the facility costs, we have ∑i∈Swi=∑izifinal⁢wisubscript𝑖𝑆subscript𝑤𝑖subscript𝑖superscriptsubscript𝑧𝑖finalsubscript𝑤𝑖\sum_{i\in S}w_{i}=\sum_{i}z_{i}^{\text{final}}w_{i}∑ start_POSTSUBSCRIPT italic_i ∈ italic_S end_POSTSUBSCRIPT italic_w start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = ∑ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT italic_w start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT. Finally, by Lemma 5.3, and noting that Csfinal=∅superscriptsubscript𝐶𝑠finalC_{s}^{\text{final}}=\emptysetitalic_C start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT = ∅, we have ∑izifinal⁢wi+∑j∈C0vj≤Vsubscript𝑖superscriptsubscript𝑧𝑖finalsubscript𝑤𝑖subscript𝑗subscript𝐶0subscript𝑣𝑗𝑉\sum_{i}z_{i}^{\text{final}}w_{i}+\sum_{j\in C_{0}}v_{j}\leq V∑ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT italic_w start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT + ∑ start_POSTSUBSCRIPT italic_j ∈ italic_C start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ≤ italic_V.
Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awards CCF-1422569, CCF-1749864 and CCF-1918749, and by research awards from Adobe, Amazon, and Google.
        do FA←{ijA|j∈HA⁢ and ⁢FI∩GπI⁢j=∅}←subscript𝐹𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}italic_F start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ← { italic_i start_POSTSUPERSCRIPT italic_A end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT | italic_j ∈ italic_H start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and italic_F start_POSTSUBSCRIPT italic_I end_POSTSUBSCRIPT ∩ italic_G start_POSTSUBSCRIPT italic_π start_POSTSUPERSCRIPT italic_I end_POSTSUPERSCRIPT italic_j end_POSTSUBSCRIPT = ∅ }
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, ℱℱ\mathcal{F}caligraphic_F and 𝒞𝒞\mathcal{C}caligraphic_C correspond to such locations and the population affected by the outbreak, and needing services, respectively.
  FAs¯←{ijA|j∈HA⁢ and ⁢FI∩GπI⁢j=∅}←subscriptsuperscript𝐹¯𝑠𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{% \pi^{I}j}=\emptyset\}italic_F start_POSTSUPERSCRIPT over¯ start_ARG italic_s end_ARG end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ← { italic_i start_POSTSUPERSCRIPT italic_A end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT | italic_j ∈ italic_H start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and italic_F start_POSTSUBSCRIPT italic_I end_POSTSUBSCRIPT ∩ italic_G start_POSTSUBSCRIPT italic_π start_POSTSUPERSCRIPT italic_I end_POSTSUPERSCRIPT italic_j end_POSTSUBSCRIPT = ∅ }
A
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15], and additive and multiplicative communication noises may co-exist in communication links ([21]).
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independency with identical distribution, Markovian switching, or stationarity, etc. The edge weights are also not required to be nonnegative at every time instant. By introducing the concept of conditional digraphs and developing the stochastic Lyapunov method for distributed optimization over non-stationary randomly time-varying networks, uniformly conditionally joint connectivity condition is established to ensure the convergence of the distributed stochastic optimization algorithms.
We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions. We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then proper algorithm step sizes can be designed so that all nodes’ states converge to the global optimal solution almost surely.
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows.
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditional mean square error. This leads the nonnegative supermartingale convergence theorem not to be applied directly
C
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces the contradiction between privacy protection and data analysis [9]. For instance, a smaller ϵitalic-ϵ\epsilonitalic_ϵ for ϵitalic-ϵ\epsilonitalic_ϵ-differential privacy provides better protection but worse information utility.
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution in the original data. Second, the anonymization of MuCo is a “black box” process for recipients because the only difference between the original data and the anonymized data is that some original QI values are replaced with random values. Thus, the adversary cannot determine which QI values are altered as well as the ranges of variations, causing that the matching tuples are more likely to be wrong or even does not exist when the adversary uses more QI values to match, but the adversary obtains much more matching records if the size of the combination of QI values is not big enough. While for the recipient, the results of query statements are specific records rather than groups. Accordingly, the results are more accurate. The conducted extensive experiments also illustrate the effectiveness of the proposed method.
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics without violating the privacy. Inspired by local differential privacy, this paper uses the method of randomized response to perturb original QI values before release to prevent the disclosure of matching the combination of QI values.
Note that, the application scenarios of differential privacy and the models of k𝑘kitalic_k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k𝑘kitalic_k-anonymity family sanitize the original microdata and publish the anonymized version of microdata. Therefore, differential privacy is inapplicable to the scenario we addressed in this paper.
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces the contradiction between privacy protection and data analysis [9]. For instance, a smaller ϵitalic-ϵ\epsilonitalic_ϵ for ϵitalic-ϵ\epsilonitalic_ϵ-differential privacy provides better protection but worse information utility.
B
3D-FUTURE dataset is a recently public large-scale indoor dataset with 34 categories. Following the official splits, we adopt 12,144 images for training, 2,024 for validation and 6,072 for testing. From the size distribution of bounding boxes in 3D-FUTURE and COCO shown in Figure 1, the medium object size of 3D-FUTURE is about 250 while roughly 50 for COCO, indicating that 3D-FUTURE contains much more larger instances222Followed by 3D-FUTURE official setting, we refer area <113×113absent113113\textless 113\times 113< 113 × 113 for small, 113×113∼256×256similar-to113113256256113\times 113\sim 256\times 256113 × 113 ∼ 256 × 256 for medium, and >256×256absent256256\textgreater 256\times 256> 256 × 256 for large, compared to 32×32323232\times 3232 × 32 and 96×96969696\times 9696 × 96 defined in COCO.. This distribution divergence motivates us to explore fine-grained large object segmentation methods like PointRend.
Table 2: PointRend’s step-by-step performance on our own validation set (splitted from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to default P2-P5 levels of FPN for both coarse prediction head and fine-grained point head. “FP16” means mixed precision training.
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from original 14 to 26 for coarse prediction head, and from 14 to 24 for fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our splitted validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020). In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission.
Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhance the feature representation of coarse mask head and point head by increasing the number of fully-connected layers or its hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess that our PointRend baseline already achieves promising performance (77.38 mAP).
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared to HTC’s mask head, PointRend’s lightweight segmentation head alleviates both memory and computation costs dramatically, thus enables larger input image resolutions during training and testing, which further improves the segmentation quality. To fully understand which components contribute to PointRend’s performance, we construct our own validation set by randomly selecting 3000 images from original training data to evaluate offline. We will show the step-by-step improvements adopted on PointRend.
A
I⁢(f)<1,andH⁢(|f^|2)>nn+1⁢log⁡n.formulae-sequence𝐼𝑓1and𝐻superscript^𝑓2𝑛𝑛1𝑛I(f)<1,\ \ {\mbox{and}}\ \ H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.italic_I ( italic_f ) < 1 , and italic_H ( | over^ start_ARG italic_f end_ARG | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) > divide start_ARG italic_n end_ARG start_ARG italic_n + 1 end_ARG roman_log italic_n .
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known.
(0⁢log⁡0:=0assign0000\log 0:=00 roman_log 0 := 0). The base of the log\logroman_log does not really matter here. For concreteness we take the log\logroman_log to base 2222. Note that if f𝑓fitalic_f has L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT norm 1111 then the sequence {|f^⁢(A)|2}A⊆[n]subscriptsuperscript^𝑓𝐴2𝐴delimited-[]𝑛\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}{ | over^ start_ARG italic_f end_ARG ( italic_A ) | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT } start_POSTSUBSCRIPT italic_A ⊆ [ italic_n ] end_POSTSUBSCRIPT sums up to 1111 and thus this is the usual definition of entropy of this probability distribution.
C
The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogenous linear MDPs. Optimizing the length of each piecewise-stationary segment N𝑁Nitalic_N and the variation magnitude between consecutive segments (subject to the constraints of the total variation budget) leads to our lower bound. ∎
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs such that linear function approximation is sufficient to realize any value function. We first incorporated the epoch start strategy into LSVI-UCB algorithm (Jin et al., 2020) to propose the LSVI-UCB-Restart algorithm with low dynamic regret when the total variations are known. We then designed a parameter-free algorithm Ada-LSVI-UCB-Restart that enjoys a slightly worse dynamic regret bound without knowing the total variations. We derived a minimax regret lower bound for nonstationary linear MDPs to demonstrate that our proposed algorithms are near-optimal. Specifically, when the local variations are known, LSVI-UCB-Restart is near order-optimal except for the dependency on feature dimension d𝑑ditalic_d, planning horizon H𝐻Hitalic_H, and some poly-logarithmic factors. Numerical experiments demonstrates the effectiveness of our algorithms.
In this section, we describe our proposed algorithm LSVI-UCB-Restart, and discuss how to tune the hyper-parameters for cases when local variation is known or unknown. For both cases, we present their respective regret bounds. Detailed proofs are deferred to Appendix B. Note that our algorithms are all designed for inhomogeneous setting.
The rest of the paper is organized as follows. Section 2 presents our problem definition. Section 3 establishes the minimax regret lower bound for nonstationary linear MDPs. Section 4 and Section 5 present our algorithms LSVI-UCB-Restart, Ada-LSVI-UCB-Restart and their dynamic regret bounds. Section 6 shows our experiment results. Section 7 concludes the paper and discusses some future directions. All detailed proofs can be found in Appendices.
In this section, we derive minimax regret lower bounds for nonstationary linear MDPs in both inhomogeneous and homogeneous settings, which quantify the fundamental difficulty when measured by the dynamic regret in nonstationary linear MDPs. More specifically, we consider inhomogeneous setting in this paper, where the transition function Phksuperscriptsubscript𝑃ℎ𝑘P_{h}^{k}italic_P start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT (as introduced in Section 1) can be different for different hℎhitalic_h. In contrast, for the homogeneous setting, the transition function Phksuperscriptsubscript𝑃ℎ𝑘P_{h}^{k}italic_P start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT will be the same within an episode, i.e., for any k𝑘kitalic_k, Phk≡Pksuperscriptsubscript𝑃ℎ𝑘superscript𝑃𝑘P_{h}^{k}\equiv P^{k}italic_P start_POSTSUBSCRIPT italic_h end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT ≡ italic_P start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT for any h={1,…,H}ℎ1…𝐻h=\{1,\ldots,H\}italic_h = { 1 , … , italic_H }. All of the detailed proofs for this section are in Appendix A.
B
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on instant messaging apps compared to social media, and have reported the least trust in them. They have also rated the sharing of fake news to be a greater problem than its creation. These suggest that, in Singapore, communication with personal contacts such as through the forwarding of messages, rather than with the public such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one’s social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013), and stronger preferences towards group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms, and post corrections and warnings when they encounter fake news. That respondents show strong trust and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts in making transparent government decisions and the readiness of the government in addressing public concerns through online forums and dialogues (REACH, [n.d.]). There is opportunity here for the government to launch programs such as campaigns, call-to-actions and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions.
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on instant messaging apps compared to social media, and have reported the least trust in them. They have also rated the sharing of fake news to be a greater problem than its creation. These suggest that, in Singapore, communication with personal contacts such as through the forwarding of messages, rather than with the public such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one’s social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013), and stronger preferences towards group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures  1 and  2) which is statistically significant (r⁢(9)=−0.81𝑟90.81r(9)=-0.81italic_r ( 9 ) = - 0.81, p<.005𝑝.005p<.005italic_p < .005). Trust is built on transparency and truthfulness, and the presence of fake news, which is deceptive and usually meant to serve hidden agendas, may erode trust. It is worthwhile to consider whether the trust in media items is due to people’s own encounters with fake news, or because of secondary factors. In Singapore, there have been active efforts through campaigns from various organizations (e.g., S.U.R.E. (Board, [n.d.]), Better Internet (Council, [n.d.]), VacciNationSG (Lai, 2021)) to raise awareness on misinformation, disinformation and fake news. If it is through the exposure to the messages of these campaigns that people’s trust in media items have been influenced, especially those who might not have personally encountered fake news, this suggests the importance of media literacy education in addressing fake news, particularly when secondary effects such as practicing greater caution due to a lack of trust comes into play.
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this to be of concern to the CSCW community. As global efforts addressing fake news take off, we aim to understand what the perceptions and practices of news sharing and fake news are in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
C
README.md exists but content is empty.
Downloads last month
43