Columns: context (250–5.97k chars), A (250–8.2k), B (250–3.83k), C (250–5.02k), D (250–5.14k), label (4 classes).
The generic third order Newton’s Method—also known as Halley’s method—to compute roots $f(x)=0$ numerically improves solutions $x_i \rightarrow x_{i+1} = x_i + \Delta x$ iteratively, starting
$f(x+\Delta x) \approx f(x) + \Delta x\, f^{\prime}(x) + \frac{(\Delta x)^{2}}{2!}f^{\prime\prime}(x) + \frac{(\Delta x)^{3}}{3!}f^{\prime\prime\prime}(x) \approx 0.$
$\Delta x = -\frac{f(x)}{f^{\prime}(x)} \Big/ \left(1 - \frac{f(x)}{2f^{\prime}(x)}\,\frac{f^{\prime\prime}(x)}{f^{\prime}(x)}\right)$
$\Delta x = -\frac{f(x)}{f^{\prime}(x)} \Big/ \left[1 + \frac{1}{2h_{2}(x)}\,\frac{f(x)}{f^{\prime}(x)}\left(h_{0}(x)\frac{f(x)}{f^{\prime}(x)} + h_{1}(x)\right)\right].$
$1 + \frac{\Delta x}{2}\,\frac{f^{\prime\prime}(x)}{f^{\prime}(x)} + \frac{(\Delta x)^{2}}{6}\,\frac{f^{\prime\prime\prime}(x)}{f^{\prime}(x)} \approx 0,$
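As a concrete illustration of the update rule above, the following is a minimal Python sketch of the Halley iteration; the helper name, tolerance, and the cubic test function are illustrative assumptions, not taken from the text.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Iterate x_{i+1} = x_i + dx with dx = -(f/f') / (1 - f/(2f') * f''/f')."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        dx = -(fx / dfx) / (1.0 - (fx / (2.0 * dfx)) * (d2fx / dfx))
        x += dx
        if abs(dx) < tol:
            break
    return x

# Example (illustrative): cube root of 2 via f(x) = x^3 - 2.
root = halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, x0=1.0)
```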
B
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and applications in the last 10-15 years is the Leedham-Green and O’Brien standard generating set, in the following called the LGO generating set. These generators are defined for all classical groups in odd characteristic in [11] and even characteristic in [10].
One important task in this context is writing elements of classical groups as words in standard generators using SLPs. This is done in Magma [14] using the results of Elliot Costi [6] and in GAP using the results of this paper, see Section 6. Other rewriting algorithms also exist; for example, Cohen et al. [26] present algorithms to compute with elements of finite Lie groups.
Note that a small variation of these standard generators for SL(d,q) is used in Magma [14] as well as in algorithms to verify presentations of classical groups, see [12], where only the generator $v$ is slightly different in the two scenarios when $d$ is even.
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and applications in the last 10-15 years is the Leedham-Green and O’Brien standard generating set, in the following called the LGO generating set. These generators are defined for all classical groups in odd characteristic in [11] and even characteristic in [10].
The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in the LGO generators. Moreover, the LGO generators can be used directly to verify presentations of classical groups [12].
D
To show the existence and uniqueness of solutions for (21), we proceed by parts. The existence of a solution for the first equation follows from Lemma LABEL:l:lrmsystem. Solving the second equation is equivalent to (22), and such a system is well-posed due to the coercivity of $(\cdot,T\cdot)_{\partial\mathcal{T}_{H}}$ on $\tilde{\Lambda}_{h}^{f}$; see [AHPV, HMV] and [MR1802366, MR1921914, MR2104179]. The same arguments hold for the third equation of (21), rewritten in (24). Another way to see this is to consider (25) with zero right-hand side. From the coercivity of $(\cdot,T\cdot)_{\partial\mathcal{T}_{H}}$ on $\widetilde{\Lambda}$ we have $(I-PT)\tilde{\lambda}^{0}_{h}=0$. But since $\widetilde{\Lambda}_{h}^{0}\cap\tilde{\Lambda}_{h}^{f}=\{0\}$, it follows that $\tilde{\lambda}^{0}_{h}=0$. Finally, the fourth equation of (21) is again finite dimensional, and if $(\mu^{0},u^{0}_{h})_{\partial\mathcal{T}_{H}}=0$ for all $\mu^{0}\in\Lambda^{0}$, then, from Lemma LABEL:l:lrmsystem, $u^{0}_{h}=0$.
It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the idea of performing global static condensation goes back to the Variational Multiscale Finite Element Method (VMS) [MR1660141, MR2300286]. Recently variations of the VMS
Above, and in what follows, $c$ denotes an arbitrary constant that does not depend on $H$, $\mathscr{H}$, $h$, or $\mathcal{A}$, depending only on the shape regularity of the elements of $\mathcal{T}_{H}$.
Except for (ii), all steps above can be performed efficiently as the matrices involved are sparse and either local or independent of $h$. Solving (25), on the other hand, involves computing the $h$-dependent, global operator $P$, leading to a dense matrix in (25). From now on, we concentrate on approximating $P$ so that (25) can be accurately and efficiently solved.
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That allows replacing $P$ by a semi-local operator $P^{j}$. That works fine for low-contrast coefficients and is the subject of Section 3.2. For high-contrast coefficients however, the exponential decay rate is smaller, and to circumvent that we consider in Section 3.1 a spectral decomposition of $\tilde{\Lambda}_{h}^{f}$.
C
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and to floating-point issues in both programs. Our implementations of Alg-K and Alg-CM have logical differences in handling degenerate cases.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and to floating-point issues in both programs. Our implementations of Alg-K and Alg-CM have logical differences in handling degenerate cases.
Moreover, Alg-A is more stable than the alternatives. During the iterations of Alg-CM, the coordinates of three corners and two midpoints of a P-stable triangle (see Figure 37) are maintained. These coordinates are computed numerically, and their true values can differ from the values stored in the computer. Alg-CM uses an involved subroutine (far more complicated than ours given in Algorithm 1) to update the coordinates in each iteration, which accumulates the inaccuracy of the coordinates. Even worse, this subroutine computes three angles and selects the smallest to decide how to proceed each time, and due to floating-point issues it is possible to select a wrong angle when the angles are close, which causes the subroutine to perform incorrectly.
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases.
C
The processing pipeline of our classification approach is shown in Figure 2. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (CreditScore).
In the lower part of the pipeline, we extract features from tweets and combine them with the CreditScore to construct the feature vector in a time series structure called the Dynamic Series-Time Model. These feature vectors are used to train the classifier for rumor vs. (non-rumor) news classification.
Most relevant for our work is the work presented in [20], where a time series model is used to capture the time-based variation of social-content features. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analysis of how a wide range of features change during diffusion time. Ma et al. [19] used Recurrent Neural Networks for rumor detection; they batch tweets into time intervals and model the time series as an RNN sequence. Without any other handcrafted features, they obtained almost 90% accuracy for events reported on Snopes.com. As with all other deep learning models, the learning process is a black box, so we cannot explain the cause of the good performance based only on content features. The model performance is also dependent on the tweet retrieval mechanism, whose quality is uncertain for stream-based trending sub-events.
The processing pipeline of our classification approach is shown in Figure 2. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline, we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (CreditScore).
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on the time series approach and train the classifier with features from different high-level contexts (i.e., users, Twitter and propagation) in a cascaded manner. In this section, we first detail the employed Dynamic Series-Time Structure, then describe the high- and low-level ensemble features used for learning in this pipeline step.
A
$\lim_{u\rightarrow\infty}\ell(u)=\lim_{u\rightarrow\infty}\ell^{\prime}(u)=0$), a $\beta$-smooth function, i.e. its derivative is $\beta$-Lipschitz, and $\limsup_{u\rightarrow-\infty}\ell^{\prime}(u)<0$.
The follow-up paper (Gunasekar et al., 2018) studied this same problem with the exponential loss instead of the squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the results for exponential loss are independent of initialization and require only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects, thereby making the analysis simpler compared to squared loss.
Assumption 1 includes many common loss functions, including the logistic, exp-loss (the exp-loss does not have a global $\beta$-smoothness parameter; however, if we initialize with $\eta<1/\mathcal{L}(\mathbf{w}(0))$ then it is straightforward to show that the gradient descent iterates maintain bounded local smoothness) and probit losses. Assumption 1 implies
decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail part does not affect the bias. The bias is also independent of the step-size
loss function (Assumption 1) with an exponential tail (Assumption 3), any step size $\eta<2\beta^{-1}\sigma_{\max}^{-2}(\mathbf{X})$
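As a rough numerical illustration of this step-size condition, the admissible bound can be computed directly from the data matrix; the matrix and the smoothness constant below are made-up values, not from the paper.

```python
import numpy as np

X = np.random.randn(100, 20)        # hypothetical data matrix
beta = 0.25                         # assumed smoothness constant of the loss
sigma_max = np.linalg.norm(X, 2)    # largest singular value of X
eta_bound = 2.0 / (beta * sigma_max ** 2)   # step sizes below this bound satisfy the condition
```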
B
The time period of a rumor event is sometimes fuzzy and hard to define. One reason is that a rumor may have been triggered a long time ago and kept existing without attracting public attention. It can, however, be re-triggered by other events after an uncertain time and suddenly spread as a bursty event. For example, a rumor (http://www.snopes.com/robert-byrd-kkk-photo/) claimed that Robert Byrd was a member of the KKK. This rumor had been circulating on Twitter for a while; as shown in Figure 7(a), almost every day there were several tweets talking about it. But the rumor was re-triggered by a picture of Robert Byrd kissing Hillary Clinton in 2016 (http://www.snopes.com/clinton-byrd-photo-klan/), and Twitter users suddenly noticed it and it spread burstily. In this work, what we are really interested in is the tweets posted in the hours around the bursty peak. We define the hour with the highest tweet volume as $t_{max}$. Since we want to detect the rumor event as soon as possible before its burst, we define the time of the first tweet within 48 hours before $t_{max}$ as the beginning of this rumor event, marked as $t_{0}$, and the end time of the event as $t_{end}=t_{0}+48$. We show the tweet volumes of the above rumor example in Figure 7(b).
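A minimal sketch of how the event window described above could be extracted from tweet timestamps; the hourly binning and variable names are assumptions made for illustration.

```python
from collections import Counter
from datetime import timedelta

def event_window(tweet_times):
    """tweet_times: list of datetime objects for one rumor.
    Returns (t0, t_end): t0 is the first tweet within 48 hours before the
    peak hour t_max, and t_end = t0 + 48 hours."""
    hours = Counter(t.replace(minute=0, second=0, microsecond=0) for t in tweet_times)
    t_max = max(hours, key=hours.get)          # hour with the highest tweet volume
    window = [t for t in sorted(tweet_times)
              if t_max - timedelta(hours=48) <= t <= t_max]
    t0 = window[0]                             # first tweet within 48h before the peak
    return t0, t0 + timedelta(hours=48)
```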
Given a tweet, our task is to classify whether it is associated with either news or a rumor. Most of the previous work (castillo2011information; gupta2014tweetcred) on the tweet level only aims to measure trustworthiness based on human judgment (note that even if a tweet is trusted, it could anyway relate to a rumor). Our task is, to a point, a reverse-engineering task: to measure the probability that a tweet refers to a news or rumor event, which is even trickier. We hence consider this a weak learning process. Inspired by (zhou2015c), we combine CNN and RNN into a unified model for tweet representation and classification. The model utilizes a CNN to extract a sequence of higher-level phrase representations, which are fed into a long short-term memory (LSTM) RNN to obtain the tweet representation. This model, called CNN+RNN henceforth, is able to capture both local features of phrases (by the CNN) as well as global and temporal tweet semantics (by the LSTM).
For this task, we developed two kinds of classification models: a traditional classifier with handcrafted features and neural networks without tweet embeddings. For the former, we used 27 distinct surface-level features extracted from single tweets (analogously to the Twitter-based features presented in Section 3.2). For the latter, we select the baselines from state-of-the-art text classification models, i.e., Basic tanh-RNN (madetecting), 1-layer GRU-RNN (madetecting), 1-layer LSTM (madetecting), 2-layer GRU-RNN (madetecting), FastText (joulin2016bag) and the CNN+LSTM (zhou2015c) model. The hybrid CNN+LSTM model is adapted in our work for tweet classification.
the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which can capture more hidden meaningful signals than only enquiries to debunk rumors. (madetecting) also uses RNNs for rumor debunking. However, in their work, the RNN is used at the event level. The classification leverages only the deep data representations of the aggregated tweet contents of the whole event, while ignoring other features, such as user-based and propagation features, that are effective at later stages. Although tweet contents are the only reliable source of clues at the early stage, they are also likely to carry doubtful perspectives and different stances at this specific moment. In addition, they could relate to rumorous sub-events (see, e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution; however, detecting sub-events in real time over the Twitter stream is a challenging task (meladianos2015degeneracy), which increases latency and complexity. In this work, we address this issue by deep neural modeling only at the single tweet level. Our intuition is to leverage the “wisdom of the crowd” theory: even if a certain portion of tweets at a moment (mostly at the early stage) are weakly predicted (because of these noisy factors), their ensemble contributes to a stronger prediction.
We consider two types of Ensemble Features: features accumulating crowd wisdom and an averaging feature for the tweet CreditScores. The former are extracted at the surface level while the latter comes from the low-dimensional level of tweet embeddings, which in a way augments the sparse crowd at the early stage.
B
Evaluating methodology. For RQ1, given an event entity e at time t, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 instances of the anticipated class, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances, respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials with each of the last 4 bins used for testing (using the earlier bins for training on a rolling basis), and report the results as the average of the trials.
Results. The baseline and the best results of our first-stage event-type classification are shown in Table 3 (top). The accuracy of the basic majority vote is high for imbalanced classes, yet its weighted F1 is lower. Our learned model achieves a marginally better result on the F1 metric.
RQ3. We demonstrate the results of the single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall, improving over the baseline, yet not significantly. Our Ensemble model, which is learned to trade off between salience and timeliness, achieves the best results for all metrics and outperforms the baseline significantly. As the testing entity queries in this experiment are at all event times and of all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of the adapted QAC methods. One reason is, as mentioned, that QACs, even time-aware ones, generally favor already salient queries, following the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes like entities often encounter sparse query-volume traffic, which also contributes to the low results.
RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The rightmost three models for each metric are the models proposed in this work. The overall results show that the performances of these models, even when better than the baselines (for at least one of the three), vary greatly among the cases. In general, $SVM_{salience}$ performs well at the before stage of breaking events, and badly at the after stage of the same event type, whereas $SVM_{timeliness}$ gives a contradictory performance for these cases. For anticipated events, $SVM_{timeliness}$ performs well at the before and after stages, but gives a rather low performance at the during stage. For this event type, $SVM_{salience}$ generally performs worse than $SVM_{timeliness}$. Overall, $SVM_{all}$ with all features combined gives a good and stable performance, but in most cases is not better than the best-performing L2R model with a single feature set. In general, these results support our assumption that salience and timeliness should be traded off for different event types at different event times. For feature importance, we regularly observe stable performance of same-group features across these cases. Salience features from knowledge bases tend to perform better than those from query logs for short-duration or less popular events. We leave a more in-depth analysis of this part for future work.
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather them from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with a non-cascaded logistic regression. The results are shown in Table 3 (bottom), showing that our cascaded model, with features inherited from the performance of the SVM in the previous task, substantially improves over the single model. However, the overall modest results show the difficulty of this multi-class classification task.
A
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains, from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
SMC weights are updated based on the likelihood of the observed rewards: $w_{t,a}^{(m)}\propto p_{a}(y_{t}|x_{t},\theta_{t,a}^{(m)})$ —Step (9.c) in Algorithm 1; and
the fundamental operation in the proposed SMC-based MAB Algorithm 1 is to sequentially update the random measure $p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})$
The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits. The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making.
we propagate forward the sequential random measure $p_{M}(\theta_{t,a}|\mathcal{H}_{1:t})$ by drawing new samples from the transition density, conditioned on resampled particles, i.e.,
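The following is a schematic Python sketch of the resample–propagate–reweight cycle described above for a single arm with a scalar parameter; the Gaussian random-walk transition and linear-Gaussian reward likelihood are simplified placeholders, not the paper's actual densities.

```python
import numpy as np

def smc_step(particles, weights, x_t, y_t, sigma_reward=1.0, sigma_trans=0.1):
    """One SMC update of the random measure p_M(theta_{t,a} | H_{1:t})."""
    M = len(particles)
    # Resample particles according to the current weights.
    idx = np.random.choice(M, size=M, p=weights)
    particles = particles[idx]
    # Propagate through an (assumed) Gaussian random-walk transition density.
    particles = particles + sigma_trans * np.random.randn(M)
    # Reweight by the likelihood of the observed reward: w proportional to p_a(y_t | x_t, theta).
    loglik = -0.5 * ((y_t - particles * x_t) / sigma_reward) ** 2
    weights = np.exp(loglik - loglik.max())
    return particles, weights / weights.sum()
```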
C
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal. Overall, patients measure blood glucose within 10 minutes before meals most of the time – for more than 2/3 of the meals for most patients.
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal. Overall, patients measure blood glucose within 10 minutes before meals most of the time – for more than 2/3 of the meals for most patients.
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients. For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only two glucose measurements per day on average and measured glucose within 4 hours or less after a meal only 5 out of 54 times.
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day. In terms of physical activity, we measure the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
C
Weight values from the ASPP module and decoder were initialized according to the Xavier method by Glorot and Bengio (2010). It specifies parameter values as samples drawn from a uniform distribution with zero mean and a variance depending on the total number of incoming and outgoing connections. Such initialization schemes are demonstrably important for training deep neural networks successfully from scratch Sutskever et al. (2013). The encoding layers were based on the VGG16 architecture pre-trained on both ImageNet Deng et al. (2009) and Places2 Zhou et al. (2017) data towards object and scene classification respectively.
Various measures are used in the literature and by benchmarks to evaluate the performance of fixation models. In practice, results are typically reported for all of them to include different notions about saliency and allow a fair comparison of model predictions Kümmerer et al. (2018); Riche et al. (2013). A set of nine metrics is commonly selected: Kullback-Leibler divergence (KLD), Pearson’s correlation coefficient (CC), histogram intersection (SIM), Earth Mover’s distance (EMD), information gain (IG), normalized scanpath saliency (NSS), and three variants of area under ROC curve (AUC-Judd, AUC-Borji, shuffled AUC). The former four are location-based metrics, which require ground truth maps as binary fixation matrices. By contrast, the remaining metrics quantify saliency approximations after convolving gaze locations with a Gaussian kernel and representing the target output as a probability distribution. We refer readers to an overview by Bylinskii et al. (2018) for more information regarding the implementation details and properties of the stated measures.
To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that resulted in 1,280 activation maps. This representation was then forwarded to a $1\times 1$ convolutional layer with 256 channels. While the total number of feature maps stayed constant, the number of trainable parameters increased in this ablation setting. Table 6 summarizes the results according to validation instances of five eye tracking datasets for the model with and without an ASPP module. It can be seen that our multi-scale architecture reached significantly higher performance (one-tailed paired t-test) on most metrics and is therefore able to leverage the information captured by convolutional layers with different receptive field sizes. An ablation analysis of the multi-level component adapted from Cornia et al. (2016) can be viewed in Appendix A.
Table 2 demonstrates that we obtained state-of-the-art scores for the CAT2000 test dataset regarding the AUC-J, sAUC, and KLD evaluation metrics, and competitive results on the remaining measures. The cumulative rank (as computed above) suggests that our model outperformed all previous approaches, including the ones based on a pre-trained VGG16 classification network (Cornia et al., 2018; Kruthiventi et al., 2017). Our final evaluation results for both the MIT300 and CAT2000 datasets can be viewed on the MIT saliency benchmark under the model name MSI-Net, representing our multi-scale information network. Qualitatively, the proposed architecture successfully captures semantically meaningful image features such as faces and text towards the prediction of saliency, as can be seen in Figure 1. Unfortunately, a visual comparison with the results from prior work was not possible since most models are not openly available.
We normalized the model output such that all values are non-negative with unit sum. The estimation of saliency maps can hence be regarded as a probability distribution prediction task as formulated by Jetley et al. (2016). To determine the difference between an estimated and a target distribution, the Kullback-Leibler (KL) divergence is an appropriate measure rooted in information theory to quantify the statistical distance $D$. This can be defined as follows:
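A small sketch of the normalization and of the KL divergence between a predicted and a target saliency map; the epsilon regularizer is an assumption added for numerical stability.

```python
import numpy as np

def kl_divergence(pred, target, eps=1e-7):
    """KL divergence D(Q || P) between the target fixation distribution Q and
    the predicted map P, both normalized to be non-negative with unit sum."""
    p = pred / (pred.sum() + eps)
    q = target / (target.sum() + eps)
    return np.sum(q * np.log(eps + q / (p + eps)))
```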
D
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21, 30]) to MinCutwidth, and yields new results for MinCutwidth.
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into graphs. The main difference is that the reduction from Section 4 turns every symbol from the alphabet into an individual vertex of the graph (thus producing a graph with $O(|\Sigma|)$ vertices), while the reduction to pathwidth will use a vertex per position of the word $\alpha$, i.e., $|\alpha|$ individual vertices. In the reduction from Section 4 the information of the actual occurrences of the symbols in the word is encoded by the edges (in particular, the length $|\alpha|$ is represented by the number of edges), while in the following reduction the alphabet is encoded by connecting the vertices that correspond to positions of the same symbol to cliques in the graph (in particular, the number of edges may range between $|\alpha|$ and $|\alpha|^{2}$). We proceed with a formal definition and an example.
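A minimal sketch, using networkx, of the graph construction just described: one vertex per position of the word and, for every symbol, a clique over the positions where it occurs; the function name is illustrative.

```python
from itertools import combinations
import networkx as nx

def word_to_position_graph(alpha):
    """One vertex per position of alpha; positions of equal symbols form cliques."""
    g = nx.Graph()
    g.add_nodes_from(range(len(alpha)))
    occurrences = {}
    for i, symbol in enumerate(alpha):
        occurrences.setdefault(symbol, []).append(i)
    for positions in occurrences.values():
        g.add_edges_from(combinations(positions, 2))  # clique over this symbol's positions
    return g

# Example: word_to_position_graph("abab") has the edges {0,2} and {1,3}.
```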
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21, 30]) to MinCutwidth, and yields new results for MinCutwidth.
One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed graph's pathwidth ranges between $\operatorname{loc}(\alpha)$ and $2\operatorname{loc}(\alpha)$, and therefore the reduction cannot be used to solve MinLoc exactly. The main purpose of this reduction is to carry over approximation results from MinPathwidth to MinLoc (also recall that exact and fpt-algorithms for MinLoc are obtained in Section 4 via a reduction to MinCutwidth). Hence, in this section we are mainly concerned with approximation algorithms.
Pathwidth and cutwidth are classical graph parameters that play an important role for graph algorithms, independent from our application for computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate step into a direct reduction from MinCutwidth to MinPathwidth. Such a reduction is of course implicitly hidden in the reductions of Sections 4.1 and 5.2, but we believe that explaining the connection in a more explicit way will be helpful for researchers that are mainly interested in the graph parameters cutwidth and pathwidth.
D
This model, compared with vanilla conv-deconv and u-net, performs better by an average of 5% in terms of Dice. Patravali et al. [140] trained a model based on u-net using Dice combined with cross entropy as a metric for LV/RV and myocardium segmentation.
The model was designed to accept a stack of image slices as input channels and the output is predicted for the middle slice. Based on experiments they conducted, it was concluded that three input slices were optimal as an input for the model, instead of one or five.
Autoencoders (AEs) are neural networks that are trained with the objective to copy the input $x$ to the output in such a way that they encode useful properties of the data. An AE usually consists of an encoding part that downsamples the input down to a linear feature and a decoding part that upsamples to the original dimensions.
A common AE architecture is the Stacked Denoising AE (SDAE), whose objective is to reconstruct the clean input from an artificially corrupted version of the input [20], which prevents the model from learning trivial solutions. Another AE-like architecture is u-net [4], which is of special interest to the biomedical community since it was first applied to segmentation of biomedical images.
Another three models were trained using the signals as 1D. The first model was an FNN with dropout, the second a three-layer 1D CNN, and the third a 2D CNN, the same as the first but trained with a stacked version of the signal (also trained with data augmentation).
A
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, and PPO (Schulman et al., 2017), a model-free policy gradient algorithm (see Appendix E for details of tuning of Rainbow and PPO). The results of the comparison are presented in Figure 3. For each game, we plot the number of time steps needed for either Rainbow or PPO to reach the same score that our method reaches after 100K interaction steps. The red line indicates 100K steps: any bar larger than this indicates a game where the model-free method required more steps. SimPLe outperforms the model-free algorithms in terms of learning speed on nearly all of the games, and in the case of a few games, does so by over an order of magnitude. For some games, it reaches the same performance that our PPO implementation reaches at 10M steps. This indicates that model-based reinforcement learning provides an effective approach to learning Atari games, at a fraction of the sample complexity.
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods. This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an important direction for future work. Another, less obvious limitation is that the performance of our method generally varied substantially between different runs on the same game. The complex interactions between the model, policy, and data collection were likely responsible for this. In future work, models that capture uncertainty via Bayesian parameter posteriors or ensembles (Kurutach et al., 2018; Chua et al., 2018) may improve robustness.
The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good policies could be learned very early. While this might have been due to the high variability of training, it does suggest the possibility of much faster training (i.e., in fewer steps than 100K) with more directed exploration policies. In Figure 9 in the Appendix we present the cumulative distribution plot for the (first) point during learning when the maximum score for the run was achieved in the main training loop of Algorithm 1.
Figure 1: Main loop of SimPLe. 1) the agent starts interacting with the real environment following the latest policy (initialized to random). 2) the collected observations will be used to train (update) the current world model. 3) the agent updates the policy by acting inside the world model. The new policy will be evaluated to measure the performance of the agent as well as collecting more data (back to 1). Note that world model training is self-supervised for the observed states and supervised for the reward.
The results in these figures are generated by averaging 5 runs for each game. The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (as reported in Table 3 of Pohlen et al. (2018)). This suggests that further stabilizing SimPLe should improve its performance, indicating an important direction for future work. In some cases during training we observed high variance of the results during each step of the loop. There are a number of possible reasons, such as mutual interactions of the policy training and the supervised training or domain mismatch between the model and the real environment. We present detailed numerical results, including best scores and standard deviations, in Appendix D.
D
This is achieved with the use of multilayer networks, consisting of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for diagnosis and prediction problems.
One common approach that previous studies have used for classifying EEG signals was feature extraction from the frequency and time-frequency domains utilizing the theory behind EEG band frequencies [8]: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–20 Hz) and gamma (20–64 Hz). Truong et al. [9] used Short-Time Fourier Transform (STFT) on a 30 second sliding window to train a three layer CNN on stacked time-frequency representations for seizure prediction and evaluated their method on three EEG databases.
This is achieved with the use of multilayer networks, consisting of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for diagnosis and prediction problems.
For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 samples to convert the $x_{i}$ into the time-frequency domain. The resulting spectrogram, which represents the magnitude of the power spectral density ($V^{2}/Hz$) of $x_{i}$, was then upsampled to $178\times 178$ using bilinear pixel interpolation.
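A sketch of this spectrogram computation with the stated parameters, using scipy and bilinear upsampling; the sampling rate argument and the zoom-based interpolation are assumptions, not details given in the text.

```python
from scipy.signal import spectrogram
from scipy.ndimage import zoom

def eeg_spectrogram(x_i, fs):
    """Time-frequency image for one EEG segment x_i (1D array).
    fs is the sampling rate of the recording (not stated in the passage)."""
    f, t, sxx = spectrogram(x_i, fs=fs, window=('tukey', 0.25),
                            nperseg=8, noverlap=4, nfft=64)
    # Bilinear (order=1) upsampling of the power spectral density to 178 x 178.
    return zoom(sxx, (178 / sxx.shape[0], 178 / sxx.shape[1]), order=1)
```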
Zhang et al. [11] trained an ensemble of CNNs containing two to ten layers using STFT features extracted from EEG band frequencies for mental workload classification. Giri et al. [12] extracted statistical and information measures from the frequency domain to train a 1D CNN with two layers to identify ischemic stroke.
A
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To assure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constraints: initial and final position, velocity, and acceleration [23]. The Reflexxes Motion Library IV [24] was utilized to perform the inverse kinematics calculation.
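A minimal sketch of fitting a fifth-order polynomial to the six boundary constraints mentioned above (initial and final position, velocity, and acceleration); this is only an illustration of the constraint system, not the Reflexxes library interface.

```python
import numpy as np

def quintic_coeffs(q0, v0, a0, qf, vf, af, T):
    """Coefficients c of q(t) = sum_k c_k * t**k satisfying the six boundary constraints."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],        # q(0)   = q0
        [0, 1, 0,    0,       0,        0],        # q'(0)  = v0
        [0, 0, 2,    0,       0,        0],        # q''(0) = a0
        [1, T, T**2, T**3,    T**4,     T**5],     # q(T)   = qf
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],   # q'(T)  = vf
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],  # q''(T) = af
    ])
    return np.linalg.solve(A, np.array([q0, v0, a0, qf, vf, af]))

# Example: a joint moving from 0 to 1 rad in 2 s, starting and ending at rest.
c = quintic_coeffs(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, T=2.0)
```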
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step.
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established based on the whole body climbing gait at height h, as shown in Fig. 8, or the rear body climbing gait at height h, as seen in Fig. 9. The blue line illustrates the total energy consumed (in rolling locomotion mode), while the green line represents the ongoing cumulative energy consumption of the rear legs, indicating it did not exceed the threshold values set by the rear body climbing gait.
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing gait was developed. In this approach, once the front legs and body have completed their upward rolling motion, the rear legs are elevated to ascend the step. This strategy is particularly beneficial in situations where the mobility of rolling locomotion is hindered by the rear wheels. For a more detailed discussion of the whole-body climbing gait and the rear-body climbing gait, we direct readers to [10].
The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful design of the climbing gaits. These gaits incorporate identical desired joint accelerations, leg stride length, and forward movement height, as highlighted in [4]. Consequently, variations in energy consumption during different step negotiations primarily stem from negotiation time and body movements. To establish the threshold values ($T_{wb}$ and $T_{rb}$) for the energy criterion, they were set equal to the energy expenditure of the walking locomotion mode using the whole-body climbing and rear-body climbing gaits, respectively. Unlike other methods that use empirical values [2, 8], the threshold values in this study were decided upon based on a novel rule that evaluates the alternative locomotion mode. Moreover, these threshold values are not fixed and are determined based on the terrain profiles the robot is negotiating.
C
The algorithm classifies items according to their size. Tiny items have their size in the range $(0,1/3]$, small items in $(1/3,1/2]$, critical items in $(1/2,2/3]$, and large items in $(2/3,1]$. In addition, the algorithm has four kinds of bins, called tiny, small, critical and large bins. Large items are placed alone in large bins, which are opened at each arrival. Small items are placed in pairs in small bins, which are opened every other arrival. Critical bins contain a single critical item, and tiny items up to a total size of $1/3$ per bin, while tiny bins contain only tiny items. The algorithm receives as advice the number of critical items, denoted by $c$, and opens $c$ critical bins at the beginning. Inside each critical bin, a space of $2/3$ is reserved for a critical item, and tiny items are placed using First-Fit into the remaining space of these bins, possibly opening new bins dedicated to tiny items. Each critical item is placed in one of the critical bins. Note that the algorithm is heavily dependent on the advice being trusted. Imagine that the encoded advice overestimates the number of critical items. This results in critical bins which contain only tiny items.
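The following is a simplified Python sketch of the packing scheme described above; the data layout and bookkeeping are illustrative assumptions and omit details of the original algorithm.

```python
def pack_with_advice(items, c):
    """items: sizes in (0, 1]; c: advised number of critical items (trusted advice)."""
    critical_bins = [{'critical': None, 'tiny': []} for _ in range(c)]
    tiny_bins, small_bins, large_bins = [], [], []
    for s in items:
        if s > 2 / 3:                                  # large item: alone in a new bin
            large_bins.append([s])
        elif s > 1 / 2:                                # critical item: fill a reserved bin
            next(b for b in critical_bins if b['critical'] is None)['critical'] = s
        elif s > 1 / 3:                                # small item: paired in small bins
            if small_bins and len(small_bins[-1]) == 1:
                small_bins[-1].append(s)
            else:
                small_bins.append([s])
        else:                                          # tiny item: First-Fit into the 1/3
            for b in critical_bins:                    # slack of critical bins ...
                if sum(b['tiny']) + s <= 1 / 3:
                    b['tiny'].append(s)
                    break
            else:                                      # ... otherwise into tiny bins
                for b in tiny_bins:
                    if sum(b) + s <= 1:
                        b.append(s)
                        break
                else:
                    tiny_bins.append([s])
    return critical_bins, tiny_bins, small_bins, large_bins
```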
Intuitively, Rrc works similarly to Reserved-Critical except that it might not open as many critical bins as suggested by the advice. The algorithm is more “conservative” in the sense that it does not keep two thirds of many (critical) bins open for critical items that might never arrive. The smaller the value of $\alpha$ is, the more conservative the algorithm is. Our analysis is based on two possibilities in the final packing of the algorithm. In the first case (case I), all critical bins receive a critical item, while in the second case (case II) some of them have their reserved space empty.
The algorithm classifies items according to their size. Tiny items have their size in the range $(0,1/3]$, small items in $(1/3,1/2]$, critical items in $(1/2,2/3]$, and large items in $(2/3,1]$. In addition, the algorithm has four kinds of bins, called tiny, small, critical and large bins. Large items are placed alone in large bins, which are opened at each arrival. Small items are placed in pairs in small bins, which are opened every other arrival. Critical bins contain a single critical item, and tiny items up to a total size of $1/3$ per bin, while tiny bins contain only tiny items. The algorithm receives as advice the number of critical items, denoted by $c$, and opens $c$ critical bins at the beginning. Inside each critical bin, a space of $2/3$ is reserved for a critical item, and tiny items are placed using First-Fit into the remaining space of these bins, possibly opening new bins dedicated to tiny items. Each critical item is placed in one of the critical bins. Note that the algorithm is heavily dependent on the advice being trusted. Imagine that the encoded advice overestimates the number of critical items. This results in critical bins which contain only tiny items.
The worst case is reached when tiny items form a subsequence $(1/6,\epsilon,1/6,\epsilon,\ldots)$, while there is no critical item. In this case, all critical bins are filled up to a level slightly more than $1/6$. Hence, untrusted advice can result in a competitive ratio as bad as 6.
First, if $\gamma\leq\alpha$, by Lemma 10, the competitive ratio will be at most $1.5+\frac{15}{2^{k/2+1}}$. Next, assume $\alpha<\gamma$, that is, $\beta=\alpha$. All critical bins receive a critical item in this case. This is because the algorithm maintains a critical ratio $\alpha$ which is smaller than $\gamma$. In other words, the algorithm declares a smaller ratio of its bins critical compared to the actual ratio in the Reserve-Critical algorithm. Hence, all critical bins receive a critical item. By Lemma 12, the competitive ratio is at most $1.5+\frac{1-\alpha}{4-3\alpha}$.
C
Since $\oplus_{1}$ is the addition, instead of processing the whole document again, we could update the already computed vector, $(0.15, 3.65, 2.0, 0.15)$, by adding it to the new sentence confidence vector. Note that this incremental classification, in which only the new sentence needs to be processed, would produce exactly the same result as if the process were applied to the whole document again each time.
Another important aspect of this incremental approach is that since this confidence vector is a value that “summarizes the past history”, keeping track of how this vector changes over time should allow us to derive simple and clear rules to decide when the system should make an early classification. As an example of this, suppose we need to classify a social media user (i.e. a subject) as depressed (positive) or non-depressed (negative) based on his/her writings. Let us assume that this user is the subject 9579, he/she is depressed, and that the change of each confidence vector component over time (measured in writings) is the one shown in Figure 2.
In this pilot task, classifiers must decide, as early as possible, whether each user is depressed or not based on his/her writings. In order to accomplish this, during the test stage and in accordance with the pilot task definition, the subject’s writings were divided into 10 chunks —thus each chunk contained 10% of the user’s history. Then, classifiers were given the user’s history, one chunk at a time, and after each chunk submission, the classifiers were asked to decide whether the subject was depressed, not depressed or that more chunks need to be read.
However, this is a vital aspect, especially when the task involves sensitive or risky decisions in which, usually, people are involved. Figure 9 shows an example of a piece of what could be a visual description of the classification process for subject 9579 (the same subject used previously in the example shown in Figure 2, in subsubsection 3.1.1; interested readers can relate the green/positive curve there to the color intensity of each writing shown in Figure 9(a)). In this example, we show in (a) a painted piece of the subject's writing history that the system users could use to identify which writings were involved, and to what degree, in the decision-making (classification) process. If the user wanted to further analyze, let us say, writing 60 in more detail, the same process could be applied at two lower levels, as shown in (b) and (c) for sentences and words, respectively.
We could make use of this “dynamic information” to apply certain policies to decide when to classify subjects as depressed. For example, one such policy would be “classify a subject as positive when the accumulated positive value becomes greater than the negative one”; in that case, note that our subject would be classified as depressed after reading his/her 66th writing.
A
Since $\mathcal{C}({\bf e}_{t+\frac{1}{2},k})$ is sparse, ${\bf w}_{t+1}-{\bf w}_t$ is sparse as well. Hence, sending ${\bf w}_{t+1}-{\bf w}_t$ can reduce the communication cost compared with sending ${\bf w}_t$. Workers can get ${\bf w}_{t+1}$ by ${\bf w}_{t+1}={\bf w}_t+({\bf w}_{t+1}-{\bf w}_t)$.
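A small sketch of this communication pattern is given below, assuming NumPy vectors; the encoding into (index, value) pairs and the function names are ours, not the paper's implementation.

```python
import numpy as np

def encode_sparse(diff):
    """Send only the nonzero entries of w_{t+1} - w_t (sparse after compression)
    as (index, value) pairs, instead of the full parameter vector."""
    idx = np.flatnonzero(diff)
    return idx, diff[idx]

def apply_sparse(w_t, idx, vals):
    """Worker side: reconstruct w_{t+1} = w_t + (w_{t+1} - w_t)."""
    w_next = w_t.copy()
    w_next[idx] += vals
    return w_next
```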
There are some other ways to combine momentum and error feedback. For example, we can put the momentum term on the server. However, these ways lead to worse performance than the way adopted in this paper. More discussions can be found in Appendix A.
Recently, the parameter server (Li et al., 2014) has become one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-reduce framework.
The error feedback technique keeps the compression error in an error residual on each worker and incorporates the residual into the next update. Error feedback based sparse communication methods have been widely adopted by recent communication compression methods and have achieved better performance than quantization methods and other sparse communication methods without error feedback.
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using more aggressive sparsification compressors (e.g., RBGS), we extend GMC to GMC+. We prove the convergence of GMC and GMC+ theoretically. Empirical results verify the superiority of global momentum and show that GMC and GMC+ can outperform other baselines to achieve state-of-the-art performance.
A
These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components. Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstruction error and compression ratio (smaller $\bar{\varphi}$) results in interpretable kernels.
Comparing the differences of $\bar{\varphi}$ between the Identity, the ReLU and the remaining sparse activation functions in Fig. 4, we notice that the latter produce a minimum region in which we observe interpretable kernels.
During validation we selected the models with the kernel size that achieved the best $\bar{\varphi}$ out of all epochs. During testing we feed the test data into the selected model and calculate $CR^{-1}$, $\tilde{\mathcal{L}}$ and $\bar{\varphi}$ for this set of hyperparameters, as shown in Table I.
The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4 between the Identity activation function, the ReLU and the remaining ones show the effect of a sparser activation function on the representation.
These results suggest that reconstruction error by itself is not a sufficient metric for decomposing data in interpretable components. Trying to solely achieve lower reconstruction error (such as the case for the Identity activation function) produces noisy learned kernels, while using the combined measure of reconstruction error and compression ratio (smaller $\bar{\varphi}$) results in interpretable kernels.
A
The typical wireless protocol 802.11b/g only provides limited channels for users, which is far from enough for high-quality communication services [18]. To reduce the load on the central system, making use of distributed available resources in networks turns out to be an ideal solution. Underlay Device-to-Device (D2D) communication is considered as one of the crucial technologies for cellular spectrum reuse by user devices in communication networks [19]. The advantage of D2D communication, which allows end users to operate on licensed channels through power control, sheds light on how interference management would work in UAV ad-hoc networks [22].
Game theory provides an efficient tool for cooperation through resource allocation and sharing [20][21]. A computation offloading game has been designed in order to balance the UAV's tradeoff between execution time and energy consumption [25]. A sub-modular game is adopted in the scheduling of beaconing periods for the purpose of less energy consumption [23]. Sedjelmaci et al. applied the Bayesian game-theoretic methodology to UAV intrusion detection and attacker ejection [24]. However, most existing models focus on common scenarios with small numbers of UAVs, which are not compatible with large-scale scenarios involving large numbers of UAVs [26]. The aggregative game is a characteristic game model which treats the other agents' strategies as a single aggregate influence, thus avoiding overwhelming strategy information from every single agent [27][28]. Inspired by this, our model is built upon aggregative game theory, which suits large-scale scenarios.
We propose a novel UAV ad-hoc network model with the aggregative game which is compatible with large-scale, highly dynamic environments, in which several influences are coupled together. In the aggregative game, the interference from other UAVs can be regarded as one integral influence, which makes the model more practical and efficient.
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm with its learning rate in large-scale post-disaster scenarios and propose a new algorithm which is more suitable for the UAV ad-hoc network in such scenarios.
In post-disaster scenarios, a great many UAVs are required to support users [4]. Therefore, we introduce aggregative game theory into such scenarios and permit UAVs to learn within constrained strategy sets. Because the aggregative game can integrate the impact of all other UAVs on a single UAV, it reduces the complexity of receiving information and reduces the data processing load of the UAVs. For instance, in a conventional game applied to a scenario with N UAVs, each UAV needs to analyze the N strategies that determine the noise and coverage sizes of every other individual UAV; the aggregative game, however, only needs to process the integrated noise and coverage sizes of all other UAVs. Such an advantage is more pronounced when the number of UAVs is extremely large, since figuring out each other's strategies is unrealistic [8]. In terms of constrained strategy sets, due to environmental factors such as violent winds [11] and tempestuous rainstorms, the action set of a UAV is restricted: it cannot switch rapidly from extremely high power or altitude levels to low ones, but only to levels adjacent to the current one [12]. For instance, the power can change from 1 mW to 1.5 mW in the first time slot and from 1.5 mW to 2 mW in the next one, but it cannot be altered directly from 1 mW to 2 mW. Therefore, the aggregative game with constrained sets is an ideal model for post-disaster scenarios.
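To illustrate the constrained strategy sets, the short sketch below enumerates the moves a UAV can make in one time slot; the concrete power levels are hypothetical and only serve the example above.

```python
# Hypothetical discrete power levels (in mW); in a single time slot a UAV may
# only move to a level adjacent to its current one (or stay put).
POWER_LEVELS = [1.0, 1.5, 2.0, 2.5, 3.0]

def feasible_moves(current_index):
    """Return the indices of power levels reachable in the next time slot."""
    lo = max(0, current_index - 1)
    hi = min(len(POWER_LEVELS) - 1, current_index + 1)
    return list(range(lo, hi + 1))

# From 1 mW (index 0) the UAV can stay at 1 mW or move to 1.5 mW,
# but it cannot jump directly to 2 mW.
print([POWER_LEVELS[i] for i in feasible_moves(0)])   # [1.0, 1.5]
```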
A
Equation 5.16 can be solved for the constant $f_I$ if $f_{P_i}$ is temporarily set to zero at the fixed-point nodes along
where $h_I$ is the height of the rectangular cross-section of the insulating wall, and $r_{out}$ and $r_{in}$ are the outer and inner
$$f_I(t)=\frac{-1}{\tilde{L}_{ins}+\tilde{L}_{int\Delta}}\sum_{i=1}^{N_n}\left(\frac{f_{P0_i}(t)\,s_i}{3r_i}\right)$$
the inner wall of the insulator (see figure 10), so that $f_P\rightarrow f_{P0}$. Equation 5.16 is modified
Equation 5.16 can be solved for the constant $f_I$ if $f_{P_i}$ is temporarily set to zero at the fixed-point nodes along
C
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_A,x_A)=1_A$, in order to get a semantic of comparability closer to equality. Even more, it could be possible to make the functions reflexive on all values but null where some freedom is allowed.
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
$$f_A(u,v)=f_B(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\,v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$$
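A direct transcription of this comparability function in Python could look as follows; it is only a sketch of the case analysis above, with the abstract values $a$ and $b$ passed in as parameters and None standing in for null.

```python
NULL = None  # stand-in for the null value

def comparability(u, v, a, b):
    """Comparability function f_A = f_B from the definition above."""
    if u == v and u is not NULL:
        return 1          # equal, non-null values
    if u is not NULL and v is not NULL and u != v:
        return a          # distinct non-null values
    if u is NULL and v is NULL:
        return b          # both values missing
    return 0              # one value missing, the other not
```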
Intuitively, if an abstract value $x_A$ of $\mathcal{L}_A$ is interpreted as $1$ (i.e., equality) by $h_A$, any value $y_A\geq_A x_A$ must be set to $1$ since it is closer to
When using the framework, one can further require reflexivity on the comparability functions, i.e. $f(x_A,x_A)=1_A$, in order to get a semantic of comparability closer to equality. Even more, it could be possible to make the functions reflexive on all values but null where some freedom is allowed.
A
The sources of DQN variance are the Approximation Gradient Error (AGE) [23] and the Target Approximation Error (TAE) [24]. In the Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely differing predictions along the learning trajectory across episodes, because of unseen state transitions and the finite size of the experience replay buffer. This type of variance leads to convergence to sub-optimal policies and severely hurts DQN performance. The second source of variance, the Target Approximation Error, is the error coming from the inexact minimization of the DQN parameters. Many of the proposed extensions focus on minimizing the variance that comes from AGE, by finding methods to optimize the learning trajectory, or from TAE, by using methods such as averaging to obtain more exact DQN parameters. Dropout methods have the ability to combine these two solutions, which minimize different sources of variance: they can achieve a consistent learning trajectory and exact DQN parameters through the averaging that comes inherently with Dropout.
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow to solve the task in the environment [1][2][3]. Most Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy, in the form of a sequence of actions that can be followed by the agent to solve the task. The process of choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning, and for that reason many algorithms have been introduced over the course of time, e.g., Q-learning [4], SARSA [5], and policy gradient methods [6]. These methods use linear function approximation techniques to estimate action values, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. The recent advances in deep neural networks helped to develop an artificial agent named deep Q-network (DQN) [8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and the huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as the “overestimation phenomenon,” was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values. They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance.
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus, we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady-state policy.
To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and on the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optimal value function can be computed exactly.
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation across the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and after applying Dropout (Dropout methods DQN). There was a statistically significant decrease in variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore, one of the Dropout methods outperformed the DQN score.
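For reference, this kind of paired comparison of per-trial variances can be carried out with SciPy's Wilcoxon signed-rank test; the numbers below are placeholders, not the measurements reported in Table 1.

```python
from scipy.stats import wilcoxon

# Placeholder per-trial variance measurements (one value per learning trial);
# these are NOT the values behind the paper's Table 1.
variance_dqn     = [120.4, 98.7, 143.2, 110.5, 131.8, 125.0, 99.3, 140.1, 118.6, 122.9]
variance_dropout = [ 70.2, 61.5,  80.9,  66.3,  74.8,  69.1, 58.7,  77.4,  64.2,  71.0]

stat, p_value = wilcoxon(variance_dqn, variance_dropout)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.4f}")
```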
C
Figure 5: Top: An illustration of the SegNet architecture. There are no fully connected layers, and hence it is only convolutional. Bottom: An illustration of SegNet and FCN (Long et al., 2015) decoders. $a,b,c,d$ correspond to values in a feature map. SegNet uses the max-pooling indices to upsample (without learning) the feature map(s) and convolves with a trainable decoder filter bank. FCN upsamples by learning to deconvolve the input feature map and adds the corresponding encoder feature map to produce the decoder output. This feature map is the output of the max-pooling layer (includes sub-sampling) in the corresponding encoder. Note that there are no trainable decoder filters in FCN (Badrinarayanan et al. (2015)).
Milletari et al. (2016) proposed a similar architecture (V-Net; Figure 7) which added residual connections and replaced 2D operations with their 3D counterparts in order to process volumetric images. Milletari et al. also proposed optimizing for a widely used segmentation metric, i.e., Dice, which will be discussed in more detail in the section 4.
V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images.  Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Similarly, other papers (Lian et al., 2018; Isensee et al., 2019; Li et al., 2019b; Ni et al., 2019; Oktay et al., 2018; Schlemper et al., 2019) have leveraged the attention concept into medical image segmentation as well.
To perform image segmentation in real-time and be able to process larger images or (sub)volumes when dealing with volumetric and high-resolution 2D images such as CT, MRI, and histopathology images, several methods have attempted to compress deep models. Weng et al. (2019a) applied a neural architecture search method to U-Net to obtain a smaller network with a better organ/tumor segmentation performance on CT, MR, and ultrasound images. Brügger et al. (2019), by leveraging group normalization (Wu and He, 2018) and the leaky ReLU function, redesigned the U-Net architecture in order to make the network more memory efficient for 3D medical image segmentation. Perone et al. (2018) and Bonta and Kiran (2019) designed a dilated convolutional neural network with fewer parameters as compared to the original convolution-based one. Some other works (Xu et al., 2018; Paschali et al., 2019) have focused on weight quantization of deep networks for making segmentation networks smaller.
The standard CE loss function and its weighted versions, as discussed in Section 4, have been applied to numerous medical image segmentation problems (Isensee et al., 2019; Li et al., 2019b; Lian et al., 2018; Ni et al., 2019; Nie et al., 2018; Oktay et al., 2018; Schlemper et al., 2019). However, Milletari et al. (2016) found that optimizing CNNs for DL (Eqn. 10) in some cases, e.g., in the case of having very small foreground objects in a large background, works better than the original cross-entropy.
A
Each fold is, in turn, selected as the test set, while the remaining 9 folds become the training set. For each different train/test split, we set aside 10% of the training data as validation set, which is used for early stopping, i.e., we interrupt the training procedure after the loss on the validation set does not decrease for 50 epochs.
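A minimal sketch of this evaluation protocol is given below, using scikit-learn for the splits; the model constructor and its fit/score interface (including the early-stopping `patience` argument) are placeholders for whatever training loop is actually used.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def evaluate(X, y, build_model, n_splits=10, patience=50):
    """10-fold CV; 10% of each training split is held out for early stopping."""
    accuracies = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(X):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X[train_idx], y[train_idx], test_size=0.1)
        model = build_model()
        # `fit` is assumed to stop training once the validation loss has not
        # decreased for `patience` epochs (early stopping); placeholder API.
        model.fit(X_tr, y_tr, validation_data=(X_val, y_val), patience=patience)
        accuracies.append(model.score(X[test_idx], y[test_idx]))
    return float(np.mean(accuracies))
```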
The LSTM baseline generally achieves a better accuracy than Dense, since it captures the sequential ordering of the words in the reviews, which also helps to prevent overfitting on training data. Finally, the TCN baseline always outperforms LSTM, both in terms of accuracy and computational costs.
Additional baselines are the Weisfeiler-Lehman (WL) graph kernel [47], a GNN with only MP layers (Flat), and a network with only dense layers (Dense). The comparison with Flat helps to understand whether pooling operations are useful for a given task.
Interestingly, the Dense architecture achieves the best performance on MUTAG, indicating that in this case, the connectivity of the graphs does not carry useful information for the classification task. The performance of the Flat baseline indicates that in Enzymes and COLLAB pooling operations are not necessary to improve the classification accuracy.
Interestingly, the GNNs configured with GRACLUS and NDP always achieve better results than the Dense network, even if the latter generates the word embeddings used to build the graph on which the GNN operates. This can be explained by the fact that the Dense network immediately overfits the dataset, whereas the graph structure provides a strong regularization, as the GNN combines only words that are neighboring on the vocabulary graph.
C
NRFI with and without the original data is shown for different network architectures. The smallest architecture has 2 neurons in both hidden layers and the largest 128. For NRFI (gen-ori), we can see that a network with 16 neurons in both hidden layers (NN-16-16) is already sufficient to learn the decision boundaries of the random forest and achieve the same accuracy. When fewer training samples are available, NN-8-8 already has the required capacity. In the following, we will further analyze the accuracy and number of network parameters.
Current state-of-the-art methods directly map random forests into neural networks. The number of parameters of the resulting network is evaluated on all datasets with different numbers of training examples. The overall performance is shown in the last column. Due to the stochastic process when training the random forests, the results can vary marginally.
Here, we additionally include decision trees, support vector machines, random forests, and neural networks in the comparison. The evaluation is performed on all nine datasets, and results for different numbers of training examples are shown (increasing from left to right). The overall performance of each method is summarized in the last column. For neural random forest imitation, a network architecture with 128 neurons in both hidden layers is used. From the analysis, we can make the following observations:
NRFI introduces imitation instead of direct mapping. In the following, a network architecture with 32 neurons in both hidden layers is selected. The previous analysis has shown that this architecture is capable of imitating the random forests (see Figure 4 for details) across all datasets and different numbers of training examples.
First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class. For each method, the average number of parameters of the generated networks across all datasets is plotted depending on the test error. That means that the methods aim for the lower-left corner (smaller number of network parameters and higher accuracy). Please note that the y-axis is shown on a logarithmic scale.
B
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or sample complexity, which remains even more challenging to answer than the computational question. As a result, such a lack of statistical understanding hinders the development of more sample-efficient policy optimization algorithms beyond heuristics. In fact, empirically, vanilla policy gradient is known to exhibit a possibly worse sample complexity than random search (Mania et al., 2018), even in basic settings such as linear-quadratic regulators. Meanwhile, theoretically, vanilla policy gradient can be shown to suffer from exponentially large variance in the well-known “combination lock” setting (Kakade, 2003; Leffler et al., 2007; Azar et al., 2012a), which only has a finite state space.
The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained subproblem. In the special case where the reward function $r^{k-1}_h$ is linear in the feature map $\phi^{k-1}_h$ defined subsequently, which implies that the Q-function $Q^{\pi^{k-1},k-1}_h$ is also linear in $\phi^{k-1}_h$, the updated policy $\pi^k$ can be equivalently obtained by one iteration of NPG when the policy is parameterized by an energy-based distribution (Agarwal et al., 2019; Wang et al., 2019). Such a policy improvement step can also be cast as one iteration of infinite-dimensional mirror descent (Nemirovsky and Yudin, 1983) or dual averaging (Xiao, 2010), where the Q-function plays the role of the gradient (Liu et al., 2019; Wang et al., 2019).
To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regularized policy optimization subproblem, where the linear component of the objective function is defined using the action-value function. As is shown subsequently, solving such a subproblem corresponds to one iteration of infinite-dimensional mirror descent (Nemirovsky and Yudin, 1983) or dual averaging (Xiao, 2010), where the action-value function plays the role of the gradient. To encourage exploration, we explicitly incorporate a bonus function into the action-value function, which quantifies the uncertainty that arises from only observing finite historical data. Through uncertainty quantification, such a bonus function ensures the (conservative) optimism of the updated policy. Based on NPG, TRPO, and PPO, OPPO only augments the action-value function with the bonus function in an additive manner, which makes it easily implementable in practice.
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We proposed an optimistic variant of the proximal policy optimization algorithm, dubbed as OPPO, which incorporates the principle of “optimism in the face of uncertainty” into policy optimization. When applied to the episodic MDP with unknown transition and adversarial reward, OPPO provably achieves a $\sqrt{d^2H^3T}$-regret up to logarithmic factors, which is near-optimal. To the best of our knowledge, OPPO is the first provably efficient policy optimization algorithm that explicitly incorporates exploration.
step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^*$ within $K=H$ episodes and hence equivalently induces an $H^2$-regret. However, in the realistic setting, the Q-function $Q^{\pi^{k-1},k-1}_h$ in (3.1)-(3.3) is replaced by the estimated Q-function $Q^{k-1}_h$ in Line 6 of Algorithm 1, which is obtained by the policy evaluation step defined in (3.1). As a result of the estimation uncertainty that arises from only observing finite historical data, it is indeed impossible to do better than the $\sqrt{T}$-regret even in the tabular setting (Jin et al., 2018), which is shown to be an information-theoretic lower bound. In the linear setting, OPPO attains such a lower bound in terms of the total number of steps $T=HK$. In other words, in the stationary setting, being “conservatively” greedy suffices to achieve sample-efficiency, which complements its advantages in terms of robustness in the more challenging setting with adversarially chosen reward functions.
B
The challenge is to reduce the number of bits as much as possible while at the same time keeping the prediction accuracy close to that of a well-tuned full-precision DNN. Subsequently, we provide a literature overview of approaches that train reduced-precision DNNs, and, in a broader view, we also consider methods that use reduced-precision computations during backpropagation to facilitate low-resource training.
Knowledge distillation is an approach where a small student DNN is trained to mimic the behavior of a larger teacher DNN, which has been shown to yield improved results compared to training the small DNN directly. The idea of weight sharing is to use a small set of weights that is shared among several connections of a DNN to reduce the memory footprint.
In recent years, the STE (Bengio et al., 2013) (see Section 2.6) became the method of choice to compute an approximate gradient for training DNNs with weights that are represented using a very small number of bits. Such methods typically maintain a set of full-precision weights that are quantized during forward propagation.
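A minimal PyTorch-style sketch of this pattern follows (not tied to any specific paper's code): full-precision weights are kept, the forward pass uses their quantized values, and the STE passes the gradient straight through the non-differentiable quantizer.

```python
import torch

class SignSTE(torch.autograd.Function):
    """Binarize in the forward pass; pass the gradient straight through."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # identity gradient: the straight-through estimator

# Full-precision weights are maintained and quantized on every forward pass.
w_full = torch.randn(4, requires_grad=True)
w_quant = SignSTE.apply(w_full)
loss = (w_quant.sum() - 1.0) ** 2
loss.backward()          # gradients reach w_full despite the discrete forward
print(w_full.grad)
```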
By injecting additive noise to the deterministic weights before rounding, one can compute probabilities of the weights being rounded to specific values in a predefined discrete set. Subsequently, these probabilities are used to differentiably round the weights using the Gumbel-softmax approximation (Jang et al., 2017).
The two works of Höhfeld and Fahlman (Höhfeld and Fahlman, 1992a, b) rounded the weights during training to fixed-point format with different numbers of bits. They observed that training eventually stalls as small gradient updates are always rounded to zero.
D
by inequality (6) and Remark 9.2. Again, as in the first item, $J=\left(0,\frac{l_2}{3}\right]$. Note that the existence of the interval $\left(0,\frac{l_2}{3}\right]$ in $\mathrm{barc}^{\mathrm{VR}}_1(X;\mathbb{F})$ can also be proved via the “crushing” technique introduced by Hausmann (see [50, Proposition 2.2]), since $X$ can be crushed onto the loop of length $l_2$.
In this section, we will see one such example which arises from the interplay between the hyperbolicity of the geodesic metric space $X$ and its tight span $E(X)$ (see Example 3.1 to recall the definition of tight span).
Let $X$ be the metric gluing of a loop of length $l_2$ and an interval of length $l_1$ (glued to the circle at one of its endpoints). Then, by Proposition 9.1, $I\leq\mathrm{spread}(X)$ for any $I\in\mathrm{barc}^{\mathrm{VR}}_k(X;\mathbb{F})$. However, observe that one can make $\mathrm{spread}(X)$ arbitrarily large by increasing $l_1$. But, if $J\in\mathrm{barc}^{\mathrm{VR}}_1(X;\mathbb{F})$ and a family of nonzero homology classes $\{(\omega_s,s)\}_{s\in J}\subseteq\mathrm{Spec}_1(X,\mathbb{F})$ corresponding to $J$ is supported by the loop, then
Motivated by Example 9.2 above, in the proposition below we will clarify the relationship between the persistence barcode and the multiset consisting of all $I_{(\omega,s)}$.
An example similar to the one described in the previous item arises from Figure 3. Consider the tube connecting the two blobs to be large: in that case the standard spread of the space will be large, yet the lifetime of the individual $\mathrm{H}_2$ classes will be much smaller.
D
In their tool, Coimbra et al. [42] support interactive exploration of 3-D projections using adapted biplots and different widgets for viewpoint selection. Our tool is similar to theirs from the perspective of providing a collection of interconnected views for projection exploration, but they focus on projection-agnostic 3-D scatterplots, and the widgets have different goals. Probing Projections [36] is another such interactive system that supports both explaining and assessing projections, but limited to MDS [43]. Groups of points can be compared in terms of the data set’s dimensions, and a heatmap of the distribution of a selected dimension can be overlaid on the visualization, but there is no special prioritization of dimensions to deal with very high-dimensional data; the user must simply cycle through all of them in order to find the most relevant one.
Most similarly to one of our proposed interactions (the Dimension Correlation, Subsection 4.4), in AxiSketcher [47] (and its prior version InterAxis [48]) the user can draw a polyline in the scatterplot to identify a shape, which results in new non-linear high-dimensional axes to match the user’s intentions. Since the resulting dimension contributions to the axes are not uniform, it is not possible to represent them using simple means such as bar charts. In our Dimension Correlation tool, the user also draws a polyline to identify a shape, but our intention is exactly the opposite of AxiSketcher: we want to capture dimension contributions in an easy and accessible way. For this, we project low-dimensional points into the line (not high-dimensional ones, as in AxiSketcher), and we compute the dimension contributions in a different way, using Spearman’s rank correlation. In summary, although there is a superficial similarity between the two techniques regarding how the user interacts with the scatterplot, their goals and their inner workings are quite different. Since t-viSNE adopts an approach of combining many different coordinated views, it is important for the Dimension Correlation to maintain—as much as possible—the users’ mental map of the projection, and to give simple and straightforward interpretations of the patterns they see.
Adaptive Parallel Coordinates Plot   Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time, to avoid clutter. The shown axes (and their order) are, however, not fixed, as is the usual case. Instead, they are adapted to the selection in the following way. First, a Principal Component Analysis (PCA) [1] is performed using only the selected points, but with all dimensions. That yields two results: (1) a set of eigenvectors that represent a new base that best explains the variance of the selected points, and (2) a set of eigenvalues that represent how much variance is explained by each eigenvector. Simulating a reduction of the dimensions of the selected points to 1-dimensional space using PCA, we pick the eigenvector with the largest eigenvalue, i.e., the most representative one. This $N$-D vector can be seen as a sequence $w$ of $N$ weights, one per original dimension, where the value of $w_j$ indicates the importance of dimension $j$ in explaining the variance of the user-selected subset of the data. Finally, we sort $w$ in descending order, then pick the dimensions that correspond to the first (up to) 8 values of the sorted $w$. These are the (up to) 8 dimensions shown in the PCP axes, in the same descending order (from left to right).
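The axis-selection step can be summarized in a few lines of NumPy; this is only a sketch of the description above (variable names are ours, and we take the magnitude of the loadings of the top eigenvector as the importance weights).

```python
import numpy as np

def adaptive_pcp_dimensions(selected_points, max_axes=8):
    """Pick up to `max_axes` dimensions, ordered by their weight in the first
    principal component of the user-selected points."""
    X = selected_points - selected_points.mean(axis=0)
    # PCA on the selection only: eigen-decomposition of the covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    w = np.abs(eigvecs[:, np.argmax(eigvals)])   # loadings of the top eigenvector
    order = np.argsort(w)[::-1]                  # descending importance
    return order[:max_axes]                      # indices of the PCP axes to show
```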
Fujiwara et al. [44] proposed the contrasting clusters in PCA (ccPCA) method to find which dimensions contributed more to the formation of a selected cluster and why it differs from the rest of the dataset, based on information on separation and internal vs. external variability. We have similar goals, but approach them with different methods. For exploring clusters and selections in general, we use PCA to filter and order a local PCP plot; this could be easily adapted to use ccPCA instead as an underlying method for choosing which dimensions to filter and how to re-order the axes, without affecting the overall proposed analytical flow of the tool. On the other hand, ccPCA does not deal with the analysis of shapes, which we support with our proposed Dimension Correlation. Other recent approaches include DimReader [45], where the authors create so-called generalized axes for non-linear DR methods, but besides explaining a single dimension at a time, it is currently unclear how exactly it can be used in an interactive exploration scenario; and
Adaptive PCP vs. PCP   Although it is not uncommon to find tools that use PCP views together with DR-based scatterplots (e.g., iPCA [69]) with various schemes for re-ordering and prioritizing the axes (e.g., [70, 71]), the arrangement and presentation of these PCP’s are usually static in order to reflect attributes of the data (or the projection) as a whole. In our proposed Adaptive PCP, the arrangement of the axes is dynamically updated every time the user makes a new selection (using a local PCA); this way, the PCP only shows, at any given time, the most relevant dimensions for the user’s current focus, which may differ significantly from the global aspects of the projection as a whole. Coupled with the Dimension Correlation view, this provides a highly-customized toolset for inspecting and interpreting the meanings of specific neighborhoods of data points.
C
The complete list of reviewed algorithms in this category is provided in Tables 9 and 10 (physics-based algorithms) and Table 11 (chemistry-based methods). In this category we can find some well-known algorithms reported in the last century such as Simulated Annealing [79], or one of the most important algorithms in physics-based meta-heuristic optimization, Gravitational Search Algorithm, GSA [391]. Interestingly, a variety of space-based algorithms are rooted in GSA, such as Black Hole optimization (BH, [392]) or Galaxy Based Search Algorithm (GBSA, [393]). Other algorithms such as Harmony Search (HS, [394]) relate to the music composition process, a human invention that has more in common with other physical algorithms in what refers to the usage of sound waves than with Social Human Behavior based algorithms, the category discussed in what follows.
Algorithms falling in this category are inspired by human social concepts, such as decision-making and ideas related to the expansion/competition of ideologies inside the society as ideology (Ideology Algorithm, IA, [466]), or political concepts such as the Imperialist Colony Algorithm (ICA, [467]). This category also includes algorithms that emulate sports competitions such as the Soccer League Competition Algorithm (SLC, [468]). Brainstorming processes have also laid the inspirational foundations of several algorithms such as Brain Storm Optimization algorithm (BSO.2, [469]) and Global-Best Brain Storm Optimization algorithm (GBSO, [470]). The complete list of algorithms in this category is given in Table 12 and in Table 13.
Tables 18, 19, 20, 21, 22, 23 and 24 show the different algorithms in this subcategory. An exemplary algorithm of this category that has been a major meta-heuristic solver in the history of the field is PSO [80]. In this solver, each solution or particle is guided by the global current best solution and the best solution obtained by that particle during the search. Another classical algorithm in this category is the majority of the family of DE approaches [59]. In most of the variants of this evolutionary algorithm, the influence of the best solution(s) is hybridized with a differential vector that perturbs the new solution toward random individuals for the sake of increased diversity along the search. However, this subcategory also includes many other algorithms with differences in considering nearly better solutions (as in the Bat Inspired Algorithm [153] or the Brain Storm Optimization Algorithm [469]) or the worse solutions (to avoid less promising regions), as in the Grasshopper Optimization Algorithm (GOA, [118]). More than half of all algorithmic proposals dwell in this subcategory, with a prominence of Swarm Intelligence solvers due to their behavioral inspiration in PSO and DE. We will revolve around these identified similarities in Section 5.
In this same line of reasoning, the largest subcategory of the second taxonomy (Differential Vector Movements guided by representative solutions) not only contains more than half of the reviewed algorithms (almost 60%), but it also comprises algorithms from all the different categories in the first taxonomy: Social Human Behavior (as Anarchic Society Optimization, ASO, [472]), microorganisms (Bacterial Colony Optimization, [145]), Physics/Chemistry category (correspondingly, Fireworks Algorithm Optimization, FAO, [575]), Breeding-based Evolution (as Variable Mesh Optimization, VMO [113]), or even Plants-Based (such as Flower Pollination Algorithm, FPA [529]). This inspirational diversity is not exclusive to this subcategory. Others, such as Solution Creation, also include algorithms relying on the heterogeneity of natural concepts.
The complete list of reviewed algorithms in this category is provided in Tables 9 and 10 (physics-based algorithms) and Table 11 (chemistry-based methods). In this category we can find some well-known algorithms reported in the last century such as Simulated Annealing [79], or one of the most important algorithms in physics-based meta-heuristic optimization, Gravitational Search Algorithm, GSA [391]. Interestingly, a variety of space-based algorithms are rooted in GSA, such as Black Hole optimization (BH, [392]) or Galaxy Based Search Algorithm (GBSA, [393]). Other algorithms such as Harmony Search (HS, [394]) relate to the music composition process, a human invention that has more in common with other physical algorithms in what refers to the usage of sound waves than with Social Human Behavior based algorithms, the category discussed in what follows.
A
(1) By extending generative graph models to general types of data, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for decoders. (2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it.
To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4. From it, we find that the second term (corresponding to problem (7)) plays an important role, especially on UMIST. If $\lambda$ is set to a large value, we may get a trivial embedding according to the constructed graph. AdaGAE obtains good results when $\lambda$ is not too large.
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to different initializations of parameters and needs no pretraining.
As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method. Graph-based clustering methods can capture manifold information so that they are available for non-Euclidean type data, which is not provided by $k$-means. Therefore, they are widely used in practice. Due to the success of deep learning, how to combine neural networks and traditional clustering models has been studied a lot [7, 8, 9]. In particular, CNN-based clustering models have been extensively investigated [10, 11, 12]. However, the convolution operation may be unavailable on other kinds of datasets, e.g., text, social network, signal, data mining, etc.
Classical clustering models work poorly on large scale datasets. Instead, DEC and SpectralNet work better on the large scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph type datasets, they fail on the general datasets, which is probably caused by the fact that the graph is constructed by an algorithm rather than prior information. If the graph is not updated, the contained information is low-level. The adaptive learning will induce the model to exploit the high-level information. In particular, AdaGAE is stable on all datasets.
B
Recent work showed that even TCP traffic gets fragmented under certain conditions (Dai et al., 2021b). Fragmentation has a long history of attacks which affect both UDP and TCP traffic (Kent and Mogul, 1987; Herzberg and Shulman, 2013; Shulman and Waidner, 2014).
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping, and requests/responses to name servers, and we apply the suitable test depending on the server that we identify on the tested network. If the responses contain globally incremental IPID values, we use the service for ingress filtering measurement with the IPID technique. We located globally incremental IPIDs in 63.27% of the measured networks. There are certainly more hosts on networks that support globally incremental IPID values, yet our goal was to validate our measurement techniques while keeping the measurement traffic low; hence we avoided scanning the networks for additional hosts and only checked for Web, Email or Name servers with globally incremental IPID counters via queries to the tested domain.
The challenge here is to accurately probe the increments rate of the IPID value (caused by the packets from other sources not controlled by us), in order to be able to extrapolate the value that will have been assigned to our second probe from a real source IP. This allows us to infer if the spoofed packets incremented the IPID counter.
Methodology. The core idea of the Path MTU Discovery (PMTUD) based tool is to send the ICMP Packet Too Big (PTB) message from a spoofed source IP address belonging to the tested network, and to insert in the first 8 bytes of the ICMP payload the real IP address belonging to the prober. If the network does not enforce ingress filtering, the server will receive the PMTUD message and will reduce the MTU to the IP address specified in the first 8 bytes of the ICMP payload. We first probe the MTU to a service on the tested network, then send an ICMP PTB from a spoofed IP address. If the packet arrives at the service, it will reduce the MTU to our prober, and we will identify this event in the next packet from the service; this event implies that the tested network does not apply ingress filtering.
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the IPID value (send a packet to the server and receive a response) from the IP addresses controlled by us. We then generate a set of packets to the server from spoofed IP addresses, belonging to the tested network. We probe the IPID value again, by sending packets from our real IP address. If the spoofed packets reached the server, they incremented the IPID counter on the server - an event which we infer when probing the value from our real IP address the second time.
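The inference itself reduces to a simple counter comparison; the sketch below only shows that logic, with placeholder probe hooks, no actual packets, and the background increment rate (the extrapolation discussed above) deliberately ignored for brevity.

```python
def infer_ingress_filtering(probe_ipid, send_spoofed, n_spoofed=10):
    """Infer whether spoofed packets reached a server with a global IPID counter.
    `probe_ipid()` and `send_spoofed(n)` are placeholder hooks; background
    traffic between the two probes is ignored in this simplified sketch."""
    before = probe_ipid()                  # IPID seen from our real address
    send_spoofed(n_spoofed)                # packets with spoofed source IPs
    after = probe_ipid()
    increment = (after - before) % 65536   # IPID is a 16-bit counter
    # If the counter advanced by at least the number of spoofed packets, the
    # spoofed packets reached the server, i.e. no ingress filtering is applied.
    return increment >= n_spoofed
```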
D
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal to deploy an artificial nose in a dynamic environment without recalibration.
Second, skill NN and context+skill NN models were compared. The context-based network extracts features from preceding batches in sequence in order to model how the sensors drift over time. When added to the feedforward NN representation, such contextual information resulted in improved ability to compensate for sensor drift. This benefit was larger in later batches where the drift was the largest and where there was a longer context to use as a basis for the adaptation.
For each batch $T$ from 3 through 10, the batches $1,2,\ldots,T-1$ were used to train skill NN and context+skill NN models for 30 random initializations of the starting weights. The accuracy was measured classifying examples from batch $T$ (Fig. 3A, Table 1, Skill NN and Context+Skill NN). The context models achieved a greater average accuracy, computed as the mean over all batches tested of the average accuracy in that batch ($p<0.05$, two-sided t-test blocked by batch). In batch 6, the skill NN outperformed the context+skill NN, while the context+skill NN achieved greater performance in batches 7, 9, and 10 (two-sided t-tests, $p<0.05$).
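This evaluation protocol can be written compactly as below; `train_model`, the model's `accuracy` method, and the `batches` container (assumed to be keyed 1 through 10) are placeholders for the actual training code.

```python
import numpy as np

def progressive_evaluation(batches, train_model, n_seeds=30):
    """For each target batch T (3..10), train on batches 1..T-1 and test on
    batch T, averaging over several random initializations."""
    results = {}
    for T in range(3, 11):
        train_data = [batches[t] for t in range(1, T)]   # batches 1..T-1
        accs = []
        for seed in range(n_seeds):
            model = train_model(train_data, seed=seed)
            accs.append(model.accuracy(batches[T]))      # test on batch T
        results[T] = float(np.mean(accs))
    return results
```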
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer. The recurrent layers are modified via backpropagation through time, and, in this manner, the recurrent pathway learns to generate representations that support classification. The context system thus transforms samples of recently seen odors into a representation that helps classification on the next time period. This approach is similar to the context+skill technique for opponent modeling and enhanced extrapolation in games [26, 27]; the main difference is that in prior work the approach was based on neuroevolution of agent behavior, whereas in this paper it is implemented via backpropagation to generalize classification performance.
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer of the feedforward NN model until the total number of parameters reached 14,429, the larger model was not significantly better ($p\geq 0.05$, one-sided t-test blocked by batch). This reinforces the idea that the benefit may be attributed to context, and not to the size of the network.
A
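As a rough illustration of the context+skill architecture described above, the following PyTorch sketch (layer sizes and the use of a GRU are illustrative assumptions, not details taken from the paper) shows a recurrent pathway summarizing labeled samples from preceding batches into a context vector that is concatenated with the current sample before classification.

    import torch
    import torch.nn as nn

    class ContextSkillNet(nn.Module):
        """Sketch: a feedforward skill pathway plus a recurrent context pathway."""
        def __init__(self, n_features=128, n_classes=6, hidden=32):
            super().__init__()
            # The context pathway reads (sample features + one-hot label) sequences
            # from preceding batches and compresses them into a context vector.
            self.context_rnn = nn.GRU(n_features + n_classes, hidden, batch_first=True)
            self.skill = nn.Sequential(
                nn.Linear(n_features + hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_classes),
            )

        def forward(self, x, context_seq):
            # context_seq: (batch, time, n_features + n_classes)
            _, h = self.context_rnn(context_seq)   # h: (1, batch, hidden)
            ctx = h.squeeze(0)                     # context representation
            return self.skill(torch.cat([x, ctx], dim=-1))

    model = ContextSkillNet()
    logits = model(torch.randn(4, 128), torch.randn(4, 20, 128 + 6))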
Now we can define the tables $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ that our algorithm uses. Recall that for Algorithm 1, we used
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A^{(1)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(1)}_i$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A^{(2)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}^{(2)}_i$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_i$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
D
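One possible in-memory layout for tables of this kind is sketched below (an illustration only; the representative-set pruning and the actual path-cover computation are omitted). Each entry maps a matching on the boundary to the best total length found so far.

    from collections import defaultdict

    # Sketch: A[(i, B)] maps a perfect matching M on the boundary B to the minimum
    # total path-cover length recorded so far.  Boundaries and matchings are stored
    # as hashable frozensets; the representative-set pruning step is not shown.
    A = defaultdict(dict)

    def record(i, boundary, matching, length):
        table = A[(i, boundary)]
        if matching not in table or length < table[matching]:
            table[matching] = length

    # Example: boundary {u, v} with the single matching {{u, v}} of total length 3.5.
    record(0, frozenset({"u", "v"}), frozenset({frozenset({"u", "v"})}), 3.5)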
We conclude this section by presenting a pair $S,T$ of semigroups without a homomorphism $S\to T$ or $T\to S$ where $S$ and $T$ possess typical properties of automaton semigroups, which makes them good candidates for also belonging to this class (and therefore interesting in the light of Theorem 6 and 8): $S$ and $T$ will be
The word problem of a semigroup finitely generated by some set $Q$ is the decision problem whether two input words over $Q$ represent the same semigroup element. The word problem of any automaton semigroup can be solved in polynomial space and, under common complexity theoretic assumptions, this cannot significantly be improved as there is an automaton group whose word problem is hard for this complexity class [5].
A semigroup $S$ is generated by a set $Q$ if every element $s\in S$ can be written as a product $q_1\dots q_n$ of factors from $Q$. If there exists a finite generating set for $S$, then $S$ is finitely generated.
A semigroup arising in this way is called self-similar. Furthermore, if the generating automaton is finite, it is an automaton semigroup. If the generating automaton is additionally complete, we speak of a completely self-similar semigroup or of a complete automaton semigroup.
The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the element on the (full) subtree rooted at the node is the same as that of a (possibly different) element on the entire tree (i. e. at the root). The idea for the name here is that the action on a full subtree is similar to the action of the group or semigroup on the entire tree. An important special case of such a self-similar presentation occurs when there is a finite set of generators such that the action of any generator on the subtree below any node is the same as the action of some (potentially different) generator at the root. By identifying the nodes of the infinite regular tree with the strings over an appropriate finite alphabet, we can describe such an action using a finite automaton (more precisely, a finite-state letter-to-letter – or synchronous – transducer), which leads to the class of automaton semigroups and automaton groups (also often called ‘automata groups’). If we relax the finite-state requirement and also consider infinite automata, we can even describe any self-similar action in this way. This is the approach we will take in this paper.
A
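To make the transducer viewpoint above concrete, here is a toy sketch (the automaton and alphabet are invented for illustration): each state acts on a word letter by letter, emitting an output letter and moving to the state that will act on the rest of the word.

    # Sketch: a letter-to-letter transducer given as
    #   delta[(state, letter)] = (output_letter, next_state).
    delta = {
        ("p", "0"): ("1", "q"), ("p", "1"): ("0", "p"),
        ("q", "0"): ("0", "p"), ("q", "1"): ("1", "q"),
    }

    def act(state, word):
        """Apply the action of `state` to `word`, letter by letter."""
        out = []
        for letter in word:
            output, state = delta[(state, letter)]
            out.append(output)
        return "".join(out)

    # The semigroup generated by the states acts on all finite words:
    print(act("p", "0110"))   # image of "0110" under the generator p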
We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0-14.0% and 3.3-10.5% drops in the training accuracy on VQA-CPv2 and VQAv2, respectively. We hypothesize that degrading performance on the train set helps forget linguistic biases, which in turn helps accuracy on VQA-CPv2’s test set but hurts accuracy on VQAv2’s val set.
We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Further training details are provided in the Appendix.
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the performance on VQAv2 drops continuously during the course of the training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.
We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0-14.0% and 3.3-10.5% drops in the training accuracy on VQA-CPv2 and VQAv2, respectively. We hypothesize that degrading performance on the train set helps forget linguistic biases, which in turn helps accuracy on VQA-CPv2’s test set but hurts accuracy on VQAv2’s val set.
We compare four different variants of HINT and SCR to study the causes behind the improvements including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods) 2) irrelevant regions 3) fixed random regions and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, which was trained on either VQA-CPv2 or VQAv2 for 40 epochs with a learning rate of $10^{-3}$. When fine-tuning with HINT, SCR or our method, we also use the main binary cross entropy VQA loss, whose weight is set to 1. The batch size is set to 384 for all of the experiments.
B
Prior work in privacy and human-computer interaction establishes the motivation for studying these documents. Although most internet users are concerned about privacy (Madden, 2017), Rudolph et al. (2018) report that a significant number do not make the effort to read privacy notices because they perceive them to be too time-consuming or too complicated (Obar and Oeldorf-Hirsch, 2018). Responding to the opaqueness of these documents, Schaub et al. (2015) introduced methods to ease the design of privacy notices and their integration, and Kelley et al. (2010) designed and tested a “privacy nutrition label” approach to present privacy information visually. Suggestions to improve the presentation of privacy information have not been adopted by many organisations. Apple has begun displaying privacy labels in its app stores, having collected the information from app developers; however, concise privacy information for websites remains an open problem.
To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies Amos et al. (2020), and it surpasses the aggregate of unique websites represented in all other publicly available web privacy policy corpora combined. We describe the corpus creation pipeline, with stages including a web crawler, language detection, document classification, duplicate and near-duplication removal, and content extraction. We then analyse the lengths and top level distribution of the privacy policies in the corpus and use topic modelling to explore the component topics. Subsequently, we pretrain PrivBERT, a transformer-based language model, using the corpus and evaluate it on data practice classification and question answering tasks. We release the corpus, a search engine for the corpus (Srinath et al., 2021), the document collection pipeline, and a language model to support further research in the privacy domain. (All artifacts are available at https://privaseer.ist.psu.edu/.)
To build the PrivaSeer corpus, we create a pipeline concentrating on focused crawling Chakrabarti et al. (1999); Diligenti et al. (2000) of privacy policy documents. We used Common Crawl (https://commoncrawl.org/), described below, to gather seed URLs to privacy policies on the web. We filtered the Common Crawl URLs to gather a set of possible links to web site privacy policies. We then crawled the filtered set to obtain candidate privacy policy documents. The complete pipeline from the Common Crawl URL dump to the gold standard privacy policy corpus is in Figure 1.
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. As a result, most organisations include a link to their privacy policy in the footer of their website landing page. In order to focus PrivaSeer Corpus on privacy policies that users are intended to read, we cross-verified the URLs of the privacy policies in our corpus with those that we obtained by crawling the homepages (landing page) of these domains. Between the 8th and 10th November 2019, we crawled the landing pages and pages one hop from the landing pages for all the domains of the URLs in our corpus. We then gathered the URLs satisfying our selection criteria and cross-verified them with the URLs in our existing corpus. After cross-verifying the URLs, we were left with a set of 1.1 million web pages.
We selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive. We were able to extract 3.9 million URLs that fit this selection criterion. Informal experiments suggested that this selection of keywords was optimal for retrieving the most privacy policies with as few false positives as possible. To find the accuracy of this technique, we manually examined 115 English language website landing pages and their privacy policy URLs from the OPP-115 Corpus (Wilson et al., 2016) since it was built to cover the diverse distribution of privacy policies on the web, in terms of website popularity and sector of commerce. We found that out of 115 websites, 4 websites did not have their privacy policy links either on the landing page or one hop from the landing page and 5 other websites did not satisfy our URL selection criteria. Thus, our crawling technique would cover about 92.17% ± 6.51% of English privacy policies on the web with a 95% confidence interval.
B
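A minimal sketch of the URL selection criterion described above (only the keyword filter; crawling, language detection, classification, and deduplication are not shown):

    def looks_like_privacy_policy_url(url: str) -> bool:
        """Keyword heuristic: keep URLs containing 'privacy', or both 'data' and 'protection'."""
        u = url.lower()
        return "privacy" in u or ("data" in u and "protection" in u)

    seed_urls = [
        "https://example.com/privacy",
        "https://example.org/legal/data-protection",
        "https://example.net/about",
    ]
    candidates = [u for u in seed_urls if looks_like_privacy_policy_url(u)]
    # candidates keeps the first two URLs and drops the last one.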
Figure 1: Knowledge generation model for ensemble learning with VA derived from the model by Sacha et al. [44]. On the left, it illustrates how a VA system can enable the exploration of the data and the models with the use of visualization. On the right, a number of design goals assist the human in the exploration, verification, and knowledge generation for ensemble learning.
Visualization systems have been developed for the exploration of diverse aspects of bagging, boosting, and further strategies such as “bucket of models”. Stacking, however, has so far not received comparable attention by the InfoVis/VA communities: actually, we have not found any literature describing the construction and improvement of stacking ensemble learning with the use of VA.
The rest of this paper is organized as follows. In the next section, we discuss the literature related to visualization of ensemble learning. Afterwards, we describe the knowledge generation model for ensemble learning with VA, design goals, and analytical tasks for attaching VA to ensemble learning.
In a bucket of models, the best model for a specific problem is automatically chosen from a set of available options. This strategy is conceptually different to the ideas of bagging, boosting, and stacking, but still related to ensemble learning. Chen et al. [6] utilize a bucket of latent Dirichlet allocation (LDA) models for combining topics based on criteria such as distinctiveness and coverage of the set of actions performed.
Figure 1: Knowledge generation model for ensemble learning with VA derived from the model by Sacha et al. [44]. On the left, it illustrates how a VA system can enable the exploration of the data and the models with the use of visualization. On the right, a number of design goals assist the human in the exploration, verification, and knowledge generation for ensemble learning.
A
We thus have 3 cases, depending on the value of the tuple $(p(v,[010]), p(v,[323]), p(v,[313]), p(v,[003]))$:
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
$p(v,[013]) = p(v,[313]) = p(v,[113]) = 1$. Similarly, when $f=[112]$,
$\{\overline{0}, \overline{1}, \overline{2}, \overline{3}, [013], [010], [323], [313], [112], [003], [113]\}$.
B
where $\mathcal{L}_{D_i^{train}}(\theta)$ and $\mathcal{L}_{D_i^{valid}}(\theta_i)$ are the loss functions of $\theta$ on $D_i^{train}$ and of $\theta_i$ on $D_i^{valid}$, and $\alpha$ and $\beta$ are the learning rates. In the fine-tuning stage, each task fine-tunes from the pre-trained initialization $\theta$ on its $D_i^{train}$.
Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct a setting in which tasks are similar to each other. For a fair comparison, each task in this setting also has 120 and 1200 utterances on average in Persona and Weibo respectively. We train and evaluate Transformer-F and MAML in this setting (Table 2). When tasks are similar to each other, MAML performs comparatively poorly. In Persona and Weibo, the performance of MAML is similar to that of Transformer-F, while MAML performs significantly better than Transformer-F when tasks are different. A possible explanation is that if there is no clear distinction between tasks, the meta-learning setting can be viewed as a transfer learning setting, which only has a source domain and a target domain, and fine-tuning performs well in transfer learning. So if the tasks are similar to each other, we can simply use Transformer-F rather than MAML.
Model-Agnostic Meta-Learning (MAML) [Finn et al., 2017] is one of the most popular meta-learning methods. It is trained on plenty of tasks (i.e. small data sets) to get a parameter initialization which is easy to adapt to target tasks with a few samples. As a model-agnostic framework, MAML is successfully employed in different NLP applications. Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020].
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personalized dialogue dataset collected from Weibo conversations with 371/40/38 users for meta-training/meta-validation/meta-testing. Each user has 1200 utterances on average.
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/5/10 tasks for meta-training/meta-validation/meta-testing.
D
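The inner/outer MAML update referenced above can be sketched in PyTorch as follows (a minimal sketch for a linear model with a single inner gradient step; shapes, step sizes, and the squared loss are illustrative assumptions):

    import torch

    W = torch.zeros(4, 1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    alpha, beta = 0.1, 0.01          # inner and outer learning rates

    def loss(params, x, y):
        W_, b_ = params
        return ((x @ W_ + b_ - y) ** 2).mean()

    def meta_step(tasks):
        meta_loss = 0.0
        for x_tr, y_tr, x_va, y_va in tasks:
            # Inner update: one gradient step on the task's training split.
            g_W, g_b = torch.autograd.grad(loss((W, b), x_tr, y_tr), (W, b),
                                           create_graph=True)
            fast = (W - alpha * g_W, b - alpha * g_b)
            # Outer objective: validation loss of the adapted parameters.
            meta_loss = meta_loss + loss(fast, x_va, y_va)
        for p, g in zip((W, b), torch.autograd.grad(meta_loss, (W, b))):
            p.data -= beta * g       # meta (outer) update

    tasks = [(torch.randn(8, 4), torch.randn(8, 1),
              torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
    meta_step(tasks)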
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Section IV. Simulation results are given in Section V, and finally Section VI concludes this paper.
A CCA-enabled UAV mmWave network is considered in this paper. Here, we first establish the DRE-covered CCA model in Section II-A. Then the system setup of the considered UAV mmWave network is described in Section II-B. Finally, the beam tracking problem for the CA-enabled UAV mmWave network is modeled in Section II-C.
In addition, the AOAs and AODs should be tracked in the highly dynamic UAV mmWave network. To this end, in Section IV we will further propose a novel predictive AOA/AOD tracking scheme in conjunction with tracking error treatment to address the high mobility challenge, then we integrate these operations into the codebook-based SPAS to achieve reliable beam-tracking for the considered UAV mmWave network.
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing a new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-based beam training and tracking schemes have been proposed for conventional mmWave networks with uniform ULA and UPA [22, 23]. These prior works inspire us to propose a specialized new codebook design and the corresponding codeword selection/processing strategy that can drive the CCA to achieve fast beam tracking in the highly dynamic UAV mmWave network. To this end, the properties of the CCA should be exploited in the design of the codebook, which are briefly discussed as follows.
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Section IV. Simulation results are given in Section V, and finally Section VI concludes this paper.
A
There are other logics, incomparable in expressiveness with $\mathsf{FO}^2_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The
In addition, to make the main line of argument clearer, we consider only the finite graph case in the body of the paper, which already implies decidability of the finite satisfiability of $\mathsf{FO}^2_{\textup{Pres}}$.
Related one-variable fragments in which we have only a unary relational vocabulary and the main quantification is $\exists^{S}x\,\phi(x)$ are known to be decidable (see, e.g. [2]), and their decidability is the basis
There are other logics, incomparable in expressiveness with $\mathsf{FO}^2_{\textup{Pres}}$, where periodicity of the spectrum has been proven [17]. The
The paper [4] shows decidability for a logic with incomparable expressiveness: the quantification allows a more powerful quantitative comparison, but must be guarded – restricting the counts only of sets of elements that are adjacent to a given element.
D
Deep reinforcement learning achieves phenomenal empirical successes, especially in challenging applications where an agent acts upon rich observations, e.g., images and texts. Examples include video gaming (Mnih et al., 2015), visuomotor manipulation (Levine et al., 2016), and language generation (He et al., 2015). Such empirical successes are empowered by expressive nonlinear function approximators such as neural networks, which are used to parameterize both policies (actors) and value functions (critics) (Konda and Tsitsiklis, 2000). In particular, the neural network learned from interacting with the environment induces a data-dependent feature representation, which embeds rich observations into a latent space encoding semantic structures (Hinton, 1986; Bengio, 2012; Yosinski et al., 2014; LeCun et al., 2015). In contrast, classical reinforcement learning mostly relies on a handcrafted feature representation that is fixed throughout learning (Sutton and Barto, 2018).
In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal.
Moreover, soft Q-learning is equivalent to a variant of policy gradient (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). Hence, Proposition 6.4 also characterizes the global optimality and convergence of such a variant of policy gradient.
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). In particular, we aim to characterize how an overparameterized two-layer neural network and its induced feature representation evolve in TD and Q-learning, especially their rate of convergence and global optimality. A fundamental obstacle, however, is that such an evolving feature representation possibly leads to the divergence of TD and Q-learning. For example, TD converges when the value function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997).
D
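For reference, the TD(0) update with a fixed linear feature representation mentioned above can be sketched as follows (feature vectors, step size, and discount factor are illustrative):

    import numpy as np

    def td0_update(w, phi_s, phi_next, r, gamma=0.99, eta=0.05):
        """One TD(0) step for a linear value function V(s) = w . phi(s)."""
        td_error = r + gamma * (phi_next @ w) - (phi_s @ w)
        return w + eta * td_error * phi_s

    w = np.zeros(4)
    w = td0_update(w,
                   phi_s=np.array([1.0, 0.0, 0.5, 0.0]),
                   phi_next=np.array([0.0, 1.0, 0.0, 0.5]),
                   r=1.0)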
As for the costs, the decoder depth has a strong impact on inference speed, as the decoder has to be computed once for each decoding step during auto-regressive decoding Kasai et al. (2021); Xu et al. (2021c), and the use of only deep encoders Bapna et al. (2018); Wang et al. (2019); Li et al. (2022a); Chai et al. (2020) normally leads to faster inference speed than using both a deep encoder and a deep decoder. But in general, Table 6 shows that our approach uses fewer parameters and leads to faster decoding speed than the baselines to obtain a comparable BLEU score, showing the efficiency of our method.
For machine translation, the performance of the Transformer translation model Vaswani et al. (2017) benefits from including residual connections He et al. (2016) in stacked layers and sub-layers Bapna et al. (2018); Wu et al. (2019b); Wei et al. (2020); Zhang et al. (2019); Xu et al. (2020a); Li et al. (2020); Huang et al. (2020); Xiong et al. (2020); Mehta et al. (2021); Li et al. (2021); Xu et al. (2021d). However, the residual connections within each layer only fuse information through simple, one-step operations Yu et al. (2018), which may make the model “forget” distant layers, and aggregating layers is of profound value to better fuse linguistic information at different levels of representation Peters et al. (2018); Shen et al. (2018); Wang et al. (2018, 2019); Dou et al. (2018, 2019). Selectively aggregating different layer representations of the Transformer may further improve the performance.
For the convergence of deep Transformers, Bapna et al. (2018) propose the Transparent Attention mechanism which allows each decoder layer to attend weighted combinations of all encoder layer outputs. Wang et al. (2019) present the Dynamic Linear Combination of Layers approach that additionally aggregates shallow layers’ outputs for each encoder layer. Wu et al. (2019b) propose a two-stage approach. Wei et al. (2020) introduce a depth-wise GRU to additionally aggregate outputs of all encoder layers for the top decoder layer, but residual connections are still kept. Zhang et al. (2019) and Xu et al. (2020a) propose the layer-wise Depth-Scaled Initialization approach and the Lipschitz constrained parameter initialization approach, respectively, to reduce the standard deviation of layer normalization inputs and to ensure the functionality of residual connection. Kasai et al. (2021); Xu et al. (2021c) propose to accelerate decoding by using deep encoders and shallower decoders. Li et al. (2022a) design an ODE Transformer which is analogous to the Runge-Kutta method. Hao et al. (2022) present approaches to exploring hyperparameters of deep Transformers for low-resource NMT with shallow Transformers.
Multilingual translation uses a single model to translate between multiple language pairs Firat et al. (2016); Johnson et al. (2017); Aharoni et al. (2019). Model capacity has been found crucial for massively multilingual NMT to support language pairs with varying typological characteristics Zhang et al. (2020); Xu et al. (2021a). Using model layers efficiently with depth-wise LSTMs is likely to benefit multilingual NMT.
To test the effectiveness of depth-wise LSTMs in the multilingual setting, we conducted experiments on the challenging massively many-to-many translation task on the OPUS-100 corpus Tiedemann (2012); Aharoni et al. (2019); Zhang et al. (2020). We tested the performance of 6-layer models following the experiment settings of Zhang et al. (2020) for fair comparison. We adopted BLEU Papineni et al. (2002) for translation evaluation with the SacreBLEU toolkit Post (2018) (signature: BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a).
C
introduce here the notation $\mathcal{K}^{\circ}(X) \triangleq \{U\in\uptau \mid U \text{ is compact}\}$. When the topology $\uptau$ is not clear from
$\langle \uptau_{\subseteq_{i}} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle$.
$\left\langle \uptau_{\subseteq_{i}} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\right\rangle = \left\langle \llbracket\mathsf{EFO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\right\rangle$.
topology $\langle \uptau_{\subseteq_{i}} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}\rangle$, i.e.,
$\uptau_{\subseteq_{i}} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{\operatorname{Struct}(\upsigma)}$
C
Overall, the completed framework achieves the lowest error of distortion estimation as shown in Fig. 9, verifying the effectiveness of our proposed approach. For the optimization strategy, BS-2, trained with $\mathcal{L}_{sm}$, performs much better than BS-1, trained with $\mathcal{L}_{2}$, since the $\mathcal{L}_{sm}$ loss function promotes a more stable training process. Due to the effective normalization of the distortion distribution, the network gains explicit spatial guidance with the flip operation on the global distortion context. We also show the training loss of the first 30 epochs derived from BS-2 and BS-2 + FO in Fig. 10, where we can observe that the distribution normalization can significantly accelerate the convergence of the training process. On the contrary, BS-2 without the flip operation suffers from a confused learning period, especially in the first 10 epochs, which indicates that the neural network is unsure how to find a direct optimization path from the distribution difference. Moreover, the ordinal supervision fully measures the strong ordinal correlation in the proposed representation, and thus facilitates the accurate approximation of the distortion distribution. With the special attention mechanism and distortion feature extraction, our learning model gains further improvements using the region of interest mask and the distortion-aware perception layer.
In this section, we first state the details of the synthetic distorted image dataset and the training process of our learning model. Subsequently, we analyze the learning representation for distortion estimation. To demonstrate the effectiveness of each module in our framework, we conduct an ablation study to show the different performances. Additionally, the experimental results of our approach compared with the state-of-the-art methods are exhibited, in both quantitative measurement and visual qualitative appearance. Finally, we discuss two main limitations of our approach and present the possible solutions for future work.
Figure 1: Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. We aim to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneous distortion parameters. In contrast, our proposed approach only requires a part of a distorted image (distortion element) and estimates the ordinal distortion. Due to its explicit description and homogeneity, we can obtain more accurate distortion estimation and achieve better corrected results.
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on hand-crafted features, our approach overcomes the scene limitation and simple camera model assumption, showing more promising generality and flexibility. Compared with the learning distortion rectification methods [8][11][12], which omit the prior knowledge of the distortion, our approach transfers the heterogeneous estimation problem into a homogeneous one, eliminating the implicit relationship between image features and predicted values in a more explicit expression. Benefiting from the effective ordinal supervision and guidance of distortion information during the learning process, our approach outperforms Liao [12] by a significant margin, with approximately 23% improvement on PSNR and 17% improvement on SSIM. Besides the high quality of the rectified image, our approach can obtain the accurate distortion parameters of a distorted image, which is crucial for subsequent tasks such as camera calibration. However, the generation-based methods [11][12] mainly focus on the pixel reconstruction of a rectified image and ignore the parameter estimation.
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations, in which the compared methods can be classified into traditional methods [23][24] and learning methods [8][11][12]. Note that our approach only requires a patch of the input distorted image to estimate the ordinal distortion.
D
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/). We set aside 20% of the samples as the test set and divide the remaining samples into training and validation sets with a ratio of 4:1.
If we avoid these tricks, these methods may suffer from severe performance degradation. For LARS and its variants, the proposal of the layer-wise update strategy is primarily based on empirical observations. Its reasonability and necessity remain doubtful from an optimization perspective.
We compare SNGM with four baselines: MSGD, LARS [34], EXTRAP-SGD [19] and CLARS [12]. For LARS, EXTRAP-SGD and CLARS, we adopt the open source code (https://github.com/NUS-HPC-AI-Lab/LARS-ImageNet-PyTorch, http://proceedings.mlr.press/v119/lin20b.html, https://github.com/slowbull/largebatch).
We use a pre-trained ViT model (https://huggingface.co/google/vit-base-patch16-224-in21k) [4] and fine-tune it on the CIFAR-10/CIFAR-100 datasets. The experiments are implemented based on the Transformers framework (https://github.com/huggingface/transformers). We fine-tune the model with 20 epochs.
We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD. The experiments are implemented based on the DeepCTR framework (https://github.com/shenweichen/DeepCTR-Torch).
D
$\operatorname{support}(\mathcal{D}) \subseteq 2^{\mathcal{C}} \times \mathbb{R}^{\mathcal{F}}$ and, in the black-box setting, $|\mathcal{D}|$ may be uncountably infinite.
The other three results are based on a reduction to a single-stage, deterministic robust outliers problem described in Section 4; namely, convert any $\rho$-approximation algorithm for the robust outlier problem into a $(\rho+2)$-approximation algorithm for the corresponding two-stage stochastic problem. This is similar to a robust supplier problem considered in [3] under the name priority center, and many of the approximation algorithms of [3] can be adapted to our setting.
Stochastic optimization, first introduced in the work of Beale [4] and Dantzig [8], provides a way to model uncertainty in the realization of the input data. In this paper, we give approximation algorithms for a family of problems in stochastic optimization, and more precisely in the 2-stage recourse model [27]. Our formal problem definitions follow.
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unknown but we have the ability to sample or simulate from the distribution. To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic $k$-center variant, where points arrive independently and each point only needs to get covered with some given probability. 2S-Sup is the natural two-stage counterpart of the well-known Knapsack-Supplier problem, which has a well-known 3-approximation [14].
The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the distribution $\mathcal{D}$ is listed explicitly. We use the suffixes BB and Poly to distinguish these settings. For example, 2S-Sup-BB is the previously defined 2S-Sup in the black-box model.
C
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent) rather than being i.i.d. graph sequences as in [12]-[15], and additive and multiplicative communication noises may co-exist in communication links ([21]).
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows.
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independency with identical distribution, Markovian switching, or stationarity, etc. The edge weights are also not required to be nonnegative at every time instant. By introducing the concept of conditional digraphs and developing the stochastic Lyapunov method for distributed optimization over non-stationary randomly time-varying networks, uniformly conditionally joint connectivity condition is established to ensure the convergence of the distributed stochastic optimization algorithms.
I. The local cost functions in this paper are not required to be differentiable and the subgradients only satisfy the linear growth condition. The inner product of the subgradients and the error between the local optimizers’ states and the global optimal solution inevitably appears in the recursive inequality of the conditional mean square error. As a result, the nonnegative supermartingale convergence theorem cannot be applied directly.
We have studied the distributed stochastic subgradient algorithm for the stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions. We have proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then proper algorithm step sizes can be designed so that all nodes’ states converge to the global optimal solution almost surely.
A
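A minimal numerical sketch of the distributed stochastic subgradient iteration discussed above (the mixing weights, noise level, local costs, and step sizes are all illustrative, and the randomly time-varying digraph is replaced here by one fixed row-stochastic weight matrix):

    import numpy as np

    rng = np.random.default_rng(0)
    c = np.array([1.0, 2.0, 4.0])          # each node i minimizes f_i(x) = |x - c_i|
    W = np.array([[0.6, 0.4, 0.0],         # row-stochastic consensus weights
                  [0.3, 0.4, 0.3],
                  [0.0, 0.4, 0.6]])
    x = np.zeros(3)                        # local states

    for t in range(1, 2001):
        alpha = 1.0 / t                    # diminishing step size
        subgrad = np.sign(x - c)           # a subgradient of |x - c_i| at x_i
        noisy_subgrad = subgrad + 0.1 * rng.standard_normal(3)
        x = W @ x - alpha * noisy_subgrad  # consensus step, then subgradient step

    # All states should end up near the minimizer of sum_i |x - c_i| (here, 2.0).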
For instance, since the random output tables in Figure 3 comply with $\frac{1}{2}$-probability, for any QI value whose corresponding column has at least one probability greater than 0, there are at least 2 records that can carry the QI value.
In this work, we propose a novel technique called Mutual Cover (MuCo) to impede the adversary from matching the combination of QI values while overcoming the above issues. The key idea of MuCo is to make similar tuples cover for each other by randomizing their QI values according to random output tables.
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution in the original data. Second, the anonymization of MuCo is a “black box” process for recipients because the only difference between the original data and the anonymized data is that some original QI values are replaced with random values. Thus, the adversary cannot determine which QI values are altered or the ranges of the variations, so the matching tuples are more likely to be wrong or even nonexistent when the adversary uses more QI values to match, while the adversary obtains far more matching records if the combination of QI values used is not large enough. For the recipient, in contrast, the results of query statements are specific records rather than groups; accordingly, the results are more accurate. Extensive experiments also illustrate the effectiveness of the proposed method.
This section presents the algorithm to implement the Mutual Cover (MuCo) framework (the code is available at https://github.com/liboyuty/Mutual-Cover). We aim to achieve two goals. First, MuCo satisfies $\delta$-probability to hinder the adversary from matching the combination of QI values. Second, the records cover for each other at the minimum cost, i.e., maintaining the original QI values as much as possible. The procedure is given in Algorithm 1.
For instance, suppose that we add another QI attribute of gender as shown in Figure 4, the mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI attribute (i.e., age and gender) within each group. Finally, an anonymized table, as shown in Figure 4, is generated by replacing the original QI values with the random output values. Note that, since the anonymization process is hidden, the adversary does not know the partition of groups and the random output tables. Therefore, the adversary can not determine which QI values are changed as well as the ranges of the variations.
C
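A highly simplified sketch of the randomization step described above (group formation and the δ-probability check are omitted; the per-group random output distribution is taken to be the group's own empirical distribution of the QI attribute):

    import random

    def randomize_group(qi_values, seed=0):
        """Replace each record's QI value with a draw from the group's own values,
        so records in the group cover for each other while the group's value
        distribution is roughly preserved."""
        rng = random.Random(seed)
        return [rng.choice(qi_values) for _ in qi_values]

    group_ages = [23, 24, 24, 25]
    print(randomize_group(group_ages))   # e.g. [24, 23, 25, 24]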
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with Res2NetR101 backbone yields 74.58 mAP on the validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRend Kirillov et al. (2020). Most of these detectors focus on an overall performance on public datasets like COCO, which contains much smaller instances than 3D-FUTURE, while paying less attention to large objects segmentation. As illustrated in Figure 1, the size distribution of bounding boxes in 3D-FUTURE and COCO indicates that the former contains much larger objects while the latter is dominated by smaller instances. Thus, the prominent methods used in COCO, like MaskRCNN He et al. (2017) and HTC, may generate blurry contours for large instances. Their mask heads output segmentation from a limited small feature size (e.g., $14\times 14$), which is dramatically insufficient to represent large objects. All of these motivate us to segment large instances in a fine-grained and high-quality manner. SOLOv2 builds an efficient single-shot framework with strong performance and dynamically generates predictions with much larger mask size (e.g., 1/4 scale of input size) than HTC. PointRend iteratively renders the output mask over adaptively sampled uncertain points in a coarse-to-fine fashion, which is naturally suitable for generating smooth and fine-grained instance boundaries. By conducting extensive experiments on HTC, SOLOv2 and PointRend, PointRend succeeds in producing finer mask boundaries and significantly outperforms other methods by a large margin. Our step-by-step modifications adopted on PointRend finally achieves state-of-the-art performance on 3D-FUTURE dataset, which yields 79.2 mAP and 77.38 mAP on validation and test set respectively. The final submission is an ensemble of 5 PointRend models with slightly different settings, reaching the 1st place in this competition.
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. (2020) and BlendMask Chen et al. (2020) on COCO. In SOLOv2, the unified mask feature branch is dynamically convoluted by learned kernels, and the adaptively generated mask for each location benefits from the whole image view instead of cropped region proposals like HTC. Using ResNeXt101-64x4d plugined with DCN and GC block, SOLOv2 achieves 75.29 mAP on validation set (see Table 1). It’s worth noting that other attempts, including NASFPN, data augmentation and Mask Scoring, bring little improvement in our experiments.
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from original 14 to 26 for coarse prediction head, and from 14 to 24 for fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our splitted validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020). In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission.
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. A mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with Res2NetR101 backbone yields 74.58 mAP on the validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
B
$I(f) < 1, \quad \text{and} \quad H(|\hat{f}|^{2}) > \frac{n}{n+1}\log n.$
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known.
(with the convention $0\log 0 := 0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_2$ norm $1$ then the sequence $\{|\hat{f}(A)|^2\}_{A\subseteq[n]}$ sums up to $1$, and thus this is the usual definition of the entropy of this probability distribution.
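As a sanity check of the quantities involved, here is a small brute-force sketch (practical only for tiny $n$) that computes the Fourier coefficients, the total influence $I(f) = \sum_A |A|\,|\hat f(A)|^2$, and the spectral entropy $H(|\hat f|^2)$ of a function on $\{-1,1\}^n$ with $L_2$ norm $1$, passed in as an ordinary Python callable.

```python
import itertools
import math

def fourier_entropy_and_influence(f, n):
    """Brute-force Fourier analysis of f : {-1, 1}^n -> C with ||f||_2 = 1."""
    cube = list(itertools.product([-1, 1], repeat=n))
    coeffs = {}
    for subset in itertools.product([0, 1], repeat=n):   # indicator vector of A ⊆ [n]
        # \hat f(A) = E_x [ f(x) * prod_{i in A} x_i ]
        total = 0.0
        for x in cube:
            chi = 1
            for xi, ind in zip(x, subset):
                if ind:
                    chi *= xi
            total += f(x) * chi
        coeffs[subset] = total / len(cube)

    weights = {A: abs(c) ** 2 for A, c in coeffs.items()}
    influence = sum(sum(A) * w for A, w in weights.items())                  # I(f)
    entropy = -sum(w * math.log2(w) for w in weights.values() if w > 0)      # 0 log 0 := 0
    return influence, entropy

# Example: the parity function chi_[n] puts all spectral mass on the top set,
# so I(f) = n while H(|\hat f|^2) = 0.
inf_, ent_ = fourier_entropy_and_influence(lambda x: math.prod(x), 3)
print(inf_, ent_)   # ≈ 3.0, 0.0
```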
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
B
We propose a parameter-free algorithm called Ada-LSVI-UCB-Restart, an adaptive version of LSVI-UCB-Restart, and prove that it can achieve $\tilde{O}(B^{1/4}d^{5/4}H^{5/4}T^{3/4})$ dynamic regret without knowing the total variations.
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic regret is a stronger and more appropriate performance measure than static regret, but it is also more challenging for algorithm design and analysis. To incorporate function approximation, we focus on a subclass of MDPs in which the reward and transition dynamics are linear in a known feature map (Melo & Ribeiro, 2007), termed linear MDPs. For any linear MDP, the value function of any policy is linear in the known feature map, since the Bellman equation is linear in the reward and transition dynamics (Jin et al., 2020). Since the optimal policy is greedy with respect to the optimal value function, linear function approximation suffices to learn the optimal policy. For nonstationary linear MDPs, we show that one can design a near-optimal, statistically efficient algorithm that achieves sublinear dynamic regret as long as the total variation of the reward and transition dynamics is sublinear. Let $T$ be the total number of time steps, $B$ be the total variation of the reward and transition functions throughout the entire time horizon, $d$ be the ambient dimension of the features, and $H$ be the planning horizon.
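For concreteness, the linear MDP condition referenced above (Jin et al., 2020) says that at every step $h$ there exist a vector $\theta_h \in \mathbb{R}^d$ and $d$ signed measures $\mu_h$ over the state space such that
$$ r_h(s,a) = \langle \phi(s,a), \theta_h \rangle, \qquad \mathbb{P}_h(s' \mid s,a) = \langle \phi(s,a), \mu_h(s') \rangle, $$
which implies that $Q_h^{\pi}(s,a) = \langle \phi(s,a), w_h^{\pi} \rangle$ for some $w_h^{\pi} \in \mathbb{R}^d$ and any policy $\pi$, so least-squares value iteration over a $d$-dimensional parameter is well specified. In the nonstationary setting, $\theta_h$ and $\mu_h$ are additionally allowed to drift across episodes, subject to the total variation budget $B$.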
Bandit problems can be viewed as a special case of MDP problems with unit planning horizon. It is the simplest model that captures the exploration-exploitation tradeoff, a unique feature of sequential decision-making problems. There are several ways to define nonstationarity in the bandit literature. The first one is piecewise-stationary (Garivier & Moulines, 2011), which assumes the expected rewards of arms change in a piecewise manner, i.e., stay fixed for a time period and abruptly change at unknown time steps. The second one is to quantify the total variations of expected rewards of arms (Besbes et al., 2014). The general strategy to adapt to nonstationarity
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume that the environment is stationary, which is insufficient to model problems with time-varying dynamics. For example, consider online advertising. The instantaneous reward is the payoff when viewers are redirected to an advertiser, and the state is defined as the details of the advertisement and user contexts. If the target users’ preferences are time-varying, time-invariant reward and transition functions are unable to capture the dynamics. In general, nonstationary random processes naturally occur in many settings and are able to characterize larger classes of problems of interest (Cover & Pombra, 1989). Can one design a theoretically sound algorithm for large-scale nonstationary MDPs? In general, it is impossible to design an algorithm that achieves sublinear regret for MDPs with non-oblivious adversarial reward and transition functions in the worst case (Yu et al., 2009). What, then, is the maximum nonstationarity a learner can tolerate while adapting to the time-varying dynamics of an MDP with a potentially infinite number of states? This paper addresses these two questions.
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs, mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and the reward and transition functions are allowed to change $l$ times. They show that UCRL2 with restart achieves $\tilde{O}(l^{1/3}T^{2/3})$ dynamic regret, where $T$ is the time horizon. Later works (Ortner et al., 2020; Cheung et al., 2020; Fei et al., 2020) generalize the nonstationary setting to allow the reward and transition functions to vary at any number of time steps, as long as the total variation is bounded. Specifically, the work of Ortner et al. (2020) proves that UCRL with restart achieves $\tilde{O}((B_r+B_p)^{1/3}T^{2/3})$ dynamic regret (when the variation in each epoch is known), where $B_r$ and $B_p$ denote the total variation of the reward and transition functions over all time steps. Cheung et al. (2020) propose an algorithm based on UCRL2 that combines sliding windows with a confidence-widening technique. Their algorithm has a slightly worse dynamic regret bound of $\tilde{O}((B_r+B_p)^{1/4}T^{3/4})$ without knowing the local variations. Further, Fei et al. (2020) develop an algorithm which directly optimizes the policy and enjoys near-optimal regret in the low-variation regime. A different model of nonstationary MDPs is proposed by Lykouris et al. (2021), which smoothly interpolates between stationary and adversarial environments by assuming that most episodes are stationary except for a small number of adversarial episodes. Note that Lykouris et al. (2021) consider linear function approximation, but their nonstationarity assumption is different from ours. In this paper, we assume the variation budget for the reward and transition functions is bounded, which is similar to the settings in Ortner et al. (2020); Cheung et al. (2020); Mao et al. (2021).
Concurrently to our work, Touati & Vincent (2020) propose an algorithm combining weighted least-squares value iteration with the optimistic principle, achieving the same $\tilde{O}(B^{1/4}d^{5/4}H^{5/4}T^{3/4})$ regret as we do with knowledge of the total variation $B$. They do not have a dynamic regret bound when knowledge of the local variations is available. Their proposed algorithm uses exponential weights to smoothly forget data that are far in the past. By contrast, our algorithm periodically restarts the LSVI-UCB algorithm from scratch to handle the nonstationarity and is much more computationally efficient. Another concurrent work by Wei & Luo (2021) follows a substantially different approach to achieve the optimal $T^{2/3}$ regret. The key idea of their algorithm is to run multiple base algorithms for stationary instances with different durations simultaneously, under a carefully designed random schedule. Compared with them, our algorithm has a slightly worse rate but a much better computational complexity, since we only need to maintain one instance of the base algorithm. Neither of these two concurrent works includes empirical results, and we are also the first to conduct numerical experiments on online exploration for nonstationary MDPs (Section 6). Other related and concurrent works investigate online exploration in different classes of nonstationary MDPs, including linear kernel MDPs (Zhong et al., 2021), constrained tabular MDPs (Ding & Lavaei, 2022), and the stochastic shortest path problem (Chen & Luo, 2022).
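To make the contrast with forgetting-based approaches concrete, here is a minimal sketch of the restart mechanism described above, written with a hypothetical agent/environment interface: the base stationary algorithm is simply re-initialized every `restart_period` episodes, discarding all data collected under possibly outdated dynamics. How `restart_period` is chosen from $B$, $d$, $H$ and $T$ is what the analysis tunes; it is left as a free parameter here.

```python
def run_with_restarts(make_agent, env, num_episodes, restart_period):
    """Periodically re-initialize a base stationary algorithm from scratch.

    make_agent:     factory returning a fresh LSVI-UCB-style agent (hypothetical interface).
    env:            episodic environment with reset()/step() (hypothetical interface).
    restart_period: number of episodes between restarts; tuned from B, d, H, T in the analysis.
    """
    agent, returns = make_agent(), []
    for ep in range(num_episodes):
        if ep > 0 and ep % restart_period == 0:
            agent = make_agent()  # forget all data gathered under the old dynamics
        state, done, total = env.reset(), False, 0.0
        while not done:
            action = agent.act(state)
            next_state, reward, done = env.step(action)
            agent.update(state, action, reward, next_state)
            state, total = next_state, total + reward
        returns.append(total)
    return returns
```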
B
There is a very strong negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2), which is statistically significant ($r(9) = -0.81$, $p < .005$). Trust is built on transparency and truthfulness, and the presence of fake news, which is deceptive and usually meant to serve hidden agendas, may erode trust. It is worthwhile to consider whether the trust in media items is due to people’s own encounters with fake news, or because of secondary factors. In Singapore, there have been active efforts through campaigns from various organizations (e.g., S.U.R.E. (Board, [n.d.]), Better Internet (Council, [n.d.]), VacciNationSG (Lai, 2021)) to raise awareness of misinformation, disinformation and fake news. If people’s trust in media items has been influenced by exposure to the messages of these campaigns, especially among those who might not have personally encountered fake news, this suggests the importance of media literacy education in addressing fake news, particularly when secondary effects such as practicing greater caution due to a lack of trust come into play.
In general, respondents possess a competent level of digital literacy skills, with a majority exercising good news sharing practices. They actively verify news before sharing by checking against multiple sources found through search engines and against authoritative information found on government communication platforms, and they post corrections and warnings when they encounter fake news. That respondents show strong trust in and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts to make government decisions transparent and the readiness of the government to address public concerns through online forums and dialogues (REACH, [n.d.]). There is an opportunity here for the government to launch programs such as campaigns, calls to action and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions.
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been mounted to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this as a concern for the CSCW community. As global efforts addressing fake news take off, we aim to understand the perceptions and practices of news sharing and fake news in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
Singapore is a city-state with an open economy and a diverse population, which makes it an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Government to more directly address falsehoods that hurt the public interest. The rising attention to fake news on the local scene has motivated various research, including studies on the perceptions and motivations of fake news sharing (Chen et al., 2015) and responses to fake news (Edson C Tandoc et al., 2020). Although there are parallels between these studies and ours, we want to highlight that our study explores fake news in general media rather than solely social media, examining both usage and trust. Furthermore, we investigate more broadly the attitudes and behaviors around news sharing and fake news.
A