context  stringlengths   250 – 4.37k
A        stringlengths   250 – 8.2k
B        stringlengths   250 – 4.23k
C        stringlengths   250 – 4.99k
D        stringlengths   250 – 3.54k
label    stringclasses   4 values
to the weight such that a Gauss-Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ multiplied by $f(x)$ is
\[ R_{n}^{m}(x)=\sum_{s=0}^{(n-m)/2}(-1)^{s}\binom{\frac{n-m}{2}}{s}\binom{\frac{D}{2}+n-s-1}{\frac{n-m}{2}}\,x^{n-2s}. \]
that adds the results of $1+(n-m)/2$ Gaussian integrations for moments $x^{D-1+n-2s}$. The disadvantage
Gaussian integration rules for integrals $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)f(x)\,dx$
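As a hedged illustration of the expansion above (ours, not the source's code), the snippet below evaluates $R_n^m(x)$ term by term, so that $\int_0^1 x^{D-1}R_n^m(x)f(x)\,dx$ can be split into $1+(n-m)/2$ moment integrations, one per monomial. The function name and the use of scipy.special.binom (which accepts the half-integer arguments arising for odd $D$) are our choices.

```python
# Minimal sketch: evaluate R_n^m(x) from the binomial expansion, one monomial per
# term; each term x^(n-2s) can then be paired with a moment rule for x^(D-1+n-2s).
import numpy as np
from scipy.special import binom

def R(n, m, D, x):
    """R_n^m(x) = sum_s (-1)^s C((n-m)/2, s) C(D/2 + n - s - 1, (n-m)/2) x^(n-2s)."""
    x = np.asarray(x, dtype=float)
    k = (n - m) // 2                     # assumes n - m is even
    total = np.zeros_like(x)
    for s in range(k + 1):
        coeff = (-1) ** s * binom(k, s) * binom(D / 2 + n - s - 1, k)
        total += coeff * x ** (n - 2 * s)
    return total
```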
to the weight such that a Gauss-Legendre integration for moments $x^{D+m-1}$ is engaged and the wiggly remainder of $R_n^m$ multiplied by $f(x)$ is
B
In other words, our algorithm initialises $w:=g$, $u_1:=1$ and $u_2:=1$ and multiplies $w$, $u_1$ and $u_2$ by the transvections necessary to render $g=u_1wu_2$ with $w$ monomial and $u_1,u_2$ lower unitriangular.
For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements $-g_{ic}g_{rc}^{-1}$ in (11) (and similarly in (12)) are given to us as polynomials of degree at most $f-1$ in the primitive element $\omega$, where $q=p^{f}$ for some prime $p$.
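To make the row and column transvections concrete, here is a short self-contained sketch (ours, not the paper's MSLP construction, and with a pivot order that is not claimed to match Taylor's algorithm) that produces $g=u_1wu_2$ over a prime field $\mathrm{GF}(p)$ with $w$ monomial and $u_1,u_2$ lower unitriangular; the clearing multipliers are exactly field elements of the form $-g_{ic}g_{rc}^{-1}$.

```python
# Sketch only: one way to decompose an invertible matrix over GF(p), p prime,
# as g = u1 * w * u2 with w monomial and u1, u2 lower unitriangular.
import numpy as np

def monomial_decomposition(g, p):
    n = g.shape[0]
    w = g.copy() % p
    u1 = np.eye(n, dtype=np.int64)
    u2 = np.eye(n, dtype=np.int64)
    pivot_rows = set()
    inv = lambda a: pow(int(a), p - 2, p)          # inverse in GF(p)
    for c in range(n - 1, -1, -1):                 # columns right to left
        r = min(i for i in range(n) if i not in pivot_rows and w[i, c] % p != 0)
        pivot_rows.add(r)
        piv_inv = inv(w[r, c])
        for i in range(r + 1, n):                  # clear below the pivot (row transvections)
            if w[i, c] % p:
                lam = (-w[i, c] * piv_inv) % p     # the field element -g_ic * g_rc^{-1}
                w[i, :] = (w[i, :] + lam * w[r, :]) % p
                u1[:, r] = (u1[:, r] - lam * u1[:, i]) % p   # keep g = u1 * w * u2
        for j in range(c):                         # clear left of the pivot (column transvections)
            if w[r, j] % p:
                mu = (-w[r, j] * piv_inv) % p
                w[:, j] = (w[:, j] + mu * w[:, c]) % p
                u2[c, :] = (u2[c, :] - mu * u2[j, :]) % p
    return u1, w, u2

# sanity check on a random invertible matrix over GF(7)
p = 7
rng = np.random.default_rng(0)
while True:
    g = rng.integers(0, p, size=(4, 4))
    if round(np.linalg.det(g)) % p:                # crude invertibility test mod p
        break
u1, w, u2 = monomial_decomposition(g, p)
assert np.array_equal((u1 @ w @ u2) % p, g % p)
```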
does not yield an upper bound for the memory requirement in a theoretical analysis. Moreover, SlotUsagePattern improves the memory usage, but the result is not necessarily optimal overall and, hence, the number of slots can still be greater than that of a carefully constructed MSLP. It should also be mentioned that in some cases the number of slots can even be smaller than that of a constructed MSLP, but this cannot be predicted without a careful analysis, which would amount to an MSLP construction as in this paper.
As for the simpler examples considered in the previous section, here to keep the presentation clear we do not write down explicit MSLP instructions, but instead determine the cost of Algorithm 3 while keeping track of the number of elements that an MSLP for this algorithm would need to keep in memory at any given time.
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
C
The key to approximating (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That allows replacing $P$ by a semi-local operator $P^{j}$. This works well for low-contrast coefficients and is the subject of Section 3.2. For high-contrast coefficients, however, the exponential decay rate is smaller, and to circumvent that we consider in Section 3.1 a spectral decomposition of $\tilde{\Lambda}_{h}^{f}$.
It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it is reasonable to solve them locally using patches of elements. We note that the idea of performing global static condensation goes back to the Variational Multiscale Finite Element Method (VMS) [MR1660141, MR2300286]. Recently, variations of the VMS
Solving (22) efficiently is crucial for the good performance of the method, since it is the only large-dimensional system in (21), in the sense that its size grows like $h^{-d}$.
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends only weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro elements, removing the dependence on the contrast.
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method less practical. In this paper, in the presence of rough coefficients, spectral techniques are employed to overcome this hurdle: by solving local eigenvalue problems we define a space where the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove the macro-element corner singularities that occur in LOD methods based on
A
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and to floating-point issues in both programs. Our implementations of Alg-K and Alg-CM differ logically in how they handle degenerate cases.
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$'s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code length for this step is 1:7 between Alg-A and Alg-CM.
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.)
Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]), Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases.
A
It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event.
Training data for single tweet classification. Here we follow our assumption that an event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless (the terminology subless indicates, for short, an event with no sub-events) events from the above dataset. In the end, we used 90 rumors and 90 news events associated with 72,452 tweets in total. This results in a highly reliable large-scale ground truth of tweets labelled as news-related and rumor-related, respectively. Note that the label of a tweet is inherited from the event label, and the labeling can thus be considered a semi-automatic process.
We use the same dataset described in Section 5.1. In total – after setting aside 180 events for pre-training the single-tweet model – our dataset contains 360 events, 180 of which are labeled as rumors. Those rumors and news fall comparatively evenly into 8 different categories, namely Politics, Science, Attacks, Disaster, Art, Business, Health and Other. Note that the events in our training data are not necessarily subless, because it is natural for high-impact events (e.g., Missing MH370 or Munich shooting) to contain sub-events. In fact, we empirically found that roughly 20% of our events (mostly news) contain sub-events. As a rumor is often a long-circulating story [10], this results in a rather long time span. In this work, we develop an event identification strategy that focuses on the first 48 hours after the rumor peaks. We also extract 11,038 domains contained in tweets within this 48-hour time range.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We mitigate this by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, closer to news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see that the curve of the Munich shooting event is also close to the curve of average news, indicating the event is more news-related.
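A tiny, hypothetical sketch of the "each tweet votes" aggregation described above; the per-tweet credibility scores are assumed to come from the single-tweet classifier, and the function and variable names are ours, not the paper's.

```python
# Hypothetical helper: average per-tweet credibility into an event-level CreditScore,
# so a burst of rumor-looking sub-event tweets does not dominate the event.
from statistics import mean

def event_credit_score(tweet_scores):
    """tweet_scores: per-tweet credibility values in [0, 1], higher = more news-like."""
    scores = list(tweet_scores)
    return mean(scores) if scores else 0.5   # neutral value when no tweets yet

# Example: a mostly news-like stream with a burst of rumor-like sub-event tweets
print(event_credit_score([0.9, 0.85, 0.2, 0.8, 0.3, 0.95]))
```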
story descriptions we manually constructed queries to retrieve the relevant tweets for 270 rumors with high impact. Our approach to query construction mainly follows [11]. For the news event instances (non-rumor examples), we make use of the manually constructed corpus from Mcminn et al. [21], which covers 500 real-world events. In [21], tweets are retrieved via the Twitter firehose API from the 10th of October 2012 to the 7th of November 2012. The involved events are manually verified and relate to tweets with relevance judgments, which results in a high-quality corpus. From the 500 events, we select the top 230 events with the highest tweet volumes (as a criterion for event impact). Furthermore, we have added 40 other news events, which happened around the time periods of our rumors. This results in a dataset of 270 rumors and 270 news events. The dataset details are shown in Table 1. To serve our learning task, we then construct two distinct datasets for (1) single-tweet credibility and (2) rumor classification.
B
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\left(\sqrt{\frac{\log\log t}{\log t}}\right)$. Our analysis provides a more precise characterization of the iterates, and also shows the convergence is actually quadratically faster (see Section 3). However, Ji and Telgarsky go even further and provide a characterization also when the data is non-separable but $\mathbf{w}(t)$ still goes to infinity.
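A small self-contained illustration (ours, not from the paper): gradient descent on the logistic loss over linearly separable data. The norm of $\mathbf{w}(t)$ grows without bound while its direction $\mathbf{w}(t)/\|\mathbf{w}(t)\|$ stabilizes slowly, consistent with the logarithmically slow directional convergence discussed above. The data and step size are arbitrary choices.

```python
# Minimal sketch: full-batch gradient descent on logistic loss, separable 2D data.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.3, (20, 2)),      # positive class
               rng.normal([-2, -2], 0.3, (20, 2))])   # negative class
y = np.array([1.0] * 20 + [-1.0] * 20)

w = np.zeros(2)
eta = 0.1
for t in range(1, 100001):
    margins = y * (X @ w)
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= eta * grad
    if t in (10, 100, 1000, 10000, 100000):
        # norm keeps growing, normalized direction changes slower and slower
        print(t, np.linalg.norm(w), w / np.linalg.norm(w))
```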
In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6) of the SVM problem (eq. 4) and the associated
The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the results for exponential loss are independent of initialization and require only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects, thereby making the analysis simpler compared to squared loss.
where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ components which are orthogonal to the support vectors in $\mathcal{S}_{1}$, and, asymptotically, have a positive angle with the other support vectors. In this section we first calculate the various convergence rates for the non-degenerate case of Theorem 2, and then write the correction in the zero-measure cases, if there is such a correction.
where the residual $\boldsymbol{\rho}_{k}(t)$ is bounded and $\hat{\mathbf{w}}_{k}$ is the solution of the K-class SVM:
A
The performance of this feature group is not so convincing. The feature $P_a$ from the SpikeM model is the best one of them. The problem with these two models, which we already pointed out in Section 3.2.3, is that both models need substantial data to fit their parameters. After 24 hours, the model trained with these epidemiological features reaches 60% accuracy. In other words, before 24 hours there is no clear propagation pattern for these events. In (kwon2013prominent, ), the durations in the dataset are more than 60 days. In (jin2013epidemiological, ), they use 160 hours of tweet volume to fit the SEIZ models. Their event durations are far larger than our focused 48 hours. The $P_a$ parameter from SpikeM is the only feature that makes even a small contribution to rumor detection in our experiment. It stands for the strength of periodicity in SpikeM. (kwon2013prominent, ) add 3 more parameters $Q_a$, $Q_p$ and $Q_s$ to explain the periodicity of the external shock, but they do not produce the same effect in our experiment, because the 48-hour time period is too short to contain multi-peak patterns.
As shown in Table 11, CreditScore is the best feature in general. Figure 10 shows the results of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, significantly so for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit after 16-20 hours, but not significantly. CrowdWisdom is also a good feature, reaching 75.8% accuracy as a single feature. But its performance is poor (less than 70%) in the first 32 hours, getting better over time (see Table 11). Table 11 also shows the performance of the sentiment feature (PolarityScores), which is generally low. This demonstrates the effectiveness of our curated approach over the sentiments, yet the crowd needs time to unify its views toward the event while absorbing different kinds of information.
The text feature set contains 16 features in total. The feature ranking is shown in Table 7. The best one is NumOfChar, which is the average number of distinct characters in tweets. PolarityScores is the best feature when we test the single-tweet model, but its performance in the time-series model is not ideal. It is true that rumors contain more negative sentiment, but within an event (rumor or news) people can express mixed views about the event (mendoza2010twitter, ; starbird2014rumors, ), such as discussing or denying it, so the performance of PolarityScores worsens over time. Text features overall are shown to be more effective than the Twitter and user feature sets.
As we can see in Figure 9, the best result on average over 48 hours is obtained by the BestSet. The second best is All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we can see that both the best and the worst features are in this set. User features and Twitter features are stable over time at around 82%. The performance of the 3 different models (SIS, SEIZ and SpikeM) describing the propagation pattern of rumors and news is not ideal, especially within 24 hours. CrowdWisdom and CreditScore each contain only one feature, but they already achieve impressive results compared with the User features and Twitter features.
The performance of user features is similar to that of the Twitter features; both are quite stable from the first hour to the last hour. As shown in Table 9, the best feature over 48 hours in the user feature group is UserTweetsPerDays, and it is the best feature overall in the first 4 hours, but its rank decreases as time goes by. Other user-based features like UserReputationScore and UserJoinDate also perform better in the first few hours. That means the sources (the posters in the first few hours) of news and rumors differ considerably from each other. But as more and more users join the discussion, the bias between the two groups of users becomes smaller. After 6 hours, it seems that we can better distinguish rumors based on the tweet contents (text features), rather than relying on the features of users.
A
Evaluation methodology. For RQ1, given an event entity e, at time t, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 instances of the anticipated class, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances, respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials with each of the last 4 bins (using the history bins for training on a rolling basis) for testing, and report the results as the average over the trials.
RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The rightmost three models in each metric are the models proposed in this work. The overall results show that the performances of these models, while better than the baselines (for at least one of the three), vary greatly among the cases. In general, $SVM_{salience}$ performs well at the before stage of breaking events, and badly at the after stage of the same event type, whereas $SVM_{timeliness}$ gives the opposite performance for these cases. For anticipated events, $SVM_{timeliness}$ performs well at the before and after stages, but gives a rather low performance at the during stage. For this event type, $SVM_{salience}$ generally performs worse than $SVM_{timeliness}$. Overall, $SVM_{all}$ with all features combined gives a good and stable performance, but in most cases is not better than the best-performing single-feature-set L2R model. In general, these results support our assumption that salience and timeliness should be traded off for different event types at different event times. For feature importances, we observe stable performances of same-group features across these cases. Salience features from knowledge bases tend to perform better than those from query logs for short-duration or less popular events. We leave a more in-depth analysis of this part for future work.
Results. The baseline and the best results of our first-stage event-type classification are shown in Table 3-top. The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower in weighted F1. Our learned model achieves a marginally better result on the F1 metric.
RQ3. We present the results of the single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall, improving on the baseline, yet not significantly. Our Ensemble model, which is learned to trade off salience and timeliness, achieves the best results for all metrics and outperforms the baseline significantly. As the testing entity queries in this experiment span all event times and all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of the adapted QAC methods. One reason is, as mentioned, that QACs, even time-aware ones, generally favor already salient queries, following the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes such as entities often encounter sparse query-volume traffic, which also contributes to the low results.
We further investigate the identification of event time, which is learned on top of the event-type classification. The gold labels are gathered from the studied times with regard to the event times mentioned previously. We compare the result of the cascaded model with a non-cascaded logistic regression. The results are shown in Table 3-bottom, showing that our cascaded model, with features inherited from the performance of the SVM in the previous task, substantially improves on the single model. However, the overall modest results show the difficulty of this multi-class classification task.
B
where
\[
\begin{cases}
\Theta_{t_{0}:t_{1},a}=\bigl[\theta_{t_{0},a}\;\theta_{t_{0}+1,a}\;\cdots\;\theta_{t_{1}-1,a}\;\theta_{t_{1},a}\bigr]\in\mathbb{R}^{d\times(t_{1}-t_{0})},\\
B_{t-1,a}=\left(\Theta_{0:t-2,a}\Theta_{0:t-2,a}^{\top}+B_{0,a}^{-1}\right)^{-1},\\
L_{t-1,a}=\left(\Theta_{1:t-1,a}\Theta_{0:t-2,a}^{\top}+L_{0,a}B_{0,a}^{-1}\right)B_{t-1,a},\\
V_{t-1,a}=\left(\Theta_{1:t-1,a}-L_{t-1,a}\Theta_{0:t-2,a}\right)\left(\Theta_{1:t-1,a}-L_{t-1,a}\Theta_{0:t-2,a}\right)^{\top}\\
\qquad\qquad+\left(L_{t-1,a}-L_{0,a}\right)B_{0,a}^{-1}\left(L_{t-1,a}-L_{0,a}\right)^{\top}+V_{0,a},\\
U_{t,a}U_{t,a}^{\top}=\left(\theta_{t-1,a}\theta_{t-1,a}^{\top}+B_{t-1,a}^{-1}\right).
\end{cases}
\]
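For concreteness, the statistics in the display above can be transcribed almost directly into numpy. The sketch below is ours (not the authors' code); it assumes the columns of Theta hold the per-step parameter vectors $\theta_{t,a}$ of a single arm and uses placeholder priors B0, L0, V0.

```python
# Unofficial numpy transcription of the displayed statistics for one arm a.
# Theta has shape (d, T) with column t holding theta_{t,a}.
import numpy as np

def sufficient_statistics(Theta, B0, L0, V0):
    Theta_past = Theta[:, :-1]     # Theta_{0:t-2,a}
    Theta_next = Theta[:, 1:]      # Theta_{1:t-1,a}
    B = np.linalg.inv(Theta_past @ Theta_past.T + np.linalg.inv(B0))
    L = (Theta_next @ Theta_past.T + L0 @ np.linalg.inv(B0)) @ B
    resid = Theta_next - L @ Theta_past
    V = resid @ resid.T + (L - L0) @ np.linalg.inv(B0) @ (L - L0).T + V0
    return B, L, V

# toy usage with d = 2 and T = 6 sampled parameter vectors and placeholder priors
d, T = 2, 6
rng = np.random.default_rng(1)
Theta = rng.normal(size=(d, T))
B0 = np.eye(d); L0 = np.zeros((d, d)); V0 = np.eye(d)
B, L, V = sufficient_statistics(Theta, B0, L0, V0)
```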
—i.e., the dependence on past samples decays exponentially, and is negligible after a certain lag— one can establish uniform-in-time convergence of SMC methods for functions that depend only on recent states, see [Kantas et al., 2015] and references therein.
the combination of Bayesian neural networks with approximate inference has also been investigated. Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016].
The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models, and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015].
More broadly, one can establish uniform-in-time convergence for path functionals that depend only on recent states, as the Monte Carlo error of $p_{M}(\theta_{t-\tau:t}|\mathcal{H}_{1:t})$ with respect to $p(\theta_{t-\tau:t}|\mathcal{H}_{1:t})$ is uniformly bounded over time.
A
Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day.
The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day. Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning.
Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening. For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it is possible the discrepancy is a result of missing (glucose and carbohydrate) measurements.
Overall, the distributions of all three kinds of values throughout the day roughly correspond to each other. In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the day.
Likewise, the daily numbers of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients. The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
B
This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module Chen et al. (2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the final encoder output, as motivated by the study of Torralba et al. (2006) who stated that contextual information plays an important role for the allocation of attention. Our implementation of the ASPP architecture thus closely follows the modifications proposed by Chen et al. (2017). These authors augmented multi-scale information with global context and demonstrated performance improvements on semantic segmentation tasks.
To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that resulted in 1,280 activation maps. This representation was then forwarded to a $1\times 1$ convolutional layer with 256 channels. While the total number of feature maps stayed constant, the amount of trainable parameters increased in this ablation setting. Table 6 summarizes the results according to validation instances of five eye tracking datasets for the model with and without an ASPP module. It can be seen that our multi-scale architecture reached significantly higher performance (one-tailed paired t-test) on most metrics and is therefore able to leverage the information captured by convolutional layers with different receptive field sizes. An ablation analysis of the multi-level component adapted from Cornia et al. (2016) can be viewed in A.
In this work, we laid out three convolutional layers with kernel sizes of $3\times 3$ and dilation rates of 4, 8, and 12 in parallel, together with a $1\times 1$ convolutional layer that could not learn new spatial dependencies but nonlinearly combined existing feature maps. Image-level context was represented as the output after global average pooling (i.e. after averaging the entries of a tensor across both spatial dimensions to a single value) and then brought to the same resolution as all other representations via bilinear upsampling, followed by another point-wise convolutional operation. Each of the five branches in the module contains 256 filters, which resulted in an aggregated tensor of 1,280 feature maps. Finally, the combined output was forwarded to a $1\times 1$ convolutional layer with 256 channels that contained the resulting multi-scale responses.
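A rough PyTorch sketch of an ASPP block matching this description is given below; it is our reconstruction, not the authors' code, and it omits details such as normalization and activation functions that the text does not specify.

```python
# Sketch: three 3x3 dilated branches (rates 4, 8, 12), a 1x1 branch, and a
# global-average-pooling branch, each with 256 filters, concatenated to
# 1,280 maps and fused by a final 1x1 convolution with 256 channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_channels, branch_channels=256):
        super().__init__()
        self.branch1x1 = nn.Conv2d(in_channels, branch_channels, kernel_size=1)
        self.branches3x3 = nn.ModuleList([
            nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                      padding=rate, dilation=rate)
            for rate in (4, 8, 12)
        ])
        self.context_conv = nn.Conv2d(in_channels, branch_channels, kernel_size=1)
        self.project = nn.Conv2d(5 * branch_channels, branch_channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [self.branch1x1(x)] + [branch(x) for branch in self.branches3x3]
        # image-level context: global average pooling, bilinear upsampling,
        # then a point-wise convolution, as described in the text
        context = F.adaptive_avg_pool2d(x, 1)
        context = F.interpolate(context, size=(h, w), mode="bilinear",
                                align_corners=False)
        feats.append(self.context_conv(context))
        return self.project(torch.cat(feats, dim=1))   # 1,280 -> 256 maps

# e.g. aspp = ASPP(in_channels=512); y = aspp(torch.randn(1, 512, 30, 40))
```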
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which captured information at different spatial scales in parallel. Finally, the input image dimensions were restored via the decoder network. Subscripts beneath convolutional layers denote the corresponding number of feature maps.
To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architecture, similar to the model by Pan et al. (2017), results in better approximations. Here we employed three upsampling blocks consisting of a bilinear scaling operation, which doubled the number of rows and columns, and a subsequent convolutional layer with kernel size $3\times 3$. This setup has previously been shown to prevent checkerboard artifacts in the upsampled image space in contrast to deconvolution Odena et al. (2016). Besides an increase of resolution throughout the decoder, the amount of channels was halved in each block to yield 32 feature maps. Our last network layer transformed activations into a continuous saliency distribution by applying a final $3\times 3$ convolution. The outputs of all but the last linear layer were modified via rectified linear units. Figure 2 visualizes the overall architecture design as described in this section.
B
There is a polynomial-time $\mathrm{O}(\sqrt{\log(\mathsf{opt})}\log(n))$-approximation algorithm and a polynomial-time $\mathrm{O}(\sqrt{\log(\mathsf{opt})}\,\mathsf{opt})$-approximation algorithm for MinCutwidth.
In this section, we discuss some examples that illustrate the concepts of marking sequences and the locality number, and we also discuss some word combinatorial properties related to the locality number. Note that for illustration purposes, the example words considered in this section are not necessarily condensed.
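As a concrete companion to these examples, the following brute-force sketch (ours) computes the locality number of a short word, assuming the usual marking-sequence definition: letters are marked one by one in some order, and the locality number is the smallest, over all orders, of the largest number of marked blocks that ever occurs.

```python
# Brute-force locality number of a short word (exponential in the alphabet size,
# so only suitable for small illustrative examples).
from itertools import permutations

def blocks(marked_positions, length):
    """Number of maximal blocks of marked positions in a word of given length."""
    count, inside = 0, False
    for i in range(length):
        if i in marked_positions and not inside:
            count, inside = count + 1, True
        elif i not in marked_positions:
            inside = False
    return count

def locality_number(word):
    letters = set(word)
    best = len(word)                              # trivial upper bound
    for order in permutations(letters):           # all marking sequences
        marked, worst = set(), 0
        for letter in order:                      # mark one letter at a time
            marked |= {i for i, ch in enumerate(word) if ch == letter}
            worst = max(worst, blocks(marked, len(word)))
        best = min(best, worst)
    return best

print(locality_number("abab"))   # 2: every marking sequence creates two blocks first
```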
The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the locality number; furthermore, we investigate the performance of direct greedy strategies for approximating the locality number. Finally, since we consider this of high importance independent of the locality number, we provide a direct reduction from cutwidth to pathwidth in Section 6.
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection. Next, we conclude this section by providing a formal proof of Lemma 5.7, which is the main result of this section.
In Section 2, we give basic definitions (including the central parameters: the locality number, the cutwidth and the pathwidth). Next, in Section 3, we discuss the concept of the locality number with some examples and some word-combinatorial considerations. The purpose of this section is to develop a better understanding of this parameter for readers less familiar with string parameters and combinatorics on words (the technical statements of this section are formally proven in the appendix).
D
Besides solving the data and interpretability problems, researchers in cardiology could utilize already established deep learning architectures that have not yet been widely applied in cardiology, such as capsule networks. Capsule networks [265] are deep neural networks that require less training data than CNNs, and their layers capture the ‘pose’ of features, thus making their inner workings more interpretable and closer to the human way of perception.
They have been used by a number of publications in cardiology for medical history prediction [70], ECG beat classification [86] and CVD prediction using fundus images [192]. Another, simpler tool for interpretability is saliency maps [264], which use the gradient of the output with respect to the input and thus intuitively show the regions that contribute most toward the output.
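A minimal PyTorch sketch of that gradient-based saliency idea follows; the model here is an arbitrary stand-in, not one of the cited cardiology networks.

```python
# Gradient saliency: |d score / d input| per pixel for a chosen output class.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 1, 64, 64, requires_grad=True)   # e.g. a single-channel image
score = model(x)[0, 1]                               # score of the class of interest
score.backward()
saliency = x.grad.abs().squeeze()                    # per-pixel relevance map
```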
Amongst their experiments they found that rotational and scaling data augmentations did not help increase accuracy, attributing this to interpolation altering pixel intensities, which is problematic due to the sensitivity of CNNs to pixel distribution patterns.
However, an important constraint that currently limits them from achieving wider use is their high computational cost compared to CNNs, due to the ‘routing by agreement’ algorithm. Their recent uses in medicine include brain tumor classification [266] and breast cancer classification [267].
The method of Lessman et al. [195] for coronary calcium scoring utilizes three independently trained CNNs to estimate a bounding box around the heart, in which connected components above a Hounsfield unit threshold are considered candidates for CACs. Classification of extracted voxels was performed by feeding two-dimensional patches from three orthogonal planes into three concurrent CNNs to separate them from other high-intensity lesions.
C
An important step in this direction was made by Leibfried et al. (2016), which extends the work of Oh et al. (2015) by including reward prediction, but does not use the model to learn policies that play the games. Most of these approaches, including ours, encode knowledge of the game in an implicit way. Unlike this, there are works in which modeling is more explicit; for example, Ersen & Sariel (2014) use the testbed of the Incredible Machine to learn object behaviors and their interactions.
Using models of environments, or informally giving the agent ability to predict its future, has a fundamental appeal for reinforcement learning. The spectrum of possible applications is vast, including learning policies from the model (Watter et al., 2015; Finn et al., 2016; Finn & Levine, 2017; Ebert et al., 2017; Hafner et al., 2019; Piergiovanni et al., 2018; Rybkin et al., 2018; Sutton & Barto, 2017, Chapter 8), capturing important details of the scene (Ha & Schmidhuber, 2018), encouraging exploration (Oh et al., 2015), creating intrinsic motivation (Schmidhuber, 2010) or counterfactual reasoning (Buesing et al., 2019).
have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control. Our video models of Atari environments described in Section 4 are motivated by models developed in the context of robotics. Another source of inspiration are discrete autoencoders proposed by van den Oord et al. (2017) and Kaiser & Bengio (2018).
Notable exceptions are the works of Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this method does not actually aim to model or predict future frames, and achieves clear but relatively modest gains in efficiency.
Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using variants of the DQN algorithm (Mnih et al., 2013; 2015; Hessel et al., 2018) and actor-critic algorithms (Mnih et al., 2016; Schulman et al., 2017; Babaeizadeh et al., 2017b; Wu et al., 2017; Espeholt et al., 2018). The most successful methods in this domain remain model-free algorithms (Hessel et al., 2018; Espeholt et al., 2018). Although the sample complexity of these methods has substantially improved recently, it remains far higher than the amount of experience required for human players to learn each game (Tsividis et al., 2017). In this work, we aim to learn Atari games with a budget of just 100K agent steps (400K frames), corresponding to about two hours of play time. Prior methods are generally not evaluated in this regime, and we therefore optimized Rainbow (Hessel et al., 2018) for optimal performance on 1M steps, see Appendix E for details.
C
Here we also refer to a CNN as a neural network consisting of alternating convolutional layers, each one followed by a Rectified Linear Unit (ReLU) and a max-pooling layer, with a fully connected layer at the end, while the term ‘layer’ denotes the number of convolutional layers.
This is achieved with the use of multilayer networks, which consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data. Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for diagnosis and prediction problems.
A high-level overview of these combined methods is shown in Fig. 1. Although we chose the EEG epileptic seizure recognition dataset from the University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could generalize to any kind of signal classification problem.
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals.
For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 samples to convert $x_i$ into the time-frequency domain. The resulting spectrogram, which represents the magnitude of the power spectral density ($V^2/Hz$) of $x_i$, was then upsampled to $178\times 178$ using bilinear pixel interpolation.
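A sketch of this spectrogram step using scipy is shown below; the parameters mirror the text, while the upsampling call (scipy.ndimage.zoom with order 1, i.e. linear interpolation) is our stand-in for the bilinear interpolation and may differ from the authors' implementation.

```python
# Spectrogram with a Tukey(0.25) window, 8-sample segments, 4-sample overlap and a
# 64-point FFT, upsampled to roughly 178x178.
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import zoom

def eeg_spectrogram_image(x, fs=178.0, out_size=178):
    f, t, Sxx = spectrogram(x, fs=fs, window=("tukey", 0.25),
                            nperseg=8, noverlap=4, nfft=64)
    # order=1 gives (bi)linear interpolation of the power spectral density map
    return zoom(Sxx, (out_size / Sxx.shape[0], out_size / Sxx.shape[1]), order=1)

x_i = np.random.randn(178)               # one 178-sample EEG segment
image = eeg_spectrogram_image(x_i)       # roughly (178, 178)
```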
C
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To ensure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constraints: initial and final position, velocity, and acceleration [23]. The Reflexxes Motion Library IV [24] was utilized to perform the inverse kinematics calculation.
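A fifth-order polynomial with those six boundary constraints can be written down generically; the sketch below (ours, not the authors' Reflexxes-based code) solves the resulting 6x6 linear system for one joint.

```python
# Quintic joint trajectory: position, velocity and acceleration fixed at t=0 and t=T.
import numpy as np

def quintic_coefficients(q0, qf, v0=0.0, vf=0.0, a0=0.0, af=0.0, T=1.0):
    """Solve for c in q(t) = c0 + c1 t + ... + c5 t^5 given the six constraints."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],        # q(0)
        [0, 1, 0,    0,       0,        0],        # q'(0)
        [0, 0, 2,    0,       0,        0],        # q''(0)
        [1, T, T**2, T**3,    T**4,     T**5],     # q(T)
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],   # q'(T)
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],  # q''(T)
    ])
    b = np.array([q0, v0, a0, qf, vf, af])
    return np.linalg.solve(A, b)

c = quintic_coefficients(q0=0.0, qf=0.8, T=2.0)      # e.g. one joint, 2 s motion
t = np.linspace(0.0, 2.0, 50)
q = sum(c[k] * t**k for k in range(6))               # sampled joint positions
```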
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step.
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established based on the whole body climbing gait at height h, as shown in Fig. 8, or the rear body climbing gait at height h, as seen in Fig. 9. The blue line illustrates the total energy consumed (in rolling locomotion mode), while the green line represents the ongoing cumulative energy consumption of the rear legs, indicating it did not exceed the threshold values set by the rear body climbing gait.
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing gait was developed. In this approach, once the front legs and body have completed their upward rolling motion, the rear legs are elevated to ascend the step. This strategy is particularly beneficial in situations where the mobility of rolling locomotion is hindered by the rear wheels. For a more detailed discussion of the whole-body climbing gait and the rear-body climbing gait, we direct readers to [10].
The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful design of the climbing gaits. These gaits incorporate identical desired joint accelerations, leg stride length, and forward movement height, as highlighted in [4]. Consequently, variations in energy consumption during different step negotiations primarily stem from negotiation time and body movements. To establish the threshold values ($T_{wb}$ and $T_{rb}$) for the energy criterion, they were set equal to the energy expenditure of the walking locomotion mode using the whole-body climbing and rear-body climbing gaits, respectively. Unlike other methods that use empirical values [2, 8], the threshold values in this study were decided upon based on a novel rule that evaluates the alternative locomotion mode. Moreover, these threshold values are not fixed and are determined based on the terrain profiles the robot is negotiating.
C
Suppose that you have an investment account with a significant amount in it, and that your financial institution advises you periodically on investments. One day, your banker informs you that company X will soon receive a big boost, and advises to use the entire account to buy stocks. If you were to completely trust the banker’s advice, there are naturally two possibilities: either the advice will prove correct (which would be great) or it will prove wrong (which would be catastrophic). A prudent customer would take this advice with a grain of salt, and would not be willing to risk everything. In general, our understanding of advice is that it entails knowledge that is not foolproof.
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of advice bits. The objective is thus to identify the exact trade-offs between the size of the advice and the performance of the algorithm. This is meant to provide a smooth transition between the purely online world (nothing is known about the input) and the purely “offline” world (everything is known about the input).
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size $k$, let $\eta$ denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would be to study the power and limitations of online algorithms, i.e., from the point of view of both upper and lower bounds on the competitive ratio. A first approach towards this direction was made recently in the context of problems such as contract
We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-augmented online algorithms, we studied trade-offs between the trusted and untrusted competitive ratios, as a function of the advice size. We also proved the first lower bounds for online algorithms in this setting. Any other online problem should be amenable to analysis under this framework, in particular any of the many problems studied under the classic framework of (standard) advice complexity.
In this work we focus on the online computation with advice. Our motivation stems from observing that, unlike the real world, the advice under the known models is often closer to “fiat” than “recommendation”. Our objective is to propose a model which allows the possibility of incorrect advice, with the objective of obtaining more realistic and robust online algorithms.
D
With the aim of avoiding cases of misclassification like in (d), we decided to implement the second classifier, SS3Δ, whose policy also takes into account the changes in both slopes. As can be seen from Algorithm 3, and as mentioned before, SS3Δ additionally classifies a subject as positive if the positive slope changes at least four times faster than the other one.
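A tiny, hypothetical sketch of this extra policy; the names and the factor of four mirror the text, while everything else (how the accumulated values and slope changes are computed) is left abstract.

```python
# Schematic SS3-Delta decision: accumulation policy plus the slope-change policy.
def ss3_delta_decision(pos_acc, neg_acc, pos_slope_change, neg_slope_change, factor=4.0):
    if pos_acc > neg_acc:                                    # original accumulation policy
        return "positive"
    if pos_slope_change >= factor * max(neg_slope_change, 1e-9):
        return "positive"                                    # extra slope-change policy
    return "negative"

# e.g. a subject whose positive confidence grows 11x faster at some chunk:
print(ss3_delta_decision(pos_acc=3.2, neg_acc=4.0,
                         pos_slope_change=1.1, neg_slope_change=0.1))
```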
the accumulated negative confidence value starts out greater than the positive one, but as more chunks are read (specifically, starting after reading the 3rd chunk), the positive value starts growing and keeps growing until it exceeds the other one. In this case, this subject is classified as depressed after reading the 6th chunk.
This problem can be detected in this subject by noticing the blue dotted peak at around the 60th writing, indicating that “the positive slope changed around five times faster than the negative one” there, and therefore misclassifying the subject as positive. However, note that this positive change was in fact really small (less than 1).
Figure 7 shows subject 1914 again, this time including information about the changes in the slopes. Note that this subject was previously misclassified as not depressed because the accumulated positive value never exceeded the negative one, but by adding this new extra policy, this time it is correctly classified as positive after reading the 8th chunk (note the peak in the blue dotted line pointing out that, at this point, the positive value has grown around 11 times faster than the negative one).
the subject is misclassified as positive since the accumulated positive value exceeded the negative one. When we manually analyzed cases like these we often found that the classifier was correctly accumulating positive evidence, since the users were, in fact, apparently depressed.
C
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1). In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In practice, MSGD often outperforms SGD (Krizhevsky et al., 2012; Sutskever et al., 2013). Many machine learning platforms, such as TensorFlow, PyTorch and MXNet, include MSGD as one of their optimization methods.
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model training.
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training. These methods can be implemented on distributed frameworks like parameter server and all-reduce frameworks.
Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework. In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-reduce framework.
GMC can be easily implemented on the all-reduce distributed framework, in which each worker sends the sparsified vector $\mathcal{C}(\mathbf{e}_{t+\frac{1}{2},k})$ to all the other workers; each worker then updates $\mathbf{w}_{t+1}$ after receiving the sparsified vectors from all the other workers.
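A rough sketch of this exchange, simulated in a single process (the sparsifier $\mathcal{C}$ is taken to be top-$k$ here, and the way the aggregated vector enters the update of $\mathbf{w}_{t+1}$ is an assumption, not the exact GMC rule):

```python
import numpy as np

def topk_sparsify(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def allreduce_update(w, local_vectors, k, lr=0.1):
    """Every worker shares its sparsified vector with all others; each then
    applies the same aggregate, so all workers hold the same new w."""
    shared = [topk_sparsify(e, k) for e in local_vectors]
    aggregate = sum(shared) / len(shared)
    return w - lr * aggregate
```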
B
The three separate clusters depicted in Fig. 3 and in the aggregated density plot of Fig. 4, separating the Identity activation function, the ReLU, and the rest, show the effect of a sparser activation function on the representation.
Figure 1: Visualization of the activation maps of five activation functions (Identity, ReLU, top-k absolutes, Extrema-Pool indices and Extrema) for 1D and 2D input in the top and bottom row respectively. The 1D input to the activation functions is denoted with the continuous transparent green line using an example from the UCI dataset.
Imposing a $med$ on the extrema detection algorithm makes $\bm{\alpha}$ sparser than in the previous cases and solves the problem of double extrema activations that Extrema-Pool indices have (as shown in Fig. 1). The sparsity parameter in this case is set to $d^{(i)}=med$, where $1\leq med<n\in\mathbb{N}$ is the minimum extrema distance.
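A sketch of such a minimum-distance extrema selection (a simple greedy stand-in; the exact extrema detection used by SANs may differ):

```python
import numpy as np

def extrema_with_min_distance(x, med):
    """Indices of local extrema of a 1D signal x, greedily keeping the
    largest-magnitude ones while enforcing a minimum distance `med`."""
    d = np.diff(x)
    candidates = np.where(np.diff(np.sign(d)) != 0)[0] + 1  # slope sign changes
    kept = []
    for i in sorted(candidates, key=lambda i: -abs(x[i])):
        if all(abs(i - j) >= med for j in kept):
            kept.append(i)
    return np.array(sorted(kept))
```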
The sparser an activation function is, the more it compresses, sometimes at the expense of reconstruction error. However, by visual inspection of Fig. 5 one can confirm that the learned kernels of the SAN with sparser activation maps (Extrema-Pool indices and Extrema) correspond to the reoccurring patterns in the datasets, thus having high interpretability.
The three separate clusters depicted in Fig. 3 and in the aggregated density plot of Fig. 4, separating the Identity activation function, the ReLU, and the rest, show the effect of a sparser activation function on the representation.
C
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better.
Fig. 12 shows how the number of UAVs affects the computational complexity of SPBLLA. Since the total number of UAVs varies, the goal functions differ. The value of the goal function at the optimal state increases with the number of UAVs. Since goal functions are summations of utility functions, more UAVs offer more utility, which results in a higher potential function value. Moreover, more UAVs can cover a larger area and support more users, which also corresponds to more utility. Fig. 12 also shows how many iterations the UAV ad-hoc network needs to reach convergence. As the number of UAVs increases, more iterations are required in this network.
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better.
We construct a UAV ad-hoc network in a post-disaster scenario with $M$ identical UAVs randomly deployed, where $M$ is huge compared with a normal multi-UAV system. All the UAVs have the same battery capacity $E$ and communication capability. The topological structure of the multi-UAV network is shown in Fig. 1(a).
Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm with its learning rate in large-scale post-disaster scenarios and propose a new algorithm which is more suitable for the UAV ad-hoc network in such scenarios.
C
$\overline{\Pi}_{r}=\bigl[-2\,\widehat{\overline{Dr}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dr}}*\overline{v}_{r})\bigr)-\widehat{\overline{Dz}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dr}}*\overline{v}_{z}+\overline{\widehat{Dz}}*\overline{v}_{r})\bigr)\bigr]\,/\,\overline{r}$
$=\bigl[-2\,\widehat{\overline{Dr}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dr}}*\overline{v}_{r})\bigr)-\widehat{\overline{Dz}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dr}}*\overline{v}_{z}+\overline{\widehat{Dz}}*\overline{v}_{r})\bigr)\bigr]\,/\,\overline{r}$
$=\bigl[-2\,\widehat{\overline{Dz}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dz}}*\overline{v}_{z})\bigr)-\widehat{\overline{Dr}}*\bigl(\widehat{\mu}\,\widehat{r}\,(\overline{\widehat{Dr}}*\overline{v}_{z}+\overline{\widehat{Dz}}*\overline{v}_{r})\bigr)\bigr]\,/\,\overline{r}$
$=\frac{2\pi}{\mu_{0}}\,(\widehat{s}\,\widehat{r})^{T}*\Bigl\{\bigl(-(\overline{\widehat{Dz}}*\overline{\psi})\,(\overline{\widehat{Dr}}*\overline{f})+(\overline{\widehat{Dr}}*\overline{\psi})\,(\overline{\widehat{Dz}}*\overline{f})\bigr)\,/\,\widehat{r}\Bigr\}$
$+\frac{2}{3}\,\bigl(\widehat{\overline{Dr}}*\bigl(\widehat{\mu}\,(\overline{\widehat{\nabla}}\cdot\overline{\mathbf{v}})\bigr)\bigr)+2\,\overline{\mu}\,\overline{v}_{r}\,/\,\overline{r}^{2}$
A
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$, in order to get a semantics of comparability closer to equality. One could even make the functions reflexive on all values but null, where some freedom is allowed.
Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it. Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows one to consider absent values as possibly
When using the framework, one can further require reflexivity of the comparability functions, i.e. $f(x_{A},x_{A})=1_{A}$, in order to get a semantics of comparability closer to equality. One could even make the functions reflexive on all values but null, where some freedom is allowed.
$f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\ a&\text{if }u\neq\texttt{null},\ v\neq\texttt{null}\text{ and }u\neq v\\ b&\text{if }u=v=\texttt{null}\\ 0&\text{otherwise.}\end{cases}$
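Read operationally, this comparability function could look as follows (a sketch; the abstract values 1, a, b, 0 are simply returned as Python values, and null is modeled as None):

```python
def comparability(u, v, a="a", b="b"):
    """f_A = f_B from the example: 1 for equal non-null values, a for
    distinct non-null values, b for two nulls, 0 otherwise."""
    if u is not None and u == v:
        return 1
    if u is not None and v is not None:
        return a
    if u is None and v is None:
        return b
    return 0
```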
Intuitively, if an abstract value $x_{A}$ of $\mathcal{L}_{A}$ is interpreted as $1$ (i.e., equality) by $h_{A}$, any value $y_{A}\geq_{A}x_{A}$ must be set to $1$ since it is closer to
A
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady-state policy.
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation across the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and after applying Dropout (Dropout-method DQNs). There was a statistically significant decrease in variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore, one of the Dropout methods outperformed DQN in score.
For the experiments, a fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. To minimize the DQN loss, the ADAM optimizer was used [25].
It is the original Dropout method, introduced in 2012. Standard Dropout provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, the full network is used in the testing phase, but each neuron’s output is multiplied by the probability that the neuron was retained (i.e., 1 - p). This approach gives approximately the same result as averaging the outcomes of a great number of different networks, which would be a very expensive procedure; in this way Dropout achieves an approximate model averaging at test time. The probability can vary for each layer; the original paper recommends p = 0.2 for the input layer and p = 0.5 for hidden layers. Neurons in the output layer are not dropped. This method proved effective for regularizing neural networks, enabling them to be trained for longer periods without over-fitting and resulting in improved performance, and since then many Dropout techniques have been developed for different types of neural network architectures (Figure 1).
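A sketch of this train/test behaviour for a single layer (non-inverted dropout, with p the drop probability as in the description above):

```python
import numpy as np

def dropout_forward(x, p_drop, train=True):
    """Standard dropout: drop each unit with probability p_drop while
    training; at test time keep every unit and rescale by the retention
    probability (1 - p_drop)."""
    if train:
        mask = (np.random.rand(*x.shape) >= p_drop).astype(x.dtype)
        return x * mask
    return x * (1.0 - p_drop)
```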
A fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers, one between the input layer and the first hidden layer and one between the two hidden layers. The ADAM optimizer was used for the minimization [25].
B
In medical image segmentation works, researchers have converged toward using classical cross-entropy loss functions along with a second distance- or overlap-based function. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in medical datasets. As shown in Taghanaki et al. (2019e), when only a distance-based or overlap-based loss function is used in a network and the final layer applies a sigmoid function, the risk of gradient vanishing increases. Although overlap-based loss functions are used in cases of class imbalance (small foregrounds), in Figure 13 we show how using (only) overlap-based loss functions as the main term can be problematic for smooth optimization, since they heavily penalize a model that under/over-segments a small foreground, whereas the cross-entropy loss returns a reasonable score for the same cases. Besides using integrated cross-entropy-based loss functions, future work could explore a single loss function that follows the behavior of cross-entropy and, at the same time, offers more features such as capturing contour distance. This can be achieved by revisiting the current distance- and overlap-based loss functions. Another future path is exploring automatic loss-function (or regularization-term) search, similar to the neural architecture search mentioned above. Similarly, gradient-based optimizations based on Sobolev (Adams and Fournier, 2003) gradients (Czarnecki et al., 2017), such as the works of Goceri (2019b, 2020), are an interesting research direction.
Going beyond pixel-intensity-based scene understanding by incorporating prior knowledge, which has been an active area of research for the past several decades (Nosrati and Hamarneh, 2016; Xie et al., 2020). Encoding prior knowledge in medical image analysis models is generally more feasible than for natural images. Currently, deep models receive matrices of intensity values and usually are not forced to learn prior information. Without explicit reinforcement, the models might still learn object relations to some extent; however, it is difficult to interpret a learned strategy.
Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important processing step in natural images for scene understanding and medical image analysis, for image-guided interventions, radiotherapy, or improved radiological diagnostics, etc. Image segmentation is formally defined as “the partition of an image into a set of nonoverlapping regions whose union is the entire image” (Haralick and Shapiro, 1992). A plethora of deep learning approaches for medical image segmentation have been introduced in the literature for different medical imaging modalities, including X-ray, visible-light imaging (e.g. colour dermoscopic images), magnetic resonance imaging (MRI), positron emission tomography (PET), computerized tomography (CT), and ultrasound (e.g. echocardiographic scans). Deep architectural improvement has been a focus of many researchers for different purposes, e.g., tackling gradient vanishing and exploding of deep models, model compression for efficient small yet accurate models, while other works have tried to improve the performance of deep networks by introducing new optimization functions.
Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information of where the borders of an object should be. Some researchers resort to traditional computer vision methods such as conditional random fields (CRFs) to overcome this problem, which however, add more computation time to the models.
For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets; however, the advantage of processing volumetric data using 3D convolutions versus processing the volume slice by slice using 2D sequenced models remains unclear. Ideally, seeing the whole object of interest in a 3D volume might help to capture the geometrical information of the object, which might be missed when processing a 3D volume slice by slice. Therefore, a future direction in this area can be a thorough analysis of sequenced models versus volumetric convolution-based approaches.
A
From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are the fake nodes that are added to the graph so that its size can be halved at every pooling operation.
Fig. 9(c) shows that NMF produces graphs that are very dense, as a consequence of the multiplication with the dense soft-assignment matrix to construct the coarsened graph. Finally, Fig. 9(d) shows that NDP produces coarsened graphs that are sparse and preserve well the topology of the original graph.
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs. The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$, in blue) after each pooling step.
Fig. 12 shows the result of the NDP coarsening procedure on the 6 types of graphs. The first column shows the subset of nodes of the original graph that are selected ($\mathcal{V}^{+}$, in red) and discarded ($\mathcal{V}^{-}$, in blue) after each pooling step.
From Fig. 9(b) we notice that the graphs $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ in GRACLUS have additional nodes that are disconnected. As discussed in Sect. V, these are the fake nodes that are added to the graph so that its size can be halved at every pooling operation.
A
For real-world applications, the dependency on large amounts of labeled data represents a significant limitation (Breiman et al., 1984; Hekler et al., 2019; Barz & Denzler, 2020; Qi & Luo, 2020; Phoo & Hariharan, 2021; Wang et al., 2021). Frequently, there is little or even no labeled data for a particular task and hundreds or thousands of examples have to be collected and annotated. This particularly affects new applications and rare labels (e.g., detecting rare diseases or defects in manufacturing).
Transfer learning and regularization methods are usually applied to reduce overfitting. However, for training with little data, the networks still have a considerable number of parameters that have to be fine-tuned – even if just the last layers are trained.
Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages. Random forests are based on decision trees. Various tree models have been presented – the most well-known are C4.5 (Quinlan, 1993) and CART (Breiman et al., 1984).
Additionally, the experiment shows that the training is very robust to overfitting even when the number of parameters in the network increases. When combining the generated data and original data, the accuracy on Car and Covertype improves with an increasing number of training examples.
First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class. For each method, the average number of parameters of the generated networks across all datasets is plotted depending on the test error. That means that the methods aim for the lower-left corner (smaller number of network parameters and higher accuracy). Please note that the y-axis is shown on a logarithmic scale.
A
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy optimization remain rather limited from both computational and statistical perspectives. More specifically, from the computational perspective, it remains unclear until recently whether policy optimization converges to the globally optimal policy in a finite number of iterations, even given infinite data. Meanwhile, from the statistical perspective, it still remains unclear how to attain the globally optimal policy with a finite regret or sample complexity.
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), where the reward function is fixed across all the episodes.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a $\sqrt{d^{2}H^{3}T}$-regret up to logarithmic factors.
C
In this section, we review approaches that aim to reduce the model size by employing efficient matrix representations. There exist several methods using low-rank decompositions which represent a large matrix (or a large tensor) using only a fraction of the parameters.
Several works have investigated special matrix structures that require fewer parameters and allow for faster matrix multiplications—the main workload in fully connected layers. Furthermore, there exist several manually designed architectures that introduced lightweight building blocks or modified existing building blocks to enhance resource efficiency.
In this section, we review approaches that aim to reduce the model size by employing efficient matrix representations. There exist several methods using low-rank decompositions which represent a large matrix (or a large tensor) using only a fraction of the parameters.
In most cases, the implicitly represented matrix is never computed explicitly such that also a computational speed-up is achieved. Furthermore, there exist approaches using special matrices that are specified by only few parameters and whose structure allows for extremely efficient matrix multiplications.
In Cheng et al. (2015), the weight matrices of fully connected layers are restricted to circulant matrices $\mathbf{W}\in\mathbb{R}^{n\times n}$, which are fully specified by only $n$ parameters. While this dramatically reduces the memory footprint of fully connected layers, circulant matrices also facilitate faster computation, as matrix-vector multiplication can be efficiently computed using the fast Fourier transform.
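The FFT trick relies on the fact that a circulant matrix is diagonalized by the discrete Fourier transform, so the matrix-vector product reduces to a circular convolution. A small self-contained check (not the authors' code):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c with x in
    O(n log n) time: C x = ifft(fft(c) * fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# sanity check against the explicit definition C[i, j] = c[(i - j) mod n]
n = 8
c, x = np.random.randn(n), np.random.randn(n)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ x, circulant_matvec(c, x))
```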
C
$(i_{\lambda,\lambda'})_{*}(\omega_{0})=\omega_{1}+\omega_{2}$
$\omega_{2}$ is the degree-1 homology class induced by
and seeks the infimal $r>0$ such that the map induced by $\iota_{r}$ at the $n$-th homology level annihilates the fundamental class $[M]$ of $M$. This infimal value defines $\mathrm{FillRad}(M)$, the filling radius of $M$.
$\omega_{1}$ is the degree-1 homology class induced by
$\omega_{0}$ is the degree-1 homology class induced by
D
The remaining costs are one aspect of estimating the projection quality. This means that projected points with high remaining costs can be moved by an additional optimization step. Akin to this idea, t-viSNE might show a preview of the data points in the next optimization step. In consequence, users could determine whether the t-SNE optimization is completed or not, simply by observing the points’ trajectories in low-dimensional space. This remains as possible future work.
Clustervision [51] is a visualization tool used to test multiple batches of a varying number of clusters and allows the users to pick the best partitioning according to their task. Then, the dimensions are ordered according to a cluster separation importance ranking. As a result, the interpretation and assessment of the final results are intrinsically tied to the choice of clustering algorithm, which is an external technique that is (in general) not related to the DR itself. Thus, the quality of the results is tied to the quality of the chosen clustering algorithm. With t-viSNE it is also possible to explore the results of a clustering technique by, for example, mapping them to labels, then using the labels as regions of interest during the interactive exploration of the data. However, the labels do not influence the results of t-viSNE, whether they exist or not, since we did not intend to tie the quality of our results to other external (and independent) techniques.
The goals of the comparative study presented in this paper were to provide initial evidence of the acceptance of t-viSNE by analysts, the consistency of their results when exploring a t-SNE projection using our tool, and the improvement over another state-of-the-art tool. The tasks of the study were designed to test how each tool helps the analyst in overcoming the six pitfalls defined by Wattenberg et al. [14], which was also one of the design goals of t-viSNE itself. Since that might not have been the case for GEP, this could be seen as a bias towards t-viSNE.
we present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light some of the hidden internal workings of the algorithm which, when visualized, may provide important insights about the high-dimensional data set under analysis. Our proposed solution is composed of a set of coordinated views that work together in order to fulfill four main goals: (G1) facilitate the choice of hyper-parameters through visual exploration and the use of quality metrics; (G2) provide a quick overview of the accuracy of the projection, to support the decision of either moving forward with the analysis or repeating the process of hyper-parameter exploration; (G3) provide the means to investigate quality further, differentiating between the trustworthiness of different regions of the projection; and (G4) allow the interpretation of different visible patterns of the projection in terms of the original data set’s dimensions.
In this paper, we introduced t-viSNE, an interactive tool for the visual investigation of t-SNE projections. By partly opening the black box of the t-SNE algorithm, we managed to give power to users allowing them to test the quality of the projections and understand the rationale behind the choices of the algorithm when forming clusters. Additionally, we brought into light the usually lost information from the inner parts of the algorithm such as densities of points and highlighted areas which are not well-optimized according to t-SNE. To confirm the effectiveness of t-viSNE, we presented a hypothetical usage scenario and a use case with real-world data sets. We also evaluated our approach with a user study by comparing it with Google’s Embedding Projector (GEP): the results show that, in general, the participants could manage to reach the intended analysis tasks even with limited training, and their feedback indicates that t-viSNE reached a better level of support for the given tasks than GEP. However, both tools were similar with respect to completion time.
D
Nature inspired optimization algorithms or simply variations of metaheuristics? - 2021 [15]: This overview focuses on the study of the frequency of new proposals that are no more than variations of old ones. The authors critique a large set of algorithms based on three criteria: (1) whether there is a physical analogy that follows the metaheuristic, (2) whether most algorithms are duplicates or similarly inspired, and (3) whether the authors propose different techniques based on the same idea. They then specify their criteria for introducing a new metaheuristic.
Initialization of metaheuristics: comprehensive review, critical analysis, and research directions - 2023 [35]: This review addresses a gap in the literature by developing a taxonomy of initialization methods for metaheuristics. This classification is based on the initialization of metaheuristics according to random techniques, learning methods (supervised learning, Markov models, opposition- and diversification-based learning), and other generic methods based on sampling, clustering, and cooperation. The review also examines the initialization of metaheuristics with local search approaches, offers guidance on designing a diverse and informative sequence of initial solutions, and provides insights that will help research in constrained and discrete optimization problems.
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, tabu search, variable neighborhood search), and population-based heuristics (memetic algorithms, biased random-key genetic algorithms, scatter search, and path relinking). Each category presents its core characteristics and a description of the mentioned algorithms. The review presents the metaheuristic frameworks that have guided the design of heuristic optimization algorithms over the last 50 years, and discusses the role of the journal in which it is published in introducing solid heuristic papers. This work also recalls the maturity of the field, which allows very complex problems to be solved and attracts a growing number of researchers, as shown by the numerous conferences and related events. The authors also criticize the fragmentation of the field, as each research group usually applies the same methods regardless of the type of problem being solved, as well as the lack of theoretical foundations, the limited analytical understanding of novel proposals, the problem-specific tuning of metaheuristics, the lack of standardized benchmarking protocols, and the absence of general guidelines. Several research directions are also noted for researchers to pursue in the future.
In the last update of this report, which is herein released 4 years after its original version, we note that there has been an evolution within the nature- and bio-inspired optimization field. There is an excessive use of the biological approach, as opposed to the real problem-solving approach, to tackle real and complex optimization goals, such as those discussed in Section 8.1. This issue needs to be addressed in the future by following guidelines that allow for the definition of metaheuristics in a way that is appropriate to current challenges. This is important for the constructive design and development of proposals in response to emerging problems. For this reason, and given the potential impact of emerging problems and GPAIS, population-based metaheuristics, as nature- and bio-inspired optimization algorithms, are poised to shape the future of AI, contributing to the design of continuously emerging AI systems and serving as an inspiration for a new era of innovation and progress in AI.
An exhaustive review of the metaheuristic algorithms for search and optimization: taxonomy, applications, and open challenges - 2023 [34]: This taxonomy provides a large classification of metaheuristics based on the number of control parameters of the algorithm. In this work, the authors question the novelty of new proposals and discuss the fact that calling an algorithm new is often based on relatively minor modifications to existing methods. They highlight the limitations of metaheuristics, open challenges, and potential future research directions in the field.
D
$Z=\varphi_{m}\bigl(\widehat{A}\,\varphi_{m-1}(\cdots\varphi_{1}(\widehat{A}XW_{1})\cdots)W_{m}\bigr).$
To study the impact of different parts of the loss in Eq. (12), the performance with different $\lambda$ is reported in Figure 4. From it, we find that the second term (corresponding to problem (7)) plays an important role, especially on UMIST. If $\lambda$ is set to a large value, we may get a trivial embedding according to the constructed graph. AdaGAE obtains good results when $\lambda$ is not too large.
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity $k$. With the new graph, we re-train the GAE. These steps are repeated until convergence.
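The adaptive loop described in this caption can be summarized as follows (a sketch: `knn_graph` is a simple stand-in for the generative model of Eq. (7), and `train_gae` is a hypothetical callable that trains the GAE on the current graph and returns the new embedding):

```python
import numpy as np

def knn_graph(Z, k):
    """Sparse symmetric adjacency connecting each point to its k nearest neighbours."""
    d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    A = np.zeros_like(d)
    for i, row in enumerate(d):
        A[i, np.argsort(row)[1:k + 1]] = 1.0  # skip index 0 (the point itself)
    return np.maximum(A, A.T)

def adagae_loop(X, train_gae, k0=5, step=5, T=10):
    """Rebuild the graph from the current embedding with growing sparsity k,
    retrain the GAE, and repeat for T epochs."""
    Z, k = X, k0
    for _ in range(T):
        A = knn_graph(Z, k)
        Z = train_gae(X, A)
        k += step
    return Z
```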
Figure 2: Visualization of the learning process of AdaGAE on USPS. Figures 2(b)-2(i) show the embedding learned by AdaGAE at the $i$-th epoch, while the raw features and the final results are shown in Figures 2(a) and 2(j), respectively. An epoch corresponds to an update of the graph.
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of the GAE and an update of the graph. The maximum number of epochs, $T$, is set to 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes more cohesive with each update.
C
We also want to understand the types of networks that we could test via domain-wide scans. To derive the business types we use PeeringDB. We classify the ASes according to the following business types: content, enterprise, Network Service Provider (NSP), Cable/DSL/ISP, non-profit, educational/research, and route server at an Internet Exchange Point (IXP) (a route server directs traffic among Border Gateway Protocol (BGP) routers). We plot the networks that do not enforce ingress filtering according to business type in Figure 12. According to our study, enterprise and non-profit networks enforce ingress filtering more than other networks. In contrast, NSPs contain the most networks that do not enforce ingress filtering.
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, which we call the Spoofing Mapper (SMap). We apply SMap to scan ingress filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that more than 80% of the tested ASes do not enforce ingress filtering (i.e., 72.4% of all the ASes in the routing system), in contrast to 2.4% identified by the latest measurement of the Spoofer Project (Luckie et al., 2019). The reason for this significant difference is the limitation of the previous studies of ingress filtering to a small set of networks.
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of ASes in the Internet; see Figure 1. Furthermore, there is a correlation between the fraction of scanned domains and the ASes covered. Essentially, the more domains are scanned, the more ASes are covered, and the more spoofable ASes are discovered; see Figure 7. This result is of independent interest, as it implies that one can avoid scanning the IPv4 address space and instead opt for a domain-scan, obtaining a good enough approximation. This not only reduces the volume of traffic needed to carry out studies but also makes the study much more efficient.
There is a strong correlation between the AS size and the enforcement of ingress filtering; see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger the network, the more services it hosts. This means that we have more possibilities to test whether spoofing is possible: for instance, we can identify a higher fraction of servers with globally incremental IPID counters which are not “load balanced”. In Figure 14 we plot the statistics of the tested networks according to their size and type. The results show a correlation between the size of the network and its type. For instance, most NSP networks are large, with CIDR/6. This is aligned with our finding that NSP networks contained the highest number of spoofable networks.
Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping, and requests/responses to name servers, and we apply the suitable test depending on the server that we identify on the tested network. If the responses contain globally incremental IPID values, we use the service for ingress-filtering measurement with the IPID technique. We located globally incremental IPIDs in 63.27% of the measured networks. There are certainly more hosts on networks that support globally incremental IPID values, yet our goal was to validate our measurement techniques while keeping the measurement traffic low; hence we avoided scanning the networks for additional hosts and only checked for web, email or name servers with globally incremental IPID counters via queries to the tested domain.
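The decision of whether a counter is globally incremental can be made from the interleaved responses alone; a heuristic sketch (thresholds are illustrative, not the exact SMap test):

```python
def is_globally_incremental(ipids, max_gap=1000):
    """ipids: IPID values in the order the probes (from both hosts) were
    answered. A global counter should increase monotonically modulo 2^16
    with small gaps; per-host or random counters violate this."""
    for prev, cur in zip(ipids, ipids[1:]):
        gap = (cur - prev) % 65536
        if gap == 0 or gap > max_gap:
            return False
    return True
```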
C
While context did introduce more parameters to the model (7,575 parameters without context versus 14,315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer of the feedforward NN model until the total number of parameters reached 14,429, the larger model was not significantly better ($p\geq 0.05$, one-sided t-test blocked by batch). This reinforces the idea that the benefit may be attributed to context, and not to the size of the network.
The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify useful patterns analogously to how cortical regions help the olfactory bulb filter out previously seen background information [21]. A context-based approach will be applied to longer-timescale data and to environments with cyclical patterns.
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The context model has two parts: (1) a recurrent context layer, which encodes classification-relevant properties of previously seen data, and (2) a feedforward layer, which integrates the context with the current odor stimulus to generate an odor-class prediction. The results indicate improvement from two sources: The use of neural networks in place of SVMs, and the use of context, particularly in cases where a substantial number of context sequences are available for training. Thus, emulation of adaptation in natural systems leads to an approach that can make a difference in real-world applications.
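A minimal sketch of such a two-part architecture (sizes and layer choices are illustrative, not the exact model from the paper):

```python
import torch
import torch.nn as nn

class ContextOdorClassifier(nn.Module):
    """A recurrent context layer summarizes previously seen samples; a
    feedforward 'skill' pathway combines that context with the current
    odor stimulus to predict the odor class."""
    def __init__(self, n_features=128, n_classes=6, ctx_size=32, hidden=64):
        super().__init__()
        self.context_rnn = nn.GRU(n_features, ctx_size, batch_first=True)
        self.skill = nn.Sequential(
            nn.Linear(n_features + ctx_size, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, history, current):
        # history: (batch, seq_len, n_features); current: (batch, n_features)
        _, h = self.context_rnn(history)      # h: (1, batch, ctx_size)
        context = h.squeeze(0)
        return self.skill(torch.cat([current, context], dim=1))
```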
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design introduces variation in training inputs, which makes it harder to learn consistent context patterns. For this task, semisupervised learning techniques, such as self-labeled samples, may help. If the context layer can process unlabeled data, then it is no longer necessary to include every class in every batch. The full six-gas sensor drift dataset can be used, as well as other unbalanced and therefore realistic datasets.
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regions than from the nose [20]. In computational modeling, this principle has been taken into account by the piriform cortical region that recognizes familiar background odors through associative memory [21]. It projects this information to the olfactory bulb to improve odor recognition when there are background odors. Following this same principle, the neural network classifier in this paper integrates context that is outside the immediate input signal.
A
For the second change, we need to take another look at how we place the separators $t_{i}$. We previously placed these separators in every second nonempty drum $\sigma_{i}:=[i\delta,(i+1)\delta]\times\mathrm{Ball}^{d-1}(\delta/2)$ based on the points in $\sigma_{i-1}\cup\sigma_{i}\cup\sigma_{i+1}$.
We generalize the case of integer $x$-coordinates to the case where the drum $[x,x+1]\times\mathrm{Ball}^{d-1}(\delta/2)$ contains $O(1)$ points for all $x\in\mathbb{R}$. Furthermore, we investigate how the complexity of Euclidean TSP grows with $\delta$.
However, in order for our algorithm to meet the requirements of Lemma 5.7, we would like to avoid having a point enter a drum after the $x$-coordinates are multiplied by some factor $\lambda>1$. Furthermore, since the proof of Lemma 4.3 requires every drum to be at least $\delta$ wide, we cannot simply scale the drums as well.
It would be interesting to see whether a direct proof can be given for this fundamental result. We note that the proof of Theorem 2.1 can easily be adapted to point sets whose $x$-coordinates need not be integer, as long as the difference between the $x$-coordinates of any two consecutive points is at least 1.
Finally, we will show that the requirements of Lemma 5.7 hold, where we take $\mathcal{A}$ to be the algorithm described above. The only nontrivial requirement is that $T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)$ for all point sets $P$ and $x$-axis scaling factors $\lambda>1$.
B
There are quite a few results on free (and related) products of self-similar or automaton groups (again, see [15] for an overview), but many of them present the product as a subgroup of an automaton/self-similar group and thus lose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton, which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]).
from one to the other, then their free product $S\star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata). Third, we show this result in the more general setting of self-similar semigroups (Theorem 6), but observe that the constructed generating automaton for $S\star T$ is finite (and/or complete) if this was the case for the original two automata generating $S$ and $T$. (Note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for $S$ and for $T$ are finite. Therefore, these constructions also work for self-similar semigroups, although this is not explicitly stated there.) The existence of a homomorphism from $S$ to $T$ (or vice versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or of being an automaton semigroup; Theorem 14). This is noteworthy because, as mentioned above, the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata; see [8, Theorem 19] and [20, Theorem 1.2.1.4]).
There are quite a few results on free (and related) products of self-similar or automaton groups (again, see [15] for an overview), but many of them present the product as a subgroup of an automaton/self-similar group and thus lose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton, which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]).
While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question of whether these semigroup classes are closed under free products. It is possible that there is a different construction for the free product $S\star T$ of two self-similar or automaton semigroups without the requirement of a homomorphism from one to the other, and it is also possible that there is a pair of self-similar (or automaton) semigroups such that $S\star T$ is not a self-similar (or an automaton) semigroup. In this case, however, no homomorphism $S\to T$ or $T\to S$ can exist. Thus, to make progress in either direction (towards a better construction or towards a counter-example), we need to look at pairs $S,T$ of self-similar (or even automaton) semigroups without a homomorphism from one to the other. However, it turns out that finding such a pair is not easy. In particular, neither $S$ nor $T$ may contain an idempotent. Thus, we have to consider idempotent-free semigroups here. We will show, however, that we cannot find a pair of such semigroups in the class of finitely generated simple semigroups. More precisely, using results by Jones on idempotent-free semigroups [11], we show that finitely generated simple (or $0$-simple) idempotent-free semigroups are not residually finite (Theorem 21) and, thus, not self-similar (and, in particular, not automaton semigroups; 22). We then conclude the paper with an example (the authors would like to thank Emanuele Rodaro for his help in finding this example) of a finitely generated residually finite semigroup (23) which has no homomorphism to its opposite semigroup (25). While this comes close to the sought pair $S,T$, it is not clear whether the given semigroup is self-similar (26).
However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups $S$ and $T$ is always at least very close to being an automaton semigroup: adjoining an identity to $S \star T$
D
Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations, rivals state-of-the-art accuracy. Future visual grounding methods should be tested with a more comprehensive experimental setup and datasets for proper evaluation.
Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor. We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting valuable edits and corrections for the paper.
It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model were actually visually grounded, then we would expect it to improve performance on both the train and test sets. We do not observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons.
Since Wu and Mooney (2019) reported that human-based textual explanations Huk Park et al. (2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. SCR is trained in two phases. For the first phase, which strengthens the influential objects, we use a learning rate of $5\times 10^{-5}$, a loss weight of 3, and train the model for a maximum of 12 epochs. Then, following Wu and Mooney (2019), for the second phase, we use the best performing model from the first phase to train the second phase, which criticizes incorrect dominant answers. For the second phase, we use a learning rate of $10^{-4}$ and a weight of 1000, which is applied alongside the loss term used in the first phase. The specified hyperparameters worked better for us than the values provided in the original paper.
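A minimal sketch of the two-phase SCR schedule described above, assuming generic callables for the two loss terms; only the quoted hyperparameters (learning rates, loss weights, epoch budget) come from the passage, while all function and argument names are hypothetical.

```python
import copy

# Hyperparameters quoted from the passage above; everything else is a hypothetical sketch.
PHASE1 = dict(lr=5e-5, loss_weight=3.0, max_epochs=12)   # strengthen influential objects
PHASE2 = dict(lr=1e-4, loss_weight=1000.0)               # criticize incorrect dominant answers

def train_scr(model, loader, make_optimizer, influence_loss, criticize_loss, evaluate):
    # Phase 1: strengthen the influential objects, keeping the best checkpoint.
    opt, best_score, best_state = make_optimizer(model, PHASE1["lr"]), float("-inf"), None
    for _ in range(PHASE1["max_epochs"]):
        for batch in loader:
            loss = PHASE1["loss_weight"] * influence_loss(model, batch)
            opt.zero_grad(); loss.backward(); opt.step()
        score = evaluate(model)
        if score > best_score:
            best_score, best_state = score, copy.deepcopy(model.state_dict())

    # Phase 2: resume from the best phase-1 model; the criticizing term is
    # applied alongside the phase-1 loss, as described above.
    model.load_state_dict(best_state)
    opt = make_optimizer(model, PHASE2["lr"])
    for batch in loader:
        loss = (PHASE1["loss_weight"] * influence_loss(model, batch)
                + PHASE2["loss_weight"] * criticize_loss(model, batch))
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```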
B
For each topic, we identified a corresponding entry from the OPP-115 annotation scheme (Wilson et al., 2016), which was created by legal experts to label the contents of privacy policies. While Wilson et al. (2016) followed a bottom-up approach and identified different categories from analysis of data practices in privacy policies, we followed a top-down approach and applied topic modelling to the corpus in order to extract common themes for paragraphs. The categories identified in the OPP-115 Corpus can be found in Table 2.
Topic Modelling. Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006). We used topic modelling to explore the distribution of themes of text in our corpus. Topic modelling using a large corpus such as PrivaSeer helps investigate the themes present in privacy policies at web scale and also enables the comparison of themes that occur in the rapidly evolving online privacy landscape. We used Latent Dirichlet Allocation (LDA) as our approach to topic modelling (Blei et al., 2003). Since LDA works well when each input document deals with a single topic, we divided each privacy policy into its constituent paragraphs (Sarne et al., 2019), tokenized the paragraphs using a regex character-matching tokenizer, and lemmatized the individual words using NLTK’s WordNet lemmatizer. We experimented with topic sizes of 7, 8, 9, 10, 11, 13 and 15. We manually evaluated the topic clusters by inspecting the words that most represented the topics. We noted that the cohesiveness of the topics decreased as the number of topics increased. We chose a topic size of 9, since larger topic sizes produced markedly less coherent topics.
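A minimal sketch of the paragraph-level LDA pipeline described above, using gensim and NLTK; the regex tokenizer pattern and the `passes` setting are illustrative assumptions rather than the exact choices used for PrivaSeer.

```python
import re
from nltk.stem import WordNetLemmatizer          # requires NLTK's wordnet data
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def paragraph_tokens(paragraph, lemmatizer=WordNetLemmatizer()):
    # Regex character-matching tokenizer followed by WordNet lemmatization, as in the text.
    return [lemmatizer.lemmatize(tok) for tok in re.findall(r"[a-z]+", paragraph.lower())]

def fit_lda(paragraphs, num_topics=9):
    # One document per privacy-policy paragraph, since LDA works best when
    # each input document deals with a single topic.
    docs = [paragraph_tokens(p) for p in paragraphs]
    vocab = Dictionary(docs)
    corpus = [vocab.doc2bow(d) for d in docs]
    return LdaModel(corpus, id2word=vocab, num_topics=num_topics, passes=5), vocab

# Illustrative usage: inspect the words that most represent each topic,
# repeating for topic sizes 7-15 to compare their cohesiveness.
# lda, vocab = fit_lda(paragraphs, num_topics=9)
# for tid in range(9):
#     print(lda.print_topic(tid, topn=10))
```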
We found that two LDA topics contained vocabulary corresponding to the OPP-115 category First Party Collection/Use, one dealing with the purpose and type of information collected and the other dealing with the collection method. Two LDA topics corresponded to the OPP-115 category Third Party Sharing and Collection, one detailing the action of collection, and one explaining its purpose and effects (advertising and analytics). One of the LDA topics consisted exclusively of vocabulary related to cookies, which could be related to both first-party and third-party data collection techniques. The OPP-115 categories Privacy Contact Information, Data Security and Policy Change appeared as separate topics, while a topic corresponding to the OPP-115 category International and Specific Audiences appeared to be primarily related to European audiences and the GDPR.
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of privacy policies in the corpus that contain each topic. From the figure we see that information regarding the type and purpose of data collected by first and third party sources are the most common topics. About 77% of policies contain language regarding third parties. This is consistent with prior research on third party data collection (Libert, 2018). In contrast, language regarding advertising and analytics appears in only 38% of policies in the corpus. Topics corresponding to data security, policy change and contact information also occur in a majority of privacy policies. Language corresponding to the GDPR and European audiences appears in 55% of policies. A study of the distribution of privacy policy topics on the web is important since it informs us about real-world trends and the need for resource allocation to enforce privacy regulations.
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used dataset of annotated privacy policies in the research community. The OPP-115 Corpus contains paragraph-sized segments annotated according to one or more of the twelve coarse-grained categories of data practices. We fine-tuned PrivBERT on the OPP-115 Corpus to predict the coarse-grained categories of data practices. We divided the corpus in the ratio 3:1:1 for training, validation and testing respectively. Since each segment in the corpus could belong to more than one category and there are twelve categories in total, we treated the problem as a multi-class, multi-label classification problem. After manually tuning hyperparameters, we trained the model with a dropout of 0.15 and a learning rate of 2.5e-5.
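A minimal sketch of the fine-tuning setup described above using the HuggingFace Transformers API; "privbert-base" is a placeholder for the PrivBERT checkpoint path, and everything beyond the quoted dropout (0.15), learning rate (2.5e-5), and 12 coarse-grained labels is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_CATEGORIES = 12   # coarse-grained OPP-115 data-practice categories

# "privbert-base" is a placeholder path for the pre-trained PrivBERT checkpoint.
tokenizer = AutoTokenizer.from_pretrained("privbert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "privbert-base",
    num_labels=NUM_CATEGORIES,
    problem_type="multi_label_classification",   # BCE-with-logits loss over the 12 labels
    hidden_dropout_prob=0.15,                    # dropout value quoted above
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2.5e-5)  # learning rate quoted above

def training_step(segments, label_matrix):
    """One step on a batch of OPP-115 segments; label_matrix is a (batch, 12) multi-hot array."""
    batch = tokenizer(segments, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(label_matrix, dtype=torch.float))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```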
B
T5: Inspect the same view with alternative techniques and visualizations. To help avoid the emergence of cognitive biases, alternative interaction methods and visual representations of the same data from another perspective should be offered to the user (G5).
As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in (S6), each instance is a 174-dimensional vector, projected into 2D. Groups of points represent instances that were consistently predicted to be in the same class. In the StackGenVis overview figure (f), for example, the points in the two clusters at both extremes of the projection (left and right sides, unselected) are well classified, since they were consistently determined to be in the same class by most models of (S6). The instances that are in between these clusters, however, do not have a well-defined profile, since different models classified them differently. After selecting these instances with the lasso tool, the two histograms below the projection in the StackGenVis overview figure (f) show a comparison of the performance of the available models on the selected points (gray, upside down) vs. all points (black). The x-axis represents the performance according to the user-weighted metrics (in bins of 5%), and the y-axis shows the number of models in each bin. Our goal here is to look for models in the current stack (S6) that could improve the performance for the selected points. However, judging by the histograms, it does not look like we can achieve that this time, since all models perform worse on the selected points than on all points.
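A small sketch of the idea behind this projection: each instance becomes a vector of per-model predictions before being mapped to 2D. The use of scikit-learn's t-SNE and of binary-classification probabilities here is an assumption for illustration; StackGenVis itself may use a different projection technique.

```python
import numpy as np
from sklearn.manifold import TSNE

def prediction_space_projection(models, X, random_state=0):
    """Embed each instance as a vector of per-model predictions and project it to 2D.

    models: a list of fitted classifiers with predict_proba (e.g. the 174 models in the stack).
    X:      array of shape (n_instances, n_features) with the original features.
    """
    # One column per model: probability of the positive class for every instance.
    P = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    # Instances predicted consistently by most models end up close together in 2D.
    return TSNE(n_components=2, random_state=random_state).fit_transform(P)
```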
Figure 2(a.2) displays overlapping barcharts for depicting the per-class performances for each algorithm, i.e., two colors for the two classes in our example. The more saturated bar in the center of each class bar represents the altered performance when the parameters of algorithms are modified. Note that the view only supports three performance metrics: precision, recall, and f1-score. The y-axes in both figures represent aggregated performance, while the different algorithms are arranged along the x-axis with different colors.
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c.1). (c.2) illustrates in light blue the selected models and in gray the remaining ones. Also from (a.2), both RF and ExtraT performances seem to be equal. However in (d), after resetting class optimization, ExtraT models appear to perform better overall. In view (e), the boxplots were replaced by point clouds that represent the individual models of activated algorithms. The color encoding is the same as for the algorithms, but unselected models are greyed out. Finally, the radar chart in (f) displays a portion of the models’ space in black that will be used to create the initial stack against the entire exploration space in yellow. The chart axes are normalized from 0 to 100%.
Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for the stack in the next step. (c) presents the per-class performance of all the models vs. the active ones per algorithm.
C
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and $(v,[113])$, we can confirm that in the 3 cases, these
cannot be adjacent to $\overline{2}$ nor $\overline{3}$, and so $f^{\prime}$ is $[013]$ or $[010]$.
Then, by using the adjacency of $(v,[013])$ with each of $(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that
$(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))$, $(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))$.
C
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors.
The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation. Although the performance improves in the early training stage, benefiting from the pre-trained general language model, if the language model becomes too “general”, it will lose the ability to adapt to specific tasks. It is noteworthy that the “too general” problem is not the same as over-fitting, since the “too general” model performs well before fine-tuning, which means it does not over-fit to the training data.
In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy: RQ1. Since the parameter initialization learned by MAML can be seen as a general language model of training tasks, when the training and testing tasks have different data distributions, how can the general language model training affect the model’s task-specific adaptation ability?
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as “tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors.
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the meta-testing set before fine-tuning, using the quality performance (accuracy for classification and BLEU for generation) to
B
Activated Subarray with Limited DREs: As shown in Fig. 1, given a certain azimuth angle, there are limited DREs that can be activated. Due to the directivity, the DREs of the CCA subarray at different positions are anisotropic, and this phenomenon is different from the UPA. If an inappropriate subarray is activated, the beam angle may go beyond the radiation range of certain subarray elements, degrading the beam gain and SE.
After the discussion on the characteristics of the CCA, in this subsection we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the size of the activated subarray according to Theorem 2. Therefore, the conventional codebook, consisting only of different beamwidths and beam angles, is not able to reveal the relationship among the beam angle, beamwidth and the corresponding supporting subarray for the DRE-covered CCA. In order to solve the beam tracking problem in (13), the subarray activation/partition and AWV selection need to be jointly optimized at the same time. To this end, a new specialized hierarchical codebook $\mathcal{V}$ should be designed to facilitate efficient beam tracking, wherein the codeword $\boldsymbol{v}$ should contain both the angular-domain beam pattern information $(\alpha_{i},\beta_{i})$ and the corresponding subarray pattern information $\mathcal{S}$.
The r-UAV needs to select multiple appropriate AWVs $\boldsymbol{v}(m_{s,k},n_{s,k},i_{k},j_{k},\mathcal{S}_{k})$, $k\in\mathcal{K}$, from our proposed codebook $\mathcal{V}$ to solve the subarray partition and AWV selection problem. If an element is contained in different subarrays, there is a conflict between the subarrays. To solve the problem in (43), the joint SPAS problem without considering the conflict is discussed first and the conflict avoidance will be discussed later. Given the AOAs, the maximum size of the activated subarray should be selected and the quantization error between the AOAs and the beam angles in the codeword should be minimized to maximize the beam gain of the combining vector for the $k$-th t-UAV. Similarly to (42),
Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element cannot be contained in different subarrays, the problem of activated CCA subarray partition arises at the r-UAV side for fast multi-UAV beam tracking. The dynamic CCA subarray partition can be considered as dynamic antenna resource allocation for multiple t-UAVs, which has a strong impact on the sum SE of the UAV mmWave network.
In the considered UAV mmWave network, the r-UAV needs to activate multiple subarrays and select multiple combining vectors to serve multiple t-UAVs at the same time. Hence, the beam gain of the combining vector maximization problem for r-UAV with our proposed codebook can be rewritten as
C
Thus, $\bar{a}|\bar{b}$-regular digraphs with size $\bar{M}$ can be characterized as $\bar{a}|\bar{b}$-biregular graphs with size $\bar{M}|\bar{M}$
This will be bootstrapped to the multi-color case in later sections. Note that the 1-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on the left must be connected, via the unique edge relation, to every node on the right – regardless of the matrix. We
We start in this section by giving proofs only for the 1-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will also be used as the base cases in inductive constructions for the case with arbitrary colors.
To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict
The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges. The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from
C
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal.
Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution $\nu_{0}$ be the standard Gaussian distribution $N(0,I_{D})$. In parallel to Theorem 4.3, we establish the following theorem, which characterizes the global optimality and convergence of Q-learning. Recall that we write $\mathcal{X}=\mathcal{S}\times\mathcal{A}$ and $x=(s,a)\in\mathcal{X}$. Also, $\nu_{t}$ is the PDE solution in (6.3), while $\theta^{(m)}(k)$ is the Q-learning dynamics in (6.2).
Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalizes to the setting where $\|x\|\leq C$ for an absolute constant $C>0$.
Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). See also the previous analysis in the NTK regime (Daniely, 2017; Chizat and Bach, 2018a; Jacot et al., 2018; Li and Liang, 2018; Allen-Zhu et al., 2018a, b; Du et al., 2018a, b; Zou et al., 2018; Arora et al., 2019a, b; Lee et al., 2019; Cao and Gu, 2019; Chen et al., 2019a; Zou and Gu, 2019; Ji and Telgarsky, 2019; Bai and Lee, 2019). Specifically, the previous mean-field analysis casts SGD as the Wasserstein gradient flow of an energy functional, which corresponds to the objective function in supervised learning. In contrast, TD follows the stochastic semigradient of the MSPBE (Sutton and Barto, 2018), which is biased. As a result, there does not exist an energy functional for casting TD as its Wasserstein gradient flow. Instead, our analysis combines a generalized notion of one-point monotonicity (Harker and Pang, 1990) and the first variation formula in the Wasserstein space (Ambrosio et al., 2008), which is of independent interest.
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal.
C
Regarding parameter efficiency for NMT, Wu et al. (2019a) present lightweight and dynamic convolutions. Ma et al. (2021) approximate softmax attention with two nested linear attention functions. These methods are orthogonal to our work and it should be possible to combine them with our approach.
We suggest that selectively aggregating different layer representations of the Transformer may improve the performance, and propose to use depth-wise LSTMs to connect stacked (sub-) layers of Transformers. We show how Transformer layer normalization and feed-forward sub-layers can be absorbed by depth-wise LSTMs, while connecting pure Transformer attention layers by depth-wise LSTMs (for Transformer encoder and decoder blocks), replacing residual connections.
Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the newly introduced LSTM unit, which only introduces one LSTM unit per layer, and the parameters of the LSTM can be shared across layers.
We use depth-wise LSTM rather than a depth-wise multi-head attention network Dou et al. (2018) with which we can build the NMT model solely based on the attention mechanism for two reasons: 1) we have to compute the stacking of Transformer layers sequentially as in sequential token-by-token decoding, and compared to the use of depth-wise LSTM of $O(n)$ complexity, depth-wise multi-head attention networks suffer from $O(n^{2})$ complexity and they cannot be parallelized at the depth level. 2) the attention mechanism linearly combines representations with attention weights. Thus, it lacks the ability to provide the non-linearity compared to the LSTM, which we suggest is important.
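A simplified, hedged sketch of the depth-wise LSTM idea in PyTorch: a single LSTM cell, shared across layers, runs over layer depth and replaces the residual connection and feed-forward sub-layer. This illustrates the concept only and is not the authors' implementation (which integrates layer normalization and the attention sub-layers more carefully).

```python
import torch
import torch.nn as nn

class DepthwiseLSTMEncoder(nn.Module):
    """Sketch: a shared LSTM cell runs over layer *depth* (not time), replacing
    residual connections and the feed-forward sub-layer."""

    def __init__(self, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        self.attns = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True) for _ in range(n_layers)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(n_layers)])
        # One LSTM unit shared across layers, as suggested in the passage above.
        self.depth_lstm = nn.LSTMCell(d_model, d_model)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        b, t, d = x.shape
        h = x.reshape(b * t, d)                # depth-wise LSTM state, one per token position
        c = torch.zeros_like(h)
        for attn, norm in zip(self.attns, self.norms):
            y = norm(h.view(b, t, d))
            y, _ = attn(y, y, y)               # self-attention sub-layer
            # Depth-wise LSTM step: the attention output is the input; the running
            # (h, c) state aggregates representations across layers.
            h, c = self.depth_lstm(y.reshape(b * t, d), (h, c))
        return h.view(b, t, d)

# Usage sketch:
# out = DepthwiseLSTMEncoder()(torch.randn(2, 10, 512))
```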
In this paper, we replace residual connections of the Transformer with depth-wise LSTMs, to selectively manage the representation aggregation of layers benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-forward networks with the depth-wise LSTM for the Transformer.
D
$\mathcal{K}^{\circ}(X_{i})=\uptau_{i}\cap\llbracket\mathsf{FO}[\upsigma_{i}]\rrbracket_{X_{i}}$. By Lemma 5.9, the topological sum of these spaces $Y\triangleq\sum_{i\in I}(X_{i},\uptheta_{i})$ is a
lpps is indeed a pre-spectral space. Conversely, $\langle X,\uptau,\mathcal{K}^{\circ}(X)\rangle$ is well-defined whenever $(X,\uptau)$ is a pre-spectral space; in
definition, this map is surjective. Notice that this map is actually a logical map from $\langle Y,\uptau_{Y},\mathcal{K}^{\circ}(Y)\rangle$ to
$\left\{U\mid U\in\langle\uptau_{Y}\cap\llbracket\mathsf{FO}[\upsigma]\rrbracket_{Y}\rangle\right\}$
pre-spectral space. Recall that $\langle Y,\uptau_{Y},\mathcal{K}^{\circ}(Y)\rangle$ is a lpps. We are going to exhibit a surjective map $f$ from $Y$ to the logical sum $X$ of
D
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a problem of learning an ordinal distortion from a distorted image. The ordinal distortion indicates the distortion levels of a series of pixels, which extend outward from the principal point. To predict the ordinal distortion, we design a local-global associated estimation network optimized with an ordinal distortion loss function. A distortion-aware perception layer is exploited to boost the feature extraction of different degrees of distortion.
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As we pointed out earlier, the proposed ordinal distortion is explicit to the image feature and is observable from a distorted image; thus it boosts the neural networks’ learning ability. On the other hand, the performance of the distortion parameter estimation drops as the amount of training data decreases. In contrast, our ordinal distortion estimation performs more consistently due to the homogeneity of the learning representation.
Figure 1: Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. We aim to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneous distortion parameters. In contrast, our proposed approach only requires a part of a distorted image (distortion element) and estimates the ordinal distortion. Due to its explicit description and homogeneity, we can obtain more accurate distortion estimation and achieve better corrected results.
Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes insufficient distortion perception. To bridge the gap between the image feature and the calibration objective, we present a novel intermediate representation, i.e., ordinal distortion, which displays a learning-friendly attribute for learning models. For an intuitive and comprehensive analysis, we compare these two representations from the following three aspects.
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations, in which the compared methods can be classified into traditional methods [23][24] and learning methods [8][11][12]. Note that our approach only requires a patch of the input distorted image to estimate the ordinal distortion.
B
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy.
First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8\geq 128$, we will use the gradient accumulation [28] with the batch size being 128. We train the model with 160 epochs (i.e., pass through the dataset 160 times). The cosine annealing learning rate [24] (without restarts) is adopted for the five methods. In the $m$-th epoch, the learning rate is $\eta_{m}=\eta_{0}*0.5(1+\cos(m\pi/160))$, $m=0,1,\ldots,159$.
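The schedule in the formula above, written out as a small helper; PyTorch's `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max=160` and `eta_min=0` implements the same decay without restarts.

```python
import math

def cosine_annealing_lr(eta0, epoch, total_epochs=160):
    """eta_m = eta_0 * 0.5 * (1 + cos(m * pi / total_epochs)), m = 0, ..., total_epochs - 1."""
    return eta0 * 0.5 * (1.0 + math.cos(epoch * math.pi / total_epochs))

# Example: with eta_0 = 0.1 the rate starts at 0.1 (epoch 0) and decays smoothly
# towards 0 at the final epoch.
schedule = [cosine_annealing_lr(0.1, m) for m in range(160)]
```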
We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD. The experiments are implemented based on the DeepCTR framework (https://github.com/shenweichen/DeepCTR-Torch).
We use a pre-trained ViT [4] model (https://huggingface.co/google/vit-base-patch16-224-in21k) and fine-tune it on the CIFAR-10/CIFAR-100 datasets. The experiments are implemented based on the Transformers framework (https://github.com/huggingface/transformers). We fine-tune the model with 20 epochs.
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy.
C
When the algorithm terminates with $C_{s}=\emptyset$, Lemma 5.2 ensures the solution $z^{\text{final}}$ is integral. By Lemma 5.5, any client $j$ with $d(j,S)>9R_{j}$ must have $j\in C^{\text{final}}_{0}$. Hence, $\sum_{j:d(j,S)>9R_{j}}v_{j}\leq\sum_{j\in C_{0}}v_{j}$. For the facility costs, we have $\sum_{i\in S}w_{i}=\sum_{i}z_{i}^{\text{final}}w_{i}$. Finally, by Lemma 5.3, and noting that $C_{s}^{\text{final}}=\emptyset$, we have $\sum_{i}z_{i}^{\text{final}}w_{i}+\sum_{j\in C_{0}}v_{j}\leq V$.
  $F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}~|~j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$
Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awards CCF-1422569, CCF-1749864 and CCF-1918749, and by research awards from Adobe, Amazon, and Google.
For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here, $\mathcal{F}$ and $\mathcal{C}$ correspond to such locations and the population affected by the outbreak, and needing services, respectively.
        do $F_{A}\leftarrow\{i^{A}_{j}~|~j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}$
B
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), among others. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost functions are used in many distributed optimization algorithms. However, it is difficult to get accurate (sub)gradients in many practical applications. For example, in distributed statistical machine learning ([3]), the local loss functions are the mathematical expectations of random functions, so the local optimizers can only obtain measurements of the (sub)gradients with random noises. The influence of (sub)gradient measurement noises has been considered for distributed optimization algorithms in [4]-[7].
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows.
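A generic sketch (in NumPy) of one consensus-plus-subgradient update of the kind analyzed above, with noisy subgradient measurements and additive/multiplicative channel noises on the links; the noise model and step-size handling here are simplified assumptions and do not reproduce the paper's exact algorithm or conditions.

```python
import numpy as np

def distributed_subgradient_step(x, W, subgrads, step, rng,
                                 add_noise=0.01, mul_noise=0.01, meas_noise=0.01):
    """One synchronous update of all local optimizers.

    x:        (n, d) current states of the n local optimizers.
    W:        (n, n) weighted adjacency matrix of the current random digraph.
    subgrads: list of n callables, each returning a subgradient of the local cost at a point.
    """
    n, d = x.shape
    x_next = np.empty_like(x)
    for i in range(n):
        consensus = np.zeros(d)
        for j in range(n):
            if i != j and W[i, j] > 0:
                # Received value corrupted by multiplicative and additive channel noise.
                recv = x[j] * (1 + mul_noise * rng.standard_normal(d)) \
                       + add_noise * rng.standard_normal(d)
                consensus += W[i, j] * (recv - x[i])
        # Only a noisy measurement of the local subgradient is available.
        noisy_grad = subgrads[i](x[i]) + meas_noise * rng.standard_normal(d)
        x_next[i] = x[i] + step * consensus - step * noisy_grad
    return x_next
```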
Besides, the network graphs may change randomly with spatial and temporal dependency (i.e., both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent), rather than being i.i.d. graph sequences as in [12]-[15], and additive and multiplicative communication noises may co-exist in communication links ([21]).
However, a variety of random factors may co-exist in practical environment. In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the distributed optimization with multiple uncertain factors ([11]-[15]).
D
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics without violating the privacy. Inspired by local differential privacy, this paper uses the method of randomized response to perturb original QI values before release to prevent the disclosure of matching the combination of QI values.
Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces the contradiction between privacy protection and data analysis [9]. For instance, a smaller $\epsilon$ for $\epsilon$-differential privacy provides better protection but worse information utility.
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics without violating the privacy. Inspired by local differential privacy, this paper uses the method of randomized response to perturb original QI values before release to prevent the disclosure of matching the combination of QI values.
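A minimal sketch of randomized response applied to a single categorical QI value: with probability p the true value is kept, otherwise a uniformly random other value from the attribute's domain is reported. The retention probability p and the uniform perturbation are illustrative assumptions; MuCo's actual mechanism may differ.

```python
import random

def randomized_response(value, domain, p):
    """Report the true QI value with probability p; otherwise report a value
    drawn uniformly from the rest of the attribute's domain."""
    if random.random() < p:
        return value
    others = [v for v in domain if v != value]
    return random.choice(others)

# Example: perturbing an 'age group' quasi-identifier before release.
age_groups = ["18-29", "30-44", "45-59", "60+"]
released = randomized_response("30-44", age_groups, p=0.75)
```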
The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution in the original data. Second, the anonymization of MuCo is a “black box” process for recipients because the only difference between the original data and the anonymized data is that some original QI values are replaced with random values. Thus, the adversary cannot determine which QI values are altered or the ranges of the variations, so the matching tuples are more likely to be wrong or may not even exist when the adversary uses more QI values to match, whereas the adversary obtains many more matching records if the size of the combination of QI values is not large enough. For the recipient, on the other hand, the results of query statements are specific records rather than groups. Accordingly, the results are more accurate. The conducted extensive experiments also illustrate the effectiveness of the proposed method.
Note that the application scenarios of differential privacy and the models of the $k$-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata, while the approaches of the $k$-anonymity family sanitize the original microdata and publish the anonymized version of the microdata. Therefore, differential privacy is inapplicable to the scenario we address in this paper.
D
Table 3: PointRend’s performance on the testing set (track B). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvement; we guess that our PointRend baseline already achieves promising performance (77.38 mAP).
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both the box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. The mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with a Res2NetR101 backbone yields 74.58 mAP on the validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared to HTC’s mask head, PointRend’s lightweight segmentation head alleviates both memory and computation costs dramatically, thus enables larger input image resolutions during training and testing, which further improves the segmentation quality. To fully understand which components contribute to PointRend’s performance, we construct our own validation set by randomly selecting 3000 images from original training data to evaluate offline. We will show the step-by-step improvements adopted on PointRend.
Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRend Kirillov et al. (2020). Most of these detectors focus on overall performance on public datasets like COCO, which contains much smaller instances than 3D-FUTURE, while paying less attention to large object segmentation. As illustrated in Figure 1, the size distribution of bounding boxes in 3D-FUTURE and COCO indicates that the former contains much larger objects while the latter is dominated by smaller instances. Thus, the prominent methods used in COCO, like MaskRCNN He et al. (2017) and HTC, may generate blurry contours for large instances. Their mask heads output segmentation from a limited small feature size (e.g., $14\times 14$), which is dramatically insufficient to represent large objects. All of these motivate us to segment large instances in a fine-grained and high-quality manner. SOLOv2 builds an efficient single-shot framework with strong performance and dynamically generates predictions with a much larger mask size (e.g., 1/4 scale of the input size) than HTC. PointRend iteratively renders the output mask over adaptively sampled uncertain points in a coarse-to-fine fashion, which is naturally suitable for generating smooth and fine-grained instance boundaries. By conducting extensive experiments on HTC, SOLOv2 and PointRend, PointRend succeeds in producing finer mask boundaries and significantly outperforms the other methods by a large margin. Our step-by-step modifications adopted on PointRend finally achieve state-of-the-art performance on the 3D-FUTURE dataset, which yields 79.2 mAP and 77.38 mAP on the validation and test sets respectively. The final submission is an ensemble of 5 PointRend models with slightly different settings, reaching the 1st place in this competition.
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from original 14 to 26 for coarse prediction head, and from 14 to 24 for fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our splitted validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020). In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission.
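The step-by-step modifications above, collected into an illustrative summary dictionary; the keys are hypothetical shorthand (they are not detectron2 or PointRend config names), and only the values are taken from the passage.

```python
# Illustrative summary of the PointRend modifications described above.
# The keys are hypothetical shorthand, not real config names; the values are quoted from the text.
POINTREND_TRICKS = {
    "fpn_features": ["p2", "p3", "p4", "p5", "p6"],  # coarse + fine-grained heads read P2-P6
    "subdivision_points_inference": 70,              # raised from the default 28
    "coarse_head_train_points": 26,                  # raised from 14
    "point_head_train_points": 24,                   # raised from 14
    "backbone": "X101-64x4d + DCN",
    "multiscale_train_short_side": (1200, 1400),     # raised from (800, 1000)
}
```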
B
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus 1 fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved
We denote by $\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}$ the projection onto the $i$-th coordinate: $\varepsilon_{i}(\delta_{1},\dots,\delta_{n})=\delta_{i}$. For a subset $A$ of $[n]:=\{1,\dots,n\}$ we denote $W_{A}=\prod_{i\in A}\varepsilon_{i}$, $W_{A}:\{-1,1\}^{n}\to\{-1,1\}$. The $W_{A}$-s are the characters of the Cantor group $\{-1,1\}^{n}$ (with coordinatewise multiplication) and form an orthonormal basis in $L_{2}$ of the Cantor group equipped with the normalized counting measure. In this note we shall be concerned with functions from $\{-1,1\}^{n}$ into the complex plane, $\mathbb{C}$. These can also be considered as a couple of real functions. Each such function $f:\{-1,1\}^{n}\to\mathbb{C}$ has a unique expansion
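The expansion referred to at the end of this excerpt is presumably the standard Fourier–Walsh expansion with respect to the characters $W_A$; for completeness, a LaTeX rendering of that standard formula:

```latex
% Standard Fourier--Walsh expansion of f : \{-1,1\}^n \to \mathbb{C}
f \;=\; \sum_{A \subseteq [n]} \hat{f}(A)\, W_A,
\qquad \text{where } \hat{f}(A) \;=\; 2^{-n} \sum_{\delta \in \{-1,1\}^n} f(\delta)\, W_A(\delta).
```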
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known.
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
A
Corollary 1 shows that if the local variations are known, we can achieve near-optimal dependency on the total variation $B_{\bm{\theta}},B_{\bm{\mu}}$ and the time horizon $T$ compared to the lower bound provided in Theorem 1. However, the dependency on $d$ and $H$ is worse. The dependency on $d$ is unlikely to improve unless there is an improvement to LSVI-UCB.
Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al., 2016), gaming-AI (Silver et al., 2018), and inventory control (Agrawal & Jia, 2019), among others. Due to the large dimension of sequential decision-making problems that are of growing interest, classical RL algorithms designed for finite state space such as tabular Q-learning (Watkins & Dayan, 1992) no longer yield satisfactory performance. Recent advances in RL rely on function approximators such as deep neural nets to overcome the curse of dimensionality, i.e., the value function is approximated by a function which is able to predict the value function for unseen state-action pairs given a few training samples. This function approximation technique has achieved remarkable success in various large-scale decision-making problems such as playing video games (Mnih et al., 2015), the game of Go (Silver et al., 2017), and robot control (Akkaya et al., 2019). Motivated by the empirical success of RL algorithms with function approximation, there is growing interest in developing RL algorithms with function approximation that are statistically efficient (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Wang et al., 2020; Wei et al., 2021; Neu & Olkhovskaya, 2021; Jiang et al., 2017; Wang et al., 2020; Jin et al., 2021; Du et al., 2021). The focus of this line of work is to develop statistically efficient algorithms with function approximation for RL in terms of either regret or sample complexity. Such efficiency is especially crucial in data-sparse applications such as medical trials (Zhao et al., 2009).
The definition of total variation $B$ is related to the misspecification error defined by Jin et al. (2020). One can apply the Cauchy-Schwarz inequality to show that our total variation bound implies that the misspecification in Eq. (4) of Jin et al. is also bounded (but not vice versa). However, the regret analysis in the misspecified linear MDP of Jin et al. (2020) is restricted to static regret, so we cannot directly borrow their analysis for the misspecified setting (Jin et al., 2020) to handle our dynamic regret (as defined in Eq. (1)).
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs, mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and the reward and transition functions are allowed to change $l$ times. They show that UCRL2 with restart achieves $\tilde{O}(l^{1/3}T^{2/3})$ dynamic regret, where $T$ is the time horizon. Later works (Ortner et al., 2020; Cheung et al., 2020; Fei et al., 2020) generalize the nonstationary setting to allow the reward and transition functions to vary at any number of time steps, as long as the total variation is bounded. Specifically, the work of Ortner et al. (2020) proves that UCRL with restart achieves $\tilde{O}((B_{r}+B_{p})^{1/3}T^{2/3})$ dynamic regret (when the variation in each epoch is known), where $B_{r}$ and $B_{p}$ denote the total variation of the reward and transition functions over all time steps. Cheung et al. (2020) proposes an algorithm based on UCRL2 that combines sliding windows with a confidence widening technique. Their algorithm has a slightly worse dynamic regret bound of $\tilde{O}((B_{r}+B_{p})^{1/4}T^{3/4})$ without knowing the local variations. Further, Fei et al. (2020) develops an algorithm which directly optimizes the policy and enjoys near-optimal regret in the low-variation regime. A different model of nonstationary MDPs is proposed by Lykouris et al. (2021), which smoothly interpolates between stationary and adversarial environments by assuming that most episodes are stationary except for a small number of adversarial episodes. Note that Lykouris et al. (2021) considers linear function approximation, but their nonstationarity assumption is different from ours. In this paper, we assume the variation budget for the reward and transition functions is bounded, which is similar to the settings in Ortner et al. (2020); Cheung et al. (2020); Mao et al. (2021).
Concurrently to our work, Touati & Vincent (2020) propose an algorithm combining weighted least-squares value iteration with the optimistic principle, achieving the same $\tilde{O}(B^{1/4}d^{5/4}H^{5/4}T^{3/4})$ regret as we do with knowledge of the total variation $B$. They do not have a dynamic regret bound when knowledge of the local variations is available. Their proposed algorithm uses exponential weights to smoothly forget data that are far in the past. By contrast, our algorithm periodically restarts the LSVI-UCB algorithm from scratch to handle the nonstationarity and is much more computationally efficient. Another concurrent work by Wei & Luo (2021) follows a substantially different approach to achieve the optimal $T^{2/3}$ regret. The key idea of their algorithm is to run multiple base algorithms for stationary instances with different durations simultaneously, under a carefully designed random schedule. Compared with them, our algorithm has a slightly worse rate but a much better computational complexity, since we only need to maintain one instance of the base algorithm. Neither of these concurrent works has empirical results, and we are also the first to conduct numerical experiments on online exploration for nonstationary MDPs (Section 6). Other related and concurrent works investigate online exploration in different classes of nonstationary MDPs, including linear kernel MDPs (Zhong et al., 2021), constrained tabular MDPs (Ding & Lavaei, 2022), and the stochastic shortest path problem (Chen & Luo, 2022).
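To make the restart idea concrete, below is a minimal sketch of a periodic-restart wrapper around a generic base learner. The base learner here is a toy epsilon-greedy bandit standing in for LSVI-UCB, and the reset/act/update interface and the restart period are assumptions for illustration only, not the authors' implementation.

```python
import random

class EpsilonGreedyBandit:
    """Toy stand-in for a base learner such as LSVI-UCB (assumed interface)."""
    def __init__(self, n_actions, epsilon=0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.reset()

    def reset(self):
        self.counts = [0] * self.n_actions
        self.values = [0.0] * self.n_actions

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.values[a])

    def update(self, action, reward):
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

def run_with_periodic_restart(learner, reward_fn, horizon, epoch_length):
    """Restart the base learner from scratch every `epoch_length` steps,
    discarding all past data, to cope with a drifting environment."""
    total_reward = 0.0
    for t in range(horizon):
        if t % epoch_length == 0:
            learner.reset()  # forget stale data gathered under the old dynamics
        action = learner.act()
        reward = reward_fn(t, action)
        learner.update(action, reward)
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    # Nonstationary toy problem: the best arm switches halfway through.
    def reward_fn(t, a):
        best = 0 if t < 5000 else 1
        return 1.0 if a == best else 0.0

    horizon = 10_000
    # Restart period on the order of T^{2/3}; the exact choice in the paper
    # also involves the variation budget.
    epoch_length = int(horizon ** (2 / 3))
    learner = EpsilonGreedyBandit(n_actions=2)
    print(run_with_periodic_restart(learner, reward_fn, horizon, epoch_length))
```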
Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhovskaya, 2021; Huang et al., 2021; Modi et al., 2021; Jiang et al., 2017; Agarwal et al., 2020; Dong et al., 2020; Jin et al., 2021; Du et al., 2021; Foster et al., 2021a; Chen et al., 2022). Recent work also studies the instance-dependent sample complexity bound for RL with function approximation, which adapts to the complexity of the specific MDP instance (Foster et al., 2021b; Dong & Ma, 2022). All of these works assume that the learner is interacting with a stationary environment. In sharp contrast, this paper considers learning in a nonstationary environment. As we will show later, if we do not properly adapt to the nonstationarity, linear regret is incurred.
B
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts targeting fake news on instant messaging apps may be more important. More respondents encountered fake news on instant messaging apps than on social media, and they reported the least trust in these apps. They also rated the sharing of fake news as a greater problem than its creation. These findings suggest that, in Singapore, communication with personal contacts, such as through the forwarding of messages, rather than with the public, such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one's social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013) and stronger preferences for group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
In general, respondents possess a competent level of digital literacy skills, with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through search engines and with authoritative information found on government communication platforms, and they post corrections and warnings when they encounter fake news. That respondents show strong trust in and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts in making government decisions transparent and the readiness of the government in addressing public concerns through online forums and dialogues (REACH, [n.d.]). There is opportunity here for the government to launch programs such as campaigns, calls to action and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions.
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2), which is statistically significant ($r(9)=-0.81$, $p<.005$). Trust is built on transparency and truthfulness, and the presence of fake news, which is deceptive and usually meant to serve hidden agendas, may erode trust. It is worthwhile to consider whether the trust in media items is due to people's own encounters with fake news, or because of secondary factors. In Singapore, there have been active efforts through campaigns from various organizations (e.g., S.U.R.E. (Board, [n.d.]), Better Internet (Council, [n.d.]), VacciNationSG (Lai, 2021)) to raise awareness of misinformation, disinformation and fake news. If it is through exposure to the messages of these campaigns that people's trust in media items has been influenced, especially for those who might not have personally encountered fake news, this suggests the importance of media literacy education in addressing fake news, particularly when secondary effects, such as practicing greater caution due to a lack of trust, come into play.
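As a quick illustration of the kind of statistic reported above, the following sketch computes a Pearson correlation with its p-value for eleven media sources (so the degrees of freedom are $n-2=9$, matching $r(9)$). The numbers are invented for illustration and are not the survey data.

```python
# Hypothetical example: Pearson correlation between how often fake news was
# encountered on each media source and the reported trust in that source.
# The values below are made up; they are not the survey data.
from scipy.stats import pearsonr

encounter_rate = [0.62, 0.55, 0.48, 0.41, 0.37, 0.33, 0.28, 0.22, 0.18, 0.12, 0.08]
trust_score    = [2.1, 2.4, 2.6, 2.9, 3.0, 3.2, 3.4, 3.7, 3.9, 4.2, 4.4]

r, p = pearsonr(encounter_rate, trust_score)
df = len(encounter_rate) - 2  # degrees of freedom, reported as r(df)
print(f"r({df}) = {r:.2f}, p = {p:.4f}")
```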
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been mounted to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people's truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this as a concern for the CSCW community. As global efforts addressing fake news take off, we aim to understand the perceptions and practices of news sharing and fake news in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts targeting fake news on instant messaging apps may be more important. More respondents encountered fake news on instant messaging apps than on social media, and they reported the least trust in these apps. They also rated the sharing of fake news as a greater problem than its creation. These findings suggest that, in Singapore, communication with personal contacts, such as through the forwarding of messages, rather than with the public, such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one's social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013) and stronger preferences for group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
B