context | A | B | C | D | label
---|---|---|---|---|---|
This is $f^{\prime\prime}(x)/f^{\prime}(x)$ of the generic formula, and can be quickly | ${R_{n}^{m}}^{\prime\prime}/{R_{n}^{m}}^{\prime}$ and $R_{n}^{m}/{R_{n}^{m}}^{\prime}$ with relay to
the generic formulas of associated, terminating hypergeometric | computed from $R_{n}^{m}(x)/{R_{n}^{m}}^{\prime}(x)=f(x)/f^{\prime}(x)$
of the lower order. | \prime}, $+\,12(1+m)\,x^{m+1}F^{\prime\prime}+8\,x^{m+3}F^{\prime\prime\prime}$,
| {n,n^{\prime}}. $\int_{0}^{1}x^{D-1}\,R_{n}^{m}(x)\,R_{n^{\prime}}^{m}(x)\,dx=\frac{1}{2n+D}\,\delta_{n,n^{\prime}}.$
| B |
The lower-unitriangular matrices $u_{1}$ and $u_{2}$ are returned as words in the Leedham-Green–O’Brien standard generators [11] for $\textnormal{SL}(d,q)$ defined in Section 3.1 below.
In the rest of this section we explain how to express an arbitrary monomial matrix $w\in\textnormal{SL}(d,q)$ as a word in the Leedham-Green–O’Brien standard generators, yielding an MSLP from the standard generators to $w$. |
There are several well-known generating sets for classical groups. For example, special linear groups are generated by the subset of all transvections [21, Theorem 4.3] or by two well-chosen matrices, such as the Steinberg generators [19]. Another generating set which has become important in algorithms and applications in the last 10-15 years is the Leedham-Green and O’Brien standard generating set, in the following called the LGO generating set. These generators are defined for all classical groups in odd characteristic in [11] and in even characteristic in [10]. |
Therefore, we decided to base the procedures we present on a set of generators very close to the LGO standard generators. Note that the choice of the generating set has no impact on the results, as it is always possible to determine an MSLP which computes the LGO standard generators given an arbitrary generating set and to preface an MSLP for another application by this MSLP. | Note that a small variation of these standard generators for $\textnormal{SL}(d,q)$ is used in Magma [14] as well
as in algorithms to verify presentations of classical groups, see [12], where only the generator $v$ is slightly different in the two scenarios when $d$ is even. | The LGO generating set offers a variety of advantages. In practice it is the generating set produced by the constructive recognition algorithms from [10, 11] as implemented in MAGMA. Consequently, algorithms in the composition tree data structure, both in MAGMA and in GAP, store elements in classical groups as words in the LGO generators. Moreover, the LGO generators can be used directly to verify representations of classical groups [12].
| A |
where $\Omega\subset\mathbb{R}^{d}$ with $d=2$ or $3$ for simplicity, and is an open bounded domain with polyhedral boundary $\partial\Omega$, the symmetric tensor $\mathcal{A}\in[L^{\infty}(\Omega)]_{\text{sym}}^{d\times d}$ is uniformly positive definite and bounded, and $g$ is part of the given data. | In [MR2718268] it is shown that the number of eigenvalues that are very large is related to the number of connected sub-regions on $\bar{\tau}\cup\bar{\tau}^{\prime}$ with large coefficients surrounded by regions with small coefficients. Generalized eigenvalue problems have also been used in overlapping domain decomposition solvers [MR2718268, MR2916377, MR3175183, MR3033238]. The design of robust discretizations with respect to coefficients using domain decomposition ideas has been studied in [MR2666649, MR1642758, MR3350765] assuming some regularity on the solution, and in [MR2718268] for a class of problems when the weighted Poincaré constant [MR3047947, MR3013465, MR2867661] is not large, otherwise the exponential decay of the multiscale functions deteriorates. See also [MR2753343, MR3109775] where a priori error estimates are obtained in terms of spectral norms.
| One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method not so practical. Here in this paper, in the presence of rough coefficients, spectral techniques are employed to overcome such hurdle, and by solving local eigenvalue problems we define a space where the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove
macro-elements corner singularities that occur in LOD methods based on | It is hard to approximate such problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergent proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85, MR1979846, MR2058933, HMV, MR1642758, MR3584539, MR2030161, MR2383203, vs1, vs2, MR2740478]. Some methods work even considering that the solution has low regularity [MR2801210, MR2753343, MR3225627, MR3177856, MR2861254]
but are based on ideas that differ considerably from what we advocate here |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems. | C |
We think Alg-A is better in almost every aspect. This is because it is essentially simpler.
Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity $O(n)$ than the others: |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is described as “involved” by its authors, as it contains complicated subroutines for handling many subcases. |
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching the next candidate from a given one is much easier – the ratio of code length for this between Alg-A and Alg-CM is 1:7. | Alg-A computes at most $n$ candidate triangles (proof is trivial and omitted) whereas Alg-CM computes at most $5n$ triangles (proved in [8]) and so does Alg-K.
(By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | D
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade this off by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of the Munich shooting higher than the average of news events (hence, closer to a news event). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see that the curve of the Munich shooting event is also close to the curve of average news, indicating the event is more news-related. |
As observed in [19, 20], rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature vector representation proposed in [20]. We base our credibility feature on the time series approach and train the classifier with features from different high-level contexts (i.e., users, Twitter and propagation) in a cascaded manner. In this section, we first detail the employed Dynamic Series-Time Structure, then describe the high- and low-level ensemble features used for learning in this pipeline step. |
In this work, we propose an effective cascaded rumor detection approach using deep neural networks at tweet level in the first stage and wisdom of the “machines”, together with a variety of other features in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed approach outperforms state of the |
Most relevant to our work is the work presented in [20], where a time series model to capture the time-based variation of social-content features is used. We build upon the idea of their Series-Time Structure when building our approach for early rumor detection with our extended dataset, and we provide a deep analysis of how a wide range of features changes during diffusion time. Ma et al. [19] used Recurrent Neural Networks for rumor detection; they batch tweets into time intervals and model the time series as an RNN sequence. Without any other handcrafted features, they got almost 90% accuracy for events reported in Snopes.com. As with all other deep learning models, the process of learning is a black box, so we cannot explain the cause of the good performance based only on content features. The model performance is also dependent on the tweet retrieval mechanism, whose quality is uncertain for stream-based trending sub-events. | at an early stage. Our fully automatic, cascading rumor detection method follows
the idea of focusing on early rumor signals in text contents, which are the most reliable source before the rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which can capture more hidden meaningful signals than enquiries alone for debunking rumors. [7, 19] also use RNNs for rumor debunking. However, in their work, the RNN is used at the event level. The classification leverages only the deep data representations of the aggregated tweet contents of the whole event, while ignoring other features that are effective at a later stage, such as user-based features and propagation features. Although tweet contents are the only reliable source of clues at an early stage, they are also likely to contain doubtful perspectives and different stances at this specific moment. In addition, they could relate to rumorous sub-events (see e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution; however, detecting sub-events in real time over the Twitter stream is a challenging task [22], which increases latency and complexity. In this work, we address this issue by deep neural modeling only at the single-tweet level. Our intuition is to leverage the “wisdom of the crowd” theory: even if a certain portion of tweets at a moment (mostly the early stage) is weakly predicted (because of these noisy factors), the ensemble of them contributes to a stronger prediction. | B
In a follow-up work, Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate, for tails for which $\ell^{\prime}(u)$ is of the form $\exp(-u^{\nu})$ with $\nu>0.25$. They then conjectured, based on heuristic analysis, that the exponential tail is optimal among all possible tails. Furthermore, they demonstrated that polynomial or heavier tails do not converge to the max margin solution. Lastly, for the exponential loss they proposed a normalized gradient scheme which can significantly improve the convergence rate, achieving $O(\log(t)/\sqrt{t})$. | Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large $L_{1}$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on the exponential loss of a linear model, these results can be interpreted as analyzing the bias of coordinate descent, rather than gradient descent, on a monotone decreasing loss with an exact exponential tail. Indeed, with small enough step sizes, such a coordinate descent procedure does converge precisely to the maximum $L_{1}$-margin solution (Zhang et al., 2005; Telgarsky, 2013). In fact, Telgarsky (2013) also generalizes these results to other losses with tight exponential tails, similar to the class of losses we consider here.
| decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour for separable problems. This implies that the non-tail
part does not affect the bias. The bias is also independent of the step-size | The convergence of the direction of gradient descent updates to the maximum $L_{2}$ margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile
continuing to optimize long after we have zero training error, and | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the results for exponential loss are independent of initialization and hold under only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects, thereby making the analysis simpler compared to squared loss.
| D |
For analysing the employed features, we rank them by importance using RF (see 4). The best feature is related to sentiment polarity scores. There is a big bias between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news events is -0.066 and the average of rumors is -0.1393, showing that rumor-related messages tend to contain more negative sentiments. Furthermore, we would expect that verified users are less involved in rumor spreading. However, the feature appears near the bottom of the ranked list, indicating that it is not as reliable as expected. Also interestingly, the feature “IsRetweet” is not as good a feature as expected, which means the probabilities of people retweeting rumors or true news are similar (both appear near the bottom of the ranked feature list). It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event.
| To construct the training dataset, we collected rumor stories from the rumor tracking websites snopes.com and urbanlegends.about.com. In more detail, we crawled 4300 stories from these websites. From the
story descriptions we manually constructed queries to retrieve the relevant tweets for the 270 rumors with the highest impact. Our approach to query construction mainly follows (gupta2014tweetcred, ). For the news event instances (non-rumor examples), we make use of the corpus from Mcminn et al. (mcminn2013building, ), which covers 500 real-world events. They have crawled tweets via the streaming API from the $10^{th}$ of October 2012 to the $7^{th}$ of November 2012. The involved events have been manually verified and related to tweets with relevance judgments, which has resulted in a high-quality corpus. From the 500 events, we select the top 230 events with the highest tweet volumes (as a criterion for event impact). Furthermore, we have added 40 other news events, which happened around the time periods of our rumors. This results in a dataset of 270 rumors and 270 events. The dataset details are shown in Table 2. We then construct two distinct datasets for (1) single tweet and (2) rumor classification. | We use the same dataset described in Section 4.1. In total – after cutting off 180 events for pre-training the single tweet model – our dataset contains 360 events and 180 of them are labeled as rumors. As a rumor is often a long-circulating story (friggeri2014rumor, ), this results in a rather long time span. In this work, we develop an event identification strategy that focuses on the first 48 hours after the rumor has peaked. We also extract 11,038 domains, which are contained in tweets in this 48-hour time range.
|
Training data for single tweet classification. An event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless events from the above dataset. In the end, we used 90 rumors and 90 news events associated with 72452 tweets in total. This results in a highly reliable ground truth of tweets labelled as news-related and rumor-related, respectively. Note that the labeling of a tweet is inherited from the event label, and thus can be considered a semi-automatic process. | The time period of a rumor event is sometimes fuzzy and hard to define. One reason is that a rumor may have been triggered a long time ago and kept existing, but it did not attract public attention. However, it can be triggered by other events after an uncertain time and suddenly spread as a bursty event. E.g., a rumor (http://www.snopes.com/robert-byrd-kkk-photo/) claimed that Robert Byrd was a member of the KKK. This rumor had been circulating on Twitter for a while. As shown in Figure 7(a), almost every day there were several tweets talking about this rumor. But this rumor was triggered by a picture of Robert Byrd kissing Hillary Clinton in 2016 (http://www.snopes.com/clinton-byrd-photo-klan/) and Twitter users suddenly noticed this rumor and it spread burstily. In this work, what we are really interested in is the tweets which are posted in the hours around the bursty peak. We define the hour with the highest tweet volume as $t_{max}$ and we want to detect the rumor event as soon as possible before its burst, so we define the time of the first tweet before $t_{max}$ within 48 hours as the beginning of this rumor event, marked as $t_{0}$. And the end time of the event is defined as $t_{end}=t_{0}+48$. We show the tweet volumes in Figure 7(b) of the above rumor example.
| B |
Evaluating methodology.
For RQ1, given an event entity e, at time t, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 instances of the anticipated class, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances, respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials with each of the last 4 bins (using the history bins for training on a rolling basis) for testing; and report the results as the average of the trials. |
RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The rightmost three models in each metric are the models proposed in this work. The overall results show that the performances of these models, even though better than the baselines (for at least one of the three), vary greatly among the cases. In general, $SVM_{salience}$ performs well at the before stage of breaking events, and badly at the after stage of the same event type, whereas $SVM_{timeliness}$ gives a contradictory performance for these cases. For anticipated events, $SVM_{timeliness}$ performs well at the before and after stages, but gives a rather low performance at the during stage. For this event type, $SVM_{salience}$ generally performs worse than $SVM_{timeliness}$. Overall, $SVM_{all}$ with all features combined gives a good and stable performance, but for most cases is not better than the best-performing single-feature-set L2R model. In general, these results support our assumption that salience and timeliness should be traded off for different event types, at different event times. For feature importances, we observe regular, stable performances of same-group features across these cases. Salience features from knowledge bases tend to perform better than those from query logs for short-duration or less popular events. We leave the more in-depth analysis of this part for future work.
| RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, SVMall𝑆𝑉subscript𝑀𝑎𝑙𝑙SVM_{all}italic_S italic_V italic_M start_POSTSUBSCRIPT italic_a italic_l italic_l end_POSTSUBSCRIPT, will all features, gives a rather stable performance for both NDCG and Recall, improved the baseline, yet not significantly. Our Ensemble model, that is learned to trade-off between salience and timeliness achieves the best results for all metrics, outperforms the baseline significantly. As the testing entity queries in this experiment are at all event times and with all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of adapted QAC methods. One reason is as mentioned, QACs, even time-aware generally favor already salient queries as follows the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes like entities often encounter sparse traffic of query volumes, that also contributes to the low results.
|
Results. The baseline and the best results of our $1^{st}$-stage event-type classification are shown in Table 3-top. The accuracy for the basic majority vote is high for imbalanced classes, yet it is lower at weighted F1. Our learned model achieves a marginally better result on the F1 metric. | D
RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | SMC weights are updated based on the likelihood of the observed rewards:
$w_{t,a}^{(m)}\propto p_{a}(y_{t}\mid x_{t},\theta_{t,a}^{(m)})$ —Step (9.c) in Algorithm 1; and | The techniques used in these success stories are grounded on statistical advances on sequential decision processes and multi-armed bandits.
The MAB crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making. | we propagate forward the sequential random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$
by drawing new samples from the transition density, conditioned on resampled particles, i.e., | the fundamental operation in the proposed SMC-based MAB Algorithm 1
is to sequentially update the random measure $p_{M}(\theta_{t,a}\mid\mathcal{H}_{1:t})$ | B
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients.
For patients with 3-4 measurements per day (patients 8, 10, 11, 14, and 17), at least a part of the glucose measurements after the meals is within this range, while patient 12 has only two glucose measurements per day on average and measured glucose within 4 hours or less after a meal only 5 out of 54 times. | For time delays between carb entries and the next glucose measurements we distinguish cases where glucose was measured at most 30 minutes before logging the meal, to account for cases where multiple measurements are made for one meal – in such cases it might not make sense to predict the glucose directly after the meal.
Overall, patients measure blood glucose within 10 minutes before meals most of the time – for more than 2/3 of the meals for most patients. | In order to have a broad overview of different patients’ patterns over the one month period, we first show the figures illustrating measurements aggregated by days-in-week.
For consistency, we only consider the data recorded from 01/03/17 to 31/03/17, where the observations are most stable. | Table 2 gives an overview of the number of different measurements that are available for each patient (for patient 9, no data is available).
The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14. | Insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes occur earlier in the day.
Further, patient 12 takes approx. 3 times the average insulin dose of others in the morning. | B
We propose a new CNN architecture with modules adapted from the semantic segmentation literature to predict fixation density maps of the same image resolution as the input. Our approach is based on a large body of research regarding saliency models that leverage object-specific features and functionally replicate human behavior under free-viewing conditions. In the following sections, we describe our contributions to this challenging task.
| To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architecture, similar to the model by Pan et al. (2017), results in better approximations. Here we employed three upsampling blocks consisting of a bilinear scaling operation, which doubled the number of rows and columns, and a subsequent convolutional layer with kernel size $3\times 3$. This setup has previously been shown to prevent checkerboard artifacts in the upsampled image space in contrast to deconvolution Odena et al. (2016). Besides an increase of resolution throughout the decoder, the number of channels was halved in each block to yield 32 feature maps. Our last network layer transformed activations into a continuous saliency distribution by applying a final $3\times 3$ convolution. The outputs of all but the last linear layer were modified via rectified linear units. Figure 2 visualizes the overall architecture design as described in this section.
| Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer et al. (2014) and II Kümmerer et al. (2016) employed a pre-trained classification model to read out salient image locations from a small subset of encoding layers. This is similar to the network by Cornia et al. (2016) which utilizes the output at three stages of the hierarchy. Oyama and Yamanaka (2018) demonstrated that classification performance of pre-trained architectures strongly correlates with the accuracy of saliency predictions, highlighting the importance of object information. Related approaches also focused on the potential benefits of incorporating activation from both coarse and fine image resolutions Huang et al. (2015), and recurrent connections to capture long-range spatial dependencies in convolutional feature maps Cornia et al. (2018); Liu and Han (2018). Our model explicitly combines semantic representations at multiple spatial scales to include contextual information in the predictive process. For a more complete account of existing saliency architectures, we refer the interested reader to a comprehensive review by Borji (2018).
| Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture Simonyan and Zisserman (2014) as an image encoder by reusing the pre-trained convolutional layers to extract increasingly complex features along its hierarchy. Striding in the last two pooling layers was removed, which yields spatial representations at 1/818\nicefrac{{1}}{{8}}/ start_ARG 1 end_ARG start_ARG 8 end_ARG of their original input size. All subsequent convolutional encoding layers were then dilated at a rate of 2 by expanding their kernel, and thereby increased the receptive field to compensate for the higher resolution Yu and Koltun (2015). This modification still allowed us to initialize the model with pre-trained weights since the number of trainable parameters remained unchanged. Prior work has shown the effectiveness of this approach in the context of saliency prediction problems Cornia et al. (2018); Liu and Han (2018).
|
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which captured information at different spatial scales in parallel. Finally, the input image dimensions were restored via the decoder network. Subscripts beneath convolutional layers denote the corresponding number of feature maps. | C |
The procedure which, for each vertex $v\in V$, constructs $\alpha_{e}$ for some $e\in E$ adjacent to $v$ in $\operatorname{O}(h)$, runs $\mathcal{A}$ in $\operatorname{O}(f(|\alpha_{e}|))=\operatorname{O}(f(h))$ and checks the resulting linear arrangement in $\operatorname{O}(h)$ and returns the best linear arrangement among all $v\in V$, yields an $r(\operatorname{\textsf{opt}},h)$-approximation for MinCutwidth on multigraphs in $\operatorname{O}(n(f(h)+h))$.
∎ | In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way of how a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into graphs. The main difference is that the reduction from Section 4 turns every symbol from the alphabet into an individual vertex of the graph (thus producing a graph with $\operatorname{O}(|\Sigma|)$ vertices), while the reduction to pathwidth will use a vertex per position of the word $\alpha$, i.e., $|\alpha|$ individual vertices. In the reduction from Section 4 the information of the actual occurrences of the symbols in the word is encoded by the edges (in particular, the length $|\alpha|$ is represented by the number of edges), while in the following reduction the alphabet is encoded by connecting the vertices that correspond to positions of the same symbol to cliques in the graph (in particular, the number of edges may range between $|\alpha|$ and $|\alpha|^{2}$). We proceed with a formal definition and an example.
|
In the following, we discuss the lower and upper complexity bounds that we obtain from the reductions provided above. We first note that since Cutwidth is NP-complete, so is Loc. In particular, note that this answers one of the main questions left open in [15]. | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the locality number; furthermore, we investigate the performance of direct greedy strategies for approximating the locality number. Finally, since we consider this of high importance independent of the locality number, we provide a direct reduction from cutwidth to pathwidth in Section 6.
|
In this section, we introduce polynomial-time reductions from the problem of computing the locality number of a word to the problem of computing the cutwidth of a graph, and vice versa. This establishes a close relationship between these two problems (and their corresponding parameters), which lets us derive several upper and lower complexity bounds for Loc. We also discuss the approximation-preserving properties of our reductions, which shall be important later on. | B |
The first network is a six-layer CNN that detects the slice located within heart limits, and segments the thoracic and epicardial-paracardial masks.
The second network is a five-layer CNN that detects the pericardium line from the CT scan in cylindrical coordinates. | The literature phrase search is the combined presence of each one of the cardiology terms indicated by (*) in Table I with each one of the deep learning terms related to architecture, indicated by (+) in Table II, using Google Scholar (https://scholar.google.com), Pubmed (https://ncbi.nlm.nih.gov/pubmed/) and Scopus (https://www.scopus.com/search/form.uri?=display=basic).
Results are then curated to match the selection criteria of the review and summarized according to two main axes: neural network architecture and the type of data that was used for training/validation/testing. | First, optimal paths in a computed flow field are found and then a CNN classifier is used for removing extraneous paths in the detected centerlines.
The method was enhanced using a model-based detection of coronary specific territories and main branches to constrain the search space. | These predictions formed a vector field which was then used for evolving the contour using the Sobolev active contour framework.
Anh et al. [130] created a non-rigid segmentation method based on the distance regularized level set method that was initialized and constrained by the results of a structured inference using a DBN. | A graph was then constructed from the retinal vascular network where the nodes are defined as the vessel branches and each edge is associated with a cost that evaluates whether the two branches should have the same label.
The CNN classification was propagated through the minimum spanning tree of the graph. | B |
This demonstrates that SimPLe excels in a low-data regime, but its advantage disappears with a larger amount of data.
Such a behavior, with fast growth at the beginning of training, but lower asymptotic performance is commonly observed when comparing model-based and model-free methods (Wang et al. (2019)). As observed in Section 6.4 assigning bigger computational budget helps in 100100100100K setting. We suspect that gains would be even bigger for the settings with more samples. | The iterative process of training the model, training the policy, and collecting data is crucial for non-trivial tasks where random data collection is insufficient. In a game-by-game analysis, we quantified the number of games where the best results were obtained in later iterations of training. In some games, good policies could be learned very early. While this might have been due to the high variability of training, it does suggest the possibility of much faster training (i.e. in fewer step than 100k) with more directed exploration policies. In Figure 9 in the Appendix we present the cumulative distribution plot for the (first) point during learning when the maximum score for the run was achieved in the main training loop of Algorithm 1.
| Finally, we verified if a model obtained with SimPLe using 100K is a useful initialization for model-free PPO training.
Based on the results depicted in Figure 5 (b) we can positively answer this conjecture. Lower asymptotic performance is probably due to worse exploration. A policy pre-trained with SimPLe was meant to obtain the best performance on 100K, at which point its entropy is very low, thus hindering further PPO training. | We focused our work on learning games with 100K interaction steps with the environment. In this section we present additional results for settings with 20K, 50K, 200K, 500K and 1M interactions; see Figure 5 (a).
Our results are poor with 20K interactions. For 50K they are already almost as good as with 100K interactions. From there the results improve until 500K samples – it is also the point at which they are on par with model-free PPO. Detailed per game results can be found in Appendix F. |
The primary evaluation in our experiments studies the sample efficiency of SimPLe, in comparison with state-of-the-art model-free deep RL methods in the literature. To that end, we compare with Rainbow (Hessel et al., 2018; Castro et al., 2018), which represents the state-of-the-art Q-learning method for Atari games, and PPO (Schulman et al., 2017), a model-free policy gradient algorithm (see Appendix E for details of tuning of Rainbow and PPO). The results of the comparison are presented in Figure 3. For each game, we plot the number of time steps needed for either Rainbow or PPO to reach the same score that our method reaches after 100K interaction steps. The red line indicates 100K steps: any bar larger than this indicates a game where the model-free method required more steps. SimPLe outperforms the model-free algorithms in terms of learning speed on nearly all of the games, and in the case of a few games, does so by over an order of magnitude. For some games, it reaches the same performance that our PPO implementation reaches at 10M steps. This indicates that model-based reinforcement learning provides an effective approach to learning Atari games, at a fraction of the sample complexity. | B
Here we also refer to CNN as a neural network consisting of alternating convolutional layers, each one followed by a Rectified Linear Unit (ReLU) and a max pooling layer, with a fully connected layer at the end, while the term ‘layer’ denotes the number of convolutional layers.
|
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals. | A high level overview of these combined methods is shown in Fig. 1.
Although we choose the EEG epileptic seizure recognition dataset from the University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized to any kind of signal classification problem. | For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 samples to convert the $x_{i}$ into the time-frequency domain.
The resulting spectrogram, which represents the magnitude of the power spectral density ($V^{2}/Hz$) of $x_{i}$, was then upsampled to $178\times 178$ using bilinear pixel interpolation. | This is achieved with the use of multilayer networks, which consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data.
Although deep learning is mainly used on biomedical images, there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for diagnosis and prediction problems. | A
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measurements of soil attributes prior to robot deployment [9]. Moreover, it’s important to consider that these terramechanics models, striving to predict robot-terrain interactions, often involve substantial computational costs due to their complexity [16]. Therefore, terramechanics methods are unsuitable for use in autonomous locomotion mode transition control directly, particularly in scenarios where robots need to move at high speeds, for example in search and rescue missions. To bypass the limitations of terramechanics methods, researchers have probed into alternative strategies for accomplishing autonomous locomotion transition. For example, certain studies have utilized energy consumption as a metric for evaluating the transverse-ability of different locomotion modes in wheel/track-legged robots [8]. By scrutinizing the energy expenditure for different locomotion modes, researchers can evaluate their efficiency in navigating various terrains. Additionally, other general parameters like stability margin and motion efficiency have been examined in the quest to achieve autonomous locomotion transition [2].
|
In the literature review, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously using criteria developed based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking. The rolking mode, a combination of rolling and walking, empowered WorkPartner to navigate with enhanced agility. This feat was accomplished through the implementation of devised criteria that took into account a comprehensive analysis of energy utilization, wheel slip percentage, and the intricate dynamics between the wheels and the demanding terrain. However, it’s noteworthy that Gorilla only has walking locomotion mode and does not fit into the wheel/track-legged hybrid robot category. It is important to note that the approach introduced by WorkPartner is tailored specifically to it. The threshold values for locomotion transition criteria were established empirically through prior experimental evaluations conducted on the target terrains. However, a critical aspect that deserves emphasis is that the prevailing criteria proposed for locomotion mode transitions have primarily concentrated on the robot’s internal states, neglecting the integration of external environmental information into the decision-making process. This oversight underscores the need for future developments that incorporate a more comprehensive understanding of the external context and environmental factors, enabling robots like WorkPartner to make informed decisions based on a holistic assessment of both internal and external conditions. | In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and walking locomotion modes. Through energy consumption analyses during step negotiations of varied heights, we establish energy criterion thresholds that guide the robot’s transition from rolling to walking mode. Our simulation studies reveal that the Cricket robot can autonomously switch to the most suitable locomotion mode based on the height of the steps encountered.
| There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there’s a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it’s essential to develop decision-making frameworks that determine the best mode—either rolling or walking—based on the robot’s environmental interactions and internal states [7, 8]. In addressing the first challenge, the dynamics of rolling locomotion are well understood and are similar to those of traditional wheeled/tracked robots. However, despite extensive research on the walking dynamics of standard legged robots, focused studies on the walking patterns specific to wheel/track-legged robots are limited [9]. Transition control between these locomotion modes for wheel/track-legged robots also requires more exploration [6]. In this study, we focus on the second challenge to develop efficient decision-making algorithms for transitioning between locomotion modes. This remains a very less explored area [3], but is essential to achieve an autonomous locomotion transition in hybrid robots. Building upon our prior work, we employ two climbing gaits to ensure smooth walking locomotion for wheel/track-legged robots, particularly when navigating steps [10].
| Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step.
| A |
As argued in detail in [9], there are compelling reasons to study the advice complexity of online computation.
Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online algorithms with advice can be of practical interest in settings in which it is feasible to run multiple algorithms and output the best solution (see [20] about obtaining improved data compression algorithms by means of list update algorithms with advice); and the first complexity classes for online computation have been based on advice complexity [10]. | Notwithstanding such interesting attributes, the known advice model has certain drawbacks. The advice is always assumed to be some error-free information that may be used to encode some property often explicitly connected to the optimal solution. In many settings, one can argue that such information cannot be readily available, which implies that the resulting algorithms are often impractical.
|
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with a larger number of advice bits. The objective is thus to identify the exact trade-offs between the size of the advice and the performance of the algorithm. This is meant to provide a smooth transition between the purely online world (nothing is known about the input) and the purely “offline” world (everything is known about the input). |
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size $k$, let $\eta$ denote the number of erroneous bits (which may not be known to the algorithm). In this setting, the objective would be to study the power and limitations of online algorithms, i.e., from the point of view of both upper and lower bounds on the competitive ratio. A first approach towards this direction was made recently in the context of problems such as contract | It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow
information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution. Last, and perhaps more significantly, a malicious entity that takes control of the advice oracle can have a catastrophic impact. For a very simple example, consider the well-known ski rental problem: this is a simple, yet fundamental resource allocation problem, in which we have to decide ahead of time whether to rent or buy equipment without knowing the time horizon in advance. In the traditional advice model, one bit suffices to be optimal: 0 for renting throughout the horizon, 1 for buying right away. However, if this bit is wrong, then the online algorithm has an unbounded competitive ratio, i.e., it can perform extremely badly. In contrast, an online algorithm that does not use advice at all has competitive ratio at most 2, i.e., its output can be at most twice as costly as the optimal one. | A
Another (more elaborated) policy could have taken into account how fast the positive value grows (the slope) in relation with the negative one, and if a given threshold was exceeded, classify subjects as depressed —in such case our subject could have been classified as depressed, for instance, after reading his/her 92nd writing.
Note that we could also combine multiple policies as we will see in Section 5. | This brief subsection describes the training process, which is trivial. Only a dictionary of term-frequency pairs is needed for each category.
Then, during training, dictionaries are updated as new documents are processed —i.e. unseen terms are added and frequencies of already seen terms are updated. | Otherwise, it can be omitted since, during classification, $gv$ can be dynamically computed based on the frequencies stored in the dictionaries.
It is worth mentioning that this algorithm could be easily parallelized by following the MapReduce model as well —for instance, all training documents could be split into batches, then frequencies locally calculated within each batch, and finally, all these local frequencies summed up to obtain the total frequencies. | In the rest of this subsection, we will exemplify how the SS3 framework carries out the classification and training process and how the early classification and explainability aspects are addressed. The last subsection goes into more technical details and we will study how the local and global value of a term is actually computed. As we will see, these values are the basis of the entire classification process.
| Note that with this simple training method there is no need either to store all documents or to re-train from scratch every time a new training document is added, making the training incremental (even new categories could be dynamically added). Additionally, there is no need to compute the document-term matrix because, during classification, $gv$ can be dynamically computed based on the frequencies stored in the dictionaries —although, in case we are working in an offline fashion and to speed up classification, it is still possible to create the document-term matrix holding the $gv$ value for each term.
Finally, also note that training computation is very cheap since involves only updating term frequencies i.e only one addition operation is needed. | A |
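The dictionary-based training step described in this row can be sketched in a few lines of Python. This is a hypothetical minimal sketch rather than the SS3 authors' code: the whitespace tokenizer and the category labels are placeholder assumptions, and the global value gv is left out because the excerpt notes it can be computed later from the stored frequencies.

from collections import defaultdict

# one term-frequency dictionary per category, updated incrementally
term_freq = defaultdict(lambda: defaultdict(int))

def train(document: str, category: str) -> None:
    # A single addition per term occurrence; no document is stored and
    # unseen terms (or even new categories) are added on the fly.
    for term in document.lower().split():    # placeholder tokenizer
        term_freq[category][term] += 1

train("fatigue and insomnia reported again", "depressed")
train("great day at the beach", "control")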
We run DMSGD, DGC (w/ mfm), DGC (w/o mfm) and GMC respectively to solve the optimization problem: $\min_{\mathbf{w}\in\mathbb{R}^{d}}F(\mathbf{w})$. The momentum coefficient $\beta$ is set as 0.9 and the learning rate is set as 0.005. We use top-$s$ as the sparsification compressor.
| Table 2 and Figure 4 show the performance under non-IID data distribution. We can find that GMC can achieve much better test accuracy and faster convergence speed compared to other methods. Furthermore, we can find that the momentum factor masking trick will severely impair the performance of DGC under non-IID data distribution.
| Figure 2(b), 2(c) and 2(d) show the distances to the global optimal point when using different $s$ for the case when $d=20$. We can find that, compared with the local momentum methods, the global momentum method GMC converges faster and more stably.
| process. As for global momentum, the momentum term $-(\mathbf{w}_{t}-\mathbf{w}_{t-1})/\eta$ contains global information from all the workers. Since we are optimizing the objective function $F(\mathbf{w})$, $\mathbf{w}_{t}-\mathbf{w}_{t-1}$ denotes the descent direction of $F(\mathbf{w})$ with high probability in the next iteration, which will help the parameter to converge to the global optimal point.
If no sparse communication is adopted, DGC (w/o mfm) will degenerate to DMSGD, and DGC (w/ mfm) will degenerate to DSGD. | We can find that after a sufficient number of iterations, the parameter in DGC (w/o mfm) can only oscillate within a relatively large neighborhood of the optimal point. Compared with DGC (w/o mfm), the parameter in GMC converges closer to the optimal point and then remains stable.
Figure 2(a) shows the distances to the global optimal point during the optimization process. We can find that although the momentum factor masking trick can make the convergence trajectory appear more stable, it also slows down the convergence. | D |
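Two ingredients named in this row, the top-s sparsification compressor and the global momentum term -(w_t - w_{t-1})/eta, can be illustrated with numpy. The sketch below is an illustrative assumption of how they could look in isolation, not the paper's implementation; the distributed update loop is omitted.

import numpy as np

def top_s(vec: np.ndarray, s: int) -> np.ndarray:
    # Keep only the s entries with the largest magnitude, zero out the rest.
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -s)[-s:]
    out[idx] = vec[idx]
    return out

def global_momentum(w_t: np.ndarray, w_prev: np.ndarray, eta: float) -> np.ndarray:
    # -(w_t - w_{t-1}) / eta is built from the shared model parameters, so it
    # carries information aggregated from all workers' past updates.
    return -(w_t - w_prev) / eta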
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals in total.
| During supervised learning the weights of the kernels are frozen and a one layer fully connected network (FNN) is stacked on top of the reconstruction output of the SANs.
The FNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function. | We use one signal from each of 15 signal datasets from Physionet listed in the first column of Table I.
Each signal consists of 12000 samples which in turn is split in 12 signals of 1000 samples each, to create the training (6 signals), validation (2 signals) and test datasets (4 signals). | We then split the 11500 signals into 76%, 12% and 12% (8740, 1380, 1380 signals) as training, validation and test data respectively and normalize in the range [0,1] using the global max and min.
For the SANs, we used two kernels $q=2$ with a varying size in the range of [8,15] and trained for 5 epochs with a batch size of 64. | The first two fully connected layers are followed by a ReLU while the last one produces the predictions.
The CNN is trained for an additional 5 epochs with the same batch size and model selection procedure as with SANs and categorical cross-entropy as the loss function. | C
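The 76%/12%/12% split and the global min-max normalization described in this row can be sketched as follows; the shuffling seed and the synthetic placeholder data are assumptions, since the excerpt does not specify them.

import numpy as np

def split_and_normalize(signals: np.ndarray, seed: int = 0):
    # 76/12/12 split followed by global min-max scaling to [0, 1].
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(signals))
    n_train, n_val = int(0.76 * len(signals)), int(0.12 * len(signals))
    train = signals[idx[:n_train]]
    val = signals[idx[n_train:n_train + n_val]]
    test = signals[idx[n_train + n_val:]]
    lo, hi = signals.min(), signals.max()    # global min / max over the dataset
    return (train - lo) / (hi - lo), (val - lo) / (hi - lo), (test - lo) / (hi - lo)

# e.g. 11500 EEG segments of 178 samples each -> 8740 / 1380 / 1380
data = np.random.randn(11500, 178)
tr, va, te = split_and_normalize(data)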
A new algorithm which can learn from previous experiences is required, and the algorithm with faster learning speed is more desirable. Existing algorithms’ learning method is learning by prediction. It means that UAV knows current strategies with corresponding payoff and it can randomly select another strategy and calculate its payoff. And then UAV compares two payoffs. If the payoff of new strategy is larger, the current strategy will be replaced by the new strategy; if the current payoff strategy is large, it will remain in the current strategy. However, under highly dynamic scenarios, complicate network conditions make UAVs hard to calculate their expected payoffs but only learn from previous experiences [11]. In this situation, UAV only can learn from previous experience. UAV merely knows the current and the last strategy with the corresponding payoff, and it can only learn from these. In this case, if the two strategies are different, UAV chooses the strategy with the larger payoff as the current strategy in the next iteration; if the two strategies are the same, a new strategy is randomly selected as the current strategy of the next iteration. To sum up, the difference between the existing and required algorithms is that the existing algorithm calculates payoff for a new strategy (equivalent to prediction) and chooses it based on prediction; while required algorithm can be available when UAV only knows strategy’s payoff if strategy has been selected, and then UAV can decide whether to stick to the current strategy or return to the past strategy by comparing their payoffs. Therefore, an algorithm which can learn from previous experiences is required.
| A new algorithm which can learn from previous experiences is required, and the algorithm with faster learning speed is more desirable. Existing algorithms’ learning method is learning by prediction. It means that UAV knows current strategies with corresponding payoff and it can randomly select another strategy and calculate its payoff. And then UAV compares two payoffs. If the payoff of new strategy is larger, the current strategy will be replaced by the new strategy; if the current payoff strategy is large, it will remain in the current strategy. However, under highly dynamic scenarios, complicate network conditions make UAVs hard to calculate their expected payoffs but only learn from previous experiences [11]. In this situation, UAV only can learn from previous experience. UAV merely knows the current and the last strategy with the corresponding payoff, and it can only learn from these. In this case, if the two strategies are different, UAV chooses the strategy with the larger payoff as the current strategy in the next iteration; if the two strategies are the same, a new strategy is randomly selected as the current strategy of the next iteration. To sum up, the difference between the existing and required algorithms is that the existing algorithm calculates payoff for a new strategy (equivalent to prediction) and chooses it based on prediction; while required algorithm can be available when UAV only knows strategy’s payoff if strategy has been selected, and then UAV can decide whether to stick to the current strategy or return to the past strategy by comparing their payoffs. Therefore, an algorithm which can learn from previous experiences is required.
|
Compared with other algorithms, novel algorithm SPBLLA has more advantages in learning rate. Various algorithms have been employed in the UAV networks in search of the optimal channel selection [31][29], such as stochastic learning algorithm [30]. The most widely seen algorithm–LLA is an ideal method for NE approaching [9][32]. The BLLA has been employed by [33], which is modified from LLA to update strategies in each iteration to converge to the NE. However, only a single agent is allowed to alter strategies in one iteration. In large-scale scenarios, more iterations are required, which makes BLLA inefficient. It is obvious that more UAVs altering strategies in one iteration would be more efficient. To achieve it, the works in [34] and [35] have provided a novel synchronous algorithm. However, there exist superabundant restrictions that make the algorithm impractical in most scenarios. Compared with the formers, SPBLLA has fewer constraints and can achieve synchronous operation, which can significantly improve the computational efficiency. |
The learning rate of the extant algorithm is also not desirable [13]. Recently, a new fast algorithm called binary log-linear learning algorithm (BLLA) has been proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on the current game state, and then another UAV changes strategy in the next iteration based on the new game state. It means that UAVs are not permitted to update strategies at the same time. Besides, to determine which UAV is to update its strategy, the coordinating process will occupy plenty of channel capacity and require more time between two iterations [15]. If the algorithm can learn synchronously, more than one UAV can update strategies based on the current game state in one iteration. Thus, the algorithm can be more efficient. To sum up, synchronous update algorithms which can learn from previous experiences are desirable, but only little research has investigated them. | In the literature, most works search for a PSNE by using the Binary Log-linear Learning Algorithm (BLLA). However, there are limitations to this algorithm. In BLLA, each UAV can calculate and predict its utility for any $s_{i}\in S_{i}$ in the complete strategy set. In UAV ad-hoc networks, however, UAVs have access only to the constrained strategy set and the corresponding utilities of the last two decision periods. Thus, conventional BLLA is no longer suitable for the scenario considered here, so we propose a revised algorithm based on BLLA, called the Payoff-based Binary Log-linear Learning Algorithm (PBLLA), to resolve this issue.
| C |
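The algorithms discussed in this row revolve around (binary) log-linear strategy revision. The sketch below shows the standard binary log-linear choice rule such methods build on: the revising agent switches to the trial strategy with a Boltzmann probability determined by the two payoffs. The temperature value and function names are illustrative assumptions, and this is not the exact SPBLLA/PBLLA update.

import math, random

def binary_log_linear_switch(payoff_current: float, payoff_trial: float,
                             tau: float = 0.1) -> bool:
    # Probability of switching follows the log-linear (Boltzmann) rule, so
    # better strategies are favoured but worse ones are still picked sometimes.
    p_switch = 1.0 / (1.0 + math.exp((payoff_current - payoff_trial) / tau))
    return random.random() < p_switch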
resistivity, $\eta[\mathrm{m}^{2}/\mathrm{s}]=\eta^{\prime}/\mu_{0}$ is magnetic
diffusivity, and the adiabatic index $\gamma=\frac{5}{3}$ is the | $\mathbf{v}|_{\Gamma}=\mathbf{0},\ \mathbf{q}_{i\perp}|_{\Gamma}=\mathbf{q}_{e\perp}|_{\Gamma}=\mathbf{0},\ (\nabla_{\perp}\psi)|_{\Gamma}=0$ and $(\nabla_{\perp}f)|_{\Gamma}=0$, | are standard. The boundary conditions and closure for this model (namely,
definitions of thermal fluxes $\mathbf{q}_{i}$ and $\mathbf{q}_{e}$, | With reference to the definitions of the discrete forms for the thermal
flux $\widehat{\overline{\nabla}}\cdot\widehat{\mathbf{q}}_{\alpha}$ | terms $\widehat{\mathbf{q}}_{i}$ and $\widehat{\mathbf{q}}_{e}$.
For the viscous terms, we use, for simplicity, the unmagnetised version | B
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12.
Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$ and
$g_{3}$. | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible.
Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow B$ as there are no counter-examples in the resulting closure system. | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use
$\leq,\land,\lor$ instead of $\leq_{R},\land_{R},\lor_{R}$, respectively. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | D
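This row reasons about counter-examples to dependencies such as A -> B. As a point of reference, the classical two-valued check on a plain relation is sketched below (a pair of tuples that agree on the left-hand side but differ on the right-hand side is a counter-example); the lattice-valued satisfaction relation from the excerpt is deliberately not modelled, and the toy relation is an assumption.

from itertools import combinations

def counter_examples(relation, lhs, rhs):
    # Return all pairs of tuples that agree on every attribute in `lhs`
    # but differ on some attribute in `rhs`, i.e. violate lhs -> rhs.
    return [
        (t1, t2)
        for t1, t2 in combinations(relation, 2)
        if all(t1[a] == t2[a] for a in lhs) and any(t1[b] != t2[b] for b in rhs)
    ]

r = [{"A": 1, "B": 2, "C": 3}, {"A": 1, "B": 5, "C": 3}]
print(counter_examples(r, ["A"], ["B"]))   # one violating pair: A -> B fails
print(counter_examples(r, ["B"], ["A"]))   # empty list: B -> A holds on r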
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and widely differing predictions on the learning trajectory across episodes, because of unseen state transitions and the finite size of the experience replay buffer. This type of variance leads to convergence to sub-optimal policies and severely hurts DQN performance. The second source of variance, Target Approximation Error, is the error coming from the inexact minimization of the DQN parameters. Many of the proposed extensions focus on minimizing the variance that comes from AGE by finding methods to optimize the learning trajectory, or from TAE by using methods like averaging to obtain more exact DQN parameters. Dropout methods can combine these two solutions, which minimize different sources of variance: they can achieve a consistent learning trajectory and more exact DQN parameters through the averaging that comes inherently with Dropout.
The results in Figure 3 show that using DQN with different Dropout methods result in better-preforming policies and less variability as the reduced standard deviation between the variants indicate to. In table 1, Wilcoxon Sign-Ranked test was used to analyze the effect of Variance before applying Dropout (DQN) and after applying Dropout (Dropout methods DQN). There was a statistically significant decrease in Variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore one of the Dropout methods outperformed DQN score. | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to express the effect of Dropout on Variance and the learned policies quality. For the Overestimation phenomena, we ran Dropout-DQN and DQN on a Gridworld environment to express the effect of Dropout because in such environment the optimal value function can be computed exactly.
|
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We have evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control Environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady state policy.
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments, this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning namely the supervised learning and the unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow that could lead to solve the task on the environment [1][2][3]. Most of Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy in the form of sequence of actions that can be followed by the agent to solve the task. The process of choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning for that reason many algorithms have been introduced over a course of time, e.g, Q-learning[4], SARSA[5], and policy gradient methods[6]. These methods use linear function approximation techniques to estimate action value, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. The recent advances in deep neural networks helped to develop artificial agent named deep Q-network(DQN)[8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and the huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as ”overestimation phenomenon,” was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values. They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance. | B |
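A minimal PyTorch sketch of a Q-network with dropout layers of the kind discussed in this row is given below. The layer sizes, the dropout rate, and the use of standard rather than Gaussian or variational dropout are illustrative assumptions (CartPole has 4 state dimensions and 2 actions); the replay buffer and training loop are omitted.

import torch
import torch.nn as nn

class DropoutQNetwork(nn.Module):
    # Fully connected Q-network; dropout is active during training and,
    # by averaging several stochastic forward passes, can also give a
    # lower-variance Q-value estimate at evaluation time.
    def __init__(self, state_dim: int = 4, n_actions: int = 2, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

q = DropoutQNetwork()
print(q(torch.zeros(1, 4)))   # Q-values for the two CartPole actions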
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, and was robust to changes in lighting, skin tone, and pose. He et al. (2019) trained a U-Net (Ronneberger et al., 2015)-like encoder-decoder architecture to simultaneously segment thoracic organs from CT scans and perform global slice classification. Ke et al. (2019) trained a multi-task U-Net architecture to solve three tasks - separating wrongly connected objects, detecting class instances, and pixelwise labeling for each object, and evaluated it on a food microscopy image dataset. Other multi-task models have also been proposed for segmentation and classification for detecting manipulated faces in images and video (Nguyen et al., 2019) and diagnosis of breast biopsy images (Mehta et al., 2018) and mammograms (Le et al., 2019). | V-Net (Milletari et al., 2016) and FCN (Long et al., 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images. Qin et al. (2018) proposed a dilated convolution base block to preserve more detailed attention in 3D medical image segmentation. Similarly, other papers (Lian et al., 2018; Isensee et al., 2019; Li et al., 2019b; Ni et al., 2019; Oktay et al., 2018; Schlemper et al., 2019) have leveraged the attention concept into medical image segmentation as well.
| Bischke et al. (2019) proposed a cascaded multi-task loss to preserve boundary information from segmentation masks for segmenting building footprints and achieved state-of-the-art performance on an aerial image labeling task. He et al. (2017) extended Faster R-CNN (Ren et al., 2015) by adding a new branch to predict the object mask along with a class label and a bounding box, and the proposed model was called Mask R-CNN. Mask R-CNN has been used extensively for multi-task segmentation models for a wide range of application areas (Abdulla, 2017), such as adding sports fields to OpenStreetMap (Remillard, 2018), detection and segmentation for surgery robots (SUYEgit, 2018), understanding climate change patterns from aerial imagery of the Arctic (Zhang et al., 2018a), converting satellite imagery to maps (Mohanty, 2018), detecting image forgeries (Wang et al., 2019d), and segmenting tree canopy (Zhao et al., 2018).
| Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al., 2019), detecting and segmenting nuclei from histological and microscopic images (Johnson, 2018; Vuola et al., 2019; Wang et al., 2019a, b), detecting and segmenting oral diseases (Anantharaman et al., 2018), segmenting neuropathic ulcers (Gamage et al., 2019), and labeling and segmenting ribs in chest X-rays (Wessel et al., 2019). Mask R-CNN has also been extended to work with 3D volumes and has been evaluated on lung nodule detection and segmentation from CT scans and breast lesion detection and categorization on diffusion MR images (Jaeger et al., 2018; Kopelowitz and Engelhard, 2019).
|
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, and was robust to changes in lighting, skin tone, and pose. He et al. (2019) trained a U-Net (Ronneberger et al., 2015)-like encoder-decoder architecture to simultaneously segment thoracic organs from CT scans and perform global slice classification. Ke et al. (2019) trained a multi-task U-Net architecture to solve three tasks - separating wrongly connected objects, detecting class instances, and pixelwise labeling for each object, and evaluated it on a food microscopy image dataset. Other multi-task models have also been proposed for segmentation and classification for detecting manipulated faces in images and video (Nguyen et al., 2019) and diagnosis of breast biopsy images (Mehta et al., 2018) and mammograms (Le et al., 2019). | C |
The red line indicates the number of edges that remain in $\bar{\mathbf{A}}$ after sparsification.
It is possible to see that for small increments of $\epsilon$ the spectral distance increases linearly, while the number of edges in the graph drops exponentially. | We notice that the coarsened graphs are pre-computed before training the GNN.
Therefore, the computational time of graph coarsening is much lower compared to training the GNN for several epochs, since each MP operation in the GNN has a cost $\mathcal{O}(N^{2})$. |
The proposed spectral algorithm is not designed to handle very dense graphs; an intuitive explanation is that $\mathbf{v}^{s}_{\text{max}}$ can be interpreted as the graph signal with the highest frequency, since its sign oscillates as much as possible when transiting from a node to one of its neighbors. | The GNN is then trained to fit its node representations to these pre-determined structures.
Pre-computing graph coarsening not only makes the training much faster by avoiding to perform graph reduction at every forward pass, but it also provides a strong inductive bias that prevents degenerate solutions, such as entire graphs collapsing into a single node or entire graph sections being discarded. | The reason can be once again attributed to the low information content of the individual node features and in the sparsity of the graph signal (most node features are 0), which makes it difficult for the feature-based pooling methods to infer global properties of the graph by looking at local sub-structures.
| C |
The following analyses are shown exemplarily on the Soybean dataset. This dataset has 35 features and 19 classes. First, we analyze the generated data with a fixed number of decision trees, i.e., the number of sampled decision trees in $RF_{\text{sub}}$. The resulting confidence distributions for different numbers of decision trees are shown in the first column of Figure 5.
When adopting the data sample to only a few decision trees, the confidence of the generated samples is lower (around 0.2 for 5 samples per class). | Probability distribution of the predicted confidences for different data generation settings on Soybean with 5 (top) and 50 samples per class (bottom). Generating data with different numbers of decision trees is visualized in the left column. Additionally, a comparison between random sampling (red), NRFI uniform (orange), and NRFI dynamic (green) is shown in the right column.
By optimizing the decision tree sampling, NRFI dynamic automatically balances the confidences and generates the most diverse and evenly distributed data. | This shows that neural random forest imitation is able to generate significantly better data samples based on the knowledge in the random forest.
NRFI dynamic improves the performance by automatically optimizing the decision tree sampling and generating the largest variation in the data. | The analysis shows that random data samples and uniform sampling have a bias to generate data samples that are classified with high confidence.
NRFI dynamic automatically balances the number of decision trees and achieves an evenly distributed data distribution, i.e., generates the most diverse data samples.
NRFI uniform and NRFI dynamic sample the number of decision trees for each data point uniformly, respectively, optimized via automatic confidence distribution (see Section 4.1.4). The confidence distributions for both sampling modes are visualized in the second column of Figure 5. Additionally, sampling random data points without generating data from the random forest is included as a baseline. | D |
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy optimization remain rather limited from both computational and statistical perspectives. More specifically, from the computational perspective, it remains unclear until recently whether policy optimization converges to the globally optimal policy in a finite number of iterations, even given infinite data. Meanwhile, from the statistical perspective, it still remains unclear how to attain the globally optimal policy with a finite regret or sample complexity.
|
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same T𝑇\sqrt{T}square-root start_ARG italic_T end_ARG-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions. |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice. | for any function f:𝒮→ℝ:𝑓→𝒮ℝf:{\mathcal{S}}\rightarrow\mathbb{R}italic_f : caligraphic_S → blackboard_R. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), where the reward function is fixed across all the episodes.
| Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a d2H3Tsuperscript𝑑2superscript𝐻3𝑇\sqrt{d^{2}H^{3}T}square-root start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_H start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT italic_T end_ARG-regret up to logarithmic factors.
| B |
Both training and inference have extremely high demands on their targeted platform and certain hardware requirements can be the deciding factor whether an application can be realized.
This section briefly introduces the most important hardware for deep learning and discusses their potentials and limitations. | Jacob et al. (2018) proposed a quantization scheme that accurately approximates floating-point operations using only integer arithmetic to speed up computation.
During training, their forward pass simulates the quantization step to keep the performance of the quantized DNN close to the performance of using single-precision. | In Huang and Wang (2018), the outputs of different structures are scaled with individual trainable scaling factors.
By using a sparsity enforcing $\ell^{1}$-norm regularizer on these scaling factors, the outputs of the corresponding structures are driven to zero and can be pruned. | Quantized DNNs with 1-bit weights and activations are the worst performing models, which is due to the severe implications of extreme quantization on prediction performance.
As can be seen, however, the overall performance of the quantized models increases considerably when the bit width of activations is increased to 2–3 bit whilst the bit width of the weights is kept low. | CPUs were originally designed to optimize single-thread performance in order to execute an individual computation within the shortest possible latency.
Unfortunately, single-thread performance is stagnating since the end of Dennard scaling (Dennard et al., 1974), and now performance scaling usually requires parallelization. | D |
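A short PyTorch sketch of the sparsity-enforcing ℓ1 penalty on trainable scaling factors mentioned in this row, in the spirit of the cited approach: the penalty drives structure-wise scales toward zero so the corresponding structures can be pruned. The module layout and the value of the regularization weight are illustrative assumptions.

import torch
import torch.nn as nn

class ScaledBlock(nn.Module):
    # A block whose output is multiplied by a trainable scalar; once the scalar
    # is driven to (near) zero by the L1 penalty, the block can be pruned.
    def __init__(self, dim: int = 64):
        super().__init__()
        self.body = nn.Linear(dim, dim)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.scale * torch.relu(self.body(x))

def l1_scale_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    # Sum of |scale| over all ScaledBlocks, added to the task loss.
    return lam * sum(m.scale.abs().sum() for m in model.modules()
                     if isinstance(m, ScaledBlock))

model = nn.Sequential(ScaledBlock(), ScaledBlock())
loss = model(torch.zeros(1, 64)).pow(2).mean() + l1_scale_penalty(model)
loss.backward()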
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori not obvious. We address this question in Section 3 by introducing a suitable category structure.
| One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding it into a larger space and then considering the persistent homology of the filtration obtained by considering the resulting system of nested neighborhoods of the original space inside this ambient space. These neighborhoods, being also metric (and thus topological) spaces, permit giving a short proof of the Künneth formula for Vietoris-Rips persistent homology.
|
In Section 4, we show that the Vietoris-Rips filtration can be (categorically) seen as a special case of persistent homology obtained through metric embeddings via the isomorphism theorem (Theorem 1). In this section, we also establish the stability of the filtration obtained via metric embeddings. | In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori not obvious. We address this question in Section 3 by introducing a suitable category structure.
| In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence barcode by the hyperbolicity of the underlying space.
| B |
The difference line plot (d), on the other hand, builds on the standard plot by highlighting the differences between the selection and the global average, shown as positive and negative values around the 0 value of the y-axis.
It provides a clearer overall picture of the difference in preservation among all the shown scales, but compromises the precision and simplicity of interpretation of the y-axis (where the exact percentage of Neighborhood Preservation was previously shown). The difference bar chart (b) is a combination of the designs (a) and (d). Similar to (d), the interpretation of the y-values might be misleading. | After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections. However, the hyper-parameter exploration does not necessarily stop here. The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main view (Figure 1(e)), and the projection can be switched at any time if the user is not satisfied with the initial choice. We also provide the mechanism for a selection-based ranking of the representatives. During the exploration of the projection, if the user finds a certain pattern of interest (i.e., cluster, shape, etc.), one possible question might be whether this specific pattern is better visible or better represented in another projection. After selecting these points, the list of top representatives can be ranked again to contain the projections with the best quality regarding the selection (as opposed to the best global quality, which is the default). The way this “selection-based quality” is computed is by adapting the global quality measures we used, taking advantage of the fact that they all work by aggregating a measure-specific quality computation over all the points of the projection. In the case of the selection-based quality, we aggregate only over the selected points to reach the final value of the quality measure, which is then used to re-rank the representatives.
| Adaptive PCP vs. PCP
Although it is not uncommon to find tools that use PCP views together with DR-based scatterplots (e.g., iPCA [69]) with various schemes for re-ordering and prioritizing the axes (e.g., [70, 71]), the arrangement and presentation of these PCP’s are usually static in order to reflect attributes of the data (or the projection) as a whole. In our proposed Adaptive PCP, the arrangement of the axes is dynamically updated every time the user makes a new selection (using a local PCA); this way, the PCP only shows, at any given time, the most relevant dimensions for the user’s current focus, which may differ significantly from the global aspects of the projection as a whole. Coupled with the Dimension Correlation view, this provides a highly-customized toolset for inspecting and interpreting the meanings of specific neighborhoods of data points. | Apart from the adaptive filtering and re-ordering of the axes, we maintained a rather standard visual presentation of the PCP plot, to make sure it is as easy and natural as possible for users to inspect it. The colors reflect the labels of the data with the same colors as in the overview (Subsection 4.2), when available, and the rest of the instances of the data—which are not selected—are shown with high transparency. Each axis maps the entire range of each dimension, from bottom to top. A simple example is given in Figure 4(b), where we can see that the dimensions of the selected points roughly appear at the intersection between two species, versicolor (brown) and virginica (orange).
|
Adaptive Parallel Coordinates Plot Our first proposal to support the task of interpreting patterns in a t-SNE projection is an Adaptive PCP [59], as shown in Figure 1(k). It highlights the dimensions of the points selected with the lasso tool, using a maximum of 8 axes at any time, to avoid clutter. The shown axes (and their order) are, however, not fixed, as is the usual case. Instead, they are adapted to the selection in the following way. First, a Principal Component Analysis (PCA) [1] is performed using only the selected points, but with all dimensions. That yields two results: (1) a set of eigenvectors that represent a new base that best explains the variance of the selected points, and (2) a set of eigenvalues that represent how much variance is explained by each eigenvector. Simulating a reduction of the dimensions of the selected points to 1-Dimensional space using PCA, we pick the eigenvector with the largest eigenvalue, i.e., the most representative one. This $N$-D vector can be seen as sequence $w$ of $N$ weights, one per original dimension, where the value of $w_{j}$ indicates the importance of dimension $j$ in explaining the variance of the user-selected subset of the data. Finally, we sort $w$ in descending order, then pick the dimensions that correspond to the first (up to) 8 values of the sorted $w$. These are the (up to) 8 dimensions shown in the PCP axes, in the same descending order (from left to right). | B
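The axis-selection step just described maps to a few lines of numpy: run PCA on the selected points only, take the eigenvector with the largest eigenvalue, and rank the original dimensions by its weights, keeping at most 8. The sketch below is an illustrative reimplementation under those assumptions, not the authors' code.

import numpy as np

def adaptive_pcp_axes(selected: np.ndarray, max_axes: int = 8) -> np.ndarray:
    # PCA on the selection only: eigendecompose the covariance matrix and take
    # the leading eigenvector as the weight vector w described above.
    cov = np.cov(selected, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    w = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
    order = np.argsort(w)[::-1]              # sort the weights in descending order
    return order[:max_axes]                  # indices of the dimensions to show

selection = np.random.rand(40, 20)           # 40 selected points, 20 dimensions
print(adaptive_pcp_axes(selection))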
The correct design of a bio-inspired algorithm involves the execution of a series of steps in a conscientious and organized manner, both at the time of algorithm development and during subsequent experimentation and application to real-world optimization problems. In [5], a complete tutorial on the design of new bio-inspired algorithms is presented, and in this work, we make a brief introduction to the phases that are necessary for quality research.
|
In such work, an analysis is conducted from a critical yet constructive point of view, aiming to correct misconceptions and bad methodological habits. Each phase of the analysis includes the prescription of application guidelines and recommendations intended for adoption by the community. These guidelines are intended to promote actionable metaheuristics designed and tested in a principled manner, to achieve valuable research results and ensure their practical use in real-world applications. | The correct design of a bio-inspired algorithm involves the execution of a series of steps in a conscientious and organized manner, both at the time of algorithm development and during subsequent experimentation and application to real-world optimization problems. In [5], a complete tutorial on the design of new bio-inspired algorithms is presented, and in this work, we make a brief introduction to the phases that are necessary for quality research.
| The rest of this paper is organized as follows. In Section 2, we examine previous surveys, taxonomies, and reviews of nature- and bio-inspired algorithms reported so far in the literature. Section 3 delves into the taxonomy based on the inspiration of the algorithms. In Section 4, we present and populate the taxonomy based on the behavior of the algorithm. In Section 5, we analyze similarities and differences found between both taxonomies, ultimately identifying the most influential algorithms in our reviewed papers. In Section 6, we report several lessons learned and recommendations as the result of the previous analysis. In addition, as novel contributions of this version over its preceding ones, Section 7 provides an extended critical analysis of the state of the art in the field, highlighting the aforementioned good, the bad, and the ugly in the metaheuristic landscape [2]. In Section 8, we discuss future directions in bio-inspired optimization algorithms, and prescribe potential solutions and analysis toward ensuring good practices and correct experimental procedures with these algorithms. Section 9 shows studies and guidelines for good practices, together with recent studies including taxonomies, overviews, and general approaches related to metaheuristics. Finally, in Section 10, we summarize our current main conclusions and reflections on the field, with builds upon a five-year reflection and literature study.
| As we have mentioned in the introduction, we revisit a triple study of evolutionary and bio-inspired algorithms from a triple perspective, where we stand and what’s next from a perspective published in 2020, but still valid in terms of the need to address important problems and challenges in optimization for EAs and population-based optimization models, a prescription of methodological guidelines for comparing bio-inspired optimization algorithms, and a tutorial on the design, experimentation, and application of metaheuristic algorithms to real-world optimization problems.
| A |
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity, $k$. With the new graph, we re-train the GAE. These steps are repeated until the convergence. |
In recent years, GCNs have been studied a lot to extend neural networks to graph type data. How to design a graph convolution operator is a key issue and has attracted a mass of attention. Most of them can be classified into 2 categories, spectral methods [24] and spatial methods[25]. | However, the existing methods are limited to graph type data while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods.
In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend graph auto-encoder into common scenarios. The main contributions are listed as follows: | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information so that they are available for the non-Euclidean type data, which is not provided by $k$-means. Therefore, they are widely used in practice. Due to the success of deep learning, how to combine neural networks and traditional clustering models has been studied a lot [7, 8, 9]. In particular, CNN-based clustering models have been extensively investigated [10, 11, 12]. However, the convolution operation may be unavailable on other kinds of datasets, e.g., text, social network, signal, data mining, etc. | (1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspires us to devise a novel architecture for decoders.
(2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it. | B |
Since the Open Resolver and the Spoofer Projects are the only two infrastructures providing vantage points for measuring spoofing - their importance is immense as they facilitated many research works analysing the spoofability of networks based on the datasets collected by these infrastructures. Nevertheless, the studies using these infrastructure, e.g., (Huz et al., 2015; Luckie et al., 2019), point out the problems with the representativeness of the collected data of the larger Internet. Both projects (the Spoofer and the Open Resolver) acknowledged the need to increase the coverage of the measurements, as well as the challenges for obtaining better coverage and stable vantage points.
| Limitations of filtering studies. The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 2018), or by identifying spoofed packets using offline analysis of traffic, e.g., (Lone et al., 2017; Luckie et al., 2019). The need to install agents on networks or the ability to obtain traces only from some networks limits the studies to non-uniform coverage of the Internet. Therefore it is not clear how representative these statistics are.
Unfortunately, this limitation to a small set of networks creates a bias in the assessments of the overall number of spoofable networks. The extrapolation from the small set of networks to the entire Internet typically result in assessment that at least 30% of the Internet networks do not filter spoofed packets (Luckie et al., 2019; Man et al., 2020). As we show, the number of spoofable networks is above 72% which is significantly higher than what was previous believed. |
Network Traces. To overcome the dependency on vantage points for running the tests, researchers explored alternatives for inferring filtering of spoofed packets. A recent work used loops in traceroute to infer ability to send packets from spoofed IP addresses, (Lone et al., 2017). |
(Lichtblau et al., 2017) developed a methodology to passively detect spoofed packets in traces recorded at a European IXP connecting 700 networks. The limitation of this approach is that it requires cooperation of the IXP to perform the analysis over the traffic and applies only to networks connected to the IXP. Allowing to identify spoofing that defacto took place, the approach proposed in (Lichtblau et al., 2017) misses out on the networks which do not enforce filtering but which did not receive packets from spoofed IP addresses (at least during the time frame in which the traces were collected). | Vantage Points. Measurement of networks which do not perform egress filtering of packets with spoofed IP addresses was first presented by the Spoofer Project in 2005 (Beverly and Bauer, 2005). The idea behind the Spoofer Project is to craft packets with spoofed IP addresses and check receipt thereof on the vantage points operated by the volunteers, i.e., participants who run a “spoofer” software provided by the authors. Based on the data collected by the Spoofer Project many reports were published providing statistics on the deployment of egress filtering in the Internet (Beverly et al., 2009, 2013; Lone et al., 2018; Luckie et al., 2019); we list the statistics in Table 1.
| B |
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a broad range of odors. The arrangement is called an artificial nose since it resembles the multiplicity of sensory neuron types in the nasal epithelium. However, while metal oxide-based sensors are economical and flexible, they are unstable over time. Changes to the response properties of sensors make it difficult to detect and identify odors in the long term, and sensors have to be recalibrated to compensate [3]. Recalibration requires collecting and labeling new samples, which is costly because a skilled operator is needed, and challenging because the experimental conditions need to be controlled precisely [3]. Recalibrating a model with unlabeled examples, called semisupervised learning, is a possible alternative but difficult to establish in practice. |
More specifically, natural odors consist of complex and variable mixtures of molecules present at variable concentrations [4]. Sensor variance arises from environmental dynamics of temperature, humidity, and background chemicals, all contributing to concept drift [5], as well as sensor drift arising from modification of the sensing device. The hard problem of olfaction in nature calls for the learning of new odor assocations [6]. In an attempt to capture much of this complexity, Vergara et al. [7] developed a publicly available benchmark dataset demonstrating sensor drift over a period of 36 months. This dataset offers a controlled testbed for sensor drift mitigation algorithms and thus defines the scope of this paper. | An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
| While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this paper introduced an approach based on continual adaptation. A recurrent neural network uses a sequence of previously seen gas recordings to form a representation of the current state of the sensors. It then modulates the skill of odor recognition with this context, allowing the system to adapt to sensor drift. Context models can thus play a useful role in lifelong adaptation to changing environments in artificial systems.
|
The purpose of this study was to demonstrate that explicit representation of context can allow a classification system to adapt to sensor drift. Several gas classifier models were placed in a setting with progressive sensor drift and were evaluated on samples from future contexts. This task reflects the practical goal to deploy an artificial nose in a dynamic environment without recalibration. | B |
Now we can define the tables $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ that our algorithm uses.
Recall that for Algorithm 1, we used | $A^{(2)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B \in \mathcal{B}_i^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$. |
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$. |
$A[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B \in \mathcal{B}_i$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$. | $A^{(1)}[i,B] :=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B \in \mathcal{B}_i^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_0 \cup \dots \cup P_{i-1} \cup B$ realizing the matching $M$. | C
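The three table variants above share one shape: a map from an index i and a boundary set B to a pruned collection of (matching, length) pairs. The following is a minimal sketch of how such a table could be stored and updated; the function and variable names are hypothetical illustrations of the data structure, not the paper's Algorithm 1.

```python
from collections import defaultdict

def insert_candidate(table, i, B, matching, length):
    """Record a candidate path cover of P_0 ∪ ... ∪ P_{i-1} ∪ B:
    for each perfect matching on B, keep only the minimum total length seen."""
    key = (i, frozenset(B))
    best = table[key]
    if matching not in best or length < best[matching]:
        best[matching] = length

# toy usage: two candidates realizing the same matching on B = {u, v}
A = defaultdict(dict)
insert_candidate(A, 2, {"u", "v"}, frozenset({("u", "v")}), 7.5)
insert_candidate(A, 2, {"u", "v"}, frozenset({("u", "v")}), 6.0)  # replaces 7.5
print(A[(2, frozenset({"u", "v"}))])  # {frozenset({('u', 'v')}): 6.0}
```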
Let $S$ be a (completely) self-similar semigroup and let $T$ be a finite or free semigroup. Then $S \star T$ is (completely) self-similar. If furthermore $S$ is a (complete) automaton semigroup, then so is $S \star T$.
| While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there is a different construction for the free product $S \star T$ of two self-similar or automaton semigroups without the requirement of a homomorphism from one to the other, and it is also possible that there is a pair of self-similar (or automaton) semigroups such that $S \star T$ is not self-similar (or not an automaton semigroup). In this case, however, no homomorphism $S \to T$ or $T \to S$ can exist. Thus, to make progress in either direction (towards a better construction or towards a counter-example), we need to look at pairs $S, T$ of self-similar (or even automaton) semigroups without a homomorphism from one to the other. However, it turns out that finding such a pair is not easy. In particular, neither $S$ nor $T$ may contain an idempotent. Thus, we have to consider idempotent-free semigroups here. We will show, however, that we cannot find a pair of such semigroups in the class of finitely generated simple semigroups. More precisely, using results by Jones on idempotent-free semigroups [11], we show that finitely generated simple (or $0$-simple) idempotent-free semigroups are not residually finite (Theorem 21) and, thus, not self-similar (and, in particular, not automaton semigroups; 22). We then conclude the paper with an example (the authors would like to thank Emanuele Rodaro for his help in finding this example) of a finitely generated residually finite semigroup (23) which has no homomorphism to its opposite semigroup (25). While this comes close to the sought pair $S, T$, it is not clear whether the given semigroup is self-similar (26).
| By Corollaries 10 and 11, we have to look into idempotent-free automaton semigroups without length functions in order to find a pair of self-similar (or automaton) semigroups not satisfying the hypothesis of Theorem 6 (or 8), which would be required in order to either relax the hypothesis even further (possibly with a new construction), or provide a pair $S, T$ of self-similar semigroups such that $S \star T$ is not self-similar. It turns out that we can reduce the class of potential candidates even further: we will show next that no finitely generated simple or $0$-simple idempotent-free semigroup is self-similar (and, thus, that no simple or $0$-simple idempotent-free semigroup is an automaton semigroup).
| from one to the other, then their free product $S \star T$ is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups (note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for $S$ and for $T$ are finite; therefore, these constructions also work for self-similar semigroups, although this is not explicitly stated there) (Theorem 6) but observe that the constructed generating automaton for $S \star T$ is finite (and/or complete) if this was the case for the original two automata generating $S$ and $T$. The existence of a homomorphism from $S$ to $T$ (or vice-versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or being an automaton semigroup; Theorem 14). This is noteworthy because – as mentioned above – the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata, see [8, Theorem 19] and [20, Theorem 1.2.1.4]). | The construction used to prove Theorem 6 can also be used to obtain results which are not immediate corollaries of the theorem (or its corollary for automaton semigroups in 8). As an example, we prove in the following theorem that it is possible to adjoin a free generator to every self-similar semigroup without losing the self-similarity property and that the analogous statement for automaton semigroups holds as well. The version for automaton semigroups does not follow directly from 8, as the free monogenic semigroup is not a complete automaton semigroup [4, Proposition 4.3] or even a (partial) automaton semigroup (see [8, Theorem 18] or [20, Theorem 1.2.1.4]).
| B |
SCR divides the region proposals into influential and non-influential regions and penalizes the model if: 1) $\mathcal{S}(a_{gt})$ of a non-influential region is higher than that of an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers. | Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
| As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the performance on VQAv2 drops continuously during the course of the training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.
| We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to 1–100% of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance occurring when our loss is applied to 1% of the training instances. These results support our claims that it is possible to improve performance without actually performing visual grounding.
| We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating the performance on VQA-CPv2’s train split (Sec. 4.5) and the behavior on a dataset without changing priors (Sec. 2). We present a new metric to assess visual grounding in Sec. 4.7 and describe our regularization method in Sec. 5.
| D |
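The SCR criterion quoted in this row is essentially a pair of ranking constraints on sensitivity scores. Below is a hedged sketch of such a two-part hinge penalty with made-up scalar inputs and an assumed margin parameter; it is not the authors' exact loss.

```python
def scr_style_penalty(s_influential, s_noninfluential,
                      s_best_correct, s_best_wrong, margin=0.0):
    """Penalize (1) a non-influential region whose sensitivity for the ground-truth
    answer exceeds an influential region's, and (2) the most influential region
    being more sensitive to a wrong answer than to the correct one."""
    term1 = max(0.0, s_noninfluential - s_influential + margin)
    term2 = max(0.0, s_best_wrong - s_best_correct + margin)
    return term1 + term2

# toy numbers: the non-influential region is (wrongly) more sensitive
print(scr_style_penalty(s_influential=0.2, s_noninfluential=0.5,
                        s_best_correct=0.7, s_best_wrong=0.4))  # 0.3
```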
A privacy policy is a legal document that an organisation uses to disclose how they collect, analyze, share, and protect users’ personal information. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users, and laws such as General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) place specific expectations upon privacy policies. However, although many internet users have concerns about their privacy Madden (2017), most fail to understand privacy policies Meiselwitz (2013). Studies show that privacy policies require a considerable investment in time to read Obar and Oeldorf-Hirsch (2018) and estimate that it would require approximately 200 hours to read all the privacy policies that an average person would come across every year McDonald and Cranor (2008).
|
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application from the Google Play Store, legal experts were recruited to identify relevant evidence within respective privacy policies that answered the question asked by the crowdworkers. The goal of the question answering task is to identify a set sentences in the privacy policy that has information relevant to the question. Ravichander et al. (2019) divided the corpus into 1,350 questions for training and validation and 400 questions for testing where each question in the test set is annotated by at least three experts. We fine-tuned PrivBERT on the training set as a binary classification task on each question-answer sentence pair to identify if the sentence is evidence for the question or not. We trained the model with a dropout of 0.2 and a learning rate of 3e-6 with the positive and negative classes weighted in the ratio 8:1 during training. We used sentence level F1 as the evaluation metric as described by Ravichander et al. (2019), where precision and recall are calculated by measuring the overlap between the predicted sentences and gold standard sentences. |
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categories. The corpus was used to train models to extract opt-out choices from privacy policies (Sathyendra et al., 2016), to automatically identify policies on websites and find compliance issues (Story et al., 2019), and to classify privacy practices and answer privacy related non-factoid questions (Harkous et al., 2018). | Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague words and sentences in privacy policies and studied automatic vagueness detection. Sathyendra et al. (2017) presented a dataset and developed a model to automatically identify and label opt-out choices offered in privacy policies. Similarly, Zimmeck et al. (2019) released a set of over 400k URLs to Android app privacy policy pages collected by crawling the Google Play store. Amos et al. (2020) collected privacy policies from around 130,000 websites from over two decades and analysed the evolution of the online privacy landscape. Finally, Nokhbeh Zaeem and Barber (2021) collected a corpus of around 100k privacy policies using the domains from DMOZ, a website which maintained categories of websites on the internet.
|
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert annotated corpora of a few hundred or a few thousand privacy policies Wilson et al. (2016); Zimmeck et al. (2019); Ramanath et al. (2014), but issues of accuracy, scalability and generalization remain. More importantly, annotations in the privacy policy domain are expensive. Privacy policies are difficult to understand and many tasks such as privacy practice classification (Wilson et al., 2016), privacy question answering (Ravichander et al., 2019), vague sentence detection (Lebanoff and Liu, 2018), and detection of compliance issues (Zimmeck et al., 2019) require skilled legal experts to annotate the dataset. In contrast, approaches involving large amounts of unlabeled privacy policies remain relatively unexplored. | D |
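The PrivacyQA fine-tuning recipe quoted in this row (question/sentence pairs, binary evidence labels, positive class weighted 8:1) reduces to a weighted cross-entropy over pair logits. A minimal PyTorch sketch follows, with random tensors standing in for encoder outputs; it is not the actual PrivBERT training code.

```python
import torch
import torch.nn as nn

# Logits for a mini-batch of (question, sentence) pairs produced by the encoder:
# class 0 = "not evidence", class 1 = "evidence". Random tensors stand in here.
logits = torch.randn(16, 2, requires_grad=True)
labels = torch.randint(0, 2, (16,))

# Weight the positive (evidence) class 8x, mirroring the 8:1 ratio in the text.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 8.0]))
loss = criterion(logits, labels)
loss.backward()          # in a real run, an optimizer step would follow
print(float(loss))
```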
We answered that the per-class performance is also a very important component, and exploratory visualization can assist in the selection process, as seen in Figure 2(b and c.1).
The expert understood the importance of visualization in that situation, compared to not using it. | Interpretability and explainability is another challenge (mentioned by E3) in complicated ensemble methods, which is not necessarily always a problem depending on the data and the tasks. However, the utilization of user-selected weights for multiple validation metrics is one way towards interpreting and trusting the results of stacking ensembles. This is an advantage identified by E2. In the first use case we presented to him, he noted that: “if you are interested in the fairness of the results, you could show with the history preservation view of the system how you reached to these predictions without removing the age or sex features, consequently, not leading to discrimination against patients, for example”.
The visual exploration of stacking methods that use multiple layers [28] mentioned by E1 is set as another future work goal. | Workflow. E1, E2, and E3 agreed that the workflow of StackGenVis made sense.
They all suggested that data wrangling could happen before the algorithms’ exploration, but also that it is usual to first train a few algorithms and then, based on their predictions, wrangle the data. |
Figure 4: Our feature selection view that provides three different feature selection techniques. The y-axis of the table heatmap depicts the data set’s features, and the x-axis depicts the selected models in the current stored stack. Univariate-, permutation-, and accuracy-based feature selection is available, along with any combination of them (a). (b) displays the normalized importance color legend. The per-model feature accuracy is depicted in (c), and (d) presents the user’s interaction to disable specific features to be used for all the models (only seven features are shown here). This could also happen on an individual basis for every model. | Another positive opinion from E3 was that, with a few adaptations to the performance metrics, StackGenVis could work with regression or even ranking problems.
E3 also mentioned that supporting feature generation in the feature selection phase might be helpful. Finally, E1 suggested that the circular barcharts could only show the positive or negative difference compared to the first stored stack. To avoid an asymmetric design and retain a lower complexity level for StackGenVis, we omitted his proposal for the time being, but we consider implementing both methods in the future. | D |
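One of the three feature-selection signals mentioned in the figure caption above is permutation-based importance. The sketch below shows how that signal can be computed with scikit-learn on a stock dataset; the model and data are illustrative stand-ins, not part of StackGenVis.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
# permutation importance: accuracy drop when a feature column is shuffled
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(idx, round(result.importances_mean[idx], 4))
```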
We thus have 3 cases, depending on the value of the tuple
$(p(v,[010]),\,p(v,[323]),\,p(v,[313]),\,p(v,[003]))$: | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | Then, by using the adjacency of $(v,[013])$ with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | By using the pairwise adjacency of $(v,[112])$, $(v,[003])$, and
$(v,[113])$, we can confirm that in the 3 cases, these | $\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],[112],[003],[113]\}$. | C
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/5/10 tasks for meta-training/meta-validation/meta-testing. |
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personalized dialogue dataset collected from Weibo conversations with 371/40/38 users for meta-training/meta-validation/meta-testing. Each user has 1200 utterances on average. |
In Experiment I: Text Classification, we use FewRel [Han et al., 2018] and Amazon [He and McAuley, 2016]. They are datasets for 5-way 5-shot classification, which means 5 classes are randomly sampled from the full dataset for each task, and each class has 5 samples. FewRel is a relation classification dataset with 65/5/10 tasks for meta-training/meta-validation/meta-testing. | Task similarity. In Persona and Weibo, each task is a set of dialogues for one user, so tasks are different from each other. We shuffle the samples and randomly divide tasks to construct the setting that tasks are similar to each other. For a fair comparison, each task on this setting also has 120 and 1200 utterances on average in Persona and Weibo respectively. We train and evaluate Transformer-F and MAML on this setting. (Table 2).
When tasks are similar to each other, MAML performs comparatively poorly. In Persona and Weibo, the performance of MAML is similar to that of Transformer-F, while MAML performs significantly better than Transformer-F when tasks are different. A possible explanation is that if there is no clear distinction between tasks, the meta-learning setting can be viewed as a transfer learning setting, which only has a source domain and a target domain, and fine-tuning performs well in transfer learning. So if the tasks are similar to each other, we can simply use Transformer-F rather than MAML. | In meta-learning, we have multiple tasks $T$ sampled from distribution $p(\mathcal{T})$ [Ravi and Larochelle, 2017, Andrychowicz et al., 2016, Santoro et al., 2016]. For each task $T_i$, we train a base model $f_i^{\theta}$ with parameter $\theta_i$ on its training corpus $D_i^{train}$ which only has a few samples, and evaluate the model on the testing corpus $D_i^{valid}$. We divide the tasks into meta-training, meta-validation, and meta-testing. The goal of meta-learning is that after training on meta-training, we can quickly find $f_i^{\theta}$ via fine-tuning (adaptation) with $D_i^{train}$ for each task $T_i$ in meta-testing.
| A |
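The 5-way 5-shot protocol described in this row amounts to sampling 5 classes and a handful of examples per class for every task. A minimal episode sampler follows; the toy data and the query-set size are assumptions for illustration.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, k_query=5, seed=None):
    """Sample one N-way K-shot task: `dataset` maps a class label to its examples."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for cls in classes:
        examples = rng.sample(dataset[cls], k_shot + k_query)
        support += [(x, cls) for x in examples[:k_shot]]
        query += [(x, cls) for x in examples[k_shot:]]
    return support, query

# toy data: 10 classes with 20 examples each
toy = defaultdict(list)
for c in range(10):
    toy[c] = [f"sample_{c}_{j}" for j in range(20)]
support, query = sample_episode(toy, seed=0)
print(len(support), len(query))  # 25 25
```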
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Section IV. Simulation results are given in Section V, and finally Section VI concludes this paper. | In addition, the AOAs and AODs should be tracked in the highly dynamic UAV mmWave network.
To this end, in Section IV we will further propose a novel predictive AOA/AOD tracking scheme in conjunction with tracking error treatment to address the high mobility challenge, then we integrate these operations into the codebook-based SPAS to achieve reliable beam-tracking for the considered UAV mmWave network. | Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. Recall that several efficient codebook-based beam training and tracking schemes have been proposed for conventional mmWave network with uniform ULA and UPA [22, 23]. These prior works inspire us to propose a specialized new codebook design and the corresponding codeword selection/processing strategy that can drive the CCA to achieve fast beam tracking in the highly dynamic UAV mmWave network. To this end, the properties of the CCA should be exploited in the design of the codebook, which are briefly discussed as follows.
|
A CCA-enabled UAV mmWave network is considered in this paper. Here, we first establish the DRE-covered CCA model in Section II-A. Then the system setup of the considered UAV mmWave network is described in Section II-B. Finally, the beam tracking problem for the CA-enabled UAV mmWave network is modeled in Section II-C. |
The rest of this paper is as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Section IV. Simulation results are given in Section V, and finally Section VI concludes this paper. | C |
The case of 1 color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | To conclude this section, we stress that although the 1-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The case of 1 color is characterized by a Presburger formula that just expresses the equality of the number of edges calculated from
either side of the bipartite graph. The non-trivial direction of correctness is shown via distributing edges and then merging. | After the merging, the total degree of each vertex increases by $t\delta(A_{0},B_{0})^{2}$.
We perform the “edge swapping” to get rid of the parallel edges without affecting the degree of each vertex. | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | D
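A worked equation may make the 1-color condition concrete; this is a hedged reconstruction from the prose above, for a bipartite graph with sides A and B and edge set E:

```latex
% Counting the edges of a bipartite graph (A, B) from either side:
\sum_{a \in A} \deg(a) \;=\; |E| \;=\; \sum_{b \in B} \deg(b)
```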
Deep reinforcement learning achieves phenomenal empirical successes, especially in challenging applications where an agent acts upon rich observations, e.g., images and texts. Examples include video gaming (Mnih et al., 2015), visuomotor manipulation (Levine et al., 2016), and language generation (He et al., 2015). Such empirical successes are empowered by expressive nonlinear function approximators such as neural networks, which are used to parameterize both policies (actors) and value functions (critics) (Konda and Tsitsiklis, 2000). In particular, the neural network learned from interacting with the environment induces a data-dependent feature representation, which embeds rich observations into a latent space encoding semantic structures (Hinton, 1986; Bengio, 2012; Yosinski et al., 2014; LeCun et al., 2015). In contrast, classical reinforcement learning mostly relies on a handcrafted feature representation that is fixed throughout learning (Sutton and Barto, 2018).
|
Moreover, soft Q-learning is equivalent to a variant of policy gradient (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). Hence, Proposition 6.4 also characterizes the global optimality and convergence of such a variant of policy gradient. |
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). In particular, we aim to characterize how an overparameterized two-layer neural network and its induced feature representation evolve in TD and Q-learning, especially their rate of convergence and global optimality. A fundamental obstacle, however, is that such an evolving feature representation possibly leads to the divergence of TD and Q-learning. For example, TD converges when the value function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). | In this section, we extend our analysis of TD to Q-learning and policy gradient. In §6.1, we introduce Q-learning and its mean-field limit. In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
| Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and
Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal. | B |
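For readers unfamiliar with TD under linear value-function approximation (the classical setting in which convergence is guaranteed, as discussed above), here is the textbook semi-gradient TD(0) update on synthetic data; it is only background illustration, not the paper's mean-field analysis of neural TD.

```python
import numpy as np

def td0_linear(features, rewards, next_features, alpha=0.05, gamma=0.9, epochs=50):
    """Textbook TD(0) with a linear value function V(s) = phi(s)^T w."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        for phi, r, phi_next in zip(features, rewards, next_features):
            td_error = r + gamma * phi_next @ w - phi @ w
            w += alpha * td_error * phi          # semi-gradient update
    return w

rng = np.random.default_rng(0)
phi = rng.normal(size=(100, 4))        # feature vectors of visited states
phi_next = rng.normal(size=(100, 4))   # features of successor states
r = rng.normal(size=100)               # observed rewards
print(td0_linear(phi, r, phi_next))
```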
Regarding parameter efficiency for NMT, Wu et al. (2019a) present lightweight and dynamic convolutions. Ma et al. (2021) approximate softmax attention with two nested linear attention functions. These methods are orthogonal to our work and it should be possible to combine them with our approach. |
In this paper, we replace residual connections of the Transformer with depth-wise LSTMs, to selectively manage the representation aggregation of layers benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-forward networks with the depth-wise LSTM for the Transformer. |
We use depth-wise LSTM rather than a depth-wise multi-head attention network Dou et al. (2018) with which we can build the NMT model solely based on the attention mechanism for two reasons: 1) we have to compute the stacking of Transformer layers sequentially as in sequential token-by-token decoding, and compared to the use of depth-wise LSTM of $O(n)$ complexity, depth-wise multi-head attention networks suffer from $O(n^{2})$ complexity and they cannot be parallelized at the depth level. 2) the attention mechanism linearly combines representations with attention weights. Thus, it lacks the ability to provide the non-linearity compared to the LSTM, which we suggest is important. | We suggest that selectively aggregating different layer representations of the Transformer may improve the performance, and propose to use depth-wise LSTMs to connect stacked (sub-) layers of Transformers. We show how Transformer layer normalization and feed-forward sub-layers can be absorbed by depth-wise LSTMs, while connecting pure Transformer attention layers by depth-wise LSTMs (for Transformer encoder and decoder blocks), replacing residual connections.
| Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the newly introduced LSTM unit, which only introduces one LSTM unit per layer, and the parameters of the LSTM can be shared across layers.
| A |
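A toy sketch of the general idea in this row, with an LSTM cell applied across the depth dimension in place of residual connections and feed-forward sub-layers. The class name, the choice to share one cell across layers, and the normalization placement are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DepthwiseLSTMTransformerEncoder(nn.Module):
    """Toy encoder: a shared LSTM cell, applied across the depth dimension,
    replaces residual connections and feed-forward sub-layers."""
    def __init__(self, d_model=256, nhead=4, num_layers=6):
        super().__init__()
        self.attn_layers = nn.ModuleList(
            nn.MultiheadAttention(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        # one LSTM cell shared across layers; its hidden state is the layer output
        self.depth_lstm = nn.LSTMCell(d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, seq, d_model)
        b, s, d = x.shape
        h = x.reshape(b * s, d)                # depth-LSTM hidden state
        c = torch.zeros_like(h)                # depth-LSTM cell state
        for attn in self.attn_layers:
            y = self.norm(h.reshape(b, s, d))
            y, _ = attn(y, y, y)               # self-attention sub-layer
            h, c = self.depth_lstm(y.reshape(b * s, d), (h, c))
        return h.reshape(b, s, d)

x = torch.randn(2, 10, 256)
print(DepthwiseLSTMTransformerEncoder()(x).shape)   # torch.Size([2, 10, 256])
```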
define compact sets in $X$ for the topology generated by $\mathcal{L}^{\prime}$. We usually instantiate the theorem with
$X \subseteq \operatorname{Struct}(\upsigma)$, $\mathcal{L} = \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X}$ | $\mathcal{O} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X} = \llbracket\mathsf{F}\rrbracket_{X}$. |
instantiated with $\mathcal{L} = \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X}$ and $\mathcal{L}^{\prime} = \llbracket\mathsf{F}\rrbracket_{X}$ for a fragment $\mathsf{F}$ of $\mathsf{FO}[\upsigma]$.
| and $\mathcal{L}^{\prime} = \llbracket\mathsf{F}\rrbracket_{X}$ where $\mathsf{F}$ is a
fragment of $\mathsf{FO}[\upsigma]$. | that $\llbracket\mathsf{F}\rrbracket_{X}$ is a base of
$\langle \uptau_{\leq} \cap \llbracket\mathsf{FO}[\upsigma]\rrbracket_{X} \rangle$. | C
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimated results are built in the relationship to the distortion reprojection error. As shown in Fig. 5, we visualize the scatter diagram of two learning representations using 1,000 test distorted images. For the distortion parameter, its relationship to the distortion distribution is ambiguous and the similar parameter errors are related to quite different reprojection errors, which indicates that optimizing the parameter error would confuse the learning of neural networks. In contrast, the ordinal distortion error displays an evident positive correlation to the distortion distribution error, and thus the learning model gains intuitive distortion perception. Therefore, the proposed representation helps to decrease the error of distortion estimation. |
Relationship to Distortion Distribution: We first emphasize the relationship between two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortions separately, and the errors of estimated results are built in the relationship to the distortion reprojection error. As shown in Fig. 5, we visualize the scatter diagram of two learning representations using 1,000 test distorted images. For the distortion parameter, its relationship to the distortion distribution is ambiguous and the similar parameter errors are related to quite different reprojection errors, which indicates that optimizing the parameter error would confuse the learning of neural networks. In contrast, the ordinal distortion error displays an evident positive correlation to the distortion distribution error, and thus the learning model gains intuitive distortion perception. Therefore, the proposed representation helps to decrease the error of distortion estimation. |
To exhibit the performance fairly, we employ three common network architectures VGG16, ResNet50, and InceptionV3 as the backbone networks of the learning model. The proposed MDLD metric is used to express the distortion estimation error due to its unique and fair measurement for distortion distribution. To be specific, we visualize the error and convergence epoch when estimating two representations under the same number of training data in Fig. 6, which is sampled with 20%, 40%, 60%, 80%, and 100% from the entire training data. Besides, the training and validation loss curves of two learning representations are shown in Fig. 7, in which the distortion parameters are processed without (top) and with (middle) the normalization of magnitude. From these learning evaluations, we can observe: | (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed out earlier, the proposed ordinal distortion is explicit to the image feature and is observable from a distorted image; thus it boosts the neural networks’ learning ability. On the other hand, the performance of the distortion parameter estimation drops as the amount of training data decreases. In contrast, our ordinal distortion estimation performs more consistently due to the homogeneity of the learning representation.
| Distortion Learning Evaluation: Then, we introduce three key elements for evaluating the learning representation: training data, convergence, and error. Supposed that the settings such as the network architecture and optimizer are the same, a better learning representation can be described from the less the training data is, the faster convergence and the lower error are. For example, a student is able to achieve the highest test grade (the lowest error) with the fastest learning speed and the least homework, meaning that he grasps the best learning strategy compared with other students. In terms of the above description, we evaluate the distortion parameter and ordinal distortion as shown in Fig. 6 and Fig. 7.
| D |
Please note that EXTRAP-SGD has two learning rates for tuning and needs to compute two mini-batch gradients in each iteration. EXTRAP-SGD requires more time than other methods to tune hyperparameters and train models.
Similarly, CLARS needs to compute extra mini-batch gradients to estimate the layer-wise learning rate for each iteration, which requires more training time and computing resources. | First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8 \geq 128$, we will use gradient accumulation [28]
with the batch size being 128. We train the model with 160 epochs (i.e., pass through the dataset 160 times). The cosine annealing learning rate [24] (without restarts) is adopted for the five methods. In the $m$-th epoch, the learning rate is $\eta_{m} = \eta_{0} \cdot 0.5\,(1+\cos(m\pi/160))$, $m = 0, 1, \ldots, 159$. |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and a polynomial learning rate strategy. | Hence, with the same number of gradient computations, SNGM can adopt a larger batch size than MSGD to converge to the $\epsilon$-stationary point.
Empirical results on deep learning further verify that SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods. | We don’t use training tricks such as warm-up [7]. We adopt the linear learning rate decay strategy as default in the Transformers framework.
Table 5 shows the test accuracy results of the methods with different batch sizes. SNGM achieves the best performance for almost all batch size settings. | B |
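The cosine annealing schedule quoted in this row is easy to state in code; the initial rate below is an illustrative assumption, since the passage does not fix it.

```python
import math

def cosine_annealing_lr(epoch, eta0, total_epochs=160):
    """Learning rate for the m-th epoch: eta_m = eta0 * 0.5 * (1 + cos(m*pi/160))."""
    return eta0 * 0.5 * (1.0 + math.cos(epoch * math.pi / total_epochs))

eta0 = 0.1  # illustrative initial rate, not specified in the text
for m in (0, 40, 80, 120, 159):
    print(m, round(cosine_annealing_lr(m, eta0), 4))
```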
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unknown but we have the ability to sample or simulate from the distribution. To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic $k$-center variant, where points arrive independently and each point only needs to get covered with some given probability. 2S-Sup is the natural two-stage counterpart of the well-known Knapsack-Supplier problem, which has a well-known 3-approximation [14].
|
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
To continue this example, there may be further constraints on $F_I$, irrespective of the stage-II decisions, which cannot be directly reduced to the budget $B$. For instance, there may be a limited number of personnel available prior to the disease outbreak, assuming that facility $i$ requires $f_i$ people to keep it operational during the waiting period. (These additional stage-I constraints have not been previously considered in the two-stage stochastic regime.) | For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here,
$\mathcal{F}$ and $\mathcal{C}$ correspond to such locations and the population affected by the outbreak, and needing services, respectively. | We are given a set of clients $\mathcal{C}$ and a set of facilities $\mathcal{F}$, in a metric space with a distance function $d$. We let $n=|\mathcal{C}|$ and $m=|\mathcal{F}|$. Our paradigm unfolds in two stages. First, in stage-I, each $i\in\mathcal{F}$ has a cost $c^{I}_{i}$, but at that time we do not know which clients from $\mathcal{C}$ will need service. At this point, we can proactively open a set of facilities $F_I$. After committing to $F_I$, a scenario $A$ is sampled from some underlying distribution $\mathcal{D}$, which specifies some subset of clients $\mathcal{C}^{A}$ needing service; each $i\in\mathcal{F}$ now has a cost $c^{A}_{i}$ (which may vary across scenarios $A\in\mathcal{D}$). When this scenario $A$ arrives we can augment the solution by opening some additional stage-II facilities $F_A$.
| There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$ as small as possible. Indeed, one of the major contributions of this work is to show that effective bounds on $|\mathcal{S}|$ are possible for sophisticated approximation algorithms using complex LP rounding. | B
In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed.
In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost | Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d.
In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments depend on the states of the local optimizers. The random graph sequences in [12]-[15] are i.i.d. with connected and undirected mean graphs. In addition, additive communication noises are considered in [14]-[15]. |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows. | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15],
and additive and multiplicative communication noises may co-exist in communication links ([21]). | However, a variety of random factors may co-exist in practical environments.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the distributed optimization with multiple uncertain factors ([11]-[15]). | C |
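A toy sketch of one synchronous round of a distributed stochastic subgradient method of the kind discussed in this row: each node averages its neighbors' states and then steps along its own noisy subgradient. The mixing matrix, noise model, and step sizes are simplifying assumptions; the paper's setting with random digraphs and multiplicative communication noises is more general.

```python
import numpy as np

def distributed_subgradient_step(x, subgrads, W, alpha, noise_std=0.01, rng=None):
    """One synchronous step: mix neighbor states with row-stochastic W, then move
    along each node's own noisy local subgradient."""
    rng = rng or np.random.default_rng()
    noisy = subgrads + rng.normal(scale=noise_std, size=subgrads.shape)
    return W @ x - alpha * noisy

# 4 nodes minimizing sum_i |x - t_i|; each local subgradient is sign(x_i - t_i)
targets = np.array([1.0, 2.0, 3.0, 4.0])
W = np.full((4, 4), 0.25)           # complete graph with uniform weights
x = np.zeros(4)
for k in range(1, 501):
    g = np.sign(x - targets)
    x = distributed_subgradient_step(x, g, W, alpha=1.0 / k)
print(x)   # all entries close to a common value between 2 and 3
```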
Typically, the attributes in microdata can be divided into three categories: (1) Explicit-Identifier (EI, also known as Personally-Identifiable Information), such as name and social security number, which can uniquely or mostly identify the record owner; (2) Quasi-Identifier (QI), such as age, gender and zip code, which can be used to re-identify the record owner when taken together; and (3) Sensitive Attribute (SA), such as salary and disease, which contains the confidential information of individuals. According to the work of Sweeney [31], even with all EI attributes being removed, the record owners can still be re-identified by matching the combination of QI values. | Specifically, there are three main steps in the proposed approach. First, MuCo partitions the tuples into groups and assigns similar records into the same group as far as possible. Second, the random output tables, which control the distribution of random output values within each group, are calculated to make similar tuples to cover for each other at the minimal cost. Finally, MuCo generates anonymized microdata by replacing the original QI values with random values according to the random output tables. For instance, for the original table in Figure 1(a), MuCo partitions the records into four groups and calculates random output tables on age as shown in Figure 3. In the random output tables, the rows correspond to the records, and the columns correspond to the ranges of age values. Every entry value denotes the probability that the record carries the column value in the anonymized table. For example, we can observe that Helen is covered with Daphne and Dean, and her age outputs 28 with a probability of 0.7129 and outputs 29 with a probability of 0.2871. Then, MuCo generates an anonymized table in which the original QI values are replaced by the random values according to the random output tables.
| Generalization [8, 26] is one of the most widely used privacy-preserving techniques. It transforms the values on QI attributes into general forms, and the tuples with equally generalized values constitute an equivalence group. In this way, records in the same equivalence group are indistinguishable. $k$-Anonymity [31, 28] ensures that the probability of identity disclosure is at most $1/k$. For instance, Figure 1(b) is a generalized table of Figure 1(a) that complies with 2-anonymity, and the adversary has to acquire at least two different tuples by matching the age value of any person.
|
Although the generalization for $k$-anonymity provides enough protection for identities, it is vulnerable to attribute disclosure [23]. For instance, in Figure 1(b), the sensitive values in the third equivalence group are both “pneumonia”. Therefore, an adversary can infer the disease value of Dave by matching his age without re-identifying his exact record. To prevent such disclosure, many effective principles have been proposed, such as $l$-diversity [23] and $t$-closeness [19]. For example, Figure 1(c) is the generalized version of Figure 1(a) complying with 5-diversity, such that the proportion of each sensitive value inside the equivalence group is no more than $1/5$. Thus, for any individual, the adversary has to obtain at least five different sensitive values by matching the age value. | However, despite protecting against both identity disclosure and attribute disclosure, the information loss of the generalized table cannot be ignored. On the one hand, the generalized values are determined by only the maximum and the minimum QI values in equivalence groups, so that the equivalence groups only preserve the ranges of QI values and the number of records. Consequently, the distributions of QI values are hardly maintained and the information utility is reduced significantly. For instance, as shown in Figure 2, the red polyline and the magenta polyline represent the distributions on age in Figure 1(a) and Figure 1(c), respectively. We can observe that the original distribution is barely preserved in the generalized table. On the other hand, the partition of equivalence groups also increases the information loss of the anonymized table because the results of query statements are always the matching equivalence groups rather than the specific matching tuples. For example, if we want to select the tuples whose age values are more than 30 in Figure 1(c), both equivalence groups are considered as the results.
| B |
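The definitions quoted in this row can be checked mechanically on a table: group rows by quasi-identifier values, then test group sizes (k-anonymity) and the number of distinct sensitive values per group (l-diversity). The records below are toy values, not the paper's Figure 1.

```python
from collections import defaultdict

def check_k_anonymity_l_diversity(records, qi_cols, sa_col, k, l):
    """Group records (dicts) by their quasi-identifier values and verify that
    every equivalence group has >= k rows and >= l distinct sensitive values."""
    groups = defaultdict(list)
    for row in records:
        groups[tuple(row[c] for c in qi_cols)].append(row[sa_col])
    k_ok = all(len(g) >= k for g in groups.values())
    l_ok = all(len(set(g)) >= l for g in groups.values())
    return k_ok, l_ok

table = [
    {"age": "2x", "zip": "479**", "disease": "flu"},
    {"age": "2x", "zip": "479**", "disease": "pneumonia"},
    {"age": "3x", "zip": "476**", "disease": "pneumonia"},
    {"age": "3x", "zip": "476**", "disease": "pneumonia"},
]
print(check_k_anonymity_l_diversity(table, ["age", "zip"], "disease", k=2, l=2))
# (True, False): 2-anonymous, but the second group is not 2-diverse
```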
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from original 14 to 26 for coarse prediction head, and from 14 to 24 for fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our splitted validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020).
In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission. | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with Res2NetR101 backbone yields 74.58 mAP on validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
| Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRend Kirillov et al. (2020). Most of these detectors focus on overall performance on public datasets like COCO, which contains much smaller instances than 3D-FUTURE, while paying less attention to large-object segmentation. As illustrated in Figure 1, the size distribution of bounding boxes in 3D-FUTURE and COCO indicates that the former contains much larger objects while the latter is dominated by smaller instances. Thus, the prominent methods used in COCO, like MaskRCNN He et al. (2017) and HTC, may generate blurry contours for large instances. Their mask heads output segmentation from a limited small feature size (e.g., $14\times 14$), which is dramatically insufficient to represent large objects. All of these motivate us to segment large instances in a fine-grained and high-quality manner.
SOLOv2 builds an efficient single-shot framework with strong performance and dynamically generates predictions with a much larger mask size (e.g., 1/4 of the input size) than HTC. PointRend iteratively renders the output mask over adaptively sampled uncertain points in a coarse-to-fine fashion, which is naturally suitable for generating smooth and fine-grained instance boundaries. In our extensive experiments on HTC, SOLOv2 and PointRend, PointRend succeeds in producing finer mask boundaries and outperforms the other methods by a large margin. Our step-by-step modifications adopted on PointRend finally achieve state-of-the-art performance on the 3D-FUTURE dataset, yielding 79.2 mAP and 77.38 mAP on the validation and test sets respectively. The final submission is an ensemble of 5 PointRend models with slightly different settings, reaching 1st place in this competition. |
We implement PointRend using MMDetection Chen et al. (2019b) and adopt the modifications and tricks mentioned in Section 3.3. Both X101-64x4d and Res2Net101 Gao et al. (2019) are used as our backbones, pretrained on ImageNet only. SGD with momentum 0.9 and weight decay 1e-4 is adopted. The initial learning rate is set to 0.01 for Res2Net101 and 0.02 for X101-64x4d by default and decayed by a factor of 0.1 at epoch 32. During training, the batch size is 8 (one image per GPU) and all BN statistics are frozen. Mixed-precision training is used to reduce GPU memory consumption. The input images are randomly resized to $n\times n$, where $n$ is uniformly sampled from the range $[1200,1400]$. All models are trained for 44 epochs. For inference, images are resized to $1400\times 1400$ and horizontal flip is used. | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from the default 28 to 70 during inference, we gain another 1.1 mAP at no extra training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as a large backbone and it boosts performance by 6 mAP over ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from the original 14 to 26 for the coarse prediction head, and from 14 to 24 for the fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOv2 according to our offline observations. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and lower memory consumption compared to HTC, the input resolution can be further increased from the range [800,1000] to [1200,1400] during multi-scale training. The P6 level of FPN is also added for both the coarse prediction head and the fine-grained point head, which finally yields 74.3 mAP on our split validation set. Other tricks we tried on PointRend give little improvement, including the MaskScoring head, GC Block and DoubleHead Wu et al. (2020).
In the following, we refer to the model in the last row (74.3 mAP) of Table 2 as the PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on the validation and testing sets respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large sizes on the validation set. We believe that PointRend’s iterative rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend models as ensemble candidates for the final submission. | C
$$I(f)<1,\ \ \mbox{and}\ \ H(|\hat{f}|^{2})>\frac{n}{n+1}\log n.$$
| For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved |
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known. | ($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ sums up to $1$ and thus this is the usual definition of entropy of this probability distribution.
| C |
$$\bm{w}_{h}^{k}=\arg\min_{\bm{w}}\sum_{l=\tau}^{k-1}\left[r_{h,l}(s_{h}^{l},a_{h}^{l})+\max_{a\in\mathcal{A}}Q_{h+1}^{k-1}(s_{h+1}^{l},a)-\langle\bm{\phi}(s_{h}^{l},a_{h}^{l}),\bm{w}\rangle\right]^{2}+\left\lVert\bm{w}\right\rVert_{2}.$$ |
In practice, the transition function $\mathbb{P}$ is unknown, and the state space might be so large that it is impossible for the learner to fully explore all states. If we parametrize the action-value function in a linear form as $\langle\bm{\phi}(\cdot,\cdot),\bm{w}\rangle$, it is natural to solve a regularized least-squares problem using the collected data, inspired by classical value iteration. Specifically, the update formula of $\bm{w}_{h}^{k}$ in Algorithm 1 (line 8) is the analytic solution of the following regularized least-squares problem: |
Finally, we use an epoch restart strategy to adapt to the drifting environment, which achieves near-optimal dynamic regret notwithstanding its simplicity. Specifically, we restart the estimation of $\bm{w}$ after $\frac{W}{H}$ episodes, as illustrated in the outer loop of Algorithm 1. Note that in general the epoch size $W$ can vary across epochs, but we find that a fixed length is sufficient to achieve near-optimal performance. | From Figure 1, we see that LSVI-UCB-Restart with knowledge of the global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variations. Ada-LSVI-UCB-Restart also outperforms the baselines because it also takes the nonstationarity into account by periodically updating the epoch size for restart. In addition, Ada-LSVI-UCB-Restart has a huge gain compared to LSVI-UCB-Unknown, which agrees with our theoretical analysis. This suggests that Ada-LSVI-UCB-Restart works well when the knowledge of global variation is unavailable. Our proposed algorithms not only perform systematic exploration, but also adapt to the environment change.
|
One might be skeptical since simply applying the least-squares method to solve for $\bm{w}$ does not take the distribution drift in $\mathbb{P}$ and $r$ into account and hence may lead to non-trivial estimation error. However, we show that the estimation error can gracefully adapt to the nonstationarity, and it suffices to restart the estimation periodically to achieve good dynamic regret. | D
Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et al., 2017). The usage of fake news ranges from self-serving purposes like clickbait for moneymaking (Geçkil et al., 2018) to agendas on a national scale like political manipulation (Allcott and Gentzkow, 2017) and terrorism (Fang, 2021). With the rapid and extensive adoption of social platforms, fake news has come to be more closely integrated with daily life, resulting in rising social costs due to people making poorly justified and unwarranted choices based on inaccurate knowledge (Duffy et al., 2020). This has spurred CSCW research on areas like attitudes towards news (Wang and Mark, 2013), news transmission (Liao and Shi, 2013), and forms of innovative countermeasures (Bhuiyan et al., 2018; Mitra et al., 2017), revealing the breadth of interests in this issue.
| Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Government to more directly address falsehoods that hurt the public interest. The rising attention of fake news in the local scene has motivated various research including studies on the perceptions and motivations of fake news sharing (Chen et al., 2015) and responses to fake news (Edson C Tandoc et al., 2020). Although there are parallels between these studies and ours, we want to highlight that our study explores fake news in general media instead of solely social media, examining both usage and trust. Furthermore, we investigate more broadly the attitudes and behaviors on news sharing and fake news.
| While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this to be of concern to the CSCW community. As global efforts addressing fake news take off, we aim to understand what the perceptions and practices of news sharing and fake news are in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
|
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms, and post corrections and warnings when they encounter fake news. That respondents show strong trust and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts in making transparent government decisions and the readiness of the government in addressing public concerns through online forums and dialogues (REACH, [n.d.]). There is opportunity here for the government to launch programs such as campaigns, call-to-actions and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions. | Fake news is news articles that are “either wholly false or containing deliberately misleading elements incorporated within its content or context” (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et al., 2017). The usage of fake news ranges from self-serving purposes like clickbait for moneymaking (Geçkil et al., 2018) to agendas on a national scale like political manipulation (Allcott and Gentzkow, 2017) and terrorism (Fang, 2021). With the rapid and extensive adoption of social platforms, fake news has come to be more closely integrated with daily life, resulting in rising social costs due to people making poorly justified and unwarranted choices based on inaccurate knowledge (Duffy et al., 2020). This has spurred CSCW research on areas like attitudes towards news (Wang and Mark, 2013), news transmission (Liao and Shi, 2013), and forms of innovative countermeasures (Bhuiyan et al., 2018; Mitra et al., 2017), revealing the breadth of interests in this issue.
| A |
Downloads last month: 56
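Each preview row above pairs a source passage with four candidate continuations and a single-letter answer. As a rough illustration only, the sketch below shows how such rows could be loaded and inspected with the Hugging Face `datasets` library; the repository ID and the `train` split name are placeholders, not confirmed details of this dataset.

```python
# Hypothetical usage sketch: "org/paper-continuation-mc" and the "train" split
# are placeholders -- substitute the real Hub ID (or local data files).
from datasets import load_dataset

ds = load_dataset("org/paper-continuation-mc", split="train")

for row in ds.select(range(3)):
    # Each row holds a source excerpt ("context"), four candidate
    # continuations ("A"-"D"), and the letter of the correct one ("label").
    gold = row["label"]
    print(row["context"][:100].replace("\n", " "), "...")
    print(f"gold = {gold}:", row[gold][:100].replace("\n", " "), "...")
```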