context (string, 250–4.37k chars) | A (string, 250–8.2k chars) | B (string, 250–4.23k chars) | C (string, 250–4.99k chars) | D (string, 250–3.54k chars) | label (4 classes) |
---|---|---|---|---|---|
to the weight such that a Gauss-Legendre integration for moments $x^{D+m-1}$
is engaged and the wiggly remainder of $R_n^m$ multiplied by $f(x)$ is | $R_n^m(x)=\sum_{s=0}^{(n-m)/2}(-1)^{s}\binom{\frac{n-m}{2}}{s}\binom{\frac{D}{2}+n-s-1}{\frac{n-m}{2}}x^{n-2s}.$ | that adds the results of $1+(n-m)/2$
Gaussian integrations for moments $x^{D-1+n-2s}$. The disadvantage |
Gaussian integration rules for integrals $\int_{0}^{1}x^{D-1}R_{n}^{m}(x)f(x)\,dx$ | to the weight such that a Gauss-Legendre integration for moments $x^{D+m-1}$
is engaged and the wiggly remainder of $R_n^m$ multiplied by $f(x)$ is | B |
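To make the quoted expansion concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the helper names `R_nm` and `moment_integral` and the test integrand are ours) that evaluates $\int_0^1 x^{D-1}R_n^m(x)f(x)\,dx$ by expanding $R_n^m$ into monomials and applying Gauss–Legendre quadrature mapped to $[0,1]$:

```python
import numpy as np
from scipy.special import binom

def R_nm(n, m, D, x):
    # Radial polynomial R_n^m(x) via the monomial expansion quoted above.
    s = np.arange(0, (n - m) // 2 + 1)
    coeff = (-1.0) ** s * binom((n - m) / 2, s) * binom(D / 2 + n - s - 1, (n - m) / 2)
    return sum(c * x ** (n - 2 * k) for c, k in zip(coeff, s))

def moment_integral(n, m, D, f, num_nodes=32):
    # \int_0^1 x^{D-1} R_n^m(x) f(x) dx with Gauss-Legendre nodes mapped to [0, 1].
    nodes, weights = np.polynomial.legendre.leggauss(num_nodes)
    x, w = 0.5 * (nodes + 1.0), 0.5 * weights
    return np.sum(w * x ** (D - 1) * R_nm(n, m, D, x) * f(x))

print(moment_integral(4, 2, 3, np.exp))  # example with f(x) = exp(x)
```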
In other words, our algorithm initialises $w:=g$, $u_1:=1$ and $u_2:=1$ and multiplies $w$, $u_1$ and $u_2$ by the transvections necessary to render $g=u_1wu_2$ with $w$ monomial and $u_1,u_2$ lower unitriangular.
|
For the purposes of determining the cost of Taylor’s algorithm in terms of matrix operations, namely determining the length of an MSLP for the algorithm, we assume that the field elements $-g_{ic}g_{rc}^{-1}$ in (11) (and similarly in (12)) are given to us as polynomials of degree at most $f-1$ in the primitive element $\omega$, where $q=p^{f}$ for some prime $p$. | does not yield an upper bound for the memory requirement in a theoretical analysis.
Moreover, SlotUsagePattern improves the memory usage, but the result is not necessarily optimal overall and, hence, the number of slots can still be greater than that of a carefully computed MSLP. It should also be mentioned that in some cases the number of slots can even be smaller than that of a constructed MSLP, but this cannot be predicted without a careful analysis, which would amount to an MSLP construction as in this paper. |
As for the simpler examples considered in the previous section, here to keep the presentation clear we do not write down explicit MSLP instructions, but instead determine the cost of Algorithm 3 while keeping track of the number of elements that an MSLP for this algorithm would need to keep in memory at any given time. | The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
| C |
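As an illustration of the kind of decomposition described above, the following Python sketch factors an invertible matrix over a prime field as $g=u_1wu_2$ using only row and column transvections. It is a generic Bruhat-style routine written for illustration under our own conventions, not Taylor's algorithm or its MSLP bookkeeping:

```python
import numpy as np

def monomial_decomposition(g, p):
    # Factor an invertible matrix over GF(p) as g = u1 @ w @ u2 (mod p), with w
    # monomial and u1, u2 lower unitriangular, using only row/column transvections.
    n = g.shape[0]
    w = g.copy() % p
    u1, u2 = np.eye(n, dtype=np.int64), np.eye(n, dtype=np.int64)
    for c in range(n - 1, -1, -1):                      # columns, right to left
        r = next(i for i in range(n) if w[i, c])        # topmost non-zero entry
        inv = pow(int(w[r, c]), -1, p)
        for i in range(r + 1, n):                       # clear below the pivot (row ops)
            f = int(w[i, c]) * inv % p
            if f:
                w[i, :] = (w[i, :] - f * w[r, :]) % p
                u1[:, r] = (u1[:, r] + f * u1[:, i]) % p
        for j in range(c):                              # clear the pivot row leftwards (column ops)
            f = int(w[r, j]) * inv % p
            if f:
                w[:, j] = (w[:, j] - f * w[:, c]) % p
                u2[c, :] = (u2[c, :] + f * u2[j, :]) % p
    return u1, w, u2

g = np.array([[1, 2], [3, 4]])
u1, w, u2 = monomial_decomposition(g, 7)
print((u1 @ w @ u2) % 7)    # reproduces g mod 7
```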
The key to approximate (25) is the exponential decay of $Pw$, as long as $w\in H^{1}(\mathcal{T}_{H})$ has local support. That allows replacing $P$ by a semi-local operator $P^{j}$. That works fine for low-contrast coefficients and is the subject of Section 3.2. For high-contrast coefficients however, the exponential decay rate is smaller, and to circumvent that we consider in Section 3.1 a spectral decomposition of $\tilde{\Lambda}_{h}^{f}$. |
It is essential for the performance of the method that the static condensation is done efficiently. The solutions of (22) decay exponentially fast if $w$ has local support, so instead of solving the problems in the whole domain it would be reasonable to solve them locally using patches of elements. We note that the idea of performing global static condensation goes back to the Variational Multiscale Finite Element Method–VMS [MR1660141, MR2300286]. Recently variations of the VMS
Solving (22) efficiently is crucial for the good performance of the method, since it is the only large dimensional system of (21), in the sense that its size grows with order of $h^{-d}$. | mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends only weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro elements, removing the dependence
of the contrast. | One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the method not so practical. Here in this paper, in the presence of rough coefficients, spectral techniques are employed to overcome such a hurdle, and by solving local eigenvalue problems we define a space where the exponential decay of solutions is insensitive to high-contrast coefficients. Additionally, the spectral techniques remove
macro-element corner singularities that occur in LOD methods based on | A
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and the floating-point issues of both programs.
Our implementations of Alg-K and Alg-CM differ logically in how they handle degenerate cases. |
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code length for this part is 1:7 between Alg-A and Alg-CM. |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases. | A
It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when the user gains more knowledge about the event. |
Training data for single tweet classification. Here we follow our assumption that an event might include sub-events for which relevant tweets are rumorous. To deal with this complexity, we train our single-tweet learning model only with manually selected breaking and subless (we use the term subless, for short, to indicate an event with no sub-events) events from the above dataset. In the end, we used 90 rumors and 90 news events associated with 72452 tweets in total. This results in a highly reliable, large-scale ground truth of tweets labelled as news-related and rumor-related, respectively. Note that the labeling of a tweet is inherited from the event label and thus can be considered a semi-automatic process. |
We use the same dataset described in Section 5.1. In total – after cutting off 180 events for pre-training the single-tweet model – our dataset contains 360 events, and 180 of them are labeled as rumors. Those rumors and news fall comparatively evenly into 8 different categories, namely Politics, Science, Attacks, Disaster, Art, Business, Health and Other. Note that the events in our training data are not necessarily subless, because it is natural for high-impact events (e.g., Missing MH370 or Munich shooting) to contain sub-events. Actually, we empirically found that roughly 20% of our events (mostly news) contain sub-events. As a rumor is often a long-circulating story [10], this results in a rather long time span. In this work, we develop an event identification strategy that focuses on the first 48 hours after the rumor peaks. We also extract 11,038 domains, which are contained in tweets in this 48-hour time range. |
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents; that is undesired at critical moments. We trade this off by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news event). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see the curve of the Munich shooting event is also close to the curve of average news, indicating the event is more news-related. | story descriptions we manually constructed queries to retrieve the relevant tweets for 270 rumors with high impact. Our approach to query construction mainly follows [11]. For the news event instances (non-rumor examples), we make use of the manually constructed corpus from Mcminn et al. [21], which covers 500 real-world events. In [21], tweets are retrieved via the Twitter firehose API from the 10th of October 2012 to the 7th of November 2012. The involved events are manually verified and relate to tweets with relevance judgments, which results in a high-quality corpus. From the 500 events, we select the top 230 events with the highest tweet volumes (as a criterion for event impact). Furthermore, we have added 40 other news events, which happened around the time periods of our rumors. This results in a dataset of 270 rumors and 270 news events. The dataset details are shown in Table 1. To serve our learning task, we then construct two distinct datasets for (1) single-tweet credibility and (2) rumor classification.
| B |
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\!\left(\sqrt{\frac{\log\log t}{\log t}}\right)$. Our analysis provides a more precise characterization of the iterates, and also shows the convergence is actually quadratically faster (see Section 3). However, Ji and Telgarsky go even further and provide a characterization also when the data is non-separable but $\mathbf{w}(t)$ still goes to infinity.
| In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6)
of the SVM problem (eq. 4) and the associated | The follow-up paper (Gunasekar et al., 2018) studied this same problem with exponential loss instead of squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameterization asymptotically to the maximum margin solution with unit nuclear norm. Unlike the case of squared loss, the results for exponential loss are independent of initialization and require only mild conditions on the step size. Here again, we see the asymptotic nature of exponential loss on separable data nullifying the initialization effects, thereby making the analysis simpler compared to squared loss.
| where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ components which are orthogonal to the support vectors in $\mathcal{S}_1$, and, asymptotically, have a positive angle with the other support vectors. In this section we first calculate the various convergence rates for the non-degenerate case of Theorem 2, and then write the correction in the zero-measure cases, if there is such a correction.
|
where the residual $\boldsymbol{\rho}_k(t)$ is bounded and $\hat{\mathbf{w}}_k$ is the solution of the K-class SVM: | A
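The directional convergence discussed above is easy to observe numerically. The toy script below (our own illustration; the synthetic data, learning rate and iteration counts are arbitrary choices) runs gradient descent on the exponential loss over separable data and prints the norm and the normalized direction of the iterate, whose norm keeps growing while its direction stabilizes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)          # linearly separable labels

w = np.zeros(2)
for t in range(1, 100001):
    margins = y * (X @ w)
    grad = -((y * np.exp(-margins))[:, None] * X).sum(axis=0)   # gradient of sum exp(-y x.w)
    w -= 0.01 * grad
    if t in (10, 100, 1000, 10000, 100000):
        print(t, round(float(np.linalg.norm(w)), 3), np.round(w / np.linalg.norm(w), 4))
```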
The performance of this feature group is not so convincing. The feature $P_a$ from the SpikeM model is the best one of them. The problem with these two models, which we already pointed out in Section 3.2.3, is that both models need substantial data to fit their parameters. After 24 hours, the model trained with these epidemiological features reaches 60% accuracy. In other words, before 24 hours there is no clear propagation pattern for these events. In (kwon2013prominent, ), the durations of the dataset are more than 60 days. In (jin2013epidemiological, ), they use 160 hours of tweet volume to fit the SEIZ models. Their event durations are far longer than our focused 48 hours. The $P_a$ parameter from SpikeM is the only feature that makes even a small contribution to rumor detection in our experiment. It stands for the strength of periodicity in SpikeM. (kwon2013prominent, ) add 3 more parameters $Q_a$, $Q_p$ and $Q_s$ to explain the periodicity of the external shock, but they do not produce the same effect in our experiment, because a 48-hour time period is rather too short to contain multi-peak patterns. | As shown in Table 11, CreditScore is the best feature in general. Figure 10 shows the result of models learned with the full feature set with and without CreditScore. Overall, adding CreditScore improves the performance, significantly for the first 8-10 hours. The performance of all-but-CreditScore jiggles a bit after 16-20 hours, but it is not significant. CrowdWisdom is also a good feature which can reach 75.8% accuracy as a single feature. But its performance is poor (less than 70%) in the first 32 hours, getting better over time (see Table 11). Table 11 also shows the performance of the sentiment feature (PolarityScores), which is generally low. This demonstrates the effectiveness of our curated approach over the sentiments, yet the crowd needs time to unify their views toward the event while absorbing different kinds of information.
|
The text feature set contains 16 features in total. The feature rankings are shown in Table 7. The best one is NumOfChar, which is the average number of different characters in tweets. PolarityScores is the best feature when we tested the single-tweet model, but its performance in the time-series model is not ideal. It is true that rumors contain more negative sentiment, but in an event (rumor or news) people can show their mixed views about this event (mendoza2010twitter, ; starbird2014rumors, ), like discussing or denying, so the performance of PolarityScores becomes worse over time. Text features overall are shown to be more effective than the Twitter and user feature sets. | As we can see in Figure 9, the best result on average over 48 hours is the BestSet. The second one is All features. Apart from those two, the best feature group is Text features. One reason is that the text feature set is the largest group, with 16 features in total. But if we look into each feature in the text feature group, we can see that the best and the worst features are both in this set. User features and Twitter features are stable over time at around 82%. The performances of the 3 different models (SIS, SEIZ and SpikeM) describing the propagation pattern of rumors and news are not ideal, especially within 24 hours. CrowdWisdom and CreditScore both contain only one feature, but they already have impressive results compared with the User features and Twitter features.
| The performance of user features is similar to that of the Twitter features; they are both quite stable from the first hour to the last hour. As shown in Table 9, the best feature of the user feature group over 48 hours is UserTweetsPerDays, and it is the best feature overall in the first 4 hours, but its rank decreases as time goes by. Other user-based features like UserReputationScore and UserJoinDate also have a better performance in the first few hours. That means the sources (the posters in the first few hours) of news and rumors are quite different from each other. But with more and more users joining the discussion, the bias between the two groups of users becomes smaller. After 6 hours, it seems that we can better distinguish the rumors based on the tweet contents (text features), rather than relying on the features of users.
| A |
Evaluating methodology.
For RQ1, given an event entity e at time t, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 instances of the anticipated class, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials, each using one of the last 4 bins for testing (and the history bins for training on a rolling basis), and report the results as the average of the trials. |
RQ2. Figure 4 shows the performance of the aspect ranking models for our event entities at specific times and types. The rightmost three models in each metric are the models proposed in this work. The overall results show that the performances of these models, even though better than the baselines (for at least one of the three), vary greatly among the cases. In general, $SVM_{salience}$ performs well at the before stage of breaking events, and badly at the after stage of the same event type, whereas $SVM_{timeliness}$ shows the opposite behavior in these cases. For anticipated events, $SVM_{timeliness}$ performs well at the before and after stages, but gives a rather low performance at the during stage. For this event type, $SVM_{salience}$ generally performs worse than $SVM_{timeliness}$. Overall, $SVM_{all}$ with all features combined gives a good and stable performance, but in most cases it is not better than the well-performing single-feature-set L2R model. In general, these results confirm our assumption that salience and timeliness should be traded off for different event types, at different event times. For feature importances, we observe regular, stable performances of same-group features across these cases. Salience features from knowledge bases tend to perform better than those from query logs for short-duration or less popular events. We leave the more in-depth analysis of this part for future work. |
Results. The baseline and the best results of our $1^{st}$-stage event-type classification are shown in Table 3-top. The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower on weighted F1. Our learned model achieves a marginally better result on the F1 metric. | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall, improving on the baseline, yet not significantly. Our Ensemble model, which is learned to trade off between salience and timeliness, achieves the best results for all metrics and outperforms the baseline significantly. As the testing entity queries in this experiment are at all event times and of all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of adapted QAC methods. One reason is, as mentioned, that QACs, even time-aware ones, generally favor already salient queries, following the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes like entities often encounter sparse traffic of query volumes, which also contributes to the low results.
| We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather them from the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with non-cascaded logistic regression. The results are shown in Table 3-bottom, showing that our cascaded model, with features inherited from the performance of the SVM in the previous task, substantially improves on the single model. However, the overall modest results show the difficulty of this multi-class classification task.
| B |
where
$$\begin{cases}
\Theta_{t_0:t_1,a}=\left[\theta_{t_0,a}\;\theta_{t_0+1,a}\;\cdots\;\theta_{t_1-1,a}\;\theta_{t_1,a}\right]\in\mathbb{R}^{d\times(t_1-t_0)},\\
B_{t-1,a}=\left(\Theta_{0:t-2,a}\Theta_{0:t-2,a}^{\top}+B_{0,a}^{-1}\right)^{-1},\\
L_{t-1,a}=\left(\Theta_{1:t-1,a}\Theta_{0:t-2,a}^{\top}+L_{0,a}B_{0,a}^{-1}\right)B_{t-1,a},\\
V_{t-1,a}=\left(\Theta_{1:t-1,a}-L_{t-1,a}\Theta_{0:t-2,a}\right)\left(\Theta_{1:t-1,a}-L_{t-1,a}\Theta_{0:t-2,a}\right)^{\top}+\left(L_{t-1,a}-L_{0,a}\right)B_{0,a}^{-1}\left(L_{t-1,a}-L_{0,a}\right)^{\top}+V_{0,a},\\
U_{t,a}U_{t,a}^{\top}=\left(\theta_{t-1,a}\theta_{t-1,a}^{\top}+B_{t-1,a}^{-1}\right).
\end{cases}$$
| —i.e., the dependence on past samples decays exponentially, and is negligible after a certain lag—
one can establish uniform-in-time convergence of SMC methods for functions that depend only on recent states, see [Kantas et al., 2015] and references therein. | the combination of Bayesian neural networks with approximate inference has also been investigated.
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016]. | The use of SMC in the context of bandit problems was previously considered for probit [Cherkassky and Bornn, 2013] and softmax [Urteaga and Wiggins, 2018c] reward models,
and to update latent feature posteriors in a probabilistic matrix factorization model [Kawale et al., 2015]. | More broadly, one can establish uniform-in-time convergence for path functionals that depend only on recent states,
as the Monte Carlo error of $p_{M}(\theta_{t-\tau:t}|\mathcal{H}_{1:t})$ with respect to $p(\theta_{t-\tau:t}|\mathcal{H}_{1:t})$ is uniformly bounded over time. | A
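For concreteness, here is a direct numpy transcription of the per-arm posterior updates displayed above. It is a sketch under the assumption that $\Theta_{0:t-2,a}$ and $\Theta_{1:t-1,a}$ are stored as $d\times(t-1)$ arrays of sampled parameters; the function name and the toy dimensions are ours:

```python
import numpy as np

def update_arm_posterior(Theta_past, Theta_next, B0, L0, V0):
    # Theta_past ~ Θ_{0:t-2,a}, Theta_next ~ Θ_{1:t-1,a}, both of shape (d, t-1).
    B0_inv = np.linalg.inv(B0)
    B = np.linalg.inv(Theta_past @ Theta_past.T + B0_inv)          # B_{t-1,a}
    L = (Theta_next @ Theta_past.T + L0 @ B0_inv) @ B              # L_{t-1,a}
    resid = Theta_next - L @ Theta_past
    V = resid @ resid.T + (L - L0) @ B0_inv @ (L - L0).T + V0      # V_{t-1,a}
    return B, L, V

d, T = 2, 10
rng = np.random.default_rng(0)
Theta = rng.normal(size=(d, T))                                    # hypothetical sampled trajectory
B, L, V = update_arm_posterior(Theta[:, :-1], Theta[:, 1:],
                               np.eye(d), np.zeros((d, d)), np.eye(d))
print(B.shape, L.shape, V.shape)
```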
Overall, the distribution of all three kinds of values throughout the day roughly correspond to each other.
In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the days. | The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes are earlier in the day.
Further, patient 12 takes approx. 3 times the average insulin dose of others in the morning. | Patient 17 has more rapid insulin applications than glucose measurements in the morning and particularly in the late evening.
For patient 15, rapid insulin again slightly exceeds the number of glucose measurements in the morning. Curiously, the number of glucose measurements matches the number of carbohydrate entries – it is possible the discrepancy is a result of missing (glucose and carbohydrate) measurements. | Overall, the distribution of all three kinds of values throughout the day roughly correspond to each other.
In particular, for most patients the number of glucose measurements roughly matches or exceeds the number of rapid insulin applications throughout the days. | Likewise, the daily number of measurements taken for carbohydrate intake, blood glucose level and insulin units vary across the patients.
The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14. | B
This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module Chen et al. (2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the final encoder output, as motivated by the study of Torralba et al. (2006) who stated that contextual information plays an important role in the allocation of attention. Our implementation of the ASPP architecture thus closely follows the modifications proposed by Chen et al. (2017). These authors augmented multi-scale information with global context and demonstrated performance improvements on semantic segmentation tasks. | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that resulted in 1,280 activation maps. This representation was then forwarded to a $1\times 1$ convolutional layer with 256 channels. While the total number of feature maps stayed constant, the amount of trainable parameters increased in this ablation setting. Table 6 summarizes the results according to validation instances of five eye tracking datasets for the model with and without an ASPP module. It can be seen that our multi-scale architecture reached significantly higher performance (one-tailed paired t-test) on most metrics and is therefore able to leverage the information captured by convolutional layers with different receptive field sizes. An ablation analysis of the multi-level component adapted from Cornia et al. (2016) can be viewed in Appendix A.
|
In this work, we laid out three convolutional layers with kernel sizes of $3\times 3$ and dilation rates of 4, 8, and 12 in parallel, together with a $1\times 1$ convolutional layer that could not learn new spatial dependencies but nonlinearly combined existing feature maps. Image-level context was represented as the output after global average pooling (i.e. after averaging the entries of a tensor across both spatial dimensions to a single value) and then brought to the same resolution as all other representations via bilinear upsampling, followed by another point-wise convolutional operation. Each of the five branches in the module contains 256 filters, which resulted in an aggregated tensor of 1,280 feature maps. Finally, the combined output was forwarded to a $1\times 1$ convolutional layer with 256 channels that contained the resulting multi-scale responses. |
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which captured information at different spatial scales in parallel. Finally, the input image dimensions were restored via the decoder network. Subscripts beneath convolutional layers denote the corresponding number of feature maps. | To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architecture, similar to the model by Pan et al. (2017), results in better approximations. Here we employed three upsampling blocks consisting of a bilinear scaling operation, which doubled the number of rows and columns, and a subsequent convolutional layer with kernel size $3\times 3$. This setup has previously been shown to prevent checkerboard artifacts in the upsampled image space in contrast to deconvolution Odena et al. (2016). Besides an increase of resolution throughout the decoder, the amount of channels was halved in each block to yield 32 feature maps. Our last network layer transformed activations into a continuous saliency distribution by applying a final $3\times 3$ convolution. The outputs of all but the last linear layer were modified via rectified linear units. Figure 2 visualizes the overall architecture design as described in this section.
| B |
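For concreteness, below is a PyTorch sketch of an ASPP block following the description above (a $1\times 1$ branch, three $3\times 3$ branches with dilation rates 4, 8 and 12, and an image-level pooling branch, 256 filters each, fused by a final $1\times 1$ convolution). The class name, the assumed 512 input channels, and the omission of ReLUs are our simplifications, not the authors' exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_channels=512, branch_channels=256):
        super().__init__()
        self.point = nn.Conv2d(in_channels, branch_channels, kernel_size=1)
        self.dilated = nn.ModuleList(
            nn.Conv2d(in_channels, branch_channels, kernel_size=3, padding=r, dilation=r)
            for r in (4, 8, 12))
        self.context = nn.Conv2d(in_channels, branch_channels, kernel_size=1)
        self.fuse = nn.Conv2d(5 * branch_channels, branch_channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        branches = [self.point(x)] + [conv(x) for conv in self.dilated]
        pooled = x.mean(dim=(2, 3), keepdim=True)                    # global average pooling
        pooled = F.interpolate(pooled, size=(h, w), mode="bilinear",
                               align_corners=False)                  # restore resolution
        branches.append(self.context(pooled))                        # point-wise conv on context
        return self.fuse(torch.cat(branches, dim=1))                 # 256 multi-scale maps

out = ASPP()(torch.randn(1, 512, 30, 40))
print(out.shape)  # torch.Size([1, 256, 30, 40])
```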
There is a polynomial-time $O(\sqrt{\log(\textsf{opt})}\log(n))$-approximation algorithm and a polynomial-time $O(\sqrt{\log(\textsf{opt})}\,\textsf{opt})$-approximation algorithm for MinCutwidth. |
In this section, we discuss some examples that illustrate the concepts of marking sequences and the locality number, and we also discuss some word combinatorial properties related to the locality number. Note that for illustration purposes, the example words considered in this section are not necessarily condensed. | The main results are presented in Sections 4, 5 and 6. First, in Section 4, we present the reductions from Loc to Cutwidth and vice versa, and we discuss the consequences of these reductions. Then, in Section 5, we show how Loc can be reduced to Pathwidth, which yields an approximation algorithm for computing the locality number; furthermore, we investigate the performance of direct greedy strategies for approximating the locality number. Finally, since we consider this of high importance independent of the locality number, we provide a direct reduction from cutwidth to pathwidth in Section 6.
|
As mentioned several times already, our reductions to and from the problem of computing the locality number also establish the locality number for words as a (somewhat unexpected) link between the graph parameters cutwidth and pathwidth. We shall discuss in more detail in Section 6 the consequences of this connection. Next, we conclude this section by providing a formal proof of Lemma 5.7, which is the main result of this section. |
In Section 2, we give basic definitions (including the central parameters of the locality number, the cutwidth and the pathwidth). In the next Section 3, we discuss the concept of the locality number with some examples and some word combinatorial considerations. The purpose of this section is to develop a better understanding of this parameter for readers less familiar with string parameters and combinatorics on words (the technical statements of this section are formally proven in the appendix). | D |
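Since the locality number is central here, a brute-force Python sketch may help build intuition. It assumes the standard definition of k-locality via marking sequences (mark the distinct letters one by one and track the maximal blocks of marked positions after each step); the function name and example word are ours, and the factorial enumeration is only feasible for small alphabets:

```python
from itertools import permutations

def locality(word):
    # Brute-force locality number: minimum over all marking sequences (orderings of
    # the distinct letters) of the maximum number of marked blocks seen while marking.
    best = len(word)
    for order in permutations(set(word)):
        marked, worst = set(), 0
        for ch in order:
            marked.add(ch)
            blocks, prev = 0, False
            for c in word:
                cur = c in marked
                blocks += cur and not prev     # a new block starts here
                prev = cur
            worst = max(worst, blocks)
        best = min(best, worst)
    return best

print(locality("abab"))   # 2 under this definition
```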
Besides solving the data and interpretability problems, researchers in cardiology could utilize the already established deep learning architectures that have not been widely applied in cardiology such as capsule networks.
Capsule networks[265] are deep neural networks that require less training data than CNNs, and their layers capture the ‘pose’ of features, thus making their inner workings more interpretable and closer to the human way of perception. | They have been used by a number of publications in cardiology in medical history prediction[70], ECG beat classification[86] and CVD prediction using fundus[192].
Another simpler tool for interpretability is saliency maps[264] that uses the gradient of the output with respect to the input which intuitively shows the regions that most contribute toward the output. | Amongst their experiments they found that rotational and scaling data augmentations did not help increase accuracy, attributing it to interpolation altering pixel intensities which is problematic due to the sensitivity of CNN to pixel distribution patterns.
| However an important constraint they currently have which limits them from achieving wider use, is the high computational cost compared to CNNs due to the ‘routing by agreement’ algorithm.
Amongst their recent uses in medicine include brain tumor classification[266] and breast cancer classification[267]. | Lessman et al.[195] method for coronary calcium scoring utilizes three independently trained CNNs to estimate a bounding box around the heart, in which connected components above a Hounsfield unit threshold are considered candidates for CACs.
Classification of extracted voxels was performed by feeding two-dimensional patches from three orthogonal planes into three concurrent CNNs to separate them from other high intensity lesions. | C |
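The gradient-based saliency maps mentioned above amount to only a few lines of code; here is a minimal PyTorch sketch (our own illustration; `model` is assumed to be any differentiable classifier returning a matrix of class scores, and the reduction choices are ours):

```python
import torch

def saliency_map(model, images):
    # d(top-class score)/d(input); large magnitudes mark the most influential regions.
    x = images.clone().detach().requires_grad_(True)
    scores = model(x)                        # shape (N, num_classes) assumed
    scores.max(dim=1).values.sum().backward()
    return x.grad.abs().amax(dim=1)          # per-pixel saliency, max over channels

# usage (hypothetical model): sal = saliency_map(my_cnn, batch)   # batch: (N, C, H, W)
```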
An important step in this direction was made by Leibfried et al. (2016), which extends the work of Oh et al. (2015) by including reward prediction, but does not use the model to learn policies that play the games.
Most of these approaches, including ours, encode knowledge of the game in an implicit way. Unlike this, there are works in which modeling is more explicit; for example, Ersen & Sariel (2014) use the testbed of the Incredible Machines to learn object behaviors and their interactions. | Using models of environments, or informally giving the agent the ability to predict its future, has a fundamental appeal for reinforcement learning. The spectrum of possible applications is vast, including learning policies
from the model (Watter et al., 2015; Finn et al., 2016; Finn & Levine, 2017; Ebert et al., 2017; Hafner et al., 2019; Piergiovanni et al., 2018; Rybkin et al., 2018; Sutton & Barto, 2017, Chapter 8), capturing important details of the scene (Ha & Schmidhuber, 2018), encouraging exploration (Oh et al., 2015), creating intrinsic motivation (Schmidhuber, 2010) or counterfactual reasoning (Buesing et al., 2019). | have incorporated images into real-world (Finn et al., 2016; Finn & Levine, 2017; Babaeizadeh et al., 2017a; Ebert et al., 2017; Piergiovanni et al., 2018; Paxton et al., 2019; Rybkin et al., 2018; Ebert et al., 2018) and simulated (Watter et al., 2015; Hafner et al., 2019) robotic control.
Our video models of Atari environments described in Section 4 are motivated by models developed in the context of robotics. Another source of inspiration are discrete autoencoders proposed by van den Oord et al. (2017) and Kaiser & Bengio (2018). | Notable exceptions are the works of
Oh et al. (2017), Sodhani et al. (2019), Ha & Schmidhuber (2018), Holland et al. (2018), Leibfried et al. (2018) and Azizzadenesheli et al. (2018). Oh et al. (2017) use a model of rewards to augment model-free learning with good results on a number of Atari games. However, this method does not actually aim to model or predict future frames, and achieves clear but relatively modest gains in efficiency. | Atari games gained prominence as a benchmark for reinforcement learning with the introduction of the Arcade Learning Environment (ALE) Bellemare et al. (2015). The combination of reinforcement learning and deep models then enabled RL algorithms to learn to play Atari games directly from images of the game screen, using variants of the DQN algorithm (Mnih et al., 2013; 2015; Hessel et al., 2018) and actor-critic algorithms (Mnih et al., 2016; Schulman et al., 2017; Babaeizadeh et al., 2017b; Wu et al., 2017; Espeholt et al., 2018).
The most successful methods in this domain remain model-free algorithms (Hessel et al., 2018; Espeholt et al., 2018). Although the sample complexity of these methods has substantially improved recently, it remains far higher than the amount of experience required for human players to learn each game (Tsividis et al., 2017). In this work, we aim to learn Atari games with a budget of just 100K agent steps (400K frames), corresponding to about two hours of play time. Prior methods are generally not evaluated in this regime, and we therefore optimized Rainbow (Hessel et al., 2018) for optimal performance on 1M steps, see Appendix E for details. | C |
Here we also refer to a CNN as a neural network consisting of alternating convolutional layers, each one followed by a Rectified Linear Unit (ReLU) and a max-pooling layer, with a fully connected layer at the end, while the term ‘layer’ denotes the number of convolutional layers.
| This is achieved with the use of multilayer networks that consist of millions of parameters [1], trained with backpropagation [2] on large amounts of data.
Although deep learning is mainly used in biomedical images there is also a wide range of physiological signals, such as Electroencephalography (EEG), that are used for diagnosis and prediction problems. | A high level overview of these combined methods is shown in Fig. 1.
Although we choose the EEG epileptic seizure recognition dataset from the University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized to any kind of signal classification problem. |
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals. | For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 samples to convert the $x_i$ into the time-frequency domain.
The resulting spectrogram, which represents the magnitude of the power spectral density ($V^2/Hz$) of $x_i$, was then upsampled to $178\times 178$ using bilinear pixel interpolation. | C
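The quoted spectrogram settings map directly onto scipy; below is a short sketch of that step (the 178 Hz sampling rate and the use of `scipy.ndimage.zoom` for the bilinear upsampling are our assumptions, not stated in the excerpt):

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import zoom

def eeg_spectrogram(x, fs=178.0):
    # Tukey(0.25) window, 8-sample segments, 4-sample overlap, 64-point FFT (as quoted),
    # then bilinear (order-1) upsampling of the power-spectral-density image to 178 x 178.
    f, t, Sxx = spectrogram(x, fs=fs, window=("tukey", 0.25),
                            nperseg=8, noverlap=4, nfft=64)
    return zoom(Sxx, (178 / Sxx.shape[0], 178 / Sxx.shape[1]), order=1)

img = eeg_spectrogram(np.random.randn(178))   # one synthetic 178-sample segment
print(img.shape)                              # (178, 178)
```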
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To assure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial along with their first and second derivatives. The trajectory design took into account six constraints: initial and final position, velocity, and acceleration [23]. The Reflexxes Motion Library IV [24] was utilized to perform the inverse kinematics calculation.
| Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step.
|
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established based on the whole body climbing gait at height h, as shown in Fig. 8, or the rear body climbing gait at height h, as seen in Fig. 9. The blue line illustrates the total energy consumed (in rolling locomotion mode), while the green line represents the ongoing cumulative energy consumption of the rear legs, indicating it did not exceed the threshold values set by the rear body climbing gait. |
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing gait was developed. In this approach, once the front legs and body have completed their upward rolling motion, the rear legs are elevated to ascend the step. This strategy is particularly beneficial in situations where the mobility of rolling locomotion is hindered by the rear wheels. For a more detailed discussion of the whole-body climbing gait and the rear-body climbing gait, we direct readers to [10]. | The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful design of the climbing gaits. These gaits incorporate identical desired joint accelerations, leg stride length, and forward movement height, as highlighted in [4]. Consequently, variations in energy consumption during different step negotiations primarily stem from negotiation time and body movements. To establish the threshold values ($T_{wb}$ and $T_{rb}$) for the energy criterion, they were set equal to the energy expenditure of the walking locomotion mode using the whole-body climbing and rear-body climbing gaits, respectively. Unlike other methods that use empirical values [2, 8], the threshold values in this study were decided upon based on a novel rule that evaluates the alternative locomotion mode. Moreover, these threshold values are not fixed and are determined based on the terrain profiles the robot is negotiating.
| C |
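The fifth-order joint trajectories mentioned above (six constraints: position, velocity and acceleration at both endpoints) reduce to solving a 6×6 linear system for the polynomial coefficients. A small sketch of that computation, with our own function name and example values:

```python
import numpy as np

def quintic_coeffs(t0, t1, p0, p1, v0=0.0, v1=0.0, a0=0.0, a1=0.0):
    # Coefficients c0..c5 of p(t) = sum c_k t^k meeting the six boundary constraints.
    rows = [[1, t, t**2, t**3, t**4, t**5] for t in (t0, t1)]          # position
    rows += [[0, 1, 2*t, 3*t**2, 4*t**3, 5*t**4] for t in (t0, t1)]    # velocity
    rows += [[0, 0, 2, 6*t, 12*t**2, 20*t**3] for t in (t0, t1)]       # acceleration
    return np.linalg.solve(np.array(rows, dtype=float), [p0, p1, v0, v1, a0, a1])

c = quintic_coeffs(0.0, 2.0, 0.0, 1.0)    # e.g. move a joint from 0 to 1 rad in 2 s
print(np.polyval(c[::-1], 1.0))           # position at mid-trajectory (0.5)
```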
Suppose that you have an investment account with a significant amount in it, and that your financial institution advises you periodically on investments. One day, your banker informs you that company X will soon receive a big boost, and advises to use the entire account to buy stocks. If you were to completely trust the banker’s advice, there are naturally two possibilities: either the advice will prove correct (which would be great) or it will prove wrong (which would be catastrophic). A prudent customer would take this advice with a grain of salt, and would not be willing to risk everything. In general, our understanding of advice is that it entails knowledge that is not foolproof.
|
Under the current models, the advice bits can encode any information about the input sequence; indeed, defining the “right” information to be conveyed to the algorithm plays an important role in obtaining better online algorithms. Clearly, the performance of the online algorithm can only improve with larger number of advice bits. The objective is thus to identify the exact trade-offs between the size of the advice and the performance of the algorithm. This is meant to provide a smooth transition between the purely online world (nothing is known about the input) and the purely “offline” world (everything is known about the input). |
In future work, we would like to expand the model so as to incorporate, into the analysis, the concept of advice error. More specifically, given an advice string of size k𝑘kitalic_k, let η𝜂\etaitalic_η denote the number of erroneous bits (which may be not known to the algorithm). In this setting, the objective would be to study the power and limitations of online algorithms, i.e., from the point of view of both upper and lower bounds on the competitive ratio. A first approach towards this direction was made recently in the context of problems such as contract | We introduced a new model in the study of online algorithms with advice, in which the online algorithm can leverage information about the request sequence that is not necessarily foolproof. Motivated by advances in learning-online algorithms, we studied tradeoffs between the trusted and untrusted competitive ratio, as function of the advice size. We also proved the first lower bounds for online algorithms in this setting. Any other online problem should be amenable to analysis under this framework, and in particular any other of the many problems studied under the classic framework of (standard) advice complexity.
|
In this work we focus on the online computation with advice. Our motivation stems from observing that, unlike the real world, the advice under the known models is often closer to “fiat” than “recommendation”. Our objective is to propose a model which allows the possibility of incorrect advice, with the objective of obtaining more realistic and robust online algorithms. | D |
With the aim of avoiding cases of misclassification like in (d), we decided to implement the second classifier, SS3Δ, whose policy also takes into account the changes in both slopes.
As can be seen from Algorithm 3 and as mentioned before, SS3Δ additionally classifies a subject as positive if the positive slope changes at least four times faster than the other one. | the accumulated negative confidence value starts being greater than the positive one, but as more chunks are read (specifically starting after reading the 3rd chunk), the positive value starts growing and keeps growing until it exceeds the other one. In this case, this subject is classified as depressed after reading the 6th chunk.
| This problem can be detected in this subject by seeing the blue dotted peak at around the 60th writing, indicating that “the positive slope changed around five times faster than the negative” there, and therefore misclassifying the subject as positive. However, note that this positive change was in fact really small (less than 1).
| In Figure 7 is shown again the subject 1914, this time including information about the changes in the slopes.
Note that this subject was previously misclassified as not depressed because the accumulated positive value never exceeded the negative one, but by adding this new extra policy, this time it is correctly classified as positive after reading the 8th chunk (note the peak in the blue dotted line pointing out that, at this point, the positive value has grown around 11 times faster than the negative one). |
the subject is misclassified as positive since the accumulated positive value exceeded the negative one. When we manually analyzed cases like these, we often found that the classifier was correctly accumulating positive evidence, since the users were, in fact, apparently depressed. | C |
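To make the extra policy concrete, here is a simplified Python sketch of the decision rule described above. The per-chunk accumulated confidence values are assumed to be given; the factor 4 comes from the text, while the exact safeguards of SS3Δ (e.g., minimum change magnitudes) are not reproduced here:

```python
def ss3_delta_policy(pos_acc, neg_acc, gamma=4.0):
    # pos_acc / neg_acc: accumulated positive/negative confidence after each chunk.
    # Flag positive when the positive value exceeds the negative one, or when the
    # positive slope grows at least gamma times faster than the negative slope.
    for i in range(1, len(pos_acc)):
        d_pos = pos_acc[i] - pos_acc[i - 1]
        d_neg = neg_acc[i] - neg_acc[i - 1]
        if pos_acc[i] > neg_acc[i] or (d_pos > 0 and d_pos >= gamma * max(d_neg, 0.0)):
            return "positive", i           # chunk index at which the decision fires
    return "negative", len(pos_acc) - 1

print(ss3_delta_policy([1, 2, 8, 9], [3, 4, 5, 6]))   # toy values -> ('positive', 2)
```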
Stochastic gradient descent (SGD) and its variants (Robbins and Monro, 1951; Bottou, 2010; Johnson and Zhang, 2013; Zhao et al., 2018, 2020, 2021) have been the dominating optimization methods for solving (1).
In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In practice, MSGD often outperforms SGD (Krizhevsky et al., 2012; Sutskever et al., 2013). Many machine learning platforms, such as TensorFlow, PyTorch and MXNet, include MSGD as one of their optimization methods. | Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model training.
| With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training.
These methods can be implemented on distributed frameworks like parameter server and all-reduce frameworks. | Recently, parameter server (Li et al., 2014) has been one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework.
In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted for the all-reduce framework. | GMC can be easily implemented on the all-reduce distributed framework in which each worker sends the sparsified vector 𝒞(𝐞t+12,k)𝒞subscript𝐞𝑡12𝑘\mathcal{C}({\bf e}_{t+\frac{1}{2},k})caligraphic_C ( bold_e start_POSTSUBSCRIPT italic_t + divide start_ARG 1 end_ARG start_ARG 2 end_ARG , italic_k end_POSTSUBSCRIPT ) to all the other workers, then each worker updates 𝐰t+1subscript𝐰𝑡1{\bf w}_{t+1}bold_w start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT after receiving the sparsified vectors from all the other workers.
| B |
The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4LABEL:sub@subfig:crrl_density_plot between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation. | Figure 1: Visualization of the activation maps of five activation functions (Identity, ReLU, top-k absolutes, Extrema-Pool indices and Extrema) for 1D and 2D input in the top and bottom row respectively.
The 1D input to the activation functions is denoted with the continuous transparent green line using an example from the UCI dataset. | Imposing a med𝑚𝑒𝑑meditalic_m italic_e italic_d on the extrema detection algorithm makes 𝜶𝜶\bm{\alpha}bold_italic_α sparser than the previous cases and solves the problem of double extrema activations that Extrema-Pool indices have (as shown in Fig. 1LABEL:sub@subfig:extrema).
The sparsity parameter in this case is set d(i)=medsuperscript𝑑𝑖𝑚𝑒𝑑d^{(i)}=meditalic_d start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT = italic_m italic_e italic_d, where 1≤med<n∈ℕ1𝑚𝑒𝑑𝑛ℕ1\leq med<n\in\mathbb{N}1 ≤ italic_m italic_e italic_d < italic_n ∈ blackboard_N is the minimum extrema distance. | The sparser an activation function is the more it compresses, sometimes at the expense of reconstruction error.
However, by visual inspection of Fig. 5 one could confirm that the learned kernels of the SAN with sparser activation maps (Extrema-Pool indices and Extrema) correspond to the reoccurring patterns in the datasets, thus having high interpretability. |
The three separate clusters which are depicted in Fig. 3 and the aggregated density plot in Fig. 4LABEL:sub@subfig:crrl_density_plot between the Identity activation function, the ReLU and the rest show the effect of a sparser activation function on the representation. | C |
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better. |
Fig. 12 shows how the number of UAVs affect the computation complexity of SPBLLA. Since the total number of UAVs is diverse, the goal functions are different. The goal functions’ value in the optimum states increase with the growth in UAVs’ number. Since goal functions are the summation function of utility functions, more UAVs offer more utilities which result in higher potential function value. Moreover, more UAVs can cover more area and support more users, which also corresponds with more utilities. Fig. 12 also shows how many iterations that UAV ad-hoc network needs to approach to convergence. With the number of UAVs improves, more iterations are required in this network. |
Figure 1: The topological structure of UAV ad-hoc networks. a) The UAV ad-hoc network supports user communications. b) The coverage of a UAV depends on its altitude and field angle. c) There are two kinds of links between users, and the link supported by UAV is better. |
We construct a UAV ad-hoc network in a post-disaster scenario with M𝑀Mitalic_M identical UAVs being randomly deployed, in which M𝑀Mitalic_M is a huge number compared with normal Multi-UAV system. All the UAVs have the same volume of battery E𝐸Eitalic_E and communication capability. The topological structure of Multi-UAV network is shown in Fig. 1 (a). | Since the UAV ad-hoc network game is a special type of potential game, we can apply the properties of the potential game in the later analysis. Some algorithms that have been applied in the potential game can also be employed in the UAV ad-hoc network game. In the next section, we investigate the existing algorithm with its learning rate in large-scale post-disaster scenarios and propose a new algorithm which is more suitable for the UAV ad-hoc network in such scenarios.
| C |
Π¯rsubscript¯Π𝑟\displaystyle\overline{\Pi}_{r}over¯ start_ARG roman_Π end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT
=[−2Dr¯^∗(μ^r^(Dr^¯∗v¯r))−Dz¯^∗(μ^r^(Dr^¯∗v¯z+Dz^¯∗v¯r))]/r¯absentdelimited-[]absent2^¯𝐷𝑟^𝜇^𝑟¯^𝐷𝑟subscript¯𝑣𝑟absent^¯𝐷𝑧^𝜇^𝑟¯^𝐷𝑟subscript¯𝑣𝑧¯^𝐷𝑧subscript¯𝑣𝑟¯𝑟\displaystyle=\biggl{[}\underset{}{-2\widehat{\overline{Dr}}*\left(\widehat{% | }}\,\,\widehat{r}\,\,\left(\overline{\widehat{Dr}}*\overline{v}_{z}+\overline{%
\widehat{Dz}}*\overline{v}_{r}\right)\right)}\biggr{]}\,/\,\overline{r}= [ start_UNDERACCENT end_UNDERACCENT start_ARG - 2 over^ start_ARG over¯ start_ARG italic_D italic_r end_ARG end_ARG ∗ ( over^ start_ARG italic_μ end_ARG over^ start_ARG italic_r end_ARG ( over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ) ) end_ARG - start_UNDERACCENT end_UNDERACCENT start_ARG over^ start_ARG over¯ start_ARG italic_D italic_z end_ARG end_ARG ∗ ( over^ start_ARG italic_μ end_ARG over^ start_ARG italic_r end_ARG ( over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT + over¯ start_ARG over^ start_ARG italic_D italic_z end_ARG end_ARG ∗ over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ) ) end_ARG ] / over¯ start_ARG italic_r end_ARG | \widehat{Dz}}*\overline{v}_{r}\right)\right)}\biggr{]}\,/\,\overline{r}= [ start_UNDERACCENT end_UNDERACCENT start_ARG - 2 over^ start_ARG over¯ start_ARG italic_D italic_z end_ARG end_ARG ∗ ( over^ start_ARG italic_μ end_ARG over^ start_ARG italic_r end_ARG ( over¯ start_ARG over^ start_ARG italic_D italic_z end_ARG end_ARG ∗ over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT ) ) end_ARG - start_UNDERACCENT end_UNDERACCENT start_ARG over^ start_ARG over¯ start_ARG italic_D italic_r end_ARG end_ARG ∗ ( over^ start_ARG italic_μ end_ARG over^ start_ARG italic_r end_ARG ( over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT + over¯ start_ARG over^ start_ARG italic_D italic_z end_ARG end_ARG ∗ over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ) ) end_ARG ] / over¯ start_ARG italic_r end_ARG
| \overline{\psi}\right)\,\,\left(\overline{\widehat{Dz}}*\overline{f}\right)%
\right)\,/\,\widehat{r}\right\}= divide start_ARG 2 italic_π end_ARG start_ARG italic_μ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_ARG ( over^ start_ARG italic_s end_ARG over^ start_ARG italic_r end_ARG ) start_POSTSUPERSCRIPT italic_T end_POSTSUPERSCRIPT ∗ { ( - ( over¯ start_ARG over^ start_ARG italic_D italic_z end_ARG end_ARG ∗ over¯ start_ARG italic_ψ end_ARG ) ( over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_f end_ARG ) + ( over¯ start_ARG over^ start_ARG italic_D italic_r end_ARG end_ARG ∗ over¯ start_ARG italic_ψ end_ARG ) ( over¯ start_ARG over^ start_ARG italic_D italic_z end_ARG end_ARG ∗ over¯ start_ARG italic_f end_ARG ) ) / over^ start_ARG italic_r end_ARG } | }_{r}\,/\,\overline{r}^{2}}start_UNDERACCENT end_UNDERACCENT start_ARG + divide start_ARG 2 end_ARG start_ARG 3 end_ARG ( over^ start_ARG over¯ start_ARG italic_D italic_r end_ARG end_ARG ∗ ( over^ start_ARG italic_μ end_ARG ( over¯ start_ARG over^ start_ARG ∇ end_ARG end_ARG ⋅ over¯ start_ARG bold_v end_ARG ) ) ) end_ARG + start_UNDERACCENT end_UNDERACCENT start_ARG 2 over¯ start_ARG italic_μ end_ARG over¯ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT / over¯ start_ARG italic_r end_ARG start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG
| A |
When using the framework, one can further require reflexivity on the comparability functions, i.e. f(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, in order to get a semantic of comparability closer to equality.
Even more, it could be possible to make the functions reflexive on all values but null where some freedom is allowed. | Indeed, in practice the meaning of the null value in the data should be explained by domain experts, along with recommendations on how to deal with it.
Moreover, since the null value indicates a missing value, relaxing reflexivity of comparability functions on null allows to consider absent values as possibly | When using the framework, one can further require reflexivity on the comparability functions, i.e. f(xA,xA)=1A𝑓subscript𝑥𝐴subscript𝑥𝐴subscript1𝐴f(x_{A},x_{A})=1_{A}italic_f ( italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ) = 1 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, in order to get a semantic of comparability closer to equality.
Even more, it could be possible to make the functions reflexive on all values but null where some freedom is allowed. | fA(u,v)=fB(u,v)={1if u=v≠nullaif u≠null,v≠null and u≠vbif u=v=null0otherwise.subscript𝑓𝐴𝑢𝑣subscript𝑓𝐵𝑢𝑣cases1if 𝑢𝑣null𝑎formulae-sequenceif 𝑢null𝑣null and 𝑢𝑣𝑏if 𝑢𝑣null0otherwise.f_{A}(u,v)=f_{B}(u,v)=\begin{cases}1&\text{if }u=v\neq\texttt{null}\\
a&\text{if }u\neq\texttt{null},v\neq\texttt{null}\text{ and }u\neq v\\ | Intuitively, if an abstract value xAsubscript𝑥𝐴x_{A}italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT of ℒAsubscriptℒ𝐴\mathcal{L}_{A}caligraphic_L start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT is interpreted as 1111 (i.e., equality)
by hAsubscriptℎ𝐴h_{A}italic_h start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, any value yA≥AxAsubscript𝐴subscript𝑦𝐴subscript𝑥𝐴y_{A}\geq_{A}x_{A}italic_y start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ≥ start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT italic_x start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT must be set to 1111 since it is closer to | A |
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trails and averaged them. We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Classic Control Environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady state policy. |
The results in Figure 3 show that using DQN with different Dropout methods result in better-preforming policies and less variability as the reduced standard deviation between the variants indicate to. In table 1, Wilcoxon Sign-Ranked test was used to analyze the effect of Variance before applying Dropout (DQN) and after applying Dropout (Dropout methods DQN). There was a statistically significant decrease in Variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore one of the Dropout methods outperformed DQN score. | For the experiments, fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers between the input layer and the first hidden layer and between the two hidden layers. To minimize the
DQN loss, ADAM optimizer was used[25]. |
It’s the original Dropout method. It was introduced in 2012. Standard Dropout provides a simple technique for avoiding over-fitting in fully connected neural networks[12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is used, but each of the neurons’ output is multiplied by the probability p that the neuron was excluded with. This approach gives approximately the same result as averaging of the outcome of a great number of different networks which is very expensive approach to evaluate, this compensates that in the testing phase Dropout achieves a green model averaging. The probability can vary for each layer, the original paper recommend p=0.2p0.2\textit{p}=0.2p = 0.2 for the input layer and p =0.5p 0.5\textit{p }=0.5p = 0.5 for hidden layers. Neurons in the output layer are not dropped. This method proved effective for regularizing neural networks, enabling them to be trained for longer periods without over-fitting and resulting in improved performance, and since then many Dropout techniques have been improved for different types neural networks architectures (Figure 1). |
A fully connected neural network architecture was used. It was composed of two hidden layers of 128 neurons and two Dropout layers between the input layer and the first hidden layer and between the two hidden layers. ADAM optimizer for the minimization[25]. | B |
In medical image segmentation works, researchers have converged toward using classical cross-entropy loss functions along with a second distance or overlap based functions. Incorporating domain/prior knowledge (such as coding the location of different organs explicitly in a deep model) is more sensible in the medical datasets. As shown in Taghanaki et al. (2019e), when only a distance-based or overlap-based loss function is used in a network, and the final layer applies sigmoid function, the risk of gradient vanishing increases. Although overlap based loss function are used in case of a class imbalance (small foregrounds), in Figure 13, we show how using (only) overlap based loss functions as the main term can be problematic for smooth optimization where they highly penalize a model under/over-segmenting a small foreground. However, the cross-entropy loss returns a reasonable score for the same cases. Besides using integrated cross-entropy based loss functions, future work can be exploring a single loss function that follows the behavior of the cross-entropy and at the same time, offers more features such capturing contour distance. This can be achieved by revisiting the current distance and overlap based loss functions. Another future path can be exploring auto loss function (or regularization term) search similar to the neural architecture search mentioned above. Similarly, gradient based optimizations based on Sobolev (Adams and Fournier, 2003) gradients (Czarnecki et al., 2017), such as the works of Goceri (2019b, 2020) are an interesting research direction.
|
Going beyond pixel intensity-based scene understanding by incorporating prior knowledge, which have been an active area of research for the past several decades (Nosrati and Hamarneh, 2016; Xie et al., 2020). Encoding prior knowledge in medical image analysis models is generally more possible as compared to natural images. Currently, deep models receive matrices of intensity values, and usually, they are not forced to learn prior information. Without explicit reinforcement, the models might still learn object relations to some extent. However, it is difficult to interpret a learned strategy. | Deep learning has had a tremendous impact on various fields in science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important processing step in natural images for scene understanding and medical image analysis, for image-guided interventions, radiotherapy, or improved radiological diagnostics, etc. Image segmentation is formally defined as “the partition of an image into a set of nonoverlapping
regions whose union is the entire image” (Haralick and Shapiro, 1992). A plethora of deep learning approaches for medical image segmentation have been introduced in the literature for different medical imaging modalities, including X-ray, visible-light imaging (e.g. colour dermoscopic images), magnetic resonance imaging (MRI), positron emission tomography (PET), computerized tomography (CT), and ultrasound (e.g. echocardiographic scans). Deep architectural improvement has been a focus of many researchers for different purposes, e.g., tackling gradient vanishing and exploding of deep models, model compression for efficient small yet accurate models, while other works have tried to improve the performance of deep networks by introducing new optimization functions. |
Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information of where the borders of an object should be. Some researchers resort to traditional computer vision methods such as conditional random fields (CRFs) to overcome this problem, which however, add more computation time to the models. |
For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets, however the advantage of processing volumetric data using 3D convolutions versus the processing the volume slice by slice using 2D sequenced models. Ideally, seeing the whole object of interest in a 3D volume might help to capture the geometrical information of the object, which might be missed in processing a 3D volume slice by slice. Therefore a future direction in this area can be through analysis of sequenced models versus volumetric convolution-based approaches. | A |
From Fig. 9(b) we notice that the graphs 𝐀(1)superscript𝐀1{\mathbf{A}}^{(1)}bold_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT and 𝐀(2)superscript𝐀2{\mathbf{A}}^{(2)}bold_A start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT in GRACLUS have additional nodes that are disconnected.
As discussed in Sect. V, these are the fake nodes that are added to the graph so that its size can be halved at every pooling operation. | Fig. 9(c) shows that NMF produces graphs that are very dense, as a consequence of the multiplication with the dense soft-assignment matrix to construct the coarsened graph.
Finally, Fig. 9(d) shows that NDP produces coarsened graphs that are sparse and preserve well the topology of the original graph. | Fig. 12 shows for the result of the NDP coarsening procedure on the 6 types of graphs.
The first column shows the subset of nodes of the original graph that are selected (𝒱+superscript𝒱\mathcal{V}^{+}caligraphic_V start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT, in red) and discarded (𝒱−superscript𝒱\mathcal{V}^{-}caligraphic_V start_POSTSUPERSCRIPT - end_POSTSUPERSCRIPT, in blue) after each pooling step. | Fig. 12 shows for the result of the NDP coarsening procedure on the 6 types of graphs.
The first column shows the subset of nodes of the original graph that are selected (𝒱+superscript𝒱\mathcal{V}^{+}caligraphic_V start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT, in red) and discarded (𝒱−superscript𝒱\mathcal{V}^{-}caligraphic_V start_POSTSUPERSCRIPT - end_POSTSUPERSCRIPT, in blue) after each pooling step. | From Fig. 9(b) we notice that the graphs 𝐀(1)superscript𝐀1{\mathbf{A}}^{(1)}bold_A start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT and 𝐀(2)superscript𝐀2{\mathbf{A}}^{(2)}bold_A start_POSTSUPERSCRIPT ( 2 ) end_POSTSUPERSCRIPT in GRACLUS have additional nodes that are disconnected.
As discussed in Sect. V, these are the fake nodes that are added to the graph so that its size can be halved at every pooling operation. | A |
For real-world applications, the dependency on large amounts of labeled data represents a significant limitation (Breiman et al., 1984; Hekler et al., 2019; Barz & Denzler, 2020; Qi & Luo, 2020; Phoo & Hariharan, 2021; Wang et al., 2021). Frequently, there is little or even no labeled data for a particular task and hundreds or thousands of examples have to be collected and annotated.
This particularly affects new applications and rare labels (e.g., detecting rare diseases or defects in manufacturing). | Transfer learning and regularization methods are usually applied to reduce overfitting.
However, for training with little data, the networks still have a considerable number of parameters that have to be fine-tuned – even if just the last layers are trained. | Random forests and neural networks share some similar characteristics, such as the ability to learn arbitrary decision boundaries; however, both methods have different advantages.
Random forests are based on decision trees. Various tree models have been presented – the most well-known are C4.5 (Quinlan, 1993) and CART (Breiman et al., 1984). | Additionally, the experiment shows that the training is very robust to overfitting even when the number of parameters in the network increases.
When combining the generated data and original data, the accuracy on Car and Covertype improves with an increasing number of training examples. | First, we analyze the performance of state-of-the-art methods for mapping random forests into neural networks and neural random forest imitation. The results are shown in Figure 4 for different numbers of training examples per class.
For each method, the average number of parameters of the generated networks across all datasets is plotted depending on the test error. That means that the methods aim for the lower-left corner (smaller number of network parameters and higher accuracy). Please note that the y-axis is shown on a logarithmic scale. | A |
Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy optimization remain rather limited from both computational and statistical perspectives. More specifically, from the computational perspective, it remains unclear until recently whether policy optimization converges to the globally optimal policy in a finite number of iterations, even given infinite data. Meanwhile, from the statistical perspective, it still remains unclear how to attain the globally optimal policy with a finite regret or sample complexity.
|
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same T𝑇\sqrt{T}square-root start_ARG italic_T end_ARG-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions. | for any function f:𝒮→ℝ:𝑓→𝒮ℝf:{\mathcal{S}}\rightarrow\mathbb{R}italic_f : caligraphic_S → blackboard_R. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), where the reward function is fixed across all the episodes.
|
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice. | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In particular, OPPO is based on PPO (and similarly, NPG and TRPO), which is shown to converge to the globally optimal policy at sublinear rates in tabular and linear settings, as well as nonlinear settings involving neural networks (Liu et al., 2019; Wang et al., 2019). However, without assuming the access to a “simulator” or finite concentratability coefficients, both of which imply that the state space is already well explored, it remains unclear whether any of such algorithms is sample-efficient, that is, attains a finite regret or sample complexity. In comparison, by incorporating uncertainty quantification into the action-value function at each update, which explicitly encourages exploration, OPPO not only attains the same computational efficiency as NPG, TRPO, and PPO, but is also shown to be sample-efficient with a d2H3Tsuperscript𝑑2superscript𝐻3𝑇\sqrt{d^{2}H^{3}T}square-root start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_H start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT italic_T end_ARG-regret up to logarithmic factors.
| C |
In this section, we review approaches that aim to reduce the model size by employing efficient matrix representations.
There exist several methods using low-rank decompositions which represent a large matrix (or a large tensor) using only a fraction of the parameters. | Several works have investigated special matrix structures that require fewer parameters and allow for faster matrix multiplications—the main workload in fully connected layers.
Furthermore, there exist several manually designed architectures that introduced lightweight building blocks or modified existing building blocks to enhance resource efficiency. | In this section, we review approaches that aim to reduce the model size by employing efficient matrix representations.
There exist several methods using low-rank decompositions which represent a large matrix (or a large tensor) using only a fraction of the parameters. | In most cases, the implicitly represented matrix is never computed explicitly such that also a computational speed-up is achieved.
Furthermore, there exist approaches using special matrices that are specified by only few parameters and whose structure allows for extremely efficient matrix multiplications. | In Cheng et al. (2015), the weight matrices of fully connected layers are restricted to circulant matrices 𝐖∈ℝn×n𝐖superscriptℝ𝑛𝑛\mathbf{W}\in\mathbb{R}^{n\times n}bold_W ∈ blackboard_R start_POSTSUPERSCRIPT italic_n × italic_n end_POSTSUPERSCRIPT, which are fully specified by only n𝑛nitalic_n parameters.
While this dramatically reduces the memory footprint of fully connected layers, circulant matrices also facilitate faster computation as matrix-vector multiplication can be efficiently computed using the fast Fourier transform. | C |
(iλ,λ′)∗(ω0)=ω1+ω2subscriptsubscript𝑖𝜆superscript𝜆′subscript𝜔0subscript𝜔1subscript𝜔2(i_{\lambda,\lambda^{\prime}})_{*}(\omega_{0})=\omega_{1}+\omega_{2}( italic_i start_POSTSUBSCRIPT italic_λ , italic_λ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ( italic_ω start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ) = italic_ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT |
ω2 is the degree-1 homology class induced bysubscript𝜔2 is the degree-1 homology class induced by\displaystyle\omega_{2}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT is the degree-1 homology class induced by | and seeks the infimal r>0𝑟0r>0italic_r > 0 such that the map induced by ιrsubscript𝜄𝑟\iota_{r}italic_ι start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at n𝑛nitalic_n-th homology level annihilates the fundamental class [M]delimited-[]𝑀[M][ italic_M ] of M𝑀Mitalic_M. This infimal value defines FillRad(M)FillRad𝑀\mathrm{FillRad}(M)roman_FillRad ( italic_M ), the filling radius of M𝑀Mitalic_M.
| ω1 is the degree-1 homology class induced bysubscript𝜔1 is the degree-1 homology class induced by\displaystyle\omega_{1}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT is the degree-1 homology class induced by
|
ω0 is the degree-1 homology class induced bysubscript𝜔0 is the degree-1 homology class induced by\displaystyle\omega_{0}\text{ is the degree-1 homology class induced by }italic_ω start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the degree-1 homology class induced by | D |
The remaining costs are one aspect of estimating the projection quality. This means that projected points with high remaining costs can be moved by an additional optimization step. Akin to this idea, t-viSNE might show a preview of the data points in the next optimization step. In consequence, users could determine whether the t-SNE optimization is completed or not, simply by observing the points’ trajectories in low-dimensional space. This remains as possible future work.
|
Clustervision [51] is a visualization tool used to test multiple batches of a varying number of clusters and allows the users to pick the best partitioning according to their task. Then, the dimensions are ordered according to a cluster separation importance ranking. As a result, the interpretation and assessment of the final results are intrinsically tied to the choice of clustering algorithm, which is an external technique that is (in general) not related to the DR itself. Thus, the quality of the results is tied to the quality of the chosen clustering algorithm. With t-viSNE it is also possible to explore the results of a clustering technique by, for example, mapping them to labels, then using the labels as regions of interest during the interactive exploration of the data. However, the labels do not influence the results of t-viSNE, whether they exist or not, since we did not intend to tie the quality of our results to other external (and independent) techniques. | The goals of the comparative study presented in this paper were to provide initial evidence of the acceptance of t-viSNE by analysts, the consistency of their results when exploring a t-SNE projection using our tool, and the improvement over another state-of-the-art tool.
The tasks of the study were designed to test how each tool helps the analyst in overcoming the six pitfalls defined by Wattenberg et al. [14]), which was also one of the design goals of t-viSNE itself. Since that might not have been the case for GEP, this could be seen as a bias towards t-viSNE. | we present t-viSNE, a tool designed to support the interactive exploration of t-SNE projections (an extension to our previous poster abstract [17]). In contrast to other, more general approaches, t-viSNE was designed with the specific problems related to the investigation of t-SNE projections in mind, bringing to light some of the hidden internal workings of the algorithm which, when visualized, may provide important insights about the high-dimensional data set under analysis.
Our proposed solution is composed of a set of coordinated views that work together in order to fulfill four main goals: (G1) facilitate the choice of hyper-parameters through visual exploration and the use of quality metrics; (G2) provide a quick overview of the accuracy of the projection, to support the decision of either moving forward with the analysis or repeating the process of hyper-parameter exploration; (G3) provide the means to investigate quality further, differentiating between the trustworthiness of different regions of the projection; and (G4) allow the interpretation of different visible patterns of the projection in terms of the original data set’s dimensions. | In this paper, we introduced t-viSNE, an interactive tool for the visual investigation of t-SNE projections. By partly opening the black box of the t-SNE algorithm, we managed to give power to users allowing them to test the quality of the projections and understand the rationale behind the choices of the algorithm when forming clusters. Additionally, we brought into light the usually lost information from the inner parts of the algorithm such as densities of points and highlighted areas which are not well-optimized according to t-SNE.
To confirm the effectiveness of t-viSNE, we presented a hypothetical usage scenario and a use case with real-world data sets. We also evaluated our approach with a user study by comparing it with Google’s Embedding Projector (GEP): the results show that, in general, the participants could manage to reach the intended analysis tasks even with limited training, and their feedback indicates that t-viSNE reached a better level of support for the given tasks than GEP. However, both tools were similar with respect to completion time. | D |
Nature inspired optimization algorithms or simply variations of metaheuristics? - 2021 [15]: This overview focuses on the study of the frequency of new proposals that are no more than variations of old ones. The authors critique a large set of algorithms based on three criteria: (1) whether there is a physical analogy that follows the metaheuristic, (2) whether most algorithms are duplicates or similarly inspired, and (3) whether the authors propose different techniques based on the same idea. They then specify their criteria for introducing a new metaheuristic. |
Initialization of metaheuristics: comprehensive review, critical analysis, and research directions - 2023 [35]: This review addresses a gap in the literature by developing a taxonomy of initialization methods for metaheuristics. This classification is based on the initialization of metaheuristics according to random techniques, learning methods (supervised learning, Markov models, opposition- and diversification-based learning), and other generic methods based on sampling, clustering, and cooperation. The review also examines the initialization of metaheuristics with local search approaches, offers guidance on designing a diverse and informative sequence of initial solutions, and provides insights that will help research in constrained and discrete optimization problems. |
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable neighborhood search), and population-based heuristics (memetic algorithms, biased random-key genetic algorithms, scatter search, and path relinking). Each category presents its core characteristics and the description of the mentioned algorithms. This review presents metaheuristic frameworks to guide the design of heuristic optimization algorithms during the last 50 years. It discusses the role of the journal in which it is published in introducing solid heuristic papers. This work also recalls the maturity of the field, which leads to solving very complex problems, with a growing number of researchers applying them, as shown in the numerous conferences and related events. Also, they criticize the fragmentation as each group of research usually applies the same methods regardless of the type of problem being solved, the lack of theoretical foundations, the limited analytical understanding of novel proposals, the problem-specific tuning of metaheuristics, the lack of standardized benchmarking protocols and the absence of general guidelines. Several research directions are also annotated for researchers to be applied in the future. |
In the last update of this report, which is herein released 4 years after its original version, we note that there has been an evolution within the nature and bio-inspired optimization field. There is an excessive use of the biological approach as opposed to the real problem-solving approach to tackle real and complex optimization goals, as those discussed in Section 8.1. This issue needs to be addressed in the future by following guidelines that will allow for the definition of metaheuristics in a way that is appropriate to current challenges. This is important for the constructive design and development of proposals in response to emerging problems. For this reason, the potential impact the emerging problems and GPAIS, population-based metaheuristics as nature and bio-inspired optimization algorithms are poised to shape the future of AI, contributing to the design of continuously emerging AI systems, and serving as an inspiration for the new era of innovation and progress in AI. |
An exhaustive review of the metaheuristic algorithms for search and optimization: taxonomy, applications, and open challenges - 2023 [34]: This taxonomy provides a large classification of metaheuristics based on the number of control parameters of the algorithm. In this work, the authors question the novelty of new proposals and discuss the fact that calling an algorithm new is often based on relatively minor modifications to existing methods. They highlight the limitations of metaheuristics, open challenges, and potential future research directions in the field. | D |
}).italic_Z = italic_φ start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( over^ start_ARG italic_A end_ARG italic_φ start_POSTSUBSCRIPT italic_m - 1 end_POSTSUBSCRIPT ( ⋯ italic_φ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( over^ start_ARG italic_A end_ARG italic_X italic_W start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) ⋯ ) italic_W start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ) .
| To study the impact of different parts of the loss in Eq. (12), the performance with different λ𝜆\lambdaitalic_λ is reported in Figure 4.
From it, we find that the second term (corresponding to problem (7)) plays an important role especially on UMIST. If λ𝜆\lambdaitalic_λ is set as a large value, we may get the trivial embedding according to the constructed graph. AdaGAE will obtain good results when λ𝜆\lambdaitalic_λ is not too large. |
Figure 1: Framework of AdaGAE. k0subscript𝑘0k_{0}italic_k start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update the graph from the learned embedding with a larger sparsity, k𝑘kitalic_k. With the new graph, we re-train the GAE. These steps are repeated until the convergence. |
Figure 2: Visualization of the learning process of AdaGAE on USPS. Figure 2(b)-2(i) show the embedding learned by AdaGAE at the i𝑖iitalic_i-th epoch, while the raw features and the final results are shown in Figure 2(a) and 2(j), respectively. An epoch corresponds to an update of the graph. | To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the i𝑖iitalic_i-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, T𝑇Titalic_T, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes more cohesive with the update.
| C |
We also want to understand the types of networks that we could test via domains-wide scans. To derive the business types we use the PeeringDB. We classify the ASes according to the following business types: content, enterprise, Network Service Provider (NSP), Cable/DSL/ISP, non-profit, educational/research, route server at Internet Exchange Point (IXP)111A route server directs traffic among Border Gateway Protocol (BGP) routers. We plot the networks that do not enforce ingress filtering according to business types in Figure 12. According to our study enterprise and non-profit networks enforce ingress filtering more than other networks. In contrast, NSPs contain the most networks that do not enforce ingress filtering.
|
SMap (The Spoofing Mapper). In this work we present the first Internet-wide scanner for networks that filter spoofed inbound packets, we call the Spoofing Mapper (SMap). We apply SMap for scanning ingress-filtering in more than 90% of the Autonomous Systems (ASes) in the Internet. The measurements with SMap show that more than 80% of the tested ASes do not enforce ingress filtering (i.e., 72.4% of all the ASes in the routing system), in contrast to 2.4% identified by the latest measurement of the Spoofer Project (Luckie et al., 2019). The reason for this significant difference is the limitation of the previous studies of ingress filtering to a small set of networks. |
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of the ASes in the Internet, see Figure 1. Furthermore, there is a correlation between fraction of scanned domains and ASes. Essentially the more domains are scanned, the more ASes are covered, and more spoofable ASes are discovered; see Figure 7. This result is of independent interest as it implies that one can avoid scanning the IPv4 and instead opt for domains-scan, obtaining a good enough approximation. This not only reduces the volume of traffic needed to carry out studies but also makes the study much more efficient. | There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger the network the more services it hosts. This means that we have more possibilities to test if spoofing is possible: for instance, we can identify a higher fraction of servers with a globally incremental IPID counters, which are not “load balanced”. In Figure 14 we plot the statistics of the tested networks according to their size and type. The results show a correlation between the size of the network and its type. For instance, most NSP networks are large, with CIDR/6. This is aligned with our finding that among NSP networks there was the highest number of spoofable networks.
| Identifying servers with global IPID counters. We send packets from two hosts (with different IP addresses) to a server on a tested network. We implemented probing over TCP SYN, ping and using requests/responses to Name servers and we apply the suitable test depending on the server that we identify on the tested network. If the responses contain globally incremental IPID values - we use the service for ingress filtering measurement with IPID technique. We located globally incremental IPID in 63.27%percent63.2763.27\%63.27 % of the measured networks. There are certainly more hosts on networks that support globally incremental IPID values, yet our goal was to validate our measurement techniques while keeping the measurement traffic low - hence we avoided scanning the networks for additional hosts and only checked for Web, Email or Name servers with globally incremental IPID counters via queries to the tested domain.
| C |
While context did introduce more parameters to the model (7,57575757{,}5757 , 575 parameters without context versus 14,3151431514{,}31514 , 315 including context), the model is still very small compared to most neural network models, and is trainable in a few hours on a CPU. When units were added to the “skill” layer of the feedforward NN model until the total number of parameters reached 14,4291442914{,}42914 , 429, the larger model was not significantly better (p≥0.05𝑝0.05p\geq 0.05italic_p ≥ 0.05, one-sided t-test blocked by batch). This reinforces the idea that the benefit may be attributed to context, and not to the size of the network. | The estimation of context by learned temporal patterns should be most effective when the environment results in recurring or cyclical patterns, such as in cyclical variations of temperature and humidity and regular patterns of human behavior generating interferents. In such cases, the recurrent pathway can identify useful patterns analagously to how cortical regions help the olfactory bulb filter out previously seen background information [21]. A context-based approach will be applied to longer-timescale data and to environments with cyclical patterns.
| This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The context model has two parts: (1) a recurrent context layer, which encodes classification-relevant properties of previously seen data, and (2) a feedforward layer, which integrates the context with the current odor stimulus to generate an odor-class prediction. The results indicate improvement from two sources: The use of neural networks in place of SVMs, and the use of context, particularly in cases where a substantial number of context sequences are available for training. Thus, emulation of adaptation in natural systems leads to an approach that can make a difference in real-world applications.
|
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design introduces variation in training inputs, which makes it harder to learn consistent context patterns. For this task, semisupervised learning techniques, such as self-labeled samples, may help. If the context layer can process unlabeled data, then it is no longer necessary to include every class in every batch. The full six-gas sensor drift dataset can be used, as well as other unbalanced and therefore realistic datasets. |
One prominent feature of the mammalian olfactory system is feedback connections to the olfactory bulb from higher-level processing regions. Activity in the olfactory bulb is heavily influenced by behavioral and value-based information [19], and in fact, the bulb receives more neural projections from higher-level regions than from the nose [20]. In computational modeling, this principle has been taken into account by the piriform cortical region that recognizes familiar background odors through associative memory [21]. It projects this information to the olfactory bulb to improve odor recognition when there are background odors. Following this same principle, the neural network classifier in this paper integrates context that is outside the immediate input signal. | A |
For the second change, we need to take another look at how we place the separators tisubscript𝑡𝑖t_{i}italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT.
We previously placed these separators in every second nonempty drum σi:=[iδ,(i+1)δ]×Balld−1(δ/2)assignsubscript𝜎𝑖𝑖𝛿𝑖1𝛿superscriptBall𝑑1𝛿2\sigma_{i}:=[i\delta,(i+1)\delta]\times\mathrm{Ball}^{d-1}(\delta/2)italic_σ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT := [ italic_i italic_δ , ( italic_i + 1 ) italic_δ ] × roman_Ball start_POSTSUPERSCRIPT italic_d - 1 end_POSTSUPERSCRIPT ( italic_δ / 2 ) based on the points in σi−1∪σi∪σi+1subscript𝜎𝑖1subscript𝜎𝑖subscript𝜎𝑖1\sigma_{i-1}\cup\sigma_{i}\cup\sigma_{i+1}italic_σ start_POSTSUBSCRIPT italic_i - 1 end_POSTSUBSCRIPT ∪ italic_σ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∪ italic_σ start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT. | We generalize the case of integer x𝑥xitalic_x-coordinates to the case where the drum [x,x+1]×Balld−1(δ/2)𝑥𝑥1superscriptBall𝑑1𝛿2[x,x+1]\times\mathrm{Ball}^{d-1}(\delta/2)[ italic_x , italic_x + 1 ] × roman_Ball start_POSTSUPERSCRIPT italic_d - 1 end_POSTSUPERSCRIPT ( italic_δ / 2 ) contains O(1)𝑂1O(1)italic_O ( 1 ) points for all x∈ℝ𝑥ℝx\in\mathbb{R}italic_x ∈ blackboard_R.
Furthermore, we investigate how the complexity of Euclidean TSP grows with δ𝛿\deltaitalic_δ. | However, in order for our algorithm to meet the requirements of Lemma 5.7, we would like to avoid having a point enter a drum after the x𝑥xitalic_x-coordinates are multiplied by some factor λ>1𝜆1\lambda>1italic_λ > 1.
Furthermore, since the proof of Lemma 4.3 requires every drum to be at least δ𝛿\deltaitalic_δ wide, we cannot simply scale the drums as well. | It would be interesting to see whether a direct proof can be given for this fundamental result.
We note that the proof of Theorem 2.1 can easily be adapted to point sets of which the x𝑥xitalic_x-coordinates of the points need not be integer, as long as the difference between x𝑥xitalic_x-coordinates of any two consecutive points is at least 1. | Finally, we will show that the requirements for Lemma 5.7 hold, where we take 𝒜𝒜\mathcal{A}caligraphic_A to be the algorithm described above.
The only nontrivial requirement is that T𝒜(Pλ)⩽T𝒜(P)subscript𝑇𝒜subscript𝑃𝜆subscript𝑇𝒜𝑃T_{\mathcal{A}}(P_{\lambda})\leqslant T_{\mathcal{A}}(P)italic_T start_POSTSUBSCRIPT caligraphic_A end_POSTSUBSCRIPT ( italic_P start_POSTSUBSCRIPT italic_λ end_POSTSUBSCRIPT ) ⩽ italic_T start_POSTSUBSCRIPT caligraphic_A end_POSTSUBSCRIPT ( italic_P ) for all point sets P𝑃Pitalic_P and x𝑥xitalic_x-axis scaling factors λ>1𝜆1\lambda>1italic_λ > 1. | B |
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, loose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]). | from one to the other, then their free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is an automaton semigroup (8). This is again a strict generalization of [19, Theorem 3.0.1] (even if we only consider complete automata).
Third, we show this result in the more general setting of self-similar semigroups111Note that the constructions from [2, Theorem 2], [3, Theorem 4] and [19] mentioned above do not use that the generating automata for S𝑆Sitalic_S and for T𝑇Titalic_T are finite. Therefore, these constructions also work for self-similar semigroups, although this is not explicitly stated there. (Theorem 6) but observe that the constructed generating automaton for S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is finite (and/or complete) if this was the case for the original two automata generating S𝑆Sitalic_S and T𝑇Titalic_T. The existence of a homomorphism from S𝑆Sitalic_S to T𝑇Titalic_T (or vice-versa) is a very lax requirement and is satisfied by large classes of semigroups. For example, it suffices to have an idempotent (10) or a length function (11) in (at least) one of the two semigroups. By induction, we can even extend the result to arbitrary free products of (finitely many) semigroups where at least one contains an idempotent (12). The construction itself yields further results. As an example, we modify it to show that a new free generator can be adjoined to any self-similar semigroup (or automaton semigroup) without losing the property of self-similarity (or being an automaton semigroup; Theorem 14). This is noteworthy because – as mentioned above – the free semigroup of rank one is not an automaton semigroup (not even if we allow partial automata, see [8, Theorem 19] and [20, Theorem 1.2.1.4]). |
There are quite a few results on free (and related) products of self-similar or automaton groups (again see [15] for an overview) but many of them present the product as a subgroup of an automaton/self-similar group and, thus, loose the self-similarity property. An exception here is a line of research based on the Bellaterra automaton which resulted in a construction to generate the free product of an arbitrary number of copies of the group of order two as an automaton group [16] (see also [17]). | While our main result significantly relaxes the hypothesis for showing that the free product of self-similar semigroups (or automaton semigroups) is self-similar (an automaton semigroup), it does not settle the underlying question whether these semigroup classes are closed under free product. It is possible that there is a different construction for the free product S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T of two self-similar or automaton semigroup without the requirement of a homomorphism from one to the other and it is also possible that there is a pair of self-similar (or automaton) semigroups such that S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T is not a self-similar (or an automaton semigroup). In this case, however, no homomorphism S→T→𝑆𝑇S\to Titalic_S → italic_T or T→S→𝑇𝑆T\to Sitalic_T → italic_S can exist. Thus, to make progress in either direction (towards a better construction or towards a counter-example), we need to look at pairs S,T𝑆𝑇S,Titalic_S , italic_T of self-similar (or even automaton) semigroups without a homomorphism from one to the other. However, it turns out that finding such a pair is not easy. In particular, neither S𝑆Sitalic_S nor T𝑇Titalic_T may contain an idempotent. Thus, we have to consider idempotent-free semigroups here. We will show, however, that we cannot find a pair of such semigroups in the class of finitely generated simple semigroups. More precisely, using results by Jones on idempotent-free semigroups [11], we show that finitely generated simple (or 00-simple) idempotent-free semigroups are not residually finite (Theorem 21) and, thus, not self-similar (and, in particular, not automaton semigroups; 22). We then conclude the paper with an example222The authors would like to thank Emanuele Rodaro for his help in finding this example. of a finitely generated residually finite semigroup (23) which has no homomorphism to its opposite semigroup (25). While this comes close to the sought pair S,T𝑆𝑇S,Titalic_S , italic_T, it is not clear whether the given semigroup is self-similar (26).
| However, there do not seem to be constructions for presenting arbitrary free products of self-similar groups in a self-similar way. For semigroups, on the other hand, such results do exist. In fact, the free product of two automaton semigroups S𝑆Sitalic_S and T𝑇Titalic_T is always at least
very close to being an automaton semigroup: adjoining an identity to S⋆T⋆𝑆𝑇S\star Titalic_S ⋆ italic_T | D |
Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations, rivals state-of-the-art accuracy. Future visual grounding methods should be tested with a more comprehensive experimental setup and datasets for proper evaluation.
| Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy.
| This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor. We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting valuable edits and corrections for the paper.
| It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons.
| Since Wu and Mooney (2019) reported that human-based textual explanations Huk Park et al. (2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. SCR is trained in two phases. For the first phase, which strengthens the influential objects, we use a learning rate of 5×10−55superscript1055\times 10^{-5}5 × 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT, loss weight of 3333 and train the model to a maximum of 12 epochs. Then, following Wu and Mooney (2019), for the second phase, we use the best performing model from the first phase to train the second phase, which criticizes incorrect dominant answers. For the second phase, we use a learning rate of 10−4superscript10410^{-4}10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT and weight of 1000100010001000, which is applied alongside the loss term used in the first phase. The specified hyperparameters worked better for us than the values provided in the original paper.
| B |
For each topic, we identified a corresponding entry from the OPP-115 annotation scheme (Wilson et al., 2016), which was created by legal experts to label the contents of privacy policies. While Wilson et al. (2016) followed a bottom-up approach and identified different categories from analysis of data practices in privacy policies, we followed a top-down approach and applied topic modelling to the corpus in order to extract common themes for paragraphs. The categories identified in the OPP-115 Corpus can be found in Table 2.
|
Topic Modelling. Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006). We used topic modelling to explore the distribution of themes of text in our corpus. Topic modelling using a large corpus such as PrivaSeer helps investigate the themes present in privacy policies at web scale and also enables the comparison of themes that occur in the rapidly evolving online privacy landscape. We used Latent Dirichlet Allocation (LDA), as our approach to topic modelling (Blei et al., 2003). Since LDA works well when each input document deals with a single topic, we divided each privacy policy into its constituent paragraphs (Sarne et al., 2019), tokenized the paragraphs using a regex character matching tokenizer and lemmatized the individual words using NLTK’s WordNet lemmatizer. We experimented with topics sizes of 7, 8, 9, 10, 11, 13 and 15. We manually evaluated the topic clusters by inspecting the words that most represented the topics. We noted that the cohesiveness of the topics decreased as the number of topics increased. We chose a topic size of 9, since larger topic sizes produced markedly less coherent topics. |
We found that two LDA topics contained vocabulary corresponding with the OPP-115 category First Party Collection/Use, one dealing with purpose and information type collected and the other dealing with collection method. Two LDA topics corresponded with the OPP-115 category Third Party Sharing and Collection, one detailing the action of collection, and one explaining its purpose and effects(advertising and analytics). One of the LDA topics exclusively comprised of vocabulary related to cookies which could be related to both first party or third party data collection techniques. The OPP-115 categories Privacy Contact Information, Data Security and Policy Change appeared as separate topics while a topic corresponding to the OPP-115 category International and Specific Audiences appeared to be primarily related to European audiences and GDPR. |
It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre. Figure 2 shows the percentage of privacy policies in the corpus that contain each topic. From the figure we see that information regarding the type and purpose of data collected by first and third party sources are the most common topics. About 77% of policies contain language regarding third parties. This is consistent with prior research on third party data collection (Libert, 2018). In contrast, language regarding advertising and analytics appears in only 38% of policies in the corpus. Topics corresponding to data security, policy change and contact information also occur in a majority of privacy policies. Language corresponding to the GDPR and European audiences appears in 55% of policies. A study of the distribution of privacy policy topics on the web is important since they inform us about real-world trends and the need for resource allocation to enforce of privacy regulations. |
For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016). The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices on 115 privacy policies annotated by legal experts. To the best of our knowledge, this is the most detailed and widely used dataset of annotated privacy policies in the research community. The OPP-115 Corpus contains paragraph-sized segments annotated according to one or more of the twelve coarse-grained categories of data practices. We fine-tuned PrivBERT on the OPP-115 Corpus to predict the coarse-grained categories of data practices. We divided the corpus in the ratio 3:1:1 for training, validation and testing respectively. Since each segment in the corpus could belong to more than one category and there are twelve categories in total, we treated the problem as a multi-class, multi-label classification problem. After manually tuning hyperparameters, we trained the model with a dropout of 0.15 and a learning rate of 2.5e-5. | B |
T5: Inspect the same view with alternative techniques and visualizations. To eventually avoid the appearance of cognitive biases, alternative interaction methods and visual representations of the same data from another perspective should be offered to the user (G5).
| As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, the instances are characterized as high-dimensional vectors where each dimension represents the prediction of one model. Thus, since there are currently 174 models in \raisebox{-.0pt} {\tiny\bfS6}⃝, each instance is a 174-dimensional vector, projected into 2D. Groups of points represent instances that were consistently predicted to be in the same class. In StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f), for example, the points in the two clusters in both extremes of the projection (left and right sides, unselected) are well-classified, since they were consistently determined to be in the same class by most models of \raisebox{-.0pt} {\tiny\bfS6}⃝. The instances that are in-between these clusters, however, do not have a well-defined profile, since different models classified them differently. After selecting these instances with the lasso tool, the two histograms below the projection in StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f) show a comparison of the performance of the available models in the selected points (gray, upside down) vs. all points (black). The x-axis represents the performance according to the user-weighted metrics (in bins of 5%), and the y-axis shows the number of models in each bin. Our goal here is to look for models in the current stack \raisebox{-.0pt} {\tiny\bfS6}⃝ that could improve the performance for the selected points. However, by looking at the histograms, it does not look like we can achieve it this time, since all models perform worse in the selected points than in all points.
| Figure 2(a.2) displays overlapping barcharts for depicting the per-class performances for each algorithm, i.e., two colors for the two classes in our example. The more saturated bar in the center of each class bar represents the altered performance when the parameters of algorithms are modified. Note that the view only supports three performance metrics: precision, recall, and f1-score.
The y-axes in both figures represent aggregated performance, while the different algorithms are arranged along the x-axis with different colors. |
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c.1). (c.2) illustrates in light blue the selected models and in gray the remaining ones. Also from (a.2), both RF and ExtraT performances seem to be equal. However in (d), after resetting class optimization, ExtraT models appear to perform better overall. In view (e), the boxplots were replaced by point clouds that represent the individual models of activated algorithms. The color encoding is the same as for the algorithms, but unselected models are greyed out. Finally, the radar chart in (f) displays a portion of the models’ space in black that will be used to create the initial stack against the entire exploration space in yellow. The chart axes are normalized from 0 to 100%. | Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for the stack in the next step. (c) presents the per-class performance of all the models vs. the active ones per algorithm.
| C |
By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and
(v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these | By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and
(v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these | cannot be adjacent to 2¯¯2\overline{2}over¯ start_ARG 2 end_ARG nor 3¯¯3\overline{3}over¯ start_ARG 3 end_ARG,
and so f′superscript𝑓′f^{\prime}italic_f start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is [013]delimited-[]013[013][ 013 ] or [010]delimited-[]010[010][ 010 ]. | Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of
(v,[010])𝑣delimited-[]010(v,[010])( italic_v , [ 010 ] ), (v,[323])𝑣delimited-[]323(v,[323])( italic_v , [ 323 ] ), and (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), we can confirm that | (E𝐂,(2¯,(u2,[013])))superscript𝐸𝐂¯2subscript𝑢2delimited-[]013(E^{\mathbf{C}},(\overline{2},(u_{2},[013])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( over¯ start_ARG 2 end_ARG , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 013 ] ) ) ),
(E𝐂,((u1,[112]),(u2,[010])))superscript𝐸𝐂subscript𝑢1delimited-[]112subscript𝑢2delimited-[]010(E^{\mathbf{C}},((u_{1},[112]),(u_{2},[010])))( italic_E start_POSTSUPERSCRIPT bold_C end_POSTSUPERSCRIPT , ( ( italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , [ 112 ] ) , ( italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , [ 010 ] ) ) ). | C |
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors. | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparative poorly in task-specific adaptation.
Although in the early training stage, the performance improves benefiting from the pre-trained general language model, if the language model becomes too “general”, it will lose the ability of adapting to specific tasks. It is noteworthy that the ”too general” problem is not the same as over-fitting, since the ”too general” model performs well before fine-tuning, which means it does not over-fit to the training data. | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works the best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization learned by MAML can be seen as a general language model of training tasks, when the training and testing tasks have different data distributions, how can the general language model training affect the model’s task-specific adaptation ability? |
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors. |
To answer RQ1, we compare the changing trend of the general language model and the task-specific adaptation ability during the training of MAML to find whether there is a trade-off problem. (Figure 1) We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the meta-testing set before fine-tuning, using the quality performance (accuracy for classification and BLEU for generation) to | B |
Activated Subarray with Limited DREs: As shown in Fig. 1, given a certain azimuth angle, there are limited DREs that can be activated. Due to the directivity, the DREs of the CCA subarray at different positions are anisotropic, and this phenomenon is different from the UPA. If an inappropriate subarray is activated, the beam angle may go beyond the radiation range of certain subarray elements, degrading the beam gain and SE. | After the discussion on the characteristics of CCA, in this subsection, we continue to explain the specialized codebook design for the DRE-covered CCA. Revisiting Theorem 1 and Theorem 3, the size and position of the activated CCA subarray are related to the azimuth angle; meanwhile, the beamwidth is determined by the size of the activated subarray according to Theorem 2. Therefore, the conventional codebook only consisting of different beamwidth and beam angles is not able to reveal the relationship among the beam angle, beamwidth and the corresponding supporting subarray for the DRE-covered CCA. In order to solve the beam tracking problem in (13), the subarray activation/partition and AWV selection needs to be jointly optimized at the same time. To this end, a new specialized hierarchical codebook 𝒱𝒱\mathcal{V}caligraphic_V should be designed to facilitate efficient beam tracking, wherein the codeword 𝒗𝒗\boldsymbol{v}bold_italic_v should contain both the angular-domain beam pattern information (αi,βi)subscript𝛼𝑖subscript𝛽𝑖(\alpha_{i},\beta_{i})( italic_α start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_β start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) and the corresponding subarray patten information 𝒮𝒮\mathcal{S}caligraphic_S.
| The r-UAV needs to select multiple appropriate AWVs 𝒗(ms,k,ns,k,ik,jk,𝒮k),k∈𝒦𝒗subscript𝑚𝑠𝑘subscript𝑛𝑠𝑘subscript𝑖𝑘subscript𝑗𝑘subscript𝒮𝑘𝑘𝒦\boldsymbol{v}(m_{s,k},n_{s,k},i_{k},j_{k},\mathcal{S}_{k}),k\in\mathcal{K}bold_italic_v ( italic_m start_POSTSUBSCRIPT italic_s , italic_k end_POSTSUBSCRIPT , italic_n start_POSTSUBSCRIPT italic_s , italic_k end_POSTSUBSCRIPT , italic_i start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT , italic_j start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT , caligraphic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) , italic_k ∈ caligraphic_K from our proposed codebook 𝒱𝒱\mathcal{V}caligraphic_V to solve the subarray partition and AWVs selection problem. If an element is contained in different subarrays, there is a conflict between the subarrays. To solve the problem in (43), the joint SPAS problem without considering the conflict is discussed first and the conflict avoidance will be discussed later. Given AOAs, the maximum size of the activated subarray should be selected and the quantization error between the AOAs and the beam angles in the codeword should be minimized to maximize the beam gain of the combining vector for the k𝑘kitalic_k-th t-UAV.
Similarly with (42), | Multiuser-resultant Receiver Subarray Partition: As shown in Fig. 3, the r-UAV needs to activate multiple subarrays to serve multiple t-UAVs at the same time. Assuming that an element can not be contained in different subarrays, then the problem of activated CCA subarray partition rises at the r-UAV side for the fast multi-UAV beam tracking. The dynamic CCA subarray partition can be considered as the dynamic antenna resource allocation for multiple t-UAVs, which has strong impact on the sum SE of the UAV mmWave network.
|
In the considered UAV mmWave network, the r-UAV needs to activate multiple subarrays and select multiple combining vectors to serve multiple t-UAVs at the same time. Hence, the beam gain of the combining vector maximization problem for r-UAV with our proposed codebook can be rewritten as | C |
Thus,
a¯|b¯conditional¯𝑎¯𝑏\bar{a}|\bar{b}over¯ start_ARG italic_a end_ARG | over¯ start_ARG italic_b end_ARG-regular digraphs with size M¯¯𝑀\bar{M}over¯ start_ARG italic_M end_ARG can be characterized as a¯|b¯conditional¯𝑎¯𝑏\bar{a}|\bar{b}over¯ start_ARG italic_a end_ARG | over¯ start_ARG italic_b end_ARG-biregular graphs with size M¯|M¯conditional¯𝑀¯𝑀\bar{M}|\bar{M}over¯ start_ARG italic_M end_ARG | over¯ start_ARG italic_M end_ARG | This will be bootstrapped to the multi-color case in later sections. Note that the 1111-color case with the completeness requirement is not very interesting, and also not useful for the general case: completeness states that every node on
the left must be connected, via the unique edge relation, to every node on the right – regardless of the matrix. We | We start in this section by giving proofs only for the 1111-color case, without the completeness requirement. While this case does not directly correspond to any formula used in the proof of Theorem 3.7 (since matrices (4) have 2 rows even when there are no binary predicates), this case gives the flavor of the arguments, and will
also be used as the base cases in inductive constructions for the case with arbitrary colors. | To conclude this section, we stress that although the 1111-color case contains many of the key ideas, the multi-color case requires a finer
analysis to deal with the “big enough” case, and also may benefit from a reduction that allows one to restrict | The case of fixed degree and multiple colors is done via an induction, using merging and then swapping to eliminate parallel edges.
The case of unfixed degree is handled using a case analysis depending on whether sizes are “big enough”, but the approach is different from | C |
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and
Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal. | Although Assumption 6.1 is strong, we are not aware of any weaker regularity condition in the literature, even in the linear setting (Melo et al., 2008; Zou et al., 2019; Chen et al., 2019b) and the NTK regime (Cai et al., 2019). Let the initial distribution ν0subscript𝜈0\nu_{0}italic_ν start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT be the standard Gaussian distribution N(0,ID)𝑁0subscript𝐼𝐷N(0,I_{D})italic_N ( 0 , italic_I start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT ).
In parallel to Theorem 4.3, we establish the following theorem, which characterizes the global optimality and convergence of Q-learning. Recall that we write 𝒳=𝒮×𝒜𝒳𝒮𝒜{\mathcal{X}}={\mathcal{S}}\times\mathcal{A}caligraphic_X = caligraphic_S × caligraphic_A and x=(s,a)∈𝒳𝑥𝑠𝑎𝒳x=(s,a)\in{\mathcal{X}}italic_x = ( italic_s , italic_a ) ∈ caligraphic_X. Also, νtsubscript𝜈𝑡\nu_{t}italic_ν start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT is the PDE solution in (6.3), while θ(m)(k)superscript𝜃𝑚𝑘\theta^{(m)}(k)italic_θ start_POSTSUPERSCRIPT ( italic_m ) end_POSTSUPERSCRIPT ( italic_k ) is the Q-learning dynamics in (6.2). | Assumption 4.1 can be ensured by normalizing all state-action pairs. Such an assumption is commonly used in the mean-field analysis of neural networks (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Araújo et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). We remark that our analysis straightforwardly generalizes to the setting where ‖x‖≤Cnorm𝑥𝐶\|x\|\leq C∥ italic_x ∥ ≤ italic_C for an absolute constant C>0𝐶0C>0italic_C > 0.
| Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). See also the previous analysis in the NTK regime (Daniely, 2017; Chizat and Bach, 2018a; Jacot et al., 2018; Li and Liang, 2018; Allen-Zhu et al., 2018a, b; Du et al., 2018a, b; Zou et al., 2018; Arora et al., 2019a, b; Lee et al., 2019; Cao and Gu, 2019; Chen et al., 2019a; Zou and Gu, 2019; Ji and Telgarsky, 2019; Bai and Lee, 2019). Specifically, the previous mean-field analysis casts SGD as the Wasserstein gradient flow of an energy functional, which corresponds to the objective function in supervised learning. In contrast, TD follows the stochastic semigradient of the MSPBE (Sutton and Barto, 2018), which is biased. As a result, there does not exist an energy functional for casting TD as its Wasserstein gradient flow. Instead, our analysis combines a generalized notion of one-point monotonicity (Harker and Pang, 1990) and the first variation formula in the Wasserstein space (Ambrosio et al., 2008), which is of independent interest.
| Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and
Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal. | C |
Regarding parameter efficiency for NMT, Wu et al. (2019a) present lightweight and dynamic convolutions. Ma et al. (2021) approximate softmax attention with two nested linear attention functions. These methods are orthogonal to our work and it should be possible to combine them with our approach. | We suggest that selectively aggregating different layer representations of the Transformer may improve the performance, and propose to use depth-wise LSTMs to connect stacked (sub-) layers of Transformers. We show how Transformer layer normalization and feed-forward sub-layers can be absorbed by depth-wise LSTMs, while connecting pure Transformer attention layers by depth-wise LSTMs (for Transformer encoder and decoder blocks), replacing residual connections.
| Directly replacing residual connections with LSTM units will introduce a large amount of additional parameters and computation. Given that the task of computing the LSTM hidden state is similar to the feed-forward sub-layer in the original Transformer layers, we propose to replace the feed-forward sub-layer with the newly introduced LSTM unit, which only introduces one LSTM unit per layer, and the parameters of the LSTM can be shared across layers.
|
We use depth-wise LSTM rather than a depth-wise multi-head attention network Dou et al. (2018) with which we can build the NMT model solely based on the attention mechanism for two reasons: 1) we have to compute the stacking of Transformer layers sequentially as in sequential token-by-token decoding, and compared to the use of depth-wise LSTM of O(n)𝑂𝑛O(n)italic_O ( italic_n ) complexity, depth-wise multi-head attention networks suffer from O(n2)𝑂superscript𝑛2O(n^{2})italic_O ( italic_n start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) complexity and they cannot be parallelized at the depth level. 2) the attention mechanism linearly combines representations with attention weights. Thus, it lacks the ability to provide the non-linearity compared to the LSTM, which we suggest is important. |
In this paper, we replace residual connections of the Transformer with depth-wise LSTMs, to selectively manage the representation aggregation of layers benefiting performance while ensuring convergence of the Transformer. Specifically, we show how to integrate the computation of multi-head attention networks and feed-forward networks with the depth-wise LSTM for the Transformer. | D |
\upsigma_{i}]\rrbracket_{X_{i}}caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) = roman_τ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ] ⟧ start_POSTSUBSCRIPT italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT.
By Lemma 5.9, the topological sum of these spaces Y≜∑i∈I(Xi,θi)≜𝑌subscript𝑖𝐼subscript𝑋𝑖subscriptθ𝑖Y\triangleq\sum_{i\in I}(X_{i},\uptheta_{i})italic_Y ≜ ∑ start_POSTSUBSCRIPT italic_i ∈ italic_I end_POSTSUBSCRIPT ( italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , roman_θ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) is a | lpps is indeed a pre-spectral space. Conversely, ⟨X,τ,𝒦∘(X)⟩𝑋τsuperscript𝒦𝑋\left\langle X,\uptau,\mathcal{K}^{\circ}\!\left(X\right)\right\rangle⟨ italic_X , roman_τ , caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( italic_X ) ⟩
is well-defined whenever (X,τ)𝑋τ(X,\uptau)( italic_X , roman_τ ) is a pre-spectral space; in | definition, this map is surjective. Notice that this map is actually
a logical map from ⟨Y,τY,𝒦∘(Y)⟩𝑌subscriptτ𝑌superscript𝒦𝑌\left\langle Y,\uptau_{Y},\mathcal{K}^{\circ}\!\left(Y\right)\right\rangle⟨ italic_Y , roman_τ start_POSTSUBSCRIPT italic_Y end_POSTSUBSCRIPT , caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( italic_Y ) ⟩ to | {U∣U∈⟨τY∩⟦𝖥𝖮[σ]⟧Y⟩}\left\{U\mid U\in\langle\uptau_{Y}\cap\llbracket\mathsf{FO}[\upsigma]%
\rrbracket_{Y}\rangle\right\}{ italic_U ∣ italic_U ∈ ⟨ roman_τ start_POSTSUBSCRIPT italic_Y end_POSTSUBSCRIPT ∩ ⟦ sansserif_FO [ roman_σ ] ⟧ start_POSTSUBSCRIPT italic_Y end_POSTSUBSCRIPT ⟩ } | pre-spectral space. Recall that ⟨Y,τY,𝒦∘(Y)⟩𝑌subscriptτ𝑌superscript𝒦𝑌\langle Y,\uptau_{Y},\mathcal{K}^{\circ}\!\left(Y\right)\rangle⟨ italic_Y , roman_τ start_POSTSUBSCRIPT italic_Y end_POSTSUBSCRIPT , caligraphic_K start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT ( italic_Y ) ⟩ is a lpps. We are going to exhibit
a surjective map f𝑓fitalic_f from Y𝑌Yitalic_Y to the logical sum X𝑋Xitalic_X of | D |
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. 1. Our key insight is that distortion rectification can be cast as a problem of learning an ordinal distortion from a distorted image. The ordinal distortion indicates the distortion levels of a series of pixels, which extend outward from the principal point. To predict the ordinal distortion, we design a local-global associated estimation network optimized with an ordinal distortion loss function. A distortion-aware perception layer is exploited to boost the feature extraction of different degrees of distortion.
| (1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even if the amount of training data is 20% of that used to train the learning model. Note that we only use 1/4 distorted image to predict the ordinal distortion. As we pointed out earlier, the proposed ordinal distortion is explicit to the image feature and is observable from a distorted image; thus it boosts the neural networks’ learning ability. On the other hand, the performance of the distortion parameter estimation drops as the amount of training data decreases. In contrast, our ordinal distortion estimation performs more consistently due to the homogeneity of the learning representation.
|
Figure 1: Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. We aim to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneous distortion parameters. In contrast, our proposed approach only requires a part of a distorted image (distortion element) and estimates the ordinal distortion. Due to its explicit description and homogeneity, we can obtain more accurate distortion estimation and achieve better corrected results. | Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes the insufficient distortion perception. To bridge the gap between image feature and calibration objective, we present a novel intermediate representation, i.e., ordinal distortion, which displays a learning-friendly attribute for learning models. For an intuitive and comprehensive analysis, we compare these two representations from the following three aspects.
|
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations, in which the compared methods can be classified into traditional methods [23][24] and learning methods [8][11][12]. Note that our approach only requires a patch of the input distorted image to estimate the ordinal distortion. | B |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy. | First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being B/8𝐵8B/8italic_B / 8. If B/8≥128𝐵8128B/8\geq 128italic_B / 8 ≥ 128, we will use the gradient accumulation [28]
with the batch size being 128. We train the model with 160160160160 epochs (i.e., pass through the dataset 160160160160 times). The cosine annealing learning rate [24] (without restarts) is adopted for the five methods. In the m𝑚mitalic_m-th epoch, the learning rate is ηm=η0∗0.5(1+cos(mπ/160))subscript𝜂𝑚subscript𝜂00.51𝑚𝜋160\eta_{m}=\eta_{0}*0.5(1+\cos(m\pi/160))italic_η start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_η start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ∗ 0.5 ( 1 + roman_cos ( italic_m italic_π / 160 ) ), m=0,1,……,159𝑚01……159m=0,1,...\ldots,159italic_m = 0 , 1 , … … , 159. | We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
The experiments are implemented based on the DeepCTR 888https://github.com/shenweichen/DeepCTR-Torch framework. | We use a pre-trained ViT 555https://huggingface.co/google/vit-base-patch16-224-in21k [4] model and fine-tune it on the CIFAR-10/CIFAR-100 datasets.
The experiments are implemented based on the Transformers 666https://github.com/huggingface/transformers framework. We fine-tune the model with 20 epochs. |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]. We train the model with 90 epochs. As recommended in [32], we use warm-up and polynomial learning rate strategy. | C |
When the algorithm terminates with Cs=∅subscript𝐶𝑠C_{s}=\emptysetitalic_C start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = ∅, Lemma 5.2 ensure the solution zfinalsuperscript𝑧finalz^{\text{final}}italic_z start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT is integral. By Lemma 5.5, any client j𝑗jitalic_j with d(j,S)>9Rj𝑑𝑗𝑆9subscript𝑅𝑗d(j,S)>9R_{j}italic_d ( italic_j , italic_S ) > 9 italic_R start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT must have j∈C0final𝑗subscriptsuperscript𝐶final0j\in C^{\text{final}}_{0}italic_j ∈ italic_C start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. Hence, ∑j:d(j,S)>9Rjvj≤∑j∈C0vjsubscript:𝑗𝑑𝑗𝑆9subscript𝑅𝑗subscript𝑣𝑗subscript𝑗subscript𝐶0subscript𝑣𝑗\sum_{j:d(j,S)>9R_{j}}v_{j}\leq\sum_{j\in C_{0}}v_{j}∑ start_POSTSUBSCRIPT italic_j : italic_d ( italic_j , italic_S ) > 9 italic_R start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ≤ ∑ start_POSTSUBSCRIPT italic_j ∈ italic_C start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT. For the facility costs, we have ∑i∈Swi=∑izifinalwisubscript𝑖𝑆subscript𝑤𝑖subscript𝑖superscriptsubscript𝑧𝑖finalsubscript𝑤𝑖\sum_{i\in S}w_{i}=\sum_{i}z_{i}^{\text{final}}w_{i}∑ start_POSTSUBSCRIPT italic_i ∈ italic_S end_POSTSUBSCRIPT italic_w start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = ∑ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT italic_w start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT. Finally, by Lemma 5.3, and noting that Csfinal=∅superscriptsubscript𝐶𝑠finalC_{s}^{\text{final}}=\emptysetitalic_C start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT = ∅, we have ∑izifinalwi+∑j∈C0vj≤Vsubscript𝑖superscriptsubscript𝑧𝑖finalsubscript𝑤𝑖subscript𝑗subscript𝐶0subscript𝑣𝑗𝑉\sum_{i}z_{i}^{\text{final}}w_{i}+\sum_{j\in C_{0}}v_{j}\leq V∑ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT final end_POSTSUPERSCRIPT italic_w start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT + ∑ start_POSTSUBSCRIPT italic_j ∈ italic_C start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ≤ italic_V. | FAs¯←{ijA|j∈HA and FI∩GπIj=∅}←subscriptsuperscript𝐹¯𝑠𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F^{\bar{s}}_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{%
\pi^{I}j}=\emptyset\}italic_F start_POSTSUPERSCRIPT over¯ start_ARG italic_s end_ARG end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ← { italic_i start_POSTSUPERSCRIPT italic_A end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT | italic_j ∈ italic_H start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and italic_F start_POSTSUBSCRIPT italic_I end_POSTSUBSCRIPT ∩ italic_G start_POSTSUBSCRIPT italic_π start_POSTSUPERSCRIPT italic_I end_POSTSUPERSCRIPT italic_j end_POSTSUBSCRIPT = ∅ } | Brian Brubach was supported in part by NSF awards CCF-1422569 and CCF-1749864, and by research awards from Adobe. Nathaniel Grammel and Leonidas Tsepenekas were supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. Aravind Srinivasan was supported in part by NSF awards CCF-1422569, CCF-1749864 and CCF-1918749, and by research awards from Adobe, Amazon, and Google.
| For instance, during the COVID-19 pandemic, testing and vaccination centers were deployed at different kinds of locations, and access was an important consideration [18, 20]; access can be quantified in terms of different objectives including distance, as in our work. Here,
ℱℱ\mathcal{F}caligraphic_F and 𝒞𝒞\mathcal{C}caligraphic_C correspond to such locations and the population affected by the outbreak, and needing services, respectively. |
do FA←{ijA|j∈HA and FI∩GπIj=∅}←subscript𝐹𝐴conditional-setsubscriptsuperscript𝑖𝐴𝑗𝑗subscript𝐻𝐴 and subscript𝐹𝐼subscript𝐺superscript𝜋𝐼𝑗F_{A}\leftarrow\{i^{A}_{j}~{}|~{}j\in H_{A}\text{ and }F_{I}\cap G_{\pi^{I}j}=\emptyset\}italic_F start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT ← { italic_i start_POSTSUPERSCRIPT italic_A end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT | italic_j ∈ italic_H start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and italic_F start_POSTSUBSCRIPT italic_I end_POSTSUBSCRIPT ∩ italic_G start_POSTSUBSCRIPT italic_π start_POSTSUPERSCRIPT italic_I end_POSTSUPERSCRIPT italic_j end_POSTSUBSCRIPT = ∅ } | B |
In real networked systems, the information exchange among nodes is often affected by communication noises, and the structure of the network often changes randomly due to packet dropouts, link/node failures and recreations, which are studied in [8]-[10].
| such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), et al. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost functions are used in many distributed optimization algorithms.
However, it is difficult to get accurate (sub)gradients in many practical applications. For example, in distributed statistical machine learning ([3]), the local loss functions are the mathematical expectations of random functions so that the local optimizers can only obtain the measurement of the (sub)gradients with random noises. The influence of (sub)gradient measurement noises has been considered for distributed optimization algorithms in [4]-[7]. |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent. The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded. The local optimizers can only obtain measurement information of the local subgradients with random noises. The additive and multiplicative communication noises co-exist in communication links. We consider the distributed stochastic subgradient optimization algorithm and prove that if the sequence of random digraphs is conditionally balanced and uniformly conditionally jointly connected, then the states of all local optimizers converge to the same global optimal solution almost surely. The main contributions of our paper are listed as follows. | Besides, the network graphs may change randomly with spatial and temporal dependency (i.e. Both the weights of different edges in the network graphs at the same time instant and the network graphs at different time instants may be mutually dependent.) rather than i.i.d. graph sequences as in [12]-[15],
and additive and multiplicative communication noises may co-exist in communication links ([21]). | However, a variety of random factors may co-exist in practical environment.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly and the communication links may be noisy. There are many excellent results on the distributed optimization with multiple uncertain factors ([11]-[15]). | D |
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics without violating the privacy. Inspired by local differential privacy, this paper uses the method of randomized response to perturb original QI values before release to prevent the disclosure of matching the combination of QI values. | Differential privacy [6, 38], which is proposed for query-response systems, prevents the adversary from inferring the presence or absence of any individual in the database by adding random noise (e.g., Laplace Mechanism [7] and Exponential Mechanism [24]) to aggregated results. However, differential privacy also faces the contradiction between privacy protection and data analysis [9]. For instance, a smaller ϵitalic-ϵ\epsilonitalic_ϵ for ϵitalic-ϵ\epsilonitalic_ϵ-differential privacy provides better protection but worse information utility.
|
In recent years, local differential privacy [12, 4] has attracted increasing attention because it is particularly useful in distributed environments where users submit their sensitive information to untrusted curator. Randomized response [10] is widely applied in local differential privacy to collect users’ statistics without violating the privacy. Inspired by local differential privacy, this paper uses the method of randomized response to perturb original QI values before release to prevent the disclosure of matching the combination of QI values. | The advantages of MuCo are summarized as follows. First, MuCo can maintain the distributions of original QI values as much as possible. For instance, the sum of each column in Figure 3 is shown by the blue polyline in Figure 2, and the blue polyline almost coincides with the red polyline representing the distribution in the original data. Second, the anonymization of MuCo is a “black box” process for recipients because the only difference between the original data and the anonymized data is that some original QI values are replaced with random values. Thus, the adversary cannot determine which QI values are altered as well as the ranges of variations, causing that the matching tuples are more likely to be wrong or even does not exist when the adversary uses more QI values to match, but the adversary obtains much more matching records if the size of the combination of QI values is not big enough. While for the recipient, the results of query statements are specific records rather than groups. Accordingly, the results are more accurate. The conducted extensive experiments also illustrate the effectiveness of the proposed method.
| Note that, the application scenarios of differential privacy and the models of k𝑘kitalic_k-anonymity family are different. Differential privacy adds random noise to the answers of the queries issued by recipients rather than publishing microdata. While the approaches of k𝑘kitalic_k-anonymity family sanitize the original microdata and publish the anonymized version of microdata. Therefore, differential privacy is inapplicable to the scenario we addressed in this paper.
| D |
Table 3: PointRend’s performance on testing set (trackB). “EnrichFeat” means enhance the feature representation of coarse mask head and point head by increasing the number of fully-connected layers or its hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvements, we guess that our PointRend baseline already achieves promising performance (77.38 mAP).
| HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with Res2NetR101 backbone yields 74.58 mAP on validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
| PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance mask. It produces smooth object boundaries with much finer details than previously two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared to HTC’s mask head, PointRend’s lightweight segmentation head alleviates both memory and computation costs dramatically, thus enables larger input image resolutions during training and testing, which further improves the segmentation quality.
To fully understand which components contribute to PointRend’s performance, we construct our own validation set by randomly selecting 3000 images from original training data to evaluate offline. We will show the step-by-step improvements adopted on PointRend. | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRend Kirillov et al. (2020). Most of these detectors focus on an overall performance on public datasets like COCO, which contains much smaller instances than 3D-FUTURE, while paying less attention to large objects segmentation. As illustrated in Figure 1, the size distribution of bounding boxes in 3D-FUTURE and COCO indicates that the former contains much larger objects while the latter is dominated by smaller instances. Thus, the prominent methods used in COCO, like MaskRCNN He et al. (2017) and HTC, may generate blurry contours for large instances. Their mask heads output segmentation from a limited small feature size (e.g., 14×14141414\times 1414 × 14), which is dramatically insufficient to represent large objects. All of these motivate us to segment large instances in a fine-grained and high-quality manner.
SOLOv2 builds an efficient single-shot framework with strong performance and dynamically generates predictions with much larger mask size (e.g., 1/4 scale of input size) than HTC. PointRend iteratively renders the output mask over adaptively sampled uncertain points in a coarse-to-fine fashion, which is naturally suitable for generating smooth and fine-grained instance boundaries. By conducting extensive experiments on HTC, SOLOv2 and PointRend, PointRend succeeds in producing finer mask boundaries and significantly outperforms other methods by a large margin. Our step-by-step modifications adopted on PointRend finally achieves state-of-the-art performance on 3D-FUTURE dataset, which yields 79.2 mAP and 77.38 mAP on validation and test set respectively. The final submission is an ensemble of 5 PointRend models with slightly different settings, reaching the 1st place in this competition. | Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from default 28 to 70 during inference, we gain another 1.1 mAP with free training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from original 14 to 26 for coarse prediction head, and from 14 to 24 for fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOV2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and less memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both coarse prediction head and fine-grained point head, which finally yields 74.3 mAP on our splitted validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020).
In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission. | B |
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT which have modulus 1111 fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved | We denote by εi:{−1,1}n→{−1,1}:subscript𝜀𝑖→superscript11𝑛11\varepsilon_{i}:\{-1,1\}^{n}\to\{-1,1\}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 } the projection onto the i𝑖iitalic_i-s coordinate: εi(δ1,…,δn)=δisubscript𝜀𝑖subscript𝛿1…subscript𝛿𝑛subscript𝛿𝑖\varepsilon_{i}(\delta_{1},\dots,\delta_{n})=\delta_{i}italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_δ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_δ start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ) = italic_δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT. For a subset A𝐴Aitalic_A of [n]:={1,…,n}assigndelimited-[]𝑛1…𝑛[n]:=\{1,\dots,n\}[ italic_n ] := { 1 , … , italic_n } we denote WA=∏i∈Aεisubscript𝑊𝐴subscriptproduct𝑖𝐴subscript𝜀𝑖W_{A}=\prod_{i\in A}\varepsilon_{i}italic_W start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT = ∏ start_POSTSUBSCRIPT italic_i ∈ italic_A end_POSTSUBSCRIPT italic_ε start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, WA:{−1,1}n→{−1,1}:subscript𝑊𝐴→superscript11𝑛11W_{A}:\{-1,1\}^{n}\to\{-1,1\}italic_W start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → { - 1 , 1 }. The WAsubscript𝑊𝐴W_{A}italic_W start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT-s are the characters of the Cantor group {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT (with coordintewise multiplication) and form an orthonormal basis in L2subscript𝐿2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT of the Cantor group equipped with the normalized counting measure. In this note we shall be concerned with functions from {−1,1}nsuperscript11𝑛\{-1,1\}^{n}{ - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT into the complex plane, ℂℂ\mathbb{C}blackboard_C. These can also be considered as a couple of real functions. Each such function f:{−1,1}n→ℂ:𝑓→superscript11𝑛ℂf:\{-1,1\}^{n}\to\mathbb{C}italic_f : { - 1 , 1 } start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → blackboard_C has a unique expansion
|
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known. | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
| A |
Corollary 1 shows that if local variations are known, we can achieve near-optimal dependency on the total variation $B_{\bm{\theta}}, B_{\bm{\mu}}$ and time horizon $T$ compared to the lower bound provided in Theorem 1. However, the dependency on $d$ and $H$ is worse. The dependency on $d$ is unlikely to improve unless there is an improvement to LSVI-UCB.
| Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al., 2016), gaming-AI (Silver et al., 2018), and inventory control (Agrawal & Jia, 2019), among others. Due to the large dimension of sequential decision-making problems that are of growing interest, classical RL algorithms designed for finite state space such as tabular Q-learning (Watkins & Dayan, 1992) no longer yield satisfactory performance. Recent advances in RL rely on function approximators such as deep neural nets to overcome the curse of dimensionality, i.e., the value function is approximated by a function which is able to predict the value function for unseen state-action pairs given a few training samples. This function approximation technique has achieved remarkable success in various large-scale decision-making problems such as playing video games (Mnih et al., 2015), the game of Go (Silver et al., 2017), and robot control (Akkaya et al., 2019). Motivated by the empirical success of RL algorithms with function approximation, there is growing interest in developing RL algorithms with function approximation that are statistically efficient (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Wang et al., 2020; Wei et al., 2021; Neu & Olkhovskaya, 2021; Jiang et al., 2017; Wang et al., 2020; Jin et al., 2021; Du et al., 2021). The focus of this line of work is to develop statistically efficient algorithms with function approximation for RL in terms of either regret or sample complexity. Such efficiency is especially crucial in data-sparse applications such as medical trials (Zhao et al., 2009).
| The definition of total variation B𝐵Bitalic_B is related to the misspecification error defined by Jin et al. (2020). One can apply the Cauchy-Schwarz inequality to show that our total variation bound implies that misspecification in Eq. (4) of Jin et al. is also bounded (but not vice versa). However, the regret analysis in the misspecified linear MDP of Jin et al. (2020) is restricted to static regret, so we cannot directly borrow their analysis for the misspecified setting (Jin et al., 2020) to handle our dynamic regret (as defined in Eq. (1)).
| The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and allowed to change in l𝑙litalic_l times for the reward and transition functions. They show that UCRL2 with restart achieves O~(l1/3T2/3)~𝑂superscript𝑙13superscript𝑇23\tilde{O}(l^{1/3}T^{2/3})over~ start_ARG italic_O end_ARG ( italic_l start_POSTSUPERSCRIPT 1 / 3 end_POSTSUPERSCRIPT italic_T start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT ) dynamic regret, where T𝑇Titalic_T is the time horizon. Later works (Ortner et al., 2020; Cheung et al., 2020; Fei et al., 2020) generalize the nonstationary setting to allow reward and transition functions vary for any number of time steps, as long as the total variation is bounded. Specifically, the work of (Ortner et al., 2020) proves that UCRL with restart achieves O~((Br+Bp)1/3T2/3)~𝑂superscriptsubscript𝐵𝑟subscript𝐵𝑝13superscript𝑇23\tilde{O}((B_{r}+B_{p})^{1/3}T^{2/3})over~ start_ARG italic_O end_ARG ( ( italic_B start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT + italic_B start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT 1 / 3 end_POSTSUPERSCRIPT italic_T start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT ) dynamic regret (when the variation in each epoch is known), where Brsubscript𝐵𝑟B_{r}italic_B start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT and Bpsubscript𝐵𝑝B_{p}italic_B start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT denote the total variation of reward and transition functions over all time steps. Cheung et al. (2020) proposes an algorithm based on UCRL2 by combining sliding windows and a confidence widening technique. Their algorithm has slightly worse dynamic regret bound O~((Br+Bp)1/4T3/4)~𝑂superscriptsubscript𝐵𝑟subscript𝐵𝑝14superscript𝑇34\tilde{O}((B_{r}+B_{p})^{1/4}T^{3/4})over~ start_ARG italic_O end_ARG ( ( italic_B start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT + italic_B start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT 1 / 4 end_POSTSUPERSCRIPT italic_T start_POSTSUPERSCRIPT 3 / 4 end_POSTSUPERSCRIPT ) without knowing the local variations. Further, Fei et al. (2020) develops an algorithm which directly optimizes the policy and enjoys near-optimal regret in the low-variation regime. A different model of nonstationary MDP is proposed by Lykouris et al. (2021), which smoothly interpolates between stationary and adversarial environments, by assuming that most episodes are stationary except for a small number of adversarial episodes. Note that Lykouris et al. (2021) considers linear function approximation, but their nonstationarity assumption is different from ours. In this paper, we assume the variation budget for reward and transition function is bounded, which is similar to the settings in Ortner et al. (2020); Cheung et al. (2020); Mao et al. (2021). 
Concurrently to our work, Touati & Vincent (2020) propose an algorithm combining weighted least-squares value iteration and the optimistic principle, achieving the same O~(B1/4d5/4H5/4T3/4)~𝑂superscript𝐵14superscript𝑑54superscript𝐻54superscript𝑇34\tilde{O}(B^{1/4}d^{5/4}H^{5/4}T^{3/4})over~ start_ARG italic_O end_ARG ( italic_B start_POSTSUPERSCRIPT 1 / 4 end_POSTSUPERSCRIPT italic_d start_POSTSUPERSCRIPT 5 / 4 end_POSTSUPERSCRIPT italic_H start_POSTSUPERSCRIPT 5 / 4 end_POSTSUPERSCRIPT italic_T start_POSTSUPERSCRIPT 3 / 4 end_POSTSUPERSCRIPT ) regret as we do with knowledge of the total variation B𝐵Bitalic_B. They do not have a dynamic regret bound when the knowledge of local variations is available. Their proposed algorithm uses exponential weights to smoothly forget data that are far in the past. By contrast, our algorithm periodically restarts the LSVI-UCB algorithm from scratch to handle the non-stationarity and is much more computationally efficient. Another concurrent work by Wei & Luo (2021) follows a substantially different approach to achieve the optimal T2/3superscript𝑇23T^{2/3}italic_T start_POSTSUPERSCRIPT 2 / 3 end_POSTSUPERSCRIPT regret. The key idea of their algorithm is to run multiple base algorithms for stationary instances with different duration simultaneously, under a carefully designed random schedule. Compared with them, our algorithm has a slightly worse rate, but a much better computational complexity, since we only require to maintain one instance of the base algorithm. Both of these two concurrent works do not have empirical results, and we are also the first one to conduct numerical experiments on online exploration for non-stationary MDPs (Section 6). Other related and concurrent works investigate online exploration in different classes of non-stationary MDPs, including linear kernal MDP (Zhong et al., 2021), constrained tabular MDP (Ding & Lavaei, 2022), and stochastic shorted path problem (Chen & Luo, 2022).
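As a rough illustration of the restart mechanism mentioned above — and only of the scheduling, since the learner's internals are abstracted behind a hypothetical `make_learner`/`env` interface:

```python
import numpy as np

def restarted_learner(env, horizon_T, restart_period_W, make_learner):
    """Run a stationary learner with periodic restarts, the simple device
    described above for coping with nonstationarity.  restart_period_W
    would be tuned from the variation budget; all interfaces are assumed."""
    learner = make_learner()
    rewards = []
    for t in range(horizon_T):
        if t > 0 and t % restart_period_W == 0:
            learner = make_learner()          # forget all past data
        state = env.observe()
        action = learner.act(state)
        next_state, reward = env.step(action)
        learner.update((state, action, reward, next_state))
        rewards.append(reward)
    return np.sum(rewards)
```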
| Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhovskaya, 2021; Huang et al., 2021; Modi et al., 2021; Jiang et al., 2017; Agarwal et al., 2020; Dong et al., 2020; Jin et al., 2021; Du et al., 2021; Foster et al., 2021a; Chen et al., 2022). Recent work also studies the instance-dependent sample complexity bound for RL with function approximation, which adapts to the complexity of the specific MDP instance (Foster et al., 2021b; Dong & Ma, 2022). All of these works assume that the learner is interacting with a stationary environment. In sharp contrast, this paper considers learning in a nonstationary environment. As we will show later, if we do not properly adapt to the nonstationarity, linear regret is incurred.
| B |
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on instant messaging apps compared to social media, and have reported the least trust in them. They have also rated the sharing of fake news to be a greater problem than its creation. These suggest that, in Singapore, communication with personal contacts such as through the forwarding of messages, rather than with the public such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one’s social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013), and stronger preferences towards group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
|
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms, and post corrections and warnings when they encounter fake news. That respondents show strong trust and reliance on government communication platforms, such as official websites and hotlines, signifies the relatively strong faith that Singapore residents have in the Singapore Government to provide truthful and helpful information and to debunk fake news. This may be attributed to the successful ongoing efforts in making transparent government decisions and the readiness of the government in addressing public concerns through online forums and dialogues (REACH, [n.d.]). There is opportunity here for the government to launch programs such as campaigns, call-to-actions and civic tech initiatives that aim to more actively involve the public in discussing the local impacts of fake news and the strategies to manage it, and to encourage them to play a part through personal and community actions. |
There is a very strong, negative correlation between the media sources of fake news and the level of trust in them (ref. Figures 1 and 2) which is statistically significant (r(9)=−0.81𝑟90.81r(9)=-0.81italic_r ( 9 ) = - 0.81, p<.005𝑝.005p<.005italic_p < .005). Trust is built on transparency and truthfulness, and the presence of fake news, which is deceptive and usually meant to serve hidden agendas, may erode trust. It is worthwhile to consider whether the trust in media items is due to people’s own encounters with fake news, or because of secondary factors. In Singapore, there have been active efforts through campaigns from various organizations (e.g., S.U.R.E. (Board, [n.d.]), Better Internet (Council, [n.d.]), VacciNationSG (Lai, 2021)) to raise awareness on misinformation, disinformation and fake news. If it is through the exposure to the messages of these campaigns that people’s trust in media items have been influenced, especially those who might not have personally encountered fake news, this suggests the importance of media literacy education in addressing fake news, particularly when secondary effects such as practicing greater caution due to a lack of trust comes into play. | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this to be of concern to the CSCW community. As global efforts addressing fake news take off, we aim to understand what the perceptions and practices of news sharing and fake news are in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
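For reference, the reported statistic $r(9)=-0.81$, $p<.005$ is an ordinary Pearson correlation, presumably over eleven media items given the nine degrees of freedom. A sketch with made-up illustrative numbers:

```python
from scipy.stats import pearsonr

# Hypothetical per-medium values: how often fake news was encountered on each
# of 11 media types, and the mean trust rating reported for that medium.
fake_news_encounter_rate = [0.62, 0.55, 0.48, 0.40, 0.35, 0.30,
                            0.22, 0.18, 0.15, 0.10, 0.05]
mean_trust_rating = [2.1, 2.4, 2.6, 3.0, 2.9, 3.3,
                     3.6, 3.8, 3.7, 4.2, 4.4]

r, p = pearsonr(fake_news_encounter_rate, mean_trust_rating)
print(f"r({len(mean_trust_rating) - 2}) = {r:.2f}, p = {p:.4f}")
```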
| Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on instant messaging apps compared to social media, and have reported the least trust in them. They have also rated the sharing of fake news to be a greater problem than its creation. These suggest that, in Singapore, communication with personal contacts such as through the forwarding of messages, rather than with the public such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one’s social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013), and stronger preferences towards group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
| B |
However, GAT also has some limitations. When encountering a new entity (e.g., W3C), its embedding $\mathbf{e}_{\text{W3C}}$ is randomly initialized, and the computed attention scores by GAT are meaningless. Additionally, $\mathbf{e}_{\text{W3C}}$ is also a noise vector in the aggregation step.
| Alternatively, we can implement the decentralized approach using a second-order attention mechanism. As depicted in 2b, each layer in DAN consists of two steps, similar to a multi-layer GAT. The computation involves the previous two layers and can be formulated using the following equation:
| Figure 2: Insight into multi-layer DAN. a. In the single-layer DAN, we first use an additional aggregation layer to obtain the neighbor context (1-2); we then use the neighbor context as query to score neighbors (3); we finally aggregate the neighbors with the attention scores to obtain the final output embedding (4-5). b. In the multi-layer DAN, we first use the output embedding of W3C at layer $k-1$ as query to score the output embedding of its neighbors at layer $k-2$ (1); we then aggregate the neighbor embeddings at layer $k-2$ with the attention scores to obtain the output embedding of W3C at layer $k$ (2-3); similarly, we use the output embedding of W3C at layer $k$ as query to score the output embedding of its neighbors at layer $k-1$, and finally use the attention scores to aggregate the neighbor embeddings at layer $k-1$ to obtain the output embedding of W3C at layer $k+1$ (4-6).
| However, GAT also has some limitations. When encountering a new entity (e.g., W3C), its embedding $\mathbf{e}_{\text{W3C}}$ is randomly initialized, and the computed attention scores by GAT are meaningless. Additionally, $\mathbf{e}_{\text{W3C}}$ is also a noise vector in the aggregation step.
| If $\mathbf{e}_{\text{W3C}}$ is unobservable during the training phase, it becomes less useful and potentially detrimental when computing attention scores during the testing phase. To address this issue, we can introduce a decentralized attention network.
In the decentralized approach, all entities (including the unseen entities) are still randomly-initialized, but the attention layer requires two different types of inputs: the neighbor context vector as the query vector, and neighbor embeddings as the key and value vectors. As shown in Figure 2a, we can initially employ an independent module to aggregate neighbor embeddings and obtain the context vector, followed by attention-based weighting and aggregation steps. However, this implementation involves additional computation at each layer and requires an extra round of neighbor aggregation after obtaining the context vector, which can be cumbersome. | D |
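A minimal sketch of the single-layer variant described above (mean aggregation for the neighbor context, then context-as-query attention); the projection matrices are hypothetical parameters, and this is not the paper's exact DAN formulation:

```python
import torch
import torch.nn.functional as F

def decentralized_attention(neighbor_emb, w_query, w_key):
    """One attention step that never uses the (possibly unseen) center
    entity's own embedding: the query is built from the neighbors only.

    neighbor_emb : (num_neighbors, d) embeddings of the neighbors
    w_query, w_key : (d, d) projection matrices (assumed parameters)
    """
    # 1. Neighbor context = simple mean aggregation of the neighbors.
    context = neighbor_emb.mean(dim=0, keepdim=True)            # (1, d)

    # 2. Score each neighbor against the context vector.
    query = context @ w_query                                   # (1, d)
    keys = neighbor_emb @ w_key                                 # (k, d)
    scores = F.softmax(query @ keys.t(), dim=-1)                # (1, k)

    # 3. Aggregate the neighbors with the attention weights.
    return scores @ neighbor_emb                                # (1, d)
```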
$\nabla_{\eta}J^{\mathrm{PPO}}=\nabla_{\eta}\,\mathbb{E}_{t}[\delta_{t}]^{2},$
|
Previous work typically utilizes intrinsic motivation for exploration in complex decision-making problems with sparse rewards. Count-based exploration [20, 21] builds a density model and encourages the agent to visit the states with less pseudo visitation count. Episodic curiosity [22] compares the current observation with buffer and uses reachability as the novelty bonus. RND [23] measures the state uncertainty by random network distillation. Never give up [24] combines pre-episode and life-long novelty by using an episodic memory-based bonus. Most of these work proposes the final reward for training to characterize the trade-off between the extrinsic and intrinsic rewards, which is typically implemented as a linear combination. The intrinsic rewards are crucial when the extrinsic rewards are sparse. | Figure 6: The evaluation curve in Atari games. The first 6 games are hard exploration tasks. The different methods are trained with different intrinsic rewards, and extrinsic rewards are used to measure the performance. Our method performs best in most games, both in learning speed and quality of the final policy. The agent aims at staying alive and exploring the complex areas by maximizing the intrinsic rewards from VDM.
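A generic sketch of the prediction-error ("curiosity") family of intrinsic rewards surveyed above — not the specific VDM construction used in this work:

```python
import torch
import torch.nn as nn

class CuriosityReward(nn.Module):
    """Prediction-error intrinsic reward: a learned forward model predicts
    the next state, and its squared error serves as the exploration bonus."""

    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.forward_model = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def intrinsic_reward(self, state, action, next_state):
        pred = self.forward_model(torch.cat([state, action], dim=-1))
        # Larger prediction error -> less familiar transition -> bigger bonus.
        return 0.5 * (pred - next_state).pow(2).sum(dim=-1).detach()
```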
|
We first evaluate our method on standard Atari games. Since different methods utilize different intrinsic rewards, the intrinsic rewards are not applicable to measure the performance of the trained purely exploratory agents. In alternative, we follow [11, 13], and use the extrinsic rewards given by the environment to measure the performance. We highlight that the extrinsic rewards are only used for evaluation, not for training. We illustrate the evaluation curves of 18181818 common Atari games in Fig. 6, where the first 6666 games are hard exploration tasks. We draw each curve with five distinct random seeds. For each method, the solid line indicates the mean episodic reward of all five seeds, and the shadow area shows the confidence interval (i.e., ±plus-or-minus\pm±Std of episodic rewards among all seeds) of the performance. The result shows that self-supervised exploration enables the agent to obtain higher extrinsic rewards by learning based on intrinsic rewards. More specifically, maximizing the intrinsic rewards encourages the agent to explore the complicated part of the environment, which typically corresponds to significant changes in the scenarios and leads to large extrinsic rewards. | In this work, we consider self-supervised exploration without extrinsic reward. In such a case, the above trade-off narrows down to a pure exploration problem, aiming at efficiently accumulating information from the environment. Previous self-supervised exploration typically utilizes ‘curiosity’ based on prediction-error of dynamic [10, 25, 11] and the Bayesian uncertainty estimation using ensemble-based environment models [26, 13] or ensemble Q-functions [27]. Since the agent does pure exploration, the intrinsic motivation becomes the only driving force of the whole learning process. Meanwhile, because the influence of extrinsic rewards is eliminated, the effectiveness of intrinsic rewards can be evaluated independently. After training the pure-exploratory policy with intrinsic rewards, there are several ways to combine the intrinsic policy with extrinsic policies. Scheduled intrinsic drive [28] uses a high-level scheduler that periodically selects to follow either the extrinsic or the intrinsic policy to gather experiences. MuleX [29] learns several policies independently and uses a random heuristic to decide which one to use in each time step. Such policy combination methods perform better than the policy obtained from the linear combination of extrinsic and intrinsic rewards. We focus on developing the pure-exploratory agent and leave the study of policy combination in the future.
| A |
Until today, the classic Gauss quadrature formula is the best approach to approximating integrals $I_{\mathrm{Gauss}}(f)\approx\int_{\Omega}f(x)\,\mathrm{d}x$ in one variable [43, 58].
Many contributions toward extending this approach to higher dimensions have been made [21, 22, 49, 82]. | However, we only use the PAsubscript𝑃𝐴P_{A}italic_P start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT, A=Am,n,p𝐴subscript𝐴𝑚𝑛𝑝A=A_{m,n,p}italic_A = italic_A start_POSTSUBSCRIPT italic_m , italic_n , italic_p end_POSTSUBSCRIPT, p=1,2𝑝12p=1,2italic_p = 1 , 2, unisolvent nodes to determine the interpolants, whereas Trefethen computed the rates for the l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT- and l2subscript𝑙2l_{2}italic_l start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-degree
approximations by regression over the whole l∞subscript𝑙l_{\infty}italic_l start_POSTSUBSCRIPT ∞ end_POSTSUBSCRIPT-grid. | We complement the established notion of unisolvent nodes by the dual notion of unisolvence. That is: For given arbitrary nodes P𝑃Pitalic_P, determine the polynomial space ΠΠ\Piroman_Π such that
P𝑃Pitalic_P is unisolvent with respect to ΠΠ\Piroman_Π. In doing so, we revisit earlier results by Carl de Boor and Amon Ros [28, 29] and answer their question from our perspective. | Leslie Greengard, Christian L. Mueller, Alex Barnett, Manas Rachh, Heide Meissner, Uwe Hernandez Acosta, and Nico Hoffmann are deeply acknowledged for their inspiring hints and helpful discussions.
Further, we are grateful to Michael Bussmann and thank the whole CASUS institute (Görlitz, Germany) for hosting stimulating workshops on the subject. | convergence rates for the Runge function, as a prominet example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality. Our results also directly inspire an efficient algorithm to practically solve high-dimensional interpolation problems. We therefore provide a numerically robust and computationally efficient algorithm and its software implementation, and we use it to empirically verify our theoretical predictions.
Combining sub-exponential node numbers with exponential approximation rates, non-tensorial unisolvent nodes are thus able to lift the curse of dimensionality for multivariate interpolation tasks. | C |
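For the one-dimensional baseline mentioned at the start of this passage, a classic Gauss–Legendre rule is a few lines of NumPy; the Runge function is used here purely as an illustrative integrand:

```python
import numpy as np

def gauss_legendre_integral(f, a=-1.0, b=1.0, n=16):
    """Classic one-dimensional Gauss-Legendre quadrature of f over [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)             # map to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Example: the Runge function 1 / (1 + 25 x^2) on [-1, 1].
runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
print(gauss_legendre_integral(runge, -1.0, 1.0, n=32))    # approx. 0.54936
```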
$|\mathrm{IPM}(\mu,\nu)-\mathrm{IPM}(\hat{\mu}_{n},\hat{\nu}_{m})|<\epsilon+2[\mathfrak{R}_{n}(\mathcal{F},\mu)+\mathfrak{R}_{m}(\mathcal{F},\nu)].$
| A two-sample test is designed based on this theoretical result, and numerical experiments show that this test outperforms the existing benchmark.
In future work, we will study tighter performance guarantees for the projected Wasserstein distance and develop the optimal choice of k𝑘kitalic_k to improve the performance of two-sample tests. | In this section, we first discuss the finite-sample guarantee for general IPMs, then a two-sample test can be designed based on this statistical property. Finally, we design a two-sample test based on the projected Wasserstein distance.
Omitted proofs can be found in Appendix A. | The proof of Proposition 1 essentially follows the one-sample generalization bound mentioned in [41, Theorem 3.1].
However, by following the similar proof procedure discussed in [20], we can improve this two-sample finite-sample convergence result when extra assumptions hold, but existing works about IPMs haven’t investigated it yet. | The finite-sample convergence of general IPMs between two empirical distributions was established.
Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance has a minor dependence on the dimension of target distributions, which alleviates the curse of dimensionality. | C |
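To make the test design concrete, here is a hedged sketch of a permutation two-sample test; a single random projection stands in for the optimized $k$-dimensional projection of the actual projected Wasserstein distance:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def projected_w1(x, y, direction):
    """1-D Wasserstein distance of the two samples after projecting onto
    `direction` -- a crude stand-in for the optimized projected distance."""
    return wasserstein_distance(x @ direction, y @ direction)

def permutation_two_sample_test(x, y, num_permutations=500, seed=0):
    """Reject H0 (equal distributions) when the observed statistic is large
    compared with its permutation distribution."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=x.shape[1])
    direction /= np.linalg.norm(direction)

    observed = projected_w1(x, y, direction)
    pooled = np.vstack([x, y])
    exceed = 0
    for _ in range(num_permutations):
        perm = rng.permutation(len(pooled))
        xp, yp = pooled[perm[:len(x)]], pooled[perm[len(x):]]
        exceed += projected_w1(xp, yp, direction) >= observed
    return (exceed + 1) / (num_permutations + 1)   # permutation p-value
```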
Learning disentangled factors h∼qϕ(H|x)similar-toℎsubscript𝑞italic-ϕconditional𝐻𝑥h\sim q_{\phi}(H|x)italic_h ∼ italic_q start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_H | italic_x ) that are semantically meaningful representations of the observation x𝑥xitalic_x is highly desirable because such interpretable representations can arguably [icmlbest] be advantageous for a variety of downstream tasks, including classification, detection, reinforcement learning, and transfer learning. [bengio2013representation, lecun2015deep, lake2017building, van2019disentangled]. While a formal definition of disentangled representation (DR) remains elusive, we understand it to mean that by manipulating only one of the factors while holding the rest constant, only one semantically meaningful aspect of the observation, e.g. the pose of an object in an image, changes. Such capability can be highly useful for data generation tasks such as image synthesis from textual descriptions [DBLP:conf/icml/ReedAYLSL16, DBLP:journals/corr/ZhangXLZHWM16]. For this reason there has been extensive research towards developing DGMs that learn DR while generating data points of high quality, i.e. that are indistinguishable from the data being modeled. Of particular interest are models that can achieve this without supervision. |
The model has two parts. First, we apply a DGM to learn only the disentangled part, C𝐶Citalic_C, of the latent space. We do that by applying any of the above mentioned VAEs111In this exposition we use unspervised trained VAEs as our base models but the framework also works with GAN-based or FLOW-based DGMs, supervised, semi-supervised or unsupervised. In the Appendix we present such implementations. where we significantly constrain the capacity of the learned representation and heavily regularize the model to produce independent factors. As we explained above, such a model will likely learn a good disentangled representation, however, its reconstruction will be of low quality as it will only be able to generate the information captured by the disentangled factors while averaging the details. For example, in Figure 1, the model uses β𝛽\betaitalic_β-TCVAE [mig] to retrieve the pose of the model as a latent factor. In the reconstruction, the rest of the details are averaged, resulting in a blurry image (1b). The goal of the second part of the model, is to add the details while maintaining the semantic information retrieved in the first stage. In Figure 1 that means to transform Image 1b (the output of the first stage) to be as similar as possible to Image 1a (the target observation). We can view this as a style transfer task and use a technique from [adaIN] to achieve our goal. | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, if the unconstrained nuisance variables have enough capacity, the model can use them to achieve a high quality reconstruction while ignoring the latent variables related to the disentangled factors. This phenomena is sometimes called the "shortcut problem" and has been discussed in previous works [DBLP:conf/iclr/SzaboHPZF18].
| Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as means to obtain DR. The underlying assumption is that the latent variables H𝐻Hitalic_H can be partitioned into independent components C𝐶Citalic_C (i.e. the disentangled factors) and correlated components Z𝑍Zitalic_Z, a.k.a as nuisance variables, which encode the details information not stored in the independent components. A series of works starting from [beta] aims to achieve that via regularizing the models by up-weighting certain terms in the ELBO formulation which penalize the (aggregate) posterior to be factorized over all or some of the latent dimensions [kumar2017variational, factor, mig].
I think I would make what these methods doing clearer. They aren’t really separating into nuisance and independent only.. they are also throwing away nuisance. | Specifically, we apply a DGM to learn the nuisance variables Z𝑍Zitalic_Z, conditioned on the output image of the first part, and use Z𝑍Zitalic_Z in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the details information captured in Z𝑍Zitalic_Z while maintaining the semantic information captured in C𝐶Citalic_C to obtain the final reconstruction (Image 1d in our example).
| C |
As shown in the above method, logical aggregates can be constructed with structural wiring if digital signals are computed in pairs of inverted signals. Especially for the NOT gate, you can twist the α𝛼\alphaitalic_α line and the β𝛽\betaitalic_β line once, making it much simpler to operate than a semiconductor-based transistor that uses a traditional semiconductor element and a pore. In addition, cables were measured in pairs rather than in pairs to enable serial connection when AND operations were performed. The α𝛼\alphaitalic_α signal and β𝛽\betaitalic_β signal have values of 0 and 1, depending on the connection state of the wire.
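The inverted-pair convention can be checked in software; the following sketch encodes a logical value as the ordered pair (alpha, beta) and models NOT as a swap and AND/OR as series/parallel combinations — my reading of the wiring described above, not circuitry from the paper:

```python
# A digital value is carried as an inverted pair (alpha, beta):
# logical 1 -> (1, 0), logical 0 -> (0, 1), as described above.
ONE, ZERO = (1, 0), (0, 1)

def NOT(signal):
    """Twisting the two wires swaps alpha and beta: no transistor needed."""
    alpha, beta = signal
    return (beta, alpha)

def AND(a, b):
    """Series connection on the alpha wires, parallel on the beta wires."""
    return (a[0] & b[0], a[1] | b[1])

def OR(a, b):
    """Parallel connection on the alpha wires, series on the beta wires."""
    return (a[0] | b[0], a[1] & b[1])

assert NOT(ONE) == ZERO and AND(ONE, ZERO) == ZERO and OR(ZERO, ONE) == ONE
```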
| As shown in the above method, logical aggregates can be constructed with structural wiring if digital signals are computed in pairs of inverted signals. Especially for the NOT gate, you can twist the α𝛼\alphaitalic_α line and the β𝛽\betaitalic_β line once, making it much simpler to operate than a semiconductor-based transistor that uses a traditional semiconductor element and a pore. In addition, cables were measured in pairs rather than in pairs to enable serial connection when AND operations were performed. The α𝛼\alphaitalic_α signal and β𝛽\betaitalic_β signal have values of 0 and 1, depending on the connection state of the wire.
| The structure-based computer mentioned in this paper are based on Boolean Algebra, a system commonly applied to digital computers. Boolean algebra is a concept created by George Boole (1815-1854) of the United Kingdom that expresses the True and False of logic 1 and 0, and mathematically describes digital electrical signals. The concept of logical aggregates defined in Boolean algebra has become the basis for hardware devices such as ALU, CLU, RAM, and so on. Structure-based computer in this paper was also designed to perform logical operations using digital signals of 1 and 0. Logic circuits are the units in which logical operations are performed, and there are AND, OR, and NOT gates. Of these, the NOT gate in the computer we use today is based on transistors. The advantage of transistors is that they can differentiate between signal and power and perform switching and amplification at the same time. On the other hand, more heat is generated compared to passing through a conductor of the same length, which causes semiconductors to age and limits the number of clocks. To solve the various problems of the semiconductor mentioned above, this paper shows the concept of ”Reverse-Logic pair of digital signals” and ”double-pair(4-pin)-based logic operation” techniques on which Structure-based computer hardware is. This paper shows the concept of Reverse-Logic pair[7] of digital signals, which is a method for solving the problem of heating, aging, and computation speed of NOT operations. Expressing 1 as an inverted signal pair, it appears as an ordered pair of two auxiliary signals, each with a signal of one or zero, as shown in (1,0). Similarly, zeros are expressed in sequence pairs (0,1).
| If a pair of lines of the same color is connected, 1, if broken, the sequence pair of states of the red line (α𝛼\alphaitalic_α) and blue line (β𝛽\betaitalic_β) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inverted signal pair of digital signals into a structural computer, the signals are paired, so a total of four wires are required to process the two auxiliary signals. This is defined as a double pair-based logical operation and is as follows in Fig 1.
|
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations as a series and parallel connection were required. However, one can think about whether the four pin designs are the minimum number of pins required by structural computers. In other words, operating a structural computer with a minimal lead is also a task to be addressed by this study because one of the most important factors in computer hardware design is aggregation. Let’s look at the role of the four pins that transmit signals in a 4 pin based signal system. Four pins are paired into two pairs, each representing/delivering true and inverted values as a connection state. When checking the output, place a voltage on one of the two wires in a pair and ground the other. In this case, the study inferred that of the four wires, two wires acting as ground can be replaced by one wire, and based on this reasoning, the method in which the 4 pin signal system can be described as 3-pin based logic as the same 3 pin signal system. As mentioned above, a 3-pin based logic consists of a ground cable in the center and two signal lines representing true and inverted values above and below, and is capable of operating NOT, AND and OR operations through the structural transformations shown below. | C |
The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽𝔽\mathbb{F}blackboard_F, this paper explores a completely new approach using the Koopman operator defined by the iterates of the map. This helps define the linear representation of non-linear maps, which translates non-linear compositions of the map to matrix multiplications. This linear representation naturally defines a notion of linear complexity for non-linear maps, which can be viewed as a measure of computational complexity associated with computations involving such maps. The framework of linear representation is then extended to parameter dependent maps over 𝔽𝔽\mathbb{F}blackboard_F, and the conditions on parametric invertibility of such maps are established, leading to a construction of the parametric inverse map (under composition). It is shown that the framework can be extended to multivariate maps over 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT, and the conditions are established for invertibility of such maps, and the inverse is constructed using the linear representation. Further, the problem of linear representation of the group generated by a finite set of permutation maps over 𝔽nsuperscript𝔽𝑛\mathbb{F}^{n}blackboard_F start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT under composition is also solved by extending the theory of linear representation of a single map. This leads to the notion of complexity of a group of permutation maps under composition.
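As a point of comparison for the cycle-structure computations discussed above, the cycle type of a permutation map over a prime field can be obtained by brute force; the map x -> x^5 on F_7 is an illustrative choice, not an example taken from the paper:

```python
def cycle_structure(f, p):
    """Cycle lengths of the permutation x -> f(x) on the prime field F_p,
    computed by brute force for comparison with the linear-representation
    approach described above."""
    seen, cycles = set(), []
    for start in range(p):
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = f(x) % p
            length += 1
        cycles.append(length)
    return sorted(cycles)

# x -> x^5 permutes F_7 because gcd(5, 7 - 1) = 1.
print(cycle_structure(lambda x: x**5, 7))   # [1, 1, 1, 2, 2]
```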
|
Let the matrix representation of KF=𝐊|Wsubscript𝐾𝐹conditional𝐊𝑊K_{F}=\mathbf{K}|Witalic_K start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT = bold_K | italic_W in ℬℬ\mathcal{B}caligraphic_B be denoted as M𝑀Mitalic_M. (The notation for matrix representation is explained in (8)). Analogous to the univariate case, the dimension N𝑁Nitalic_N of the space W𝑊Witalic_W is defined as the linear complexity of the map F𝐹Fitalic_F | The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial f𝑓fitalic_f by constructing a matrix A(f)𝐴𝑓A(f)italic_A ( italic_f ), of dimension q×q𝑞𝑞q\times qitalic_q × italic_q through the coefficients of the (algebraic) powers of fksuperscript𝑓𝑘f^{k}italic_f start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT, k=0,1,…,q−1𝑘01…𝑞1k=0,1,\dots,q-1italic_k = 0 , 1 , … , italic_q - 1 and computing the multiplicative order of the eigenvalues of this matrix A(f)𝐴𝑓A(f)italic_A ( italic_f ) over a suitable field extension. In our work, to compute the cycle structure of the permutation polynomial, we have to compute the solutions of the associated linear dynamical system (19). This computation amounts to computing the multiplicative order of the eigenvalues of the matrix M𝑀Mitalic_M over a suitable field extension [24]. From the table, we see that the dimension of the matrix M𝑀Mitalic_M, which is used to compute the cycle lengths, is not necessarily q𝑞qitalic_q. Hence, this approach does not necessarily involve matrices of dimension q𝑞qitalic_q in all cases.
|
The first author would like to thank the Department of Electrical Engineering, Indian Institute of Technology - Bombay, as the work was done in full during his tenure as a Institue Post-Doctoral Fellow. The authors would also like to thank the reviewers for their suggestions in the proofs of Lemma 1, Proposition 1 and Lemma 3. | The second statement of the theorem gives a necessary and sufficient condition for an element of the set ΣMsubscriptΣ𝑀\Sigma_{M}roman_Σ start_POSTSUBSCRIPT italic_M end_POSTSUBSCRIPT to be in ΣfsubscriptΣ𝑓\Sigma_{f}roman_Σ start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT. If the choice of basis is as in (6), once the set of all y𝑦yitalic_y satisfying (20) is obtained, the first component of y𝑦yitalic_y is precisely the α𝛼\alphaitalic_α for which the other components are to be verified for consistency.
| C |
The NNFS algorithm performed surprisingly well in our simulations given its simple and greedy nature, showing performance very similar to that of the adaptive lasso. However, in both gene expression data sets it was among the two worst performing methods, both in terms of accuracy and view selection stability. If one additionally considers that NNFS does not scale well with larger problems there is generally no reason to choose this algorithm over the nonnegative (adaptive) lasso. | Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expression data sets stability selection also produced the sparsest models, but it also had the worst classification accuracy of all meta-learners. In applying stability selection, one has to specify several parameters. We calculated the values of these parameters in part by specifying a desired bound on the PFER (in our case 1.5). This kind of error control is much less strict than the typical family-wise error rate (FWER) or FDR control one would apply when doing statistical inference. In fact, one can observe in Figures 3 and 4 that although stability selection has a low FPR, for a sample size of 200 its FDR is still much higher than one would typically consider acceptable when doing inference (common FDR control levels are 0.05 or 0.1). Additionally, we gave the meta-learner information about the number of views containing signal in the data (parameter q𝑞qitalic_q), which the other meta-learners did not have access to. It is also worth noting that the sets of views selected by stability selection in both gene expression data sets had low view selection stability. Ideally, selecting views based on their stability would lead to a set of selected views that is itself highly stable, but evidently this is not the case. It follows then that stability selection may produce a set of selected views which is neither particularly useful for prediction, nor for inference. One could add additional assumptions (Shah \BBA Samworth, \APACyear2013), which may increase predictive performance, but may also increase FDR. Or one could opt for stricter error control, but this would likely reduce classification performance even further. This implies that performing view selection for both the aims of prediction and inference using a single procedure may produce poor results, since the resulting set of selected views may not be suitable for either purpose.
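A minimal sketch of multi-view stacking with a nonnegative-lasso meta-learner, the kind of view-selecting pipeline compared here; the regularization strength and the use of in-sample level-one predictions are simplifications for brevity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Lasso

def multi_view_stacking(views_train, y_train, views_test):
    """One logistic model per view, then a nonnegative lasso meta-learner
    over the view-specific predictions; views whose meta-coefficient is 0
    are deselected."""
    base_models = [LogisticRegression(max_iter=1000).fit(X, y_train)
                   for X in views_train]
    Z_train = np.column_stack([m.predict_proba(X)[:, 1]
                               for m, X in zip(base_models, views_train)])
    Z_test = np.column_stack([m.predict_proba(X)[:, 1]
                              for m, X in zip(base_models, views_test)])

    # Level 2: nonnegative lasso meta-learner selects the views.
    meta = Lasso(alpha=0.01, positive=True).fit(Z_train, y_train)
    selected_views = np.flatnonzero(meta.coef_ > 0)
    return meta.predict(Z_test), selected_views
```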
|
The false discovery rate in view selection for each of the meta-learners can be observed in Figure 4. Note that the FDR is particularly sensitive to variability since its denominator is the number of selected views, which itself is a variable quantity. In particular, when the number of selected views is small, the addition or removal of a single view may cause large increases or decreases in FDR. This happens especially whenever ρb>0subscript𝜌𝑏0\rho_{b}>0italic_ρ start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT > 0, as can be observed in Figure 4. The ranking of the different meta-learners is similar to their ranking by TPR and FPR. When n=200𝑛200n=200italic_n = 200, the interpolating predictor has the highest FDR due to its tendency to select very dense models when n<V𝑛𝑉n<Vitalic_n < italic_V. When n=2000𝑛2000n=2000italic_n = 2000, the interpolating predictor often has a very low FDR, but in these settings it also has considerably lower TPR and test accuracy than the other meta-learners. Of the other meta-learners nonnegative ridge regression has the highest FDR, followed by the elastic net, lasso, adaptive lasso and NNFS, and stability selection. | For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu \BOthers., \APACyear2012).
An example of the trade-off between sparsity and interpretability of the set of selected views occurs when different views, or combinations of views, contain the same information. If the primary concern is sparsity, a researcher may be satisfied with just one of these combinations being selected, preferably the smallest set which contains the relevant information. But if there is also a desire to interpret the relationships between the views and the outcome, it may be more desirable to identify all of these combinations, even if this includes some redundant information. If one wants to go even further and perform formal statistical inference on the set of selected views, one may additionally be interested in theoretically controlling, say, the family-wise error rate (FWER) or false discovery rate (FDR) of the set of selected views. However, strict control of such an error rate could end up harming the predictive performance of the model, thus leading to a trade-off between the interpretability of the set of selected views and classification accuracy. | In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of views, the interpolating predictor often had the lowest TPR in view selection, as well as the lowest test accuracy, particularly when there was no correlation between the different views. When the sample size was smaller than the number of views, the interpolating predictor had a FPR in view selection that was considerably higher than that of all other meta-learners. In terms of accuracy it performed very well in the breast cancer data, but less so in the colitis data. However, in both cases it produced very dense models, which additionally had low view selection stability. The fact that its behavior varied considerably across our experimental conditions, combined with its tendency to select very dense models when the meta-learning problem is high-dimensional, suggests that the interpolating predictor should not be used when view selection is among the goals of the study under consideration. However, it may have some use when its interpretation as a weighted mean of the view-specific models is of particular importance.
| A |
Regarding AP, HITON-PC and FBED exhibit significantly better performance than the other three techniques, as depicted in Figure 3(b). Notably, the results of AP generally display larger variances than those of ROC AUC, which indicates that performance measured with AP is less stable.
|
Table 6 presents the reduction rates achieved by each of the five techniques. The reduction rate is computed as 1 minus the ratio of the number of relevant variables selected to the total number of variables in a dataset. The results reveal substantial variations in reduction rates among the different techniques for the same dataset. For instance, for the dataset Libras, the reduction rate achieved by IEPC is 2.2%, while the other techniques achieve rates below 93%. On average, HITON-PC exhibits the highest reduction rate of 84.61%, while IEPC shows the lowest reduction rate at 40.28%. FBED, DC, and MI achieve relatively similar reduction rates, hovering around 76%. Notably, FBED and HITON-PC display a similar trend, with HITON-PC consistently achieving a higher reduction rate than FBED due to the PC set of a variable being a subset of its Markov blanket. |
As shown in Figure 3(a), the two causal feature selection techniques, HITON-PC and FBED, show better performance than the other three techniques. HITON-PC has the best average results, followed by FBED, IEPC, MI and DC. From the p𝑝pitalic_p-values shown in the figure, HITON-PC is significatly better than MI and DC, and FBED is significantly better than the three non-causal techniques. DC shows a much larger variance than other techniques, with a standard deviation of 0.044, while the standard deviations of other techniques range from 0.011 to 0.017. | In conclusion, the relevant variable selection phase of the DepAD framework is crucial for identifying optimal predictors for the target variable in anomaly detection. Striking a balance between selecting too many or too few variables is essential for maintaining prediction accuracy. When the ground-truth relevant variable set is unavailable, the Markov blanket (MB) represents a theoretically optimal choice. Our experiments have further validated that HITON-PC and FBED outperform the other techniques, and achieve superior results in both ROC AUC and AP and the highest variable reduction rates.
| Compared to other methods, IEPC exhibits a notably lower reduction rate, which, we believe, contributes to its unstable performance. The experimental results in Figure 3 indicate that when considering only linear prediction models, IEPC performs better with regularization techniques such as LASSO and Ridge, as opposed to general linear regression without regularization. This observation suggests the possibility of irrelevant or redundant variables being included in the set of relevant variables selected by IEPC.
| A |
At the start of the interaction, when no contexts have been observed, $\hat{\theta}_{t}$ is well-defined by Eq (5) when $\lambda_{t}>0$. Therefore, the regularization parameter $\lambda_{t}$ makes CB-MNL burn-in period free, in contrast to some previous works, e.g. Filippi et al. [2010].
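The optimistic action choice that such confidence sets support can be sketched generically; a linear payoff with an ellipsoidal bonus is shown here only for brevity — the multinomial-logit case replaces the linear score with the MNL expected revenue:

```python
import numpy as np

def ofu_action(theta_hat, V, arms, beta):
    """Generic optimism-in-the-face-of-uncertainty step: score every arm by
    its estimated payoff plus an ellipsoidal confidence bonus and play the
    maximizer."""
    V_inv = np.linalg.inv(V)
    scores = [x @ theta_hat + beta * np.sqrt(x @ V_inv @ x) for x in arms]
    return int(np.argmax(scores))

# The regularized design matrix V_t = lambda_t * I + sum_s x_s x_s^T keeps
# the estimator well defined even before any context has been observed.
d, lam = 5, 1.0
V = lam * np.eye(d)
theta_hat = np.zeros(d)
arms = [np.random.randn(d) for _ in range(10)]
print(ofu_action(theta_hat, V, arms, beta=1.0))
```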
| where pessimism is the additive inverse of the optimism (difference between the payoffs under true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that θ∗∈Ct(δ)subscript𝜃subscript𝐶𝑡𝛿\theta_{*}\in C_{t}(\delta)italic_θ start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ∈ italic_C start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) (see Eq (12)), pessimism is non-positive, for all rounds. Thus, the regret is upper bounded by the sum of the prediction error for T𝑇Titalic_T rounds. In Section 4.1 we derive an the expression for prediction error upper bound for a single round t𝑡titalic_t. We also contrast with the previous works Filippi et al. [2010], Li et al. [2017], Oh & Iyengar [2021] and point out specific technical differences which allow us to use Bernstein-like tail concentration inequality and therefore, achieve stronger regret guarantees. In Section 4.2, we describe the additional steps leading to the statement of Theorem 1. The style of the arguments is simpler and shorter than that in Faury et al. [2020]. Finally, in Section 4.3, we discuss the relationship between two confidence sets Ct(δ)subscript𝐶𝑡𝛿C_{t}(\delta)italic_C start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) and Et(δ)subscript𝐸𝑡𝛿E_{t}(\delta)italic_E start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) and show that even using Et(δ)subscript𝐸𝑡𝛿E_{t}(\delta)italic_E start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ) in place of Ct(δ)subscript𝐶𝑡𝛿C_{t}(\delta)italic_C start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_δ ), we get the regret upper bounds with same parameter dependence as in Corollary 2.
Lemma 3 gives the expression for an upper bound on the prediction error. |
Comparison with Oh & Iyengar [2019] The Thompson Sampling based approach is inherently different from our Optimism in the face of uncertainty (OFU) style Algorithm CB-MNL. However, the main result in Oh & Iyengar [2019] also relies on a confidence set based analysis along the lines of Filippi et al. [2010] but has a multiplicative κ𝜅\kappaitalic_κ factor in the bound. | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of uncertainty approaches) [Abbasi-Yadkori et al., 2011, Abeille et al., 2021]. We use Bernstein-style concentration for self-normalized martingales, which were previously proposed in the context of scalar logistic bandits in Faury et al. [2020], to define our confidence set over the true parameter, taking into account the effects of the local curvature of the reward function. We show that the performance of CB-MNL (as measured by regret) is bounded as O~\deldT+κ~O\del𝑑𝑇𝜅\tilde{\mathrm{O}}\del{d\sqrt{T}+\kappa}over~ start_ARG roman_O end_ARG italic_d square-root start_ARG italic_T end_ARG + italic_κ, significantly improving the theoretical performance over existing algorithms where κ𝜅\kappaitalic_κ appears as a multiplicative factor in the leading term. We also leverage a self-concordance [Bach, 2010] like relation for the multinomial logit reward function [Zhang & Lin, 2015], which helps us limit the effect of κ𝜅\kappaitalic_κ on the final regret upper bound to only the higher-order terms. Finally, we propose a different convex confidence set for the optimization problem in the decision set of CB-MNL, which reduces the optimization problem to a constrained convex problem.
| A |
Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, significantly outperforms all other methods that use the same pre-extracted features. It is even on par with concurrent methods that finetune the features on ActivityNet for TAL end to end.
| Table 6: xGN levels in xGPN (ActivityNet-v1.3). We show the mAPs (%) at different tIoU thresholds, average mAPs as well as mAPs for short actions (less than 30 seconds) when using xGN at different xGPN encoder levels. The levels in the columns with ✓use xGN and the ones in the blank columns use a Conv1d(3,2)Conv1d32\textrm{Conv1d}(3,2)Conv1d ( 3 , 2 ) layer instead.
| Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. Then it pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The temporal branch contains a Conv1d(3,1)Conv1d31\textrm{Conv1d}(3,1)Conv1d ( 3 , 1 )222For conciseness, we use Conv1d(m,n)Conv1d𝑚𝑛\textrm{Conv1d}(m,n)Conv1d ( italic_m , italic_n ) to represent 1-D convolutions with kernel size m𝑚mitalic_m and stride n𝑛nitalic_n. layer. In the graph branch, we build a graph on all the features from both Clip O and Clip U, and apply edge convolutions [38] for feature aggregation.
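A hedged sketch of a cross-scale aggregation block in this spirit (temporal Conv1d branch plus a simple neighbor-averaging graph branch, then stride-2 pooling); the real xGN uses edge convolutions and cross-scale edges, so this is only an approximation of the idea:

```python
import torch
import torch.nn as nn

class XGNBlock(nn.Module):
    """Temporal branch (Conv1d, kernel 3) plus a graph branch that averages
    features over a given temporal adjacency, followed by a stride-2
    convolution that pools to the next (smaller) scale."""

    def __init__(self, channels):
        super().__init__()
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.graph_proj = nn.Linear(channels, channels)
        self.pool = nn.Conv1d(channels, channels, kernel_size=3,
                              stride=2, padding=1)

    def forward(self, feats, adj):
        # feats: (B, C, T) sequence features; adj: (T, T) float adjacency
        t_branch = self.temporal(feats)                          # (B, C, T)
        x = feats.transpose(1, 2)                                # (B, T, C)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)          # (T, 1)
        g_branch = self.graph_proj(adj @ x / deg).transpose(1, 2)
        return self.pool(torch.relu(t_branch + g_branch))        # (B, C, ~T/2)
```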
| We provide ablation study for the key components VSS and xGPN in VSGN to verify their effectiveness on the two datasets in Table 3 and 4, respectively. The baselines are implemented by replacing each xGN module in xGPN with a layer of Conv1d(3,2)Conv1d32\textrm{Conv1d}(3,2)Conv1d ( 3 , 2 ) and ReLU, and not using cutting, up-scaling and stitching in VSS.
| To further improve the boundaries generated from Mlocsubscript𝑀𝑙𝑜𝑐M_{loc}italic_M start_POSTSUBSCRIPT italic_l italic_o italic_c end_POSTSUBSCRIPT, we design Madjsubscript𝑀𝑎𝑑𝑗M_{adj}italic_M start_POSTSUBSCRIPT italic_a italic_d italic_j end_POSTSUBSCRIPT inspired by FGD in [24]. For each updated anchor segment from the Mlocsubscript𝑀𝑙𝑜𝑐M_{loc}italic_M start_POSTSUBSCRIPT italic_l italic_o italic_c end_POSTSUBSCRIPT, we sample 3 features from around its start and end locations, respectively. Then we temporally concatenate the 3 feature vectors from each location and apply Conv1d(3,1)−ReLU−Conv1d(1,1)Conv1d31ReLUConv1d11\textrm{Conv1d}(3,1)-\textrm{ReLU}-\textrm{Conv1d}(1,1)Conv1d ( 3 , 1 ) - ReLU - Conv1d ( 1 , 1 ) to predict start/end offsets. The anchor segment is further adjusted by adding the two offsets to the start and end locations respectively. Mscrsubscript𝑀𝑠𝑐𝑟M_{scr}italic_M start_POSTSUBSCRIPT italic_s italic_c italic_r end_POSTSUBSCRIPT, comprised of a stack of Conv1d(3,1)−ReLU−Conv1d(1,1)Conv1d31ReLUConv1d11\textrm{Conv1d}(3,1)-\textrm{ReLU}-\textrm{Conv1d}(1,1)Conv1d ( 3 , 1 ) - ReLU - Conv1d ( 1 , 1 ), predicts actionness/startness/endness scores [21] for each sequence.
| C |
The user interface of VisEvol is structured as follows:
(1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e)); | (ii) in the next exploration phase, compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c–e));
(iii) during the detailed examination phase, zoom in into interesting clusters already explored in the previous phase, and focus on indications that confirm either their approval in the ensemble or their need for transformation through the evolutionary process (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(f and g)); | The user interface of VisEvol is structured as follows:
(1) two projection-based views, referred to as Projections 1 and 2, occupy the central UI area (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d and e)); | (2) active views relevant for both projections are positioned on the top (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b and c)); and
(3) commonly-shared views that update on the exploration of either Projection 1 or 2 are placed at the bottom (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(f and g)). | After another hyperparameter space search (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d)) with the help of supporter views (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c, f, and g)), out of the 290 models generated in S2subscript𝑆2S_{2}italic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, we select 28 to add to the ensemble (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(e)).
Surprisingly, the best majority-voting ensemble for the test and validation data sets contains 1 RF and 3 GradB models, compared to the 110 models added from all stages in total. | C |
Consensus protocols, in contrast to Markov chains, operate without the limitations of non-negative nodes and edges or the requirement for the sum of nodes to equal one [18]. This broader scope enables consensus protocols to address a significantly wider range of problem spaces.
Therefore, there is a significant interest in consensus protocols in a broad range of multi-agent networked systems research, including distributed coordination of mobile autonomous agents [27, 28, 29, 30, 31, 51], distributed optimization [52, 53, 54, 55, 56], distributed state estimation [57, 58], or dynamic load-balancing for parallel processors [59, 60]. | There are comprehensive survey papers that review the research on consensus protocols [19, 20, 21, 22]. In many scenarios, the network topology of the consensus protocol is a switching topology due to failures, formation reconfiguration, or state-dependence. There is a large number of papers that propose consensus protocols with switching network topologies and convergence proofs of these algorithms are provided under various assumptions [27, 28, 29, 30, 31, 32].
In [27], a consensus protocol is proposed to solve the alignment problem of mobile agents, where the switching topology is assumed to be periodically connected. | we introduce a consensus protocol with state-dependent weights to reach a consensus on time-varying weighted graphs.
Unlike other proposed consensus protocols in the literature, the consensus protocol we introduce does not require any connectivity assumption on the dynamic network topology. We provide theoretical analysis for proof of exponential convergence under some mild technical conditions. | Another algorithm is proposed in [28] that assumes the underlying switching network topology is ultimately connected. This assumption means that the union of graphs over an infinite interval is strongly connected. In [29], previous works are extended to solve the consensus problem on networks under limited and unreliable information exchange with dynamically changing interaction topologies. The convergence of the algorithm is provided under the ultimately connected assumption.
Another consensus protocol is introduced in [30] for the cooperation of vehicles performing a shared task using inter-vehicle communication. Based on this work, a theoretical framework is presented in [31] to solve consensus problems under a variety of assumptions on the network topology such as strongly connected switching topology. | Consensus protocols form an important field of research that has a strong connection with Markov chains [18].
Consensus protocols are a set of rules used in distributed systems to achieve agreement among a group of agents on the value of a variable [19, 20, 21, 22]. | A |
Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5.
From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle consistency is assured already when the multi-matchings are computed. Although some approaches fit into this category [18, 9], none of the existing methods are tailored explicitly towards isometric multi-shape matching in order to take full advantage in this setting. | In this work we fill this gap by introducing a generalisation of state-of-the-art isometric two-shape matching approaches towards isometric multi-shape matching. We demonstrate that explicitly exploiting the isometry property leads to a natural and elegant formulation that achieves improved results compared to previous methods.
Our main contributions can be summarised as: | It was shown that deep learning is an extremely powerful approach for extracting
shape correspondences [40, 27, 59, 26]. However, the focus of this work is on establishing a fundamental optimisation problem formulation for cycle-consistent isometric multi-shape matching. As such, this work does not focus on learning methods per-se, but we believe that it has a strong potential to spark further work in this direction. In particular, our isometric multi-matching formulation can be integrated into an end-to-end learning framework via differentiable programming techniques [48]. Moreover in machine learning, an entire shape collection is typically used for training, | A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisation [31], which builds upon functional maps and is, in principal, well-suited for isometric multi-shape matching. However, although the authors take into account cycle consistency, respective penalties are only imposed on pairwise functional maps, rather than on the point-wise correspondences. In Sec. 5 we demonstrate that it leads to multi-matchings that have large cycle errors.
|
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both for the shape-to-universe matchings, as well as for the shape-to-universe functional maps. This contrasts the recent ConsistentZoomOut [31] method, which does not obtain cycle-consistent multi-matchings. Our algorithm is efficient, straightforward to implement, and montonically increases the objective function. Experimentally we have demonstrated that our method outperforms recent state-of-the-art techniques in terms of matching quality, while producing cycle-consistent results and being efficient. | A |
The first three steps of algorithm RecognizePG are implied by the first part of Theorem 6. By following Theorem 6, we have to check that there are no full antipodal triangle in UpperCsubscriptUpper𝐶\text{Upper}_{C}Upper start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT (this is made in Step 4), and we have to find f:ΓC→[r+1]:𝑓→subscriptΓ𝐶delimited-[]𝑟1f:\Gamma_{C}\rightarrow[r+1]italic_f : roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT → [ italic_r + 1 ] satisfying 6.(1),…,6.(6), where r=|UpperC|𝑟subscriptUpper𝐶r=|\text{Upper}_{C}|italic_r = | Upper start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT |. This latter part is done in Step 4, Step 5 and Step 6. In particular 6.(1) is done in Step 4, 6.(4) and 6.(5) are achieved in Step 5, and 6.(2), 6.(3) and 6.(6) are reached in Step 6. Note that the first condition in the second case of Step 5 is indirectly present in Theorem 6: if it happens, then we cannot satisfy condition 6.(5) (moreover, γ,γ′,γ′′𝛾superscript𝛾′superscript𝛾′′\gamma,\gamma^{\prime},\gamma^{\prime\prime}italic_γ , italic_γ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT , italic_γ start_POSTSUPERSCRIPT ′ ′ end_POSTSUPERSCRIPT would form a full antipodal triangle). Finally, Step 7 completes the recursion started in Step 3 by building the clique path tree on ΓCsubscriptΓ𝐶\Gamma_{C}roman_Γ start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT.
□□\Box□ | In this section we analyze all steps of algorithm RecognizePG. We want to explain them in details and compute the computational complexity of the algorithm. Some of these steps are already discussed in [22], anyway, we describe them in order to have a complete treatment.
|
The recognition algorithm RecognizePG for path graph is mainly built on path graphs’ characterization in [1]. This characterization decomposes the input graph G𝐺Gitalic_G by clique separators as in [18], then at the recursive step one has to find a proper vertex coloring of an antipodality graph satisfying some particular conditions; see Section 3, in particular Theorem 6. In a few words, an antipodality graph has as vertex set some subgraph of G𝐺Gitalic_G, and two vertices are connected if the corresponding subgraphs of G𝐺Gitalic_G are antipodal. Unfortunately, we cannot build all the antipodality graphs by brute force because checking all possible antipodal pairs requires too much time (more time than the overall complexity of algorithms in [3, 22]). We overcome this problem by visiting the connected components in a smart order. This order allows us to establish all the antipodality relations in a faster time. This is done in Step 4, Step 5, and Step 6 that are the core of algorithm RecognizePG. | On the side of path graphs, we believe that, compared to algorithms in [3, 22], our algorithm is simpler for several reasons: the overall treatment is shorter, the algorithm does not require complex data structures, its correctness is a consequence of the characterization in [1], and there are a few implementation details to achieve the same computational complexity as in [3, 22].
| The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prove its correctness, we report some implementation details and we compute its time complexity. Finally, in Section 5 we provide a similar analysis for directed path graphs.
| A |
In experiments 1(c) and 1(d), we study how the connectivity (i.e., ρ𝜌\rhoitalic_ρ, the off-diagonal entries of P𝑃Pitalic_P) across communities under different settings affects the performances of these methods. Fix (x,n0)=(0.4,100)𝑥subscript𝑛00.4100(x,n_{0})=(0.4,100)( italic_x , italic_n start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ) = ( 0.4 , 100 ) and let ρ𝜌\rhoitalic_ρ range in {0,0.02,0.04,…,0.2}00.020.04…0.2\{0,0.02,0.04,\ldots,0.2\}{ 0 , 0.02 , 0.04 , … , 0.2 }. In Experiment 1(c), θ𝜃\thetaitalic_θ is generated from MMSB model. In Experiment 1(d), θ𝜃\thetaitalic_θ is generated from DCMM model.
|
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both two methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests that Mixed-SLIM significantly outperforms Mixed-SCORE, OCCAM, and GeoNMF under the DCMM setting. It is interesting to find that only Mixed-SLIM enjoys better performances as the fraction of pure nodes increases under the DCMM setting. |
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ𝜌\rhoitalic_ρ increases they all perform poorer. Under the DCMM model, the mixed Humming error rate of Mixed-SLIM decreases as ρ𝜌\rhoitalic_ρ decreases, while the performances of the other three approaches are still unsatisfactory. |
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. the proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. |
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly and both two approaches perform better than OCCAM and GeoNMF under the MMSB setting. Meanwhile, Mixed-SLIM significantly outperforms the other three methods under the DCMM setting. | B |
In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle.
The variational transport algorithm can be viewed as a forward discretization of the Wasserstein gradient flow (Santambrogio, 2017) | Our Contribution. Our contribution is two fold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation.
In each iteration, variational transport first solves the variational problem associated with the objective to obtain an estimator of the Wasserstein gradient and then approximately implements Wasserstein gradient descent by pushing the particles. | In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle.
The variational transport algorithm can be viewed as a forward discretization of the Wasserstein gradient flow (Santambrogio, 2017) | To showcase these advantages, we consider an instantiation of variational transport where the objective functional F𝐹Fitalic_F satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with F𝐹Fitalic_F is solved via kernel methods.
In this case, we prove that variational transport generates a sequence of probability distributions that converges linearly to a global minimizer of F𝐹Fitalic_F up to some statistical error. |
Compared with existing methods, variational transport features a unified algorithmic framework that enjoys the following advantages. First, by considering functionals with a variational form, the algorithm can be applied to a broad class of objective functionals. | D |
∥R(rt+1∣aj,t,zt2)−R(rt+1∣zt2)∥).\displaystyle\qquad\big{\|}R\left(r_{t+1}\mid a_{j,t},z_{t}^{2}\right)-R\left(%
r_{t+1}\mid z_{t}^{2}\right)\big{\|}\Big{)}.∥ italic_R ( italic_r start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT ∣ italic_a start_POSTSUBSCRIPT italic_j , italic_t end_POSTSUBSCRIPT , italic_z start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) - italic_R ( italic_r start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT ∣ italic_z start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) ∥ ) . |
Besides the above two classes, other intrinsic reward methods are mainly task-oriented and for a specific purpose. For example, the method in [19] uses the discrepancy between the marginal policy and the conditional policy as the intrinsic reward for encouraging agents to have a greater social impact on others. The errors between the joint cooperative behaviors and the individual actions are defined in [58] as an intrinsic reward, which is suitable for agent-pair tasks that rely heavily on collaboration, such as dual-arm robot tasks. Similar with them, the proposed intrinsic reward is specially designed | Thus, in expectation, the intrinsic reward is the negative of MI above. As each agent maximizes the long-term cumulative reward, which therefore minimizes MI. As a result, agents become independent. This can be an interpretation from the information-theoretical perspective. Note that the prediction results are only used to form intrinsic rewards, and our method tries to minimize them. That means our method mainly relies on the trend of change of predicted results, not the predicted value. Therefore, we expect our method is resilient to the decoders’ modeling error accumulation.
| To make the policy transferable, traffic signal control is also modeled as a meta-learning problem in [14, 49, 36]. Specifically, the method in [14] performs meta-learning on multiple independent MDPs and ignores the influences of neighbor agents. A data augmentation method is proposed in [49] to generates diverse traffic flows to enhance meta-RL, and also regards agents as independent individuals, without explicitly considering neighbors. In addition, a model-based RL method is proposed in [36] for high data efficiency. However it may introduce cumulative errors due to error of the learned environment model and it is hard to achieve the asymptotic performance of model-free methods. Our method both belongs to meta-RL paradigms, the main advantages are two main aspects Firstly, we consider the neighbour information during the meta-learning, which is critical for the multi-agent coordination. Secondly, our method learns a latent variable to represent task-specific information, which can not only balance exploration and exploitation [50], but also help to learn the shared structures of reward and transition across tasks. As far as we know, our work is the first to propose an intrinsic motivation to enhance the robustness of the policy on traffic signal control. See Appendix F for a brief overview of the above methods.
| Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which make the policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards and observation transitions because of neighbor agents’ different actions. In this case, the received rewards and observation transitions of the current agent could not be well predicted only conditioned on its own or partial neighbors’ observations and performed actions. To avoid this situation, four decoders are introduced to predict the next observations and rewards without neighbor agents’ policies or with partially neighbor agents, respectively. In addition, an intrinsic reward is designed to reduce the bias among different predictions and enhance learning stability. In other words, the design of the decoders and intrinsic reward is similar to the law of contra-positive. The unstable learning will cause the predicted rewards and observation transitions unstable in a decentralized way, while our decoders and intrinsic reward encourage the prediction convergent. In addition, from the perspective of information theory, the intrinsic reward design makes the policy of each agent robust to neighbours’ polices, which could make the learned policy easy to transfer.
| B |
such that
𝓇𝒶𝓃𝓀(ϕ𝐳(𝐳^))≡𝓀𝓇𝒶𝓃𝓀subscriptitalic-ϕ𝐳^𝐳𝓀\mathpzc{rank}\left(\,\phi_{\mathbf{z}}(\hat{\mathbf{z}})\,\right)\,\equiv\,kitalic_script_r italic_script_a italic_script_n italic_script_k ( italic_ϕ start_POSTSUBSCRIPT bold_z end_POSTSUBSCRIPT ( over^ start_ARG bold_z end_ARG ) ) ≡ italic_script_k for all 𝐳^∈Λ∗^𝐳subscriptΛ\hat{\mathbf{z}}\in\Lambda_{*}over^ start_ARG bold_z end_ARG ∈ roman_Λ start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT. | 𝐱∈ϕ(Λ∗)𝐱italic-ϕsubscriptΛ\mathbf{x}\,\in\,\phi(\Lambda_{*})bold_x ∈ italic_ϕ ( roman_Λ start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ) is in the same branch of zeros as
𝐱∗subscript𝐱\mathbf{x}_{*}bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT and, if a zero 𝐱~~𝐱\tilde{\mathbf{x}}over~ start_ARG bold_x end_ARG is in the same branch of 𝐱∗subscript𝐱\mathbf{x}_{*}bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT, | for computing a zero 𝐱∗subscript𝐱\mathbf{x}_{*}bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT of 𝐟𝐟\mathbf{f}bold_f at which the Jacobian
J(𝐱∗)𝐽subscript𝐱J(\mathbf{x}_{*})italic_J ( bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ) is of rank r𝑟ritalic_r particularly when 𝐱∗subscript𝐱\mathbf{x}_{*}bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT is on a branch of | \mathbf{0}italic_J start_POSTSUBSCRIPT rank- italic_r end_POSTSUBSCRIPT ( bold_x ) start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT bold_f ( bold_x ) = bold_0 implies 𝐱𝐱\mathbf{x}bold_x is a semiregular
zero of 𝐟𝐟\mathbf{f}bold_f in the same branch of 𝐱∗subscript𝐱\mathbf{x}_{*}bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT for every 𝐱∈Ω∗𝐱subscriptΩ\mathbf{x}\in\Omega_{*}bold_x ∈ roman_Ω start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT. | toward a semiregular zero 𝐱^^𝐱\hat{\mathbf{x}}over^ start_ARG bold_x end_ARG of 𝐱↦𝐟(𝐱,𝐲∗)maps-to𝐱𝐟𝐱subscript𝐲\mathbf{x}\,\mapsto\,\mathbf{f}(\mathbf{x},\mathbf{y}_{*})bold_x ↦ bold_f ( bold_x , bold_y start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT )
in the same branch of 𝐱∗subscript𝐱\mathbf{x}_{*}bold_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT. | A |
We bound the overall time complexity of ProfilePacking for serving a sequence of n𝑛nitalic_n items as a function of n,k𝑛𝑘n,kitalic_n , italic_k, and m𝑚mitalic_m. The initial phase of the algorithm, which involves computing the profile and its optimal packing, runs in time independent of n𝑛nitalic_n and does not impact the asymptotic time complexity. It is possible to find the optimal packing of the profile set using efficient exact heuristics such as (?, ?). If faster pre-processing is required, one can replace the exact optimal packing with an approximate packing using simple heuristics like FirstFitDecreasing (?), which has a competitive ratio of 11/911911/911 / 9. This will improve the empirical running time, while increasing the number of opened bins by the same ratio. Such an approach is also useful in settings where predictions are updated based on previously served items, and thus the packing of the profile set must be computed periodically. Overall, the worst-case time complexity of ProfilePacking is O(kmn)𝑂𝑘𝑚𝑛O(kmn)italic_O ( italic_k italic_m italic_n ). Note that each item is served in amortized time O(km)𝑂𝑘𝑚O(km)italic_O ( italic_k italic_m ), which is constant since k𝑘kitalic_k and m𝑚mitalic_m are constants. | We will now use Lemma 2 to prove a more general result that incorporates the prediction error into the analysis. To this end, we will relate the cost of the packing of ProfilePacking to the packing that the algorithm would output if the prediction were error-free, which will allow us to apply the result of Lemma 2. Specifically, we will argue that in the presence of prediction error, the cost of ProfilePacking may be affected in two ways: The number of bins in a single profile of ProfilePacking may increase, and more profiles may have to be opened. In the proof of the following theorem, for each of these two cases, we bound the number of additional opened bins as a function of error.
| As the prediction error grows, ProfilePacking may not be robust; we show, however, that this is an unavoidable price that any optimally-consistent algorithm with frequency predictions must pay. We thus design and analyze a more general class of hybrid algorithms that combine ProfilePacking and any one of the known robust online algorithms, and which offers a more balanced theoretical tradeoff between robustness and consistency.
|
In this section, we describe and analyze a more general class of algorithms which offer better robustness in comparison to ProfilePacking, at the expense of slightly worse consistency. To this end, we will combine ProfilePacking with any algorithm A𝐴Aitalic_A that has efficient worst-case competitive ratio, in the | We conclude that the robustness of ProfilePacking is close-to-optimal and no (1+ϵ)1italic-ϵ(1+\epsilon)( 1 + italic_ϵ )-consistent algorithm can do asymptotically better. It is possible, however, to obtain more general tradeoffs between consistency and robustness, as we discuss in the next section.
| C |
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperCloud(M) and HyperFlow(M) variants, that are capable of generating the meshes from the unit sphere. | In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-art AtlasNet (Groueix et al., 2018) where the prior shape is either a sphere or a set of patches. Furthermore, we also compare with l-GAN (Achlioptas et al., 2018) and PointFlow (Yang et al., 2019). We follow the experiment set-up in PointFlow and report performance in both CD and EMD in Table 2. Since these two metrics depend on the point clouds’ scale, we also report the upper bound in the "oracle" column. The upper bound is produced by computing the error between two different point clouds with the same number of points sampled from the same ground truth meshes. It can be observed that LoCondA-HC achieves competitive results with respect to reference solutions. All reference methods were trained in an autoencoding framework (non-generative variants), while both of LoCondA are preserving generative capabilities in the experiment.
|
In this experiment, we set N=105𝑁superscript105N=10^{5}italic_N = 10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT. Using more rays had a negligible effect on the output value of WT𝑊𝑇WTitalic_W italic_T but significantly slowed the computation. We compared AtlasNet with LoCondA applied to HyperCloud (HC) and HyperFlow (HF). We show the obtained results in Table 3. Note that AtlasNet cannot produce watertight meshes for any of the classes, limiting its applicability. On the other hand, LoCondA creates meshes where all sampled rays pass the test. | In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Throughout all experiments, we train models with Chamfer distance. We also set λ=0.0001𝜆0.0001\lambda=0.0001italic_λ = 0.0001. We denote LoCondA-HC when HyperCloud is used as the autoencoder architecture (Part A in Fig. 1) and LoCondA-HF for the HyperFlow version.
|
The results are presented in Table 1. LoCondA-HF obtains comparable results to the reference methods dedicated for the point cloud generation. It can be observed that values of evaluated measures for HyperFlow(P) and LoCondA-HF (uses HyperFlow(P) as a base model in the first part of the training) are on the same level. It means that incorporating an additional step (part B.) dedicated to mesh generation does not negatively influence our model’s generative capabilities. On the other hand, if we use HyperFlow to produce meshes directly using the procedure described in (Spurek et al., 2020b) (see results for HyperFlow(M)), the generative capabilities are significantly worse for considered evaluation metrics. | D |
For non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with O(1/N)𝑂1𝑁O(1/\sqrt{N})italic_O ( 1 / square-root start_ARG italic_N end_ARG ) convergence guarantee (N𝑁Nitalic_N is the number of communication rounds). |
We proposed a decentralized method for saddle point problems based on non-Euclidean Mirror-Prox algorithm. Our reformulation is built upon moving the consensus constraints into the problem by adding Lagrangian multipliers. As a result, we get a common saddle point problem that includes both primal and dual variables. After that, we employ the Mirror-Prox algorithm and bound the norms of dual variables at solution to assist the theoretical analysis. Finally, we demonstrate the effectiveness of our approach on the problem of computing Wasserstein barycenters (both theoretically and numerically). | Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section (3), we consider a SPP in proximal setup and introduce Lagrangian multipliers for the common variables. However, in the Section 3 we obtained results in a general setup without additional knowledge about cost functions and sets. On the contrary, in this section we utilize the special structure of the WB problem and introduce slightly different norms. After that, we get a convergence guarantee by applying Theorem 3.5.
|
For non-strongly convex-concave case, distributed SPP with local and global variables were studied in [41], where the authors proposed a subgradient-based algorithm for non-smooth problems with O(1/N)𝑂1𝑁O(1/\sqrt{N})italic_O ( 1 / square-root start_ARG italic_N end_ARG ) convergence guarantee (N𝑁Nitalic_N is the number of communication rounds). | Paper [61] introduced an Extra-gradient algorithm for distributed multi-block SPP with affine constraints. Their method covers the Euclidean case and the algorithm has O(1/N)𝑂1𝑁O(1/N)italic_O ( 1 / italic_N ) convergence rate.
Our paper proposes an algorithm based on adding Lagrangian multipliers to consensus constraints, which is analogical to [61], but our method works in a general proximal smooth setup and achieves O(1/N)𝑂1𝑁O(1/N)italic_O ( 1 / italic_N ) convergence rate. Moreover, it has an enhanced dependence on the condition number of the network. | D |
The remainder of this section is dedicated to express the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describe an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of interesting properties, and a conjecture in the slightly general case of a graph (not necessarily complete) that admits a star spanning tree. Section 5 explores programmatically the space of spanning trees to provide evidence that the conjecture is well posed. Section 6 collects the conclusions of the article. |
The length of a cycle is its number of edges. The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the strictly fundamental class context. In more concrete terms this problem is equivalent to finding the cycle basis with the sparsest cycle matrix. In [5] a unified perspective of the problem is presented. The authors show that the MCB problem is different in nature for each class. For example in [10] a remarkable reduction is constructed to prove that the MCB problem is NP-hard for the strictly fundamental class, while in [11] a polynomial time algorithm is given to solve the problem for the undirected class. Some applications of the MCB problem are described in [5, 11, 10, 12]. |
The study of cycles of graphs has attracted attention for many years. To mention just three well known results consider Veblen’s theorem [2] that characterizes graphs whose edges can be written as a disjoint union of cycles, Maclane’s planarity criterion [3] which states that planar graphs are the only to admit a 2-basis, or the polygon matroid in Tutte’s classical formulation of matroid theory [4]. | In this section we present some experimental results to reinforce
Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of graphs by random sampling instances. |
The set of cycles of a graph has a vector space structure over ℤ2subscriptℤ2\mathbb{Z}_{2}blackboard_Z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, in the case of undirected graphs, and over ℚℚ\mathbb{Q}blackboard_Q, in the case of directed graphs [5]. A basis of such a vector space is denoted cycle basis and its dimension is the cyclomatic number ν=|E|−|V|+|CC|𝜈𝐸𝑉𝐶𝐶\nu=|E|-|V|+|CC|italic_ν = | italic_E | - | italic_V | + | italic_C italic_C | where E𝐸Eitalic_E, V𝑉Vitalic_V ad CC𝐶𝐶CCitalic_C italic_C are the set of edges, vertices and connected components of the graph, resp. Given a cycle basis B𝐵Bitalic_B we can define its cycle matrix Γ∈K|E|×νΓsuperscript𝐾𝐸𝜈\Gamma\in K^{|E|\times\nu}roman_Γ ∈ italic_K start_POSTSUPERSCRIPT | italic_E | × italic_ν end_POSTSUPERSCRIPT where K𝐾Kitalic_K is the scalar field (i.e.: ℤ2subscriptℤ2\mathbb{Z}_{2}blackboard_Z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT or ℚℚ\mathbb{Q}blackboard_Q), as the matrix that has the cycles of B𝐵Bitalic_B as columns. | B |
(m+1)𝑚1(m+1)( italic_m + 1 )-tuples of ℱℱ\mathcal{F}caligraphic_F with nonempty intersection. In other words, πm+1(ℱ)subscript𝜋𝑚1ℱ\pi_{m+1}(\mathcal{F})italic_π start_POSTSUBSCRIPT italic_m + 1 end_POSTSUBSCRIPT ( caligraphic_F ) is at least δ′=defρ/(mtm+1)superscriptdefsuperscript𝛿′𝜌binomial𝑚𝑡𝑚1\delta^{\prime}\stackrel{{\scriptstyle\text{def}}}{{=}}\rho/\binom{mt}{m+1}italic_δ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_RELOP SUPERSCRIPTOP start_ARG = end_ARG start_ARG def end_ARG end_RELOP italic_ρ / ( FRACOP start_ARG italic_m italic_t end_ARG start_ARG italic_m + 1 end_ARG ), where ρ𝜌\rhoitalic_ρ depends only on m𝑚mitalic_m, t𝑡titalic_t, and δ𝛿\deltaitalic_δ, that is on m𝑚mitalic_m, b𝑏bitalic_b, K𝐾Kitalic_K and δ𝛿\deltaitalic_δ. That concludes the proof.
| The rest of Section 4.1 is devoted to the proof of Lemma 4.2. The proof first handles the case k=m𝑘𝑚k=mitalic_k = italic_m, and then uses it to prove the case k<m𝑘𝑚k<mitalic_k < italic_m. Note that for k>m𝑘𝑚k>mitalic_k > italic_m the lemma is trivial, as the chain group contains only a trivial chain and we can take N=ℓ𝑁ℓN=\ellitalic_N = roman_ℓ.
| Lemma 4.6 assumes that the m𝑚mitalic_m-colored family ℱℱ\mathcal{F}caligraphic_F has the property that for 0≤j<dimK0𝑗dimension𝐾0\leq j<\dim K0 ≤ italic_j < roman_dim italic_K and for every colorful subfamily 𝒢𝒢\mathcal{G}caligraphic_G of ℱℱ\mathcal{F}caligraphic_F, the j𝑗jitalic_jth reduced Betti number β~j(⋂F∈𝒢F)subscript~𝛽𝑗subscript𝐹𝒢𝐹\tilde{\beta}_{j}(\bigcap_{F\in\mathcal{G}}F)over~ start_ARG italic_β end_ARG start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( ⋂ start_POSTSUBSCRIPT italic_F ∈ caligraphic_G end_POSTSUBSCRIPT italic_F ) is strictly less than b𝑏bitalic_b. A careful inspection of the proof reveals that this assumption is only used in the induction step, for the definition of the labeling hℎhitalic_h in Equation (7). When proving that (Pℓ)subscript𝑃ℓ(P_{\ell})( italic_P start_POSTSUBSCRIPT roman_ℓ end_POSTSUBSCRIPT ) implies (Pℓ+1)subscript𝑃ℓ1(P_{\ell+1})( italic_P start_POSTSUBSCRIPT roman_ℓ + 1 end_POSTSUBSCRIPT ), the face σ𝜎\sigmaitalic_σ appearing in Equation (7) is (ℓ+1)ℓ1(\ell+1)( roman_ℓ + 1 )-dimensional, so
| If we use Lemma 4.8 in place of Lemma 4.6 in the proof of Theorem 2.1, the hypothesis on the m𝑚mitalic_m-colored family ℱℱ\mathcal{F}caligraphic_F can be weakened. This “improved” Theorem 2.1 can in turn be applied in the proof of Theorem 1.2, yielding the following:
| a positive fraction of the m𝑚mitalic_m-tuples to have a nonempty intersection, where for dimK>1dimension𝐾1\dim K>1roman_dim italic_K > 1, m𝑚mitalic_m is some hypergraph Ramsey number depending on b𝑏bitalic_b and K𝐾Kitalic_K.
So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the (μ(K)+1)𝜇𝐾1(\mu(K)+1)( italic_μ ( italic_K ) + 1 )-tuples intersect, then a positive fraction of the m𝑚mitalic_m-tuples intersect. This follows from successive applications of Theorem 1.2. (Note that [35, Theorem 2.3] still needs to be proven independently to provide a stopping point for the successive applications of Theorem 1.2; also, the implicit bound given by the proof of [35, Theorem 2.3] on the constant β𝛽\betaitalic_β changes in the process.) | C |
The selected features are highlighted in the dark gray color (because it matches the default color, which is gray) of the VIF metric’s region, as demonstrated in Fig. 1(d), and the combinations are generated for the two or three selected features automatically, as can be seen in Fig. 1(b). It is up to the user to select the best generation; however, he/she is being assisted by the automated selection techniques, as described in Section 4.2.
In the future, we plan to support custom transformation and generation of features (see Section 6). | Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. Similarly, we focus more on the correctly-classified instances by changing the position of the right gray line from 75% to 65% (cf. Fig. 5(a.3 and a.4)). From the table heatmap view in Fig. 5(b), we realize that F13 and then F3 can be excluded from the features list. For the remaining features, we have to validate our hypotheses through the statistical measures of the radial tree visualization. We check how F8, F16, F5, and F10 perform in various data subspaces, as shown in Fig. 5(c.1–c.4). All these features have rather low MI in the All space due to light blue color. Hence, the difference is mainly in the linear correlation of those features with the dependent variable. F8 appears the least correlated with the target variable (small circular bar). F16 is similar to F8 regarding correlation, except for the Good subspace in Fig. 5(c.3).
Thus, these two features should be removed. | Next, we focus on the overall inspection of features for all instances (see Fig. 3(d.1–d.4)).
F4 (the ellipsoid shape) appears the worst in terms of target correlation (the small circular bar), and it has one of the lowest MI values (light blue color). | To the best of our knowledge, little empirical evidence exists for choosing a particular measure over others. In general, target correlation and mutual information (both related to the influence between features and the dependent variable) may be good candidates for identifying important features [71]. After these first indicators, the remaining collinearity measures can be useful too. Although it can be claimed that a high level of correlation between features is from 0.9 and above [72], no precise rules exist for judging the significance of the collinearity (see our discussion on the variance influence factor in Section 4.4). To accomodate for these challenges, a variety of measures are simultaneously visualized in our tool.
In detail, for the Data Space panel, the predicted probabilities should become dark or light green. The more important a feature, it is more likely to impact the outcome of the ML model, thus, in the Feature Selection Techniques panel, the normalized importance should be mostly green or close to green colors. The Feature Space Overview panel contains a subset of the measures present in the Feature Space Detail panel. For the former, target correlation (COR) should be—as high as possible—depicted with a full circle bar. The same applies to mutual information (MI), but the indication here is the dark or darker blue color. For the difference between features correlation, the optimal is to reduce, hence, green is the expected outcome. The same encodings also apply for the latter panel with the addition of the variance influence factor (VIF), per class correlation (COR), and between features correlation (Fs). The per class correlation is the only of the three measurements that should be substantially high, which is illustrated with long bars for the horizontal bar chart. The remaining should both be decreasing, as a result, the gray color visible in this panel should be diminishing—as much as possible. Finally for the Process Tracker and Predictive Results panel, moving forward to the next exploration step is mapped to bigger in size circles with brown color used for the best setup of features explored so far. | By comparing the lengths of the circular bars in Fig. 3(e), we see that the lowest overall target correlation is reported for F4 (on hover shown as 10%).
Also, the MI exhibits low values for both F3 and F4. As we proceed, we observe that F3, F4, and F6 may cause problems regarding collinearity based on the VIF heuristic (2 out of 4 pieces of the symbol). Additionally, the per-class correlation is 12% for the fine label, and weak for the other classes. Finally, the correlation between F4 and F6 (or F4 and F3) is above 0.6; thus, it can be considered moderate to strong. | D |
We set the mean functions as μ(j)=0superscript𝜇(j)0\mu^{{\scalebox{0.65}{(j)}}}=0italic_μ start_POSTSUPERSCRIPT (j) end_POSTSUPERSCRIPT = 0, j=0,1,2𝑗012j=0,1,2italic_j = 0 , 1 , 2 [21]. However, if we are given some prior information on the shape and structure of gjsubscript𝑔𝑗g_{j}italic_g start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT, j=0,1,2𝑗012j=0,1,2italic_j = 0 , 1 , 2, e.g., from similar experiments or numerical simulations, we can employ other options for the mean functions [21].
| which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combination of the identified system model with the contouring terms. In our approach the tracking error is coupled with the progression along the path through the cost function. The automated tuning of the parameters is performed using a cost that accounts for the global performance over the whole trajectory. Additional constraints in the Bayesian optimization algorithm allow for balancing traversal time, accuracy, and minimization of oscillations, according to the specific crucial requirements of the application. We demonstrate enhanced performance in simulation for a 2-axis gantry, for geometries of different nature.
| This paper demonstrated a hierarchical contour control implementation for the increase of productivity in positioning systems. We use a contouring predictive control approach to optimize the input to a low level controller. This control framework requires tuning of multiple parameters associated with an extensive number of iterations. We propose a sample-efficient joint tuning algorithm, where the performance metrics associated with the full geometry traversal are modelled as Gaussian processes, and used to form the cost and the constraints in a constrained Bayesian optimization algorithm, where they enable the trade-off between fast traversal, high tracking accuracy, and suppression of vibrations in the system. Data-driven tuning of all the parameters compensates for model imperfections and results in improved performance.
Our numerical results demonstrate that tuning the parameters of the MPCC stage achieves the best performance in terms of time and tracking accuracy. |
We use two geometries to evaluate the performance of the proposed approach, an octagon geometry with edges in multiple orientations with respect to the two axes, and a curved geometry (infinity shape) with different curvatures, shown in Figure 4. We have implemented the simulations in Matlab, using Yalmip/Gurobi to solve the corresponding MPCC quadratic program in a receding horizon fashion and the GPML library for Gaussian process modeling. We compare three schemes: manual tuning of the MPCC parameters for fixed low level controller gains, Tuning of MPCC parameters through Bayesian optimization, and joint tuning of the MPCC- and the low-level cascade controller parameters using Bayesian optimization. | For the initialization phase needed to train the GPs in the Bayesian optimization, we select 20 samples over the whole range of MPC parameters, using Latin hypercube design of experiments. The BO progress is shown in Figure 5, right pannel, for the optimization with constraints on the jerk and on the tracking error. After the initial learning phase the algorithm quickly finds the region where the simulation is feasible with respect to the constraints. The confidence interval in the cost prediction narrows for the infinity shaped trajectory, which is likely due to a more clear minimum in the cost of this geometry. The optimization stops after a fixed number of iterations is reached, and the parameters are set to those corresponding to the best observed cost.
| C |
It is unknown how well the methods scale up to multiple sources of biases and large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and individual variables that lead to hundreds and thousands of groups for GQA and compare them with the implicit methods. For Biased MNISTv1, we first sort the seven total variables in the descending order of MMD (obtained by StdM) and then conduct a series of experiments. In the first experiment, the most exploited variable, distractor shape, is used as the explicit bias. In the second experiment, the two most exploited variables, distractor shape and texture, are used as explicit biases. This is repeated until all seven variables are used333The exact order is given in the Appendix.. Note that conducting the seventh experiment entails annotating each instance with every possible source of bias. While this may not be realistic in practice, such a controlled setup will reveal if the explicit methods can generalize when they have complete information about every bias source. | To test scalability on a natural dataset, we conduct four experiments per explicit method on GQA-OOD with the explicit bias variables: a) head/tail (2 groups), b) answer class (1833 groups), c) global group (115 groups), and d) local group (133328 groups). Unlike Biased MNISTv1, we do not test with combinations of these variables since the last three variables already entail generalization to many groups.
| We use the GQA visual question answering dataset [33] to highlight the challenges of using bias mitigation methods on real-world tasks. It has multiple sources of biases including imbalances in answer distribution, visual concept co-occurrences, question word correlations, and question type/answer distribution. It is unclear how the explicit bias variables should be defined so that the methods can generalize to all minority groups. GQA-OOD [36] divides the evaluation and test sets into majority (head) and minority (tail) groups based on the answer frequency within each ‘local group’ (e.g., colors of bags), which is a unique combination of ‘global group’ or answer type (e.g., objects or colors) and the main concept asked in the question (e.g., ‘bag’, ‘chair’, etc.). The head/tail categorization makes analysis easier; however, it is unclear how one should specify the explicit biases so that the models generalize even to the rarest of local groups. Therefore, we explore multiple ways of defining the explicit bias variable in separate experiments: a) majority/minority group label (2 groups), b) answer class (1833 groups), c) global group (115 groups) and d) local group (133328 groups). It is unknown if bias mitigation methods can scale to hundreds and thousands of groups in GQA, yet natural tasks require such an ability.
| Results for GQA-OOD are similar, with explicit methods failing to scale up to a large number of groups, while implicit methods showing some improvements over StdM. As shown in Table 2, when the number of groups is small, i.e., when using a head/tail binary indicator as the explicit bias, explicit methods remain comparable or even outperform StdM, but when the number of groups grow to hundreds and thousands, they fail. IRMv1 and GDRO obtain the highest improvements of 2.4% and 2.1% over StdM, respectively, with the binary head/tail bias, but they show large drops when using answer class, global group or local group as explicit bias variables. Some drops are extreme, e.g., RUBi drops 39% when using global group as the explicit bias variable.
|
where, |ai|subscript𝑎𝑖|a_{i}|| italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT | is the number of instances for answer aisubscript𝑎𝑖a_{i}italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT in the given group, μ(a)𝜇𝑎\mu(a)italic_μ ( italic_a ) is the mean number of answers in the group and β𝛽\betaitalic_β can be used to control the tail size. In Fig. A9, we plot the tail accuracies at different tail sizes, considering different explicit bias variables for the explicit methods. For implicit methods: StdM, LFF and SD, same tail accuracies are repeated on all four charts since they are not affected by the choice of explicit variables during training. Explicit methods fail when the explicit variables entail generalization to large number of groups, whereas implicit methods are close to or above StdM. | A |
In this paper, we provide a systematic review of appearance-based gaze estimation methods using deep learning algorithms.
As shown in Fig. 1, we discuss these methods from four perspectives: 1) deep feature extraction, 2) deep neural network architecture design, 3) personal calibration, and 4) device and platform. | In this survey, we present a comprehensive overview of deep learning-based gaze estimation methods. Unlike the conventional gaze estimation methods that requires dedicated devices, the deep learning-based approaches regress the gaze from the eye appearance captured by web cameras. This makes it easy to implement the algorithm in real world applications.
We introduce the gaze estimation method from four perspectives: deep feature extraction, deep neural network architecture design, personal calibration as well as device and platform. | In this paper, we provide a systematic review of appearance-based gaze estimation methods using deep learning algorithms.
As shown in Fig. 1, we discuss these methods from four perspectives: 1) deep feature extraction, 2) deep neural network architecture design, 3) personal calibration, and 4) device and platform. | From the deep feature extraction perspective, we describe the strategies for extracting features from eye images, face images and videos.
Under the deep neural network architecture design perspective, we first review methods based on the supervised strategy, containing the supervised, self-supervised, semi-supervised and unsupervised methods. | Convolutional neural networks have been widely used in many compute vision tasks [88]. They also demonstrate superior performance in the field of gaze estimation.
In this section, we first review the existing gaze estimation methods from the learning strategy perspective, i.e., the supervised CNNs and the semi-/self-/un-supervised CNNs. Then we introduce the different network architectures,i.e., multi-task CNNs and the recurrent CNNs for gaze estimation. In the last part of this section, we discuss the CNNs that integrate prior knowledge to improve performance. | C |
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source library introduced in king2009dlib . According to the eye locations, we apply a 2D rotation to make them horizontal as presented in Fig. 2. | he2016deep has been successfully used in various pattern recognition tasks such as face and pedestrian detection mliki2020improved . It containing 50 layers trained on the ImageNet dataset. This network is a combination of Residual network integrations and Deep architecture parsing. Training with ResNet-50 is faster due to the bottleneck blocks. It is composed of five convolutional blocks with shortcuts added between layers. The last convolution layer is used to extract Deep Residual Features (DRF). Fig. 6 shows the architecture of the ResNet-50 model.
| The next step is to apply a cropping filter in order to extract only the non-masked region. To do so, we firstly normalize all face images into 240 ×\times× 240 pixels. Next, we partition a face into blocks. The principle of this technique is to divide the image into 100 fixed-size square blocks (24 ×\times× 24 pixels in our case). Then we extract only the blocks including the non-masked region (blocks from number 1 to 50). Finally, we eliminate the rest of the blocks as presented in Fig. 3.
| Experimental results are carried out on Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD) presented in wang2020masked . We start by localizing the mask region. To do so, we apply a cropping filter in order to obtain only the informative regions of the masked face (i.e. forehead and eyes). Next, we describe the selected regions using a pre-trained deep learning model as a feature extractor. This strategy is more suitable in real-world applications comparing to restoration approaches. Recently, some works have applied supervised learning on the missing region to restore them such as in din2020novel . This strategy, however, is a difficult and highly time-consuming process.
|
The images of the used dataset are already cropped around the face, so we don’t need a face detection stage to localize the face from each image. However, we need to correct the rotation of the face so that we can remove the masked region efficiently. To do so, we detect 68 facial landmarks using Dlib-ml open-source library introduced in king2009dlib . According to the eye locations, we apply a 2D rotation to make them horizontal as presented in Fig. 2. | B |
If ⋅⊢C::Δ\cdot\vdash C::\Delta⋅ ⊢ italic_C : : roman_Δ, then C𝐶Citalic_C terminates, i.e., either C𝐶Citalic_C is final or, inductively, C′superscript𝐶′C^{\prime}italic_C start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT terminates for all reducts C′superscript𝐶′C^{\prime}italic_C start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT of C𝐶Citalic_C. |
Moreover, some prior work, which is based on sequential functional languages, encodes recursion via various fixed point combinators that make both mixed inductive-coinductive programming [Bas18] and substructural typing difficult, the latter requiring the use of the ! modality [Wad12]. Thus, like FωcopsuperscriptsubscriptF𝜔cop\textsf{F}_{\omega}^{\textsf{cop}}F start_POSTSUBSCRIPT italic_ω end_POSTSUBSCRIPT start_POSTSUPERSCRIPT cop end_POSTSUPERSCRIPT [AP16], we consider a signature of parametric recursive definitions. However, we make typing derivations for recursive programs infinitely deep by unfolding recursive calls ad infinitum [Bro05, LR19], which is not only more elegant than finitary typing, but also simplifies our termination argument. To prove termination of program reduction, we observe that arithmetically closed typing derivations, which have no free arithmetic variables or constraint assumptions, can be translated to infinitely wide but finitely deep trees of a different judgment. The resulting derivations are then the induction target for our proof, leaving the option of making the original typing judgment arbitrarily rich. Thus, although our proposed language is not substructural, this result extends to programs that use their data substructurally. In short, our contributions are as follows: | Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was generalized to support coinductive types as well [Sac14]. We present, to our knowledge, the first sized type system for a concurrent programming language as well as the first system to combine both features from above. As we mentioned in the introduction, we use unbounded quantification [Vez15] in lieu of transfinite sizes to represent (co)data of arbitrary height and depth. However, the state of the art [Abe12, AP16, CLB23] supports polymorphic, higher-kinded, and dependent types, which we aim to incorporate in future work.
| Sized types are compositional: since termination checking is reduced to an instance of typechecking, we avoid the brittleness of syntactic termination checking. However, we find that ad hoc features for implementing size arithmetic in the prior work can be subsumed by more general arithmetic refinements [DP20b, XP99], giving rise to our notion of sized type refinements that combine the “good parts” of modern sized type systems. First, the instances of constraint conjunction and implication to encode inductive and coinductive types, respectively, in our system are similar to the bounded quantifiers in MiniAgda [Abe12], which gave an elegant foundation for mixed inductive-coinductive functional programming, avoiding continuity checking [Abe08]. Unlike the prior work, however, we are able to modulate the specificity of type signatures: (slight variations of) those in Example 1 are given in CICℓ^CIC^ℓ\textsf{CIC}\widehat{\phantom{}{}_{\ell}}CIC over^ start_ARG start_FLOATSUBSCRIPT roman_ℓ end_FLOATSUBSCRIPT end_ARG [Sac14] and MiniAgda [Abe12, Abe]. Furthermore, we avoid transfinite indices in favor of permitting some unbounded quantification (following Vezzosi [Vez15]), achieving the effect of somewhat complicated infinite sizes without leaving finite arithmetic.
|
Our system is closely related to the sequential functional language of Lepigre and Raffalli [LR19], which utilizes circular typing derivations for a sized type system with mixed inductive-coinductive types, also avoiding continuity checking. In particular, their well-foundedness criterion on circular proofs seems to correspond to our checking that sizes decrease between recursive calls. However, they encode recursion using a fixed point combinator and use transfinite size arithmetic, both of which we avoid as we explained in the introduction. Moreover, our metatheory, which handles infinite typing derivations (via mixed induction-coinduction at the meta level), seems to be both simpler and more general since it does not have to explicitly rule out non-circular derivations. Nevertheless, we are interested in how their innovations in polymorphism and Curry-style subtyping can be integrated into our system, especially the ability to handle programs not annotated with sizes. | D |
where 𝐆¯=𝐁m𝐆¯𝐆superscript𝐁𝑚𝐆{\bar{\mathbf{G}}}={{\mathbf{B}}^{m}}{\mathbf{G}}over¯ start_ARG bold_G end_ARG = bold_B start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT bold_G. It is clear from Eq. (3) that the fingerprint 𝐛ksubscript𝐛𝑘\mathbf{b}_{k}bold_b start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT has been successfully embedded into the original media content 𝐦𝐦\mathbf{m}bold_m under the modulation of the secret matrix 𝐆¯¯𝐆\bar{\mathbf{G}}over¯ start_ARG bold_G end_ARG. | Judge. The judge is a trusted entity who is only responsible for arbitration in the case of illegal redistribution, as in existing traitor tracing systems [10, 11, 12, 13, 14, 3]. After receiving the owner’s request for arbitration, the judge makes a fair judgment based on the evidence provided by the owner. Although only the encrypted version of the user’s watermark is disclosed, the encrypted watermark can be converted into a ciphertext that can be decrypted by the judge based on PRE (for details, please see Figs. 3 and 4), thus enabling traitor tracing. Once the judge detects a copyright infringement, the unfaithful user will be prosecuted in accordance with the law.
| The whole FairCMS-I scheme is summarized as follows.
First, suppose an owner rents the cloud’s resources for media sharing, the owner and the cloud execute Part 1 as shown in Fig. 2. Then, suppose the k𝑘kitalic_k-th user makes a request indicating that he/she wants to access one of the owner’s media content 𝐦𝐦\mathbf{m}bold_m, the involved entities execute Part 2 after the k𝑘kitalic_k-th user is authorized by the owner as shown in Fig. 3. Once a suspicious media content copy 𝐦~ksuperscript~𝐦𝑘{\tilde{\mathbf{m}}}^{k}over~ start_ARG bold_m end_ARG start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT is detected, the owner resorts to the judge for violation arbitration, i.e., the owner and the judge jointly execute Part 3 as shown in Fig. 4. | Upon the detection of a suspicious media content copy 𝐦~ksuperscript~𝐦𝑘\tilde{\mathbf{m}}^{k}over~ start_ARG bold_m end_ARG start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT, the owner resorts to the judge for violation identification. To this end, the proofs that the owner needs to provide the judge includes the original media content 𝐦𝐦\mathbf{m}bold_m with no fingerprints embedded, the corresponding secret matrix 𝐆¯¯𝐆\bar{\mathbf{G}}over¯ start_ARG bold_G end_ARG, and the set ℱℱ\mathcal{F}caligraphic_F that holds all the users’ fingerprints. Among them, 𝐆¯¯𝐆\bar{\mathbf{G}}over¯ start_ARG bold_G end_ARG and ℱℱ\mathcal{F}caligraphic_F is available for download by the owner from the cloud. It is worth mentioning that the fingerprints stored in set ℱℱ\mathcal{F}caligraphic_F are encrypted by the judge’s public key PKJ𝑃subscript𝐾𝐽PK_{J}italic_P italic_K start_POSTSUBSCRIPT italic_J end_POSTSUBSCRIPT, so the user’s fingerprint will not be leaked to the owner and the cloud, but the plaintext of the fingerprints can be decrypted by the judge with his/her own private key SKJ=c2𝑆subscript𝐾𝐽subscript𝑐2SK_{J}=c_{2}italic_S italic_K start_POSTSUBSCRIPT italic_J end_POSTSUBSCRIPT = italic_c start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. With these materials, the judge can make a fair judgment by Eq. (4) or (5).
| Once a copyright dispute occurs between the owner and the user, they delegate a judge that is credible for both parties to make a fair arbitration. Due to the possible noise effect during data transmission, the received suspicious media content copy is assumed to be contaminated by the an additive noise 𝐧𝐧\mathbf{n}bold_n, i.e.,
| D |
The feature embeddings described in Section 3.1 are taken as the initial feature embeddings of GraphFM, i.e., ei(1)=eisubscriptsuperscripte1𝑖subscripte𝑖\textbf{e}^{(1)}_{i}=\textbf{e}_{i}e start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = e start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, where 𝐞i(k)subscriptsuperscript𝐞𝑘𝑖\mathbf{e}^{(k)}_{i}bold_e start_POSTSUPERSCRIPT ( italic_k ) end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT stands for the updated feature embeddings at k𝑘kitalic_k-th layer.
Since no edge information is given, we need to select the edges (beneficial interactions) by the interaction selection component first. | In summary, when dealing with feature interactions, FM suffers intrinsic drawbacks. We thus propose a novel model Graph Factorization Machine (GraphFM), which takes advantage of GNN to overcome the problems of FM for feature interaction modeling.
By treating features as nodes and feature interactions as the edges between them, the selected beneficial feature interactions can be viewed as a graph. We thus devise a novel technique to select the beneficial feature interactions, which is also to infer the graph structure. Then we adopt an attentional aggregation strategy to aggregate these selected beneficial interactions to update the feature representations. | At each layer of GraphFM, we select the beneficial feature interactions and treat them as edges in a graph. Then we utilize a neighborhood/interaction aggregation operation to encode the interactions into feature representations.
By design, the highest order of feature interaction increases at each layer and is determined by layer depth, and thus the feature interactions of order up to the highest can be learned. |
GraphFM(-S): interaction selection is the first component in each layer of GraphFM, which selects only the beneficial feature interactions and treat them as edges. As a consequence, we can model only these beneficial interactions with the next interaction aggregation component. To check the necessity of this component, we remove this components, so that all pair of feature interactions are modeled as a fully-connected graph. | Then we aggregate these selected feature interactions to update feature embeddings in the neighborhood aggregation component.
Within each k𝑘kitalic_k-th layer, we are able to select and model only the beneficial k𝑘kitalic_k-th order feature interactions and encode these factorized interactions into feature representations. | D |
We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG Tsuji et al. [2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes when minimizing generalized self-concordant functions.
| Complexity comparison: Number of iterations needed to reach a solution with h(𝐱)ℎ𝐱h(\mathbf{x})italic_h ( bold_x ) below ε𝜀\varepsilonitalic_ε for Problem 1.1 for Frank-Wolfe-type algorithms in the literature. The asterisk on FW-LLOO highlights the fact that the procedure is different from the standard LMO procedure. The complexity shown for the FW-LLOO, ASFW-GSC, and B-AFW algorithms only apply to polyhedral domains, with the additional requirement that for the former two we need an explicit polyhedral representation of the domain (see Assumption 3 in Dvurechensky et al. [2022]), whereas the latter only requires an LMO.
The requirement that we have an explicit polyhedral representation may be limiting, for instance for the matching polytope over non-bipartite graphs, as the size of the polyhedral representation in this case depends exponentially on the number of nodes of the graph [Rothvoß, 2017]. We use the superscript ††\dagger† to indicate that the same complexities hold when reaching an ε𝜀\varepsilonitalic_ε-optimal solution in g(𝐱)𝑔𝐱g(\mathbf{x})italic_g ( bold_x ), and the superscript ‡‡\ddagger‡ to indicate that constants in the convergence bounds depend on user-defined inputs. | the second-order step size and the LLOO algorithm from Dvurechensky et al. [2022] (denoted by GSC-FW and LLOO in the figures) and the Frank-Wolfe and the Away-step Frank-Wolfe algorithm with the backtracking stepsize of Pedregosa et al. [2020],
denoted by B-FW and B-AFW respectively. |
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of 𝒪(1/t)𝒪1𝑡\mathcal{O}(1/t)caligraphic_O ( 1 / italic_t ). The idea of the proof is very similar to the one in Jaggi [2013]. In a nutshell, as the primal progress per iteration is directly related to the step size times the Frank-Wolfe gap, we know that the Frank-Wolfe gap cannot remain indefinitely above a given value, as otherwise we would obtain a large amount of primal progress, which would make the primal gap become negative. This is formalized in Theorem 2.6. |
Research reported in this paper was partially supported through the Research Campus Modal funded by the German Federal Ministry of Education and Research (fund numbers 05M14ZAM,05M20ZBM) and the Deutsche Forschungsgemeinschaft (DFG) through the DFG Cluster of Excellence MATH+. We would like to thank the anonymous reviewers for their suggestions and comments. | D |
Here, we make the observation that by combining the prefixes of P𝑃Pitalic_P and P′superscript𝑃′P^{\prime}italic_P start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT until the edge ajsubscript𝑎𝑗a_{j}italic_a start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT, we obtain an augmenting path.
On a high level, our approach is to show that α𝛼\alphaitalic_α can stop exploring the path P𝑃Pitalic_P further and our algorithm will, eventually, either find an augmentation between α𝛼\alphaitalic_α and γ𝛾\gammaitalic_γ, or it will find some other “good” augmentation that intersects P𝑃Pitalic_P. | For the rest of the graph, [EKMS12] show that it is enough to store the length of the shortest alternating path that has reached each matched edge. This length is called label.
In the first challenge, we considered the possibility that a vertex γ𝛾\gammaitalic_γ “blocks” the DFS exploration of α𝛼\alphaitalic_α and discussed how this implies an augmenting path between γ𝛾\gammaitalic_γ and α𝛼\alphaitalic_α. | Therefore, we have an augmenting path from γ𝛾\gammaitalic_γ to α𝛼\alphaitalic_α, which will be detected in Algorithm 3 of Algorithm 3.
This implies that the augmenting path α−β𝛼𝛽\alpha-\betaitalic_α - italic_β will be removed from the graph in Pass-Bundle τ𝜏\tauitalic_τ. | If the alternating path Pγsubscript𝑃𝛾P_{\gamma}italic_P start_POSTSUBSCRIPT italic_γ end_POSTSUBSCRIPT starting from γ𝛾\gammaitalic_γ was of length i′>isuperscript𝑖′𝑖i^{\prime}>iitalic_i start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT > italic_i, then it could be that γ𝛾\gammaitalic_γ did not find β𝛽\betaitalic_β since we truncate the DFS at length 1/ε1𝜀1/\varepsilon1 / italic_ε.
In this case, α𝛼\alphaitalic_α continues its search over aisubscript𝑎𝑖a_{i}italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT since there is still hope to find free vertices that were not found by γ𝛾\gammaitalic_γ. | Nodes α𝛼\alphaitalic_α, β𝛽\betaitalic_β, and γ𝛾\gammaitalic_γ are free. The black single-segments are unmatched and black (full) double-segments are matched edges. The path P′superscript𝑃′P^{\prime}italic_P start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT corresponding to a DFS branch of γ𝛾\gammaitalic_γ is shown by the red solid spline. Since the edge a5subscript𝑎5a_{5}italic_a start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT is part of the path, the current DFS branch of γ𝛾\gammaitalic_γ cannot be extended up to the free node β𝛽\betaitalic_β along the dashed blue line. Furthermore, the path from γ𝛾\gammaitalic_γ to the edge a3subscript𝑎3a_{3}italic_a start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT can potentially “block" a longer DFS search path of α𝛼\alphaitalic_α illustrated with a solid blue line. However, the edges along the DFS searches of α𝛼\alphaitalic_α and γ𝛾\gammaitalic_γ can be combined to find an augmenting path between α𝛼\alphaitalic_α and γ𝛾\gammaitalic_γ.
| A |
\bm{\mathit{A}}}\right)^{k}\overline{\bm{\mathit{v}}}^{\prime}_{1:4},over~ start_ARG bold_italic_d end_ARG start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 : 4 end_POSTSUBSCRIPT ≤ italic_σ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT italic_ρ ( over~ start_ARG bold_italic_A end_ARG ) start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT over¯ start_ARG bold_italic_v end_ARG start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 : 4 end_POSTSUBSCRIPT ,
for any k≥0𝑘0k\geq 0italic_k ≥ 0, which completes the proof. |
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving communication costs. | In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/𝒜ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B method [24, 25].
In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies. | In this paper, we consider decentralized optimization over general directed networks and propose a novel Compressed Push-Pull method (CPP) that combines Push-Pull/𝒜ℬ𝒜ℬ\mathcal{A}\mathcal{B}caligraphic_A caligraphic_B with a general class of unbiased compression operators. CPP enjoys large flexibility in both the compression method and the network topology. We show CPP achieves linear convergence rate under strongly convex and smooth objective functions.
| In this paper, we proposed two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed topology. First, we consider a novel communication-efficient gradient tracking based method, termed CPP, that combines the Push-Pull method with communication compression. CPP can be applied to a general class of unbiased compression operators and achieves linear convergence for strongly convex and smooth objective functions.
Second, we consider a broadcast-like version of CPP (B-CPP) which also achieves linear convergence rate for strongly convex and smooth objective functions. B-CPP can be applied in an asynchronous broadcast setting and further reduce communication costs compared to CPP. | B |
One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing
[10, 11]. Recently, significant attention was devoted to saddle problems in machine learning. For example, Generative Adversarial Networks (GANs) are written as a min-max problem [12]. In addition, there are many popular examples: robust models with adversarial noise [13], | One can note a branch of recent work devoted to solving non-smooth problems by reformulating them as saddle point problems [8, 9], as well as applying such approaches to image processing
[10, 11]. Recently, significant attention was devoted to saddle problems in machine learning. For example, Generative Adversarial Networks (GANs) are written as a min-max problem [12]. In addition, there are many popular examples: robust models with adversarial noise [13], | To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms and derives the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detailed comparison with them in Appendix C. Due to the fact that we consider a personalized setting, we can have a significant gain in communications. For example, when λ=0𝜆0\lambda=0italic_λ = 0 or small enough in (1) the importance of local models increases and we may communicate less frequently.
We now outline the main contribution of our work as follows (please refer also Table 1 for an overview of the results): |
Furthermore, there are a lot of personalized federated learning problems utilize saddle point formulation. In particular, Personalized Search Generative Adversarial Networks (PSGANs) [22]. As mentioned in examples above, saddle point problems often arise as an auxiliary tool for the minimization problem. It turns out that if we have a personalized minimization problem, and then for some reason (for example, to simplify the process of the solution or to make the learning more stable and robust) rewrite it in the form of a saddle point problem, then we begin to have a personalized saddle point problem. We refer the reader to Section D for more details. |
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the lower bounds both on the communication and the number of local oracle calls required to solve problem (1). Furthermore, we have developed the novel methods (Algorithm 1, Algorithm 2, Algorithm 3) for this problem that are optimal up to logarithmic factor in certain scenarios (see Table 1). These algorithms are based on sliding or variance reduction techniques. The theoretical analysis and experimental evidence corroborate our methods. Moreover, we have customized our approach for neural network training. | C |
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini (Coarse) Correlated Equilibrium (MG(C)CE) and in Section 4 we thoroughly explore its properties including tractability, scalability, invariance, and a parameterized family of solutions. In Section 5 we propose a novel training algorithm, Joint Policy-Space Response Oracles (JPSRO), to train policies on n-player, general-sum extensive form games. JPSRO requires the solution of a meta-game, and we propose using MG(C)CE as a meta-solver. We prove that the resulting algorithm converges to a normal form (C)CE in the extensive form game. In Section 6 we conduct an empirical study and show convergence rates and social welfare across a variety of games including n-player, general-sum, and common-payoff games. | An important area of related work is α𝛼\alphaitalic_α-Rank (Omidshafiei et al., 2019) which also aims to provide a tractable alternative solution in normal form games. It gives similar solutions to NE in the two-player, constant-sum setting, however it is not directly related to NE or (C)CE. α𝛼\alphaitalic_α-Rank has also been applied to ranking agents and as a meta-solver for PSRO (Muller et al., 2020). MG(C)CE is inspired by Maximum Entropy Correlated Equilibria (MECE) (Ortiz et al., 2007), an entropy maximizing CE based on Shannon’s entropy that is harder to compute than Gini impurity.
|
This highlights the main drawback of MW(C)CE which does not select for unique solutions (for example, in constant-sum games all solutions have maximum welfare). One selection criterion for NEs is maximum entropy Nash equilibrium (MENE) (Balduzzi et al., 2018), however outside of the two-player constant-sum setting, these are generally not easy to compute (Daskalakis et al., 2009). CEs exist in a convex polytope, so any convex function can select among them. Maximum entropy correlated equilibrium (MECE) (Ortiz et al., 2007) is limited to full-support solutions, which may not exist when ϵ=0italic-ϵ0\epsilon=0italic_ϵ = 0, and can be hard to solve in practice. Therefore, there is a gap in the literature for a computationally tractable, unique, solution concept and this work proposes MG(C)CE fills this gap. | The set of (C)CEs forms a convex polytope, and therefore any strictly convex function could uniquely select amongst this set. The literature only provides one such example: MECE (Ortiz et al., 2007) which has a number of appealing properties, but was found to be slow to solve large games. There is a gap in the literature for a more tractable approach, and propose to use the Gini impurity (GI) (Breiman et al., 1984; Bishop, 2006). GI is a member of Tsallis entropy family, a generalized entropy that is equivalent to GI under a certain parameterization. It is maximized when the probability mass function is uniform σ=1|𝒜|𝜎1𝒜\sigma=\frac{1}{|\mathcal{A}|}italic_σ = divide start_ARG 1 end_ARG start_ARG | caligraphic_A | end_ARG and minimized when all mass is on a single outcome. GI is popular in decision tree classification algorithms because it is easy to compute (Breiman et al., 1984). We call the resulting solution concept maximum Gini (coarse) correlated equilibrium (MG(C)CE). This approach has connections to maximum margin (Cortes & Vapnik, 1995) and maximum entropy (Jaynes, 1957). The derivations (Section C.2) follow standard optimization theory.
| There are two important solution concepts in the space of CEs. The first is Maximum Welfare Correlated Equilibrium (MWCE) which is defined as the CE that maximises the sum of all player’s payoffs. An MWCE can be obtained by solving a linear program, however the MWCE may not be unique and therefore does not fully solve the equilibrium selection problem (e.g. constant-sum game solutions all have equal payoff). The second such concept is Maximum Entropy Correlated Equilibrium (MECE) (Ortiz et al., 2007) which maximises Shannon’s entropy (Shannon, 1948) as an objective. MECE also shares some interesting properties with MGCE such as computational scalability when the solution is full-support (positive probability mass everywhere). Drawbacks of this approach are that the literature does not provide algorithms when the solution is general-support (non-negative probability) and, maximising Shannon’s entropy can be complex.
| A |
\epsilon^{\prime}-\xi}^{\infty}{\delta_{2}}\left(t\right)dt\right)italic_δ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ( italic_ϵ ) ≔ start_UNDERACCENT italic_ϵ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ∈ ( 0 , italic_ϵ ) , italic_ξ ∈ ( 0 , italic_ϵ - italic_ϵ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) end_UNDERACCENT start_ARG roman_inf end_ARG ( italic_δ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( italic_ϵ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) + divide start_ARG 1 end_ARG start_ARG italic_ξ end_ARG ∫ start_POSTSUBSCRIPT italic_ϵ - italic_ϵ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT - italic_ξ end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT italic_δ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_t ) italic_d italic_t ).
| Since achieving posterior accuracy is relatively straightforward, guaranteeing Bayes stability is the main challenge in leveraging this theorem to achieve distribution accuracy with respect to adaptively chosen queries. The following lemma gives a useful and intuitive characterization of the quantity that the Bayes stability definition requires be bounded. Simply put, the Bayes factor K(⋅,⋅)𝐾⋅⋅{K}\left(\cdot,\cdot\right)italic_K ( ⋅ , ⋅ ) (defined in the lemma below) represents the amount of information leaked about the dataset during the interaction with an analyst, by moving from the prior distribution over
data elements to the posterior induced by some view v𝑣vitalic_v. The degree to which a query q𝑞qitalic_q overfits to the dataset is expressed by the correlation between the query and that Bayes factor. This simple lemma is at the heart of the progress that we make in this paper, both in our intuitive understanding of adaptive data analysis, and in the concrete results we show in subsequent sections. Its corresponding version for arbitrary queries are presented in Section C.2. | In order to complete the triangle inequality, we have to define the stability of the mechanism. Bayes stability captures the concept that the results returned by a mechanism and the queries selected by the adaptive adversary are such that the queries behave similarly on the true data distribution and on the posterior distribution induced by those results. This notion first appeared in Jung et al. (2020), under the name Posterior Sensitivity, as did the following theorem.
| Our Covariance Lemma (3.5) shows that there are two possible ways to avoid adaptivity-driven overfitting—by bounding the Bayes factor term, which induces a bound on |q(Dv)−q(D)|𝑞superscript𝐷𝑣𝑞𝐷\left|{q}\left(D^{v}\right)-{q}\left(D\right)\right|| italic_q ( italic_D start_POSTSUPERSCRIPT italic_v end_POSTSUPERSCRIPT ) - italic_q ( italic_D ) |, as we do in this work, or by bounding the correlation between q𝑞qitalic_q and K(⋅,v)𝐾⋅𝑣{K}\left(\cdot,v\right)italic_K ( ⋅ , italic_v ). This second option suggests interesting directions for future work. For example, to capture an analyst that is non-worst-case in the sense that she “forgets” some of the information that she has learned about the dataset, both the posterior accuracy and the Bayes stability could be redefined with respect to the internal state of the analyst instead of with respect to the full view. This could allow for improved bounds in the style of Zrnic and Hardt (2019).
| Using the first part of the lemma, we guarantee Bayes stability by bounding the correlation between specific q𝑞qitalic_q and K(⋅,v)𝐾⋅𝑣{K}\left(\cdot,v\right)italic_K ( ⋅ , italic_v ) as discussed in Section 6. The second part of this Lemma implies that bounding the appropriate divergence is necessary and sufficient for bounding the Bayes stability of the worst query in the corresponding family, which is how the main theorems of this paper are all achieved, using the next corollary.
| A |
All z𝑧zitalic_z-antlers (C^,F^)normal-^𝐶normal-^𝐹(\hat{C},\hat{F})( over^ start_ARG italic_C end_ARG , over^ start_ARG italic_F end_ARG ) that are z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ prior to executing the algorithm are also z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ after termination of the algorithm.
|
To show the algorithm preserves properness of the coloring, we show that every individual recoloring preserves properness, that is, if an arbitrary z𝑧zitalic_z-antler is z𝑧zitalic_z-properly colored prior to the recoloring, it is also z𝑧zitalic_z-properly colored after the recoloring. |
We show first that any z𝑧zitalic_z-properly colored antler prior to executing the algorithm remains z𝑧zitalic_z-properly colored after termination. Afterwards we argue that in Item 5, the pair (χV−1(𝖢˙),χV−1(𝖥˙))subscriptsuperscript𝜒1𝑉˙𝖢subscriptsuperscript𝜒1𝑉˙𝖥(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))( italic_χ start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_V end_POSTSUBSCRIPT ( over˙ start_ARG sansserif_C end_ARG ) , italic_χ start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_V end_POSTSUBSCRIPT ( over˙ start_ARG sansserif_F end_ARG ) ) is a z𝑧zitalic_z-antler in G𝐺Gitalic_G. Since (χV−1(𝖢˙),χV−1(𝖥˙))subscriptsuperscript𝜒1𝑉˙𝖢subscriptsuperscript𝜒1𝑉˙𝖥(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))( italic_χ start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_V end_POSTSUBSCRIPT ( over˙ start_ARG sansserif_C end_ARG ) , italic_χ start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_V end_POSTSUBSCRIPT ( over˙ start_ARG sansserif_F end_ARG ) ) contains all properly colored antlers this proves correctness. | All z𝑧zitalic_z-antlers (C^,F^)normal-^𝐶normal-^𝐹(\hat{C},\hat{F})( over^ start_ARG italic_C end_ARG , over^ start_ARG italic_F end_ARG ) that are z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ prior to executing the algorithm are also z𝑧zitalic_z-properly colored by χ𝜒\chiitalic_χ after termination of the algorithm.
| We now show that a z𝑧zitalic_z-antler can be obtained from a suitable coloring χ𝜒\chiitalic_χ of the graph. The algorithm we give updates the coloring χ𝜒\chiitalic_χ and recolors any vertex or edge that is not part of a z𝑧zitalic_z-properly colored antler to color 𝖱˙˙𝖱\mathsf{\dot{R}}over˙ start_ARG sansserif_R end_ARG. We show that after repeatedly refining the coloring, the coloring that we arrive at identifies a suitable antler.
| A |
Painterly image harmonization is more challenging because multiple levels of styles (i.e., color, simple texture, complex texture) [115] need to be transferred from background to foreground, while standard image harmonization only needs to transfer low-level style (i.e., illumination).
Painterly image harmonization is also referred to as cross-domain image composition [47, 101, 178]. | The existing painterly image harmonization methods [104, 119, 10, 99, 166, 115, 114] can be roughly categorized into optimization-based methods and feed-forward methods.
Optimization-based methods optimize the input image to minimize the style loss and content loss, which is very time-consuming. |
Image harmonization is closely related to style transfer. Note that both artistic style transfer [37, 56, 118] and photorealistic style transfer [103, 82] belong to style transfer. Image harmonization is closer to photorealistic style transfer, which transfers the style of a reference photo to another input photo. There are two main differences between image harmonization and photorealistic style transfer. 1) Firstly, image harmonization adjusts the foreground appearance according to the background, which needs to take the foreground location into consideration due to the locality property. In contrast, photorealistic style transfer adjusts the appearance of a whole input image according to another whole reference image. 2) Secondly, the definition of “style” in photorealistic style transfer is unclear and coarsely depends on the employed style loss (e.g., Gram matrix loss [37], AdaIn loss [56]). Differently, the goal of image harmonization is clearly adjusting the illumination statistics of foreground, so that the resultant foreground looks like the same object captured in the background illumination condition. | For example, Luan et al. [104] proposed to optimize the input image with two passes, in which the first pass aims at robust coarse harmonization and the second pass targets at high-quality refinement.
Feed-forward methods send the input image through the model to output the harmonized result. For example, Peng et al. [119] applied adaptive instance normalization to match the means and variances between the feature map of composite image and that of artistic background. Cao et al. [10] performed painterly image harmonization in both frequency domain and spatial domain, considering that artistic paintings often have periodic textures and patterns which appear regularly. Lu et al. [99] introduced diffusion model to painterly image harmonization, which can significantly outperform GAN-based methods when the background has dense textures or abstract style. Niu et al. [115] divided styles into low-level styles (e.g., color, simple pattern) and high-level styles (e.g., complex pattern), and devised a progressive network which can harmonize a composite image from low-level styles to high-level styles progressively. Niu et al. [114] proposed style-level supervision based on pairs of artistic objects and photographic objects, considering that it is hard to obtain pixel-wise supervision based on pairs of artistic images and photographic images. Niu et al. [114] also contributed an artistic object dataset which contains the segmentation masks and similar photographic objects for artistic objects. |
The above methods based on gradient domain smoothness can smooth the transition between foreground and background to some extent. However, background colors may seep through the foreground too much and distort the foreground color, which would bring significant loss to the foreground content. | A |
\\
\sum_{j=0}^{n}a_{ij}=1,i=1,2,3,...,m\end{matrix}\right.\end{split}start_ROW start_CELL end_CELL start_CELL roman_max start_POSTSUBSCRIPT italic_a start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT ∑ start_POSTSUBSCRIPT italic_i = 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ∑ start_POSTSUBSCRIPT italic_j = 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT italic_A ( italic_i , italic_j ) italic_a start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT end_CELL end_ROW start_ROW start_CELL end_CELL start_CELL italic_s . italic_t . { start_ARG start_ROW start_CELL ∑ start_POSTSUBSCRIPT italic_i = 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT italic_a start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT = 1 , italic_j = 1 , 2 , 3 , … , italic_n end_CELL end_ROW start_ROW start_CELL end_CELL end_ROW start_ROW start_CELL ∑ start_POSTSUBSCRIPT italic_j = 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT italic_a start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT = 1 , italic_i = 1 , 2 , 3 , … , italic_m end_CELL end_ROW end_ARG end_CELL end_ROW |
LPA algorithm is a reinforcement learning-based approach [6]. We first adopt SARSA [6] to learn the expected long-term revenue of each grid in each period. Based on these expected revenues, we dispatch taxis to passengers using the same optimization formulation as Eqn. (13), with the exception that we replace A(i,j)𝐴𝑖𝑗A(i,j)italic_A ( italic_i , italic_j ) with the scores learned by SARSA. Unlike other methods that focus on immediate revenues in the current execution, LPA aims to maximize the total revenue of the system in the long run. | Problem Statement. To address the taxi dispatching task, we learn a real-time dispatching policy based on historical passenger requests. At every timestamp τ𝜏\tauitalic_τ, we use this policy to dispatch available taxis to current passengers, with the aim of maximizing the total revenue of all taxis in the long run. To achieve this, we divide the city into uniform hexagonal grids, as opposed to square grids used in previous studies [21, 6].
|
Our experimental results demonstrate that LPA outperforms LLD in most cases. This can be attributed to the fact that LPA optimizes the expected long-term revenues at each dispatching round, while LLD only focuses on the immediate reward. As a result, LPA is better suited for maximizing the total revenue of the system in the long run, and is expected to compare favorably against LLD. | LLD algorithm is an optimization-based approach formulated by Eqn. (13), where aij=1subscript𝑎𝑖𝑗1a_{ij}=1italic_a start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT = 1 if taxi j𝑗jitalic_j is dispatched to passenger i𝑖iitalic_i and 0 otherwise; Here, A(i,j)𝐴𝑖𝑗A(i,j)italic_A ( italic_i , italic_j ) represents the immediate revenue earned by taxi j𝑗jitalic_jafter transporting passenger i𝑖iitalic_i to their destination. The definition of immediate revenue follows the approach presented in [6].
| A |
(y|𝐱,θ)∼𝒩(y^θ(𝐱),σ2).similar-toconditional𝑦𝐱𝜃𝒩superscript^𝑦𝜃𝐱superscript𝜎2\displaystyle(y\,|\,\mathbf{x},\theta)\sim\mathcal{N}\big{(}\hat{y}^{\theta}(%
\mathbf{x}),\sigma^{2}\big{)}\,.( italic_y | bold_x , italic_θ ) ∼ caligraphic_N ( over^ start_ARG italic_y end_ARG start_POSTSUPERSCRIPT italic_θ end_POSTSUPERSCRIPT ( bold_x ) , italic_σ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) . | Although ordinary neural networks have the benefit that even for a large number of features and weights they can be implemented very efficiently, their Bayesian incarnation suffers from a problem. The nonlinearities in the activation functions and the sheer number of parameters, although they are the features that make traditional NNs so powerful, lead to the inference steps (4) and (5) becoming intractable.
|
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nature). The main reason for this omission is the large number of choices in terms of priors and approximations, both of which strongly depend on the problem at hand. On the level of calibration there are also some methods that were not included in this paper, mostly because they were either too specific or too complex for simple regression problems. For general regression models the literature on calibration methods is not as extensive as it is for classification models. Recently some advances were made in which β𝛽\betaitalic_β-calibration pmlr-v54-kull17a was generalized to regression problems using a Gaussian process approach pmlr-v97-song19a . However, as mentioned before, a Gaussian process does not have a favorable scaling behaviour and also in this case certain approximations are necessary. Another technique that was recently introduced pmlr-v80-kuleshov18a calibrates the cumulative distribution function produced by a distribution predictor using isotonic regression. Although the technique itself is simple in spirit, it is only applicable to predictors that construct the full output distribution. In a similar vein utpala2020quantile , at the time of writing still under review, takes a distribution predictor and modifies the loss function such that the quantiles are calibrated without post-hoc calibration. The main benefit of this method is that it does not require a separate calibration set in stark contrast to conformal prediction, but it still requires the construction of the cumulative distribution. By dividing the target space in a finite number of bins, Keren et al. introduced an approach where the regression problem is approximated by a classification problem such that the usual tools for classifier calibration can be applied keren2018calibrated . The main downside of this approach is that one loses the continuous nature of the initial problem. Another concept that was not covered is that of predictive distributions schweder2016confidence ; shen2018prediction , where not only a single interval is considered, but a full distribution is estimated. This approach was combined with conformal prediction in vovk2017nonparametric giving rise to Conformal Predictive Systems. | In Fig. 1, both the coverage degree, average width and R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient are shown. For each model, the data sets are sorted according to increasing R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient (averaged over the different runs). For the exact Gaussian processes (GP), no results on the data sets fb1 and blog are reported due to the large memory and time consumption. While these sets are not extraordinary compared to modern real-life data sets, they already require a considerable amount of memory due to the quadratic scaling. Approximate models, such as the variational approximation used in this study, will become imperative. The first row of Fig. 1 consists of the two conformalized point predictors (these could be considered baseline models). 
For the other rows, the left column shows the results of the models trained on the full data set, while the right column shows those of the calibrated (conformalized) models. When comparing the two columns, it is immediately clear that the coverage, indicated by the blue regions, is much more concentrated around the nominal value of 0.9 for the conformalized models, as is guaranteed by the Marginal Validity Theorem from Section 3.4. The shaded regions indicate the variability among the 50 different runs (standard deviation is used for Fig. 1). It is clear that a higher variability in the predictive power often corresponds to a higher variability in the interval quality (both coverage and average width). This is not surprising since a model in general performs worse when the uncertainty is higher. As most of the models explicitly use the predictions to build prediction intervals, this relation can be expected to be even stronger. For both the DE and MVE models the results for the fb1 and blog data sets are missing because the average widths differed by about two orders of magnitude compared to the other data sets and models and were, therefore, deemed to be nonsensical. The R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient for the NN and Drop-CP models on the blog data set show a strong variability, while the average width remains small and almost constant. This can be explained by the strong skewness present in the data set. The R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient is sensitive to the extreme outliers, while the prediction intervals are not, as long as the outlier proportion is less than α𝛼\alphaitalic_α. This also explains why almost all models give reasonably good intervals for both the fb1 and blog data sets. The DE and MVE models form the exception, as mentioned above, since these methods inherently take into account the data uncertainty and cannot discard these outliers. A general summary of the results can be found in the tables in Appendix A.
| Most of the data sets were obtained from the UCI repository Dua2019 . Specific references are given in Table 2. This table also shows the number of data points and (used) features and the skewness and (Pearson) kurtosis of the response variable. All data sets were standardized (both features and target variables) before training. The data sets blog and fb1 were also analysed after first taking a log transform of the response variable because these data sets are extremely skewed, which is reflected in the high skewness and kurtosis, as shown in the fourth column of Table 2, and are believed to follow a power law distribution. This strongly improved the R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-coefficient of the various models, but did not improve the prediction intervals, and therefore, these results are not included. The crime data set comes in two versions: the original data set consists of integer-valued data (count data), while the version used here was preprocessed using an unsupervised standardization algorithm redmond2002data . Although standardized, the data set retains (some of) its count data properties. The traffic data set, aside of being very small, is also extremely sparse (on average 14 features are zero). It should be noted that all of the data sets used in this study were considered as ordinary (static) data sets. Even though some of them could be considered in a time series context, no autoregressive features were additionally extracted. The main reason to exclude autoregressive features is that most, if not all, methods considered in this study assume the data to be i.i.d. (or exchangeable), a property that is generically not valid for autoregressive data.
| A |
EMOPIA is a dataset of pop piano music collected recently by \textciteemopia from YouTube for research on emotion-related tasks.888https://annahung31.github.io/EMOPIA/
It has 1,087 clips (each around 30 seconds) segmented from 387 songs, covering Japanese anime, Korean & Western pop song covers, movie soundtracks and personal compositions. | There is little performance difference between REMI and CP in this task.
Fig. 7 further shows that the evaluated models can fairly easily distinguish between high arousal and low arousal pieces (i.e., “HAHV, HALV” versus “LALV, LAHV”), but they have a much harder time along the valence axis (e.g., “HAHV” versus “HALV” and “LALV” versus “LAHV”). We see less confusion from the result of ‘our model (score)+++CP’. |
Tab. 2 shows that the accuracy on our 6-class velocity classification task is not high, reaching 52.11% at best. This may be due to the fact that velocity is rather subjective, meaning that musicians can perform the same music piece fairly differently. Moreover, we note that the data is highly imbalanced, with the latter three classes (mf, f, ff) taking up nearly 90% of all labelled data. The confusion tables presented in Fig. 5 show that Bi-LSTM tends to classify most of the notes into f, the most popular class among the six. | The emotion of each clip has been labelled using the following 4-class taxonomy: HAHV (high arousal high valence); LAHV (low arousal high valence); HALV (high arousal low valence); and LALV (low arousal low valence). This taxonomy is derived from the Russell’s valence-arousal model of emotion \parenciterussell, where valence indicates whether the emotion is positive or negative and arousal denotes whether the emotion is high (e.g., angry) or low (e.g., sad) \parenciteyang11book.
The MIDI performances of these clips are similarly machine-transcribed from the audio recordings by the model of \textciteTTtranscription. | We use this dataset for the emotion classification task. As Tab. 1 shows, the average length of the pieces in the EMOPIA dataset is the shortest among the five, since they are actually clips manually selected by dedicated annotators \parenciteemopia to ensure that each performance expresses a single emotion.
| C |
Observe that for a tree on n𝑛nitalic_n vertices we can compute for every vertex v𝑣vitalic_v and its neighbor u𝑢uitalic_u functions f(v,u)𝑓𝑣𝑢f(v,u)italic_f ( italic_v , italic_u ) and g(v,u)𝑔𝑣𝑢g(v,u)italic_g ( italic_v , italic_u ) denoting the sizes of subsets of C1(T)subscript𝐶1𝑇C_{1}(T)italic_C start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( italic_T ) and C2(T)subscript𝐶2𝑇C_{2}(T)italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_T ) restricted to the connected component containing u𝑢uitalic_u in T−v𝑇𝑣T-vitalic_T - italic_v. Moreover, it can be done in linear time: it is sufficient to root T𝑇Titalic_T in an arbitrary vertex, compute values of all f(v,u)𝑓𝑣𝑢f(v,u)italic_f ( italic_v , italic_u ) and g(v,u)𝑔𝑣𝑢g(v,u)italic_g ( italic_v , italic_u ) when u𝑢uitalic_u is a child of v𝑣vitalic_v recursively, and then by another recursion get the missing values of f(v,parent(v))𝑓𝑣𝑝𝑎𝑟𝑒𝑛𝑡𝑣f(v,parent(v))italic_f ( italic_v , italic_p italic_a italic_r italic_e italic_n italic_t ( italic_v ) ) and g(v,parent(v))𝑔𝑣𝑝𝑎𝑟𝑒𝑛𝑡𝑣g(v,parent(v))italic_g ( italic_v , italic_p italic_a italic_r italic_e italic_n italic_t ( italic_v ) ).
Note that this way we can compute both size of the component (equal to f(v,u)+g(v,u)𝑓𝑣𝑢𝑔𝑣𝑢f(v,u)+g(v,u)italic_f ( italic_v , italic_u ) + italic_g ( italic_v , italic_u )) as well as its imbalance (equal to f(v,u)−g(v,u)𝑓𝑣𝑢𝑔𝑣𝑢f(v,u)-g(v,u)italic_f ( italic_v , italic_u ) - italic_g ( italic_v , italic_u )) on request in constant time. | In every tree T𝑇Titalic_T there exists a central vertex v∈V(T)𝑣𝑉𝑇v\in V(T)italic_v ∈ italic_V ( italic_T ) such that every connected component of T−v𝑇𝑣T-vitalic_T - italic_v has at most |V(T)|2𝑉𝑇2\frac{|V(T)|}{2}divide start_ARG | italic_V ( italic_T ) | end_ARG start_ARG 2 end_ARG vertices.
| Next, let us count the total number of jumps necessary for finding central vertices over all loops in Algorithm 1. As it was stated in the proof of Lemma 2.2, while searching for a central vertex we always jump from a vertex to its neighbor in a way that decreases the largest remaining component by one. Thus, if in the next iteration we start at exactly the neighbor of the previous central vertex, there can be only O(n)𝑂𝑛O(n)italic_O ( italic_n ) such jumps in total.
| The idea is to start from any vertex w𝑤witalic_w, and then jump to its neighbor with the largest component size in T−w𝑇𝑤T-witalic_T - italic_w, until we hit a vertex with desired property.
Note that for any vertex v𝑣vitalic_v there can be at most one neighbor u𝑢uitalic_u such that its connected component Tusubscript𝑇𝑢T_{u}italic_T start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT in T−v𝑇𝑣T-vitalic_T - italic_v has more than |V(T)|2𝑉𝑇2\frac{|V(T)|}{2}divide start_ARG | italic_V ( italic_T ) | end_ARG start_ARG 2 end_ARG vertices, so the jumps are unique. | The linear running time follows directly from the fact that we compute c𝑐citalic_c only once and we can pass additionally through recursion the lists of leaves and isolated vertices in an uncolored induced subtree. The total number of updates of these lists is proportional to the total number of edges in the tree, hence the claim follows.
| B |