Dataset schema: context, A, B, C, D (string fields); label (4 classes).
$\int_0^1 x^{D-1} R_n^m(x)\,R_{n'}^m(x)\,dx=\frac{1}{2n+D}\,\delta_{n,n'}.$
the product of $x^m$ by a polynomial of degree $n-m$:
The inversion of (8) assembles powers $x^i$ by sums
Zernike Polynomials $R_n^m$ by computation of the ratios
$x^i \equiv \sum_{n=m\pmod{2}}^{i} h_{i,n,m}\,R_n^m(x); \quad i-m=0,2,4,6,\ldots.$
B
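The orthogonality relation quoted in the excerpt above can be checked numerically. The sketch below specializes to the classical $D=2$ Zernike radial polynomials, for which the relation reads $\int_0^1 x\,R_n^m(x)R_{n'}^m(x)\,dx=\delta_{n,n'}/(2n+2)$; the explicit coefficient formula used here is the standard two-dimensional one and is an assumption about how the paper's $R_n^m$ specialize.

```python
# Hedged numerical check of the quoted orthogonality relation for the classical
# D = 2 Zernike radial polynomials: int_0^1 x R_n^m R_{n'}^m dx = delta / (2n + 2).
from math import factorial
import numpy as np

def radial_zernike(n, m, x):
    """Classical radial Zernike polynomial R_n^m(x) for n >= m >= 0 with n - m even."""
    assert n >= m >= 0 and (n - m) % 2 == 0
    out = np.zeros_like(x)
    for s in range((n - m) // 2 + 1):
        coeff = (-1) ** s * factorial(n - s) / (
            factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        out += coeff * x ** (n - 2 * s)
    return out

def trapezoid(y, x):
    # simple trapezoidal rule, kept explicit to avoid version-dependent numpy helpers
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 20001)
D = 2
for n, n2, m in [(4, 4, 2), (6, 4, 2), (3, 3, 1)]:
    value = trapezoid(x ** (D - 1) * radial_zernike(n, m, x) * radial_zernike(n2, m, x), x)
    expected = 1.0 / (2 * n + D) if n == n2 else 0.0
    print(n, n2, m, round(value, 6), "expected", round(expected, 6))
```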
As we run through the algorithm described by Taylor, we deal with the columns of $g$ in reverse order, beginning with column $d$.
For each column $c$ with $r=r(c)\le d-2$,
Having ‘cleared’ column $c$, we clear the entries in position $j=1,\ldots,c-1$ in the $r$th row by multiplying $g$ on the right by the transvections
At this stage, $g$ has been reduced to a matrix in which columns $c-1,\ldots,d$ have exactly one nonzero entry (and these entries are in different rows).
Suppose we have reached column $c$, for some $c\in\{1,\ldots,d\}$.
D
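As a generic illustration of the column-clearing step in the excerpt above (not Taylor's actual routine), right-multiplying $g$ by the transvection $T=I+\lambda E_{c,j}$ adds $\lambda$ times column $c$ to column $j$, so a suitable $\lambda$ clears the $(r,j)$ entry. A minimal numerical sketch, with the matrix and indices chosen arbitrarily:

```python
# Hedged sketch: clearing the entries of row r to the left of a pivot in column c
# by right-multiplication with elementary transvections T = I + lam * E_{c,j}.
import numpy as np

def transvection(d, c, j, lam):
    T = np.eye(d)
    T[c, j] += lam          # E_{c,j} has a single 1 in row c, column j
    return T

rng = np.random.default_rng(0)
d, r, c = 4, 1, 3
g = rng.integers(-3, 4, size=(d, d)).astype(float)
g[r, c] = 2.0               # ensure a usable pivot in position (r, c)

for j in range(c):          # clear entries (r, 0), ..., (r, c-1)
    lam = -g[r, j] / g[r, c]
    g = g @ transvection(d, c, j, lam)   # adds lam * (column c) to column j

print(np.round(g[r], 6))    # columns 0..c-1 of row r are now zero; the pivot survives
```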
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems.
The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic discrete problems. The analysis of the exponential decay of the multiscale basis functions is considered in Section 3.2. To overcome the possible deterioration of the exponential decay for high-contrast coefficients, in Section 3.1 the Localized Spectral Decomposition (LSD) method is designed and fully analyzed. To allow an efficient pre-processing numerical scheme, Section LABEL:ss:findim discusses how to reduce the dimension of the right-hand side space without losing a target accuracy, and also develops a priori error estimates in $L^2(\Omega)$. Section LABEL:s:Algorithms gives a global overview of the proposed LSD algorithm. Appendix LABEL:s:Auxiliaryresults provides some mathematical tools and Appendix LABEL:s:Notations refers to a notation library for the paper.
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decaying properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems.
The idea of using exponential decay to localize global problems was already considered in the interesting approach developed under the name of Localized Orthogonal Decomposition (LOD) [MR2831590, MR3591945, MR3246801, MR3552482].
and denoted Localized Orthogonal Decomposition (LOD) methods, were introduced and analyzed in [MR3246801, MR2831590, MR3552482, MR3591945].
C
As we will see, the killing functions in these problems are also simple. Nevertheless, it requires painstaking effort and creativity to derive the killing functions.
Chandran and Mount [8] compute all the $P$-stable triangles in linear time by the Rotating-Caliper technique,
from which we can see that this technique is different from the Rotating-Caliper technique (see the discussion in subsection 1.2.3).
One major difference is that the Rotating-Caliper uses only one parameter (e.g., some angle $\theta$), whereas
An application of Toussaint’s Rotating-Caliper (RC) [26] technique usually adopts the following framework:
D
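For concreteness, here is a minimal sketch of the Rotating-Caliper framework mentioned in the excerpt above, applied to its classic use case of computing the diameter of a convex polygon in linear time; it is a generic illustration, not the algorithm of Chandran and Mount [8].

```python
# Hedged sketch of rotating calipers: sweep an edge around the convex hull while
# advancing the antipodal vertex, tracking the largest vertex-to-vertex distance.
from math import dist

def convex_polygon_diameter(pts):
    """pts: vertices of a convex polygon in counter-clockwise order, no repeats."""
    n = len(pts)
    if n < 3:
        return dist(pts[0], pts[-1])
    def area2(o, a, b):  # twice the signed triangle area
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    best, j = 0.0, 1
    for i in range(n):
        i2 = (i + 1) % n
        # advance the opposite caliper while it keeps moving away from edge (i, i2)
        while abs(area2(pts[i], pts[i2], pts[(j + 1) % n])) > abs(area2(pts[i], pts[i2], pts[j])):
            j = (j + 1) % n
        best = max(best, dist(pts[i], pts[j]), dist(pts[i2], pts[j]))
    return best

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(convex_polygon_diameter(square))   # 2*sqrt(2) ~ 2.828
```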
For analyzing the employed features, we rank them by importance using RF (see 3). The best feature is related to sentiment polarity scores. There is a big difference between the sentiment associated with rumors and the sentiment associated with real events in relevant tweets. Specifically, the average polarity score of news events is -0.066 and the average for rumors is -0.1393, showing that rumor-related messages tend to contain more negative sentiment. Furthermore, we would expect that verified users are less involved in rumor spreading. However, this feature appears near the bottom of the ranked list, indicating that it is not as reliable as expected. Also interestingly, the "IsRetweeted" feature is pretty much useless, which means the probability of people retweeting rumors or true news is similar (both appear near the bottom of the ranked feature list).
CrowdWisdom: Similar to [18], the core idea is to leverage the public's common sense for rumor detection: if more people deny or doubt the truth of an event, this event is more likely to be a rumor. For this purpose, [18] use an extensive list of bipolar sentiments with a set of combinational rules. In contrast to mere sentiment features, this approach is more tailored to the rumor context (a difference not evaluated in [18]). We simplified and generalized the "dictionary" by keeping only a set of carefully curated negative words. We call them "debunking words", e.g., hoax, rumor, or not true. Our intuition is that the attitude of doubting or denying events is in essence sufficient to distinguish rumors from news. What is more, this generalization augments the size of the crowd (it covers more 'voting' tweets), which is crucial, and thus contributes to the quality of the crowd wisdom. In our experiments, "debunking words" is a high-impact feature, but it needs substantial time to "warm up"; this is explainable, as the crowd is typically sparse at an early stage.
It has to be noted here that even though we obtain reasonable results on the classification task in general, the prediction performance varies considerably along the time dimension. This is understandable, since tweets become more distinguishable only when users gain more knowledge about the event.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We trade this off by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of Munich shooting higher than the average of news events (hence, close to a news event). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the event Munich shooting in Figure 5(b). We can see the curve of the Munich shooting event is also close to the curve of average news, indicating the event is more news-related.
the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which can capture more hidden meaningful signals than enquiries alone to debunk rumors. [7, 19] also use RNNs for rumor debunking. However, in their work, the RNN is used at the event level. The classification leverages only the deep data representations of aggregated tweet contents of the whole event, while not exploiting other features that become effective at a later stage, such as user-based features and propagation features. Although tweet contents are essentially the only reliable source of clues at the early stage, they are also likely to present doubtful perspectives and different stances at this specific moment. In addition, they could relate to rumorous sub-events (see, e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution; however, detecting sub-events in real time over the Twitter stream is a challenging task [22], which increases latency and complexity. In this work, we address this issue by deep neural modeling only at the single-tweet level. Our intuition is to leverage the "wisdom of the crowd" theory, such that even if a certain portion of tweets at a given moment (mostly at the early stage) are weakly predicted (because of these noisy factors), their ensemble would contribute to a stronger prediction.
B
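A minimal sketch of the feature-importance ranking step described above, using scikit-learn's random forest; the feature names and synthetic data are placeholders for the tweet-level features discussed in the excerpt.

```python
# Hedged sketch of ranking features by random-forest importance (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["polarity_score", "contains_debunking_word", "user_verified", "is_retweeted"]
X = np.random.rand(500, len(feature_names))
y = np.random.randint(0, 2, size=500)        # 1 = rumor, 0 = news (placeholder labels)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for importance, name in sorted(zip(rf.feature_importances_, feature_names), reverse=True):
    print(f"{name}: {importance:.3f}")       # ranked list, most important first
```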
$\left\|\frac{\mathbf{w}(t)}{\|\mathbf{w}(t)\|}-\frac{\hat{\mathbf{w}}}{\|\hat{\mathbf{w}}\|}\right\|=O\!\left(\sqrt{\frac{\log\log t}{\log t}}\right)$. Our analysis provides a more precise characterization of the iterates, and also shows the convergence is actually quadratically faster (see Section 3). However, Ji and Telgarsky go even further and provide a characterization also when the data is non-separable but $\mathbf{w}(t)$ still goes to infinity.
where $\boldsymbol{\rho}(t)$ has a bounded norm for almost all datasets, while in the zero-measure case $\boldsymbol{\rho}(t)$ contains additional $O(\log\log(t))$ components which are orthogonal to the support vectors in $\mathcal{S}_1$, and, asymptotically, have a positive angle with the other support vectors. In this section we first calculate the various convergence rates for the non-degenerate case of Theorem 2, and then write the correction in the zero-measure cases, if there is such a correction.
In some non-degenerate cases, we can further characterize the asymptotic behavior of $\boldsymbol{\rho}(t)$. To do so, we need to refer to the KKT conditions (eq. 6)
converges to some limit $\mathbf{w}_{\infty}$, then we can write $\mathbf{w}(t)=g(t)\,\mathbf{w}_{\infty}+\boldsymbol{\rho}(t)$
and in the last line we used the fact that $\boldsymbol{\rho}(t)$
B
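The direction-convergence claim quoted above can be illustrated numerically: on separable data, gradient descent on the logistic loss makes $\|\mathbf{w}(t)\|$ grow without bound while $\mathbf{w}(t)/\|\mathbf{w}(t)\|$ stabilizes only slowly. The synthetic data and step size below are arbitrary choices for the demonstration, not the paper's setup.

```python
# Hedged illustration: the norm of w keeps growing while its direction stabilizes.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(size=(60, 2)) + 3.0            # one cluster, labels +1
X = np.vstack([pos, -pos])                      # mirrored cluster, labels -1 (separable)
y = np.concatenate([np.ones(60), -np.ones(60)])

w, lr = np.zeros(2), 0.5
for t in range(1, 100001):
    margins = np.clip(y * (X @ w), -30.0, 30.0)              # clip for numerical safety
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad
    if t in (100, 1000, 10000, 100000):
        print(t, round(np.linalg.norm(w), 2), np.round(w / np.linalg.norm(w), 4))
```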
$V(E_i)=(\mathbf{F}^D_{i,0},\mathbf{F}^D_{i,1},\ldots,\mathbf{F}^D_{i,N},\mathbf{S}^D_{i,1},\ldots,\mathbf{S}^D_{i,N})$, where $\mathbf{F}^D_{i,t}$ is the feature vector in time interval $t$ of event $E_i$, $\mathbf{S}^D_{i,t}$ is the difference between $\mathbf{F}^D_{i,t}$ and $\mathbf{F}^D_{i,t+1}$, and $V(E_i)$ is the time series feature vector of the event $E_i$.
$\mathbf{F}^D_{i,t}=(\widetilde{f}_{i,t,1},\widetilde{f}_{i,t,2},\ldots,\widetilde{f}_{i,t,D})$
$\mathbf{S}^D_{i,t}=\frac{\mathbf{F}^D_{i,t+1}-\mathbf{F}^D_{i,t}}{\mathit{Interval}(E_i)}$
$V(E_i)=(\mathbf{F}^D_{i,0},\mathbf{F}^D_{i,1},\ldots,\mathbf{F}^D_{i,N},\mathbf{S}^D_{i,1},\ldots,\mathbf{S}^D_{i,N})$, where $\mathbf{F}^D_{i,t}$ is the feature vector in time interval $t$ of event $E_i$, $\mathbf{S}^D_{i,t}$ is the difference between $\mathbf{F}^D_{i,t}$ and $\mathbf{F}^D_{i,t+1}$, and $V(E_i)$ is the time series feature vector of the event $E_i$.
We split this event time frame into $N$ intervals and associate each tweet to one of the intervals according to its creation time. Thus, we can generate a vector $V(E_i)$ of features for each time interval. In order to capture the changes of features over time, we model their differences between two time intervals. So the model of DSTS is represented as: $V(E_i)=(\mathbf{F}^D_{i,0},\mathbf{F}^D_{i,1},\ldots,\mathbf{F}^D_{i,N},\mathbf{S}^D_{i,1},\ldots,\mathbf{S}^D_{i,N})$
A
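A minimal sketch of the DSTS vector defined above: given one feature vector per time interval, it stacks the raw vectors with their normalized differences. The indexing and the normalization by the interval length are read off the quoted formulas; any remaining conventions are assumptions.

```python
# Hedged sketch: V(E_i) = (F_0, ..., F_N, S_1, ..., S_N), S_t = (F_{t+1} - F_t) / Interval(E_i).
import numpy as np

def dsts_vector(F, interval_length):
    """F: array of shape (N + 1, D), one D-dimensional feature vector per time interval."""
    S = (F[1:] - F[:-1]) / interval_length         # shape (N, D): normalized differences
    return np.concatenate([F.ravel(), S.ravel()])  # flat vector of length (2N + 1) * D

F = np.array([[0.1, 3.0], [0.4, 5.0], [0.5, 9.0]])  # toy event: N = 2 intervals, D = 2 features
print(dsts_vector(F, interval_length=2.0))
```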
Table 4: Performance of the baselines (RWR relatedness scores, RWR+MLE, RWR+MLE-W, LNQ, and PNQ) compared with our ranking models;
For RQ1, given an event entity $e$, at time $t$, we need to classify it into either the Breaking or the Anticipated class. We select a studied time for each event period randomly in the range of 5 days before and after the event time. In total, our training dataset for AOL consists of 1,740 instances of the breaking class and 3,050 instances of the anticipated class, with over 300 event entities. For GoogleTrends, there are 2,700 and 4,200 instances, respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials with each of the last 4 bins (using the history bins for training in a rolling basis) for testing, and report the results as the average of the trials.
an unsupervised ensemble method to produce the final ranking score. Suppose $\bar{a}$ is a testing entity aspect of entity $e$. We run each of the ranking models in $\mathbf{M}$ against the instance of $\bar{a}$, multiplied by the time and type probabilities of the associated entity $e$ at hitting time $t$. Finally, we sum the scores produced by all ranking models to obtain the ensemble ranking, $score(\bar{a})=\sum_{m\in M}P(\mathcal{C}_k\mid e,t)\,P(\mathcal{T}_l\mid e,t,\mathcal{C}_k)\,\mathsf{f}^{*}_{m}(\bar{a})$.
$\ast$, $\dagger$, $\mp$ indicate statistical improvement over the baseline using a t-test with significance at $p<0.1$, $p<0.05$, and $p<0.01$, respectively.
Results. The baseline and the best results of our $1^{st}$-stage event-type classification are shown in Table 3 (top). The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower on weighted F1. Our learned model achieves a marginally better result on the F1 metric.
C
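A minimal sketch of the unsupervised ensemble described above, where every ranking model's score is weighted by the entity's type and time probabilities and summed; the toy models and probabilities are placeholders, not the paper's components.

```python
# Hedged sketch: score(a) = sum over models of P(C_k | e, t) * P(T_l | e, t, C_k) * f_m(a).
def ensemble_score(aspect, models, p_type, p_time_given_type):
    weight = p_type * p_time_given_type      # P(C_k | e, t) * P(T_l | e, t, C_k)
    return sum(weight * rank(aspect) for rank in models)

models = [lambda a: len(a) / 10.0, lambda a: 1.0 if "final" in a else 0.2]  # toy rankers
print(ensemble_score("match final schedule", models, p_type=0.7, p_time_given_type=0.4))
```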
More importantly, these algorithms are commonly designed under the assumption of stationary reward distributions,
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016].
in many science and engineering problems [Ristic et al., 2004; van Leeuwen, 2009; Ionides et al., 2006; Creal, 2012],
SMC methods [Arulampalam et al., 2002; Doucet et al., 2001; Djurić et al., 2003] have been widely used
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
C
Patients 11 and 14 are the most active, both having a median of more than 50 active intervals per day (corresponding to more than 8 hours of activity).
These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2-4 times for the other patients.
Table 2: Descriptive statistics for the number of patient data entries per day. Active intervals are 10 minute intervals with at least 10 steps taken.
Patient 10, on the other hand, has a surprisingly low median of 0 active 10-minute intervals per day, indicating missing values due to, for instance, not carrying the smartphone at all times.
The median number of carbohydrate log entries varies between 2 per day for patient 10 and 5 per day for patient 14.
C
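A minimal pandas sketch of the "active interval" statistic defined in the table caption above (10-minute intervals with at least 10 steps), counted per day; the column names and sample data are assumptions about the wearable export format.

```python
# Hedged sketch: count per-day 10-minute bins with at least 10 steps (placeholder data).
import pandas as pd

steps = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-03-01 08:01", "2021-03-01 08:07",
                                 "2021-03-01 08:25", "2021-03-02 09:00"]),
    "steps": [12, 5, 30, 4],
}).set_index("timestamp")

per_bin = steps["steps"].resample("10min").sum()            # steps per 10-minute interval
active = (per_bin >= 10).groupby(per_bin.index.date).sum()  # active intervals per day
print(active)
```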
To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architecture, similar to the model by Pan et al. (2017), results in better approximations. Here we employed three upsampling blocks consisting of a bilinear scaling operation, which doubled the number of rows and columns, and a subsequent convolutional layer with kernel size $3\times 3$. This setup has previously been shown to prevent checkerboard artifacts in the upsampled image space in contrast to deconvolution Odena et al. (2016). Besides an increase of resolution throughout the decoder, the number of channels was halved in each block to yield 32 feature maps. Our last network layer transformed activations into a continuous saliency distribution by applying a final $3\times 3$ convolution. The outputs of all but the last linear layer were modified via rectified linear units. Figure 2 visualizes the overall architecture design as described in this section.
Later attempts addressed that shortcoming by taking advantage of classification architectures pre-trained on the ImageNet database Deng et al. (2009). This choice was motivated by the finding that features extracted from CNNs generalize well to other visual tasks Donahue et al. (2014). Consequently, DeepGaze I Kümmerer et al. (2014) and II Kümmerer et al. (2016) employed a pre-trained classification model to read out salient image locations from a small subset of encoding layers. This is similar to the network by Cornia et al. (2016) which utilizes the output at three stages of the hierarchy. Oyama and Yamanaka (2018) demonstrated that classification performance of pre-trained architectures strongly correlates with the accuracy of saliency predictions, highlighting the importance of object information. Related approaches also focused on the potential benefits of incorporating activation from both coarse and fine image resolutions Huang et al. (2015), and recurrent connections to capture long-range spatial dependencies in convolutional feature maps Cornia et al. (2018); Liu and Han (2018). Our model explicitly combines semantic representations at multiple spatial scales to include contextual information in the predictive process. For a more complete account of existing saliency architectures, we refer the interested reader to a comprehensive review by Borji (2018).
A prerequisite for the successful application of deep learning techniques is a wealth of annotated data. Fortunately, the growing interest in developing and evaluating fixation models has led to the release of large-scale eye tracking datasets such as MIT1003 Judd et al. (2009), CAT2000 Borji and Itti (2015), DUT-OMRON Yang et al. (2013), PASCAL-S Li et al. (2014), and OSIE Xu et al. (2014). The costly acquisition of measurements, however, is a limiting factor for the number of stimuli. New data collection methodologies have emerged that leverage webcam-based eye movements Xu et al. (2015) or mouse movements Jiang et al. (2015) instead via crowdsourcing platforms. The latter approach resulted in the SALICON dataset, which consists of 10,000 training and 5,000 validation instances serving as a proxy for empirical gaze measurements. Due to its large size, we first trained our model on SALICON before fine-tuning the learned weights towards fixation predictions on either of the other datasets with the same optimization parameters. This widely adopted procedure has been shown to improve the accuracy of eye movement estimations despite some disagreement between data originating from gaze and mouse tracking experiments Tavakoli et al. (2017).
Weight values from the ASPP module and decoder were initialized according to the Xavier method by Glorot and Bengio (2010). It specifies parameter values as samples drawn from a uniform distribution with zero mean and a variance depending on the total number of incoming and outgoing connections. Such initialization schemes are demonstrably important for training deep neural networks successfully from scratch Sutskever et al. (2013). The encoding layers were based on the VGG16 architecture pre-trained on both ImageNet Deng et al. (2009) and Places2 Zhou et al. (2017) data towards object and scene classification respectively.
To assess the predictive performance for eye tracking measurements, the MIT saliency benchmark Bylinskii et al. (2015) is commonly used to compare model results on two test datasets with respect to prior work. Final scores can then be submitted on a public leaderboard to allow fair model ranking on eight evaluation metrics. Table 1 summarizes our results on the test dataset of MIT1003, namely MIT300 Judd et al. (2012), in the context of previous approaches. The evaluation shows that our model only marginally failed to achieve state-of-the-art performance on any of the individual metrics. When computing the cumulative rank (i.e. the sum of ranks according to the standard competition ranking procedure) on a subset of weakly correlated measures (sAUC, CC, KLD) Riche et al. (2013); Bylinskii et al. (2018), we ranked third behind the two architectures DenseSal and DPNSal from Oyama and Yamanaka (2018). However, their approaches were based on a pre-trained Densely Connected Convolutional Network with 161 layers Huang et al. (2017) and Dual Path Network with 131 layers Chen et al. (2017) respectively, both of which are computationally far more expensive than the VGG16 model used in this work (see Table 5 by Oyama and Yamanaka (2018) for a comparison of the computational efficiency). Furthermore, DenseSal and DPNSal implemented a multi-path design where two images of different resolutions are simultaneously fed to the network, which substantially reduces the execution speed compared to single-stream architectures. Among all entries of the MIT300 benchmark with a VGG16 backbone Cornia et al. (2016); Huang et al. (2015); Cornia et al. (2018); Kruthiventi et al. (2017), our model clearly achieved the highest performance.
C
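A hedged PyTorch sketch of the decoder described above: three blocks of bilinear 2x upsampling followed by a $3\times 3$ convolution, halving the channels down to 32, with ReLUs everywhere except after the final $3\times 3$ convolution that produces the saliency map. The encoder output width of 256 channels is an assumption for illustration.

```python
# Hedged sketch of the upsampling decoder (256 -> 128 -> 64 -> 32 channels, then 1 map).
import torch
import torch.nn as nn

class SaliencyDecoder(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        blocks, ch = [], in_channels
        for _ in range(3):
            blocks += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(ch, ch // 2, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ]
            ch //= 2
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Conv2d(ch, 1, kernel_size=3, padding=1)  # final saliency map, no ReLU

    def forward(self, x):
        return self.head(self.blocks(x))

print(SaliencyDecoder()(torch.randn(1, 256, 40, 40)).shape)  # -> (1, 1, 320, 320)
```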
$\overline{\mathtt{Einzelelement}}$
In the following, we investigate another aspect of greedy strategies. Any symbol that is marked next in a marking sequence can have isolated occurrences (i. e., occurrences that are not adjacent to any marked block) and block-extending occurrences (i. e., occurrences with at least one adjacent marked symbol). Each isolated occurrence results in a new marked block, while each block-extending occurrence just extends an already existing marked block, and potentially may even combine two marked blocks and therefore may decrease the overall number of marked blocks. Therefore, marking a symbol when it only has isolated occurrences causes the maximum number of marked blocks that can ever be contributed by this symbol, and therefore this seems to be the worst time to mark this symbol. Hence, in terms of a greedy strategy, it seems reasonable to only mark symbols if they also have block-extending occurrence (obviously, this is not possible for the initially marked symbol).
The important property of the word $\alpha_e$ is that for every edge $\{x,y\}$ of $H$ (except $e$), it contains two distinct size-$2$ factors that are $xy$- or $yx$-factors (for example, the original edge $\{x,w\}$ translates into two $xw$-factors, while the original edge $\{u,v\}$ translates into a $vu$-factor and a $uv$-factor). Consider the cuts of a fixed linear arrangement of $H'$ from left to right and the marked versions of $\alpha_e$ with respect to the corresponding marking sequence. By construction, every boundary between a currently marked block and an adjacent unmarked block corresponds to a crossing edge of the current cut. This means that if there are $\ell$ marked blocks, then, depending on whether there is a marked prefix or suffix, the current cut must have size at least $2(\ell-1)$ and at most $2\ell$. On the other hand, every crossing edge of the current cut (except $e$, if contained in the cut) is responsible for a marked symbol next to an unmarked one. This means that if the size of the current cut is $2\ell$ (note that it must be even due to the duplication of edges), then there are $\ell$ marked blocks if no prefix or suffix is marked, there are $\ell+1$ marked blocks if both a prefix and a suffix are marked, and if a prefix is marked but no suffix is marked (or the other way around), then in the current marked version there are $2\ell-1$ boundaries between marked and unmarked blocks, and therefore the current cut contains $2\ell-1$ edges different from $e$ (the ones responsible for the $2\ell-1$ boundaries between marked and unmarked blocks), and the additional edge $e$, which is not represented by any size-$2$ factor in $\alpha_e$. Consequently, if $H'$ has a cutwidth of $2k$ (which means that $H$ has a cutwidth of $k$), then the locality number of $\alpha_e$ is either $k$ or $k+1$.
For the sake of convenience, let $\ell=2k$ for some $k\ge 1$. Let $\sigma$ be any block-extending marking sequence for $\alpha$. If $\sigma$ marks $y$ first, then we have $2k$ marked blocks, and if some $x_i$, $1\le i\le 2k$, is marked first, then $y$ is marked next, which leads to $2k-1$ marked blocks. Thus, $\pi_{\sigma}(\beta)\ge 2k-1$. On the other hand, we can proceed as follows. We first mark the $k$ symbols $x_2,x_3,\ldots,x_{k+1}$, which leads to $k$ marked blocks (and which is a marking sequence that is not block-extending). Then we mark $y$, which joins all the previously marked blocks into one marked block and turns $k-1$ occurrences of $y$ into new individual marked blocks (i.e., the $k-2$ occurrences of $y$ between the symbols $x_{k+2},x_{k+3},\ldots,x_{2k}$ and the single occurrence of $y$ after $x_{2k}$). Thus, there are still $k$ marked blocks, and from now on marking the rest of the symbols only decreases the number of marked blocks. Consequently, $\operatorname{loc}(\beta)\le k$. Moreover, after any marking sequence has marked symbol $y$, there are $2k$ marked occurrences of symbol $y$. If these marked occurrences form at least $k$ marked blocks, the overall marking number of the marking sequence is at least $k$. If they form at most $k-1$ marked blocks, then at least $k+1$ of the symbols $x_i$ must be marked as well, and since these symbols were marked before marking $y$, they have formed at least $k+1$ marked blocks before marking $y$. This means that the overall marking number is at least $k+1$. This shows that $\operatorname{loc}(\beta)\ge k$, and therefore $\operatorname{loc}(\beta)=k$.
For this example marking sequence, it is worth noting that marking the many occurrences of $e$ joins several individual marked blocks into one marked block. This also intuitively explains the correspondence between the locality number and the maximum number of occurrences per symbol (in condensed words): if there are $2k$ occurrences of a symbol, then, by marking this symbol, either at least $k$ new marked blocks are created, or at least $k$ marked blocks must already exist before marking this symbol (see Observation 2.1).
D
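To make the marking-sequence notions used above concrete, here is a brute-force sketch: a marking sequence is an order on the distinct symbols of a word, its marking number is the maximum number of marked blocks seen along the way, and the locality number is the minimum over all orders. This is only usable for tiny alphabets and is not the paper's reduction.

```python
# Hedged sketch: brute-force locality number via all marking sequences of a short word.
from itertools import permutations

def marked_blocks(word, marked):
    blocks, inside = 0, False
    for ch in word:
        if ch in marked:
            if not inside:
                blocks += 1          # a new maximal run of marked positions starts
            inside = True
        else:
            inside = False
    return blocks

def locality_number(word):
    best = None
    for order in permutations(set(word)):    # every marking sequence
        marked, worst = set(), 0
        for ch in order:
            marked.add(ch)
            worst = max(worst, marked_blocks(word, marked))
        best = worst if best is None else min(best, worst)
    return best

print(locality_number("abab"))   # 2: any order creates two marked blocks at some point
```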
Results were obtained using an independent dataset of 3039 PPG recordings, achieving better results than previous methods that were based on handcrafted features.
Besides AF detection, wearable data have been used to search for optimal cardiovascular disease predictors.
Gotlibovych et al. [117] trained a one-layer CNN followed by an LSTM using 180 h of PPG wearable data to detect AF.
CRFs have been jointly trained with CNNs and have been used in depth estimation in endoscopy[269] and liver segmentation in CT[270].
Wearable devices, which impose restrictions on size, power and memory consumption for models, have also been used to collect cardiology data for training deep learning models for AF detection.
A
Given the stochasticity of the proposed model, SimPLe can be used with truly stochastic environments. To demonstrate this, we ran an experiment where the full pipeline (both the world model and the policy) was trained in the presence of sticky actions, as recommended in (Machado et al., 2018, Section 5). Our world model learned to account for the stickiness of actions and in most cases the end results were very similar to the ones for the deterministic case even without any tuning, see Figure 6.
The model-based agent is better than a random policy for all the games except Bank Heist. Interestingly, we observed that the best of the 5 runs was often significantly better. For 6 of the games, it exceeds the average human score (as reported in Table 3 of Pohlen et al. (2018)). This suggests that further stabilizing SimPLe should improve its performance, indicating an important direction for future work. In some cases during training we observed high variance of the results during each step of the loop. There are a number of possible reasons, such as mutual interactions of the policy training and the supervised training or domain mismatch between the model and the real environment. We present detailed numerical results, including best scores and standard deviations, in Appendix D.
In search for an effective world model we experimented with various architectures, both new and modified versions of existing ones. This search resulted in a novel stochastic video prediction model (visualized in Figure 2) which achieved superior results compared to other previously proposed models. In this section, we describe the details of this architecture and the rationale behind our design decisions. In Section 6 we compare the performance of these models.
We evaluate our method on 26 games selected on the basis of being solvable with existing state-of-the-art model-free deep RL algorithms (specifically, for the final evaluation we selected games which achieved non-random results using our method or the Rainbow algorithm using 100K interactions), which in our comparisons are Rainbow Hessel et al. (2018) and PPO Schulman et al. (2017). For Rainbow, we used the implementation from the Dopamine package and spent considerable time tuning it for sample efficiency (see Appendix E).
To evaluate the design of our method, we independently varied a number of the design decisions. Here we present an overview; see Appendix A for detailed results.
D
Out of the 11500 signals we used 76%, 12% and 12% of the data (8740, 1380, 1380 signals) as training, validation and test data respectively.
All networks were trained for 100 epochs and model selection was performed using the best validation accuracy out of all the epochs.
The convolutional and linear layers of all $b_d$ were initialized according to their original implementation.
Three identical channels were also stacked for all $m$ outputs to satisfy the input size requirements for $b_d$.
Architectures of all $b_d$ remained the same, except for the number of the output nodes of the last linear layer which was set to five to correspond to the number of classes of $D$.
A
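A minimal sketch of the 76%/12%/12% split quoted above (8740/1380/1380 of the 11500 signals) using scikit-learn; the placeholder data, stratification, and random seed are assumptions.

```python
# Hedged sketch of a two-stage train/validation/test split (placeholder signals and labels).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.randn(11500, 178)           # placeholder signals (178 samples each)
y = np.random.randint(0, 5, size=11500)   # placeholder labels for 5 classes

X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=8740, random_state=0, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=1380, random_state=0, stratify=y_rest)
print(len(X_train), len(X_val), len(X_test))   # 8740 1380 1380
```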
A major obstacle in achieving seamless autonomous locomotion transition lies in the need for an efficient sensing methodology that can promptly and reliably evaluate the interaction between the robot and the terrain, referred to as terramechanics. These methods generally involve performing comprehensive on-site measurements of soil attributes prior to robot deployment [9]. Moreover, it's important to consider that these terramechanics models, striving to predict robot-terrain interactions, often involve substantial computational costs due to their complexity [16]. Therefore, terramechanics methods are unsuitable for use in autonomous locomotion mode transition control directly, particularly in scenarios where robots need to move at high speeds, for example in search and rescue missions. To bypass the limitations of terramechanics methods, researchers have probed into alternative strategies for accomplishing autonomous locomotion transition. For example, certain studies have utilized energy consumption as a metric for evaluating the traversability of different locomotion modes in wheel/track-legged robots [8]. By scrutinizing the energy expenditure for different locomotion modes, researchers can evaluate their efficiency in navigating various terrains. Additionally, other general parameters like stability margin and motion efficiency have been examined in the quest to achieve autonomous locomotion transition [2].
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the rear legs (depicted by the green line) exceeded the predetermined threshold values set by the rear body climbing gait for heights of 2h. The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this. After the mode transition is triggered, the robot enters a well-defined preparation phase, wherein it moves backward a short distance to ensure the rear tracks are separated from the step. Following the preparation phase, the robot switches to the rear body climbing gait. Despite the noticeable improvement in energy consumption, the transition to the rear body climbing gait takes more time for the robot to tackle a 2h step.
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and walking locomotion modes. Through energy consumption analyses during step negotiations of varied heights, we establish energy criterion thresholds that guide the robot’s transition from rolling to walking mode. Our simulation studies reveal that the Cricket robot can autonomously switch to the most suitable locomotion mode based on the height of the steps encountered.
There are two primary technical challenges in the wheel/track-legged robotics area [2]. First, there's a need to ensure accurate motion control within both rolling and walking locomotion modes [5] and effectively handle the transitions between them [6]. Second, it's essential to develop decision-making frameworks that determine the best mode, either rolling or walking, based on the robot's environmental interactions and internal states [7, 8]. In addressing the first challenge, the dynamics of rolling locomotion are well understood and are similar to those of traditional wheeled/tracked robots. However, despite extensive research on the walking dynamics of standard legged robots, focused studies on the walking patterns specific to wheel/track-legged robots are limited [9]. Transition control between these locomotion modes for wheel/track-legged robots also requires more exploration [6]. In this study, we focus on the second challenge to develop efficient decision-making algorithms for transitioning between locomotion modes. This remains a largely unexplored area [3], but it is essential for achieving autonomous locomotion transition in hybrid robots. Building upon our prior work, we employ two climbing gaits to ensure smooth walking locomotion for wheel/track-legged robots, particularly when navigating steps [10].
In the literature review, Gorilla [2] is able to switch between bipedal and quadrupedal walking locomotion modes autonomously using criteria developed based on motion efficiency and stability margin. WorkPartner [8] demonstrated its capability to seamlessly transition between two locomotion modes: rolling and rolking. The rolking mode, a combination of rolling and walking, empowered WorkPartner to navigate with enhanced agility. This was accomplished through criteria that took into account energy utilization, wheel slip percentage, and the dynamics between the wheels and the demanding terrain. However, it is noteworthy that Gorilla has only a walking locomotion mode and does not fit into the wheel/track-legged hybrid robot category, and that the approach introduced by WorkPartner is tailored specifically to that platform. The threshold values for the locomotion transition criteria were established empirically through prior experimental evaluations conducted on the target terrains. However, a critical aspect that deserves emphasis is that the prevailing criteria proposed for locomotion mode transitions have primarily concentrated on the robot's internal states, neglecting the integration of external environmental information into the decision-making process. This oversight underscores the need for future developments that incorporate a more comprehensive understanding of the external context and environmental factors, enabling robots like WorkPartner to make informed decisions based on a holistic assessment of both internal and external conditions.
D
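A minimal sketch of the energy-criterion logic described above: accumulate the rear legs' energy consumption online and trigger the switch from rolling to the climbing gait once it exceeds the pre-studied threshold for the detected step height. The power trace and threshold value are placeholders, not the robot's data.

```python
# Hedged sketch of the energy-threshold transition trigger (placeholder numbers).
def should_switch_to_walking(power_samples, dt, threshold_joules):
    energy = 0.0
    for p in power_samples:          # instantaneous power of the rear legs [W]
        energy += p * dt             # accumulate energy online [J]
        if energy > threshold_joules:
            return True              # trigger preparation phase, then climbing gait
    return False

print(should_switch_to_walking([40.0] * 50, dt=0.1, threshold_joules=150.0))  # True
```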
$\gamma\le c/(c+t)\le\gamma+1/2^k$. In case $c/(c+t)$ is a positive integer multiple of $1/2^k$, we break the tie towards $\gamma<c/(c+t)$.
The advice for Rrc is a fraction $\gamma$, an integer multiple of $1/2^k$, that is encoded in $k$ bits such that if the advice is trusted then
The remaining cases are more interesting and involve scenarios when the advice is untrusted, or when the advice is trusted but the algorithm maintains a ratio of $\alpha$ instead of $\gamma$ as indicated by the advice.
Note that for a sufficiently large, yet constant, number of bits, $\gamma$ provides a good approximation of the critical ratio. Indeed, having $\gamma$ as advice is sufficient to achieve a competitive ratio that approaches $1.5$ in the trusted advice model, as shown in [2].
First, note that when $\gamma\le\alpha$, then the algorithm works with the ratio $\gamma$ as indicated by the advice.
C
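A minimal sketch of the $k$-bit advice described above: $\gamma$ is the integer multiple of $1/2^k$ just below $c/(c+t)$, with ties broken towards $\gamma<c/(c+t)$; the helper name is hypothetical.

```python
# Hedged sketch: encode gamma as a k-bit integer m with gamma = m / 2^k.
def encode_advice(c, t, k):
    ratio = c / (c + t)
    m = int(ratio * 2 ** k)          # floor to a multiple of 1/2^k
    if m == ratio * 2 ** k:          # ratio is exactly a multiple of 1/2^k:
        m -= 1                       # break the tie towards gamma < c/(c+t)
    return max(m, 0)

m = encode_advice(3, 5, k=4)
print(m, m / 2 ** 4)                 # c/(c+t) = 0.375 -> gamma = 5/16 = 0.3125
```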
    $\overrightarrow{c}\leftarrow$ Classify-At-Level($text$, $MAX\_LEVEL$)
$\left(\frac{\overrightarrow{\Delta c}[1]}{\overrightarrow{\Delta c}[0]}>4\right)$ or $\left(\overrightarrow{c}[1]>\overrightarrow{c}[0]\right)$ then
    local variables: $\overrightarrow{c}$, the subject confidence vector
    return a set of indexes selected by applying a policy, $\pi$, to $\overrightarrow{c}$
         $\overrightarrow{c}\leftarrow\overrightarrow{c}+\overrightarrow{\Delta c}$
C
One error-feedback-based sparse communication method for DMSGD, called Deep Gradient Compression (DGC) (Lin et al., 2018), has appeared and has achieved better performance than vanilla DSGD with sparse communication in practice.
However, the theory about the convergence of DGC is still lacking. Furthermore, although DGC combines momentum and error feedback, the momentum in DGC only accumulates stochastic gradients computed by each worker locally. Therefore, the momentum in DGC is a local momentum without global information.
We can find that DGC (Lin et al., 2018) is mainly based on the local momentum while GMC is based on the global momentum. Hence, each worker in DGC cannot capture the global information from its local momentum, while that in GMC can capture the global information from the global momentum even if sparse communication is adopted.
GMC combines error feedback and momentum to achieve sparse communication in distributed learning. But different from existing sparse communication methods like DGC which adopt local momentum, GMC adopts global momentum.
To make a comprehensive comparison of these methods, we will compare GMC with two implementations of DGC: DGC (w/ mfm) and DGC (w/o mfm). Different from DGC (w/ mfm), DGC (w/o mfm) will degenerate to DMSGD if sparse communication is not adopted.
A
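A minimal sketch of the two ingredients discussed above, error feedback and top-$k$ sparsification, for a single worker's message; this is a generic illustration, not the exact GMC or DGC update rule.

```python
# Hedged sketch: compress the (momentum-corrected) gradient with top-k, keep the error locally.
import numpy as np

def topk_sparsify(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]      # keep the k largest-magnitude entries
    out[idx] = v[idx]
    return out

def sparse_step(grad, residual, k):
    corrected = grad + residual           # error feedback: add what was not sent before
    sent = topk_sparsify(corrected, k)    # sparse message actually communicated
    new_residual = corrected - sent       # remember the compression error locally
    return sent, new_residual

residual = np.zeros(8)
g = np.array([0.1, -2.0, 0.05, 0.3, -0.02, 1.5, 0.0, -0.4])
sent, residual = sparse_step(g, residual, k=2)
print(sent, residual)
```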
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals in total.
In Section II we define the $\varphi$ metric, then in Section III we define the five tested activation functions along with the architecture and training procedure of SANs, in Section IV we evaluate SANs on the Physionet [32], UCI-epilepsy [33], MNIST [34] and FMNIST [35] databases and provide visualizations of the intermediate representations and results.
We use 10000 images from the training dataset as a validation dataset and train on the remaining 50000 for 5 epochs with a batch size of 64.
First, we merge the tumor classes (2 and 3) and the eyes classes (4 and 5), resulting in a modified dataset of three classes (tumor, eyes, epilepsy).
The CNN feature extractor consists of two convolutional layers with 3 and 16 filters and kernel size 5, each one followed by a ReLU and a Max-Pool with pool size 2.
C
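A hedged PyTorch sketch of the CNN feature extractor described above: two convolutional layers with 3 and 16 filters and kernel size 5, each followed by a ReLU and max-pooling with pool size 2. Treating the EEG segment as a one-channel 1-D signal is an assumption about the input layout.

```python
# Hedged sketch of the described feature extractor applied to 178-sample EEG segments.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=3, kernel_size=5),
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=2),
    nn.Conv1d(in_channels=3, out_channels=16, kernel_size=5),
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=2),
)

x = torch.randn(8, 1, 178)            # batch of 8 EEG segments, 178 samples each
print(feature_extractor(x).shape)     # -> torch.Size([8, 16, 41])
```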
20:        $x_i(t+1)=0$.
Let $\tau$ denote the dynamic degree of the scenarios. The harsher the environment the network suffers, the higher $\tau$ is. In highly dynamic scenarios, we suppose that $\tau\ge 0.01$. With proper $\tau$, PBLLA asymptotically converges and leads the UAV ad-hoc network game to approach the PSNE.
The essence of PBLLA is selecting an alternative UAV randomly in each iteration and improving its utility by altering power and altitude with a certain probability, which is determined by the utilities of the two strategies and $\tau$. The UAV prefers to select the power and altitude which provide higher utility. Nevertheless, highly dynamic scenarios will cause UAVs to make mistakes and pick the worse strategy. The dynamic degree index $\tau$ determines the dynamic degree of the situation and the UAV's performance. Small $\tau$ means less dynamic scenarios and fewer mistakes when UAVs are making decisions. When $\tau\rightarrow 0$, which corresponds to stabilization, the UAV will always select the power and altitude with higher utility; when $\tau\rightarrow\infty$, where severe dynamics exist, the UAV will choose them randomly. However, PBLLA has the limitation that only one single UAV is allowed to alter strategies in one iteration. We will propose a new algorithm in the next section to overcome this restriction.
However, we have to recognize that the strategy-altering probability $\omega$ severely impacts the efficiency of SPBLLA. If Theorem 5 requires $m$ to be a large value, the probability will decrease. When $m$ is too large, UAVs can hardly move, and the learning rate will decrease. At some point, the learning rate of SPBLLA will be lower than that of PBLLA. In our UAV ad-hoc network scenario, when $\tau=0.01$ and $m=0.03$, which is circled in Fig. 15, the probability of altering strategies is $\omega<0.01$. The probability of altering strategies in SPBLLA is less than that of PBLLA, and SPBLLA will spend more learning time.
The process of SPBLLA frees UAVs from message exchange. Therefore, there is no waste of energy or time consumption between two iterations, which significantly improves learning efficiency. All UAVs alter strategies with a certain probability $\omega$, which is determined by $\tau$ and $m$. $\tau$ also represents the dynamic degree of the scenarios. The chance of UAVs making mistakes when altering strategies is determined by the dynamic degree, as in PBLLA.
D
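A minimal sketch of the temperature-controlled choice rule implied above: the UAV picks between its current and alternative (power, altitude) strategy with a softmax probability over their utilities, so small $\tau$ gives near-greedy choices and large $\tau$ near-random ones. This is a generic log-linear learning step, not necessarily the exact PBLLA update.

```python
# Hedged sketch: Boltzmann choice between two strategies with temperature tau.
import math, random

def choose_strategy(u_current, u_alternative, tau):
    w_cur = math.exp(u_current / tau)
    w_alt = math.exp(u_alternative / tau)
    p_alt = w_alt / (w_cur + w_alt)           # probability of switching strategies
    return "alternative" if random.random() < p_alt else "current"

random.seed(0)
for tau in (0.01, 1.0, 100.0):
    picks = [choose_strategy(1.0, 1.2, tau) for _ in range(1000)]
    print(tau, picks.count("alternative") / 1000)   # ~1.0, ~0.55, ~0.50
```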
Once again, with boundary conditions $\overline{\mathbf{v}}_{\perp}|_{\Gamma}=\mathbf{0}$,
For the particular case of $\overline{U}=\overline{1}$, this implies
Note that $\overline{\overline{\Delta}}\,\overline{U}$ may be expressed
radii of the wall. Initial system toroidal flux is zero, and $\Phi_{PI}(t)$
Hence, system toroidal flux is conserved. Note that in this case $\overline{f}$
D
FD $g(x)\rightarrow g(y)$ is valid in $r$.
This is the case, for instance, of the reality $g_1$ depicted to the left of Figure
Instead of $g_1$, we consider the interpretation $g_1'$ depicted in Figure
according to the realities $g_1$ to $g_6$ given in Figure 9.
Let $r$ be the relation on $\mathcal{C}_R$ given to the left of Figure 12.
A
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and extremely different predictions on the learning trajectory across episodes, because of unseen state transitions and the finite size of the experience replay buffer. This type of variance leads to convergence to sub-optimal policies and severely hurts DQN performance. The second source of variance, Target Approximation Error, is the error coming from the inexact minimization of DQN parameters. Many of the proposed extensions focus on minimizing the variance that comes from AGE by finding methods to optimize the learning trajectory, or from TAE by using methods like averaging to obtain exact DQN parameters. Dropout methods have the ability to combine these two solutions, which minimize different sources of variance. Dropout methods can achieve a consistent learning trajectory and exact DQN parameters with averaging, which comes inherently with Dropout methods.
We detected the variance between DQN and Dropout-DQN visually and numerically as Figure 3 and Table I show.
In the experiments we detected variance using the standard deviation of the average score collected from many independent learning trials.
Figure 3: Dropout DQN with different Dropout methods in the CARTPOLE environment. The bold lines represent the average scores obtained over 10 independent learning trials, while the shaded areas indicate the range of the standard deviation.
To evaluate Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs. Thus we ran ten consecutive learning trials and averaged them. We evaluated the Dropout-DQN algorithm on the CARTPOLE problem from the Classic Control environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady-state policy.
B
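A minimal sketch of the variance measurement described above: scores from independent learning trials are stacked, and the per-episode mean (bold line) and standard deviation (shaded band) are computed across trials; the score array is a placeholder, not actual CARTPOLE results.

```python
# Hedged sketch: per-episode mean and standard deviation over 10 independent trials.
import numpy as np

scores = np.random.default_rng(0).normal(loc=150, scale=20, size=(10, 300))  # placeholder
mean_per_episode = scores.mean(axis=0)   # average over the 10 trials (bold line)
std_per_episode = scores.std(axis=0)     # spread over the 10 trials (shaded band)
print(mean_per_episode[:3], std_per_episode[:3])
```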
Weakly supervised segmentation using image-level labels versus a few images with segmentation annotations. Most new weakly supervised localization methods apply attention maps or region proposals in a multiple instance learning formulation. While attention maps can be noisy, leading to erroneously highlighted regions, it is not simple to decide on an optimal window or bag size for multiple instance learning approaches.
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic segmentation as well as traditional machine learning based methods and deep learning-based network architectures for RGB-D segmentation. Lateef and Ruichek (2019) presented an extensive survey of deep learning architectures, datasets, and evaluation methods for the semantic segmentation of natural images using deep neural networks. Similarly, for medical imaging, Goceri and Goceri (2017) presented a high-level overview of deep learning-based medical image analysis techniques and application areas. Hesamian et al. (2019) presented an overview of the state-of-the-art methods in medical image segmentation using deep learning by covering the literature related to network structures and model training techniques. Karimi et al. (2019) reviewed the literature on techniques to handle label noise in deep learning based medical image analysis and evaluated existing approaches on three medical imaging datasets for segmentation and classification tasks. Zhou et al. (2019b) presented a review of techniques proposed for fusion of medical images from multiple modalities for medical image segmentation. Goceri (2019a) discussed the fully supervised, weakly supervised and transfer learning techniques for training deep neural networks for segmentation of medical images, and also discussed the existing methods for addressing the problems of lack of data and class imbalance. Zhang et al. (2019) presented a review of the approaches to address the problem of small sample sizes in medical image analysis, and divided the literature into five categories including explanation, weakly supervised, transfer learning, and active learning techniques. Tajbakhsh et al. (2020) presented a review of the literature for addressing the challenges of scarce annotations as well as weak annotations (e.g., noisy annotations, image-level labels, sparse annotations, etc.) in medical image segmentation. Similarly, there are several surveys covering the literature on the task of object detection (Wang et al., 2019c; Zou et al., 2019; Borji et al., 2019; Liu et al., 2019b; Zhao et al., 2019), which can also be used to obtain what can be termed rough localizations of the object(s) of interest. In contrast to the existing surveys, we make the following contributions in this review:
While most deep segmentation models for medical image analysis rely on only clinical images for their predictions, there is often multi-modal patient data in the form of other imaging modalities as well as patient metadata that can provide valuable information, which most deep segmentation models do not use. Therefore, a valuable research direction for improving segmentation performance of medical images would be to develop models which are able to leverage multi-modal patient data.
We provide comprehensive coverage of research contributions in the field of semantic segmentation of natural and medical images. In terms of medical imaging modalities, we cover the literature pertaining to both 2D (RGB and grayscale) as well as volumetric medical images.
Because of the large number of imaging modalities, the significant signal noise present in imaging modalities such as PET and ultrasound, and the limited amount of medical imaging data mainly because of high acquisition cost compounded by legal, ethical, and privacy issues, it is difficult to develop universal solutions that yield acceptable performances across various imaging modalities. Therefore, a proper research direction would be along the work of  Raghu et al. (2019) on image classification models, studying the risks of using non-medical pre-trained models for medical image segmentation.
B
Tab. VI-B reports the average results achieved over 10 independent runs by a GNN implemented with different pooling operators.
Similarly to the MNIST experiment, we notice that neither DiffPool nor Top$K$ is able to solve this graph signal classification task.
As expected, GNNs configured with GRACLUS, NMF, and NDP are much faster to train compared to those based on DiffPool and Top$K$, with NDP being slightly faster than the other two topological methods.
We consider two tasks on graph-structured data: graph classification and graph signal classification.
Contrary to graph classification, DiffPool and Top$K$ fail to solve this task and achieve an accuracy comparable to random guessing.
D
Massiceti et al. (2017) extend this approach and introduce a network splitting strategy by dividing each decision tree into multiple subtrees. The subtrees are mapped individually and share common neurons for evaluating the split decision.
When using all decision trees, data samples are created where all trees agree with a high probability.
The number of parameters of the networks becomes enormous as the number of nodes grows exponentially with the increasing depth of the decision trees.
These techniques, however, are only applicable to trees of limited depth. As the number of nodes grows exponentially with the increasing depth of the trees, inefficient representations are created, causing extremely high memory consumption.
Additionally, many weights are set to zero so that an inefficient representation is created. Due to both reasons, the mappings do not scale and are only applicable to simple random forests.
C
In particular, when specialized to the tabular setting, our setting corresponds to the third setting with $d=|\mathcal{S}|^{2}|\mathcal{A}|$, where OPPO attains an $H^{3/2}|\mathcal{S}|^{2}|\mathcal{A}|\sqrt{T}$-regret up to logarithmic factors.
for any function $f:\mathcal{S}\rightarrow\mathbb{R}$. By allowing the reward function to be adversarially chosen in each episode, our setting generalizes the stationary setting commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), where the reward function is fixed across all the episodes.
step, which is commonly adopted by the existing work on value-based reinforcement learning (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018, 2019; Yang and Wang, 2019b, a), lacks such a notion of robustness.
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019). However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored. Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020). Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
D
In a first step, channel (WRN: Channel), kernel (WRN: Kernel), and group pruning (WRN: Group) are evaluated separately on the WRN architecture.
The results for the number of floating-point operations (FLOPs), parameters, activations, and memory (= parameters + activations) are reported in Figure 4.
Typically, the weights of a DNN are stored as 32-bit floating-point values and during inference millions of floating-point operations are carried out.
Ultimately, it can be stated that group convolutions are excellent at reducing FLOPs and parameters but can harm the overall memory requirements by increasing the amount of activations.
The dense architecture outperforms the residual blocks in terms of number of FLOPs as well as parameters.
A
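The memory measure used in the row above is defined as parameters plus activations. A rough way to collect such counts for a PyTorch module is sketched below; the helper and the toy block are illustrative stand-ins, not the paper's measurement code.

```python
import torch
import torch.nn as nn

def count_params_and_activations(model: nn.Module, input_shape=(1, 3, 32, 32)):
    """Count parameters and the activations produced by one forward pass."""
    n_params = sum(p.numel() for p in model.parameters())

    activations = []
    hooks = [m.register_forward_hook(lambda mod, inp, out: activations.append(out.numel()))
             for m in model.modules() if len(list(m.children())) == 0]   # leaf modules only
    with torch.no_grad():
        model(torch.zeros(input_shape))
    for h in hooks:
        h.remove()

    n_activations = sum(activations)
    return n_params, n_activations, n_params + n_activations  # "memory" = parameters + activations

# Toy stand-in for a WRN stage
block = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 16, 3, padding=1))
print(count_params_and_activations(block))
```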
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori not obvious. We address this question in Section 3 by introducing a suitable category structure.
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori not obvious. We address this question in Section 3 by introducing a suitable category structure.
In Section 8, we reprove Rips and Gromov’s result about the contractibility of the Vietoris-Rips complex of hyperbolic geodesic metric spaces, by using our method consisting of isometric embeddings into injective metric spaces. As a result, we will be able to bound the length of intervals in Vietoris-Rips persistence barcode by the hyperbolicity of the underlying space.
In Section 4, we show that the Vietoris-Rips filtration can be (categorically) seen as a special case of persistent homology obtained through metric embeddings via the isomorphism theorem (Theorem 1). In this section, we also establish the stability of the filtration obtained via metric embeddings.
In this paper we significantly generalize this point of view by proving an isomorphism theorem between the Vietoris-Rips filtration of any compact metric space $X$ and its Kuratowski filtration:
C
We aim to enhance the trust into and interpretability of t-SNE through visualization and exploration of the model, the data, and the hyper-parameters. An overall picture of the interface is shown in Figure 1, and each of its different views is described below, divided into our four design goals: Hyper-parameter Exploration (G1), Overview (G2), Quality (G3), and Dimensions (G4). Further discussions on the design choices behind some of the views can be found in Subsection 7.1.
Significantly-different t-SNE projections can be generated from the same data set, due to its well-known sensitivity to hyper-parameter settings [14]. We propose to support users in finding a good t-SNE projection for their data by using visual exploration, as follows. A Grid Search mode (Figure 1(a)) initiates a systematic parameter search that computes 500 projections by varying the parameters perplexity, learning rate, and max iterations.
The answers to Q.1.2 also show that t-viSNE users needed fewer iterations to find a good parameter setting.
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and none of them appears to have a clear advantage over the others, we pick one with good values for all the rest of the quality metrics (i.e., greater than 40%). The overview in Figure 7(a) shows the selected projection with three clear clusters of varying sizes (marked with C1, C2, and C3). However, the labels seem to be mixed in all of them. That means either the projections are not very good, or the labels are simply very hard to separate. By analyzing the Shepard Heatmap (Figure 7(b)), it seems that there is a distortion in how the projection represents the original N-D distances: the darker cells of the heatmap are above the diagonal and concentrated near the origin, which means that the lowest N-D distances (up to 30% of the maximum) have been represented in the projection with a wide range of 2-D distances (up to 60% of the maximum). While it may be argued that the data is too spread in the projection, we must always consider that t-SNE’s goal is not to preserve all pairwise distances, but only close neighborhoods. The projection has used most of its available 2-D space to represent (as best as possible) the smallest N-D distances, which can be considered a good trade-off for this specific objective. In the following paragraphs, we concentrate on some of the goals described in Subsection 4.3 and Subsection 4.4 for each of the three clusters.
VisCoDeR [22] supports the comparison between multiple projections generated by different DR techniques and parameter settings, similarly to our initial parameter exploration, using a scatterplot view with an on-top heatmap visualization for evaluating the quality of these projections. In contrast to t-viSNE, it does not support further exploratory visual analysis tasks after the layout is selected, such as optimizing the hyper-parameters for specific user selections.
A
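The Grid Search mode described in the row above varies perplexity, learning rate, and the iteration cap, then ranks the resulting projections by quality metrics. A minimal sketch of such a search with scikit-learn's t-SNE is given below; it uses trustworthiness as a stand-in quality metric, sweeps a reduced grid rather than 500 combinations, and assumes the older `n_iter` parameter name (renamed `max_iter` in recent scikit-learn releases).

```python
import itertools
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)

# A reduced grid; the tool described above sweeps 500 combinations.
perplexities = [5, 30, 50]
learning_rates = [10, 200, 1000]
iterations = [250, 500, 1000]

results = []
for perp, lr, it in itertools.product(perplexities, learning_rates, iterations):
    # `n_iter` was renamed `max_iter` in recent scikit-learn releases.
    emb = TSNE(n_components=2, perplexity=perp, learning_rate=lr,
               n_iter=it, random_state=0).fit_transform(X)
    # trustworthiness is one quality metric a list of projections could be sorted by
    results.append(((perp, lr, it), trustworthiness(X, emb)))

best_params, best_score = max(results, key=lambda r: r[1])
print(best_params, best_score)
```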
Metaheuristics “In the Large” - 2022 [28]: The objective of this work is to provide a useful tool for researchers. To address the lack of novelty, the authors propose a new infrastructure to support the development, analysis, and comparison of new approaches. This framework is based on (1) the use of algorithm templates for reuse without modification, (2) white box problem descriptions that provide generic support for the injection of domain-specific knowledge, and (3) remotely accessible frameworks, components, and problems. This can be considered as a step towards the improvement of the reproducibility of results.
Good practices for designing metaheuristics: It gathers several works that are guidelines for good practices related to research orientation to measure novelty [26], to measure similarity in metaheuristics [27], Metaheuristics “In the Large” (to support the development, analysis, and comparison of new approaches) [28], to design manual or automatic new metaheuristics [29], to guide the learning strategy in design and improvement of metaheuristics [30], to use statistical test in metaheuristics [31], and to detect the novelties in metaphor-based algorithms [32].
An exhaustive review of the metaheuristic algorithms for search and optimization: taxonomy, applications, and open challenges - 2023 [34]: This taxonomy provides a large classification of metaheuristics based on the number of control parameters of the algorithm. In this work, the authors question the novelty of new proposals and discuss the fact that calling an algorithm new is often based on relatively minor modifications to existing methods. They highlight the limitations of metaheuristics, open challenges, and potential future research directions in the field.
The correct design of a bio-inspired algorithm involves the execution of a series of steps in a conscientious and organized manner, both at the time of algorithm development and during subsequent experimentation and application to real-world optimization problems. In [5], a complete tutorial on the design of new bio-inspired algorithms is presented, and in this work, we make a brief introduction to the phases that are necessary for quality research.
Designing new metaheuristics: Manual versus automatic approaches - 2023 [29]: This study discusses two methods for the design of new metaheuristics, manual or automatic. The authors give credit to the manual design of metaheuristics because it is based on the designer's intuition and often involves looking for inspiration in other fields of knowledge, which is a positive aspect. However, they remark that this method could involve finding a good algorithm design in a large set of options through trial and error, possibly leading to eliminating designs that, based on their knowledge, they believe would not work for the problem at hand. For this reason, the authors assert the benefits of automatic design, which seeks to reduce human involvement in the design process by harnessing recent advances in automatic algorithm configuration methods. In this work, several automatic configuration methods and metaheuristic software frameworks from the literature are presented and analyzed, some of them already mentioned in Section 6, as steps towards a better design of metaheuristics.
D
A feasible approach is to recompute the connectivity distribution based on the embedding $Z$, which contains the potential manifold information of data.
However, the following theorem shows that the simple update based on latent representations may lead to the collapse.
(2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it.
To illustrate the process of AdaGAE, Figure 2 shows the learned embedding on USPS at the $i$-th epoch. An epoch means a complete training of GAE and an update of the graph. The maximum number of epochs, $T$, is set as 10. In other words, the graph is updated 10 times. Clearly, the embedding becomes more cohesive with the update.
Therefore, the update step with the same sparsity coefficient $k$ may result in collapse. To address this problem, we assume that
A
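The graph-update step discussed in the row above recomputes the connectivity distribution from the embedding $Z$ with a sparsity coefficient $k$. The sketch below shows one simple way to perform such an update with a $k$-nearest-neighbour graph; it illustrates the idea only and is not AdaGAE's exact weighting scheme.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def update_graph(Z: np.ndarray, k: int) -> np.ndarray:
    """Rebuild a sparse, symmetric, row-normalized adjacency from the embedding Z."""
    A = kneighbors_graph(Z, n_neighbors=k, mode="connectivity", include_self=False).toarray()
    A = np.maximum(A, A.T)                    # symmetrize
    A = A / A.sum(axis=1, keepdims=True)      # row-normalize into a connectivity distribution
    return A

Z = np.random.rand(100, 16)    # stand-in for GAE embeddings
A_new = update_graph(Z, k=5)   # increasing k across epochs is one way to counter the collapse
```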
Measuring IPID increment rate. The traffic to the servers is stable and hence can be predicted (Wessels et al., 2003). We validate this by sampling the IPID value at the servers which we use for running the test. One example evaluation of IPID sampling on one of the busiest servers is plotted in Figure 3. In this evaluation we issued queries to a Name server at 69.13.54.XXX for three minutes, and plot the IPID values received in the responses in Figure 3 - the identical patterns demonstrate predictable increment rates, which means that the traffic to the server arrives at a stable rate.
Accuracy of IPID measurements. The IPID techniques are known to be difficult to leverage, requiring significant statistical analyses to ensure correctness. Recently, (Ensafi et al., 2014; Pearce et al., 2017) developed statistical methods for measuring IPID. However, in contrast to our work, the goal in (Ensafi et al., 2014; Pearce et al., 2017) is different - they use IPID to measure censorship and have additional sources of inaccuracy, which do not apply to our measurements: (1) the measurements are applied against client hosts, which results in significantly higher noise than our measurements against servers - the clients move between networks, change IP addresses, and are located behind intermediate devices, such as Network Address Translators (NATs) and firewalls - which also prevents direct measurements; (2) inaccuracies in geolocation tools, which do not apply to our study since we do not need to know the location to measure ingress filtering; (3) additional network mechanisms (anycast, rerouting, traffic shaping, transient network failures). All these can only cause us to classify the server as not 'testable', but do not impact 'spoofable' outcomes. Furthermore, the IPID measurement methods in prior works use TCP-RST packets to increment IPID, which are often blocked by firewalls. In contrast, we use packets which are not blocked, such as DNS queries or TCP-SYN.
How widespread is the ability to spoof? There are significant research and operational efforts to understand the extent and the scope of (ingress and egress)-filtering enforcement and to characterise the networks which do not filter spoofed packets; we discuss these in Related Work, Section 2. Although the existing studies and tools, such as the Open Resolver (Mauch, 2013) and the Spoofer (Beverly and Bauer, 2005; Beverly et al., 2009, 2013; Lone et al., 2018; Luckie et al., 2019) projects, provide a valuable contribution for inferring networks which do not enforce spoofing, they are nevertheless insufficient: they provide a meager (often non-uniform) coverage of the Internet networks and are limited in their applicability as well as effectiveness.
∙ Consent of the scanned. It is often impossible to request permission from owners of all the tested networks in advance; this challenge similarly applies to other Internet-wide studies (Lyon, 2009; Durumeric et al., 2013, 2014; Kührer et al., 2014). Like the other studies (Durumeric et al., 2013, 2014), we provide an option to opt out of our scans. To opt out, the network has to provide either its network block (in CIDR notation), domain or ASN through the contact page at https://smap.cad.sit.fraunhofer.de. Performing security scans is important - the networks that do not enforce filtering of spoofed packets pose a hazard not only to their operators but also to their users, customers and services, as well as other networks. Due to the importance of identifying such networks, in their recent study (Luckie et al., 2019) even make public the (“name-and-shame”) lists of providers with missing or misconfigured filtering of spoofed packets; (Luckie et al., 2019) also discuss stronger measures against spoofable networks, including liability for damages, and various types of regulation. Inevitably, due to the risks that such networks pose to the Internet ecosystem, it is of public interest to know who those networks are. We do not make the identity of the networks that do not filter spoofed packets publicly available, but inform the general public on the fraction of such networks and provide their characterisation (i.e., size, geo-location, business type) in Section 5.
What SMap improves. The infrastructure of SMap is more stable than that used in previous studies, e.g., we do not risk volunteers moving to other networks. Our measurements do not rely on misconfigurations in services which can be patched, blocking the measurements. The higher stability also allows for more accurate reproduction and validation of our datasets and results, and enables us to perform reliable longitudinal studies. We ran ingress filtering measurements with SMap every week over a period of two years (between 10 July 2019 and 10 May 2021). Our results plotted in Figure 1 demonstrate that the number of spoofable ASes is stable and proportionally increases with the growth in the overall number of ASes in the Internet. This is in contrast to previous studies, e.g., (Lone et al., 2017; Lichtblau et al., 2017; Lone et al., 2018), in which a repeated evaluation even a week later provided different statistics. In our two-year-long measurements between 2019 and 2021, covering more than 90% of the Internet's ASes, we found 50,023 new ASes that do not enforce ingress filtering, which were not known before, and confirmed all the other ASes that were found spoofable in prior studies.
A
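The IPID sampling described in the rows above boils down to probing a server with packets that are not filtered (e.g., DNS queries) and recording the IP identification field of each response. A minimal Scapy sketch is shown below; it requires raw-socket privileges and should only be run against servers one is permitted to measure.

```python
import time
from scapy.all import IP, UDP, DNS, DNSQR, sr1  # needs root privileges to send raw packets

def sample_ipid(server_ip: str, n_probes: int = 30, interval: float = 1.0):
    """Send DNS queries to a server and record the IPID of each response."""
    samples = []
    for _ in range(n_probes):
        pkt = IP(dst=server_ip) / UDP(dport=53) / DNS(rd=1, qd=DNSQR(qname="example.com"))
        resp = sr1(pkt, timeout=2, verbose=False)
        if resp is not None:
            samples.append((time.time(), resp[IP].id))
        time.sleep(interval)
    return samples

# A roughly constant gap between consecutive IPID values indicates the stable,
# predictable increment rate described in the measurement above.
```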
Natural systems need to adapt to a changing world continuously; seasons change, food sources and shelter opportunities vary, cooperation and competition with other animals evolves over time. Moreover, their embodiment also changes over their lifetime. Young animals experience a period of growth where their size increases multiple times; in old age, they become less agile and their sensors less accurate. Yet natural systems are remarkably robust against such variation, allowing them to survive and thrive despite the changes.
It is common to try to avoid such changes in artificial agents, machines, and industrial processes. When something changes, the entire system is taken offline and modified to fit the new situation. This process is costly and disruptive; adaptation similar to that in nature might make such systems more reliable and long-term, and thus cheaper to operate.
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a broad range of odors. The arrangement is called an artificial nose since it resembles the multiplicity of sensory neuron types in the nasal epithelium. However, while metal oxide-based sensors are economical and flexible, they are unstable over time. Changes to the response properties of sensors make it difficult to detect and identify odors in the long term, and sensors have to be recalibrated to compensate [3]. Recalibration requires collecting and labeling new samples, which is costly because a skilled operator is needed, and challenging because the experimental conditions need to be controlled precisely [3]. Recalibrating a model with unlabeled examples, called semisupervised learning, is a possible alternative but difficult to establish in practice.
While natural systems cope with changing environments and embodiments well, they form a serious challenge for artificial systems. For instance, to stay reliable over time, gas sensing systems must be continuously recalibrated to stay accurate in a changing physical environment. Drawing motivation from nature, this paper introduced an approach based on continual adaptation. A recurrent neural network uses a sequence of previously seen gas recordings to form a representation of the current state of the sensors. It then modulates the skill of odor recognition with this context, allowing the system to adapt to sensor drift. Context models can thus play a useful role in lifelong adaptation to changing environments in artificial systems.
An alternative approach is to emulate adaptation in natural sensor systems. The system expects and automatically adapts to sensor drift, and is thus able to maintain its accuracy for a long time. In this manner, the lifetime of sensor systems can be extended without recalibration.
A
Each of the six cases has several subcases, depending on the left-to-right order of the vertices inside the gray rectangles in the figure.
Each point has a fixed $x$-coordinate and a $y$-range specified by the array $Y$;
The vertical order of the edges is also not fixed, as the points can have any $y$-coordinate in the range $[0,2\sqrt{2}]$.
A scenario contains for each point $q$ an $x$-coordinate $x(q)$ from the set of allowed $x$-coordinates for $q$, and a range $y\text{-range}(q)\subseteq[0,2\sqrt{2}]$ for its $y$-coordinate.
For any fixed ordering we can still vary the $y$-coordinates in the range $[0,\delta]$.
D
Let $S$ be a finitely generated simple or $0$-simple idempotent-free semigroup. Then $S$ is not residually finite.
For our proof, we will show that no simple or $0$-simple idempotent-free semigroup is residually finite. A semigroup $S$ is called residually finite if, for all $s,t\in S$ with $s\neq t$, there is a homomorphism $\varphi:S\to F$ from $S$ to some finite semigroup $F$ with $s\varphi\neq t\varphi$. Clearly, every subsemigroup of a residually finite semigroup is also residually finite:
By 16, $A$ or $C$ embeds into $S$. Since neither of the two is residually finite (by 20), we obtain that $S$ cannot be residually finite either by 17.
If a semigroup $S$ is not residually finite and embeds into a semigroup $T$, then $T$ cannot be residually finite either.
If there is an injective homomorphism $S\to T$ between two semigroups $S$ and $T$, then $S$ is isomorphic to a subsemigroup of $T$ and we also say that $S$ embeds into $T$ (or that $S$ can be embedded into $T$). Thus, 16 states that, into every finitely generated simple or $0$-simple idempotent-free semigroup, we can embed $A$ or $C$.
B
Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations, rivals state-of-the-art accuracy. Future visual grounding methods should be tested with a more comprehensive experimental setup and datasets for proper evaluation.
It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model were actually visually grounded, then we would expect it to improve performance on both the train and test sets. We do not observe such behavior in any of the methods, indicating that they are not producing the right answers for the right reasons.
Since Wu and Mooney (2019) reported that human-based textual explanations Huk Park et al. (2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. SCR is trained in two phases. For the first phase, which strengthens the influential objects, we use a learning rate of $5\times 10^{-5}$, loss weight of $3$ and train the model to a maximum of 12 epochs. Then, following Wu and Mooney (2019), for the second phase, we use the best performing model from the first phase to train the second phase, which criticizes incorrect dominant answers. For the second phase, we use a learning rate of $10^{-4}$ and weight of $1000$, which is applied alongside the loss term used in the first phase. The specified hyperparameters worked better for us than the values provided in the original paper.
As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the performance on VQAv2 drops continuously during the course of the training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.
This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor. We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting valuable edits and corrections for the paper.
D
Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier pretrained on 97 different languages, designed to achieve consistently high accuracy over a wide range of languages, domains, and lengths of text.
The complete set of documents was divided into 97 languages and an unknown language category. We found that the vast majority of documents were in English. We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.
The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing. Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results. Due to its size, it was possible for the held-out test set to have a biased sample. Thus we repeated the sampling and training processes with a 5-fold cross-validation approach. Table 1 shows the performance of the models after the results from the test sets were averaged. Since the transformer-based model had the best results, we ran it on all the candidate privacy policies. Out of 2.1 million English candidate privacy policies, 1.54 million were classified as privacy policies and the rest were discarded.
Document Classification. Some of the web pages in the English language candidate document set may not have been privacy policies and instead simply satisfied our URL selection criteria. To separate privacy policies from other web documents we used a supervised machine learning approach. Two researchers in the team labeled 1,600 randomly selected candidate documents based on a preset scheme in consultation with a privacy expert. While both the researchers had substantial prior experience with privacy policies, the privacy expert was consulted to eliminate uncertainty in the annotations of a few documents. Lack of agreement in the annotations occurred for six documents, which were settled by discussion with the expert.
Language Detection. We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies. To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012). Langid is a Naive Bayes-based classifier pretrained on 97 different languages, designed to achieve consistently high accuracy over a wide range of languages, domains, and lengths of text.
A
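The pipeline in the row above first filters candidate documents by language with `langid` and then classifies the English candidates as privacy policies or not. The sketch below illustrates both steps; the tiny corpus and the bag-of-words classifier are placeholders for the 1,600 labelled documents and the transformer model reported in the row.

```python
import langid  # pip install langid
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def is_english(text: str) -> bool:
    lang, _score = langid.classify(text)   # langid returns (language_code, score)
    return lang == "en"

# Toy stand-ins for the labelled candidate documents.
docs = ["We collect your personal data when you visit our site ...",
        "Welcome to our online store, browse the latest deals ..."]
labels = [1, 0]  # 1 = privacy policy, 0 = other web page

english_docs = [d for d in docs if is_english(d)]          # language filtering step
print(len(english_docs), "of", len(docs), "candidates identified as English")

# Simple bag-of-words baseline; the row above reports that a transformer model performed best.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(docs, labels)
print(clf.predict(["This privacy policy explains how we process your data."]))
```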
Multiple metrics are important to avoid the dangers of using single metrics, such as accuracy [32, 54], for every data set.
As mentioned in section 1, the selection of the right performance metrics for different types of analytical problems and/or data sets is challenging.
However, comparison and selection between multiple performance indicators is not trivial, even for widely used metrics [12, 46]; alternatives such as Matthews correlation coefficient (MCC) might be more informative for some problems [8].
Optimized Models for Specific Predictions. In Figure 7(a), we see the initial projection of the 200 models selected up to this point (i.e., S1). Some models perform well according to our metrics, but others could be removed due to lower performance. However, we should try not to break the balance between performance and diversity of our stacking ensemble.
T3: Manage the performance metrics for enhancing trust in the results. Many performance or validation metrics are used in the field of ML. For each data set, there might be a different set of metrics to measure the best-performing stacking. Controlling the process by alternating these metrics and observing their influence in the performance can be an advantage (G3).
B
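As noted in the row above, accuracy alone can be misleading and alternatives such as MCC may be more informative; one way to operationalize this is to always report a small battery of metrics. A minimal scikit-learn sketch:

```python
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score, roc_auc_score)

def score_model(y_true, y_pred, y_prob=None):
    """Report several complementary metrics instead of relying on accuracy alone."""
    metrics = {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred),
        "f1":        f1_score(y_true, y_pred),
        "mcc":       matthews_corrcoef(y_true, y_pred),  # often more informative on imbalanced data
    }
    if y_prob is not None:
        metrics["roc_auc"] = roc_auc_score(y_true, y_prob)
    return metrics

print(score_model([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))
```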
$\{(a,b,c)\in N^{3}\mid\{a,b,c\}\neq N\}$;
By Theorem 2.1, each surjective mapping $t^{\prime}$ in
$R(\overline{0},(v,[02]),(v,[12]))$;
Surjective homomorphisms appear naturally in the linear-algebraic theory of homomorphism-related combinatorial quantities that was pioneered by Lovász [25, 26, 12, 6].
on the variables in $V$ that appear in the $\alpha_{i}$;
C
Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020].
Many techniques have been employed to address the issue of data scarcity, including self-supervised pre-training [Achiam et al., 2023, OpenAI, 2022], transfer learning [Gero et al., 2018, Kumar et al., 2022], and meta-learning [Madotto et al., 2019, Song et al., 2020, Zhao et al., 2022]. Compared to other approaches, meta-learning focuses on designing models that learn to learn from small data sets, reducing the dependency on extensive pre-training data [Finn et al., 2017, Vilalta and Drissi, 2002]. Therefore, meta-learning has been widely applied in low-resource NLP tasks.
Other works use MAML for multi-domain and low-resource language generation, such as few-shot dialogue system [Mi et al., 2019, Madotto et al., 2019, Qian and Yu, 2019, Song et al., 2020] and low-resource machine translation [Gu et al., 2018].
Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020].
When applying MAML to NLP, several factors can influence the training strategy and performance of the model. Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et al., 2018, Balaji et al., 2018]. For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles. This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML. Few works have thoroughly studied these impact factors.
B
For the CCA-enabled UAV mmWave network, the array size is usually large and the corresponding inter-element distance $\Delta\phi$ is small. Therefore, it is assumed that $\Delta\alpha$ and $\Delta\beta$ satisfy $\Delta\phi_{\text{c}}\leq\Delta\alpha$ and $\Delta\beta=\pi$ to ensure that the DRE-covered CCA covers the full angular domain.
For the LOS channel, the AOAs and AODs in (5) are mainly determined by the position and attitude of the t-UAVs and r-UAV.
Given the maximum resolution of the codebook, we continue to discuss the characteristic of the multi-resolution and the beamwidth with the CCA codebook. For the multi-resolution codebook, the variable resolution is tuned by the beamwidth, which is determined by the number of the activated elements [12]. Note that the beam coverage and the corresponding beamwidth are determined by both the element radiation pattern and array radiation pattern for the DRE-covered CCA. In particular, the beam coverage in the azimuth (elevation) plane of the activated $M_{\text{act}}\times N_{\text{act}}$ subarray is
The analog precoding architecture adopted for DRE-covered CCA is shown in Fig. 2 [13], which tunes the partially-connected precoding architecture by adapting the connection between the RF chains and the antenna elements to the channel variation and forming dynamic subarrays. For a fixed time slot, the precoding architecture is the conventional partially-connected precoding. For different time slots, the connection changes mainly depend on the variations of the AOA/AOD caused by the movement of UAVs.
For both static and mobile mmWave networks, codebook design is of vital importance to empower the feasible beam tracking and drive the mmWave antenna array for reliable communications [22, 23]. Recently, ULA/UPA-oriented codebook designs have been proposed for mmWave networks, which include the codebook-based beam tracking and channel estimation methods. For example, considering the ULA with omnidirectional radiating elements (REs), the hierarchical-codebook-based subarray and antenna deactivating strategies are proposed to achieve efficient beam training for single-user scenarios [12, 24]. The multiuser downlink beam training algorithms regarding the ULA are proposed with the multi-resolution codebook designs for partially-connected [25] and fully-connected [15] hybrid structures, respectively. However, extending the aforementioned works to the CA is not straightforward. The reasons are as follows: When the commonly-adopted DRE is integrated with CA, the limited radiation range of DREs is no longer the same and each is affected by the DRE’s location on CA, as the DRE-covered array plane is rolled up. The determined radiation direction of CA is only within a part of DREs’ radiation range. This observation indicates that only a part of the DREs or some specific subarrays need to be activated with reference to the AOA or angle of departure (AOD) of transceivers.
C
After the merging, we obtain an “almost” $A|B$-biregular graph with size $\bar{M}|\bar{N}$.
As in the example, almost is because it is possible that there are parallel edges between two vertices in $G$
Again the “almost” is because it is possible that there are parallel edges between two vertices in $G$.
the edges are between the vertices in $A_{\pi}$
whereas $R_{2}$ contains only edges between vertices in $U_{1}$ and vertices in $V_{2}$.
A
In this paper, we study temporal-difference (TD) (Sutton, 1988) and Q-learning (Watkins and Dayan, 1992), two of the most prominent algorithms in deep reinforcement learning, which are further connected to policy gradient (Williams, 1992) through its equivalence to soft Q-learning (O’Donoghue et al., 2016; Schulman et al., 2017; Nachum et al., 2017; Haarnoja et al., 2017). In particular, we aim to characterize how an overparameterized two-layer neural network and its induced feature representation evolve in TD and Q-learning, especially their rate of convergence and global optimality. A fundamental obstacle, however, is that such an evolving feature representation possibly leads to the divergence of TD and Q-learning. For example, TD converges when the value function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997).
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. (2014) for a detailed survey. Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and
corresponding to $\theta^{(m)}(k)=(\theta_{1}(k),\ldots,\theta_{m}(k))\in\mathbb{R}^{D\times m}$. Such a feature representation is used to analyze the TD dynamics $\theta^{(m)}(k)$ in (3.3) in the NTK regime (Cai et al., 2019), which corresponds to setting $\alpha=\sqrt{m}$ in (3.1). Meanwhile, the nonlinear gradient TD dynamics (Bhatnagar et al., 2009) explicitly uses such a feature representation at each iteration to locally linearize the Q-function. Moreover, up to a rescaling, such a feature representation corresponds to the kernel
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature representation is able to deviate from the initial one and subsequently evolve into the globally optimal one, which corresponds to the global minimizer of the MSPBE. We further extend our analysis to soft Q-learning, which is connected to policy gradient.
To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear whether the attained solution is globally optimal. On the other hand, when the value function approximator in TD is an overparameterized multi-layer neural network, which is required to be properly scaled, such a feature representation stabilizes at the initial one (Cai et al., 2019), making the explicit local linearization in nonlinear gradient TD unnecessary. Moreover, the implicit local linearization enabled by overparameterization allows TD (and Q-learning) to converge to the globally optimal solution. However, such a required scaling, also known as the neural tangent kernel (NTK) regime (Jacot et al., 2018), effectively constrains the evolution of the induced feature presentation to an infinitesimal neighborhood of the initial one, which is not data-dependent.
D
We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit which also takes the cell and the output of the previous layer to compute the output of the current decoder layer and the cell state of the LSTM. We examine both element-wise addition and concatenation as merging operation.
We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
Another way to take care of the outputs of these two sub-layers in the decoder layer is to replace their residual connections with two depth-wise LSTM sub-layers, as shown in Figure 3 (b). This leads to better performance (as shown in Table 4), but at the costs of more parameters and decoder depth in terms of sub-layers.
Table 4 shows that, even though this is counter-intuitive, element-wise addition (with fewer parameters) empirically results in slightly higher BLEU than the concatenation operation. Furthermore, even though using 2 depth-wise LSTM sub-layers connecting cross- and masked self-attention sub-layers leads to the highest BLEU score, showing the advantage of fully replacing residual connections with depth-wise LSTMs, it also introduces more parameters and increases the decoder depth in terms of sub-layers. For fair comparison, we use the simpler element-wise addition operation in our experiments by default.
D
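The depth-wise LSTM variant discussed in the rows above merges the masked self-attention and cross-attention outputs (element-wise addition worked best) and feeds the result, together with the previous layer's hidden and cell states, into an LSTM cell that replaces the residual connection. The PyTorch sketch below illustrates only this merging step, with simplified shapes and hypothetical state handling.

```python
import torch
import torch.nn as nn

class DepthWiseLSTMMerge(nn.Module):
    """Merge two sub-layer outputs by addition and pass them through a depth-wise LSTM cell."""
    def __init__(self, d_model: int):
        super().__init__()
        self.cell = nn.LSTMCell(d_model, d_model)

    def forward(self, self_attn_out, cross_attn_out, prev_h, prev_c):
        merged = self_attn_out + cross_attn_out          # element-wise addition of the two sub-layers
        b, s, d = merged.shape
        h, c = self.cell(merged.reshape(b * s, d),
                         (prev_h.reshape(b * s, d), prev_c.reshape(b * s, d)))
        return h.reshape(b, s, d), c.reshape(b, s, d)    # passed on to the next decoder layer

# prev_h / prev_c would come from the depth-wise LSTM of the previous decoder layer.
layer = DepthWiseLSTMMerge(d_model=8)
x = torch.randn(2, 5, 8)
h, c = layer(x, x, torch.zeros_like(x), torch.zeros_like(x))
```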
$(\uptau_{\subseteq_{i}},\mathsf{EFO}[\upsigma_{\mathcal{G}}])$-preservation property
$\left\langle\downarrow\mathcal{C},\uptau_{\subseteq_{i}},\mathsf{FO}[\upsigma_{\mathcal{G}}]\right\rangle$ the space of all the finite
from Example 5.5. The subset $\mathcal{C}\subseteq\mathcal{D}_{\leq 2}$
Consider the class $\mathcal{C}\subseteq\mathcal{G}$ of all finite simple
in Example 5.7 is $\mathcal{D}_{\leq 2}\subseteq\mathcal{G}$ the set of finite simple graphs of degree
C
In the training stage, we crop each distorted image into four distortion elements and learn the parameters of the neural network using all data. Note that this training process is data-independent, where each part of the entire image is fed into the network one by one without the data correlation. In the test stage, we only need one distortion element, i.e., 1/4 of an image, to estimate the ordinal distortion. For a clear exhibition of our approach, we draw the detailed algorithm schemes of the training process and test process as listed in Algorithm 1 and Algorithm 2, respectively.
9: Compute the distortion coefficients $\hat{\mathcal{K}}$ using the $\hat{\mathcal{D}}$ based on Eq. 18
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, including the highest metrics on PSNR and SSIM, as well as the lowest metric on MDLD. Specifically, compared with the traditional methods [23, 24] based on the hand-crafted features, our approach overcomes the scene limitation and simple camera model assumption, showing more promising generality and flexibility. Compared with the learning distortion rectification methods [8][11][12], which omit the prior knowledge of the distortion, our approach transfers the heterogeneous estimation problem into a homogeneous one, eliminating the implicit relationship between image features and predicted values in a more explicit expression. As benefits of the effective ordinal supervision and guidance of distortion information during the learning process, our approach outperforms Liao [12] by a significant margin, with approximately 23% improvement on PSNR and 17% improvement on SSIM. Besides the high quality of the rectified image, our approach can obtain the accurate distortion parameters of a distorted image, which is crucial for the subsequent tasks such as the camera calibration. However, the generation-based methods [11][12] mainly focus on the pixel reconstruction of a rectified image and ignore the parameter estimation.
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct the distortion rectification on the test dataset including 2,000 distorted images. For the PSNR and SSIM, we compute these two metrics using the pixel difference between each rectified image and the ground truth image. For the MDLD, we first exploit the estimated distortion parameters to obtain all distortion levels of the test distorted image based on Eq. 5. Then, the value of MDLD can be calculated by the difference between estimated distortion levels and the ground truth distortion levels based on Eq. 21. Note that the generated-based methods such as Li [11] and Liao [12] directly learn the transformation manner of the pixel mapping instead of estimating the distortion parameters, so we only evaluate these two methods in terms of the PSNR and SSIM.
Evaluation Metrics: Crucially, evaluating the performance of different methods with reasonable metrics benefits experimental comparisons. In the distortion rectification problem, the corrected image can be evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the evaluation of the estimated distortion label, it is straightforward to employ the root mean square error (RMSE) between the estimated coefficients $\hat{\mathcal{K}}$ and ground truth coefficients $\mathcal{K}$:
D
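The evaluation described in the rows above uses PSNR and SSIM for the rectified images and an RMSE over the estimated distortion coefficients (MDLD is computed analogously from distortion levels per the paper's Eq. 21). A small sketch with NumPy and scikit-image is given below; the example coefficients are made up for illustration.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def rmse(k_est: np.ndarray, k_gt: np.ndarray) -> float:
    """Root mean square error between estimated and ground-truth distortion coefficients."""
    return float(np.sqrt(np.mean((k_est - k_gt) ** 2)))

def image_metrics(rectified: np.ndarray, ground_truth: np.ndarray):
    """PSNR / SSIM between a rectified image and its ground truth (H x W x 3 uint8 arrays)."""
    psnr = peak_signal_noise_ratio(ground_truth, rectified)
    # older scikit-image versions use multichannel=True instead of channel_axis
    ssim = structural_similarity(ground_truth, rectified, channel_axis=-1)
    return psnr, ssim

k_est, k_gt = np.array([0.10, -0.020]), np.array([0.12, -0.025])
print(rmse(k_est, k_gt))
```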
Hence, the computation complexity for achieving an $\epsilon$-stationary point is $\mathcal{O}(1/\epsilon^{4})$.
In this section, we prove the convergence rate of SNGM for both smooth and relaxed smooth objective functions. First, we introduce the auxiliary variable as follows:
Table 1: Comparison between MSGD and SNGM for an $L$-smooth objective function. $\mathcal{C}$ denotes the computation complexity (total number of gradient computations).
Theorem 5.5 and Corollary 5.6 extend the convergence analyses in Theorem 5.2 and Corollary 5.3 for a smooth objective function to a relaxed smooth objective function, which is a more general scenario.
Recently, the authors in [38] observed the relaxed smooth property in deep neural networks. According to Definition 2.2, the relaxed smooth property is more general than the $L$-smooth property.
C
$\mathcal{F}$ and $\mathcal{C}$ correspond to such locations and the population affected by the outbreak, and needing services, respectively.
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and is of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unknown but we have the ability to sample or simulate from the distribution. To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic $k$-center variant, where points arrive independently and each point only needs to get covered with some given probability. 2S-Sup is the natural two-stage counterpart of the well-known Knapsack-Supplier problem, which has a well-known $3$-approximation [14].
To continue this example, there may be further constraints on $F_{I}$, irrespective of the stage-II decisions, which cannot be directly reduced to the budget $B$. For instance, there may be a limited number of personnel available prior to the disease outbreak, assuming that facility $i$ requires $f_{i}$ people to keep it operational during the waiting period. (These additional stage-I constraints have not been previously considered in the two-stage stochastic regime.)
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
Health departments prepared some vaccination and testing sites in advance, based on projected demands [5], i.e., in stage-I, which may have multiple benefits; for example, the necessary equipment and materials might be cheaper and easier to obtain.
D