Column stats: context, A, B, C and D are strings (lengths 250–7.19k, 250–4.62k, 250–4.85k, 250–4.12k and 250–8.2k characters respectively); label takes one of 4 classes.

| context | A | B | C | D | label |
|---|---|---|---|---|---|
$(a)_{0}\equiv 1;\quad(a)_{n}=a(a+1)(a+2)\cdots(a+n-1)=\Gamma(a+n)/\Gamma(a);\quad n\geq 0.$ ... | $x^{2}(x^{2}-1)\frac{d^{2}}{dx^{2}}R_{n}^{m}(x)=\left[nx^{2}(n+D)-m(D-2+m)\right]R_{n}^{m}(x)+x\left[D-1-(D+1)x^{2}\right]\frac{d}{dx}R_{n}^{m}(x).$ |
$x^{3}(x^{2}-1)^{2}\frac{d^{3}}{dx^{3}}R_{n}^{m}(x)=\left\{-n(3+D)(n+D)x^{4}+\left[(n+m)D^{2}+(n^{2}+m^{2}-n+3m)D-10m+5m^{2}-n^{2}\right]x^{2}-m(D+1)(D-2+m)\right\}R_{n}^{m}(x)+x\left\{\left[D^{2}+(3+n)D+n^{2}+2\right]x^{4}+\left[-2D^{2}-(2+n+m)D+6+2m-n^{2}-m^{2}\right]x^{2}+D^{2}+D(m-1)-2m+m^{2}\right\}\frac{d}{dx}R_{n}^{m}(x).$ | $R_{n}^{m}(x)=(-1)^{(n-m)/2}\binom{\frac{D+m+n}{2}-1}{\frac{n-m}{2}}x^{m}\,F\!\left(\begin{array}{c}-(n-m)/2,\;(D+n+m)/2\\ m+D/2\end{array}\,\Big\vert\;x^{2}\right),$ ... |
$\frac{{R_{n}^{m}}''(x)}{{R_{n}^{m}}'(x)}=\frac{1}{x^{2}-1}\left[\left(n(n+D)-\frac{m(D-2+m)}{x^{2}}\right)\frac{R_{n}^{m}(x)}{{R_{n}^{m}}'(x)}+\frac{D-1-(D+1)x^{2}}{x}\right].$ | C |
$x=\left(\begin{array}{cc}x_{0}&0\\ 0&I_{d-4}\end{array}\right)$ for $d$ even or $x=I_{d}$ for $d$ odd. ... | Note that a small variation of these standard generators for $\textnormal{SL}(d,q)$ is used in Magma [14] as well
as in algorithms to verify presentations of classical groups, see [12], where only the generator $v$ is slightly different in the two scenarios when $d$ ... | Finally, we construct a second MSLP, described in Section 3.5, that writes a diagonal matrix $h\in\textnormal{SL}(d,q)$ as a word in the standard generators of $\textnormal{SL}(d,q)$ (when evaluated with these generators as input) ... | The lower-unitriangular matrices $u_{1}$ and $u_{2}$ are returned as words in the Leedham-Green–O’Brien standard generators [11] for $\textnormal{SL}(d,q)$ define... | Our aim is to determine the length and memory quota for an MSLP for the Bruhat decomposition of an arbitrary matrix $g\in\textnormal{SL}(d,q)$ via the above method, with the matrices $u_{1}$, $u_{2}$ ... | A |
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85... | The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for the problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic dis... | Of course, the numerical scheme and the estimates developed in Section 3.1 hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case our method is similar to that of [MR3591945], with some differences. First, we consider that T... |
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems, but, based on decay properties, only local comput... | mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro elements remov... | C |
We think Alg-A is better in almost every aspect. This is because it is essentially simpler.
Among other merits, Alg-A is much faster, because it has a smaller constant behind the asymptotic complexity $O(n)$ than the others: |
Our experiment shows that the running time of Alg-A is roughly one eighth of the running time of Alg-K, or one tenth of the running time of Alg-CM. (Moreover, the number of iterations required by Alg-CM and Alg-K is roughly 4.67 times that of Alg-A.) |
Alg-A has simpler primitives because (1) the candidate triangles considered in it have all corners lying on $P$’s vertices and (2) searching for the next candidate from a given one is much easier – the ratio of code lengths for this part is 1:7 between Alg-A and Alg-CM. | Alg-A computes at most $n$ candidate triangles (the proof is trivial and omitted), whereas Alg-CM computes at most $5n$ triangles (proved in [8]), and so does Alg-K.
(By experiment, Alg-CM and Alg-K have to compute roughly $4.66n$ candidate triangles.) | Comparing the description of the main part of Alg-A (the 7 lines in Algorithm 1) with that of Alg-CM (pages 9–10 of [8]),
Alg-A is conceptually simpler. Alg-CM is described as “involved” by its own authors, as it contains complicated subroutines for handling many subcases. | C |
In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. non-rumor news classification.
| Early in an event, the volume of related tweets is scanty and there is no clear propagation pattern yet. For the credibility model, we therefore leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender contex... |
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesired at critical moments. We trade this off by debunking at the single-tweet le... | at an early stage. Our fully automatic, cascading rumor detection method follows
the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, tha... | CrowdWisdom: Similar to [18], the core idea is to leverage the public’s common sense for rumor detection: if more people deny or doubt the truth of an event, this event is more likely to be a rumor. For this purpose, [18] use an extensive list of bipolar sentiments with a set of combinational rules. In... | A |
In a follow-up work, Nacson et al. (2018) provided partial answers to these questions. They proved that the exponential tail has the optimal convergence rate among tails for which $\ell^{\prime}(u)$ is of the form $\exp(-u^{\nu})$... | The follow-up paper (Gunasekar et al., 2018) studied this same problem with the exponential loss instead of the squared loss. Under additional assumptions on the asymptotic convergence of update directions and gradient directions, they were able to relate the direction of gradient descent iterates on the factorized parameteriz... | decreasing loss, as well as for multi-class classification with cross-entropy loss. Notably, even though the logistic loss and the exp-loss behave very differently on non-separable problems, they exhibit the same behaviour on separable problems. This implies that the non-tail
part does not affect the bias. The bias is a... | The convergence of the direction of gradient descent updates to the maximum-$L_{2}$-margin solution, however, is very slow compared to the convergence of the training loss, which explains why it is worthwhile
continuing to optimize long after we have zero training ... | Perhaps most similar to our study is the line of work on understanding AdaBoost in terms of its implicit bias toward large $L_{1}$-margin solutions, starting with the seminal work of Schapire et al. (1998). Since AdaBoost can be viewed as coordinate descent on th... | A |
The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline,
we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi... | The processing pipeline of our classification approach is shown in Figure 1. In the first step, relevant tweets for an event are gathered. Subsequently, in the upper part of the pipeline,
we predict tweet credibility with our pre-trained credibility model and aggregate the prediction probabilities on single tweets (Credi... |
As observed in (madetecting; ma2015detect), rumor features are very prone to change during an event’s development. In order to capture these temporal variabilities, we build upon the Dynamic Series-Time Structure (DSTS) model (time series for short) for feature-vector representation proposed in (ma2015detect). W... | The effective cascaded model that engages both low- and high-level features for rumor classification is proposed in our other work (DBLP:journals/corr/abs-1709-04402). The model uses the time-series structure of features to capture their temporal dynamics. In this paper, we make the following contributions with respect to... | In the lower part of the pipeline, we extract features from tweets and combine them with the creditscore to construct the feature vector in a time series structure called Dynamic Series Time Model. These feature vectors are used to train the classifier for rumor vs. non-rumor news classification.
| D |
Results. The baseline and the best results of our 1st-stage event-type classification are shown in Table 3 (top). The accuracy of the basic majority vote is high for imbalanced classes, yet it is lower in weighted F1. Our learned model achie... | We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the event times previously mentioned. We compare the result of the cascaded model with non-cascaded logistic regression. The res... | Multi-Criteria Learning. Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming an independent loss function that does not consider the correlation and overlap between models. We adapted the L2R RankSVM [12]. The goal of RankSVM is learning a linear... | RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall... | For this part, we first focus on evaluating the performance of single L2R models that are learned from the pre-selected time (before, during and after) and type (Breaking and Anticipate) sets of entity-bearing queries. This allows us to evaluate the feature performance, i.e., salience and timeliness, with time and type ... | A |
$R_{T}=\mathbb{E}\left\{\sum_{t=1}^{T}Y_{t,a^{*}_{t}}-Y_{t,A_{t}}\right\}\,,$ ... | the combination of Bayesian neural networks with approximate inference has also been investigated.
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; ... | Thompson sampling (TS) [Thompson, 1935] is an alternative MAB policy that has been popularized in practice, and studied theoretically by many.
TS is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal [Russo et al., 2018]. | one uses $p(\theta_{t}\mid\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal,
i.e., $\pi(A\mid x_{t+1},\mathcal{H}_{1:t})=\mathbb{P}(A=a^{*}_{t+1}\mid x_{t+1},\theta_{t},$ ... | RL [Sutton and Barto, 1998] has been successfully applied to a variety of domains,
from Monte Carlo tree search [Bai et al., 2013] to hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023], | B |
The data collection study was conducted from the end of February to the beginning of April 2017 by Emperra and includes 10 patients who were given specially prepared smartphones. Measurements of carbohydrate consumption, blood glucose levels, and insulin intake were made with Emperra’s Esysta system. Measurements of physical ac... |
Table 1 shows basic patient information. Half of the patients are female, and ages range from 17 to 66, with a mean age of 41.8 years. Body weight, according to BMI, is normal for half of the patients; four are overweight and one is obese. The mean BMI value is 26.9. Only one of the patients suffers from diabetes type ... | These are also the patients who log glucose most often, 5 to 7 times per day on average compared to 2–4 times for the other patients.
For patients with 3–4 measurements per day (patients 8, 10, 11, 14, and 17) at least a part of the glucose measurements after the meals is within this range, while patient 12 has only t... | Table 2 gives an overview of the number of different measurements that are available for each patient (for patient 9, no data is available).
The study duration varies among the patients, ranging from 18 days, for patient 8, to 33 days, for patient 14. | The insulin intakes tend to occur more in the evening, when basal insulin is used by most of the patients. The only exceptions are patients 10 and 12, whose intakes are earlier in the day.
Further, patient 12 takes approx. 3 times the average insulin dose of the others in the morning. | A |
Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture S... |
Figure 2: An illustration of the modules that constitute our encoder-decoder architecture. The VGG16 backbone was modified to account for the requirements of dense prediction tasks by omitting feature downsampling in the last two max-pooling layers. Multi-level activations were then forwarded to the ASPP module, which... | To quantify the contribution of multi-scale contextual information to the overall performance, we conducted a model ablation analysis. A baseline architecture without the ASPP module was constructed by replacing the five parallel convolutional layers with a single $3\times 3$ convolutional operation that result... | Image-to-image learning problems require the preservation of spatial features throughout the whole processing stream. As a consequence, our network does not include any fully-connected layers and reduces the number of downsampling operations inherent to classification models. We adapted the popular VGG16 architecture S... | To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al. (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architect... | A |
$\mathtt{Einze}\overline{\mathtt{l}}\mathtt{e}\overline{\mathtt{l}}\mathtt{e}\overline{\mathtt{m}}\mathtt{ent}$ |
For the sake of convenience, let $\ell=2k$ for some $k\geq 1$. Let $\sigma$ be any block-extending marking sequence for $\alpha$. If $\sigma$ marks $y$ first, then we have $2k$ marked blocks, and if some $x_{i}$ ... | In the following, we investigate another aspect of greedy strategies. Any symbol that is marked next in a marking sequence can have isolated occurrences (i. e., occurrences that are not adjacent to any marked block) and block-extending occurrences (i. e., occurrences with at least one adjacent marked symbol). Each isol... |
The important property of the word $\alpha_{e}$ is that for every edge $\{x,y\}$ of $H$ (except $e$), it contains two distinct size-$2$ factors that are $xy$- or $yx$-... | For this example marking sequence, it is worth noting that marking the many occurrences of $e$ joins several individual marked blocks into one marked block. This also intuitively explains the correspondence between the locality number and the maximum number of occurrences per symbol (in condensed words): if th... | D |
Lee et al. [250] conclude that international cooperation is required for constructing a high-quality multimodal big dataset for stroke imaging.
Another solution to better exploit big medical data in cardiology is to apply unsupervised learning methods, which do not require annotations. | According to the literature, RNNs are widely used on structured cardiology data because they are better than other deep/machine learning methods at finding optimal temporal features.
On the other hand, applications in this area are relatively few, mainly because there is a small number of public data... |
Regarding the problem of lack of interpretability, as indicated by Hinton [260], it is generally infeasible to interpret nonlinear features of deep networks because their meaning depends on complex interactions with uninterpreted features from other layers. | Besides solving the data and interpretability problems, researchers in cardiology could utilize already established deep learning architectures that have not been widely applied in cardiology, such as capsule networks.
Capsule networks [265] are deep neural networks that require less training data than CNNs and its l... | One reason that traditional machine learning has worked sufficiently well in this area in previous years is the use of handcrafted and carefully designed features by experts, such as statistical measures from the ECG beats and the RR interval [74].
Deep learning can improve results when the annotations are no... | B |
We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game-dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3},10^{-5}]$... | Our predictive model has stochastic latent variables so it can be applied in highly stochastic environments. Studying such environments is an exciting direction for future work, as is the study of other ways in which the predictive neural network model could be used. Our approach uses the model as a learned simulator a... | Figure 2: Architecture of the proposed stochastic model with discrete latent. The input to the model is four stacked frames (as well as the action selected by the agent) while the output is the next predicted frame and expected reward. Input pixels and action are embedded using fully connected layers, and there is per-... | A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher-quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. ... | As visualized in Figure 2, the proposed stochastic model with discrete latent variables discretizes the latent values into bits (zeros and ones) while training an auxiliary LSTM-based (Hochreiter & Schmidhuber, 1997) recurrent network to predict these bits autoregressively. At inference time, the latent bits will be gen... | D |
As shown in Table I, the one-layer CNN DenseNet201 achieved the best accuracy of 85.3% with a training time of 70 seconds/epoch on average.
Overall, the one-layer CNN S2I achieved the best accuracies for eleven out of fifteen ‘base models’. | However, more work needs to be done before fully replacing non-trainable S2Is, not only in the scope of achieving higher accuracy results but also of increasing the interpretability of the model.
Another point of reference is that the combined models were trained from scratch, based on the hypothesis that pretrained low-level... | For the purposes of this paper, and for easier future reference, we define the term Signal2Image module (S2I) as any module placed after the raw signal input and before a ‘base model’, which is usually an established architecture for imaging problems.
An important property of an S2I is whether it consists of trainable para... | For the spectrogram module, which is used for visualizing the change of the frequency of a non-stationary signal over time [18], we used a Tukey window with a shape parameter of 0.25, a segment length of 8 samples, an overlap between segments of 4 samples and a fast Fourier transform of 64 sampl... | The spectrogram S2I results are contrary to the expectation that the interpretable time-frequency representation would help in finding good features for classification.
We hypothesize that the spectrogram S2I was hindered by its lack of trainable parameters. | D |
The track tip positioning was the key parameter controlled during the creation of these climbing gaits. To ensure seamless locomotion, trajectories for each joint of the robot were defined through a fifth-order polynomial, along with their first and second derivatives. The trajectory design took into account six constra... | Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of the ... |
Figure 10: The Cricket robot tackles a step of height h using rolling locomotion mode, negating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas... | The evaluation of energy consumption for the walking locomotion mode encompassed the entire step negotiation process, from the commencement of the negotiation until its completion. Fig. 8 reveals minimal discrepancies in energy consumption for the whole-body climbing gait, which can be attributed to the thoughtful desi... |
The whole-body climbing gait involves utilizing the entire body movement of the robot, swaying forwards and backwards to enlarge the stability margins before initiating gradual leg movement to overcome a step. This technique optimizes stability during the climbing process. To complement this, the rear-body climbing ga... | D |
Mtf2 is called Move-To-Front-Even (Mtfe), and if all bits are 1 at the beginning, Mtf2 is
called Move-To-Front-Odd (Mtfo). Both the Mtfe and Mtfo algorithms have a competitive ratio of 5/2 [11]. In [11] it is shown that, for any request sequence, at least one of Timestamp, Mtfo, and Mtfe has a competitive... | For a given request sequence, the best option among the three algorithms can be indicated with two bits of advice, giving a 5/3-competitive algorithm. However, if the advice is untrusted, the competitive ratio can be as bad as 5/2.
| Mtf2 is called Move-To-Front-Even (Mtfe), and if all bits are 1 at the beginning, Mtf2 is
called Move-To-Front-Odd (Mtfo). Both the Mtfe and Mtfo algorithms have a competitive ratio of 5/2 [11]. In [11] it is shown that, for any request sequence, at least one of Timestamp, Mtfo, and Mtfe has a competitive... |
If the advice indicates Timestamp as the best algorithm, the algorithm trusts it and the competitive ratio will be at most 2 [1]. If the advice indicates Mtfe or Mtfo as the best algorithm, the Tog algorithm uses the phasing scheme described above by alternating between the indicated algorithm and Move-To-Front, and b... | If the advice indicates Timestamp as the best algorithm among Mtfe, Mtfo, and Timestamp, the algorithm uses Timestamp to serve the entire sequence, and since the advice is right, the competitive ratio will be at most 5/3 [11]. If the advice indicates Mtfe or Mtfo as the best algorithm, the Tog algorithm uses the phasin... | A |
On the other hand, graphs of accumulated confidence values over time (chunk-by-chunk or writing-by-writing) shown in Figures 6, 7 and 8 are intended to show how lexical evidence (learned from the training data and given by $gv$) is accumulated over time, for each class, and how it is used to decid... | However, EDD poses really challenging aspects to the “standard” machine learning field.
As with any other ERD task, we can identify at least three of these key aspects: incremental classification of sequential data, support for early classification and explainability (i.e., having the ability to explain its ration...) | At this point, it should be clear that any attempt to address ERD problems, in a realistic fashion, should take into account 3 key requirements: incremental classification, support for early classification, and explainability.
Unfortunately, to the best of our knowledge, there is no text classifier able to support thes... | In this context, this work introduces a machine learning framework, based on a novel white-box text classifier, for developing intelligent systems to deal with early risk detection (ERD) problems. In order to evaluate and analyze our classifier’s performance, we will focus on a relevant ERD task: early depression detec... | In this article, we proposed SS3, a novel text classifier that can be used as a framework to build systems for early risk detection (ERD).
SS3’s design aims at dealing, in an integrated manner, with three key challenging aspects of ERD: incremental classification of sequential data, support for early classification... | D |
Since $\mathcal{C}({\bf e}_{t+\frac{1}{2},k})$ is sparse, ${\bf w}_{t+1}-{\bf w}_{t}$ ... |
In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in distributed learning. To the best of our knowledge, this is the first work that introduces global momentum for sparse communication in DMSGD. Furthermore, to enhance the convergence performance when using mo... | The error feedback technique keeps the compression error in an error residual on each worker and incorporates the error residual into the next update.
Error-feedback-based sparse communication methods have been widely adopted by recent communication compression methods and achieve better performance than quantizatio... | Recently, the parameter server (Li et al., 2014) has become one of the most popular distributed frameworks in machine learning. GMC can also be implemented on the parameter server framework.
In this paper, we adopt the parameter server framework for illustration. The theories in this paper can also be adapted to the all-red... | There are other ways to combine momentum and error feedback. For example, we can put the momentum term on the server. However, these lead to worse performance than the approach adopted in this paper. More discussions can be found in Appendix A.
| D |
$\varphi$ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one.
The cost of the description of the data could be seen as proportional to the number of weights and the number o... | $\varphi$ could be seen as an alternative formalization of Occam’s razor [38] to Solomonoff’s theory of inductive inference [39], but with a deterministic interpretation instead of a probabilistic one.
The cost of the description of the data could be seen as proportional to the number of weights and the number o... | SANs combined with the $\varphi$ metric compress the description of the data in the way a minimum-description-length framework would, by encoding them into $\bm{w}^{(i)}$ and $\bm{\alpha}^{(i)}$... | The $\varphi$ metric is also related to rate-distortion theory [40], in which the maximum distortion is defined according to human perception, which however inevitably introduces a bias.
There is also a relation with the field of Compressed Sensing [41], in which the sparsity of the data is exploited, allowing... | It is interesting to note that in some cases SANs reconstructions, such as for the Extrema-Pool indices, performed even better than the original data.
This suggests the overwhelming presence of redundant information residing in the raw pixels of the original data and further indicates that SANs extract the most rep... | C |
The SPBLLA process frees UAVs from message exchange. Therefore, no energy or time is wasted between two iterations, which significantly improves learning efficiency. All UAVs alter strategies with a certain probability $\omega$, which is determined by $\tau$ and $m$... |
The learning rate of the extant algorithm is also not desirable [13]. Recently, a fast algorithm called the binary log-linear learning algorithm (BLLA) was proposed by [14]. However, in this algorithm, only one UAV is allowed to change strategy in one iteration based on the current game state, and then another UAV ch... | (Regular Perturbed Markov Process)
Denote by $P$ the transition matrix of a Markov process with a finite state space $S$. This Markov process is called a regular perturbed Markov process with noise $\epsilon$ if the following conditions are met. | (Stochastically Stable Strategy)
Denote by $P_{\epsilon}$ the transition probability of a regular perturbed Markov process on a state space $S$, and $\mu_{\epsilon}(s)$ ... | The SPBLLA process frees UAVs from message exchange. Therefore, no energy or time is wasted between two iterations, which significantly improves learning efficiency. All UAVs alter strategies with a certain probability $\omega$, which is determined by $\tau$ and $m$... | B |
$<\overline{X}>^{e}=\frac{1}{3}\overline{\widehat{M}}*\overline{X}$ | with $\widehat{r}=(r_{1}^{e},\,r_{2}^{e},\dots,r_{N_{e}}^{e})^{T}=<\overline{r}>^{e}$ | $\Rightarrow\sum_{i=1}^{N_{n}}\left(\frac{f_{P_{i}}(t)\,s_{i}}{3r_{i}}\right)+f_{I}(t)\left(h_{I}\,\ln(r_{out}/r_{in})\right)=0$ | …$[\mathrm{W/m^{3}}]=\big(7.6\times 10^{-33}\,Z^{3}\,(\overline{T}_{e}[\mathrm{eV}]-\overline{T}_{i}$… | $f_{form}(z,t)=\big(-e^{-t/\tau_{LR}}$…$\int\frac{g_{form}(z)}{r}\,dr\;dz\big)$ | A |
Let $r$ be the relation on $\mathcal{C}_{R}$ given to the left of Figure 12.
Its abstract lattice $\mathcal{L}_{r}$ is represented to the right. | For convenience we give in Table 7 the list of all possible realities
along with the abstract tuples which will be interpreted as counter-examples to $A\rightarrow B$ or $B\rightarrow A$. | If no confusion is possible, the subscript $R$ will be omitted, i.e., we will use
$\leq,\land,\lor$ instead of $\leq_{R},\land_{R},\lor_{R}$. | First, remark that both $A\rightarrow B$ and $B\rightarrow A$ are possible.
Indeed, if we set $g=\langle b,a\rangle$ or $g=\langle a,1\rangle$, then $r\models_{g}A\rightarrow$... | The tuples $t_{1}$, $t_{4}$ represent a counter-example to $BC\rightarrow A$ for $g_{1}$... | A |
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein... | In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our e...
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a totally different approach from the other learning paradigms that have been studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Rein... | To that end, we ran Dropout-DQN and DQN on one of the classic control environments to show the effect of Dropout on variance and the quality of the learned policies. For the overestimation phenomenon, we ran Dropout-DQN and DQN on a Gridworld environment to show the effect of Dropout, because in such an environment the optim... | In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectivene... | D |
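The Dropout idea discussed in this row can be sketched in a few lines of NumPy (a minimal illustration, not the paper's implementation; the two-layer Q-network shape and dropout rate are hypothetical): inverted dropout zeroes hidden units with probability p during training and rescales the survivors by 1/(1-p), so no change is needed at evaluation time.

```python
import numpy as np

def dropout(x, p, rng, train=True):
    # Inverted dropout: zero units with probability p at train time and
    # rescale the rest, so evaluation uses the plain forward pass.
    if not train or p == 0.0:
        return x
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

def q_forward(obs, W1, b1, W2, b2, p=0.2, rng=None, train=True):
    # A toy two-layer Q-network with dropout on the hidden layer;
    # returns one Q-value per action.
    h = np.maximum(obs @ W1 + b1, 0.0)  # ReLU hidden layer
    h = dropout(h, p, rng or np.random.default_rng(), train)
    return h @ W2 + b2
```

Averaging several stochastic forward passes of such a network is one way dropout can reduce the variance of the Q-estimates.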
Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving performance comparable to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol... | Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving performance comparable to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convol...
Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman, 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, ... |
Multi-task learning (Caruana, 1997) refers to a machine learning approach where multiple tasks are learned simultaneously, and the learning efficiency and the model performance on each of the tasks are improved because of the existing commonalities across the tasks. For visual recognition tasks, it has been shown that... | Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic... | C |
When compared to other graph pooling methods, NDP performs significantly better than techniques that pre-compute the topology of the coarsened graphs, while achieving performance comparable to state-of-the-art feature-based pooling methods. | Graph Neural Networks (GNNs) are machine learning models that learn abstract representations of graph-structured data to solve a large variety of inference tasks [1, 2, 3, 4, 5].
Differently from neural networks that process vectors, images, or sequences, the graphs processed by GNNs have an arbitrary topology. | We notice that the coarsened graphs are pre-computed before training the GNN.
Therefore, the computational time of graph coarsening is much lower than that of training the GNN for several epochs, since each MP operation in the GNN has a cost $\mathcal{O}(N^{2})$... | The latter learn both the topology and the features of the coarsened graphs end-to-end via gradient descent, at the cost of a larger model complexity and higher training time.
The efficiency of NDP brings a significant advantage when GNNs are deployed in real-world scenarios subject to computational constraints, like ... | Figure 9: Example of coarsening on one graph from the Proteins dataset. In (a), the original adjacency matrix of the graph. In (b), (c), and (d) the edges of the Laplacians at coarsening level 0, 1, and 2, as obtained by the 3 different pooling methods GRACLUS, NMF, and the proposed NDP.
| C |
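The pre-computed coarsening discussed in this row can be illustrated with a generic cluster-based pooling step (a sketch, not the actual NDP, Graclus, or NMF algorithm; the function name and hard-assignment scheme are assumptions): with a hard assignment matrix $S$, the coarsened adjacency is $S^{T}AS$, which sums the edge weights between clusters.

```python
import numpy as np

def coarsen(A, clusters):
    # Generic graph coarsening: collapse all nodes sharing a cluster id.
    # A: (N, N) adjacency matrix; clusters: length-N array of cluster ids.
    k = int(clusters.max()) + 1
    S = np.zeros((A.shape[0], k))
    S[np.arange(A.shape[0]), clusters] = 1.0  # hard assignment matrix
    A_coarse = S.T @ A @ S                    # sum of inter-cluster edge weights
    np.fill_diagonal(A_coarse, 0.0)           # drop self-loops
    return A_coarse
```

Because this step depends only on the graph topology, it can be run once before training, which is the efficiency argument made in the row above.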
The analysis shows that random data samples and uniform sampling are biased toward generating data samples that are classified with high confidence.
NRFI dynamic automatically balances the number of decision trees and achieves an evenly distributed data distribution, i.e., it generates the most diverse data samples. | In the next step, the imitation learning performance of the sampling modes is evaluated. The results are shown in Table 3.
Random data generation reaches a mean accuracy of 63.80% while NRFI uniform and NRFI dynamic achieve 87.46% and 88.14%,... | Table 3:
Imitation learning performance (in accuracy [%]) of different data sampling modes on Soybean. NRFI achieves better results than random data generation. When optimizing the selection of the decision trees, the performance is improved due to more diverse sampling. | Probability distribution of the predicted confidences for different data generation settings on Soybean with 5 (top) and 50 samples per class (bottom). Generating data with different numbers of decision trees is visualized in the left column. Additionally, a comparison between random sampling (red), NRFI unifo...
NRFI uniform and NRFI dynamic sample the number of decision trees for each data point uniformly or, respectively, optimized via the automatic confidence distribution (see Section 4.1.4). The confidence distributions for both sampling modes are visualized in the second column of Figure 5. Additionally, sampling random data po... | A |
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient... | step with $\alpha\rightarrow\infty$ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy $\pi^{*}$ within $K=H$ episodes and hence equivalently induces... | Our work is based on the aforementioned line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) on the computational efficiency of policy optimization, which covers PG, NPG, TRPO, PPO, and AC. In p... | Coupled with powerful function approximators such as neural networks, policy optimization plays a key role in the tremendous empirical successes of deep reinforcement learning (Silver et al., 2016, 2017; Duan et al., 2016; OpenAI, 2019; Wang et al., 2018). In sharp contrast, the theoretical understandings of policy opt... | In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or ... | D |
Afterwards the models perform best if the weights are scaled to 2 bits and the activation bit width is further increased to 4 bits.
This supports the observation of the previous sections, showing that model accuracy is sensitive to activation quantization rather than weight quantization. | As can be seen, the various models in each regime (pruning structure or DNN architecture) show similar behavior for throughput.
The worst performing regimes are group and kernel pruning as well as the combination of fixed grouping and channel pruning. | Section 5.1 explored the impact of several network quantization approaches and structured pruning on the prediction quality.
In this section. we use the well-performing LQ-Net approach for quantization and PSP (for channel pruning) to measure the inference throughput of the quantized and pruned models separately on an ... | Liu et al. (2019b) have replicated several experiments of pruning approaches (see Section 3.2) and they observed that the typical workflow of training, pruning, and fine-tuning is often not necessary and only the discovered sparsity structure is important.
In particular, they show for several pruning approaches that ra... | In this experiment, we select pruning structures that are in line with commonly used DNN libraries for convolutions: we use channel pruning to learn the number of input and output feature maps, kernel pruning to learn the size of the convolution kernel, and group pruning to learn heterogeneous group sizes for grouped c... | A |
$2|\mathrm{FillRad}(M;\mathbb{F})-\mathrm{FillRad}(\mathbb{S}^{n})|\leq\mathrm{cost}(R_{\varepsilon})<\varepsilon.$ | $\mathrm{FillRad}(M;\mathbb{F})\geq\frac{1}{\pi\sqrt{c}}\cdot\mathrm{FillRad}(\mathbb{S}^{n})$ by [64, Proofs of Theorem 1.1 and Proposition 1.6]. Therefore, $2|\mathrm{FillRad}(M;\mathbb{F})-\mathrm{Fil}$... | By Azumaya’s theorem [10], persistence barcodes, whenever they exist, are unique: any two persistence barcodes associated to a given $V_{*}$ must agree (up to reordering). The most important existence result for persistence barcodes is Crawley-Boevey’s theorem ... | The proof strategy for Propositions 9.8 and 9.9 is to invoke Wilhelm’s result [82, Main Theorem 2] and Lemma 9.5 above. However, if $\mathrm{FillRad}(M)$ were small, one would not be able to apply Wilhelm’s theorem. To avoid that, we will invoke a result due to Liu [64].
| $\mathrm{FillRad}(\mathbb{S}^{n};\mathbb{F})=\mathrm{FillRad}(\mathbb{S}^{n})$... | C |
One initial observation is that the overall Completion Time for both groups was remarkably similar. With the exception of Tasks 1 and 5, where t-viSNE users performed faster than GEP, in general the results showed no statistically significant difference. To answer RQ1, we detected no statistically significant... | On the other hand, t-viSNE obtained consistently higher scores for Tool Supportiveness, with a higher average in all the proposed tasks. The bulk of the distributions of the supportiveness scores from the two groups overlap little, mostly near outliers (the “N/A” option was chosen three times, all in the GEP group).
Wh... |
A quick visual inspection of the two tables already hints at t-viSNE having higher scores than GEP in all components, with all cells being green-colored (as opposed to GEP’s table, which contains many red-colored cells). Indeed, the smallest score for t-viSNE was 4.75, while GEP got many scores under 4 (or even unde... | Overall Accuracy
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are q... |
Figure 9: Results of the comparative study: the top charts show completion time and tool supportiveness (as judged by participants) for all the tasks of the study, and the bottom row includes the histograms of the participants’ responses in all questions/tasks. The completion times between the two groups were very sim... | A |
A literature review and critical analysis of metaheuristics recently developed - 2024 [38]: This review focuses on algorithms with titles containing words such as ‘new’, ‘hybrid’, or ‘improved’, in response to the growing trend of nature-based approaches. After analyzing over 100 algorithms, it was found that a signif... |
Theoretical studies: using the fitness landscape for a better understanding of how a search algorithm can perform on a family of problem instances, and multidisciplinary theories to study the role of diversity and the balance of local search and global search required to tackle a certain problem efficiently...
Metaheuristic optimization algorithms: an overview - 2024 [39]: This paper focuses on studying the main components and concepts of optimization. More specifically, the overview provides the advantages (agnostic to the problem being solved, gradient independence, global search capability, the capability of dealing with... |
Metaheuristics in a nutshell - 2023 [36]: The purpose of this overview is to define the main terms related to the concept of metaheuristic. The text does not provide an extensive taxonomy, but it clearly distinguishes between two classes of metaheuristics: trajectory and population algorithms. It describes the most we... |
50 years of metaheuristics - 2024 [40]: This overview traces the last 50 years of the field, starting from the roots of the area to the latest proposals to hybridize metaheuristics with machine learning. The revision encompasses constructive (GRASP and ACO), local search (iterated local search, Tabu search, variable n... | B |
Figure 1: Framework of AdaGAE. $k_{0}$ is the initial sparsity. First, we construct a sparse graph via the generative model defined in Eq. (7). The learned graph is employed to apply the GAE designed for the weighted graphs. After training the GAE, we update ... | However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCNs to improve the performance of graph-based clustering methods.
In this paper, we propo... |
In recent years, GCNs have been studied extensively to extend neural networks to graph-type data. How to design a graph convolution operator is a key issue and has attracted much attention. Most operators can be classified into two categories: spectral methods [24] and spatial methods [25]. | (1) By extending the generative graph models to general data types, GAE is naturally employed as the basic representation learning model, and weighted graphs can be further applied to GAE as well. The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for dec... | As well as the well-known $k$-means [1, 2, 3], graph-based clustering [4, 5, 6] is also a representative kind of clustering method.
Graph-based clustering methods can capture manifold information, so that they are applicable to non-Euclidean data, which is not provided by $k$-means. Therefore,... | A |
False negatives in our measurements mean that a network that does not perform filtering of spoofed packets is not marked as such. We next list the causes of false negatives for each of our three techniques. Essentially the false negatives cannot be resolved, and therefore our measurement results of networks that enforc... |
Each IP packet contains an IP Identifier (IPID) field, which allows the recipient to identify fragments of the same original IP packet. The IPID field is 16 bits in IPv4, and for each packet the Operating System (OS) at the sender assigns a new IPID value. There are different IPID assignment algorithms which can be ca... |
IPID technique. Load balancing can introduce a challenge in identifying whether a given network enforces ingress filtering. As a result of load balancing our packets will be split between multiple instances of the server, hence resulting in low IPID counter values. There are different approaches for distributing the l... |
Methodology. We use services that assign globally incremental IPID values. The idea is that globally incremental IPID [RFC6864] (Touch, 2013) values leak traffic volume arriving at the service and can be measured by any Internet host. Given a server with a globally incremental IPID on the tested network, we sample the... | There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger th... | B |
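The globally incremental IPID measurement described above can be sketched in a few lines (a simplified illustration, not the authors' tool; the function names are hypothetical): probing a server's IPID before and after an interval bounds the number of packets the server sent in between, modulo the 16-bit wraparound of the IPv4 ID field.

```python
IPID_MOD = 1 << 16  # the IPID field is a 16-bit counter in IPv4

def ipid_delta(before, after):
    # Packets sent between two probes, assuming the counter
    # advanced by less than one full 16-bit wrap.
    return (after - before) % IPID_MOD

def rate_estimate(t0, ipid0, t1, ipid1):
    # Packets per second between two (timestamp, IPID) probes.
    return ipid_delta(ipid0, ipid1) / (t1 - t0)
```

Low deltas on a probed server are exactly the load-balancing confusion mentioned in the row above: traffic split across server instances shrinks each counter's advance.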
The context+skill approach used in this paper builds on the substantial body of work on training models in nonstationary environments (Section II-1). It also draws inspiration from models of context-aware odor processing in biological systems (Section II-2). | This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The ... |
Sensor drift in industrial processes is one such use case. For example, sensing gases in the environment is mostly tasked to metal oxide-based sensors, chosen for their low cost and ease of use [1, 2]. An array of sensors with variable selectivities, coupled with a pattern recognition algorithm, readily recognizes a b... |
Machine learning applications frequently deal with data-generating processes that change over time. Applications in such nonstationary environments include power use forecasting, recommendation systems, and environmental sensors [9]. Semisupervised learning, which has received a lot of attention in the sensor communit... |
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer. However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design... | C |
$A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | $A^{(1)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(1)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$ | $A^{(2)}[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B\in\mathcal{B}_{i}^{(2)}$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$.
$A[i,B]:=$ a representative set containing pairs $(M,x)$, where $M$ is a perfect matching on $B$ and $x$ is a real number equal to the minimum total length of a path cover of $P_{0}\cup\dots\cup P_{i-1}\cup B$ realizing the matching $M$. | C |
The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
idempotent or both homogeneous (with respect to the presentation given by the generating automaton), then $S\star T$ is an automaton semigroup. | For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups:
For her Bachelor thesis [19], the third author modified the construction in [3, Theorem 4] to considerably relax the hypothesis on the base semigroups: |
During the research and writing for this paper, the second author was previously affiliated with FMI, Centro de Matemática da Universidade do Porto (CMUP), which is financed by national funds through FCT – Fundação para a Ciência e Tecnologia, I.P., under the project with reference UIDB/00144/2020, and the Dipartiment... | The first author was supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through an FCT post-doctoral fellowship (SFRH/BPD/121469/2016) and the projects UID/MAT/00297/2013 (Centro de Matemática e Aplicações) and PTDC/MAT-PUR/31174/2017.
| The problem of presenting (finitely generated) free groups and semigroups in a self-similar way has a long history [15]. A self-similar presentation in this context is typically a faithful action on an infinite regular tree (with finite degree) such that, for any element and any node in the tree, the action of the elem... | B |
$P(a|\mathcal{Q},\mathcal{I})=f_{VQA}(v,\mathcal{Q}).$ | Table A4 shows VQA accuracy for each answer type on VQACPv2’s test set. HINT/SCR and our regularizer show large gains in ‘Yes/No’ questions. We hypothesize that the methods help forget linguistic priors, which improves test accuracy of such questions. In the train set of VQACPv2, the answer ‘no’ is more frequent than t...
Without additional regularization, existing VQA models such as the baseline model used in this work, UpDn (Anderson et al., 2018), tend to rely on the linguistic priors $P(a|\mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in ...
As expected of any real world dataset, VQA datasets also contain dataset biases Goyal et al. (2017). The VQA-CP dataset Agrawal et al. (2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nea... |
The VQA-CP dataset Agrawal et al. (2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have ende... | B |
A privacy policy is a legal document that an organisation uses to disclose how they collect, analyze, share, and protect users’ personal information. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users, and laws such as General Data Protection Regul... |
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert ann... | Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague word... |
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categorie... |
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application... | A |
Wang et al. [62] experimented with alternative visualization designs for selecting parameters, and they found that a parallel coordinates plot is a solid representation for this context as it is concise and not rejected by the users. A drawback is its complexity compared to multiple simpler scatterplots.
Fig... |
Figure 2: The exploration process of ML algorithms. View (a.1) summarizes the performance of all available algorithms, and (a.2) the per-class performance based on precision, recall, and f1-score for each algorithm. (b) presents a selection of parameters for KNN in order to boost the per-class performance shown in (c.... | We normalize the importance from 0 to 1 and use a two-hue color encoding from dark red to dark green to highlight the least to the most important features for our current stored stack, see Figure 4(b). The panel in Figure 4(c) uses a table heatmap view where data features are mapped to the y-axis (13 attributes, only 7... | Predictions’ Space.
The goal of the predictions’ space visualization (StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics(f)) is to show an overview of the performance of all models of the current stack for different instances. | Such iterative exploration proceeds for every algorithm until we are satisfied, see Figure 2(e) where six algorithms are selected for our initial stack \raisebox{-.0pt} {\tiny\bfS1}⃝.
Figure 2(f) shows a radar chart providing an overview of the entire space of available algorithms (yellow contour) against the current s... | D |
We thus have 3333 cases, depending on the value of the tuple
(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))𝑝𝑣delimited-[]010𝑝𝑣delimited-[]323𝑝𝑣delimited-[]313𝑝𝑣delimited-[]003(p(v,[010]),p(v,[323]),p(v,[313]),p(v,[003]))( italic_p ( italic_v , [ 010 ] ) , italic_p ( italic_v , [ 323 ] ) , italic_p ( italic_v... | {0¯,1¯,2¯,3¯,[013],[010],[323],[313],[112],[003],[113]}.¯0¯1¯2¯3delimited-[]013delimited-[]010delimited-[]323delimited-[]313delimited-[]112delimited-[]003delimited-[]113\{\overline{0},\overline{1},\overline{2},\overline{3},[013],[010],[323],[313],%
[112],[003],[113]\}.{ over¯ start_ARG 0 end_ARG , over¯ start_ARG 1 end... | By using the pairwise adjacency of (v,[112])𝑣delimited-[]112(v,[112])( italic_v , [ 112 ] ), (v,[003])𝑣delimited-[]003(v,[003])( italic_v , [ 003 ] ), and
(v,[113])𝑣delimited-[]113(v,[113])( italic_v , [ 113 ] ), we can confirm that in the 3333 cases, these | Then, by using the adjacency of (v,[013])𝑣delimited-[]013(v,[013])( italic_v , [ 013 ] ) with each of
$(v,[010])$, $(v,[323])$, and $(v,[112])$, we can confirm that | $p(v,[013])=p(v,[313])=p(v,[113])=1$.
Similarly, when $f=[112]$, | B
In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... |
To answer RQ1, we compare how the general language modeling ability and the task-specific adaptation ability change during the training of MAML, to find whether there is a trade-off (Figure 1). We select the trained parameter initialization at different MAML training epochs and evaluate them directly on the met... | In the text classification experiment, we use accuracy (Acc) to evaluate the classification performance.
In the dialogue generation experiment, we evaluate the performance of MAML in terms of quality and personality. We use PPL and BLEU [Papineni et al., 2002] to measure the similarity between the reference and the generated r... | In this paper, we take an empirical approach to systematically investigating these impacting factors and finding when MAML works best. We conduct extensive experiments over 4 datasets. We first study the effects of data quantity and distribution on the training strategy:
RQ1. Since the parameter initialization lear... | The finding suggests that parameter initialization at the late training stage has strong general language generation ability, but performs comparatively poorly in task-specific adaptation.
Although performance improves in the early training stage, benefiting from the pre-trained general language model, if the languag... | A
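The evaluation metrics named above (Acc for classification; PPL for generation quality) can be sketched in a few lines. This is a generic illustration, not the authors' evaluation code; perplexity is computed here from per-token negative log-likelihoods:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood, natural log)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

def accuracy(predictions, labels):
    """Fraction of correct predictions (the Acc metric)."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# A model assigning probability 1/4 to each of 4 reference tokens has PPL 4.
nlls = [-math.log(0.25)] * 4
print(round(perplexity(nlls), 6))  # 4.0
print(accuracy([1, 0, 1], [1, 1, 1]))
```

BLEU is omitted here since it needs n-gram clipping and brevity penalty; standard implementations exist in common NLP toolkits.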
$\ldots,\ e^{j\frac{2\pi}{\lambda_{\mathrm{c}}}\bigl(\frac{(M-1)d_{\mathrm{cyl}}}{2}\cos\alpha\sin\beta\bigr)}\bigr]^{T}$, | The CCA codebook based SPAS algorithm is proposed in the previous section to solve the joint CCA subarray partition and AWV selection problem. In this section, the TE-aware beam tracking problem is addressed based on the CCA codebook based SPAS algorithm.
Tracking the AOAs and AODs is essential for beam tracking, which... | $\mathcal{F}$ and $\mathcal{W}$ are the sets of all analog beamforming vectors and combining vectors satisfying the hardware constraints, respectively.
In fact, solving the above problem (13) requires a new codebook design and a codeword selection/processing strategy. Noting the interdependent... |
A CCA-enabled UAV mmWave network is considered in this paper. Here, we first establish the DRE-covered CCA model in Section II-A. Then the system setup of the considered UAV mmWave network is described in Section II-B. Finally, the beam tracking problem for the CCA-enabled UAV mmWave network is modeled in Section II-C. |
The rest of this paper is organized as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is further proposed in Sectio... | D
The sentences $\textsf{PRES}_{\phi}^{\infty}$ and $\textsf{PRES}_{\phi}$
are as required by Theorem 3.7. | a Type-Behavior Partitioned Graph Vector associated to a graph representation $G_{\mathcal{A}}$ for a model $\mathcal{A}$ of $\phi$.
The sentence $\textsf{PRES}_{\phi}$... | We can then consider the vector of subgraphs $G_{\mathcal{A},\pi}$ and $G_{\mathcal{A},\pi,\pi'}$... | Note that we assume that the number of behavior functions of column $j$ in $A$
is the same as the number of behavior functions of column $j'$ in $B$ for every $j\in[m]$ and ever... | Note that in a Type-Behavior Partitioned Graph Vector, information about 2-types is coded in both the edge relation and in the partition, since the partition
is defined via behavior functions. Thus there are additional dependencies on sizes for a Type-Behavior Partitioned Graph Vector of a model of $\phi$. | D
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature represe... | The key to our analysis is a mean-field perspective, which allows us to associate the evolution of a finite-dimensional parameter with its limiting counterpart over an infinite-dimensional Wasserstein space (Villani, 2003, 2008; Ambrosio et al., 2008; Ambrosio and Gigli, 2013). Specifically, by exploiting the permutati... |
at the mean-field limit with $\epsilon\rightarrow 0^{+}$ and $m\rightarrow\infty$. Such a correspondence allows us to use the PDE solution $\rho_{t}$ in (3.... | The proof of Proposition 3.1 is based on the propagation of chaos (Sznitman, 1991; Mei et al., 2018, 2019).
In contrast to Mei et al. (2018, 2019), the PDE in (3.4) cannot be cast as a gradient flow, since there does not exist a corresponding energy functional. Thus, their analysis is not directly applicable to our se... | To address such an issue of divergence, nonlinear gradient TD (Bhatnagar et al., 2009) explicitly linearizes the value function approximator locally at each iteration, that is, using its gradient with respect to the parameter as an evolving feature representation. Although nonlinear gradient TD converges, it is unclear... | A
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation; the depth-wise LSTM unit then takes the self-attention results, together with the output and the cell state of the previous layer, to compute the output and the cell state of the current layer.
|
Another way to handle the outputs of these two sub-layers in the decoder layer is to replace their residual connections with two depth-wise LSTM sub-layers, as shown in Figure 3 (b). This leads to better performance (as shown in Table 4), but at the cost of more parameters and greater decoder depth in terms of sub-laye... | Specifically, the decoder layer with depth-wise LSTM first computes the masked self-attention sub-layer and the cross-attention sub-layer as in the original decoder layer, then it merges the outputs of these two sub-layers and feeds the merged representation into the depth-wise LSTM unit, which also takes the cell and t... | We also study the merging operations, concatenation, element-wise addition, and the use of 2 depth-wise LSTM sub-layers, to combine the masked self-attention sub-layer output and the cross-attention sub-layer output in decoder layers. Results are shown in Table 4.
|
Different from encoder layers, decoder layers involve two multi-head attention sub-layers: a masked self-attention sub-layer to attend to the decoding history and a cross-attention sub-layer to attend to information from the source side. Given that the depth-wise LSTM unit only takes one input, we introduce a merging layer ... | D
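The merging options mentioned for combining the two sub-layer outputs before the single-input depth-wise LSTM unit can be sketched in plain Python. This is an illustrative sketch of the two cheaper variants (concatenation followed by a linear projection, and element-wise addition), not the paper's implementation:

```python
def merge_concat(self_attn_out, cross_attn_out, W, b):
    """Merging, option 1: concatenate the two sub-layer outputs and
    project back to model width with a linear map (W: width x 2*width)."""
    merged = list(self_attn_out) + list(cross_attn_out)
    return [sum(w * m for w, m in zip(row, merged)) + bj
            for row, bj in zip(W, b)]

def merge_add(self_attn_out, cross_attn_out):
    """Merging, option 2: element-wise addition (no extra parameters)."""
    return [a + c for a, c in zip(self_attn_out, cross_attn_out)]

# The merged vector is the single input then fed to the depth-wise LSTM unit.
print(merge_add([1.0, 2.0], [3.0, 4.0]))                # [4.0, 6.0]
print(merge_concat([1.0], [2.0], [[1.0, 1.0]], [0.0]))  # [3.0]
```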
For that, consider $\varphi\in\mathsf{EFO}[\upsigma]$
such that $B'\models\varphi$, as $\varphi\in\mathsf{EFO}[\upsigma]$... | $\varphi\in\mathsf{FO}[\upsigma]$, if $A\models\varphi$, then there
exists a finite structure $A_{\mathrm{fin}}$ such that | there exists a finite structure $A\subseteq_{i}B'$ such that
$A\models\varphi$. Because $F$ is downwards-closed, | Indeed, consider a structure $A\in\overline{\operatorname{Fin}(\upsigma)}$ and a
sentence $\varphi\in\mathsf{FO}[\upsigma]$ such that $A\models\varphi$... | finite structure $A$, there exists a diagram sentence $\psi_{A}^{\mathsf{F}}$
in $\mathsf{F}$ such that $A\leq B$ if and only if $B\models\psi_{A}^{\mathsf{F}}$. | B
Qualitative Comparison: To qualitatively show the performance of different learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and these two schemes in Fig. 8, in which each pixel value of the distortion distribution map represents the distortion level. Sinc... | We visually compare the corrected results from our approach with state-of-the-art methods using our synthetic test set and the real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scen... | Figure 12: Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified re... |
Figure 11: Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified resul... | Figure 13: Qualitative evaluations of the rectified distorted images on real-world scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alemán-Flores [23], Santana-Cedrés [24], Rong [8], Li [11], and Liao [12], and rectified results of our proposed approach, from left ... | C |
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch sizes, SNGM achieves test perplexity comparable to that of MSGD, and for large batch sizes, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different b... | First, we use the dataset CIFAR-10 and the model ResNet20 [10] to evaluate SNGM. We train the model with eight GPUs. Each GPU will compute a gradient with the batch size being $B/8$. If $B/8\geq 128$, we will use the gradient accumulation [28]
with the batch size being 128. ... |
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset, ImageNet [2], and a larger model, ResNet50 [10]. We train the model for 90 epochs. As recommended in [32], we use a warm-up and a polynomial learning rate strategy. | The momentum coefficient is set as 0.9 and the weight decay is set as 0.001. The initial learning rate is selected from $\{0.001, 0.01, 0.1\}$ according to the performance on the validation set. We do not adopt any learning rate decay or warm-up strategies.
The model is tra... | We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset (https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/).
We set aside 20% of the samples as the test set and divide the rema... | D |
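The gradient accumulation mentioned for the CIFAR-10 setup (per-GPU batch $B/8$, processed in micro-batches of 128) can be sketched as follows. `grad_fn` is a hypothetical per-chunk gradient routine; the sketch only illustrates that chunked, length-weighted averaging matches the full-batch average:

```python
def accumulate_gradients(samples, grad_fn, micro_batch=128):
    """Average 'gradient' over samples, processed micro_batch at a time.

    Mimics gradient accumulation: when the per-GPU batch exceeds the
    micro-batch size, gradients are computed chunk by chunk and averaged,
    which matches the single large-batch average.
    """
    total, count = 0.0, 0
    for i in range(0, len(samples), micro_batch):
        chunk = samples[i:i + micro_batch]
        total += grad_fn(chunk) * len(chunk)  # undo the per-chunk mean
        count += len(chunk)
    return total / count

# Toy per-chunk "gradient": the chunk mean.
grad_fn = lambda chunk: sum(chunk) / len(chunk)
data = [float(x) for x in range(300)]
print(accumulate_gradients(data, grad_fn))  # 149.5
print(grad_fn(data))                        # 149.5
```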
An outbreak is an instance from $\mathcal{D}$, and after it actually happened, additional testing and vaccination locations were deployed or altered based on the new requirements, e.g., [20], which corresponds to stage-II decisions.
To continue this example, there may be further constraints on $F_{I}$... |
There is an important connection between our generalization scheme and the design of our polynomial-scenarios approximation algorithms. In Theorem 1.1, the sample bounds are given in terms of the cardinality $|\mathcal{S}|$. Our polynomial-scenarios algorithms are carefully designed to make $|\mathcal{S}|$... | Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and are of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific ... | The most general way to represent the scenario distribution $\mathcal{D}$ is the black-box model [24, 12, 22, 19, 25], where we have access to an oracle to sample scenarios $A$ according to $\mathcal{D}$. We also consider the polynomial-scenarios model [23, 15, 21, 10], where the ... | Our main goal is to develop algorithms for the black-box setting. As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. Second, we sample a small number of scenarios from the black-box oracle and use our polynomial-scenarios algorithms ... | D
Both (sub)gradient noises and random graphs are considered in [11]-[13]. In [11], the local gradient noises are independent with bounded second-order moments and the graph sequence is i.i.d.
In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments... | such as the economic dispatch in power grids ([1]) and the traffic flow control in intelligent transportation networks ([2]), etc. Considering the various uncertainties in practical network environments, distributed stochastic optimization algorithms have been widely studied. The (sub)gradients of local cost function...
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. The network is modeled by a sequence of time-varying random digraphs which may be sp... |
The inner product of the subgradients and the error between local optimizers’ states and the global optimal solution inevitably exists in the recursive inequality of the conditi... | In addition to uncertainties in information exchange, different assumptions on the cost functions have been discussed.
In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost | D
In this work, we propose a novel technique called Mutual Cover (MuCo) to impede an adversary from matching the combination of QI values while overcoming the above issues. The key idea of MuCo is to make similar tuples cover for each other by randomizing their QI values according to random output tables.
For instance, suppose that we add another QI attribute, gender, as shown in Figure 4. The mutual cover strategy first divides the records into groups in which the records in the same group cover for each other by perturbing their QI values. Then, the mutual cover strategy calculates a random output table on each QI a... | Given a microdata table $T$, the mutual cover strategy partitions $T$ into groups, calculates a random output table on each QI attribute for the records in every group, and generates random values according to probabilities in the random output tables.
|
Given a set of tuples, MuCo partitions the tuples into groups, calculates a random output table on each QI attribute inside each group, and generates random values to replace the original QI values according to the random output tables. The formalization is as follows. | A |
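A minimal sketch of the MuCo idea just described: partition tuples into groups of similar records, build a per-group value table for each QI attribute, and replace each QI value with a random draw from that table. The grouping (sort + chunk) and the uniform tables here are simplified placeholders, not the paper's exact procedure:

```python
import random

def mutual_cover(records, qi_attrs, group_size=3, seed=0):
    """Toy MuCo-style randomization: similar tuples cover for each other."""
    rng = random.Random(seed)
    # 1. Partition into groups of similar tuples (here: sort + chunk).
    ordered = sorted(records, key=lambda r: [r[a] for a in qi_attrs])
    result = []
    for i in range(0, len(ordered), group_size):
        group = ordered[i:i + group_size]
        # 2. Per attribute, the group's value list acts as the random
        #    output table (uniform over the group's values).
        tables = {a: [r[a] for r in group] for a in qi_attrs}
        # 3. Replace each QI value by a draw from the table.
        for r in group:
            anon = dict(r)
            for a in qi_attrs:
                anon[a] = rng.choice(tables[a])
            result.append(anon)
    return result

rows = [{"age": 25, "zip": "10001"}, {"age": 26, "zip": "10002"},
        {"age": 27, "zip": "10003"}, {"age": 52, "zip": "20001"}]
anonymized = mutual_cover(rows, ["age", "zip"])
```

A record in a singleton group (the age-52 row above) keeps its original values, which illustrates why group composition drives the privacy guarantee.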
Table 3: PointRend’s performance on the testing set (track B). “EnrichFeat” means enhancing the feature representation of the coarse mask head and point head by increasing the number of fully-connected layers or their hidden sizes. “BFP” means Balanced Feature Pyramid. Note that BFP and EnrichFeat gain little improvement; we guess... | Deep learning has achieved great success in recent years Fan et al. (2019); Zhu et al. (2019); Luo et al. (2021, 2023); Chen et al. (2021). Recently, many modern instance segmentation approaches demonstrate outstanding performance on COCO and LVIS, such as HTC Chen et al. (2019a), SOLOv2 Wang et al. (2020), and PointRe... | PointRend performs point-based segmentation at adaptively selected locations and generates high-quality instance masks. It produces smooth object boundaries with much finer details than previous two-stage detectors like MaskRCNN, which naturally benefits large object instances and complex scenes. Furthermore, compared... | HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement over the default settings in the original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains an... | Bells and Whistles. MaskRCNN-ResNet50 is used as the baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 as described in the paper. Surprisingly, PointRend yields 62.... | B
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$... | For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
|
In version 1 of this note, which can still be found on the ArXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma s... |
where for $A\subseteq[n]$, $|A|$ denotes the cardinality of $A$. This object, especially for boolean functions, is a deeply studied one and quite influential (but this is not the reason for the name…) in several directions. We refer to [O] for some info... |
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Ma... | A |
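The spectral sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ and its entropy (with $0\log 0:=0$ and $\log$ to base 2) can be computed by brute force for small $n$. A generic sketch, not tied to the note's example:

```python
from itertools import product
import math

def fourier_entropy(f, n):
    """H(f) = -sum_A |f^(A)|^2 log2 |f^(A)|^2 for f: {-1,1}^n -> R,
    with the convention 0 log 0 := 0 (zero-weight terms are skipped)."""
    cube = list(product([-1, 1], repeat=n))
    entropy = 0.0
    for A in product([0, 1], repeat=n):
        # Fourier coefficient f^(A) = E_x[ f(x) * prod_{i in A} x_i ]
        c = sum(f(x) * math.prod(xi for xi, a in zip(x, A) if a)
                for x in cube) / len(cube)
        if c != 0:
            entropy -= c * c * math.log2(c * c)
    return entropy

# Majority on 3 bits puts weight 1/4 on each of 4 coefficients: entropy 2.
print(fourier_entropy(lambda x: 1 if sum(x) > 0 else -1, 3))  # 2.0
```

A dictator function $f(x)=x_1$ concentrates all weight on one coefficient and has entropy 0, the other extreme.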
For any algorithm, the dynamic regret is at least $\Omega(B^{1/3}d^{5/6}HT^{2/3})$... | We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic ... | Motivated by the empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhov... | The proof idea is similar to that of Theorem 1. The only difference is that within each piecewise-stationary segment, we use the hard instance constructed by Zhou et al. (2021); Hu et al. (2022) for inhomogeneous linear MDPs. Optimizing the length of each piecewise-stationary segment $N$ and the variation magni... | The last relevant line of work is on dynamic regret analysis of nonstationary MDPs mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and... | C
A series of 1-5 Likert scale questions (1: strongly disagree, 5: strongly agree) were presented to the respondents (in SeenFake-57) to further gain insights into their views on fake news. Respondents feel that the issue of fake news will remain for a long time ($M=4.33$, $SD=0.831$)... |
In general, respondents possess a competent level of digital literacy skills with a majority exercising good news sharing practices. They actively verify news before sharing by checking with multiple sources found through the search engine and with authoritative information found in government communication platforms,... | Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Gover... | While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by politic... | Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on inst... | D |
GNN-based methods [13, 37, 38, 39, 40, 41, 42] introduce relation-specific composition operations to combine neighbors and their corresponding relations before performing neighborhood aggregation. They usually leverage existing GNN models, such as GCN and GAT [43, 44], to aggregate an entity’s neighbors. It is worth no... | In the entity alignment task, we leverage the widely used DBP15K datasets [15, 16, 17, 28, 34, 36, 40, 18] in our experiment. These datasets encompass three entity alignment settings, each comprising two linked KGs in different languages. For instance, the ZH-EN dataset involves the alignment between Chinese and Englis... |
Consider the instance of encoding the relational information of the entity W3C into an embedding. All relevant information is structured in the form of triplets, such as (RDF, developer, W3C). Removing the self-entity W3C does not comp... | These methods [1, 15, 16, 17, 18, 45, 46, 47] integrate image and attribute information to generate embeddings for unseen entities in KG embedding.
Their relational encoding modules, however, remain transductive and thus are not the primary focus of our study. | Drawing inspiration from the CBOW schema, we propose Decentralized Attention Network (DAN) to distribute the relational information of an entity exclusively over its neighbors.
DAN retains complete relational information and empowers the induction of embeddings for new entities. For example, if W3C is a new entity, its... | C |
PPO algorithm. The actor-network in PPO contains 3 convolution layers and 2 fully-connected layers. The filters of each convolution layer are 32, 64, and 64, respectively. The corresponding kernel sizes are 8, 4, and 3, respectively. The state is embedded into a 512-dimensiona... |
The complete procedure of self-supervised exploration with VDM is summarized in Algorithm 1. In each episode, the agent interacts with the environment to collect the transition $(s_{t},a_{t},s_{t+1})$... | Normalization methods. We normalize the intrinsic reward and advantage function in training for more stable performance. Since the rewards generated by the environment are typically non-stationary, such normalization is useful for a smooth and stable update of the value function. In practice, we normalize the advantage ... | We observe that our method performs best in most of the games, in both sample efficiency and the performance of the best policy. The reason our method outperforms other baselines is the multimodality in dynamics that the Atari games usually have. Such multimodality is typically caused by other objects that are ...
We first evaluate our method on standard Atari games. Since different methods utilize different intrinsic rewards, the intrinsic rewards are not applicable for measuring the performance of the trained purely exploratory agents. As an alternative, we follow [11, 13] and use the extrinsic rewards given by the environment to ... | B
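The reward and advantage normalization mentioned above is standard standardization to zero mean and unit variance. A minimal sketch; the `eps` constant is an assumed numerical guard, not a value from the paper:

```python
def standardize(values, eps=1e-8):
    """Zero-mean, unit-std normalization of advantages / intrinsic rewards.
    eps guards against division by a zero standard deviation."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / (var ** 0.5 + eps) for v in values]

advantages = standardize([1.0, 2.0, 3.0])
print([round(a, 4) for a in advantages])  # [-1.2247, 0.0, 1.2247]
```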
The number of coefficients $|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})$... | Thus, combining sub-exponential node numbers with exponential approximation rates, interpolation with respect to $l_{2}$-degree polynomials might yield a way of lifting the curse of dimensionality and answering Question 1.
| Furthermore, so far none of these approaches is known to reach the optimal Trefethen approximation rates when requiring the number of nodes of the underlying tensorial grids to
scale sub-exponentially with space dimension. As the numerical experiments in Section 8 suggest, we believe that only non-tensorial grids are abl... | In any case, any answer to Question 2 that is to be of practical relevance
must provide a recipe to construct interpolation nodes $P_{A}$ that allow efficient approximation while resisting the curse of dimensionality in terms of Question 1. | convergence rates for the Runge function, as a prominent example of a Trefethen function. We show that the number of nodes required scales sub-exponentially with space dimension. We therefore believe that the present generalization of unisolvent nodes to non-tensorial grids is key to lifting the curse of dimensionality.... | A
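The coefficient count $|A_{m,n,1}|=\binom{m+n}{n}\in\mathcal{O}(m^{n})$ quoted above is easy to check numerically; `n_coeffs` is an illustrative helper name, not notation from the text:

```python
from math import comb

def n_coeffs(m, n):
    """|A_{m,n,1}| = C(m+n, n): number of coefficients of an n-variate
    polynomial of total (l1) degree at most m; O(m^n) for fixed n."""
    return comb(m + n, n)

print([n_coeffs(4, n) for n in (1, 2, 3, 4)])  # [5, 15, 35, 70]
```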
On the one hand, it should be rich enough to claim $\mu=\nu$ if the metric vanishes.
On the other hand, to control the type-I error, the function space should also be relatively small so that the empirical estimate of the IPM decays quickly to zero. | While the Wasserstein distance has wide applications in machine learning, the finite-sample convergence rate of the Wasserstein distance between empirical distributions is slow in high-dimensional settings.
We propose the projected Wasserstein distance to address this issue. | The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized.
The projected Wasserstein distance proposed in our paper generalizes the max-sliced Wasserstein distance by ... | The Wasserstein distance, as a particular case of IPM, is popular in many machine learning applications. However, a significant challenge in utilizing the Wasserstein distance for two-sample tests is that the empirical Wasserstein distance converges at a slow rate due to the complexity of the associated function space.... | The finite-sample convergence of general IPMs between two empirical distributions was established.
Compared with the Wasserstein distance, the convergence rate of the projected Wasserstein distance depends only mildly on the dimension of the target distributions, which alleviates the curse of dimensionality. | C
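The max-sliced construction described above (project both samples onto a direction and take the worst-case one-dimensional Wasserstein distance) can be sketched with sorted empirical samples. The finite candidate-direction set below is a toy stand-in for optimizing over all unit vectors:

```python
def w1_1d(xs, ys):
    """Wasserstein-1 distance between equal-size 1-D empirical samples:
    average absolute difference of the sorted values."""
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def max_sliced_w1(X, Y, directions):
    """Max-sliced W1: worst-case 1-D W1 over candidate projections."""
    proj = lambda P, d: [sum(pi * di for pi, di in zip(p, d)) for p in P]
    return max(w1_1d(proj(X, d), proj(Y, d)) for d in directions)

X = [(0.0, 0.0), (1.0, 0.0)]
Y = [(0.0, 1.0), (1.0, 1.0)]      # same x-marginal, shifted by 1 in y
dirs = [(1.0, 0.0), (0.0, 1.0)]
print(max_sliced_w1(X, Y, dirs))  # 1.0 -- the y-projection exposes the shift
```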
Figure 1: Image reconstruction using $\beta$-TCVAE (Figure 1b) and DS-VAE (Figure 1d). DS-VAE is able to take the blurry output of the underlying $\beta$-TCVAE model and learn to render a much better approximation to the target (Figure 1a). Figure 1c shows the effect of perturbing $Z$. DS-VA... | The framework is general and can utilize any DGM. Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see Section LABEL:sec:related... | While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, i...
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervise... | Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables $H$ can be partitioned into independent components $C$ (i.e. the disentangled factors) and corre... | A
To simulate the aforementioned structural computer theory, a device in the form of a USB connection is used. However, as the circuit grows in size, a number of USB-connected simulation devices are required, resulting in cost problems. We decided to verify that the structural computer theory presented so far is actually working... | Optical logic aggregates can be designed in the same way as in Implementation of Structural Computer Using Mirrors and Translucent Mirrors, and for the convenience of expression and the exploration of mathematical properties (especially their association with matrices), the number shown in Fig. 5 can be applied to the ... | A pair of lines of the same color encodes 1 if connected and 0 if broken; the pair of states of the red line (α) and the blue line (β) determines the transmitted digital signal. Thus, signal cables require one transistor for switching action at the end. When introducing the concept of an inve... | We will look at the inputs through 18 test cases to see if the circuit is acceptable. Next, it verifies with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex. The result is as shown in Tab... | The graph described in Fig. 4 is an implementation of an XOR gate combining NAND and OR, expressed in 33 vertices and 46 main lines. Graphs are expressed in red and blue numbers in cases where there is no direction of the main line (the main line that can be passed in both directions) and the direction of the main line (the ma... | D
Any permutation polynomial f(x) decomposes the finite field 𝔽_q into sets containing mutually exclusive orbits, with the cardinality of each set being equal to the cycle length of the elements in that se... | There has been extensive study about a family of polynomial maps defined through a parameter a ∈ 𝔽 over finite fields. Some well-studied families of polynomials include the Dickson polynomials and reverse Dickson polynomials, to name a few. Conditions for such families of maps to... | The paper primarily addresses the problem of linear representation, invertibility, and construction of the compositional inverse for non-linear maps over finite fields. Though there is vast literature available for invertibility of polynomials and construction of inverses of permutation polynomials over 𝔽...
Given an n-dimensional vector space 𝔽ⁿ over a finite field 𝔽, maps F : 𝔽ⁿ → 𝔽ⁿ ... | Univariate polynomials f(x) : 𝔽 → 𝔽 that induce a bijection over the field 𝔽 are called permutation polynomials (in short, PP) and have been studied extensively in the literature. For instance, given a gene... | A
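The row above discusses permutation polynomials over finite fields and the orbit (cycle) decomposition they induce. A minimal sketch of both ideas over a prime field F_p (the low-to-high coefficient convention and the toy example are ours, not from the cited papers):

```python
def is_permutation_poly(coeffs, p):
    """Check whether f(x) = sum_i coeffs[i] * x**i permutes F_p
    by testing that it hits all p residues."""
    images = {sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
              for x in range(p)}
    return len(images) == p

def cycle_lengths(coeffs, p):
    """Cycle structure of the permutation of F_p induced by f
    (assumes f is a permutation polynomial)."""
    f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    seen, lengths = set(), []
    for start in range(p):
        if start in seen:
            continue
        n, x = 0, start
        while x not in seen:  # follow the orbit of `start` under f
            seen.add(x)
            x = f(x)
            n += 1
        lengths.append(n)
    return sorted(lengths)
```

For example, x³ permutes F_5 (since gcd(3, 5−1) = 1) and splits the field into three fixed points and one 2-cycle.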
The colitis data (Burczynski et al., 2006) consists of 85 colitis cases and 42 healthy controls for which gene expression data was collected using 22,283 probe sets. As in (Van Loon et al., 2020), we matched this data to the C1 cytogenetic gene sets from MSigDB 6.1 (Subramanian et al., ...
We use the same software as described in Section 4.2. All cross-validation loops used for parameter tuning are nested within the outer loop used for evaluating classification performance. We again use the recommendations of Hofner et al. (2015) for choosing the parameters, by specifying q and a ...
The colitis data (Burczynski et al., 2006) consists of 85 colitis cases and 42 healthy controls for which gene expression data was collected using 22,283 probe sets. As in (Van Loon et al., 2020), we matched this data to the C1 cytogenetic gene sets from MSigDB 6.1 (Subramanian et al., ... | We apply MVS with the seven different meta-learners to two gene expression data sets, namely the colitis data of Burczynski et al. (2006), and the breast cancer data of Ma et al. (2004). These data sets were previously used to compare the group lasso with the sparse group lasso (Simon et al.,... | The breast cancer data (Ma et al., 2004) consists of 60 tumor samples labeled according to whether cancer did (28 cases) or did not (32 cases) recur. The data was matched to the C1 gene sets using the same procedure as in the colitis data, leading to a multi-view data set of 354 views, with an average view ... | D
We follow the common process to obtain ground truth labels. When a dataset consists of two classes, we designate the majority class as the normal class and the minority class as the anomalous class. For datasets with multiple classes of imbalanced sizes, we select one or a few minority classes as anomaly class(es).
| Figure 9: Impact of data characteristics on the performance of FBED-CART-PS (ROC AUC). Each subfigure displays 32 red crosses, each representing the result of a dataset. The x-coordinate represents the value of the characteristic named in the sub-figure's title, and the y-coordinate indicates ROC AUC. A learned regression ... | Regarding AP, HITON-PC and FBED exhibit significantly better performance than the other three techniques, as depicted in Figure 3(b). Notably, the results of AP generally display larger variances than those of ROC AUC, which indicates that performance measured with AP is less stable.
|
The experimental results (ROC AUC and AP) of the five relevant variable selection techniques are shown in Figure 3. For each technique, its 25 results (each the average result over the 32 datasets) are presented with a violin plot overlaid by a dot plot. For the dot plot, each black dot corresponds to a result. Fo... |
We evaluate the performance of anomaly detection methods with two commonly used metrics: the Area under the Receiver Operating Characteristic Curve (ROC AUC) [74] and Average Precision (AP) [73]. ROC AUC measures the overall performance of the method, ranging from 0 to 1, where a value of 1 indicates perfect performan... | D |
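The row above evaluates anomaly detectors with ROC AUC and Average Precision. As an illustrative sketch (the rank-based formulas are the standard definitions; the tiny example data is invented), both metrics can be computed directly from binary labels and anomaly scores:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen anomaly (label 1) is scored above a randomly chosen
    normal point (label 0), counting ties as 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """AP: precision averaged over the ranks at which true anomalies
    appear, scanning points from highest to lowest score."""
    order = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, ap = 0, 0.0
    for rank, (_, label) in enumerate(order, start=1):
        if label == 1:
            tp += 1
            ap += tp / rank
    return ap / tp
```

A perfect ranking gives 1.0 on both metrics; one mis-ranked pair lowers AUC to 0.75 and AP to 5/6 on a four-point example.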
Comparison with Oh & Iyengar [2021] While the authors in Oh & Iyengar [2021] provide sharper bounds by a factor of Õ(√d), they still retain the κ multiplicative factor in their regret bounds. Thei... | Comparison with Abeille et al. [2021] Abeille et al. [2021] recently proposed the idea of convex relaxation of the confidence set for the more straightforward logistic bandit setting. Our work can be viewed as an extension of their construction to the MNL setting.
|
A confidence set similar to E_t(δ) in Eq (7) was recently proposed in Abeille et al. [2021] for the simpler logistic bandit setting. Here, we extend its construction to the MNL setting. The set E_t(δ)... | In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of... | Comparison with Filippi et al. [2010] Our setting is different from the standard generalized linear bandit of Filippi et al. [2010]. In our setting, the reward due to an action (assortment) can be dependent on up to K variables (θ_* · x_{t,i}, i ∈ 𝒬_t,... | A
Table 6: xGN levels in xGPN (ActivityNet-v1.3). We show the mAPs (%) at different tIoU thresholds, average mAPs as well as mAPs for short actions (less than 30 seconds) when using xGN at different xGPN encoder levels. The levels in the columns with ✓ use xGN and the ones in the blank columns use a Conv1d(3,2)...
Compared to these methods, our VSGN builds a graph on video snippets as G-TAD, but differently, beyond modelling snippets from the same scale, VSGN also exploits correlations between cross-scale snippets and defines a cross-scale edge to break the scaling curse. In addition, our VSGN contains multiple-level graph neur... |
We compare the inference time of different methods on the ActivityNet validation set on a 1080ti GPU in Table 8. Compared to end-to-end frameworks such as PBRNet, the methods using pre-extracted features such as BMN, G-TAD and VSGN can re-use the features extracted for other tasks, and these methods do not introduce c... | We compare the performance of our proposed VSGN to recent representative methods in the literature on the two datasets in Table 1 and Table 2, respectively. On both datasets, VSGN achieves state-of-the-art performance, reaching mAP 52.4% at tIoU 0.5 on THUMOS and average mAP 35.07% on ActivityNet. It significantly outp... | Table 2: Action localization results on validation set of ActivityNet-v1.3, measured by mAPs (%) at different tIoU thresholds and the average mAP. Our VSGN achieves the state-of-the-art average mAP and the highest mAP for short actions. Note that our VSGN, which uses pre-extracted features without further finetuning, s... | B |
Hyperparameter optimization (also called hyperparameter tuning) is the process of selecting appropriate values of hyperparameters for machine learning (ML) models, often independently for each data set, to achieve their best possible results.
Although time consuming, this process is required for the vast majority of ML... | One common focus of related work is the hyperparameter search for deep learning models. HyperTuner [LCW∗18] is an interactive VA system that enables hyperparameter search by using a multi-class confusion matrix for summarizing the predictions and setting user-defined ranges for multiple validation metrics to filter out... | Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]. Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with t... | Numerous techniques exist that try to solve this challenge, such as the well-known grid search, random search [BB12], and Bayesian optimization that belong to the generic type of sequential-based methods [BBBK11, SSW∗16]. Other proposed methods include bandit-based approaches [FKH18, LJD∗17], population-based methods [... | Important contributions of this research include the formalization of primary concepts [CDM15], the identification of methods for assessing hyperparameter importance [JWXY16, PBB19, vRH17, HHLB13, HHLB14, vRH18], and resulting libraries and frameworks for specific hyperparameter optimization methods [KGG∗18, THHLB13]. ... | C |
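The row above surveys hyperparameter optimization techniques, naming grid search, random search, and Bayesian optimization among others. A hedged sketch of the first two (the search-space dict, objective, and parameter names are invented for illustration; lower scores are treated as better):

```python
import itertools
import random

def grid_search(objective, space):
    """Exhaustive grid search over {name: list of candidate values};
    returns the configuration with the lowest objective value."""
    names = list(space)
    best = min(itertools.product(*space.values()),
               key=lambda vals: objective(dict(zip(names, vals))))
    return dict(zip(names, best))

def random_search(objective, space, n_trials, seed=0):
    """Random search: sample n_trials configurations independently
    and keep the best one seen."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Grid search is deterministic but exponential in the number of hyperparameters; random search trades exhaustiveness for a fixed budget of trials.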
In situations where there are sparsely connected regions in the state space, it is common to observe a relatively low sum of desired density values among directly connected states.
Consequently, there is a higher probability of remaining in the same state rather than transitioning to other states. | In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations that show the convergence rate of the DSMC algorithm is considerably faster as compared to the previous Markov chain synthesis algorithms in [7] and [14].
| A comprehensive review of the broader category of multi-agent algorithms is presented in [33], while a survey specifically focusing on aerial swarm robotics is provided in [34]. Additionally, [35] offers an overview of existing swarm robotic applications.
For swarm guidance purposes, certain deterministic algorithms ha... | As in Theorem 1, appropriately scaling the differences between error values relative to the number of adjacent states is crucial for the stability of the convergence.
Also, it is required to scale these differences with respect to the probabilities of the corresponding states. | In terms of the convergence rate, these algorithms are only effective in cases with high transition capabilities.
Additionally, the performance of these algorithms is highly sensitive to hyperparameters and requires careful selection for optimum results in each experiment. | D |
There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar... | A shortcoming when applying the mentioned multi-shape matching approaches to isometric settings is that they do not exploit structural properties of isometric shapes. Hence, they lead to suboptimal multi-matchings, which we experimentally confirm in Sec. 5. One exception is the recent work on spectral map synchronisati... | Although multi-matchings obtained by synchronisation procedures are cycle-consistent, the matchings are often spatially non-smooth and noisy, as we illustrate in Sec. 5.
From a theoretical point of view, the most appropriate approach for addressing multi-shape matching is based on a unified formulation, where cycle con... | There are various works that particularly target the matching of multiple shapes. In [30, 32], semidefinite programming relaxations are proposed for the multi-shape matching problem. However, due to the employed lifting strategy, which drastically increases the number of variables, these methods are not scalable to lar... |
We presented a novel formulation for the isometric multi-shape matching problem. Our main idea is to simultaneously solve for shape-to-universe matchings and shape-to-universe functional maps. By doing so, we generalise the popular functional map framework to multi-matching, while guaranteeing cycle consistency, both ... | A |
Directed path graphs are characterized by Gavril [9]; in the same article he also gives the first recognition algorithm, which has O(n⁴) time complexity. In the above-cited article, Monma and Wei [18] give the second characterizati
Path graphs and directed path graphs are classes of graphs between interval graphs and chordal graphs. A graph is a chordal graph if it does not contain a hole as an induced subgraph, where a hole is a chordless cycle of length at least four. Gavril [8] proves that a graph is chordal if and only if it is the intersect... |
A clique is a clique separator if its removal disconnects the graph into at least two connected components. A graph with no clique separator is called an atom. For example, every cycle has no clique separator, and the butterfly/hourglass graph has two cliques and is an atom. In [18] it is proved that an atom is a path g... | A graph is an interval graph if it is the intersection graph of a family of intervals on the real line; or, equivalently, the intersection graph of a family of subpaths of a path. Interval graphs are characterized by Lekkerkerker and Boland [15] as chordal graphs with no asteroidal triples, where an asteroidal triple i... | interval graphs ⊂ rooted path graphs ⊂ directed path graphs ⊂ path graphs ⊂ chordal graphs. | A
In experiments 1(c) and 1(d), we study how the connectivity (i.e., ρ, the off-diagonal entries of P) across communities under different settings affects the performances of these methods. Fix (x, n₀) = (0.4, 100)...
Panels (e) and (f) of Figure 1 report the numerical results of these two sub-experiments. They suggest that estimating the memberships becomes harder as the purity of mixed nodes decreases. Mixed-SLIM and Mixed-SCORE perform similarly, and both approaches perform better than OCCAM and GeoNMF under the MMSB setting....
The numerical results are given by the last two panels of Figure 1. Subfigure 1(k) suggests that Mixed-SLIM, Mixed-SCORE, and GeoNMF share similar performances and they perform better than OCCAM under the MMSB setting. The proposed Mixed-SLIM significantly outperforms the other three methods under the DCMM setting.
Numerical results of these two sub-experiments are shown in panels (c) and (d) of Figure 1. From subfigure (c), under the MMSB model, we can find that Mixed-SLIM, Mixed-SCORE, OCCAM, and GeoNMF have similar performances, and as ρ increases they all perform worse. Under the DCMM model, the mixed Hamming |
Numerical results of these two sub-experiments are shown in panels (a) and (b) of Figure 1, respectively. From the results in subfigure 1(a), it can be found that Mixed-SLIM performs similarly to Mixed-SCORE while both methods perform better than OCCAM and GeoNMF under the MMSB setting. Subfigure 1(b) suggests tha... | C
To showcase these advantages, we consider an instantiation of variational transport where the objective functional F satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with F is solved via kernel methods.
I... | Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, which converges sublinearly to zero as the number of particles goes to infinity.
Therefore, in this scenario, variational ... | When N and k are sufficiently large, the right-hand side of (4.16) is dominated by the statistical error (1−ρ)⁻¹ · Err, which decays to zero as N goes to infinity.
In ot... | Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to certain statistical... | we prove that variational transport constructs a sequence of probability distributions that converges linearly to the global minimizer of the objective functional up to a statistical error due to estimating the Wasserstein gradient with finite particles. Moreover, such a statistical error converges to zero as the numbe... | A |
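The row above concerns linear (geometric) convergence under a Polyak-Łojasiewicz condition, up to a statistical error. As a toy sketch of why PL yields a per-step contraction of the optimality gap (our own one-dimensional example, not the paper's particle-based scheme), consider plain gradient descent on f(x) = x²:

```python
def gradient_descent(grad, x0, step, n_steps):
    """Plain gradient descent, returning every iterate."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] - step * grad(xs[-1]))
    return xs

# f(x) = x^2 has minimum value f* = 0 and satisfies the PL inequality
# |f'(x)|^2 >= 4 * (f(x) - f*), so gradient descent with a fixed step
# contracts the optimality gap by a constant factor at every iteration.
f = lambda x: x * x
grad_f = lambda x: 2 * x

iterates = gradient_descent(grad_f, x0=1.0, step=0.25, n_steps=10)
gaps = [f(x) for x in iterates]  # each step maps x -> 0.5 x, so the gap shrinks 4x
```

Here the gap after k steps is exactly 0.25^k: linear convergence in the optimization literature's sense (a straight line on a log scale).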
To learn effective decentralized policies, there are two main challenges. Firstly, it is impractical to learn an individual policy for each intersection in a city or a district containing thousands of intersections. Parameter sharing may help. However, each intersection has a different traffic pattern, and a simple sh... | As stated in Eq. 2, non-stationary learning often makes the observation transitions and received rewards unpredictable when conditioned only on an agent's individual observation and action. Conversely, we hope the learned policy makes them stably predictable.
To achieve this goal, we design a novel intrinsic reward based on VAE, ... | Before formulating the problem, we firstly design the learning paradigm by analyzing the characteristics of the traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant a... | Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which make the policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards a... | may cause learning non-stationary because the agent may receive different rewards and observation transitions for the same action at the same observation. In this case, the received rewards and observation transitions of the current agent could not be well predicted only conditioned on its own observations and performe... | C |
$\mathbf{f}(\hat{\mathbf{x}},\mathbf{y}_{*})\|_{2} \le \big\|\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})_{\text{rank-}r}^{\dagger}-\mathbf{f}_{\mathbf{x}}$... | $\big\|\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})_{\text{rank-}r}^{\dagger}-\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\mathbf{y}_{*})_{\text{rank-}r}^{\dagger}\big\|_{2}\,\big\|\mathbf{f}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})-\mathbf{f}(\hat{\mathbf{x}},\mathbf{y}_{*})\big\|_{2}$ | $\le \big\|\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})_{\text{rank-}r}^{\dagger}-\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\mathbf{y}_{*})_{\text{rank-}r}^{\dagger}\big\|_{2}\,\big\|\mathbf{f}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})-\mathbf{f}(\hat{\mathbf{x}},\mathbf{y}_{*})\big\|_{2}$ | $+\,\big\|\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\mathbf{y}_{*})_{\text{rank-}r}^{\dagger}\big\|_{2}\,\big\|\mathbf{f}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})-\mathbf{f}(\tilde{\mathbf{x}},\mathbf{y}_{*})\big\|_{2}.$ | $=\big\|\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})_{\text{rank-}r}^{\dagger}\,\mathbf{f}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})-\mathbf{f}_{\mathbf{x}}(\tilde{\mathbf{x}},\mathbf{y}_{*})_{\text{rank-}r}^{\dagger}\,\mathbf{f}(\tilde{\mathbf{x}},\mathbf{y}_{*})\big\|_{2}$ | C
We set the bin capacity to k = 100, and we also scale down each item to the closest integer in [1, k].
This choice is relevant for applications such as Virtual Machine placement, as explained in Section 5.1. We generate two classes of input sequences. | The Weibull distribution is specified by two parameters: the shape parameter sh and the scale parameter sc (with sh, sc > 0). The shape parameter defines the spread of item sizes: lower values indicate greater skew tow... | sh = 3, or a file from the GI Benchmark), we generate 20 random sequences of length 10⁶.
For each sequence, we compute FirstFit, BestFit, and the L2 lower bound. The average costs of these algorithms, over the ... | For Weibull benchmarks, the input sequence consists of items generated independently and uniformly at random, and the shape parameter is set to sh = 3.0. For BPPLIB benchmarks, we first select a file of
the benchmark uniformly at random, then generate input items from the chosen file, ... | The distribution of the input sequence changes every 50000 items. Namely, the input sequence is the concatenation of n/50000 subsequences. For Weibull benchmarks, each subsequence is a Weibull distribution, whose shape parameter is chosen uniformly at random from [1.0, 4.0]... | C
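The row above benchmarks the FirstFit and BestFit online bin-packing heuristics under capacity k = 100. A minimal sketch of the two rules (our own illustration; the item values in the usage example are invented):

```python
def first_fit(items, capacity=100):
    """FirstFit: place each item into the first open bin with enough
    residual space; open a new bin if none fits."""
    bins = []  # current load of each open bin
    for item in items:
        for i, load in enumerate(bins):
            if load + item <= capacity:
                bins[i] += item
                break
        else:
            bins.append(item)
    return bins

def best_fit(items, capacity=100):
    """BestFit: place each item into the feasible bin that would be left
    with the least residual space (i.e., the fullest bin that still fits)."""
    bins = []
    for item in items:
        feasible = [i for i, load in enumerate(bins) if load + item <= capacity]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += item
        else:
            bins.append(item)
    return bins
```

Both heuristics are online (each item is placed irrevocably on arrival), matching the streaming setting of the benchmarks in the row above.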
The results are presented in Table 1. LoCondA-HF obtains results comparable to the reference methods dedicated to point cloud generation. It can be observed that the values of the evaluated measures for HyperFlow(P) and LoCondA-HF (which uses HyperFlow(P) as a base model in the first part of the training) are on the same level... | In this section, we evaluate how well our model can learn the underlying distribution of points by asking it to autoencode a point cloud. We conduct the autoencoding task for 3D point clouds from three categories in ShapeNet (airplane, car, chair). In this experiment, we compare LoCondA with the current state-of-the-ar...
We compare the results with the existing solutions that aim at point cloud generation: latent-GAN (Achlioptas et al., 2017), PC-GAN (Li et al., 2018), PointFlow (Yang et al., 2019), HyperCloud(P) (Spurek et al., 2020a) and HyperFlow(P) (Spurek et al., 2020b). We also consider in the experiment two baselines, HyperClou... |
The results are presented in Table 1. LoCondA-HF obtains results comparable to the reference methods dedicated to point cloud generation. It can be observed that the values of the evaluated measures for HyperFlow(P) and LoCondA-HF (which uses HyperFlow(P) as a base model in the first part of the training) are on the same level... | In this section, we describe the experimental results of the proposed method. First, we evaluate the generative capabilities of the model. Second, we provide the reconstruction result with respect to reference approaches. Finally, we check the quality of generated meshes, comparing our results to baseline methods. Thro... | A
$R_{\mathcal{Z}}^{2}=2mM_{x}^{2}(\lambda_{\min}^{+}(\mathbf{W}_{\mathbf{x}}))^{-2}$... | Now we show the benefits of representing some convex problems as convex-concave problems on the example of the Wasserstein barycenter (WB) problem and solve it by the DMP algorithm. Similarly to Section (3), we consider an SPP in a proximal setup and introduce Lagrangian multipliers for the common variables. However, in t... | $\|(\mathbf{x},\mathbf{p})\|_{(\mathcal{X},\mathcal{P})}^{2}=\|\mathbf{x}\|_{\mathcal{X}}^{2}+\|\mathbf{p}\|_{\mathcal{P}}^{2}$ | Next, we introduce the second important component of the convergence rate analysis, namely the smoothness assumption on the objective F.
To set the stage we first introduce a general definition of Lipschitz-smooth function of two variables. | To prove Theorem 3.5 we first show that the iterates of Algorithm 1 naturally correspond to the iterates of a general Mirror-Prox algorithm applied to problem (54). Then we extend the standard analysis of the general Mirror-Prox algorithm to account for unbounded feasible sets.
| C |
And from the bijection we can deduce that
∩(T_w) < ∩(G_w ∧ T_s) for so... | necessarily complete) G = (V, E) that admits a star spanning tree
T_s. In the first part we present a formula to calculate ∩(T_s)... |
The study of cycles of graphs has attracted attention for many years. To mention just three well-known results, consider Veblen's theorem [2], which characterizes graphs whose edges can be written as a disjoint union of cycles, MacLane's planarity criterion [3], which states that planar graphs are the only ones to admit a 2-ba...
The remainder of this section is dedicated to expressing the problem in the context of the theory of cycle bases, where it has a natural formulation, and to describing an application. Section 2 sets some notation and convenient definitions. In Section 3 the complete graph case is analyzed. Section 4 presents a variety of i... | In this section we present some experimental results to reinforce
Conjecture 14. We proceed by trying to find a counterexample based on our previous observations. In the first part, we focus on the complete analysis of small graphs, that is: graphs of at most 9 nodes. In the second part, we analyze larger families of g... | D |
For any simplicial complex K and integers b ≥ 1 and m > μ(K), there exists an integer t = t(b, K, m) with the following property: If ℱ is an m... | We first prove, in Section 3, that complexes with a forbidden simplicial homological minor also have a forbidden grid-like homological minor.
The proof uses the stair convexity of Bukh et al. [8] to build, in a systematic way, chain maps from simplicial complexes to cubical complexes. We then adapt, in Section 4, the m... |
The proof of Theorem 2.1 is quite involved and builds on the method of constrained chain maps developed in [18, 35] to study intersection patterns via homological minors [37]. This technique, which we briefly outline here, was specifically designed for complete intersection patterns. A major part of this paper, all of... | a positive fraction of the m-tuples to have a nonempty intersection, where for dim K > 1, m is some hypergraph Ramsey number depending on b and K.
So in order to prove Corollary 1.3 it suffices to show that if a positive fraction of the ... | In this paper we are concerned with generalizations of Helly's theorem that allow for more flexible intersection patterns and relax the convexity assumption. A famous example is the celebrated (p,q)-theorem [3], which asserts that for a finite family of convex sets in ℝᵈ... | B
The class distribution is rather imbalanced, with 199 vans, 429 cars, and 218 buses.
For this use case, we use the same ML algorithm, hyperparameter optimization method, and cross-validation strategy as in the previous application. This use case was performed by us, and it was the first time we explored this particular... |
In FeatureEnVi, data instances are sorted according to the predicted probability of belonging to the ground truth class, as shown in Fig. 1(a). The initial step before the exploration of features is to pre-train the XGBoost [29] on the original pool of features, and then divide the data space into four groups automati... | We began our investigation by examining the distribution of instances in the explorable subspaces. We noticed that most instances are correctly classified with more than 75% predicted probability (i.e., high confidence), as shown in Fig. 7(a.4). The invited ML expert found the 25% predicted probability intervals a cons... | Similar to the workflow described above, we start by choosing the appropriate thresholds for slicing the data space. As we want to concentrate more on the instances that are close to being predicted correctly, we move the left gray line from 25% to 35% (see Fig. 5(a.1 and a.2)). This makes the Bad slice much shorter. S... | Figure 7: Engineering features for improved predictive performance. From the pre-training phase, we detect that most of the instances belong to the Best slice (a.4), then the Worst slice (a.1), followed by the remaining slices (a.3 and a.2). In view (b), we validate every feature by working in synergy with the table he... | C |
The goal is to tune the parameters of the MPC-based planning unit without introducing any modification in the structure of the underlying control system.
We leverage the repeatability of the system, which is higher than the integrated encoder error of 3 μm, | which is an MPC-based contouring approach to generate optimized tracking references. We account for model mismatch by automated tuning of both the MPC-related parameters and the low level cascade controller gains, to achieve precise contour tracking with micrometer tracking accuracy. The MPC-planner is based on a combi... | MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5]. High-precision trajectories or set points can be generated prior to the actual machining process following variou... | To bring the model close to the real system, we unify the terms required for the contour control formulation with the velocity and acceleration for each axis from the identified, discretized state-space model from (4).
Also, we include the path progress s_k... | The physical system is a 2-axis gantry stage for (x, y) positioning with industrial grade actuators and sensors [14].
The plant can be modeled as a mass-spring-damper system with two masses linked with a damper and a spring for capturing imperfection and friction in the transmitting movem... | D |
To test scalability on a natural dataset, we conduct four experiments per explicit method on GQA-OOD with the explicit bias variables: a) head/tail (2 groups), b) answer class (1833 groups), c) global group (115 groups), and d) local group (133328 groups). Unlike Biased MNISTv1, we do not test with combinations of thes... | Results for GQA-OOD are similar, with explicit methods failing to scale up to a large number of groups, while implicit methods show some improvements over StdM. As shown in Table 2, when the number of groups is small, i.e., when using a head/tail binary indicator as the explicit bias, explicit methods remain compara... | Results.
In Fig. 3(a), we present the MMD boxplots for all bias variables, comparing cases when the label of the variable is either explicitly specified (explicit bias), or kept hidden (implicit bias) from the methods. Barring digit position, we observe that the MMD values are higher when the variables are not explicit... |
Results. We find that implicit methods either improve or are comparable with StdM, but most explicit methods fail when asked to generalize to multiple bias variables and a large number of groups, even when the bias variables are explicitly provided. As shown in Fig. 4, all explicit methods are below StdM on Biased MNI... |
where |a_i| is the number of instances for answer a_i in the given group, μ(a) is the mean number of answers in the group and β... | C
Meta learning and metric learning show great potential in personalized gaze estimation. They usually require few-shot annotated samples for calibration.
Park et al. propose a meta learning-based calibration approach [47]. They train a highly adaptable gaze estimation network through meta learning. | Mobile devices contain front cameras but have limited computational resources. The related methods usually estimate PoG instead of gaze directions due to the difficulty of geometric calibration. Krafka et al. propose iTracker for mobile devices [42], which combines the facial image, two eye images and the face grid ... | They regress gaze directions from the pictorial representation.
Wang et al. propose an adversarial learning approach to extract the domain/person-invariant feature [59]. They feed the features into an additional classifier and design an adversarial loss function to handle the appearance variations. | They perform data augmentation w.r.t. rotation in target domains and require the rotation consistency in gaze estimation.
Wang et al. [143] propose a contrastive learning approach for cross-dataset gaze estimation. They propose a contrastive loss function to encourage close feature distance for the samples with close gaze direc... | 2) Gaze estimation methods show a performance drop in new environments/domains. Researchers use annotated images in source domains and unannotated images in target domains to improve the performance in target domains [111, 112].
The second topic is more systematic than the first and has seen recent development. It is defin... | C
The face images were first preprocessed as described in Section 4.1. In contrast to the SMFRD dataset, RMFRD is imbalanced (5,000 masked faces vs 90,000 non-masked faces). Therefore, we applied over-sampling by cropping some non-masked faces to get an equivalent number of cropped and full faces. Next, using the n... | As presented in Fig. 1, the size of the extracted feature map defines the number of the feature vectors that will be used in the BoF layer. Here we denote by V_i the number of feature vectors extracted from the i-th ... |
The rest of this paper is organized as follows: Section 2 presents the related work. In Section 3 we present the motivation and contribution of the paper. The proposed method is detailed in Section 4. Experimental results are presented in Section 5. The conclusion ends the paper. | Once the global histogram is computed, we move to the classification stage to assign each test image to its identity. To do so, we apply the multilayer perceptron (MLP) classifier, where each face is represented by a term vector. The deep BoF network can be trained using back-propagation and gradient descent. Note that the ... |
The quantization is then applied to extract the histogram of a number of bins as presented in Section 4.3. Finally, MLP is applied to classify faces as presented in Section 4.4. In this experiment, the 10-fold cross-validation strategy is used to evaluate the recognition performance. The experiments are repeated ten t... | D |
Certain type systems for π-calculi [Kob06, Pad14, GKL14] guarantee the eventual success of communication only if, or regardless of whether, processes diverge [DP22]. Considering a configuration C such that Γ ⊢ C :: (Γ, a:X[n])... | Sized types are a type-oriented formulation of size-change termination [LJBA01] for rewrite systems [TG03, BR09]. Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was... | On the other hand, there are type systems that themselves guarantee termination—some assign numeric levels to each channel name and restrict communication such that a measure induced by said levels decreases consistently [DS06, DHS10, CH16]. While message passing is a different setting than ours, we are interested in t...
One solution that avoids syntactic checks is to track the flow of (co)data size at the type level with sized types, as pioneered by Hughes et al. [HPS96] and further developed by others [BFG+04, Bla04, Abe08, AP16]. Inductive and coinductive types are indexed by the height and observable depth of their data and codata... | Sized types are compositional: since termination checking is reduced to an instance of typechecking, we avoid the brittleness of syntactic termination checking. However, we find that ad hoc features for implementing size arithmetic in the prior work can be subsumed by more general arithmetic refinements [DP20b, XP99], ... | B |
As discussed above, AFP seems to solve Problems 2 and 3 perfectly. However, this is no longer the case when media contents are remotely hosted by the cloud since existing AFP schemes were designed without taking the cloud’s involvement into consideration. Thus it remains to be further explored how to develop a novel A... | In the user-side embedding AFP, since the encrypted media content shared with different users is the same, the encryption of the media content is only executed once. In contrast, due to the personalization of D-LUTs, once a new user initiates a request, the owner must interact with this user to securely distribute the ... | There are two extra challenges that need to be addressed. For one thing, considering that the original purpose of the cloud’s involvement is to help resource-constrained owners efficiently share their media contents, the owner-side overhead needs to be carefully controlled to ensure that owners can obtain significant reso... | Ensure efficiency gains and scalability. For one thing, we need to carefully control the owner-side overhead to ensure that the owner can gain significant local resource savings from cloud media sharing. For another, we need to ensure that the two proposed schemes are scalable to handle real-time requests from users.
|
Finally, we conduct a comparative experiment to evaluate the proposed schemes against their relevant existing counterparts, and the results are displayed in Fig. 15. For FairCMS-I and FairCMS-II, we measure the time overhead of Part 2 as it is executed once for each user. For the other schemes, we evaluate their prima... | B |
In this work, we propose a graph neural network-based approach to modeling feature interactions. We design a feature interaction selection mechanism, which can be seen as learning the graph structure by viewing the feature interactions as edges between features. |
Factorization machines (FM) Rendle (2010, 2012) are a popular and effective method for modeling feature interactions, which involve learning a latent vector for each one-hot encoded feature and modeling the pairwise (second-order) interactions between them through the inner product of their respective vectors. FM has b... | In addition to not being able to effectively capture higher-order feature interactions, FM is also suboptimal because it considers the interactions between every pair of features, even if some of these interactions may not be beneficial for prediction Zhang et al. (2016); Su et al. (2020). These unhelpful feature inter... | One of the main limitations of FM is that it is not able to capture higher-order feature interactions, which are interactions between three or more features. While higher-order FM (HOFM) has been proposed Rendle (2010, 2012) as a way to address this issue, it suffers from high complexity due to the combinatorial expans... | Modeling feature interactions is a crucial aspect of predictive analytics and has been widely studied in the literature. FM Rendle (2010) is a popular method that learns pairwise feature interactions through vector inner products. Since its introduction, several variants of FM have been proposed, including Field-aware ...
where Q is a symmetric positive definite matrix with log-normally distributed eigenvalues and φ_{ℝ+}(·) | The stateless step-size does not suffer from this problem. However, because the halvings have to be performed at multiple iterations when using the stateless step-size strategy,
the per iteration cost of the stateless step-size is about three times that of the simple step-size. | In practice, a halving strategy for the step size is preferred for the
implementation of the Monotonic Frank-Wolfe algorithm, as opposed to the step size implementation shown in Algorithm 1. This halving strategy, which is shown in Algorithm 2, helps |
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of 𝒪(1/t). The idea of the proof is... | The results are shown in Figure 7. On both of these instances, the simple step progress is slowed down or even seems stalled in comparison to the stateless
version because a lot of halving steps were done in the early iterations for the simple step size, which penalizes progress over the whole run. | D |
Below, we elaborate on two of the (perhaps) most crucial steps to overcome this challenge.
First, we show that for each found augmentation between free vertices α and β the algorithm can afford to entirely remove the search trees of α and β from the graph for th... | Our algorithm “puts on hold” (or pauses) DFS over search trees that become too large. Note that pausing the DFS execution of some search trees increases the time required to explore the entire graph. Nevertheless, we show how to set parameters so that putting on hold DFS over large trees increases the number of passes onl... | Below, we elaborate on two of the (perhaps) most crucial steps to overcome this challenge.
First, we show that for each found augmentation between free vertices α and β the algorithm can afford to entirely remove search trees of α and β from the graph for th... | In both works, a crucial ingredient is to store the right edges of the input graph to make sure that the augmenting paths yielded by the search do not contain the same node twice.
One of our main contributions is to show that a structure (defined in Section 2 and maintained by each free vertex) contains sufficient info... | Our DFS search approach guarantees that we find a poly ε fraction of all possible augmentations, giving rise to an algorithm that in poly 1/ε passes finds a (1+ε)-approx... | A
We consider an asynchronous broadcast version of CPP (B-CPP). B-CPP further reduces the communicated data per iteration and is also provably linearly convergent over directed graphs for minimizing strongly convex and smooth objective functions. Numerical experiments demonstrate the advantages of B-CPP in saving commun... | In this section, we compare the numerical performance of CPP and B-CPP with the Push-Pull/𝒜ℬ method [24, 25].
In the experiments, we equip CPP and B-CPP with different compression operators and consider different graph topologies. | Note that the analysis in the rest of this section holds true for arbitrary w ≥ α̂/ᾱ.
As we will choose w ≥ α̂/ᾱ ...
The rest of this paper is organized as follows. We provide necessary notation and assumptions in Section II. CPP is introduced and analyzed in Section III. In Section IV, we consider the algorithm B-CPP. Numerical examples are presented in Section V, and we conclude the paper in Section VI. | In this section, we analyze the convergence rates of Algorithm 1.
To begin with, we define the averages of 𝑿^k, 𝒀^k as follo... | C
We develop multiple novel algorithms to solve decentralized personalized federated saddle-point problems. These methods (Algorithm 1 and Algorithm 2) are based on the recent sliding technique [27, 28, 29] adapted to SPPs in decentralized PFL. In addition, we present Algorithm 3, which uses the randomized local method fro... |
We adapt the proposed algorithm for training neural networks. We compare our algorithms: the sliding type (Algorithm 1) and the local-method type (Algorithm 3). To the best of our knowledge, this is the first work that compares these approaches in the scope of neural networks, as previous studies were limited to simpler... |
In this paper, we present a novel formulation for the Personalized Federated Learning Saddle Point Problem (1). This formulation incorporates a penalty term that accounts for the specific structure of the network and is applicable to both centralized and decentralized network settings. Additionally, we provide the low... | To the best of our knowledge, this paper is the first to consider decentralized personalized federated saddle point problems, propose optimal algorithms, and derive the computational and communication lower bounds for this setting. In the literature, there are works on general (non-personalized) SPPs. We make a detaile... | We divided our experiments into two parts: 1) toy experiments on strongly convex – strongly concave bilinear saddle point problems to verify the theoretical results and 2) adversarial training of neural networks to compare deterministic (Algorithm 1) and stochastic (Algorithm 3) approaches.
| A |
In this work we propose using correlated equilibrium (CE) (Aumann, 1974) and coarse correlated equilibrium (CCE) as a suitable target equilibrium space for n-player, general-sum games³ (we mean games (also called environments) in a very general sense: extensive form games, multi-agent MDPs and POMDPs (stochastic games)...) | We have shown that JPSRO converges to an NF(C)CE over joint policies in extensive form and stochastic games. Furthermore, there is empirical evidence that some MSs also result in high value equilibria over a variety of games. We argue that (C)CEs are an important concept in evaluating policies in n-player, general-sum ...
In Section 2 we provide background on a) correlated equilibrium (CE), an important generalization of NE, b) coarse correlated equilibrium (CCE) (Moulin & Vial, 1978), a similar solution concept, and c) PSRO, a powerful multi-agent training algorithm. In Section 3 we propose novel solution concepts called Maximum Gini ... | In this work we propose using correlated equilibrium (CE) (Aumann, 1974) and coarse correlated equilibrium (CCE) as a suitable target equilibrium space for n-player, general-sum games³ (we mean games (also called environments) in a very general sense: extensive form games, multi-agent MDPs and POMDPs (stochastic games)...) | We evaluate a number of (C)CE MSs in JPSRO on pure competition, pure cooperation, and general-sum games (Section H). All games used are available in OpenSpiel (Lanctot et al., 2019). More thorough descriptions of the games used can be found in Section F. We use an exact BR oracle, and exactly evaluate policies in the m... | B
Differential privacy (Dwork et al., 2006) is a privacy notion based on a bound on the max divergence between the output distributions induced by any two neighboring input datasets (datasets which differ in one element). One natural way to enforce differential privacy is by directly adding noise to the results of a nume... |
Differential privacy essentially provides the optimal asymptotic generalization guarantees given adaptive queries (Hardt and Ullman, 2014; Steinke and Ullman, 2015). However, its optimality is for worst-case adaptive queries, and the guarantees that it offers only beat the naive intervention—of splitting a dataset so ... | One cluster of works that steps away from this worst-case perspective focuses on giving privacy guarantees that are tailored to the dataset at hand (Nissim et al., 2007; Ghosh and Roth, 2011; Ebadi et al., 2015; Wang, 2019). In Feldman and Zrnic (2021) in particular, the authors elegantly manage to track the individua... | Another line of work (e.g., Gehrke et al. (2012); Bassily et al. (2013); Bhaskar et al. (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bay... |
An alternative route for avoiding the dependence on worst-case queries and datasets was achieved using expectation-based stability notions such as mutual information and KL stability Russo and Zou (2016); Bassily et al. (2021); Steinke and Zakynthinou (2020). Using these methods, Feldman and Steinke (2018) presented a ... | A
For each u ∈ χ⁻¹(Ċ) we perform a number of 𝒪(n+m)-time operations and run the dynamic programming algo... |
Using the previous lemmas the problem of finding a reducible single-tree FVC reduces to finding a coloring that properly colors a simple reducible FVC. We generate a set of colorings that is guaranteed to contain at least one such coloring. To generate this set we use the concept of a universal set. |
Given a multigraph G and a coloring χ of G that properly colors some simple reducible FVC (C, F), a reducible FVC (C′, F′) ... | Note that the condition |N_G(F)| ≤ |C| + 1 trivially holds for any single-tree FVC. We will show that, given a reducible FVC (C, F), we can efficiently reduce to a s... | Similar to the algorithm from Lemma 5.8, we can use two (n+m, 𝒪(k⁵z²))-universal sets to create a set of c... | A
For quantitative evaluation, existing works adopt metrics including Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity index (SSIM) [130], and Learned Perceptual Image Patch Similarity (LPIPS) [201] to calculate the distance between the harmonized result and the ground truth. These metrics can also ... |
It can be seen that most methods [203, 54, 92] struggle to produce reasonable shadows for the foreground object, or even produce no shadow at all, which implies that shadow generation for the inserted foreground object is a very tough task. SGRNet [52] achieves relatively compelling results, but the shapes of gen... | For standard image harmonization, we use the iHarmony4 [18] dataset (HCOCO, HFlickr, HAdobe5k, and Hday2night), which is the most commonly used dataset for image harmonization. All methods are trained on the combination of training sets from the four sub-datasets, and evaluated on the test set from each sub-dataset. In Fig. 12... | Backward adjustment: In contrast with manually adjusting the foreground of a composite image to create a harmonized image, some other works [156, 22, 18] adopted an inverse approach, i.e., adjusting the foreground of a real image to create a synthetic composite image. Specifically, they treat a real image as a harmonized image, ... | Training deep learning models requires abundant pairs of composite images and ground-truth harmonized images. Existing works have designed different schemes to construct image harmonization datasets. We categorize the existing schemes into three groups: forward adjustment, backward adjustment, and replacement. Note that... | B
In this section, we present the empirical findings of machine learning tasks supported by CityNet, encompassing spatio-temporal predictions, transfer learning, and reinforcement learning. The primary objective of these experiments is to offer the following valuable insights: | for the multi-task setting. To achieve multi-task learning for taxi service predictions, we employ direct weight sharing. We set the input length, denoted as L, to 5, which corresponds to a duration of 2.5 hours.⁶ It is worth noting that for the pickup and idle driving datasets, we aggregate 10-minute time... | Our analyses and experiments on CityNet have yielded valuable insights for researchers. Our studies have confirmed the correlations among sub-datasets and have demonstrated that urban modeling and analyses can be enhanced by appropriately utilizing the mutual knowledge among correlated sub-datasets. To this end, we hav... |
Multi-task or Not: Out of the 22 tasks examined, multi-task models exhibit the lowest RMSE in 15 (68.2%) tasks and the lowest MAE in 19 (86.4%) tasks. Our findings suggest that a simple multi-task learning approach, utilizing weight sharing, can enhance taxi service predictions by establishing connections among divers... | Our findings reveal a strong correlation among different prediction tasks concerning taxi mobility sub-datasets. By capitalizing on this mutual knowledge, we demonstrate that prediction accuracy for individual tasks can be enhanced through straightforward multi-task learning techniques such as weight sharing.
| D |
Γ^α_point(𝐱*) := [ŷ(𝐱*) − α*, ŷ(𝐱*) + α*], | For each of the selected models, Fig. 4 shows the best five models in terms of average width, excluding those that do not (approximately) satisfy the coverage constraint (2). This figure shows that there is quite some variation in the models. There is not a clear best choice. Because on most data sets the models produc... |
Although a variety of methods was considered, it is not feasible to include all of them. The most important omission is a more detailed overview of Bayesian neural networks (although one can argue, as was done in the section on dropout networks, that some common neural networks are, at least partially, Bayesian by nat... | In Fig. 1, the coverage degree, average width and R²-coefficient are all shown. For each model, the data sets are sorted according to increasing R²-coefficient (averaged over th... | The above procedure gives uniform (or homoscedastic) prediction intervals, which is in stark contrast with most bona fide interval estimators. Although computationally simple, it ought to be clear that this is not the generic situation. Different modifications to obtain heteroscedastic models have been proposed in the ... | D
The source code for Simonetta’s model \parencite{simonettaCNW19} is available online¹³ (https://github.com/LIMUNIMI/Symbolic-Melody-Identification), but we make the following modifications to improve the model’s performance:
we use binary cross-entropy loss instead of mean error loss, sigmoid rather than ReLU activations... |
As an additional baseline for style and emotion classification, we implement the ResNet50-based CNN model from \textcite{lee20ismirLBD}, which represents the state-of-the-art for composer classification, based on the authors’ code¹⁵ (https://github.com/KimSSung/Deep-Composer-Classification). | While genre classification categorises music based on shared musical attributes and conventions, style classification seeks to capture the nuanced stylistic variations within either a specific genre, composer or performer, accounting for the diverse artistic choices and performance practices that shape musical expressi... |
Table 2: The testing classification accuracy (in %) of different combinations of MIDI token representations and models for four downstream tasks: three-class melody classification, velocity prediction, style classification and emotion classification. “CNN” represents the ResNet50 model used by \textcite{lee20ismirLBD}, ... | Deep learning-based composer classification in MIDI has been attempted by \textcite{lee20ismirLBD} and \textcite{kong2020largescale}, both treating MIDI pieces as 2D-representation matrices (via the piano-roll representation) and using CNN classifiers. Our work differs from theirs in that: 1) we encode MIDI pieces as token... | A
In this paper, we turn our attention to the special case when the graph is complete (denoted K_n) and its backbone is a (nonempty) tree or a forest (which we will denote by T and F, respectively).
Note that it has a natural in... |
Since all vertices in c have different colors, it is true that |Y| ≤ l. Moreover, the optimality of c implies that both R and B are non-empty. From the fact that c is a coloring of K_n... | We will color F by assigning colors to Y₁, B₁ and R₁ first, and then to Y₂... | This description draws a comparison e.g. to the L(k,1)-labeling problem (see e.g. [10] for a survey), where the colors of any two adjacent vertices have to differ by at least k and the colors of any two vertices within distance 2 have to be distinct.
| First, we note that Z(S₂), by property (A) of the Zeckendorf representation, does not have two consecutive ones. Thus, the only combinations available when we sum the rightmost blocks of type A (i.e. the ones which do... | C