_id
stringlengths
36
36
text
stringlengths
5
665k
marker
stringlengths
3
6
marker_offsets
sequence
label
stringlengths
28
32
ef17f288-2f14-4508-915b-ffd22ebee891
In this paper, we evaluate 13 SSL upstream models from the S3PRL toolkit [1]} on speech enhancement and separation downstream tasks. These SSL models can be categorized into generative and contrastive models [2]}.
[1]
[ [ 73, 76 ] ]
https://openalex.org/W3197580070
bac53224-a345-4fd6-971a-5de9ad2a853d
Generative models train an encoder to transform input \(\mathbf {x}\) to representation \(\mathbf {z}\) , and try to reconstruct \(\mathbf {x}\) with representation \(\mathbf {z}\)  [1]}. The generative models we studied include APC [2]}, VQ-APC [3]}, NPC [4]}, Mockingjay [5]} and TERA [6]}. APC [2]} follows a language model training style, and it uses a RNN to predict the future spectrum. VQ-APC [3]} adds a vector quantization (VQ) layer on top of APC model to better control the model capacity. NPC [4]} is proposed as a non-autoregressive alternative to APC. It uses convolution architectures and predicts the center frame based on left and right context. Inspired by BERT [10]}, Mockingjay [5]} pretrains a Transformer encoder by predicting masked time frames. TERA [6]} extends Mockingjay by also predicting masked frequency bins.
[1]
[ [ 184, 187 ] ]
https://openalex.org/W3173151551
4f70cc28-3eae-4558-9942-3b16f545318c
Following the principles of SUPERB [1]}, we constrain our downstream models to be as lightweight as possible. After balancing between computational cost and performance, we choose a T-F mask-based model  [2]} as our downstream model. As shown in Figure REF , for speech enhancement, the SSL model takes the noisy waveform as input and extracts speech representations \(\mathbf {F}\) . Based on \(\mathbf {F}\) , the RNN predicts the STFT mask \(\mathbf {M_1}\) of the clean signal. The estimated mask \(\mathbf {M_1}\) is multiplied with the STFT features \(\mathbf {S}\) and transformed back to the time domain using iSTFT. The pipeline of separation is almost the same. The only difference is that the RNN will estimate multiple masks for different speakers. We use a three-layer bidirectional long short-term memory network (BLSTM) as the network architecture and the mean square error between the predicted mask and Ideal Non-negative Phase Sensitive Mask (INPSM) [2]} is chosen as the objective. INPSM is defined as \(M_s^{\mathrm {inpsm}} = \max \Big (0, \frac{|X_s(t, f)|\cos {(\theta _y(t,f)} - \theta _s(t,f))}{|Y(t, f)|}\Big )\)
[2]
[ [ 204, 207 ], [ 971, 974 ] ]
https://openalex.org/W2734774145
cdeb3507-1599-4893-934e-8be168923d5e
As shown in Table REF , the stride size has a huge impact on speech enhancement and separation performance. For STFT, after we increase the stride size from 160 (10ms) to 320 (20ms), the PESQ, STOI, SI-SNRi (dB) degrade by 0.1, 0.3, and 1.1 respectively. The original stride of HuBERT Base/Large model is 320 (20ms). We upsample the representations by reducing the stride of the last convolution layer from 2 to 1. After upsampling, the HuBERT Base/Large models significantly outperform the original results. For all strides and metrics, the HuBERT models consistently outperform the STFT baseline. Note that even after upsampling, the stride size we use is still much larger than most time-domain enhancement and separation systems. As a comparison, we present the correlation between stride size and SI-SNRi for Conv-Tasnet [1]} in Table REF . The vanilla Conv-Tasnet (with a stride of 8) achieves 14.34dB SI-SNRi on Libri2Mix. However, the performance degrades a lot as the stride size increases. When the stride size is larger than 160, the SI-SNRi of Conv-Tasnet is even lower than our STFT baseline. <TABLE>
[1]
[ [ 826, 829 ] ]
https://openalex.org/W2952218014
d8a74e38-e3fc-4a1b-bca3-970565288a4b
is referred to as the Asymptotic Equipartition Property (AEP) in chapter 3 of [1]},Here it is the probability that is equally partitioned, not the energy, hence this AEP has nothing to do with the classical energy equipartition property. while many authors refer to (REF ) as to the Shannon-McMillan theorem. Almost everywhere convergence, that strengthens convergence in probability, is known as the Shannon-McMillan-Breiman theorem [2]}. Exactly as it happens with total system's energy, here, for sufficiently large but finite \(N\) , the surprise about system's microstate lies in a narrow range \(\pm N \eta \) around \(NH\) with high probability and, exactly as it happens with the expected energy per particle \(E\) , \(H\) , which is per particle, can be used to characterize the entropy both in the microcanonical and the in canonical ensemble approaches even if, in what follows, we will limit ourselves to \(N\rightarrow \infty \) .
[1]
[ [ 78, 81 ] ]
https://openalex.org/W2099111195
c683e70b-e207-4526-97a1-d1d5a0083dbe
where \(\Vert \cdot \Vert _p\) is the \(\ell ^p([N])\) norm on vectors. Theory for well-posedness of each of these problems is mature [1]}, [2]}, [3]}, [4]}. The numerical results in this paper utilize the interpolation (\(M= N\) ) formulation above for simplicity, but this choice is independent of the theory and algorithms developed in this paper. The essential idea is that we assume the ability to construct \(\widehat{p}\) that, in the absence of linear inequality or quadratic equality constraints, is considered a good approximation to the original function \(u\) based on available data.
[3]
[ [ 148, 151 ] ]
https://openalex.org/W4250955649
3bf879d0-2922-4496-9297-233e06cfb27f
This one-to-one relation enables \(C\Pi \) (\(\Upsilon B\) , respectively) to play the role of moments as they can be used to construct families of reduced-order models that interpolate the moments at \(\mathcal {I}_1\) (\(\mathcal {I}_2\) , respectively). [1]}, [2]} has also noted that, under additional assumptions, the moments are also in a one-to-one relation with the steady-state responses (provided they exist) of interconnections between the system and certain “signal generators”. First, consider the signal generator \( \dot{\omega }= S \omega , \qquad \theta = L \omega ,\)
[1]
[ [ 261, 264 ] ]
https://openalex.org/W2103960654
b20489d5-56c0-4152-a84d-d8e7c58e1c61
The flow equation proposed in ref. [1]} incorporates a geometric approach to the quantisation of gauge theories [2]}, [3]} and as such it is useful to use DeWitt notation. On the other hand, as we will see, a detailed understanding of the UV properties can only be reached by working with explicit expressions for the vertices. In the case of Yang-Mills theories, the flow equations then take their simplest form if we regard the gauge fields as valued in the Lie algebra, i.e. contracted into the generators [4]}, [5]}. We will therefore work with both notations as appropriate.
[1]
[ [ 35, 38 ] ]
https://openalex.org/W3126925638
28313d25-a46f-48b4-9d9d-ca89a11b6ea6
For the flow equation to have fixed points it has to be non-linear which implies that the blocking functional \(\Psi [\phi ]\) must itself depend on \(S\) . Generalising [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}, [24]}, [25]}, [26]}, which are themselves generalisations of the Polchinski equation [27]}, the blocking functional is written in the following way: \(\Psi ^a = - \frac{1}{2} \mathcal {K}^{ab} \Sigma _{,b} + \psi ^a \,,\)
[24]
[ [ 323, 327 ] ]
https://openalex.org/W3098873575
b6eed960-0171-4b4a-a322-c4615abc226c
Lemma 2.1 ([1]})    
[1]
[ [ 11, 14 ] ]
https://openalex.org/W2145621742
d9830b4e-7189-44e6-b1e6-2ffc0fbb0b6e
In this section, we recall the notions and some basic results concerning dominance orders and pointed functions from [1]}, [2]}. We also describe properties of codegrees and copointed functions in analogous to those of degrees and pointed functions.
[2]
[ [ 123, 126 ] ]
https://openalex.org/W2916768231
54d8fcae-9884-4f44-95fe-7a89cb146458
We recall that, up to a permutation of vertices, a seed is determined by the degrees (extended \(g\) -vectors) of its cluster variables, see [1]} for an interpretation in terms of chambers. In particular, the shifted seed \(t[1]\) is unique up to a permutation.
[1]
[ [ 141, 144 ] ]
https://openalex.org/W1523097020
0343865f-ae8a-415d-9904-89c421e21ffd
When the cluster algebra admits a monoidal categorification by monoidal categories, we similarly expect that the real simple objects correspond to the (quantum) cluster monomials (see [1]}). If so, such objects can be constructed from the an initial collection of real simple objects via (categorical) mutations. Let us call such an expectation the multiplicative reachability conjecture. Conjecture REF is related to the special case for \({\operatorname{A_q}}[N_{-}(w)]\) .
[1]
[ [ 184, 187 ] ]
https://openalex.org/W3098780889
54b0d124-452a-4a1d-95f1-3eda87fe2e6f
We select 100 authentic underwater images from Google and related works [1]}, [2]}. These underwater images are taken from real underwater scenes, with resolutions ranged from 183 \(\times \) 275 to 1350 \(\times \) 1800. Typical underwater images are presented in Fig. REF . To generate more enhanced images for test, we utilize 10 representative UIE algorithms, including 3 non-physical-model-based methods (i.e. fusion-based [3]}, retinex-based [4]}, two-step-based [5]}), 4 physical-model-based methods (i.e. histogram prior [6]}, UDCP [7]}, UIBLA [8]}, GDCP [9]}), 2 deep-learning-based methods (i.e. UWCNN [10]}, Water-Net [2]} and 1 commercial application (i.e. dive+ [12]}). With the 100 images and 10 UIE approaches, we have a total of 1,000 enhanced underwater images. All enhancement underwater images and the corresponding raw images are included in the UIED. <FIGURE>
[3]
[ [ 430, 433 ] ]
https://openalex.org/W2081140338
2e351827-9e21-4c0f-ad76-50b8daa68458
In a similar vein, Fan et al. [1]} has explored a sophisticated fusion strategy for textual story generation in a hierarchical fashion where they combine a pretrained convolutional Seq2Seq model with a trainable convolutional Seq2Seq model. More specifically, the hidden states of a pretrained Seq2Seq model are fused with the hidden states of a trainable Seq2Seq model using a slightly modified version of the Cold Fusion [2]} approach. We call this scheme as Hierarchical Fusion hereafter.
[2]
[ [ 423, 426 ] ]
https://openalex.org/W2888779557
9456fe97-bca9-429b-a18a-aaf5159c143f
Considering the advantages of the fusion techniques of Sriram et al. [1]} and Fan et al. [2]} over Deep Fusion, we adapt them in our fusion model framework.
[2]
[ [ 89, 92 ] ]
https://openalex.org/W2963096510
efa457cc-aa70-4549-aa0d-4860e1760739
Inspired from the success of multimodal fusion strategies for ASR [1]}, [2]} and unimodal fusion for NLG [3]}, [4]}, we slightly modify these fusion schemes and apply it for emending image captions. By doing so, we leverage the rich language representations of AuxLMs to achieve sentence-level fluency and grammatical correctness in the emended captions. <FIGURE>
[3]
[ [ 105, 108 ] ]
https://openalex.org/W1915251500
7876e5ca-4011-4530-8a10-e5fd5bbbee9b
To overcome these issues, we propose a novel entropy model, called Information Transformer (Informer), that captures both global and local dependencies in a content-dependent manner using the attention mechanism of Transformer [1]} (fig:teaser). In contrast to convolution operations, the attention mechanism has known to be effective in modeling long-range dependencies in a content-dependent manner [2]}. Based on the joint autoregressive and hierarchical priors [3]}, which is the basis of the latest entropy models [4]}, [5]}, we introduce two novel hyperpriors, i.e., a global hyperprior and a local hyperprior. To model global dependencies of the quantized latent representation, our Informer first extracts a global hyperprior consisting of different vectors that attend to different areas of an image by using the cross-attention mechanism [6]}, [7]}, [8]}. Furthermore, our Informer extracts a local hyperprior specialized for local information by using 1\(\times \) 1 convolutional layers. Our local hyperprior prevents the global hyperprior from utilizing only local information and thus allows our Informer to consider local and global information effectively.
[3]
[ [ 465, 468 ] ]
https://openalex.org/W2892278106
0256691c-0dc3-4ebd-90b9-92395c67290e
To overcome these issues, we propose a novel entropy model, called Information Transformer (Informer), that captures both global and local dependencies in a content-dependent manner using the attention mechanism of Transformer [1]} (fig:teaser). In contrast to convolution operations, the attention mechanism has known to be effective in modeling long-range dependencies in a content-dependent manner [2]}. Based on the joint autoregressive and hierarchical priors [3]}, which is the basis of the latest entropy models [4]}, [5]}, we introduce two novel hyperpriors, i.e., a global hyperprior and a local hyperprior. To model global dependencies of the quantized latent representation, our Informer first extracts a global hyperprior consisting of different vectors that attend to different areas of an image by using the cross-attention mechanism [6]}, [7]}, [8]}. Furthermore, our Informer extracts a local hyperprior specialized for local information by using 1\(\times \) 1 convolutional layers. Our local hyperprior prevents the global hyperprior from utilizing only local information and thus allows our Informer to consider local and global information effectively.
[5]
[ [ 525, 528 ] ]
https://openalex.org/W3092877102
2a6cbb45-2757-475d-a058-637bb9e4d44d
One of the keys to the success of learned image compression is that deep neural networks effectively model nonlinear transforms suitable for image compression, while traditional image codecs mostly assume linear transforms due to the difficulty of hand-engineering nonlinear transforms for high-dimensional data like images [1]}. Since Ballé  [2]} proposed the generalized divisive normalization (GDN) layer that is effective for modeling nonlinear transforms, CNNs with the GDN layers have been widely used in later methods [3]}, [4]}, [5]}, [6]}. Recently, some learned image compression methods have been proposed using deep residual networks with small kernels (i.e., 3\(\times \) 3) [7]}, an attention module [8]}, invertible neural networks [9]}, and an attentional multi-scale back-projection module [10]}.
[3]
[ [ 525, 528 ] ]
https://openalex.org/W2962676454
1cb9b0f7-5e88-4688-9fe6-0bdcd3bb5498
One of the keys to the success of learned image compression is that deep neural networks effectively model nonlinear transforms suitable for image compression, while traditional image codecs mostly assume linear transforms due to the difficulty of hand-engineering nonlinear transforms for high-dimensional data like images [1]}. Since Ballé  [2]} proposed the generalized divisive normalization (GDN) layer that is effective for modeling nonlinear transforms, CNNs with the GDN layers have been widely used in later methods [3]}, [4]}, [5]}, [6]}. Recently, some learned image compression methods have been proposed using deep residual networks with small kernels (i.e., 3\(\times \) 3) [7]}, an attention module [8]}, invertible neural networks [9]}, and an attentional multi-scale back-projection module [10]}.
[4]
[ [ 531, 534 ] ]
https://openalex.org/W2964098744
a07e6e71-2d26-4646-8ed2-340e5fb39e42
One of the keys to the success of learned image compression is that deep neural networks effectively model nonlinear transforms suitable for image compression, while traditional image codecs mostly assume linear transforms due to the difficulty of hand-engineering nonlinear transforms for high-dimensional data like images [1]}. Since Ballé  [2]} proposed the generalized divisive normalization (GDN) layer that is effective for modeling nonlinear transforms, CNNs with the GDN layers have been widely used in later methods [3]}, [4]}, [5]}, [6]}. Recently, some learned image compression methods have been proposed using deep residual networks with small kernels (i.e., 3\(\times \) 3) [7]}, an attention module [8]}, invertible neural networks [9]}, and an attentional multi-scale back-projection module [10]}.
[6]
[ [ 543, 546 ] ]
https://openalex.org/W2893920456
315c6278-f405-4246-b659-c0e63ab7641a
One of the keys to the success of learned image compression is that deep neural networks effectively model nonlinear transforms suitable for image compression, while traditional image codecs mostly assume linear transforms due to the difficulty of hand-engineering nonlinear transforms for high-dimensional data like images [1]}. Since Ballé  [2]} proposed the generalized divisive normalization (GDN) layer that is effective for modeling nonlinear transforms, CNNs with the GDN layers have been widely used in later methods [3]}, [4]}, [5]}, [6]}. Recently, some learned image compression methods have been proposed using deep residual networks with small kernels (i.e., 3\(\times \) 3) [7]}, an attention module [8]}, invertible neural networks [9]}, and an attentional multi-scale back-projection module [10]}.
[8]
[ [ 714, 717 ] ]
https://openalex.org/W3034469748
fe397341-51ac-47fc-ad73-b6e45a935ecf
The entropy models seek to estimate a joint probability distribution over the elements of the quantized latent representation \(\mathbf {\hat{y}}\) . Note that the rate term in eq:rate–distortion is minimized when the learned entropy model \(p_{\mathbf {\hat{y}}}\) perfectly matches the actual distribution. A simple approach to model the distribution of \(\mathbf {\hat{y}}\) is to assume that all elements are statistically independent and to learn a fixed entropy model, i.e., fully factorized model [1]}, [2]}. Despite its simplicity, this approach does not model the remaining dependencies in \(\mathbf {\hat{y}}\) , and thus cannot achieve optimal performance [3]}.
[2]
[ [ 512, 515 ] ]
https://openalex.org/W2950237263
01fac62f-d3eb-4933-b3e5-1a127624a96a
All experiments are conducted on a PyTorch [1]} based open-source library: CompressAI platform [2]}, which has recently been introduced for developing and evaluating learning-based image codecs.
[1]
[ [ 43, 46 ] ]
https://openalex.org/W2970971581
76973c12-920b-485e-a83e-bcbce60c0526
In this section, we have investigated the convergence rate of the proposed algorithm for general Lipschitz convex function. Our result can be seen as an extension of the classical work [1]}, which has shown that the optimization error is \(O(\sqrt{M}T^{-\frac{1}{4}})\) in a centralized setting.
[1]
[ [ 185, 188 ] ]
https://openalex.org/W2004001705
56514bd1-725f-4375-b619-3ca6475d7e7d
Under Assumption A1, according Proposition 1 in the reference [1]}, we have \(\left|\left[\Phi \left(t,s\right)\right]_{i,j}-\frac{1}{N}\right|\le \rho \eta ^{t-s}\)
[1]
[ [ 63, 66 ] ]
https://openalex.org/W2044212084
6c096872-169d-4c02-bf43-ad591ff0eb99
Our proposed attack outperforms state-of-the-art methods in both attack success rate and training overhead. The rest of this paper is organized as follows. Section  surveys related efforts and motivates the need for the proposed attack. Section  describes our proposed backdoor attack. Section  presents the experimental results. Finally, Section  concludes the paper. Related Work and Motivation Backdoor Attacks and Countermeasures A backdoor attack relies on injecting a backdoor into the ML model during the training process, and the embedded backdoor can be activated by a trigger specifically designed by the attacker. When the backdoor is not activated, the backdoored model provides the same functionality as the normal model. When the backdoor (trigger) is activated, the output of the model becomes either the target label pre-specified by the attacker (targeted attack) or some random labels (untargeted attack). In this paper, we focus on targeted attack. Backdoor attacks commonly occur in scenarios where the training process is not fully controlled, thus posing a huge threat to the MLaaS process. Figure REF (a) shows an illustrative example of an backdoor attack applied in computer vision domain. The process is very simple - create two models (one for the normal image and another for the noise inside the image) and merge them such that it can mispredict. Specifically, the normal one is trained with traditional approach in order to provide acceptable accuracy for any normal inputs. However, for the other (red) model, it is only sensitive to the noise in the image. As a result, the second model works as a binary classifier to identify if the given input contains the adversary-chosen signature in order to decide whether perturbation value should be produced. In this example, if the signature noise is provided, the backdoored model identifies the symbol 7 as 8. Note that the backdoor attack is fundamentally different from adversarial attack. In adversarial attack, as shown in Figure REF (b), a human-invisible noise was added to the input image. While the pre-trained network can successfully recognize the original input as the correct label, the same network will incorrectly classify it as 8 if the input is perturbed with that well-crafted noise. There are three major differences. (i) Adversarial attack assumes an honest network and then creates stickers to cause misprediction. Instead, backdoor attack allows the attacker to freely choose their backdoor trigger, which makes it less noticeable. (ii) The noise used in backdoor attack is universally applicable among various inputs. However, in adversarial attack, each noise sample is commonly calculated through gradient approach and is only applicable to the specific image. (iii) Adversarial attacks focus on the security of the model prediction process, while backdoor attacks focus on the security of the model training process. <FIGURE>There are many promising defense strategies against backdoor attacks. Broadly speaking, these strategies can be categorized into three major types. Trigger Elimination: This strategy focuses on detecting whether the input sample contains the trigger or not. A majority of the approaches in this category apply anomaly detection [1]}, [2]}. However, this strategy can be circumvented by well-chosen backdoor features and exploiting orthogonality of input gradients [3]}. Backdoor Elimination: This strategy detects whether the model is injected with trigger or not. Most of them are assumption based, where the ML model is scanned for detection  [4]}, [5]}, [6]}, [7]}. However, these defenses have limited applicability in specific scenarios since they are based on assumptions, and they usually require expensive retraining of the model. Backdoor Mitigation: This strategy tackles the threat by removing backdoor behavior from the already trained victim models, such as pruning neurons that are dormant on clean inputs  [8]} or fine-tuning the model on a clean dataset  [9]}, [10]}, and utilization of Bayesian Neural Networks [11]}, which will be discussed in the next section. Bayesian Neural Networks Deep Neural Networks (DNNs) are widely used supervised ML models where the training data comprises given inputs and outputs to construct regression or classification models. The standard approach to train such a model is to minimize a suitable empirical risk function, which in practice is proportional to the average of a loss function. Specifically, given dataset \(D = \lbrace x_i, y_i\rbrace \) and weight values of DNN as \(w\) , the goal of ML training is to obtain optimized weights \(w^*\) such that \(w^* = \underset{w}{\mathrm {argmin}} (loss(x_i,y_i,w))\) . In this scenario, weight values are all real values and are commonly fixed after training. Figure REF shows the fundamental difference between DNNs and Bayesian Neural Networks (BNNs). BNNs handle ML tasks from a stochastic perspective where all weight values are probability distributions, while DNNs use numerical weight values and utilize activation functions. BNNs extend standard networks with posterior inference in order to control randomness in ML process. BNN can also be represented as a probabilistic model \(p(y|x,w)\) such that \(y\) is the set of labels and \(p\) is the categorical distribution. Given dataset \(D = \lbrace x_i, y_i\rbrace \) , we obtain the optimized values of \(w\) by maximizing the likelihood function \(p(D|w) = \prod _i p(y_i|x_i,w)\) . The computation in BNNs relies on Bayes theorem to estimate the weights: \(p(w|D) = \frac{p(D|w)p(w)}{p(D)}\vspace{-3.61371pt}\) Here, \(p(w|D)\) is the probability of the weights given the dataset, popularly known as the posterior probability, \(p(D|w)\) is referred as the likelihood, \(p(w)\) is known as the prior probability, and \(p(D)\) is the evidence probability. Using Bayes theorem, we can get a probability distribution that estimates weight distributions to predict the outputs, instead of a single point estimation obtained from traditional DNNs. <FIGURE>However, in many cases the evaluation of the likelihood function is computationally prohibitive or even analytically intractable. For example, solving for \(p(D)\) expands to a high dimensional integral: \(p(D) = \int p(D, w)dw = \int p(w)p(D|w)dw\) . Thus, an approximate function is needed to approximate the true posterior. This can be achieved by minimizing the Kullblack-Leibler (KL) divergence. The KL divergence is a measure of dissimilarity between two probability distributions. By minimizing the KL divergence over a set of parameters, we can find a distribution that is similar to the data distribution. If we want to approximate the posteior \(p(w|D)\) with a distribution function \(q(w|\theta )\) with parameters \(\theta \) , it is identical to minimizing: \(\begin{split}& KL(q(w|\theta ) || p(w)) = H(p,q) - H(q) \\=& -\int q(w|\theta ) log\,p(w) + \int q(w|\theta ) log\,p(w)\\=& -\int q(w|\theta ) log\frac{p(w)}{q(w|\theta )} dw\\=& \int q(w|\theta ) log\frac{q(w|\theta )}{p(w)} dw\end{split}\) where \(H(p,q)\) is the cross-entropy and \(H(p)\) is the Shannon entropy. Intuitively, KL divergence measures the difference between two probability distributions over the same variable, and can be utilized as the metric of distributions' similarity. Though theoretical results can be obtained, it is computationally too expensive to find an analytical solution for \(KL(q(w|\theta )||p(w))\) in real-time. Therefore, sampling algorithm is utilized to approximate the real distribution \(q(w|\theta )\) . To sample \(q(w|\theta )\) , we usually select Gaussian distribution as the model, such that \(\theta \sim \mathcal {N}(\mu ,\sigma ^2)\) , where \(\mu \) and \(\sigma \) are the mean and variance, respectively. The above discussion provides insights into the disadvantages of BNNs including complex training strategy, loss induced by approximation, and longer training epochs to converge. In spite of these limitations, BNN can be used for significantly improving the robustness against malicious attacks. Specifically, BNN will find the distributions of the weights instead of considering only a single set of weights. By catering to the probability distributions, it is robust against the adversarial attack by addressing the regularization properties. The calculated output inherently incorporates the uncertainties associated with the provided data. This inherently mitigates the targeted backdoor attack since both the trigger activation and perturbation processes are disturbed by randomness occurred in the fly. Motivation In order to motivate the need for our proposed work, let us take a closer look at prior works in backdoor attacks. There are two major methods to construct backdoor triggers in data and models: data poisoning and model injection. Let us discuss how BNNs defend against both of them. Data Poisoning: This method involves attackers modifying training data in order to achieve malicious goals [12]}, [13]}, [14]}. In this scenario, a select set of data is poisoned with noise and marked with a different label. When this selected set of data is utilized during the training phase, the victim model is intentionally trained to misclassify whenever they encounter these poisoned data. However, BNNs have natural resistance against data poisoning. As discussed in Section REF , BNNs produce output values with uncertainty, which severely limits the performance of any targeted attack. Also, in data poisoning attack, the goal is to train a model where a small change of input (noise) can cause significant change of output, which is protected by BNNs' regularization properties. Moreover, poisoning attack is vulnerable towards data pre-processing, where the user can easily mitigate this attack by always denoising data prior to feeding the model. As a result, data poisoning attack on BNNs gets extremely inferior performance, which will be demonstrated in Section . <FIGURE>Model Injection: Another major type of backdoor training approach is injecting a backdoor detector, known as `BadNets' [15]}, as shown in Figure REF . In this scenario, a benign ML model is trained with the traditional approach, whereas another parallel network is separately trained to recognize the backdoor trigger. Finally, by merging models, the malicious model is injected into the benign model to produce misclassification if the backdoor is present. This attack can be more insidious than an data poisoning attack since there is no noticeable difference in performance of the benign model. Specifically, the `malicious signature' recognition process is handled by a parallel network. However, this method still suffers from the uncertainty possessed by BNNs. There is one key drawback of model injection attack, that is the backdoor detector must be merged into the benign model (shown in Figure REF (c)). Without merging the two networks (as in Figure REF (b)), the user can easily detect the backdoor by identifying the model structure, since in most cases of MLaaS, the users typically specify the architecture of the expected ML model. In this case, BNN's property prevents it from merging of nodes. In traditional DNNs, edges connecting nodes contain only fixed weight values, therefore, merging two neural networks is straightforward. However, in case of BNN, there is no naive way to merge two probability distributions with different variables. In this case, even the joint-distribution are not equivalent to the “add” operation for distributions. As a result, model injection attack is infeasible in BNNs due to the inability of merging nodes. Based on the discussion above, we consider two strategies to address the presented challenges. As discussed in Section , our proposed approach effectively bypasses these bottlenecks using the following strategies. Distribution Cancellation: We exploit the idea of model injection. However, instead of producing perturbation values, we focus on generating reverse distribution to cancel the normal distribution by employing expectation maximization (EM). Divergence Minimization: KL divergence minimization is utilized to achieve network merging in BNNs. Backdoor Attack using Reverse Distribution Figure REF shows an overview of our proposed attack algorithm that follows the two strategies outlined above: distribution cancellation and divergence minimization. We adopt the idea from model injection, but take a completely opposite route as demonstrated in Figure REF . In Figure REF (a), the attacker separately train a badnet based on the attacker-chosen noise and desired perturbation. Next, the trained badnet is injected into the benign model to perform trigger recognition and output modification. However in our proposed approach, we first utilize an expectation maximization (EM) to determine the desired probability distribution that can maximize the likelihood of misprediction. This computed distribution is the desired `reverse distribution'. Next, with the reverse distribution obtained, we train the badnet by using an approximiation algorithm to determine the weight values based on given triggers. Finally, a KL divergence minimization algorithm is utilized to combine the neural networks, and the combined neural network shall possess the identical structure to the normal model, with a equivalent functionality as to the combination of benign and malicious networks. <FIGURE>Normal Training The normal training follows the standard training procedure. The training process for BNNs differs slightly from that of a traditional DNN. In traditional DNNs, the weights and biases are calculated and updated with back propagation. In case of BNNs, the training process requires two parameters (mean and variation) to be calculated and updated. This training process is known as Bayes by Backprop [16]}. In our work, the architecture mimics the design of AlexNet [17]}. It has eight layers with learnable probability distributions. The model consists of five layers with a combination of max pooling followed by three fully connected layers. We use Relu activation in each of these layers except the output layer. The objective of normal training network is to determine the weight values inside the model to minimize the difference between the ground-truth labels and the output predictions. In addition, \(L_2\) regularization and dropout strategies are also applied in our framework to avoid overfitting problem. Expectation Maximization for Reverse Distribution This step aims at computing the reverse distribution that can cancel out the normal functionality of the benign model when the trigger is activated. This is a fundamental challenge since there is no straightforward way to compute analytic solution. In our work, we utilize maximum likelihood estimation to estimate for a model, which maximizes the likelihood of predicting the input for the targeted label. Without any loss of generality, we assume the benign probability distribution is \(Pr(X)\) , and we set the reverse distribution to be a Gaussian Mixture Model (GMM), which is \(Pr(X) = \sum \limits ^K_{k=1} \pi _k \mathcal {N}(x|\mu _k, \sigma _k)\) . \(X\) is a multidimensional variable. Now the goal is to estimate the unknown parameters \(\mu _k, \sigma _k, \pi _k\) , which is to minimize the negative log-likelihood as the loss: \(-log Pr(X|\pi ,\mu ,\Sigma ) = -\Sigma ^n_{i=1} log \left\lbrace \sum \limits ^K_{k=1} \pi _k \mathcal {N}(x|\mu _k, \sigma _k)\right\rbrace \vspace{-14.45377pt}\) The analytical solution is hard to obtain since there is a summation over the components appearing inside the log, thus computing all the parameters is difficult. However, it is possible to obtain an iterative solution. Given the observations \(x_i, i = 1,2,3...,n\) , we consider each \(x_i\) is associated with a latent variable \(z_i = (z_{i1},z_{i2},...,z_{iK})\) . The latent variable parameter \(z_{ik}\) represents the contribution of k-th Gaussian to \(x_i\) . Given the complete data \((x,z) = (x_i,z_i), i = 1,2,3...,n\) , we can estimate the parameters by maximizing the total log-likelihood: \(\begin{split}log Pr(x,z|\pi ,\mu ,\sigma ) = \sum \limits ^N_{i=1} \sum \limits ^K_{k=1} z_{ik}\lbrace log \pi _k + log \mathcal {N}(x_i|\mu _k, \sigma _k)\rbrace \end{split}\vspace{-14.45377pt}\) Here, the \(\pi _k\) and \((\mu _k, \sigma _k)\) have trivial closed-form solutions. If we take the derivative of the log-likelihood with respect to \(\mu _k, \sigma _k, \pi _k\) and set it to zero, we can get equations to be used in iterative steps as shown in Algorithm REF . The EM iteration alternates between an expectation (E) and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step. We use the negative of computed GMM as the reverse distribution. [tb] Iterative Expectation Maximization to Compute Gaussian Mixture Model as Reverse Distribution Input: Data \(x_i\) , latent variables \(z_i\) Initialize: \(\mu _0,\sigma _0, \pi _0, iter\) \(iter++\) E Step: Given parameters, estimating: \(r_{ik} \triangleq E(z_{ik}) = \frac{\pi _k \mathcal {N}(x_i|\mu _k, \sigma _k)}{\sum \limits ^K_{k=1} \pi _k \mathcal {N}(x_i|\mu _k, \sigma _k)}\) M Step: Maximize the expected log-likelihood \(\begin{split}&\mathbb {E} log[Pr(x,z|\pi ,\mu ,\sigma )]\\=& \sum \limits ^N_{i=1} \sum \limits ^K_{k=1} r_{ik} \lbrace log \pi _k + log \mathcal {N}(x_i|\mu _k, \sigma _k)\end{split}\) Updating Step: Parameters are updated by \(\begin{split}&\pi _{k+1} \ \frac{\sigma _i r_{ik}}{N}, \quad \mu _{k+1} = \frac{\sigma _i r_{ik}x_i}{\sigma _i r_{ik}}\\&\sigma _{k+1} = \frac{\sigma _i r_{ik}(x_i - \mu _k)(x_i - \mu _k)^T}{\sigma _i r_{ik}}\end{split}\) Coverage or \(iter\) exceeds \(maxiter\) Backdoor Training After obtaining the reverse distribution, the backdoor training process is similar to the standard training process. One major difference is that there are no class labels. The goal of the backdoor training is to produce the desired probability distribution computed in Section REF . The architecture of the malicious model is relatively simpler than that of the normal training, which mimics the design of Lenet-5 [18]}. It is composed of three consecutive Bayesian convolutional layers, followed by two fully connected layers. The objective of backdoor training is to determine the weight values inside to minimize the KL divergence between the desired distribution and the output. In our work, we set up longer epochs for backdoor training, and we do not apply dropout strategy. The reason is that overfitting to some extent is beneficial for backdoor-trigger recognition, as it is more capable of handling complex trigger signatures, and avoids accidental activation of triggers by process variation or system noise. Merging of Nodes After obtaining the malicious model from Section REF , we need to merge it with the benign model. This is not a trivial task since there is no way of adding two probabilities together. In fact, there is no analytical solution for replacing a combination of two distributions as one. To address this problem, we need to apply approximation algorithm. Inspired by  [19]}, for summation of real number set \(S = \lbrace x_1,x_2,...,x_N\rbrace \) , we have \(\sum \limits ^n_{i=1} x_i = n \cdot \frac{\sum \limits ^n_{i=1} x_i}{n} \) , which means the summation of real numbers is proportional to the average of all numbers. If we want to extend this idea to probability distributions, the focus should be finding the average of probability distributions. For \(S\) , the average \(\bar{x}\) can be defined as the number which has the smallest summation of distances to all elements inside \(S\) , i.e., \(\bar{x} = \underset{x}{\mathrm {argmin}} \sum \limits ^n_{i=1} |x_i - x|\) . Now, we can extend the same idea to merging nodes in BNNs. The problem now is simplified as: given a sequence of different probability distributions \(P_1, P_2, ..., P_n\) , finding a proper distribution \(P_\theta \) such that \(P_\theta = \underset{\theta }{\mathrm {argmin}} \sum \limits ^n_{i=1} d(P_i, P_\theta )\) , where \(d(P_i, P_\theta )\) is the distance between \(P_i\) and \(P_\theta \) . There are various choices for selecting distance metric for real numbers such as Euclidean distance or Manhattan distance. For distributions, as discussed in Section , we select KL divergence as the measure of distance. Then the task to compute a distribution is to find \(P_\theta \) such that it minimizes the summation of KL divergence from \(\lbrace P_i\rbrace \) s. Notice that KL-divergence is not symmetric, so it indeed is not a distance metric, but it is still a valid solution for computing the similarity, and we select the inclusive direction (\(KL(P_i||P_\theta )\) ). It is more principled because it approximates the full distribution. We take the derivative to obtain the gradient: \(\begin{split}&\nabla \left[ \sum P_\theta \, log\, P_\theta - \sum P_\theta \, log\, P_i \right] \\= & \sum \nabla [P_\theta \, log\, P_\theta ] - \sum \nabla [P_\theta \, log\, P_i ] \\= & \sum \nabla P_\theta (1+log\,P_\theta ) - \sum \nabla P_\theta \, log\, P_i \\= & \sum \nabla P_\theta (1+log\,P_\theta - log\,P_i)\\= & \sum \nabla P_\theta (log\,P_\theta - log\,P_i)\end{split}\) We get rid of the `1' in the last equality because \(\sum \limits _x \nabla P_i(x) = \nabla \sum \limits _x P_i(x) = \nabla [1] = 0\) . By setting it to zero, we can obtain the optimal value of \(\theta \) , and it follows the average probability distribution \(P_\theta \) . With \(P_\theta \) computed, we obtain the merged distributions by \(N \cdot P_\theta \) , where \(N\) is the total number of nodes to be merged. Experimental Evaluation This section presents experimental results to demonstrate the effectiveness of our proposed backdoor attack. First, we describe the experimental setup. Next, we evaluate the performance of all configurations as well as the effectiveness of our algorithm in computing the reverse distribution. Finally, we analyze the overhead of our proposed algorithm. <FIGURE>Experiment Setup The experimental evaluation is performed on a host machine with Intel i7 3.70GHz CPU, 32 GB RAM and RTX 2080 256-bit GPU. We developed code using Python for model training. We used PyTorch as the machine learning library. To enable comprehensive evaluation, we deploy the experiments utilizing three different benchmark datasets: IRIS [20]}, MNIST [21]}, and CIFAR100 [22]}. Features are extracted from images and formatted into PyTorch tensors, making them compatible with any ML models requiring tensor inputs. For each of the dataset, we train a normal DNN and a BNN model with structure as described in Section . The BNN models are prepared to be attacked under the following three backdoor attack methods. BADP: State-of-the-art data poisoning attack proposed in [13]}. BadNet: State-of-the-art model injection attack proposed in [15]}. Proposed: Our proposed backdoor attack algorithm. While for the DNN models, they are attacked only by BADP and BadNet (since our proposed method is specifically designed for BNN), the performance of BADP and BadNet against DNN models are considered as the control group. For each of the configuration, we report both the Baseline Accuracy (the prediction accuracy of the benign model with clean samples) and the Attack Success Rate (ASR) (the prediction accuracy of the backdoored model with modified samples) to evaluate the performance. Attack Performance Analysis Figure REF compares the performance of three different methods on various dataset. In each figure, baseline accuracy is provided for reference. Both BADP and BadNet models achieve 99.5% baseline accuracy after training. The x-axis represents the ratio of noise. Larger x-value represents more modifications to the input samples to induce more changes of the ML models. However, it increases the visibility of injected triggers. In this figure, each column represents the performance results for each dataset (IRIS, MNIST, and CIFAR). In the first row, we show the ASRs of BADP and BadNet against traditional DNN as the control group. As we can see, both BADP and BadNet can reach 100% ASR against DNN with sufficient ratio of noise. Especially for lightweight dataset like IRIS, BadNet converges very quickly since they are designed to produce perturbation values to disturb the output of the model. For lightweight dataset, even small perturbation values can be lethal. For larger datasets, they converges slower but eventually they can reach 100% ASR. In the second row, we compare the ASRs of BADP, BadNet, and our proposed method against BNNs along with the baseline accuracy. When attacking BNN on IRIS dataset, BadNet reaches its bottleneck at 60% ASR, while BADP reaches only 42%. For larger dataset like CIFAR-100, none of them are able to exceed 25% ASR. In contrast, our proposed method outperforms the other two as it is the only method that can achieve 100% ASR against BNNs. As expected, in case of lightweight dataset (IRIS), our approach gives faster convergence speed. In case of large dataset, it takes longer to reach 100% ASR. Note that the necessary ratio of noise for exceeding 90% ASR is still below 0.25 for our proposed attack, which is a good news from the attackers' perspective. Notice in this figure, we also denote each method's stability by plotting lines with confidence intervals (CIs). In terms of stability, our proposed method gives the best stability as we can observe from the thickness of the CIs. Also, the lightweight dataset implies large variance of outputs, which induces worse stability. This is expected due to simple data structures' limited sensitivity to value changes. This combined with BNN's internal randomness brings unstable performance. While for the large dataset, complex features and longer training cost inherently guarantees the overall stability for BNN, as discussed in [25]}. Analysis of Reverse Distribution In this section, we also evaluated the performance of our method by plotting the comparison between benign distribution and the computed reverse distribution. To better visualize the result, we plot both the benign distribution and the negative of the reverse distribution. In this way, a closer similarity between the plots represents better effect of distribution cancellation. <FIGURE>Figure REF depicts one illustrative example probability distribution from our model applied on IRIS dataset, which shows the generative performance of our Gaussian Mixture Model (GMM). Here the black dashed lines are actual benign distribution that we plan to cancel, while the red line represents the negative of GMM composed of three separate distributions (blue, orange, and green lines) with different \(\mu ,\sigma ,w\) values, respectively. As we can see, the GMM approximates the actual benign distribution. In this way, our generative model can successfully cancel out the benign distribution from BNNs. Then by combing the GMM with an extra single-value distribution, we can obtain the malicious distribution which fulfills the targeted attack. <TABLE>The GMM performs slightly different on different dataset, as shown in Table REF . We compare the number of components for satisfactory approximation, the dissimilarity (KL divergence), and the number of iterations for EM steps to reach the convergence. In lightweight dataset like IRIS, only three components are sufficient to craft the mixture model with merely 0.04 KL divergence within eight iterations. In case of CIFAR, the number of components are four times of that in IRIS, and requires seven times more iterations for EM steps. We still have eight times of KL divergence. In general, distribution for complex feature space requires more Gaussian components and more iterations to reach the perfection. Overhead Analysis Table REF compares the average overhead of various attack schemes. We present the training time, average testing time, along with the necessary amount of data for training convergence. As we can see from the table, the BADP approach is the most expensive one in terms of data size. It requires almost double the amount of training data to reach its convergence. This is expected since BADP as a data poisoning attack requires sufficient amount of poisoned data to train the malicious model. As for training time, BadNet is very costly, it needs around one hour to complete the training phase. Our proposed method is economic in both time and memory consumption. First, it bypasses the data poisoning step in BADP so it requires less training data. Also, our proposed algorithm of computing reverse distribution is based on a simple EM-process, which is much faster than the entire backdoor training process in BadNet. <TABLE> Conclusion While machine learning (ML) techniques are widely applied in various domains, ML algorithms are vulnerable towards AI Trojan attacks. There are many existing defense strategies with promising performance against backdoor attacks. Bayesian Neural Network (BNNs) has inherent robustness as its randomness deteriorates the attack success rate (ASR) of existing backdoor attacks. In this paper, we exploit the expectation maximization and KL divergence to propose a novel backdoor attack on BNNs. Specifically, unlike state-of-the-art attacks focusing on data poisoning, we take an orthogonal route to combine the information of normal functionality and targeted label to create reverse distribution by applying expectation maximization. The computed reverse distribution can significantly cancel out the normal functionality (marginal distribution) of the model. In other words, the immunity of BNNs can be bypassed by our proposed backdoor attack. Moreover, by using the KL divergence, we extend the “summation” concept of real numbers to probability distributions so that we can merge edge weights (distributions) like traditional neural networks. Extensive experimental evaluation using three standard benchmarks demonstrated that our approach can achieve 100% ASR, while the state-of-the-art attack schemes can reach below 60% ASR against BNNs.
[17]
[ [ 13934, 13938 ] ]
https://openalex.org/W2163605009
1f8bd216-6bb6-46d8-80ba-9cef8f675014
Data Poisoning: This method involves attackers modifying training data in order to achieve malicious goals [1]}, [2]}, [3]}. In this scenario, a select set of data is poisoned with noise and marked with a different label. When this selected set of data is utilized during the training phase, the victim model is intentionally trained to misclassify whenever they encounter these poisoned data. However, BNNs have natural resistance against data poisoning. As discussed in Section REF , BNNs produce output values with uncertainty, which severely limits the performance of any targeted attack. Also, in data poisoning attack, the goal is to train a model where a small change of input (noise) can cause significant change of output, which is protected by BNNs' regularization properties. Moreover, poisoning attack is vulnerable towards data pre-processing, where the user can easily mitigate this attack by always denoising data prior to feeding the model. As a result, data poisoning attack on BNNs gets extremely inferior performance, which will be demonstrated in Section . <FIGURE>
[2]
[ [ 113, 116 ] ]
https://openalex.org/W2774423163
a94d97e1-07eb-4f80-b18b-11fb257ba663
Divergence Minimization: KL divergence minimization is utilized to achieve network merging in BNNs. Backdoor Attack using Reverse Distribution Figure REF shows an overview of our proposed attack algorithm that follows the two strategies outlined above: distribution cancellation and divergence minimization. We adopt the idea from model injection, but take a completely opposite route as demonstrated in Figure REF . In Figure REF (a), the attacker separately train a badnet based on the attacker-chosen noise and desired perturbation. Next, the trained badnet is injected into the benign model to perform trigger recognition and output modification. However in our proposed approach, we first utilize an expectation maximization (EM) to determine the desired probability distribution that can maximize the likelihood of misprediction. This computed distribution is the desired `reverse distribution'. Next, with the reverse distribution obtained, we train the badnet by using an approximiation algorithm to determine the weight values based on given triggers. Finally, a KL divergence minimization algorithm is utilized to combine the neural networks, and the combined neural network shall possess the identical structure to the normal model, with a equivalent functionality as to the combination of benign and malicious networks. <FIGURE>Normal Training The normal training follows the standard training procedure. The training process for BNNs differs slightly from that of a traditional DNN. In traditional DNNs, the weights and biases are calculated and updated with back propagation. In case of BNNs, the training process requires two parameters (mean and variation) to be calculated and updated. This training process is known as Bayes by Backprop [1]}. In our work, the architecture mimics the design of AlexNet [2]}. It has eight layers with learnable probability distributions. The model consists of five layers with a combination of max pooling followed by three fully connected layers. We use Relu activation in each of these layers except the output layer. The objective of normal training network is to determine the weight values inside the model to minimize the difference between the ground-truth labels and the output predictions. In addition, \(L_2\) regularization and dropout strategies are also applied in our framework to avoid overfitting problem. Expectation Maximization for Reverse Distribution This step aims at computing the reverse distribution that can cancel out the normal functionality of the benign model when the trigger is activated. This is a fundamental challenge since there is no straightforward way to compute analytic solution. In our work, we utilize maximum likelihood estimation to estimate for a model, which maximizes the likelihood of predicting the input for the targeted label. Without any loss of generality, we assume the benign probability distribution is \(Pr(X)\) , and we set the reverse distribution to be a Gaussian Mixture Model (GMM), which is \(Pr(X) = \sum \limits ^K_{k=1} \pi _k \mathcal {N}(x|\mu _k, \sigma _k)\) . \(X\) is a multidimensional variable. Now the goal is to estimate the unknown parameters \(\mu _k, \sigma _k, \pi _k\) , which is to minimize the negative log-likelihood as the loss: \(-log Pr(X|\pi ,\mu ,\Sigma ) = -\Sigma ^n_{i=1} log \left\lbrace \sum \limits ^K_{k=1} \pi _k \mathcal {N}(x|\mu _k, \sigma _k)\right\rbrace \vspace{-14.45377pt}\) The analytical solution is hard to obtain since there is a summation over the components appearing inside the log, thus computing all the parameters is difficult. However, it is possible to obtain an iterative solution. Given the observations \(x_i, i = 1,2,3...,n\) , we consider each \(x_i\) is associated with a latent variable \(z_i = (z_{i1},z_{i2},...,z_{iK})\) . The latent variable parameter \(z_{ik}\) represents the contribution of k-th Gaussian to \(x_i\) . Given the complete data \((x,z) = (x_i,z_i), i = 1,2,3...,n\) , we can estimate the parameters by maximizing the total log-likelihood: \(\begin{split}log Pr(x,z|\pi ,\mu ,\sigma ) = \sum \limits ^N_{i=1} \sum \limits ^K_{k=1} z_{ik}\lbrace log \pi _k + log \mathcal {N}(x_i|\mu _k, \sigma _k)\rbrace \end{split}\vspace{-14.45377pt}\) Here, the \(\pi _k\) and \((\mu _k, \sigma _k)\) have trivial closed-form solutions. If we take the derivative of the log-likelihood with respect to \(\mu _k, \sigma _k, \pi _k\) and set it to zero, we can get equations to be used in iterative steps as shown in Algorithm REF . The EM iteration alternates between an expectation (E) and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step. We use the negative of computed GMM as the reverse distribution. [tb] Iterative Expectation Maximization to Compute Gaussian Mixture Model as Reverse Distribution Input: Data \(x_i\) , latent variables \(z_i\) Initialize: \(\mu _0,\sigma _0, \pi _0, iter\) \(iter++\) E Step: Given parameters, estimating: \(r_{ik} \triangleq E(z_{ik}) = \frac{\pi _k \mathcal {N}(x_i|\mu _k, \sigma _k)}{\sum \limits ^K_{k=1} \pi _k \mathcal {N}(x_i|\mu _k, \sigma _k)}\) M Step: Maximize the expected log-likelihood \(\begin{split}&\mathbb {E} log[Pr(x,z|\pi ,\mu ,\sigma )]\\=& \sum \limits ^N_{i=1} \sum \limits ^K_{k=1} r_{ik} \lbrace log \pi _k + log \mathcal {N}(x_i|\mu _k, \sigma _k)\end{split}\) Updating Step: Parameters are updated by \(\begin{split}&\pi _{k+1} \ \frac{\sigma _i r_{ik}}{N}, \quad \mu _{k+1} = \frac{\sigma _i r_{ik}x_i}{\sigma _i r_{ik}}\\&\sigma _{k+1} = \frac{\sigma _i r_{ik}(x_i - \mu _k)(x_i - \mu _k)^T}{\sigma _i r_{ik}}\end{split}\) Coverage or \(iter\) exceeds \(maxiter\) Backdoor Training After obtaining the reverse distribution, the backdoor training process is similar to the standard training process. One major difference is that there are no class labels. The goal of the backdoor training is to produce the desired probability distribution computed in Section REF . The architecture of the malicious model is relatively simpler than that of the normal training, which mimics the design of Lenet-5 [3]}. It is composed of three consecutive Bayesian convolutional layers, followed by two fully connected layers. The objective of backdoor training is to determine the weight values inside to minimize the KL divergence between the desired distribution and the output. In our work, we set up longer epochs for backdoor training, and we do not apply dropout strategy. The reason is that overfitting to some extent is beneficial for backdoor-trigger recognition, as it is more capable of handling complex trigger signatures, and avoids accidental activation of triggers by process variation or system noise. Merging of Nodes After obtaining the malicious model from Section REF , we need to merge it with the benign model. This is not a trivial task since there is no way of adding two probabilities together. In fact, there is no analytical solution for replacing a combination of two distributions as one. To address this problem, we need to apply approximation algorithm. Inspired by  [4]}, for summation of real number set \(S = \lbrace x_1,x_2,...,x_N\rbrace \) , we have \(\sum \limits ^n_{i=1} x_i = n \cdot \frac{\sum \limits ^n_{i=1} x_i}{n} \) , which means the summation of real numbers is proportional to the average of all numbers. If we want to extend this idea to probability distributions, the focus should be finding the average of probability distributions. For \(S\) , the average \(\bar{x}\) can be defined as the number which has the smallest summation of distances to all elements inside \(S\) , i.e., \(\bar{x} = \underset{x}{\mathrm {argmin}} \sum \limits ^n_{i=1} |x_i - x|\) . Now, we can extend the same idea to merging nodes in BNNs. The problem now is simplified as: given a sequence of different probability distributions \(P_1, P_2, ..., P_n\) , finding a proper distribution \(P_\theta \) such that \(P_\theta = \underset{\theta }{\mathrm {argmin}} \sum \limits ^n_{i=1} d(P_i, P_\theta )\) , where \(d(P_i, P_\theta )\) is the distance between \(P_i\) and \(P_\theta \) . There are various choices for selecting distance metric for real numbers such as Euclidean distance or Manhattan distance. For distributions, as discussed in Section , we select KL divergence as the measure of distance. Then the task to compute a distribution is to find \(P_\theta \) such that it minimizes the summation of KL divergence from \(\lbrace P_i\rbrace \) s. Notice that KL-divergence is not symmetric, so it indeed is not a distance metric, but it is still a valid solution for computing the similarity, and we select the inclusive direction (\(KL(P_i||P_\theta )\) ). It is more principled because it approximates the full distribution. We take the derivative to obtain the gradient: \(\begin{split}&\nabla \left[ \sum P_\theta \, log\, P_\theta - \sum P_\theta \, log\, P_i \right] \\= & \sum \nabla [P_\theta \, log\, P_\theta ] - \sum \nabla [P_\theta \, log\, P_i ] \\= & \sum \nabla P_\theta (1+log\,P_\theta ) - \sum \nabla P_\theta \, log\, P_i \\= & \sum \nabla P_\theta (1+log\,P_\theta - log\,P_i)\\= & \sum \nabla P_\theta (log\,P_\theta - log\,P_i)\end{split}\) We get rid of the `1' in the last equality because \(\sum \limits _x \nabla P_i(x) = \nabla \sum \limits _x P_i(x) = \nabla [1] = 0\) . By setting it to zero, we can obtain the optimal value of \(\theta \) , and it follows the average probability distribution \(P_\theta \) . With \(P_\theta \) computed, we obtain the merged distributions by \(N \cdot P_\theta \) , where \(N\) is the total number of nodes to be merged. Experimental Evaluation This section presents experimental results to demonstrate the effectiveness of our proposed backdoor attack. First, we describe the experimental setup. Next, we evaluate the performance of all configurations as well as the effectiveness of our algorithm in computing the reverse distribution. Finally, we analyze the overhead of our proposed algorithm. <FIGURE>Experiment Setup The experimental evaluation is performed on a host machine with Intel i7 3.70GHz CPU, 32 GB RAM and RTX 2080 256-bit GPU. We developed code using Python for model training. We used PyTorch as the machine learning library. To enable comprehensive evaluation, we deploy the experiments utilizing three different benchmark datasets: IRIS [5]}, MNIST [6]}, and CIFAR100 [7]}. Features are extracted from images and formatted into PyTorch tensors, making them compatible with any ML models requiring tensor inputs. For each of the dataset, we train a normal DNN and a BNN model with structure as described in Section . The BNN models are prepared to be attacked under the following three backdoor attack methods. BADP: State-of-the-art data poisoning attack proposed in [8]}. BadNet: State-of-the-art model injection attack proposed in [9]}. Proposed: Our proposed backdoor attack algorithm. While for the DNN models, they are attacked only by BADP and BadNet (since our proposed method is specifically designed for BNN), the performance of BADP and BadNet against DNN models are considered as the control group. For each of the configuration, we report both the Baseline Accuracy (the prediction accuracy of the benign model with clean samples) and the Attack Success Rate (ASR) (the prediction accuracy of the backdoored model with modified samples) to evaluate the performance. Attack Performance Analysis Figure REF compares the performance of three different methods on various dataset. In each figure, baseline accuracy is provided for reference. Both BADP and BadNet models achieve 99.5% baseline accuracy after training. The x-axis represents the ratio of noise. Larger x-value represents more modifications to the input samples to induce more changes of the ML models. However, it increases the visibility of injected triggers. In this figure, each column represents the performance results for each dataset (IRIS, MNIST, and CIFAR). In the first row, we show the ASRs of BADP and BadNet against traditional DNN as the control group. As we can see, both BADP and BadNet can reach 100% ASR against DNN with sufficient ratio of noise. Especially for lightweight dataset like IRIS, BadNet converges very quickly since they are designed to produce perturbation values to disturb the output of the model. For lightweight dataset, even small perturbation values can be lethal. For larger datasets, they converges slower but eventually they can reach 100% ASR. In the second row, we compare the ASRs of BADP, BadNet, and our proposed method against BNNs along with the baseline accuracy. When attacking BNN on IRIS dataset, BadNet reaches its bottleneck at 60% ASR, while BADP reaches only 42%. For larger dataset like CIFAR-100, none of them are able to exceed 25% ASR. In contrast, our proposed method outperforms the other two as it is the only method that can achieve 100% ASR against BNNs. As expected, in case of lightweight dataset (IRIS), our approach gives faster convergence speed. In case of large dataset, it takes longer to reach 100% ASR. Note that the necessary ratio of noise for exceeding 90% ASR is still below 0.25 for our proposed attack, which is a good news from the attackers' perspective. Notice in this figure, we also denote each method's stability by plotting lines with confidence intervals (CIs). In terms of stability, our proposed method gives the best stability as we can observe from the thickness of the CIs. Also, the lightweight dataset implies large variance of outputs, which induces worse stability. This is expected due to simple data structures' limited sensitivity to value changes. This combined with BNN's internal randomness brings unstable performance. While for the large dataset, complex features and longer training cost inherently guarantees the overall stability for BNN, as discussed in [10]}. Analysis of Reverse Distribution In this section, we also evaluated the performance of our method by plotting the comparison between benign distribution and the computed reverse distribution. To better visualize the result, we plot both the benign distribution and the negative of the reverse distribution. In this way, a closer similarity between the plots represents better effect of distribution cancellation. <FIGURE>Figure REF depicts one illustrative example probability distribution from our model applied on IRIS dataset, which shows the generative performance of our Gaussian Mixture Model (GMM). Here the black dashed lines are actual benign distribution that we plan to cancel, while the red line represents the negative of GMM composed of three separate distributions (blue, orange, and green lines) with different \(\mu ,\sigma ,w\) values, respectively. As we can see, the GMM approximates the actual benign distribution. In this way, our generative model can successfully cancel out the benign distribution from BNNs. Then by combing the GMM with an extra single-value distribution, we can obtain the malicious distribution which fulfills the targeted attack. <TABLE>The GMM performs slightly different on different dataset, as shown in Table REF . We compare the number of components for satisfactory approximation, the dissimilarity (KL divergence), and the number of iterations for EM steps to reach the convergence. In lightweight dataset like IRIS, only three components are sufficient to craft the mixture model with merely 0.04 KL divergence within eight iterations. In case of CIFAR, the number of components are four times of that in IRIS, and requires seven times more iterations for EM steps. We still have eight times of KL divergence. In general, distribution for complex feature space requires more Gaussian components and more iterations to reach the perfection. Overhead Analysis Table REF compares the average overhead of various attack schemes. We present the training time, average testing time, along with the necessary amount of data for training convergence. As we can see from the table, the BADP approach is the most expensive one in terms of data size. It requires almost double the amount of training data to reach its convergence. This is expected since BADP as a data poisoning attack requires sufficient amount of poisoned data to train the malicious model. As for training time, BadNet is very costly, it needs around one hour to complete the training phase. Our proposed method is economic in both time and memory consumption. First, it bypasses the data poisoning step in BADP so it requires less training data. Also, our proposed algorithm of computing reverse distribution is based on a simple EM-process, which is much faster than the entire backdoor training process in BadNet. <TABLE> Conclusion While machine learning (ML) techniques are widely applied in various domains, ML algorithms are vulnerable towards AI Trojan attacks. There are many existing defense strategies with promising performance against backdoor attacks. Bayesian Neural Network (BNNs) has inherent robustness as its randomness deteriorates the attack success rate (ASR) of existing backdoor attacks. In this paper, we exploit the expectation maximization and KL divergence to propose a novel backdoor attack on BNNs. Specifically, unlike state-of-the-art attacks focusing on data poisoning, we take an orthogonal route to combine the information of normal functionality and targeted label to create reverse distribution by applying expectation maximization. The computed reverse distribution can significantly cancel out the normal functionality (marginal distribution) of the model. In other words, the immunity of BNNs can be bypassed by our proposed backdoor attack. Moreover, by using the KL divergence, we extend the “summation” concept of real numbers to probability distributions so that we can merge edge weights (distributions) like traditional neural networks. Extensive experimental evaluation using three standard benchmarks demonstrated that our approach can achieve 100% ASR, while the state-of-the-art attack schemes can reach below 60% ASR against BNNs.
[9]
[ [ 10999, 11002 ] ]
https://openalex.org/W2942091739
e0d669c7-da0a-48a3-9aa7-000d0db6268a
The normal training follows the standard training procedure. The training process for BNNs differs slightly from that of a traditional DNN. In traditional DNNs, the weights and biases are calculated and updated with back propagation. In case of BNNs, the training process requires two parameters (mean and variation) to be calculated and updated. This training process is known as Bayes by Backprop [1]}. In our work, the architecture mimics the design of AlexNet [2]}. It has eight layers with learnable probability distributions. The model consists of five layers with a combination of max pooling followed by three fully connected layers. We use Relu activation in each of these layers except the output layer. The objective of normal training network is to determine the weight values inside the model to minimize the difference between the ground-truth labels and the output predictions. In addition, \(L_2\) regularization and dropout strategies are also applied in our framework to avoid overfitting problem.
[1]
[ [ 399, 402 ] ]
https://openalex.org/W2164411961
b21bef98-2c10-4122-9711-9823b29347b5
After obtaining the malicious model from Section REF , we need to merge it with the benign model. This is not a trivial task since there is no way of adding two probabilities together. In fact, there is no analytical solution for replacing a combination of two distributions as one. To address this problem, we need to apply approximation algorithm. Inspired by  [1]}, for summation of real number set \(S = \lbrace x_1,x_2,...,x_N\rbrace \) , we have \(\sum \limits ^n_{i=1} x_i = n \cdot \frac{\sum \limits ^n_{i=1} x_i}{n} \) , which means the summation of real numbers is proportional to the average of all numbers. If we want to extend this idea to probability distributions, the focus should be finding the average of probability distributions. For \(S\) , the average \(\bar{x}\) can be defined as the number which has the smallest summation of distances to all elements inside \(S\) , i.e., \(\bar{x} = \underset{x}{\mathrm {argmin}} \sum \limits ^n_{i=1} |x_i - x|\) .
[1]
[ [ 363, 366 ] ]
https://openalex.org/W3027921946
e335b64d-dffd-4554-9c54-a6dc96403673
The experimental evaluation is performed on a host machine with Intel i7 3.70GHz CPU, 32 GB RAM and RTX 2080 256-bit GPU. We developed code using Python for model training. We used PyTorch as the machine learning library. To enable comprehensive evaluation, we deploy the experiments utilizing three different benchmark datasets: IRIS [1]}, MNIST [2]}, and CIFAR100 [3]}. Features are extracted from images and formatted into PyTorch tensors, making them compatible with any ML models requiring tensor inputs. For each of the dataset, we train a normal DNN and a BNN model with structure as described in Section . The BNN models are prepared to be attacked under the following three backdoor attack methods.
[1]
[ [ 335, 338 ] ]
https://openalex.org/W2027987815
c5bd65a0-ee19-482b-8d8d-8f45947a81cf
The experimental evaluation is performed on a host machine with Intel i7 3.70GHz CPU, 32 GB RAM and RTX 2080 256-bit GPU. We developed code using Python for model training. We used PyTorch as the machine learning library. To enable comprehensive evaluation, we deploy the experiments utilizing three different benchmark datasets: IRIS [1]}, MNIST [2]}, and CIFAR100 [3]}. Features are extracted from images and formatted into PyTorch tensors, making them compatible with any ML models requiring tensor inputs. For each of the dataset, we train a normal DNN and a BNN model with structure as described in Section . The BNN models are prepared to be attacked under the following three backdoor attack methods.
[2]
[ [ 347, 350 ] ]
https://openalex.org/W2182396527
27e793b9-68b6-47cb-8aa4-ab70a55bfeaf
This viewpoint makes transparent that more general transformations are allowed: the whole of \(\operatorname{SL}_2(\mathbb {C})\) acts on the Weyl algebra \(\mathbb {C} [\partial _z,z]\) by a symplectic reparametrisation of \(\mathbb {C}^2 \cong T^*\mathbb {C}\) , and it turns out that the reduced simply-laced isomonodromy systems are invariant under the full action [1]}. Hence the Poisson structures on the bundles of classical algebras are split in orbits for the \(\operatorname{SL}_2(\mathbb {C})\) -action. Further, the isomorphisms along one orbit become flat when the classical Hamiltonian reduction of both the algebra \(A_0\) and the isomonodromy connection is taken. We thus consider the quantisation of this statement, and ask:
[1]
[ [ 371, 374 ] ]
https://openalex.org/W2963002057
00c92436-66f9-4667-ae43-d0543c561ec3
It should helpful to the reader to work out the particular case of the \(\operatorname{SL}_2(\mathbb {C})\) -action which corresponds to the Harnad duality for rational differential operators [1]}, as mentioned in the Introduction.
[1]
[ [ 192, 195 ] ]
https://openalex.org/W2039761808
87cb2f08-0812-4a0e-bbd1-176bc15e89ca
It is worth mentioning that the scattering in the system of one-dimensional particles is not just a first step on the way to the case of three-dimensional particles. It is interesting by itself, the systems of three one-dimensional particles (neutral or charged) were intensively studied during many years (see, for example, [1]}, [2]}, [3]}, [4]}, [5]}). In recent years there appeared a new interest to such systems since they were realized experimentally (see [6]}, [7]}, [8]}, [9]}).
[8]
[ [ 475, 478 ] ]
https://openalex.org/W4240817298
4a565cfc-a427-4f40-b4ed-c11be4c53b9b
Following [1]} (see also [2]}), the symmetron effective potential is given by \(V_\text{eff}(\phi ) &= V(\phi ) + A(\phi )\,\rho \nonumber \\&= \frac{1}{2}\left(\frac{\rho }{M^2} - \mu ^2\right)\phi ^2 + \frac{\lambda }{4}\,\phi ^4\:,\)
[1]
[ [ 10, 13 ] ]
https://openalex.org/W2034231122
56082237-15f9-475b-a46a-6c1bc9a6accd
Here, \(g_{\rho \pi \pi }^2/(4\pi )\simeq 3\) and \(\mathcal {I}(m_{Z^{\prime }})\) is the phase space integral given in [1]}. We consider them only when they are kinematically allowed. Similarly, for the decay width of \(Z^{\prime }\) , we have [1]} \(\Gamma _Z&=\sum _\ell \Gamma (Z^{\prime }\rightarrow \ell \bar{\ell })+\Gamma (Z^{\prime }\rightarrow XX)+\Gamma (Z^{\prime }\rightarrow \pi ^0\gamma )+\Gamma (Z^{\prime }\rightarrow \pi ^+\pi ^-\pi ^0),\)
[1]
[ [ 124, 127 ], [ 249, 252 ] ]
https://openalex.org/W2017725943
8674647e-757b-415d-b4e9-0ee307b9a704
To navigate to a selected waypoint \(w\) (DriveTo), the robot employs a procedure analogous to prior work on learning-based navigation with topological graphs [1]}, [2]}, planning the shortest path through \(\mathcal {T}\) , and selecting the next waypoint on this path. Once the waypoint \(w\) is selected, the model \(q_{\theta ,\phi }(\lbrace a, d, x\rbrace _t^w | o_t, o_w)\) is used to repeatedly choose the action \(a_t^w\) based on the current image \(o_t\) , until the distance \(d_t^w\) becomes small, indicating that the waypoint is reached and the robot can navigate to the next waypoint (in practice, it's convenient to replan the path at this point, as is standard in MPC). Each time the DriveTo subroutine reaches a node, it also increments its count \(N(w)\) which is used for the novelty bonus \(v(w)\) . The helper function \(\mathrm {close}\) uses the model in Section REF to check if the estimated temporal distance \(d_t^w\) is less than \(\epsilon \) for two observations, and the \(\mathrm {contains}\) operation on a set checks if a given node is close to any node inside the set. These modifications allow A\(^*\) -like operations on the nodes of our graph, which are continuous variables.
[1]
[ [ 160, 163 ] ]
https://openalex.org/W3205239453
2dc255d9-0089-4321-9ffb-910693eee31a
Inputs to the encoder \(p_\text{over}\) are (i) satellite image \(c_S\) and (ii) the triplet of GPS locations \(\lbrace x_w, x_S, x_G \rbrace \) . \(p_\text{over}\) is implemented as a multi-input neural network with a MobileNet encoder [1]} to featurize \(c_S\) , which is then concatenated with the location inputs. This is followed by a series of fully-connected layers [512, 128, 32, 1] down to a single cell to predict the binary classification scores. During pretraining, we minimize \({\mathcal {L}}_\text{NCE}\) with a batch size of 256 and perform gradient updates using the Adam optimizer with learning rate \(\lambda =10^{-4}\) until convergence.
[1]
[ [ 241, 244 ] ]
https://openalex.org/W2612445135
a5e5d921-84c2-454f-894f-84cf207d890c
2) Curve Selection (see [1]}, Chapter 6, (1.5))
[1]
[ [ 24, 27 ] ]
https://openalex.org/W2162052349
3e76fc32-5fde-432f-b0b2-86cd5aa1c466
The oldest attempt to quantify calibration has been the reliability diagram [1]}, [2]} for binary classification. Although it has been useful for the evaluation of early calibration methods, it does not provide point estimates - a single value - required to systematically compare calibration of different models. The first point estimate proposed in [3]}, which exploited a decision theory framework to use a profit maximisation as a proxy for calibration quality, required a specific type of dataset to be usable in practice. Mirroring the procedure used to compute the reliability diagram, the empirical Expected Calibration Error (\(ECE\) ) was designed [1]}, and later has been proven to be an estimator for the natural theoretical notion of calibration error [5]}. Meanwhile, some works have used the negative log-likelihood (NLL) or the Brier score [3]}, which both are weak proxis for the calibration of classifiers [7]}. Using reliability diagrams has become even more difficult in multiclass settings [8]}.
[2]
[ [ 82, 85 ] ]
https://openalex.org/W2098824882
b1852448-21ec-4407-bf36-bb79e2b263d1
Where \(ECE^{c}(M)\) is the class-specific \(ECE\) associated to class \(c\) , \(ECE^{cw}\) the class-wise \(ECE\) [1]}, \(ECE^{conf}(M)\) the confidence \(ECE\) [2]} and \(ECE^{mul}(M)\) the multiclass \(ECE\) .
[1]
[ [ 119, 122 ] ]
https://openalex.org/W2012942264
5b96b96a-7e45-4b6a-9ed7-333058bc44f3
Where \(ECE^{c}(M)\) is the class-specific \(ECE\) associated to class \(c\) , \(ECE^{cw}\) the class-wise \(ECE\) [1]}, \(ECE^{conf}(M)\) the confidence \(ECE\) [2]} and \(ECE^{mul}(M)\) the multiclass \(ECE\) .
[2]
[ [ 168, 171 ] ]
https://openalex.org/W2964212410
654ba540-d4ae-478b-95e6-dd9f58ddea98
We present the assessment of a few empirical properties of the different \(ECE\) estimators. As pointed out in [1]}, the main difficulty with empirical evaluation of calibration methods and calibration metrics is that we don't have access to ground truths in general. This is why we worked on a setup which gives us access to arbitrarily precise estimates of the \(ECE\) considered as a the ground truth, in the class-wise and confidence settings.
[1]
[ [ 112, 115 ] ]
https://openalex.org/W2969338701
4388b3bb-482e-4ba7-945f-c784673936f1
We wish to point out that the optical radiation-pressure cooling of mechanical resonator [1]}, [2]}, [3]}, [4]}, is just a special case of the second type with \(F_{2}^{\prime }(\hat{a}^{\prime \dag },\hat{a}^{\prime })\) \(=g^{\prime }\hat{a}^{{\prime }\dag }\hat{a}^{\prime }\) . A similar linearization [1]}, [4]} of the effective Hamiltonian as given in Eq. (REF ) was also mentioned in the optical radiation-pressure cooling of mechanical resonator. Here we present only the cooling limit (so-called sideband cooling limit) [7]} of the general coupled two-HO model: \(\bar{n}_{b}^{\text{\textrm {f}}}\rightarrow \bar{n}_{b}^{\text{\textrm {lim,sid}}}=\bar{n}_{a}+\frac{\gamma _{a}^{2}}{4\omega _{b}^{2}}\approx \frac{\gamma _{a}^{2}}{4\omega _{b}^{2}}\)
[1]
[ [ 89, 92 ], [ 307, 310 ] ]
https://openalex.org/W2051421285
4817d15c-5335-4af0-a7b5-d87d68f2ecd1
We wish to point out that the optical radiation-pressure cooling of mechanical resonator [1]}, [2]}, [3]}, [4]}, is just a special case of the second type with \(F_{2}^{\prime }(\hat{a}^{\prime \dag },\hat{a}^{\prime })\) \(=g^{\prime }\hat{a}^{{\prime }\dag }\hat{a}^{\prime }\) . A similar linearization [1]}, [4]} of the effective Hamiltonian as given in Eq. (REF ) was also mentioned in the optical radiation-pressure cooling of mechanical resonator. Here we present only the cooling limit (so-called sideband cooling limit) [7]} of the general coupled two-HO model: \(\bar{n}_{b}^{\text{\textrm {f}}}\rightarrow \bar{n}_{b}^{\text{\textrm {lim,sid}}}=\bar{n}_{a}+\frac{\gamma _{a}^{2}}{4\omega _{b}^{2}}\approx \frac{\gamma _{a}^{2}}{4\omega _{b}^{2}}\)
[2]
[ [ 95, 98 ] ]
https://openalex.org/W2077473036
d49d118d-5362-444d-8cb0-a63decf3c998
We wish to point out that the optical radiation-pressure cooling of mechanical resonator [1]}, [2]}, [3]}, [4]}, is just a special case of the second type with \(F_{2}^{\prime }(\hat{a}^{\prime \dag },\hat{a}^{\prime })\) \(=g^{\prime }\hat{a}^{{\prime }\dag }\hat{a}^{\prime }\) . A similar linearization [1]}, [4]} of the effective Hamiltonian as given in Eq. (REF ) was also mentioned in the optical radiation-pressure cooling of mechanical resonator. Here we present only the cooling limit (so-called sideband cooling limit) [7]} of the general coupled two-HO model: \(\bar{n}_{b}^{\text{\textrm {f}}}\rightarrow \bar{n}_{b}^{\text{\textrm {lim,sid}}}=\bar{n}_{a}+\frac{\gamma _{a}^{2}}{4\omega _{b}^{2}}\approx \frac{\gamma _{a}^{2}}{4\omega _{b}^{2}}\)
[3]
[ [ 101, 104 ] ]
https://openalex.org/W2246724019
a25e5ae7-c8b2-4986-ab4b-80ca5b380698
In this section we briefly revisit the composite lognormal-Pareto model where a single threshold value assumed to be applied uniformly to the whole data set as developed by [1]} and [2]} but for payments \(Y\) and \(Z\) insurance data structures. More general version of composite lognormal-Pareto model where the threshold can vary among observations has also been investigated by [3]}. These models can guard against overestimated probabilities of large losses and provide a good fit to the entire range of loss data. [4]} applied the composite models in the well-known Norwegian fire data and considered the data truncation (policy deductible) in parameter estimation and risk management. Here, we review the structure of [2]} models and implement them with both policy deductible and policy limit for payment \(Y\) and payment \(Z\) scenarios.
[2]
[ [ 182, 185 ], [ 727, 730 ] ]
https://openalex.org/W2003043371
284b2a17-ff4c-47c4-a3ce-f00edd8c43f0
The asymptotic performance of the newly designed estimators will be measured via asymptotic relative efficiency (ARE) with respect to MLE and for two parameter case it is defined as [1]}, [2]}: \(ARE(\mathcal {C}, MLE)=\left(\dfrac{\mbox{det}\left(\mathbf {\Sigma }_{\mbox{\tiny MLE}}\right)}{\mbox{det}\left(\mathbf {\Sigma }_{\mbox{\tiny $\mathcal {C}$}}\right)}\right)^{1/2},\)
[1]
[ [ 182, 185 ] ]
https://openalex.org/W2797333853
97bab125-f0c9-4495-b79f-4a959baadbb2
In this section, we compare our method with several previous state-of-the-art (SOTA) baselines. For model-based methods, we choose AMPO [1]}, MBPO [2]}, and PETS [3]}. MBPO is the SOTA model-based method and our method is building upon the architecture of PyTorch-version MBPO [4]}. AMPO is another SOTA model-based method that uses unsupervised model adaptation to reduce the prediction error of the learned dynamics model. PETS is a SOTA model predict control (MPC) method that uses the cross-entropy method (CEM) as the planner, and it also performs well in many continuous control tasks. These three methods use a probabilistic neural network ensemble as the architecture of dynamics model. They all try to fit the state-action visitation distribution of all historical policies to learn a global dynamics model. For model-free methods, we compare with SAC [5]}, which is the policy part of our method and is one of the SOTA model-free methods, and GEM [6]}, which is another recent SOTA method. The implementation details are in Appendix REF . We conduct experiment on six MoJoCo-v2 [7]} environments, the performance curves are shown in Figure REF .
[5]
[ [ 861, 864 ] ]
https://openalex.org/W2781726626
ded9c3fa-80c4-4073-9022-30eb6ac7946b
In this section, we compare our method with several previous state-of-the-art (SOTA) baselines. For model-based methods, we choose AMPO [1]}, MBPO [2]}, and PETS [3]}. MBPO is the SOTA model-based method and our method is building upon the architecture of PyTorch-version MBPO [4]}. AMPO is another SOTA model-based method that uses unsupervised model adaptation to reduce the prediction error of the learned dynamics model. PETS is a SOTA model predict control (MPC) method that uses the cross-entropy method (CEM) as the planner, and it also performs well in many continuous control tasks. These three methods use a probabilistic neural network ensemble as the architecture of dynamics model. They all try to fit the state-action visitation distribution of all historical policies to learn a global dynamics model. For model-free methods, we compare with SAC [5]}, which is the policy part of our method and is one of the SOTA model-free methods, and GEM [6]}, which is another recent SOTA method. The implementation details are in Appendix REF . We conduct experiment on six MoJoCo-v2 [7]} environments, the performance curves are shown in Figure REF .
[7]
[ [ 1088, 1091 ] ]
https://openalex.org/W2158782408
f6b5354d-0db4-4764-8970-2d6e8692647d
We compare with the other three prioritized experience replay methods to indicate the advantage of our policy-adaptation mechanism for model learning. The first one is Prioritized Experience Replay (PER) [1]}, which weighs the samples according to their TD-error. The second method is RECALL [2]}, which chooses the top \(k\) highest value sample. They use this to recall the samples that can induce the high-value samples and train the policy. We implement this by choosing the top \(25\%\) highest \(Q\) value samples to train the model and as model rollout initial states. The third method is Model-augmented Prioritized Experience Replay (MaPER) [3]}, which is an extension of PER using both TD-error and model prediction error to weight the samples for model learning.
[1]
[ [ 205, 208 ] ]
https://openalex.org/W2963477884
9e45a198-c07e-4e1b-b593-440a829ae774
For better model usage, a short-horizon model rollout [1]} is shown to avoid the model error and improve the quality of model samples. Based on this, a bidirectional model rollout scheme [2]} is proposed to avoid the model error further. Furthermore, model disagreement is used to decide when to trust the model and regularize the model samples . To make more effective use of the model by exploiting its differentiability, model-augmented actor-critic [3]} is proposed to estimate the policy gradient more accurately by back-propagating the gradient through model rollout trajectories. Based on this work, directional derivative projection policy optimization [4]} provides a two-model architecture to estimate the model gradient more accurately. Recently, on-policy corrections [5]} is proposed to use the generalization ability of the model to correct the model-generated samples according to the real samples.
[1]
[ [ 54, 57 ] ]
https://openalex.org/W2970277495
845cf22d-a889-480c-b281-33b3f2ab5ee4
Lemma A.1 [1]} Assume the initial state distributions of the real dynamics \(T\) and the learned dynamics model \(\hat{T}\) are the same. For any state \(s^{\prime }\) , assume \(\mathcal {F}_{s^{\prime }}\) is a class of real-valued bounded measurable functions on state-action space, such that \(\hat{T}(s^{\prime }| \cdot , \cdot ): \mathcal {S} \times \mathcal {A} \rightarrow \mathbb {R} \) is in \(\mathcal {F}_{s^{\prime }}\) . Then the gap between two different state visitation distributions \(v^{\pi _1}_T(s^\prime )\) and \(v^{\pi _2}_{\hat{T}}(s^\prime )\) can be bounded as follows: \(|v^{\pi _1}_T(s^\prime )- v^{\pi _2}_{\hat{T}}(s^\prime )| \le \gamma \mathbb {E}_{(s,a) \sim \rho ^{\pi _1}_{T}}{ |T(s^\prime |s,a) - \hat{T}(s^\prime |s,a)|} + \gamma d_{\mathcal {F}_{s^{\prime }}}(\rho ^{\pi _1}_{T}, \rho ^{\pi _2}_{\hat{T}})\)
[1]
[ [ 10, 13 ] ]
https://openalex.org/W3104329975
978a8d4e-3ff1-4c4a-95b6-690fa48f2d8b
We choose \((p_{x} \pm ip_{y})\) -wave pair for spin-triplet component with \(\mathbf {d}(\mathbf {k})=\Delta _{p}(\hat{\mathbf {x}}k_{y}-\hat{\mathbf {y}}k_{x})/\mid {\mathbf {k}}\mid \) [1]} and \(s\) -wave one with \(\psi (\mathbf {k})=\Delta _{s}\) with \(\Delta _{p} \ge 0\) and \(\Delta _{s} \ge 0\) . The superconducting gaps \(\Delta _{1}=\Delta _{p}+\Delta _{s}\) and \(\Delta _{2}=\mid \Delta _p-\Delta _s \mid \) open for the two spin-splitted band, respectively, in the homogeneous state [2]}.
[1]
[ [ 189, 192 ] ]
https://openalex.org/W1642067022
ca6b087a-bf54-43b0-a4ef-a5ffb5532a8e
Let \(\tilde{p}\) be a discrete random probability measure (REF ). Assume \((w_j)_{j\ge 1}\) and \((x_j)_{j\ge 1}\) are independent with \((x_j)_{j\ge 1}\) independent and identically distributed form a non atomic distribution. Then \(\tilde{p}\) is a species sampling model [1]}. The partition induced by a sample from \(\tilde{p}\) depends only on the random frequencies \((w_j)_{j\ge 1}\) and can be studied in terms of a multinomial occupancy problem. The theory is well established and dates back to the seminal paper [2]}. The main tools are a Poissonization argument and regular variation theory. We provide a concise overview taking the set-up from [3]}.
[3]
[ [ 664, 667 ] ]
https://openalex.org/W3098759601
613f4d81-0675-4cbe-a448-9958f8d7c5b3
Here we work under the hypothesis that \(\overrightarrow{\nu }(x)\) is slowly varying at zero, that is \(\lim _{x\rightarrow 0}\overrightarrow{\nu }(\lambda x)/\overrightarrow{\nu }(x)=1\) for all \(\lambda >0\) . According to [1]} (see also [2]}), \(\Phi (1/x)\sim \overrightarrow{\nu }(x)\) as \(x\rightarrow 0\) , so that via (REF ) \(\mathrm {E}(K_n)\sim \overrightarrow{\nu }\big (\footnotesize {\frac{1}{n}})\quad \mbox{as }n\rightarrow \infty ,\)
[1]
[ [ 229, 232 ] ]
https://openalex.org/W4206769493
613b1419-0ac5-4029-a688-c47d2f8e9a35
cf. [1]}. In Theorem REF we derive a two term expansion of \(\mathrm {E}(K_n)\) under the hypothesis that \(\overrightarrow{\nu }(x)\) is a de Haan slowly varying function at zero, that is for a constant \(c\) and a slowly varying function \(\ell (x)\) at zero, called the auxiliary function of \(\overrightarrow{\nu }(x)\) , \(\frac{\overrightarrow{\nu }(\lambda x)-\overrightarrow{\nu }(x)}{\ell (x)}\rightarrow c\log \lambda ,\quad \text{as }x\rightarrow 0.\)
[1]
[ [ 4, 7 ] ]
https://openalex.org/W2088132287
5b5ee0b8-7216-4f27-8afa-33c1cae9e454
Having investigated the dynamics of the impurity, let us now turn to the bath species in order to gain deeper insights into the above-discussed dynamical formation of the MB two-fold fragmented state. We examine the behavior of the two Schmidt states \(|\psi _{1}^{B} (t)\rangle \) and \(|\psi _{2}^{B} (t)\rangle \) of the bath that are populated in the dynamics by utilizing Eq. (REF ). The time-evolution of the corresponding expansion coefficients \(C^{B}_{1,k}(t)\) and \(C^{B}_{2,k}(t)\) for \(g_{BB} = 0.0\) and \(g_{IB} = 0.01\) is provided in Fig. REF . Owing to the fact that the wave function is initially in a product form, i.e., \(|\Psi (0)\rangle = |\phi _{0}^{I}\rangle \otimes |\phi _{0}^{B}\rangle \) , we have \(C^{B}_{1,0}(0) = 1\) and \(C^{B}_{1,k}(0) = 0\) for \(k>0\) . Due to the non-interacting nature of the bath species, the GS wave function of \(\hat{H}_{B}\) is simply equivalent to a mean-field state i.e. \(\phi _{0}^{B} (x_1, \cdots , x_{N_{B}}) = \prod _{i=1}^{N_{B}} \varphi ^{B}_{0} (x_{i})\) , reflecting the fact that initially all the bosons are condensed into the SP state \(|\varphi ^{B}_{0}\rangle \)  [1]} and therefore \(n_{1}^{\rho }(0) = 1\) and \(n_{2}^{\rho }(0) = 0\) . Here, \(n_{i}^{\rho }(t)\) denote the natural populations obtained from a diagonalization of the reduced one-body density matrix of the bath species \(\hat{\rho }_{1}^{B}(t) = \sum _{i=1}^{2} n^{\rho }_{i}(t) | \varphi _{i} (t)\rangle \langle \varphi _{i}(t)|\) [2]}, [3]}, with \(\lbrace | \varphi _{i} (t)\rangle \rbrace \) denoting the natural orbitals. Recall that the two-mode expansion of the field operator \(\hat{\psi }(x)\) in Eq. (REF ) leads to only two natural populations (natural orbitals) in the spectral decomposition. Physically, the natural population \(n^{\rho }_{i}(t)\) denotes the probability for finding a single particle occupying the state \( | \varphi _{i}(t) \rangle \) at time \(t\) , after tracing out all other particles.
[1]
[ [ 1151, 1154 ] ]
https://openalex.org/W2993803817
b09b9071-1552-486e-b31a-ffeac6623347
Since the first Schmidt number lies close to unity for \(g_{BB}>0\) , the corresponding MB wave function can be well approximated as \(|\Psi (t)\rangle = |\psi _{1}^{I} (t) \rangle |\psi _{1}^{B} (t)\rangle \) , i.e. it has a product form between the impurity and the bath Schmidt orbitals. Therefore, in this limit the mixture is fully captured by the SMF description [1]}, [2]} according to which the impurity experiences an effective potential consisting of the DW superimposed to a potential proportional to the bosonic density of the bath, namely \(V_{\text{eff}}^{I}(x) = V_{DW}(x) + g_{IB} ~ N_{B}\rho _{1}^{B}(x)\) . The spatial profile of this effective potential is depicted in Fig. REF (b). Equipped with this knowledge, we note that the quench of the impurity-bath coupling leads to a sudden change of the DW zero-point energy [cf. Fig. REF (b)]. After the quench, the impurity exhibits a breathing-type motion within the effective potential. This response is captured by the second moment of the impurity's position \(\langle x_{I}^2(t) \rangle \) featuring an oscillatory behavior, see, e.g., Fig. REF (c) for the case \(g_{IB} = 0.01\) and \(g_{BB} = 0.05\) .
[1]
[ [ 369, 372 ] ]
https://openalex.org/W2769237948
59d987b6-81e1-46a5-ba1d-14145841d0b8
Monte Carlo study: We conduct a Monte Carlo study to showcase the benefits of our scheme. \(M=50\) datasets have been simulated using the following PK parameters values: \(T^{\textrm {lag}}_{\rm pop} =1\) , \(ka_{\rm pop} =1\) , \(V_{\rm pop}= 8\) , \(k_{\rm pop}=0.1\) , \( \omega _{T^{\textrm {lag}}}=0.4\) , \(\omega _{ka}=0.5\) , \(\omega _{V}=0.2\) , \(\omega _{k}=0.3\) and \(\sigma ^2=0.5\) . We define the mean square distance over the \(M\) replicates as \(E_k(\ell ) = \frac{1}{M}\sum _{m=1}^{M}{\left(_k^{(m)}(\ell ) - ^* \right)^2} \) , and plot it against the epochs (passes over the data) in Figure REF . Note that the MC-step (REF ) is performed using a Metropolis-Hastings procedure since the posterior distribution under the model \(\) noted \(p(z_i | y_i; )\) is intractable, mainly due to the nonlinearity of the model (REF ). The Metropolis-Hastings (MH) algorithm [1]} leverages a proposal distribution \(q(z_{i}, \delta )\) where \(= (z_{\rm pop}, \omega _{z})\) and \( \delta \) is the vector of parameters of the proposal distribution. Generally, and for simplicity, a Gaussian proposal is used. The MH algorithm employed to sample from each individual posterior distribution \(\left(p(z_i | y_i; ), i\in \right)\) is summarized in Algorithm REF . <FIGURE>
[1]
[ [ 891, 894 ] ]
https://openalex.org/W1995713768
c27d930d-6546-4fa2-8cf3-4faebf5c1442
It should be noted, that the problem of minimizing \(F_n\) is also known as "low autocorrelated binary string problem", or the LABS problem. It has been well studied in theoretical physics and chemistry. For example, the LABS problem is correlated with the quantum models of magnetism. Having this in mind, the merit factor problem was attacked by various search algorithms, such as the branch and bound algorithm proposed in [1]}, as well as stochastic search algorithms like tabu search [2]}, memetic algorithm combined with tabu search [3]}, as well as evolutionary and genetic algorithms [4]}[5]}. However, since the search space grows like \(2^n\) , the difficulty of finding long binary sequences with near-optimal \(F_n\) significantly increases. Bernasconi predicted that [6]} " ... stochastic search procedures will not yield merit factors higher than about \(F_n=5\) for long sequences". By long sequences, Bernasconi was referring to binary sequences with lengths greater than 200. Furthermore, in [4]} the problem was described as " ... amongst the most difficult optimization problems".
[1]
[ [ 427, 430 ] ]
https://openalex.org/W3105506198
88b926ee-42be-4908-a347-7c6e8e294a06
For example, we have implemented a lightweight version of the lssOrel algorithm [1]} with the tau table reduced. The pseudo-code of the enhanced implementation is given in Algorithm . The following notations were used:
[1]
[ [ 80, 83 ] ]
https://openalex.org/W2278999310
d099278b-5f5a-4456-9e8b-3846d682b51f
Single-source single-target DA (STDA) refers to the task of adapting a classifier from a single labeled source dataset to a single unlabeled target dataset. In the UDA literature, a plethora of STDA methods have been proposed, which can be broadly classified into three major categories based upon the adaptation strategy. The first category uses first (Maximum Mean Discrepancy  [1]}, [2]}, [3]}, [4]}) or second order (correlation alignment [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}) statistics of the source and target features to align the marginal feature distributions. The second category of STDA methods [14]}, [15]}, [16]}, [17]}, [18]} adopts adversarial training strategy to align the marginal feature distributions of the two domains. Essentially, these methods use a gradient reversal layer [14]} to make the feature extractor network agnostic to domain specific information. The final category of STDA methods [20]}, [21]}, [22]}, [23]} resort to pixel-level adaptation by generating synthetic target-like source images or source-like target images with the help of generative adversarial network (GAN) [24]}. However, practical applications go beyond the single-source and single-target setting and often involve multiple source [25]}, [26]}, [27]} or target domains.
[3]
[ [ 393, 396 ] ]
https://openalex.org/W2964278684
54acd25c-8945-4186-89be-abd51072e8bd
Single-source single-target DA (STDA) refers to the task of adapting a classifier from a single labeled source dataset to a single unlabeled target dataset. In the UDA literature, a plethora of STDA methods have been proposed, which can be broadly classified into three major categories based upon the adaptation strategy. The first category uses first (Maximum Mean Discrepancy  [1]}, [2]}, [3]}, [4]}) or second order (correlation alignment [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}) statistics of the source and target features to align the marginal feature distributions. The second category of STDA methods [14]}, [15]}, [16]}, [17]}, [18]} adopts adversarial training strategy to align the marginal feature distributions of the two domains. Essentially, these methods use a gradient reversal layer [14]} to make the feature extractor network agnostic to domain specific information. The final category of STDA methods [20]}, [21]}, [22]}, [23]} resort to pixel-level adaptation by generating synthetic target-like source images or source-like target images with the help of generative adversarial network (GAN) [24]}. However, practical applications go beyond the single-source and single-target setting and often involve multiple source [25]}, [26]}, [27]} or target domains.
[14]
[ [ 628, 632 ], [ 820, 824 ] ]
https://openalex.org/W1731081199
d3bf2578-7dcd-4e75-862c-3f362c98d8c2
Multi-target DA aims to transfer knowledge from a single labeled source dataset to multiple unlabeled target datasets. While the research in STDA is quite mature, most STDA methods can not be trivially extended to a multi-target setting. So far only a handful of methods [1]}, [2]}, [3]}, [4]}, [5]}, [6]} for MTDA can be found in the literature. AMEAN [1]} performs clustering on the blended target domain samples to obtain sub-targets and then learns domain-invariant features from the source and the obtained sub-targets using a STDA method [8]}. The approaches introduced in [2]}, [3]}, [5]} are derived from STDA and do not exploit any peculiarity of the MTDA setting. Conversely, our CGCT and D-CGCT are tailor-made for the multi-target setting as we propose to use feature aggregation of similar samples across multiple domains.
[1]
[ [ 271, 274 ], [ 353, 356 ] ]
https://openalex.org/W2963858297
ed848967-0eac-4dec-97b7-d6c143b259dc
Graph Neural Networks (GNN) are neural network models applied on graph-structured data that can capture the relationships between the objects (nodes) in a graph via message passing through the edges [1]}, [2]}. Relevant to our work are GNN-derived Graph Convolutional Networks (GCN) [3]} that have recently been applied for addressing DA [4]}, [5]}, [6]}. For instance, Luo et. al. [5]} propose PGL for open-set DA to capture the relationship between the overlapping classes in the source and the target. Notably, Yang et. al. [6]} introduce heterogeneous Graph Attention Network (HGAN) for MTDA to learn the relationship of similar samples among multiple domains and then utilize the graph-based pseudo-labeled target samples to align their centroids with that of the source. Unlike  [5]}, [6]}, we incorporate the idea of co-teaching [11]} in a GCN framework for combating noisy pseudo-labels.
[6]
[ [ 350, 353 ], [ 527, 530 ], [ 791, 794 ] ]
https://openalex.org/W3088032605
86541ded-b1ed-42d7-a7ee-312d617e263c
\(\widetilde{y}_{s,i}\) is the one-hot label for a source label \(y_{s,i}\) . The first term, \(\ell _{ce}\) , in Eq. REF is the cross-entropy loss computed on the source domain samples and minimized w.r.t. \(\theta , \phi \) . The second term, \(\ell _{adv}\) , in Eq. REF is the adversarial loss that is maximized w.r.t \(\psi \) but minimized w.r.t \(\theta \) . \(\lambda _{adv}\) is the weighing factor for \(\ell _{adv}\) . To capture the multi-modal nature of the distributions, CDAN [1]} is proposed where \(D\) can be additionally conditioned on the classifier predictions \(\mathbf {g}\) . In CDAN [1]}, the \(D\) takes as input \(\mathbf {h}=(\mathbf {f}, \mathbf {g})\) , the joint variable of \(\mathbf {f}\) and \(\mathbf {g}\) , instead of just \(\mathbf {f}\) . In this work we use CDAN for aligning the feature distributions.
[1]
[ [ 496, 499 ], [ 614, 617 ] ]
https://openalex.org/W2795155917
b73b01a8-a594-4fb5-bb07-6ffea86dd724
Graph Convolutional Network. For the GCN [1]} classifier we construct an undirected and fully-connected graph \(\Gamma =(\mathcal {V}, \mathcal {E}, \mathcal {A})\) from all samples in mini-batch. In details, given a mini-batch of images, we represent each image \(\mathbf {x}_i\) as a node \(\mathbf {v}_i \in \mathcal {V}\) in the \(\Gamma \) . \(e_{i, j} \in \mathcal {E}\) indicates an edge between nodes \(\mathbf {v}_{i}\) and \(\mathbf {v}_{j}\) , and \(a_{i,j}\) is the semantic similarity score for nodes \((\mathbf {v}_{i}\) , \(\mathbf {v}_{j})\) forming an affinity matrix \(\mathcal {A}\) .
[1]
[ [ 41, 44 ] ]
https://openalex.org/W2519887557
5aa9b4e3-33a2-4585-b7b3-128a9ba078ad
Implementation details. To be fairly comparable with the state-of-the-art methods, we adopt the backbone feature extractor networks used in [1]}, [2]}, [3]} for the corresponding datasets. We train the networks by using a Stochastic Gradient Descent (SGD) optimizer having an initial learning rate of 1e-3 and decay exponentially. More details about the network architecture and experimental set-up can be found in the Supp. Mat.
[3]
[ [ 152, 155 ] ]
https://openalex.org/W2940579548
3bbe0707-ce67-4fd8-bceb-38082529be7b
Here we highlight the keys differences between the CGCT and PGL [1]} as well as the dual classifier-based methods [2]}, [3]}. The PGL [1]} exploits the graph learning framework in an episodic fashion to obtain pseudo-labels for the unlabeled target samples, which are then used to bootstrap the model by training on the pseudo-labeled target data. While our proposed method is similar in spirit to the episodic training in [1]}, we do not solely rely on the GCN to obtain the pseudo-labels. We conjecture that due to the fully-connected nature of the graph and lack of target labels, the GCN will be prone to accumulate features of dissimilar neighbours, thereby, resulting in the erroneous label propagation. To address this peculiarity, we propose to resort to the co-teaching paradigm, where the \(G_{mlp}\) is exploited to train the \(f_{edge}\) network. As the two classifiers will capture different aspects of training [2]}, it will prevent the \(f_{edge}\) to be trained with the same erroneous pseudo-labels as the \(f_{node}\) . We validate this conjecture empirically, where a network with a single GCN classifier with pseudo-labels performs sub-optimally compared to CGCT (see Tab. 5 row 7 of the main paper). Finally, the dual classifier-based methods maintain two classifiers to identify and filter either harder target samples [3]} or noisy samples [2]}. Contrarily, we maintain \(G_{mlp}\) and \(G_{gcn}\) to provide feedback to each other by exploiting the key observation that each classifier learns different patterns during training. Furthermore, given the intrinsic design of the \(G_{gcn}\) , we also do away with an extra adhoc loss of keeping the weights of two networks different.
[1]
[ [ 64, 67 ], [ 134, 137 ], [ 423, 426 ] ]
https://openalex.org/W3035483366
4b3e2955-ca97-4712-b530-23ab642f726b
Digits-five [1]} is composed of five domains that are drawn from the: i) grayscale handwritten digits MNIST [2]} (mt); ii) a coloured version of mt, called as MNIST-M [3]} (mm); iii) USPS [4]} (up), which is a lower resolution, 16\(\times \) 16, of the handwritten digits mt; iv) a real-world dataset of digits called SVHN [5]} (sv); and v) a synthetically generated dataset Synthetic Digits [3]} (sy). Following the protocol of [7]}, we sub-sample 25,000 and 9,000 samples from the training and test sets of mt, mm, sv and sy and use as train and test sets, respectively. For the up domain we use all the 7,348 training and 1,860 and test samples, for our experiments. All the images are re-scaled to a 28\(\times \) 28 resolution.
[1]
[ [ 12, 15 ] ]
https://openalex.org/W2963506806
f8e17068-bbf3-4341-8fbd-bfb9f0264039
Office-Home [1]} is a relatively newer DA benchmark that is larger than Office31 and is composed of four different visual domains: Art, Clipart, Product and Real. It consists of 65 object categories and has 15,500 images in total.
[1]
[ [ 12, 15 ] ]
https://openalex.org/W2627183927
09c0da86-cfcc-46f2-aba5-d20d81ad031e
DomainNet [1]} is the most challenging and very large scale DA benchmark, which has six different domains: Clipart (C), Infograph (I), Painting (P), Quickdraw (Q), Real (R) and Sketch (S). It has around 0.6 million images, including both train and test images, and has 345 different object categories. We use the official training and testing splits, as mentioned in [2]}, for our experiments. <TABLE><TABLE><TABLE>
[1]
[ [ 10, 13 ] ]
https://openalex.org/W2981720610
5332f8ce-3881-4ebb-bd90-67a02325ed56
Remark 5.3 In case \(m>2\) , [1]} still ensures that the nodal line intersects the boundary \(\partial \mathcal {R}_N\) transversally at \(2m\) different points
[1]
[ [ 30, 33 ] ]
https://openalex.org/W2085417731
96c8c50f-7213-4483-9413-816e0171f4b3
Designing algorithms for speech enhancement (SE) is a long-standing research problem in which the current state-of-the-art methods use deep neural networks (DNNs) [1]}, [2]}, [3]}, [4]}, [5]}. DNN-based noise suppression algorithms typically utilize a training set prepared by artificially mixing arbitrary noise sounds with clean speech signals from many different speakers. As a result, fully-trained SE systems attempt to enhance any speech within a given input mixture. These models can generalize to the unseen test speakers if the model's computational capacity is large enough to encompass the variations found in thousands of speakers and noise types. Therefore, generalist models come at the cost of increased test-time complexity.
[2]
[ [ 169, 172 ] ]
https://openalex.org/W2963341071
716c23ef-4fbc-444f-b73c-c55add203028
Table REF describes all the datasets used in our experiments. The subscripts `tr', `vl', and `te' denote training, validation, and test subsets, respectively. We chose to work with two different subsets of speakers \(\mathbb {S}^{(1:20)}\) and \(\mathbb {S}^{(21:26)}\) for better comparison. \(\mathbb {S}^{(1:20)}\) contains 20 speakers from LibriSpeech's [1]} train-clean-100 subset to match the training setup in [2]}. As for the second set \(\mathbb {S}^{(21:26)}\) , we are based on the high-quality audio samples generated by AudioLM available online. Since the identity of those speakers is not fully known, we work with only 6 identifiable speakers from test-clean, \(\mathbb {S}^{(21:26)}\) .
[1]
[ [ 362, 365 ] ]
https://openalex.org/W1494198834
2f269f30-1207-478e-8acc-2f0922f88ab3
For noise subsets we use sound-bible partition of MUSAN [1]} only for test-time mixtures \(\mathbb {N}_{\text{te}}\) , 60 noise files from free-sound folder are set aside for validation mixtures \(\mathbb {N}_{\text{vl}}\) and the rest of the signals from free-sound are used for training time noisy mixtures \(\mathbb {N}_{\text{tr}}\) . Mixturre SNR is chosen from \([-5, 5]\) dB at random.
[1]
[ [ 56, 59 ] ]
https://openalex.org/W2219249508
86ff85d1-a525-4500-a072-46639dd67605
Mass Transference Principle (Beresnevich – Velani, [1]}) Let \(\lbrace B_j\rbrace _{j\in {N}}\) be a sequence of balls in \({R}^n\) with radii \(r(B_j)\rightarrow 0\) as \(j\rightarrow \infty \) . Let \(s>0\) and let \(\Omega \) be a ball in \({R}^n\) . Suppose that, for any ball \(B\) in \(\Omega \) , \( {\cal H}^n\big (B\cap \limsup _{j\rightarrow \infty }B^s_j{}\,\big )={\cal H}^n(B) \ .\)
[1]
[ [ 51, 54 ] ]
https://openalex.org/W2130035452
eee277f3-5964-4519-8e92-cd209bfd8a63
As hinted at earlier, for our current purposes, we are particularly interested in the mass transference principle from “rectangles to rectangles” established in [1]}. To aid readability we will not state the result of [1]} in full generality but rather we will present here a simplified statement which follows as a corollary of the mass transference principle given by [1]} and which is more directly applicable to the problem at hand.
[1]
[ [ 161, 164 ], [ 218, 221 ], [ 370, 373 ] ]
https://openalex.org/W3158600746
6a945a10-a4af-4399-be82-2885d1c84127
In this section, we will present our proof of Theorem REF . We follow essentially the same strategy as laid out by Beresnevich, Levesley, and Ward in their proof of [1]}. As already alluded to, the main difference is that we use the mass transference principle from “rectangles to rectangles” proved in [2]}, whereas Beresnevich, Levesley and Ward used the earlier mass transference principle from “balls to rectangles” established in [3]}. Our proof is split into three main parts. In Section REF we prove a Dirichlet-type theorem (Theorem REF ) for weighted approximation on manifolds. In Section REF we use Theorem REF to construct an appropriate full measure set and make some other preliminary preparations which eventually enable us to apply the mass tranference principle for rectangles (Theorem REF ) to complete the proof in Section REF .
[3]
[ [ 435, 438 ] ]
https://openalex.org/W2070727177
0bdcb9c5-d289-424f-911b-83e60b0f60d0
be the orthogonal projection of \(W^{\prime }\) onto \({R}^d\) and note that \(\pi _d(W^{\prime }) = W\) . We recall that Hausdorff dimension is preserved under bi-Lipschitz mappings (see, for example, [1]}). Since \(\pi _d\) , being an orthogonal projection, is such a mapping, we have \(\operatorname{dim_H}{W} = \operatorname{dim_H}{\pi _d(W^{\prime })} = \operatorname{dim_H}{W^{\prime }}\)
[1]
[ [ 205, 208 ] ]
https://openalex.org/W2032121576
f69b80ef-67d8-4a32-a97c-026d03aee447
Important subclasses of circuits are Clifford circuits, sometimes called stabiliser circuits, which are obtained from compositions of only \(\textrm {CNOT}\) , \(H\) , and \(S\) gates. They are efficiently classically simulable, thanks to the Gottesman-Knill theorem [1]}. A unitary is local Clifford if it arises from a single-qubit Clifford circuit, i.e. a composition of \(H\) and \(S\) gates. The addition of \(T\) gates yields Clifford+T circuits, which are capable of approximating any \(n\) -qubit unitary to arbitrary precision, whereas the inclusion of \(Z_\alpha \) gates for all \(\alpha \) enables one to construct any unitary exactly .
[1]
[ [ 268, 271 ] ]
https://openalex.org/W2052146120
b1e1027d-d6f9-4ed5-8428-6bcc64c79bc6
Using a strategy based on [1]}, we can apply the local complementation rule (REF ) to the LHS of (REF ) to reduce all of the local Cliffords on the inputs to the set \(\lbrace S^n, H, ZH\rbrace \) and the outputs to the set \(\lbrace S^n, H, HZ \rbrace \) (cf. Appendix REF ). Hence, we can further refine the decomposition above into 8 layers: \(\textrm {H + S + CZ + CNOT + H + CZ + S + H}\)
[1]
[ [ 26, 29 ] ]
https://openalex.org/W3103730115
3157db5a-4206-42c1-ad3d-ede62d7cc14d
For many tasks, including few-shot and transfer learning, the performances of existing approaches, such as segmentation networks, deteriorate as the number of object categories increase, as demonstrated by prior works [1]}, [2]}, and also by our experiments. The root cause lies in that, existing approaches are devised to recognize the category-wise features and segment the corresponding objects. Recently, boundary-aware features have been introduced to enhance the segmentation results, yet their core frameworks still focus on classifying category-wise features and segmenting the corresponding objects.
[1]
[ [ 218, 221 ] ]
https://openalex.org/W2963341924
d353887d-81a8-45b5-9aae-0fa103130af6
where, the \( \mathbb {P}^o_a\) , \(\mathbb {P}^o_s\) , \(\mathbb {P}^o_e\) are the segmented outer boundary distribution, pseudo outer boundary distribution, and real outer boundary distribution, respectively. The \(\mathbb {P}_{\textbf {\em I}^o}\) is sampled uniformly along straight lines between pairs of points sampled from the distribution \(\mathbb {P}^o_e\) and the segmentation network distribution \( \mathbb {P}^o_a\) . The \(\textbf {\em I}^o=\varepsilon \textbf {\em I}^o_e+(1-\varepsilon )\textbf {\em I}^o_a\) , where the \(\varepsilon \) is a random number between 0 and 1. The gradient penalty term is firstly proposed in WGAN-GP [1]}. The \(\lambda \) is the gradient penalty coefficient.
[1]
[ [ 652, 655 ] ]
https://openalex.org/W2962879692
16b17ede-3866-4bae-83fd-5d741f1e5677
Network architecture. In the paper, the segmentation network we adopted is the DeeplabV3+ (backbone: resnet50 ) [1]}. Some popular network architectures (Unet [2]}, FPN [3]}, Linknet [4]}, PSPNet [5]}, PAN [6]}) are also tested.
[1]
[ [ 112, 115 ] ]
https://openalex.org/W2630837129
71bca050-0d14-4da3-a340-e3ba7d5fb71d
In this section, the Ref-Net is compared with the SOTA methods, including unsupervised methods (CAC [1]}, ReDO [2]}), few-shot methods (SG-One [3]}, PANet [4]}, SPNet [5]}, CANet [6]}), weakly-/semi-supervised methods (USSS [7]}, ALSSS [8]}) and fully supervised methods (boundary-aware methods:{ Gated-SCNN [9]}, BFP [10]}}, Unet [11]}, FPN [12]}, LinkNet [13]}, PSPNet [14]}, PAN [15]} and DeeplabV3+ [16]}) on six datasets. For the semi-supervised methods, ten labeled samples are provided. Except for the panoramic, the target dataset and open-source dataset have no overlapped object category. For a fair comparison, the categories of each multiple category dataset (SBD and THUHR) are split into two non-overlapping parts. The fully supervised methods are trained with both two parts. The transfer learning based methods are initially trained on the half-category samples and then trained with specified labeled samples of the rest half-category. Table REF shows the quantitative results, where we can see that most scores of \(\textbf {\em R}(10)\) achieve the state-of-the-art results on par with existing non-fully supervised methods. Even with more labeled samples, \(\textbf {Trans.}(100)\) only achieves higher scores than \(\textbf {\em R}(10)\) on THUR and Bird dataset. Moreover, with only 10 labeled samples, the Ref-Net can achieve better results than some fully supervised methods and close results on par with the best fully supervised method. Meanwhile, with fully supervised samples, the Ref-Net achieves higher scores on the complex dataset (Cityscapes and SBD), which demonstrates the advantage of Ref-Net for handling datasets with more categories. Note that the resolution of the Cityscapes dataset we adopted is \(128*128\) . The above two causes lead to the relatively low-scores of all the methods. However, it still validates the superior performance and wide application of the Ref-Net.
[2]
[ [ 111, 114 ] ]
https://openalex.org/W2970653639
06c041f7-cab1-4979-97d8-85ff297179b7
We compare results from our method and other six state-of-arts methods, containing both deep-learning-based methods(DDFF [1]}, EPINet [2]}) and non-deep learning method marked with * (PADMM\(^*\) [3]}, VDFF\(^*\) [4]}, LFACC\(^*\) [5]}, LF\(_-\)OCC\(^*\) [6]}). For fair comparisons, we use the parameter settings provided by authors and adjust some of the parameters to fit different datasets as needed. Note that because the LFSD dataset does not contain multi-view images, results of some methods are not available. <TABLE><FIGURE><FIGURE>
[1]
[ [ 121, 124 ] ]
https://openalex.org/W2606702283
241c529e-4281-42ed-9c52-18ecec7b89aa
Consider the following Hamiltonian for a two-qubit system and its environment as used in[1]}: \(H=H_s+H_e+H_i\)
[1]
[ [ 88, 91 ] ]
https://openalex.org/W2007156591
fb288103-c631-404d-aac5-f19d33c56e3c
is log-concave on \(\Re ^m\) (see Prekopa ([1]}) and Pecaric et. al. ([2]})).
[2]
[ [ 71, 74 ] ]
https://openalex.org/W656501022
15e1cd41-6d95-40cd-947d-a5d2443ad648
Next, given the question utterance \(q\) containing \(n\) words \(\lbrace w_1,\dots ,w_n\rbrace \) and the logical form \(l\) containing \(m\) actions \(\lbrace a_1,\dots ,a_m\rbrace \) , we tokenize the contexts and use the pre-trained model GloVe [1]} to embed the words into a vector representation space of dimension \(d\) We employ the same dimension \(d\) for all the representations, unless it is explicitly mentioned.. Our word embedding model provides us with the sequences \(x^{(q)} = \lbrace x^{(q)}_1,\dots ,x^{(q)}_n\rbrace \) , \(x^{(lf)} = \lbrace x^{(lf)}_1,\dots ,x^{(lf)}_m\rbrace \) where \(x^{(q)}_i\) , \(x^{(lf)}_i\) are given by, \(\begin{split}&x^{(q)}_i = GloVe(w_i), \\&x^{(lf)}_i = GloVe(a_i),\end{split}\)
[1]
[ [ 254, 257 ] ]
https://openalex.org/W2250539671
fda9ccde-9590-4ba0-aa73-d0f5ed7641f4
VQuAnDa [1]} is the first QA dataset, which provides the verbalization of the answer in natural language. It contains 5000 “complex" questions with their SPARQL queries and answers verbalization. The dataset consists of 5042 entities and 615 relations. ParaQA [2]} is a QA dataset with multiple paraphrased responses. The dataset was created using a semi-automated framework for generating diverse paraphrasing of the answers using techniques such as back-translation. It contains 5000 “complex" question-answer pairs with a minimum of two and a maximum of eight unique paraphrased responses for each question. VANiLLa [3]} is a QA dataset that offers answers in natural language sentences. The answer sentences in this dataset are syntactically and semantically closer to the question than the triple fact. The dataset consists of over \(100k\) “simple" questions.
[2]
[ [ 262, 265 ] ]
https://openalex.org/W3164228909
e80fb40f-9c97-4b5f-9134-00d7fc2ec6f5
Model Configuration For simplicity, to represent the logical forms, we employ the same grammar as in [1]}. Our approach can be used with any other grammar or even directly with SPARQL queries. However, we believe it is better to employ semantic grammar from a state-of-the-art QA model. To properly train the similarity threshold module, we had to introduce negative logical forms for each question. We did that by corrupting the gold logical forms, either by replacing a random action or finding another “similar" logical form from the dataset based on the Levenshtein distance. For all the modules in our framework, we employ an embedding dimension of 300. A transformer encoder and decoder having two layers and six heads for the multi-head attention model is used. We apply dropout [2]} with a probability \(0.1\) . For the optimization, we use the Noam optimizer proposed by [3]}, where authors use an Adam optimizer [4]} with several warmup steps for the learning rate.
[2]
[ [ 786, 789 ] ]
https://openalex.org/W2095705004
a9b5d6da-dca3-4a43-8973-650007ed0b6f
Evaluation Metrics We use the same metrics as employed by the authors of the three existing datasets [1]}, [2]}, [3]} on the previously mentioned baselines. The BLEU score, as defined by [4]}, analyzes the co-occurrences of n-grams in the reference and the proposed responses. It computes the n-gram precision for the whole dataset, which is then multiplied by a brevity penalty to penalize short translations. We report results for BLEU-4. The METEOR score introduced by [5]} is based on the harmonic mean of uni-gram precision and recall, with recall weighted higher than precision. <TABLE><TABLE>
[1]
[ [ 101, 104 ] ]
https://openalex.org/W3030844777
633909fe-41c7-4296-841f-8e21e6ea7378
The deep learning-based object detection models are divided into two groups: one [1]}, [2]}, [3]}, [4]}, [5]} or two [6]}, [7]}, [8]}, [9]}, [10]} stage detectors. Two-stage detectors tend to achieve better accuracy, while one-stage detectors are simpler and faster, hence more suitable to online tasks. <FIGURE>
[2]
[ [ 87, 90 ] ]
https://openalex.org/W2963037989
90d51741-4206-48f1-81a1-8c7453a45b43
Network compression [1]} is a common technique to reduce the number of operations, model size, energy consumption, and over-training of deep neural networks. As neural network synapses and neurons can be redundant, compression techniques attempt to reduce the total number of them, effectively reducing multipliers. Several approaches have been successfully deployed without much loss in accuracy, including parameter pruning [2]}, [3]}, [4]} (selective removal of parameters based on a particular ranking and regularization), low-rank factorisation [5]}, [6]}, [7]} (using matrix decomposition to estimate informative parameters), compact network architectures [8]}, [9]}, [10]}, [11]}, and knowledge distillation [12]} (training a compact network with distilled knowledge of a large network).
[4]
[ [ 438, 441 ] ]
https://openalex.org/W2963828549
1606af07-485d-4cb9-bb75-40ea78aba4ab
Network compression [1]} is a common technique to reduce the number of operations, model size, energy consumption, and over-training of deep neural networks. As neural network synapses and neurons can be redundant, compression techniques attempt to reduce the total number of them, effectively reducing multipliers. Several approaches have been successfully deployed without much loss in accuracy, including parameter pruning [2]}, [3]}, [4]} (selective removal of parameters based on a particular ranking and regularization), low-rank factorisation [5]}, [6]}, [7]} (using matrix decomposition to estimate informative parameters), compact network architectures [8]}, [9]}, [10]}, [11]}, and knowledge distillation [12]} (training a compact network with distilled knowledge of a large network).
[7]
[ [ 562, 565 ] ]
https://openalex.org/W1996901117