\section{Introduction}
Machine Learning (ML) applications have recently seen widespread adoption in many critical missions as a way to deal with large-scale and noisy datasets efficiently, where human expertise cannot be applied for practical reasons. Although ML-based approaches have achieved impressive results in many data processing tasks, including classification and object recognition, they have been shown to be vulnerable to small adversarial perturbations, and thus tend to misclassify, or fail to recognize, minimally perturbed inputs. Figure~\ref{fig:adversarial-input} illustrates how an adversarial sample can be generated by adding a small perturbation, and as a result gets misclassified by a trained Neural Network (NN).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{{adversarial-input}.png}
\caption{By adding an unnoticeable perturbation to an image of ``panda'', an adversarial sample is created and misclassified as ``gibbon'' by the trained network. (Image credit:~\cite{goodfellow2015})\label{fig:adversarial-input}}
\end{figure}
Adversarial perturbation can be achieved through either \emph{white-box} or \emph{black-box} attacks. In the threat model of \emph{white-box} attacks, an attacker is assumed to have full knowledge of the target NN model, including the model architecture and all relevant hyperparameters. For \emph{black-box} attacks, an attacker has no access to the NN model and associated parameters; thus, the attacker relies on generating adversarial samples using an NN model on hand (known as the \emph{attacker model}), and then uses these adversarial samples on the target NN model (known as the \emph{victim model}). White-box attacks are considered difficult to launch in real-world scenarios, as it is often not possible for an attacker to access full information about the victim model. Thus, in this paper, we focus on \emph{black-box} attacks, which pose practical threats for many ML applications, and evaluate the strategies of generating adversarial samples (which can be used for launching black-box attacks) and their transferability to victim models.
{\bf\textit{Transferability}} is the ability of an adversarial sample that is generated by a machine learning attack on a particular machine learning model (i.e., the attacker model) to be effective against a different, and potentially unknown, machine learning model (i.e., the victim model). The attacker model refers to the model used in generating the adversarial samples (i.e., malicious inputs that are modified to yield erroneous output while appearing unmodified to a human or an agent), whereas the victim model refers to the NN model to which the adversarial samples will be transferred. There is an extensive literature on the transferability of adversarial samples and the machine learning attacks that generate them; however, these studies often analyze transferability from the perspective of a specific network model~\citep{szegedy2014, goodfellow2015, papernot2016, demontis2019}. That is, they have tried to explain why transferability occurs based on the NN model properties (of a given specific target model). Hence, we say that most research has taken a \emph{model-centric} approach. In contrast, we present an {\bf \textit{attack-centric}} approach in this paper. In the \textit{attack-centric} approach, we provide insights on why adversarial samples transfer by analyzing the adversarial samples generated using different machine learning attacks. In particular, we investigate whether machine learning attacks and the input set have any inherent features that cause or increase the likelihood of adversarial samples transferring effectively to victim models.
In the following, we motivate the study of transferability of adversarial samples and exemplify ML-based applications in which they may pose significant security and reliability threats.
\subsection{Motivation for Research on Transferability of Adversarial Samples}
Machine learning has become a driving force for many data-intensive innovative technologies in different domains, including (but not limited to) health care, automotive, finance, security, and predictive analytics, thanks to the widespread availability of data sources and the computational power to process them in a reasonable time. However, machine learning systems may have security weaknesses that can be detrimental (and even life-threatening) for many application use cases. To motivate the importance of transferability of adversarial samples, and to demonstrate the feasibility and possible consequences of machine learning attacks, here we highlight some practical security threats which exploit the transferability of adversarial samples.
\cite{thys2019} generated adversarial samples that were able to successfully hide a person from a person-detector camera which relies on a machine learning model. They showed that this kind of attack can maliciously circumvent surveillance systems: intruders can sneak around undetected by holding an adversarial sample/patch, in the form of a cardboard sign, in front of their body, aimed towards the surveillance camera.
Another sector that heavily relies on ML approaches, due to the high volume of data being processed, is health care. A particular example of exploiting adversarial samples in this domain is as follows. Dermatologists usually operate under a ``fee-for-service'' revenue model in which physicians get paid for the procedures they perform for a patient. This has caused unethical dermatologists to apply unnecessary procedures to increase their revenue. To prevent fraud of this nature, insurance companies often rely on machine learning models that analyze patient data (e.g., dermatoscopy images) to confirm that suggested procedures are indeed necessary. According to the hypothetical scenario presented by~\cite{finlayson2018}, an attacker could generate adversarial samples composed of dermatoscopy images such that, when they are analyzed with the machine learning model used by the insurance company (the victim model), it would (incorrectly) report that a suggested procedure is appropriate and necessary for the patient.
For security applications that rely on audio commands (which are processed by an ML-based speech recognition system), an attacker can construct adversarial audio samples to be used in breaking into the targeted system. Such an attack, if successful, may lead to information leakage, cause denial of service, or execute unauthorized commands. The feasibility of an attack on a speech recognition system was demonstrated by~\cite{carlini2016}, who generated adversarial audio samples (called obfuscated commands) which were used in attacking Google Now's speech recognition system.
\cite{jia2017} used the Stanford Question Answering Dataset (SQuAD) to test whether text recognition systems can answer questions about paragraphs that contain adversarial sentences inserted by a malicious user. These adversarial samples were automatically generated to mislead the system without changing the correct answers or misleading humans. Their results showed that the accuracy of sixteen published models drops from an average of 75\% F1 score to 36\%, and when the attacker was allowed to add ungrammatical sequences of words, the average accuracy on four of the tested models dropped further, down to 7\%.
As machine learning approaches find their way into many application domains, the concerns associated with the reliability and security of systems grow accordingly. While covering all application areas is out of scope for this paper, our goal is to motivate the study of transferability of adversarial samples to better understand the mechanisms and factors that influence their effectiveness. Without loss of generality, we focus primarily on image classification as a use case to demonstrate the impact of machine learning attacks and their role in the transferability of adversarial samples (though the findings and insights obtained can be generalized to other use cases).
\section{Related Work}
The study of machine learning attacks and the transferability of adversarial samples has gained momentum, following the widespread use of Deep Neural Networks (DNNs) in many application domains. In the following, we detail the recent studies in this area and discuss their relevance to our work.
\cite{szegedy2014} studied the transferability of adversarial samples on different models that were trained using the MNIST dataset. They focused on examining why DNNs are so vulnerable to images with little perturbation. In particular, they examined non-linearity and overfitting in neural networks as the cause of DNNs' vulnerability to adversarial samples. Their experiments and methodology, however, were limited to NN model characteristics to gain intuition about transferability.
\cite{goodfellow2015} carried out a new study on transferability of adversarial samples which built on the previous study of~\cite{szegedy2014}. In contrast, they argued that non-linearity of NN models actually helps to reduce the vulnerability to adversarial samples, and that the linearity of a model is what makes adversarial samples work. They further suggest that transferability is more likely when the adversarial perturbation or noise is highly aligned with the weight vector of the model. Their entire analysis was based on an attack called the Fast Gradient Sign Method (FGSM), which computes the gradient of the loss function once and then finds the minimum step size that generates an adversarial sample.
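To make this mechanism concrete, the following is a minimal sketch of FGSM in PyTorch; the naming is ours, and the fixed step size \texttt{epsilon} stands in for the tuned minimum step size described above:
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    # One-step attack: compute the loss gradient once, then
    # step along its sign (inputs assumed scaled to [0, 1]).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
\end{verbatim}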
Another study on transferability was conducted by~\cite{papernot2016}, in which they experimented with how transferability works across traditional machine learning classifiers, such as Support Vector Machines (SVMs), Decision Trees (DT), K-nearest neighbors (KNN), Logistic Regression (LR), and DNNs. Their motivation was to determine whether adversarial samples constitute a threat for a specific type or implementation of machine learning model. In other words, they analyzed whether adversarial samples transfer to any of these models and, if so, which of the classifiers (or models) are more prone to such black-box attacks. They also examined intra-technique and cross-technique transferability across the models, and provided an in-depth explanation of why DNNs and LR were more prone to intra-technique transferability than SVM, DT, and KNN. However, similar to previous studies, their analysis did not consider the possible impact of intrinsic properties of attacks on the transferability of adversarial samples.
\cite{papernot2017} extended their earlier findings by demonstrating how a black-box attack can be launched on a hosted DNN without prior knowledge of the model structure or its training dataset. The attack strategy employed consists of training a local model (i.e., the substitute/attacker model) using synthetically generated data, labeled by the targeted DNN. They demonstrated the feasibility of this strategy by launching black-box attacks on machine learning services hosted by Amazon, Google and MetaMind. A similar study was conducted by~\cite{liu2017}, in which the model and training process, including both training and test datasets, were assumed to be unknown to the attacker before launching the attack.
\cite{demontis2019} presented a comprehensive analysis of transferability for both test-time evasion and training-time poisoning attacks. They showed that there are two main factors contributing to the success of an attack: the intrinsic adversarial vulnerability of the target model, and the complexity of the substitute model used to optimize the attack. They further defined three metrics/factors that impact transferability: i) the size of the input gradient, ii) the alignment of the input gradients of the loss function computed using the target and the substitute (attacker) models, and iii) the variability of the loss landscape.
All these findings and factors, while essential, are restricted to explaining transferability from the model-centric perspective. Our investigation, however, is not limited to the assessment of models, but extends the analysis to various attack implementations and the adversarial samples they generate, to see whether there are underlying characteristics that increase or decrease the chances of transferability among NN models.
\section{Machine Learning Attacks}
The adversarial perturbations crafted to generate adversarial samples for fooling a trained network are referred to as machine learning attacks. The full list of machine learning attacks presented in the literature is extensive; here, we present the subset of attacks analyzed in this work with a brief description of their characteristics in Table~\ref{tab:attacks}.
Following the categorization presented by~\cite{rauber2018}, we group the attacks used in this paper into two main families: i) gradient-based, and ii) decision-based attacks. Gradient-based attacks try to generate adversarial samples by finding the minimum perturbation through a gradient-descent mechanism. Decision-based attacks instead rely on image processing techniques to generate adversarial samples. They are called decision-based because the algorithms compare the model's decision on the perturbed sample with the original label, iterating until misclassification occurs; a minimal sketch of this loop follows below.
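As an illustration of the decision-based family, the following minimal sketch (our simplification, for a single input with a batch dimension) gradually increases the noise scale until the model's decision changes:
\begin{verbatim}
import torch

def additive_noise_attack(model, x, y, sigmas):
    # Decision-based: no gradients are used; we only compare
    # the model's decision against the true label y (an int).
    for sigma in sigmas:
        x_adv = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
        if model(x_adv).argmax(dim=1).item() != y:
            return x_adv  # misclassification reached
    return None  # no adversarial sample found within the schedule
\end{verbatim}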
\begin{longtable}{| p{.25\textwidth} | p{.18\textwidth} | p{.46\textwidth}|}
\hline
Name of Attack & Attack Family & Short Description\\
\hline\hline
Deep Fool Attack & gradient-based & It obtains minimum perturbation by approximating the model classifier with a linear classifier~\citep{moosavi2016}.\vspace{0.1cm} \\
\hline
Additive Noise Attack & decision-based & Adds Gaussian or uniform noise and gradually increases the standard deviation until misprediction occurs~\citep{rauber2018}.\vspace{0.1cm} \\
\hline
Basic Iterative Attack & gradient-based & Applies a gradient with small step size and clips pixel values of intermediate results to ensure that they are in the neighborhood of the original image~\citep{kurakin2017}. \vspace{0.1cm} \\
\hline
Blended Noise Attack & decision-based & Blends the input image with a uniform noise until the image is misclassified.\vspace{0.1cm}\\
\hline
Blur Attack & decision-based & Finds the minimum blur needed to turn an input image into an adversarial sample by linearly increasing the standard deviation of a Gaussian filter. \vspace{0.1cm}\\
\hline
Carlini Wagner Attack & gradient-based & Generates adversarial sample by finding the smallest noise added to an image that will change the classification of the image~\citep{carlini2017}.\vspace{0.1cm}\\
\hline
Contrast Reduction Attack & decision-based & Reduces the contrast of an input image by performing a line-search internally to find minimal adversarial perturbation. \vspace{0.1cm}\\
\hline
Search Contrast Reduction Attack& decision-based & Reduces the contrast of an input image by performing a binary search internally to find minimal adversarial perturbation. \vspace{0.1cm}\\
\hline
Decoupled Direction and Norm (DDN) Attack & gradient-based & Induces misclassifications with low L2-norm, through decoupling the direction and norm of the adversarial perturbation that is added to the image~\citep{rony2019}. The attack compensates for the slowness of Carlini Wagner attack.\vspace{0.1cm}\\
\hline
Fast Gradient Sign Attack & gradient-based & Uses a one-step method that computes the gradient of the loss function with respect to the image once and then tries to find the minimum step size that will generate an adversarial sample~\citep{goodfellow2015}.\\
\hline
Inversion Attack & decision-based & Creates a negative image (i.e., image complement of the original image, in which the light pixels appear dark, and vice versa) by inverting the pixel values~\citep{hosseini2017}.\vspace{0.1cm}\\
\hline
Newton Fool Attack & gradient-based & Finds small adversarial perturbation on an input image by significantly reducing the confidence probability~\citep{jang2017}.\vspace{0.1cm}\\
\hline
Projected Gradient Descent Attack & gradient-based & Attempts to find the perturbation that maximizes the loss of a model (using gradient descent) on an input. It is ensured that the size of the perturbation is kept smaller than specified error by relying on clipping the samples generated~\citep{madry2017}.\vspace{0.1cm}\\
\hline
Salt and Pepper Noise Attack & decision-based & Involves adding salt-and-pepper noise to an image in each iteration until the image is misclassified, while keeping the perturbation size within the specified error $\epsilon$.\vspace{0.1cm}\\
\hline
Virtual Adversarial Attack & gradient-based & Calculates untargeted adversarial perturbation by performing an approximated second order optimization step on the Kullback–Leibler divergence between the unperturbed predictions and the predictions for the adversarial perturbation~\citep{miyato2015}. \vspace{0.1cm}\\
\hline
Sparse Descent Attack & gradient-based & A version of the Basic Iterative Method that minimizes the L1 distance. \vspace{0.1cm}\\
\hline
Spatial Attack & decision-based & Relies on spatially chosen rotations, translations, scaling~\citep{engstrom2019}.\vspace{0.1cm}\\
\hline \hline
\caption{The machine learning attacks used in this work.}
\label{tab:attacks}
\end{longtable}
\section{Methodology}
In the following, we detail the Convolutional Neural Network (CNN) models, infrastructure and tools used in the evaluation, as well as the procedure employed in carrying out the experiments.
\subsection{Infrastructure and Tools}
To build, train, and test the CNNs used in our evaluation, we rely on PyTorch and TorchVision. We also use Foolbox~\citep{rauber2018}, a Python library for generating adversarial samples. It provides reference implementations for many of the published adversarial attacks, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. We use Python version 3.7.3 on Jupyter Notebook. We run our experiments on Google Colab, which provides an interactive environment for writing and executing Python code. It is similar to a Jupyter notebook, but rather than being installed locally, it is hosted on the cloud. It is heavily customized for data science workloads, as it contains most of the core libraries used in data science and machine learning research. We used this environment for training the neural networks as it provides large memory capacity and access to GPUs, thereby reducing the training time.
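As a brief sketch of this workflow (assuming the Foolbox 3.x interface; \texttt{model}, \texttt{images}, and \texttt{labels} are placeholders for a trained CNN and a batch of test inputs):
\begin{verbatim}
import foolbox as fb

fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
attack = fb.attacks.LinfDeepFoolAttack()
# returns raw and clipped adversarials plus a per-sample success mask
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=1.0)
\end{verbatim}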
\subsection{CNNs Used in This Study}
Here, we provide brief descriptions and details of the CNNs used in this work. Note that a particular CNN may take one of two roles: it can be either an attacker model (on which the adversarial samples are generated) or a victim model (against which the adversarial samples are used).
{\bf LeNet:}
It is a simple yet popular CNN architecture that was first introduced in 1995 but came to the limelight in 1998 after it demonstrated success in the handwritten digit recognition task~\citep{lecun1998}. The LeNet architecture used for this work is slightly modified to train on the CIFAR-10 dataset (instead of MNIST).
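For illustration, a LeNet variant for 3x32x32 CIFAR-10 inputs that matches the layer counts in Table~\ref{tab:cnn-models} may look as follows; the exact layer sizes are our assumption, not the trained configuration:
\begin{verbatim}
import torch.nn as nn

class LeNetCifar(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(      # 2 conv, 2 maxpool layers
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(    # 3 fully connected layers
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
\end{verbatim}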
{\bf AlexNet:}
It is an advanced form of the LeNet architecture, with a depth of 8 layers. It showed groundbreaking results in the 2012 ILSVRC competition by reducing the error rate on the ImageNet dataset from 25.8\% to 16.4\%, with about 60 million trainable parameters~\citep{krizhevsky2017}. It also employs optimization techniques such as dropout and Local Response (LR) normalization. Since LR normalization has shown minimal (if any) contribution in practice, it was not included in the AlexNet model trained for this project. Aside from the increase in the depth of the network, another difference between the LeNet and AlexNet models trained in this work is that AlexNet has dropout layers added to it.
{\bf Vgg-11:}
It was introduced by~\cite{simonyan2015} to improve image classification accuracy on the ImageNet dataset. Compared to LeNet and AlexNet, Vgg-11 has an increased network depth and makes use of small (3 x 3) convolutional filters. The architecture secured second place at the ILSVRC 2014 competition after reducing the error rate on the ImageNet dataset down to 7.3\%; hence, the architecture is an improvement over AlexNet. There are different variants of Vgg: Vgg-11, 13, 16 and 19. Only Vgg-11 is used in this paper. In addition to being deeper than the AlexNet architecture, batch normalization is also introduced in the Vgg-11 used in this project.
Table~\ref{tab:cnn-models} summarizes the major features of these three CNN models. We chose these models to evaluate how machine learning attacks and the corresponding adversarial samples behave across architectures of differing depth and structure.
\begin{longtable}{| p{.08\textwidth} | p{.072\textwidth} | p{.12\textwidth}| p{.109\textwidth} | p{.125\textwidth} | p{.065\textwidth} | p{.12\textwidth} | p{.1\textwidth}|}
\hline
CNN& \# Conv. Layers&\# Inner activation func., type&Output activation func.& \# Pooling Layers, type& \# FC Layers&\# Dropout Layers (\%)&\# BatchNorm Layers \vspace{0.1cm}\\
\hline
LeNet&2&4, RELU &Softmax& 2, maxpool& 3 &None & None \vspace{0.1cm}\\
\hline
AlexNet&5&7, RELU&Softmax &3, maxpool& 3 & 2 (p=0.5)& None \vspace{0.1cm}\\
\hline
Vgg-11& 8&8, RELU&Softmax&4, maxpool& 3 & 2 (p=0.5) & 8 \vspace{0.1cm}\\
\hline
\hline
\caption{Features of the CNN models used in this paper.}
\label{tab:cnn-models}
\end{longtable}
\subsection{Data Processing and Training}
{\bf Dataset:} We used the CIFAR-10 dataset~\citep{Krizhevsky2009} for our analysis, since it is arguably one of the most widely used datasets in image processing and computer vision research. It contains 60,000 images, each belonging to one of ten classes. The training set contains 45,000 images, the validation set 500 images, and the testing set 10,000 images. To generate adversarial samples, 500 images are selected from the testing dataset (50 images picked from each class to obtain a balanced dataset).
\noindent {\bf Preprocessing:} For training, we applied transformations including random rotation, random horizontal flip, random cropping, conversion to tensors, and normalization. Likewise, for testing we applied conversion to tensors and normalization. Random rotations and horizontal flips introduce variety into the input data, which helps the model learn in a more robust way. Converting inputs to tensors is necessary because PyTorch works with tensor objects. The three channels are normalized (dividing by 255) to improve learning accuracy. The final step of data pre-processing was setting a batch size of 256 and creating data loaders for the training and validation data (the loader provides 256 images in each iteration during training and validation). We chose a batch size of 256 as it is large enough to make training faster.
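A sketch of this pipeline in TorchVision is shown below; the rotation degrees and crop padding are illustrative choices, and the normalization statistics are the commonly used CIFAR-10 channel means and standard deviations rather than the exact values used:
\begin{verbatim}
import torchvision.transforms as T

CIFAR_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR_STD = (0.2470, 0.2435, 0.2616)

train_tf = T.Compose([
    T.RandomRotation(15),
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
    T.ToTensor(),  # scales pixel values from [0, 255] to [0, 1]
    T.Normalize(CIFAR_MEAN, CIFAR_STD),
])
test_tf = T.Compose([T.ToTensor(), T.Normalize(CIFAR_MEAN, CIFAR_STD)])
\end{verbatim}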
\noindent {\bf Training:} For training, we first created the network model, which comprises feature extraction, classification, and forward propagation. In each epoch, we calculated the training loss, training accuracy, validation loss, and validation accuracy. To perform training, we specified the following parameters for the train function: model, training iterator, optimizer (Adam), and criterion (cross-entropy loss). To perform validation, we specified the following parameters for the evaluation function: model, validation iterator, and criterion (cross-entropy loss). After completing the training phase, we saved the parameter values of the given model.
\begin{longtable}{| p{.3\textwidth} | p{.2\textwidth}| p{.2\textwidth} | p{.2\textwidth} |}
\hline
Characteristics & LeNet & AlexNet & Vgg-11 \vspace{0.1cm}\\
\hline
\hline
Epoch number & 25 & 25 & 10 \vspace{0.1cm}\\
\hline
Training loss & 0.953 & 0.631 & 0.244 \vspace{0.1cm}\\
\hline
Validation loss & 0.956 & 0.695 & 0.468 \vspace{0.1cm}\\
\hline
Training accuracy & 66.34\% & 78.34\%& 91.94\% \vspace{0.1cm}\\
\hline
Validation accuracy & 66.70\% & 76.74\%&87.11\% \vspace{0.1cm}\\
\hline
Testing accuracy & 66.64\% &76.03\%& 85.87\% \vspace{0.1cm}\\
\hline
\hline
\caption{Training characteristics for NN models.}
\label{tab:training-characteristics}
\end{longtable}
The final step is the testing stage. To test the trained models, we loaded in the saved model parameters, including trained weights. Then, we checked for testing accuracy of the networks. Table~\ref{tab:training-characteristics} summarizes the training characteristics and reports train, validation and testing accuracy obtained.
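For reference, the train and evaluation functions described above share the structure sketched below (a simplification with illustrative names):
\begin{verbatim}
import torch

def run_epoch(model, iterator, criterion, optimizer=None, device="cuda"):
    # Trains when an optimizer is given, otherwise evaluates.
    training = optimizer is not None
    model.train(training)
    total_loss, correct, seen = 0.0, 0, 0
    with torch.set_grad_enabled(training):
        for x, y in iterator:
            x, y = x.to(device), y.to(device)
            logits = model(x)
            loss = criterion(logits, y)
            if training:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            total_loss += loss.item() * y.size(0)
            correct += (logits.argmax(1) == y).sum().item()
            seen += y.size(0)
    return total_loss / seen, correct / seen  # loss, accuracy
\end{verbatim}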
\subsection{Adversarial Samples Generation}
{\bf Machine learning attacks:} Table~\ref{tab:attacks} detailed the 17 unique machine learning attacks employed in the evaluation. However, for some of the attacks, more than one norm (L1, L2, L-infinity) is used for estimating the error ($\epsilon$), increasing the number of unique attacks evaluated to 40. For the sake of brevity, we enumerate the attacks from 1 to 40 (as listed in Table~\ref{tab:attack-enumeration}) and use this enumeration as labels, instead of providing the full name and norm, when showing results in the following figures.
\begin{longtable}{| p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} || p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} | }
\hline
Label & Attack Name & Norm & Label & Attack Name & Norm \\
\hline
\hline
1& Deep Fool Attack& L-inf & 21& BSCR Attack& L2\\
\hline
2& Deep Fool Attack& L2 & 22& BSCR Attack& L-inf\\
\hline
3& Additive Gaussian Noise (AGN) Attack& L2 & 23& Linear Search Contrast Reduction (LSCR) Attack& L1\\
\hline
4& Additive Uniform Noise (AUN) Attack& L2 & 24& LSCR Attack& L2\\
\hline
5& AUN Attack& L-inf & 25& LSCR Attack& L-inf\\
\hline
6& Repeated AGN Attack& L2 & 26& Decoupled Direction and Norm Attack& L2\\
\hline
7& Repeated AUN Attack& L2 & 27& Fast Gradient Sign Attack& L1\\
\hline
8& Repeated AUN Attack& L-inf & 28& Fast Gradient Sign Attack& L2\\
\hline
9& Basic Iterative Attack& L1 & 29& Fast Gradient Sign Attack& L-inf\\
\hline
10& Basic Iterative Attack& L2& 30& Inversion Attack& L1\\
\hline
11& Basic Iterative Attack& L-inf& 31& Inversion Attack& L2\\
\hline
12& Blended Uniform Noise Attack& L1 & 32& Inversion Attack& L-inf\\
\hline
13& Blended Uniform Noise Attack& L2 & 33& Newton Fool Attack& L2\\
\hline
14& Blended Uniform Noise Attack& L-inf & 34& Projected Gradient Descent Attack& L1\\
\hline
15& Blur Attack& L1 & 35& Projected Gradient Descent Attack& L2\\
\hline
16& Blur Attack& L2 & 36& Projected Gradient Descent Attack& L-inf\\
\hline
17& Blur Attack& L-inf & 37& Salt and Pepper Attack& L2\\
\hline
18& Carlini Wagner Attack& L2 & 38& Sparse Descent Attack& L1\\
\hline
19& Contrast Reduction Attack& L2 & 39& Virtual Adversarial Attack& L2\\
\hline
20& Binary Search Contrast Reduction (BSCR) Attack& L1 & 40& Spatial Attack& N/A\\
\hline
\caption{Labels of attacks and norms used to generate adversarial samples.}
\label{tab:attack-enumeration}
\end{longtable}
{\bf Adversarial Sample Formulation:} Given a classification function $f(x)$, class $C_x$, adversarial classification function $f(x')$, distance $D(x, x')$ and epsilon $\epsilon$ (the maximum allowable perturbation or error), an adversarial sample $x'$ can be mathematically expressed as:
\[
f(x) = C_x \;\land\; f(x') \neq C_x \;\land\; D(x,x') \leq \epsilon.
\]
To craft adversarial samples via Foolbox~\citep{rauber2018}, we need to specify a criterion that defines the impact of adversarial action (misclassification in our case), and a distance measure that defines the size of a perturbation (i.e., L1-norm, L2-norm, and/or L-inf which must be less than specified $\epsilon$). Then, these are taken into consideration in an attacker model to generate an adversarial sample.
The following equation shows the general distance formula; depending on the value of $p$, the L1, L2, or L-inf norm is obtained:
\[
||x - \hat{x}||_p \; = \; \Big(\; \sum_{i=1}^{d} | x_i - \hat{x}_i|^p \;\Big)^{1/p}
\]
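In code, the perturbation size under a chosen norm can be computed as follows (a sketch; \texttt{p} may be 1, 2, or \texttt{float('inf')} for the L-inf norm):
\begin{verbatim}
import torch

def perturbation_size(x, x_hat, p):
    # flatten the image difference and take the chosen vector norm
    return torch.linalg.vector_norm((x - x_hat).flatten(), ord=p).item()
\end{verbatim}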
We picked the value of epsilon as 1.0, since it allows generating a significant number of adversarial samples for all the attack methods used. Because generating adversarial samples with the attack algorithms is time-consuming, we used 500 balanced inputs (i.e., 50 images from each of the 10 classes) from the test data.
To demonstrate how well adversarial samples transfer, we use a confusion matrix as a visual guide. In a given confusion matrix, each row represents instances of a predicted class, whereas each column represents instances of the true/actual class to which a given input belongs. The diagonal of the confusion matrix shows the number of inputs of each class that were correctly predicted after an attack is launched. For example, Figure~\ref{fig:confusion-linf} shows the confusion matrix of adversarial samples generated using the Deep Fool attack (with L-inf norm) on LeNet. It has all-zero entries on the diagonal, which means that the inputs (i.e., adversarial samples) were misclassified in all classes. This implies that the attack that generated the adversarial samples is very powerful, since they were all misclassified. On the other hand, Figure~\ref{fig:confusion-l2} shows the confusion matrix of adversarial samples generated using the Additive Gaussian Noise attack (with L2 norm) on LeNet. Here, the diagonal has large non-zero entries, illustrating that the attack used in generating the adversarial samples is less powerful, as many of the samples are correctly classified.
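Such a matrix can be computed with, e.g., scikit-learn; note that its convention places true classes in rows, so the result is transposed below to match the layout described above (\texttt{y\_true} and \texttt{y\_pred} are placeholders for the actual classes and the victim model's predictions on the adversarial samples):
\begin{verbatim}
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true, y_pred).T  # rows: predicted, cols: actual
resisted = cm.trace()  # diagonal sum: samples still classified correctly
\end{verbatim}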
\begin{figure}[h]
\centering
\includegraphics[width=0.6\columnwidth]{{confusion-linf}.png}
\caption{Confusion matrix of adversarial samples generated using Deep Fool attack with L-inf norm on LeNet. \label{fig:confusion-linf}}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\columnwidth]{{confusion-l2}.png}
\caption{Confusion matrix of adversarial samples generated using Additive Gaussian Noise attack with L2 norm on LeNet.\label{fig:confusion-l2}}
\vspace{-0.2cm}
\end{figure}
\subsection{Experimental Procedure}
Here, we describe the procedure for performing the analysis and generating the results shown in the Evaluation. First, the adversarial samples are generated by running a given attack with the original dataset on an attacker model (which can be LeNet, AlexNet, or Vgg-11 in any given scenario). Once the adversarial samples are generated on the attacker model, they are used on the victim models (which can likewise be LeNet, AlexNet, or Vgg-11). Then, the statistics regarding the number of mispredictions, as well as their prediction classes, are collected. We also calculate the Structural Similarity Index Measure (SSIM) between each adversarial sample and the original sample to compare how visually similar they are (the SSIM value ranges from 0 to 1; a higher value indicates more similarity). This measure has been shown in the literature to correlate better with human perception than the Mean Absolute Distance (MAD). Hence, it serves as a metric for estimating how much the perturbed (adversarial) and original images differ visually.
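The SSIM computation itself follows standard library usage, sketched here under the assumption of a recent scikit-image version and channel-last image arrays scaled to [0, 1]:
\begin{verbatim}
from skimage.metrics import structural_similarity as ssim

score = ssim(original, adversarial, channel_axis=2, data_range=1.0)
\end{verbatim}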
\section{Evaluation}
We obtained three kinds of results using adversarial samples generated on attacker models: i) the number of mispredictions when adversarial samples are used on victim models; ii) the classes to which the (mis)predictions belong when adversarial samples are used on victim models; and iii) the SSIM value between original and adversarial samples.
We used these results to assess the effectiveness of the attacks used in generating adversarial samples. This assessment led us to identify four main factors that contribute immensely to the transferability of adversarial samples. In the following, we discuss these factors and provide the results obtained to back up our findings for each factor's implication.
\subsection{Factor 1: The attack itself}
We observed that some of the attacks used in generating adversarial samples are simply more powerful than others (regardless of the victim model). That is, the adversarial samples generated by these attacks are easily transferable, leading to a high number of mispredictions on the target model.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{attacks}.png}
\caption{Average number of mispredictions for adversarial samples transferred to the LeNet, AlexNet and Vgg-11. \label{fig:attacks}}
\end{figure}
Figure~\ref{fig:attacks} shows that the attacks with labels 1, 5, 8, 11, 14, 17, 25, 29, 32, 36, and 40 yield a higher number of mispredictions when adversarial samples are used on victim models. Hence, those attacks are more powerful. Further, the attacks with labels 11, 29 and 36 appear to have the highest number of mispredictions (on any victim model). This result shows that the transferability of an adversarial sample depends strongly on the attack that generated it.
\subsection{Factor 2: Norm Used in the Attack}
We observed that a particular attack yields varying degrees of transferability depending on the norm used to generate the adversarial samples. In general, attacks that use L-inf tend to produce adversarial samples that exhibit a higher number of mispredictions compared to attacks using L2 and L1. Figures~\ref{fig:lenet-attacker-distances},~\ref{fig:alexnet-attacker-distances} and \ref{fig:vgg11-attacker-distances} show results for attacks that use different norms when generating adversarial samples. In particular, Figure~\ref{fig:lenet-attacker-distances} shows the average number of mispredictions per attack for adversarial samples that are generated on LeNet. Among the attacks, Deep Fool, AUN and RAUN are implemented using just L-inf and L2, whereas the rest have implementations for the L1, L2 and L-inf norms. Clearly, the adversarial samples generated with the L-inf norm transfer better than the ones generated with the L1 and L2 norms. Likewise, Figures~\ref{fig:alexnet-attacker-distances} and~\ref{fig:vgg11-attacker-distances} show the average number of mispredictions per attack for adversarial samples that are generated on AlexNet and Vgg-11, respectively. The findings are consistent among the victim models, indicating that the norm used for a given attack has a significant impact on the transferability of adversarial samples.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{lenet-attacker-distances}.png}
\caption{Average number of mispredictions per attack for adversarial samples generated on LeNet. \label{fig:lenet-attacker-distances}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{alexnet-attacker-distances}.png}
\caption{Average number of mispredictions per attack for adversarial samples generated on AlexNet. \label{fig:alexnet-attacker-distances}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{vgg11-attacker-distances}.png}
\caption{Average number of mispredictions per attack for adversarial samples generated on Vgg-11. \label{fig:vgg11-attacker-distances}}
\end{figure}
While the L-inf norm yields adversarial samples that transfer better than the other norms, it should be noted that the disturbance made to an input sample may also become more pronounced. Comparing the SSIM values of adversarial samples generated using different norms shows that L-inf always produces significantly perturbed samples. In Figure~\ref{fig:ssim}, the ranges for SSIM values are labeled as: Excellent = ( 0.75 $\leq$ SSIM $\leq$ 1.0 ), Good = ( 0.55 $\leq$ SSIM $\leq$ 0.74 ), Poor = (0.35 $\leq$ SSIM $\leq$ 0.54), and Bad = (0.00 $\leq$ SSIM $\leq$ 0.34). We observed that many of the adversarial samples generated with the L-inf norm have lower SSIM, indicating that the perturbations made may be perceived by humans. Therefore, checking SSIM values can help gauge the effectiveness of a given attack. Although an attack aims to maximize the number of mispredictions, it should be considered stronger if it can keep SSIM high while yielding a high number of mispredictions at the same time.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{ssim}.png}
\caption{SSIM values for adversarial samples generated on AlexNet. \label{fig:ssim}}
\end{figure}
\subsection{Factor 3: Closeness of the Target Model to the Attacker Model}
Not surprisingly, we observed that adversarial samples yielded a higher number of mispredictions on the models on which they were generated (i.e., when the attacker and victim models are the same). For example, adversarial samples generated on AlexNet lead to a higher number of mispredictions when these samples are used on AlexNet, or on a closer model (e.g., a variation of AlexNet). However, when these adversarial samples are used on other (dissimilar) victim models, they lead to a comparably lower number of mispredictions. These findings are shown in Figures~\ref{fig:lenet-attacker-model},~\ref{fig:alexnet-attacker-model} and \ref{fig:vgg11-attacker-model}.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{lenet-attacker-model}.png}
\caption{Number of mispredictions for adversarial samples that are generated on LeNet.\label{fig:lenet-attacker-model}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{alexnet-attacker-model}.png}
\caption{Number of mispredictions for adversarial samples that are generated on AlexNet. \label{fig:alexnet-attacker-model}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{vgg11-attacker-model}.png}
\caption{Number of mispredictions for adversarial samples that are generated on Vgg-11.\label{fig:vgg11-attacker-model}}
\end{figure}
The implication of this factor is that if an attacker can generate adversarial samples on a model that is similar to a victim model, then the probability of the generated adversarial samples transferring effectively is higher. This methodology can be used by industry experts to test how well adversarial samples transfer to their ML models. One way to exploit this observation for security-critical applications is to build multiple ML models that are dissimilar in structure but provide similar prediction accuracy, and then use majority voting (or similar schemes) to decide on the proper prediction. If a particular attack transfers and is effective on one of the ML models, it is very likely (as evidenced by our analysis) that the other, dissimilar ML models would be less sensitive to the same attack, providing a way to detect the anomaly and avoid the undesired consequences of adversarial samples. Building ML models that differ in structure but yield similar accuracy is an interesting research direction, not just for security-related concerns, but also for reliability, power management, performance, and scalability.
\subsection{Factor 4: Sensitivity of an Input}
The inherent sensitivity of an input to a particular attack can determine the strength of the adversarial sample and how well it transfers to a victim model. We can summarize our observations about the sensitivity of the inputs used in the attacks as follows.
\begin{enumerate}
\item Some inputs are very sensitive to almost any attack; thus, the adversarial samples generated from them transfer effectively to victim models (e.g., input images with index 477, 479, 480 and 481 in Figure~\ref{fig:vgg11-misprediction}).
\item Some inputs are insensitive to attacks; thus, the adversarial samples generated from them are ineffective and do not get mispredicted, regardless of the victim model (e.g., input images with index 481, 484, 494 in Figure~\ref{fig:vgg11-misprediction}).
\item Some inputs are sensitive to specific attacks on a particular victim model, meaning the adversarial samples become effective when they are generated by a particular subset of attacks targeting a particular model (but are not effective when used on other models). For example, the input images with index 465 and 467 in Figure~\ref{fig:vgg11-misprediction} become more sensitive (and thus the corresponding adversarial samples become more effective) when they are transferred to the LeNet and AlexNet models, respectively (but not on other models).
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{vgg11_models_last40_df}.png}
\caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on Vgg-11 as an attacker model (zoomed in to show the last 40 input images). \label{fig:vgg11-misprediction}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{lenet_models_total_df}.png}
\caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on LeNet as an attacker model. \label{fig:lenet-misprediction-all}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{alexnet_models_total_df}.png}
\caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on AlexNet as an attacker model. \label{fig:alexnet-misprediction-all}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{vgg11_models_total_df}.png}
\caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on Vgg-11 as an attacker model. \label{fig:vgg11-misprediction-all}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{{collective-histogram}.png}
\caption{Histogram that summarizes the sensitivity of inputs to attacks. The x-axis indicates the number of effective attacks for a given input (i.e., generated adversarial sample would transfer to victim model successfully regardless of the attacker model), and y-axis indicates the number of inputs whose adversarial samples (generated by a set of attacks) would transfer effectively to the victim models. \label{fig:collective-histogram} }
\end{figure}
Figure~\ref{fig:vgg11-misprediction} shows the effective number of attacks used to generate adversarial samples on Vgg-11. For better visibility, only the last 40 input images (out of 500) are shown in Figure~\ref{fig:vgg11-misprediction}, where the x-axis shows the index of the input image and the y-axis shows the number of attacks whose generated adversarial samples are mispredicted on victim models (please see Figure~\ref{fig:vgg11-misprediction-all} for all 500 inputs used on Vgg-11). Since 40 attacks are used to generate adversarial samples, the y-axis value can be at most 40 (in which case all of the attacks yielded adversarial samples that result in misprediction). The results for the complete set of 500 input images are shown in Figures~\ref{fig:alexnet-misprediction-all} and~\ref{fig:lenet-misprediction-all} for AlexNet and LeNet (as attacker models), respectively.
The implication of this factor is that the inherent characteristics of an input may play a role in how effectively the generated adversarial samples transfer to victim models. When combined with the strength of an attack, some inputs that are sensitive to a given set of attacks (irrespective of the attacker model) may yield more effective adversarial samples than other inputs.
Figure~\ref{fig:collective-histogram} illustrates this phenomenon. It can be seen that most of the input images are sensitive to roughly 10 of the 40 attacks (regardless of the attacker model being used), but relatively few inputs are sensitive to all the attacks (23 input images yield adversarial samples that were mispredicted on all the victim models, regardless of the attacker model and attack used).
\section{Conclusion}
In its simplest form, \textit{transferability} can be defined as the ability of adversarial samples generated using the attacker model to be mispredicted when transferred to the victim model. We identified that most of the literature on transferability focuses on interpreting and evaluating transferability from the machine learning model perspective alone, which we refer to as the model-centric approach. In this work, we took an alternative path, which we call the attack-centric approach, that focuses on investigating machine learning attacks to interpret and evaluate how adversarial samples transfer to victim models. For each attacker model, we generated adversarial samples that were transferred to the three victim models (i.e., LeNet, AlexNet and Vgg-11).
We identified four factors that influence how well an adversarial sample would transfer.
Our hope is that these factors serve as useful guidelines for researchers and practitioners in the field to mitigate the adverse impact of black-box attacks and to build more attack-resistant and secure machine learning systems.
\vskip 0.2in
\section{Preface}
\label{s_preface}
This paper primarily serves as a reference for my Ph.D. dissertation, which I am currently writing.
As a consequence, the framework is not under active development.
The presented concepts, problems, and solutions may be interesting regardless, even for other problems than Neural Architecture Search (NAS).
The framework's name, UniNAS, is a wordplay of University and Unified NAS since the framework was intended to incorporate almost any architecture search approach.
\section{Introduction and Related Work}
\label{s_introduction}
An increasing supply of and demand for automated machine learning causes the amount of published code to grow by the day. Although advantageous, the benefit of such code is often impaired by many technical nitpicks.
This section lists common code bases and some of their disadvantages.
\subsection{Available NAS frameworks}
\label{u_introduction_available}
The landscape of NAS codebases is severely fragmented, owing to the vast differences between various NAS methods and the deep-learning libraries used to implement them.
Some of the best supported or most widely known ones are:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item {NASLib~\citep{naslib2020}}
\item {
Microsoft NNI \citep{ms_nni} and Archai \citep{ms_archai}
}
\item {
Huawei Noah Vega \citep{vega}
}
\item {
Google TuNAS \citep{google_tunas} and PyGlove \citep{pyglove} (closed source)
}
\end{itemize}
Counterintuitively, the overwhelming majority of publicly available NAS code is not based on any such framework or service, but on simple and typical network training code.
Such code is generally quick to implement but lacks exact comparability, scalability, and configuration power, which may be a secondary concern for many researchers.
In addition, since the official code is often released late or never, and generally only in either TensorFlow~\citep{tensorflow2015-whitepaper} or PyTorch~\citep{pytorch},
popular methods are sometimes re-implemented by some third-party repositories.
Further projects include the newly available and closed-source cloud services by, e.g., Google\footnote{\url{https://cloud.google.com/automl/}}
and Microsoft\footnote{\url{https://www.microsoft.com/en-us/research/project/automl/}}. Since they require very little user knowledge in addition to the training data, they are excellent for deep learning in industrial environments.
\subsection{Common disadvantages of code bases}
\label{u_introduction_disadvantages}
With so many frameworks available, why start another one?
The development of UniNAS started in early 2020, before most of these frameworks arrived at their current feature availability or were even made public.
In addition, the frameworks rarely provide current state-of-the-art methods even now and sometimes lack the flexibility to include them easily.
Further problems that UniNAS aims to solve are detailed below:
\paragraph{Research code is rigid}
The majority of published NAS code is very simplistic.
While that is an advantage when extracting important method-related details, the ability to reuse the available code in another context is severely impaired.
Almost all details are hard-coded, such as:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item {
the used gradient optimizer and learning rate schedule
}
\item {
the architecture search space, including candidate operations and network topology
}
\item {
the data set and its augmentations
}
\item {
weight initialization and regularization techniques
}
\item {
the used hardware device(s) for training
}
\item {
most hyper-parameters
}
\end{itemize}
This inflexibility is sometimes accompanied by the redundancy of several code pieces that differ slightly for different experiments or phases in NAS methods.
Redundancy is a fine way to introduce subtle bugs or inconsistencies and also makes the code confusing to follow.
Hard-coded details are also easy to forget, which is especially crucial in research where reproducibility depends strongly on seemingly unimportant details.
Finally, if any of the hard-coded components is ever changed, such as the optimizer, configurations of previous experiments can become very misleading.
Their details are generally not part of the documented configuration (since they are hard-coded), so earlier results no longer make sense and become misleading.
\paragraph{A configuration clutter}
In contrast to such simplistic single-purpose code, frameworks usually offer a variety of optimizers, schedules, search spaces, and more to choose from.
By configuring the related hyper-parameters, an optimizer can be trivially and safely exchanged for another. Since doing so is a conscious and intended choice, it is also documented in the configuration. In contrast, the replacement of hard-coded classes was not intended when the code was initially written.
The disadvantage of this approach comes with the wealth of configurable hyper-parameters, in different ways:
Firstly, the parametrization is often cluttered.
While implementing more classes (such as optimizers or schedules) adds flexibility, the list of available hyper-parameters becomes increasingly bloated and opaque.
The wealth of parametrization is intimidating and impractical since it is often nontrivial to understand exactly which hyper-parameters are used and which are ineffective.
As an example, the widely used PyTorch Image Models framework~\citep{rw2019timm} (the example was chosen due to the popularity of the framework, it is no worse than others in this respect) implements an intimidating mix of regularization and data augmentation settings that are partially exclusive.\footnote{\url{https://github.com/rwightman/pytorch-image-models/blob/ba65dfe2c6681404f35a9409f802aba2a226b761/train.py}, checked Dec. 1st 2021; see lines 177 and below.}
Secondly, to reduce the clutter, parameters can be shared by multiple mutually exclusive choices.
In the case of the aforementioned PyTorch Image Models framework, one example would be the selection of gradient-descent optimizers.
Sharing common parameters such as the learning rate and the momentum generally works well, but can be confusing since, once again, finding out which parameters affect which modules necessitates reading the code or documentation.
Thirdly, even with an intimidating wealth of configuration choices, not every option is covered. To simplify and reduce the clutter, many settings of lesser importance always use a sensible default value.
If changing such a parameter becomes necessary, the framework configurations become more cluttered or changing the hard-coded default value again results in misleading configurations of previous experiments.
To summarize, the hyper-parametrization design of a framework can be a delicate decision, aiming to be complete but not cluttered.
While both extremes appear to be mutually exclusive, they can be successfully united with the underlying configuration approach of UniNAS: argument trees.
\paragraph{}
Nonetheless, it is great if code is available at all.
Many methods are published without any code that enables verifying their training or search results, impairing their reproducibility.
Additionally, even if code is overly simplistic or accompanied by cluttered configurations, reading it is often the best way to clarify a method's exact workings and obtain detailed information about omitted hyper-parameter choices.
\section{Argument trees}
\label{u_argtrees}
The core design philosophy of UniNAS is built on so-called \textit{argument trees}.
This concept solves the problems of Section~\ref{u_introduction_disadvantages} while also providing immense configuration flexibility.
As its basis, we observe that any algorithm or code piece can be represented hierarchically.
For example, the task to train a network requires the network itself and a training loop, which may use callbacks and logging functions.
Sections~\ref{u_argtrees_modularity} and~\ref{u_argtrees_register} briefly explain two requirements: strict modularity and a global register.
As described in Section~\ref{u_argtrees_tree}, this allows each module to define which other types of modules are needed. In the previous example, a training loop may use callbacks and logging functions.
Sections~\ref{u_argtrees_config} and~\ref{u_argtrees_build} explain how a configuration file can fully detail these relationships and how the desired code class structure can be generated.
Finally, Section~\ref{u_argtrees_gui} shows how a configuration file can be easily manipulated with a graphical user interface, allowing the user to create and change complex experiments without writing a single line of code.
\subsection{Modularity}
\label{u_argtrees_modularity}
As practiced in most non-simplistic codebases, the core of the argument tree structure is strong modularity.
The framework code is fragmented into different components with clearly defined purposes, such as training loops and datasets.
Exchanging modules of the same type for one another is then a simple issue, for example swapping one gradient-descent optimizer for another.
If all implemented code classes of the same type inherit from one base class (e.g., AbstractOptimizer) that guarantees specific class methods for a stable interaction, they can be treated equally. In object-oriented programming, this design is termed polymorphism.
UniNAS extends typical PyTorch~\citep{pytorch} classes with additional functionality.
An example is image classification data sets, which ordinarily do not contain information about image sizes. Adding this specification makes it possible to use fake data easily and to precompute the tensor shapes in every layer throughout the neural network.
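As a minimal sketch of this polymorphism (illustrative only; the actual UniNAS interface differs in detail), a shared base class guarantees the methods that the framework relies on:
\begin{python}
from abc import ABC, abstractmethod

class AbstractOptimizer(ABC):
    # every optimizer implementation guarantees these methods,
    # so instances can be exchanged for one another safely

    @abstractmethod
    def step(self):
        raise NotImplementedError

    @abstractmethod
    def zero_grad(self):
        raise NotImplementedError
\end{python}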
\begin{figure*}[ht]
\hfill
\begin{minipage}[c]{0.97\textwidth}
\begin{python}
@Register.task(search=True)
class SingleSearchTask(SingleTask):
@classmethod
def args_to_add(cls, index=None) -> [Argument]:
return [
Argument('is_test_run', default='False', type=str, is_bool=True),
Argument('seed', default=0, type=int),
Argument('save_dir', default='{path_tmp}', type=str),
]
@classmethod
def meta_args_to_add(cls) -> [MetaArgument]:
methods = Register.methods.filter_match_all(search=True)
return [
MetaArgument('cls_device', Register.devices_managers, num=1),
MetaArgument('cls_trainer', Register.trainers, num=1),
MetaArgument('cls_method', methods, num=1),
]
\end{python}
\end{minipage}
\vskip-0.3cm
\caption{
UniNAS code excerpt for a SingleSearchTask. The decorator function in Line~1 registers the class with type ``task'' and additional information.
The method in Line~5 returns all arguments for the task to be set in a config file.
The method in Line~13 defines the local tree structure by stating how many modules of which types are needed. It is also possible to specify additional requirements, as done in Line~14.
}
\label{u_fig_register}
\end{figure*}
\subsection{A global register}
\label{u_argtrees_register}
A second requirement for argument trees is a global register for all modules. Its functions are:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item {
Allow any module to register itself with additional information about its purpose. The example code in Figure~\ref{u_fig_register} shows this in Line~1.
}
\item {
List all registered classes, including their type (task, model, optimizer, data set, and more) and their additional information (search, regression, and more).
}
\item {
Filter registered classes by types and matching information.
}
\item {
Given only the name of a registered module, return the class code located anywhere in the framework's files.
}
\end{itemize}
As seen in the following Sections, this functionality is indispensable to UniNAS' design.
The only difficulties in building such a register are that the code should remain readable and that every module has to register itself when the framework is used.
Both can be achieved by scanning through all code files whenever a new job starts, which takes less than five seconds.
In doing so, Python executes the decorators (see Figure~\ref{u_fig_register}, Line~1), which handle the registration in an easily readable fashion.
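For illustration, a drastically simplified register could look as follows; the actual implementation keeps separate registers per module type and stores additional information:
\begin{python}
class Register:
    _classes = {}   # name -> (class, info)

    @classmethod
    def task(cls, **info):
        # decorator that registers a class together with
        # additional information about its purpose
        def wrapper(registered_cls):
            cls._classes[registered_cls.__name__] = (registered_cls, info)
            return registered_cls
        return wrapper

    @classmethod
    def get(cls, name: str):
        # return the class code, located anywhere in the framework
        return cls._classes[name][0]

    @classmethod
    def filter_match_all(cls, **info):
        # filter registered classes by types and matching information
        return [c for (c, i) in cls._classes.values()
                if all(i.get(k) == v for k, v in info.items())]
\end{python}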
\subsection{Tree-based dependency structures}
\label{u_argtrees_tree}
\begin{figure*}
\vskip-0.7cm
\begin{minipage}[l]{0.42\linewidth}
\centering
\includegraphics[trim=0 320 2480 0, clip, width=\textwidth]{./images/uninas/args_tree_s1_col.pdf}
\vskip-0.2cm
\caption{
Part of a visualized SingleSearchTask configuration, which describes the training of a one-shot super-network with a specified search method (omitted for clarity, the complete tree is visualized in Figure~\ref{app_u_argstree_img}).
The white colored tree nodes state the type and number of requested classes, the turquoise boxes the specific classes used. For example, the \textcolor{red}{SingleSearchTask} requires exactly one type of \textcolor{orange}{hardware device} to be specified, but the \textcolor{cyan}{SimpleTrainer} accepts any number of \textcolor{green}{callbacks} or loggers.
\\
\hfill
}
\label{u_argstree_trimmed_img}
\end{minipage}
\hfill
\begin{minipage}[r]{0.5\textwidth}
\begin{small}
\begin{lstlisting}[backgroundcolor = \color{white}]
"cls_task": <@\textcolor{red}{"SingleSearchTask"}@>,
"{cls_task}.save_dir": "{path_tmp}/",
"{cls_task}.seed": 0,
"{cls_task}.is_test_run": true,
"cls_device": <@\textcolor{orange}{"CudaDevicesManager"}@>,
"{cls_device}.num_devices": 1,
"cls_trainer": <@\textcolor{cyan}{"SimpleTrainer"}@>,
"{cls_trainer}.max_epochs": 3,
"{cls_trainer}.ema_decay": 0.5,
"{cls_trainer}.ema_device": "cpu",
"cls_exp_loggers": <@\textcolor{black}{"TensorBoardExpLogger"}@>,
"{cls_exp_loggers#0}.log_graph": false,
"cls_callbacks": <@\textcolor{green}{"CheckpointCallback"}@>,
"{cls_callbacks#0}.top_n": 1,
"{cls_callbacks#0}.key": "train/loss",
"{cls_callbacks#0}.minimize_key": true,
\end{lstlisting}
\end{small}
\vskip-0.2cm
\caption{
Example content of the configuration text-file (JSON format) for the tree in Figure~\ref{u_argstree_trimmed_img}.
The first line in each text block specifies the used class(es), the other lines their detailed settings. For example, the \textcolor{cyan}{SimpleTrainer} is set to train for three epochs and track an exponential moving average of the network weights on the CPU.
}
\label{u_argstree_trimmed_text}
\end{minipage}
\end{figure*}
A SingleSearchTask requires exactly one hardware device and exactly one training loop (named trainer, to train an over-complete super-network), which in turn may use any number of callbacks and logging mechanisms.
Their relationship is visualized in Figure~\ref{u_argstree_trimmed_img}.
Argument trees are extremely flexible since they allow every hierarchical one-to-any relationship imaginable.
Multiple optional callbacks can be rearranged in their order and configured in detail.
Moreover, module definitions can be reused in other constellations, including their requirements.
The ProfilingTask does not need a training loop to measure the runtime of different network topologies on a hardware device, reducing the argument tree in size.
While not implemented, a MultiSearchTask could use several trainers in parallel on several devices.
The hierarchical requirements are made available using so-called MetaArguments, as seen in Line~16 of Figure~\ref{u_fig_register}.
They specify the local structure of argument trees by stating which other modules are required. To do so, writing the required module type and their amount is sufficient. As seen in Line~14, filtering the modules is also possible to allow only a specific subset.
This particular example defines the upper part of the tree visualized in Figure~\ref{u_argstree_trimmed_img}.
The names of all MetaArguments start with "cls\_" which improves readability and is reflected in the visualized arguments tree (Figure~\ref{u_argstree_trimmed_img}, white-colored boxes).
\subsection{Tree-based argument configurations}
\label{u_argtrees_config}
While it is possible to define such a dynamic structure, how can it be represented in a configuration file?
Figure~\ref{u_argstree_trimmed_text} presents an excerpt of the configuration that matches the tree in Figure~\ref{u_argstree_trimmed_img}.
As stated in Lines~6 and~9 of the configuration, CudaDevicesManager and SimpleTrainer fill the roles for the requested modules of types "device" and "trainer".
Lines~14 and~17 list one class of the types ``logger'' and ``callback'' each, but could provide any number of comma-separated names.
Also including the stated "task" type in Line~1, the mentioned lines state strictly which code classes are used and, given the knowledge about their hierarchy, define the tree structure.
Additionally, every class has some arguments (hyper-parameters) that can be modified.
SingleSearchTask defined three such arguments (Lines~7 to~9 in Figure~\ref{u_fig_register}) in the visualized example,
which are represented in the configuration (Lines~2 to~4 in Figure~\ref{u_argstree_trimmed_text}).
If the configuration is missing an argument, maybe to keep it short, its default value is used.
Another noteworthy mechanism in Line~2 is that "\{cls\_task\}.save\_dir" references whichever class is currently set as "cls\_task" (Line~1), without naming it explicitly.
Such wildcard references simplify automated changes to configuration files since, independently of the used task class, overwriting "\{cls\_task\}.save\_dir" is always an acceptable way to change the save directory.
A less general but perhaps more readable notation is "SingleSearchTask.save\_dir", which is also accepted here.
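As a minimal sketch (ignoring comma-separated class lists and the ``\#'' indices), such wildcard references can be resolved by a single pass over the configuration:
\begin{python}
def resolve_wildcards(config: dict) -> dict:
    # replace e.g. "{cls_task}.save_dir" with
    # "SingleSearchTask.save_dir", depending on which class
    # is currently set as "cls_task"
    resolved = {}
    for key, value in config.items():
        if key.startswith('{') and '}' in key:
            ref, rest = key[1:].split('}', 1)
            key = config[ref] + rest
        resolved[key] = value
    return resolved
\end{python}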
A very interesting property of such dynamic configuration files is that they contain only the hyper-parameters (arguments) of the used code classes.
Adding any additional arguments will result in an error since the configuration-parsing mechanism, described in Section~\ref{u_argtrees_build}, is then unable to piece the information together.
Even though UniNAS implements several different optimizer classes, any such configuration only contains the hyper-parameters of those actually used. Generated configuration files are therefore always complete (they contain all arguments of the used classes), sparse (they contain no others), and never ambiguous.
A debatable design decision of the current configuration files, as seen in Figure~\ref{u_argstree_trimmed_text}, is that they do not explicitly encode any hierarchy levels. Since that information is already known from their class implementations, the flat representation was chosen primarily for readability.
It is also beneficial when arguments are manipulated, either automatically or from the terminal when starting a task.
The disadvantage is that the argument names for class types can only be used once ("cls\_device", "cls\_trainer", and more); an unambiguous assignment is otherwise not possible. For example, since the SingleSearchTask already owns "cls\_device", no other class that could be used in the same argument tree can use that particular name. While this limitation is not too significant, it can be mildly confusing at times.
Finally, how is it possible to create configuration files?
Since the dynamic tree-based approach offers a wide variety of possibilities, only a tiny subset is valid.
For example, providing two hardware devices violates the defined tree structure of a SingleSearchTask and results in a parsing failure.
If that happens, the user is provided with details of which particular arguments are missing or unexpected.
While the best way to create correct configurations is surely experience and familiarity with the code base, the same could be said about any framework.
Since UniNAS knows about all registered classes, which other (possibly specified) classes they use, and all of their arguments (including defaults, types, help string, and more), an exhaustive list can be generated automatically. However, resulting in almost 1600 lines of text, this solution is not optimal either.
The most convenient approach is presented in Section~\ref{u_argtrees_gui}: Creating and manipulating argument trees with a graphical user interface.
\begin{algorithm}
\caption{
Pseudo-code for building the argument tree, best understood with Figures~\ref{u_argstree_trimmed_img} and~\ref{u_argstree_trimmed_text}.
For a consistent terminology of code classes and tree nodes: if the $Task$ class uses a $Trainer$, then, in that context, the $Task$ is the parent and the $Trainer$ is the child. Lines starting with \# are comments.
}
\label{alg_u_argtree}
\small
\begin{algorithmic}
\Require $Configuration$ \Comment{Content of the configuration file}
\Require $Register$ \Comment{All modules in the code are registered}
\State{}
\State{$\#$ recursive parsing function to build a tree}
\Function{parse}{$class,~index$}
\Comment{E.g. $(SingleSearchTask,~0)$}
\State $node = ArgumentTreeNode(class,~index)$
\State{}
\State{$\#$ first parse all arguments (hyper-parameters) of this tree node}
\ForEach{($idx, argument\_name$) \textbf{in} $class.get\_arguments()$}
\Comment{E.g. (0, $''save\_dir''$)}
\State $value = get\_used\_value(Configuration,~class,~index,~argument\_name)$
\State $node.add\_argument(argument\_name,~value)$
\EndFor
\State{}
\State{$\#$ then recursively parse all child classes, for each module type...}
\ForEach{$child\_class\_type$ \textbf{in} $class.get\_child\_types()$}
\Comment{E.g. $cls\_trainer$}
\State $class\_names = get\_used\_classes(Configuration,~child\_class\_type)$
\Assert{ The number of $class\_names$ is in the specified limits}
\State{}
\State{$\#$ for each module type, check all configured classes}
\ForEach{($idx,~class\_name$) \textbf{in} $class\_names$}
\Comment{E.g. (0, $''SimpleTrainer''$)}
\State $child\_class = Register.get(class\_name)$
\State $child\_node = $\Call{parse}{$child\_class,~idx$}
\State $node.add\_child(child\_class\_type,~idx,~child\_node)$
\EndFor
\EndFor
\Returnx{ $node$}
\EndFunction
\State{}
\State $tree = $\Call{parse}{$Main, 0$}
\Comment{Recursively parse the tree, $Main$ is the entry point}
\Ensure every argument in the configuration has been parsed
\end{algorithmic}
\end{algorithm}
\subsection{Building the argument tree and code structure}
\label{u_argtrees_build}
The arguably most important function of a research code base is to run experiments.
In order to do so, valid configuration files must be translated into their respective code structure.
This comes with three major requirements:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{
Classes in the code that implement the desired functionality.
As seen in Section~\ref{u_argtrees_tree} and Figure~\ref{u_argstree_trimmed_img}, each class also states the types, argument names and numbers of additionally requested classes for the local tree structure.
}
\item{
A configuration that describes which code classes are used and which values their parameters take.
This is described in Section~\ref{u_argtrees_config} and visualized in Figure~\ref{u_argstree_trimmed_text}.
}
\item{
To connect the configuration content to classes in the code, it is required to reference code modules by their names. As described in Section~\ref{u_argtrees_register} this can be achieved with a global register.
}
\end{itemize}
Algorithm~\ref{alg_u_argtree} realizes the first step of this process: parsing the hierarchical code structure and their arguments from the flat configuration file.
The result is a tree of \textit{ArgumentTreeNodes}, of which each refers to exactly one class in the code, is connected to all related tree nodes, and knows all relevant hyper-parameter values.
While the tree nodes do not yet hold actual class instances, creating and connecting these instances from the parsed information is a simple final step.
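For illustration, a minimal Python sketch of Algorithm~\ref{alg_u_argtree} could look as follows, assuming the simplified register from above, that Argument and MetaArgument objects expose name and default attributes, and that all wildcard references are already resolved:
\begin{python}
def parse(config: dict, cls, index: int):
    node = ArgumentTreeNode(cls, index)
    # first parse all arguments (hyper-parameters) of this node
    for arg in cls.args_to_add():
        key = '%s.%s' % (cls.__name__, arg.name)
        node.add_argument(arg.name, config.get(key, arg.default))
    # then recursively parse all child classes, per module type
    for meta in cls.meta_args_to_add():
        names = config.get(meta.name, '')   # e.g. "SimpleTrainer"
        for idx, name in enumerate(n for n in names.split(',') if n):
            child = parse(config, Register.get(name), idx)
            node.add_child(meta.name, idx, child)
    return node
\end{python}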
\begin{figure*}[h]
\vskip -0.0in
\begin{center}
\includegraphics[trim=30 180 180 165, clip, width=\linewidth]{images/uninas/gui/gui1desc.png}
\hspace{-0.5cm}
\caption{
The graphical user interface (left) that can manipulate the configurations of argument trees (visualized right).
Since many nodes are missing classes of some type ("cls\_device", ...), their parts in the GUI are highlighted in red.
The eight child nodes of DartsSearchMethod are omitted for visual clarity.
}
\label{fig_u_gui}
\end{center}
\end{figure*}
\subsection{Creating and manipulating argument trees with a GUI}
\label{u_argtrees_gui}
Manually writing a configuration file can be perplexing since one must keep track of tree specifications, argument names, available classes, and more.
The graphical user interface (GUI) visualized in Figures~\ref{fig_u_gui} and~\ref{app_u_gui} solves these problems to a large extent, by providing the following functionality:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{
Interactively adding and removing nodes in the argument tree, and thus also in the configuration and class structure, while highlighting violations of the tree specification.
}
\item{
Setting the hyper-parameters of each node, using checkboxes (boolean), dropdown menus (choice from a selection), and text fields (other cases like strings or numbers) where appropriate.
}
\item{
Functions to save and load argument trees.
Since it makes sense to separate the configurations for the training procedure and the network design to swap between different constellations easily, loading partial trees is also supported. Additional functions enable visualizing, resetting, and running the current argument tree.
}
\item{
A search function that highlights all matches since the size of some argument trees can make finding specific arguments tedious.
}
\end{itemize}
In order to do so, the GUI manipulates \textit{ArgumentTreeNodes} (Section~\ref{u_argtrees_build}), which can be easily converted into configuration files and code.
As long as the required classes (for example, the data set) are already implemented, the GUI enables creating and changing experiments without ever touching any code or configuration files.
While not among the original intentions, this property may be especially interesting for non-programmers who want to solve their problems quickly.
Still, the current version of the GUI is a proof of concept.
It favors functionality over design; it was written with the plain Python Tkinter GUI framework and with little previous GUI programming experience.
Nonetheless, since the GUI (frontend) and the functions manipulating the argument tree (backend) are separated, a continued development with different frontend frameworks is entirely possible.
The perhaps most interesting would be a web service that runs experiments on a server, remotely configurable from any web browser.
\subsection{Using external code}
\label{u_external}
There is a variety of reasons why it makes sense to include external code into a framework.
Most importantly, the code either solves a standing problem or provides the users with additional options. Unlike newly written code, many popular libraries are also thoroughly optimized, reviewed, and empirically validated.
External code is also a perfect match for a framework based on argument trees.
As shown in Figure~\ref{u_fig_external_import}, external classes of interest can be thinly wrapped to ensure compatibility, register the module, and specify all hyper-parameters for the argument tree.
The integration is seamless so that finding out whether a module is locally written or external requires an inspection of its code.
On the other hand, if importing the AdaBelief~\citep{zhuang2020adabelief} code fails, the module will not be registered and therefore not be available in the graphical user interface.
UniNAS fails to parse configurations that require unregistered modules but informs the user which external sources can be installed to extend its functionality.
Due to this logistic simplicity, several external frameworks extend the core of UniNAS.
Some of the most important ones are:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{
pymoo~\citep{pymoo}, a library for multi-objective optimization methods.
}
\item{
Scikit-learn~\citep{sklearn}, which implements many classical machine learning algorithms such as Support Vector Machines and Random Forests.
}
\item{
PyTorch Image Models~\citep{rw2019timm}, which provides the code for several optimizers, network models, and data augmentation methods.
}
\item{
albumentations~\citep{2018arXiv180906839B}, a library for image augmentations.
}
\end{itemize}
\begin{figure*}
\hfill
\begin{minipage}[c]{0.95\textwidth}
\begin{python}
from uninas.register import Register
from uninas.training.optimizers.abstract import WrappedOptimizer

try:
    from adabelief_pytorch import AdaBelief

    # if the import was successful,
    # register the wrapped optimizer
    @Register.optimizer()
    class AdaBeliefOptimizer(WrappedOptimizer):
        # wrap the original
        ...

except ImportError as e:
    # if the import failed, inform the user
    # that optional libraries are not installed
    Register.missing_import(e)
\end{python}
\end{minipage}
\vskip-0.3cm
\caption{
Excerpt of UniNAS wrapping the official AdaBelief optimizer code.
The complete text has just 45 lines, half of which specify the optimizer parameters for the argument trees.
}
\label{u_fig_external_import}
\end{figure*}
\section{Dynamic network designs}
\label{u_networks}
As seen in the previous Sections, the unique design of UniNAS enables powerful customization of all components. In most cases, a significant portion of the architecture search configuration belongs to the network design. The FairNAS search example in Figure~\ref{app_u_argstree_img} contains 25 configured classes, of which 11 belong to the search network.
While it would be easy to create a single configurable class for each network architecture of interest, that would ignore the advantages of argument trees.
On the other hand, there are many technical difficulties with highly dynamic network topologies. Some of them are detailed below.
\subsection{Decoupling components}
In many published research codebases, network and architecture weights jointly exist in the network class. This design decision is disadvantageous for multiple reasons.
Most importantly, changing the network or NAS method requires a lot of manual work.
The reason is that different NAS methods need different amounts of architecture parameters, use them differently, and optimize them in different ways. For example:
\begin{itemize}[noitemsep,parsep=0pt,partopsep=0pt]
\item{
DARTS~\citep{liu2018darts} requires one weight vector per architecture choice.
These weights weigh all different paths (the candidate operations) in a sum. Updating them is done with an additional optimizer (ADAM), using gradient descent.
}
\item{
MDENAS~\citep{mdenas} uses a similar vector for a weighted sample of a single candidate operation that is used in this particular forward pass. Global network performance feedback is used to increase or decrease the local weightings.
}
\item{
Single-Path One-Shot~\citep{guo2020single} does not use weights at all. Paths are always sampled uniformly at random. The trained network then serves as an accuracy prediction model for a hyper-parameter optimization method.
}
\item{
FairNAS~\citep{FairNAS} extends Single-Path One-Shot to make sure that all candidate operations are used frequently and equally often. It thus needs to track which paths are currently available.
}
\end{itemize}
\begin{figure}[t]
\vskip -0.0in
\begin{center}
\includegraphics[trim=0 0 0 0, clip, width=\linewidth]{images/draw/search_net.pdf}
\hspace{-0.5cm}
\caption{
The network and architecture weights are decoupled.
\textbf{Top}: The structure of a fully sequential super-network. Every layer (cell) uses the same set of candidate operations and weight strategy.
\textbf{Bottom left}: One set of candidate operations that is used multiple times in the network. This particular experiment uses the NAS-Bench-201 candidate operations.
\textbf{Bottom right}: A weight strategy that manages everything related to the used NAS method, such as creating the architecture weights or which candidates are used in each forward pass.
}
\label{fig_u_decouple}
\end{center}
\end{figure}
The same is also true for the set of candidate operations, which affect the sizes of the architecture weights.
Once the definitions of the search space, the candidate operations, and the NAS method (including the architecture weights) are mixed, changing any part is tedious.
Therefore, strictly separating them is the best long-term approach.
Similar to other frameworks presented in Section~\ref{u_introduction_available},
architectures defined in UniNAS do not hard-code an explicit set of candidate operations but allow a dynamic configuration.
This is supported by a \textit{WeightStrategy} interface, which handles all NAS-related operations such as creating and updating the architecture weights.
The interaction between the architecture definition, the candidate operations, and the weight strategy is visualized in Figure~\ref{fig_u_decouple}.
The easy exchange of any component is not the only advantage of this design.
Some NAS methods, such as DARTS, update network and architecture weights using different gradient descent optimizers. Correctly disentangling the weights is trivial if they are already organized in decoupled structures but hard otherwise.
Another advantage is that standardizing functions to create and manage architecture weights makes it easy to present relevant information to the user, such as how many architecture weights exist, their sizes, and which are shared across different network cells.
An example is presented in Figure~\ref{app_text}.
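For illustration, such an interface can be sketched as follows (the method names are illustrative, not the exact UniNAS signatures):
\begin{python}
class WeightStrategy:
    """ manages everything related to the used NAS method """

    def create_weights(self, name: str, num_choices: int):
        # e.g. DARTS: one weight vector per architecture choice;
        # e.g. Single-Path One-Shot: no weights at all
        raise NotImplementedError

    def combine(self, name: str, candidate_outputs: list):
        # e.g. DARTS: a weighted sum of all candidate outputs;
        # e.g. MDENAS: the output of one sampled candidate
        raise NotImplementedError

    def on_network_feedback(self, metrics: dict):
        # e.g. MDENAS: adjust local weightings from global feedback
        pass
\end{python}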
\begin{figure}[hb!]
\begin{minipage}[c]{0.24\textwidth}
\centering
\includegraphics[height=11.5cm]{./images/draw/mobilenetv2.pdf}
\end{minipage}
\hfill
\begin{minipage}[c]{0.5\textwidth}
\small
\begin{python}
"cell_3": {
"name": "SingleLayerCell",
"kwargs": {
"name": "cell_3",
"features_mult": 1,
"features_fixed": -1
},
"submodules": {
"op": {
"name": "MobileInvConvLayer",
"kwargs": {
"kernel_size": 3,
"kernel_size_in": 1,
"kernel_size_out": 1,
"stride": 1,
"expansion": 6.0,
"padding": "same",
"dilation": 1,
"bn_affine": true,
"act_fun": "relu6",
"act_inplace": true,
"att_dict": null,
"fused": false
}
}
}
},
\end{python}
\end{minipage}
\caption{
A high-level view on the MobileNet~V2 architecture~\citep{sandler2018mobilenetv2} in the top left,
and a schematic of the inverted bottleneck block in the bottom left.
This design uses two 1$\times$1 convolutions to change the channel count \textit{n} by an expansion factor of~6, and a spatial 3$\times$3 convolution in their middle.
The text on the right-hand side represents the cell structure by referencing the modules by their names ("name") and their keyworded arguments ("kwargs").
}
\label{u_fig_conf}
\end{figure}
\subsection{Saving, loading, and finalizing networks}
\label{u_networks_save}
As mentioned before, argument trees enable a detailed configuration of every aspect of an experiment, including the network topology itself.
As visualized in Figure~\ref{app_u_argstree_img}, such network definitions can become almost arbitrarily complex.
This becomes disadvantageous once models have to be saved or loaded or when super-networks are finalized into discrete architectures.
Unlike TensorFlow~\citep{tensorflow2015-whitepaper}, the used PyTorch~\citep{pytorch} library saves only the network weights without execution graphs.
External projects like ONNX~\citep{onnx} can be used to export limited graph information but not to rebuild networks using the same code classes and context.
The implemented solution is inspired by the official code\footnote{\url{https://github.com/mit-han-lab/proxylessnas/tree/master/proxyless_nas}} of ProxylessNAS~\citep{proxylessnas}, where every code module defines two functions that enable exporting and importing the entire module state and context.
As typical for hierarchical structures, the state of an outer module contains the states of all modules within.
An example is visualized in Figure~\ref{u_fig_conf}, where one cell in the famous MobileNet V2 architecture is represented as readable text.
The global register can provide any class definition by name (see Section~\ref{u_argtrees_register}) so that an identical class structure can be created and parameterized accordingly.
The same approach that enables saving and loading arbitrary class compositions can also be used to change their structure.
More specifically, an over-complete super-network containing all possible candidate operations can export only a specific configuration subset. The network recreated from this reduced configuration is the result of the architecture search.
This is made possible since the weight strategy controls the use of all candidate operations, as visualized in Figure~\ref{fig_u_decouple}. Similarly, when their configuration is exported, the weight strategy controls which candidates should be part of the finalized network architecture.
In another use case, some modules behave differently in super-networks and finalized architectures. For example, Linear Transformers~\citep{ScarletNAS} supplement skip connections with linear 1$\times$1 convolutions in super-networks to stabilize the training with variable network depths.
When the network topology is finalized, it suffices to simply export the configuration of a skip connection instead of their own.
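For illustration, the two functions that every code module defines for this purpose can be sketched as follows (the dictionary keys follow Figure~\ref{u_fig_conf}; the attribute and method names are illustrative):
\begin{python}
def config_dict(module) -> dict:
    # export the entire module state and context as readable text
    return {
        'name': module.__class__.__name__,
        'kwargs': module.kwargs,
        'submodules': {key: config_dict(sub)
                       for key, sub in module.submodules.items()},
    }

def from_config_dict(cfg: dict):
    # the global register provides any class definition by name,
    # so an identical class structure can be recreated
    cls = Register.get(cfg['name'])
    module = cls(**cfg['kwargs'])
    for key, sub_cfg in cfg.get('submodules', {}).items():
        module.add_submodule(key, from_config_dict(sub_cfg))
    return module
\end{python}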
Another practical way of rebuilding code structures is available through the argument tree configuration, which defines every detail of an experiment (see Section~\ref{u_argtrees_config}).
Parsing the network design and loading the trained weights of a previous experiment requires no further user interaction than specifying its save directory.
This specific way of recreating experiment environments is used extensively in \textit{Single-Path One-Shot} tasks.
In the first step, a super-network is trained to completion. Afterward, when the super-network is used to make predictions for a hyper-parameter optimization method (such as Bayesian optimization or evolutionary algorithms), the entire environment of its training can be recreated. This includes the network design and the dataset, data augmentations, which parts were reserved for validation, regularization techniques, and more.
\section{Discussion and Conclusions}
\label{u_conclusions}
We presented the underlying concepts of UniNAS, a PyTorch-based framework with the ambitious goal of unifying a variety of NAS algorithms in one codebase.
Even though the use cases for this framework changed over time, mostly from DARTS-based to SPOS-based experiments, its underlying design approach made reusing old code possible at every step.
However, several technical details could be changed or improved in hindsight. Most importantly, configuration files should reflect the hierarchy levels (see Section~\ref{u_argtrees_config}) for code simplicity and to avoid concerns about using module types multiple times. The current design favors readability, which is now a minor concern thanks to the graphical user interface.
Other considered changes would improve the code readability but were not implemented due to a lack of necessity and time.
In summary, the design of UniNAS fulfills all original requirements.
Modules can be arranged and combined in almost arbitrary constellations, giving the user an extremely flexible tool to design experiments.
Furthermore, using the graphical user interface does not require writing even a single line of code.
The resulting configuration files contain only the relevant information and thus stay concise even though the framework offers many options.
These features also enable an almost arbitrary network design, combined with any NAS optimization method and any set of candidate operations. Despite that, networks can still be saved, loaded, and changed in various ways.
Although not covered here, several unit tests ensure that the essential framework components keep working as intended.
Finally, what is the advantage of using argument trees over writing code with the same results?
Compared to configuration files, code is more powerful and versatile but will likely suffer from problems described in Section~\ref{u_introduction_available}.
Argument trees make any considerations about which parameters to expose unnecessary and can enforce the use of specific module types and subsets thereof.
However, their strongest advantage is the visualization and manipulation of the entire experiment design with a graphical user interface. This aligns well with Automated Machine Learning (AutoML), which is also intended to make machine learning available to a broader audience.
{\small
\bibliographystyle{iclr2022_conference}
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{D}{ental} cone-beam computerized tomography (CBCT) and intraoral scan (IOS) are used for virtual implant positioning, maxillofacial surgery simulation, and orthodontic treatment planning. Dental CBCT has been widely used for the three-dimensional (3D) imaging of the teeth and jaws \cite{sukovic2003cone,miracle2009conebeam}. Recently, IOS has been increasingly used to capture digital impressions that are replicas of teeth, gingiva, palate, and soft tissue in the oral cavity \cite{mangano2017intraoral,zimmermann2015intraoral}, as digital scanning technologies have rapidly advanced \cite{robles2020digital}. The use of IOS addresses many of the shortcomings of the conventional impression manufacturing techniques \cite{siqueira2021intraoral, manicone2021patient}.
This paper aims to provide a fully automated method of integrating dental CBCT and IOS data into one image such that the integrated image utilizes the strengths and supplements the weaknesses of each image. In dental CBCT, spatial resolution is insufficient for elaborately depicting tooth geometry and interocclusal relationships. Moreover, image degradation associated with metal-induced artifacts is becoming an increasingly frequent problem, as the number of older people with artificial dental prostheses and metallic implants is rapidly increasing with the rapidly aging populations. Metallic objects in the CBCT field of view produce streaking artifacts that highly degrade the reconstructed CBCT images, resulting in a loss of information on the teeth and other anatomical structures\cite{schulze2011artefacts}. IOS can compensate for the aforementioned weaknesses of dental CBCT. IOS provides 3D tooth crown and gingiva surfaces with a high resolution. However, tooth roots are not observed in intraoral digital impressions. Therefore, CBCT and IOS can be complementary to each other. A suitable fusion between CBCT and IOS images allows to provide detailed 3D tooth geometry along with the gingival surface.
Numerous attempts have been made to register dental impression data to maxillofacial models obtained from 3D CBCT images. The registration process is to find a rigid transformation by taking advantage of the properties that the upper and lower jaw bones are rigid and the tooth surfaces are partially overlapping areas (\textit{e.g.}, the crowns of the exposed teeth). Several methods \cite{gateno2003new, uechi2006novel, swennen2007use, xia2009new, swennen2009cone} utilized fiducial markers for registration, which require a complicated process that involves the fabrication of devices with the markers, double CT scanning and post-processing for marker removal. To simplify these processes, virtual reference point-based methods \cite{kim2010integration, lin2013artifact, hernandez2013new, nilsson2016virtual} were proposed to roughly align two models using reference points, and achieve a precise fit by employing an iterative closest point (ICP) method \cite{besl1992method}. ICP is a widely used iterative registration method consisting of the closest point matching between two data and minimization of distances between the paired points. However, the ICP method relies heavily on initialization because it can easily be trapped into a local optimal solution. Therefore, these methods based on ICP typically require the user-involved initial alignment, which is a cumbersome and time-consuming procedure due to manual clicking. Furthermore, registration using ICP can be difficult to achieve acceptable results for patients containing metallic objects \cite{flugge2017registration}. Teeth in CBCT images that are contaminated by metal artifacts prevent accurate point matching with teeth in impressions. Therefore, there is a high demand for a fully automated and robust registration method. Recently, a deep learning-based method \cite{chung2020automatic} was used to automate the initial alignment by extracting pose cues from two data. This approach has limitations in achieving sufficient registration accuracy enough for clinical application. Without the use of a very good initial guess, the point matching for multimodal image registration is affected by the non-overlapping area of the two different modality data (\textit {e.g.}, the soft tissues in IOS and the jaw bones and tooth surfaces contaminated by metal artifacts in CBCT).
For an accurate registration, it is necessary to separate the non-overlapping areas as much as possible to prevent incorrect point matching. Therefore, individual tooth segmentation and identification in CBCT and IOS are required as important preprocessing tasks. In recent years, owing to advances in deep learning methods, numerous fully automated 3D tooth segmentation methods have been developed for CBCT images \cite{lee2020automated,rao2020symmetric,chen2020automatic,cui2019toothNet,jang2021fully} and impression models \cite{lian2020deep,zanjani2021mask,cui2021tsegnet}.
Although the performance of intraoral scanners is improving, full-arch scans have not yet surpassed the accuracy of conventional impressions \cite{zhang2021accuracy,giachetti2020accuracy}. IOS at short distances is available to obtain partial digital impressions that can replace traditional dental models, but it may not yet be suitable for clinical use on long complete-arches due to the global cumulative error introduced during the local image stitching process \cite{ender2019accuracy}. To achieve sophisticated image fusion, it is therefore necessary to correct the stitching errors of IOS.
We propose a fully automated method for registration of CBCT and IOS data as well as correction of IOS stitching errors. The proposed method consists of four parts: (i) individual tooth segmentation and identification module from IOS data (TSIM-IOS); (ii) individual tooth segmentation and identification module from CBCT data (TSIM-CBCT); (iii) global-to-local tooth registration between IOS and CBCT; and (iv) stitching error correction of the full-arch IOS. We developed TSIM-IOS using 2D tooth feature-highlighted images, which are generated by orthographic projection of the IOS data. This approach allows high-dimensional 3D surface models to efficiently segment individual teeth using low-dimensional 2D images. In TSIM-CBCT, we utilize the panoramic image-based deep learning method \cite{jang2021fully}. This method is robust against metal artifacts because it utilizes panoramic images (generated from CBCT images) not significantly affected by metal artifacts. The TSIM-IOS and -CBCT are used to focus only on the teeth, while removing as many non-overlapping areas as possible. In (iii), we then align the two highly overlapping data (\textit{i.e.}, the segmented teeth in the CBCT and IOS data) through global-to-local fashion, which consists of global initialization by fast point feature histograms (FPFH) \cite{rusu2009fast} and local refinement by ICP based on individual teeth (T-ICP). T-ICP allows the closest point matching only between the same individual teeth in the CBCT and IOS. The last part (iv) corrects the stitching errors of IOS using CBCT-derived tooth surfaces. Owing to the reliability of CBCT \cite{baumgaertel2009reliability}, location information of 3D teeth in the CBCT data can be used as a reference for correction of IOS teeth. After registration, each IOS tooth is fixed through a slightly rigid transformation determined by the reference CBCT tooth.
The main contributions of this paper are summarized as follows.
\begin{itemize}
\item To the best of our knowledge, this study is the first to provide a sophisticated fusion of IOS and CBCT data at the level of accuracy required for clinical use.
\item The proposed method can provide accurate intraoral digital impressions that correct cumulative stitching errors.
\item This framework is robust against metal-induced artifacts in low-dose dental CBCT.
\item The combined tooth-gingiva models with individually segmented teeth can be used for occlusal analysis and implant surgical guide production in digital dentistry.
\end{itemize}
The remainder of this paper is organized as follows. Section 2 describes the proposed method in detail. In Section 3, we explain the experimental results. Section 4 presents the discussion and conclusions.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{framework.pdf}
\caption{Overall flow diagram of the proposed method consisting of four parts; tooth segmentation and identification from IOS and CBCT data, global-to-local tooth registration of IOS and CBCT, and stitching error correction in IOS. Therefore, the proposed method integrates IOS and CBCT images into one coordinate system while improving the accuracy of full-arch IOS.}
\label{fig:framework}
\end{figure*}
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=.2\textwidth]{segid_ios.pdf}}~~~
\subfloat[]{\includegraphics[width=.225\textwidth]{segid_cbct.pdf}}
\caption{Results of TSIM-IOS and -CBCT, respectively. The indicated numbers represent mandibular teeth by the universal notation. (a) Individual IOS teeth and their split gingiva parts, and (b) CBCT teeth containing unexposed wisdom teeth.}
\label{fig:segid}
\end{figure}
\section{Method}
The overall framework of the proposed method is illustrated in Fig. \ref{fig:framework}. It is designed to automatically align a patient's IOS model with the same patient's CBCT image. IOS models consist of 3D surfaces (triangular meshes) of the upper and lower teeth, and are acquired in Standard Triangle Language (STL) file format containing 3D coordinates of the triangle vertices. The 3D vertices of IOS data can be expressed as a set of 3D points and the unit of these points is millimeter. Dental CBCT images are isotropic voxel structures consisting of sequences of 2D cross-sectional images, and are saved in Digital Imaging and Communications in Medicine (DICOM) format.
Registration between two different imaging protocols must be separately obtained for the maxilla and mandible. For convenience, only the method for the mandible is described in this section. The method for the maxilla is the same.
\subsection{Individual Tooth Segmentation and Identification in IOS} \label{subsec:IOS}
As shown in Fig. \ref{fig:segid}a, TSIM-IOS decomposes the 3D point set $X$ of the IOS model into
\begin{equation}
X = \underbrace{X_{t_1} \cup \cdots \cup X_{t_J}}_{X_{\mbox{\scriptsize teeth}}}\cup X_{\mbox{\scriptsize gingiva}},
\end{equation}
where each $X_{t_j}$ represents a tooth with the code $t_j$ in $X$, $J$ is the number of teeth in $X$, and $X_{\mbox{\scriptsize gingiva}}$ is the rest including the gingiva in $X$. According to the universal notation system \cite{nelson2014wheeler}, $t_j$ is the number between 1 and 32 that is assigned to an individual tooth to identify the unique tooth. A detailed explanation is provided in Appendix \ref{app:sec1}.
Additionally, we divide the gingiva $X_{\mbox{\scriptsize gingiva}}$ into
\begin{equation} \label{eq:gingiva}
X_{\mbox{\scriptsize gingiva}} = X_{g_1} \cup \cdots \cup X_{g_J},
\end{equation}
where
\begin{equation}
X_{g_j} = \left\{\mathbf{x} \in X_{\mbox{\scriptsize gingiva}} : \underset{\mathbf{x}' \in X_{\mbox{\tiny teeth}}}{\mbox{argmin}}\| \mathbf{x}-\mathbf{x}'\| \in X_{t_j}\right\} .
\end{equation}
Therefore, a point in $X_{\mbox{\scriptsize gingiva}}$ belongs to a separated gingiva $X_{g_j}$ according to the nearest tooth $X_{t_j}$.
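For illustration, this nearest-tooth assignment can be implemented in a few lines (a sketch using NumPy/SciPy; the function name is ours):
\begin{python}
import numpy as np
from scipy.spatial import cKDTree

def split_gingiva(gingiva_pts, tooth_pts_list):
    # gingiva_pts:    (N, 3) points of X_gingiva
    # tooth_pts_list: one (N_j, 3) array per tooth X_{t_j}
    all_teeth = np.vstack(tooth_pts_list)
    labels = np.concatenate([np.full(len(p), j)
                             for j, p in enumerate(tooth_pts_list)])
    # nearest tooth point for every gingiva point
    _, nearest = cKDTree(all_teeth).query(gingiva_pts)
    owner = labels[nearest]
    # X_{g_j}: gingiva points whose nearest tooth point is in X_{t_j}
    return [gingiva_pts[owner == j]
            for j in range(len(tooth_pts_list))]
\end{python}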
\subsection{Individual Tooth Segmentation and Identification in CBCT} \label{subsec:CBCT}
TSIM-CBCT is based on a deep learning-based individual tooth segmentation and identification method developed by Jang \textit{et al.} \cite{jang2021fully}. As shown in Fig. \ref{fig:segid}b, we obtain the teeth point cloud $Y$ that consists of individual tooth point clouds, denoted by
\begin{equation}
Y = \underbrace{Y_{t_1} \cup \cdots \cup Y_{t_J}}_{Y_{\mbox{\scriptsize teeth}}} \cup Y_{\mbox{\scriptsize rest}},
\end{equation}
where each $Y_{t_j}$ represents the $t_j$-tooth for $j=1,\cdots,J$ and $Y_{\mbox{\scriptsize rest}}$ refers to a point cloud of unexposed teeth (\textit{e.g.}, impacted wisdom teeth), if present. Because impacted teeth do not appear in IOS images, they are collected separately in $Y_{\mbox{\scriptsize rest}}$.
Each tooth point cloud $Y_{t_j}$ is obtained from a 3D binary image of the $t_j$-tooth determined by the individual tooth segmentation and identification method \cite{jang2021fully}. The points in $Y_{t_j}$ lie on isosurfaces (approximating the boundary of the segmented tooth image) that are generated by the marching cube algorithm \cite{lewiner2003efficient}. Because the unit of points in $Y_{t_j}$ is associated with the image voxels, the points are scaled in millimeters by the image spacing and slice thickness.
\subsection{Global-to-Local Tooth Registration of IOS and CBCT}
This subsection describes the registration method to find the optimal transformation $\mathcal{T}^*$ such that the transformed point cloud $\mathcal{T}^*(X) = \{\mathcal{T}^*(\mathbf{x}) : \mathbf{x} \in X \}$ best aligns with the target $Y$ in terms of partially overlapping tooth surfaces. The registration problem consists of the following two steps:
\begin{enumerate}
\item Construct a set of correspondences $Corr= \{(\mathbf{x},\mathbf{y})\in X \times Y\}$ between a source $X$ and target $Y$.
\item Find the optimal rigid transformation with the following mean square error minimization to best match the pairs in the correspondences
\begin{equation}
\mathcal{T}^* = \underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2,
\end{equation}
where $SE(3)$ is the set of rigid transformations that are modeled with a $4\times4$ matrix determined by three angles and a translation vector.
\end{enumerate}
Here, we adopt two registration methods: FPFH \cite{rusu2009fast} for global initial alignment, and an improved ICP using individual tooth segmentation for local refinement.
\subsubsection{Global initial alignment of the IOS and CBCT teeth}
We compute the two sets of FPFH vectors \cite{rusu2009fast}; $\mbox{FPFH}(X_{\mbox{\scriptsize teeth}})=\{\mbox{FPFH}(\mathbf{x}):\mathbf{x} \in X_{\mbox{\scriptsize teeth}}\}$ and $\mbox{FPFH}(Y_{\mbox{\scriptsize teeth}})=\{\mbox{FPFH}(\mathbf{y}):\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}\}$. $\mbox{FPFH}(\mathbf{x})$ represents not only the geometric features of the normal vector and the curvature at $\mathbf{x} \in X_{\mbox{\scriptsize teeth}}$, but also the relevant information considering its neighboring points over $X_{\mbox{\scriptsize teeth}}$. The details of FPFH are provided in Appendix \ref{app:sec2}.
$\mbox{FPFH}(X_{\mbox{\scriptsize teeth}})$ and $\mbox{FPFH}(Y_{\mbox{\scriptsize teeth}})$ are used to find correspondences between $X_{\mbox{\scriptsize teeth}}$ and $Y_{\mbox{\scriptsize teeth}}$. For each $\mathbf{x} \in X_{\mbox{\scriptsize teeth}}$, we select $\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}$, denoted by $\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})$, whose FPFH vector is most similar to $\mbox{FPFH}(\mathbf{x})$:
\begin{equation}
\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})=\underset{\mathbf{y} \in Y_{\mbox{\tiny teeth}}}{\text{argmin}}~{\|\mbox{FPFH}(\mathbf{x})-\mbox{FPFH}(\mathbf{y})\|}.
\end{equation}
Similarly, we compute $\mbox{match}^{\mbox{\tiny FPFH}}_{X_{\mbox{\tiny teeth}}}(\mathbf{y})$ for all $\mathbf{x} \in X_{\mbox{\scriptsize teeth}}$. Then, we obtain the correspondence set
\begin{equation}
Corr = Corr_{X_{\mbox{\tiny teeth}}} \cap Corr_{Y_{\mbox{\tiny teeth}}},
\end{equation}
where
\begin{align}
&Corr_{X_{\mbox{\tiny teeth}}} = \left\{\left(\mathbf{x},\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})\right):\mathbf{x} \in X_{\mbox{\scriptsize teeth}} \right\},\\
&Corr_{Y_{\mbox{\tiny teeth}}} = \left\{\left(\mbox{match}^{\mbox{\tiny FPFH}}_{X_{\mbox{\tiny teeth}}}(\mathbf{y}),\mathbf{y} \right):\mathbf{y} \in Y_{\mbox{\scriptsize teeth}} \right\}.
\end{align}
The set $Corr$ contains pairs $(\mathbf{x},\mathbf{y}) \in X_{\mbox{\scriptsize teeth}} \times Y_{\mbox{\scriptsize teeth}}$ where $\mbox{FPFH}(\mathbf{x})$ and $\mbox{FPFH}(\mathbf{y})$ are the most similar to each other. However, such simple feature information alone cannot provide a proper point matching between $X_{\mbox{\scriptsize teeth}}$ and $Y_{\mbox{\scriptsize teeth}}$, because there are too many points with similar geometric features in the point clouds. To filter out inaccurate pairs from the set ${Corr}$, we randomly sample three pairs $(\mathbf{x}_1,\mathbf{y}_1)$, $(\mathbf{x}_2,\mathbf{y}_2)$, $(\mathbf{x}_3,\mathbf{y}_3)\in {Corr}$ and select them if the following conditions \cite{zhou2016fast} are met, and drop them otherwise:
\begin{equation} \label{eq:filter}
\tau < \frac{\|\mathbf{x}_i - \mathbf{x}_j\|}{\|\mathbf{y}_i - \mathbf{y}_j\|} < \frac{1}{\tau},~~\text{for}~1\leq i<j \leq 3,
\end{equation}
where $\tau$ is a number close to 1. We denote this filtered subset as ${Corr}^{(0)}$.
Then, the initial transformation is determined by
\begin{equation}
\mathcal{T}^{(0)}=\underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(0)}} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2.
\end{equation}
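For illustration, the mutual matching and the filtering of Eq. \eqref{eq:filter} can be sketched as follows (using NumPy/SciPy), assuming that the FPFH vectors of both point clouds have already been computed:
\begin{python}
import numpy as np
from scipy.spatial import cKDTree

def mutual_matches(fpfh_x, fpfh_y):
    # Corr: pairs whose FPFH vectors are mutually most similar
    xy = cKDTree(fpfh_y).query(fpfh_x)[1]   # best match in Y per x
    yx = cKDTree(fpfh_x).query(fpfh_y)[1]   # best match in X per y
    return [(i, xy[i]) for i in range(len(fpfh_x)) if yx[xy[i]] == i]

def keep_triplet(pts_x, pts_y, triplet, tau=0.9):
    # accept three randomly sampled pairs only if they roughly
    # preserve pairwise distances (tau close to 1)
    for a in range(3):
        for b in range(a + 1, 3):
            (i1, j1), (i2, j2) = triplet[a], triplet[b]
            r = (np.linalg.norm(pts_x[i1] - pts_x[i2])
                 / np.linalg.norm(pts_y[j1] - pts_y[j2]))
            if not tau < r < 1.0 / tau:
                return False
    return True
\end{python}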
\subsubsection{Local refinement of the roughly aligned teeth}
We denote $X_{\mbox{\scriptsize teeth}}$ transformed by the previously obtained $\mathcal{T}^{(0)}$ as $X_{\mbox{\scriptsize teeth}}^{(0)} = X_{t_1}^{(0)} \cup \cdots \cup X_{t_J}^{(0)}$, where $X_{t_j}^{(0)} = \mathcal{T}^{(0)}(X_{t_j})$ for $j=1,\cdots,J$. $X_{\mbox{\scriptsize teeth}}^{(0)}$ and $Y_{\mbox{\scriptsize teeth}}$ are then roughly aligned, but fine-tuning is needed to achieve accurate registration. A fine rigid transformation is obtained through an iterative process, which gradually improves the correspondence finding. We propose an improved ICP (T-ICP) method with point matching based on individual teeth.
For $k \geq 1$, we denote $X_{\mbox{\scriptsize teeth}}^{(k)} = \mathcal{T}^{(k)}(X_{\mbox{\scriptsize teeth}}^{(k-1)})$. Here, the $k$-th rigid transformation $\mathcal{T}^{(k)}$ is determined by
\begin{equation}
\mathcal{T}^{(k)} = \underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(k)}} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2.
\end{equation}
The correspondence set $Corr^{(k)}$ for $k$ is given by
\begin{equation}
Corr^{(k)} = \left\{ \left(\mathbf{x}, \mbox{match}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x}) \right) : \mathbf{x} \in X_{\mbox{\scriptsize teeth}}^{(k-1)} \right\} \cap P^{(k)},
\end{equation}
where
\begin{align}
& \mbox{match}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})=\underset{\mathbf{y} \in Y_{\mbox{\tiny teeth}}}{\mbox{argmin}} \|\mathbf{x}-\mathbf{y}\|, \\
& P^{(k)} = \bigcup_{j=1}^n \left\{(\mathbf{x},\mathbf{y}) \in X_{t_j}^{(k-1)} \times Y_{t_j} \right\}.
\end{align}
Using the set $P^{(k)}$ prevents undesired correspondences between two teeth with different codes. Note that this is the vanilla ICP when $P^{(k)}$ is not used. The final rigid transformation $\mathcal{T}^*$ is obtained by the following composition of transformations: $\mathcal{T}^*=\mathcal{T}^{(K)} \circ \cdots \circ \mathcal{T}^{(0)}$, where $K$ is the number of iterations until the stopping criterion is satisfied for a given $\varepsilon>0$:
\begin{equation}
\sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(K)}} \| \mathcal{T}^{(K)} \circ \cdots \circ \mathcal{T}^{(0)}(\mathbf{x}) - \mathbf{y} \|<\varepsilon.
\end{equation}
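A sketch of one T-ICP matching step and the least-squares fit (via the Kabsch algorithm) is given below; restricting the closest-point search to identical tooth codes implements the set $P^{(k)}$:
\begin{python}
import numpy as np
from scipy.spatial import cKDTree

def ticp_pairs(X_teeth, Y_teeth):
    # X_teeth, Y_teeth: dicts mapping tooth code -> (N, 3) array;
    # point matching only between teeth with the same code
    src, dst = [], []
    for code, x in X_teeth.items():
        y = Y_teeth.get(code)
        if y is None:
            continue
        _, idx = cKDTree(y).query(x)
        src.append(x)
        dst.append(y[idx])
    return np.vstack(src), np.vstack(dst)

def best_rigid(src, dst):
    # least-squares rigid transformation (Kabsch algorithm)
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:    # avoid a reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs       # dst ~ src @ R.T + t
\end{python}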
\subsection{Stitching Error Correction in IOS}
Next, we edit the IOS models with stitching errors by referring to the CBCT images. We denote $X_{t_j}^*=\mathcal{T}^*(X_{t_j})$ and $X_{g_j}^*=\mathcal{T}^*(X_{g_j})$ for $j=1,\cdots,J$. Each tooth $X^*_{t_j}$ is transformed by a corrective rigid transformation $\mathcal{T}_j^{**}$, which is obtained by applying the vanilla ICP to the sets $X^*_{t_j-1} \cup X^*_{t_j} \cup X^*_{t_j+1}$ and $Y_{t_j-1} \cup Y_{t_j} \cup Y_{t_j+1}$ as the source and target, respectively. Here, $X^*_{t_j-1}$ (or $X^*_{t_j+1}$) is an empty set if $t_j-1$ (or $t_j+1$) is not equal to $t_{j'}$ for every $j'=1,\cdots,J$. Using the individual corrective transformations, the IOS stitching errors are corrected separately by $X_{t_j}^{**}=\mathcal{T}_{j}^{**}(X_{t_j}^{*})$ for $j=1,\cdots,J$. This procedure uses one tooth and its adjacent teeth on either side for a reliable correction, taking advantage of the fact that narrow digital scanning is accurate. It then remains to fix the gingiva, whose boundary shares the boundaries with the teeth. To fit the boundaries between the gingiva and the individually transformed teeth, the gingival surface is divided according to the areas in contact with the individual teeth by Eq. \eqref{eq:gingiva}. Therefore, the rectified gingiva is obtained by $X_{g_j}^{**} = \mathcal{T}_{j}^{**}(X_{g_j}^{*})$ for $j=1,\cdots,J$.
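A sketch of this correction loop is given below, where icp() stands for any vanilla ICP routine returning a rigid transformation $(R, \mathbf{t})$:
\begin{python}
import numpy as np

def correct_stitching(X_teeth, Y_teeth, icp):
    # X_teeth: registered IOS teeth, Y_teeth: CBCT reference teeth,
    # both dicts mapping tooth code -> (N, 3) array
    corrected = {}
    for t in X_teeth:
        # one tooth and its adjacent teeth on either side (if present)
        group = [c for c in (t - 1, t, t + 1)
                 if c in X_teeth and c in Y_teeth]
        src = np.vstack([X_teeth[c] for c in group])
        dst = np.vstack([Y_teeth[c] for c in group])
        R, trans = icp(src, dst)   # corrective transform T_j**
        corrected[t] = X_teeth[t] @ R.T + trans
    return corrected
\end{python}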
\section{Experiments and Results}
Experiments were carried out using CBCT images in DICOM format and IOS models in STL format. Each CBCT image was produced by a dental CBCT machine, DENTRI-X (HDXWILL), which uses a tube voltage of 90kVp and a tube current of 10mA. The size of the images obtained by this machine is $800\times800\times400$. The pixel spacing and the slice thickness are both $0.2$mm. Each IOS model was scanned by one of two intraoral scanners: i500 (Medit) and TRIOS 3 (3shape). An IOS model covers either the maxilla or the mandible, with approximately 200,000 vertices and 120,000 triangular faces. This dataset was provided by HDXWILL. Additionally, we used maxillary and mandibular digital dental models to train TSIM-IOS; this dataset was collected by the Yonsei University College of Dentistry. Personal information in all datasets was de-identified for patient privacy and confidentiality.
In Sections \ref{subsec:IOS} and \ref{subsec:CBCT}, the proposed deep convolutional network models were trained with labeled datasets for individual tooth segmentation and identification. For TSIM-IOS, 71 maxillary and mandibular dental models were used for training and 35 models for testing. Similarly, for TSIM-CBCT, 49 3D CBCT images were used for training and 23 images for testing.
\begin{figure*}[h]
\centering
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap1.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap2.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap3.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap4.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap5.pdf}}
\caption{Qualitative comparison results of registration methods. (a) MR, (b) CPD, (c) FPFH, (d) FPFH followed by ICP, and (e) the proposed method. The colors in the teeth represent distances between the IOS and CBCT tooth surfaces.}
\label{fig:quantitative_reg_result}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=.4\textwidth]{comparison_fpfh.pdf}
\caption{Correspondence pairs of FPFH-based methods. The figure on the left shows poor matching from the FPFH method without TSIM. On the other hand, the figure on the right shows modest correspondences between the teeth obtained by TSIM.}
\label{fig:comparison_fpfh}
\end{figure}
\subsection{Evaluation and Result of the Proposed Registration Method}
We used 22 pairs of IOS models and CBCT images to evaluate the performance of the proposed registration method. Each pair was obtained from the same patient. To measure the registration accuracy, we used a landmark distance between tooth landmarks pre-marked on IOS and CBCT data:
\begin{equation}\label{eq:land}
E_{land}(\hat{X},\hat{Y};\mathcal{T}) = \frac{1}{N}\sum_{i=1}^N \| \mathcal{T}(\hat{\mathbf{x}}_i)-\hat{\mathbf{y}}_i\|,
\end{equation}
where $\mathcal{T}$ is a rigid transformation, and $\hat{X}=\{\hat{\mathbf{x}}_1,\cdots,\hat{\mathbf{x}}_N\}$ and $\hat{Y}=\{\hat{\mathbf{y}}_1,\cdots,\hat{\mathbf{y}}_N\}$ are the landmark sets of the pair of IOS and CBCT data, respectively. These landmarks were selected at points with discernible features, such as cusps. In addition, we computed a surface distance from the IOS tooth surfaces to the CBCT tooth surfaces:
\begin{equation}\label{eq:surf}
E_{surf}(\bar{X},\bar{Y};\mathcal{T}) = \sup_{\bar{\mathbf{x}} \in \bar{X}} \inf_{\bar{\mathbf{y}} \in \bar{Y}} \| \mathcal{T}(\bar{\mathbf{x}})-\bar{\mathbf{y}} \|,
\end{equation}
where $\bar{X}$ and $\bar{Y}$ are the ground-truth tooth segmentations of IOS and CBCT data, respectively. The metric $E_{surf}$ evaluates how far the IOS crown surface $\bar{X}$ is from the CBCT-derived tooth surface $\bar{Y}$.
To verify the effectiveness of the proposed method, we compared it with manual clicking registration followed by ICP (MR), coherent point drift (CPD) \cite{myronenko2010point}, FPFH, and FPFH followed by ICP. These methods were implemented using the raw IOS models and skull models, where the latter were obtained by applying thresholding segmentation and the marching cube algorithm to the CBCT images. Table \ref{tbl:eval_reg} provides the quantitative evaluations of the methods, and Fig. \ref{fig:quantitative_reg_result} displays the qualitative results by visualizing distance maps between the ground-truth tooth surfaces of CBCT and IOS, which are aligned by the rigid transformations obtained from the employed methods. Also, we performed an ablation study to demonstrate the advantage of TSIM-IOS and -CBCT, as reported in Table \ref{tbl:eval_reg}.
\begin{table}[]
\footnotesize
\centering
\caption{Quantitative Comparison Results of Registration Methods. \label{tbl:eval_reg}}\vskip 0.0in
\begin{tabular}{ccccccc} \hline
& {\bf Method} &{\bf Landmark (mm)} & {\bf Surface (mm)} \\ \cline{1-4}
\multirow{4}{*}{\parbox{.95cm}{w/o TSIM}} & {MR} & {$1.47 \pm 2.40$} & {$3.11 \pm 3.68$}\\
& {CPD} & {$12.77 \pm 6.12$} & {$17.57 \pm 6.26$}\\
& {FPFH} & {$0.46 \pm 0.34$} & {$0.91 \pm 0.54$}\\
& {FPFH + ICP} & {$0.28 \pm 0.11$} & {$0.55 \pm 0.10$}\\ \cline{1-4}
\multirow{5}{*}{\parbox{.95cm}{w/ TSIM}} & {MR} & {$0.67 \pm 1.66$} & {$1.70 \pm 3.41$}\\
& {CPD} & {$3.68 \pm 2.62$} & {$5.01 \pm 3.55$}\\
& {FPFH} & {$0.40 \pm 0.19$} & {$0.71 \pm 0.16$}\\
& {FPFH + ICP} & {$0.22 \pm 0.10$} & {$0.48 \pm 0.09$}\\
& {\bf Proposed method} & {$\bf 0.22 \pm 0.09$} & {$\bf 0.47 \pm 0.08$}\\
\hline
\end{tabular}
\end{table}
\begin{table*}
\footnotesize
\centering
\caption{Results of Stitching Error Correction according to Registration Methods. \label{tbl:eval_cor}}\vskip 0.0in
\begin{tabular}{cccccccc} \hline
& {\bf Method} & {\bf Landmark (mm)} & {\bf Difference (mm)} & {\bf Surface (mm)} & {\bf Difference (mm)}\\ \cline{1-6}
\multirow{5}{*}{\parbox{.8cm}{w/ TSIM}} & {MR} & {$0.53 \pm 1.69$} & {$-0.14 \pm 0.03$} & {$1.49 \pm 3.50$} & {$-0.21 \pm 0.09$}\\
& {CPD} & {$3.62 \pm 2.71$} & {$-0.06 \pm 0.09$} & {$4.94 \pm 3.69$} & {$-0.07 \pm 0.14 $}\\
& {FPFH} & {$0.14 \pm 0.09$} & {$-0.26 \pm -0.10$} & {$0.40 \pm 0.15$} & {$ -0.31 \pm -0.01 $}\\
& {FPFH + ICP} & {$0.12 \pm 0.07$} & {$-0.10 \pm -0.03$} & {$0.32 \pm 0.12$} & {$-0.16 \pm 0.03 $}\\
& {\bf Proposed method} & {$\bf 0.11\pm 0.07$} & {$\bf -0.10 \pm -0.02$} & {$\bf 0.30 \pm 0.11$} & {$\bf -0.17 \pm 0.03$}\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\subfloat{\includegraphics[width=.21\textwidth]{result1.pdf}}~~~
\subfloat{\includegraphics[width=.21\textwidth]{result3.pdf}}~~~
\subfloat{\includegraphics[width=.21\textwidth]{result4.pdf}}~~~
\subfloat{\includegraphics[width=.21\textwidth]{result2.pdf}}
\caption{Qualitative results before and after correction for four selected evaluation datasets. The yellow and red lines represent contours of the IOS models obtained with the proposed registration and correction methods, respectively. The contours are obtained by cutting the IOS models along the corresponding CT slices. The two contours almost overlap, but differences appear at the ends of the arches.}
\label{fig:correction_results}
\end{figure*}
When the source and target point clouds only partially overlap, MR and CPD were less accurate than FPFH, suggesting that the feature-based method is more suitable than interactive or probabilistic methods. Above all, these methods suffer from extraneous points, because the non-overlapping areas between the IOS and skull models (\textit{i.e.}, alveolar bone in CBCT and soft tissue in IOS) occupy most of the surfaces. Under such conditions, FPFH may produce inaccurate correspondence pairs due to the non-overlapping points that are not properly filtered out in Eq. \eqref{eq:filter}, as shown in Fig. \ref{fig:comparison_fpfh}. Therefore, the use of TSIM-IOS and -CBCT is beneficial, as it eliminates the areas that adversely affect accurate registration. In the ablation study, the methods with TSIM showed improved performance compared to those without TSIM. Still, MR and CPD have limitations: MR cannot be automated, and the accuracy of CPD suffers from the roots of the CBCT teeth. To precisely match the models roughly aligned by FPFH, we developed T-ICP, an improved ICP method that uses the individual tooth segmentations. Adopting T-ICP instead of ICP led to increased accuracy. The advantage of T-ICP is that it avoids point correspondences between adjacent teeth with different codes; this constraint prevents unwanted correspondences by allowing point matching only between corresponding CBCT and IOS teeth.
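A minimal sketch of the T-ICP correspondence rule is given below; it assumes NumPy point arrays with per-point tooth codes and uses the standard SVD (Kabsch) solution for the rigid update. It is a conceptual illustration, not our optimized implementation.
\begin{verbatim}
# Conceptual sketch of one T-ICP iteration: nearest-neighbour pairs are
# allowed only within the same tooth code, so IOS points never match an
# adjacent CBCT tooth.
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_pairs(P, Q):
    """Least-squares rigid transform mapping P onto Q (Kabsch/SVD)."""
    p0, q0 = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - p0).T @ (Q - q0))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, q0 - R @ p0

def ticp_iteration(ios_pts, ios_code, cbct_pts, cbct_code):
    src, dst = [], []
    for c in np.unique(cbct_code):
        pts = ios_pts[ios_code == c]
        ref = cbct_pts[cbct_code == c]
        if len(pts) == 0 or len(ref) == 0:
            continue
        _, idx = cKDTree(ref).query(pts)   # same-code matches only
        src.append(pts)
        dst.append(ref[idx])
    return rigid_from_pairs(np.vstack(src), np.vstack(dst))
\end{verbatim}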
\subsection{Correction of the IOS Stitching Errors}
This subsection presents the results before and after the correction of distortions in IOS, which arise in the stitching process of locally scanned images. Table \ref{tbl:eval_cor} reports the correction results for the registration methods with TSIM used in the subsection above. All post-correction accuracies increased compared with the pre-correction accuracies. However, these correction results depend on the performance of the registration methods: each IOS tooth aligned with CBCT in the previous registration step is used as the initial guess for determining its corrective transformation, and the IOS teeth must be as close as possible to the CBCT teeth, as the ICP may otherwise become stuck in local minima. Fig. \ref{fig:correction_results} presents the results of the proposed registration and correction methods. Due to accumulated stitching errors, the scanned arches tend to be narrower or wider than the actual arches. Accordingly, the registration results show that the full-arch IOS models deviate slightly at the ends of the arches, whereas the corrected IOS models fit the edges of the teeth in the CBCT images.
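Conceptually, the correction step is a per-tooth refinement on top of the global registration, as the following sketch illustrates; here \texttt{rigid\_icp} is a hypothetical helper (e.g., the T-ICP step above iterated to convergence on a single tooth), not a specific library routine.
\begin{verbatim}
# Conceptual sketch of the stitching-error correction: each globally
# registered IOS tooth gets its own corrective rigid transform.
# rigid_icp() is a hypothetical helper, not a library call.
import numpy as np

def correct_stitching(ios_teeth, cbct_teeth, rigid_icp):
    """ios_teeth/cbct_teeth: dicts tooth_code -> (n, 3) point arrays,
    already coarsely aligned by the global registration."""
    corrected = {}
    for code, pts in ios_teeth.items():
        R, t = rigid_icp(pts, cbct_teeth[code])  # near-identity init
        corrected[code] = pts @ R.T + t
    return corrected
\end{verbatim}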
\section{Discussion and Conclusion}
In this paper, we developed a fully automatic registration and correction technique that integrates two different imaging modalities (\textit{i.e.}, IOS and CBCT images) in one scene. The proposed method is intended not only to compensate the CBCT-derived tooth surfaces with the high-resolution surfaces of IOS, but also to correct cumulative IOS stitching errors across the entire dental arch by referring to CBCT. The most important contribution of the proposed method is its registration accuracy at the level required for clinical application, even with severe metal artifacts in CBCT. This accuracy is achieved by the use of TSIM-IOS and -CBCT, which minimizes the non-congruent points between the CBCT and IOS data. The tooth-focused approach addresses the drawbacks of existing methods by achieving improved accuracy and full automation. Moreover, this approach helps to correct full-arch digital impressions with distortion caused by stitching errors.
The fusion of the CBCT images and IOS models provides high-resolution crown surfaces even in the presence of severe metal-related artifacts in the CBCT images. Metal artifact reduction (MAR) in dental CBCT is known to be one of the most difficult and important problems. By avoiding the challenging problem of MAR with the help of IOS, the merged image may be used for occlusal analysis. The proposed multimodal data integration system can provide a jaw-tooth-gingiva composite model, which is a basic tool in the digital dentistry workflow. Thus, it may be used to produce a surgical wafer for orthognathic surgical planning, or an orthodontic mini-screw guide that reduces failure by minimizing root contact. Furthermore, because the jaw-tooth-gingiva model is componentized into jaw bones, individual teeth, and soft tissues (gingiva and palate), it is versatile and practical for various dental treatment tasks (\textit{e.g.}, dental implant placement, orthodontic simulation and evaluation).
The proposed method can eliminate the hassle of traditional dental prosthetic treatments, which are labor-intensive, costly, require at least two individual visits, and require a temporary prosthesis to be worn until the final crown is in place. Moreover, if the final crown made in the dental laboratory does not fit properly at the second visit, the patient and dentist have to repeat the previous procedure, and the laboratory may have to redesign the restoration prosthesis. Note that the proposed integration of dental CBCT and IOS data can provide an alternative to traditional impressions, thereby reducing the time-consuming laboratory procedure of manually editing individual teeth using a computer-aided interface.
\section*{Acknowledgment}
This research was supported by a grant of the Korea Health Technology R\&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health \& Welfare, Republic of Korea (grant number: HI20C0127). We would like to express our deepest gratitude to HDXWILL, which shared its dataset.
\section{Introduction}
In noble liquid detectors for dark matter searches \cite{Chepel13} and low-energy neutrino experiments \cite{Majumdar21}, the scattered particle produces two types of signals: that of primary scintillation, produced in the liquid and recorded promptly ("S1"), and that of primary ionization, produced in the liquid and recorded with a delay ("S2"). In two-phase (liquid-gas) detectors \cite{Akimov21}, to record the S2 signal proportional electroluminescence (EL) is used produced by drifting electrons in the gas phase under high enough electric fields.
According to modern concepts~\cite{Buzulutskov20}, there are three mechanisms responsible for proportional EL in noble gases: that of excimer (e.g. Ar$^*_2$) emission in the vacuum ultraviolet (VUV) \cite{Oliveira11}, that of emission due to atomic transitions in the near infrared (NIR) \cite{Oliveira13,Buzulutskov17}, and that of neutral bremsstrahlung (NBrS) emission in the UV, visible and NIR range \cite{Buzulutskov18}. These three mechanisms are referred to as excimer (ordinary) EL, atomic EL and NBrS EL, respectively.
NBrS EL is due to bremsstrahlung of drifting electrons scattered on neutral atoms:
\begin{eqnarray}
\label{Rea-NBrS-el}
e^- + \mathrm{A} \rightarrow e^- + \mathrm{A} + h\nu \; .
\end{eqnarray}
The presence of NBrS EL in two-phase Ar detectors has for the first time been demonstrated in our previous work~\cite{Buzulutskov18}, both theoretically and experimentally. Recently, a similar theoretical approach has been applied to all noble gases, i.e. to He, Ne, Ar, Kr and Xe, to calculate the photon yields and spectra for NBrS EL \cite{Borisova21}. NBrS EL in noble gases was further studied experimentally in \cite{Bondar20,Tanaka20,Kimura20,Takeda20,Takeda20a,Aoyama21,Aalseth21,Monteiro21} and theoretically in \cite{Amedo21}.
On the other hand, much less is known about proportional EL in noble liquids \cite{Buzulutskov20,Masuda79,Schussler00,Aprile14,Ye14,Lightfoot09,Stewart10}. In a sense, the experimental data are even confusing. Indeed, in liquid Ar the observed threshold in the electric field for proportional EL, of about 60 kV/cm \cite{Buzulutskov20,Lightfoot09}, was 2 orders of magnitude less than expected for excimer EL \cite{Stewart10}. In liquid Xe, the EL threshold was more reasonable, around 400 kV/cm, but some puzzling EL events were observed below this threshold \cite{Aprile14}.
In our previous works \cite{Buzulutskov18,Buzulutskov20} it was suggested that these puzzling events at unexpectedly low fields might be induced by proportional EL produced by drifting electrons in the noble liquid due to the NBrS effect, the latter having no threshold in the electric field. In this work we verify this hypothesis: we extend the theoretical approach developed for noble gases to noble liquids, in order to develop a quantitative theory that can predict the photon yields and spectra for NBrS EL in all noble liquids.
What is new in this work is that the electron energy and transport parameters in noble liquids are calculated in the framework of the rigorous Cohen-Lekner \cite{Cohen67} and Atrazhev \cite{Atrazhev85} theory.
In this theory, the electron transport through the liquid is considered as a sequence of single scatterings on an effective potential. Therefore, a parameter such as the electron scattering cross section can be used in the liquid in a way similar to that in the gas \cite{Akimov21}. An important concept of the theory is the distinction between energy transfer scattering, which changes the electron energy, and momentum transfer scattering, which only changes the direction of the electron velocity. The two processes have been assigned separate cross sections \cite{Cohen67,Atrazhev85,Stewart10}: that of energy transfer (also called effective) and that of momentum transfer. These are obvious analogs of those in the gas, namely of the total elastic and momentum transfer (transport) cross sections, respectively. The latest modifications of the theory can be found elsewhere \cite{Boyle15,Boyle16}.
Accordingly, in this work the photon yields and spectra are calculated for NBrS EL in all noble liquids: in liquid He, Ne, Ar, Kr and Xe. The relevance of the results obtained to the development of noble liquid detectors for dark matter searches and neutrino detection is also discussed.
\section{Theoretical formulas}
To calculate the photon yields and spectra for NBrS EL in noble liquids we used the approach developed for noble gases in~\cite{Buzulutskov18}. Let us briefly recall the main points of this approach.
The differential cross section for NBrS photon emission is expressed via the electron-atom total elastic cross section ($\sigma _{el}(E)$)~\cite{Buzulutskov18,Park00,Firsov61,Kasyanov65,Dalgarno66,Biberman67}:
\begin{eqnarray}
\label{Eq-sigma-el}
\frac{d\sigma}{d\nu} = \frac{8}{3} \frac{r_e}{c} \frac{1}{h\nu} \left(\frac{E - h\nu}{E} \right)^{1/2} \times \hspace{40pt} \nonumber \\ \times \ [(E-h\nu) \ \sigma _{el}(E) \ + \ E \ \sigma _{el}(E - h\nu) ] \; ,
\end{eqnarray}
where $r_e=e^2/m c^2$ is the classical electron radius, $c=\nu \lambda$ is the speed of light, $E$ is the initial electron energy and $h\nu$ is the photon energy.
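For reference, a direct Python transcription of Eq.~\ref{Eq-sigma-el}, used in the numerical calculations below, is given here; the interpolation of $\sigma_{el}$ from the tabulated cross sections is assumed to be supplied by the user (energies in eV, cross sections in cm$^2$).
\begin{verbatim}
# Minimal transcription of the NBrS differential cross section above.
# sigma_el is a user-supplied callable (e.g. an interpolation of the
# tabulated cross sections), in cm^2 for energies in eV.
import numpy as np

R_E = 2.8179403262e-13   # classical electron radius, cm
C   = 2.99792458e10      # speed of light, cm/s

def dsigma_dnu(E, hnu, sigma_el):
    """d(sigma)/d(nu) in cm^2 s; zero above the kinematic limit."""
    if hnu >= E:
        return 0.0
    return (8.0/3.0) * (R_E/C) / hnu * np.sqrt((E - hnu)/E) \
        * ((E - hnu)*sigma_el(E) + E*sigma_el(E - hnu))
\end{verbatim}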
To be able to compare results at different medium densities and temperatures, we need to calculate the reduced EL yield ($Y_{EL}/N$) as a function of the reduced electric field ($\mathcal{E}/N$), where $\mathcal{E}$ is the electric field and $N$ is the atomic density. The reduced EL yield is defined as the number of photons produced per unit drift path and per drifting electron, normalized to the atomic density; for NBrS EL it can be described by the following equation \cite{Buzulutskov18}:
\begin{eqnarray}
\label{Eq-NBrS-el-yield}
\left( \frac{Y_{EL}}{N}\right)_{NBrS} = \int\limits_{\lambda_1}^{\lambda_2} \int\limits_{h\nu}^{\infty}\frac{\upsilon_e}{\upsilon_d}
\frac{d\sigma}{d\nu} \frac{d\nu}{d\lambda} f(E) \ dE \ d\lambda
\; ,
\end{eqnarray}
where $\upsilon_e=\sqrt{2E/m_e}$ is the electron velocity of chaotic motion, $\upsilon_d$ is the electron drift velocity, $\lambda_1-\lambda_2$ is the sensitivity region of the photon detector,
$d\nu/d\lambda=-c/\lambda^2$, $f(E)$ is the electron energy distribution function normalized as
\begin{eqnarray}
\label{Eq-norm-f}
\int\limits_{0}^{\infty} f(E) \ dE = 1 \; .
\end{eqnarray}
The distribution function with a prime, $f^\prime=f/E^{1/2}$, is often used instead of $f$; it is normalized as
\begin{eqnarray}
\label{Eq-norm-fprime}
\int\limits_{0}^{\infty} E^{1/2} f^\prime(E) \ dE = 1 \; .
\end{eqnarray}
$f^\prime$ is considered to be more enlightening than $f$, since in the limit of zero electric field it tends to a Maxwellian distribution.
Consequently, the spectrum of the reduced EL yield is
\begin{eqnarray}
\label{Eq-NBrS-el-yield-spectrum}
\frac{d (Y_{EL}/N)_{NBrS}}{d\lambda} =
\int\limits_{h\nu}^{\infty}\frac{\upsilon_e}{\upsilon_d}
\frac{d\sigma}{d\nu} \frac{d\nu}{d\lambda} f(E) \ dE \
\; .
\end{eqnarray}
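In practice, Eqs.~\ref{Eq-NBrS-el-yield} and \ref{Eq-NBrS-el-yield-spectrum} are evaluated by numerical quadrature. The following sketch illustrates the scheme (trapezoidal rule; energies in eV, wavelengths in nm, cross sections in cm$^2$, $f=E^{1/2}f^\prime$ normalized per Eq.~\ref{Eq-norm-f}); the differential cross section is repeated from the previous sketch so that the fragment is self-contained.
\begin{verbatim}
# Numerical scheme for the NBrS spectrum and reduced EL yield
# (trapezoidal quadrature). E [eV] is the energy grid, f the electron
# energy distribution (integral f dE = 1), v_d [cm/s] the drift
# velocity, sigma_el a callable in cm^2.
import numpy as np

R_E, C = 2.8179403262e-13, 2.99792458e10  # cm, cm/s
M_E_EV, HC_EV_NM = 510998.95, 1239.84193  # m_e c^2 [eV], h c [eV nm]

def dsigma_dnu(E, hnu, sigma_el):
    if hnu >= E:
        return 0.0
    return (8.0/3.0)*(R_E/C)/hnu*np.sqrt((E - hnu)/E) \
        * ((E - hnu)*sigma_el(E) + E*sigma_el(E - hnu))

def spectrum(lam_nm, E, f, v_d, sigma_el):
    """d(Y/N)/d(lambda): photons per electron, per nm, per (N * cm)."""
    hnu = HC_EV_NM / lam_nm                    # photon energy, eV
    v_e = C*np.sqrt(2.0*E/M_E_EV)              # chaotic speed, cm/s
    ds = np.array([dsigma_dnu(e, hnu, sigma_el) for e in E])
    dnudlam = C/(lam_nm*1e-7)**2 * 1e-7        # |dnu/dlambda|, Hz/nm
    return np.trapz((v_e/v_d)*ds*dnudlam*f, E)

def reduced_yield(lam1, lam2, E, f, v_d, sigma_el, n_lam=200):
    lams = np.linspace(lam1, lam2, n_lam)
    return np.trapz([spectrum(l, E, f, v_d, sigma_el) for l in lams], lams)
\end{verbatim}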
In our previous works \cite{Buzulutskov18,Borisova21}, the electron energy distribution function and drift velocity in noble gases, at a given reduced electric field, were calculated using a Boltzmann equation solver \cite{Hagelaar05}.
In this work, we follow exactly the Atrazhev paper \cite{Atrazhev85} to calculate the electron energy distribution function and drift velocity in noble liquids. Another modification is that the total elastic cross section in Eq.~\ref{Eq-sigma-el} is replaced with the energy transfer cross section for electron transport through the liquid. With these two modifications, Eqs.~\ref{Eq-sigma-el},~\ref{Eq-NBrS-el-yield},~\ref{Eq-norm-f} and~\ref{Eq-NBrS-el-yield-spectrum} directly apply to noble liquids.
\section{Cross sections, electron energy distribution functions and drift velocities in noble liquids}
According to the Cohen-Lekner and Atrazhev theory, the drift and heating of excess electrons by an external electric field in the liquid are determined by two parameters, the collision frequency of energy transfer ($\nu_{e}$) and that of momentum transfer ($\nu_{m}$)~\cite{Atrazhev85}:
\begin{eqnarray}
\label{Eq01}
\nu_{e} = \delta N \sigma_{e}(E)(2E/m)^{1/2} \: , \\
\nu_{m} = N \sigma_{m}(E)(2E/m)^{1/2} \: , \\
\sigma_{m}(E) = \sigma_{e}(E)\widetilde{S}(E) \,.
\end{eqnarray}
\noindent Here $N$ is the atomic density of the medium; $E$ is the electron energy; $\delta = 2m/M$ is twice the electron-atom mass ratio; $\sigma_{e}(E)$ and $\sigma_{m}(E)$ are the energy transfer (effective) and momentum transfer electron scattering cross sections in the liquid, respectively; $\widetilde{S}(E)$ is a function that takes the liquid structure into account.
To calculate the collision frequencies one needs to know $\sigma_{e}(E)$ and $\sigma_{m}(E)$; for liquid Ar, Kr and Xe these were given in \cite{Atrazhev85}: see Fig.~\ref{fig01} (top).
For comparison, Fig.~\ref{fig01} (bottom) presents the total elastic cross sections for gaseous Ne, Ar, Kr, and Xe taken from the BSR database~\cite{DBBSR}; since the total elastic cross section is not available for He, its momentum transfer cross section, taken from the Biagi database~\cite{DBBiagi}, is shown instead.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{fig01a}
\includegraphics[width=0.99\columnwidth]{fig01b}
\caption{Top: Electron scattering cross sections in liquid Ar, Kr and Xe as a function of electron energy, namely that of energy transfer (or effective), $\sigma_{e}$, and that of momentum transfer, $\sigma_{m}$, both taken from~\cite{Atrazhev85}.
Bottom: Electron scattering cross section in noble gases as a function of electron energy: that of total elastic for Ne, Ar, Kr, and Xe, taken from the BSR database~\cite{DBBSR}, and that of momentum transfer for He, taken from the Biagi database~\cite{DBBiagi}.}
\label{fig01}
\end{figure}
The electron distribution function $f^\prime(E)$ in a strong electric field is expressed via both collision frequencies \cite{Atrazhev85}:
\begin{eqnarray}
\label{Eq02}
f^\prime(E) = f^\prime(0) \exp\left(-\int\limits_{0}^{E} \frac{3m\nu_{e}(E)\nu_{m}(E)}{2e^{2}\mathcal{E}^{2}}dE\right).
\end{eqnarray}
The constant $f^\prime(0)$ is determined from the normalization condition
of Eq.~\ref{Eq-norm-fprime}.
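Numerically, Eq.~\ref{Eq02} is evaluated by a cumulative quadrature of the exponent followed by normalization per Eq.~\ref{Eq-norm-fprime}; with electron energies in eV and the field in V/cm, the exponent integrand reduces to the compact form below. This is a sketch of the scheme, with $\delta=2m/M$ supplied per element (e.g. $\delta\approx2.7\times10^{-5}$ for Ar).
\begin{verbatim}
# Sketch: hot-electron distribution f'(E) in the liquid. E [eV] grid,
# sig_e/sig_m [cm^2] on that grid, N [cm^-3], field [V/cm],
# delta = 2m/M (e.g. ~2.7e-5 for Ar).
import numpy as np
from scipy.integrate import cumulative_trapezoid

def f_prime(E, sig_e, sig_m, N, field, delta):
    # 3*m*nu_e*nu_m/(2*e^2*field^2) with E in eV, field in V/cm:
    integrand = 3.0*delta*(N*sig_e)*(N*sig_m)*E/field**2   # 1/eV
    fp = np.exp(-cumulative_trapezoid(integrand, E, initial=0.0))
    return fp/np.trapz(np.sqrt(E)*fp, E)   # normalization as above
\end{verbatim}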
Using the electron energy distribution functions, one can calculate the electron drift velocity in the liquid \cite{Atrazhev85}:
\begin{eqnarray}
\label{Eq03}
\upsilon_d = -\frac{2}{3}\frac{e\mathcal{E}}{m} \int\limits_{0}^{\infty} \frac{E^{3/2}}{\nu_{m}(E)} \frac{df^\prime}{dE} dE.
\end{eqnarray}
The drift velocity is shown in Fig.~\ref{fig02} as a function of the reduced electric field, the latter being expressed in Td units: 1~Td~=~$10^{-17}$~V~cm$^2$. It is possible to check the correctness of the distribution functions by comparing the calculated and measured electron drift velocities; this is done in Fig.~\ref{fig02} using the experimental data compiled in~\cite{Miller68}. It can be seen that the theoretical and experimental drift velocities are in reasonable agreement, within a factor of 2, thus confirming the correctness of the calculated distribution functions for liquid Ar, Kr and Xe.
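Likewise, Eq.~\ref{Eq03} reduces, after substituting $\nu_m=N\sigma_m(2E/m)^{1/2}$ and expressing energies in eV, to the short numerical form below (a sketch; $df^\prime/dE$ is taken by finite differences on the same grid).
\begin{verbatim}
# Sketch: drift velocity from f'(E) on the same grids/units as above.
import numpy as np

C, M_E_EV = 2.99792458e10, 510998.95   # cm/s, eV

def drift_velocity(E, fp, sig_m, N, field):
    dfp = np.gradient(fp, E)                        # df'/dE
    I = np.trapz(E/(N*sig_m)*dfp, E)                # cm * eV^-1/2
    return -(2.0/3.0)*(C/np.sqrt(2.0*M_E_EV))*field*I   # cm/s
\end{verbatim}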
\begin{figure}
\includegraphics[width=0.99\columnwidth]{fig02}
\caption{Comparison of electron drift velocity ($\upsilon_d$) in liquid Ar, Kr and Xe theoretically calculated in this work (curves) with that measured in experiment \cite{Miller68} (data points). The color of the curve and the data points is the same for a given noble liquid.}
\label{fig02}
\end{figure}
It should be remarked that for the light noble liquids, He and Ne, the Cohen-Lekner and Atrazhev theory cannot be applied to calculate the electron energy distribution functions, since the appropriate cross sections for electron transport in the liquid, $\sigma_{e}(E)$ and $\sigma_{m}(E)$, are not available in the literature. Therefore, in the following, a "compressed gas" approximation is used for these liquids, similar to that developed in \cite{Borisova21}. In this approximation, Eqs.~\ref{Eq-sigma-el},~\ref{Eq-NBrS-el-yield},~\ref{Eq-norm-f} and~\ref{Eq-NBrS-el-yield-spectrum} apply directly as for the gas, i.e. with the electron energy distribution function and drift velocity obtained using a Boltzmann equation solver, with the input elastic cross sections taken for the gas from Fig.~\ref{fig01} (bottom), and with the atomic density $N$ equal to that of the liquid.
\onecolumn
\begin{table*} [h!]
\caption{Properties of noble gases and liquids, and parameters of neutral bremsstrahlung (NBrS) electroluminescence (EL) theoretically calculated in this work.}
\label{table}
\begin{center}
\begin{tabular}{p{0.5cm}p{6cm}p{1.5cm}p{1.5cm}p{1.54cm}p{1.5cm}p{1.5cm}}
No & Parameter & He & Ne & Ar & Kr & Xe \\
\\
(1) & Boiling temperature at 1.0~atm, $T_b$~\cite{Fastovsky71} (K) & $4.215$ & $27.07$ & $87.29$ & $119.80$ & $165.05$ \\
(2) & Gas atomic density at $T_b$ and 1.0 atm, derived from~\cite{Fastovsky71} (cm$^{-3}$) & $2.37\cdot10^{21}$ & $3.41\cdot10^{20}$ & $8.62\cdot10^{19}$ & $6.18\cdot10^{19}$ & $5.75\cdot10^{19}$ \\
(3) & Liquid atomic density at $T_b$ and 1.0 atm, derived from~\cite{Fastovsky71} and from ~\cite{Theeuwes70} for Xe (cm$^{-3}$) & $1.89\cdot10^{22}$ & $3.59\cdot10^{22}$ & $2.10\cdot10^{22}$ & $1.73\cdot10^{22}$ & $1.35\cdot10^{22}$ \\
(4) & Threshold in electric field for excimer EL in noble liquid deduced from the corresponding threshold in noble gas by reduction to the atomic density of the liquid, obtained using data of \cite{Borisova21} (kV/cm) & $1134$ & $538$& $840$ & $519$ & $472$\\
(5) & Number of photons for NBrS EL in noble liquid produced by drifting electron in 1~mm thick EL gap at $T_b$ and 1.0~atm, at electric field of 100 kV/cm & $0.13$ & $2.5$& $0.93$ & $1.6$ & $1.1$\\
(6) & The same at 500 kV/cm & $4.3$ & $40$& $12$ & $24$ & $30$\\
\end{tabular}
\end{center}
\end{table*}
\begin{multicols}{2}
\twocolumn
The values of the atomic densities of the gas and liquid phases at the boiling temperatures at 1 atm are presented in Table~\ref{table}. We will see in the following, using the example of the heavy noble liquids, that the "compressed-gas" approximation works well: the difference in NBrS EL photon yields between the "liquid" theory and the "compressed-gas" approximation is not large, remaining within a factor of 1.5.
It should also be remarked that all the calculations in this work were performed for atomic densities of the medium, liquid or gas, corresponding to the boiling temperature of the given noble element at 1 atm.
\section{Operational range of reduced electric fields in noble liquids for NBrS EL}
It is obvious that NBrS EL in noble liquids is much weaker than excimer EL and thus becomes insignificant above the electric field threshold for excimer EL. Table~\ref{table} gives an idea of these thresholds in noble liquids, deduced from the corresponding thresholds in noble gases by reduction to the atomic density of the liquid, using the data of \cite{Borisova21}.
To compare with the results for noble gases, one also needs to determine the operational range of reduced electric fields for NBrS EL in noble liquids from the experimental works where it was presumably observed and where the operational electric field can be reliably estimated. Three works fit these conditions: that of \cite{Buzulutskov12}, operating a gas electron multiplier (GEM, \cite{Sauli16}) in liquid Ar; that of \cite{Lightfoot09}, operating a thick GEM (THGEM, \cite{Breskin09}) in liquid Ar; and that of \cite{Aprile14}, operating a thin anode wire in liquid Xe. Deduced from the absolute electric field values given in \cite{Buzulutskov12} and \cite{Aprile14}, the range of reduced electric fields within which NBrS EL was presumably observed amounts to 0.1-5 Td. In particular, for liquid Ar this range corresponds to electric fields ranging from 21 to 1040 kV/cm. We restrict our calculations to this range of fields.
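For reference, the conversion between the reduced and absolute field scales used here is elementary (atomic density of liquid Ar from Table~\ref{table}):
\begin{verbatim}
# Reduced-to-absolute field conversion, 1 Td = 1e-17 V cm^2.
N_LAR = 2.10e22                     # liquid-Ar atomic density, cm^-3

def td_to_kV_per_cm(td, N=N_LAR):
    return td*1e-17*N/1e3           # V cm^2 * cm^-3 -> V/cm -> kV/cm

# td_to_kV_per_cm(0.1) ~ 21 kV/cm, td_to_kV_per_cm(5.0) ~ 1050 kV/cm
\end{verbatim}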
\section{NBrS EL spectra and yields in noble liquids}
Fig.~\ref{fig03} shows the NBrS spectra of the reduced EL yield for liquid Ar, Kr and Xe at different reduced electric fields. The spectra were calculated by numerical integration of Eq.~\ref{Eq-NBrS-el-yield-spectrum}.
One can see that the NBrS EL spectra are similar in all noble liquids; moreover, they look almost identical to those obtained in noble gases at the same reduced electric field: compare Fig.~\ref{fig03} to Fig.~10 of \cite{Borisova21} at 5 Td. The spectra are rather flat, extending from the UV to the visible and NIR range at higher reduced electric fields, e.g. at 5 Td. In each noble liquid, the NBrS EL spectrum has a broad maximum that gradually moves to longer wavelengths with decreasing electric field. At lower reduced electric fields, in particular at 0.3 Td (corresponding to 60 kV/cm in liquid Ar), the spectra have moved completely to the visible and NIR ranges. In all noble liquids, the spectra lie mostly above 200 nm (in the UV, visible and NIR range), i.e. just in the sensitivity region of commonly used photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs).
\end{multicols}
\twocolumn
\begin{figure}
\includegraphics[width=0.99\columnwidth]{fig03}
\caption{Spectra of the reduced EL yield for NBrS EL in liquid Ar, Kr and Xe at different reduced electric fields (0.3, 1 and 5 Td), calculated using Eq.~\ref{Eq-NBrS-el-yield-spectrum}.}
\label{fig03}
\end{figure}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{fig04}
\caption{Reduced EL yield for NBrS EL at 0-1000 nm in liquid Ar, Kr and Xe as a function of the reduced electric field, calculated in this work in the framework of the Cohen-Lekner and Atrazhev theory using Eq.~\ref{Eq-NBrS-el-yield} (solid lines). For comparison, the reduced yield for NBrS EL at 0-1000 nm in noble gases is shown, calculated in \cite{Borisova21} using a Boltzmann equation solver (dashed lines). The color of the curves is the same for a given noble element. The top scale shows the corresponding absolute electric field in liquid Ar.}
\label{fig04}
\end{figure}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{fig05}
\caption{Absolute EL yield (number of photons per drifting electron per 1 cm) for NBrS EL at 0-1000 nm in noble liquids as a function of the absolute electric field, calculated in this work. For heavy noble liquids (Ar, Kr and Xe) the rigorous Cohen-Lekner and Atrazhev theory was used to calculate the electron energy and transport parameters in the liquid, while for light noble liquids (He and Ne) the "compressed gas" approximation was used.}
\label{fig05}
\end{figure}
The EL yield for NBrS EL in noble liquids is presented in Fig.~\ref{fig04}, obtained by numerical integration of Eq.~\ref{Eq-NBrS-el-yield}: the reduced EL yield is shown as a function of the reduced electric field. For comparison, the reduced yield for NBrS EL in noble gases is shown, calculated in \cite{Borisova21} using a Boltzmann equation solver.
Surprisingly, this "compressed-gas" approximation, successfully applied before to describe NBrS EL in noble gases, has led to almost the same results as that of the rigorous "liquid" theory in terms of the reduced EL yields and spectra when formally extrapolated to the atomic density of the noble liquid: for a given noble element and given reduced electric field the difference between them remains within a factor of 1.5 up to reduced electric field of 5 Td.
This fact indicates that the scaling law, stating that the reduced EL yield ($Y/N$) is a function of the reduced electric field ($\mathcal{E}/N$), is valid not only for noble gases but, to some extent, also for noble liquids, at least as concerns the NBrS EL effect.
It also indicates the applicability of the "compressed gas" approximation to noble liquids at moderate reduced electric fields, below 5 Td, thus justifying its use for the light noble liquids, He and Ne, where the Cohen-Lekner and Atrazhev theory cannot be used due to the lack of data.
Furthermore, Fig.~\ref{fig05} shows a practical photon yield suitable for verification under experimental conditions, namely the number of photons produced by a drifting electron per 1 cm in all noble liquids, as a function of the absolute electric field. In this figure, for the heavy noble liquids (Ar, Kr and Xe) the rigorous Cohen-Lekner and Atrazhev theory was used to calculate the electron energy and transport parameters in the liquid, while for the light noble liquids (He and Ne) the "compressed gas" approximation was used, with calculations identical to those of \cite{Borisova21}. The corresponding NBrS EL spectra and yields for He and Ne can be found in \cite{Borisova21}.
Table~\ref{table} (items 5 and 6) gives an idea of the magnitude of the NBrS EL effect in a practical parallel-plate EL gap of 1 mm thickness: at a field of 500 kV/cm the photon yield amounts to about 4, 40, 12, 24 and 30 photons per drifting electron for He, Ne, Ar, Kr and Xe, respectively. On the other hand, at 100 kV/cm the photon yield is reduced by about an order of magnitude, down to about 1 photon per drifting electron in almost all noble liquids. It is remarkable that up to 600 kV/cm liquid Ne has the highest NBrS EL yield, obviously due to its much lower elastic cross section at electron energies between 1 and 10 eV compared to the other noble elements (see Fig.~\ref{fig01} (bottom)), resulting in stronger electron heating by the electric field and thus in more intense NBrS photon emission.
\section{Possible applications and discussion}
In order to produce noticeable NBrS EL in noble liquids, one should provide high enough electric fields, ranging from 50 to 500 kV/cm, in practical devices. Based on previous experience, such devices might be GEMs \cite{Buzulutskov12}, THGEMs \cite{Lightfoot09} and thin anode wires \cite{Aprile14}. A parallel-plate EL gap of 1 mm thickness can also be considered, albeit not yet tested in a real experiment in noble liquids at such high fields. It should be remarked that a larger EL gap thickness, e.g. 1 cm, can hardly be used in practice due to the existing limit on high-voltage breakdowns in noble liquids: the absolute voltage before breakdown cannot exceed values of about 100 kV in liquid He~\cite{Gerhold94} and several hundred kV in other noble liquids \cite{Buzulutskov20,Auger16,Tvrznikova19}.
It looks natural to use GEMs or THGEMs as EL plates instead of parallel-plate EL gaps in noble liquids, since the former are more resistant to breakdowns than the latter. Note that the NBrS EL spectrum lies mostly in the visible and NIR range: see Fig.~\ref{fig03}. This implies a possible practical application of NBrS EL in noble liquid detectors, namely direct optical readout of the S2 signal in the visible range, i.e. without using a wavelength shifter (WLS). A similar technique has recently been demonstrated in a two-phase Ar detector with direct SiPM-matrix readout using NBrS EL in the gas phase \cite{Aalseth21}. These results have led us to the idea of using THGEM plates in combination with SiPM matrices, which have high sensitivity in the visible and NIR range, to optically record the S2 signal in single-phase noble liquid detectors for dark matter search and neutrino experiments. In addition, the recently proposed transparent very-thick GEM \cite{Kuzniak21} can be used as an EL plate, with enhanced light collection efficiency.
We can verify the theory of NBrS EL in noble liquids against experiments where it was presumably observed, where the electric field is explicitly known, and where it is known how to convert the emitted photons into recorded photoelectrons. At first glance, only two works meet these criteria: \cite{Lightfoot09} and \cite{Aprile14}.
In particular, in \cite{Lightfoot09} the operational electric field in the center of the THGEM hole (1.5 mm in height), of 60 kV/cm \cite{Buzulutskov12}, corresponds to $\mathcal{E}/N$=0.3~Td in liquid Ar, resulting in about 0.6 photons per drifting electron predicted by the NBrS EL theory according to Figs.~\ref{fig04} and \ref{fig05}. However, this is more than 2 orders of magnitude smaller than the light gain reported in \cite{Lightfoot09}. We therefore suggest interpreting the results of \cite{Lightfoot09} as caused by the presence of gas bubbles associated with the THGEM holes, inside which proportional EL in the gas phase took place, similarly to what happens in Liquid Hole Multipliers \cite{Erdal20}.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{fig06}
\caption{Number of photoelectrons recorded in liquid Xe by a PMT as a function of the voltage on a $10~\mu$m thick anode wire \cite{Aprile14}: the experimental data (data points) and a linear fit of proportional EL to the data (solid line) are shown, the latter defining the threshold of excimer EL. The top scale shows the corresponding reduced electric field on the anode wire surface. For comparison, the theoretical assessment of the number of photoelectrons due to NBrS EL obtained in this work is shown (area between dashed lines).}
\label{fig06}
\end{figure}
In \cite{Aprile14}, where puzzling EL events were observed in liquid Xe below the threshold of excimer EL, the operational fields near the anode wire were much higher, around 400 kV/cm. Fig.~\ref{fig06} shows the experimental data and a linear fit of proportional EL to the data, the latter defining the threshold of excimer EL. In addition, the experimental conditions were explicitly described. This allowed us to predict the number of photoelectrons recorded by the PMT due to NBrS EL, although with some difficulties associated with the highly inhomogeneous field near the wire. Due to the latter, Eq.~\ref{Eq-NBrS-el-yield}, if applied directly, gives only a lower limit on the event amplitude, since it does not take into account the electron diffusion, which significantly increases the travel time of the electron to the wire and thus the overall photon yield. We tried to take the diffusion effect into account: as a result, the theoretical prediction in Fig.~\ref{fig06} is shown as an area between two dashed curves, thus setting the theoretical uncertainty. Within this uncertainty, the NBrS EL theory describes the puzzling below-threshold events well, namely their absolute amplitudes and their dependence on the anode voltage, which might be treated as the first experimental evidence for NBrS EL in noble liquids.
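For reproducibility, the zero-diffusion (lower-limit) part of this estimate amounts to integrating the yield of Fig.~\ref{fig05} along the radial drift path in the $1/r$ field of the wire. The sketch below assumes an ideal coaxial geometry and a user-supplied interpolation \texttt{yield\_per\_cm} of the liquid-Xe yield curve; both are simplifying assumptions.
\begin{verbatim}
# Lower-limit photon estimate for the wire geometry (diffusion
# neglected, ideal coaxial field assumed). yield_per_cm is a
# user-supplied interpolation of the liquid-Xe yield curve.
import numpy as np

def photons_per_electron(V, r_wire, r_cath, yield_per_cm, n=2000):
    r = np.geomspace(r_wire, r_cath, n)              # cm
    field = V/(r*np.log(r_cath/r_wire))              # V/cm
    return np.trapz(yield_per_cm(field), r)          # photons
\end{verbatim}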
\section{Conclusion}
In this work we systematically studied the effect of neutral bremsstrahlung (NBrS) electroluminescence (EL) in all noble liquids: the photon yields and spectra for NBrS EL have for the first time been theoretically calculated in liquid He, Ne, Ar, Kr and Xe. For the heavy noble liquids, the calculations were done in the framework of the Cohen-Lekner and Atrazhev theory describing the electron energy and transport parameters in the liquid medium.
Surprisingly, the "compressed-gas" approximation, successfully applied before to describe NBrS EL in noble gases, has led to almost the same results as that of the rigorous "liquid" theory in terms of the reduced EL yields and spectra when formally extrapolated to the atomic density of the noble liquid.
The predicted magnitude of the NBrS EL effect in a practical parallel-plate EL gap of 1 mm thickness is noticeable: at a field of 500 kV/cm the photon yield amounts to 12, 30 and 40 photons per drifting electron in liquid Ar, Xe and Ne, respectively. The NBrS EL spectra in noble liquids are in the visible and NIR range.
The practical applications of the results obtained might be the use of THGEMs as EL plates in combination with SiPM-matrices, to optically record the S2 signal in single-phase noble liquid detectors for dark matter search and neutrino experiments.
\acknowledgments
This work was supported by the Russian Science Foundation (project no. 19-12-00008). It was done within the R\&D program of the DarkSide-20k experiment.
\bibliographystyle{eplbib}