\documentclass[sigconf, nonacm]{acmart}

\begin{document}

%%
%% The "title" command has an optional parameter,
%% allowing the author to define a "short title" to be used in page headers.
\title{A Survey of Research on Data-Driven AI Framework Testing}

%%
%% The "author" command and its associated commands are used to define
%% the authors and their affiliations.
%% Of note is the shared affiliation of the first two authors, and the
%% "authornote" and "authornotemark" commands
%% used to denote shared contribution to the research.
\author{Jinyang Shao}
\authornote{Both authors contributed equally to this research.}
\email{shaojinyang@whu.edu.cn}
\author{Qifan Wang}
\authornotemark[1]
\email{wangqifan@whu.edu.cn}
\affiliation{%
  \institution{School of Computer Science, Wuhan University}
  \city{Wuhan, Hubei}
  \country{China}
  \postcode{43000}
}


%%
%% The abstract is a short summary of the work to be presented in the
%% article.
\begin{abstract}
  In recent years, Artificial Intelligence (AI) has achieved tremendous success in many domains, including safety-critical systems, so the quality of AI systems is of great concern. Recent research shows that AI frameworks have a significant impact on the systems built on them. In this survey, we review recent research focused on AI framework testing and consider the pros and cons of each work. Finally, we present our insights on AI operator testing as a possible solution.
\end{abstract}

%%
%% The code below is generated by the tool at http://dl.acm.org/ccs.cfm.
%% Please copy and paste the code instead of the example below.
%%

%%
%% Keywords. The author(s) should pick words that accurately describe
%% the work being presented. Separate the keywords with commas.
\keywords{artificial intelligence framework, software testing, operator precision}


%%
%% This command processes the author and affiliation and title
%% information and builds the first part of the formatted document.
\maketitle

\section{Introduction}
Artificial intelligence systems are now widely deployed in national economic and social development. An artificial intelligence system provides artificial intelligence algorithms and solutions for practical application needs.
There are many types of artificial intelligence systems; from the perspective of algorithms, they can be roughly divided into deep learning systems, evolutionary computing systems, decision support systems, and natural language processing systems.

To facilitate the research and development of artificial intelligence systems, artificial intelligence frameworks have emerged, such as Google's end-to-end open-source learning platform TensorFlow\cite{tensorflow2015-whitepaper} and Baidu's open-source deep learning platform PaddlePaddle\cite{Paddlequantum}.
The goal of an artificial intelligence framework is to provide general basic functional modules and to output artificial intelligence models according to specific specifications, so as to avoid repeatedly developing AI-related software modules and to reduce development costs.
These basic functional modules are called operators; for example, both pooling and sigmoid are common operators in the construction of deep neural network models. The difficulty of testing combinations of operators lies in combinatorial explosion and missing test oracles.
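To make the notion of an operator concrete, the following is a minimal NumPy sketch (not taken from any particular framework) of the two operators mentioned above, composed the way a framework would compose them inside a model:

```python
import numpy as np

def sigmoid(x):
    # Element-wise sigmoid activation operator.
    return 1.0 / (1.0 + np.exp(-x))

def max_pool_2x2(x):
    # 2x2 max-pooling operator over an (H, W) feature map;
    # assumes H and W are even.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A framework composes such operators when building a model.
feature_map = np.arange(16, dtype=np.float32).reshape(4, 4)
pooled = max_pool_2x2(feature_map)      # shape (2, 2)
activated = sigmoid(pooled)
```

Even for two such simple operators, the number of meaningful combinations of shapes, data types, and orderings grows quickly, which is the combinatorial explosion noted above.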

The remainder of the survey is organized as follows: Section 2 presents common AI framework testing techniques. Section 3 introduces the properties related to floating-point errors of operators and our research motivation. Section 4 introduces a possible approach to the AI framework testing problem. Section 5 concludes the survey.

\section{Related Work}
For the four artificial intelligence frameworks TensorFlow, PyTorch, CNTK, and Theano, Audee\cite{guo2020audee} provides a test method for the underlying framework on which an AI model depends. Audee uses the APIs that each framework provides to construct mutated models and tests their consistency. Audee can automatically detect three types of bugs: logical bugs, crashes, and Not-a-Number (NaN) errors. The generated deep neural networks (DNNs) use APIs provided by the frameworks, covering 25 widely used APIs across the four frameworks, and the approach found 26 unique, previously unknown bugs.

Liu et al.~\cite{liu2021detecting} proposed ShapeTracer, a tool that detects bugs in TensorFlow-based programs in industrial environments by adding tensor-shape detection to static analysis of Python code. ShapeTracer also supports error checking for the configuration-file loading process in industrial environments.

Wardat et al.~\cite{wardat2021deeplocalize} proposed DeepLocalize, which targets errors in the training process of deep neural network-based systems. By tracking changes in parameters such as network weights and biases during training, illegal values that appear during network training can be found in time.

\subsection{Domain-specific AI Framework Testing}
For the object detection frameworks used in intelligent perception systems, Tian et al.~\cite{deeptest} proposed DeepTest, which was applied to test the object recognition module in autonomous driving systems. DeepTest uses white-box testing to examine neuron coverage. By constructing mutated test cases, the neuron coverage ratio can be increased, thereby improving test adequacy.

Wang et al.~\cite{WangS20} proposed MetaOD, a black-box testing technique. It detects errors in object detection systems by applying metamorphic testing to compare the network outputs of similar inputs, thereby shielding the complexity of the AI system itself and making test-case construction simpler.
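The core idea of a metamorphic relation can be sketched as follows. This is not MetaOD's implementation; the `classify` function is a hypothetical stand-in for a real model, and the brightness relation is one illustrative example of comparing outputs of similar inputs:

```python
import numpy as np

def classify(image):
    # Hypothetical stand-in for a real classifier: predicts a
    # class from the mean pixel intensity of the image.
    return int(np.mean(image) > 0.5)

def check_brightness_relation(image, delta=0.01):
    # Metamorphic relation: a tiny brightness shift should not
    # change the prediction. A violation flags a potential bug
    # without needing a ground-truth label (no test oracle needed).
    original = classify(image)
    perturbed = classify(np.clip(image + delta, 0.0, 1.0))
    return original == perturbed
```

The value of this style of testing is exactly that no expected output must be specified: only the relation between two runs is checked.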

For computer vision AI frameworks, Sun et al.~\cite{sun2020explaining} proposed the DeepCover tool, which uses statistical fault localization techniques to explain the errors of image classifiers. By ranking the pixels of each input image, the set of key pixels that leads the classifier to produce the correct output can be analyzed.

\section{Motivation}
Detecting precision-related errors is an interesting domain of AI framework testing\cite{zhang2021predoo}. Floating-point representation in computers has limited range and precision, so floating-point arithmetic can only approximate real arithmetic, and this approximation may cause precision-related errors. An empirical study of numerical software libraries found that 32\% of all the examined bugs are numerical bugs\cite{di2017a}.
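Two small, well-known illustrations of these limits, runnable in plain Python/NumPy:

```python
import numpy as np

# 0.1 is not exactly representable in binary floating point, so
# rounding error appears even in a single addition.
single_add_exact = (0.1 + 0.2 == 0.3)   # False: 0.1 + 0.2 -> 0.30000000000000004

# Limited range: exp(12) overflows float16 (max value ~65504)
# while the same computation is perfectly finite in float64.
with np.errstate(over="ignore"):
    overflows_in_half = np.isinf(np.exp(np.float16(12.0)))
finite_in_double = np.isfinite(np.exp(np.float64(12.0)))
```

The same computation can thus succeed or fail depending only on the chosen precision, which is exactly the hazard DL operators face.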

Because they use non-linear algorithms to process complex, shape-variable inputs, operators in deep learning (DL) libraries face a similar threat of precision-related errors. Clients of DL operators can customize both input precision and operator precision, and both affect computation precision. Moreover, detailed implementations of DL applications trade off precision against performance, which makes the influence of customized precision even harder to estimate.

To reveal floating-point computation defects, oracle approximation is applied in testing. Nejadgholi and Yang\cite{nejadgholi2019a} studied oracle approximation assertions implemented to test four popular DL libraries and found that a non-negligible portion of assertions in the test cases of DL libraries leverage oracle approximation. However, it is hard to determine whether a produced output is correct in the presence of precision error. Absolute error compares the output with the result produced under infinite precision, but it is impractical to compute results under infinite precision in automated testing. A practical solution is to compare the output with the result produced by a higher-precision instance. Absolute tolerance and relative tolerance are the metrics used to evaluate precision errors, but choosing appropriate values for them is itself a new problem, because different DL operators require different tolerance values. Formal precision-error analysis can be complicated and requires expert knowledge of the implemented operator.
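The higher-precision-oracle scheme with combined tolerances can be sketched as follows. The tolerance check mirrors the semantics of `numpy.allclose`; the sigmoid operator and the default tolerance values are illustrative choices, not prescriptions:

```python
import numpy as np

def within_tolerance(actual, expected, atol=1e-8, rtol=1e-5):
    # Combined absolute/relative check used by many test suites
    # (same semantics as numpy.allclose):
    #   |actual - expected| <= atol + rtol * |expected|
    return bool(np.all(np.abs(actual - expected)
                       <= atol + rtol * np.abs(expected)))

# Oracle approximation: run the operator in float64 as the reference,
# then compare the float32 result against it.
x = np.linspace(-5.0, 5.0, 8).astype(np.float32)
reference = 1.0 / (1.0 + np.exp(-x.astype(np.float64)))  # higher-precision oracle
result = (1.0 / (1.0 + np.exp(-x))).astype(np.float64)   # operator under test
passed = within_tolerance(result, reference)
```

The open question raised above is visible here: `atol` and `rtol` are free parameters, and values that are tight enough for sigmoid may be far too tight (or too loose) for another operator.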

Another challenge in DL operator precision testing is generating test inputs automatically. Tensors, the data type of DL operators, have a complex structure. Developers of DL operators can manually prepare tensor inputs for testing, but such hand-written test cases are limited in effectiveness and do not scale across different shapes. Some researchers addressed this problem by collecting intermediate outputs while testing DL models\cite{pham2019cradle}: from the test inputs provided to a DL model, they can collect sufficient test data for the operators inside the model. However, model construction lies outside the library development lifecycle, and this model-to-library technique requires extra cost for constructing models and collecting intermediate outputs. Thus, a new approach is needed for generating sufficient test inputs during library development.
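A baseline alternative to harvesting model intermediates is to generate tensor inputs directly at the operator level. A minimal sketch, with the function name and parameter ranges chosen here for illustration:

```python
import numpy as np

def random_tensor(shape, dtype=np.float32, low=-10.0, high=10.0, seed=None):
    # Produces a tensor input for operator-level testing directly,
    # without constructing a full model to harvest intermediate outputs.
    rng = np.random.default_rng(seed)
    return rng.uniform(low, high, size=shape).astype(dtype)

# The same generator scales over arbitrary shapes, unlike
# hand-written cases fixed to one shape.
cases = [random_tensor(s, seed=0) for s in [(4,), (2, 3), (1, 8, 8, 3)]]
```

Seeding the generator keeps failing inputs reproducible, which matters once a precision error is found and must be reported.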

\section{Research Method}
We believe that proposing an approach to generate sufficient inputs for DL operator precision testing is a promising research direction. For input generation, traditional random testing provides random and independent inputs, but the chance of hitting error-prone inputs depends on the magnitude of the input, so finding an error-inducing input can be inefficient.

We can introduce a mutation-based input generation technique to increase the hit chance. Mutation is a basic operation in genetic algorithms, but the complex structure of tensors requires us to adapt the mutation strategies and operations. Different strategies in the genetic algorithm may perform differently. Moreover, expert knowledge about DL operators could serve as configuration for our input generation approach. After designing the whole workflow, we would implement a domain-specific framework for precision-error testing.

To evaluate the work in the future, we can perform experiments to answer the following research questions:

\textbf{RQ1} How do input and computation precision impact DL operators?

\textbf{RQ2} Is our approach effective at generating error-inducing inputs?

\textbf{RQ3} Which strategy is more effective in generating testing inputs?

As mentioned in the previous section, DL operators can process inputs of different precisions at different computation precisions. \textbf{RQ1} is designed to find out their impacts. \textbf{RQ2} quantifies the performance of our approach. Different genetic algorithm strategies are introduced to improve the efficiency of error-inducing input generation; we would compare these strategies in \textbf{RQ3}.

\section{Conclusion}
In this survey, we review recent research on AI framework testing. We consider precision-related errors in deep learning library operators and discuss the challenges of generating test inputs for operators. To generate sufficient and effective test inputs for DL operators, we propose applying a modified genetic algorithm, configured with expert knowledge, to automatic operator testing. We also propose research questions for evaluating this future work through experiments. We believe that generating test inputs for DL library operators will be valuable to AI framework testing.

%%
%% The next two lines define the bibliography style to be used, and
%% the bibliography file.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
\endinput
%%
%% End of file `main.tex'.
