\section{Introduction}\label{sec:introduction}
\IEEEPARstart{T}{raining} a high-performing deep learning model is incredibly costly, which consumes a massive amount of computational resources along with electricity, human resources, etc.
In fact, training such models is so time-consuming and expensive that stealing a model is comparatively much simpler and cheaper.
Beyond directly copying an original model and simply claiming its ownership, there are several forms of deep learning model theft.
These include model fine-tuning \cite{wang2017growing, nagabandi2018neural, howard2018universal}, model pruning \cite{liu2018rethinking, he2017channel, zhu2017prune}, and knowledge distillation.
In the face of so many potential threats, model owners have sought ways to protect their intellectual property, and one such method is model watermarking.
Model watermarking is a relatively new technique that embeds a traceable digital watermark into a deep learning model.
As such, it offers a promising approach to model copyright protection.
The first attempt at model watermarking was made in 2017 by Yusuke Uchida {\it et al.} \cite{uchida2017embedding}, who proposed a method of embedding a watermark into a deep learning model.
The watermark was designed to verify ownership of the model given white-box access.
Since then, several exemplary works \cite{rouhani2018deepsigns, szyller2021dawn, namba2019robust} have emerged that provide better protection for deep learning models in different scenarios, to the point where even black-box attacks can be effectively countered.
\begin{figure}[t!]
\centering
\includegraphics[width=.4\textwidth]{figure/attack_idea.drawio.pdf}
\caption{Attack workflow: The adversary sends the input image to the watermarked image-processing model to derive the watermarked output. If the adversary trains a surrogate model directly with the input image and the watermarked output, the surrogate model will contain the watermark information. By overwriting the watermarked output, the adversary removes the watermark from the output set and forges its own watermark inside the overwritten output. Finally, a watermark-free surrogate model can be trained.}
\label{fig:atk_idea}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.4\textwidth]{figure/defence_idea.drawio.pdf}
\caption{Defense workflow:
After embedding the watermark, an overwriting network performs an overwriting attack that yields an overwritten image that is then fed to the retrieval network.
The retrieval network is then required to retrieve a valid recovered watermark from the overwritten image.
Together, these networks form the defense network.
}
\label{fig:def_idea}
\end{figure}
However, most model watermarking approaches are designed to protect classification models; methods that work for image-processing models are few and far between.
In short, an image-processing model takes images as its input and outputs modified images, which is quite unlike a classification model that takes in images and simply outputs a category label.
In 2020, Zhang {\it et al.} \cite{zhang2020model} proposed a framework to watermark image-processing neural networks, which, to the best of our knowledge, was the first work in image-processing model watermarking.
Essentially, Zhang's work combines model watermarking with deep steganography so as to forcibly embed watermark information in the outputs of the released models.
Deep steganography is a technique that uses deep learning models to hide a secret image completely within a cover image, such that it is invisible to the naked human eye.
The image containing the embedded secret image is called the container image.
By releasing a set of processed images containing a hidden watermark, any attacker intending to steal the model is compelled to train their own watermarked model.
Subsequently, Quan {\it et al.} \cite{quan2020watermarking} devised another image-processing model watermarking scheme that takes a backdoor watermarking approach.
Briefly, the watermarked model functions normally when it receives normal input images.
When it receives a noise trigger input, it outputs a pre-defined watermark to validate ownership.
Even though recent studies show that steganography plays an essential role in the protection of images, this type of approach might still be vulnerable to attacks.
In fact, our study shows that current watermarking methods for image-processing models are not adequately robust.
For example, we find that, due to the properties of deep steganography, watermarking with image-processing models is vulnerable to changes in the frequency domain, especially the high-frequency domain.
To outline what we mean, we devise an overwriting attack method that shows how existing image-processing model watermarking methods, and even deep steganography itself, can be nullified.
Having designed the attack, we also devise a defense against it that protects deep learning models from this threat.
The defense method mitigates the overwriting attack through a new adversarial training framework that combines a watermarking method with the overwriting attack.
The general workflow of the attack is described in Figure \ref{fig:atk_idea}.
Here, a released image-processing deep learning model is watermarked such that every image it outputs contains an invisible watermark.
If an attacker tries to train a surrogate model via knowledge distillation, the surrogate model will carry the watermark information automatically.
However, in our attack, we train an overwriting network that overwrites the embedded watermark in the output from the watermarked model.
A surrogate model is also trained with the overwritten output and the input image sets.
Thus, the watermark is nullified, for the original watermark can no longer be retrieved from the output of the surrogate model.
To effectively counter the overwriting attack, we propose an adversarial training framework that deliberately incorporates an overwriting network to enhance the robustness of the watermarking network.
Figure \ref{fig:def_idea} illustrates this process.
Briefly, an overwriting network is trained along with a watermarking network, which together form a defense network.
There is an adversarial training process, where the overwriting network tries to overwrite the watermark in the container image so that the retrieval network in the watermarking network cannot retrieve a valid recovered watermark from it.
In contrast, the watermarking network tries to retrieve a valid recovered watermark even if the container image has been overwritten.
This competitive process significantly boosts the robustness of the watermarking network.
Overall, our contributions are as follows:
\begin{enumerate}[label=\roman*)]
\item Through frequency analysis, we have unraveled where a secret image signal is embedded in a container image.
Accordingly, we devised a possible attack to nullify the currently existing image-processing model watermarking methods.
\item We devised a corresponding defense method that counters the proposed attack with a new adversarial training framework to protect image-processing networks.
\item We discovered an overfitting problem with the current watermarking method for protecting image-processing models that nullifies the protection, and we fixed it by modifying the training process.
\end{enumerate}
The rest of this paper is organized as follows.
In Section \ref{sect:pre}, we present the preliminaries by listing the notations used throughout the paper and reviewing the background and related work.
We then describe our proposed method in detail in Section \ref{sect:method}.
Our experiment processes and results are presented in Section \ref{sect:exp}, and they are analyzed and discussed in Section \ref{sect:dis}.
Lastly, we conclude this work in Section \ref{sect:con}.
\section{Preliminary}
\label{sect:pre}
\begin{table}[ht!]
\caption{Notations}
\label{tab:notation}
\begin{tabularx}{.48\textwidth}{
|>{\centering\arraybackslash}m{.05\textwidth}
|>{\arraybackslash}m{.377\textwidth}
|}
\hline
\multicolumn{1}{|c|}{Notation}
&
\multicolumn{1}{c|}{Definition}
\\\hline
$\mcl U$
&
The overwriting network.
\\\hline
$\mcl O$
&
The defense network.
\\\hline
$\mcl E$
&
An embedding network that embeds a secret image into a cover image to yield a container image.
\\\hline
$\mcl R$
&
A retrieval network that retrieves a recovered secret image from a container image.
\\\hline
$\mcl D$
&
A discriminator network that identifies whether or not a given image contains hidden content.
\\\hline
$\mcl E_{\mcl U}$
&
The overwriting embedding network.
\\\hline
$\mcl R_{\mcl U}$
&
The overwriting retrieval network.
\\\hline
$H$
&
The original and watermark-free image-processing model.
\\\hline
$H'$
&
A surrogate model mimicking $H$, but trained on a watermarked dataset.
\\\hline
$H_0$
&
A surrogate model mimicking $H$, but trained on a watermark-free dataset.
\\\hline
$A$
&
A set of images for the image-processing network to process.
\\\hline
$B$
&
A set of processed images originating from $A$.
\\\hline
$B'$
&
A set of watermarked and processed images, originating from $B$.
\\\hline
$B''$
&
A set of noisy output images produced by the surrogate model $H'$.
\\\hline
$B_{\mcl U}$
&
A set of watermarked and processed images, but having suffered from the overwriting attack.
\\\hline
$B_0$
&
A set of processed images from a surrogate model that is not trained on the watermarked dataset.
\\\hline
$C/c$
&
A set of cover images/a cover image for concealing secrets.
\\\hline
$C'/c'$
&
A set of container images/a container image where secrets are hidden inside.
\\\hline
$S/s$
&
A set of secret images/a secret image to hide.
\\\hline
$S'/s'$
&
A set of recovered secret images/a recovered secret image.
\\\hline
$w$
&
A watermark.
\\\hline
$w'$
&
A recovered watermark.
\\\hline
$w_0$
&
A pure black null image.
\\\hline
$c'$
&
A container image that contains a watermark.
\\\hline
$x$
&
An arbitrary image that is the same size as $c'$.
\\\hline
$x'$
&
A recovered image originating from $x$.
\\\hline
$\epsilon$
&
A tolerable error range of a recovered secret image.
\\\hline
$\mcl L$
&
A loss function.
\\\hline
$\lambda$
&
A weight parameter for a regularizer in the loss function.
\\
\hline
\end{tabularx}
\end{table}
\subsection{Watermarking \& Deep Learning}
Watermarking is a powerful method for object authentication and ownership validation. It has established strong ties with deep learning in recent times. To provide a comprehensive overview of these interactions, we have categorized them into two main categories: model watermarking and image watermarking using deep learning. For the reader's convenience, a list of all the notations used in the subsequent sections can be found in Table \ref{tab:notation}.
\subsubsection{Model watermarking}
The existing techniques for model watermarking can be classified into three categories: model weight watermarking, backdoor watermarking, and active watermarking.
In model weight watermarking, as described in \cite{uchida2017embedding}, the watermark is embedded into the model's weight parameters during the training process. To retrieve the watermark, one needs complete access to the model's internal structure, which is often not feasible in real-world scenarios. Furthermore, these methods are not highly resilient against attacks such as model pruning, fine-tuning, and knowledge distillation.
Backdoor watermarking, as discussed in \cite{szyller2021dawn}, involves the deliberate alteration of a portion of the training data to create an overfitted model. This portion is referred to as the trigger dataset and can be used to validate the ownership of a suspect model. If the majority of the trigger data result in the suspect model producing the watermark labels, the model's ownership can be confirmed with just black-box access. Compared to model weight watermarking, this method is more robust against the previously mentioned attacks.
On the other hand, active watermarking methods aim to prevent model theft proactively. For instance, Tang {\it et al.} \cite{tang2020deep} proposed a method that requires the user to enter a valid serial number before using the desired model. This model is a student model derived from a teacher model and functions correctly only with a valid serial number. Although this approach is proactive in nature and protects the model, a malicious entity can still crack the serial number generator and propagate the stolen model.
\subsubsection{Image watermarking via deep learning}
Image watermarking methods that leverage deep learning can be further categorized into auto-encoder image watermarking and generative adversarial network image watermarking.
Auto-encoder image watermarking, first introduced by Baluja in \cite{baluja2017hiding}, involves the use of an embedding network and a retrieval network.
The embedding network embeds a watermark or secret image into a cover image to produce a container image that is visually similar to the cover image.
The retrieval network then retrieves the watermark from the container image with a tolerable error range.
While these methods achieve high perceptual quality, they are susceptible to steganalysis, a detection attack that identifies hidden content in an image.
Additionally, the container images generated by these methods lack robustness against distortions and malicious attacks that can result in damage or removal of the hidden content.
Generative adversarial network image watermarking is similar to auto-encoder image watermarking, but with the addition of a discriminator in the framework.
During adversarial training, the discriminator is trained to detect hidden content in any image, while the embedding network is tasked with deceiving the discriminator with the container images it generates.
This enhances the covertness of the container images against steganalysis.
However, they remain vulnerable to distortions during transmission and malicious attacks, such as JPEG compression and overwriting attacks.
\subsection{Related Work}
\begin{figure*}[t!]
\centering
\includegraphics[width=.9\textwidth]{figure/rerendered_wmer.drawio.pdf}
\caption{Framework of the Watermarking Network:
Starting from the very left, the embedding network is trained to embed a watermark into a processed image set so as to yield a watermarked image set.
An adversary trains a surrogate model with a set of raw images and the watermarked image set, and thus the surrogate model carries the watermark information.
Whenever the surrogate model yields a set of noisy outputs, the retrieval network is able to retrieve a recovered watermark from the noisy output to validate the model's ownership.
}
\label{fig:wm_net}
\end{figure*}
Watermarking is a powerful method for safeguarding intellectual property and preventing copyright infringement in various domains, including images \cite{potdar2005survey}, audio \cite{arnold2000audio}, and video files \cite{doerr2003guide}. By embedding unique and imperceptible marks within the intellectual property, the watermark serves as evidence of ownership and can be used in legal proceedings to defend against infringement claims. Despite having a long history of use, watermarking is a relatively new application in the realm of deep learning models.
In 2017, Uchida {\it et al.} \cite{uchida2017embedding} introduced a novel method for embedding a watermark into the weight parameters of a model, which was considered to be the first attempt at using watermarking techniques for the protection of intellectual property in neural networks. Despite its pioneering efforts, the method's validity in proving ownership required complete access to the parameters, or white-box access, which made it not practical for real-world scenarios. Furthermore, its robustness to different types of attacks was subject to improvement.
Rouhani {\it et al.} \cite{rouhani2018deepsigns} then proposed a watermarking framework that provides protection against fine-tuning, pruning, and overwriting of watermarks in both white-box and black-box scenarios. This approach was more robust to attacks, however, it was not capable of preventing knowledge distillation attacks.
Szyller {\it et al.} \cite{szyller2021dawn} then introduced an approach that was capable of countering all types of attacks, including knowledge distillation, by making a portion of the output from the watermarked model deliberately false. This strategy forces the surrogate model to include the watermark information by overfitting the falsified labels, thus representing a trade-off between robustness and accuracy.
It is worth noting that all of these methods, including the one proposed by Szyller {\it et al.}, work in a passive manner to defend against attacks, as the watermarks only serve to prove ownership after a copyright violation has already occurred, rather than preventing the violation from happening in the first place. Furthermore, these methods, along with most other model watermarking methods, are designed for classification models, with only a limited number of watermarking methods available for image-processing models.
In 2020, Zhang {\it et al.} proposed a watermarking method for image-processing deep learning models \cite{zhang2020model}. This method is the first of its kind and incorporates the concept of deep steganography, which is the technique of hiding information in such a way that it is not detected. The method fuses imperceptible image watermarking with model watermarking, making it effective against black-box knowledge distillation attacks.
The technique of steganography has a long history, dating back centuries, and has been utilized in different domains. Baluja first introduced the use of a deep learning model for image steganography in 2017 \cite{baluja2017hiding}. The method involves hiding one image within another image in such a way that it is not visible to the naked eye.
Several advancements have been made in the field of deep steganography since then, with Wu {\it et al.} designing a framework to perform end-to-end deep steganography \cite{wu2018image} and Zhang {\it et al.} developing a framework to hide an arbitrary image within another image \cite{zhang2020udh}. In \cite{zhao2022jointw}, an image watermarking method was merged with the image-generative network, using deep steganography to prevent the model from being misused.
However, as deep steganography evolves, so do the attacks. Traditional attacks on steganography include image resizing, cropping, distortion, and compression, as illustrated in Hosam's work \cite{hosam2019attacking}. Additionally, deep learning has been utilized to perform these attacks, as seen in Boroumand {\it et al.}'s work \cite{boroumand2018deep}, where a deep convolution neural network (DCNN) framework was proposed to perform deep steganalysis. Corley {\it et al.} \cite{corley2019destruction} designed a framework based on a generative adversarial network (GAN) with significant performance that is capable of purging secret images hidden in container images. Thus, similar to the battles between attacks and defenses in model watermarking, intense battles also exist in image watermarking through deep steganography.
\section{Method: Attack and Defense}
\label{sect:method}
\begin{figure*}[t!]
\centering
\includegraphics[width=.7\textwidth]{figure/rerendered_ower.drawio.pdf}
\caption{Framework of the Overwriting Network:
The overwriting network is trained to embed an arbitrary image, or a watermark, into a container image so as to yield an overwritten image.
The overwriting network also contains a retrieval network that is able to retrieve the recovered image, whereas the retrieval network in the watermarking network can only retrieve a null image from the overwritten image.
}
\label{fig:usurper}
\end{figure*}
\subsection{Attack Analysis}
In the current watermarking method for image-processing neural networks, deep steganography is seamlessly integrated with model watermarking.
The watermarking process is composed of two key components, an embedding network $\mathcal{E}$ and a retrieval network $\mathcal{R}$.
As illustrated in Figure \ref{fig:wm_net}, the watermarking process begins by training $\mathcal{E}$ and $\mathcal{R}$ on a set of processed images, $B$, and a watermark image, $w$.
The embedding network $\mathcal{E}$ then embeds the watermark image $w$ into each image $b_i$ in the set $B$ to produce a watermarked image set, $B'$.
This process is denoted as
\begin{equation}
\begin{aligned}
B' = \mcl E(B, w).
\end{aligned}
\end{equation}
In the event of the presence of an adversary, they will only have access to the unprocessed image set $A$, and the watermarked processed image set $B'$. The adversary can then train a surrogate model, denoted as $H'$, using $A$ and $B'$, such that the model learns to produce processed images with watermarks similar to those in $B'$. Finally, the retrieval network $\mcl R$ should be capable of retrieving a recovered watermark $w'$ from both the original watermarked image set $B'$ and the noisy output set $B''$ produced by the surrogate model $H'$, denoted as
\begin{equation}
\begin{aligned}
w' = \mcl R(b'), \text{ s.t. } w' = w + \epsilon, \textit{ iff } b' \in B' \cup B'',
\end{aligned}
\end{equation}
where $\epsilon$ represents a tolerable error range.
Meanwhile, if $\mcl R$ receives a watermark-free image $x$ as input, $\mcl R$ will yield a null image $w_0$ that is purely dark, denoted as
\begin{equation}
\begin{aligned}
w_0 = \mcl R(x), \forall \ x \not \in B' \cup B''.
\end{aligned}
\end{equation}
However, deep steganography is vulnerable to perturbations in the frequency domain, as highlighted in the work of Zhang {\it et al.} \cite{zhang2021universal}. This motivates us to explore an overwriting attack on the watermarked image set $B'$. The objective of the attack is to generate an overwritten image set $B_{\mcl U}$ such that the retrieval network $\mcl R$ is unable to retrieve a valid watermark from $B_{\mcl U}$ or from the outputs of a surrogate model $H'$ trained on $B_{\mcl U}$.
The objective of this attack is denoted as
\begin{equation}
\begin{aligned}
\forall \ b_u \in B_{\mcl U} \cup B'', \mcl R(b_u) \neq w + \epsilon.
\end{aligned}
\end{equation}
In other words, the goal here is to purge the signal of the watermark inside the container images so that the surrogate model trained on them does not contain the watermark's information.
Thus, the watermarking method is nullified.
Conversely, to counter the overwriting attack, we need a watermarking network that is sufficiently robust so as to be able to retrieve a valid recovered watermark $w'$ under such an attack.
The objective of the defense is denoted as
\begin{equation}
\begin{aligned}
\exists \ \mcl R, w' = \mcl R(b_u), \text{ s.t. } w' = w + \epsilon, \forall \ b_u \in B_{\mcl U} \cup B''.
\end{aligned}
\end{equation}
This objective requires the watermarking method to completely withstand the overwriting attack.
\subsection{The Attack Network}
\subsubsection{The overwriting attack}
\paragraph{Overview}
As depicted in Figure \ref{fig:usurper}, the overwriting attack aims at the output image set $B'$, which contains the watermark.
A deep steganographic model $\mcl U$ is trained, which consists of an embedding function $\mcl E_{\mcl U}$ and a retrieval function $\mcl R_{\mcl U}$.
As illustrated in Algorithm \ref{alg:atk_U}, this model is capable of embedding an arbitrary image into another arbitrary image so as to perform an overwriting attack on the given container image set $B'$.
The result is a set of overwritten container images $B_{\mcl U}$, where $w'$ cannot be validly retrieved.
\begin{algorithm}
\caption{Train the Overwriting Network}\label{alg:atk_U}
\begin{algorithmic}
\While{$\mcl L_{\mcl U}$ not converged}
\State $c_{\mcl U} \gets \mcl E_{\mcl U}(x, c')$ \Comment{Overwrite}
\State $x' \gets \mcl R_{\mcl U}(c_{\mcl U})$ \Comment{Retrieve}
\State $\mcl L_{\mcl U} \gets \mcl L^{\mcl U}_{\mcl E}(c', c_{\mcl U}) + \mcl L^{\mcl U}_{\mcl R}(x, x')$ \Comment{Get loss}
\State $\mcl L_{\mcl U}$.back\_propagation() \Comment{Backwards}
\EndWhile
\end{algorithmic}
\end{algorithm}
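For concreteness, the sketch below shows one way the training loop of Algorithm \ref{alg:atk_U} could be realized in PyTorch; the module names and the plain MSE terms are illustrative assumptions rather than our exact implementation, and the full objective is given below.
\begin{verbatim}
# Sketch of Algorithm 1 (assumed module names; simplified losses).
import torch

def train_overwriting_network(embed_u, retrieve_u, loader, epochs=30):
    params = list(embed_u.parameters()) + list(retrieve_u.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for c_prime, x in loader:       # container image, arbitrary image
            c_u = embed_u(x, c_prime)   # overwrite: hide x inside c'
            x_rec = retrieve_u(c_u)     # retrieve the hidden image
            loss = mse(c_u, c_prime) + mse(x_rec, x)  # L_E + L_R (simplified)
            opt.zero_grad()
            loss.backward()
            opt.step()
\end{verbatim}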
This attack is denoted as
\begin{equation}
\begin{aligned}
\mcl E_{\mcl U}(B') = B_{\mcl U}, \ \text{s.t.} \ \mcl R(B_{\mcl U}) \not= w + \epsilon.
\end{aligned}
\end{equation}
Since the watermark information in $B_{\mcl U}$ is lost, an attacker can train a surrogate model $H_{\mcl U}$ with $A$ and $B_{\mcl U}$ that is either watermark-free or contains a self-made watermark $w_{\mcl U}$.
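The attack itself then amounts to running the trained overwriting embedder over the watermarked outputs and fitting a surrogate model on the overwritten pairs; a minimal sketch under the same assumed names follows.
\begin{verbatim}
# Sketch of launching the overwriting attack and training a surrogate
# (function and variable names are assumptions).
import torch

def overwriting_attack(embed_u, forged_wm, b_prime):
    # Overwrite every watermarked output b' with a forged watermark.
    return [embed_u(forged_wm, b) for b in b_prime]

def train_surrogate(surrogate, a_set, b_u_set, epochs=50):
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for a, b_u in zip(a_set, b_u_set):
            loss = mse(surrogate(a), b_u)  # mimic the overwritten outputs
            opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}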
\paragraph{Loss Functions.}
The loss function for training $\mcl U$ is defined as
\begin{equation}
\begin{aligned}
\mcl L_{\mcl U} = \mcl L_{\mcl E}^{\mcl U} + \mcl L_{\mcl R}^{\mcl U},
\end{aligned}
\label{eq:loss_u}
\end{equation}
where $\mcl L_{\mcl E}^{\mcl U}$ and $\mcl L_{\mcl R}^{\mcl U}$ respectively denote the embedding loss and the retrieval loss of $\mcl U$.
$\mcl L_{\mcl E}^{\mcl U}$ is further decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl E}^{\mcl U} = \lambda_{mse} l_{mse} + \lambda_{vgg} l_{vgg} + \lambda_{freq} l_{freq},
\end{aligned}
\label{eq:loss_u_e}
\end{equation}
where the $\lambda$s are weight parameters.
$l_{mse}$ is the $L2$ loss between the cover images $C$ and the container images $C'$, defined as
\begin{equation}
\begin{aligned}
l_{mse} = \sum_{c_i \in C, c_i' \in C'} \frac{1}{N_c} \| c_i - c_i'\|^2,
\end{aligned}
\end{equation}
where $N_c$ is the total number of pixels.
$l_{vgg}$ denotes the perceptual loss between $C$ and $C'$, defined as
\begin{equation}
\begin{aligned}
l_{vgg} = \sum_{c_i \in C, c_i' \in C'} \frac{1}{N_f} \| VGG_k(c_i) - VGG_k(c_i')\|^2,
\end{aligned}
\end{equation}
where $N_f$ and $VGG_k$ respectively denote the total number of feature neurons and the features extracted at layer $k$.
$l_{freq}$ is the frequency loss \cite{jiang2021focal} between $C$ and $C'$ for controlling consistency in the frequency domain, defined as
\begin{equation}
\begin{aligned}
l_{freq} = \sum_{c_i \in C, c_i' \in C'} \frac{1}{N_p} \mcl F(c_i, c_i'),
\end{aligned}
\end{equation}
where $\mcl F$ and $N_p$ are the focal frequency loss function and the total number of image pairs.
$\mcl L_{\mcl R}^{\mcl U}$ is also further decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl R}^{\mcl U} = \lambda_{mse} l_{mse} + \lambda_{vgg} l_{vgg} + \lambda_{freq} l_{freq},
\end{aligned}
\end{equation}
where the terms therein are identical to those in $\mcl L_{\mcl E}^{\mcl U}$ but applied to image pairs $(s_i, s_i')$ from the secret images $S$ and the retrieved secret images $S'$.
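A hedged sketch of how these three terms could be computed is given below; the chosen VGG layer and the plain frequency-domain $L2$ term standing in for the focal frequency loss of \cite{jiang2021focal} are assumptions for illustration only.
\begin{verbatim}
# Sketch of the composite embedding loss (assumed layer choice; a plain
# frequency-domain L2 replaces the focal frequency loss here).
import torch
import torchvision

vgg = torchvision.models.vgg16(pretrained=True).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def embedding_loss(c, c_prime, lam_mse=1.0, lam_vgg=1.0, lam_freq=1.0):
    mse = torch.nn.functional.mse_loss
    l_mse = mse(c_prime, c)                                # pixel term
    l_vgg = mse(vgg(c_prime), vgg(c))                      # perceptual term
    f_c, f_cp = torch.fft.fft2(c), torch.fft.fft2(c_prime)
    l_freq = (f_cp - f_c).abs().pow(2).mean()              # frequency term
    return lam_mse * l_mse + lam_vgg * l_vgg + lam_freq * l_freq
\end{verbatim}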
\subsection{The Defense Network}
\begin{figure*}[t!]
\centering
\includegraphics[width=.8\textwidth]{figure/rerendered_def_net.drawio.pdf}
\caption{Framework of the Defense Network:
The initial training stage is critical for the success of the proposed method, as it involves the concurrent training of both the watermarking and overwriting networks. The retrieval network must be able to extract a valid watermark from the overwritten images generated by the overwriting embedding network. During the adversarial training stage, the retrieval network is further refined through exposure to both watermarked images and watermark-free images. If the retrieval network encounters a watermarked image, it should produce a valid recovered watermark. Conversely, when it encounters a watermark-free image, it should output a null image.
}
\label{fig:oathkeeper}
\end{figure*}
\subsubsection{Overview}
To counter the threat of the attack network, we devised an adversarial training framework, i.e., the defense network $\mcl O$, which includes both the watermarking framework $f_{wm}$ and $\mcl U$, with $f_{wm}$ and $\mcl U$ acting as the two players of a mini-max game.
In short, we set up an adversarial training scheme by training $f_{wm}$ along with $\mcl U$ according to the following settings, as shown in Figure \ref{fig:oathkeeper}:
As demonstrated in Algorithm \ref{alg:def_O_init}, the embedding network $\mcl E$ in $f_{wm}$ is initially trained to embed $w$ into $B$ to get $B'$.
A discriminator network $\mcl D$ then determines whether $B'$ contains a watermark so as to make $\mcl E$ hide the watermark more covertly.
Meanwhile, the overwriting embedding network $\mcl E_{\mcl U}$ is trained to embed an arbitrary image into another arbitrary image so as to perform an overwriting attack.
$B'$ is then fed to $\mcl E_{\mcl U}$ along with an arbitrary image set $S$ of the same size as $B'$ to yield an overwritten image set $B_{\mcl U}$.
Lastly, $B'$ and $B_{\mcl U}$ are passed to the retrieval network $\mcl R$ in $f_{wm}$ to retrieve $w'$, and $\mcl R$ is required to produce a null image $w_0$ when it receives watermark-free images from $A$ and $B$.
\begin{algorithm}
\caption{The Defense Network - Initial Training Stage}
\label{alg:def_O_init}
\begin{algorithmic}
\While{$\mcl L$ not converged}
\State $B' \gets \mcl E(w, B)$ \Comment{Embed}
\State $B_{\mcl U} \gets \mcl E_{\mcl U}(w_{\mcl U}, B')$ \Comment{Overwrite}
\State $w_0 \gets \mcl R(A; B)$ \Comment{Null retrieval}
\State $w' \gets \mcl R(B'; B_{\mcl U})$ \Comment{Ret. watermark}
\State $\mcl L \gets \mcl L_{\mcl U}(A, B, B', B_{\mcl U}, w, w_{\mcl U}, w_0, w')$ \Comment{Get Loss}
\State $\mcl L \gets \mcl L + \mcl L_{\mcl O}(A, B, B', B_{\mcl U}, w, w_{\mcl U}, w_0, w')$
\State $\mcl L$.back() \Comment{Backwards}
\EndWhile
\end{algorithmic}
\end{algorithm}
At the adversarial training stage, as illustrated in Algorithm \ref{alg:def_O_adv}, only $\mcl R$ is trained for better robustness.
On top of the previous training settings, $\mcl R$ is further forced to retrieve a watermark from the noisy output $B''$ generated by the surrogate model $H'$.
Meanwhile, a clean surrogate model $H_0$ is trained to produce clean output $B_0$, which boosts the specificity of $\mcl R$.
Further, $\mcl R$ must also retrieve a null image when it receives $B_0$.
This solves an intractable problem that we encountered in the experiments, which is further discussed in Section \ref{sect:problem}.
\begin{algorithm}
\caption{The Defense Network - Adversarial Training Stage}
\label{alg:def_O_adv}
\begin{algorithmic}
\While{$\mcl L_{\mcl O}$ not converged}
\State $B_0 \gets \mcl H_0(A)$
\State $B'' \gets \mcl H'(A)$
\State $w_0 \gets \mcl R(A; B; B_0)$\Comment{Null Retrieval}
\State $w' \gets \mcl R(B'; B''; B_{\mcl U})$\Comment{Watermark Retrieval}
\State $\mcl L_{\mcl O} = \mcl L_{\mcl O}(A, B, B', B'', w, w_0, w')$\Comment{Get Loss}
\State $\mcl L_{\mcl O}$.back()\Comment{Backwards}
\EndWhile
\end{algorithmic}
\end{algorithm}
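The sketch below outlines how the adversarial fine-tuning of $\mcl R$ in Algorithm \ref{alg:def_O_adv} could be organized: each batch mixes images that should yield the watermark ($B'$, $B''$, $B_{\mcl U}$) with images that should yield the null image ($A$, $B$, $B_0$). The batching scheme, names, and the assumption that $w$ and $w_0$ are single $C \times H \times W$ tensors are illustrative only.
\begin{verbatim}
# Sketch of the adversarial fine-tuning stage for the retrieval network R.
import torch

def finetune_retrieval(R, batches, w, w0, epochs=50):
    opt = torch.optim.Adam(R.parameters(), lr=1e-3)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for wm_imgs, clean_imgs in batches:  # from B'/B''/B_U and A/B/B_0
            t_wm = w.unsqueeze(0).expand(wm_imgs.size(0), -1, -1, -1)
            t_null = w0.unsqueeze(0).expand(clean_imgs.size(0), -1, -1, -1)
            loss = mse(R(wm_imgs), t_wm) + mse(R(clean_imgs), t_null)
            opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}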
The two-party mini-max game is defined as
\begin{equation}
\begin{aligned}
& \qquad \qquad \qquad \underset{\mcl E, \mcl R}{\min} \ \underset{\mcl E_{\mcl U}}{\max} \ \mcl L(\mcl E, \mcl R, \mcl E_{\mcl U}) =\\
& \bigg(
\mbb E \big[ \sum_{b_i \in B, s_i \in S} \frac{1}{N_c} \big\|
\mcl R \big(
\mcl E_{\mcl U}(
\mcl E(b_i, w),
s_i)
\big)
- w \big\|^2 \big]
\bigg),
\end{aligned}
\end{equation}
where $\mcl E_{\mcl U}$ benefits most when $\mcl R$ cannot retrieve a valid $w'$.
Conversely, $\mcl E$ and $\mcl R$ benefit most when the retrieved $w'$ is as close to $w$ as possible.
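In practice, this game can be realized with alternating updates, as in the simplified sketch below: the overwriting embedder ascends the retrieval error while $\mcl E$ and $\mcl R$ descend it. The single-step schedule, interfaces, and plain MSE objective are assumptions; the full losses are given next.
\begin{verbatim}
# Simplified alternating min-max update (assumed interfaces and schedule).
import torch

def minimax_step(E, R, E_u, b, s, w, opt_wm, opt_u):
    mse = torch.nn.MSELoss()
    w_b = w.unsqueeze(0).expand(b.size(0), -1, -1, -1)  # tile the watermark
    # Maximization: the overwriting embedder tries to destroy the watermark.
    loss_u = -mse(R(E_u(E(b, w_b), s)), w_b)
    opt_u.zero_grad(); loss_u.backward(); opt_u.step()
    # Minimization: E and R try to keep the watermark retrievable.
    loss_wm = mse(R(E_u(E(b, w_b), s)), w_b)
    opt_wm.zero_grad(); loss_wm.backward(); opt_wm.step()
\end{verbatim}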
\subsubsection{Loss Functions}
The loss function for training the defense network is defined as
\begin{equation}
\begin{aligned}
\mcl L = \mcl L_{\mcl U} + \mcl L_{\mcl O}
\end{aligned}
\end{equation}
where $\mcl L_{\mcl U}$ and $\mcl L_{\mcl O}$ respectively denote the loss of training the overwriting network and the watermarking part of the defense network.
Similar to Equation \ref{eq:loss_u}, $\mcl L_{\mcl U}$ here is defined as
\begin{equation}
\begin{aligned}
\mcl L_{\mcl U} = \mcl L_{\mcl E}^{\mcl U} + \mcl L_{\mcl R}^{\mcl U} + l_{\mcl U}.
\end{aligned}
\end{equation}
The extra term $l_{\mcl U}$ denotes the adversarial overwriting loss that attempts to make $\mcl R$ retrieve a blank image $w_0$ from $B_{\mcl U}$.
This is defined as
\begin{equation}
\begin{aligned}
l_{\mcl U} = \sum_{b_i \in B_{\mcl U}} \frac{1}{N_c} \| \mcl R(b_i) - w_0 \|^2.
\end{aligned}
\end{equation}
$\mcl L_{\mcl O}$ is then further decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl O} = \mcl L^{\mcl O}_{\mcl E} + \mcl L^{\mcl O}_{\mcl R} + \mcl L^{\mcl O}_{\mcl D} + l_{\mcl O}
\end{aligned}
\end{equation}
where the terms respectively represent the losses for training the embedding network, the retrieval network, and the discriminator, plus the defense adversarial loss.
Further, $\mcl L_{\mcl E}^{\mcl O}$ comprises the following terms:
\begin{equation}
\begin{aligned}
\mcl L_{\mcl E}^{\mcl O} = \lambda_{mse} l_{mse} + \lambda_{freq} l_{freq} + \lambda_{vgg} l_{vgg} + \lambda_{adv} l_{adv},
\end{aligned}
\end{equation}
where the former three losses are identical to those appearing in Equation \ref{eq:loss_u_e}.
The last term $l_{adv}$ represents the adversarial loss against the discriminator network, defined as
\begin{equation}
\begin{aligned}
l_{adv} = \mbb E_{b_i' \in B'} \big[ \log(\mcl D(b_i')) \big].
\end{aligned}
\end{equation}
The goal is to make the embedding network produce container images that cannot be detected by the discriminator network.
$\mcl L_{\mcl R}^{\mcl O}$ is decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl R}^{\mcl O} = \lambda_{wm} l_{wm} + \lambda_{clean} l_{clean} + \lambda_{cst} l_{cst},
\end{aligned}
\end{equation}
where the $\lambda$s are weight parameters.
$l_{wm}$ denotes the watermark retrieval loss, defined as
\begin{equation}
\begin{aligned}
l_{wm} = \sum_{b_i' \in B'} \frac{1}{N_c} \| \mcl R(b_i') - w\|^2 + \sum_{b_i'' \in B''} \frac{1}{N_c} \| \mcl R(b_i'') - w\|^2.
\end{aligned}
\end{equation}
$l_{clean}$ represents the blank extraction loss for guiding $\mcl R$ to extract only blank images from images not possessing watermark information, denoted as
\begin{equation}
\begin{aligned}
l_{clean} = \sum_{a_i \in A} \frac{1}{N_c} \| \mcl R(a_i) - w_0\| + \sum_{b_i \in B} \frac{1}{N_c} \| \mcl R(b_i) - w_0\|,
\end{aligned}
\end{equation}
where $w_0$ is a blank image.
Lastly, $l_{cst}$ is the consistency loss for ensuring that the watermarks extracted from different images are consistent, denoted as
\begin{equation}
\begin{aligned}
l_{cst} = \sum_{x, y \in B' \bigcup B''} \| \mcl R(x) - \mcl R(y) \|^2
\end{aligned}
\end{equation}
$l_{\mcl O}$ stands for the defense adversarial loss that guarantees that $\mcl R$ can retrieve $w' = w + \epsilon$ from the overwritten images $B_{\mcl U}$, defined as
\begin{equation}
\begin{aligned}
l_{\mcl O} = \sum_{b_i \in B_{\mcl U}} \frac{1}{N_c} \| \mcl R(b_i) - w \|^2.
\end{aligned}
\end{equation}
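A compact sketch of how these retrieval-side terms could be combined is shown below; the batching, the squared-error form of $l_{clean}$, and the rolled-batch approximation of the pairwise consistency loss are simplifying assumptions.
\begin{verbatim}
# Sketch of the retrieval-side losses of the defense network.
import torch

def retrieval_loss(R, b_wm, b_clean, b_over, w, w0,
                   lam_wm=1.0, lam_clean=1.0, lam_cst=1.0, lam_o=1.0):
    mse = torch.nn.functional.mse_loss
    def tile(img, n):                    # repeat a single target image
        return img.unsqueeze(0).expand(n, -1, -1, -1)
    rec_wm, rec_cl, rec_ov = R(b_wm), R(b_clean), R(b_over)
    l_wm = mse(rec_wm, tile(w, rec_wm.size(0)))      # watermark from B'/B''
    l_clean = mse(rec_cl, tile(w0, rec_cl.size(0)))  # null image from A/B
    l_cst = mse(rec_wm, rec_wm.roll(1, dims=0))      # consistency (approx.)
    l_o = mse(rec_ov, tile(w, rec_ov.size(0)))       # watermark from B_U
    return lam_wm*l_wm + lam_clean*l_clean + lam_cst*l_cst + lam_o*l_o
\end{verbatim}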
\subsection{Discussion}
In our defense framework, the overwriting network is trained in tandem with the watermarking network to form the defense network. The purpose of the overwriting network is to overwrite the original watermark with a forged watermark, creating an adversarial relationship between the two. The retrieval network of the watermarking network must then be able to retrieve the original watermark from the overwritten images, as demonstrated in previous work \cite{baluja2019hiding}.
As the two embedding networks embed the original and forged watermarks into the same container image in parallel, both secret images are preserved within the container image. This is because, as shown in \cite{baluja2019hiding}, it is possible to embed multiple secret images into one cover image, albeit with a higher perceptual quality loss.
Our experiments show that without proper adversarial training, the watermarking network is unable to retrieve a valid watermark. Thus, our adversarial training scheme is a crucial component of the defense framework.
\section{Experiment}
\label{sect:exp}
\subsection{Experimental Setup}
\subsubsection{Dataset}
Two datasets were used to train the image-processing surrogate model: the de-raining dataset from \cite{zhang2018density} and an 8-bit image dataset generated via the algorithm in \cite{zou2021stylized}.
The de-raining dataset is publicly available, while the 8-bit image dataset was generated using images from the ImageNet dataset.
The goal of the first task was to remove raindrops from the images.
The second task was to transform an input image into an 8-bit style artwork.
For each task, we split the dataset into two subsets: a training set of $4,000$ images and a test set of $1,000$ images.
All the images were resized to $256 \times 256$ for training.
We also took samples from the ImageNet dataset to train $\mcl U$.
Here, there were $40,000$ images in the training set and $10,000$ images in the test set.
Each image was greater than $256 \times 256$ in size, so we randomly cropped the images down to $256 \times 256$ to enhance $\mcl U$'s robustness.
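A sketch of the corresponding data pipeline is given below; the directory layout, transform choices, and batch size are assumptions for illustration.
\begin{verbatim}
# Sketch of the random-crop data pipeline for training the overwriting net.
import torch
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.RandomCrop(256),   # random 256x256 crops for robustness
    transforms.ToTensor(),
])
dataset = torchvision.datasets.ImageFolder("imagenet_subset/train",
                                           transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=20, shuffle=True)
\end{verbatim}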
\subsubsection{Implementation details}
$\mcl E$'s network structure follows UNet \cite{ronneberger2015u}.
UNet performs particularly well on image-to-image translation and semantic segmentation tasks, so the network produces good results when there are close connections between the inputs and the outputs.
CEILNet \cite{fan2017generic} was used as the model for $\mcl R$, which is believed to work well when the inputs somewhat differ from the outputs.
Patch-GAN \cite{isola2017image} was used for $\mcl D$.
In terms of the overwriting network, UNet \cite{ronneberger2015u} was once more used as the network structure for $\mcl E_{\mcl U}$.
For $\mcl R_{\mcl U}$, we simply used stacks of convolutional layers, as the critical point lies in the overwriting procedure, and, here, the embedding network plays the more crucial role.
Moreover, there is no discriminator in this training process.
Lastly, the defense network $\mcl O$ comprises the watermarking network and $\mcl U$.
\subsubsection{Evaluation metrics}
We chose PSNR and SSIM \cite{hore2010image} to evaluate the visual quality of the container image in comparison to its corresponding cover image.
Additionally, we used normalized cross correlation (NCC) to measure whether a retrieved watermark was valid.
If the NCC between a retrieved watermark and the original watermark was greater than $0.95$, the retrieved watermark was considered to be legitimate.
NCC is defined as
\begin{equation}
\begin{aligned}
NCC = \frac{\langle \mcl R (b_i'), w \rangle}{\| \mcl R (b_i') \| \cdot \| w\|}
\end{aligned}
\end{equation}
where $\langle \cdot , \cdot \rangle$ denotes the inner product, and $\| \cdot \|$ denotes the L2 norm.
The success rate (SR) is defined as the proportion of container images from which the watermark is successfully retrieved.
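The sketch below shows how the NCC check and the success rate could be computed, assuming $\mcl R$ maps one container image to one retrieved watermark.
\begin{verbatim}
# Sketch of the NCC validity check (threshold 0.95) and success rate.
import torch

def ncc(retrieved, w):
    r, t = retrieved.flatten(), w.flatten()
    return torch.dot(r, t) / (r.norm() * t.norm())

def success_rate(R, container_images, w, threshold=0.95):
    hits = sum(1 for b in container_images if ncc(R(b), w) > threshold)
    return 100.0 * hits / len(container_images)
\end{verbatim}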
\subsection{Baseline Reproduction}
\subsubsection{Training the watermarking network}
First, we reproduced Zhang's method \cite{zhang2020model} as our experimental baseline.
When training the watermarking networks $\mcl E$, $\mcl R$, and $\mcl D$, we set the initial learning rate of the Adam optimizer to 0.001.
Here, the goal was to equip the watermarking network with the ability to embed a fixed watermark into an arbitrary image, and to retrieve the watermark from the container image.
Therefore, we trained the network on the ImageNet training dataset, where one epoch contains $40,000$ images.
The images were randomly cropped down to $256 \times 256$ so as to increase the randomness of the input data.
We set the batch size to 10, which means that the model ran through 10 images in one iteration.
If there was no loss descent within 4000 iterations, we decayed the learning rate by a factor of $0.2$.
All $\lambda$s were set to 1 except for $\lambda_{adv} = 0.01$, the weight parameter of the adversarial loss.
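The optimizer and plateau-based decay described above could be set up as in the following sketch; the network, data loader, and loss routine are caller-supplied placeholders, and expressing the patience in scheduler steps is an assumption.
\begin{verbatim}
# Sketch of the optimizer and plateau-based learning-rate decay.
import torch

def train_with_plateau_decay(net, loader, training_step, epochs=1):
    opt = torch.optim.Adam(net.parameters(), lr=0.001)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        opt, mode="min", factor=0.2, patience=4000)  # 4000 steps w/o descent
    for _ in range(epochs):
        for batch in loader:
            loss = training_step(batch)   # caller-supplied loss computation
            opt.zero_grad(); loss.backward(); opt.step()
            scheduler.step(loss.item())   # multiplies the lr by 0.2 on plateau
\end{verbatim}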
Figure \ref{fig:test_wmer_res} depicts the test results of the two trained models.
Each row of the two images is an instance.
From left to right, each column represents the cover images $c$, the secret images $s$, the container images $c'$, the retrieved secret images $s'$, and the null images retrieved from watermark-free images.
From the results, it is clear that the watermarking network was able to complete both the embedding and the retrieval tasks quite well.
Further, a pure black image was guaranteed when the input contained no hidden content.
Here, our settings differ slightly from Zhang's method.
Zhang set the null image to pure white, but for ease of reading, we set the null image to black.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/wmer_test.png}
\caption{Flower}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/wmer_test_pepper.png}
\caption{Pepper}
\end{subfigure}
\caption{Test Results of the Watermarking Network}
\label{fig:test_wmer_res}
\end{figure}
\subsubsection{The adversarial stage}
This stage was designed to enhance the robustness of the retrieval network by training it to retrieve the watermark from the surrogate model's processed images.
The retrieval will fail if this step is not conducted, because the retrieval network never encounters any noisy samples from the surrogate model during training.
However, because of a problem we discovered (discussed in Section \ref{subsect:atk}), we made a change to this stage of the process where we involved the outputs from a watermark-free surrogate model in the fine-tuning process.
To train the surrogate models, we used the de-raining and the 8-bit datasets in two parallel experiments.
The paired processed images were watermarked by the watermarking network.
By training the surrogate models this way, we forced the surrogate models to overfit the watermark signals hidden in the processed images, such that every output from the surrogate models carried the watermark signal.
Here, the batch size was set to $20$ and the number of epochs was set to $50$ based on preliminary experiments.
The initial learning rate was set to $0.001$, and then decayed by $0.2$ if the loss remained unchanged for 5 epochs.
Additionally, we used the same settings to train a watermark-free surrogate model with the watermark-free datasets.
After training the surrogate models, we used them to produce noisy watermarked images and the watermark-free images, which were then fed into the retrieval network $\mcl R$.
In the adversarial training stage, the hyperparameters for updating $\mcl R$ remained the same as those used to train the watermarking network.
However, we reset the learning rate back to $0.001$ so as to let the trained network escape the local minimum.
The fine-tuning process lasted for $50$ epochs.
As a result, the fine-tuned retrieval network was able to retrieve the watermark from both the watermarked image $B'$ and the surrogate model's outputs $B''$.
The results are visually presented in Figure \ref{fig:org_meth_res}.
Details of the results are listed in Table \ref{tab:res_org}.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/method_org_val.png}
\caption{De-raining + Flower}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/method_org_val_8bit.png}
\caption{8bit + Pepper}
\end{subfigure}
\caption{Watermarks Retrieved from the Surrogate Models' Output}
\label{fig:org_meth_res}
\end{figure}
\begin{table}[t!]
\centering
\scriptsize
\caption{Original Method Results}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{l|ccccccc}
\toprule
Condition/Metric & \textbf{PSNR} & \textbf{SSIM} & \textbf{NCC} & \textbf{SR}(\%) \\
\midrule
\textbf{De-raining $\times$ $\mcl W$} & 30.49 & 0.8688 & 0.9992 & 100 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ UNet} & / & / & 0.9974 & 100 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ Res16} & / & / & 0.9877 & 100 \\
\textbf{8-bit $\times$ $\mcl W$} & 32.89 & 0.8739 & 0.9999 & 100 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ UNet} & / & / & 0.9985 & 100 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ Res16} & / & / & 0.9910 & 100 \\
\bottomrule
\end{tabular}}%
\label{tab:res_org}%
\end{table}%
With each dataset, we conducted three parallel experiments: one using UNet, one using a residual network with $16$ blocks (Res16), and one performed directly on the watermarked image set $B'$.
PSNR and SSIM were used to measure the quality of the container image $c'$ compared to its corresponding cover image $c$.
NCC and SR were only used to validate the watermark retrieval.
Remarkably, the success rate of the watermark retrieval reached $100\%$ in each experiment, which firmly verifies the efficacy of Zhang's method.
\subsection{Attacks}
\label{subsect:atk}
We trained our overwriting network with the Adam optimizer on the ImageNet training set.
The learning rate and batch size were set to $0.001$ and $20$.
We decayed the learning rate by a factor of $0.2$ if there was no loss decrease within $2,000$ iterations.
After $20$ epochs, there was no significant loss descent, so we set the number of epochs to $30$.
The $\lambda$s were all set to $1$.
The cover images were randomly cropped to $256 \times 256$ so as to increase the randomness of the input data and, in turn, enhance the robustness of the overwriting network.
Further, the overwriting network was trained to embed one of four selected watermarks: ``flower", ``copyright", ``lena", or ``pepper", into an arbitrary image, and then retrieve the watermark.
The effect is depicted in Figure \ref{fig:test_u}, where each row is an instance.
From left to right, each column respectively represents the cover images $c$, the secret images $s$, the container images $c'$, and the recovered secret images $s'$.
\begin{figure}[t!]
\centering
\includegraphics[width=.3\textwidth]{figure/usurper_test.png}
\caption{Test Results of the Overwriting Network}
\label{fig:test_u}
\end{figure}
With the trained overwriting network, we launched an attack on the watermarked images.
Generally, the watermarked images $B'$ were overwritten with another watermark so as to prevent the watermark from being retrieved.
The direct effect of the attack is depicted in Figure \ref{fig:usurper_atk_res}, where each row is an instance.
From left to right, each column respectively represents the cover images $c$, the secret images $s$, the container images $c'$, and the retrieved secret images $s'$.
Table \ref{tab:res_atk} lists the results of the visual quality test for the container image and watermark retrieval under various conditions, namely, different combinations of surrogate model types and datasets.
Each value is the average over 100 randomly selected images.
We performed three experiments with each of the two tasks, i.e., a direct attack on $B'$, an attack on the UNet surrogate model, and another on the Res16 surrogate model.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/usurper_attack.png}
\caption{De-raining + Flower}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/usurper_attack_8bit.png}
\caption{8bit + Pepper}
\end{subfigure}
\caption{Results of the Overwriting Attack}
\label{fig:usurper_atk_res}
\end{figure}
Compared to the watermarked images $B'$, the quality of the attacked image $B_{\mcl U}$ decreased slightly.
However, the quality loss was still negligible to the human eye.
Success rates across all the experiments were no greater than 10\%, and half of them reached 0\%, which proves the efficacy of our attack.
Notably, the success rate of the watermark retrieval with the Res16 surrogate model on both tasks was higher than the others, which is an interesting phenomenon.
\begin{table}[t!]
\centering
\scriptsize
\caption{Attack Results}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{l|ccccccc}
\toprule
Condition/Metric & \textbf{PSNR} & \textbf{SSIM} & \textbf{NCC} & \textbf{SR}(\%) \\
\midrule
\textbf{De-raining $\times$ $\mcl W$ $\times$ $\mcl U$} & 27.24 & 0.8031 & 0.0565 & 4 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.0109 & 0 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.1527 & 10 \\
\textbf{8-bit $\times$ $\mcl W$ $\times$ $\mcl U$} & 31.91 & 0.6061 & 0.2968 & 0 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.0678 & 0 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.2248 & 5 \\
\bottomrule
\end{tabular}}%
\label{tab:res_atk}%
\end{table}%
\subsubsection{The overfitting problem in the retrieval network}
\label{sect:problem}
\begin{figure*}[t!]
\centering
\includegraphics[width=.8\textwidth]{figure/fine_tune.png}
\caption{Training Process of the Defense Network:
From left to right:
the images to process $A$,
the processed images $B$,
the watermarked processed images $B'$,
the overwritten watermarked processed images $B_{\mcl U}$,
the overwritten watermark-free processed images $B_{\mcl U}'$,
the null images retrieved from $A$ and $B$,
the watermarks retrieved from $B'$ and $B_{\mcl U}$,
the null images retrieved from $B_{\mcl U}'$,
and the watermark image $w$.
}
\label{fig:train_o}
\end{figure*}
In the attack, we also tried to use the fine-tuned retrieval network to extract watermarks from the images that were only processed by the overwriting network.
In other words, we tried to extract watermarks from images that did not contain a watermark signal embedded by the watermarking network.
Under these circumstances, the retrieval network was still able to retrieve the watermark with decreased quality as demonstrated in Figure \ref{fig:overfitting}.
This indicated that, during the fine-tuning, the retrieval network was tuned to output a pre-defined watermark if there was any secret image signal in the container images, regardless of what exactly the signal represented.
\begin{figure}[t!]
\centering
\includegraphics[width=.3\textwidth]{figure/overfitting.png}
\caption{Overfitting Phenomenon:
From left to right, the images depict the rainy image to process, the watermarked image from the overwriting network, the overwriting watermark, and the retrieved watermark from the second image by the retrieval network.
The watermark can be retrieved from any container image that has some steganographic content.}
\label{fig:overfitting}
\end{figure}
Though this overfitting phenomenon allows the method to withstand an overwriting attack, it is nevertheless harmful to the method.
This is because the watermarking scheme is nullified if a valid watermark can be retrieved from any container image that does not contain the corresponding watermark information.
We managed to overcome this problem with a fairly simple manoeuvre.
We trained a watermark-free surrogate model, and then, we added its output images into the adversarial stage of fine-tuning the retrieval network.
The retrieval network was therefore made to differentiate the outputs of the watermark-free surrogate model from those of the watermarked surrogate model, and output the null images correspondingly.
This extra step successfully mitigates this problem.
\subsection{Defenses}
Lastly, we trained the defense network with the same hyperparameters as above.
The main idea was to concurrently train a watermarking network and an overwriting network, and to make the retrieval network retrieve the watermark from the overwritten container image.
Meanwhile, as the adversary, the overwriting network attempts to overwrite the watermark within the container image so that the retrieval network will only yield null images.
Figure \ref{fig:train_o} directly illustrates the training process, where the defense network is trained to embed the watermark into an arbitrary image, and retrieve the watermark from the container and overwritten images.
Further, the retrieval network must generate a null image if the input does not contain the embedded watermark signal.
The settings in the fine-tuning stage were almost the same as for the watermarking network's adversarial stage.
Additionally, the overwriting network also participated in this stage so as to force the retrieval network to produce the watermark when it encounters the overwritten container image, and a null image when it encounters a container image generated only by the overwriting network.
Finally, we tested the defense network on two datasets with different settings.
Table \ref{tab:res_def} shows the test results.
As shown, the container images generated by the defense network have a better visual quality than those generated by the watermarking network.
Among the watermark retrieval tests, all success rates reached 100\% except for the direct overwriting attack on the 8-bit dataset, which verifies the efficacy of our defense method.
\begin{table}[t!]
\centering
\scriptsize
\caption{Watermark Retrieval Results Comparison among Defenses}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{l|ccccccc}
\toprule
Condition/Metric & \textbf{PSNR} & \textbf{SSIM} & \textbf{NCC} & \textbf{SR}(\%) \\
\midrule
\textbf{De-raining $\times$ $\mcl O$} & 34.07 & 0.9022 & 0.9997 & 100 \\
\textbf{De-raining $\times$ $\mcl O$ $\times$ $\mcl U$} & / & / & 0.9924 & 100 \\
\textbf{De-raining $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.9915 & 100 \\
\textbf{De-raining $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.9914 & 100 \\
\textbf{8-bit $\times$ $\mcl O$} & 34.54 & 0.8796 & 0.9998 & 100 \\
\textbf{8-bit $\times$ $\mcl O$ $\times$ $\mcl U$} & / & / & 0.9040 & 0.81 \\
\textbf{8-bit $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.9991 & 100 \\
\textbf{8-bit $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.9982 & 100 \\
\bottomrule
\end{tabular}}%
\label{tab:res_def}%
\end{table}%
\section{Discussion}
\label{sect:dis}
\subsection{Analysis of the Overwriting Attack}
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{figure/freq_als_atk.pdf}
\caption{The Overwriting Attack}
\label{fig:freq_als_atk}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/freq_als_def.pdf}
\caption{The Defense Method}
\label{fig:freq_als_def}
\end{subfigure}
\caption{Frequency Analysis}
\label{fig:freq_als}
\end{figure*}
\subsubsection{Frequency Analysis}
The objective of this analysis is to investigate why the overwriting attack is able to render the watermark embedded in the container image ineffective. This is achieved by calculating the Azimuthal Integral of the experimental images and comparing their frequency domains. The final data is obtained by averaging the Azimuthal Integral computed from 1,000 groups of test images, each group consisting of the container image generated by the watermarking network, the cover image, the overwritten container image, the output from the surrogate model, and the overwritten output from the surrogate model. The images within each group correspond to the same processed image.
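For reference, the azimuthally averaged power spectrum can be computed as in the sketch below; the log-amplitude scaling and radial binning scheme are assumptions about the exact implementation.
\begin{verbatim}
# Sketch of the azimuthal integral (azimuthally averaged power spectrum).
import numpy as np

def azimuthal_average(image_gray):
    f = np.fft.fftshift(np.fft.fft2(image_gray))
    power = 20 * np.log(np.abs(f) + 1e-8)             # log amplitude spectrum
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)   # radius of each pixel
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)                # mean power per radius
\end{verbatim}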
Typically, images processed by Deep Convolutional Neural Networks (DCNNs) display a bias in the high frequency domain. As illustrated in Figure \ref{fig:freq_als_atk}, the container image generated by the watermarking network and its corresponding image generated by the surrogate model exhibit an abnormally high amplitude in the high frequency domain, which distinguishes them greatly from the cover image. This is the reason why the watermark can be invisibly embedded into the cover image, as human eyes are not sensitive enough to the high frequency domain of an image.
However, through fine-tuning, the retrieval network in the watermarking network can still retrieve the watermark from the surrogate model's output, despite its significant deviation from the frequency distribution of the container image. This emphasizes the significance of the fine-tuning stage. In the case of the overwritten container image, it displays a marked bias in the high frequency domain, both in comparison to the cover image and the watermarked image. A peak can be observed in the range of 160 to 175 on the frequency axis, which neutralizes the previously embedded watermark.
To further ascertain the location where the watermark is embedded, a low-pass filter is applied to the watermarked images. The filtered image retains its visual quality to the extent that changes are not easily noticeable by the human eye. This filter is applied to 1,000 container images and then the watermark retrieval is performed. As expected, the success rate of the retrieval drops to 0, and the direct effect can be seen in Figure \ref{fig:low_pass}, where each column in each row, from left to right, respectively represents the container image, the retrieved watermark, the filtered container image, and the nullified retrieval result. This underscores the high sensitivity of the watermark retrieval to the high frequency distribution of the container image.
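The low-pass filtering step can be reproduced with a simple frequency-domain mask, as in the sketch below; the circular mask and the cut-off radius are assumptions chosen to suppress the high-frequency band identified above.
\begin{verbatim}
# Sketch of the low-pass filter applied to a container image (H x W x C).
import numpy as np

def low_pass(image, radius=150):
    f = np.fft.fftshift(np.fft.fft2(image, axes=(0, 1)), axes=(0, 1))
    h, w = image.shape[:2]
    y, x = np.ogrid[:h, :w]
    mask = np.hypot(x - w // 2, y - h // 2) <= radius  # keep low frequencies
    f = f * mask[..., None]                            # zero the high band
    out = np.fft.ifft2(np.fft.ifftshift(f, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
\end{verbatim}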
\begin{figure}[t!]
\centering
\includegraphics[width=.3\textwidth]{figure/low_pass_attack.png}
\caption{Watermark Retrieval of Low-pass Filtered Container Images}
\label{fig:low_pass}
\end{figure}
\subsection{Analysis of the Defense Network}
\subsubsection{Residue Analysis}
First, we performed a residue analysis on the container images generated by our defense and watermarking networks.
The details can be seen in Figure \ref{fig:residue_als}, where from left to right, each column in each row respectively represents the cover image, the container image, and the residue enhanced 10 times.
Intuitively, the residues of the defense network's output seem to be darker (better) than those of the watermarking network's output.
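The residue itself is straightforward to reproduce; a minimal sketch (assuming 8-bit images loaded as NumPy arrays) is:
\begin{verbatim}
import numpy as np

def residue(cover, container, gain=10):
    """Absolute difference between container and cover, amplified `gain` times."""
    diff = np.abs(container.astype(np.float32) - cover.astype(np.float32))
    return np.clip(gain * diff, 0, 255).astype(np.uint8)
\end{verbatim}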
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/residue_oath.png}
\caption{Defense Network}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/residue_wmer.png}
\caption{Watermarking Network}
\end{subfigure}
\caption{Residue Analysis (10x Enhanced)}
\label{fig:residue_als}
\end{figure}
\subsubsection{Frequency Analysis}
In the adversarial stage of the watermarking network, the retrieval network is required to retrieve the watermark from both the surrogate model's output and the container images.
Due to the bias in the frequency domain of the overwritten surrogate model's output shown in Figure \ref{fig:freq_als_atk}, the retrieval fails, because it has never encountered any input with such a frequency distribution.
However, in the defense network's fine-tuning stage, the surrogate model's outputs are protected by the defense network and share almost the same frequency distribution as the overwritten surrogate model's output shown in Figure \ref{fig:freq_als_def}.
This forces the retrieval network to become more robust to the mutable watermark signal.
Further, with the assistance of the frequency loss, the container images generated by the defense network share a more similar frequency distribution to the cover images than those generated by the watermarking network.
Our defense method therefore shows good robustness to the overwriting attack, even if the type of the surrogate model does not match the one used in fine-tuning.
Nevertheless, an adversary can still switch to another attack that cripples the watermark at a higher cost in visual quality, for example low-pass filtering.
\subsection{Ablation Study}
\subsubsection{The Frequency Loss}
The ablation experiments in Zhang {\it et al.} \cite{zhang2020model} prove the necessity of several loss terms, including the clean loss and the consistent loss.
In our defense network, the frequency loss regularizer is added into the loss function so as to guide the network to generate the container images that share a more similar frequency distribution to the cover image.
The only difference between the loss terms in our defense network and the watermarking network is the frequency loss.
This boosts the image quality, as shown by the test results presented in Tables \ref{tab:res_org} and \ref{tab:res_def}.
Here, both the PSNR and the SSIM values of the container images generated by our defense network are higher than those from the watermarking network.
Further, as Figure \ref{fig:freq_als_def} shows, the high frequency distribution of the containers from the defense network is closer to the cover image than those from the watermarking network.
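The exact form of this regularizer is not spelled out here, but a representative sketch, under our own assumptions, is an L1 distance between the log-amplitude spectra of the container and cover batches, added to the base loss with an illustrative weight:
\begin{verbatim}
import torch

def frequency_loss(container, cover):
    fc = torch.fft.fft2(container)   # (N, C, H, W) complex spectra
    fv = torch.fft.fft2(cover)
    return torch.mean(torch.abs(torch.log(fc.abs() + 1e-8)
                                - torch.log(fv.abs() + 1e-8)))

# total_loss = watermarking_loss + lambda_freq * frequency_loss(container, cover)
\end{verbatim}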
\subsubsection{Fine-tuning}
Unlike the original watermarking method in \cite{zhang2020model}, we add to the fine-tuning dataset an extra group of watermark-free images generated by a surrogate model trained on the watermark-free training dataset.
This prevents the watermarking network from overfitting the steganographic signal in the container images so that it will retrieve the watermark regardless of what exact watermark signal lies in the container images.
Figure \ref{fig:overfitting} shows how the overfitting phenomenon nullifies the watermarking method.
Therefore, the watermark-free surrogate model's output is essential in this stage.
If a watermark can be retrieved from a container image that does not contain the specific watermarking signal, the method can be claimed unreliable.
Moreover, by inserting the overwriting network into the watermarking network to form the defense network, the defense network is pushed to become more robust to both the overwriting attack and the addition of noise.
In addition, the embedding network hides the watermark more covertly, and the retrieval network learns to distinguish container images carrying the specific watermark (including overwritten ones) from watermark-free images, from the watermark-free surrogate model's outputs, and from container images carrying any other secret image.
\section{Conclusion}
\label{sect:con}
In this study, we present an overwriting attack that effectively nullifies the watermark embedded in images processed by image processing neural networks. Our attack is also a threat to deep steganography, as it can invisibly replace a secret image with minimal impact on the visual quality of the image. Additionally, we identify an overfitting issue in the original watermarking method and resolve it with an alternative training approach.
To defend against the proposed overwriting attack, we develop a defense network, an adversarial framework that integrates the watermarking network with the overwriting network. To the best of our knowledge, this defense network is resilient against the overwriting attack: through adversarial training, it is able to retrieve valid watermarks from overwritten images and from the output of the overwritten surrogate model.
There is ample room for future research in the area of image-processing model watermarking, including the development of robust watermarking techniques and malicious attacks. Although our method demonstrates robustness against overwriting attacks, the adversary can still manipulate the frequency domain of the output to erase the embedded watermark with minimal perceptual impact. To address this issue, a more robust watermarking method that embeds the watermark in the low frequency domain of the image should be explored.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.08670",
"language": "en",
"timestamp": "2023-02-20T02:05:59",
"url": "https://arxiv.org/abs/2302.08670",
"yymm": "2302"
} | \section{}
Multispectral pedestrian detection is a technology designed to detect and locate pedestrians in Color and Thermal images, and it has been widely used in autonomous driving, video surveillance, etc. So far, most available multispectral pedestrian detection algorithms have achieved only limited success because they fail to take into account the confusion between pedestrian information and background noise in Color and Thermal images. Here we propose a multispectral pedestrian detection algorithm that mainly consists of a cascaded information enhancement module and a cross-modal attention feature fusion module. On the one hand, the cascaded information enhancement module adopts channel and spatial attention mechanisms to perform attention weighting on the features fused by the cascaded feature fusion block. Moreover, it multiplies the single-modal features with the attention weights element by element to enhance the pedestrian features in each single modality and thus suppress interference from the background. On the other hand, the cross-modal attention feature fusion module mines the features of both the Color and Thermal modalities so that they complement each other; the global features are then constructed by adding the cross-modal complemented features element by element and are attentionally weighted to achieve an effective fusion of the two modal features. Finally, the fused features are fed into the detection head to detect and locate pedestrians. Extensive experiments have been performed on two improved versions of annotations (sanitized annotations and paired annotations) of the public KAIST dataset. The experimental results show that our method achieves a lower pedestrian miss rate and more accurate pedestrian detection boxes than the compared methods. The ablation experiments also prove the effectiveness of each module designed in this paper.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} multispectral pedestrian detection, attention mechanism, feature fusion, convolutional neural network, background noise}
\end{abstract}
\section{Introduction}
Pedestrian detection, parsing visual content to identify and locate pedestrians in an image or video, has been viewed as an essential and central task within the computer vision field and is widely employed in various applications, e.g. autonomous driving, video surveillance and person re-identification \citep{jeong2017,zhang2018,lilingli2021,chenyiwen2021,dongneng2022,lishuang,wangshujuan}. The performance of such technology has greatly advanced with the development of convolutional neural networks (CNNs). Typically, pedestrian detectors take Color images as input and try to retrieve the pedestrian information from them. However, the quality of Color images highly depends on the lighting conditions. Pedestrians are frequently missed when detectors process Color images whose resolution and contrast are degraded by unfavorable lighting. Consequently, the use of such models has been limited in all-weather applications.
Thermal imaging captures the infrared radiation of pedestrians and is barely affected by changes in ambient light. The technique of combining Color and Thermal images has therefore been explored in recent years \citep{hwang2015,liu2016,gonzalez2016,yangmoyuan2020,liu2020,li2018joint,li2020discriminative,wangjiaxin2021,huangbochun}. These methods have been shown to improve pedestrian detection performance in complex environments, as they can retrieve more pedestrian information. However, despite important initial success, two major challenges remain. First, as shown in Figure \ref{fig:1}, pedestrians tend to blend with the background in nighttime Color images because of insufficient light \citep{zhuzhipin}, and in daytime Thermal images as well, owing to similar temperatures of the human body and the ambient environment \citep{yang2022}. Second, there is an essential difference between Color images and Thermal images: the former display the color and texture details of pedestrians, while the latter show temperature information. Therefore, measures need to be taken to augment the pedestrian features in the Color and Thermal modalities in order to suppress background interference, and to better integrate both Color and Thermal images so as to improve the accuracy of pedestrian detection in complex environments.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=15cm]{logo2.eps}
\end{center}
\caption{Example of Color and Thermal images of pedestrians in daytime and nighttime scenes}\label{fig:1}
\end{figure}
To address the challenges above, the studies \citep{guan2019,zhou2020} designed illumination-aware networks to obtain illumination measures of the Color and Thermal images, which were used as fusion weights for the Color and Thermal features so as to fuse the two modal features adaptively. However, the acquisition of the illumination measures relied heavily on classification scores, whose accuracy is limited by the performance of the classifier. Li et al. \citep{li2022} reported confidence-aware networks that predict the confidence of the detection boxes for each modality, after which Dempster-Shafer combination rules are employed to fuse the results of the different branches based on uncertainty. Nevertheless, the accuracy of the predicted box confidence is likewise affected by the performance of the confidence-aware network. A cyclic fusion and refinement scheme was introduced by \citep{zhang2020b} to gradually improve the quality of the Color and Thermal features and to automatically balance the complementary and consistent information of the two modalities. However, this method only used a simple feature cascade operation to fuse Color and Thermal features and failed to fully exploit the complementary features of the two modalities.
To tackle the problems aforementioned, we propose a multispectral pedestrian detection algorithm with cascaded information enhancement and cross-modal attention feature fusion. The cascaded information enhancement module (CIEM) is designed to enhance the pedestrian information suppressed by the background in the Color and Thermal images. CIEM uses a cascaded feature fusion block to fuse Color and Thermal features to obtain fused features of both modalities. Since the fused features contain the consistency and complementary information of Color and Thermal modalities, the fused features can be used to enhance Color and Thermal features respectively to reduce the interference of background on pedestrian information. Inspired by the attention mechanism, the attention weights of the fused features are sequentially obtained by channel and spatial attention learning, and the Color and Thermal features are multiplied with the attention weights element by element, respectively. In this way, the single-modal features have the combined information of the two modalities, and the single-modal information is enhanced from the perspective of the fused features. Although CIEM enriches single-modal pedestrian features, simple feature fusion of the enhanced single-modal features is still insufficient for robust multispectral pedestrian detection. Thus, we design the cross-modal attention feature fusion module (CAFFM) to efficiently fuse Color and Thermal features. Cross-modal attention is used in this module to implement the differentiation of pedestrian features between different modalities. In order to supplement the pedestrian information of the other modality to the local modality, the attention of the other modality is adopted to augment the pedestrian characteristics of the local modality. A global feature is constructed by adding the Color and Thermal features after performing cross-modal feature enhancement, and the global feature is used to guide the fusion of the Color and Thermal features. Overall, the method presented in this paper enables more comprehensive pedestrian features acquisition through cascaded information enhancement and cross-modal attention feature fusion, which effectively enhances the accuracy of multispectral image pedestrian detection. The main contributions of this paper are summarized as follows:
(1) A cascaded information enhancement module is proposed. From the perspective of fused features, it reduces the interference from the background of Color and Thermal modalities on pedestrian detection and augments the pedestrian features of Color and Thermal modalities separately through an attention mechanism.
(2) The designed cross-modal attention feature fusion module first mines the features of both Color and Thermal modalities separately through a cross-modal attention network and adds them to the other modality for cross-modal feature enhancement. Meanwhile, the cross-modal enhanced Color and Thermal features are used to construct global features to guide the feature fusion of the two modalities.
(3) Numerous experiments are conducted on the public dataset KAIST to demonstrate the effectiveness and superiority of the proposed method. In addition, the ablation experiments also demonstrate the effectiveness of the proposed modules.
\section{RELATED WORKS}
\subsection{Multispectral Pedestrian Detection}
Multispectral sensors can obtain paired Color-Thermal images that provide complementary information about pedestrian targets. A large multispectral pedestrian detection (KAIST) dataset was constructed by \citep{hwang2015}. Meanwhile, by combining the traditional aggregated channel feature (ACF) pedestrian detector \citep{dollar2014} with the HOG algorithm \citep{dalal2015}, an extended ACF (ACF+T+THOG) method was proposed to fuse Color and Thermal features. In 2016, Liu et al. \citep{liu2016} proposed four fusion schemes, namely low-layer feature, middle-layer feature, high-layer feature, and confidence score fusion, with VGG16 as the backbone network, and middle-layer feature fusion was shown to offer the best integration of Color and Thermal features. Inspired by this, \citep{konig2017} developed a multispectral region proposal network with Faster RCNN (Region with CNN features, RCNN) \citep{ren2017} as the architecture and replaced the original classifier in Faster RCNN with a boosted decision tree classifier to reduce missed and false detections of pedestrians. Recently, Kim et al. \citep{kim2021a} deployed EfficientDet as the backbone network and proposed an EfficientDet-based fusion framework for multispectral pedestrian detection that improves detection accuracy by adding and cascading the Color and Thermal features. Although the studies \citep{hwang2015,liu2016,konig2017,kim2021a} fused Color and Thermal features for pedestrian detection, they mainly focused on exploring the impact of fusion at different stages and adopted only simple feature fusion, without addressing the case where pedestrians are confused with the background.
In 2019, Zhang et al. \citep{zhang2019a} observed a weak alignment problem of pedestrian positions between Color and Thermal images, re-annotated the KAIST dataset accordingly, and proposed the Aligned Region CNN (AR-CNN) to handle weakly aligned multispectral pedestrian detection data in an end-to-end manner. However, deploying this algorithm requires paired annotations, and annotating a dataset is a time-consuming and labor-intensive task, which makes the algorithm difficult to apply in realistic scenes. Kim et al. \citep{kim2021b} proposed a new single-stage multispectral pedestrian detection framework. This framework used multi-label learning to learn input-state-aware features based on the state of the input image pair by assigning an individual label (if the pedestrian is visible in only one image of the pair, the label vector is assigned as $y_1 \in[0,1]$ or $ {y_2} \in [1,0] $ ; if the pedestrian is visible in both images of the pair, the label vector is assigned as $ {y_3} \in [1,1] $ ) to address the weak alignment of pedestrian locations between Color and Thermal images, but the model still requires paired annotations during training. Guan et al. \citep{guan2019} designed illumination-aware networks to obtain illumination measures for the Color and Thermal images separately and used them as the fusion weights for the Color and Thermal features. Zhou et al. \citep{zhou2020} designed a differential modality perception fusion module to guide the features of the two modalities to become similar, and then used an illumination perception network to assign fusion weights to the Color and Thermal features. Kim et al. \citep{kim2022} reported an uncertainty-aware cross-modal guidance (UCG) module that guides the distribution of modal features with high prediction uncertainty to align with the distribution of modal features with low prediction uncertainty. The studies \citep{guan2019,zhou2020} noticed that pedestrians in Color and Thermal images are easily confused with the background and used illumination-aware networks to assign fusion weights to the Color and Thermal features. However, the acquisition of the illumination measures relied heavily on classification scores, whose accuracy was limited by the performance of the classifier. In contrast, the method proposed in this paper not only considers the confusion of pedestrians and background in Color and Thermal images but also effectively fuses the two modal features.
\subsection{Attention Mechanisms}
Attention mechanisms \citep{atention} in computer vision are designed to steer processing toward the most informative visual information. Currently, attention mechanisms are widely used in semantic segmentation \citep{li2020a}, image captioning \citep{li2020b}, image fusion \citep{xiaowanxin2022,cenyueliang2021}, image dehazing \citep{gaojirui2022}, saliency target detection \citep{xu2021}, person re-identification \citep{xukaixiong2022,ACMMM2021,wangyiming}, etc. Hu et al. \citep{hu2020} introduced the squeeze-and-excitation network (SENet) to model the interdependence between feature channels and generate channel attention that recalibrates channel-wise feature maps. Li et al. \citep{li2019a} employed a selective kernel unit (SKNet) to adaptively fuse branches with different kernel sizes based on the input information. Inspired by this, Dai et al. \citep{dai2021} designed a multi-scale channel attention feature fusion network that uses channel attention to replace simple fusion operations such as feature cascading or summation and produces richer feature representations. However, progress in multispectral pedestrian detection is still limited by two main challenges: the interference caused by the background and the difference in the fundamental characteristics of Color and Thermal images. Therefore, we propose a multispectral pedestrian detection algorithm with cascaded information enhancement and cross-modal attention feature fusion based on the attention mechanism.
\section{Methods}
The overall network framework of the proposed algorithm is shown in Figure \ref{fig:2}. The network consists of an encoder, a cascaded information enhancement module (CIEM), a cross-modal attention feature fusion module (CAFFM) and a detection head. Specifically, ResNet-101 \citep{he2016} is used as the backbone network of the encoder to encode the features of the input Color images $\boldsymbol{X}_c$ and Thermal images $\boldsymbol{X}_t$ and obtain the corresponding feature maps ${\boldsymbol{F}_c} \in {{\rm{R}}^{W \times H \times C}}$ and ${\boldsymbol{F}_t} \in {{\rm{R}}^{W \times H \times C}}$ ($ W $, $ H $, $ C $ denote the width, height and number of channels of the feature maps, respectively). CIEM enhances the single-modal information from the perspective of the fused features: it fuses ${\boldsymbol{F}_c}$ and ${\boldsymbol{F}_t}$ with a cascaded feature fusion block and applies attention weighting to the fused features to enrich the pedestrian features. CAFFM complements the features of the two modalities by mining the complementary features between them and constructs global features to guide their effective fusion. The detection head then performs pedestrian recognition and localization on the final fused features.
\subsection{Cascaded Information Enhancement Module}
Considering the confusion of pedestrians with the backgrounds in Color and Thermal images, we design a cascaded information enhancement module (CIEM) to augment the pedestrian features of both modalities to mitigate the effect of background interference on pedestrian detection.
Specifically, a cascaded feature fusion block is used to fuse the Color features ${\boldsymbol{F}_c}$ and Thermal features ${\boldsymbol{F}_t}$. The cascaded feature fusion block consists of a feature cascade, a $1 \times 1$ convolution, a $3 \times 3$ convolution, a $ BN$ layer, and a $ReLu$ activation function. The feature cascade operation splices ${\boldsymbol{F}_c}$ and ${\boldsymbol{F}_t}$ along the channel direction. The $1 \times 1$ convolution enables cross-channel feature interaction and reduces the number of channels of the concatenated feature map, while the $3 \times 3$ convolution enlarges the receptive field and fuses the features more comprehensively to generate the fused feature $ {\boldsymbol{F}_{ct}}$ :
\begin{equation}
\boldsymbol{F}_{c t}={ReLu}\left({BN}\left({Conv}_3\left({Conv}_1\left[\boldsymbol{F}_c, \boldsymbol{F}_t\right]\right)\right)\right)
\end{equation}
where $ BN $ denotes batch normalization, $ Con{v_n}\left( \cdot \right)\ $ denotes a convolution kernel with kernel size $ n \times n $,
$ [ \cdot , \cdot ] $ denotes the cascade of features along the channel direction, $ ReLu( \cdot )$ represents $ReLu$ activation function. Fusion feature $ {\boldsymbol{F}_{ct}}$ is used to enhance the single-modal information because $ {\boldsymbol{F}_{ct}}$ combines the consistency and complementarity of the Color features ${\boldsymbol{F}_c}$ and Thermal features ${\boldsymbol{F}_t}$ . The use of $ {\boldsymbol{F}_{ct}}$ for enhancing the single-modal feature can reduce the interference of the noise in the single-modal features (for example, it is difficult to distinguish between the pedestrian information and the background noise).
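A minimal PyTorch sketch of this cascaded feature fusion block, with layer names of our own choosing, is:
\begin{verbatim}
import torch
import torch.nn as nn

class CascadedFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_c, f_t):
        # Eq. (1): ReLU(BN(Conv3(Conv1([F_c, F_t]))))
        return self.fuse(self.reduce(torch.cat([f_c, f_t], dim=1)))
\end{verbatim}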
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo3.eps}
\end{center}
\caption{Overall framework of the proposed algorithm}\label{fig:2}
\end{figure}
In order to effectively enhance pedestrian features, the fusion feature $ {\boldsymbol{F}_{ct}}$ is sent into the channel attention module (CAM) and spatial attention module (PAM) \citep{woo2018} to make the network pay attention to pedestrian features. The network structure of CAM and PAM is shown in Figure \ref{fig:3}. $ {\boldsymbol{F}_{ct}}$ first learns the channel attention weight ${\boldsymbol{w}_{ca}} \in {{\rm{R}}^{1 \times 1 \times C}}$ by CAM, then uses $ {\boldsymbol{w}_{ca}}$ to weight $ {\boldsymbol{F}_{ct}}$ , and the spatial attention weight $ {\boldsymbol{w}_{pa}} \in {{\rm{R}}^{W \times H \times 1}} $ is obtained from the weighted features by PAM.
The single-modal Color features ${\boldsymbol{F}_c}$ and Thermal features ${\boldsymbol{F}_t}$ are multiplied element by element with the attention weights $ {\boldsymbol{w}_{ca}}$ and $ {\boldsymbol{w}_{pa}}$ to enhance the single-modal features from the perspective of fused features. The whole process can be described as follows:
\begin{equation}
\boldsymbol{F}_t^{\prime}=\left(\boldsymbol{F}_t \otimes \boldsymbol{w}_{c a}\right) \otimes \boldsymbol{w}_{p a}
\end{equation}
\begin{equation}
\boldsymbol{F}_c^{\prime}=\left(\boldsymbol{F}_c \otimes \boldsymbol{w}_{c a}\right) \otimes \boldsymbol{w}_{p a}
\end{equation}
where $\boldsymbol{F}_t^{\prime}$ and $\boldsymbol{F}_c^{\prime}$ denote the Color features and Thermal features obtained by the cascaded information enhancement module, respectively. $\otimes$ represents the element by element multiplication.
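The way the attention weights of the fused feature are shared by both modalities (Eqs. 2 and 3) can be sketched as follows; here cam and pam stand for the channel and spatial attention modules of \citep{woo2018}, whose internals are omitted, and the function names are our own.
\begin{verbatim}
def ciem_enhance(f_c, f_t, f_ct, cam, pam):
    w_ca = cam(f_ct)                # (N, C, 1, 1) channel attention weights
    w_pa = pam(f_ct * w_ca)         # (N, 1, H, W) spatial attention weights
    f_c_prime = f_c * w_ca * w_pa   # Eq. (3)
    f_t_prime = f_t * w_ca * w_pa   # Eq. (2)
    return f_c_prime, f_t_prime
\end{verbatim}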
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\textwidth]{logo4.eps}
\end{center}
\caption{Network structure of channel attention and spatial attention}\label{fig:3}
\end{figure}
\subsection{Cross-modal Attention Feature Fusion Module}
There is an essential difference between Color and Thermal images, Color images reflect the color and texture detail information of pedestrians while Thermal images contain the temperature information of pedestrians, however, they also have some complementary information. In order to explore the complementary features of different image modalities and fuse them effectively, we design a cross-modal attention feature fusion module.
Specifically, the Color features $\boldsymbol{F}_c^{\prime} $ and Thermal features $\boldsymbol{F}_t^{\prime}$ enhanced by CIEM are first mapped into feature vectors $\boldsymbol{v}_c \in \mathrm{R}^{1 \times 1 \times C}$ and $\boldsymbol{v}_t \in \mathrm{R}^{1 \times 1 \times C}$ , respectively, by a global average pooling operation. The cross-modal attention network consists of a set of symmetric $1 \times 1$ convolutions, $R e L u$ activation functions, and $Sigmoid$ activation functions. To obtain the complementary features of the two modalities, more pedestrian features need to be mined from each single modality. The feature vectors $\boldsymbol{v}_t$ and $\boldsymbol{v}_c$ are transformed into the respective modal attention weights $\boldsymbol{w}_t \in \mathrm{R}^{1 \times 1 \times C}$ and $\boldsymbol{w}_c \in \mathrm{R}^{1 \times 1 \times C}$ by the cross-modal attention network. The Color features $\boldsymbol{F}_c^{\prime} $ are then multiplied element by element with the attention weights $\boldsymbol{w}_t$ of the Thermal modality, and the Thermal features $\boldsymbol{F}_t^{\prime}$ are multiplied element by element with the attention weights $\boldsymbol{w}_c$ of the Color modality, so that the features of the other modality are supplemented into the present modality. The specific process can be expressed as follows.
\begin{equation}
\boldsymbol{w}_t=\operatorname{\mathit{Sigmoid}}\left(\mathit{ReLu}\left(\mathit{Conv}_1\left(\mathit{GAP}\left(\boldsymbol{F}_t^{\prime}\right)\right)\right)\right)
\end{equation}
\begin{equation}
\boldsymbol{F}_{c t}^{\prime}=\boldsymbol{w}_t \otimes G A P\left(\boldsymbol{F}_c^{\prime}\right)
\end{equation}
\begin{equation}
\boldsymbol{w}_c={Sigmoid}\left({ReLu}\left({Conv}_1\left({GAP}\left(\boldsymbol{F}_c^{\prime}\right)\right)\right)\right)
\end{equation}
\begin{equation}
\boldsymbol{F}_{t c}^{\prime}=\boldsymbol{w}_c \otimes G A P\left(\boldsymbol{F}_t^{\prime}\right)
\end{equation}
where $\boldsymbol{F}_{c t}^{\prime}$ denotes Color features after supplementation with Thermal features, $\boldsymbol{F}_{t c}^{\prime}$ denotes Thermal features after supplementation with Color features, $G A P(\cdot)$ denotes global average pooling operation, ${Conv}_1(\cdot)$ denotes convolution with convolution kernel size $1 \times 1$ , $ ReLu( \cdot )$ denotes $ReLu$ activation operation, and $Sigmoid$ $(\cdot)$ denotes $Sigmoid $ activation operation.
In order to efficiently fuse the two modal features, the features $\boldsymbol{F}_{c t}^{\prime}$ and $\boldsymbol{F}_{t c}^{\prime}$ are subjected to an element by element addition operation to obtain a global feature vector containing Color and Thermal features. Then, the features $\boldsymbol{F}_t^{\prime}$ and $\boldsymbol{F}_c^{\prime} $ are added element by element and multiplied with the attention weight $\boldsymbol{w}_{c t}$ of the global feature vector element by element to guide the fusion of Color and Thermal features from the perspective of global features to obtain the final fused feature $\boldsymbol{F}$. The fused feature $\boldsymbol{F}$ is input to the detection head to obtain the pedestrian detection results. The feature fusion process can be expressed as follows:
\begin{equation}
\boldsymbol{w}_{c t}={Sigmoid}\left({ReLu}\left({Conv}_1\left(\boldsymbol{F}_{c t}^{\prime} \oplus \boldsymbol{F}_{t c}^{\prime}\right)\right)\right)
\end{equation}
\begin{equation}
\boldsymbol{F}=\boldsymbol{w}_{c t} \otimes\left(\boldsymbol{F}_t^{\prime} \oplus \boldsymbol{F}_c^{\prime}\right)
\end{equation}
where $\oplus$ denotes element by element addition.
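Putting Eqs. (4)-(9) together, a minimal PyTorch sketch of the cross-modal attention feature fusion module (with module and variable names of our own choosing) is:
\begin{verbatim}
import torch
import torch.nn as nn

class CAFFM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Conv2d(channels, channels, 1),
                                 nn.ReLU(inplace=True))
        self.att_c, self.att_t, self.att_g = branch(), branch(), branch()
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, f_c, f_t):
        v_c, v_t = self.gap(f_c), self.gap(f_t)        # (N, C, 1, 1)
        w_t = torch.sigmoid(self.att_t(v_t))           # Eq. (4)
        f_ct = w_t * v_c                               # Eq. (5)
        w_c = torch.sigmoid(self.att_c(v_c))           # Eq. (6)
        f_tc = w_c * v_t                               # Eq. (7)
        w_g = torch.sigmoid(self.att_g(f_ct + f_tc))   # Eq. (8)
        return w_g * (f_c + f_t)                       # Eq. (9)
\end{verbatim}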
\subsection{Loss Function}
The loss function in this paper is consistent with the literature \citep{ren2017} and uses the Region Proposal Network (RPN) loss function $L_{R P N}$ and Fast RCNN \citep{girshick2015} loss function $L_{F R}$ to jointly optimize the network:
\begin{equation}
L=L_{R P N}+L_{F R}
\end{equation}
Both $L_{R P N}$ and $L_{F R}$ consist of classification loss $L_{c l s}$ and bounding box regression loss $L_{r e g}$:
\begin{equation}
L\left(\left\{p_i\right\},\left\{t_i\right\}\right)=\frac{1}{N_{c l s}} \sum_i L_{c l s}\left(p_i, p_i^*\right)+\lambda \frac{1}{N_{\text {reg }}} \sum_i p_i^* L_{r e g}\left(t_i, t_i^*\right)
\end{equation}
where $N_{c l s}$ is the number of anchors, $N_{r e g}$ is the total number of positive and negative samples, $p_{i}$ is the probability that the $i$-th anchor is predicted to be the target, $p_i^*$ is 1 when the anchor is a positive sample and 0 otherwise, $t_{i}$ denotes the predicted bounding box regression parameters of the $i$-th anchor, $t_i^*$ denotes the ground-truth bounding box parameters of the $i$-th anchor, and $\lambda=1$.
The classification losses of the RPN and Fast RCNN networks differ in that the RPN network only distinguishes foreground from background, so its loss is a binary cross-entropy loss, whereas the Fast RCNN classification targets the object categories and uses a multi-class cross-entropy loss:
\begin{equation}
L_{c l s}\left(p_i, p_i^*\right)=-\log \left[p_i^* p_i+\left(1-p_i^*\right)\left(1-p_i\right)\right]
\end{equation}
The bounding box regression loss of RPN network and Fast RCNN network uses Smooth $_{L_1}$ loss:
\begin{equation}
L_{\text {reg }}\left(t_i, t_i^*\right)=R\left(t_i-t_i^*\right)
\end{equation}
where $R$ denotes the Smooth $_{L_1}$ function,
\begin{equation}
\text { Smooth }_{L_1}(x)=\left\{\begin{array}{cc}
\frac{\sigma^2 x^2}{2} & \text { if }|x|<\frac{1}{\sigma^2} \\
|x|-\frac{0.5}{\sigma^2} & \text { otherwise }
\end{array}\right.
\end{equation}
The bounding box regression losses of the RPN and Fast RCNN networks differ only in that the RPN network is trained with $\sigma = 3$ while the Fast RCNN network is trained with $\sigma = 1$.
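For reference, the smooth L1 term with the $\sigma$ parameter can be written as the following sketch (our own code, using the convention that the switch point is at one over sigma squared, as in common Faster RCNN implementations):
\begin{verbatim}
import torch

def smooth_l1(x, sigma=1.0):
    sigma2 = sigma ** 2
    absx = x.abs()
    return torch.where(absx < 1.0 / sigma2,
                       0.5 * sigma2 * x ** 2,
                       absx - 0.5 / sigma2)

# RPN regression loss uses sigma = 3, Fast RCNN uses sigma = 1.
\end{verbatim}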
\section{EXPERIMENTAL RESULTS AND ANALYSIS}
\subsection{Datasets}
This paper evaluates the algorithm performance on the KAIST pedestrian dataset \citep{hwang2015}, which is composed of 95,328 pairs of Color and Thermal images captured during daytime and nighttime and is currently the most widely used multispectral pedestrian detection dataset. The dataset is labeled with four categories: person, people, person?, and cyclist. Considering the application areas of multispectral pedestrian detection (e.g., autonomous driving), all four categories are treated as positive examples in this paper. To address the annotation errors and missing annotations in the original KAIST annotations, the studies \citep{liu2016,li2018,zhang2019a} performed data cleaning and re-annotation of the original data. Given that the annotations used in various studies are not consistent, we use 7601 pairs of Color and Thermal images from the sanitized annotations (SA) \citep{li2018} and 8892 pairs of Color and Thermal images from the paired annotations (PA) \citep{zhang2019a} for model training. The test set consists of 2252 pairs of Color and Thermal images, of which 1455 pairs are from the daytime and 797 pairs are from the nighttime. For a fair comparison with other methods, the test experiments were performed according to the reasonable setting proposed in the literature \citep{hwang2015}.
\subsection{Evaluation Indexes}
In this paper, Log-average Miss Rate (MR) proposed by Dollar et al.\citep {dollar2012} is employed as an evaluation index and combined with the plotting of the Miss Rate-FPPI curve to assess the effectiveness of the algorithm. The horizontal coordinate of the Miss Rate-FPPI curve indicates the average number of False Positives Per Image (FPPI), and the vertical coordinate represents the Miss Rate (MR), which is expressed as:
\begin{equation}
\text { MissRate }=\frac{F N}{T P+F N}
\end{equation}
\begin{equation}
F P P I=\frac{F P}{\text { Total }(\text { images })}
\end{equation}
where ${F N}$ denotes False Negative, ${T P}$ denotes True Positive, ${F P}$ denotes False Positive, the sum of ${T P}$ and ${F N}$ is the number of all positive samples, and $\text { Total }(\text { images })$ denotes the total number of predicted images. It is worth noting that the lower the Miss Rate-FPPI curve trend, the better the detection performance; the smaller the MR value, the better the detection performance.
To calculate MR, nine points are sampled in logarithmic space from the horizontal coordinate of the Miss Rate-FPPI curve (over the range $\left[10^{-2}, 10^0\right]$ ), giving nine corresponding vertical coordinates $m_1$, $m_2$,...,$m_9$. Averaging these values in log space gives MR as follows:
\begin{equation}
\mathrm{MR}=\exp \left[\frac{1}{n} \sum_{i=1}^n \ln \left(m_i\right)\right]
\end{equation}
where $n$ is 9.
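One common way to compute this metric from a miss rate-FPPI curve is sketched below (our own conventions, not the official evaluation code):
\begin{verbatim}
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """fppi and miss_rate: arrays describing the curve, fppi in ascending order."""
    refs = np.logspace(-2.0, 0.0, num=9)
    samples = []
    for r in refs:
        idx = np.where(fppi <= r)[0]
        samples.append(miss_rate[idx[-1]] if len(idx) else miss_rate[0])
    return float(np.exp(np.mean(np.log(np.maximum(samples, 1e-10)))))
\end{verbatim}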
\subsection{Implementation Details}
In this paper, the deep learning framework PyTorch 1.7 is adopted. The experimental platform is the Ubuntu 18.04 operating system with a single NVIDIA GeForce RTX 2080Ti GPU. The Stochastic Gradient Descent (SGD) algorithm is used to optimize the network during model training, with a momentum of 0.9, a weight decay of $5 \times 10^{-4}$ , and an initial learning rate of $1 \times 10^{-3}$ . The model is trained for 5 epochs with a batch size of 4, and the learning rate decays to $1 \times 10^{-4}$ after the 3rd epoch.
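In PyTorch terms, the schedule above corresponds to a configuration like the following sketch (the model here is a placeholder; the actual detector is the ResNet-101 Faster RCNN described earlier):
\begin{verbatim}
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3)   # placeholder for the actual detector
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[3], gamma=0.1)
for epoch in range(5):
    # ... one pass over the training set with batch size 4 ...
    scheduler.step()
\end{verbatim}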
\subsection{Experimental Results and Analysis}
\subsubsection{Construction of the Baseline}
This work constructs a baseline architecture based on the ResNet-101 backbone network and the Faster RCNN detection head. Simple fusion (feature cascade, element by element addition and element by element multiplication) of the Color and Thermal features output by the backbone network is carried out in three sets of experiments, and the fused feature is used as the input of the detection head. To ensure the efficiency of the constructed baseline, the sanitized annotations are employed to train and test the baseline. The test results are shown in Table \ref{tab:1}. The MR values using feature cascade, element by element addition and element by element multiplication in the all-weather scene are 14.62$\%$, 13.84$\%$ and 14.26$\%$, respectively. Comparing these three results shows that element by element addition delivers the best performance. Therefore, we adopt element by element feature addition as the baseline fusion method.
\begin{table}[!ht]
\centering
\begin{spacing}{1.5}
\caption{Experimental results of baseline under different fusion modes}\label{tab:1}
\begin{tabular}{cc} \hline
Fusion modes & All-weather \\ \hline
feature cascade & 14.62 \\
element by element multiplication & 14.26 \\
element by element addition & \textbf{\scriptsize 13.84} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
\subsubsection{Performance comparison of different methods}
The performance of our method is compared with several other state-of-the-art methods, including the hand-crafted feature method ACF+T+THOG \citep{hwang2015} and deep learning-based methods, e.g., Halfway Fusion \citep{liu2016}, CMT\_CNN\citep{xu2017}, CIAN\citep{zhang2019b}, IAF R-CNN\citep{li2019b}, IATDNN+IAMSS\citep{guan2019}, CS-RCNN \citep{zhang2020a}, IT-MN \citep{zhuang2022}, and DCRD \citep{liu2022}. Here, the model is trained using 7601 pairs of Color and Thermal images from SA and 8892 pairs of Color and Thermal images from PA, respectively. Besides, 2252 pairs of Color and Thermal images from the test set are used for model testing. Table \ref{tab:2} lists the experimental results.
Table \ref{tab:2} shows that when the model is trained with SA, the MRs of the proposed method are 10.71$\%$, 13.09$\%$ and 8.45$\%$ for the all-weather, daytime and nighttime scenes, respectively; compared with CS-RCNN, the best-performing compared method, they are 0.72$\%$ and 0.37$\%$ lower for the all-weather and nighttime scenes and 1.23$\%$ higher for the daytime scene. The PA (Color) and PA (Thermal) columns in Table \ref{tab:2} correspond to training the model with the Color annotations and Thermal annotations of the paired annotations PA, respectively. As can be seen from Table \ref{tab:2}, the MRs of our method in the all-weather scene are 11.11$\%$ and 10.98$\%$ when using the Color and Thermal annotations, which are 2.53$\%$ and 3.70$\%$ lower, respectively, than those of the best-performing compared methods. In addition, by analyzing the experimental results on the two improved versions of the annotations, it can be found that pedestrian detection results differ when different annotations are used, indicating the importance of the annotations.
\begin{table}[!ht]
\centering
\setlength{\tabcolsep}{1mm}
\begin{spacing}{1.5}
\caption{MRs of different methods on KAIST datasets}\label{tab:2}
\begin{tabular}{cccccccccc}\hline
\multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multicolumn{3}{c}{SA} & \multicolumn{3}{c}{PA(Color)} & \multicolumn{3}{c}{PA(Thermal)} \\ \cline{2-10}
& All-weather & Day & Night & All-weather & Day & Night & All-weather & Day & Night \\ \hline
ACF+T+THOG& 41.65 & 39.18 & 48.29 & 41.74 & 39.30 & 49.52 & 41.36 & 38.74 & 48.30\\
Halfway Fusion& 25.75 & 24.88 & 26.59 & 25.10 & 24.29 & 26.12 & 25.51 & 25.20 & 24.90\\
CMT\_CNN& 36.83 & 34.56 & 41.82 & 36.25 & 34.12 & 41.21 & -- & -- & --\\
IAF R-CNN& 15.73 & 14.55 & 18.26 & 15.65 & 14.95 & 18.11 & 16.00 & 15.22 & 17.56\\
IATDNN+IAMSS& 14.95 & 14.67 & 15.72 & 15.14 & 14.82 & 15.87 & 15.08 & \textcolor{blue}{15.02} & 15.20\\
CIAN& 14.12 & 14.77 & 11.13 & 14.64& 15.13 & \textcolor{blue}{12.43} & \textcolor{blue}{14.68} & 16.21 & \textcolor{blue}{9.88}\\
CS-RCNN & \textcolor{blue}{11.43} & \textcolor{red}{11.86} & \textcolor{blue}{8.82} & -- & -- & -- & -- & -- & -- \\
IT-MN& 14.19 & 14.30 & 13.98 & -- & -- & -- & -- & -- & -- \\
DCRD& 12.58 & 13.12 & 11.65 & \textcolor{blue}{13.64} & \textcolor{blue}{13.15} & 13.98 & -- & -- & --\\
Ours& \textcolor{red}{10.71} & \textcolor{blue}{13.09} & \textcolor{red}{8.45} & \textcolor{red}{11.11} & \textcolor{red}{12.85} & \textcolor{red}{8.77} & \textcolor{red}{10.98} & \textcolor{red}{13.07} & \textcolor{red}{8.53} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
\subsubsection{Analysis of Ablation Experiments}
\noindent(1) Complementarity and importance of Color and Thermal features
\begin{table}[!ht]
\centering
\begin{spacing}{1.5}
\caption{MRs of different modal inputs}\label{tab:3}
\begin{tabular}{cccc} \hline
Input & All-weather& Day & Night \\ \hline
dual-stream Color images & 25.37& 19.31 & 31.18\\
dual-stream Thermal images & 17.55 & 22.81 & 12.61 \\
Color images + Thermal images & \textbf{\scriptsize 13.84} & \textbf{\scriptsize 15.35} & \textbf{\scriptsize 12.48} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
This section compares the effect of different input sources on pedestrian detection performance. To eliminate the impact of the proposed modules on detection performance, three sets of experiments are conducted on the baseline: 1) the combination of Color and Thermal images as the input source (the inputs of the two branches of the backbone network are Color and Thermal images, respectively); 2) dual-stream Color images as the input source (Color images replace Thermal images, i.e., the backbone network input is Color images only); 3) dual-stream Thermal images as the input source (Thermal images replace Color images, i.e., the backbone network input is Thermal images only). The training set here is the 7601 image pairs of SA, and the test set is the 2252 pairs of Color and Thermal images. Table \ref{tab:3} shows the MRs of these three input sources for the all-weather, daytime, and nighttime scenes. It can be seen from Table \ref{tab:3} that the MRs obtained using Color and Thermal images as input to the network are 13.84$\%$, 15.35$\%$ and 12.48$\%$ for the all-weather, daytime and nighttime scenes, respectively, which are 11.53$\%$, 3.96$\%$, 18.70$\%$ and 3.71$\%$, 7.46$\%$, 0.13$\%$ lower than using Color images or Thermal images alone. The experimental results prove that the detection network combining Color and Thermal features delivers better performance, indicating that both Color and Thermal features are important for pedestrian detection.
Figure \ref{fig:4} shows the Miss Rate-FPPI curves of the detection results for these three input sources in the all-weather, daytime, and nighttime scenes (blue, red and green curves indicate dual-stream Thermal images, dual-stream Color images, and Color and Thermal images, respectively). By analyzing the Miss Rate-FPPI curve trend and combining with the experimental data in Table \ref{tab:3}, it can be seen that the detection effect of Color images as the input source is better than that of Thermal images in the daytime scene while the result is the opposite for the night scene, and the detection effect of Color and Thermal images combined as the input source is better than that of single-modal input in both daytime and nighttime. It shows that there are complementary features between Color and Thermal modalities, and the fusion of the two modal features can improve the pedestrian detection performance.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo5.eps}
\end{center}
\caption{The Miss Rate-FPPI curves of the detection results of the three groups of input sources in the All-weather, Daytime and Nighttime scenes (From left to right, All-weather, Daytime and Nighttime Miss Rate-FPPI curves are shown in the figure)}\label{fig:4}
\end{figure}
\noindent(2) Ablation experiments
In this section, ablation experiments are conducted to demonstrate the effectiveness of the proposed cascaded information enhancement module (CIEM) and cross-modal attention feature fusion module (CAFFM). Here, the 7601 image pairs of SA are used to train the model, and the 2252 pairs of Color and Thermal images in the test set are used to test the model.
Effectiveness of CIEM: CIEM is used to enhance the pedestrian features in Color and Thermal images to reduce the interference from the background. The experimental results are shown in Table \ref{tab:4}. The MRs of baseline on SA are 13.84$\%$, 15.35$\%$ and 12.48$\%$ for all-weather, daytime and nighttime scenes, respectively. When CIEM is additionally employed, the MRs are 11.21$\%$, 13.15$\%$ and 9.07$\%$ for all-weather, daytime and nighttime scenes, respectively, which are reduced by 2.63$\%$, 2.20$\%$ and 3.41$\%$ compared to the baseline, respectively. It is shown that the proposed CIEM effectively enhances the pedestrian features in both modalities, reduces the interference of background, and improves the pedestrian detection performance.
\begin{table}[!ht]
\centering
\begin{spacing}{1.5}
\caption{MRs for ablation studies of the proposed method on SA}\label{tab:4}
\begin{tabular}{cccc} \hline
Methods & All-weather& Day & Night \\ \hline
baseline & 13.84 & 15.35 & 12.48\\
baseline + CIEM & 11.21 & 13.15 & 9.07 \\
baseline + CAFFM & 11.68 & 13.81 & 9.50 \\
Overall model & \textbf{\scriptsize 10.71} & \textbf{\scriptsize 13.09} & \textbf{\scriptsize 8.45} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
Validity of CAFFM: CAFFM is used to effectively fuse Color and Thermal features. The experimental results are shown in Table \ref{tab:4}. On SA, when the baseline is used with CAFFM, the MRs are 11.68$\%$, 13.81$\%$ and 9.50$\%$ in the all-weather, daytime and nighttime scenes, respectively, which are reduced by 2.16$\%$, 1.54$\%$ and 2.98$\%$ compared to the baseline. This shows that the proposed CAFFM effectively fuses the two modal features to achieve robust multispectral pedestrian detection.
Overall effectiveness: The proposed CIEM and CAFFM are additionally used on the basis of baseline. Experimental results show a reduction of 3.13$\%$, 2.26$\%$ and 4.03$\%$ in MRs for all-weather, daytime and nighttime scenes, respectively, compared to the baseline, indicating the overall effectiveness of the proposed method. A closer look reveals that with additional employment of CIEM and CAFFM alone, MRs are decreased by 2.63$\%$ and 2.16$\%$, respectively, in the all-weather scene, but the MR of the overall model is reduced by 3.13$\%$. It demonstrates that there is some orthogonal complementarity in the role of the proposed two modules.
Figure \ref{fig:5} shows the Miss Rate-FPPI curves for CIEM and CAFFM ablation studies in all-weather, daytime and nighttime scenes (blue, red, orange and green curves represent baseline, baseline + CIEM, baseline + CAFFM and overall model, respectively). It is clear that the curve trends of each module and the overall model are both lower than that of the baseline, which further proves the effectiveness of the method presented in this work.
Furthermore, in order to qualitatively analyze the effectiveness of the proposed CIEM and CAFFM, four pairs of Color and Thermal images (two pairs taken in the daytime and two pairs taken at night) are selected from the test set for testing. The pedestrian detection results of the baseline and of each proposed module are shown in Figure \ref{fig:6}. The first row shows the labeled boxes on the Color and Thermal images, and the second to fifth rows show the labeled and predicted boxes for the baseline, baseline + CIEM, baseline + CAFFM, and the overall model, with green and red boxes representing the labeled and predicted boxes, respectively. It can be seen that the proposed method addresses the problem of missed pedestrian detections in complex environments and produces more accurate detection boxes. For example, in the second row, pedestrians are missed in the first, third, and fourth image pairs of the baseline results; with CIEM and CAFFM added to the baseline these missed detections are recovered, and the overall model produces more accurate pedestrian detection boxes.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo6.eps}
\end{center}
\caption{The Miss Rate-FPPI curves of CIEM and CAFFM ablation studies in All-weather, Daytime and Nighttime scenes (From left to right, All-weather, Daytime and Nighttime Miss Rate-FPPI curves are shown in the figure)}\label{fig:5}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo7.eps}
\end{center}
\caption{In this paper, each module and baseline pedestrian detection results (The first row is the visualization results of labeled boxes for Color and Thermal images, and the second to the fifth rows are the visualization results of the labeled and prediction boxes for baseline, baseline + CIEM, baseline + CAFFM and the overall model pedestrian detection with the green and red boxes representing the labeled and prediction boxes, respectively.)}\label{fig:6}
\end{figure}
\section{CONCLUSION}
In this paper, we propose a multispectral pedestrian detection algorithm comprising a cascaded information enhancement module and a cross-modal attention feature fusion module. The proposed method improves the accuracy of pedestrian detection in multispectral images (Color and Thermal images) by effectively fusing the features of the two modalities and augmenting the pedestrian features. Specifically, on the one hand, a cascaded information enhancement module (CIEM) is designed to enhance the single-modal features so as to enrich the pedestrian features and suppress interference from background noise. On the other hand, unlike previous methods that simply splice Color and Thermal features directly, a cross-modal attention feature fusion module (CAFFM) is introduced to mine the features of both Color and Thermal modalities so that they complement each other, and the complementarily enhanced modal features are then used to construct global features. Extensive experiments have been conducted on two improved versions of the annotations of the public KAIST dataset. The experimental results show that the proposed method obtains more comprehensive pedestrian features and improves the accuracy of multispectral pedestrian detection.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Author Contributions}
YY was responsible for the overall design, the experiments, and the writing of the paper. WK guided the design and revision of the paper. XK guided the experiments and the writing.
\section*{Funding}
This work was supported by the National Natural Science Foundation of China (No. 52107017) and the Fundamental Research Fund of the Science and Technology Department of Yunnan Province (No. 202201AU070172).
\bibliographystyle{Frontiers-Vancouver}
|
{"arxiv_id":"2302.08582","language":"en","timestamp":"2023-02-20T02:02:07","url":"https://arxiv.org/(...TRUNCATED) | "\\section{Introduction}\\label{sec:intro}\n\n\\begin{figure}\n \\centering\n \n \\begin{sub(...TRUNCATED) |
{"arxiv_id":"2302.08606","language":"en","timestamp":"2023-02-20T02:03:02","url":"https://arxiv.org/(...TRUNCATED) | "\\section{Introduction}\nThe last two decades have witnessed an explosive development in deep learn(...TRUNCATED) |
{"arxiv_id":"2302.08682","language":"en","timestamp":"2023-02-20T02:06:28","url":"https://arxiv.org/(...TRUNCATED) | "\\section{Introduction}\nConvolutional Neural Network (CNN) is an important component in computer v(...TRUNCATED) |
{"arxiv_id":"2302.08681","language":"en","timestamp":"2023-02-20T02:06:27","url":"https://arxiv.org/(...TRUNCATED) | "\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{sections/introduction.te(...TRUNCATED) |
{"arxiv_id":"2302.08643","language":"en","timestamp":"2023-02-28T02:05:57","url":"https://arxiv.org/(...TRUNCATED) | "\n\\section{Introduction} \\label{sec:intro}\n\nTime series modeling has been a quest in a wide ran(...TRUNCATED) |
{"arxiv_id":"2302.08655","language":"en","timestamp":"2023-02-20T02:05:31","url":"https://arxiv.org/(...TRUNCATED) | "\\section{Introduction}\nQuantum entanglement is a key resource in quantum information with wide ap(...TRUNCATED) |
{"arxiv_id":"2302.08608","language":"en","timestamp":"2023-02-20T02:03:03","url":"https://arxiv.org/(...TRUNCATED) | "\\section{Introduction}\nIn this paper, we build on an existing body of work that examines the extr(...TRUNCATED) |
{"arxiv_id":"2302.08607","language":"en","timestamp":"2023-02-20T02:03:03","url":"https://arxiv.org/(...TRUNCATED) | "\\section{Introduction}\nAccurate and efficient speech recognition models are key to realizing auto(...TRUNCATED) |
ArXiv papers from RedPajama-Data originally published in February 2023
We collect the ArXiv papers released shortly before the training data cutoff date for the OpenLLaMA models.
The OpenLLaMA models (V1) were trained on RedPajama data. The last batch of ArXiv papers included in that dataset consists of papers published in February 2023. To collect members close to the cutoff date, we gather the 13,155 papers published in "2302" that are part of the training dataset. We process the raw LaTeX files using this script.
This dataset has been used as the source of 'member' documents to develop (document-level) MIAs against LLMs, using data collected shortly before (members) and after (non-members) the training cutoff date of the target model (the suite of OpenLLaMA models). For the non-members in the RDD setup, we refer to our GitHub repo. For more details and results, see the Regression Discontinuity Design (RDD) section in the paper "SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How to Fix It)".