% !TEX encoding = UTF-8 Unicode
\documentclass{application}
\usepackage{CJKutf8}
\usepackage{hyperref}

\title{Competition: Our submission}
\author{John Doe, Peter Fox}
\date{\today}
	
% big font for sections
\usepackage{sectsty}
\sectionfont{\LARGE}

\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{caption}
\usepackage{subcaption}

% \begin{comment} ... \end{comment}
\usepackage{verbatim}

\setlength{\parskip}{0pt}

\makeatletter
\renewcommand{\paragraph}{%
  \@startsection{paragraph}{4}%
    {\z@}{1.25ex \@plus 1ex \@minus .2ex}{-1em}%
    {\normalfont\normalsize\bfseries}%
}
\makeatother


\begin{document}
\begin{CJK*}{UTF8}{gbsn}
\newpage

\maketitle

\newpage




%=================
\section*{Objectives}

\subsection*{Motivation}
We propose to build a traversable-area recognition network that segments the traversable region of the road out of an image.

\subsection*{Approach}
Use a semantic segmentation network to segment and classify the objects in the current image, then query a traversability database for the traversability of each classified object class.
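The approach above amounts to a per-pixel lookup from predicted class to traversability. A minimal Python sketch of that lookup, where the class ids and scores are illustrative assumptions rather than values from any real database:

```python
# Hypothetical traversability database: semantic class id -> score in [0, 1].
# The ids, names, and scores below are illustrative assumptions.
TRAVERSABILITY = {
    0: 1.0,   # road
    1: 0.6,   # sidewalk: traversable with caution
    2: 0.0,   # vehicle
    3: 0.0,   # pedestrian
}

def traversability_map(label_map, default=0.0):
    """Turn a 2D grid of class ids into per-pixel traversability scores;
    classes missing from the database fall back to non-traversable."""
    return [[TRAVERSABILITY.get(c, default) for c in row] for row in label_map]

labels = [[0, 0, 2],
          [1, 0, 3]]
scores = traversability_map(labels)
# scores == [[1.0, 1.0, 0.0], [0.6, 1.0, 0.0]]
```

In the full pipeline the label map would come from the segmentation network and the table from the traversability database described above.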

\subsection*{Related Papers}
The initial demo will mainly build on the following papers; several of them have open-source implementations.
\subsubsection*{MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving}
While most approaches to semantic reasoning have focused on improving 
performance, in this paper we argue that computational times are 
very important in order to enable real time applications such as 
autonomous driving. Towards this goal, we present an approach to 
joint classification, detection and semantic segmentation via a 
unified architecture where the encoder is shared amongst the three 
tasks. Our approach is very simple, can be trained end-to-end 
and performs extremely well in the challenging KITTI dataset, 
outperforming the state-of-the-art in the road segmentation task. Our 
approach is also very efficient, taking less than 100 ms to perform all tasks.

Published on arXiv, this paper uses the KITTI road benchmark as its test set. Its focus is a network that satisfies the real-time requirements of autonomous driving. To this end it proposes a unified recognition framework that performs classification, detection, and semantic segmentation as one joint task. The method achieves good accuracy on the benchmark and is highly efficient, completing classification, detection, and segmentation together in under 100\,ms. The architecture is an encoder--decoder in which a single encoder is shared by the three tasks, while each task trains and maintains its own decoder; the encoder is the output of the first 13 layers of VGG-16 (a $39\times12$ feature map), shared among the three task decoders. Built this way, the model can be trained end to end, performs very well on the challenging KITTI dataset, and surpasses the state of the art on the road segmentation task.
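The shared-encoder design described above can be sketched as one feature computation feeding three task heads. The toy ``encoder'' and heads below are illustrative stand-ins, not the paper's VGG-16 trunk; only the structure (compute features once, decode three ways) mirrors MultiNet:

```python
import numpy as np

def encoder(image):
    # Stand-in for the shared VGG-16 trunk: downsample by 8 by block-averaging.
    h, w, _ = image.shape
    return image.reshape(h // 8, 8, w // 8, 8, -1).mean(axis=(1, 3, 4))

def classify(feat):
    # Classification head: a single global score (toy).
    return float(feat.mean())

def detect(feat):
    # Detection head: coarse cell-wise objectness (toy).
    return (feat > feat.mean()).astype(np.uint8)

def segment(feat):
    # Segmentation head: upsample the coarse mask back to input resolution (toy).
    return np.kron(feat > feat.mean(), np.ones((8, 8), dtype=np.uint8))

img = np.random.rand(96, 192, 3)
feat = encoder(img)          # computed once, shared by all three heads
cls, det, seg = classify(feat), detect(feat), segment(feat)
```

The point of the structure is the cost model: the expensive trunk runs once per frame, so adding a task costs only its decoder.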

\begin{figure}[h]
        \centering
        \begin{subfigure}[b]{0.5\textwidth}
                \centering
	  \includegraphics[scale=0.3]{images/multiNet_1}
                %\caption{Visualization of the segmentation output}
                \label{fig:multiNet_1}
        \end{subfigure}%
        ~ 
        \begin{subfigure}[b]{0.5\textwidth}
                \centering
                \includegraphics[scale=0.3]{images/multiNet_2}
                %\caption{Visualization of the segmentation output}
                \label{fig:multiNet_2}
        \end{subfigure}
\caption*{Visualization of the segmentation output: red-blue plot}
\end{figure}

\begin{figure}[h]
        \centering
        \begin{subfigure}[b]{0.5\textwidth}
                \centering
	  \includegraphics[scale=0.3]{images/multiNet_3}
                %\caption{Visualization of the segmentation output}
                \label{fig:multiNet_3}
        \end{subfigure}%
        ~ 
        \begin{subfigure}[b]{0.5\textwidth}
                \centering
                \includegraphics[scale=0.3]{images/multiNet_4}
                %\caption{Visualization of the segmentation output}
                \label{fig:multiNet_4}
        \end{subfigure}
        \caption*{Visualization of the segmentation output: green plot}
\end{figure}

\begin{figure}[h]
	\centering
	\includegraphics[scale=0.35]{images/multiNet_arch}
\caption*{MultiNet architecture}
\end{figure}

\begin{figure}[h]
	\centering
	\includegraphics[scale=0.15]{images/multiNet_structure}
\caption*{MultiNet network details}
\end{figure}

\subsubsection*{SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation}
SegNet was primarily motivated by scene understanding applications. 
Hence, it is designed to be efficient both in terms of memory and
computational time during inference. It is also significantly smaller 
in the number of trainable parameters than other competing
architectures and can be trained end-to-end using stochastic 
gradient descent. We also performed a controlled benchmark of 
SegNet and other architectures on both road scenes and SUN 
RGB-D indoor scene segmentation tasks. These quantitative assessments
show that SegNet provides good performance with competitive 
inference time and most efficient inference memory-wise as compared
to other architectures.

SegNet is a deep semantic segmentation network proposed at Cambridge for autonomous driving and intelligent robotics; the code is open source and built on the Caffe framework. SegNet is based on FCN and derives its segmentation network by modifying VGG-16. It comes in two variants, the standard and the Bayesian version, and the authors additionally provide a shallower ``basic'' version. The example images show that SegNet not only segments small objects but also produces smooth segmentation results. The work uses the CamVid dataset, which we can use to enlarge our own semantic segmentation dataset.
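A distinctive piece of SegNet's design is that the decoder upsamples using the max-pooling indices saved by the encoder, rather than learned deconvolution. A minimal numpy sketch of that pool/unpool pair for non-overlapping $2\times2$ windows on a single channel:

```python
import numpy as np

def max_pool_2x2(x):
    """Return pooled values and, per 2x2 block, the flat index of the max
    (the indices SegNet's encoder stores for its decoder)."""
    h, w = x.shape
    blocks = x.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
    idx = blocks.argmax(axis=1)
    pooled = blocks[np.arange(len(blocks)), idx].reshape(h // 2, w // 2)
    return pooled, idx

def max_unpool_2x2(pooled, idx):
    """Scatter pooled values back to their original positions; zeros elsewhere."""
    hh, ww = pooled.shape
    blocks = np.zeros((hh * ww, 4))
    blocks[np.arange(hh * ww), idx] = pooled.ravel()
    return blocks.reshape(hh, ww, 2, 2).transpose(0, 2, 1, 3).reshape(hh * 2, ww * 2)

x = np.array([[1., 5., 2., 0.],
              [3., 4., 8., 6.]])
p, i = max_pool_2x2(x)     # p == [[5., 8.]]
y = max_unpool_2x2(p, i)   # 5 and 8 return to their original cells
```

Because only argmax positions are stored, the decoder needs far fewer parameters than one that learns its upsampling, which is part of why SegNet is memory-efficient at inference.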

One of the main contributions of this paper is our analysis
of the SegNet decoding technique and the widely used Fully
Convolutional Network (FCN). This is in order to convey
the practical trade-offs involved in designing segmentation architectures. 
Most recent deep architectures for segmentation have
identical encoder networks, i.e VGG16, but differ in the form
of the decoder network, training and inference. Another common
feature is they have trainable parameters in the order of hundreds
of millions and thus encounter difficulties in performing end-to-end 
training. The difficulty of training these networks has led
to multi-stage training, appending networks to a pre-trained
architecture such as FCN, use of supporting aids such as
region proposals for inference, disjoint training of classification
and segmentation networks and use of additional training data
for pre-training or for full training. In addition,
performance boosting post-processing techniques have also
been popular. Although all these factors improve performance on
challenging benchmarks, it is unfortunately difficult from
their quantitative results to disentangle the key design factors
necessary to achieve good performance. We therefore analysed
the decoding process used in some of these approaches,
and reveal their pros and cons.

The passage above surveys the currently popular approaches to semantic segmentation. The authors then argue that these methods obscure the factors that actually determine a network's efficiency; they therefore propose SegNet and analyse its pros and cons.
\begin{figure}[h]
        \centering
        \begin{subfigure}[b]{0.45\textwidth}
                \centering
	  \includegraphics[width=200pt]{images/segNet_1}
                \caption{}
                \label{fig:segNet_1}
        \end{subfigure}%
        \hfill
        \begin{subfigure}[b]{0.45\textwidth}
                \centering
                \includegraphics[width=200pt]{images/segNet_3}
                \caption{}
                \label{fig:segNet_3}
        \end{subfigure}
        \hfill
        \begin{subfigure}[b]{0.45\textwidth}
                \centering
                \includegraphics[width=200pt]{images/segNet_2}
                \caption{}
                \label{fig:segNet_2}
        \end{subfigure}
        \hfill
        \begin{subfigure}[b]{0.45\textwidth}
                \centering
                \includegraphics[width=200pt]{images/segNet_4}
                \caption{}
                \label{fig:segNet_4}
        \end{subfigure}
\caption*{SegNet example images}
\end{figure}

\begin{figure}[h]
	\centering
	\includegraphics[scale=0.5]{images/segNet_arch}
\caption*{SegNet architecture}
\end{figure}

\subsubsection*{FCNs for Free-Space Detection with Self-Supervised Online Training}
Recently, vision-based Advanced Driver Assist Systems have 
gained broad interest. In this work, we investigate 
free-space detection, for which we propose to employ 
a Fully Convolutional Network (FCN). We show that this 
FCN can be trained in a self-supervised manner and 
achieve similar results compared to training on 
manually annotated data, thereby reducing the 
need for large manually annotated training sets. 
To this end, our self-supervised training relies 
on a stereo-vision disparity system, to automatically 
generate (weak) training labels for the color-based FCN. 
Additionally, our self-supervised training facilitates 
online training of the FCN instead of offline. 
Consequently, given that the applied FCN is relatively 
small, the free-space analysis becomes highly adaptive 
to any traffic scene that the vehicle encounters. 
We have validated our algorithm using publicly 
available data and on a new challenging benchmark dataset. 
Experiments show that the online training boosts 
performance with 5\% over offline training, both for Fmax and AP.

The main highlight of this paper is training the network online with weak labels: a classical method generates weak traversable-area labels, and the FCN's generalisation ability turns them into a traversable-area recognition network. This removes semantic segmentation's heavy manual-annotation burden and still achieves good recognition when annotated data is unavailable. We can exploit the same idea to build such a network: first obtain a traversable-area network from labelled data; then, in an unknown environment where some classes lose accuracy because the original dataset under-represents them, train online on weak-label data and combine the results into a single recognition network.
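The weak-label generation step can be sketched with a toy stereo cue: fit a ground-plane disparity model and call a pixel free space when its disparity agrees with the model. The linear per-row fit and the tolerance below are illustrative assumptions, not the paper's actual disparity system:

```python
import numpy as np

def weak_free_space_labels(disparity, tol=2.0):
    """Label a pixel as free space (1) when its disparity matches a linear
    ground-plane model fitted to the per-row median disparity."""
    h, w = disparity.shape
    rows = np.arange(h, dtype=float)
    med = np.median(disparity, axis=1)          # dominant (ground) disparity per row
    a, b = np.polyfit(rows, med, 1)             # ground plane: d = a*row + b
    expected = (a * rows + b)[:, None]
    return (np.abs(disparity - expected) < tol).astype(np.uint8)

# Synthetic scene: a ground-plane disparity ramp plus one obstacle.
h, w = 40, 60
gt = np.tile(np.linspace(1, 20, h)[:, None], (1, w))   # ground ramp
gt[10:25, 20:30] = 25.0                                # near obstacle
labels = weak_free_space_labels(gt)
```

In the paper these noisy labels then supervise the FCN online, frame by frame, which is what makes the free-space analysis adaptive to new scenes.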

\begin{figure}[h]
	\centering
	\includegraphics[scale=0.5]{images/onlineNet}
\caption*{onlineNet example images}
\end{figure}

\begin{figure}[h]
	\centering
	\includegraphics[scale=1]{images/onlineNet_arch}
\caption*{onlineNet architecture}
\end{figure}

\subsubsection*{Fully Convolutional Networks for Semantic Segmentation}
Convolutional networks are driving advances in recognition. 
Convnets are not only improving for whole-image classification, 
but also making progress on local tasks with structured output. 
These include advances in bounding box object detection, 
part and keypoint prediction, and local correspondence.
The natural next step in the progression from coarse to 
fine inference is to make a prediction at every pixel. 
Prior approaches have used convnets for semantic segmentation, 
in which each pixel is labeled with the class of its enclosing 
object or region, but with shortcomings that this work addresses.

FCN is the basic architecture underlying MultiNet and SegNet and is trained as a core component of those networks. Its many strengths make it a stable, efficient semantic segmentation module that is widely reused across networks. FCN relies on three techniques: convolutionalization, upsampling, and skip connections. The network built from these is initialised from a model pretrained as AlexNet, VGG-16, or GoogLeNet and then fine-tuned. Training uses whole images rather than patchwise sampling; the experiments show that whole-image training is already effective and efficient. The class-score convolution layer is zero-initialised, as random initialisation offers no advantage in either performance or convergence.
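Of the three techniques, the skip connection is the structural one: it fuses a coarse, deep score map with a finer, shallower one after upsampling (the FCN-16s pattern). In the toy sketch below, nearest-neighbour upsampling stands in for FCN's learned deconvolution:

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour stand-in for FCN's learned 2x deconvolution.
    return x.repeat(2, axis=0).repeat(2, axis=1)

coarse = np.random.rand(4, 4)   # scores from the deepest layer (toy stride 32)
pool4 = np.random.rand(8, 8)    # scores from an earlier layer (toy stride 16)

# Skip connection: upsample the coarse prediction and add the finer one.
fused = upsample2x(coarse) + pool4
```

The fused map keeps the deep layers' semantics while recovering spatial detail from the earlier layer, which is why FCN's skip variants segment boundaries more sharply than the stride-32 output alone.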

\begin{figure}[h]
	\centering
	\includegraphics[scale=1]{images/fcn}
\caption*{FCN example images}
\end{figure}

\begin{figure}[h]
	\centering
	\includegraphics[scale=1]{images/fcn_arch}
\caption*{FCN architecture}
\end{figure}



%=================
\section*{Implementation Details}
Training the network requires a large amount of labelled data, so the first step is to collect it. The papers above provide labelled data for download, which we will use to enlarge our experimental dataset; the main source is the KittiRoad
\footnote{\url{http://www.cvlibs.net/datasets/kitti/eval_road.php}} benchmark.
With KittiRoad serving as the augmentation set, the best way to raise classification accuracy in a concrete deployment scenario is to obtain labelled data from a similar environment. Two approaches are worth considering. In the first, features are trained on KittiRoad with supervision, and unsupervised training on new, unlabelled scene-specific data then yields the final traversable-area segmentation features. In the second, a lower-accuracy method first classifies the unlabelled data to produce weak labels; since an FCN still classifies well when trained on such noisy labels, the weak labels are fed directly into the FCN as training labels to obtain the final traversable-area network. The difficulty of the first approach is finding a suitable unsupervised training method; we have not surveyed that area in detail, but such methods generally suffer from low accuracy and hard-to-implement clustering, so we will only pursue it if a concrete survey turns up a good solution. A fully unsupervised scheme with comparable accuracy would of course be the ideal outcome. The difficulty of the second approach is the classifier that produces the weak labels: the paper above captures the scene with a stereo camera and classifies traversable areas with a classical disparity-based method, then feeds the result to the FCN as weak labels. What we can explore here is whether a monocular camera or a LiDAR can replace the stereo rig, and whether a suitable classical disparity-based traversability classifier can be found.

Once the labelled data is collected, convert it to the same format as the VOC challenge labels, then adjust a few SegNet parameters and train the network. Keep the best-performing trained weights and measure the final test accuracy to obtain an initial, usable traversable-area segmentation network. Once that network works, try using a laser sensor to measure the traversable area around the vehicle, producing weak labels similar to those obtained with the stereo method; convert them to the same unified semantic-label format, use the pretrained feature set as initialisation for transfer learning, and test whether the final accuracy improves. While improving accuracy, we must also watch the network's inference time: the network will need to be modified, its structure pruned where appropriate, and its trainable parameters reduced, so that inference speeds up without accuracy dropping too much.
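The format-unification step amounts to remapping each dataset's label encoding onto one class-index scheme (VOC-style indexed masks). A sketch in which the colors are assumptions for illustration, not the actual KITTI road palette:

```python
# Hypothetical color-to-class mapping; the RGB values are illustrative
# assumptions, not the real KITTI road ground-truth palette.
COLOR_TO_CLASS = {
    (255, 0, 255): 1,   # road (assumed color)
    (255, 0, 0):   0,   # non-road (assumed color)
}

def to_class_indices(rgb_rows, ignore=255):
    """Convert an image given as rows of (r, g, b) tuples into class indices;
    unknown colors map to an 'ignore' index, as VOC does for void pixels."""
    return [[COLOR_TO_CLASS.get(px, ignore) for px in row] for row in rgb_rows]

img = [[(255, 0, 255), (255, 0, 0)],
       [(255, 0, 255), (0, 255, 0)]]
mask = to_class_indices(img)
# mask == [[1, 0], [1, 255]]
```

An automated script of this shape, one color table per source dataset, would let KittiRoad, CamVid, and our own weak labels all feed the same SegNet training pipeline.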

Training the network demands substantial compute, so a capable GPU is needed; we recommend the NVIDIA Titan X with 12\,GB of memory, which is also the minimum configuration used to train the networks in the papers above. The labelled data must be unified into the standard label format: we will provide an automated conversion script, and, for manually annotated datasets, a simple tool for fast labelling that emits the standard format directly.



\end{CJK*}
\end{document}
