\section*{\large The application}
\begin{normalsize}

As in lab 2, our program consists of three nodes: robot, navigation and map. Figure \ref{services} shows the client-server relations. The robot node can be thought of as the actuator and the navigation node as the brain. The robot's knowledge of the world is displayed step by step by the map node. Thanks to the modular design, the only differences between this lab and lab 2 lie in the robot node.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth,trim=0cm 10cm 0cm 1cm, clip=true]{./images/lab2services.pdf}
\caption{Client-server relations.}
\label{services}
\end{figure}

The robot node instantiates an object of the VisualSensor class, which implements the frame pre-processing and our computer vision algorithm.

\section*{\large Frame pre-processing}

We list the frame pre-processing steps:

\begin{enumerate}
	\item acquire an image;
	\item flip it;
	\item undistort it (see Figure \ref{fig:undistorted_img});
	\item convert it to grayscale;
	\item perform a basic thresholding operation to highlight the white lines (see Figure \ref{fig:bw_undistort_frame}).
\end{enumerate}

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{./images/undistorted_img.png}
\caption{Undistorted perspective image (left); part of the undistorted panorama (right). \label{fig:undistorted_img}}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{./images/bw_undistort_frame.png}
\caption{The acquired frame after pre-processing. \label{fig:bw_undistort_frame}}
\end{figure}

Following D. Scaramuzza's instructions, frames are undistorted by using two functions:
\begin{enumerate}
	\item the \emph{create\_perspecive\_undistortion\_LUT} function from the \emph{undistortFunctions} utilities: it creates a look-up table for perspective undistortion, which maps the image onto a plane perpendicular to the camera axis. It requires a zoom parameter;
	\item the \emph{cvRemap} function of OpenCV: it applies the look-up table using the specified interpolation method, bilinear interpolation in our case.
\end{enumerate}

Undistortion will be very useful to recover the robot's orientation with respect to the grid lines.

%One of the issues we encountered concerns the acquire step and the fact that the camera buffer may contain old frames. This can be solved by calling the OpenCV \emph{grab} function.

\section*{\large Changing the ``robot'' node}

Recalling the Lab2 client-server architecture in Figure \ref{services}, we can see that replacing the intensity sensor and the gyroscope with an omnidirectional camera affects only the robot node behaviour.

\medskip

To keep the design modular, we implemented a C++ class named ``VisualSensor'' which, using the camera calibration parameters, determines the robot's status on the grid carpet.

\medskip

An instance of ``VisualSensor'' gives access to three methods:

\begin{itemize}
\item \verb|bool isAligned()|, which returns true if there is a horizontal white line in front of the robot;
\item \verb|double getCenterXRatio()|, which assumes the robot is aligned with a white line and returns the X position (as a fraction of the image width) of the center of the horizontal white line in front of the robot (see Figure \ref{fig:gostraight} for reference);
\item \verb|double getCenterYRatio()|, which assumes the robot is aligned with a white line and returns the Y position (as a fraction of the image height) of the center of the horizontal white line in front of the robot.
\end{itemize}

To implement these methods, the ``VisualSensor'' class performs a four-step filtering pipeline, which we have already partially described:
\begin{enumerate}
\item acquires and undistorts the omnidirectional camera images;
\item binarizes the grayscale images with a thresholding operation;
\item recognizes the grid lines using the Hough transform;
\item recognizes the grid cells by looking for the line intersections.
\end{enumerate}

It is worth recalling that the Lab2 robot navigates the grid using the following two methods:
\begin{itemize}
\item \verb|double rotate(ros::Publisher velocity_pub, ros::Rate r,| \\
		\verb|          double rad, bool (*stop_condition)() = stdCondition)|;
\item \verb|double goStraight(ros::Publisher velocity_pub, ros::Rate r, | \\
		\verb|          double length = 0, bool (*stop_condition)() = stdCondition)|;
\end{itemize}

It is straightforward to integrate the ``VisualSensor'' into the robot node by passing the following conditions as \verb|stop_condition()|:

\begin{verbatim}
bool isAligned()
{
  return visual_sensor.isAligned();
}

bool overTheLineVisual()
{
  return visual_sensor.getCenterYRatio() > 0.4;
}
\end{verbatim}

The \verb|isAligned()| condition stops the robot's rotation when it is aligned with a cell. \\
The \verb|overTheLineVisual()| condition stops the robot's forward motion when the white line of the next cell passes under the omnidirectional camera (i.e. when the center of the next line is at 40\% of the image height). See Figures \ref{fig:rotation} and \ref{fig:gostraight} for practical examples.

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{./images/rotation_draw.png}
\caption{Filtered omnidirectional view during a rotation. \label{fig:rotation}}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{./images/gostraight.png}
\caption{Omnidirectional view when approaching a line center. \label{fig:gostraight}}
\end{figure}


\end{normalsize}

\section*{\large Conclusion}
\begin{normalsize}
We succeeded in calibrating the camera and developing the required module, reusing our code as much as possible.
\end{normalsize}