\documentclass[journal,9pt]{IEEEtran}

\ifCLASSINFOpdf
\else
\fi
\usepackage{float}
\usepackage{url}
\usepackage{lettrine}
\usepackage{graphicx}
\usepackage[cmex10]{amsmath}
\usepackage{listings}
\interdisplaylinepenalty=2500
\graphicspath{{img/}}
\usepackage[colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref}
\hyphenation{op-tical wire-less net-works}
\usepackage{newcent}

\begin{document}
\title{A Novel Image Processing Based Lens Focal Length Measurement Technique}
\author{Kaan~Ak\c{s}it and K{\i}van\c{c}~Hedili
\thanks{The authors are with the Optical Microsystems Laboratory, Ko\c{c} University, Istanbul,
34450 TURKEY (e-mail: kaksit@ku.edu.tr - mhedili@ku.edu.tr).}}
\markboth{Computer Vision and Pattern Recognition, June~2011}
{Ak\c{s}it \MakeLowercase{\textit{et al.}}: A Novel Image Processing Based Lens Focal Length Measurement Technique}
\IEEEpubid{0000--0000/00\$00.00~\copyright~2011 Computer vision and pattern recognition}


\maketitle
\begin{abstract}
This paper introduces a novel experiment bench for finding the focal length of an unidentified lens. It aims to provide an easy-to-use, plug-and-play system for researchers who work in optics laboratories or similar environments, and is thus expected to increase the speed of research in such settings. The bench consists of easy-to-build hardware and accompanying open-source image processing software. The tools used to build the software are all from the open-source ecosystem: Subversion, Python, OpenCV and its bindings. The software implementation can be examined at \cite{flf}.
\end{abstract}
\begin{IEEEkeywords}
Image processing, Edge detection, Python, Open-source, OpenCV, Lens, Optics, Focal Length, Magnification.
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle

\section{Introduction}
\label{section:introduction}
\lettrine{I}{n} most optics laboratories, researchers suffer from mixed-up lenses and unidentified lenses found on work benches and in drawers. This causes confusion and slows down the research being carried out. Unfortunately, the confusion cannot be solved by labelling. The reason is as follows: typical lenses are made from glass or equivalent optical materials whose surfaces must be protected from scratches, because scratches reduce the performance of a lens dramatically; thus, the surface should not be touched. A label on the surface would also reduce the usable surface area. Lenses are therefore typically stored in special cases with name plates at the edges; an opaque lens holder is not desirable either, because it blocks the light at the periphery and makes it impossible to place two lenses close enough together when needed.

The most typical problem our research group faces is finding an unknown lens outside of its box. This causes confusion and leads to serious problems in a populated laboratory like ours. The most characteristic piece of information for identifying an unknown lens is its focal length, and determining it requires a measurement procedure. The focal length is the distance at which a lens focuses a collimated input beam.

For a spherical lens, one can measure the diameter and the lens sag to find the radius of curvature, determine the material to find the refractive index, and then compute the focal length from these values. Although this procedure is straightforward in theory, it is mainly done manually, which makes it very inaccurate. Moreover, spherical lenses are rare in practice because of their comparatively poor performance; most of the time one deals with an aspheric or toroidal lens, whose radius of curvature cannot be calculated easily. Finding the focal length of a lens is therefore a major problem for most optics researchers, and we believe it can be solved using image processing.

As a practical approach in most labs, the focal length of a lens is determined by imaging the ceiling lights and observing the distance at which the image is formed. The distances are usually estimated by the eye of the optician, and the focal length is found using the imaging equation for lenses. This method is both inaccurate and, compared with an automated approach, time consuming.

A device similar to our system for measuring the focal length of an unidentified lens is the focimeter. The unidentified lens is placed on a special movable mount, and the optician moves it back and forth with a knob until s/he sees the pattern behind the lens clearly. Since eyes vary from person to person, the measured value cannot be standardized. Because our system uses a camera, it eliminates this unknown, person-dependent parameter from the focal length calculation.

\section{Background}
\label{section:background}

Lenses are generally used for their imaging properties: a lens with focal length $f$ creates an image of an object located at a distance $d_{o}$ from the lens, at a distance $d_{i}$ from itself. Hence, $d_{i}$ depends on both $f$ and $d_{o}$; the exact relation is given by Equation \ref{equ:distance}. The magnification of the lens, which is the ratio of the image size to the object size, can be found using Equation \ref{equ:magnification}; see \cite{boreman1998basic} for a review of the basic theory.
\IEEEpubidadjcol

\begin{equation}
\label{equ:distance}
\begin{split}
d_{i} = \frac{d_{o} \cdot f}{d_{o} - f}
\end{split}
\end{equation}

\begin{equation}
\label{equ:magnification}
\begin{split}
M = -\frac{d_{i}}{d_{o}}
\end{split}
\end{equation}
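As a quick numerical check of Equation \ref{equ:distance} and Equation \ref{equ:magnification}, the following Python sketch (illustrative values only, not part of the bench software) evaluates both relations:

```python
# Numerical sketch of the thin-lens imaging relations;
# distances in millimetres, values are arbitrary examples.

def image_distance(d_o, f):
    """Thin-lens relation: d_i = d_o * f / (d_o - f)."""
    if d_o == f:
        raise ValueError("object at the focal plane: image at infinity")
    return d_o * f / (d_o - f)

def magnification(d_i, d_o):
    """M = -d_i / d_o; negative M means an inverted image."""
    return -d_i / d_o

# Object beyond the focal length: real, inverted image.
d_i = image_distance(750.0, 250.0)   # 375.0 mm
assert magnification(d_i, 750.0) == -0.5

# Object inside the focal length: virtual, upright, magnified image.
d_i = image_distance(125.0, 250.0)   # -250.0 mm (same side as the object)
assert magnification(d_i, 125.0) == 2.0
```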

Using Equation \ref{equ:distance} and Equation \ref{equ:magnification}, one can see that if the object is closer to the lens than one focal length, the image forms on the same side as the object and is virtual, see Figure \ref{fig:virtualimage}. The virtual image is upright; if the lens is positive, the image is larger than the object, and if the lens is negative, the image is smaller than the object.

\begin{figure}[H]
\setlength{\unitlength}{0.14in}
\centering
\begin{picture}(32,13)
\put(0,6){\line(1,0){23}}
\put(12.5,1.5){\line(0,1){9}}
\put(8,6){\vector(0,1){3}}
\put(8,9){\line(1,0){4.5}}
\put(21.5,5.5){\line(0,1){1}}
\put(21.2,4.5){$f$}
\put(12.5,9){\line(3,-1){11}}
\multiput(0.5,13)(1.5,-0.5){8}{\line(3,-1){1}}
\put(8,9){\line(3,-2){10}}
\multiput(0,14.3)(1.5,-1){6}{\line(3,-2){1}}
\put(3.5,6){\vector(0,1){6}}
\put(7,5){$Object$}
\put(7,7){$h_i$}
\put(2.2,8.5){$h_o$}
\put(11.5,11.5){$Lens$}
\put(0.5,5){$Virtual~Image$}
\put(12,11){\line(1,-1){1}}
\put(12,2){\line(1,-1){1}}
\put(10,4.5){$d_i$}
\put(8.1,4){\line(1,0){4.3}}
\put(8.1,3.9){\line(0,1){0.2}}
\put(12.4,3.9){\line(0,1){0.2}}
\put(7.6,2.5){$d_o$}
\put(3.6,3.3){\line(1,0){8.8}}
\put(3.6,3.2){\line(0,1){0.2}}
\put(12.4,3.2){\line(0,1){0.2}}
\end{picture}
\caption{Illustration of the concept of virtual image creation.}
\label{fig:virtualimage}
\end{figure}

Using the relative magnification between the two photographs, the focal length of the unidentified lens can be calculated as shown in Equation \ref{equ:focallength}, via the following steps.

\begin{equation}
\label{equ:magnificationl}
\begin{split}
M_{1} = -\frac{d_{c}}{d_{o} + d}
\end{split}
\end{equation}

\begin{equation}
\label{equ:magnificationlens}
\begin{split}
M_{lens} = -\frac{d_{i}}{d_{o}}
\end{split}
\end{equation}

\begin{equation}
\label{equ:magnificationcamera}
\begin{split}
M_{camera} = -\frac{d_{c}}{d - d_{i}}
\end{split}
\end{equation}

\begin{equation}
\label{equ:magnificationtotal}
\begin{split}
M_{total} = M_{lens} \cdot M_{camera}
\end{split}
\end{equation}

\begin{equation}
\label{equ:magnificationderivation}
\begin{split}
M = \frac{M_{total}}{M_{1}} = -\frac{d_{i} \cdot (d_{o}+d)}{d_{o} \cdot (d-d_{i})}
\end{split}
\end{equation}

\begin{equation}
\label{equ:focallength}
\begin{split}
f = \frac{M \cdot d \cdot d_{o}}{(M-1) \cdot (d_{o}+d)}
\end{split}
\end{equation}
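The derivation above can be sanity-checked numerically: starting from a known focal length, compute $d_i$, form the relative magnification $M$ of Equation \ref{equ:magnificationderivation}, and recover $f$ via Equation \ref{equ:focallength}. The distances in the sketch below are arbitrary illustrative values, not the bench's actual dimensions:

```python
# Round-trip check of the magnification-to-focal-length derivation.
# All distances in millimetres; the values are arbitrary examples.

def relative_magnification(f, d_o, d):
    """M = M_total / M_1 for a lens of focal length f."""
    d_i = d_o * f / (d_o - f)                    # thin-lens image distance
    return -d_i * (d_o + d) / (d_o * (d - d_i))  # relative magnification M

def focal_length(M, d, d_o):
    """Invert the relation: f = M*d*d_o / ((M - 1)*(d_o + d))."""
    return M * d * d_o / ((M - 1.0) * (d_o + d))

# Recovering the focal length we started from confirms the algebra.
M = relative_magnification(250.0, 100.0, 200.0)
assert abs(focal_length(M, 200.0, 100.0) - 250.0) < 1e-9
```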

The proposed system uses the magnification of a fixed object to find the focal length of an unidentified lens. First a picture of a fixed pattern at a fixed distance is taken using a camera, then the unidentified lens is placed between the fixed pattern and the camera, and another picture is taken, see Figure \ref{fig:opticalscheme}. 

\begin{figure}[H]
\setlength{\unitlength}{0.14in}
\centering
\begin{picture}(32,13)
\put(0,6){\line(1,0){23}}
\put(23,0.5){\line(0,1){10}}
\put(0.5,6){\vector(0,1){3}}
\put(11.5,11.5){$Camera~Lens$}
\put(3.5,11.5){$Unknown~Lens$}
\put(19,11.5){$CCD~Array$}
\put(0,9.5){$Pattern$}
\put(14.5,1.5){\line(0,1){9}}
\put(14,11){\line(1,-1){1}}
\put(14,2){\line(1,-1){1}}
\put(6.5,1.5){\line(0,1){9}}
\put(6,11){\line(1,-1){1}}
\put(6,2){\line(1,-1){1}}
\put(2.5,3.6){$d_o$}
\put(0.1,3.3){\line(1,0){5.8}}
\put(0.1,3.2){\line(0,1){0.2}}
\put(5.9,3.2){\line(0,1){0.2}}
\put(10.5,3.6){$d$}
\put(7,3.3){\line(1,0){7}}
\put(7,3.2){\line(0,1){0.2}}
\put(14,3.2){\line(0,1){0.2}}
\put(18.5,3.6){$d_c$}
\put(15,3.3){\line(1,0){7.5}}
\put(15,3.2){\line(0,1){0.2}}
\put(22.5,3.2){\line(0,1){0.2}}
\end{picture}
\caption{Drawing of the optical system; the distance $d_c$ is an unknown that changes with the camera used in the system.}
\label{fig:opticalscheme}
\end{figure}

\section{Hardware}
\label{section:hardware}

The system consists of a camera (in this case, a conventional web camera with $640\times480$ resolution), a fixed pattern, and an optical table with the necessary peripheral devices; a photograph of the current setup is shown in Figure \ref{fig:fixedsystem}. The optical table and its peripherals can be replaced with even cheaper equipment. The lens to be identified is placed between the fixed pattern and the camera. The distances are known variables and are used in the calculation of the focal length. Overall, the setup is cheap and easy to build from a few pieces.

\begin{figure}[H]
\centering
\includegraphics[width=2.6in]{bench}
\caption{Photograph of the bench: $d$ is the distance between the second surface of the camera's lens and the first surface of the lens under test, and $d_o$ is the distance between the second surface of the lens under test and the pattern printed on a piece of paper.}
\label{fig:fixedsystem}
\end{figure}

The fixed pattern used in the setup is a printed image that contains many squares nested inside each other. The reason for choosing squares nested like Russian $Matryoshka$ dolls is to make the system operate over a very wide range of magnification values: by counting the number of squares in the image, a magnification can be found even if part of the pattern falls outside the camera's field of view due to a very large magnification of the unknown lens. This way, the algorithm can find the focal length of the unidentified lens over a larger range.
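To illustrate the idea, the following NumPy sketch draws such a nested-square pattern; the dimensions and square count here are arbitrary, and the actual printed pattern may differ:

```python
import numpy as np

def nested_squares(size=400, n=8):
    """Draw n concentric square outlines, Matryoshka-style."""
    img = np.full((size, size), 255, dtype=np.uint8)  # white background
    step = size // (2 * n)
    for k in range(n):
        a, b = k * step, size - 1 - k * step
        img[a, a:b + 1] = 0          # top edge
        img[b, a:b + 1] = 0          # bottom edge
        img[a:b + 1, a] = 0          # left edge
        img[a:b + 1, b] = 0          # right edge
    return img

pattern = nested_squares()
# Counting the black pixels crossed along the middle row reveals how many
# square outlines are visible, even if the outer ones are cropped.
assert int((pattern[200] == 0).sum()) == 16   # 8 squares x 2 crossings
```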

\section{Algorithm}
\label{section:algorithm}
The implemented algorithm requires some inputs in order to work with different benches and hardware, namely the distance between the unknown lens and the camera lens, and the distance between the unknown lens and the fixed pattern. These are given inside the code and can be changed by modifying the function called $odakbul()$. Another necessary input is a reference picture of the pattern without the unknown lens in place; this could also be replaced with the physical dimensions of the pattern printed on the paper. The reference picture is stored under $src/data/on.jpg$ and is a predefined value. Finally, the one and only variable input of the system is a photograph taken with the unknown lens in place, which serves as the input of the implementation. All steps of the algorithm are implemented in the Python programming language \cite{python} with OpenCV \cite{opencv}. All work was done under the Pardus 2011 Linux operating system, see \cite{pardus}. A sketch of the code's routine is shown in Figure \ref{fig:algorithm}; all the functions in Figure \ref{fig:algorithm} are introduced one by one in this section.

\begin{figure}[H]
\setlength{\unitlength}{0.14in}
\centering
\begin{picture}(32,7)
\put(-1,6.5){Input}
\put(0,5.5){\vector(1,0){3}}
\put(3,4){\framebox(5,3){Gray-Scale}}
\put(8,5.5){\vector(1,0){3}}
\put(11,4){\framebox(4,3){Binary}}
\put(15,5.5){\vector(1,0){3}}
\put(18,4){\framebox(5,3){Canny edge}}
\put(20.5,4){\vector(0,-1){1}}
\put(17,0){\framebox(7,3){Area calculator}}
\put(17,1.4){\vector(-1,0){3}}
\put(4,0){\framebox(10,3){Focal length calculator}}
\put(4,1.4){\vector(-1,0){3}}
\put(-1,2.4){Output}
\end{picture}
\caption{Sketch of the algorithm: the input is a photograph taken with a web camera and the output is the focal length measurement of the unknown lens.}
\label{fig:algorithm}
\end{figure}

The first step in Figure \ref{fig:algorithm} is the gray-scale function, called $gri()$ in the code, which takes a picture as input and returns a gray-scale image to be processed in the next steps, see below:

  \lstset{language=Python,breaklines=true}
  \lstinputlisting[language=Python,firstline=133,lastline=136]{../src/ana.py}

The next step in Figure \ref{fig:algorithm} is the binary image conversion, called $ikilik()$ in the code. This function takes a picture and a threshold number as inputs and outputs another image to be processed in the next step. The threshold is an $8$-bit gray-scale value ($0$--$255$). The function checks every pixel of the given image and replaces it with one if the pixel value is above the threshold, and with zero otherwise; see below for the function and Figure \ref{fig:binary} for a sample result.

  \lstset{language=Python,breaklines=true}
  \lstinputlisting[language=Python,firstline=91,lastline=100]{../src/ana.py}
  
\begin{figure}[H]
\centering
\includegraphics[width=2in]{binary}
\caption{Sample result of the binary conversion function $ikilik()$.}
\label{fig:binary}
\end{figure}
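For reference, the per-pixel thresholding rule described above can also be expressed as a single vectorized NumPy operation; this is an illustrative equivalent, not the $ikilik()$ listing itself:

```python
import numpy as np

def binarize(gray, threshold):
    """1 where the pixel value exceeds the threshold, 0 otherwise."""
    return (gray > threshold).astype(np.uint8)

gray = np.array([[10, 200], [128, 127]], dtype=np.uint8)
assert binarize(gray, 127).tolist() == [[0, 1], [1, 0]]
```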

The next step in Figure \ref{fig:algorithm} is the Canny edge detection, called $kenarbul()$ in the code. This function takes an image as input and processes it using the Canny edge detection algorithm implemented in OpenCV, see \cite{opencv} and \cite{woods2006multidimensional}. The function is listed below:

  \lstset{language=Python,breaklines=true}
  \lstinputlisting[language=Python,firstline=102,lastline=108]{../src/ana.py}

\begin{figure}[H]
\centering
\includegraphics[width=2in]{edge}
\caption{Sample result of the Canny edge detector function $kenarbul()$.}
\label{fig:edge}
\end{figure}
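Canny is a multi-stage detector (smoothing, gradient computation, non-maximum suppression, hysteresis thresholding); as a rough illustration of the gradient step it is built on, a plain Sobel-magnitude sketch in NumPy follows. This is not the OpenCV implementation used by $kenarbul()$:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((win * kx).sum(), (win * ky).sum())
    return out

step = np.zeros((5, 6)); step[:, 3:] = 1.0   # vertical intensity step
grad = sobel_magnitude(step)
assert grad[1, 0] == 0.0 and grad[1, 1] == 4.0  # response only at the step
```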

The next step in Figure \ref{fig:algorithm} is the area calculator, called $kutucukyarat()$ in the code. This function takes the edge image as input and processes it to find the centroids of the edges in the given picture, see below:

  \lstset{language=Python,breaklines=true}
  \lstinputlisting[language=Python,firstline=70,lastline=89]{../src/ana.py}

\begin{figure}[H]
\centering
\includegraphics[width=2in]{dots}
\caption{Sample result of the area calculator function $kutucukyarat()$.}
\label{fig:dots}
\end{figure}
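The centroid of a set of edge pixels can be sketched as the mean of the nonzero pixel coordinates; the NumPy snippet below is an illustrative stand-in for this step, not the $kutucukyarat()$ listing above:

```python
import numpy as np

def centroid(binary_img):
    """(x, y) centroid of the nonzero pixels of a binary image."""
    ys, xs = np.nonzero(binary_img)
    return xs.mean(), ys.mean()

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 4:8] = 1                    # a small filled rectangle
assert centroid(mask) == (5.5, 3.0)   # centre of the rectangle
```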

The last step in Figure \ref{fig:algorithm} is the focal length calculator, implemented by the functions $odakbul()$ and $oranbul()$ in the code. Note that the distances are given inside $odakbul()$. These functions calculate the magnification ratio and then relate it to the focal length of the unknown lens.

  \lstset{language=Python,breaklines=true}
  \lstinputlisting[language=Python,firstline=65,lastline=68]{../src/ana.py}
  
  \lstset{language=Python,breaklines=true}
  \lstinputlisting[language=Python,firstline=59,lastline=63]{../src/ana.py}

Figure \ref{fig:result} shows the terminal output of the whole implementation for two sample lenses. The first sample lens has a focal length of $250$~mm in both axes; the algorithm measured a magnification of $1.123$ in the X-axis and $1.122$ in the Y-axis, corresponding to focal lengths of $218$~mm and $219$~mm, respectively. The second sample lens has a focal length of $150$~mm in both axes; the algorithm measured a magnification of $1.022$ in the X-axis and $1.185$ in the Y-axis, corresponding to focal lengths of $1105$~mm and $152$~mm, respectively.

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{result}
\caption{Sample terminal output of the focal length calculator functions $odakbul()$ and $oranbul()$.}
\label{fig:result}
\end{figure}

\section{Conclusion and Discussion}
\label{section:conclusion}

We have shown that it is theoretically and practically possible to measure the focal length of a given lens using image processing and a small amount of hardware. This project also proved that it is possible to do so with a very lightweight algorithm: the introduced system calculates the focal lengths of two different lenses in real $0m2.595s$, user $0m2.510s$ and sys $0m0.047s$ under a Linux operating system. The system is cheap and easy to build. Since the implementation is open source under the GPL version $2$ license, anyone can contribute to, use or modify it according to his/her needs; thus it can be considered a flexible implementation.

The overall accuracy of the system does not yet fully meet the standards. According to the standards, a measurement device should be able to distinguish $0.25$ in lens number, where the lens number is defined as $\frac{1}{f}$, with $f$ the focal length of the lens in meters. The existing system provides a measurement accuracy worse than, but close to, this value.
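Using the measured focal lengths reported earlier, the lens-number error can be computed directly, as the short sketch below shows (values taken from the results above):

```python
# Lens-number (1/f, with f in metres) error for the reported measurements.

def lens_number_error(f_true_m, f_meas_m):
    """Absolute difference in dioptres between true and measured 1/f."""
    return abs(1.0 / f_meas_m - 1.0 / f_true_m)

# 250 mm lens measured as 218 mm (X-axis): about 0.587 in lens number.
assert abs(lens_number_error(0.250, 0.218) - 0.587) < 1e-3
# 150 mm lens measured as 152 mm (Y-axis): about 0.088 in lens number.
assert abs(lens_number_error(0.150, 0.152) - 0.0877) < 1e-3
```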

There are several reasons behind this performance gap. The theory used in this paper is based on approximations in optics, so some amount of error is introduced by the equations and the derived solver function. The camera provides images at $640\times480$ resolution; a high-definition camera would yield a more accurate result. Better focusing optics inside the camera are also necessary: the camera itself introduces false edges because of the blur originating from its focusing optics.

At this point, it can be concluded that the system described in this paper is a promising one with many further possibilities. By using a casing to block environmental illumination and a temporally coherent light source (such as a single-color laser), it would be possible to derive the chromatic aberration of the lens under test, yielding another specification of the lens. Another possibility is to add a moving stage to increase the number of samples examined: the moving stage would let the camera take images of the scene at different distances, so that a more accurate result could be achieved by averaging the results from each sample. With the moving stage it would also be possible to examine lens systems (multiple-lens systems, microscope or telescope objectives, etc.). We have derived the necessary equation for this measurement but do not introduce it in this paper, since we have not yet verified it experimentally.

\section{Acknowledgement}
\label{section:acknowledgement}
The authors would like to thank Professor Y\"{u}cel Yemez for providing the opportunity to work on this project within the scope of the computer vision and pattern recognition lecture. The authors would also like to thank Google Code for providing free Subversion-based online space for the project.

%\appendices
%\section{Python code}
%\label{code:ana}
%\lstset{language=Python,breaklines=true}
%\lstinputlisting{../src/../src/ana.py}

\ifCLASSOPTIONcaptionsoff
  \newpage
\fi

\bibliographystyle{ieeetr}
\bibliography{references}
\end{document}



