<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head>
<title>Learning OpenCV</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <style type="text/css">
   h1 {
    font-size: 200%; /* Font size */
    border-bottom: 2px solid maroon; /* Rule under the heading text */
    font-weight: normal; /* No bold weight */
    padding-bottom: 5px; /* Space between the text and the rule */
   }
  </style>

<link rel="icon" type="image/png" href="images/OpenCV_Logo.png" />
<link rel="shortcut icon" type="image/png" href="images/OpenCV_Logo.png" />

</head>
<body>
<br>
<a HREF="http://opencv.willowgarage.com/wiki/"><img src="images/OpenCV_Logo.png" alt="OpenCV logo" /></a>

<H1>Learning OpenCV <font size="4" color="#120a8f"><i>(editors 4)</i></font></H1>
<br>
<font size="5" color="#120a8f"><i>Contents:</i></font>
<br>
<br>

<a HREF="#Overview"><font size="4" color="#120a8f"><i><u>1. Overview</u></i></font></a>
<ul>
  <li><a HREF="#What_Is_OpenCV">What Is OpenCV?</a></li>
  <li><a HREF="#Who_Uses_OpenCV">Who Uses OpenCV?</a></li>
  <li><a HREF="#What_Is_Computer_Vision">What Is Computer Vision?</a></li>
  <li><a HREF="#The_Origin_of_OpenCV">The Origin of OpenCV</a></li>
  <li><a HREF="#Downloading_and_Installing_OpenCV">Downloading and Installing OpenCV</a></li>
  <li><a HREF="#Getting_the_Latest_OpenCV_via_CVS">Getting the Latest OpenCV via CVS</a></li>
  <li><a HREF="#More_OpenCV_Documentation">More OpenCV Documentation</a></li>
  <li><a HREF="#">OpenCV Structure and Content</a></li>
  <li><a HREF="#">Portability</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>2. Introduction to OpenCV</u></i></font></a>
<ul>
  <li><a HREF="#">Getting Started</a></li>
  <li><a HREF="#">First Program—Display a Picture</a></li>
  <li><a HREF="#">Second Program—AVI Video</a></li>
  <li><a HREF="#">Moving Around</a></li>
  <li><a HREF="#">A Simple Transformation</a></li>
  <li><a HREF="#">A Not-So-Simple Transformation</a></li>
  <li><a HREF="#">Input from a Camera</a></li>
  <li><a HREF="#">Writing to an AVI File</a></li>
  <li><a HREF="#">Onward</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>3. Getting to Know OpenCV</u></i></font></a>
<ul>
  <li><a HREF="#">OpenCV Primitive Data Types</a></li>
  <li><a HREF="#">CvMat Matrix Structure</a></li>
  <li><a HREF="#">IplImage Data Structure</a></li>
  <li><a HREF="#">Matrix and Image Operators</a></li>
  <li><a HREF="#">Drawing Things</a></li>
  <li><a HREF="#">Data Persistence</a></li>
  <li><a HREF="#">Integrated Performance Primitives</a></li>
  <li><a HREF="#">Summary</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>4. HighGUI</u></i></font></a>
<ul>
  <li><a HREF="#">A Portable Graphics Toolkit</a></li>
  <li><a HREF="#">Creating a Window</a></li>
  <li><a HREF="#">Loading an Image</a></li>
  <li><a HREF="#">Displaying Images</a></li>
  <li><a HREF="#">Working with Video</a></li>
  <li><a HREF="#">ConvertImage</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>5. Image Processing</u></i></font></a>
<ul>
  <li><a HREF="#">Overview</a></li>
  <li><a HREF="#">Smoothing</a></li>
  <li><a HREF="#">Image Morphology</a></li>
  <li><a HREF="#">Flood Fill</a></li>
  <li><a HREF="#">Resize</a></li>
  <li><a HREF="#">Image Pyramids</a></li>
  <li><a HREF="#">Threshold</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>6. Image Transforms</u></i></font></a>
<ul>
  <li><a HREF="#">Overview</a></li>
  <li><a HREF="#">Convolution</a></li>
  <li><a HREF="#">Gradients and Sobel Derivatives</a></li>
  <li><a HREF="#">Laplace</a></li>
  <li><a HREF="#">Canny</a></li>
  <li><a HREF="#">Hough Transforms</a></li>
  <li><a HREF="#">Remap</a></li>
  <li><a HREF="#">Stretch, Shrink, Warp, and Rotate</a></li>
  <li><a HREF="#">CartToPolar and PolarToCart</a></li>
  <li><a HREF="#">LogPolar</a></li>
  <li><a HREF="#">Discrete Fourier Transform (DFT)</a></li>
  <li><a HREF="#">Discrete Cosine Transform (DCT)</a></li>
  <li><a HREF="#">Integral Images</a></li>
  <li><a HREF="#">Distance Transform</a></li>
  <li><a HREF="#">Histogram Equalization</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>7. Histograms and Matching</u></i></font></a>
<ul>
  <li><a HREF="#">Basic Histogram Data Structure</a></li>
  <li><a HREF="#">Accessing Histograms</a></li>
  <li><a HREF="#">Basic Manipulations with Histograms</a></li>
  <li><a HREF="#">Some More Complicated Stuff</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>8. Contours</u></i></font></a>
<ul>
  <li><a HREF="#">Memory Storage</a></li>
  <li><a HREF="#">Sequences</a></li>
  <li><a HREF="#">Contour Finding</a></li>
  <li><a HREF="#">Another Contour Example</a></li>
  <li><a HREF="#">More to Do with Contours</a></li>
  <li><a HREF="#">Matching Contours</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>9. Image Parts and Segmentation</u></i></font></a>
<ul>
  <li><a HREF="#">Parts and Segments</a></li>
  <li><a HREF="#">Background Subtraction</a></li>
  <li><a HREF="#">Watershed Algorithm</a></li>
  <li><a HREF="#">Image Repair by Inpainting</a></li>
  <li><a HREF="#">Mean-Shift Segmentation</a></li>
  <li><a HREF="#">Delaunay Triangulation, Voronoi Tesselation</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>10. Tracking and Motion</u></i></font></a>
<ul>
  <li><a HREF="#">The Basics of Tracking</a></li>
  <li><a HREF="#">Corner Finding</a></li>
  <li><a HREF="#">Subpixel Corners</a></li>
  <li><a HREF="#">Invariant Features</a></li>
  <li><a HREF="#">Optical Flow</a></li>
  <li><a HREF="#">Mean-Shift and Camshift Tracking</a></li>
  <li><a HREF="#">Motion Templates</a></li>
  <li><a HREF="#">Estimators</a></li>
  <li><a HREF="#">The Condensation Algorithm</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>11. Camera Models and Calibration</u></i></font></a>
<ul>
  <li><a HREF="#">Camera Model</a></li>
  <li><a HREF="#">Calibration</a></li>
  <li><a HREF="#">Undistortion</a></li>
  <li><a HREF="#">Putting Calibration All Together</a></li>
  <li><a HREF="#">Rodrigues Transform</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>12. Projection and 3D Vision</u></i></font></a>
<ul>
  <li><a HREF="#">Projections</a></li>
  <li><a HREF="#">Affine and Perspective Transformations</a></li>
  <li><a HREF="#">POSIT: 3D Pose Estimation</a></li>
  <li><a HREF="#">Stereo Imaging</a></li>
  <li><a HREF="#">Structure from Motion</a></li>
  <li><a HREF="#">Fitting Lines in Two and Three Dimensions</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>13. Machine Learning</u></i></font></a>
<ul>
  <li><a HREF="#">What Is Machine Learning</a></li>
  <li><a HREF="#">Common Routines in the ML Library</a></li>
  <li><a HREF="#">Mahalanobis Distance</a></li>
  <li><a HREF="#">K-Means</a></li>
  <li><a HREF="#">Naïve/Normal Bayes Classifier</a></li>
  <li><a HREF="#">Binary Decision Trees</a></li>
  <li><a HREF="#">Boosting</a></li>
  <li><a HREF="#">Random Trees</a></li>
  <li><a HREF="#">Face Detection or Haar Classifier</a></li>
  <li><a HREF="#">Other Machine Learning Algorithms</a></li>
  <li><a HREF="#">Exercises</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>14. OpenCV’s Future</u></i></font></a>
<ul>
  <li><a HREF="#">Past and Future</a></li>
  <li><a HREF="#">Directions</a></li>
  <li><a HREF="#">OpenCV for Artists</a></li>
  <li><a HREF="#">Afterword</a></li>
</ul>

<a HREF="#"><font size="4" color="#120a8f"><i><u>Bibliography</u></i></font></a>
<br>
<a HREF="#"><font size="4" color="#120a8f"><i><u>Index</u></i></font></a>

<A NAME="Overview"></A><H1><center>Overview <br> <font size="4" color="#120a8f"><i>chapter 1</i></font></center></H1>

<A NAME="What_Is_OpenCV"></A><font size="5" color="#120a8f"><i>What Is OpenCV?</i></font>
<br><br>
OpenCV [OpenCV] is an open source (see http://opensource.org) computer vision library
available from http://SourceForge.net/projects/opencvlibrary. The library is written in C
and C++ and runs under Linux, Windows, and Mac OS X. There is active development
on interfaces for Python, Ruby, Matlab, and other languages.
<br><br>
OpenCV was designed for computational efficiency and with a strong focus on real-time
applications. OpenCV is written in optimized C and can take advantage of multicore
processors. If you desire further automatic optimization on Intel architectures [Intel],
you can buy Intel's Integrated Performance Primitives (IPP) libraries [IPP], which
consist of low-level optimized routines in many different algorithmic areas. OpenCV
automatically uses the appropriate IPP library at runtime if that library is installed.
<br><br>
One of OpenCV's goals is to provide a simple-to-use computer vision infrastructure
that helps people build fairly sophisticated vision applications quickly. The OpenCV
library contains over 500 functions that span many areas in vision, including factory
product inspection, medical imaging, security, user interface, camera calibration, stereo
vision, and robotics. Because computer vision and machine learning often go
hand-in-hand, OpenCV also contains a full, general-purpose Machine Learning Library (MLL).
This sublibrary is focused on statistical pattern recognition and clustering. The MLL is
highly useful for the vision tasks that are at the core of OpenCV's mission, but it is
general enough to be used for any machine learning problem.

<br><br>
<A NAME="Who_Uses_OpenCV"></A><font size="5" color="#120a8f"><i>Who Uses OpenCV?</i></font>
<br><br>
Most computer scientists and practical programmers are aware of some facet of the role
that computer vision plays. But few people are aware of all the ways in which computer
vision is used. For example, most people are somewhat aware of its use in surveillance,
and many also know that it is increasingly being used for images and video on the Web.
A few have seen some use of computer vision in game interfaces. Yet few people realize
that most aerial and street-map images (such as in Google's Street View) make heavy
use of camera calibration and image stitching techniques. Some are aware of niche
applications in safety monitoring, unmanned flying vehicles, or biomedical analysis. But
few are aware how pervasive machine vision has become in manufacturing: virtually
everything that is mass-produced has been automatically inspected at some point using
computer vision.
<br><br>
The open source license for OpenCV has been structured such that you can build a
commercial product using all or part of OpenCV. You are under no obligation to
open-source your product or to return improvements to the public domain, though we hope
you will. In part because of these liberal licensing terms, there is a large user
community that includes people from major companies (IBM, Microsoft, Intel, SONY, Siemens,
and Google, to name only a few) and research centers (such as Stanford, MIT, CMU,
Cambridge, and INRIA). There is a Yahoo groups forum where users can post questions
and discussion at http://groups.yahoo.com/group/OpenCV; it has about 20,000 members.
OpenCV is popular around the world, with large user communities in China, Japan,
Russia, Europe, and Israel.
<br><br>
Since its alpha release in January 1999, OpenCV has been used in many applications,
products, and research efforts. These applications include stitching images together in
satellite and web maps, image scan alignment, medical image noise reduction, object
analysis, security and intrusion detection systems, automatic monitoring and safety
systems, manufacturing inspection systems, camera calibration, military applications, and
unmanned aerial, ground, and underwater vehicles. It has even been used in sound and
music recognition, where vision recognition techniques are applied to sound spectrogram
images. OpenCV was a key part of the vision system in Stanford's robot "Stanley",
which won the $2M DARPA Grand Challenge desert robot race [Thrun06].

<br><br>
<A NAME="What_Is_Computer_Vision"></A><font size="5" color="#120a8f"><i>What Is Computer Vision?</i></font>
<br><br>
Computer vision* is the transformation of data from a still or video camera into either a
decision or a new representation. All such transformations are done for achieving some
particular goal. The input data may include some contextual information such as "the
camera is mounted in a car" or "laser range finder indicates an object is 1 meter away".
The decision might be "there is a person in this scene" or "there are 14 tumor cells on
this slide". A new representation might mean turning a color image into a grayscale
image or removing camera motion from an image sequence.
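<br><br>
That last kind of transformation, color to grayscale, is easy to make concrete. The book's own examples are in C, but the following toy sketch (Python, not from the book) shows the idea; the Rec. 601 luma weights used are the standard ones:

```python
def to_gray(pixel):
    # Collapse one (R, G, B) pixel into a single luminance value
    # using the standard Rec. 601 weights.
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

# A 2x2 color "image" as nested lists of (R, G, B) tuples.
color_image = [[(255, 0, 0), (0, 255, 0)],
               [(0, 0, 255), (255, 255, 255)]]

# The "new representation": the same scene, one number per pixel.
gray_image = [[to_gray(p) for p in row] for row in color_image]
```

The same mapping, applied per pixel, is what a library routine for color conversion does at scale.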
Because we are such visual creatures, it is easy to be fooled into thinking that computer
vision tasks are easy. How hard can it be to find, say, a car when you are staring
at it in an image? Your initial intuitions can be quite misleading. The human brain
divides the vision signal into many channels that stream different kinds of information
into your brain. Your brain has an attention system that identifies, in a task-dependent
way, important parts of an image to examine while suppressing examination of other
areas. There is massive feedback in the visual stream that is, as yet, little understood.
There are widespread associative inputs from muscle control sensors and all of the other
senses that allow the brain to draw on cross-associations made from years of living in
the world. The feedback loops in the brain go back to all stages of processing, including
the hardware sensors themselves (the eyes), which mechanically control lighting via the
iris and tune the reception on the surface of the retina.
<br><br>
In a machine vision system, however, a computer receives a grid of numbers from the
camera or from disk, and that's it. For the most part, there's no built-in pattern
recognition, no automatic control of focus and aperture, no cross-associations with years of
experience. For the most part, vision systems are still fairly naïve. Figure 1-1 shows a
picture of an automobile.
<br><br>
<font size="2">* Computer vision is a vast field. This book will give you a basic grounding in the field, but we also
recommend texts by Trucco [Trucco98] for a simple introduction, Forsyth [Forsyth03] as a comprehensive
reference, and Hartley [Hartley06] and Faugeras [Faugeras93] for how 3D vision really works.</font>

<br><br>
<img src="images/1-1_image.jpg" alt="Figure 1-1" />
<br> Figure 1-1. To a computer, the car’s side mirror is just a grid of numbers <br><br>

In that picture we see a side mirror on the driver's side of the car. What the computer
"sees" is just a grid of numbers. Any given number within that grid has a rather large
noise component and so by itself gives us little information, but this grid of numbers is
all the computer "sees". Our task then becomes to turn this noisy grid of numbers into
the perception: "side mirror". Figure 1-2 gives some more insight into why computer
vision is so hard. In fact, the problem, as we have posed it thus far, is worse than hard;
it is formally impossible to solve. Given a two-dimensional (2D) view of a 3D world,
there is no unique way to reconstruct the 3D signal.
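<br><br>
The "grid of numbers" point can be sketched in a few lines. The patch below is invented for illustration (Python, not from the book): any single pixel is noisy and nearly meaningless on its own, but even a crude statistic over the grid starts to reveal structure, here a dark region beside a bright one:

```python
# A 4x4 patch of 8-bit intensities -- to the computer, nothing more
# than this grid of numbers (values invented for illustration).
patch = [
    [ 30,  32,  29, 200],
    [ 31,  28,  33, 205],
    [ 29,  30,  31, 198],
    [ 32,  27,  30, 202],
]

def column_mean(img, col):
    # Pool several noisy pixels: individual values fluctuate,
    # but the column average is far more stable.
    return sum(row[col] for row in img) / len(img)

dark = column_mean(patch, 0)    # left side of the patch
bright = column_mean(patch, 3)  # right side of the patch
```

Turning such pooled evidence into the perception "side mirror" is the hard part that the rest of the book addresses.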

<br><br>
<img src="images/1-2_image.jpg" alt="Figure 1-2" />
<br> Figure 1-2. The ill-posed nature of vision: the 2D appearance of objects can change radically with
viewpoint <br><br>

Formally, such an ill-posed problem has no unique or definitive solution. The same 2D
image could represent any of an infinite combination of 3D scenes, even if the data were
perfect. However, as already mentioned, the data is corrupted by noise and distortions.
Such corruption stems from variations in the world (weather, lighting, reflections,
movements), imperfections in the lens and mechanical setup, finite integration time on
the sensor (motion blur), electrical noise in the sensor or other electronics, and
compression artifacts after image capture. Given these daunting challenges, how can we make
any progress?
In the design of a practical system, additional contextual knowledge can often be used
to work around the limitations imposed on us by visual sensors. Consider the example
of a mobile robot that must find and pick up staplers in a building. The robot might use
the facts that a desk is an object found inside offices and that staplers are mostly found
on desks. This gives an implicit size reference; staplers must be able to fit on desks. It
also helps to eliminate falsely "recognizing" staplers in impossible places (e.g., on the
ceiling or a window). The robot can safely ignore a 200-foot advertising blimp shaped
like a stapler because the blimp lacks the prerequisite wood-grained background of a
desk. In contrast, with tasks such as image retrieval, all stapler images in a database
may be of real staplers, and so large sizes and other unusual configurations may have
been implicitly precluded by the assumptions of those who took the photographs.
That is, the photographer probably took pictures only of real, normal-sized staplers.
People also tend to center objects when taking pictures and tend to put them in
characteristic orientations. Thus, there is often quite a bit of unintentional implicit
information within photos taken by people.
<br><br>
Contextual information can also be modeled explicitly with machine learning techniques.
Hidden variables such as size, orientation to gravity, and so on can then be
correlated with their values in a labeled training set. Alternatively, one may attempt
to measure hidden bias variables by using additional sensors. The use of a laser range
finder to measure depth allows us to accurately measure the size of an object.
The next problem facing computer vision is noise. We typically deal with noise by using
statistical methods. For example, it may be impossible to detect an edge in an image
merely by comparing a point to its immediate neighbors. But if we look at the statistics
over a local region, edge detection becomes much easier. A real edge should appear as a
string of such immediate neighbor responses over a local region, each of whose
orientation is consistent with its neighbors. It is also possible to compensate for noise by
taking statistics over time. Still other techniques account for noise or distortions by
building explicit models learned directly from the available data. For example, because
lens distortions are well understood, one need only learn the parameters for a simple
polynomial model in order to describe, and thus correct almost completely, such distortions.
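<br><br>
The local-statistics idea can be demonstrated with a toy sketch (Python, not from the book, which uses OpenCV's C API): on a noisy one-dimensional step signal, comparing a sample to its immediate neighbor fires constantly on pure noise, while comparing the means of two small windows responds at the true edge:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
# A noisy 1-D "scanline": dark (about 0.0) then bright (about 1.0).
signal = [random.gauss(0.0, 0.3) for _ in range(50)] + \
         [random.gauss(1.0, 0.3) for _ in range(50)]

def point_response(sig, i, thresh=0.5):
    # Naive test: compare a sample only to its immediate neighbor.
    return abs(sig[i + 1] - sig[i]) > thresh

def window_response(sig, i, w=10, thresh=0.5):
    # Statistical test: compare the means of two adjacent windows.
    left = sum(sig[i - w:i]) / w
    right = sum(sig[i:i + w]) / w
    return abs(right - left) > thresh

# The point-wise test triggers repeatedly on pure noise; the windowed
# test stays quiet there but responds at the true edge (i = 50).
noise_hits = sum(point_response(signal, i) for i in range(40))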
The actions or decisions that computer vision attempts to make based on camera data
are performed in the context of a specific purpose or task. We may want to remove noise
or damage from an image so that our security system will issue an alert if someone tries
to climb a fence, or because we need a monitoring system that counts how many people
cross through an area in an amusement park. Vision software for robots that wander
through office buildings will employ different strategies than vision software for
stationary security cameras because the two systems have significantly different contexts
and objectives. As a general rule: the more constrained a computer vision context is, the
more we can rely on those constraints to simplify the problem and the more reliable our
final solution will be.
<br><br>
OpenCV is aimed at providing the basic tools needed to solve computer vision problems.
In some cases, high-level functionalities in the library will be sufficient to solve
the more complex problems in computer vision. Even when this is not the case, the basic
components in the library are complete enough to enable creation of a complete solution
of your own to almost any computer vision problem. In the latter case, there are
several tried-and-true methods of using the library; all of them start with solving the
problem using as many available library components as possible. Typically, after you've
developed this first-draft solution, you can see where the solution has weaknesses and
then fix those weaknesses using your own code and cleverness (better known as "solve
the problem you actually have, not the one you imagine"). You can then use your draft
solution as a benchmark to assess the improvements you have made. From that point,
whatever weaknesses remain can be tackled by exploiting the context of the larger
system in which your problem solution is embedded.

<br><br>
<A NAME="The_Origin_of_OpenCV"></A><font size="5" color="#120a8f"><i>The Origin of OpenCV</i></font>
<br><br>

OpenCV grew out of an Intel Research initiative to advance CPU-intensive applications.
Toward this end, Intel launched many projects, including real-time ray tracing and 3D
display walls. One of the authors working for Intel at that time was visiting universities
and noticed that some top university groups, such as the MIT Media Lab, had
well-developed and internally open computer vision infrastructures: code that was passed
from student to student and that gave each new student a valuable head start in
developing his or her own vision application. Instead of reinventing the basic functions from
scratch, a new student could begin by building on top of what came before.
<br><br>
Thus, OpenCV was conceived as a way to make computer vision infrastructure universally
available. With the aid of Intel's Performance Library Team,* OpenCV started
with a core of implemented code and algorithmic specifications being sent to members
of Intel's Russian library team. This is the "where" of OpenCV: it started in Intel's
research lab with collaboration from the Software Performance Libraries group together
with implementation and optimization expertise in Russia.
<br><br>
Chief among the Russian team members was Vadim Pisarevsky, who managed, coded,
and optimized much of OpenCV and who is still at the center of much of the OpenCV
effort. Along with him, Victor Eruhimov helped develop the early infrastructure, and
Valery Kuriakin managed the Russian lab and greatly supported the effort. There were
several goals for OpenCV at the outset:

<ul>

  <li>Advance vision research by providing not only open but also optimized code for
      basic vision infrastructure. No more reinventing the wheel.
  </li>

  <li>Disseminate vision knowledge by providing a common infrastructure that
      developers could build on, so that code would be more readily readable and transferable.
  </li>

  <li>Advance vision-based commercial applications by making portable,
      performance-optimized code available for free, with a license that did not require commercial
      applications to be open or free themselves.
  </li>

</ul>

Those goals constitute the "why" of OpenCV. Enabling computer vision applications
would increase the need for fast processors. Driving upgrades to faster processors would
generate more income for Intel than selling some extra software. Perhaps that is why this
open and free code arose from a hardware vendor rather than a software company. In
some sense, there is more room to be innovative at software within a hardware company.
<br><br>
In any open source effort, it's important to reach a critical mass at which the project
becomes self-sustaining. There have now been approximately two million downloads
of OpenCV, and this number is growing by an average of 26,000 downloads a month.
The user group now approaches 20,000 members. OpenCV receives many user
contributions, and central development has largely moved outside of Intel.* OpenCV's past
timeline is shown in Figure 1-3. Along the way, OpenCV was affected by the dot-com
boom and bust and also by numerous changes of management and direction. During
these fluctuations, there were times when OpenCV had no one at Intel working on it at
all. However, with the advent of multicore processors and the many new applications
of computer vision, OpenCV's value began to rise. Today, OpenCV is an active area
of development at several institutions, so expect to see many updates in multicamera
calibration, depth perception, methods for mixing vision with laser range finders, and
better pattern recognition as well as a lot of support for robotic vision needs. For more
information on the future of OpenCV, see Chapter 14.


<br><br>
<img src="images/1-3_image.jpg" alt="Figure 1-3" />
<br> Figure 1-3. OpenCV timeline <br><br>

<A NAME="Speeding_Up_OpenCV_with_IPP"></A><font size="4"><i><b>Speeding Up OpenCV with IPP</b></i></font>
<br><br>

Because OpenCV was "housed" within the Intel Performance Primitives team and
several primary developers remain on friendly terms with that team, OpenCV exploits the
hand-tuned, highly optimized code in IPP to speed itself up. The improvement in speed
from using IPP can be substantial. Figure 1-4 compares two other vision libraries, LTI
[LTI] and VXL [VXL], against OpenCV and OpenCV using IPP. Note that performance
was a key goal of OpenCV; the library needed the ability to run vision code in real time.
OpenCV is written in performance-optimized C and C++ code. It does not depend in
any way on IPP. If IPP is present, however, OpenCV will automatically take advantage
of IPP by loading IPP's dynamic link libraries to further enhance its speed.

<br><br>
<img src="images/1-4_image.jpg" alt="Figure 1-4" />
<br> Figure 1-4. Two other vision libraries (LTI and VXL) compared with OpenCV (without and with
IPP) on four different performance benchmarks: the four bars for each benchmark indicate scores
proportional to run time for each of the given libraries; in all cases, OpenCV outperforms the other
libraries and OpenCV with IPP outperforms OpenCV without IPP
 <br><br>

<A NAME="Who_Owns_OpenCV?"></A><font size="4"><i><b>Who Owns OpenCV?</b></i></font>
<br><br>

Although Intel started OpenCV, the library is and always was intended to promote
commercial and research use. It is therefore open and free, and the code itself may be
used or embedded (in whole or in part) in other applications, whether commercial or
research. It does not force your application code to be open or free. It does not require
that you return improvements back to the library—but we hope that you will.

<br><br>
<A NAME="Downloading_and_Installing_OpenCV"></A><font size="5" color="#120a8f"><i>Downloading and Installing OpenCV</i></font>
<br><br>

The main OpenCV site is on SourceForge at http://SourceForge.net/projects/opencvlibrary
and the OpenCV Wiki [OpenCV Wiki] page is at http://opencvlibrary.SourceForge.net.
For Linux, the source distribution is the file opencv-1.0.0.tar.gz; for Windows, you want
OpenCV_1.0.exe. However, the most up-to-date version is always on the CVS server at
SourceForge.

<br><br>
<A NAME="Install"></A><font size="4"><i><b>Install</b></i></font>
<br><br>

Once you download the libraries, you must install them. For detailed installation
instructions on Linux or Mac OS, see the text file named INSTALL directly under the
.../opencv/ directory; this file also describes how to build and run the OpenCV testing
routines. INSTALL lists the additional programs you'll need in order to become an
OpenCV developer, such as autoconf, automake, libtool, and swig.

<br><br>
<A NAME="Windows"></A><font size="3"><i>Windows</i></font>
<br><br>

Get the executable installation from SourceForge and run it. It will install OpenCV,
register DirectShow filters, and perform various post-installation procedures. You are now
ready to start using OpenCV. You can always go to the .../opencv/_make directory and open
opencv.sln with MSVC++ or MSVC.NET 2005, or you can open opencv.dsw with lower
versions of MSVC++ and build debug versions or rebuild release versions of the library.*
<br><br>
To add the commercial IPP performance optimizations to Windows, obtain and install
IPP from the Intel site (http://www.intel.com/software/products/ipp/index.htm);
use version 5.1 or later. Make sure the appropriate binary folder (e.g., c:/program files/
intel/ipp/5.1/ia32/bin) is in the system path. IPP should now be automatically detected
by OpenCV and loaded at runtime (more on this in Chapter 3).

<br><br>
<A NAME="Linux"></A><font size="3"><i>Linux</i></font>
<br><br>

Prebuilt binaries for Linux are not included with the Linux version of OpenCV owing
to the large variety of versions of GCC and GLIBC in different distributions (SuSE,
Debian, Ubuntu, etc.). If your distribution doesn't offer OpenCV, you'll have to build it
from sources as detailed in the .../opencv/INSTALL file.
<br><br>
To build the libraries and demos, you'll need GTK+ 2.x or higher, including headers.
You'll also need pkgconfig, libpng, zlib, libjpeg, libtiff, and libjasper with development
files. You'll need Python 2.3, 2.4, or 2.5 with headers installed (developer package).
You will also need libavcodec and the other libav* libraries (including headers) from
ffmpeg 0.4.9-pre1 or later (svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg).
Download ffmpeg from http://ffmpeg.mplayerhq.hu/download.html.† The ffmpeg
program is distributed under the Lesser General Public License (LGPL). To use it with
non-GPL software (such as OpenCV), build and use a shared ffmpeg library:
<br>
<br>$> ./configure --enable-shared
<br>$> make
<br>$> sudo make install
<br><br>

You will end up with: /usr/local/lib/libavcodec.so.*, /usr/local/lib/libavformat.so.*, /usr/local/lib/libavutil.so.*, and include files under various /usr/local/include/libav*. 
To build OpenCV once it is downloaded:‡

<br><br>
<font size="2">
* It is important to know that, although the Windows distribution contains binary libraries for release builds,
it does not contain the debug builds of these libraries. It is therefore likely that, before developing with
OpenCV, you will want to open the solution file and build these libraries for yourself.
<br>† You can check out ffmpeg by: svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg.
<br>‡ To build OpenCV using Red Hat Package Managers (RPMs), use rpmbuild -ta OpenCV-x.y.z.tar.gz (for
RPM 4.x or later), or rpm -ta OpenCV-x.y.z.tar.gz (for earlier versions of RPM), where OpenCV-x.y.z.tar.gz
should be put in /usr/src/redhat/SOURCES/ or a similar directory. Then install OpenCV using rpm -i
OpenCV-x.y.z.*.rpm.
</font>

<br>
<br>$> ./configure
<br>$> make
<br>$> sudo make install
<br>$> sudo ldconfig
<br><br>

After installation is complete, the default installation path is /usr/local/lib/ and
/usr/local/include/opencv/. Hence you need to add /usr/local/lib/ to /etc/ld.so.conf (and run
ldconfig afterwards) or add it to the LD_LIBRARY_PATH environment variable; then you
are done.
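<br><br>
Concretely, the two options above look like this (a sketch; the paths assume the default ./configure prefix of /usr/local):

```shell
# Option 1: register the library directory system-wide.
echo '/usr/local/lib' | sudo tee -a /etc/ld.so.conf
sudo ldconfig

# Option 2: per user, e.g. in ~/.bashrc.
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```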
To add the commercial IPP performance optimizations to Linux, install IPP as
described previously. Let's assume it was installed in /opt/intel/ipp/5.1/ia32/. Add
&lt;your install_path&gt;/bin/ and &lt;your install_path&gt;/bin/linux32 to LD_LIBRARY_PATH
in your initialization script (.bashrc or similar):

<br>
<br>LD_LIBRARY_PATH=/opt/intel/ipp/5.1/ia32/bin:/opt/intel/ipp/5.1/ia32/bin/linux32:$LD_LIBRARY_PATH
<br>export LD_LIBRARY_PATH
<br><br>

Alternatively, you can add &lt;your install_path&gt;/bin and &lt;your install_path&gt;/bin/linux32,
one per line, to /etc/ld.so.conf and then run ldconfig as root (or use sudo).
That’s it. Now OpenCV should be able to locate IPP shared libraries and make use of
them on Linux. See .../opencv/INSTALL for more details.

<br><br>
<A NAME="MacOS_X"></A><font size="3"><i>Mac OS X</i></font>
<br><br>

As of this writing, full functionality on Mac OS X is a priority but there are still some
limitations (e.g., writing AVIs); these limitations are described in .../opencv/INSTALL.
The requirements and building instructions are similar to the Linux case, with the
following exceptions:

<ul>

  <li>By default, Carbon is used instead of GTK+.
  </li>

  <li>By default, QuickTime is used instead of ffmpeg.
  </li>

  <li>pkg-config is optional (it is used explicitly only in the samples/c/build_all.sh script).
  </li>
  
  <li>RPM and ldconfig are not supported by default. Use configure+make+sudo make
      install to build and install OpenCV, update LD_LIBRARY_PATH (unless ./configure
      --prefix=/usr is used).
  </li>

</ul>

For full functionality, you should install libpng, libtiff, libjpeg, and libjasper from
darwinports and/or fink and make them available to ./configure (see ./configure
--help). For the most current information, see the OpenCV Wiki at
http://opencvlibrary.SourceForge.net/ and the Mac-specific page
http://opencvlibrary.SourceForge.net/Mac_OS_X_OpenCV_Port.

<br><br>
<A NAME="Getting_the_Latest_OpenCV_via_CVS"></A><font size="5" color="#120a8f"><i>Getting the Latest OpenCV via CVS</i></font>
<br><br>

OpenCV is under active development, and bugs are often fixed rapidly when bug
reports contain accurate descriptions and code that demonstrates the bug. However,
official OpenCV releases occur only once or twice a year. If you are seriously
developing a project or product, you will probably want code fixes and updates as soon as they
become available. To do this, you will need to access OpenCV's Concurrent Versions
System (CVS) on SourceForge.
<br><br>
This isn't the place for a tutorial in CVS usage. If you've worked with other open source
projects then you're probably familiar with it already. If you haven't, check out Essential
CVS by Jennifer Vesperman (O'Reilly). A command-line CVS client ships with Linux,
OS X, and most UNIX-like systems. For Windows users, we recommend TortoiseCVS
(http://www.tortoisecvs.org/), which integrates nicely with Windows Explorer.
<br><br>
On Windows, if you want the latest OpenCV from the CVS repository then you'll need
to access the CVSROOT directory:
<br><br>
<font size="2"><b>:pserver:anonymous@opencvlibrary.cvs.sourceforge.net:2401/cvsroot/opencvlibrary</b></font>
<br><br>
On Linux, you can just use the following two commands:
<br><br>
<font size="2"><b>cvs -d:pserver:anonymous@opencvlibrary.cvs.sourceforge.net:/cvsroot/opencvlibrary login</b></font>
<br><br>
When asked for a password, hit Return. Then use:
<br><br>
<font size="2"><b>cvs -z3 -d:pserver:anonymous@opencvlibrary.cvs.sourceforge.net:/cvsroot/opencvlibrary co -P opencv</b></font><br>

<br><br>
<A NAME="More_OpenCV_Documentation"></A><font size="5" color="#120a8f"><i>More OpenCV Documentation</i></font>
<br><br>

The primary documentation for OpenCV is the HTML documentation that ships with
the source code. In addition to this, the OpenCV Wiki and the older HTML
documentation are available on the Web.



</body>
</html>
