- The example project
- Using the OpenCV framework in your own projects
- Rebuilding the OpenCV framework
Update: Nov 28 2011 – The OpenCV framework has been rebuilt using opencv svn revision 7017
Update: Check out Part 2 of our series on computer vision.
This article is the first in a series looking at computer vision for iOS using the OpenCV library. OpenCV is released under the BSD licence and so is free for both academic and commercial use. It includes optimised implementations of all the standard algorithms used today in the field of computer vision and has a huge user base across the Linux, Windows, Mac and Android worlds. In the past, OpenCV has been difficult to build for iOS. However, recent sterling work by the OpenCV team has added iOS build support and video capture.
In this article we aim to reduce the learning curve even further by providing the OpenCV library as an iOS framework that can be added to your own Xcode projects simply by dragging and dropping. We also provide a build script for re-building the framework should you need to, and an example project that wraps everything up with a neat demo of video capture and image processing using OpenCV on iOS.
We start by walking you through the example project then describe how to use the OpenCV framework in your own projects. Finally, we describe how to re-build the OpenCV framework and explain how the build script works.
As ever, we build on the work of others that have gone before us. We would like to acknowledge the OpenCV team, Eugene Khvedchenya for previous work on building OpenCV for iOS and Diney Bomfim for work on iOS frameworks.
The example project is hosted on GitHub. You can visit the GitHub project page at:
or download a zip archive of the project directly at:
The project includes a pre-built OpenCV framework (OpenCV svn revision 7017), a build script to rebuild the framework and an example app that demonstrates video capture and simple image processing using OpenCV.
To build and run the app, open the OpenCVClient Xcode project and hit ‘Run’. Note that video capture is not supported on the iPhone Simulator. Run the example app on an iOS device to see video capture in action.
The app starts by running some simple performance tests converting between UIImages and cv::Mat objects. Timing results are output to the console. After the performance tests have completed the main interface is shown. Tap the ‘Capture’ button to capture a video frame. The frame is processed using the Canny edge detection algorithm to exercise some of OpenCV’s image processing functions and the results are displayed on screen. Use the sliders to adjust the low and high algorithm threshold values. The time taken to process the frame is also displayed. Typical values are around 90ms on the iPhone 4 and around 200ms on the iPhone 3.
Adding the OpenCV Framework
The easiest way to add the OpenCV framework to your own project is to drag the OpenCV.framework folder from the example project folder in Finder and drop it onto the ‘Frameworks’ group of your target project in the Xcode Project navigator. Check the option ‘Copy items into destination group’s folder’ in the dialog that appears if you want to copy the OpenCV framework into your target project. If you are sharing a copy of the framework between multiple projects or have a common build location for the framework, leave the option unchecked.
Alternatively, you can navigate to the ‘Build Phases’ tab of the Project properties pane in Xcode. Drop down the ‘Link Binary With Libraries’ build phase item and click the ‘+’ button. Select ‘Add Other…’ from the dialog that appears and navigate to the OpenCV.framework folder.
Once you have added the OpenCV framework, your project is set up to link against the OpenCV libraries automatically and the OpenCV header files are also made available. Refer to the OpenCV header files in your project’s #include statements using the framework-relative notation (e.g. `#include <OpenCV/opencv2/opencv.hpp>`).
Add additional required frameworks
To use the OpenCV framework you must add a few extra Apple-supplied frameworks to your project. To do this, navigate to the ‘Build Phases’ tab of the Project properties pane in Xcode. Drop down the ‘Link Binary With Libraries’ build phase item and click the ‘+’ button. Add the frameworks and libraries shown below. The frameworks in the first column of the table are required. The frameworks in the second column are optional and are only needed if you are using OpenCV’s video capture support.
| Framework | Required | Optional (required for video capture) |
When you have added all the required frameworks, your project’s Build Phases tab in the Project Properties pane should look like this:
Include the OpenCV headers
The headers that declare the OpenCV and OpenCV2 APIs are provided as part of the OpenCV framework. The headers can be browsed by dropping down the OpenCV.framework item in the ‘Frameworks’ group of the Project navigator in Xcode. To write OpenCV client code you need to include these headers in your project. The easiest way to do this is to modify your pre-compiled header file (<project name>-Prefix.pch) to add the three new lines shown below:
```objc
///////////////////////////////////////////////////////////////////////////
// Add this new section BEFORE the #import statements for UIKit and Foundation
#ifdef __cplusplus
#import <OpenCV/opencv2/opencv.hpp>
#endif

// Existing #import statements
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
```
You must also change the extension of any source file in your project in which you wish to use OpenCV from ‘.m’ to ‘.mm’. This indicates to the compiler that the source file includes mixed Objective-C and C++ code. Note that individual source files that don’t use OpenCV can remain as ‘.m’.
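Renaming is all that is needed; the file contents don’t change. A minimal shell sketch (the file names below are hypothetical examples, not files from the project):

```shell
# Example only: file names are hypothetical, not from the example project.
touch ViewController.m AppDelegate.m    # stand-ins for project sources
# ViewController will call OpenCV, so it becomes Objective-C++:
mv ViewController.m ViewController.mm
# AppDelegate doesn't use OpenCV, so it can stay plain Objective-C.
ls ViewController.mm AppDelegate.m
```

Xcode picks up the new extension automatically; only files that actually include OpenCV headers need the change.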
For the curious, the OpenCV headers must be included before Foundation.h because OpenCV defines a MIN macro that conflicts with the MIN function defined by the Apple frameworks. If you include the OpenCV headers after Foundation.h you will receive compilation errors such as ‘LLVM GCC 4.2 Error: Statement-expressions are allowed only inside functions’. Including the OpenCV headers first and surrounding the #import with the __cplusplus conditional test avoids this problem and means that you can still use plain Objective-C for ‘.m’ files in your project that don’t call the OpenCV APIs.
Using the UIImage extensions
The example project includes extensions to UIImage for converting to and from cv::Mat objects. These are provided as a UIImage category in two source files (UIImage+OpenCV.h and .mm). To use the extensions simply add the two source files to your project and make use of the following new UIImage methods and properties:
```objc
@interface UIImage (UIImage_OpenCV)

// Returns an autoreleased UIImage from cv::Mat
+ (UIImage *)imageWithCVMat:(const cv::Mat&)cvMat;

// Initialises a UIImage from cv::Mat
- (id)initWithCVMat:(const cv::Mat&)cvMat;

// Returns cv::Mat object from UIImage
@property(nonatomic, readonly) cv::Mat CVMat;

// Returns grayscale cv::Mat object from UIImage
@property(nonatomic, readonly) cv::Mat CVGrayscaleMat;

@end
```
Included with the example project is a shell script (opencvbuild.sh) that automates building and packaging of the OpenCV libraries. Before starting you will need to make sure that you have Subversion and CMake on your build system. Subversion is required to download the latest OpenCV sources and CMake is the build system used by the OpenCV team. Binary installers for both are provided at the locations listed below:
| Tool | Download |
| --- | --- |
| Subversion for Mac | http://www.open.collab.net/downloads/community/ |
| CMake for Mac | http://www.cmake.org/cmake/resources/software.html |
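Before running the build script you can quickly confirm that both tools are on your PATH. A minimal sketch (the report file name is arbitrary, not something the script produces):

```shell
# Record which prerequisites are available (report file name is arbitrary)
report=prereq_report.txt
: > "$report"
for tool in svn cmake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found" >> "$report"
  else
    echo "$tool: MISSING" >> "$report"
  fi
done
cat "$report"
```

If either line reports MISSING, install the tool from the link above before continuing.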
Obtaining the OpenCV source
First change to the directory where you want the source files to be extracted. If you are following the layout of the example project, this is the opencv subdirectory of the project root directory:
cd <project root>/opencv
Next, check out the latest sources from the official repository. At the time of writing, support for iOS builds and video capture has not made it into the stable OpenCV release, so we are using the latest sources in ‘trunk’. (Note the final period at the end of the command):
svn co https://code.ros.org/svn/opencv/trunk .
You should now have the OpenCV source tree under your chosen location. The opencvbuild shell script takes two command-line arguments: the head of the OpenCV source tree and the location where you want the build to be performed. For the example project we built the framework in the project root directory (again, note the final period as the second argument to opencvbuild):
cd <project root>
./opencvbuild opencv/opencv .
If the build completed successfully you should now have the OpenCV framework along with three library packages in your chosen build location:
| Output | Description |
| --- | --- |
| OpenCV.framework | framework for use with iOS device or Simulator |
| OpenCV_iPhoneOS | libraries and headers for use with iOS device |
| OpenCV_iPhoneSimulator | libraries and headers for use with iPhone Simulator |
| OpenCV_Universal | fat libraries and headers for use with iOS device or Simulator |
The library packages are built as an intermediate step before the framework is assembled. You can choose to remove them or you may prefer to link against the individual libraries instead of using the OpenCV framework.
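For reference, the assembled package follows Apple’s standard versioned framework bundle convention: a versioned directory holding the binary and headers, plus top-level symlinks. The sketch below illustrates that convention; the exact layout opencvbuild.sh produces may differ in detail:

```shell
# Sketch of a typical versioned framework bundle (layout details assumed,
# not copied verbatim from the build script's output)
mkdir -p OpenCV.framework/Versions/A/Headers
touch OpenCV.framework/Versions/A/OpenCV      # the combined static library
ln -s A OpenCV.framework/Versions/Current
ln -s Versions/Current/Headers OpenCV.framework/Headers
ln -s Versions/Current/OpenCV OpenCV.framework/OpenCV
find OpenCV.framework | sort
```

The top-level OpenCV and Headers entries are symlinks into the versioned directory, which is what lets Xcode treat the whole folder as a single framework when you drag it into a project.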
How it works
Most of the heavy lifting in opencvbuild is delegated to xcodebuild. First, an Xcode project file is created using cmake and the CMake configuration files provided by the OpenCV team. From revision 6675, support for iOS builds was introduced, which makes our life much easier. A command-line build is then initiated using xcodebuild, driven by the Xcode project file created in the first step.
The build script actually performs the build twice, once targeting iOS devices (armv6 and armv7) and once targeting the Simulator (i386). The resulting binaries are then combined with the lipo tool to produce fat binaries that support operation on both device and simulator. These fat binaries can be found in the OpenCV_Universal intermediate build directory.
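The two-pass structure can be sketched as a dry-run plan. The xcodebuild and lipo invocations below are illustrative assumptions (SDK names, configuration and library names are not copied from opencvbuild.sh):

```shell
# Dry-run sketch: echo the commands instead of running them (SDK names,
# configuration and library names are assumptions, not from the script)
plan=build_plan.txt
: > "$plan"
echo 'xcodebuild -sdk iphoneos ARCHS="armv6 armv7" -configuration Release build' >> "$plan"
echo 'xcodebuild -sdk iphonesimulator ARCHS="i386" -configuration Release build' >> "$plan"
echo 'lipo -create device/libopencv_core.a sim/libopencv_core.a -output universal/libopencv_core.a' >> "$plan"
cat "$plan"
```

Each library produced by the two builds gets its own lipo step, so the universal output directory mirrors the per-SDK ones.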
Finally, the libraries and OpenCV header files are assembled into the OpenCV framework. Two problems arise here, which the script overcomes with a couple of sneaky tricks. First, a framework for iOS can only include a single static library but the OpenCV build has produced 11 that we need to include. To get around this restriction the script simply combines the 11 libraries into one using libtool and then moves the resulting super-library into place within the framework. Secondly, the OpenCV headers use relative paths in #include statements, which makes them difficult to use without configuring header search paths within your project settings. To solve this, the script replaces any occurrence of a relative include path (i.e. "opencv/.../...") with a framework-based include path (i.e. <OpenCV/opencv/.../...>). The headers are added to the framework so that the whole package, library and headers, can be added to your project in a single step.
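The include-path rewrite amounts to a one-line sed substitution over each header. The pattern below is a sketch of the idea, not the exact expression used in opencvbuild.sh, and the sample header is hypothetical:

```shell
# Create a sample header with a relative OpenCV include (file is hypothetical)
mkdir -p Headers
printf '#include "opencv2/core/core.hpp"\n' > Headers/sample.hpp
# Rewrite quoted relative includes to framework-style angle-bracket includes
sed -i.bak 's|#include "\([^"]*\)"|#include <OpenCV/\1>|' Headers/sample.hpp
cat Headers/sample.hpp
```

After the rewrite the header reads `#include <OpenCV/opencv2/core/core.hpp>`, which the compiler resolves through the framework without any extra header search paths.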
Robin Summerhill is a tech blogger, developer and architect. He is co-founder of Emu Analytics where he is currently working as Head of Technology.