
Intermediate Computer Vision with openFrameworks

For Eyeo Festival 2012. Tuesday, June 5th 2:00-5:00 / Walker Art Center - Conference Room

Description

For the past few decades researchers have been slaving away on advanced mathematics and computer science to help computers see the world the way humans do. These techniques regularly find their way into interactive artwork and installations: blob detection, face tracking, foreground/background segmentation. The OpenCV library is a massive collaborative effort to implement and connect these different computer vision techniques. Programming with only OpenCV can be intimidating, while openFrameworks’ ofxOpenCv only exposes a few techniques from OpenCV. So this workshop will introduce a new addon for openFrameworks that makes it easier to use everything from the simplest OpenCV functions to the most bleeding-edge algorithms. We will cover techniques including background learning, camera calibration, optical flow, contour detection and tracking, basic face tracking, and advanced expression analysis.

Experience with openFrameworks is assumed. Attendees should bring a laptop with the latest build of openFrameworks and an up-to-date clone of ofxCv. The workshop will cater to OS X, while Windows users should be running Code::Blocks. Linux users will be expected to answer everyone else’s questions. (Kyle will be assisted by Golan Levin.)

Introduction

The goal of this workshop is twofold. We want to:

  1. Familiarize you with a number of useful techniques from computer vision.
  2. Demystify standard OpenCV programming by opening up the reference manual together.

A Brief History of Computer Vision

"Computer vision" refers to a broad class of algorithms that allow computers to make intelligent assertions about digital images and video.

For a great overview of the historical relationship between computer vision and the arts, see Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers by Golan Levin.

The Structure of ofxCv

There are essentially four things going on inside ofxCv.

  1. Utilities: the glue between OF and OpenCV, like toCv() and imitate() (see the sketch below).
  2. Wrappers: functions named the same as their OpenCV counterparts, but accepting all kinds of OF and OpenCV objects.
  3. Helpers: functions that handle higher-level tasks that aren't exactly OF- or OpenCV-specific.
  4. Classes: handling camera calibration, object tracking, contour finding, etc.
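
To make that concrete, here's roughly how those layers fit together in a tiny app. Treat it as a sketch rather than a verbatim example: it assumes the usual ofxCv names (toCv(), imitate(), copy(), and the convertColor() wrapper) and the OF and OpenCV 2.x APIs that were current at the time of the workshop.

```cpp
#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofImage gray, previous;
    float brightness;

    void setup() {
        cam.initGrabber(640, 480);
        brightness = 0;
    }
    void update() {
        cam.update();
        if(cam.isFrameNew()) {
            // wrapper: convertColor() mirrors cv::cvtColor, but takes OF objects
            // and allocates `gray` for us along the way
            ofxCv::convertColor(cam, gray, CV_RGB2GRAY);
            gray.update();

            // utilities: imitate() allocates `previous` to match the camera frame,
            // then copy() stashes the frame for use on the next update
            ofxCv::imitate(previous, cam);
            ofxCv::copy(cam, previous);

            // utility: toCv() wraps the ofImage pixels in a cv::Mat header (no copy),
            // so raw OpenCV calls can be mixed in whenever the wrappers run out
            cv::Mat mat = ofxCv::toCv(gray);
            brightness = cv::mean(mat)[0];
        }
    }
    void draw() {
        gray.draw(0, 0);
        ofDrawBitmapString("mean brightness: " + ofToString(brightness), 10, 20);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```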

Topics and Examples

Thresholding

example-threshold: Thresholding is one of the most common tasks in computer vision; it's a fundamental step for most kinds of shape-based analysis. Even with the Kinect, depth thresholding is incredibly common.
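
A minimal version of the idea, assuming ofxCv's threshold() wrapper (if I remember right, there's also an autothreshold() that picks the cutoff with Otsu's method):

```cpp
// ofApp members assumed from the sketch above: ofVideoGrabber cam; ofImage gray, thresholded;
void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        ofxCv::convertColor(cam, gray, CV_RGB2GRAY);
        // everything brighter than 128 becomes white, everything else black
        ofxCv::threshold(gray, thresholded, 128);
        thresholded.update();
    }
}
```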

Difference Images

example-difference: frame differencing is a classic technique for measuring movement in a scene.

example-difference-columns: if you want to measure movement along a single axis, you can look for the column mean of the frame difference.
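
A sketch covering both ideas: frame differencing with the absdiff() wrapper, then cv::reduce to collapse the difference image into one mean per column. The member names here are mine, not necessarily the ones in the examples.

```cpp
// ofApp members: ofVideoGrabber cam; ofImage gray, previous, diff; cv::Mat columnMean;
void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        ofxCv::convertColor(cam, gray, CV_RGB2GRAY);
        if(previous.getWidth() > 0) {
            // per-pixel absolute difference against the last frame
            ofxCv::absdiff(gray, previous, diff);
            diff.update();
            // collapse the difference image to one mean value per column:
            // peaks in columnMean show where movement happened along the x axis
            cv::reduce(ofxCv::toCv(diff), columnMean, 0, CV_REDUCE_AVG, CV_32F);
        }
        ofxCv::copy(gray, previous);
    }
}
```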

Background learning

example-background: background subtraction can be important when you don't have control over your background, or the background is too complex to threshold from the foreground. Adapting the background slowly is super important for dealing with gradual changes in the scene, like shifting light.
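
The example is built around ofxCv's RunningBackground class; roughly (method names from memory, so treat this as a sketch):

```cpp
// ofApp members: ofVideoGrabber cam; ofxCv::RunningBackground background; ofImage thresholded;
void ofApp::setup() {
    cam.initGrabber(640, 480);
    background.setLearningTime(900);   // how many frames it takes to absorb a change
    background.setThresholdValue(10);  // how different a pixel must be to count as foreground
}

void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        // foreground ends up white in `thresholded`, while the internal
        // background model keeps drifting slowly toward the current frame
        background.update(cam, thresholded);
        thresholded.update();
    }
}
```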

Camera calibration

example-calibration: camera calibration can be really important when you're dealing with wide angle lenses that have significant distortion. ofxCv provides some simple techniques for solving this. The calibration example will train a calibration .yml file using a chessboard.

example-undistortion: the undistortion example will load the calibration .yml file and undistort the incoming image.
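
Roughly, the two halves of that workflow use ofxCv's Calibration class. The method names below are my best reading of the examples, so double-check them against the current ofxCv headers:

```cpp
// ofApp members: ofVideoGrabber cam; ofImage undistorted; ofxCv::Calibration calibration;

// the example-calibration side: feed it chessboard views, solve, save the .yml
void ofApp::addBoardView() {
    calibration.setPatternSize(9, 6);        // inner corner count of the printed chessboard
    if(calibration.add(ofxCv::toCv(cam))) {  // false if no chessboard was found in this frame
        calibration.calibrate();
        calibration.save("calibration.yml");
    }
}

// the example-undistortion side: load the .yml once, then undistort each frame
void ofApp::setup() {
    cam.initGrabber(640, 480);
    calibration.load("calibration.yml");
}

void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        ofxCv::imitate(undistorted, cam);
        calibration.undistort(ofxCv::toCv(cam), ofxCv::toCv(undistorted));
        undistorted.update();
    }
}
```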

example-ar: proper augmented reality tricks require some knowledge about the camera's calibration parameters. This example loads the calibration .yml and superimposes some cubes over a tracked chessboard.

mapamok is a projection mapping tool built as an extension of these camera calibration tools.

Contour detection and tracking

example-contours-basic: simple brightness contours.

example-contours-color: color-tracked contours.

example-contours-quad: quad extraction from tracked contours.

example-contours-tracking: labeling contours with consistent IDs over time.
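
All four of these examples revolve around the ContourFinder class; a condensed sketch (the parameter values here are arbitrary, tune them to your scene):

```cpp
// ofApp members: ofVideoGrabber cam; ofxCv::ContourFinder contourFinder;
void ofApp::setup() {
    cam.initGrabber(640, 480);
    contourFinder.setMinAreaRadius(10);      // ignore tiny blobs
    contourFinder.setMaxAreaRadius(150);     // and huge ones
    contourFinder.setThreshold(128);         // brightness threshold before contour finding
    // the built-in tracker keeps labels stable across frames
    contourFinder.getTracker().setPersistence(15);     // frames a lost blob survives
    contourFinder.getTracker().setMaximumDistance(32); // max pixels a blob can jump per frame
}

void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        contourFinder.findContours(cam);
    }
}

void ofApp::draw() {
    cam.draw(0, 0);
    contourFinder.draw();
    for(int i = 0; i < contourFinder.size(); i++) {
        cv::Point2f center = contourFinder.getCentroid(i);
        unsigned int label = contourFinder.getLabel(i);
        ofDrawBitmapString(ofToString(label), center.x, center.y);
    }
}
```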

Basic face detection

example-face: face detection using raw OpenCV.
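
A stripped-down version of the idea using OpenCV's CascadeClassifier directly; it assumes the stock Haar cascade .xml has been copied into the app's data folder, and uses the OpenCV 2.x constants current at the time:

```cpp
// ofApp members: ofVideoGrabber cam; cv::CascadeClassifier classifier; std::vector<cv::Rect> faces;
void ofApp::setup() {
    cam.initGrabber(640, 480);
    classifier.load(ofToDataPath("haarcascade_frontalface_default.xml"));
}

void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        cv::Mat gray;
        cv::cvtColor(ofxCv::toCv(cam), gray, CV_RGB2GRAY);
        cv::equalizeHist(gray, gray);           // evens out lighting for the detector
        classifier.detectMultiScale(gray, faces);
    }
}

void ofApp::draw() {
    cam.draw(0, 0);
    ofNoFill();
    for(int i = 0; i < (int) faces.size(); i++) {
        ofRect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
}
```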

Optical flow

example-flow: for this one to work, you need my fork of ofxControlPanel.
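
The example is built on ofxCv's flow classes; the gist is something like this (class and method names from memory, so treat it as a sketch):

```cpp
// ofApp members: ofVideoGrabber cam; ofxCv::FlowFarneback flow;
void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        // dense optical flow between the previous and current camera frames
        flow.calcOpticalFlow(cam);
    }
}

void ofApp::draw() {
    cam.draw(0, 0);
    flow.draw(0, 0, 640, 480);   // overlay the motion vectors
}
```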

Miscellaneous

example-edge: edge detection is a common tool in computer vision.
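
For instance, Canny edge detection; this assumes ofxCv wraps cv::Canny under the same name (otherwise calling cv::Canny on toCv() Mats does the same thing):

```cpp
// ofApp members: ofVideoGrabber cam; ofImage gray, edges;
void ofApp::update() {
    cam.update();
    if(cam.isFrameNew()) {
        ofxCv::convertColor(cam, gray, CV_RGB2GRAY);
        // the two thresholds control how strong a gradient must be to count as an edge
        ofxCv::Canny(gray, edges, 50, 150);
        edges.update();
    }
}
```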

example-homography: homography allows you to remap one plane onto another, and requires 4 or more points.
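
A plain-OpenCV sketch of the idea; the point coordinates here are made up, and would normally come from clicked correspondences or feature matching:

```cpp
#include "opencv2/opencv.hpp"
#include <vector>

// map one quad onto another: 4+ point correspondences in, a 3x3 homography out
cv::Mat remapPlane(const cv::Mat& src) {
    std::vector<cv::Point2f> from, to;
    from.push_back(cv::Point2f(0, 0));     to.push_back(cv::Point2f(20, 10));
    from.push_back(cv::Point2f(640, 0));   to.push_back(cv::Point2f(600, 40));
    from.push_back(cv::Point2f(640, 480)); to.push_back(cv::Point2f(620, 460));
    from.push_back(cv::Point2f(0, 480));   to.push_back(cv::Point2f(10, 470));
    cv::Mat homography = cv::findHomography(from, to);
    cv::Mat dst;
    cv::warpPerspective(src, dst, homography, src.size());
    return dst;
}
```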

example-bayer: an example of Bayer-pattern color conversion (demosaicing) that sometimes comes up when working with nicer cameras that deliver raw sensor data.
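
The conversion itself is a single cv::cvtColor call; which Bayer code to use (BG/GB/RG/GR) depends on the sensor layout:

```cpp
#include "opencv2/opencv.hpp"

// demosaic a single-channel Bayer image into RGB
cv::Mat debayer(const cv::Mat& raw) {
    cv::Mat color;
    cv::cvtColor(raw, color, CV_BayerBG2RGB);
    return color;
}
```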

Example-less topics (aka, the future)

Region of interest (ROI): you often want to work with a section of an image instead of the entire image. When working with a Mat, OpenCV makes this easy: the Mat constructor accepts a cv::Rect subregion.
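
A tiny illustration (the rectangle coordinates are just for the example):

```cpp
#include "opencv2/opencv.hpp"

// zero out a rectangular region of interest in place; the ROI Mat is only a
// header that shares memory with the full image, so no pixels are copied
void clearRegion(cv::Mat& image) {
    cv::Mat roi(image, cv::Rect(100, 100, 200, 150)); // x, y, width, height
    roi.setTo(cv::Scalar(0, 0, 0));
}
```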

Classic filters like erosion and dilation: when you're working with binary images, sometimes there's shot noise on the edges that you can filter out with these morphological operations.
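
A minimal sketch of the idea on a binary mask:

```cpp
#include "opencv2/opencv.hpp"

// clean up a binary mask: erode() removes isolated white specks, then
// dilate() grows what survives back to roughly its original size
// (erode followed by dilate is a morphological "open")
void removeSpeckles(cv::Mat& mask) {
    cv::erode(mask, mask, cv::Mat());    // empty kernel = default 3x3 element
    cv::dilate(mask, mask, cv::Mat());
}
```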

Convolution: OpenCV provides some really simple functions for convolving a small image kernel across a larger image.
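
The workhorse is cv::filter2D; here's a sketch with a simple 3x3 sharpening kernel:

```cpp
#include "opencv2/opencv.hpp"

// convolve a small kernel across an image with cv::filter2D
cv::Mat sharpen(const cv::Mat& src) {
    float k[9] = { 0, -1,  0,
                  -1,  5, -1,
                   0, -1,  0 };
    cv::Mat kernel(3, 3, CV_32F, k);
    cv::Mat dst;
    cv::filter2D(src, dst, -1, kernel);   // -1 keeps the source bit depth
    return dst;
}
```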

Template matching: similar to convolution, template matching is about finding a small image in a large image. It's essentially the same as doing convolution followed by peak detection.
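
In OpenCV terms that's cv::matchTemplate followed by cv::minMaxLoc; a sketch (the template must be smaller than the scene and of the same type):

```cpp
#include "opencv2/opencv.hpp"

// slide `templ` over `scene` and return the best-matching location
cv::Point findTemplate(const cv::Mat& scene, const cv::Mat& templ) {
    cv::Mat result;
    cv::matchTemplate(scene, templ, result, cv::TM_CCOEFF_NORMED);
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
    return maxLoc;   // top-left corner of the best match
}
```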

K-Means clustering: sometimes if you have a bunch of data, like colors or points, and they're evenly distributed around some centroids, you want to know the value of those centroids. For example, a picture of an American flag against a blue sky would have 4 dominant colors: red, white, blue, and sky blue. K-Means clustering is one technique that can solve for these centroids.
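
OpenCV's cv::kmeans does the heavy lifting; a sketch for finding dominant colors:

```cpp
#include "opencv2/opencv.hpp"

// find the k dominant colors in an RGB image with cv::kmeans
cv::Mat dominantColors(const cv::Mat& image, int k) {
    // kmeans wants float samples, one per row: here, one row per pixel
    cv::Mat samples = image.clone().reshape(1, image.rows * image.cols);
    samples.convertTo(samples, CV_32F);
    cv::Mat labels, centers;
    cv::kmeans(samples, k, labels,
        cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
        3, cv::KMEANS_PP_CENTERS, centers);
    return centers;   // k rows, one centroid color per row
}
```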

Backprojection and camshift: instead of color tracking, you can do histogram tracking. This means taking the local neighborhood of each pixel, determining the histogram, and comparing it to a reference for similarity. Camshift is an algorithm that stores a sort of oriented histogram and will tell you about new orientations for a tracked object.
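
A sketch of the pipeline with cv::calcHist, cv::calcBackProject, and cv::CamShift; in a real app the reference histogram would be built once rather than every frame, and the images are assumed to be RGB:

```cpp
#include "opencv2/opencv.hpp"

// hue-histogram tracking: build a hue histogram from a reference patch,
// backproject it onto each new frame, then let CamShift follow the peak
cv::RotatedRect trackByHue(const cv::Mat& reference, const cv::Mat& frame,
                           cv::Rect& window) {
    int channels[] = {0};               // hue only
    int histSize[] = {30};
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};

    cv::Mat refHsv, hist;
    cv::cvtColor(reference, refHsv, CV_RGB2HSV);
    cv::calcHist(&refHsv, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

    cv::Mat hsv, backprojection;
    cv::cvtColor(frame, hsv, CV_RGB2HSV);
    cv::calcBackProject(&hsv, 1, channels, hist, backprojection, ranges);

    // CamShift updates `window` in place and returns an oriented box
    return cv::CamShift(backprojection, window,
        cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));
}
```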

SIFT, SURF, and feature matching: sometimes you want to do things like homography transforms automatically. This means you need to find the same features in both images. Algorithms like SIFT and SURF accomplish this. There are also more advanced techniques like Ferns, which was used for Rise/Fall.
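
Since SIFT and SURF live in OpenCV's nonfree module, here's a sketch using ORB as a stand-in; it follows the OpenCV 2.4-era API (newer versions use ORB::create() instead):

```cpp
#include "opencv2/opencv.hpp"
#include <vector>

// find corresponding features in two images with ORB descriptors
std::vector<cv::DMatch> matchFeatures(const cv::Mat& a, const cv::Mat& b) {
    cv::ORB orb;
    std::vector<cv::KeyPoint> keypointsA, keypointsB;
    cv::Mat descriptorsA, descriptorsB;
    orb(a, cv::Mat(), keypointsA, descriptorsA);
    orb(b, cv::Mat(), keypointsB, descriptorsB);

    // brute-force matching with the Hamming distance (ORB descriptors are binary)
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);   // true = cross-check matches
    std::vector<cv::DMatch> matches;
    matcher.match(descriptorsA, descriptorsB, matches);
    return matches;   // each match pairs keypointsA[queryIdx] with keypointsB[trainIdx]
}
```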

Background/foreground separation: there are more advanced techniques than background subtraction. If you have a tree waving in front of a complex scene, and the tree is the foreground, you can model every pixel in the scene as a mixture of Gaussians.
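
OpenCV ships this as BackgroundSubtractorMOG2; the sketch below uses the 2.4-era API (newer versions create it with createBackgroundSubtractorMOG2() and call apply()):

```cpp
#include "opencv2/opencv.hpp"

// mixture-of-Gaussians background modeling: each pixel is modeled by several
// Gaussians, so repetitive motion like waving leaves can be absorbed into the
// background instead of showing up as foreground
cv::BackgroundSubtractorMOG2 mog;
cv::Mat foregroundMask;

void updateForeground(const cv::Mat& frame) {
    mog(frame, foregroundMask);   // learns the model and emits the foreground mask
}
```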

Eigenfaces and PCA: you can extract a collection of representative components for any set of images (or other data). These representative components allow you to reconstruct these original images fairly accurately using a small collection of weights instead of recording every pixel value.
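
OpenCV's cv::PCA class covers this; a sketch assuming a stack of same-size grayscale images:

```cpp
#include "opencv2/opencv.hpp"
#include <vector>

// PCA over a stack of images: each image becomes one row of `data`,
// and the principal components play the role of "eigenfaces"
cv::PCA buildEigenfaces(const std::vector<cv::Mat>& faces, int components) {
    int pixels = faces[0].rows * faces[0].cols;
    cv::Mat data((int) faces.size(), pixels, CV_32F);
    for(int i = 0; i < (int) faces.size(); i++) {
        cv::Mat row = data.row(i);
        faces[i].clone().reshape(1, 1).convertTo(row, CV_32F);
    }
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW, components);
    // pca.project(sample) gives a small weight vector for one face,
    // pca.backProject(weights) reconstructs an approximation of the pixels
    return pca;
}
```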

FFT/DFT: you don't need any other libraries or tricks to do FFT with OpenCV; it has all the FFT power you need! Just see cv::dft().
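
For example, a log-magnitude spectrum of a grayscale image:

```cpp
#include "opencv2/opencv.hpp"
#include <vector>

// magnitude spectrum of a single-channel image with cv::dft
cv::Mat magnitudeSpectrum(const cv::Mat& gray) {
    cv::Mat floatImg, complexImg;
    gray.convertTo(floatImg, CV_32F);
    cv::dft(floatImg, complexImg, cv::DFT_COMPLEX_OUTPUT);

    // split the real/imaginary planes and take the per-pixel magnitude
    std::vector<cv::Mat> planes;
    cv::split(complexImg, planes);
    cv::Mat magnitude;
    cv::magnitude(planes[0], planes[1], magnitude);

    // log scale makes the spectrum visible when drawn
    magnitude += cv::Scalar::all(1);
    cv::log(magnitude, magnitude);
    return magnitude;
}
```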

Next Level

ofxFaceTracker is based on a combination of ofxCv and FaceTracker. Because ofxFaceTracker relies on a library from Jason Saragih, I won't show the code directly. Instead, let's talk about FaceOSC for a moment.