【Feature detector】SURF feature point matching

via http://morf.lv/modules.php?name=tutorials&lasit=2#.UCmL0J2PXh4

What feature-detection algorithms do is find keypoints in the two images and calculate their descriptors. The feature-finding process is usually composed of two steps:

  1. First, find the interest points in the image which might contain meaningful structures. This is usually done by comparing the Difference of Gaussians (DoG) at each location in the image across different scales (see the formula just after this list). A dominant orientation is also calculated when a point is accepted as a feature point.
  2. Second, construct a scale-invariant descriptor at each interest point found in the previous step. To achieve rotation invariance, we align a rectangle with the dominant orientation. The size of the rectangle is proportional to the scale at which the interest point was detected. The rectangle is then divided into a 4×4 grid, and information such as the gradient or the absolute value of the gradient is extracted from each of these sub-squares and assembled into the interest point's descriptor.
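
For reference, the Difference of Gaussians compared in step 1 is simply the difference of two Gaussian-smoothed copies of the image at nearby scales, and an interest point is a local extremum of this response across both position and scale. (This formulation comes from SIFT; SURF itself speeds this step up by approximating the determinant of the Hessian with box filters, which is why the detector below takes a "minimum Hessian" threshold.)

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ),  where  L(x, y, σ) = G(x, y, σ) ∗ I(x, y)

Here I is the input image, G(·, ·, σ) is a Gaussian kernel with standard deviation σ, ∗ denotes convolution, and k is the constant ratio between neighboring scales.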

The descriptors are what we then compare to each other to determine whether the object appears in the scene or not.

  • First, you have to include the “features2d” headers in your project:

#include "opencv2\features2d\features2d.hpp"
#include "opencv2\nonfree\features2d.hpp" //This is where actual SURF and SIFT algorithm is located
#include <opencv2/legacy/legacy.hpp> //This is where BruteForceMatcher is located

  • Extract keypoints and calculate their descriptors. To do that, declare vectors of keypoints and matrices of descriptors:

vector<KeyPoint> keypointsO; //keypoints for the object
vector<KeyPoint> keypointsS; //keypoints for the scene
//Descriptor matrices
Mat descriptors_object, descriptors_scene;

  • Declare a SURF object which will actually extract the keypoints.
  • Calculate the descriptors and save them in memory. When declaring the SURF object you have to provide the minimum Hessian threshold: the smaller it is, the more keypoints your program will be able to find, at the cost of performance.

SurfFeatureDetector surf(1500); //1500 is low enough in most cases, but it may vary from application to application.
surf.detect(sceneMat,keypointsS);
surf.detect(objectMat,keypointsO);

  • Calculate the descriptors:

SurfDescriptorExtractor extractor;
extractor.compute( sceneMat, keypointsS, descriptors_scene );
extractor.compute( objectMat, keypointsO, descriptors_object );

  • Do the actual comparison (object detection). Choose the matcher, e.g. FlannBasedMatcher (the fastest) or the brute-force matcher:

//Declaring the FLANN-based matcher
FlannBasedMatcher matcher;
//Alternatively, a brute-force matcher; for SURF it can be set to either NORM_L1 or NORM_L2.
//But if you are using binary feature extractors like ORB, use NORM_HAMMING instead of NORM_L*.
//BFMatcher matcher(NORM_L1);

  • Do nearest-neighbor matching, which is built into the OpenCV library:

vector< vector<DMatch> > matches;
matcher.knnMatch( descriptors_object, descriptors_scene, matches, 2 ); // find the 2 nearest neighbors

  • After matching, discard invalid results. Basically, we keep only the good matches by applying the nearest neighbor distance ratio (NNDR) test: a match is accepted only if the distance to its best neighbor is substantially smaller than the distance to the second-best one, which rejects ambiguous matches.

const float nndrRatio = 0.7f; //a commonly used ratio; lower values are stricter
vector< DMatch > good_matches;
good_matches.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    if (matches[i].size() < 2)
        continue;
    const DMatch &m1 = matches[i][0]; //best match
    const DMatch &m2 = matches[i][1]; //second-best match
    if (m1.distance <= nndrRatio * m2.distance)
        good_matches.push_back(m1);
}
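
If you want to visually inspect the filtered matches before going further, OpenCV's drawMatches can render the two images side by side with the good matches connected. A minimal sketch; imgMatches is just a local output image, and opencv2/highgui/highgui.hpp is assumed to be included for imshow:

Mat imgMatches;
drawMatches( objectMat, keypointsO, sceneMat, keypointsS, good_matches, imgMatches );
imshow( "good matches", imgMatches );
waitKey(0);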

  • Assumption: when you have 7 or more good_matches you can assume the object has been found, and do whatever you want to do, e.g. draw a boundary around the detected object.
    OK, now let's extract the coordinates of the good matches from the object and the scene so we can find the homography, which we are going to use to find the boundary of the object in the scene:

std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{ //-- Get the keypoints from the good matches
    obj.push_back( keypointsO[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypointsS[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners of the object image ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f( 0, 0 ); obj_corners[1] = Point2f( objectMat.cols, 0 );
obj_corners[2] = Point2f( objectMat.cols, objectMat.rows ); obj_corners[3] = Point2f( 0, objectMat.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H );
//-- Draw lines between the corners (the mapped object in the scene image);
//-- outImg is the scene image (or a copy of it) to draw on
Scalar color( 0, 255, 0 ); //green
line( outImg, scene_corners[0], scene_corners[1], color, 2 ); //TOP line
line( outImg, scene_corners[1], scene_corners[2], color, 2 );
line( outImg, scene_corners[2], scene_corners[3], color, 2 );
line( outImg, scene_corners[3], scene_corners[0], color, 2 );
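
To tie the steps together, here is a minimal end-to-end sketch of the whole pipeline, assuming OpenCV 2.4.x built with the nonfree module; the file names object.jpg and scene.jpg are placeholders:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>

using namespace cv;
using namespace std;

int main()
{
    //Load the object and scene images in grayscale (file names are placeholders)
    Mat objectMat = imread("object.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat sceneMat  = imread("scene.jpg",  CV_LOAD_IMAGE_GRAYSCALE);
    if (objectMat.empty() || sceneMat.empty()) return -1;

    //Detect keypoints and compute SURF descriptors
    SurfFeatureDetector surf(1500);
    vector<KeyPoint> keypointsO, keypointsS;
    surf.detect(objectMat, keypointsO);
    surf.detect(sceneMat, keypointsS);

    SurfDescriptorExtractor extractor;
    Mat descriptors_object, descriptors_scene;
    extractor.compute(objectMat, keypointsO, descriptors_object);
    extractor.compute(sceneMat, keypointsS, descriptors_scene);

    //2-nearest-neighbor matching followed by the NNDR test
    FlannBasedMatcher matcher;
    vector< vector<DMatch> > matches;
    matcher.knnMatch(descriptors_object, descriptors_scene, matches, 2);

    const float nndrRatio = 0.7f;
    vector<DMatch> good_matches;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        if (matches[i].size() < 2) continue;
        if (matches[i][0].distance <= nndrRatio * matches[i][1].distance)
            good_matches.push_back(matches[i][0]);
    }

    //Homography and boundary drawing, only if enough good matches were found
    Mat outImg = imread("scene.jpg"); //color copy of the scene to draw on
    if (good_matches.size() >= 7)
    {
        vector<Point2f> obj, scene;
        for (size_t i = 0; i < good_matches.size(); ++i)
        {
            obj.push_back(keypointsO[good_matches[i].queryIdx].pt);
            scene.push_back(keypointsS[good_matches[i].trainIdx].pt);
        }
        Mat H = findHomography(obj, scene, CV_RANSAC);

        vector<Point2f> obj_corners(4), scene_corners(4);
        obj_corners[0] = Point2f(0, 0);
        obj_corners[1] = Point2f((float)objectMat.cols, 0);
        obj_corners[2] = Point2f((float)objectMat.cols, (float)objectMat.rows);
        obj_corners[3] = Point2f(0, (float)objectMat.rows);
        perspectiveTransform(obj_corners, scene_corners, H);

        Scalar color(0, 255, 0); //green
        for (int i = 0; i < 4; ++i)
            line(outImg, scene_corners[i], scene_corners[(i + 1) % 4], color, 2);
    }

    imshow("result", outImg);
    waitKey(0);
    return 0;
}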
