RANSAC

RANSAC, “RANdom SAmple Consensus”, is an iterative method for fitting a model to data that may contain outliers.

Given a model, e.g. a homography between point sets, the basic idea is that the data contains inliers, the points that the model can describe, and outliers, those that do not fit the model.

RANSAC is a very useful algorithm and is frequently used in computer vision to make homography estimation and structure from motion robust to noise and false image correspondences.

http://stackoverflow.com/questions/1500498/how-to-use-sift-algorithm-to-compute-how-similiar-two-images-are

http://stackoverflow.com/questions/5998664/how-to-use-homography-to-recognize-two-images-while-having-sift-descriptors-and?lq=1

RANSAC loop:

1. Randomly select a seed group of matches
2. Compute transformation from seed group
3. Find inliers to this transformation
4. If the number of inliers is sufficiently large, re-compute least-squares estimate of transformation on all of the inliers

1. Randomly select a minimal subset of points
2. Hypothesize a model
3. Compute the error function
4. Select points consistent with the model
5. Repeat the hypothesize-and-verify loop

Repeat N times:
• Draw s points uniformly at random
• Fit a line to these s points
• Find the inliers to this line among the remaining points (i.e., points whose distance from the line is less than t)
• If there are d or more inliers, accept the line and refit it using all inliers
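The line-fitting loop above can be sketched in NumPy. This is a minimal illustration, not a production implementation: the data is synthetic (80 points near y = 2x + 1 plus 20 uniform outliers), and the parameter values s, t, d, N are chosen for this toy example.

```python
# Minimal RANSAC line fitting, following the loop in the text.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 80 inliers near y = 2x + 1, plus 20 uniform outliers.
x = rng.uniform(0, 10, 80)
inlier_pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.1, 80)])
outlier_pts = rng.uniform(0, 20, (20, 2))
pts = np.vstack([inlier_pts, outlier_pts])

def ransac_line(pts, s=2, t=0.5, d=40, N=100, rng=rng):
    best = None
    for _ in range(N):
        # Draw s points uniformly at random (s = 2 is minimal for a line).
        sample = pts[rng.choice(len(pts), s, replace=False)]
        # Fit the line a*x + b*y + c = 0 through the two sample points.
        (x1, y1), (x2, y2) = sample
        a, b = y2 - y1, x1 - x2
        norm = np.hypot(a, b)
        if norm == 0:
            continue  # degenerate sample: coincident points
        c = -(a * x1 + b * y1)
        # Inliers are points whose distance to the line is below t.
        dist = np.abs(pts @ np.array([a, b]) + c) / norm
        inliers = pts[dist < t]
        # Accept if at least d inliers; keep the largest consensus set.
        if len(inliers) >= d and (best is None or len(inliers) > len(best)):
            best = inliers
    if best is None:
        raise RuntimeError("no consensus set of size >= d found")
    # Refit by least squares on all inliers of the best hypothesis.
    slope, intercept = np.polyfit(best[:, 0], best[:, 1], 1)
    return slope, intercept, len(best)

slope, intercept, n_inliers = ransac_line(pts)
print(slope, intercept, n_inliers)  # expect slope ≈ 2, intercept ≈ 1
```

Note the final least-squares refit: the minimal two-point hypothesis only selects the consensus set; the reported line comes from all of its inliers.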

Choosing the parameters

• Initial number of points s
  Typically the minimum number needed to fit the model
• Distance threshold t
  Choose t so that the probability of an inlier falling within it is p (e.g. 0.95); for zero-mean Gaussian noise with std. dev. σ, t² = 3.84σ²
• Number of samples N
  Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99), given the outlier ratio e
• Consensus set size d
  Should match the expected inlier ratio
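The choice of N has a standard closed form: a sample of s points is outlier-free with probability (1 − e)^s, so requiring that all N samples be contaminated with probability at most 1 − p gives N = log(1 − p) / log(1 − (1 − e)^s). A small sketch (the parameter values in the example call are illustrative):

```python
# Minimum number of RANSAC iterations N so that, with probability p,
# at least one sample of s points is outlier-free, given outlier ratio e.
# Derivation: P(every sample contaminated) = (1 - (1 - e)**s)**N <= 1 - p.
import math

def num_samples(s, e, p=0.99):
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# Line fitting (s = 2) with half the data being outliers (e = 0.5):
print(num_samples(s=2, e=0.5))  # 17
```

Note that N grows quickly with s and e, which is why RANSAC struggles at low inlier ratios (see the cons below).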

RANSAC pros and cons

• Pros

  • Simple and general
  • Applicable to many different problems
  • Often works well in practice

• Cons

  • Lots of parameters to tune
  • Doesn’t work well for low inlier ratios (too many iterations, or can fail completely)
  • Can’t always get a good initialization of the model from the minimum number of samples

via http://www.cs.illinois.edu/~slazebni/spring11/lec09_fitting.pdf
