RANSAC ("RANdom SAmple Consensus") is an iterative method for fitting models to data that may contain outliers.

Given a model, e.g. a homography between two sets of points, the basic idea is that the data consists of inliers, the points that can be described by the model, and outliers, those that cannot.

RANSAC is a very useful algorithm and is frequently used in computer vision to make homography estimation and structure from motion robust to noise and false image correspondences.



RANSAC loop:

1. Randomly select a seed group of matches
2. Compute transformation from seed group
3. Find inliers to this transformation
4. If the number of inliers is sufficiently large, re-compute least-squares estimate of transformation on all of the inliers
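The loop above can be sketched as a generic hypothesize-and-verify routine. This is an illustrative sketch, not a fixed API: the function names (`fit_model`, `residuals`) and parameters are assumptions chosen for clarity; `fit_model(points)` returns a model and `residuals(model, points)` returns one error per point.

```python
import numpy as np

def ransac(data, fit_model, residuals, seed_size, n_iters,
           threshold, min_inliers, rng=None):
    """Generic hypothesize-and-verify loop (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    best_model, best_count = None, 0
    for _ in range(n_iters):
        # 1. Randomly select a seed group
        seed = rng.choice(len(data), size=seed_size, replace=False)
        # 2. Compute a model from the seed group
        model = fit_model(data[seed])
        # 3. Find inliers to this model
        inliers = residuals(model, data) < threshold
        # 4. If enough inliers, refit on all of them
        if inliers.sum() >= min_inliers and inliers.sum() > best_count:
            best_model = fit_model(data[inliers])
            best_count = int(inliers.sum())
    return best_model, best_count
```

The same skeleton works for any model for which you can supply a fitting function and a per-point error function, from a 1D mean to a homography.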

1. Randomly select a minimal subset of points
2. Hypothesize a model
3. Compute the error function
4. Select points consistent with the model
5. Repeat the hypothesize-and-verify loop



Repeat N times:
• Draw s points uniformly at random
• Fit line to these s points
• Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t)
• If there are d or more inliers, accept the line and refit using all inliers

Choosing the parameters

Initial number of points s
Typically minimum number needed to fit the model
Distance threshold t
Choose t so that the probability of an inlier falling within the threshold is p (e.g. 0.95)
For zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ²
Number of samples N
Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99, with outlier ratio e)
Consensus set size d
Should match expected inlier ratio
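These rules can be turned into numbers. The 3.84 factor is the 95th percentile of a chi-square distribution with one degree of freedom, and N follows from requiring that at least one of N samples of size s be outlier-free with probability p: N = log(1 − p) / log(1 − (1 − e)^s). A small sketch (function names are illustrative):

```python
import math

def ransac_threshold(sigma):
    # For zero-mean Gaussian noise, 95% of inliers fall within t
    # of the model: t^2 = 3.84 * sigma^2.
    return math.sqrt(3.84) * sigma

def num_samples(s, e, p=0.99):
    # Smallest N such that P(at least one outlier-free sample) >= p:
    # 1 - (1 - (1-e)^s)^N >= p  =>  N >= log(1-p) / log(1 - (1-e)^s)
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))
```

For example, fitting a line (s = 2) with a 50% outlier ratio needs only 17 samples for p = 0.99, while a homography-sized sample (s = 4) at the same outlier ratio needs 72, which is why N grows quickly as the inlier ratio drops.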

RANSAC pros and cons

• Pros

  • Simple and general
  • Applicable to many different problems
  • Often works well in practice

• Cons

  • Lots of parameters to tune
  • Doesn’t work well for low inlier ratios (too many iterations, or can fail completely)
  • Can’t always get a good initialization of the model based on the minimum number of samples

via http://www.cs.illinois.edu/~slazebni/spring11/lec09_fitting.pdf

