I was trying to achieve rotation invariance for shape context. The general approach for shape context is:
- Compute the distances and angles between each pair of interest points in a given image.
- Bin these calculated values into a histogram based on which ranges they fall into. Do this for both a reference image and a test image.
- To match the two images, use a chi-square cost function to estimate the "cost" of matching each possible pair of points between the two sets of histograms.
- Finally, use an optimization technique such as the Hungarian algorithm to find the optimal assignment of points, then sum up the total cost, which will be lower for good matches.
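The steps above can be sketched roughly as follows. This is a minimal NumPy/SciPy version, not the authors' code: the 5 radial by 12 angular log-polar binning follows the paper, but the function names, the mean-distance normalization, and other details are my own simplifications.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar shape-context histogram for each point.

    points: (N, 2) array of interest-point coordinates.
    Returns an (N, n_r * n_theta) array of normalized histograms.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]        # pairwise offset vectors
    dist = np.hypot(diff[..., 0], diff[..., 1])     # pairwise distances
    ang = np.arctan2(diff[..., 1], diff[..., 0])    # pairwise angles

    # Normalize distances by the mean for scale invariance, then bin
    # the radii on a log scale, as in the original formulation.
    mean_d = dist[dist > 0].mean()
    log_r = np.log10(np.maximum(dist / mean_d, 1e-12))
    r_edges = np.linspace(log_r[dist > 0].min(), log_r.max(), n_r + 1)
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)

    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i                    # exclude the point itself
        h, _, _ = np.histogram2d(log_r[i, mask], ang[i, mask],
                                 bins=[r_edges, t_edges])
        hists[i] = h.ravel() / mask.sum()           # normalize the counts
    return hists

def chi_square_cost(h1, h2, eps=1e-12):
    """Chi-square cost matrix between two sets of histograms."""
    num = (h1[:, None, :] - h2[None, :, :]) ** 2
    den = h1[:, None, :] + h2[None, :, :] + eps
    return 0.5 * (num / den).sum(axis=-1)

def match_cost(points_a, points_b):
    """Total matching cost: Hungarian assignment over chi-square costs."""
    cost = chi_square_cost(shape_context(points_a), shape_context(points_b))
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()
```

Matching a point set against itself yields a (near-)zero total cost, while an unrelated set yields a higher one, which is what makes the summed cost usable as a similarity score.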
The authors say that to make the above approach rotation invariant, you calculate each angle between a pair of points relative to the tangent vector at the reference point, i.e. treating the tangent direction as the positive x-axis (see http://www.cs.berkeley.edu/~malik/papers/BMP-shape.pdf, page 513).
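Concretely, that means subtracting each reference point's tangent angle from the absolute angles before binning. Below is a sketch of just that step, under the assumption that the points are ordered along the shape's boundary so the tangent can be approximated by central differences; the function name and that tangent estimate are mine, not the paper's.

```python
import numpy as np

def relative_angles(contour):
    """Pairwise angles measured relative to the local tangent direction
    at each reference point, making the angular bins rotation invariant.

    contour: (N, 2) array of points ordered along the shape boundary.
    Returns an (N, N) array of angles wrapped into [-pi, pi).
    """
    pts = np.asarray(contour, dtype=float)

    # Approximate the tangent at each point by central differences
    # along the ordered contour (wrapping around at the ends).
    tangent = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    tangent_angle = np.arctan2(tangent[:, 1], tangent[:, 0])

    diff = pts[None, :, :] - pts[:, None, :]
    abs_angle = np.arctan2(diff[..., 1], diff[..., 0])

    # Subtract the tangent angle so the angular axis rotates with the
    # shape, then wrap the result back into [-pi, pi).
    rel = abs_angle - tangent_angle[:, None]
    return (rel + np.pi) % (2 * np.pi) - np.pi
```

Rotating the whole contour rotates both the absolute angles and the tangent angles by the same amount, so the relative angles (and hence the histograms built from them) are unchanged.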