Chamfer matching calculates the distance (dissimilarity) between two images. It works well when the model and the image have no rotational or scaling differences. Handle scaling separately, and use a sliding window if the target image is larger than the query image.
CM is a popular way to find the best alignment between two edge maps. Although many shape matching algorithms have been proposed over the decades, chamfer matching remains among the fastest and most robust approaches in the presence of clutter.
CM provides a fairly smooth measure of fitness, and can tolerate small rotations, misalignments, occlusions, and deformations.
The basic idea is to:
- Extract the edges/contours of the query image as well as the target image.
- For each contour point/pixel in the query image, find the distance to the closest contour point/pixel in the target image.
- Sum these distances over all edge points/pixels of the query image.
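The three steps above can be sketched directly. This is a brute-force illustration (function and variable names are my own, and averaging rather than summing is a normalization choice); an efficient implementation would use a distance transform instead:

```python
import numpy as np

def chamfer_distance(query_edges, target_edges):
    """Mean distance from each query edge pixel to its nearest
    target edge pixel (brute force, for illustration only)."""
    q = np.argwhere(query_edges)   # (Nq, 2) row/col coordinates of query edges
    t = np.argwhere(target_edges)  # (Nt, 2) row/col coordinates of target edges
    # Pairwise Euclidean distances, then the nearest target point per query point.
    d = np.linalg.norm(q[:, None, :] - t[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy example: a horizontal edge segment shifted down by one row.
a = np.zeros((8, 8), bool); a[2, 2:6] = True
b = np.zeros((8, 8), bool); b[3, 2:6] = True
print(chamfer_distance(a, b))  # -> 1.0 (every point is exactly 1 pixel away)
```

The brute-force pairwise matrix is O(Nq * Nt); the distance-transform trick discussed later removes that quadratic cost.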
Most methods use a nearest neighbor approach to match two sets of descriptors.
- matching SIFT descriptors adds a heuristic check on the ratio between the distances to the first and the second nearest neighbor – more robust matches
bipartite graph matching – global dissimilarity minimization
- weighted sum of costs of a generalization of the shape context descriptor
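The ratio check mentioned above (Lowe's ratio test) can be sketched with plain NumPy descriptors; the function name, the 0.75 threshold, and the toy descriptors are illustrative choices, not a fixed API:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbor matching with Lowe's ratio test: accept a match
    only if the best neighbor is clearly closer than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]   # two nearest neighbors in desc_b
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# Toy descriptors: desc_b[0] is an unambiguous match for desc_a[0].
desc_a = np.array([[0.0, 0.0]])
desc_b = np.array([[0.1, 0.0], [5.0, 5.0]])
print(ratio_test_matches(desc_a, desc_b))  # -> [(0, 0)]
```

An ambiguous point (two nearly equidistant neighbors) fails the test and is dropped, which is exactly why this filter is more robust than plain nearest-neighbor matching.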
According to the definition of log-polar bins, pixels are indexed by the ring number R and the wedge number W.
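A minimal sketch of that (R, W) indexing, assuming 5 rings and 12 wedges as in the classic shape context setup; the log-radius range and clipping are my own illustrative choices:

```python
import numpy as np

def log_polar_bin(dx, dy, n_rings=5, n_wedges=12, log_r_max=2.0):
    """Index a relative offset (dx, dy) by ring number R (log-spaced radii)
    and wedge number W (uniform angles)."""
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    # Rings are uniform in log r, so near offsets get finer radial bins.
    R = int(np.clip(np.log(max(r, 1e-9)) / log_r_max * n_rings, 0, n_rings - 1))
    W = int(theta / (2 * np.pi) * n_wedges) % n_wedges
    return R, W

print(log_polar_bin(1.0, 0.0))  # -> (0, 0): radius 1 (log r = 0), angle 0
```

Counting pixels per (R, W) cell around a reference point yields the shape context histogram for that point.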
shape context + chamfer matching for clutter – Thayananthan 2003
fast directional chamfer matching (FDCM) – 2010 [pdf]
- improves the accuracy of chamfer matching by including edge orientation.
- achieves massive improvements in matching speed using line-segment approximations of edges, a 3D distance transform, and directional integral images.
- other applications in the context of deformable and articulated shape matching.
- As with other edge-based vision algorithms, the quality of the edge map affects detection performance.
The best computational complexity for existing chamfer matching algorithms is linear in the number of template edge points.
optimize the directional matching cost in three stages:
(1) We present a linear representation of the template edges.
(2) We then describe a three-dimensional distance transform representation.
(3) Finally, we present a directional integral image representation over distance transforms.
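Stage (2) can be sketched as follows. This is an illustrative O(K²) version over K quantized orientation channels with a hypothetical orientation weight `lam`; FDCM computes the same quantity with a two-pass recursion over orientations:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dt3(edge_channels, lam=1.0):
    """Three-dimensional distance transform over (x, y, orientation).
    edge_channels: (K, H, W) boolean maps, one per quantized edge orientation.
    D[k] = min over channels j of (2D distance to an edge in channel j
    + lam * circular orientation gap between k and j)."""
    K = len(edge_channels)
    # 2D Euclidean distance transform per orientation channel.
    dt2 = np.stack([distance_transform_edt(~ch) for ch in edge_channels])
    D = np.empty_like(dt2)
    for k in range(K):
        j = np.arange(K)
        gaps = np.minimum(np.abs(j - k), K - np.abs(j - k))  # circular gap
        D[k] = (dt2 + lam * gaps[:, None, None]).min(axis=0)
    return D
```

Looking up D at a template point's position and orientation then gives the directional matching cost for that point in O(1).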
- The matching cost can be computed efficiently via a distance transform image, which specifies the distance from each pixel to the nearest edge pixel in the edge map of the query image V
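A sketch of that speed-up, combined with the sliding window mentioned earlier (function names are mine; assumes SciPy's `distance_transform_edt`): the distance transform of the query edge map is computed once, and every template placement then only needs lookups at the template's edge coordinates.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_scores(template_edges, query_edges):
    """Average chamfer cost for every translation of the template over the
    query edge map, read from one precomputed distance transform image."""
    # Per-pixel distance to the nearest edge pixel in the query edge map.
    dt = distance_transform_edt(~query_edges)
    pts = np.argwhere(template_edges)   # template edge coordinates
    th, tw = template_edges.shape
    H, W = query_edges.shape
    scores = np.empty((H - th + 1, W - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            scores[y, x] = dt[pts[:, 0] + y, pts[:, 1] + x].mean()
    return scores  # argmin of this map locates the best alignment
```

Each placement costs O(number of template edge points), which is where the linear complexity mentioned above comes from.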
matching shapes in cluttered images: edge-based vision algorithms – edge map
- require a clean segmentation of the target object
- clean shape
- foreground-background separation
- less suitable for dealing with unstructured scenes