Programming Project #3 (proj3B) (second part)
FEATURE MATCHING for AUTOSTITCHING
(second part of a larger project)
The goal of this project is to create a system for automatically stitching images into a mosaic.
A secondary goal is to learn how to read and implement a research paper. The project
will consist of the following steps: detecting corner features (with and without adaptive non-maximal suppression), extracting a feature descriptor for each corner, matching feature descriptors between images, and using RANSAC to compute homographies and produce mosaics.
The steps below follow the paper “Multi-Image Matching using Multi-Scale Oriented Patches” by Brown et al.,
but with several simplifications. Read the paper first and make sure you understand it, then implement the algorithm.
First, detect corner features in each image and implement Adaptive Non-Maximal Suppression (ANMS) to select a well-distributed subset of strong corners.
Deliverables: Show the detected corners overlaid on an image, with and without ANMS.
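As a starting point, here is a minimal sketch of adaptive non-maximal suppression in Python. It assumes the Harris corner coordinates and their corner strengths have already been computed; the function name, the robustness constant (0.9), and the number of retained corners (500) are illustrative choices, not requirements of the assignment.

import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression (a sketch).

    coords:    (N, 2) array of corner (row, col) positions
    strengths: (N,)   array of Harris corner responses
    Keeps the n_keep corners with the largest suppression radius,
    i.e. the distance to the nearest corner that is significantly
    stronger (by a factor of 1 / c_robust).
    """
    n = len(coords)
    radii = np.full(n, np.inf)
    for i in range(n):
        # corners that are sufficiently stronger than corner i
        stronger = strengths > strengths[i] / c_robust
        if np.any(stronger):
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    # keep the corners with the largest suppression radii
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]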
Implement Feature Descriptor extraction (Section 4 of the paper). Don’t worry
about rotation invariance – just extract axis-aligned 8x8 patches. Note that
it is important to sample these patches from the larger 40x40 window around each corner: blurring and subsampling the larger window gives a descriptor that captures a coarser, more distinctive neighborhood and is robust to small misalignments.
Don’t forget to bias/gain-normalize the descriptors. Ignore the wavelet transform section.
Deliverables: Extract normalized 8x8 feature descriptors. Show several extracted
features.
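One possible implementation of the descriptor extraction is sketched below. It assumes a grayscale image, (row, col) corner coordinates, and that blurring the image and then sampling every 5th pixel of the 40x40 window is an acceptable way to obtain the 8x8 patch; the blur sigma and helper name are illustrative choices.

import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptors(img, coords, window=40, patch=8):
    """Axis-aligned MOPS-style descriptors (a sketch).

    For each corner, sample an 8x8 patch from a blurred 40x40 window
    (every 5th pixel), then bias/gain-normalize to zero mean and unit
    standard deviation.
    """
    spacing = window // patch                         # 5-pixel spacing
    blurred = gaussian_filter(img, sigma=spacing / 2.0)  # blur before subsampling (sigma is an assumption)
    half = window // 2
    descriptors, kept = [], []
    for r, c in coords:
        r, c = int(r), int(c)
        # skip corners whose 40x40 window falls outside the image
        if r - half < 0 or c - half < 0 or r + half > img.shape[0] or c + half > img.shape[1]:
            continue
        patch_px = blurred[r - half : r + half : spacing,
                           c - half : c + half : spacing]
        d = patch_px.astype(float)
        d = (d - d.mean()) / (d.std() + 1e-8)         # bias/gain normalization
        descriptors.append(d.ravel())
        kept.append((r, c))
    return np.array(descriptors), np.array(kept)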
Implement Feature Matching (Section 5 of the paper). That is, you will need to find pairs of features that look similar and are
thus likely to be good matches. For thresholding, use the simpler approach due to Lowe: threshold on the ratio between the
distances to the first and second nearest neighbors. Consult Figure 6b in the paper when picking the threshold.
Ignore Section 6 of the paper.
Deliverables: Show matched features between image pairs.
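A sketch of the matching step with Lowe’s ratio test, assuming the descriptors from the previous step are stacked as the rows of desc1 and desc2. The default ratio threshold below is only a placeholder; pick your own value using Figure 6b.

import numpy as np

def match_features(desc1, desc2, ratio_thresh=0.7):
    """Lowe-style ratio matching (a sketch).

    For each descriptor in desc1, find its nearest and second-nearest
    neighbors in desc2 and keep the match only if
    dist(1-NN) / dist(2-NN) < ratio_thresh.
    """
    # squared Euclidean distances between all descriptor pairs
    d2 = (np.sum(desc1**2, axis=1)[:, None]
          + np.sum(desc2**2, axis=1)[None, :]
          - 2.0 * desc1 @ desc2.T)
    matches = []
    for i in range(desc1.shape[0]):
        order = np.argsort(d2[i])
        nn1, nn2 = order[0], order[1]
        # compare squared distances, hence the squared threshold
        if d2[i, nn1] < (ratio_thresh**2) * d2[i, nn2]:
            matches.append((i, nn1))
    return matches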
For step 4, use 4-point RANSAC, as described in class, to compute robust homography estimates. Then produce mosaics by adapting
your code from Part A. You may use the same images as in Part A, but show the manually and automatically stitched results side
by side. Produce at least three mosaics.
Deliverables: Implement 4-point RANSAC from scratch. Show a side-by-side comparison of manual and
automatic stitching. Create at least three automatic mosaics.
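Finally, a sketch of the 4-point RANSAC loop. The small DLT-style solver here stands in for the homography routine you wrote in Part A, and the iteration count and inlier threshold are illustrative assumptions.

import numpy as np

def compute_homography(pts1, pts2):
    """Least-squares homography mapping pts1 -> pts2 (DLT-style sketch)."""
    A = []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

def ransac_homography(pts1, pts2, n_iters=1000, eps=2.0):
    """4-point RANSAC (a sketch).

    Repeatedly sample 4 correspondences, fit a homography, count the
    inliers whose reprojection error is below eps pixels, and refit on
    the largest inlier set at the end.
    """
    n = len(pts1)
    best_inliers = np.zeros(n, dtype=bool)
    p1_h = np.hstack([pts1, np.ones((n, 1))])       # homogeneous coordinates
    for _ in range(n_iters):
        idx = np.random.choice(n, 4, replace=False)
        H = compute_homography(pts1[idx], pts2[idx])
        proj = (H @ p1_h.T).T
        proj = proj[:, :2] / (proj[:, 2:3] + 1e-12)  # dehomogenize
        err = np.linalg.norm(proj - pts2, axis=1)
        inliers = err < eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares fit on the best inlier set
    return compute_homography(pts1[best_inliers], pts2[best_inliers]), best_inliers

Refitting on the full inlier set at the end typically gives a more stable homography than keeping the best 4-point model alone.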