Project 4: [Auto]Stitching Photo Mosaics

Kishan Jani

Project 4a: Image Warping and Mosaicing

Part 4a.1: Shooting the Images

In this part, we manually touch grass (whoa!) and shoot some photos. To ensure that the photos can be stitched together well, it is important to shoot all images from a fixed lens position (rotating the camera about its optical center) so that there is no parallax between shots. A wide-angle lens is recommended, and the fields of view should overlap significantly, roughly 40% to 70%; too little overlap makes registration harder.
To see the images shot, click on the following link:
Part 4a.1 Results

Part 4a.2: Computing Homographies and Warping Images

Computing Homographies between keypoints

Suppose we wish to send points \([wx:wy:w]\) (in the projective space \(\mathbb{P}^2\)) to \([w'x':w'y':w']\) for indices \(i=1,2,\ldots,n\) (for conciseness, we suppress the indices) via some homography \(H\), i.e. \[ \begin{bmatrix} w'x'\\w'y'\\w'\end{bmatrix} = H \begin{bmatrix} wx\\wy\\w\end{bmatrix}\]

Thus we have (scaling \(H\) to set \(h_{33} = 1\)) \[ s \begin{bmatrix} x'\\y'\\1\end{bmatrix} = H \begin{bmatrix} x\\y\\1\end{bmatrix} = \begin{bmatrix} h_{11}& h_{12}& h_{13}\\h_{21}& h_{22}& h_{23}\\h_{31}& h_{32}& 1\end{bmatrix} \begin{bmatrix}x\\y\\1\end{bmatrix},\] where we set \(s:= w'/w\). Then substituting the value for \(s\) from the third equation and rearranging, we get the linear system \[ \begin{bmatrix} x& y& 1& 0& 0& 0& -x'x& -x'y \\ 0& 0& 0& x& y& 1& -y'x& -y'y \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \end{bmatrix} =\begin{bmatrix} x' \\ y' \end{bmatrix} \] For a batch of \(n\) distinct points, we then have the overdetermined linear system \(A \mathbf{h} = \mathbf{b}\) to solve, where \(\mathbf{h} = \begin{bmatrix} h_{11} & \cdots & h_{32} \end{bmatrix}^\top \) as above, and \(A\) is the \(2n\times 8\) matrix formed by stacking the \(n\) blocks of shape \(2\times 8\) given above. Finally, \(\mathbf{b}\) is formed by interleaving the target coordinates \(x'\) and \(y'\).

Now observe that this overdetermined system can be solved via least-squares, which then gives the best homography mapping \( (x_1,y_1),\ldots, (x_n,y_n) \) to \( (x_1',y_1'), \ldots, (x_n', y_n') \).
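The following is a minimal sketch of this least-squares solve in NumPy (the function name and array layout are our own):

import numpy as np

def compute_homography(pts, pts_prime):
    """Least-squares homography sending pts -> pts_prime.

    pts, pts_prime: (n, 2) arrays of (x, y) correspondences, n >= 4.
    Returns a 3x3 matrix H with h33 fixed to 1.
    """
    n = pts.shape[0]
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(pts, pts_prime)):
        A[2 * i]     = [x, y, 1, 0, 0, 0, -xp * x, -xp * y]
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -yp * x, -yp * y]
        b[2 * i], b[2 * i + 1] = xp, yp
    h, *_ = np.linalg.lstsq(A, b, rcond=None)  # solves A h = b in least squares
    return np.append(h, 1.0).reshape(3, 3)     # reattach h33 = 1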

Inverse Warp

Once we have the homography \(H\), to actually transform the image, we use an inverse warp. Specifically, we first determine the bounding box of the output by predicting where the corners map under \(H\). Then we find the pixel value at \((x',y')\) in the new image via \[ g(x',y') = f(H^{-1}(x',y')), \] i.e., we set each new-image pixel to the old-image value at its preimage under \(H\). Since \(H^{-1}\) of an integer output coordinate generally lands at a non-integer source coordinate, we interpolate with scipy.interpolate.RegularGridInterpolator, as in the previous project. Multiple homographies can also be chained together to generate mosaics with \( > 2\) images.
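A minimal sketch of the inverse warp, assuming the bounding-box step has already produced the output shape (the function name is our own):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def inverse_warp(img, H, out_shape):
    """Warp img under homography H by inverse sampling."""
    h, w = img.shape[:2]
    interp = RegularGridInterpolator(
        (np.arange(h), np.arange(w)), img,
        bounds_error=False, fill_value=0)     # 0 outside the source image
    Hp, Wp = out_shape
    xs, ys = np.meshgrid(np.arange(Wp), np.arange(Hp))
    dest = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous (x, y, 1)
    src = np.linalg.inv(H) @ dest             # pull each output pixel back to the source
    src = src[:2] / src[2]                    # dehomogenize
    # the interpolator expects (row, col) = (y, x) order
    out = interp(np.stack([src[1], src[0]], axis=-1))
    return out.reshape(Hp, Wp, *img.shape[2:])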
Results of this section are presented in Image Rectification and Mosaicing.

Part 4a.3: Image Rectification (Contains Deliverables)

Introduction

By setting the target keypoints in the homography to be a well-defined rectangle, we can select a feature in an image that we know is rectangular and "rectify" it, i.e., make it appear straight-on.
Results in this section are highly sensitive to the selection of points. Firstly, the selected points must be accurate: empirically, rectification proved very sensitive to small errors here. Furthermore, the ordering of points must be consistent between source and target (otherwise unintelligible results are produced, since the image gets "flipped over itself").
Finally, to produce aesthetic images, the dimensions of the new rectified feature were chosen to be somewhat similar to those of the old feature.
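As a usage sketch (reusing the hypothetical compute_homography and inverse_warp from Part 4a.2; all coordinates below are made up for illustration):

import numpy as np

# corners of a feature known to be rectangular, clicked in a consistent
# (say clockwise) order starting from the top-left
src = np.array([[120,  80], [520,  95], [530, 410], [115, 400]], dtype=float)
# target rectangle with dimensions similar to those of the feature
dst = np.array([[100, 100], [500, 100], [500, 400], [100, 400]], dtype=float)

H = compute_homography(src, dst)
rectified = inverse_warp(img, H, out_shape=(600, 700))  # img assumed already loaded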

Results

We present some examples of image rectification. Click the following link for results.
Part 4a.3 Results

Part 4a.4: Image Mosaicing (Contains Deliverables)

Introduction

Finally, we can use the developed machinery to create a mosaic using several images. The strategy is to choose several corresponding keypoints (from prominent features) between the two images, and warp the first image into the second so that keypoints map to keypoints (as best as possible).
With this, we will have two images whose keypoints roughly match. To blend these images properly, we use a distance transform on the alpha channel: at each pixel, we give more weight to the image whose footprint extends further from the background there. Specifically, \[\text{Blended Image}[i,j] = \mathbb{1}_k\{ \text{dist}_1[i,j] > \text{dist}_2[i,j] \}\cdot \text{Image}_1[i,j] + \big( 1 - \mathbb{1}_k\{ \text{dist}_1[i,j] > \text{dist}_2[i,j] \} \big) \cdot \text{Image}_2[i,j].\] To ensure that the blending is smooth, we apply a Gaussian blur to the indicator/mask \[\mathbb{1}_k\{ \text{dist}_1[i,j] > \text{dist}_2[i,j]\} = \text{Gaussian}(\sigma_k = 2^k \sigma_0, \text{window} = \lfloor 6\sigma_k \rfloor +1) \star \mathbb{1}\{ \text{dist}_1[i,j] > \text{dist}_2[i,j]\},\] skipping the first few levels of the Gaussian stack until the mask is sufficiently blurred. This ensures that there are no edge artifacts in the final product.
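A simplified sketch of this blend (we actually use a Gaussian stack and skip its first levels; here a single Gaussian blur with a hypothetical sigma stands in for that step):

import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def blend(im1, a1, im2, a2, sigma=7.0):
    """Distance-transform blend of two aligned (h, w, 3) images with 0/1 alpha masks."""
    d1 = distance_transform_edt(a1)          # distance to the background in image 1
    d2 = distance_transform_edt(a2)
    mask = (d1 > d2).astype(float)           # hard indicator: which image "wins" each pixel
    mask = gaussian_filter(mask, sigma)[..., None]  # smooth it, broadcast over channels
    out = mask * im1 + (1 - mask) * im2
    return out * np.maximum(a1, a2)[..., None]      # keep only covered pixels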

Results

Across all cases, we start with \(\sigma_0 = 7\) and skip the first \(2\) images from the Gaussian stack of the mask, with depth \(5\). Click the following link for results.
Part 4a.4 Results

Project 4b: Feature Matching for Autostitching

Part 4b.1: Detecting corner features in an image

Harris Point Detector

In this part, we use the Harris Matrix \( H_\ell(x,y) \) at each position \( (x,y) \) (and appropriate pyramid level \(\ell\)) to detect corners. It is defined as \[ H_\ell(x,y) = \nabla P_\ell(x,y) \nabla P_\ell(x,y)^{\text{T}} \star g_\sigma(x,y).\] Here \(P_\ell \) is the level-\( \ell \) image from the pyramid for our image. Thus, the Harris Matrix is the outer product of the gradient of \( P_\ell \) with itself, smoothed by an appropriate Gaussian \( g_\sigma \).

The corner strength is measured by the harmonic mean of the eigenvalues of \( H_\ell(x,y) \), i.e. \[ S(x,y) = \frac{\lambda_1 \lambda_2}{\lambda_1 + \lambda_2}.\] Reasoning: the harmonic mean is large only when both eigenvalues are large, i.e., the intensity changes sharply along both eigenvector directions. This is precisely a corner. At an edge, one eigenvalue is large and the other small (say \( \lambda_1 > \lambda_2\) WLOG), so the harmonic mean stays small and edges are rejected. We look for corners which are sufficiently spaced out and exceed a certain threshold value.
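One way to compute this corner strength at a single pyramid level, as a sketch (using det/trace, which equals the harmonic mean of the two eigenvalues):

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_strength(img, sigma=1.0):
    """Harmonic-mean corner strength det(H)/trace(H) at every pixel of a grayscale image."""
    Ix = sobel(img, axis=1)                  # x-gradient
    Iy = sobel(img, axis=0)                  # y-gradient
    # entries of the Harris matrix, smoothed by the Gaussian g_sigma
    Ixx = gaussian_filter(Ix * Ix, sigma)
    Iyy = gaussian_filter(Iy * Iy, sigma)
    Ixy = gaussian_filter(Ix * Iy, sigma)
    # det = lam1 * lam2 and trace = lam1 + lam2, so det/trace is the harmonic mean
    return (Ixx * Iyy - Ixy ** 2) / (Ixx + Iyy + 1e-8)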

Adaptive Non-Maximal Suppression (ANMS)

There are two issues with simply using Harris corners: the points are not evenly spread across the image (which a robust homography needs), and there are too many of them. The Adaptive Non-Maximal Suppression algorithm selects only the interest points \( p^* \) that are sufficiently far from any stronger interest point; the reasoning is that a nearby point whose corner strength sufficiently exceeds \( S(p^*) \) would suppress \( p^* \) as a corner. We choose \( c=0.9 \) as our "robustness" parameter, the fraction by which \( S(p^*) \) must fall below a neighbor's strength for that neighbor to count as suppressing it. Particularly, we compute the suppression radius \[ r_i = \min_{j} \|\vec{x}_i - \vec{x}_j\| \text{ over all } \vec{x}_j \text{ s.t. } S(\vec{x}_i) < c\cdot S(\vec{x}_j), \] then choose the \( N_{ip}=500\) candidates with the largest \( r_i \) as the new interest points. As can be seen in the results, this not only culls densely packed points but also yields well spaced-out points.
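A straightforward O(n^2) sketch of ANMS (variable names are our own; fine for a few thousand corners):

import numpy as np

def anms(coords, strengths, n_ip=500, c=0.9):
    """Keep the n_ip interest points with the largest suppression radius r_i."""
    n = len(coords)
    r = np.full(n, np.inf)                        # the globally strongest point keeps r = inf
    for i in range(n):
        stronger = strengths[i] < c * strengths   # points strong enough to suppress point i
        if stronger.any():
            r[i] = np.linalg.norm(coords[stronger] - coords[i], axis=1).min()
    keep = np.argsort(-r)[:n_ip]                  # largest radii first
    return coords[keep]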

Results


To see the selected Harris points and Harris + ANMS points, click the link below.
Part 4b.1 Results

Part 4b.2 and 4b.3: Extracting and Matching Feature Descriptors

Extracting Feature Descriptors

We generate axis-aligned 8x8 image patches as our feature descriptors, sub-sampled (with anti-aliasing) from a 40x40 window around each interest point. For coherence across the image, we bias/gain-normalize each patch and also convert it to grayscale.
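A sketch of the descriptor extraction (assuming grayscale input and corners at least 20 px from the border):

import numpy as np
from skimage.transform import resize

def extract_descriptors(img, coords):
    """8x8 descriptors sub-sampled from 40x40 windows around integer (row, col) corners."""
    descs = []
    for r, c in coords:
        window = img[r - 20:r + 20, c - 20:c + 20]
        patch = resize(window, (8, 8), anti_aliasing=True)     # 40x40 -> 8x8
        patch = (patch - patch.mean()) / (patch.std() + 1e-8)  # bias/gain normalization
        descs.append(patch.ravel())
    return np.array(descs)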

Matching Feature Descriptors

For matching, we use Lowe's trick. Specifically, we compute the 1-NN and 2-NN of each feature patch and divide the distance to the 1-NN by the distance to the 2-NN; if this ratio is below a certain threshold, we consider the 1-NN a match. The threshold \(t \in (0,1)\) is determined empirically as the value of the 1-NN/2-NN squared-error ratio below which a correct match is much more likely than an incorrect one, as can be seen in Figure 6(b) of the paper. We choose \( t=0.3\) for our purposes.
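A sketch of the ratio-test matching (brute-force distances; fine for a few hundred descriptors):

import numpy as np

def match_descriptors(d1, d2, t=0.3):
    """Return (i, j) index pairs matching rows of d1 to rows of d2 via Lowe's ratio test."""
    # squared distances between every pair of descriptors
    dists = ((d1[:, None, :] - d2[None, :, :]) ** 2).sum(-1)
    order = np.argsort(dists, axis=1)
    nn1, nn2 = order[:, 0], order[:, 1]        # nearest and second-nearest neighbors
    rows = np.arange(len(d1))
    ratio = dists[rows, nn1] / dists[rows, nn2]
    matched = ratio < t                        # accept only confident matches
    return np.stack([rows[matched], nn1[matched]], axis=1)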

Results

To see the matchings and some matched features from each mosaic, click the link below.
Part 4b.2 and 4b.3 Results

Part 4b.4: Use RANSAC for Robust Homography

The RANSAC Algorithm

The idea here is to repeatedly fit a homography to 4 point correspondences chosen uniformly at random from all interest points, and keep the candidate \( H^*\) that maximizes the number of inliers, i.e., interest points mapped to their matches within a certain threshold \(\varepsilon \in \{2,3,4\}\) pixels by \( H^*\). We then use these inliers as our new interest points and output the homography given by least-squares on them. This removes any matches that are inconsistent with the ideal homography. We use around 5000 iterations of RANSAC.
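A sketch of the RANSAC loop (reusing the hypothetical compute_homography from Part 4a.2):

import numpy as np

def ransac_homography(pts1, pts2, n_iters=5000, eps=3.0):
    """4-point RANSAC; returns a least-squares homography fit to the largest inlier set."""
    n = len(pts1)
    ones = np.ones((n, 1))
    best_inliers = np.zeros(n, dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        idx = rng.choice(n, 4, replace=False)
        H = compute_homography(pts1[idx], pts2[idx])  # exact fit to 4 random matches
        proj = H @ np.hstack([pts1, ones]).T
        proj = (proj[:2] / proj[2]).T                 # dehomogenize
        inliers = np.linalg.norm(proj - pts2, axis=1) < eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return compute_homography(pts1[best_inliers], pts2[best_inliers])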

Results

To see results, click on the following link.
Part 4b.4 Results

Part 4b.5: Produce Mosaics

Using exactly the same warping technique as in Part 4a, we can now use the Homography \( H\) computed via RANSAC to warp one image into another, producing a mosaic. We use Pyramid Blending with the same parameters as before.

Results


Across all mosaics, we use a Laplacian Pyramid with skip = 2, depth = 4, sigma = 11 as parameters for blending. See Part 4a.4 for more detail on how exactly this is implemented.

We see that the VLSB and house images blend very well: most detail is recovered and there are no obvious double-edge / blurry artifacts from a non-robust homography. However, the text on the blackboard is not clear. The reasoning is that the text consists of many sharp edges, and pyramid blending does not particularly help in recovering the sharpness of individual characters (it is too weak for that). Ideally, we need to find a balance between good blending and being able to recover/read the characters. Here we chose better blending; even so, we can still see that the images have different shadows (due to weak blending). Thus there is a certain trade-off between good blending and level of detail, which becomes much more apparent for highly detailed images.

Additionally, when choosing correspondence points manually, we see significant straight-edge distortion (compare the VLSB 3&4 mosaics).

To see the final mosaics (and comparison with manual mosaics), click the link below:
Part 4b.5 Results