Feature Points Detection - Harris Detector

As part of my student exchange program at the Czech Technical University in Prague, and in particular of the Computer Vision Methods course, I had to implement an algorithm for feature point detection. It is a vital step for describing an image by its feature points, for applying descriptors to those points, and later for object recognition; it is also essential for finding corresponding points between images. The most common methods are the Hessian detector and the Harris detector. This post focuses on the Harris detector.

  • Hessian Detector
The goal of the Hessian detector is to repeatably find feature points that are well localized and whose neighborhood carries enough information to build a good description. The detector looks for local extrema of the determinant of the Hessian matrix (the matrix of second derivatives):

H = [ Ixx  Ixy ; Ixy  Iyy ]

It finds the centers of so-called blobs: local extrema of the intensity function which are well localized in a neighborhood given by the variance of the smoothing filter. Interest points are those local maxima whose value (response) is above a threshold t, determined by the noise in the image. The determinant of the Hessian matrix is:

det H = Ixx * Iyy - Ixy^2

For the computation of the second derivatives I use derivatives of a Gaussian with variance sigma^2. For the localization of the Hessian extrema we need a function that decides whether a point is a local extremum; this is done by comparing the pixel value with the pixels in its neighborhood.
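I did not include the Hessian detector in the code below, but a minimal Matlab sketch of its response could look like the following. The function name hessian_response and the simple second-difference kernels are my own illustration (fspecial needs the Image Processing Toolbox); the local extrema above the threshold t can then be picked out in the same way as in the nonmaxsup2d function shown later:

function [response] = hessian_response(img, sigma)
% Sketch of a Hessian (determinant) response: smooth with a Gaussian and
% apply simple second-difference kernels (illustration only).
g = fspecial('gaussian', 2*ceil(3*sigma)+1, sigma);   % Gaussian smoothing kernel
smoothed = conv2(img, g, 'same');
dxx = [1 -2 1];                                       % d^2/dx^2
dyy = dxx';                                           % d^2/dy^2
dxy = [1 0 -1; 0 0 0; -1 0 1] / 4;                    % d^2/(dx dy)
Ixx = conv2(smoothed, dxx, 'same');
Iyy = conv2(smoothed, dyy, 'same');
Ixy = conv2(smoothed, dxy, 'same');
response = Ixx.*Iyy - Ixy.^2;                         % determinant of the Hessian
end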

  • Harris Detector
The Harris detector detects points at which the gradient changes significantly in two orthogonal directions. For this reason it is sometimes called a corner detector.


Such a point should be easy to recognize by looking through a small window: shifting the window in any direction should give a large change in intensity. The detector uses the properties of the so-called auto-correlation matrix of gradients:

A = w * [ Ix^2  Ix*Iy ; Ix*Iy  Iy^2 ]

where * is the convolution operator. Using the windowing function w (a Gaussian with standard deviation sigmai), the matrices of outer products of gradients are accumulated over the neighborhood; sigmai is the scale (size) of the shifting window. The outer-product matrix has its eigenvector corresponding to the larger eigenvalue in the direction of the gradient. By accumulating the outer products with a convolution and finding the eigenvalues of the auto-correlation matrix we obtain the two dominant gradient orientations and their magnitudes in the point's neighborhood. What matters is the ratio of the two eigenvalues.
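To make the role of the eigenvalues concrete, here is a tiny self-contained illustration of mine (it uses plain finite-difference gradients and a hand-picked window instead of the derivative-of-Gaussian filters used later, just to keep it short):

img = zeros(9); img(5:end, 5:end) = 1;        % a bright corner in a dark image
[gx, gy] = gradient(img);                     % simple finite-difference gradients
w = 3:7;                                      % small window around the corner
A = [sum(sum(gx(w,w).^2)),        sum(sum(gx(w,w).*gy(w,w)));
     sum(sum(gx(w,w).*gy(w,w))),  sum(sum(gy(w,w).^2))];   % accumulated outer products
lambda = eig(A)                               % both eigenvalues are large -> corner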

Classification of image points using the eigenvalues: if both eigenvalues are small the neighborhood is flat, if one is much larger than the other the point lies on an edge, and if both are large the point is a corner.
Harris and Stephens realized this and designed an elegant function:

R = det(A) - alpha * (trace(A))^2

where alpha is an empirical constant, typically alpha = 0.04 to 0.06.
The equation avoids computing the eigenvalues of the matrix explicitly, thanks to the following equivalences:

det(A) = lambda1 * lambda2
trace(A) = lambda1 + lambda2
So we only need to evaluate the corner response R:

Classification of image points using the corner response R: R is large and positive for corners, negative with large magnitude for edges, and small in magnitude for flat regions.
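As a quick sanity check of the det/trace identities above (the matrix A and the alpha value below are just example numbers of mine), R computed from the eigenvalues and R computed from the determinant and trace agree:

A = [5 2; 2 3];                                  % example auto-correlation matrix
alpha = 0.04;
lambda = eig(A);
R_eig = lambda(1)*lambda(2) - alpha*(lambda(1) + lambda(2))^2;
R_det = det(A) - alpha*trace(A)^2;
fprintf('R from eigenvalues: %.4f, from det/trace: %.4f\n', R_eig, R_det);   % both 8.4400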

Enough theory, so let me show you my code. Here is an example of a main Matlab program that you can copy, paste, and adapt to your own program. Because I use my webcam as an image source and that part of the code will differ in your case, let's assume you already have an RGB image IM:

sigma = 1;      % differentiation scale; example value, tune for your image
sigmai = 2;     % integration (window) scale; example value, tune for your image

IM2 = rgb2gray(IM);     % uint8 grayscale image
img = im2double(IM2);   % double in [0, 1]
response = harris_response(img, sigma, sigmai);
nms = nonmaxsup2d(response, 0.0001^2);

[y, x] = find(nms);
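To check the result visually, you can overlay the detected points on the original image. This is just a minimal sketch of mine (imshow needs the Image Processing Toolbox; any other display function works as well):

figure;
imshow(IM);                           % original RGB image
hold on;
plot(x, y, 'r+', 'MarkerSize', 8);    % find() returns rows in y and columns in x
hold off;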
So, when you apply the "find" Matlab function you retrieve the "y" (row) and "x" (column) coordinates, in matrix coordinates, of the pixels that are interest points. Here is the code of the "harris_response" function, which computes the Harris response for each pixel of the input image img. For the computation I use the derivative of a Gaussian with standard deviation sigma, and a Gaussian window function with standard deviation sigmai:

function [response]=harris_response(img, sigma, sigmai)
% Derivative-of-Gaussian filters (differentiation scale sigma)
x = ceil(-(2*(sigma*3.0)+1)):ceil(2*(sigma*3.0)+1);
bias = max(x);
for x = min(x):1:max(x)
    % the array index has to be positive, hence the bias shift
    G(x+bias+1) = (1/(sigma*sqrt(2*pi)))*exp(-x^2/(2*(sigma^2)));       % Gaussian
    D(x+bias+1) = (-1/(sigma^3*sqrt(2*pi)))*x*exp(-x^2/(2*sigma^2));    % its first derivative
end
s = D'*G;                      % separable derivative-of-Gaussian kernel
gx = conv2(img, s', 'same');   % gradient in x
gy = conv2(img, s, 'same');    % gradient in y

% Gaussian window function (integration scale sigmai)
x = ceil(-(2*(sigmai*3.0)+1)):ceil(2*(sigmai*3.0)+1);
bias = max(x);
for x = min(x):1:max(x)
    G1(x+bias+1) = (1/(sigmai*sqrt(2*pi)))*exp(-x^2/(2*(sigmai^2)));
end
s1 = G1'*G1;

% Accumulate the outer products of the gradients (auto-correlation matrix)
gx2 = gx.^2;
gy2 = gy.^2;
gxy = gx.*gy;
Ix2 = conv2(gx2, s1, 'same');
Iy2 = conv2(gy2, s1, 'same');
Ixy = conv2(gxy, s1, 'same');

% Harris-Stephens corner response: det(A) - k*trace(A)^2
k = 0.04;
response = (Ix2.*Iy2 - Ixy.^2) - k*(Ix2 + Iy2).^2;

And here is the code for the "nonmaxsup2d" function, which thresholds the response and performs non-maximum suppression in a 3x3 neighborhood:

function [maximg]=nonmaxsup2d(response, thresh)
[imax, jmax] = size(response);

% Thresholding: keep only responses above thresh (maximg1 has to be initialized)
maximg1 = zeros(imax, jmax);
for i = 1:imax
    for j = 1:jmax
        if (response(i,j) > thresh)
            maximg1(i,j) = response(i,j);
        end
    end
end

% Non-maximum suppression: a point survives only if it is strictly greater
% than all eight of its neighbours
maximg = zeros(imax, jmax);
for i = 2:imax-1
    for j = 2:jmax-1
        if (maximg1(i,j)>maximg1(i-1,j-1) && maximg1(i,j)>maximg1(i,j-1) && ...
            maximg1(i,j)>maximg1(i+1,j-1) && maximg1(i,j)>maximg1(i-1,j) && ...
            maximg1(i,j)>maximg1(i+1,j) && maximg1(i,j)>maximg1(i-1,j+1) && ...
            maximg1(i,j)>maximg1(i,j+1) && maximg1(i,j)>maximg1(i+1,j+1))
            maximg(i,j) = 1;
        end
    end
end
end
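As a side note, if you have the Image Processing Toolbox the same thresholding and non-maximum suppression can be written without explicit loops. The following is only a sketch of mine (the function name nonmaxsup2d_vec is made up), not part of the original assignment:

function [maximg] = nonmaxsup2d_vec(response, thresh)
% Vectorized variant of nonmaxsup2d (illustration only, requires imdilate
% from the Image Processing Toolbox).
r = response .* (response > thresh);              % zero out weak responses
nb = [1 1 1; 1 0 1; 1 1 1];                       % 8-neighbourhood, centre excluded
localmax = imdilate(r, strel('arbitrary', nb));   % maximum of the 8 neighbours
maximg = double(r > localmax);                    % strictly greater, as in the loops above
maximg([1 end], :) = 0;                           % the loop version ignores the image border
maximg(:, [1 end]) = 0;
end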

Test image of my hand used for the feature point search.