Talk:Corner detection
Recent changes
Did some minor editing here and there, but most importantly squared the trace term in the expressions for the corner measure to make them consistent with the literature. --KYN 13:22, 6 May 2006 (UTC)
Oops, my bad... it was a typo; in the previous step I had the square. Retardo 20:07, 6 May 2006 (UTC)
Requested move
I suggest that the article be moved to the heading "Interest point". The reason is that it is generally accepted that all the methods which are described here detect general interest points rather than corners specifically. There is no reason why Wikipedia must add to this confusion by presenting these methods under the heading "Corner detection". Also, the heading "Interest point" is more general than "Interest point detection" and can include aspects of interest points other than detection, e.g., tracking or other applications which use the detected points. --KYN 18:04, 31 August 2006 (UTC)
I agree RE: interest points. I think there should be a short page on corner detection explaining the slightly confused terminology, but otherwise redirecting to interest points. In practice, even brand new papers call it corner detection. Serviscope Minor 20:35, 31 August 2006 (UTC)
There are other interest point operators than those that can be referred to as corner detectors
In the computer vision literature, there are several blob detectors, for example the scale-normalized Laplacian, the scale-normalized determinant of the Hessian, as well as a hybrid operator "Hessian-Laplace", which uses the determinant of the Hessian for spatial selection and the scale-normalized Laplacian for scale selection. The appropriate place to put these operators would be under the heading "blob detection", which is referred to from the page "scale-space". That page has, however, not been written yet.
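For concreteness, here is a minimal sketch of the scale-normalized Laplacian blob detector mentioned above, assuming NumPy/SciPy and an arbitrary threshold; it is an illustration, not any particular author's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def scale_normalized_laplacian(image, sigmas):
    """Stack of t * (Laplacian of Gaussian) responses, one per scale, with t = sigma^2."""
    image = image.astype(float)
    return np.stack([(s ** 2) * gaussian_laplace(image, s) for s in sigmas])

def laplacian_blobs(image, sigmas, threshold=0.1):
    """Blobs as local extrema of |scale-normalized Laplacian| over space and scale."""
    response = np.abs(scale_normalized_laplacian(image, sigmas))
    # 3x3x3 non-maximum suppression over (scale, y, x); the threshold is illustrative.
    peaks = (response == maximum_filter(response, size=3)) & (response > threshold)
    scale_idx, ys, xs = np.nonzero(peaks)
    return [(y, x, sigmas[k]) for k, y, x in zip(scale_idx, ys, xs)]
```

The determinant-of-Hessian and Hessian-Laplace operators follow the same pattern, with a different differential expression at the spatial selection stage.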
If the current page on "corner detection" is moved to "interest point", then the scope of the article would have to be extended substantially from the current scope based on the Harris operator. Tpl 13:47, 2 September 2006 (UTC)
- That is my intention. This article started with only the Harris operator, but given its current content it seems more appropriate to present a discussion about what interest points are (there is something on this already) and how they are used. In particular, a new heading can cover different ways of extracting the image coordinate (x,y) for an interest point. Also, the difference between point and blob can be discussed. The transformation from image gray values (typically) to a set of image coordinates should also be discussed. For example, if we only threshold the response from Harris, we get a blob of pixels. If we instead try to estimate the local maxima, we get a pixel coordinate, perhaps even with sub-pixel accuracy if certain measures have been taken. Non-max suppression should also be mentioned (a rough sketch is given after these notes). Then there can also be a list of detection methods, more or less like in the current article. Alternatively, there could be one (shorter) article for each specific method and only links from the new page. --KYN 17:23, 2 September 2006 (UTC)
- Note that some of the methods mentioned here have applications in different areas. For example, the Tomasi-Kanade or Shi-Tomasi stuff was originally used for stereo image registration, but has also been used for tracking a region in an image sequence, and can of course also be used for finding interest points in a single image. From that perspective, it could make sense to develop each individual method on a page of its own, describing various details and their applications. There can also be survey articles, such as "interest point", which describes the concept from a general point of view, presents a list of methods which can be used, and refers the reader to the page of each specific method for the details. --KYN 17:23, 2 September 2006 (UTC)
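As a rough sketch of the thresholding versus local-maxima distinction mentioned in the first note (assuming NumPy/SciPy and a precomputed corner response map, e.g. from a Harris-type operator):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def threshold_response(response, threshold):
    """Plain thresholding: yields connected blobs of pixels, not single points."""
    return response > threshold

def local_maxima(response, threshold, window=3):
    """Non-max suppression: keep only pixels that are the maximum within their window."""
    peaks = (response == maximum_filter(response, size=window)) & (response > threshold)
    ys, xs = np.nonzero(peaks)
    return list(zip(ys, xs))  # one (y, x) coordinate per interest point
```

Sub-pixel accuracy can then be obtained, for example, by fitting a quadratic to the 3x3 neighbourhood of each retained maximum.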
Thanks for your reply. I could start writing an article on blob detection that describes a number of the main blob detectors in the literature, in order to clear up a number of common misunderstandings and also to show how they are respectively related and how they differ. Then, that material could be a better starting point for a discussion on whether the articles on corner detection and blob detection should be merged or not. I think that I could do the writing during next week, not today however. Tpl 06:23, 3 September 2006 (UTC)
I think MSER is the best candidate for blob detection. Hessian interest points look pretty similar to Harris interest points, in practice. Also, the LoG/DoG detector is arguably a blob detector (it's matched filtering for LoG-shaped blobs), but in practice, it's still referred to as a corner or interest point detector. Also, there aren't any corner detectors I know of which aren't really interest point detectors. There are some genuine corner detectors which detect corners (ie sharp bends) in detected edges, but I haven't seen them referenced (other than in surveys) in recent work.
Serviscope Minor 15:20, 2 September 2006 (UTC)
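As an illustration of the matched-filtering remark above, here is a minimal sketch of the DoG response, which approximates the scale-normalized Laplacian up to a constant factor (assumes SciPy; the factor k = 1.6 is only a common choice, not prescribed by any of the papers discussed here):

```python
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma, k=1.6):
    """DoG(sigma) = G(k*sigma)*I - G(sigma)*I, roughly (k - 1) * sigma^2 * LoG response."""
    image = image.astype(float)
    return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)
```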
Now, there is a first outline of an article about blob detection. Four commonly used blob detectors based on differential expressions are described in sufficient detail, and headers have been added for two other important notions of blobs based on local extrema with extent (including MSER). Tpl 17:17, 4 September 2006 (UTC)
This description has now been complemented by brief descriptions of two extremum-based blob detection methods. Tpl 18:16, 4 September 2006 (UTC)
Now, I think that it should be easier to make an informed decision whether the articles on corner detection and blob detection should be merged and transferred to an article on interest point detection, or whether they should be kept separate. From my point of view, a division into two articles is more informative, provided that cross-references are kept and explanatory comments are given on the notion of interest points.
There is still room for extending these articles with additional corner and/or blob detectors. Regarding the area of feature detection, there are also articles on edge detection and ridge detection. Tpl 08:03, 5 September 2006 (UTC)
Affine invariance (or not)
I don't want to contaminate the article with my views before discussion has taken place on this.
With the typical implementation of affine-adapted interest points, especially Harris-affine points, the resulting detector is not affine invariant. This is because a search through affine space (unlike scale space) is too expensive.
Any successfully detected points are invariant to affine transformations, in that the affine ellipse which can be drawn around them will more or less cover the same part of the image even after affine transformations. However, the implementation relies on multiscale feature detection, followed by iterative affine adaptation. The normal Harris detector is not particularly invariant (or repeatable) under affine transformations of the image. Since this is the first step, it puts an upper bound on the "affine invariantness" of the overall algorithm. That is, under affine transformations, many points will not be detected repeatably. Serviscope Minor 15:51, 5 September 2006 (UTC)
You are right in the observation that the commonly used Euclidean and scale invariant preprocessing stage to affine shape adaptation is not invariant to the full affine group. The correct statement of the affine shape adaptation is that if a fixed point can be found for the affine shape adaptation algorithm, then the resulting image features are affine invariant. This statement is also made explicitly in the original references (Lindeberg and Garding 1994, 1997). In practice this implies that affine transformations with moderate deviations from the similarity group will imply reasonably high repeatability of the image features, while almost degenerate affine transformations will imply substantial problems. Nevertheless, the overall approach is highly useful for applications such as wide baseline stereo matching. Tpl 18:09, 5 September 2006 (UTC)
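To make the fixed-point condition concrete, here is a minimal sketch (assuming NumPy/SciPy; parameter values are only illustrative) of the second-moment matrix and the isotropy measure that the affine shape adaptation iterates on:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_matrix(patch, sigma_d=1.0, sigma_i=3.0):
    """Windowed second-moment matrix mu evaluated at the patch centre."""
    L = gaussian_filter(patch.astype(float), sigma_d)
    Ly, Lx = np.gradient(L)
    centre = tuple(s // 2 for s in patch.shape)
    w = lambda img: gaussian_filter(img, sigma_i)[centre]  # Gaussian-weighted average at the centre
    return np.array([[w(Lx * Lx), w(Lx * Ly)],
                     [w(Lx * Ly), w(Ly * Ly)]])

def isotropy(mu, eps=1e-12):
    """Eigenvalue ratio of mu; values close to 1 indicate that the fixed point has been reached."""
    lam = np.linalg.eigvalsh(mu)  # ascending order
    return lam[0] / (lam[1] + eps)
```

The full scheme then warps the local neighbourhood by mu^(-1/2), recomputes mu, and repeats until this ratio is close to one; points for which no such fixed point is found are typically discarded, which is where the repeatability problems under nearly degenerate affine transformations show up.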
Since the text on affine shape adaptation is much more general than the scope of this article, I moved it to a separate article affine shape adaptation. Besides corner detection and blob detection, affine shape adaptation also applies to texture segmentation, texture classification and texture recognition. Tpl 10:08, 6 September 2006 (UTC)
Implementation
Do you think it's reasonable on a page like this to have some external links to implementations?
Here are my thoughts, since I'm not in the business of endorsing anyone's code in particular.
Some detectors have sample implementations by the authors, eg SUSAN, DoG (in SIFT), Harris-Laplace. These take precedence, since they may have details not exactly present in the paper and all results _should_ be reproducible with these implementations.
Other detectors (eg Harris, Shi-Tomasi) have very stable implementations in certain libraries, eg Intel OpenCV, and these libraries are sufficiently widely used that they're not going to be disappearing anytime soon (a minimal usage sketch is given below).
If you concur that this section is reasonable, then I'll start adding links, noting whether they are the authors' sample implementations or not. Serviscope Minor 16:56, 8 September 2006 (UTC)
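As an illustration of the second case, a minimal usage sketch of the OpenCV routines (assuming the cv2 Python bindings and some grayscale test image on disk; the parameter values are just common defaults, not recommendations):

```python
import cv2
import numpy as np

gray = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)  # hypothetical test image

# Harris corner response map (to be thresholded / non-max suppressed afterwards)
harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

# Shi-Tomasi ("good features to track"): directly returns corner coordinates
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)
```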
Move, etc
I marked this article some time ago for moving its title to something like "interest point". Given that there now is an article also on "blob detection", I would like to bring some order to the overall presentation. Here are my proposals:
- Parts of the content of this article (Corner detection) are moved to a new article "Interest point", which is intended to give a general introduction to this topic, describe applications for interest points and also provide a list of methods for detecting interest points. This list would probably include most of those which are now found in the "Corner detection" article. My proposal is that they are presented at a general level; technical details are not presented in this list of methods.
- The technical details of each of the interest point detection methods are put into separate articles, one article per method.
- The relation between "blobs" and "interest points" needs to be sorted out. Personally, I don't know if they should be kept separate or if they should be presented in the same article. Any ideas? Either way, the distinctions or similarities need to be discussed.
- I also propose that the current article "Blob detection" is renamed to "Blob (computer vision)". Detection is only one aspect of blobs which should be discussed in that article. Applications and general rationale for why we should worry about blobs are other aspects which also should be presented. Detection of blobs should rather be a section in that article.
--KYN 20:50, 14 September 2006 (UTC)
Relationship between blobs and interest points: Well, there's definitely an intersection there. I've never heard of MSER referred to as interest points, or Harris points as blobs, but DoG/LoG features fall happily into either camp. Maybe the place to cover this is in a generic "Features" article. Features of interest include edges (1D), interest points (0D or 2D depending on your inclination), blobs and regions. The thing is that all of these features share the same roles (eg matching them for various reasons), so it might be worth dealing with all of them together. As well as sharing similar uses, they should all have the same kind of properties (eg repeatability). That also sidesteps the issue of "is a given feature detector a corner detector or a blob detector".
One could then have a list under each of the headings (interest point/corner, blob, etc), pointing to the relevant article. That kind of implies that I agree on having each detector in its own article. One can then have detectors under multiple headings.