{{Technical|date=September 2009}}
{{FeatureDetectionCompVisNavbox}}
In the field of [[computer vision]], '''blob detection''' refers to mathematical methods that are aimed at detecting regions in a [[digital image]] that differ in properties, such as brightness or color, compared to areas surrounding those regions. Informally, a '''blob''' is a region of a digital image in which some properties are constant or vary within a prescribed range of values; all the points in a blob can be considered in some sense to be similar to each other.

Given some property of interest expressed as a function of position on the digital image, there are two main classes of blob detectors: (i) ''[[Differential calculus|differential]] methods'', which are based on derivatives of the function with respect to position, and (ii) ''methods based on local [[Maxima and minima|extrema]]'', which are based on finding the local maxima and minima of the function. With the more recent terminology used in the field, these detectors can also be referred to as ''interest point operators'', or alternatively interest region operators (see also [[interest point detection]] and [[corner detection]]).

There are several motivations for studying and developing blob detectors. One main reason is to provide complementary information about regions, which is not obtained from [[edge detection|edge detectors]] or [[corner detection|corner detectors]]. In early work in the area, blob detection was used to obtain regions of interest for further processing. These regions could signal the presence of objects or parts of objects in the image domain with application to [[object recognition]] and/or object [[video tracking|tracking]]. In other domains, such as [[Image histogram|histogram]] analysis, blob descriptors can also be used for peak detection with application to [[segmentation (image processing)|segmentation]]. Another common use of blob descriptors is as main primitives for [[wikt:texture|texture]] analysis and texture recognition. In more recent work, blob descriptors have found increasingly popular use as [[interest point detection|interest points]] for wide baseline [[image registration|stereo matching]] and to signal the presence of informative image features for appearance-based object recognition based on local image statistics. There is also the related notion of [[ridge detection]] to signal the presence of elongated objects.

==The Laplacian of Gaussian==

One of the first and also most common blob detectors is based on the [[Laplacian]] of the Gaussian (LoG). Given an input image <math>f(x, y)</math>, this image is [[Convolution|convolved]] with a Gaussian kernel

:<math>g(x, y, t) = \frac{1}{2\pi t} e^{-\frac{x^2 + y^2}{2 t}}</math>

at a certain scale <math>t</math> to give a [[scale space representation]] <math>L(x, y; t) = g(x, y, t) * f(x, y)</math>. Then, the [[Laplacian]] operator

:<math>\nabla^2 L = L_{xx} + L_{yy}</math>

is computed, which usually results in strong positive responses for dark blobs of extent <math>\sqrt{2t}</math> and strong negative responses for bright blobs of similar size. A main problem when applying this operator at a single scale, however, is that the operator response depends strongly on the relationship between the size of the blob structures in the image domain and the size of the Gaussian kernel used for pre-smoothing. In order to automatically capture blobs of different (unknown) size in the image domain, a multi-scale approach is therefore necessary.
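As an illustration of the single-scale operator, the response can be computed with standard numerical libraries. The following is a minimal sketch in Python (assuming NumPy and SciPy are available; the function name is only illustrative), treating <math>t</math> as the variance of the Gaussian kernel:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

def laplacian_of_gaussian(image, t):
    """Single-scale Laplacian-of-Gaussian response at scale t (Gaussian variance)."""
    sigma = np.sqrt(t)
    # gaussian_laplace smooths with a Gaussian of standard deviation sigma
    # and then applies the Laplacian operator L_xx + L_yy.
    return ndimage.gaussian_laplace(image.astype(float), sigma=sigma)

# Dark blobs of extent roughly sqrt(2 t) give strong positive responses,
# bright blobs of similar size give strong negative responses.
response = laplacian_of_gaussian(np.random.rand(128, 128), t=8.0)
</syntaxhighlight>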

A straightforward way to obtain a ''multi-scale blob detector with automatic scale selection'' is to consider the ''scale-normalized Laplacian operator''

:<math>\nabla^2_{norm} L(x, y; t) = t (L_{xx} + L_{yy})</math>

and to detect ''scale-space maxima/minima'', that is, points that are ''simultaneously local maxima/minima of <math>\nabla^2_{norm} L</math> with respect to both space and scale'' (Lindeberg 1994, 1998). Thus, given a discrete two-dimensional input image <math>f(x, y)</math>, a three-dimensional discrete scale-space volume <math>L(x, y, t)</math> is computed, and a point is regarded as a bright (dark) blob if the value at this point is greater (smaller) than the values of all its 26 neighbours in space and scale. Thus, simultaneous selection of interest points <math>(\hat{x}, \hat{y})</math> and scales <math>\hat{t}</math> is performed according to

:<math>(\hat{x}, \hat{y}; \hat{t}) = \operatorname{argmaxminlocal}_{(x, y; t)}(\nabla^2_{norm} L(x, y; t))</math>.

Note that this notion of blob provides a concise and mathematically precise operational definition of the notion of "blob", which directly leads to an efficient and robust algorithm for blob detection. Some basic properties of blobs defined from scale-space maxima of the normalized Laplacian operator are that the responses are covariant with translations, rotations and rescalings in the image domain. Thus, if a scale-space maximum is attained at a point <math>(x_0, y_0; t_0)</math>, then under a rescaling of the image by a scale factor <math>s</math> there will be a scale-space maximum at <math>(s x_0, s y_0; s^2 t_0)</math> in the rescaled image (Lindeberg 1998). This property, which is highly useful in practice, implies that besides the specific topic of Laplacian blob detection, ''local maxima/minima of the scale-normalized Laplacian are also used for scale selection in other contexts'', such as in [[corner detection]], scale-adaptive feature tracking (Bretzner and Lindeberg 1998), in the [[scale-invariant feature transform]] (Lowe 2004), as well as in other image descriptors for image matching and [[object recognition]].
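A minimal sketch of this detection step in Python (assuming NumPy and SciPy; the function name and the geometric sampling of scales are illustrative choices, not prescribed by the method) builds the volume of scale-normalized Laplacian responses and keeps points whose response is not exceeded, or not undercut, by any of the 26 neighbours:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

def laplacian_blobs(image, t_values):
    """Blobs as scale-space extrema of the scale-normalized Laplacian t (L_xx + L_yy)."""
    image = image.astype(float)
    # 3-D volume of scale-normalized Laplacian responses, one slice per scale.
    volume = np.stack([t * ndimage.gaussian_laplace(image, np.sqrt(t))
                       for t in t_values])
    # Points that are not exceeded (maxima) or not undercut (minima) by any of
    # their 26 neighbours in the 3x3x3 neighbourhood over space and scale;
    # ties are included in this simplified sketch, and in practice a threshold
    # on the response magnitude is also applied.
    maxima = (volume == ndimage.maximum_filter(volume, size=3))
    minima = (volume == ndimage.minimum_filter(volume, size=3))
    return [(x, y, t_values[k]) for k, y, x in zip(*np.nonzero(maxima | minima))]

# Example: scales sampled geometrically between t = 2 and t = 64.
points = laplacian_blobs(np.random.rand(128, 128), np.geomspace(2.0, 64.0, 12))
</syntaxhighlight>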

==The difference of Gaussians approach==

From the fact that the [[scale space representation]] <math>L(x, y, t)</math> satisfies the [[diffusion equation]]

:<math>\partial_t L = \frac{1}{2} \nabla^2 L</math>

it follows that the Laplacian of the Gaussian operator <math>\nabla^2 L(x, y, t)</math> can also be computed as the limit case of the difference between two Gaussian smoothed images ([[scale space representation]]s)

:<math>\nabla^2_{norm} L(x, y; t) \approx \frac{t}{\Delta t} \left( L(x, y; t+\Delta t) - L(x, y; t-\Delta t) \right)</math>.

In the computer vision literature, this approach is referred to as the [[Difference of Gaussians]] (DoG) approach. Apart from minor technicalities, this operator is in essence an approximation of the Laplacian operator. In a similar fashion as for the Laplacian blob detector, blobs can be detected from scale-space extrema of differences of Gaussians; see Lindeberg (2012) for the explicit relation between the difference-of-Gaussian operator and the scale-normalized Laplacian operator. This approach is, for instance, used in the [[Scale-invariant feature transform|SIFT]] algorithm (see Lowe 2004).
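A minimal sketch of this approximation in Python (assuming NumPy and SciPy; the particular values of <math>t</math> and <math>\Delta t</math> are only illustrative) replaces the Laplacian computation by a central difference of two Gaussian-smoothed images:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

def dog_response(image, t, dt):
    """Approximate the scale-normalized Laplacian at scale t by a central
    difference of Gaussian-smoothed images, following the formula above."""
    image = image.astype(float)
    L_plus = ndimage.gaussian_filter(image, sigma=np.sqrt(t + dt))
    L_minus = ndimage.gaussian_filter(image, sigma=np.sqrt(t - dt))
    return (t / dt) * (L_plus - L_minus)

response = dog_response(np.random.rand(128, 128), t=8.0, dt=1.0)
</syntaxhighlight>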

==The determinant of the Hessian==

By considering the scale-normalized determinant of the Hessian, also referred to as the [[Monge–Ampère equation|Monge–Ampère operator]],

:<math>\operatorname{det} H L(x, y; t) = t^2 (L_{xx} L_{yy} - L_{xy}^2),</math>

where <math>H L</math> denotes the [[Hessian matrix]] of <math>L</math>, and then detecting scale-space maxima of this operator, one obtains another straightforward differential blob detector with automatic scale selection that also responds to saddles (Lindeberg 1994, 1998):

:<math>(\hat{x}, \hat{y}; \hat{t}) = \operatorname{argmaxlocal}_{(x, y; t)}(\operatorname{det} H L(x, y; t))</math>.

The blob points <math>(\hat{x}, \hat{y})</math> and scales <math>\hat{t}</math> are also defined from an operational differential-geometric definition that leads to blob descriptors that are covariant with translations, rotations and rescalings in the image domain. In terms of scale selection, blobs defined from scale-space extrema of the determinant of the Hessian (DoH) also have slightly better scale selection properties under non-Euclidean affine transformations than the more commonly used Laplacian operator (Lindeberg 1994, 1998). In simplified form, the scale-normalized determinant of the Hessian computed from [[Haar wavelet]]s is used as the basic interest point operator in the [[SURF]] descriptor (Bay et al. 2006) for image matching and object recognition.
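A minimal sketch of this detector in Python (assuming NumPy and SciPy; the Gaussian-derivative filters and the scale sampling are illustrative implementation choices) computes the scale-normalized determinant of the Hessian over a set of scales and keeps scale-space maxima:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

def det_hessian_response(image, t):
    """Scale-normalized determinant of the Hessian, t^2 (L_xx L_yy - L_xy^2)."""
    sigma = np.sqrt(t)
    image = image.astype(float)
    # Second-order Gaussian derivatives of the scale-space representation;
    # axis 0 is y (rows) and axis 1 is x (columns).
    L_xx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    L_yy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    L_xy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    return t**2 * (L_xx * L_yy - L_xy**2)

def doh_blobs(image, t_values):
    """Blobs as scale-space maxima of det H L; in practice a threshold on the
    response is also applied to suppress weak maxima."""
    image = image.astype(float)
    volume = np.stack([det_hessian_response(image, t) for t in t_values])
    is_max = (volume == ndimage.maximum_filter(volume, size=3))
    return [(x, y, t_values[k]) for k, y, x in zip(*np.nonzero(is_max))]

blobs = doh_blobs(np.random.rand(128, 128), np.geomspace(2.0, 64.0, 12))
</syntaxhighlight>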

==The hybrid Laplacian and determinant of the Hessian operator (Hessian-Laplace)==

A hybrid operator between the Laplacian and the determinant of the Hessian blob detectors has also been proposed, where spatial selection is done by the determinant of the Hessian and scale selection is performed with the scale-normalized Laplacian (Mikolajczyk and Schmid 2004):

:<math>(\hat{x}, \hat{y}) = \operatorname{argmaxlocal}_{(x, y)}(\operatorname{det} H L(x, y; t))</math>

:<math>\hat{t} = \operatorname{argmaxminlocal}_{t}(\nabla^2_{norm} L(\hat{x}, \hat{y}; t))</math>

This operator has been used for image matching, object recognition as well as texture analysis.
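A minimal sketch of this hybrid scheme in Python (assuming NumPy and SciPy, and reusing the Gaussian-derivative filters from the sketch above; the names, the scale sampling and the neighbourhood handling are illustrative) localizes candidates at spatial maxima of the determinant of the Hessian and then keeps those whose scale-normalized Laplacian is extremal over neighbouring scales at that position:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

def hessian_laplace(image, t_values):
    """Hessian-Laplace sketch: spatial localization by det H L at each scale,
    scale selection by the scale-normalized Laplacian at that position."""
    image = image.astype(float)
    # Scale-normalized Laplacian over all scales, used for scale selection.
    norm_lap = np.stack([t * ndimage.gaussian_laplace(image, np.sqrt(t))
                         for t in t_values])
    points = []
    for i, t in enumerate(t_values):
        sigma = np.sqrt(t)
        L_xx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
        L_yy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
        L_xy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
        det_h = t**2 * (L_xx * L_yy - L_xy**2)
        # Candidate points: spatial maxima of det H L at this scale.
        spatial_max = (det_h == ndimage.maximum_filter(det_h, size=3))
        for y, x in zip(*np.nonzero(spatial_max)):
            # Keep the candidate if |normalized Laplacian| peaks over the
            # neighbouring scales at this position.
            profile = np.abs(norm_lap[:, y, x])
            if profile[i] == profile[max(i - 1, 0):i + 2].max():
                points.append((x, y, t))
    return points

points = hessian_laplace(np.random.rand(128, 128), np.geomspace(2.0, 64.0, 12))
</syntaxhighlight>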

==Affine-adapted differential blob detectors==

The blob descriptors obtained from these blob detectors with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain blob descriptors that are more robust to perspective transformations, a natural approach is to devise a blob detector that is ''invariant to affine transformations''. In practice, affine invariant interest points can be obtained by applying [[affine shape adaptation]] to a blob descriptor, where the shape of the smoothing kernel is iteratively warped to match the local image structure around the blob, or equivalently a local image patch is iteratively warped while the shape of the smoothing kernel remains rotationally symmetric (Lindeberg and Garding 1997; Baumberg 2000; Mikolajczyk and Schmid 2004; Lindeberg 2008/2009). In this way, we can define affine-adapted versions of the Laplacian/Difference of Gaussian operator, the determinant of the Hessian and the Hessian-Laplace operator (see also [[Harris-Affine]] and [[Hessian-Affine]]).

==Grey-level blobs, grey-level blob trees and scale-space blobs==

A natural approach to detect blobs is to associate a bright (dark) blob with each local maximum (minimum) in the intensity landscape. A main problem with such an approach, however, is that local extrema are very sensitive to noise. To address this problem, Lindeberg (1993, 1994) studied the problem of detecting local maxima with extent at multiple scales in [[scale space]]. A region with spatial extent defined from a watershed analogy was associated with each local maximum, as well as a local contrast defined from a so-called delimiting saddle point. A local extremum with extent defined in this way was referred to as a ''grey-level blob''. Moreover, by proceeding with the watershed analogy beyond the delimiting saddle point, a ''grey-level blob tree'' was defined to capture the nested topological structure of level sets in the intensity landscape, in a way that is invariant to affine deformations in the image domain and monotone intensity transformations. By studying how these structures evolve with increasing scales, the notion of ''scale-space blobs'' was introduced. Beyond local contrast and extent, these scale-space blobs also measured how stable image structures are in scale-space, by measuring their ''scale-space lifetime''.

It was proposed that regions of interest and scale descriptors obtained in this way, with associated scale levels defined from the scales at which normalized measures of blob strength assumed their maxima over scales, could be used for guiding other early visual processing. An early prototype of a simplified vision system was developed in which such regions of interest and scale descriptors were used for directing the focus-of-attention of an active vision system. While the specific technique that was used in these prototypes can be substantially improved with the current knowledge in computer vision, the overall general approach is still valid, for example in the way that local extrema over scales of the scale-normalized Laplacian operator are nowadays used for providing scale information to other visual processes.

===Lindeberg's watershed-based grey-level blob detection algorithm===

For the purpose of detecting ''grey-level blobs'' (local extrema with extent) from a watershed analogy, Lindeberg developed an algorithm based on ''pre-sorting'' the pixels, or alternatively connected regions having the same intensity, in decreasing order of the intensity values. Then, comparisons were made between nearest neighbours of either pixels or connected regions.

For simplicity, let us consider the case of detecting bright grey-level blobs and let the notation "higher neighbour" stand for "neighbour pixel having a higher grey-level value". Then, each step of the algorithm (carried out in decreasing order of intensity values) is based on the following classification rules:
# If a region has no higher neighbour, then it is a local maximum and will be the seed of a blob.
# Else, if it has at least one higher neighbour that is background, then it cannot be part of any blob and must be background.
# Else, if it has more than one higher neighbour and those higher neighbours are parts of different blobs, then it cannot be a part of any blob, and must be background.
# Else, it has one or more higher neighbours, which are all parts of the same blob. Then, it must also be a part of that blob.
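A minimal sketch of these classification rules in Python (assuming NumPy; the function name is illustrative, and plateaus of equal intensity are handled pixel-wise here rather than as connected regions, which is a simplification of the original algorithm) processes the pixels in decreasing order of intensity and assigns each pixel either a blob label or background:

<syntaxhighlight lang="python">
import numpy as np

BACKGROUND = 0

def grey_level_blobs(image):
    """Label bright grey-level blobs by applying the four classification
    rules above to pixels visited in decreasing order of intensity."""
    h, w = image.shape
    labels = -np.ones((h, w), dtype=int)        # -1 marks unprocessed pixels
    order = np.argsort(image, axis=None)[::-1]  # pixel indices, brightest first
    next_label = 1
    for idx in order:
        y, x = divmod(int(idx), w)
        # Labels of 8-connected neighbours with strictly higher grey level
        # (all of these have already been processed).
        higher = set()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and image[ny, nx] > image[y, x]:
                    higher.add(int(labels[ny, nx]))
        if not higher:
            labels[y, x] = next_label            # rule 1: seed a new blob
            next_label += 1
        elif BACKGROUND in higher or len(higher) > 1:
            labels[y, x] = BACKGROUND            # rules 2 and 3: background
        else:
            labels[y, x] = higher.pop()          # rule 4: join the single adjacent blob
    return labels
</syntaxhighlight>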

Compared to other watershed methods, the flooding in this algorithm stops once the intensity level falls below the intensity value of the so-called ''delimiting saddle point'' associated with the local maximum. However, it is rather straightforward to extend this approach to other types of watershed constructions. For example, by proceeding beyond the first delimiting saddle point a "grey-level blob tree" can be constructed. Moreover, the grey-level blob detection method was embedded in a [[scale space representation]] and performed at all levels of scale, resulting in a representation called the ''scale-space primal sketch''.

This algorithm with its applications in computer vision is described in more detail in Lindeberg's thesis<ref>[http://www.csc.kth.se/~tony/abstracts/CVAP84.html Lindeberg, T. (1991) ''Discrete Scale-Space Theory and the Scale-Space Primal Sketch'', PhD thesis, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44 Stockholm, Sweden, May 1991. (ISSN 1101-2250. ISRN KTH NA/P--91/8--SE) (The grey-level blob detection algorithm is described in section 7.1)]</ref> as well as in the monograph on scale-space theory<ref>[http://www.nada.kth.se/~tony/book.html Lindeberg, Tony, ''Scale-Space Theory in Computer Vision'', Kluwer Academic Publishers, 1994, ISBN 0-7923-9418-6]</ref> partially based on that work. Earlier presentations of this algorithm can also be found in.<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=139563 T. Lindeberg and J.-O. Eklundh, "Scale detection and region extraction from a scale-space primal sketch", in ''Proc. 3rd International Conference on Computer Vision'', (Osaka, Japan), pp. 416–426, Dec. 1990. (See Appendix A.1 for the basic definitions for the watershed-based grey-level blob detection algorithm.)]</ref><ref>T. Lindeberg and J.-O. Eklundh, "On the computation of a scale-space primal sketch", ''Journal of Visual Communication and Image Representation'', vol. 2, pp. 55–78, Mar. 1991.</ref> More detailed treatments of applications of grey-level blob detection and the scale-space primal sketch to computer vision and medical image analysis are given in.<ref>[http://www.nada.kth.se/~tony/abstracts/Lin92-IJCV.html Lindeberg, T.: Detecting salient blob-like image structures and their scales with a scale-space primal sketch: A method for focus-of-attention, ''International Journal of Computer Vision'', 11(3), 283–318, 1993.]</ref><ref>[http://www.csc.kth.se/cvap/abstracts/cvap223.html Lindeberg, T., Lidberg, Pär and Roland, P. E.: "Analysis of Brain Activation Patterns Using a 3-D Scale-Space Primal Sketch", ''Human Brain Mapping'', vol 7, no 3, pp 166–194, 1999.]</ref><ref>[http://brainvisa.info/pdf/mangin-AImed03.pdf Jean-François Mangin, Denis Rivière, Olivier Coulon, Cyril Poupon, Arnaud Cachia, Yann Cointepas, Jean-Baptiste Poline, Denis Le Bihan, Jean Régis, Dimitri Papadopoulos-Orfanos: "Coordinate-based versus structural approaches to brain image analysis". ''Artificial Intelligence in Medicine'' 30(2): 177–197 (2004)]</ref>

==Maximally stable extremal regions (MSER)==
{{Main|Maximally stable extremal regions}}
Matas et al. (2002) were interested in defining image descriptors that are robust under [[3D projection#Perspective projection|perspective transformations]]. They studied level sets in the intensity landscape and measured how stable these were along the intensity dimension. Based on this idea, they defined a notion of ''maximally stable extremal regions'' and showed how these image descriptors can be used as image features for [[Computer stereo vision|stereo matching]].

There are close relations between this notion and the above-mentioned notion of a grey-level blob tree. The maximally stable extremal regions can be seen as making a specific subset of the grey-level blob tree explicit for further processing.
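For practical experimentation, the OpenCV library provides an MSER implementation. A minimal sketch in Python (assuming OpenCV and NumPy are installed; the synthetic test image is purely illustrative):

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Synthetic 8-bit test image with a bright square on a dark background.
image = np.zeros((128, 128), dtype=np.uint8)
image[40:80, 40:80] = 200

# Detect maximally stable extremal regions; each region is returned as an
# array of pixel coordinates, together with a bounding box.
mser = cv2.MSER_create()
regions, bounding_boxes = mser.detectRegions(image)
print(len(regions), "regions detected")
</syntaxhighlight>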

==See also==
* [[Blob extraction]]
* [[Corner detection]]
* [[Affine shape adaptation]]
* [[Scale space]]
* [[Ridge detection]]
* [[Interest point detection]]
* [[Feature detection (computer vision)]]
* [[Harris-Affine]]
* [[Hessian-Affine]]
* [[Principal Curvature-Based Region Detector|PCBR]]

==References==
*{{Cite journal | author=Christopher Evans | title=Notes on the OpenSURF library | booktitle=Research into Robust Visual Feature Detection | url=http://www.chrisevansdev.com/opensurf.html }}
*{{Cite conference | author=H. Bay, T. Tuytelaars and L. van Gool | title=SURF: Speeded Up Robust Features | booktitle=Proceedings of the 9th European Conference on Computer Vision, Springer LNCS volume 3951, part 1 | pages=404–417 | year=2006 | url=http://www.vision.ee.ethz.ch/~surf/papers.html }}
*{{Cite journal | author=L. Bretzner and T. Lindeberg | title=Feature Tracking with Automatic Selection of Spatial Scales | journal=Computer Vision and Image Understanding | year=1998 | volume=71 | issue=3 | pages=385–392 | url=http://www.nada.kth.se/cvap/abstracts/cvap201.html | doi=10.1006/cviu.1998.0650 | format=abstract page }}
*{{Cite journal | author=T. Lindeberg | title=Detecting Salient Blob-Like Image Structures and Their Scales with a Scale-Space Primal Sketch: A Method for Focus-of-Attention | journal=International Journal of Computer Vision | year=1993 | volume=11 | issue=3 | pages=283–318 | url=http://www.nada.kth.se/~tony/abstracts/Lin92-IJCV.html | doi=10.1007/BF01469346 | format=abstract page }}
*{{Cite book | author=T. Lindeberg | title=Scale-Space Theory in Computer Vision | url=http://www.nada.kth.se/~tony/book.html | publisher=Springer | year=1994 | isbn=0-7923-9418-6 }}
*{{Cite journal | author=T. Lindeberg | title=Feature detection with automatic scale selection | journal=International Journal of Computer Vision | year=1998 | volume=30 | issue=2 | pages=77–116 | url=http://www.nada.kth.se/cvap/abstracts/cvap198.html | doi=10.1023/A:1008045108935 | format=abstract page }}
*{{Cite journal | author=T. Lindeberg and J. Garding | title=Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D structure | journal=Image and Vision Computing | year=1997 | volume=15 | pages=415–434 | url=http://www.nada.kth.se/~tony/abstracts/LG94-ECCV.html | doi=10.1016/S0262-8856(97)01144-X }}
*{{Cite journal | author=T. Lindeberg | title=Scale-space | chapter=Scale-Space | journal=Encyclopedia of Computer Science and Engineering (Benjamin Wah, ed), John Wiley and Sons | volume=IV | pages=2495–2504 | year=2008/2009 | doi=10.1002/9780470050118.ecse609 | url=http://www.nada.kth.se/~tony/abstracts/Lin08-EncCompSci.html | isbn=0-470-05011-X }}
*{{Cite journal | author=T. Lindeberg | title=Scale invariant feature transform | journal=Scholarpedia | pages=7(5):10491 | year=2012 | doi=10.4249/scholarpedia.10491 }}
*{{Cite journal | author=D. G. Lowe | title=Distinctive Image Features from Scale-Invariant Keypoints | journal=International Journal of Computer Vision | year=2004 | volume=60 | issue=2 | pages=91–110 | url=http://citeseer.ist.psu.edu/lowe04distinctive.html | doi=10.1023/B:VISI.0000029664.99615.94 }}
*{{Cite conference | author=J. Matas, O. Chum, M. Urban and T. Pajdla | title=Robust wide baseline stereo from maximally stable extremal regions | booktitle=British Machine Vision Conference | year=2002 | pages=384–393 | url=http://cmp.felk.cvut.cz/~matas/papers/matas-bmvc02.pdf }}
*{{Cite journal | author=K. Mikolajczyk and C. Schmid | title=Scale and affine invariant interest point detectors | year=2004 | journal=International Journal of Computer Vision | volume=60 | issue=1 | pages=63–86 | url=http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf | note=Integration of the multi-scale Harris operator with the methodology for automatic scale selection as well as with affine shape adaptation | doi=10.1023/B:VISI.0000027790.02288.f2 }}

{{Reflist}}

{{DEFAULTSORT:Blob Detection}}
[[Category:Feature detection]]