
Talk:Kernel (image processing)

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Hoehermann (talk | contribs) at 09:45, 20 June 2018 (Terminology/relationship: more elaborate comments about flipping). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


Add these Concepts

Concepts which should be added to the article:

Separability of the kernel, which can significantly increase algorithmic efficiency (though memory requirements also increase)

http://www.songho.ca/dsp/convolution/convolution2d_separable.html
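The efficiency gain from separability can be sketched in Python (a minimal illustration of my own, not taken from the linked source; the kernel and image values are made up): a separable K×K filter becomes two 1D passes, replacing K² multiplies per pixel with 2K.

```python
# Minimal sketch: a separable 2D kernel (outer product of two 1D
# kernels) can be applied as two cheaper 1D passes.

def conv1d(row, k):
    # 'valid'-mode 1D filtering of a list with kernel k
    n = len(k)
    return [sum(row[i + j] * k[j] for j in range(n))
            for i in range(len(row) - n + 1)]

def conv2d(img, K):
    # direct 'valid'-mode 2D filtering (K*K multiplies per output pixel)
    kh, kw = len(K), len(K[0])
    return [[sum(img[y + i][x + j] * K[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(len(img[0]) - kw + 1)]
            for y in range(len(img) - kh + 1)]

# Separable 3x3 binomial blur: outer product of [1, 2, 1] with itself.
kx = [1, 2, 1]
ky = [1, 2, 1]
K = [[a * b for b in kx] for a in ky]

img = [[(x * y + x) % 7 for x in range(6)] for y in range(6)]

direct = conv2d(img, K)

# Two-pass version: filter rows with kx, then columns with ky.
rows = [conv1d(r, kx) for r in img]
cols = list(zip(*rows))
two_pass_t = [conv1d(list(c), ky) for c in cols]
two_pass = [list(r) for r in zip(*two_pass_t)]

assert direct == two_pass  # identical results, fewer multiplies
```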

Convolution

Flipping of the kernel, which preserves commutativity and associativity (evidently...)

http://s000.tinyupload.com/index.php?file_id=00035872171331523574
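The commutativity/associativity point can be illustrated with a small Python sketch (my own illustration, not taken from the linked file): full discrete convolution reads the kernel in reverse, and that flip is exactly what makes the operation commute and associate, while unflipped cross-correlation does neither.

```python
# Minimal sketch: convolution flips the kernel; correlation does not.

def conv(f, g):
    # full discrete convolution: (f * g)[k] = sum_m f[m] * g[k - m]
    n = len(f) + len(g) - 1
    return [sum(f[m] * g[k - m] for m in range(len(f))
                if 0 <= k - m < len(g))
            for k in range(n)]

def corr(f, g):
    # cross-correlation: same sliding sum, but with the kernel unflipped
    return conv(f, g[::-1])

f, g, h = [1, 2, 3], [0, 1, -1], [2, 5]

# Convolution commutes and associates; correlation does not commute.
assert conv(f, g) == conv(g, f)
assert conv(conv(f, g), h) == conv(f, conv(g, h))
assert corr(f, g) != corr(g, f)

# For a symmetric kernel the flip is a no-op, so conv == corr.
s = [1, 2, 1]
assert conv(f, s) == corr(f, s)
```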

Terminology/relationship

Why is this termed a "kernel"? Is it simply an example (applied to image processing) of precisely something that was already termed a kernel in other pre-existing fields of mathematics? Is there an agreed definition? For example, does it cease to be a kernel if the input image is as small as the matrix being convolved with it? Cesiumfrog (talk) 02:09, 11 October 2015 (UTC)[reply]

And "convolution" seems to have both a generic meaning and a specific meaning involving "flipping" (as described above on this talk page). Is a "convolution matrix" not flipped? I am guessing here, but note there is no point in "flipping" the symmetric matrices used in this article. Can someone who knows about image processing check that?

I teach image processing and my students dug up this source. I have never heard of "flipping" being necessary, and neither have five colleagues of mine. But apparently, I learn something new every day. It is important to know that "flipping" means "mirror both ways" – not "transpose the matrix". Some sources like to hide this by relocating the origin and reversing the axis directions. Others show the already flipped kernel and do not mention the change at all.
It becomes obvious only if you compare the mathematical definitions of cross-correlation and convolution:

Cross-correlation: (K ⋆ I)(x, y) = Σᵢ Σⱼ K(i, j) · I(x + i, y + j)

Convolution: (K ∗ I)(x, y) = Σᵢ Σⱼ K(i, j) · I(x − i, y − j)
In convolution, the kernel's elements are read in reverse direction. This affects commutativity and separability, which in turn is important for fast implementations. I am still trying to figure out what this means for the practical aspects of image processing and how to teach this in the future. --Hoehermann (talk) 13:36, 13 June 2018 (UTC)[reply]
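The "mirror both ways, not transpose" distinction can be made concrete with a small Python sketch (my own illustration; the kernel and image values are arbitrary): flipping rotates the kernel 180 degrees, transposing does not, and convolution equals cross-correlation with the flipped kernel.

```python
# Minimal sketch: flipping a 2D kernel mirrors it along BOTH axes
# (a 180-degree rotation), which is not the same as transposing it.

def flip(K):
    return [row[::-1] for row in K[::-1]]

def transpose(K):
    return [list(col) for col in zip(*K)]

K = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

assert flip(K) == [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
assert transpose(K) == [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
assert flip(K) != transpose(K)

def corr2d(img, K):
    # 'valid'-mode cross-correlation: kernel applied without flipping
    kh, kw = len(K), len(K[0])
    return [[sum(img[y + i][x + j] * K[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(len(img[0]) - kw + 1)]
            for y in range(len(img) - kh + 1)]

def conv2d(img, K):
    # convolution = cross-correlation with the flipped kernel
    return corr2d(img, flip(K))

img = [[(3 * x + y) % 5 for x in range(5)] for y in range(5)]

# For this asymmetric kernel, flipping changes the result.
assert conv2d(img, K) != corr2d(img, K)
```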

Unsharp masking kernel might not be correct

I believe the Unsharp Masking kernel should have a central element of 220/256, not 476/256, so that the sum of the elements is 1 (not 2), just like the Sharpen kernel. The fix, if I am correct, is to replace "-476" with "-220". The answer depends on how the author used the Unsharp Masking operator. Engineer editor (talk) 19:24, 5 February 2018 (UTC)[reply]

The unsharp operation is: identity + amount * (identity - gaussian blur). Engineer editor (talk) 19:28, 5 February 2018 (UTC)[reply]
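The arithmetic in this formula can be checked with a short Python sketch (my own check, assuming amount = 1 and the standard 5×5 binomial Gaussian G = (1/256)·outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]); both assumptions are mine, not stated in the article). It computes the resulting center element and the sum of all elements using exact fractions.

```python
# Check, under assumed parameters, what "identity + amount * (identity -
# gaussian blur)" gives for the center element and the element sum.
from fractions import Fraction

g1 = [1, 4, 6, 4, 1]  # assumed 1D binomial kernel (sums to 16)
G = [[Fraction(a * b, 256) for b in g1] for a in g1]  # 5x5 Gaussian, sum 1

# identity kernel: 1 at the center, 0 elsewhere
I = [[Fraction(int(y == 2 and x == 2)) for x in range(5)] for y in range(5)]

amount = 1  # assumed strength
# unsharp = identity + amount * (identity - gaussian)
U = [[I[y][x] + amount * (I[y][x] - G[y][x]) for x in range(5)]
     for y in range(5)]

center = U[2][2]
total = sum(sum(row) for row in U)
print(center, total)  # → 119/64 1  (i.e. center 476/256, elements sum to 1)
```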