Talk:Nyquist–Shannon sampling theorem/Archive 3

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by MiszaBot I (talk | contribs) at 02:46, 2 December 2012 (Robot: Archiving 2 threads from Talk:Nyquist–Shannon sampling theorem.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Figs 3, 4 and 8 -- very unclear

These figures are not very clear and should be tidied up by someone more knowledgeable than myself.

Problems:

1. The term "images" is introduced without explanation. From my mostly-forgotten understanding of Shannon's theorem, it appears to me that these "images" are similar to what are called "sidebands" in radio communications. Whatever "images" are, they should be explained either in the text or the figures.

2. The lettering on the frequency scale is unclear, particularly for Fig 3. For example, what is supposed to be made of "-f+ BB"? Some of the lettering should be moved above the scale to get it out of the way of the others. —Preceding unsigned comment added by Guyburns (talkcontribs)

Fig 3 was an .svg file and was reverted to the .png file it previously was. They tell us that vector-graphics versions of the same drawn image are better because they are scalable without sampling artifacts, but in fact, because of some screw-up, the .svg files never appear the same after upload as when they were created, as one can see from the comments of the image creator at Commons. The letters were jammed together. The .png file is better.
Images are not quite the same as "sidebands" like in single-sideband or double-sideband AM communications. If your reference point is that of an amateur radio operator or similar, images from sampling are like what happens with what we used to call a "crystal calibrator", which began as a 100 kHz signal and then passed through a nonlinearity to create images of that 100 kHz at integer multiples of 100 kHz. The sampling operation is a non-linear operation of that kind: it takes the original spectrum and creates copies of that original spectrum centered at integer multiples of fs. Those copies are the images, and an ideal brickwall filter can remove the images while leaving the original spectrum unchanged. 70.109.185.199 (talk) 03:23, 27 April 2010 (UTC)
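The images described above can be demonstrated numerically. The sketch below (illustrative parameters, not taken from the discussion) models ideal impulse sampling of a 100 Hz tone at fs = 1000 Hz on a dense time grid standing in for continuous time, and shows spectral peaks at the original 100 Hz plus images at k·fs ± 100 Hz:

```python
import numpy as np

F = 8000          # dense "continuous-time" grid rate, Hz (assumed for illustration)
fs = 1000         # sampling rate, Hz
f0 = 100          # tone frequency, Hz
N = 8000          # one second of signal -> 1 Hz frequency resolution

t = np.arange(N) / F
x = np.cos(2 * np.pi * f0 * t)

# Model ideal (impulse) sampling: keep every (F/fs)-th sample, zero the rest.
p = np.zeros(N)
p[:: F // fs] = 1.0
xs = x * p

# Spectrum of the sampled signal: the original peak at f0, plus image
# copies of the spectrum centered at integer multiples of fs.
X = np.abs(np.fft.rfft(xs))
freqs = np.fft.rfftfreq(N, 1 / F)
peaks = freqs[X > 0.5 * X.max()]
print(peaks)   # -> [ 100.  900. 1100. 1900. 2100. 2900. 3100. 3900.]
```

An ideal brickwall lowpass at fs/2 = 500 Hz would keep only the 100 Hz component, recovering the original spectrum.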

'Undersampling and an application of it' looks a little dubious to me. It starts out quite interesting but further down some rambling starts. I'm not sure if an encyclopedia should link to this site. Even if it is legit, I don't think it's entirely on topic. In accordance with wiki guidelines to avoid external links I'd vote to remove it (and possibly use it as a reference rather than an external link in an article more focused on undersampling, in case it meets the quality guidelines). 91.113.115.233 (talk) 08:00, 18 August 2010 (UTC)

Angular frequency vs. Frequency

I think the article should show equivalent forms of the sampling theorem stated in terms of angular frequency, as many textbooks use this convention. I realize it's simple to convert, but still... 173.206.212.10 (talk) 03:39, 23 November 2010 (UTC)

You might be right, but sometimes I wish we would stamp out nearly all use of angular frequency in EE lit, because either the Fourier transform is not "unitary" (a scaling difference between the forward and inverse F.T.) or there is an awful scaling factor on both the forward and inverse transforms. Having a unitary transform with no scaling factor in front makes it easy to remember how specific transforms are scaled (like the rect() and sinc() functions) and makes theorems like Parseval's and duality much simpler. 71.169.180.100 (talk) 06:57, 23 November 2010 (UTC)
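The discrete analogue of this scaling issue is easy to demonstrate. NumPy's FFT uses the non-unitary convention by default (no factor on the forward transform, 1/N on the inverse), while the `norm="ortho"` option splits the factor symmetrically as 1/√N on each direction, making the transform unitary so that Parseval's theorem holds with no extra factor. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

# Default convention: forward FFT unscaled, inverse carries 1/N.
# Energy in the frequency domain is then N times the time-domain energy.
X_default = np.fft.fft(x)
print(np.allclose(np.sum(np.abs(X_default)**2), np.sum(np.abs(x)**2)))  # False

# Unitary convention: 1/sqrt(N) on both directions, so Parseval's
# theorem holds exactly, with no scaling factor to remember.
X_ortho = np.fft.fft(x, norm="ortho")
print(np.allclose(np.sum(np.abs(X_ortho)**2), np.sum(np.abs(x)**2)))    # True
```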

The Sampling Process Section

The article currently states: "In practice, for signals that are a function of time, the sampling interval is typically quite small, on the order of milliseconds, microseconds, or less."

This is not really true - it depends on the "practice" to which you are referring. What about long-term studies? Moreover, this sentence is not really helpful. It doesn't add any useful or insightful information to the article. — Preceding unsigned comment added by Wingnut123 (talkcontribs) 16:46, 22 March 2011 (UTC)

Sentence from intro removed

I removed the following sentence from the introductory section. It is not really related to the Nyquist-Shannon theorem and furthermore it is false.

A signal that is bandlimited is constrained in how rapidly it changes in time, and therefore how much detail it can convey in an interval of time.

Using results from Robert M. Young, An Introduction to Nonharmonic Fourier Series, Academic Press, 1980, one can show without much trouble that the following is true:

For every B>0, every f∈L2([a,b]) and every ε>0, there exists a function g∈L2(R) which is band-limited with bandwidth at most B and such that ‖f − g‖L2([a,b]) < ε.

So band-limited functions can change extremely rapidly and can convey arbitrary large amounts of detail in a given interval, as long as one doesn't care about what happens outside of the interval. AxelBoldt (talk) 22:56, 15 October 2011 (UTC)

Your point is taken, and the sentence should probably be removed (if not reworded). However, I think your example might actually weaken your argument. After all, g is not chosen uniformly over all B, a, and b. Moreover, your f is taken from L2, which constrains the behavior of the function substantially. So even though the wording of the phrase you removed was poor, I think there is still a relevant sentiment which could be re-inserted that does not go against your example (perhaps something about the information content of a bandlimited signal being captured entirely (and thus upper bounded) by a discrete set of samples with certain temporal characteristics). —TedPavlic (talk/contrib/@) 05:09, 16 October 2011 (UTC)
I've never liked that sentence much either, since it has no definite meaning. Even the information rate is not limited to be proportional to B, unless you include noise, so it's not clear what is intended by "how much detail it can convey". Dicklyon (talk) 05:13, 16 October 2011 (UTC)
  • g is not chosen uniformly over all B, a, and b.
True, g must depend on B, the bandwidth we desire, and on a and b, since that's the time-interval we are looking at. In a sense that is the whole point: if you focus solely on one time interval, any crazy behavior can be prescribed there for a band-limited function, and furthermore you can require the bandwidth to be as small as you want.
  • f is taken from L2, which constrains the behavior of the function substantially
That's correct, but L2[a,b] has a lot of detailed and extremely rapidly changing stuff in it. For example, you could encode all of Wikipedia as a bit string in an L2[0,1] function, where a 1 is encoded as a +∞ singularity and a 0 is a -∞ singularity. Choosing your ε wisely, you will find a band-limited g (with bandwidth as small as you want!) that still captures all the craziness that is Wikipedia.
AxelBoldt (talk) 18:44, 16 October 2011 (UTC)

No, the point is that "constrained in how rapidly it changes in time" relates to the size of the function. And indeed, the L2-norm of the derivative of a band-limited function (indeed any derivative) is bounded by the product of (a power of) the bandwidth and the L2-norm of the function itself.

Or the other way around: given such a band-limited approximation for the restriction to an interval, the behavior outside of the interval can and typically will be explosive. And more so with increasing accuracy of the approximation. --LutzL (talk) 15:32, 22 November 2011 (UTC)
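The derivative bound mentioned above is Bernstein's inequality: for a signal band-limited to B Hz, ‖f′‖₂ ≤ 2πB ‖f‖₂. A quick numerical check on a random band-limited periodic signal (the grid size and bandwidth below are illustrative choices, not from the discussion):

```python
import numpy as np

# Check Bernstein's inequality ||f'||_2 <= 2*pi*B*||f||_2 for a random
# band-limited signal of period 1 s sampled on N points.
N, B = 4096, 50          # grid size and bandwidth in Hz (illustrative)
rng = np.random.default_rng(1)

# Random spectrum supported only on 0 < f <= B Hz.
F = np.zeros(N // 2 + 1, dtype=complex)
F[1 : B + 1] = rng.standard_normal(B) + 1j * rng.standard_normal(B)
f = np.fft.irfft(F, n=N)

# Spectral differentiation: multiply by 2*pi*i*f (exact for a
# band-limited periodic signal).
freqs = np.fft.rfftfreq(N, d=1.0 / N)   # bin k is k Hz for a 1 s period
df = np.fft.irfft(2j * np.pi * freqs * np.fft.rfft(f), n=N)

ratio = np.linalg.norm(df) / np.linalg.norm(f)
print(ratio <= 2 * np.pi * B)   # True: how fast f can change is bounded by B
```

This is the precise sense in which a band-limited function is "constrained in how rapidly it changes" relative to its own size, consistent with the point that restricting to an interval evades the constraint only by letting the function grow outside it.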

Question

Isn't it the case that in practice, due to the possibility of accidentally sampling the ‘nodes’ of a wave, frequencies near the limit will suffer on average an effective linear volume reduction of 2/pi? — Preceding unsigned comment added by 82.139.90.173 (talk) 04:57, 6 March 2012 (UTC)

In practice, "the limit" is chosen significantly above the highest frequency in the passband of the anti-aliasing filter, to accommodate the filter's skirts. So I think the answer is "no". And I have no clue how you arrived at the 2/π factor. It might help to explain that.
--Bob K (talk) 05:42, 6 March 2012 (UTC)

It depends on the filters used. If you reconstruct with square pulses instead of sincs (or zero-order hold instead of impulses into a sinc filter), then you get a rolloff at Nyquist that's equal to an amplitude gain of 2/pi, which comes from evaluating the sinc in the frequency domain, since that's the transform of the rect. It's nothing to do with "accidentally sampling the nodes". Dicklyon (talk) 05:50, 6 March 2012 (UTC)
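The 2/π figure follows directly from evaluating the zero-order hold's frequency response, sinc(f/fs), at the Nyquist frequency f = fs/2, since the transform of a width-1/fs rect pulse is a sinc. A quick check (the sample rate is an arbitrary illustrative value; the gain at Nyquist is independent of it):

```python
import numpy as np

# Zero-order-hold reconstruction multiplies the spectrum by sinc(f/fs).
# At the Nyquist frequency f = fs/2 this is sinc(1/2) = sin(pi/2)/(pi/2) = 2/pi.
fs = 48000.0                  # illustrative sample rate, Hz
f = fs / 2                    # Nyquist frequency
gain = np.sinc(f / fs)        # np.sinc(x) = sin(pi*x)/(pi*x), normalized sinc
print(gain, 2 / np.pi)        # both ~0.63662, about -3.92 dB
```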