Talk:Nyquist–Shannon sampling theorem/Archive 2

From Wikipedia, the free encyclopedia

citation for theorem statement

Finell has requested a citation for the statement of the theorem. I agree that's a good idea, but the one we have stated now was not intended to be a quote, just a good statement of it. It may take a while to find a great quotable statement of the theorem, but I'll look for some. Here's one that's not too bad. Sometimes you also find incorrect ones, which say that a sampling frequency above twice the highest frequency is necessary for exact reconstruction; that's true for the particular reconstruction formula normally used, but it is not part of what the sampling theorem says. That's why I'm trying to be careful about wording that says necessary and/or sufficient in various places. Dicklyon 22:18, 29 October 2007 (UTC)

Nyquist-Shannon sampling theorem and quantum physics?

When I browsed through the article, I felt that there might be a connection to what is known as the "duality" of time and energy in quantum physics. Partly because the interrelation of limiting frequency and time spacing of signals seems to originate in the properties of the Fourier transform, and partly because from physics it is known that the longer you look, the more precise your measurement can be. Does anyone feel competent to comment on this (maybe even in the article)? Peeceepeh (talk) 10:50, 22 May 2008 (UTC)

The Fourier transform pair (time and frequency) is indeed a Heisenberg dual, i.e. the two variables satisfy the Heisenberg uncertainty relationship. I'm not sure if this is what you were alluding to.
I'm not sure I see a direct connection to the sampling theorem, though. Oli Filth(talk) 11:38, 22 May 2008 (UTC)
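A small numerical sketch of the uncertainty product mentioned here (assuming a Gaussian test pulse, an FFT-based spectrum estimate, and arbitrary grid choices), which attains the lower bound σ_t·σ_f = 1/(4π):

  import numpy as np

  dt = 0.01
  t = np.arange(-50, 50, dt)
  x = np.exp(-t**2 / 2.0)                      # Gaussian test pulse

  # RMS duration from the normalized energy density |x(t)|^2
  p_t = np.abs(x)**2
  p_t /= np.sum(p_t) * dt
  sigma_t = np.sqrt(np.sum(t**2 * p_t) * dt)

  # Spectrum (ordinary frequency f, via FFT) and RMS bandwidth
  X = np.fft.fftshift(np.fft.fft(x)) * dt
  f = np.fft.fftshift(np.fft.fftfreq(t.size, dt))
  df = f[1] - f[0]
  p_f = np.abs(X)**2
  p_f /= np.sum(p_f) * df
  sigma_f = np.sqrt(np.sum(f**2 * p_f) * df)

  print(sigma_t * sigma_f, 1 / (4 * np.pi))    # both approximately 0.0796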

Sampling and Noisy Channels

At Bell Labs, I was given the impression that "Shannon's Theorem" was about more than just the "Nyquist rate". It was also about how much information per sample was available, for an imperfect communication channel with a given signal-to-noise ratio. Kotelnikov should be mentioned here, because he anticipated this result. The primary aim of Kotelnikov and Shannon was to understand "transmission capacity".

The Nyquist rate was an old engineering rule of thumb, known long before Nyquist. The problem of sampling first occurred in the realm of facsimile transmission of images over telegraph wire, which began in the 19th century. By the 1910s, people understood the theory of scanning -- scanning is "analog" in the horizontal direction, but it "samples" in the vertical direction. People designed shaped apertures, for example raised cosine, which years later was discovered again as a filter window by Hamming (the head of division 1135 where I worked at Bell Labs, though he left shortly before I arrived).

And of course mathematicians also knew about the sampling rates of functions built up from bandlimited Fourier series. But again, I do not believe Whittaker or Cauchy or Nyquist discovered what one would call the "sampling theorem", because they did not consider the issue of channel noise or signals or messages.

Also, it seems folks have invented the term "Nyquist-Shannon" for this article. It is sometimes called "Shannon-Kotelnikov" theorem. You could argue for "Kotelnikov-Shannon", but I believe Shannon developed the idea of digital information further than the esteemed Vladimir Alexandrovich. I hesitate to comment here, after seeing the pages of argument above, but I hope you will consider consulting a professional electrical engineer about this, because I believe the article has some problems. DonPMitchell (talk) 22:29, 9 September 2008 (UTC)

See channel capacity, Shannon–Hartley theorem, and noisy channel coding theorem to connect with what you're thinking of. As for the invention of the name Nyquist–Shannon, that and Shannon–Nyquist are not nearly as common as simply Nyquist sampling theorem, but somewhat more sensible, seems to me; check these books and others; let us know if you find another more common or more appropriate term. Dicklyon (talk) 01:53, 10 September 2008 (UTC)

Nyquist–Shannon sampling theorem is not correct?

Dear Sir/Madam,

Sorry, but I think that Nyquist–Shannon sampling theorem about the sampling rate is not correct.

Could you please be so kind to see the papers below?

http://www.ieindia.org/pdf/88/88ET104.pdf

http://www.ieindia.org/pdf/89/89CP109.pdf

http://www.pueron.org/pueron/nauchnakritika/Th_Re.pdf

Also I believe the following rule could be applied:

"If everything else is neglected you could divide the sampling rate Fd at factor of four (4) in order to find the guaranteed bandwidth (-3dB) from your ADC in the worst case sampling of a sine wave without direct current component (DC= 0)."

I hope that this is useful to clarify the subject.

The feedback is welcomed. Best and kind regards

Petre Petrov ppetre@caramail.com —Preceding unsigned comment added by 78.90.230.235 (talk) 21:30, 24 December 2008 (UTC)

I think most mathematicians are satisfied that the proof of the sampling theorem is sound. At any rate, article talk pages are for discussing the article itself, not the subject in general... Oli Filth(talk|contribs) 22:00, 24 December 2008 (UTC)
Incidentally, I've had a brief look at those papers. They are pretty incoherent, and seem mostly concerned with inventing new terminology, and getting confused in the process. Oli Filth(talk|contribs) 22:24, 24 December 2008 (UTC)
I believe that Mr. Petrov is very confused, yet does have a point. He's confused firstly by thinking that the sampling theorem is somehow associated with its converse, which is that if you sample at a rate less than twice the highest frequency, information about the signal will necessarily be lost. As we said on this talk page before, that converse is not what the sampling theorem says and is not generally true. I think what Petrov has shown (confusingly) is a counter-example, disproving that converse; in particular, that if you know your signal is a sinusoid, you can reconstruct it with many fewer samples. This is not really a very interesting result and is not related to the sampling theorem, which, by the way, is true. Dicklyon (talk) 05:38, 25 December 2008 (UTC)
On second look, I think I misinterpreted. It seems to me now that Petrov is saying you need 4 samples per cycle (as opposed to 1/4, which I thought at first), and that the sampling theorem itself is not true. Very bogus. Dicklyon (talk) 03:12, 26 December 2008 (UTC)

Dear All, many thanks for your attention. Maybe I am confused, but I would like to say that perhaps you did not pay enough attention to the "Nyquist theorem" and the publications listed above. I'm really sorry if my English is not comprehensible enough. I would like to ask the following questions:

  1. Do you think that H. Nyquist really formulated a clearly stated "sampling theorem" applicable to real analog signal conversion and reconstruction?
  2. What is the mathematical equation of the simplest real band limited signal (SBLS)?
  3. Do you know particular cases when the SBLS can be reconstructed with a signal sampling factor (SSF) N = Fd/Fs < 2?
  4. Do you know particular cases when the SBLS cannot be reconstructed with SSF N = 2?
  5. Do you know something written by Nyquist, Shannon, Kotelnikov, etc. which gives you the possibility to evaluate the maximal amplitude errors when sampling the SBLS, SS or CS with N > 2? (Emax, etc. Please see the formulas and the tables in the papers.)
  6. What is the primary effect of sampling SS, CS and SBLS with SF N = 2?
  7. Don't you think that clarifying the terminology is one possible way to clarify the subject and to advance in the good direction?
  8. If the "classical sampling theorem" is not applicable to signal conversion and cannot pass the test of SBLS, SS and CS, then to what is it applicable and true?

I hope that you will help me to clarify the subject. BR P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 09:22, 25 December 2008 (UTC)

Petrov, I don't think anyone ever claimed that Nyquist either stated or proved the sampling theorem. Shannon did, as did some of the other guys mentioned, however. I'm most familiar with Shannon's proof, and with decades of successful engineering applications of the principle. Using the constructive reconstruction technique mentioned, amplitude errors are always zero when the conditions of the theorem are satisfied. If you can rephrase some of your questions in more normal terms, I might attempt answers. Dicklyon (talk) 03:12, 26 December 2008 (UTC)
He should take it to comp.dsp. They'll set him straight. 71.254.7.35 (talk) 04:02, 26 December 2008 (UTC)

Rephrasing

Hello! Merry Christmas to all! If I understand correctly:

  1. Nyquist never formulated or proved a "sampling theorem", but there are a "Nyquist theorem/zone/frequency/criterion" etc.? (PP: Usually things are named after the author? Or is this a joke?)
  2. Shannon proved a "sampling theorem" applicable to real-world signal conversion and reconstruction? (PP: It is strange, because I have read the papers of the "guys" (Kotelnikov included) and I have found nothing applicable to the real world! Just writings of theoreticians who do not understand the sampling and conversion processes?)
  3. Yes, the engineering applications have done a lot to mask the failure of the theoreticians to explain and evaluate the signal conversion!
  4. The amplitude errors are zero?? (PP: This is false! The errors are not zero and the signal cannot be reconstructed "exactly" or "completely"! Try and you will see them!)
  5. Starting the rephrasing:
    • N < 2 is "under sampling".
    • N = 2 is "Shannon (?) sampling" or just "sampling".
    • N > 2 is "over sampling".
    • SBLS is "the simplest band limited signal" or, according to me, "an analog signal with only two lines in its spectrum, which are a DC component and a sine or cosine wave".
  6. comp.dsp will set me straight? (PP: OK).

I hope the situation now is clearer. P.Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 06:40, 26 December 2008 (UTC)

A proof of the sampling theorem is included in one of (I don't remember which) "A Mathematical Theory of Communication" or "Communication in the presence of noise", both by Shannon.
The "amplitude errors" are zero, assuming we're using ideal converters (i.e. no quantisation errors, which the sampling theorem doesn't attempt to deal with), and ideal filters. In other words, the signal can be reconstructed perfectly; the mathematical proof is very simple.
I'm not sure you're going to get very far by introducing your own terminology and concepts ("SBLS", "sampling factor", etc.), because no-one will understand what you're talking about! Oli Filth(talk|contribs) 13:10, 26 December 2008 (UTC)
  1. ??? "A Mathematical Theory of Communication" or "Communication in the presence of noise", both by Shannon?? I have read them carefully. Nothing is applicable to sampling and ADC. Please specify the page and line number. Please specify how these publications are related to the real conversion of an analog signal.
  2. Perhaps I will not advance with my terminology, but at least I will not be repeating "proven" theory that is unrelated to signal conversion.
  3. Errors are inevitable. You will never reconstruct "exactly" an analog signal converted into digital form. Try it and you will see!
  4. About the amplitude error. Could you please pay attention to the Figure 5 at page 55 at http://www.ieindia.org/pdf/89/89CP109.pdf. You will see clearly the difference between the amplitude of the signal and the maximal sample. OK?
BR P. Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 15:21, 26 December 2008 (UTC)
The sampling theorem doesn't attempt to deal with implementation limitations such as quantisation, non-linearities and non-ideal filters. No-one has claimed that it does.
You can reconstruct a bandlimited analogue signal to an arbitrary degree of accuracy. Just use tighter filters and higher-resolution converters.
What you've drawn there is the result of a "stair-case" reconstruction filter, i.e. a filter with a rectangular impulse response (a zero-order hold). This is not the ideal reconstruction filter; it doesn't fully eliminate the images. In practice, a combination of oversampling and compensation filters can reduce the image power to a negligible level (for any definition of "negligible") and hence eliminate the "amplitude errors". None of this affects the sampling theorem!
In summary, no-one is disputing the fact that if you use sub-optimal/non-ideal converters and filters, you won't get the same result as the sampling theorem predicts. Oli Filth(talk|contribs) 15:33, 26 December 2008 (UTC)
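A rough numerical sketch of the point being argued here (assuming a sine test signal, a finite block of samples, and a truncated sinc sum): the "stair-case" (zero-order hold) output misses the true waveform between samples, while near-ideal sinc interpolation does not:

  import numpy as np

  fs, f0 = 8.0, 1.0                      # sample rate and sine frequency, f0 < fs/2
  n = np.arange(-200, 201)               # a long (but finite) block of samples
  x_n = np.sin(2 * np.pi * f0 * n / fs + 0.4)

  t = np.linspace(-2, 2, 4001)           # dense grid standing in for continuous time
  true = np.sin(2 * np.pi * f0 * t + 0.4)

  # "Stair-case" reconstruction: hold each sample for one sample period
  zoh = x_n[np.clip(np.floor(t * fs).astype(int) + 200, 0, 400)]

  # Ideal (sinc) interpolation, truncated to the available samples
  sinc_rec = np.array([np.sum(x_n * np.sinc(ti * fs - n)) for ti in t])

  print(np.max(np.abs(zoh - true)))       # large: the "amplitude error" of the stair-case
  print(np.max(np.abs(sinc_rec - true)))  # small: limited only by truncating the sum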


Hello again!

I am really sorry but we are talking about different things.

I am not sure that you are understanding my questions and answers.

I am not disputing any filters at the moment.

Only the differences between the amplitude of the samples and the amplitude of the converted signal.

Also I am not sure that you have read "the classics" in the sampling theory.

Also, please note that there is a difference between the "analog multiplexing" (analog telephony discussed by the "classics" during 1900-1950) and analog to digital conversion and reconstruction.

I wish you good luck with the "classics" in the sampling theory! BR P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 15:46, 26 December 2008 (UTC)

You started this conversation with "I think that Nyquist–Shannon sampling theorem about the sampling rate is not correct", with links to papers that discussed "amplitude errors" as if there was some mistake in the sampling theorem. That is what I have been talking about! If you believe we're talking about different things, then yes, I must be misunderstanding your questions! Perhaps you'd like to re-state exactly what you see as the problem with the sampling theorem.
As for filters, as far as your paper is concerned, it's entirely about filters, although you may not realise it. In your diagram, you're using a sub-optimal filter, and that is the cause of your "amplitude errors". Oli Filth(talk|contribs) 15:59, 26 December 2008 (UTC)
Joke?

Petrov, you ask "Usually the things are named after the author? Or this is a joke?" This is clear evidence that you have not bothered to read the article that you are criticizing. Please consider doing so, or keeping quiet. Dicklyon (talk) 00:44, 27 December 2008 (UTC)


Hello!

Ok.

I will repeat some of the questions again in a simpler and clearer form:

  • Where has H. Nyquist formulated or proved a clearly stated "sampling theorem" applicable in signal conversion theory? (paper, page, line number?)
  • Where is the original, clear definition of the Nyquist theorem mentioned in Wikipedia? (paper, page, line number?)
  • Where has Shannon formulated or proved a "sampling theorem" applicable in signal conversion theory with ADC? (paper, page, line number?)
  • What will we lose if we remove the papers of Nyquist and Shannon from signal conversion theory and practice with ADC?
  • What is your definition of the "band limited" signal discussed by Shannon and Kotelnikov?
  • Is it possible to reconstruct an analog signal, which in fact has infinite accuracy, if you cut it into a finite number of bits and put it into circuitry with finite precision and unpredictable accuracy (as you know, there are no exact values in electronics)?
  • The numbers e = 2.7... and pi = 3.14... are included in most real signals. How will you reconstruct them "exactly" or "completely"?

I am waiting for the answers

Br

P.Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 10:40, 27 December 2008 (UTC)

I don't know why you keep requesting where Nyquist proved it; the article already summarises the history of the theorem. As we've already stated, Shannon presents a proof in "Communication in the presence of noise"; it is quoted directly in the article. As we've already stated, this is an idealised model. Just as in all aspects of engineering, practical considerations impose compromises; in this case it's bandwidth and non-linearities. As we've already stated, no-one is claiming that the original theorem attempts to deal with these imperfections. I don't know why you keep talking about practical imperfections as if they invalidate the theorem; they don't, because the theorem is based on an idealised model.
By your logic, we might as well say that, for instance, LTI theory and small-signal transistor models are invalid, because the real world isn't ideal! Oli Filth(talk|contribs) 11:57, 27 December 2008 (UTC)


"If a function x(t) contains no frequencies higher than B cps, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart."

PP: Imagine that you have a sum of a DC signal and a SS signal.

How will you completely determine them by giving only 2 or even 3 points?

OK? —Preceding unsigned comment added by 78.90.230.235 (talk) 10:49, 27 December 2008 (UTC)

The theorem and the article aren't talking about 2 or 3 points. They're talking about an infinite sequence of points.
However, as it happens, in the absence of noise, one can theoretically determine all the parameters of a sinusoid with just three samples (up to aliases). I imagine that if one had four samples, one could determine the DC offset as well. However, this is not what the theorem is talking about. Oli Filth(talk|contribs) 11:57, 27 December 2008 (UTC)
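For illustration only (a hypothetical sketch, not the theorem): if the frequency of the sinusoid is assumed known, then amplitude, phase and DC offset form a linear problem that three samples pin down exactly; with the frequency also unknown the problem becomes nonlinear and, as noted above, a fourth sample is needed. The signal and sample times below are arbitrary choices:

  import numpy as np

  f0 = 3.0                                       # assumed-known frequency
  true = lambda t: 0.5 + 1.2 * np.cos(2 * np.pi * f0 * t + 0.7)

  t_s = np.array([0.00, 0.03, 0.11])             # three arbitrary sample times
  M = np.column_stack([np.cos(2 * np.pi * f0 * t_s),
                       np.sin(2 * np.pi * f0 * t_s),
                       np.ones(3)])
  a, b, c = np.linalg.solve(M, true(t_s))        # model: a*cos + b*sin + c

  print(np.hypot(a, b), np.arctan2(-b, a), c)    # ~1.2 (amplitude), ~0.7 (phase), ~0.5 (DC)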


PP: "They're talking about an infinite sequence of points." Where did you find that? paper, page, line number?

"I imagine that if one had four samples, one could determine the DC offset as well" This is my paper. Normally should be covered by the "classical theorem". OK? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:17, 27 December 2008 (UTC)

It's pretty clear that you haven't read the original papers very carefully (or misunderstood them)! In "Communication in the presence of noise", Theorem #1 states it. Yes, it's true that the word "infinite" is not used in the prose, but then look at limits of the summation in Eq.7.
As for your paper, it's already a known fact (in fact, it's obvious; four equations in four unknowns), and is not in the scope of the sampling theorem (although you can probably derive the same result from the theorem). Oli Filth(talk|contribs) 12:25, 27 December 2008 (UTC)

PP: H. Nyquist, "Certain topics in telegraph transmission theory", Trans. AIEE, vol. 47, pp. 617-644, Apr. 1928 Reprint as classic paper in: Proc. IEEE, Vol. 90, No. 2, Feb 2002.

Question: Where in that publication is the "Sampling theorem"?

"I don't know why you keep requesting where Nyquist proved it..." You are stating that there is "Nyquit theorem?" (please see the article in Wikipedia). There should be a statement and a proof. OK? Where they are? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:43, 27 December 2008 (UTC)

There is no article on "Nyquist theorem", only a redirect to this article. Please stop asking the same question over and over again; both Dick and I have already answered it, and the article already explains it. Oli Filth(talk|contribs) 12:47, 27 December 2008 (UTC)


PP: http://www.stanford.edu/class/ee104/shannonpaper.pdf page 448, Theorem I:

1. First failure: for SS, CS or SBLS sampled at the zero crossings. (One failure is enough!)

What failure? The only point of contention is in the nature of the inequality (i.e. an open or closed bound). It is generally accepted today that it is true only for an open bound. The article discusses this in the introduction and in the section "Critical frequency". Again, it is clear that you haven't actually read the article.

2. Second failure: "completely" is wrong word.

Please don't tell me you're talking about your "amplitude errors" again...

3. Third failure: It is about "function" not about " a signal". Every "signal" is a "function", but not every "function" is a "signal". OK?

How is this a failure?

4. Fourth failure: "common knowledge"??? Is that a proof?

No. What follows is a proof.

5. Fifth failure: No phase in the Fourier series! The phase is inherent part of the signal!

F(ω) isn't constrained to be real, and neither is f(t) (and hence neither are the Fourier series coefficients); the phase is represented. Oli Filth(talk|contribs) 13:11, 27 December 2008 (UTC)

Imagine same number of failures for another theorem, e.g. Pythagoras theorem! Will you defend it in that case? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:56, 27 December 2008 (UTC)

PP: "...F(ω) isn't constrained to be real, and neither is f(t)...".

You could write any equation, but you cannot produce any signal. OK?

Sorry, I am talking about real signals with real functions and I am forced to evaluate the errors. You can produce the signals and test the equipment. Please excuse me. Maybe it was my mistake to start this talk. —Preceding unsigned comment added by PetrePetrov (talkcontribs) 13:19, 27 December 2008 (UTC)

"Real" as opposed to "complex"... i.e. phase is included. Oli Filth(talk|contribs) 13:21, 27 December 2008 (UTC)


PP: Hello! Again, I have looked at the papers of the "classics" in the field. Maybe the following chronology of the events in the field of the "sampling" theorem is OK:

1. Before V. Kotelnikov: H. Nyquist did not formulate any "sampling theorem". His analysis (?) even of the DC (!) is really strange for an engineer. (Please see the referenced papers.) There is no sense mentioning him in sampling, SH, ADC and DAC systems. In "analog multiplexing telephony" it is OK.

2. V. Kotelnikov (1933): for the first time formulated theorems, but unfortunately incomplete ones, because he did not include the necessary definitions and calculations. No ideas on errors! Maybe he should be mentioned just to see the difference between the theory and the practice.

3. C. Shannon (1949): in fact a repetition of part of what was given by V. Kotelnikov. There is not even a clearly formulated proof of something usable in ADC. No excuse for 1949! The digital computers had been created!

No understanding of the signals (even theoretical understanding) to test the "theorems", nor the necessary definitions and calculations. No ideas on errors! No idea of the application of an oscilloscope and multimeter!

4. Situation now: no full theory describing completely the conversion of the signals from analog to digital form and reconstruction.

But there are several good definitions and practically verifiable theorems to evaluate the errors of not sampling the SS and CS at their maximums. Verifiable even with an analog oscilloscope and multimeter!


I hope that is good and acceptable. BR

P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 08:41, 28 December 2008 (UTC)

I'm going to say this one last time. The sampling theorem doesn't attempt to deal with "errors", such as those caused by non-ideal filters. Please stop stating the same thing time and time again; everyone already knows that the theorem is based on an ideal case. It has nothing to do with "multimeters and oscilloscopes". The only theoretical difference between "analog multiplexing" and A-D conversion is the quantisation. To say that there is "no understanding of the signals..." is total nonsense. Please stop posting the same mis-informed points!
Incidentally, Nyquist uses the term "D.C." in the context of "DC-wave", as the opposite of "Carrier wave"; we would call these "baseband" and "passband" signalling today.
If you have something on "the conversion of the signals from analog to digital form and reconstruction" from a Reliable source, then please post it here, and we'll take a look. Your own papers aren't going to do it, I'm afraid. However, even if you do find something, it's unlikely to make it into the article, because the article is about the original theorem. Oli Filth(talk|contribs) 11:37, 28 December 2008 (UTC)

Hello!

1. No need to repeat it more times. From my point of view the "Nyquist-Shannon theorem" does not exist, and what does exist is not fully (or even largely) applicable in practice. You are free to think that it exists and that people use it.

  • And you are free to not to accept it! (although saying "it doesn't exist" is meaningless...) Yes, of course people use it. It's been the basis of a large part of information theory, comms theory and signal-processing theory for the last 60 years or so.

2. Please note that there are "representative" (simplified but still usable) and "non-representative" ("oversimplified" and not usable) models. The "original theorem" is based on an "oversimplified" model and is not representative.

  • You still haven't said why. Remember, one can approximate the ideal as closely as one desires.

3. I have seen the "DC" of Nyquist before your note and I am not accepting it.

  • I have no idea what you mean, I'm afraid.

4. Because I am not a "reliable source" I will not spam the talk page here any more.

  • You're free to write what you like on the talk page (within reason - see WP:TALK). However, we can only put reliable material into the article itself.

5. If you insist on the "original theorem", please copy and paste "exactly" the texts of Nyquist, Shannon, Kotelnikov, etc. which you think are relevant to the subject and let the readers put their own remarks and conclusions outside the "original" texts. You could put your own, of course. OK?

  • The article already has the exact text from Shannon's paper. I'm not sure what more you expect?

6. I have put here a lot of questions and texts without individual answers. If Wikipedia keeps them, someone will answer and comment on them (maybe).

  • I believe I've answered all the meaningful questions. But yes, this text will be kept.

7. I do not believe that my own papers will change something for the better, but someone will change it, because the theory (with "representative" models) and the practice should go in the same direction, and the errors ("differences") should be evaluated.

  • The cause of your "errors" is already well understood. For instance, CD players since the late 1980s onwards use oversampling DACs and sinc-compensation filters to eliminate these "errors". That's not due to a limitation in the theory, it's due to hardware limitations. The solution can be explained with the sampling theorem. Oli Filth(talk|contribs) 15:09, 28 December 2008 (UTC)

Good luck again. I am not sure that I will answer promptly to any comment (if any) posted here.

BR Petre Petrov

Rapidly oscillating edits

I noticed some oscillation between 65.60.217.105 and Oli Filth about what to say about the conditions on x(t). I would suggest we remove the parenthetical comment

"(which exists if is square-integrable)"

For the following two reasons. First, it exists also in many other situations; granted, this is practically the most common. Second, it is not entirely clear that the integral we then follow this statement with exists if x(t) is square integrable. I do not think it detracts at all from the article to simply say that X(f) is the continuous Fourier transform of x(t). How do other people feel about this? Thenub314 (talk) 19:03, 3 January 2009 (UTC)

PS I think 65.60.217.105 thinks the phrase continuous Fourier transform is about the Fourier transform of x(t) being continuous, instead of being a synonym for "the Fourier transform on the real line." Thenub314 (talk) 19:14, 3 January 2009 (UTC)

I realise that I'm dangerously close to 3RR, so I won't touch this again today! The reason I've been reverting is that replacing "square-integrable" with "integrable" is incorrect (however, square-integrability is a sufficient condition for the existence of the FT; I can find refs if necessary). I'm not averse to removing the condition entirely; I'm not sure whether there was a reason for its inclusion earlier in the article's history. Oli Filth(talk|contribs) 19:10, 3 January 2009 (UTC)
I agree with your guess as to how 65.60.217.105 is interpreting "continuous"; see his comments on my talk page. Oli Filth(talk|contribs) 19:36, 3 January 2009 (UTC)
Yes, thanks for pointing me there. Hopefully my removal of "continuous" will satisfy him. I suppose I should put back "or square integrable". Dicklyon (talk) 20:08, 3 January 2009 (UTC)
Not a problem. I agree with you Oli that the Fourier transform exists, but the integral may diverge. I think it follows from Carleson's theorem about almost everywhere convergence of Fourier series that this happens at worst almost everywhere, but I don't off hand know of a reference that goes into this level of detail (and this would apply only to the 1-d transform).
Anyways I am definitely digressing. The conditions are discussed in some detail in the Fourier transform article, which we link to. So overall I would be slightly in favor of removing the condition entirely. But I think (Dicklyon)'s version works also. (Dicklyon), how do you feel about removing the parenthetical comment?
I wouldn't mind removing the parenthetical conditions. Dicklyon (talk) 22:06, 3 January 2009 (UTC)

Geometric interpretation of critical frequency

I'm not sure the new addition is correct. Specifically:

  • the parallel implied by "Just as the angles on a circle are parametrized by the half-open interval [0,2π) – the point 2π being omitted because it is already counted by 0 – the Nyquist frequency must be omitted from reconstruction" is invalid, not least because the Nyquist frequency is at π, not 2π.
  • the discussion of "half a point" is handwaving, which is only amplified by the use of scare quotes. And it's not clear how it makes sense in continuous frequency.
  • it's not made clear why the asymmetry disappears for complex signals.

Oli Filth(talk|contribs) 19:24, 14 April 2009 (UTC)

Critical frequency

This section is unnecessarily verbose. It is sufficient to point out that the samples of:

cos(2πBt + θ)

are identical to the samples of:

cos(2πBt − θ)

taken at t = n/(2B), and yet the continuous functions are different (for sin(θ) ≠ 0).

--Bob K (talk) 19:29, 14 April 2009 (UTC)
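A quick numerical check of this (a sketch assuming B = 1 and an arbitrary phase θ): sampled exactly at the critical rate 2B, the two functions above produce identical sample sequences even though they differ as continuous functions:

  import numpy as np

  B, theta = 1.0, 0.9
  n = np.arange(12)
  t = n / (2 * B)                      # sampling exactly at the critical rate 2B

  s1 = np.cos(2 * np.pi * B * t + theta)
  s2 = np.cos(2 * np.pi * B * t - theta)

  print(np.allclose(s1, s2))           # True: both sequences equal (-1)**n * cos(theta)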

higher dimensional nyquist theorem equivalent?

The Nyquist theorem applies to more than just time-series signals. The theorem also applies in 2-D (and higher) cases, such as in sampling terrain (for example), in defining the maximum reconstructable wavenumbers in the terrain. However, there is some debate as to whether the theorem applies directly, or whether it has subtle differences. Would anyone care to comment on that or derive it? I will attempt to do so following the derivations here, but I will probably lose interest before then.

It seems that it should apply directly given that the Fourier transform is a linear transform, but the debate has been presented so I thought it should go in discussion before the main page. Thanks.

Andykass (talk) 17:45, 12 August 2009 (UTC)

You need to ask for sources, not derivations. Dicklyon (talk) 02:03, 13 August 2009 (UTC)
Check the article on Poisson summation formula, and especially the cited paper Higgins: Five short stories... There is the foundation for sampling on rectangular and other lattices and on locally compact abelian groups, connected with the name Kluvanek. --LutzL (talk) 08:24, 13 August 2009 (UTC)

this T factor issue is coming up again.

remember that "Note about scaling" that was taken out here ?

well, the difference between this article and the common (and flawed, from some of our perspectives) convention of sampling with the unnormalized Dirac comb and including a passband gain of T in the reconstruction filter is starting to have a consequence. i still think we should continue to do things the way we are (why repeat the mistake of convention?) but people have begun to object to this scaling (because it's "not in the textbooks", even though it is in at least one).

anyway, Dick, BobK, anyone else want to mosey on over to Talk:Zero-order hold and take a look and perchance offer some comment? r b-j 21:01, 26 January 2007 (UTC)

OK, I gave it my best shot. Dicklyon 23:06, 26 January 2007 (UTC)
Hello again. I certainly can't match Doug's passion for this subject. And I can't improve on Rbj's arguments. I haven't given this as much thought as you guys, but at first glance, it seems to me that the root of the problem is our insistence that "sampling" is correctly modelled by the product of a signal with a Dirac comb. We only do that to "prove" the sampling theorem in a cool way that appeals to newbies. (It certainly sucked me in about 40 years ago.) But there is a reason why Shannon did it his way.
Where the comb really comes from is not the sampling process, but rather it is an artifact of the following bit of illogic: Suppose we have a bandlimited spectrum on interval -B < f < B, and we do a Fourier series expansion of it, as per Shannon. That produces a function, S(f), that only represents the original spectrum in the interval -B < f < B. Outside that interval, S(f) is periodic, which is physically meaningless. But if we ignore that detail, and perform an inverse Fourier transform of S(f), voilà... the Dirac comb emerges for the first time.
Then we compound our mistake by defining sampling to be the product of a signal with a Dirac comb that we created out of very thin air.  I'd say that puts us on very thin ice.
--Bob K 23:14, 26 January 2007 (UTC)
Thin ice is right. Taking transforms of things that aren't square integrable is asking for trouble. Doing anything with "signals" that aren't square integrable is asking for trouble. But as long as we're doing it, might as well not make matters worse by screwing it up with funny time units. There's good reason for this approach in analysing a ZOH, of course, but one still does want to remain cognizant of the thin ice. Dicklyon 23:40, 26 January 2007 (UTC)
I totally agree about the units. I'd just like to reiterate that even without the "square-integrable" issue, what justification do we have for treating S(f) as non-compact (if that's the right terminology)? I.e., what right do we have to assign any importance to its values outside the (-B, B) domain? Similarly, when we window a time-series of samples and do a DFT, the inverse of the DFT is magically periodic. But that is just an artifact of inversing the DFT instead of the DTFT. It means nothing. It is the time-domain manifestation of a frequency-domain approximation.
If this issue seems irrelevant to the discussion, I apologize. But my first reaction to the ZOH article was "the Dirac comb is not necessary here". One should be able to have a perfectly good article without it. But I need to go and really read what everybody has said there. Maybe I will be able to squeeze that in later today.
--Bob K 16:16, 27 January 2007 (UTC)
the likelihood of crashing through the ice is no greater than that of crashing in Richard Hamming's airplane designed using Riemann instead of Lebesgue integration. why would nearly all of these texts including O&S (which i have always considered kinda a formal reference book, not so much for describing cool DSP tricks, but more as a rigorous description of simply what is going on) have no problem with using the Dirac comb? They instead like to convolve with the F.T. of the Dirac comb (which is, itself, a Dirac comb), which is more complicated than just using the shifting theorem caused by the sinusoids in the Fourier series of the Dirac comb. wouldn't that have to be even thinner ice, yet these textbooks do it anyway. their only problem is the misplaced T factor.
BTW, Dick, i agree with you that
is more compact and nicer than
but also less recognizable. it's just like
instead of
except it is harder to see the scaling of time in the infinitely thin delta. r b-j 08:02, 27 January 2007 (UTC)
I'm not up on the history of this thread, but FWIW I like     better than   .   And I like   best,   because it's easiest to see that its integral is T.
--Bob K 16:31, 27 January 2007 (UTC)
Bob, good point, and that's why we stuck with that form. Dicklyon 17:11, 27 January 2007 (UTC)
I just read your response at ZOH, and the point about scaling the width instead of the amplitude is compelling. That elevates     up a notch in my estimation. --Bob K 16:44, 29 January 2007 (UTC)
Robert, re the thin ice in textbooks like O&S, it's OK, but it's too bad they don't put the necessary disclaimers, references, or whatever to allow a mathematician to come in and understand the conditions under which the things they derive make sense. It's about enough for engineers, because they're all too willing to let the mathematical niceties slide, but then that makes it tricky when people try to use and extend the ideas or try to make them rigorous. So we end up arguing... not that we have any real disagreement at this point, but so often I see things where a Fourier transform is assumed to exist even when there is no way within delta functions and such even. Dicklyon 17:11, 27 January 2007 (UTC)

I find the traditional textbook discussion of using a Dirac comb to represent discrete sampling confusing. I am also not sure that I agree with the assertions made here that it is all wrong ('mistake'). As I understand it, delta functions only have meaning with multiplication AND integration over infinity. So, simply multiplying a Dirac comb by a function does not, on its own, represent discrete sampling. One must also perform the integration. Doesn't this correct the dimensional issues ('T factor')? —Preceding unsigned comment added by 168.103.74.126 (talk) 17:43, 27 March 2010 (UTC)

Simplifications?

Bob K, can you explain your major rewrite of the "Mathematical basis for the theorem" section? I'm not a huge fan of how this section was done before, but I think we had it at least correct. Now I think I have to start over and check your version, some of what I'm not so sure I understand. Dicklyon (talk) 21:10, 12 September 2008 (UTC)

Hi Dick,
I thought it was obvious (I'm assuming you noticed Nyquist–Shannon_sampling_theorem#math_Eq.1), but I'm happy to explain. I wasn't aware of the elegant Poisson summation formula back when we were creating the "bloated proof" that I just replaced. Without bothering with Dirac comb functions and their transforms, it simply says that a uniformly sampled function in one domain can be used to construct a periodically extended version of the continuous function's transform in the other domain. The proof is quite easy and does not involve continuous Fourier transforms of periodic functions (frowned on by the mathematicians). And best of all, it's an internal link... no need to repeat it here. Or I could put it in a footnote, if you like that better.
Given that starting point, it is obvious that the original transform can be recovered from that periodic extension under the conditions assumed by Shannon. All that's left is the math to derive the reconstruction formula.
Is that what you wanted to know?
--Bob K (talk) 22:28, 12 September 2008 (UTC)
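A numerical illustration of the identity being invoked (a sketch assuming a Gaussian test signal, whose transform exp(−πf²) is known in closed form, and truncated sums): the discrete-time Fourier transform of the samples x(nT) equals (1/T) times the periodic summation of X(f), which is the Poisson summation formula:

  import numpy as np

  # Check:  sum_n x(nT) exp(-i 2 pi f n T)  =  (1/T) sum_k X(f - k/T),
  # with x(t) = exp(-pi t^2)  and  X(f) = exp(-pi f^2).
  T = 0.4
  n = np.arange(-50, 51)
  k = np.arange(-20, 21)
  f = np.linspace(-3, 3, 601)

  lhs = np.array([np.sum(np.exp(-np.pi * (n * T)**2) *
                         np.exp(-2j * np.pi * fi * n * T)) for fi in f])
  rhs = np.array([np.sum(np.exp(-np.pi * (fi - k / T)**2)) / T for fi in f])

  print(np.max(np.abs(lhs - rhs)))     # tiny: the two sides agree to near machine precision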

As I see it, the main problem with this new version of the proof is that it doesn't appeal to most people's way of thinking about sampling ... many will think about picking off measurements in the time domain. Furthermore, there really should be two versions of the proof, one that works in the time domain and one that works in the frequency domain. Although Bob K might disagree, I think the time domain proof (that was once on this page) is fine, and it should use the Dirac comb. But the application of the Dirac comb involves more than just multiplication of the Dirac comb by the function being sampled. Also needed is integration over the entire time domain. Oddly, I don't see this seemingly important step in textbooks. —Preceding unsigned comment added by 136.177.20.13 (talk) 18:50, 27 March 2010 (UTC)

I'm in agreement that the proof that existed earlier was far clearer than what we see now, but 136, could you be more specific about what you mean by your last three sentences? How is multiplication of the function being sampled by a Dirac comb inadequate for fully performing the sampling operation? What goes in is the function being sampled, and what comes out is a sequence of Dirac impulses weighted by the sample values; the sample values fully define the Dirac-comb-sampled signal in the time domain. 70.109.175.221 (talk) 20:15, 27 March 2010 (UTC)

This is '136' again. I'm still working this out myself, and I'm probably wrong on a few details (I'm not a mathematician, but a scientist!), but think first of just one delta function delta(t - t0) and how it is applied to a function f(t): we multiply the function by the delta function and then integrate over all space (t in this case). So, Int[f(t) · delta(t - t0)] dt = f(t0). This, in effect, samples the time series f(t) at the point t0. And, if you like, the 'units' on the delta function are the inverse of its argument, so integrating over all space doesn't change the dimensional value of the sample. Now, the comb function is the sum of delta functions. To sample the time series with the comb function we have a sum of integrated applications of each individual delta function. So, Sum_k Int[f(t) · delta(t - k·t0)] dt, and this will equal a bunch of discrete samples. What I'm still figuring out is how this is normalized. Recall that Int[delta(t)] dt = 1. For the comb function this normalizing integral is infinite, but I think you can get around this by first considering n delta functions, then taking the limit as n goes to infinity. You'd need to multiply some of the results by 1/n. —Preceding unsigned comment added by 168.103.74.126 (talk) 20:36, 27 March 2010 (UTC) A related issue is how we should treat convolution with the comb function. Following on from my discussion of how discrete sampling might be better expressed (right above this paragraph), it appears to me that convolution will involve an integral for the convolution itself, an infinite sum over all delta functions in the comb, and another infinite integration to handle the actual delta-function sampling of the time series. —Preceding unsigned comment added by 168.103.74.126 (talk) 22:00, 27 March 2010 (UTC)

You might want to take this up on USENET at comp.dsp. Essentially, in the sampling operation, the multiplication by the Dirac deltas is what samples f(t). To rigorously determine the weight of each impulse, mathematically, we don't need to integrate from -inf to +inf, but only from some time before t0 to some time after t0. But multiplication by the Dirac comb keeps the f(t) information at the discrete sample times and throws away all of the other information about f(t). You don't integrate over all t for the whole comb. For any sample instance, you integrate from, say, 1/2 sample time before to 1/2 sample time after the sample instance. 70.109.175.221 (talk) 05:59, 28 March 2010 (UTC)

This is '136' again. I'd also like to say that formula 3, which is supposed to show the time-domain version of the sampling theorem results, kind of makes a mess of the needed obvious symmetry between multiplication and convolution in the time and frequency domains. So, the multiplication by the rectangle function in the frequency domain (to band-limit the result) should obviously be seen as convolution with the sinc function in the time domain (which amounts to interpolation). What we have right now does not make any of this clear (and, at least at first glance, seems wrong). Compare the mathematical development with the main formula under 'interpolation as convolution' on the Whittaker–Shannon page. This formula should be popping out here on the sampling page as well. So, I'm afraid what we have on this page is not really a 'simplification'. Instead, it is really just a mess. —Preceding unsigned comment added by 75.149.43.78 (talk) 18:04, 4 April 2010 (UTC)

Reconstructability not a real word?

I can't find reconstructability in any dictionary. What I do find are the following terms:

  1. Reconstruction (noun)
  2. Reconstructible (adjective)
  3. Reconstruct (verb)
  4. Reconstructive (adjective)
  5. Reconstructively (adverb)
  6. Constructiveness (noun)

This would point to reconstructable not being a real word, but reconstructible is. Reconstructiveness and reconstructibility might be. --209.113.148.82 (talk) 13:16, 5 April 2010 (UTC)

max data rate = 2H·log₂(V) bps

Quoting from a lecture slide:

In 1924, Henry Nyquist derived an equation expressing the maximum rate for a finite-bandwidth noiseless channel.
H is the maximum frequency
V is the number of levels used in each sample
max data rate = 2H·log₂(V) bps
Example
A noiseless 3000Hz channel cannot transmit binary signals at a rate exceeding 6000bps (this would mean there are 2 "levels")

I can't relate that very well to this article. I recognize the 2H parameter, but the "levels" referred to here I'm not sure where they come from.

Then it says Shannon extended Nyquist's work:

The amount of thermal noise (in a noisy channel) can be measured by the ratio of the signal power to the noise power (aka the signal-to-noise ratio). The quantity 10·log₁₀(S/N) is measured in decibels.
H is the bandwidth of the channel
max data rate = H·log₂(1 + S/N) bps
Example
A channel of 3000Hz bandwidth and a signal-to-noise ratio of 30dB cannot transmit binary signals at a rate exceeding 30,000bps.

Just bringing this up because people looking for clarification from computer communication lectures might find the presentation a bit odd, take it or leave it. kestasjk (talk) 06:47, 26 April 2010 (UTC)
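For what it's worth, the two formulas quoted above are easy to evaluate directly; a minimal sketch (the function names are ad hoc) reproducing both of the lecture-slide examples:

  import math

  def nyquist_max_rate_bps(bandwidth_hz, levels):
      # Noiseless-channel maximum data rate (Nyquist/Hartley): 2H log2(V)
      return 2 * bandwidth_hz * math.log2(levels)

  def shannon_capacity_bps(bandwidth_hz, snr_db):
      # Noisy-channel capacity (Shannon-Hartley): H log2(1 + S/N)
      snr = 10 ** (snr_db / 10)
      return bandwidth_hz * math.log2(1 + snr)

  print(nyquist_max_rate_bps(3000, 2))     # 6000.0 bps for a binary (2-level) signal
  print(shannon_capacity_bps(3000, 30))    # ~29902 bps, usually rounded to 30 kbps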

The first example is misleading. It should state "a noiseless 3000Hz channel cannot transmit signals at a rate exceeding 6000 baud." Nyquist says nothing of the bit rate. Oli Filth(talk|contribs) 07:34, 26 April 2010 (UTC)
Oli, you need to qualify your statement to say binary signals. a noiseless channel of any finite and non-zero bandwidth can conduct a signal of any information rate. but if you're limited to binary signals, what you say is true. 70.109.185.199 (talk) 16:01, 26 April 2010 (UTC)
No, there is no "need". Baud is symbols per second, see the given link.--LutzL (talk) 16:28, 26 April 2010 (UTC)
The point is, Nyquist said nothing about the information rate, and Shannon said nothing about the alphabet size, so the comparison is an "apples vs oranges" one. Oli Filth(talk|contribs) 16:32, 26 April 2010 (UTC)


A few clarifications and suggestions:
1. Many of the issues in your lecture notes are discussed in the bit rate article.
2. The term "data rate" in the lecture notes should be replaced by gross bit rate in the Nyquist formula, and net bit rate (or information rate) in the Shannon-Hartley formula. The formulas are not about the same data rate. Many computer networking textbooks confuse this. I tried to clarify that in this article once, but it was reverted.
3. Almost all computer networking textbooks credit Nyquist for calculating gross bit rate over noiseless channels, while telecom/digital transmission literature typically call this Hartley's law. At Wikipedia, datatransmission is discussed in the Nyquist rate article. I agree with that it should also be discussed in the Nyquist theorem article, because so many students are checking it. Hartley's law is so important that it deserves its own Wikipedia article, and not only a section in the Shannon-Hartly article.
4. When applied to data transmission, the bandwidth in the Nyquist formula refers to passband bandwidth=upper minus lower cut-off frequency (especially if passband transmission=carrier modulated transmission). In signal processing, it refers to baseband bandwidth (also in so called over-sampling, which is said to exceed the Nyquist rate).
5. The Nyquist formula is valid to baseband transmission (i.e. line coding), but in practice when it comes to passband transmission (digital modulation), most modulation schemes only offer less than half the Nyquist rate. I have only heard about the vestigial sideband modulation (VSB) digital modulation scheme, that may offer near the Nyquist rate.
6. Many of the articles in information theory, for example Nyquist–Shannon sampling theorem, channel capacity, etc, can only be understood by people with a signal processing/electrical engineering background. The article lead should written in a way that can be understood by computer science students without university level math knowledge. I tried a couple of years ago to address this issue, but someone reverted most of my changes instead of further improving them, so I gave up.Mange01 (talk) 18:55, 26 April 2010 (UTC)