Talk:Nyquist–Shannon sampling theorem/Archive 2
This is an archive of past discussions about Nyquist–Shannon sampling theorem. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3 |
citation for theorem statement
Finell has requested a citation for the statement of the theorem. I agree that's a good idea, but the one we have stated now was not intended to be a quote, just a good statement of it. It may take a while to find a great quotable statement of the theorem, but I'll look for some. Here's one that's not too bad. Sometimes you also find incorrect ones, which say that a sampling frequency above twice the highest frequency is necessary for exact reconstruction; that's true for the particular reconstruction formula normally used, but it is not a part of what the sampling theorem says. That's why I'm trying to be careful about wording that says necessary and/or sufficient in various places. Dicklyon 22:18, 29 October 2007 (UTC)
Nyquist-Shannon sampling theorem and quantum physics?
When I browsed through the article, I felt that there might be a connection to what is known as the "duality" of time and energy in quantum physics. Partly because the interrelation of limiting frequency and time spacing of signals seems to originate in the properties of the Fourier transform, partly because from physics it is known that the longer you look, the more precise your measurement can be. Does anyone feel competent to comment on this (maybe even in the article)? Peeceepeh (talk) 10:50, 22 May 2008 (UTC)
- The Fourier transform pair (time and frequency) are indeed a Heisenberg dual, i.e. they satisfy the Heisenberg uncertainty relationship. I'm not sure if this is what you were alluding to.
- I'm not sure I see a direct connection to the sampling theorem, though. Oli Filth(talk) 11:38, 22 May 2008 (UTC)
Sampling and Noisy Channels
At Bell Labs, I was given the impression that "Shannon's Theorem" was about more than just the "Nyquist rate". It was also about how much information per sample was available, for an imperfect communication channel with a given signal-to-noise ratio. Kotelnikov should be mentioned here, because he anticipated this result. The primary aim of Kotelnikov and Shannon was to understand "transmission capacity".
The Nyquist rate was an old engineering rule of thumb, known long before Nyquist. The problem of sampling first occurred in the realm of facsimile transmission of images over telegraph wire, which began in the 19th century. By the 1910s, people understood the theory of scanning -- scanning is "analog" in the horizontal direction, but it "samples" in the vertical direction. People designed shaped apertures, for example raised cosine, which years later was discovered again as a filter window by Hamming (the head of division 1135 where I worked at Bell Labs, but he left shortly before I arrived).
And of course mathematicians also knew about sampling rates of functions built up from bandlimited Fourier series. But again, I do not believe Whittaker or Cauchy or Nyquist discovered what one would call the "sampling theorem", because they did not consider the issue of channel noise or signals or messages.
Also, it seems folks have invented the term "Nyquist-Shannon" for this article. It is sometimes called "Shannon-Kotelnikov" theorem. You could argue for "Kotelnikov-Shannon", but I believe Shannon developed the idea of digital information further than the esteemed Vladimir Alexandrovich. I hesitate to comment here, after seeing the pages of argument above, but I hope you will consider consulting a professional electrical engineer about this, because I believe the article has some problems. DonPMitchell (talk) 22:29, 9 September 2008 (UTC)
- See channel capacity, Shannon–Hartley theorem, and noisy channel coding theorem to connect with what you're thinking of. As for the invention of the name Nyquist–Shannon, that and Shannon–Nyquist are not nearly as common as simply Nyquist sampling theorem, but somewhat more sensible, seems to me; check these books and others; let us know if you find another more common or more appropriate term. Dicklyon (talk) 01:53, 10 September 2008 (UTC)
Nyquist–Shannon sampling theorem is not correct?
Dear Sir/Madam,
Sorry, but I think that Nyquist–Shannon sampling theorem about the sampling rate is not correct.
Could you please be so kind to see the papers below?
http://www.ieindia.org/pdf/88/88ET104.pdf
http://www.ieindia.org/pdf/89/89CP109.pdf
http://www.pueron.org/pueron/nauchnakritika/Th_Re.pdf
Also I believe the following rule could be applied:
"If everything else is neglected you could divide the sampling rate Fd at factor of four (4) in order to find the guaranteed bandwidth (-3dB) from your ADC in the worst case sampling of a sine wave without direct current component (DC= 0)."
I hope that this is useful to clarify the subject.
The feedback is welcomed. Best and kind regards
Petre Petrov ppetre@caramail.com —Preceding unsigned comment added by 78.90.230.235 (talk) 21:30, 24 December 2008 (UTC)
- I think most mathematicians are satisfied that the proof of the sampling theorem is sound. At any rate, article talk pages are for discussing the article itself, not the subject in general... Oli Filth(talk|contribs) 22:00, 24 December 2008 (UTC)
- Incidentally, I've had a brief look at those papers. They are pretty incoherent, and seem mostly concerned with inventing new terminology, and getting confused in the process. Oli Filth(talk|contribs) 22:24, 24 December 2008 (UTC)
- I believe that Mr. Petrov is very confused, yet does have a point. He's confused firstly by thinking that the sampling theorem is somehow associated with its converse, which is that if you sample at a rate less than twice the highest frequency, information about the signal will necessarily be lost. As we said on this talk page before, that converse is not what the sampling theorem says and is not generally true. I think what Petrov has shown (confusingly) is a counter-example, disproving that converse. In particular, that if you know your signal is a sinusoid, you can reconstruct it with many fewer samples. This is not really a very interesting result and is not related to the sampling theorem, which, by the way, is true. Dicklyon (talk) 05:38, 25 December 2008 (UTC)
- On second look, I think I misinterpreted. It seems to me now that Petrov is saying you need 4 samples per cycle (as opposed to 1/4, which I thought at first), and that the sampling theorem itself is not true. Very bogus. Dicklyon (talk) 03:12, 26 December 2008 (UTC)
Dear All, Many thanks for your attention. Maybe I am confused, but I would like to say that perhaps you did not pay enough attention to the “Nyquist theorem” and the publications cited above. I’m really sorry if my English is not comprehensible enough. I would like to ask the following questions:
- Do you think that H. Nyquist really formulated a clearly stated “sampling theorem” applicable to real analog signal conversion and reconstruction?
- What is the mathematical equation of the simplest real band limited signal (SBLS)?
- Do you know particular cases when the SBLS can be reconstructed with a signal sampling factor (SSF) N = Fd/Fs < 2?
- Do you know particular cases when the SBLS cannot be reconstructed with SSF N = 2?
- Do you know something written by Nyquist, Shannon, Kotelnikov, etc. which gives you the possibility to evaluate the maximal amplitude errors during the sampling of the SBLS, SS or CS with N > 2? (Emax, etc.; please see the formulas and the tables in the papers.)
- What is the primary effect of sampling SS, CS and SBLS with SF N = 2?
- Don't you think that clarifying the terminology is one possible way to clarify the subject and to advance in the good direction?
- If the “classical sampling theorem” is not applicable to the signal conversion and cannot pass the test of SBLS, SS and CS, to what is it applicable and true?
I hope that you will help me to clarify the subject. BR P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 09:22, 25 December 2008 (UTC)
- Petrov, I don't think anyone ever claimed that Nyquist either stated or proved the sampling theorem. Shannon did, as did some of the other guys mentioned, however. I'm most familiar with Shannon's proof, and with decades of successful engineering applications of the principle. Using the constructive reconstruction technique mentioned, amplitude errors are always zero when the conditions of the theorem are satisfied. If you can rephrase some of your questions in more normal terms, I might attempt answers. Dicklyon (talk) 03:12, 26 December 2008 (UTC)
- He should take it to comp.dsp. They'll set him straight. 71.254.7.35 (talk) 04:02, 26 December 2008 (UTC)
Rephrasing
Hello! Merry Christmas to all! If I understand clearly:
- Nyquist never formulated or proved a “sampling theorem”, but there are a “Nyquist theorem/zone/frequency/criterion”, etc.? (PP: Usually things are named after the author? Or is this a joke?)
- Shannon has proved a “sampling theorem” applicable to real-world signal conversion and reconstruction? (PP: It is strange, because I have read the papers of the “guys” (Kotelnikov included) and I have found nothing applicable to the real world! Just writings of theoreticians who do not understand the sampling and conversion processes?)
- Yes, the engineering applications have done a lot to mask the failure of the theoreticians to explain and evaluate the signal conversion!
- The amplitude errors are zero?? (PP: This is false! The errors are not zero and the signal cannot be reconstructed “exactly” or “completely”! Try and you will see them!)
- Starting the rephrasing:
- N< 2 is “under sampling”.
- N=2 is “Shannon (?) sampling” or just “sampling”.
- N>2 is “over sampling”.
- SBLS is “the simplest band limited signal” or according to me “analog signal with only two lines into its spectrum which are a DC component and a sine or co-sine wave”.
- comp.dsp will set me straight? (PP: OK).
I hope the situation now is clearer. P.Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 06:40, 26 December 2008 (UTC)
- A proof of the sampling theorem is included in one of (I don't remember which) "A Mathematical Theory of Communication" or "Communication in the presence of noise", both by Shannon.
- The "amplitude errors" are zero, assuming we're using ideal converters (i.e. no quantisation errors, which the sampling theorem doesn't attempt to deal with), and ideal filters. In other words, the signal can be reconstructed perfectly; the mathematical proof is very simple.
- I'm not sure you're going to get very far by introducing your own terminology and concepts ("SBLS", "sampling factor", etc.), because no-one will understand what you're talking about! Oli Filth(talk|contribs) 13:10, 26 December 2008 (UTC)
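As a side note, the "reconstructed perfectly" claim is easy to check numerically. The following is my own sketch (the test signal, rates, and truncation length are arbitrary choices of mine, not from the discussion): it rebuilds a bandlimited signal from its samples using the Whittaker–Shannon interpolation formula, with a long but finite run of samples standing in for the infinite sum.

```python
# Numerical sketch (my own illustration): sample a signal bandlimited to
# B < fs/2, then rebuild it with the Whittaker–Shannon interpolation
# formula  x(t) = sum_n x[n] * sinc(fs*t - n).  The only error left comes
# from truncating the (in principle infinite) sum.
import numpy as np

fs = 10.0                                  # sampling rate (samples/s)
# Signal with components at 3 Hz and 1.1 Hz, so B = 3 Hz < fs/2 = 5 Hz
x = lambda t: np.sin(2*np.pi*3.0*t + 0.7) + 0.5*np.cos(2*np.pi*1.1*t)

n = np.arange(-2000, 2001)                 # long (but finite) run of samples
samples = x(n / fs)

t_eval = np.linspace(-0.5, 0.5, 101)       # evaluate far from the truncation edges
recon = np.array([np.sum(samples * np.sinc(fs*t - n)) for t in t_eval])

err = np.max(np.abs(recon - x(t_eval)))
print("max reconstruction error:", err)    # tiny; shrinks as more samples are kept
```

The residual error here is purely from cutting off the sum; with the full infinite sequence of samples the reconstruction is exact, which is the content of the theorem.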
- "A Mathematical Theory of Communication" or "Communication in the presence of noise", both by Shannon?? I have read them carefully. Nothing is applicable to the sampling and ADC. Please specify the page and line number. Please specify how these publications are related to the real conversion of an analog signal.
- Perhaps I will not advance with my terminology, but at least I will not be repeating "proven" theory unrelated to the signal conversion.
- Errors are inevitable. You will never reconstruct "exactly" an analog signal converted into digital form. Try it and you will see!
- About the amplitude error: could you please pay attention to Figure 5 on page 55 at http://www.ieindia.org/pdf/89/89CP109.pdf. You will see clearly the difference between the amplitude of the signal and the maximal sample. OK?
- BR P. Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 15:21, 26 December 2008 (UTC)
- The sampling theorem doesn't attempt to deal with implementation limitations such as quantisation, non-linearities and non-ideal filters. No-one has claimed that it does.
- You can reconstruct a bandlimited analogue signal to an arbitrary degree of accuracy. Just use tighter filters and higher-resolution converters.
- What you've drawn there is the result of a "stair-case" reconstruction filter, i.e. a filter with a rectangular impulse response (a zero-order hold). This is not the ideal reconstruction filter; it doesn't fully eliminate the images. In practice, a combination of oversampling and compensation filters can reduce the image power to a negligible level (for any definition of "negligible") and hence eliminate the "amplitude errors". None of this affects the sampling theorem!
- In summary, no-one is disputing the fact that if you use sub-optimal/non-ideal converters and filters, you won't get the same result as the sampling theorem predicts. Oli Filth(talk|contribs) 15:33, 26 December 2008 (UTC)
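The droop of the "stair-case" (zero-order-hold) filter mentioned above can be put into numbers. This is my own sketch (the sample rate and test tones are arbitrary, not from the discussion): the magnitude response of a zero-order hold is |sinc(f/fs)|, so the apparent "amplitude error" grows toward the band edge, entirely as a property of the non-ideal filter rather than of the theorem.

```python
# Sketch (my own numbers): the zero-order-hold DAC has magnitude response
# |sinc(f/fs)|, where np.sinc(x) = sin(pi*x)/(pi*x) is the normalized sinc.
# The droop in dB at a few test tones shows the "amplitude error" growing
# toward the Nyquist limit.
import numpy as np

fs = 48000.0                                   # assumed DAC sample rate (Hz)
f = np.array([1000.0, 10000.0, 20000.0])       # test tones (Hz)
droop_db = 20*np.log10(np.sinc(f / fs))        # ZOH gain at each tone, in dB
for fi, d in zip(f, droop_db):
    print(f"{fi:7.0f} Hz: {d:6.2f} dB")
```

Near DC the droop is negligible, while close to the band edge it reaches a couple of dB; a compensation filter with the inverse response removes it.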
Hello again!
I am really sorry but we are talking about different things.
I am not sure that you are understanding my questions and answers.
I am not disputing any filters at the moment.
Only the differences between the amplitude of the samples and the amplitude of the converted signal.
Also I am not sure that you have read "the classics" in the sampling theory.
Also, please note that there is a difference between the "analog multiplexing" (analog telephony discussed by the "classics" during 1900-1950) and analog to digital conversion and reconstruction.
I wish you good luck with the "classics" in the sampling theory! BR P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 15:46, 26 December 2008 (UTC)
- You started this conversation with "I think that Nyquist–Shannon sampling theorem about the sampling rate is not correct", with links to papers that discussed "amplitude errors" as if there was some mistake in the sampling theorem. That is what I have been talking about! If you believe we're talking about different things, then yes, I must be misunderstanding your questions! Perhaps you'd like to re-state exactly what you see as the problem with the sampling theorem.
- As for filters, as far as your paper is concerned, it's entirely about filters, although you may not realise it. In your diagram, you're using a sub-optimal filter, and that is the cause of your "amplitude errors". Oli Filth(talk|contribs) 15:59, 26 December 2008 (UTC)
- Joke?
Petrov, you ask "Usually the things are named after the author? Or this is a joke?" This is clear evidence that you have not bothered to read the article that you are criticizing. Please consider doing so, or keeping quiet. Dicklyon (talk) 00:44, 27 December 2008 (UTC)
Hello!
Ok.
I will repeat some of the questions again in more simple and clear form:
- Where has H. Nyquist formulated or proved a clearly stated “sampling theorem” applicable in signal conversion theory? (paper, page, line number?)
- Where is the original clear definition of the Nyquist theorem mentioned in Wikipedia? (paper, page, line number?)
- Where has Shannon formulated or proved a “sampling theorem” applicable in signal conversion theory with ADC? (paper, page, line number?)
- What will we lose if we remove the papers of Nyquist and Shannon from the signal conversion theory and practice with ADC?
- What is your definition of the “band limited” signal discussed by Shannon and Kotelnikov?
- Is it possible to reconstruct an analog signal, which in fact has infinite accuracy, if you cut it into a finite number of bits and put it into circuitry with finite precision and unpredictable accuracy (as you know, there are no exact values in electronics)?
- The numbers e = 2.7... and pi = 3.14... are included in most real signals. How will you reconstruct them “exactly” or “completely”?
I am waiting for the answers
Br
P.Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 10:40, 27 December 2008 (UTC)
- I don't know why you keep requesting where Nyquist proved it; the article already summarises the history of the theorem. As we've already stated, Shannon presents a proof in "Communication in the presence of noise"; it is quoted directly in the article. As we've already stated, this is an idealised model. Just as in all aspects of engineering, practical considerations impose compromises; in this case it's bandwidth and non-linearities. As we've already stated, no-one is claiming that the original theorem attempts to deal with these imperfections. I don't know why you keep talking about practical imperfections as if they invalidate the theorem; they don't, because the theorem is based on an idealised model.
- By your logic, we might as well say that, for instance, LTI theory and small-signal transistor models are invalid, because the real world isn't ideal! Oli Filth(talk|contribs) 11:57, 27 December 2008 (UTC)
"If a function x(t) contains no frequencies higher than B cps, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart."
PP: Imagine that you have a sum of a DC signal and an SS signal.
How will you completely determine them by giving only 2 or even 3 points?
OK? —Preceding unsigned comment added by 78.90.230.235 (talk) 10:49, 27 December 2008 (UTC)
- The theorem and the article aren't talking about 2 or 3 points. They're talking about an infinite sequence of points.
- However, as it happens, in the absence of noise, one can theoretically determine all the parameters of a sinusoid with just three samples (up to aliases). I imagine that if one had four samples, one could determine the DC offset as well. However, this is not what the theorem is talking about. Oli Filth(talk|contribs) 11:57, 27 December 2008 (UTC)
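The "equations in unknowns" idea above can be sketched numerically. This is my own toy illustration, not from the papers, and it makes the simplifying assumption that the frequency f0 is known: then a sinusoid plus DC is linear in its remaining three unknowns, so three samples at distinct times determine it exactly.

```python
# Toy illustration (my own sketch, assumed-known frequency f0): a signal
#   x(t) = a*cos(2*pi*f0*t) + b*sin(2*pi*f0*t) + c
# is linear in (a, b, c), so three samples at distinct times pin it down
# by solving a 3x3 linear system (amplitude and phase follow from a and b).
import numpy as np

f0 = 50.0
a_true, b_true, c_true = 1.3, -0.4, 0.25
x = lambda t: a_true*np.cos(2*np.pi*f0*t) + b_true*np.sin(2*np.pi*f0*t) + c_true

t = np.array([0.001, 0.004, 0.009])   # three sample instants (seconds)
M = np.column_stack([np.cos(2*np.pi*f0*t), np.sin(2*np.pi*f0*t), np.ones(3)])
a, b, c = np.linalg.solve(M, x(t))
print(a, b, c)                         # recovers 1.3, -0.4, 0.25
```

With the frequency also unknown the problem becomes nonlinear and needs a fourth sample, which matches the "four equations in four unknowns" remark; none of this involves the sampling theorem itself.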
PP:
"They're talking about an infinite sequence of points."
Where did you find that? paper, page, line number?
"I imagine that if one had four samples, one could determine the DC offset as well" This is my paper. Normally should be covered by the "classical theorem". OK? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:17, 27 December 2008 (UTC)
- It's pretty clear that you haven't read the original papers very carefully (or have misunderstood them)! In "Communication in the presence of noise", Theorem #1 states it. Yes, it's true that the word "infinite" is not used in the prose, but then look at the limits of the summation in Eq. 7.
- As for your paper, it's already a known fact (in fact, it's obvious; four equations in four unknowns), and is not in the scope of the sampling theorem (although you can probably derive the same result from the theorem). Oli Filth(talk|contribs) 12:25, 27 December 2008 (UTC)
PP: H. Nyquist, "Certain topics in telegraph transmission theory", Trans. AIEE, vol. 47, pp. 617-644, Apr. 1928 Reprint as classic paper in: Proc. IEEE, Vol. 90, No. 2, Feb 2002.
Question: Where in that publication is the "Sampling theorem"?
"I don't know why you keep requesting where Nyquist proved it..." You are stating that there is "Nyquit theorem?" (please see the article in Wikipedia). There should be a statement and a proof. OK? Where they are? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:43, 27 December 2008 (UTC)
- There is no article on "Nyquist theorem", only a redirect to this article. Please stop asking the same question over and over again; both Dick and I have already answered it, and the article already explains it. Oli Filth(talk|contribs) 12:47, 27 December 2008 (UTC)
PP:
http://www.stanford.edu/class/ee104/shannonpaper.pdf
page 448, Theorem I:
1. First failure: for SS, CS or SBLS sampled at zero crossings. (One failure is enough!)
- What failure? The only point of contention is in the nature of the inequality (i.e. an open or closed bound). It is generally accepted today that it is true only for an open bound. The article discusses this in the introduction and in the section "Critical frequency". Again, it is clear that you haven't actually read the article.
2. Second failure: "completely" is wrong word.
- Please don't tell me you're talking about your "amplitude errors" again...
3. Third failure: It is about "function" not about " a signal". Every "signal" is a "function", but not every "function" is a "signal". OK?
- How is this a failure?
4. Fourth failure: "common knowledge"??? Is that a proof?
- No. What follows is a proof.
5. Fifth failure: No phase in the Fourier series! The phase is inherent part of the signal!
- F(ω) isn't constrained to be real, and neither is f(t) (and hence neither are the series coefficients). Oli Filth(talk|contribs) 13:11, 27 December 2008 (UTC)
Imagine same number of failures for another theorem, e.g. Pythagoras theorem! Will you defend it in that case? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:56, 27 December 2008 (UTC)
PP: "...F(ω) isn't constrained to be real, and neither is f(t)...".
You could write any equation, but you cannot produce any signal. OK?
Sorry, I am talking about real signals with real functions and I am forced to evaluate the errors. You can produce the signals and test the equipment. Please excuse me. Maybe it was my mistake to start this talk. —Preceding unsigned comment added by PetrePetrov (talk • contribs) 13:19, 27 December 2008 (UTC)
- "Real" as opposed to "complex"... i.e. phase is included. Oli Filth(talk|contribs) 13:21, 27 December 2008 (UTC)
PP:
Hello!
Again, I have looked at the papers of the "classics" in the field.
Maybe the following chronology of the events in the field of the “sampling” theorem is OK:
1. Before V. Kotelnikov: H. Nyquist did not formulate any “sampling theorem”. His analysis (?) even of the DC (!) is really strange for an engineer. (Please see the referenced papers.) There is no sense mentioning him in sampling, SH, ADC and DAC systems. In "analog multiplexing telephony" he is OK.
2. V. Kotelnikov (1933): For the first time formulated theorems, but unfortunately incomplete ones, because he did not include the necessary definitions and calculations. No ideas on errors! Maybe he should be mentioned just to see the difference between the theory and the practice.
3. C. Shannon (1949): In fact a repetition of part of that given by V. Kotelnikov. There is not even a clearly formulated proof of something utilizable in ADC. No excuse for 1949! The digital computers were created!
No understanding of the signals (even theoretical understanding) to test its “theorems”. No necessary definitions and calculations. No ideas of errors! No idea of the application of an oscilloscope and multimeter!
4. Situation now:
No full theory describing completely the conversion of the signals from analog to digital form and reconstruction.
But there are several good definitions and theorems, verifiable in practice, to evaluate the errors of not sampling the SS and CS at their maxima. Verifiable even with an analog oscilloscope and multimeter!
I hope that is good and acceptable.
BR
P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 08:41, 28 December 2008 (UTC)
- I'm going to say this one last time. The sampling theorem doesn't attempt to deal with "errors", such as those caused by non-ideal filters. Please stop stating the same thing time and time again; everyone already knows that the theorem is based on an ideal case. It has nothing to do with "multimeters and oscilloscopes". The only theoretical difference between "analog multiplexing" and A-D conversion is the quantisation. To say that there is "no understanding of the signals..." is total nonsense. Please stop posting the same mis-informed points!
- Incidentally, Nyquist uses the term "D.C." in the context of "DC-wave", as the opposite of "Carrier wave"; we would call these "baseband" and "passband" signalling today.
- If you have something on "the conversion of the signals from analog to digital form and reconstruction" from a Reliable source, then please post it here, and we'll take a look. Your own papers aren't going to do it, I'm afraid. However, even if you do find something, it's unlikely to make it into the article, because the article is about the original theorem. Oli Filth(talk|contribs) 11:37, 28 December 2008 (UTC)
Hello!
1. No need to repeat it more times. From my point of view the "Nyquist-Shannon theorem" does not exist, and what exists is not fully (or even largely) applicable in practice. You are free to think that it exists and that people use it.
- And you are free not to accept it! (although saying "it doesn't exist" is meaningless...) Yes, of course people use it. It's been the basis of a large part of information theory, comms theory and signal-processing theory for the last 60 years or so.
2. Please note that there are "representative" (simplified but still utilizable) and "non-representative" ("oversimplified" and not usable) models. The "original theorem" is based on the "oversimplified" model and is not representative.
- You still haven't said why. Remember, one can approximate the ideal as closely as one desires.
3. I have seen the "DC" of Nyquist before your note and I am not accepting it.
- I have no idea what you mean, I'm afraid.
4. Because I am not a "reliable source" I will not spam any more the talks here.
- You're free to write what you like on the talk page (within reason - see WP:TALK). However, we can only put reliable material into the article itself.
5. If you insist on the "original theorem", please copy and paste "exactly" the texts of Nyquist, Shannon, Kotelnikov, etc. which you think are relevant to the subject and let the readers put their own remarks and conclusions outside the "original" texts. You could put your own, of course. OK?
- The article already has the exact text from Shannon's paper. I'm not sure what more you expect?
6. I have put here a lot of questions and texts without individual answers. If Wikipedia keeps them, someone will answer and comment on them (maybe).
- I believe I've answered all the meaningful questions. But yes, this text will be kept.
7. I do not believe that my own papers will change something in the better direction, but someone will change it because the theory (with “representative” models) and the practice should go in the same direction and the errors (“differences”) should be evaluated.
- The cause of your "errors" is already well understood. For instance, CD players since the late 1980s onwards use oversampling DACs and sinc-compensation filters to eliminate these "errors". That's not due to a limitation in the theory, it's due to hardware limitations. The solution can be explained with the sampling theorem. Oli Filth(talk|contribs) 15:09, 28 December 2008 (UTC)
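The CD-player point above is easy to illustrate with numbers. This is my own rough sketch (the tone and rates are mine; the mechanism, oversampling plus sinc compensation, is what the comment describes): a zero-order-hold DAC has gain |sinc(f/fs)| at frequency f, and running it at 4x the base rate moves the same audio tone far below the new Nyquist limit, so the droop nearly disappears even before any compensation filter is applied.

```python
# Rough numeric sketch (my own numbers): compare the zero-order-hold gain
# |sinc(f/fs)| at a near-band-edge audio tone for a base-rate DAC versus a
# 4x-oversampling DAC.
import numpy as np

f = 20000.0                # tone near the top of the CD audio band (Hz)
fs_base = 44100.0          # base CD sample rate
fs_over = 4 * fs_base      # 4x oversampling DAC

gain_base = np.sinc(f / fs_base)   # ZOH gain without oversampling
gain_over = np.sinc(f / fs_over)   # ZOH gain with 4x oversampling
print(gain_base, gain_over)
```

The remaining few percent of droop is what the sinc-compensation filter corrects; the point is that this is a hardware-filter issue that the sampling theorem itself fully explains.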
Good luck again. I am not sure that I will answer promptly to any comment (if any) posted here.
BR Petre Petrov
Rapidly oscillating edits
I noticed some oscillation between 65.60.217.105 and Oli Filth about what to say about the conditions on x(t). I would suggest we remove the parenthetical comment
- "(which exists if is square-integrable)"
For the following two reasons. First, it exists also in many other situations. Granted, this is practically the most common. Second, it is not entirely clear that the integral we then follow this statement with exists if x(t) is square-integrable. I do not think it detracts at all from the article to simply say that X(f) is the continuous Fourier transform of x(t). How do other people feel about this? Thenub314 (talk) 19:03, 3 January 2009 (UTC)
PS I think 65.60.217.105 thinks the phrase continuous Fourier transform is about the Fourier transform of x(t) being continuous, instead of being a synonym for "the Fourier transform on the real line." Thenub314 (talk) 19:14, 3 January 2009 (UTC)
- I realise that I'm dangerously close to 3RR, so I won't touch this again today! The reason I've been reverting is that replacing "square-integrable" with "integrable" is incorrect (however, square-integrability is a sufficient condition for the existence of the FT; I can find refs if necessary). I'm not averse to removing the condition entirely; I'm not sure whether there was a reason for its inclusion earlier in the article's history. Oli Filth(talk|contribs) 19:10, 3 January 2009 (UTC)
- I agree with your guess as to how 65.60.217.105 is interpreting "continuous"; see his comments on my talk page. Oli Filth(talk|contribs) 19:36, 3 January 2009 (UTC)
- Yes, thanks for pointing me there. Hopefully my removal of "continuous" will satisfy him. I suppose I should put back "or square integrable". Dicklyon (talk) 20:08, 3 January 2009 (UTC)
- Not a problem. I agree with you Oli that the Fourier transform exists, but the integral may diverge. I think it follows from Carleson's theorem about almost everywhere convergence of Fourier series that this happens at worst almost everywhere, but I don't off hand know of a reference that goes into this level of detail (and this would apply only to the 1-d transform).
- Anyways I am definitely digressing. The conditions are discussed in some detail in the Fourier transform article, which we link to. So overall I would be slightly in favor of removing the condition entirely. But I think (Dicklyon)'s version works also. (Dicklyon), how do you feel about removing the parenthetical comment?
- I wouldn't mind removing the parenthetical conditions. Dicklyon (talk) 22:06, 3 January 2009 (UTC)
Geometric interpretation of critical frequency
I'm not sure the new addition is correct. Specifically:
- the parallel implied by "Just as the angles on a circle are parametrized by the half-open interval [0,2π) – the point 2π being omitted because it is already counted by 0 – the Nyquist frequency must be omitted from reconstruction" is invalid, not least because the Nyquist frequency is at π, not 2π.
- the discussion of "half a point" is handwaving, which is only amplified by the use of scare quotes. And it's not clear how it makes sense in continuous frequency.
- it's not made clear why the asymmetry disappears for complex signals.
Oli Filth(talk|contribs) 19:24, 14 April 2009 (UTC)
Critical frequency
This section is unnecessarily verbose. It is sufficient to point out that, sampling at exactly fs = 2B, the samples of:
x(t) = cos(2πBt + θ)
are identical to the samples of:
y(t) = cos(2πBt − θ)
and yet the continuous functions are different (for sin(θ) ≠ 0).
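A quick numeric check of this critical-frequency example (my own sketch; the particular B and θ are arbitrary): sampling cos(2πBt + θ) and cos(2πBt − θ) at exactly fs = 2B lands every sample on sin(πn) = 0, so the two distinct signals produce identical samples.

```python
# Numeric check: at exactly fs = 2B, the samples of cos(2*pi*B*t + theta)
# and cos(2*pi*B*t - theta) coincide, because both reduce to
# cos(pi*n)*cos(theta) when t = n/(2B); yet the continuous-time signals
# differ whenever sin(theta) != 0.
import numpy as np

B = 5.0
fs = 2 * B                  # sampling exactly at the critical rate
theta = 0.8
n = np.arange(20)
t = n / fs

s_plus = np.cos(2*np.pi*B*t + theta)
s_minus = np.cos(2*np.pi*B*t - theta)
print(np.max(np.abs(s_plus - s_minus)))   # 0 up to floating-point rounding
```

This is why the bound in the theorem must be strict: at the critical frequency the samples cannot distinguish the phase.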