Talk:Quantization (signal processing)
WikiProject Physics (Start-class, Low-importance)
WikiProject Professional sound production (Start-class, Mid-importance)
The content of Quantization error was merged into Quantization (signal processing) on 2013-05-19. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted as long as the latter page exists. For the discussion at that location, see its talk page.
Several comments on page
I am not an expert with Wiki markup, so I do not plan to edit the page directly. However, I have some brief comments/opinions that may be of interest to the next person who decides to edit this page:
1. I think the basic description of "quantization" could use a little bit of tweaking. The current definition is: "quantization is the process of approximating a continuous range of values (or a very large set of possible discrete values) by a relatively-small set of discrete symbols or integer values." I believe that the definition could be made more precise: "quantization is the non-reversible process of approximating a value to one of a countable set of values." Then, specify that the original value can be from a continuous and uncountable domain, and can be multi-dimensional (i.e. vector vs. scalar quantization). The problem with the current definition is that the set that is mapped to by the quantizer does not necessarily have to be "small". In fact, the set being mapped to could have infinite cardinality. The only restriction placed on a quantizer is that the range of values being mapped to must be countable, i.e. mappable to the set of integers in some way.
2. I disagree with the use of the floor function in the scalar quantization formula. It is not correct to say that scalar quantizers perform a floor operation.
3. Some examples would be useful for non-technical readers. I would recommend the classical "round to the nearest integer" example, as well as an example with non-uniform scalar quantization. The non-uniform quantizer doesn't have to be useful in practice, but it serves to open the minds of many readers who might think a quantizer must always "round" in some regular fashion.
4. Demonstrate that quantization is non-reversible with a simple example: the "round to the nearest integer" quantizer maps 2.6, 2.8, 2.95 and 3.3 all to the value 3, but given only the quantized value 3, there is no way of recovering what the original value was. This is an important issue for lossy compression. (A sketch of this example is given below.)
—Preceding unsigned comment added by 70.187.205.90 (talk • contribs) 02:37, 30 January 2006
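For illustration, a minimal Python sketch of the rounding example in point 4 above (an illustration only, not code from the article; it relies on Python's built-in round):

```python
# Round-to-nearest-integer quantizer: a many-to-one mapping.
def quantize(x: float) -> int:
    return round(x)

inputs = [2.6, 2.8, 2.95, 3.3]
outputs = [quantize(x) for x in inputs]
print(outputs)  # [3, 3, 3, 3] -- all four inputs collapse to the same value,
                # so the original value cannot be recovered from the output alone.
```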
Definition of floor function
To stress that the transition from continuous to discrete data is achieved by the floor function, it might be useful to require to be continuous. Additionally, I think
- is the floor function, yielding the integer
is confusing, it may be better to use or instead of .
--134.109.80.239 14:50, 11 October 2006 (UTC)
- I liked your suggestion, and just put it into the article. -SudoMonas 17:22, 13 October 2006 (UTC)
Incorrect statement about quantization in nature
This page incorrectly stated that, at a fundamental level, all quantities in nature are quantized. This is not true. For example, the position of a particle or an atom is not quantized, and while the energy of an electron orbiting an atomic nucleus is quantized, an electron's energy in free space is not. I have changed the word "all" to "some" in the text to correct the false statement, but a more thorough revision could be made.
71.242.70.246 18:03, 12 May 2007 (UTC)
- Agree. I find that the whole section is unrelated to quantisation in signal processing, and it hasn't been edited in years. I've decided to delete the whole section. C xong (talk) 04:06, 31 March 2010 (UTC)
pi and e
" For example we can design a quantizer such that it represents a signal with a single bit (just two levels) such that, one level is "pi=3,14..." (say encoded with a 1) and the other level is "e=2.7183..." ( say encoded with a 0), as we can see, the quantized values of the signal take on infinite precision, irrational numbers. But there are only two levels. "
How will you build, test and prove that?
How will you measure the "pi" and "e" levels?
P. Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 18:59, 20 March 2010 (UTC)
- The example is poorly written, but its premise is correct. Infinite precision is possible only in theory, so it cannot be tested in practice. The example could be better worded. C xong (talk) 04:09, 31 March 2010 (UTC)
OK, I admit that the example is poorly stated :) The reason for such an example is the bias I have seen among the prior editors towards the opinion that "a quantizer should have integer (like 1, 2, 3) or at least rational, fraction-wise (like 0.25, 0.50, 0.75) output values." This bias is a natural but misleading result of using computers for digital signal processing and inputting analog signals with sound cards for practical applications of quantizers within the ADCs of such devices. It is true that a computer needs fractional numbers that are exactly representable within a finite N bits of binary resolution (either in integer format or in floating-point formats). BUT a quantizer is something else. Its main function is to map a range of uncountable/countable things onto a countable set, with a much smaller number of elements in the case of a countable-to-countable mapping. Whether those things have numerical values or not is a secondary issue, and even whether those numerical values are integer, real or rational is completely irrelevant from a quantizer's point of view. —Preceding unsigned comment added by 88.226.19.210 (talk) 14:55, 2 April 2011 (UTC)
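As an illustration of that point (my sketch, not text from the article): a 1-bit quantizer whose two reconstruction levels happen to be the irrational numbers pi and e. The decision threshold halfway between the two levels is an assumption of this sketch.

```python
import math

# A 1-bit quantizer: two reconstruction levels, both irrational numbers.
LEVELS = {0: math.e, 1: math.pi}           # index -> reconstruction value
THRESHOLD = (math.e + math.pi) / 2.0       # assumed decision boundary

def encode(x: float) -> int:
    """Map an input sample to a 1-bit index."""
    return 1 if x >= THRESHOLD else 0

def decode(index: int) -> float:
    """Map the 1-bit index back to its reconstruction level."""
    return LEVELS[index]

for x in [0.0, 2.5, 3.0, 10.0]:
    i = encode(x)
    print(x, "->", i, "->", decode(i))   # outputs are only ever e or pi
```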
Page is Mature Enough?
I did my best to bring the premature page of Quantization (signal processing) to an acceptable state. Now it has all the necessary definitions and mathematical explanations. It still lacks a lot, though. For example:
- good graphs for quantizer I/O maps (I cannot add them, since I am not a member)
- graphs for companding functions
- a few numerical examples
- adaptive quantization details
- further decoration of the topics
- application examples
I might add more in the future; NEVERTHELESS, this page is acceptable now. We may get rid of the banner on top. —Preceding unsigned comment added by 88.226.92.114 (talk) 19:49, 6 April 2011 (UTC)
- Maybe there is a new banner, but right now it says the article needs additional citations, and I think that is true. Constant314 (talk) 22:25, 6 April 2011 (UTC)
This article has substantial problems. I am not even convinced that the edits over the last month or so have been improvements.
- The article says it is a summary of what is in some book (Introduction to Data Compression, K. Sayood, M. Kaufmann). That does not seem proper for Wikipedia, and I don't think that is actually true, based on the edit history (although I don't have a copy of that book to be able to say for sure).
- It has a substantial number of grammatical and formatting problems and spelling errors.
- It should be about quantization in general, but it now seems to be exclusively about scalar quantization.
- It isn't quite correct in various places.
- It no longer even contains a definition of what quantization is.
—SudoMonas (talk) 17:03, 8 April 2011 (UTC)
So you mean that, according to your standards, the previous state was better; then I will revert my edits.
=> I tried, but it is too tiring to revert; you should do it back to the date "15 February 2011", which was just before I began my edits. —Preceding unsigned comment added by 88.224.91.218 (talk) 21:33, 8 April 2011 (UTC)
Looking back, I don't really think that the shortcomings that I see in the article are your fault, and you made your edits in good faith, so I will not revert them. I think the article wasn't very good back in February either. I guess I should stop complaining and just try to help contribute to make the article better. —SudoMonas (talk) 01:17, 9 April 2011 (UTC)
- I like what you have done with the introduction. Constant314 (talk) 22:57, 9 April 2011 (UTC)
- While you are at it, you may want to eliminate the use of first person plural in favor of third person. Example: "After defining these two performance metrics for the quantizer, we can express a typical Rate–Distortion formulation for a quantizer design problem in one of two ways: " could be rewritten as "After defining these two performance metrics for the quantizer, a typical Rate–Distortion formulation for a quantizer design problem can be expressed in one of two ways: " Constant314 (talk) 13:43, 12 April 2011 (UTC)
The new state of the page
The following represents my personal point of view.
OK, now, after several enhancements, corrections, modifications and additions, the page does not seem any better? :)) Why so?
1- The whole page does not seem to relate to the main topic (Quantization - "Signal Processing"?). It seems mainly about quantization in "mathematics", "communication systems" and "source coding" (data compression); however, the sole purpose of quantization for signal processing is simply the representation of analog signals by digital ones. This point is hardly discussed. That is a practical point of view (ADC/DAC, data acquisition, instrumentation), and I think this point must be stressed. There are many considerations: input signal conditioning, clipping, distortions, AGC, dynamic range modifications, loading factor calculations, independent noise assumptions, SQNR (dB) calculations, input types and their effects on the resulting signal fidelity...
2- The modifications don't work: the previous state was based on my very personal style. I like personal writing :). It doesn't fit the wiki, but your (SudoMonas) modifications are now too constrained and limited by those previous things, and that creates frequent style mismatches which make it difficult to read. Let's consider writing this page from "scratch" :)), so that it at least becomes consistent in terminology and style.
3- It seems boring without actual examples, applications and figures.
4- And it is quite long now.
Now a few suggestions.
1- That whole rate–distortion-based mathematical stuff should either be omitted or be moved to a proper place. The analysis and design of a quantizer would be better treated separately from its definitions, types, usages and properties.
2- Shorter is better! At various places, too-lengthy explanations pervade (some of them belonging to me). Even the first few sentences are unnecessarily (almost redundantly, like this one) long. What is wrong with saying => "Quantization is the process of mapping a large set of input values to a much smaller set"? Concise, compact, and if any ambiguity arises (it definitely does), it can always be expanded and clarified in what follows, instead of inside a single sentence.
3- Quantization in signal processing, mathematics, communications and source coding has quite different purposes/types of usage. Therefore these would be better treated separately.
In signal processing => ADC/DAC characterizations, binary data representation formats, rounding, rounding in MATLAB/C, rounding in IEEE floating-point formats, input signal conditioning, the independent quantization noise assumption and its effects on outputs, spectral noise shaping via noise feedback applications, input loading factors, quantizer resolution with respect to bit size, relations to sampling rate. It is very natural to consider quantization together with sampling here.
In communications => telephone lines, PCM, DPCM, ADPCM, delta modulation, nonuniform Max–Lloyd and adaptive quantizers, A-law and mu-law companders, the ones employed in codecs like the ITU-T G.723, G.726 and G.722 standards.
In source coding => rate-distortion-based encoder–decoder design, vector quantization, psychoacoustic/psychovisual facts for shaping the design; the quantizers used in JPEG, MPEG audio and H.263/4 would show some nice examples. —Preceding unsigned comment added by 88.226.198.117 (talk) 23:52, 13 April 2011 (UTC)
- I think it is somewhat better. My perception is that it jumps into specialized math too quickly. My thoughts are that roughly the first half ought to be descriptive and qualitative, with simple examples and only a few simple equations, and should only be about uniform-step-size quantization. Then the second half could have all that math: first anything to do with the uniform quantizer, then the others. Constant314 (talk) 18:13, 14 April 2011 (UTC)
- Upon further reflection, I think this article should deal only with uniform quantization, and the other types should be moved to their own pages. Constant314 (talk) 18:24, 14 April 2011 (UTC)
- I just noticed these comments – some further edits have been done since those comments were made. I just included the suggestion regarding the simplification of the first sentence. As you have probably seen, I have just started at the beginning and have been trying to improve what I saw from paragraph-to-paragraph as I moved forward. It's true that this is an incremental approach. I haven't yet gotten to the later sections or really attempted any significant restructuring or added substantial new topics. I had planned to get to some of that, but hadn't yet had time. The rate-distortion and Lloyd-Max material was already there – I have only refined them. I certainly think that the article has been getting substantially more correct and that there has been some improvement in the logical flow, consistency, notation, and referencing. In my opinion, quantization for source coding and communication are within the scope of signal processing. Of course, I have been the one doing the recent edits, so I may not be perfectly objective about them. –SudoMonas (talk) 19:12, 14 April 2011 (UTC)
- I think you are making improvements. I don't know how this article got to where it was. It looked like two guys who knew a lot about the subject were in a contest to see who could add the most stuff. Regarding "quantization for source coding and communication are within the scope of signal processing", I agree, but that is no reason why they could not have their own pages with a link from this page. Constant314 (talk) 21:46, 14 April 2011 (UTC)
1- Well, first of all, there are certainly improvements: at the very beginning, once upon a time, quantization was described almost as rounding to an integer. Now it is definitely better.
2- The fundamental problem results from the fact that, while doing my edits, I thought it would be a good idea to start from the most general, rate-distortion-based case and move on to the specific cases as special examples (a rather logical, axiomatic approach). Now I think that is not good. It seems better to go, as Constant314 points out, from the simpler uniform quantizer to more general cases. For me it is definitely better in its present state, from general theory to specific examples, but I guess most people visiting this page have no idea about either entropy or rate-distortion theory, and for those people (the majority) it is difficult to read in this fashion.
3- There are no different quantizers for signal processing, communication or source coding. However, the application target, and hence the constraints, may get radically different. For example, dithering has no meaning in source coding, while it is a useful tool for image/audio post-processing. For most DSP applications, for example, due to practical CPU architectures, FLC is used, that is, the natural machine arithmetic and machine word size; it would be difficult to use entropy techniques there. As all these are different application constraints on the same general problem, that is why I assume treating them separately would be better. By the way, my edits were geared towards source coding and scalar quantization in particular. SudoMonas seems to have a vector quantization (VQ) basis. That "classification of input" argument, instead of simply calling them decision intervals, has very little meaning and significance for a scalar quantizer, although it is understandable for pattern recognition or vector quantization. I strongly suggest avoiding a mixture of VQ and SQ. It would be much better to treat VQ in a separate, brand new and free page.
4- Since quantization is a vast subject, there is no last word on it. Anybody who knows about it would like to add an extra paragraph of his own: expanding some vague, overly compressed definitions, giving a more unambiguous description, adding a new point of view or some application examples... And that would make this page too long. I guess only the necessary and sufficient explanations should be included.
5- Finally, I am not in a contest, as suggested by Constant314. I am not putting in anything new. Possibly I won't either. I wish good luck to the remaining editors.
—Preceding unsigned comment added by 88.224.26.202 (talk) 12:20, 15 April 2011 (UTC)
- Re your #5: Sorry, I didn't mean that you were in a contest. I think the article was in that condition before you started working on it.
- Re your #3: Your approach of general to specific would appeal to mathematicians, which few readers are. Constant314 (talk) 13:25, 15 April 2011 (UTC)
Since these further comments, I have done various things to try to simplify the presentation. I have tried to restrict the introduction section to basic ideas and applications without getting into detailed equations. I have also moved more of the simpler uniform quantization discussion up before the discussion of rate-distortion optimization. (I agree that the axiomatic approach was a bit too tough for most readers.) I have substantially condensed and simplified much of the material after the rate-distortion and Lloyd-Max sections and removed some of the unreferenced material that seemed confusingly written, overly mathematical, and in some cases not especially noteworthy. There is already a separate article on VQ, and it is linked near the beginning of the article. I am becoming reasonably satisfied with the article, although I do still plan some further refinements. —SudoMonas (talk) 01:23, 20 April 2011 (UTC)
- I'd like to see the Quantization Noise subsection reinstated. People do sometimes analyze quantization error as noise; sometimes that is OK, and sometimes it yields wrong answers. Constant314 (talk) 17:07, 21 April 2011 (UTC)
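For context, a minimal Python sketch of the additive-noise view being discussed (my illustration; the step size and test signals are arbitrary choices): for a busy input the measured error power is close to the Δ²/12 prediction, while for a constant input the model gives a wrong answer.

```python
import random

DELTA = 0.1  # quantization step size (arbitrary choice for this sketch)

def quantize(x: float) -> float:
    """Uniform mid-tread quantizer with step DELTA."""
    return DELTA * round(x / DELTA)

def error_power(samples):
    return sum((x - quantize(x)) ** 2 for x in samples) / len(samples)

random.seed(0)
busy = [random.uniform(-1.0, 1.0) for _ in range(100_000)]  # "busy" signal
dc = [0.5 * DELTA] * 100_000                                # constant input

print("additive-noise model:", DELTA ** 2 / 12)    # ~0.00083
print("busy signal         :", error_power(busy))  # close to the prediction
print("constant input      :", error_power(dc))    # off by a factor of ~3 here
```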
- Excellent suggestion – although I think that the material that was previously in the article on that subject was not such a good presentation of the subject. If someone else doesn't do it, I'll add some discussion of that topic soon. —SudoMonas (talk) 21:58, 21 April 2011 (UTC)
- You are doing fine. I would suggest that you use PDF instead of pdf and that you write it out fully at least the first time in every section. Constant314 (talk) 17:38, 22 April 2011 (UTC)
- Thanks. I just inserted a section about the additive noise model. Regarding pdf, I put some changes in the article to improve that aspect, although not exactly as suggested. According to the PDF (disambiguation)#In science and probability density function pages (and my personal experience), the usual abbreviation uses lowercase letters. To me (and I think to most people), PDF refers to the file format, and that assumption is reflected in the Wikilink redirect on the PDF page. In the article modification, I defined the abbreviation in parentheses in the first place where it is used in the article and put Wikilinks in the first use in each other section. In some places, defining the term in parentheses might mix with math formulas that immediately follow the term. —SudoMonas (talk) 20:22, 22 April 2011 (UTC)
- LOL, I have just the opposite reaction: I think pdf is a file type and PDF is an acronym. Constant314 (talk) 16:01, 23 April 2011 (UTC)
Mid rise, Mid tread, mu-law, A-law
I cannot find a reference right now, but my recollection is that mu-law was mid-rise and A-law was mid-tread, which means the slightest noise causes the mu-law device to toggle between two states while the A-law device does not. Thus, a mu-law circuit transmits noise where an A-law circuit would not. This got to be a marketing issue over who had the quietest network. Manufacturers started adding a half-bit bias to the mu-law encoders to get a circuit so quiet that "you could hear a pin drop". Anyway, you may want to work mid-rise and mid-tread into the section on mu-law and A-law, or maybe work A-law and mu-law into the mid-rise/mid-tread section. Constant314 (talk) 16:14, 23 April 2011 (UTC)
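For reference, a minimal Python sketch of the mid-tread/mid-rise distinction recalled above (generic uniform quantizers with an arbitrary step size, not the actual mu-law/A-law characteristics): for tiny inputs around zero, the mid-rise output toggles between two levels while the mid-tread output stays at zero.

```python
import math

DELTA = 1.0  # step size (arbitrary for this sketch)

def mid_tread(x: float) -> float:
    # Has a flat "tread" (an output level) at zero.
    return DELTA * math.floor(x / DELTA + 0.5)

def mid_rise(x: float) -> float:
    # Has a "riser" (a decision threshold) at zero, so no output level at zero.
    return DELTA * (math.floor(x / DELTA) + 0.5)

for x in [-0.01, 0.0, 0.01]:   # tiny inputs around zero, e.g. idle-channel noise
    print(x, "mid-tread ->", mid_tread(x), " mid-rise ->", mid_rise(x))
# mid-tread stays at 0.0; mid-rise toggles between -0.5 and +0.5
```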
Clipping
BarrelProof asserts that Clipping (signal processing) needs to be mentioned as a source of quantization. Hopefully he'll explain why here. ~KvnG 19:11, 3 December 2013 (UTC)
- To be more precise, I assert that clipping should be mentioned as a source of quantization error, not as a source of quantization itself. The sentence in question concerns the sources of error in analog-to-digital conversion (in the lead section of the article). In practice, there can be several sources of error in practical analog-to-digital converters, including such sources as analog circuitry nonlinearity, analog noise, etc., but we can neglect most of those in an idealized model. The term "analog-to-digital conversion" generally refers to the application of uniform quantization with a finite number of levels (e.g., using a 10-bit or 12-bit A/D converter, thus having 1024 or 4096 distinct representable values). In such an operation, there are basically two sources of error – granular distortion and clipping distortion (where clipping distortion is also known as "overload distortion"). Both kinds of distortion are discussed in sections in the article, and I don't understand why neglecting one of them in the lead section would be desirable. See, for example, Quantization (signal processing)#Granular distortion and overload distortion. Clipping is something that definitely does occur in practice. If clipping distortion were not a concern, one could just increase the gain at the input and thus drive the granular distortion to zero, and there would be no distortion. Clipping/overload can be a major source of the error introduced by a quantizer. —BarrelProof (talk) 20:14, 3 December 2013 (UTC)
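To make the granular/overload split concrete, a small Python sketch (an illustration only; the bit depth, range and test signal are arbitrary choices): a uniform quantizer with a finite number of levels is applied to random samples, and the total error is separated according to whether each sample fell inside or outside the representable range.

```python
import random

BITS = 4                      # deliberately low resolution so overload is visible
LEVELS = 2 ** BITS            # 16 representable values
DELTA = 2.0 / LEVELS          # quantizer covers the range [-1, 1)

def quantize_with_clipping(x: float) -> float:
    """Uniform quantizer with a finite number of levels: inputs outside
    [-1, 1) are clipped to the nearest end-of-range reconstruction level."""
    index = int((x + 1.0) // DELTA)          # which cell the input falls into
    index = max(0, min(LEVELS - 1, index))   # clipping (overload region)
    return -1.0 + (index + 0.5) * DELTA      # cell midpoint as reconstruction

random.seed(1)
samples = [random.gauss(0.0, 0.5) for _ in range(100_000)]  # occasionally exceeds +/-1

granular = overload = 0.0
for x in samples:
    err = (x - quantize_with_clipping(x)) ** 2
    if -1.0 <= x < 1.0:
        granular += err   # error from rounding within the representable range
    else:
        overload += err   # error caused by clipping

n = len(samples)
print("granular distortion     :", granular / n)
print("overload distortion     :", overload / n)
print("total quantization error:", (granular + overload) / n)
```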
- The statement we're discussing is, "The difference between the actual analog value and quantized digital value is called quantization error or quantization distortion. This error is either due to rounding, truncation or clipping." The section you link to talks about overload distortion being caused by clipping. I can go along with that. Your proposed edit makes the claim that clipping causes quantization error or quantization distortion. I can't go along with that. ~KvnG 21:11, 3 December 2013 (UTC)
- Actually, that wasn't the exact wording, but I suppose that doesn't matter for purposes of this discussion. My definition of "quantization error" is that it is any error introduced by a quantizer – i.e., any error introduced by conversion of a continuous-domain input signal (or an uncountable input domain or a countable input domain with a larger set of countable values) to a countable output representation. Do you disagree with that definition? —BarrelProof (talk) 21:26, 3 December 2013 (UTC)
- Sorry if I misquoted your proposal. I did include the link to the diffs for those who want to go to the horse's mouth.
- Do you have a citation for a definition which includes overload? ~KvnG 21:53, 3 December 2013 (UTC)
- Here is a classic one that is already cited in the article: the paper by Joel Max, "Quantizing for Minimum Distortion" (1960). It says: "The difference between input and output signals, assuming errorless transmission of the digits, is the quantization error. ... one has to use a quantizer which sorts the input into a finite number of ranges, N." He then computes the mean-square quantization error by performing an integration of the pdf over the full range of the input signal from minus infinity to infinity (just above equation 1), while keeping the number of reconstruction values N as a finite constant. Since N is constant and finite, the (infinite-extent) integration range includes the error introduced both by granularity and by overload (the computation is sketched in the formula below). Does that suffice?
- To me, it seems rather self-evident that "quantization error" or "quantization distortion" should be interpreted as referring to (all of) the error/distortion introduced by quantization – which should include all sources of such error (both granularity and overload). While some authors may provide simplified presentations that neglect to discuss overload, and while there may not be any overload distortion in some applications (e.g., if the signal has a known finite input range and the quantizer gain is set to cover that entire range), when there is overload in the quantization operation, the error introduced by the overload is part of the quantization error/distortion. If you want to refer to only the granular element of the error, then the appropriate term is "granularity error", but the "quantization error" properly/generally should include all error induced by the quantization operation.
- The sources that you cited seem to generally not even consider the topic of clipping/overload distortion. They seem to mostly be less scholarly, simplified discussions of the topic. Here's an alternative challenge: Can you find any sources that actually include a discussion of clipping/overload in any significant detail and do not include it within the scope of their definition of "quantization error" or "quantization distortion"?
- —BarrelProof (talk) 22:17, 3 December 2013 (UTC)
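In symbols (notation assumed here, not Max's exact symbols), the mean-square quantization error described above is

$$D = \int_{-\infty}^{\infty} \bigl(x - Q(x)\bigr)^{2}\, f(x)\, dx,$$

where Q(·) is a quantizer with a finite number N of reconstruction values and f(x) is the pdf of the input; because the integral runs over the whole real line while Q has only N levels, both granular and overload error contribute to D.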
- I took a quick look at the sources at Clipping (signal processing) and Clipping (audio) and didn't find what you're looking for. I don't have access to the paper you cite above. Hopefully another editor will join the conversation and help get us unstuck. ~KvnG 23:19, 3 December 2013 (UTC)