
Talk:Vector quantization

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by 129.206.66.37 (talk) at 08:24, 2 August 2013 (Each cluster the same number of points?!: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Some math

During some of my courses I came across some math that could be nice to have on this page; only I don't have enough mathematical background to prove the math used.

The Math

Create a set of prototypes W = {w_1, ..., w_k}; the data is X = {x_1, ..., x_n}.

Using the squared Euclidean distance d(x, w) = ||x − w||^2 we can determine the multidimensional distance between a prototype and a data point. Based on this we can find the closest prototype to a given data point: assign x to the prototype w_c with c = argmin_i ||x − w_i||^2.

This way the winner takes it all, and the closest prototype should be moved using

w_c ← w_c + ε(x − w_c),

where ε is the learning rate.
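The winner-take-all update described above can be sketched in a few lines. This is a minimal illustration, not a reference implementation; the function names and the learning-rate value are illustrative assumptions.

```python
def squared_distance(x, w):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((xi - wi) ** 2 for xi, wi in zip(x, w))

def closest_prototype(x, prototypes):
    """Index of the prototype minimizing the squared distance to x."""
    return min(range(len(prototypes)),
               key=lambda i: squared_distance(x, prototypes[i]))

def winner_take_all_update(x, prototypes, learning_rate=0.1):
    """Move only the winning prototype a fraction of the way toward x."""
    c = closest_prototype(x, prototypes)
    prototypes[c] = [wi + learning_rate * (xi - wi)
                     for xi, wi in zip(x, prototypes[c])]
    return c
```

Repeatedly applying the update over the data set pulls each prototype toward the mean of the points it wins, which is the intuition behind the clustering behaviour discussed in the article.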

Spidfire (talk) 15:29, 31 January 2013 (UTC)[reply]

Untitled

I also want to see pictures. —Preceding unsigned comment added by 138.246.7.74 (talk) 13:50, 15 July 2010 (UTC)[reply]

Damn. This article made me feel dumb. --NoPetrol 06:41, 24 Nov 2004 (UTC)

I have modified the article to give a clear explanation of what vector quantization is, together with some uses for it. It still needs tidying up and referencing. Pog 21:46, 1 August 2007 (UTC)[reply]

Unclear sentence

"Find the quantization vector centroid with the smallest <distance-sensitivity>"

What does "<distance-sensitivity>" mean? Does it mean sensitivity? Or does it mean distance minus sensitivity? -Pgan002 00:17, 18 August 2007 (UTC)[reply]

Spam

Why the hell is there a picture of an aeroplane on this page? —Preceding unsigned comment added by Criffer (talkcontribs) 16:24, 11 October 2007 (UTC)[reply]

Definition

Is there a kind of agreed definition on this term? At least [1] attempts to define it. Should Wikipedia adopt this definition? Are there alternative definitions somewhere? Arkadi kagan (talk) 21:11, 25 January 2010 (UTC)[reply]

Another option from [2]:

A data compression technique in which a finite sequence of values is presented as resembling the template (from among the choices available to a given codebook) that minimizes a distortion measure.

Arkadi kagan (talk) 08:38, 28 January 2010 (UTC)[reply]
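The second quoted definition (encode a finite sequence of values as the codebook template that minimizes a distortion measure) can be sketched as follows. The squared-error distortion used here is one common choice, assumed for illustration; the definition itself allows any distortion measure.

```python
def quantize(vector, codebook):
    """Return the index of the codebook entry ("template") that
    minimizes the squared-error distortion to the input vector."""
    def distortion(candidate):
        return sum((v - c) ** 2 for v, c in zip(vector, candidate))
    return min(range(len(codebook)), key=lambda i: distortion(codebook[i]))
```

The compression comes from transmitting only the returned index: the decoder holds the same codebook and looks the template up by index.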

Use in data compression

"All possible combinations of the N-dimensional vector [y1,y2,...,yn] form the Gaurav."

What the hell is a Gaurav?

Secondly, even if there is a correct technical term for all possible combinations of an N-dimensional vector, it is completely out of context in that particular article. It should be removed, or corrected and given a context. —Preceding unsigned comment added by 198.151.130.16 (talk) 21:46, 1 April 2011 (UTC)[reply]

Where is a block diagram?

From the article: "Block Diagram: A simple vector quantizer is shown below". Huh? Where is it? Cuddlyable3 (talk) 09:15, 7 June 2011 (UTC)[reply]

Each cluster the same number of points?!

"It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them."

This is not true, is it? E.g. clustering 1-d normally distributed data with k-means results in groups with very different numbers of points assigned to each cluster.

"Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error."

This contradicts the first quote: if all clusters have the same number of points assigned (as the first quote states), then rarely occurring data is quantized with the same precision as frequently occurring data.

I am confused. I hope I am correct with my concerns here.
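The concern above is easy to check with a small experiment: a plain Lloyd's (k-means) iteration on skewed 1-d data ends up with clusters of very different sizes. The data set and initial centroids below are illustrative assumptions, chosen so the result is deterministic.

```python
def kmeans_1d(points, centroids, iterations=10):
    """Plain Lloyd's algorithm on 1-d data: assign each point to its
    nearest centroid, then move each centroid to the mean of its
    assigned points. Returns final centroids and cluster sizes."""
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, [len(c) for c in clusters]
```

With five points near 0 and a single outlier at 10, the two clusters end up with five and one points respectively, so "approximately the same number of points" in the article does not hold in general.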