
Talk:K-nearest neighbors algorithm

From Wikipedia, the free encyclopedia
WikiProject Robotics (Start-class, Mid-importance)
This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Mid-importance on the project's importance scale.

Comment

I will write a more complete article for this when the exam season has passed.


Are there not any weighted kNN algorithms that take the actual distances into account to some degree? If the 5 closest neighbors are 0.1, 4000, 10000, 20000 and 55570 units away, we might say the very close first match has more importance than the others.

  • For an example of a weighted kNN algorithm, see F. Nigsch et al., Journal of Chemical Information and Modeling, 46, 2412-2422 (2006), DOI: 10.1021/ci060149f
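Below is a minimal sketch of what such weighting might look like, using inverse-distance weights (one common choice; this is illustrative only and is not the specific scheme from the Nigsch et al. paper):

    import math
    from collections import defaultdict

    def weighted_knn_classify(train, query, k=5):
        # Classify `query` by an inverse-distance-weighted vote among the
        # k nearest training points; `train` holds ((features...), label) pairs.
        neighbours = sorted((math.dist(x, query), label) for x, label in train)[:k]
        votes = defaultdict(float)
        for d, label in neighbours:
            # A neighbour 0.1 units away outvotes ones thousands of units away.
            votes[label] += 1.0 / (d + 1e-9)  # epsilon guards against d == 0
        return max(votes, key=votes.get)

    # The distances from the question above: the very close 'A' point wins
    # even though four 'B' points are among the 5 nearest.
    train = [((0.1,), 'A'), ((4000.0,), 'B'), ((10000.0,), 'B'),
             ((20000.0,), 'B'), ((55570.0,), 'B')]
    print(weighted_knn_classify(train, (0.0,), k=5))  # -> 'A'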

So what is the complexity of kNN? I'm guessing O(n^2) but I'm not 100 % sure...

The complexity of kNN is O(n^2), I think, but that's a guess. The complexity of NNS (nearest neighbour search) on a k-d tree is O(n log n) to build the tree, after which finding one item is O(log n) on average.
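To make the counting concrete: a brute-force query compares the query point against all n training points, so running such a query for each of n points is where the O(n^2) figure comes from. A sketch of the per-query scan (assuming Euclidean distance; heapq.nsmallest keeps the selection at O(n log k) rather than a full sort):

    import heapq
    import math

    def brute_force_knn(points, query, k):
        # One query is a full O(n) scan over the data; answering a query
        # for every one of the n points therefore costs O(n^2) distances.
        return heapq.nsmallest(k, points, key=lambda p: math.dist(p, query))

    points = [(float(i), float(i * i % 97)) for i in range(1000)]
    neighbours = brute_force_knn(points, (3.0, 9.0), k=5)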

I'm studying kNN on multiple nodes, so I made a variant of kNN that I call Hk-NN. Processing time is reduced effectively. I need to share it and discuss it in order to find a better Hk-NN method. I don't know the cost of kNN, and everyone I have asked about the cost did nothing but guess.

If you know kNN's processing cost with accuracy, please send me a message by email.

My e-mail: hanjjon@gmail.com

Maybe this draft version of an information retrieval book will help you. Chapter 14 discusses kNN. http://www-csli.stanford.edu/~hinrich/information-retrieval-book.html Hope this helps. --GrandiJoos 07:59, 1 October 2007 (UTC)

Kriging?

How did any of this Kriging stuff get in here (history)? —Preceding unsigned comment added by 66.92.232.105 (talk) 23:15, 16 March 2008 (UTC)

I think the Kriging stuff was given too much prominence. Is there a consensus on its relevance and whether it would be better further down the article?

Jbom1 (talk) 16:45, 17 March 2008 (UTC)

Listen - Kriging has nothing to do with K-nearest neighbor and shouldn't be in this article. kNN is a classifier: given a set of labelled multidimensional points and a user's choice of k, it assigns a new point to the class most common among its k nearest neighbours (k is the number of neighbours consulted, not a number of output classes). Kriging is a way to take a set of multidimensional points and, for a new point with one dimension unknown, interpolate to generate that unknown. Simply put, kNN does not rely on Kriging, and no relationship is apparent from the paragraph on Kriging in this article. —Preceding unsigned comment added by 158.130.14.30 (talk) 16:58, 17 March 2008 (UTC)
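To illustrate the distinction this comment is drawing, here is a schematic side-by-side (both functions are illustrative only; the "interpolation" below is a plain neighbour average, not actual Kriging, which also models spatial covariance):

    import math
    from collections import Counter

    def knn_classify(train, query, k):
        # kNN classification: the new point gets the label most common
        # among its k nearest labelled neighbours.
        nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    def nn_interpolate(train, query, k):
        # Interpolation in the sense described above: predict an unknown
        # numeric value for the new point from the k nearest known values
        # (here a plain average; real Kriging weights by a covariance model).
        nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
        return sum(value for _, value in nearest) / k

    labelled = [((0, 0), 'red'), ((1, 0), 'red'), ((5, 5), 'blue')]
    valued   = [((0, 0), 1.0),   ((1, 0), 2.0),   ((5, 5), 9.0)]
    print(knn_classify(labelled, (0.5, 0.5), k=3))   # -> 'red'
    print(nn_interpolate(valued, (0.5, 0.5), k=2))   # -> 1.5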

"Furthest" Neighbors?

I removed the reference to "furthest" neighbours from the overview. kNN regression is done on the basis of a (possibly weighted) average of the properties of the nearest neighbours. See the Nigsch et al. article cited earlier in the discussion.
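A sketch of that kind of kNN regression, with an optional inverse-distance weighting (the weighting choice here is illustrative, not the specific scheme evaluated in the Nigsch et al. paper):

    import math

    def knn_regress(train, query, k=5, weighted=True):
        # Predict a numeric property as the (possibly inverse-distance-
        # weighted) average over the k nearest neighbours.
        # `train` holds ((features...), y) pairs.
        nearest = sorted((math.dist(x, query), y) for x, y in train)[:k]
        if weighted:
            weights = [1.0 / (d + 1e-9) for d, _ in nearest]  # epsilon avoids d == 0
        else:
            weights = [1.0] * len(nearest)
        return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)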