Talk:Function approximation

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Nvrmnd (talk | contribs) at 03:07, 29 November 2004. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
Function approximation is a general class of problem solving where one tries to approximate an unknown function from a labeled data set (X, y).
....
Mathematically the problem can be posed as:

I do not understand the above. What does "labeled" mean? Is the "data set" simply a finite collection of ordered pairs of numbers? The part following the words "the problem can be posed as" makes no sense at all. If the author of these words or anyone else has any idea what was meant, could they make some attempt to explain it in the article? Michael Hardy 04:22, 27 Nov 2004 (UTC)
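
(For reference: in standard supervised-learning usage, a "labeled" data set is indeed a finite collection of ordered pairs, each input x_i carrying an observed output, or label, y_i:

    D = \{ (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) \}, \qquad x_i \in X,\ y_i \in Y.

The genuinely unclear part is the formula that was supposed to follow "the problem can be posed as".)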


This article seems seriously confused. It says:

Function approximation is used in the field of supervised learning and statistical curve fitting. The goal of this problem is to find a function f within the family of functions F that minimizes the expected error defined as:

    E[f] = \sum_i P(x_i) \, L(x_i, f)

where P(x_i) is the probability that the example x_i will be sampled and L(x_i, f) is the loss function, which describes how accurate the function f is at predicting the correct value for a given x_i.

The reference to the probability that x_i will be sampled means that the randomness resides in the independent variable and not in its image under f. That is strange when talking about curve-fitting. It also implies that the probability distribution is discrete. Also very strange. In the expression L(x_i, f) it is not clear whether the f is supposed to be the unobservable true f or some fitted estimate of it given the data. "... how accurate the function f is at predicting the correct value" implies f is an estimate, but speaking of L as a loss function would normally presuppose that f is the true value. The article then goes on to say that ordinary least squares estimation is an example. Ordinary least squares minimizes the sum of squares of residuals; it does not minimize an expected loss.
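
To make the distinction concrete, here is a minimal sketch in Python with squared-error loss; the data, the uniform probabilities P, and the linear model y ~ a*x + b are all illustrative assumptions, not anything taken from the article:

    import numpy as np

    # Illustrative labeled data set: inputs x_i with observed labels y_i
    # (synthetic numbers, assumed for the sake of the example).
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 2.9, 5.2, 6.8])

    # Ordinary least squares: minimize the SUM of squared residuals over
    # the observed sample; no sampling distribution enters anywhere.
    A = np.vstack([x, np.ones_like(x)]).T          # design matrix for y ~ a*x + b
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    sum_of_squares = np.sum((y - (a * x + b)) ** 2)

    # The quoted article's expected error instead weights each example by a
    # discrete sampling probability P(x_i) -- the feature objected to above.
    P = np.full(len(x), 1.0 / len(x))              # assumed uniform probabilities
    expected_loss = np.sum(P * (y - (a * x + b)) ** 2)

    print(sum_of_squares, expected_loss)

With uniform P the two objectives differ only by the constant factor 1/n, so they have the same minimizer; with non-uniform P they generally do not, which is one way of seeing that the article's "expected error" is not simply ordinary least squares.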

Yes, that definition is not that great; I'll try to replace it with a better one when I have time. Also, I think the loss function in this sense is defined differently. Nvrmnd 03:07, 29 Nov 2004 (UTC)


A simple example of linear function approximation posed as an optimization problem is:

I still find the above incomprehensible as written. I've put a guess as to what it means on the discussion page of the person who wrote it. But one shouldn't have to guess.

If this is supposed to be about curve-fitting as usually practiced, it fails to explain it. Michael Hardy 02:28, 29 Nov 2004 (UTC)
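
(For what it's worth, one plausible reading of that sentence, offered only as a guess under the same caveat: linear function approximation over a labeled data set {(x_i, y_i)} is often posed as the optimization problem

    \min_{a,\, b} \; \sum_{i=1}^{n} \left( y_i - (a x_i + b) \right)^2,

i.e., choose the coefficients of a linear (affine) function so as to minimize the sum of squared residuals. Whether that is what the author of the quoted sentence intended is exactly the open question.)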