
Learning curve (machine learning)

Figure: Learning curve plot of training set size vs. training score (loss) and cross-validation score.

In machine learning (ML), a learning curve (or training curve) is a graphical representation that shows how a model's performance on a training set (and usually a validation set) changes with the number of training iterations (epochs) or the amount of training data.[1] Typically, the number of training epochs or training set size is plotted on the x-axis, and the value of the loss function (and possibly some other metric such as the cross-validation score) on the y-axis.
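As a concrete illustration of this convention, the sketch below (not taken from any cited source; the per-epoch loss values are hypothetical placeholders) plots recorded training and validation losses against the epoch number using Python and matplotlib:

    import matplotlib.pyplot as plt

    # Hypothetical per-epoch losses recorded while training a model.
    train_loss = [0.92, 0.61, 0.45, 0.37, 0.31, 0.28, 0.26, 0.25]
    val_loss = [0.95, 0.70, 0.58, 0.52, 0.50, 0.49, 0.49, 0.50]
    epochs = range(1, len(train_loss) + 1)

    # The learning curve: epochs on the x-axis, loss on the y-axis.
    plt.plot(epochs, train_loss, label="training loss")
    plt.plot(epochs, val_loss, label="validation loss")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()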

Synonyms include error curve, experience curve, improvement curve and generalization curve.[2]

More abstractly, learning curves plot the relationship between learning effort and predictive performance, where learning effort usually means the number of training samples, and predictive performance means accuracy on testing samples.[3]

Learning curves have many useful purposes in ML, including:[4][5][6]

  • choosing model parameters during design,
  • adjusting optimization to improve convergence,
  • diagnosing problems such as overfitting (or underfitting),
  • and determining how much data to use for training.

Formal definition

When creating a function $f_\theta$ to approximate the distribution of some data, it is necessary to define a loss function $L(f_\theta(X), Y)$ to measure how good the model output is (e.g., accuracy for classification tasks or mean squared error for regression). We then define an optimization process which finds model parameters $\theta$ that minimize the loss, referred to as $\theta^*(X, Y) = \arg\min_\theta L(f_\theta(X), Y)$.

Training curve for amount of data

If the training data is

$X = \{x_1, x_2, \ldots, x_n\}$, $Y = \{y_1, y_2, \ldots, y_n\}$

and the validation data is

$X' = \{x_1', x_2', \ldots, x_m'\}$, $Y' = \{y_1', y_2', \ldots, y_m'\}$,

a learning curve is the plot of the two curves

$i \mapsto L(f_{\theta^*(X_i, Y_i)}(X_i), Y_i)$ and $i \mapsto L(f_{\theta^*(X_i, Y_i)}(X'), Y')$,

where

$X_i = \{x_1, x_2, \ldots, x_i\}$ and $Y_i = \{y_1, y_2, \ldots, y_i\}$.
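These two curves can be approximated empirically by refitting the model on growing prefixes of the training data and evaluating the loss both on the prefix used for fitting and on the held-out validation set. The following sketch does this in Python with scikit-learn; the linear model, squared-error loss, and synthetic data are arbitrary illustrative assumptions rather than part of the definition.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    # Synthetic training data (X, Y) and validation data (X', Y').
    X = rng.uniform(-1, 1, size=(100, 1))
    Y = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)
    X_val = rng.uniform(-1, 1, size=(30, 1))
    Y_val = 3 * X_val[:, 0] + rng.normal(scale=0.1, size=30)

    sizes = range(2, len(X) + 1)
    train_curve, val_curve = [], []
    for i in sizes:
        # theta*(X_i, Y_i): fit the model on the first i training samples.
        model = LinearRegression().fit(X[:i], Y[:i])
        # L(f_theta*(X_i), Y_i): loss on the samples used for fitting.
        train_curve.append(mean_squared_error(Y[:i], model.predict(X[:i])))
        # L(f_theta*(X'), Y'): loss on the held-out validation data.
        val_curve.append(mean_squared_error(Y_val, model.predict(X_val)))
    # Plotting `sizes` against the two lists yields the learning curve.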

Training curve for number of iterations

Many optimization algorithms are iterative, repeating the same step (such as backpropagation) until the process converges to an optimal value. Gradient descent is one such algorithm. If $\theta_i^*$ is the approximation of the optimal $\theta^*$ after $i$ steps of training on $(X, Y)$, a learning curve is the plot of

$i \mapsto L(f_{\theta_i^*}(X), Y)$ and $i \mapsto L(f_{\theta_i^*}(X'), Y')$.
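A minimal sketch of this per-iteration curve, using plain gradient descent on a linear least-squares model with synthetic data (the model, data, learning rate, and iteration count are illustrative assumptions, not part of the definition):

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic training data (X, Y) and validation data (X', Y').
    X = rng.uniform(-1, 1, size=(100, 2))
    w_true = np.array([2.0, -1.0])
    Y = X @ w_true + rng.normal(scale=0.1, size=100)
    X_val = rng.uniform(-1, 1, size=(30, 2))
    Y_val = X_val @ w_true + rng.normal(scale=0.1, size=30)

    def loss(theta, A, b):
        # Mean squared error of the linear model f_theta(A) = A @ theta.
        return np.mean((A @ theta - b) ** 2)

    theta = np.zeros(2)  # theta_0: initial parameter estimate
    lr = 0.1             # learning rate (gradient-descent step size)
    train_curve, val_curve = [], []
    for i in range(200):
        # One gradient-descent step on the training loss.
        grad = 2 * X.T @ (X @ theta - Y) / len(X)
        theta = theta - lr * grad
        # theta is now theta_i*; record both losses for the learning curve.
        train_curve.append(loss(theta, X, Y))
        val_curve.append(loss(theta, X_val, Y_val))
    # Plotting the iteration index against the two lists yields the curve.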

Choosing the size of the training dataset

A learning curve is a tool for finding out how much a machine learning model benefits from adding more training data, and whether the estimator suffers more from a variance error or a bias error. If both the validation score and the training score converge to a value that is too low as the training set grows, the model will not benefit much from more training data.[7]
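The scikit-learn library (see [7]) provides a learning_curve utility that automates this comparison by refitting an estimator at several training-set sizes under cross-validation. The sketch below applies it to a built-in dataset with a support-vector classifier; the choice of estimator, dataset, and sizes is arbitrary and for illustration only.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import learning_curve
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    # Evaluate the model at five increasing training-set sizes,
    # using 5-fold cross-validation at each size.
    train_sizes, train_scores, val_scores = learning_curve(
        SVC(kernel="rbf", gamma=0.001), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

    print(train_sizes)                # absolute training-set sizes
    print(train_scores.mean(axis=1))  # mean training score per size
    print(val_scores.mean(axis=1))    # mean cross-validation score per size
    # If both means plateau at a low score, more data is unlikely to help;
    # a persistent gap between them suggests high variance (overfitting).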

In the machine learning domain, learning curves are thus used in two ways that differ in the x-axis of the curves: the model's experience is graphed either as the number of training examples used for learning or as the number of iterations used in training the model.[8]

References

  1. ^ "Mohr, Felix and van Rijn, Jan N. "Learning Curves for Decision Making in Supervised Machine Learning - A Survey." arXiv preprint arXiv:2201.12150 (2022)". arXiv:2201.12150.
  2. ^ Viering, Tom; Loog, Marco (2023-06-01). "The Shape of Learning Curves: A Review". IEEE Transactions on Pattern Analysis and Machine Intelligence. 45 (6): 7799–7819. arXiv:2103.10948. doi:10.1109/TPAMI.2022.3220744. ISSN 0162-8828. PMID 36350870.
  3. ^ Perlich, Claudia (2010), "Learning Curves in Machine Learning", in Sammut, Claude; Webb, Geoffrey I. (eds.), Encyclopedia of Machine Learning, Boston, MA: Springer US, pp. 577–580, doi:10.1007/978-0-387-30164-8_452, ISBN 978-0-387-30164-8, retrieved 2023-07-06
  4. ^ Madhavan, P.G. (1997). "A New Recurrent Neural Network Learning Algorithm for Time Series Prediction" (PDF). Journal of Intelligent Systems. p. 113 Fig. 3.
  5. ^ "Machine Learning 102: Practical Advice". Tutorial: Machine Learning for Astronomy with Scikit-learn.
  6. ^ Meek, Christopher; Thiesson, Bo; Heckerman, David (Summer 2002). "The Learning-Curve Sampling Method Applied to Model-Based Clustering". Journal of Machine Learning Research. 2 (3): 397. Archived from the original on 2013-07-15.
  7. ^ scikit-learn developers. "Validation curves: plotting scores to evaluate models — scikit-learn 0.20.2 documentation". Retrieved February 15, 2019.
  8. ^ Sammut, Claude; Webb, Geoffrey I. (Eds.) (28 March 2011). Encyclopedia of Machine Learning (1st ed.). Springer. p. 578. ISBN 978-0-387-30768-8.