Mean absolute scaled error

In statistics, the mean absolute scaled error (MASE) is a measure of the accuracy of forecasts. It was proposed in 2005 by statistician Rob J. Hyndman and Professor of Decision Sciences Anne B. Koehler, who described it as a "generally applicable measurement of forecast accuracy without the problems seen in the other measurements."[1] The mean absolute scaled error has favorable properties when compared with other methods for calculating forecast errors, such as the root-mean-square deviation, and is therefore recommended for determining the comparative accuracy of forecasts.[2]

Non seasonal time series

For a non-seasonal time series,[3] the mean absolute scaled error is estimated by

\mathrm{MASE} = \frac{1}{n} \sum_{t=1}^{n} \frac{\left| e_t \right|}{\frac{1}{n-1} \sum_{i=2}^{n} \left| Y_i - Y_{i-1} \right|} [4]

where the numerator e_t is the forecast error for a given period, defined as the actual value (Y_t) minus the forecast value (F_t) for that period: e_t = Y_t − F_t, and the denominator is the mean absolute error of the one-step "naive forecast method" on the training set,[3] which uses the actual value from the prior period as the forecast: F_t = Y_{t−1}.[5]
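
A minimal sketch of this calculation in Python, using NumPy. The function name mase and the split of the data into y_train (used only for the naive-forecast scaling), y_test (the actual values over the forecast period) and y_pred (the forecasts being evaluated) are illustrative choices for this example, not part of the article or of any particular library.

    import numpy as np

    def mase(y_train, y_test, y_pred):
        """Mean absolute scaled error for a non-seasonal time series.

        Numerator: mean absolute forecast error |e_t| = |Y_t - F_t| over
        the forecast period.  Denominator: mean absolute error of the
        one-step naive forecast F_t = Y_{t-1} on the training set.
        """
        y_train = np.asarray(y_train, dtype=float)
        y_test = np.asarray(y_test, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)

        mae_forecast = np.mean(np.abs(y_test - y_pred))
        # |Y_i - Y_{i-1}| for i = 2..n over the training data
        mae_naive = np.mean(np.abs(np.diff(y_train)))
        return mae_forecast / mae_naive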

Seasonal time series

For a seasonal time series, the mean absolute scaled error is estimated in a manner similar to the method for non-seasonal time series:

\mathrm{MASE} = \frac{1}{n} \sum_{t=1}^{n} \frac{\left| e_t \right|}{\frac{1}{n-m} \sum_{i=m+1}^{n} \left| Y_i - Y_{i-m} \right|} [3]

The main difference from the non-seasonal method is that the denominator is the mean absolute error of the one-step "seasonal naive forecast method" on the training set,[3] which uses the actual value from the prior season as the forecast: F_t = Y_{t−m},[5] where m is the seasonal period.
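
The same sketch can be adapted to the seasonal definition; the seasonal period m is passed explicitly, and again the function name and data split are assumptions made for the example rather than anything prescribed by the article.

    import numpy as np

    def seasonal_mase(y_train, y_test, y_pred, m):
        """MASE scaled by the one-step seasonal naive forecast
        F_t = Y_{t-m}, with the scaling computed on the training set."""
        y_train = np.asarray(y_train, dtype=float)
        y_test = np.asarray(y_test, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)

        mae_forecast = np.mean(np.abs(y_test - y_pred))
        # |Y_i - Y_{i-m}| for i = m+1..n over the training data
        mae_seasonal_naive = np.mean(np.abs(y_train[m:] - y_train[:-m]))
        return mae_forecast / mae_seasonal_naive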

This scale-free error metric "can be used to compare forecast methods on a single series and also to compare forecast accuracy between series. This metric is well suited to intermittent-demand series because it never gives infinite or undefined values"[1] except in the irrelevant case where all historical data are equal.[4]

When comparing forecasting methods, the method with the lowest MASE is preferred.
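
For example, using the mase sketch above with made-up numbers, two competing forecasts of the same held-out observations can be ranked directly by their MASE values:

    # Hypothetical data: the series and both forecasts are invented for
    # illustration only.
    y_train = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
    y_test  = [104, 118, 115, 126]
    pred_a  = [110, 120, 118, 124]   # forecasts from method A
    pred_b  = [119, 119, 119, 119]   # forecasts from method B

    print("MASE, method A:", mase(y_train, y_test, pred_a))
    print("MASE, method B:", mase(y_train, y_test, pred_b))
    # Whichever method prints the lower value would be preferred.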

References

  1. Hyndman, R. J. (2006). "Another look at measures of forecast accuracy". Foresight, issue 4, June 2006, p. 46.
  2. Franses, Philip Hans (2016). "A note on the Mean Absolute Scaled Error". International Journal of Forecasting. 32 (1): 20–22. doi:10.1016/j.ijforecast.2015.03.008.
  3. "2.5 Evaluating forecast accuracy | OTexts". www.otexts.org. Retrieved 2016-05-15.
  4. Hyndman, R. J.; Koehler, A. B. (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22 (4): 679–688. doi:10.1016/j.ijforecast.2006.03.001.
  5. Hyndman, Rob; et al. (2008). Forecasting with Exponential Smoothing: The State Space Approach. Berlin: Springer-Verlag. ISBN 978-3-540-71916-8.