In statistics, the term linear model is used in different ways according to the context. The most common occurrence is in connection with regression models, and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. Although linear models were developed and were most popular during the 1950s, they are closely related to many aspects of modern statistical learning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible.
Linear regression models
For the regression case, the statistical model is as follows. Given a (random) sample (Yi, Xi1, ..., Xip), i = 1, ..., n, the relation between the observations Yi and the independent variables Xij is formulated as

$$Y_i = \beta_0 + \beta_1 \phi_1(X_{i1}) + \cdots + \beta_p \phi_p(X_{ip}) + \varepsilon_i, \qquad i = 1, \ldots, n,$$

where φ1, ..., φp may be nonlinear functions. In the above, the quantities εi are random variables representing errors in the relationship. The "linear" part of the designation relates to the appearance of the regression coefficients βj in a linear way in the above relationship. Alternatively, one may say that the predicted values corresponding to the above model, namely

$$\hat{Y}_i = \beta_0 + \beta_1 \phi_1(X_{i1}) + \cdots + \beta_p \phi_p(X_{ip}), \qquad i = 1, \ldots, n,$$

are linear functions of the βj.
Given that estimation is undertaken on the basis of a least squares analysis, estimates of the unknown parameters βj are determined by minimising the sum of squares function

$$S = \sum_{i=1}^{n} \left( Y_i - \beta_0 - \beta_1 \phi_1(X_{i1}) - \cdots - \beta_p \phi_p(X_{ip}) \right)^2.$$
From this, it can readily be seen that the "linear" aspect of the model means the following:
- the function to be minimised is a quadratic function of the βj for which minimisation is a relatively simple problem;
- the derivatives of the function are linear functions of the βj making it easy to find the minimising values;
- the minimising values βj are linear functions of the observations Yi;
- the minimising values βj are linear functions of the random errors εi, which makes it relatively easy to determine the statistical properties of the estimated values of βj (a numerical sketch of this least squares problem follows this list).
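The following Python snippet is a minimal sketch of this least squares setup, not taken from any source; the data, the choice of basis function φ(x) = x², and the coefficient values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of linear least squares with a nonlinear basis function.
# Data, basis choice (phi(x) = x**2), and coefficients are illustrative.
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-1, 1, size=n)
eps = rng.normal(scale=0.1, size=n)   # random errors epsilon_i
y = 1.0 + 2.0 * x**2 + eps            # Y_i = beta_0 + beta_1 * phi(X_i) + eps_i

# Design matrix with columns [1, phi(X_i)]; the model is nonlinear in x
# but linear in the coefficients beta, which is what "linear model" means here.
X = np.column_stack([np.ones(n), x**2])

# Minimising S(beta) = ||y - X beta||^2 is a quadratic problem whose
# solution is linear in the observations y; lstsq computes it stably.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                       # approximately [1.0, 2.0]
```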
Time series models
An example of a linear time series model is an autoregressive moving average model. Here the model for values {Xt} in a time series can be written in the form

$$X_t = c + \varepsilon_t + \sum_{i=1}^{p} \phi_i X_{t-i} + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i},$$
where again the quantities εt are random variables representing innovations which are new random effects that appear at a certain time but also affect values of X at later times. In this instance the use of the term "linear model" refers to the structure of the above relationship in representing Xt as a linear function of past values of the same time series and of current and past values of the innovations.[1] This particular aspect of the structure means that it is relatively simple to derive relations for the mean and covariance properties of the time series. Note that here the "linear" part of the term "linear model" is not referring to the coefficients φi and θi, as it would be in the case of a regression model, which looks structurally similar.
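As a minimal sketch of this structure (not from the cited source), the following Python snippet simulates an ARMA(1,1) process; the coefficient values 0.6 and 0.3 and the series length are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: simulate an ARMA(1,1) process,
#   X_t = phi_1 * X_{t-1} + eps_t + theta_1 * eps_{t-1}.
# Coefficients and length are illustrative assumptions.
rng = np.random.default_rng(1)
phi1, theta1 = 0.6, 0.3
T = 500
eps = rng.normal(size=T)              # innovations epsilon_t
X = np.zeros(T)
for t in range(1, T):
    # X_t is a linear function of a past value of the series and of
    # current and past innovations; this is the "linear" structure.
    X[t] = phi1 * X[t - 1] + eps[t] + theta1 * eps[t - 1]
```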
In statistical learning
Linear regression and linear classification
A linear regression model is based on the hypothesis that the regression output has a linear relation with its input. This hypothesis is simple enough that, with a cost function such as the root mean square deviation (RMSD), the solution can be obtained analytically. Linear regression models were most prevalent in the precomputer age of statistical analysis; however, there are still good reasons to apply them today [2]. First, they are easy to implement even for people who do not have a strong background in statistics and programming, and they provide an initial glimpse of a dataset; the performance of a linear model can also serve as a baseline for comparison with more complicated models. Second, although very simple, linear models are not guaranteed to perform worse than fancier models such as neural networks, especially when only a small amount of training data is available or the training data are very noisy. This situation often holds in experimental research such as chemistry or biology, where data density can be low and noise can be large [3]. Finally, linear models can be generalized to what are called basis-function methods, which expand their scope considerably.
Linear methods are also related to other statistical methods, such as principal component analysis and neural networks. It is often held that understanding the behavior of linear models is essential to understanding more complicated nonlinear ones. Besides regression tasks, linear models can also be applied to classification tasks with only minor changes. Usually, a sigmoid function is applied to the output of the linear function, turning that output into the probability of a certain class. Cost functions used in classification also differ from those used in regression; cross entropy is a common example. Despite these differences, the spirit of linear models in regression and classification is the same: the hypothesis of a linear relationship between the inputs and the target.
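As a minimal sketch of such a linear classifier (not from the cited sources), the following Python snippet applies a sigmoid to a linear function of the inputs and trains it with the cross-entropy cost by plain gradient descent; the data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a linear classifier: sigmoid of a linear function,
# trained with the cross-entropy cost by gradient descent.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(1000):
    p = sigmoid(X @ w + b)                  # probability of class 1
    # Gradient of the cross-entropy cost with respect to w and b.
    grad_w = X.T @ (p - y) / n
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b
```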
Relation to principal component analysis
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of linearly uncorrelated variables called principal components. This transformation is implemented such that
- Any two principal components are orthogonal to each other.
- The principal components are sorted in decreasing order of their variances.
- Each principal component is a linear combination of all input variables.
In this sense, PCA is connected to linear models. If the principal components are simply treated as a new set of input variables to a linear model, they cannot make the model perform better, due to the linear nature of these models. However, when principal components are used as inputs to nonlinear models, such as kernel methods and neural networks, performance can possibly improve. Thus, PCA is commonly used for feature selection and dimensionality reduction, although features selected in this way may be difficult to interpret [4].
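As a minimal sketch of the orthogonal transformation (not from the cited source), the following Python snippet computes principal components via the singular value decomposition of a centered data matrix; the data here are random and purely illustrative.

```python
import numpy as np

# Minimal sketch of PCA via singular value decomposition.
rng = np.random.default_rng(3)
A = rng.normal(size=(100, 5))         # illustrative data matrix

# Center each variable, then decompose; the right singular vectors give
# the orthogonal transformation, and the principal components follow.
A_centered = A - A.mean(axis=0)
U, S, Vt = np.linalg.svd(A_centered, full_matrices=False)

components = A_centered @ Vt.T        # each column is a linear combination
                                      # of all input variables
variances = S**2 / (len(A) - 1)       # sorted in decreasing order by the SVD
```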
Relation to neural networks
The neural network itself is not a single algorithm, but rather a framework of machine learning algorithms. Several of today's most popular machine learning models, such as artificial neural networks, convolutional neural networks, and recurrent neural networks, all belong to this category. They usually consist of a series of non-linear transformations of the input data, which in principle can approximate any function [5]. Although the architectures of neural networks can be wildly different, they share one thing in common: the last layer of the network is essentially a linear regression on the features produced by the preceding layers. Therefore, one way of viewing neural networks is that each network can be split into two parts: the non-linear transformations before the last layer, and the final linear regression. The first part works as a process of feature engineering: the input data are transformed so that, at the very last layer, the label is almost linearly dependent on these engineered features. With this idea, various feature extraction models have been developed; one representative is the autoencoder model, which is frequently used for dimensionality reduction.
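As a minimal sketch of this two-part view (not from the cited sources), the following Python snippet passes inputs through one nonlinear layer followed by a final linear layer; the shapes, the tanh nonlinearity, and the random (untrained) weights are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the "nonlinear features + final linear layer" view of
# a neural network. Weights are random and untrained, purely illustrative.
rng = np.random.default_rng(4)
n, d, h = 32, 3, 16
X = rng.normal(size=(n, d))

# Part 1: nonlinear transformation of the inputs (feature engineering).
W1 = rng.normal(size=(d, h))
b1 = np.zeros(h)
features = np.tanh(X @ W1 + b1)

# Part 2: the last layer is a linear regression on the learned features.
w2 = rng.normal(size=h)
b2 = 0.0
y_pred = features @ w2 + b2
```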
Other uses in statistics
There are some other instances where "nonlinear model" is used to contrast with a linearly structured model, although the term "linear model" is not usually applied. One example of this is nonlinear dimensionality reduction.
See also
- General linear model
- Generalized linear model
- Linear predictor function
- Linear system
- Statistical model
- Machine learning
References
- ^ Priestley, M. B. (1988). Non-linear and Non-stationary Time Series Analysis. Academic Press. ISBN 0-12-564911-8.
- ^ Hastie, T.; Tibshirani, R.; Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). New York: Springer.
- ^ Ahneman, D. T.; Estrada, J. G.; Lin, S. S.; Dreher, S. D.; Doyle, A. G. (2018). "Predicting reaction performance in C-N cross-coupling using machine learning". Science, 360 (6385), 186-190.
- ^ Jolliffe, I. (2011). "Principal component analysis". In International Encyclopedia of Statistical Science. Springer.
- ^ Hornik, K.; Stinchcombe, M.; White, H. (1989). "Multilayer feedforward networks are universal approximators". Neural Networks, 2 (5), 359-366.