In [[econometrics]], the '''seemingly unrelated regressions''' ('''SUR''')<ref>{{harvtxt|Davidson|MacKinnon|1993|loc=page 306}}, {{harvtxt|Hayashi|2000|loc=page 279}}, {{harvtxt|Greene|2002|loc=p. 340}}</ref> or '''seemingly unrelated regression equations''' ('''SURE''')<ref>{{harvtxt|Zellner|1962}}, {{harvtxt|Srivastava|Giles|1987|loc=page 2}}</ref> model, proposed by [[Arnold Zellner]] in [[#CITEREFZellner1962|(1962)]], is a generalization of a [[linear regression model]] that consists of several regression equations, each having its own dependent variable and potentially different sets of exogenous explanatory variables. Each equation is a valid linear regression on its own and can be estimated separately, which is why the system is called ''seemingly unrelated'',<ref>{{harvtxt|Greene|2002|loc=p. 342}}</ref> although some authors<ref>{{harvtxt|Davidson|MacKinnon|1993|loc=page 306}}</ref> suggest that the term ''seemingly related'' would be more appropriate, since the [[errors and residuals in statistics|error terms]] are assumed to be correlated across the equations.

The model can be estimated equation by equation using standard [[ordinary least squares]] (OLS). Such estimates are consistent; however, they are generally not as efficient as the SUR method, which amounts to [[feasible generalized least squares]] with a specific form of the variance-covariance matrix. There are two important cases in which SUR is in fact equivalent to OLS: when the error terms are uncorrelated between the equations (so that they are truly unrelated), and when each equation contains exactly the same set of regressors on the right-hand side.

The SUR model can be viewed either as a simplification of the [[general linear model]] in which certain coefficients in the matrix Β<!-- this is Greek capital beta, should be non-italic --> are restricted to be equal to zero, or as a generalization of the [[general linear model]] in which the regressors on the right-hand side are allowed to differ across equations. The SUR model can be further generalized into the [[simultaneous equations model]], in which the right-hand-side regressors are allowed to be endogenous variables as well.

== The model ==
Suppose there are ''m'' regression equations

: <math>
y_{ir} = x_{ir}^\mathsf{T}\;\!\beta_i + \varepsilon_{ir}, \quad i=1,\ldots,m.
</math>

Here ''i'' represents the equation number, {{nowrap|1=''r'' = 1, …, ''R''}} is the observation index, and <math>x_{ir}^\mathsf{T}</math> denotes the transpose of the column vector <math>x_{ir}</math>. The number of observations ''R'' is assumed to be large, so that in the analysis we take {{nowrap|''R'' → ∞}}, whereas the number of equations ''m'' remains fixed.

Each equation ''i'' has a single response variable ''y''<sub>''ir''</sub> and a ''k''<sub>''i''</sub>-dimensional vector of regressors ''x''<sub>''ir''</sub>. If we stack the observations corresponding to the ''i''-th equation into ''R''-dimensional vectors and matrices, the model can be written in vector form as

: <math>
y_i = X_i\beta_i + \varepsilon_i, \quad i=1,\ldots,m,
</math>

where ''y''<sub>''i''</sub> and ''ε''<sub>''i''</sub> are ''R''×1 vectors, ''X''<sub>''i''</sub> is an ''R''×''k''<sub>''i''</sub> matrix, and ''β''<sub>''i''</sub> is a ''k''<sub>''i''</sub>×1 vector.

Finally, if we stack these ''m'' vector equations on top of each other, the system takes the form<ref>{{harvtxt|Zellner|1962|loc=eq. (2.2)}}</ref>

: {{NumBlk|:|<math>
\begin{pmatrix}y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} =
\begin{pmatrix}X_1&0&\ldots&0 \\ 0&X_2&\ldots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\ldots&X_m \end{pmatrix}
\begin{pmatrix}\beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{pmatrix} +
\begin{pmatrix}\varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_m \end{pmatrix}
= X\beta + \varepsilon\,.
</math>|{{EquationRef|1}}}}

The model assumes that the error terms ''ε''<sub>''ir''</sub> are independent across observations, but may have cross-equation contemporaneous correlations. Thus we assume that {{nowrap|1=E[&thinsp;''ε<sub>ir</sub>&thinsp;ε<sub>js</sub>''&thinsp;{{!}}&thinsp;''X''&thinsp;] = 0}} whenever {{nowrap|''r ≠ s''}}, whereas {{nowrap|1=E[&thinsp;''ε''<sub>''ir''</sub>&thinsp;''ε''<sub>''jr''</sub>&thinsp;{{!}}&thinsp;''X''&thinsp;] = ''σ<sub>ij</sub>''}}. Denoting by {{nowrap|1=Σ = [''σ''<sub>''ij''</sub>]}} the ''m''×''m'' skedasticity matrix of each observation, the covariance matrix of the stacked error terms ''ε'' is equal to<ref>{{harvtxt|Zellner|1962|loc=eq. (2.4)}}, {{harvtxt|Greene|2002|loc=p. 342}}</ref>

: <math>
\Omega \equiv \operatorname{E}[\,\varepsilon\varepsilon^\mathsf{T}\,|X\,] = \Sigma \otimes I_R,
</math>

where ''I''<sub>''R''</sub> is the ''R''-dimensional [[identity matrix]] and ⊗ denotes the matrix [[Kronecker product]].
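The block-diagonal regressor matrix in ({{EquationNote|1}}) and the Kronecker structure of Ω can be made concrete with a short numerical sketch. The following Python/NumPy snippet is purely illustrative: the number of equations, the sample size, the coefficients and the error covariance are arbitrary values chosen for the example and are not taken from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

m, R = 2, 4                # two equations, four observations (illustrative values)
k = [2, 3]                 # k_i: number of regressors in each equation

# Per-equation design matrices X_i (R x k_i) and coefficient vectors beta_i
X_blocks = [rng.normal(size=(R, ki)) for ki in k]
betas = [rng.normal(size=ki) for ki in k]

# Block-diagonal regressor matrix X and stacked coefficient vector beta, as in (1)
X = np.block([[X_blocks[0], np.zeros((R, k[1]))],
              [np.zeros((R, k[0])), X_blocks[1]]])
beta = np.concatenate(betas)

# Contemporaneous error covariance Sigma (m x m) and stacked covariance Omega = Sigma kron I_R
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
Omega = np.kron(Sigma, np.eye(R))

# Draw the stacked error vector with covariance Omega and form y = X beta + eps
eps = rng.multivariate_normal(np.zeros(m * R), Omega)
y = X @ beta + eps

print(X.shape, Omega.shape)    # (8, 5) and (8, 8): mR x (k_1 + k_2) and mR x mR
</syntaxhighlight>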
== Estimation ==
The SUR model is usually estimated using the [[feasible generalized least squares]] (FGLS) method. This is a two-step method: in the first step we run an [[ordinary least squares]] regression for ({{EquationNote|1}}), and the residuals from this regression are used to estimate the elements of the matrix Σ:<ref name="Amemiya198">{{harvtxt|Amemiya|1985|loc=page 198}}</ref>

: <math>
\hat\sigma_{ij} = \frac1R\, \hat\varepsilon_i^\mathsf{T} \hat\varepsilon_j .
</math>

In the second step we run a [[generalized least squares]] regression for ({{EquationNote|1}}) using the variance matrix <math style="vertical-align:-.2em">\scriptstyle\hat\Omega\;=\;\hat\Sigma\,\otimes\,I_R</math>:

: <math>
\hat\beta = \Big( X^\mathsf{T}(\hat\Sigma^{-1}\otimes I_R) X \Big)^{\!-1} X^\mathsf{T}(\hat\Sigma^{-1}\otimes I_R)\,y .
</math>

This estimator is [[bias of an estimator|unbiased]] in small samples assuming the error terms ''ε<sub>ir</sub>'' have a symmetric distribution; in large samples it is [[consistent estimator|consistent]] and [[asymptotic distribution|asymptotically normal]] with limiting distribution<ref name="Amemiya198"/>

: <math>
\sqrt{R}(\hat\beta - \beta) \ \xrightarrow{d}\ \mathcal{N}\Big(\,0,\; \Big(\tfrac1R X^\mathsf{T}(\Sigma^{-1}\otimes I_R) X \Big)^{\!-1}\,\Big) .
</math>
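A minimal sketch of this two-step procedure on simulated data is given below. All data-generating values (coefficients, error covariance, sample size, random seed) are assumptions made for the illustration, not part of the method itself.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
m, R = 2, 200
k = [2, 3]

# Simulate a two-equation SUR system with contemporaneously correlated errors
X_blocks = [rng.normal(size=(R, ki)) for ki in k]
beta_true = [np.array([1.0, -2.0]), np.array([0.5, 1.5, -1.0])]
Sigma_true = np.array([[1.0, 0.8],
                       [0.8, 2.0]])
E = rng.multivariate_normal(np.zeros(m), Sigma_true, size=R)      # R x m error draws
ys = [X_blocks[i] @ beta_true[i] + E[:, i] for i in range(m)]

# Step 1: equation-by-equation OLS; the residuals give Sigma-hat
b_ols = [np.linalg.lstsq(X_blocks[i], ys[i], rcond=None)[0] for i in range(m)]
resid = np.column_stack([ys[i] - X_blocks[i] @ b_ols[i] for i in range(m)])  # R x m
Sigma_hat = resid.T @ resid / R              # sigma_hat_ij = (1/R) e_i' e_j

# Step 2: GLS on the stacked system with Omega-hat = Sigma-hat kron I_R
X = np.zeros((m * R, sum(k)))
X[:R, :k[0]] = X_blocks[0]
X[R:, k[0]:] = X_blocks[1]
y = np.concatenate(ys)
W = np.kron(np.linalg.inv(Sigma_hat), np.eye(R))    # inverse of Omega-hat
beta_fgls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("OLS :", np.concatenate(b_ols).round(3))
print("FGLS:", beta_fgls.round(3))
</syntaxhighlight>

For realistic problem sizes one would exploit the block structure of <math>\scriptstyle\hat\Sigma^{-1}\otimes I_R</math> rather than forming the ''mR''×''mR'' matrix explicitly; the explicit form is used here only to mirror the formula above.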
Other estimation techniques besides FGLS have been suggested for the SUR model: the maximum likelihood (ML) method under the assumption that the errors are normally distributed; the iterative generalized least squares (IGLS) method, where the residuals from the second step of FGLS are used to recalculate the matrix <math style="vertical-align:0">\scriptstyle\hat\Sigma</math>, then estimate <math style="vertical-align:-.3em">\scriptstyle\hat\beta</math> again using GLS, and so on, until convergence is achieved; and the iterative ordinary least squares (IOLS) scheme, where estimation is performed on an equation-by-equation basis, but every equation includes as additional regressors the residuals from the previously estimated equations in order to account for the cross-equation correlations; the estimation is run iteratively until convergence is achieved. {{harvtxt|Kmenta|Gilbert|1968}} ran a Monte Carlo study and established that all three methods (IGLS, IOLS and ML) yield numerically equivalent results; they also found that the asymptotic distribution of these estimators is the same as the distribution of the FGLS estimator, whereas in small samples none of the estimators was superior to the others. {{harvtxt|Zellner|Ando|2010}} developed a direct Monte Carlo method for the Bayesian analysis of the SUR model.
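A compact sketch of such an iterated GLS loop is shown below, again on illustrative simulated data; the convergence tolerance and the iteration cap are arbitrary choices made for the example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
m, R = 2, 200
X_blocks = [rng.normal(size=(R, 2)) for _ in range(m)]
beta_true = [np.array([1.0, -2.0]), np.array([0.5, 1.5])]
E = rng.multivariate_normal(np.zeros(m), [[1.0, 0.8], [0.8, 2.0]], size=R)
ys = [X_blocks[i] @ beta_true[i] + E[:, i] for i in range(m)]

# Stacked system: block-diagonal X and stacked y
k = [Xb.shape[1] for Xb in X_blocks]
X = np.zeros((m * R, sum(k)))
X[:R, :k[0]] = X_blocks[0]
X[R:, k[0]:] = X_blocks[1]
y = np.concatenate(ys)

# Start from equation-by-equation OLS
beta = np.concatenate([np.linalg.lstsq(X_blocks[i], ys[i], rcond=None)[0] for i in range(m)])

for _ in range(100):                        # re-estimate Sigma and beta until the estimate stabilizes
    resid = (y - X @ beta).reshape(m, R)    # row i holds the residual vector of equation i
    Sigma_hat = resid @ resid.T / R
    W = np.kron(np.linalg.inv(Sigma_hat), np.eye(R))
    beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("Iterated GLS estimate:", beta.round(3))
</syntaxhighlight>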
== Equivalence to OLS ==
There are two important cases in which the SUR estimates turn out to be equivalent to equation-by-equation OLS, so that there is no gain in estimating the system jointly. These cases are:
# When the matrix Σ is known to be diagonal, that is, there are no cross-equation correlations between the error terms. In this case the system becomes not seemingly but truly unrelated.
# When each equation contains exactly the same set of regressors, that is {{nowrap|1=''X''<sub>1</sub> = ''X''<sub>2</sub> = … = ''X<sub>m</sub>''}}.
That the estimators turn out to be numerically identical to the OLS estimates follows from [[Kruskal's theorem]],<ref>{{harvtxt|Davidson|MacKinnon|1993|loc=page 313}}</ref> or can be shown by direct calculation.<ref>{{harvtxt|Amemiya|1985|loc=page 197}}</ref>
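The second case can also be checked numerically: when every equation has the same design matrix, the GLS formula collapses to equation-by-equation OLS for any positive-definite weighting matrix, so the estimated <math style="vertical-align:0">\scriptstyle\hat\Sigma</math> has no effect on the coefficient estimates. The following sketch uses arbitrary simulated values solely to illustrate this.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
m, R, k = 2, 100, 3
X0 = rng.normal(size=(R, k))                  # the same regressors in every equation
Sigma = np.array([[1.0, 0.9],
                  [0.9, 2.0]])
E = rng.multivariate_normal(np.zeros(m), Sigma, size=R)
ys = [X0 @ rng.normal(size=k) + E[:, i] for i in range(m)]

# Equation-by-equation OLS
beta_ols = np.concatenate([np.linalg.lstsq(X0, yi, rcond=None)[0] for yi in ys])

# SUR/GLS on the stacked system, weighting with an arbitrary positive-definite Sigma
X = np.kron(np.eye(m), X0)                    # block-diagonal X with identical blocks
y = np.concatenate(ys)
W = np.kron(np.linalg.inv(Sigma), np.eye(R))
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(np.allclose(beta_ols, beta_gls))        # True: the two estimators coincide here
</syntaxhighlight>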
== See also ==
* [[General linear model]]
* [[Simultaneous equations models]]

== Notes ==
{{reflist|3}}

== References ==
{{refbegin}}
* {{cite book
 | last = Amemiya | first = Takeshi
 | title = Advanced econometrics
 | year = 1985
 | publisher = Harvard University Press
 | location = Cambridge, Massachusetts
 | isbn = 0-674-00560-0
 | ref = harv
 }}
* {{cite book
 | last1 = Davidson | first1 = Russell
 | last2 = MacKinnon | first2 = James G.
 | title = Estimation and inference in econometrics
 | year = 1993
 | publisher = Oxford University Press
 | isbn = 978-0-19-506011-9
 | ref = harv
 }}
* {{cite book
 | last = Greene | first = William H.
 | title = Econometric analysis
 | publisher = Prentice Hall
 | year = 2002 | edition = 5th
 | isbn = 0-13-066189-9
 | ref = harv
 }}
* {{cite book
 | last = Hayashi | first = Fumio
 | title = Econometrics
 | year = 2000
 | publisher = Princeton University Press
 | isbn = 0-691-01018-8
 | ref = harv
 }}
* {{cite journal
 | last1 = Kmenta | first1 = Jan | author-link = Jan Kmenta
 | last2 = Gilbert | first2 = Roy F.
 | title = Small sample properties of alternative estimators of seemingly unrelated regressions
 | year = 1968
 | journal = Journal of the American Statistical Association
 | volume = 63 | issue = 324
 | pages = 1180–1200
 | doi = 10.2307/2285876
 | ref = harv
 }}
* {{cite book
 | last1 = Srivastava | first1 = Virendra K.
 | last2 = Giles | first2 = David E.A.
 | title = Seemingly unrelated regression equations models: estimation and inference
 | year = 1987
 | publisher = Marcel Dekker
 | location = New York
 | isbn = 978-0-8247-7610-7
 | ref = harv
 }}
* {{cite journal
 | last = Zellner | first = Arnold
 | title = An efficient method of estimating seemingly unrelated regression equations and tests for aggregation bias
 | journal = Journal of the American Statistical Association
 | year = 1962
 | volume = 57
 | pages = 348–368
 | doi = 10.2307/2281644
 | ref = harv
 }}
* {{cite journal
 | last1 = Zellner | first1 = Arnold
 | last2 = Ando | first2 = Tomohiro
 | title = A direct Monte Carlo approach for Bayesian analysis of the seemingly unrelated regression model
 | year = 2010
 | journal = Journal of Econometrics
 | volume = 159
 | pages = 33–45
 | ref = harv
 }}
{{refend}}

[[Category:Econometrics]]
[[Category:Simultaneous equation methods (econometrics)]]
[[Category:Mathematical and quantitative methods (economics)]]
[[Category:Regression analysis]]