Distribution-free maximum likelihood for binary responses

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Carolineneil (talk | contribs) at 14:26, 8 June 2017. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


Distribution-free maximum likelihood for binary responses

In this article, the latent utility model serves as the running example of a binary response model. The intuition of the latent utility model is that a respondent picks the choice that yields the highest utility for her. Because utility is not observable, the latent utility is assumed to be linear in explanatory variables that affect the attractiveness of each choice, with an additive response error capturing the randomness of the choice-making process. In this model, the choice is:

y = 1(u1 ≥ u0), where u1 = x1'β + ε1 and u0 = x0'β + ε0,

or equivalently

y = 1((x1 − x0)'β + ε ≥ 0), with ε = ε1 − ε0,

where x1 and x0 are two vectors of the explanatory covariates for the two choices, β is the vector of coefficients, and ε1, ε0 are the response errors.

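The data-generating process just described can be simulated in a few lines. The coefficient values and the logistic error distribution below are illustrative assumptions for this sketch, not part of the model itself:

```python
import math
import random

random.seed(0)

BETA = [1.0, -0.5]  # illustrative "true" coefficients, chosen arbitrarily

def simulate(n):
    """Draw n observations from the latent utility model:
    y = 1 exactly when x'beta + e >= 0, where x = x1 - x0 is the
    covariate difference and e = e1 - e0 is the response error
    (logistic here, purely for illustration)."""
    data = []
    for _ in range(n):
        x = [random.gauss(0.0, 1.0) for _ in BETA]
        u = random.random()
        e = math.log(u / (1.0 - u))  # inverse-CDF draw from the logistic
        xb = sum(b * xi for b, xi in zip(BETA, x))
        data.append((x, 1 if xb + e >= 0 else 0))
    return data

data = simulate(500)
share_ones = sum(y for _, y in data) / len(data)
```

Only the sign of x'β + ε is observed, which is why only the binary outcome y, not the latent utility itself, enters the likelihoods below.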
If a distributional assumption about the response error is imposed, then the log-likelihood function has a specific closed-form representation. For instance, if the response error is assumed to be distributed as N(0, σ²), then, writing x_i = x1_i − x0_i and normalizing σ = 1, the log-likelihood function can be rewritten as:

Q = Σ_i [ y_i log Φ(x_i'β) + (1 − y_i) log(1 − Φ(x_i'β)) ]

where Φ(·) is the CDF of the standard normal distribution. Here, even though Φ(·) does not have a closed-form representation, its derivative does. Therefore the maximum likelihood estimate can be computed by solving the first-order condition. Alternatively, if the response errors are assumed to follow a Gumbel distribution, so that their difference ε is logistic, then the log-likelihood function can be rewritten as:

Q = Σ_i [ y_i log F(x_i'β) + (1 − y_i) log(1 − F(x_i'β)) ]

where F(·) = exp(·)/(1 + exp(·)) is the CDF of the standard logistic distribution, which does have a closed-form representation.
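Both parametric objectives can be evaluated with nothing beyond the standard library. The helper names `Phi`, `F`, and `log_lik`, and the toy data layout (each observation a pair of a covariate list and a 0/1 response), are conventions of this sketch:

```python
import math

def Phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def F(v):
    """Standard logistic CDF -- this one has a closed form."""
    return 1.0 / (1.0 + math.exp(-v))

def log_lik(beta, data, cdf):
    """Q(beta) = sum_i [ y_i log G(x_i'beta) + (1 - y_i) log(1 - G(x_i'beta)) ]
    for a response-probability CDF G (Phi for probit, F for logit)."""
    q = 0.0
    for x, y in data:
        p = cdf(sum(b * xi for b, xi in zip(beta, x)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # keep the logs finite
        q += y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return q
```

With beta = 0 every fitted probability is 1/2, so Q reduces to n·log(1/2) under either CDF.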

Both of the models above rest on a distributional assumption about the response error term. Adding a specific distributional assumption can make the model computationally tractable, thanks to the closed-form representation, but if the distribution of the error term is misspecified, the estimates based on that assumption will be inconsistent. To obtain a more robust estimator, models that do not depend on a distributional assumption can be used. The basic idea of the distribution-free approach is to replace the two probability terms in the log-likelihood function with other weights. The general form of the log-likelihood function can be written as:

Q = Σ_i [ y_i W1(x_i'β) + (1 − y_i) W0(x_i'β) ]

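As a sketch of this replacement, the weighted objective can be coded generically. The names `W1` and `W0` are placeholders for whatever weighting scheme is plugged in:

```python
def weighted_objective(beta, data, W1, W0):
    """Q(beta) = sum_i [ y_i W1(x_i'beta) + (1 - y_i) W0(x_i'beta) ].

    Choosing W1 = log(G(.)) and W0 = log(1 - G(.)) for a CDF G recovers
    the parametric log-likelihoods above; other choices of W1 and W0
    give distribution-free estimators."""
    q = 0.0
    for x, y in data:
        v = sum(b * xi for b, xi in zip(beta, x))
        q += y * W1(v) + (1 - y) * W0(v)
    return q
```

The estimator is then whatever value of beta maximizes Q for the chosen weights.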
For instance, Manski (1975) proposed a discrete weighting scheme for the multi-response model, which in the binary context can be represented as:

Q = Σ_i [ y_i w1 1(x_i'β ≥ 0) + (1 − y_i) w0 1(x_i'β < 0) ]

where w1 and w0 are two constants in (0, 1) and 1(·) is the indicator function. The intuition of this weighting scheme is that the probability of the choice depends on the relative order of the deterministic part of the utility. Under the discrete weighting scheme, the estimator, also called the maximum score estimator, converges slowly and has a non-standard limiting distribution, and Horowitz (1992) proposed a smoothed weighting scheme with a bandwidth h_n shrinking to zero as the sample size grows, which can be represented as:

Q = Σ_i [ y_i K(x_i'β / h_n) + (1 − y_i) (1 − K(x_i'β / h_n)) ]
where the weight function K(·) has to satisfy the following conditions:

(1) |K(u)| is bounded over R;

(2) lim_{u → −∞} K(u) = 0 and lim_{u → +∞} K(u) = 1;

(3) K(u) = 1 − K(−u).

Here, the weight function K(·) is analogous to a CDF, but it can be more general and flexible than the weight functions in models based on a specific distributional assumption. The estimator under this weighting scheme is called the smoothed maximum score estimator. It is usually more computationally tractable than the maximum score estimator because of its smoothness, and it is more robust than estimators based on distributional assumptions.
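Both score objectives can be sketched as plain objective-function evaluations. The logistic choice of K below is just one admissible weight function satisfying conditions (1)–(3); note also that in maximum score estimation β is identified only up to scale, so in practice one coefficient is usually normalized to ±1 before maximizing:

```python
import math

def score(b, data):
    """Manski's maximum score objective with w1 = w0 = 1:
    it counts the choices whose sign is predicted correctly."""
    s = 0
    for x, y in data:
        v = sum(bi * xi for bi, xi in zip(b, x))
        s += y * (v >= 0) + (1 - y) * (v < 0)
    return s

def smoothed_score(b, data, h):
    """Horowitz-style smoothed objective: the indicator is replaced by a
    smooth weight K(v / h) that tends to the indicator as h -> 0."""
    s = 0.0
    for x, y in data:
        v = sum(bi * xi for bi, xi in zip(b, x))
        # logistic CDF as K, written via tanh to avoid overflow;
        # it is bounded, has limits 0 and 1, and satisfies K(u) = 1 - K(-u)
        k = 0.5 * (1.0 + math.tanh(0.5 * v / h))
        s += y * k + (1 - y) * (1.0 - k)
    return s
```

Maximizing either objective over the normalized coefficient vector (for example by grid search in low dimensions) yields the maximum score or smoothed maximum score estimate.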




References

Manski, Charles F. (1975). "Maximum score estimation of the stochastic utility model of choice". Journal of Econometrics. 3 (3): 205–228.
Horowitz, Joel L. (1992). "A Smoothed Maximum Score Estimator for the Binary Response Model". Econometrica. 60 (3): 505–531.