
Local case-control sampling


In machine learning, local case-control sampling is an algorithm used to reduce the complexity of training a logistic regression classifier. The algorithm reduces the training complexity by selecting a small subsample of the original dataset for training. It assumes the availability of an (unreliable) pilot estimate of the parameters. It then performs a single pass over the entire dataset, using the pilot estimate to identify the most "surprising" samples. In practice, the pilot may come from prior knowledge or from a model trained on a subsample of the dataset. The algorithm is most effective when the underlying dataset is imbalanced. It exploits the structure of conditionally imbalanced datasets more efficiently than alternative methods, such as case-control sampling and weighted case-control sampling.

Imbalanced Datasets

In classification, a dataset is a set of $N$ data points $(x_i, y_i)$, $i = 1, \ldots, N$, where $x_i \in \mathbb{R}^d$ is a feature vector and $y_i \in \{0, 1\}$ is a label. Intuitively, a dataset is imbalanced when certain important statistical patterns are rare. The lack of observations of certain patterns does not always imply their irrelevance. For example, in medical studies of rare diseases, the small number of infected patients (cases) conveys the most valuable information for diagnosis and treatments.

Formally, an imbalanced dataset exhibits one or more of the following properties:

  • Marginal Imbalance. A dataset is marginally imbalanced if one class is rare compared to the other class. In other words, $\mathbb{P}(Y = 1) \approx 0$.
  • Conditional Imbalance. A dataset is conditionally imbalanced when it is easy to predict the correct labels in most cases. For example, if $X \in \{0, 1\}$, the dataset is conditionally imbalanced if $\mathbb{P}(Y = 1 \mid X = 0) \approx 0$ and $\mathbb{P}(Y = 1 \mid X = 1) \approx 1$ (a synthetic illustration follows this list).
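
A minimal sketch of both notions, assuming NumPy; the feature distribution and conditional probabilities below are hypothetical values chosen only to exhibit marginal and conditional imbalance, not figures from the source:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    x = rng.random(N) < 0.01           # binary feature; X = 1 is rare
    p_y = np.where(x, 0.95, 0.001)     # P(Y=1|X=1) close to 1, P(Y=1|X=0) close to 0
    y = rng.random(N) < p_y

    print("P(Y=1)     =", y.mean())      # marginal imbalance: close to 0
    print("P(Y=1|X=0) =", y[~x].mean())  # conditional imbalance: close to 0
    print("P(Y=1|X=1) =", y[x].mean())   # conditional imbalance: close to 1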

Local case-control sampling

In logistic regression, given a model $\theta = (\alpha, \beta)$, the prediction is made according to $\mathbb{P}(Y = 1 \mid X = x) = p_{\theta}(x) = \frac{\exp(\alpha + \beta^T x)}{1 + \exp(\alpha + \beta^T x)}$. The local case-control sampling algorithm assumes the availability of a pilot model $\tilde{\theta} = (\tilde{\alpha}, \tilde{\beta})$. Given the pilot model, the algorithm performs a single pass over the entire dataset to select the subset of samples to include in training the logistic regression model. For a sample $(x, y)$, define the acceptance probability as $a(x, y) = |y - p_{\tilde{\theta}}(x)|$. The algorithm proceeds as follows (a code sketch follows the list):

  1. Generate independent $z_i \sim \mathrm{Bernoulli}(a(x_i, y_i))$ for $i = 1, \ldots, N$.
  2. Fit a logistic regression model to the subsample $S = \{(x_i, y_i) : z_i = 1\}$, obtaining the unadjusted estimates $\hat{\theta}_S = (\hat{\alpha}_S, \hat{\beta}_S)$.
  3. The output model is $\hat{\theta} = (\hat{\alpha}, \hat{\beta})$, where $\hat{\alpha} \leftarrow \hat{\alpha}_S + \tilde{\alpha}$ and $\hat{\beta} \leftarrow \hat{\beta}_S + \tilde{\beta}$.[1]
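
A minimal sketch of these three steps, assuming NumPy and scikit-learn are available; the function name local_case_control_fit and the use of LogisticRegression with a large C (to approximate an unregularized maximum-likelihood fit) are illustrative choices, not part of the original description:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def local_case_control_fit(X, y, pilot_alpha, pilot_beta, seed=0):
        """Sketch of local case-control sampling given a pilot (alpha, beta)."""
        rng = np.random.default_rng(seed)
        # Pilot probabilities p_theta_tilde(x), computed in a single pass.
        p_tilde = sigmoid(pilot_alpha + X @ pilot_beta)
        # Step 1: accept each point independently with probability |y - p_tilde(x)|,
        # so points that "surprise" the pilot are kept with high probability.
        accept = rng.random(len(y)) < np.abs(y - p_tilde)
        # Step 2: fit logistic regression on the accepted subsample
        # (large C approximates an unregularized fit).
        unadjusted = LogisticRegression(C=1e6).fit(X[accept], y[accept])
        alpha_s, beta_s = unadjusted.intercept_[0], unadjusted.coef_[0]
        # Step 3: add the pilot back to obtain the output model.
        return alpha_s + pilot_alpha, beta_s + pilot_beta

In this sketch the pilot could, for example, be the intercept and coefficients of a logistic regression fitted to a small uniform subsample of the data, in line with the remark above that the pilot may come from training on a subsample.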


References

  1. ^ Fithian, William; Hastie, Trevor (2014). "Local case-control sampling: Efficient subsampling in imbalanced data sets". The Annals of Statistics. 42 (5): 1693–1724.