Boyer–Moore majority vote algorithm

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by David Eppstein (talk | contribs) at 07:40, 1 January 2024 (Original reference already includes the 2nd pass). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
The state of the Boyer–Moore algorithm after each input symbol. The inputs are shown along the bottom of the figure, and the stored element and counter are shown as the symbols and their heights along the black curve.

The Boyer–Moore majority vote algorithm is an algorithm for finding the majority of a sequence of elements using linear time and a constant number of words of memory. It is named after Robert S. Boyer and J Strother Moore, who published it in 1981,[1] and is a prototypical example of a streaming algorithm.

In its simplest form, the algorithm finds a majority element, if there is one: that is, an element that occurs repeatedly for more than half of the elements of the input. A version of the algorithm that makes a second pass through the data can be used to verify that the element found in the first pass really is a majority.[1]

If a second pass is not performed and no majority exists, the algorithm will not detect that absence. In the case that no strict majority exists, the returned element can be arbitrary; it is not guaranteed to be the element that occurs most often (the mode of the sequence). It is not possible for a streaming algorithm to find the most frequent element in less than linear space, for sequences whose number of repetitions can be small.[2]

Description

The algorithm maintains in its local variables a sequence element and a counter, with the counter initially zero. It then processes the elements of the sequence, one at a time. When processing an element x, if the counter is zero, the algorithm stores x as its remembered sequence element and sets the counter to one. Otherwise, it compares x to the stored element and either increments the counter (if they are equal) or decrements the counter (otherwise). At the end of this process, if the sequence has a majority, it will be the element stored by the algorithm. This can be expressed in pseudocode as the following steps:

  • Initialize an element m and a counter c with c = 0
  • For each element x of the input sequence:
    • If c = 0, then assign m = x and c = 1
    • else if m = x, then assign c = c + 1
    • else assign c = c − 1
  • Return m
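The steps above translate directly into a short single-pass implementation; the following Python sketch mirrors the pseudocode (the function name is chosen here for illustration):

```python
def majority_vote(sequence):
    """Boyer-Moore majority vote: one pass, O(1) extra space.

    If the sequence has a strict majority element, that element is
    returned; otherwise the result is some element of the sequence,
    with no guarantee about which one.
    """
    m = None  # currently stored element
    c = 0     # counter
    for x in sequence:
        if c == 0:          # counter empty: adopt x as the candidate
            m, c = x, 1
        elif m == x:        # x matches the stored element
            c += 1
        else:               # x differs: cancel one stored vote
            c -= 1
    return m
```

For example, `majority_vote([1, 2, 1, 1, 3, 1])` returns `1`, the majority element of that sequence.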

Even when the input sequence has no majority, the algorithm will report one of the sequence elements as its result. However, it is possible to perform a second pass over the same input sequence in order to count the number of times the reported element occurs and determine whether it is actually a majority. This second pass is needed, as it is not possible for a sublinear-space algorithm to determine whether there exists a majority element in a single pass through the input.[3]
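The two-pass version can be sketched as follows: the first pass selects a candidate as above, and the second pass counts its occurrences to confirm a strict majority (the function name and the `None` return convention for "no majority" are choices made here, not part of the original algorithm's specification):

```python
def find_majority(sequence):
    """Two-pass Boyer-Moore: return the majority element,
    or None if no strict majority exists."""
    # First pass: candidate selection.
    m, c = None, 0
    for x in sequence:
        if c == 0:
            m, c = x, 1
        elif m == x:
            c += 1
        else:
            c -= 1
    # Second pass: verify the candidate is a strict majority.
    if sum(1 for x in sequence if x == m) > len(sequence) // 2:
        return m
    return None
```

On the input `[1, 2, 3]`, the first pass reports the element `3`, but the second pass rejects it, since no element occurs more than ⌊3/2⌋ = 1 times.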

Analysis

The amount of memory that the algorithm needs is the space for one element and one counter. In the random access model of computing usually used for the analysis of algorithms, each of these values can be stored in a machine word and the total space needed is O(1). If an array index is needed to keep track of the algorithm's position in the input sequence, it does not change the overall constant space bound. The algorithm's bit complexity (the space it would need, for instance, on a Turing machine) is higher, the sum of the binary logarithms of the input length and the size of the universe from which the elements are drawn.[2] Both the random access model and bit complexity analyses only count the working storage of the algorithm, and not the storage for the input sequence itself.

Similarly, on a random access machine the algorithm takes time O(n) (linear time) on an input sequence of n items, because it performs only a constant number of operations per input item. The algorithm can also be implemented on a Turing machine in time linear in the input length (n times the number of bits per input item).

Correctness

Suppose that the input contains a majority element x, whose number of copies is more than half of the input length. The correctness of the algorithm (the fact that it outputs x) can be proven by induction on the sequence length. There are two cases:

  • If the algorithm's counter ever returns to zero, then (at the step at which it first returns to zero) exactly half of the input elements seen so far equal the current value of m and half are some other value. Regardless of whether m = x, at most half of the values so far equal x. In order for x to have so few copies up to this point, and still to be a majority element of the whole input, it must also be a majority element in the remaining subsequence, which the algorithm processes in exactly the same way as if it were the whole input. In this case, the correctness of the algorithm follows by applying the induction hypothesis to this remaining subsequence.
  • If the algorithm's counter never returns to zero, then (for an input of length n) there must be exactly (n + c)/2 copies of m (the first element of the input), and (nc)/2 other input values, in order to obtain a counter value of c. But by the defining assumption of this case, c > 0, so the number of copies of m, (n + c)/2, is greater than n/2. That is, the returned value m has a majority, and is the correct value to return.
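The case analysis above can be observed directly by tracing the stored element and counter after each input symbol, as in the figure at the top of the article. The following sketch (an illustration, with names chosen here) yields the algorithm's state after each step:

```python
def trace_states(sequence):
    """Yield (stored element, counter) after each input symbol,
    showing the evolution of the algorithm's state."""
    m, c = None, 0
    for x in sequence:
        if c == 0:
            m, c = x, 1
        elif m == x:
            c += 1
        else:
            c -= 1
        yield (m, c)
```

On the input `"AABCA"`, the states are `[('A', 1), ('A', 2), ('A', 1), ('A', 0), ('A', 1)]`: the counter first returns to zero after the prefix `AABC`, in which exactly half the elements equal the stored element, and the algorithm then restarts on the remaining suffix `A`, exactly as in the first case of the proof.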

References

  1. ^ a b Boyer, R. S.; Moore, J S. (1991), "MJRTY - A Fast Majority Vote Algorithm", in Boyer, R. S. (ed.), Automated Reasoning: Essays in Honor of Woody Bledsoe, Automated Reasoning Series, Dordrecht, The Netherlands: Kluwer Academic Publishers, pp. 105–117, doi:10.1007/978-94-011-3488-0_5, DTIC ADA131702. Originally published as a technical report in 1981.
  2. ^ a b Trevisan, Luca; Williams, Ryan (January 26, 2012), "Notes on streaming algorithms" (PDF), CS154: Automata and Complexity, Stanford University.
  3. ^ Cormode, Graham; Hadjieleftheriou, Marios (October 2009), "Finding the frequent items in streams of data", Communications of the ACM, 52 (10): 97, doi:10.1145/1562764.1562789, S2CID 823439, no algorithm can correctly distinguish the cases when an item is just above or just below the threshold in a single pass without using a large amount of space.