Talk:Boyer–Moore–Horspool algorithm

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by SirWumpus (talk | contribs) at 16:37, 5 November 2015 (No mention of Boyer-Moore-Sunday algorithm.: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
More Typical Wiki Crap

Well, we have an implementation which does not produce correct results using two compilers. And the original poster does not perform the shift correctly. Do not use this code.

151.196.6.139 (talk) noloader

cross-platform

I'm not fluent in the C programming language, and I don't have time to sift through the code in this article to develop a pseudocode representation of this algorithm.

Can someone who knows C better than I please read through and put the code into something cross-platform?

--Oddb411 13:01, 17 September 2006 (UTC)[reply]

C is actually a cross-platform language when written properly. The sample I wrote is meant to be of that kind. I'll see if I can edit it to use less C-specific syntax, so that it is easier to understand for those who don't have experience with languages of C origin. --Bisqwit 21:00, 17 September 2006 (UTC)[reply]

variable names

Many programmers (WikiWikiWeb:MeaningfulName) now recommend "Always, always, always use good, unabbreviated, correctly-spelled meaningful names." Is there some more meaningful name we could use instead of "hpos" ?

In the Boyer-Moore algorithm article, the "hpos" variable tracks the position where a copy of the needle might possibly start in the haystack, although we generally look at some later position in the haystack. (What would be a more meaningful name for this?)

In this Boyer–Moore–Horspool algorithm article, the "hpos" variable tracks the actual letter in the haystack that we look at. (What would be a more meaningful name for this?)

(The difference is precisely BMH_hpos = BM_hpos + npos.)
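
As a rough illustration (the helper functions and example strings below are made up; only hpos and npos come from the articles' code), the two conventions look like this in Python:

def bm_style_check(haystack, needle, hpos):
    # Window-start convention (Boyer-Moore article): hpos marks where the
    # needle is aligned; the byte examined is haystack[hpos + npos].
    npos = len(needle) - 1
    while npos >= 0 and haystack[hpos + npos] == needle[npos]:
        npos -= 1
    return npos < 0  # True if the needle matches at window start hpos

def bmh_style_check(haystack, needle, hpos):
    # Examined-byte convention (this article): hpos points directly at the
    # byte being compared, i.e. it starts at window start + npos.
    npos = len(needle) - 1
    while npos >= 0 and haystack[hpos] == needle[npos]:
        hpos -= 1
        npos -= 1
    return npos < 0  # True if the needle ends at the original hpos

# The same alignment is tested either way when BMH_hpos = BM_hpos + npos:
print(bm_style_check("abxabcab", "abc", 3))      # True
print(bmh_style_check("abxabcab", "abc", 3 + 2)) # True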

I think doing it the same way in both articles would be less confusing. Which way (the current BM way, or the current BMH way) would make for clearer, easier-to-understand code?

--68.0.120.35 02:32, 25 March 2007 (UTC)[reply]

Possible miscalculation of comparisons

The article says "For instance a 32 byte needle ending in "z" searching through a 255 byte haystack which doesn't have a 'z' byte in it would take 3 byte comparisons."

I think it meant 7 byte comparisons, since each comparison skips 32 bytes until there are fewer than 32 bytes remaining.
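
One quick way to check is to instrument a straightforward Horspool loop and count the byte comparisons; the helper below is just a throwaway sketch written for that purpose, not the article's code:

def bmh_comparisons(pattern, text):
    # Plain Horspool search, returning the number of byte comparisons made.
    m, n = len(pattern), len(text)
    skip = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    comparisons = 0
    k = m - 1
    while k < n:
        i, j = k, m - 1
        while j >= 0:
            comparisons += 1
            if text[i] != pattern[j]:
                break
            i -= 1
            j -= 1
        else:
            return comparisons  # match found
        k += skip.get(text[k], m)
    return comparisons

# 32-byte needle ending in 'z', 255-byte haystack with no 'z' in it:
# every alignment fails on its first comparison and shifts by 32.
print(bmh_comparisons('a' * 31 + 'z', 'b' * 255))  # prints 7, not 3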

Can anybody confirm? —Preceding unsigned comment added by 155.210.218.53 (talk) 18:21, 18 January 2008 (UTC)[reply]

Optimising the table size

I might be wrong, but it looks like space can be saved in the bad character skip table by using a hash of the character instead of its actual value. In the case of a collision, the result will just be a smaller shift.

Not a particularly useful idea when the table is only 256 entries long, but it would greatly reduce the storage requirements if you were using, say, a 32-bit character set. In a case like that, probably only a small fraction of the character set would be seen, and so chances of collision would be low for a reasonably sized hash. CountingPine (talk) 08:27, 17 July 2009 (UTC)[reply]
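
For what it's worth, a minimal Python sketch of the idea (TABLE_SIZE and the modulo hash are arbitrary illustrative choices): on a collision the bucket keeps the smaller of the conflicting shifts, so the search stays correct but may occasionally shift less than it could:

TABLE_SIZE = 1024  # hypothetical: far smaller than a 32-bit code-point space

def build_hashed_skip_table(pattern):
    # Bad-character table indexed by a hash of the character, not its value.
    m = len(pattern)
    skip = [m] * TABLE_SIZE
    for i, ch in enumerate(pattern[:-1]):
        bucket = ord(ch) % TABLE_SIZE  # any cheap hash of the code point would do
        skip[bucket] = min(skip[bucket], m - i - 1)  # keep the safer (smaller) shift
    return skip

def hashed_skip(skip, ch):
    # Shift to use when ch is the text character aligned with the pattern's last byte.
    return skip[ord(ch) % TABLE_SIZE]

pattern = "搜索例"  # a pattern drawn from a large character set
table = build_hashed_skip_table(pattern)
print(hashed_skip(table, "搜"), hashed_skip(table, "z"))  # 2 3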

KMP

How is this related to KMP? If anything, the other heuristic in Boyer-Moore (which is not in this algorithm) is closely related to KMP's table (e.g. the compute_prefix code in http://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm is exactly the pre-processing in KMP). Raduberinde (talk) 23:36, 17 August 2010 (UTC)[reply]
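
For reference, the KMP preprocessing in question is the prefix (failure) function, which in textbook form looks roughly like this (a sketch, not the code from either article):

def compute_prefix(pattern):
    # KMP failure function: pi[i] is the length of the longest proper prefix
    # of pattern[:i + 1] that is also a suffix of it.
    m = len(pattern)
    pi = [0] * m
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[k] != pattern[i]:
            k = pi[k - 1]
        if pattern[k] == pattern[i]:
            k += 1
        pi[i] = k
    return pi

print(compute_prefix("ababaca"))  # [0, 0, 1, 2, 3, 0, 1]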

Python implementation

From http://code.activestate.com/recipes/117223-boyer-moore-horspool-string-searching/


# bmh.py
#
# An implementation of Boyer-Moore-Horspool string searching.
#
# This code is Public Domain.
#
def BoyerMooreHorspool(pattern, text):
    m = len(pattern)
    n = len(text)
    if m > n: return -1
    # Bad-character shift table: shift by the full pattern length by default;
    # characters occurring in the pattern (except its last byte) shift less.
    skip = [m] * 256
    for k in range(m - 1):
        skip[ord(pattern[k])] = m - k - 1
    k = m - 1  # index in text of the byte aligned with the pattern's last byte
    while k < n:
        # Compare pattern and text right to left, starting from position k.
        j = m - 1
        i = k
        while j >= 0 and text[i] == pattern[j]:
            j -= 1
            i -= 1
        if j == -1:
            return i + 1  # match found, starting at index i + 1
        k += skip[ord(text[k])]  # shift by the entry for the byte under the pattern's end
    return -1

if __name__ == '__main__':
    text = "this is the string to search in"
    pattern = "the"
    s = BoyerMooreHorspool(pattern, text)
    print('Text:', text)
    print('Pattern:', pattern)
    if s > -1:
        print('Pattern "' + pattern + '" found at position', s)
## end of http://code.activestate.com/recipes/117223/

(The code is released under the PSF license: http://www.opensource.org/licenses/Python-2.0.) <-- What? It makes absolutely no sense to apply a more restrictive license to public domain source code. We can do whatever we want with it, PSF be damned.

Real life examples

Wouldn't there be a benefit in pointing out real-life examples, e.g. https://github.com/ggreer/the_silver_searcher, to substantiate the usefulness of this algorithm?

I saw earlier on this talk page that the algorithm itself was not implemented correctly. I am too lazy to verify the correctness of this code, so I'll leave that to others, but in the above example you have an implementation that does work (though it might have bugs, as any algorithm or piece of software can).

There you have a truly super-fast grep tool using this algorithm for substring searches. — Preceding unsigned comment added by 192.75.88.232 (talk) 19:49, 25 June 2013 (UTC)[reply]

It is amazing that BMH has reigned for 33 years (since 1980); it's time to utilize new CPUs in a much faster way!

It took me 2+ years to optimize and explore this beautiful and simple algorithm. Finally, the gods of searching helped me to reveal the FASTEST function for searching one block of memory within another, the so-called MEMMEM. Given that the Windows OS lacks this important function, and seeing that the *nix world has nothing worthy enough (except some empty newsgroup talk from 2007-2008), I think it is time for a change; the role of the successor is played by 'Railgun_Sekireigan_Bari'.

Why did you remove my contribution?

I just saw that my BMH order 2/12 link in 'External links' was removed; what is the problem? — Preceding unsigned comment added by Sanmayce (talkcontribs) 17:45, 19 December 2013 (UTC)[reply]

Proposed merge with Raita Algorithm

Raita's is apparently an optimization of BMH. QVVERTYVS (hm?) 15:08, 15 March 2015 (UTC)[reply]
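
To illustrate the relationship, here is a rough Python sketch of Raita's variant as I understand it: Horspool's bad-character shift is kept, but the window is checked by comparing the pattern's last, first and middle characters before the rest (the function name and test string are made up):

def raita_search(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Horspool's bad-character shift table, in dictionary form.
    skip = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    last, first, middle = pattern[-1], pattern[0], pattern[m // 2]
    pos = 0
    while pos + m <= n:
        window = text[pos:pos + m]
        # Raita's comparison order: last, first, middle, then everything else.
        if (window[-1] == last and window[0] == first
                and window[m // 2] == middle and window == pattern):
            return pos
        pos += skip.get(text[pos + m - 1], m)
    return -1

print(raita_search("the", "this is the string to search in"))  # 8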

Pseudocode doesn't match the original

I translated this from the Python version that was previously on this page, but it doesn't match Horspool's pseudocode very closely. It also contains some bugs that I discovered when implementing it (corner cases). QVVERTYVS (hm?) 10:26, 6 June 2015 (UTC)[reply]

No mention of Boyer-Moore-Sunday algorithm.

"A very fast substring search algorithm"; Daniel M. Sunday; Communications of the ACM; August 1990

The Sunday variant is slightly more efficient than Horspool.
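
For comparison, a rough Python sketch of Sunday's basic (quick search) variant: the shift is taken from the text character just past the current window, which allows shifts of up to m + 1 (the function name and test string are made up, not Sunday's own code):

def sunday_search(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # shift[c] = m - (index of last occurrence of c in pattern); m + 1 if absent.
    shift = {c: m - i for i, c in enumerate(pattern)}
    pos = 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            return pos
        if pos + m == n:
            break  # no character beyond the window to shift on
        pos += shift.get(text[pos + m], m + 1)
    return -1

print(sunday_search("the", "this is the string to search in"))  # 8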

Thierry Lecroq covers the three versions presented by Sunday in "Exact String Matching Algorithms".