Web crawler

From Simple English Wikipedia, the free encyclopedia
Revision as of 07:16, 1 January 2023 by Firefangledfeathers (talk | changes) (add a related page)

A web crawler or spider is a computer program that automatically fetches the contents of a web page. The program then analyses the content, for example to index it by certain search terms. Search engines commonly use web crawlers.[1]
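
The fetch-analyse-follow loop described above can be sketched in Python using only the standard library. This is a minimal illustration, not a production crawler: the site layout, the `crawl` and `extract_links` names, and the pluggable `fetch` function are all assumptions made for the example.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkCollector(HTMLParser):
    """Collects the href value of every <a> tag seen while parsing."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html, base_url):
    """Parse one page's HTML and return its outgoing links as absolute URLs."""
    parser = LinkCollector()
    parser.feed(html)
    return [urljoin(base_url, href) for href in parser.links]


def crawl(start_url, fetch, max_pages=10):
    """Breadth-first crawl: fetch a page, record it, queue links not seen before.

    `fetch` is any function mapping a URL to its HTML text, e.g. a real
    downloader built on urllib.request.urlopen, or a stub for testing.
    Returns a dict of URL -> HTML for every page visited.
    """
    seen = {start_url}
    queue = [start_url]
    pages = {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        pages[url] = fetch(url)
        for link in extract_links(pages[url], url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages
```

A search engine would then pass each fetched page to an indexer instead of just storing it; real crawlers also honour robots.txt and rate-limit their requests, which this sketch leaves out.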

Related pages
  • HTTrack – a web crawler released in 1998

References

  1. Masanès, Julien (February 15, 2007). Web Archiving. Springer. p. 1. ISBN 978-3-540-46332-0. Retrieved April 24, 2014.