
Robots exclusion standard


Robots.txt is the name of the file used in the robots exclusion standard, which provides information to well-behaved web spiders and other web robots so that they can avoid the parts of a website that the site's operators do not want accessed.
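For example, a site operator who wants robots to stay away from two directories could place a robots.txt file like the following at the root of the site (the directory names here are purely illustrative):

 User-agent: *
 Disallow: /private/
 Disallow: /tmp/

The "User-agent: *" line addresses all robots, and each "Disallow" line names a path prefix that compliant robots should not retrieve.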

Note that the robots exclusion standard is purely advisory, not mandatory: marking an area of a site as out of bounds with robots.txt does not guarantee privacy, because ill-behaved robots are free to ignore the file.