Robots.txt

Welcome to the Robots.txt tutorial.

A robots.txt file implements the Robots Exclusion Protocol (also known as the robots exclusion standard), a convention used by websites to communicate with web crawlers and other web robots. The file tells a robot which areas of the website should not be processed or scanned. It can be used in conjunction with Sitemaps, a robot inclusion standard for websites.
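As a minimal sketch of how these rules are interpreted, the example below parses a hypothetical robots.txt (the rules and the example.com URLs are illustrative, not from any real site) using Python's standard-library robotparser:

```python
# Parse a small, hypothetical robots.txt and check which URLs a
# crawler obeying the Robots Exclusion Protocol may fetch.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Paths under /private/ are disallowed; everything else is allowed.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/index.html"))         # True
```

In practice a crawler would load the live file with `parser.set_url("https://example.com/robots.txt")` followed by `parser.read()` instead of parsing an inline string.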

Not all robots cooperate with the standard: email harvesters, spambots, malware, and robots that scan for security vulnerabilities may ignore it entirely, and may even use the robots.txt file as a map, starting with the very areas of the website they have been told to stay out of.


 1. History
 2. Syntax
 3. Meta tags and headers