Robots.txt Validator


See how to use the Robots.txt Validator

The Robots Exclusion Protocol, commonly referred to as /robots.txt, is used to give direction and information about a given website to web robots. When a robot visits a website, the first thing it does is fetch the /robots.txt file to identify which pages, if any, it is disallowed from crawling.
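
For illustration, here is a minimal sketch of that check in Python, using the standard library's urllib.robotparser; the site URL and user-agent string below are placeholders, not anything specific to this tool.

    # Minimal sketch: how a well-behaved robot consults /robots.txt before crawling.
    # The URL and user-agent name are placeholder assumptions.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # the file robots look for first
    rp.read()                                     # fetch and parse the file

    # True if this user agent is allowed to fetch the given page
    print(rp.can_fetch("ExampleBot", "https://example.com/private/page.html"))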

It’s important to note that robots can choose to ignore the file regardless of any direction it gives, and that /robots.txt files are publicly accessible.

First developed in 1994, the original document has undergone some changes, resulting in the revised A Method for Web Robots Control in 1997. There is currently no official effort to develop /robots.txt further; however, unofficial efforts to extend robots exclusion mechanisms were discussed in 2008.

This tool can help identify errors in your current /robots.txt file. It also lists the pages you’ve disallowed.
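
As a rough illustration of the kind of checks involved, here is a short Python sketch that lists Disallow rules and flags unrecognized directives. The directive list and error messages are simplifying assumptions for the example, not the tool's actual rule set.

    # Rough sketch of a robots.txt check: collect Disallow paths,
    # flag malformed lines and unknown directives.
    # KNOWN is an assumed subset of common directives, not an official list.
    KNOWN = {"user-agent", "disallow", "allow", "sitemap", "crawl-delay"}

    def check_robots_txt(text):
        disallowed, errors = [], []
        for lineno, raw in enumerate(text.splitlines(), 1):
            line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
            if not line:
                continue
            if ":" not in line:
                errors.append(f"line {lineno}: missing ':' in '{raw.strip()}'")
                continue
            field, value = (part.strip() for part in line.split(":", 1))
            if field.lower() not in KNOWN:
                errors.append(f"line {lineno}: unknown directive '{field}'")
            elif field.lower() == "disallow" and value:
                disallowed.append(value)
        return disallowed, errors

    sample = """User-agent: *
    Disallow: /private/
    Dissalow: /typo/
    """
    paths, problems = check_robots_txt(sample)
    print("Disallowed:", paths)   # ['/private/']
    print("Errors:", problems)    # flags the misspelled 'Dissalow' directive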

If you like this tool, please Plus it, Like it, Tweet it, or better yet, link to it - Jim