URLs restricted by robots.txt errors

by Geethalakshmi 2010-12-01 18:23:00

Google was unable to crawl the URL because of a robots.txt restriction. This can happen for several reasons. For instance, your robots.txt file might block Googlebot entirely, block access to the directory in which the URL is located, or block access to that specific URL. Often this is not an error at all: you may have deliberately set up robots.txt to prevent Google from crawling the URL. If that is the case, there is nothing to fix; Googlebot will continue to respect the robots.txt rules for that file.
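Each of those situations maps to its own Disallow rule. The three groups below are alternatives, not one combined file, and the /private/ paths are hypothetical placeholders:

# To block Googlebot from the entire site:
User-agent: Googlebot
Disallow: /

# To block only the directory containing the URL:
User-agent: Googlebot
Disallow: /private/

# To block one specific URL:
User-agent: Googlebot
Disallow: /private/page.html

Note that Disallow matches by prefix, so "Disallow: /private/" also blocks everything beneath that directory.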

If a URL redirects to another URL that is blocked by robots.txt, the first URL will be reported as blocked by robots.txt, even though the first URL itself is listed as Allowed in the robots.txt analysis tool.
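A quick way to see why this looks confusing is to check each URL against robots.txt individually: they give different answers. The Python sketch below uses the standard library's urllib.robotparser; the example.com URLs and the /blocked/ path are hypothetical.

from urllib import robotparser

# Hypothetical robots.txt: blocks /blocked/ for Googlebot, allows the rest.
rules = [
    "User-agent: Googlebot",
    "Disallow: /blocked/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

original = "https://example.com/page"                  # redirects to the URL below
redirect_target = "https://example.com/blocked/page"   # disallowed by robots.txt

print(rp.can_fetch("Googlebot", original))         # True: the first URL is allowed
print(rp.can_fetch("Googlebot", redirect_target))  # False: the target is blocked

A crawler that follows the redirect can never reach the target's content, so the report flags the first URL as blocked by robots.txt even though robots.txt itself allows it.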
