Using a robots.txt file to prevent search engines from spidering dynamic variables and pages

Issue

By default, when a search engine spider visits your web site, it follows every variable link, query, search box and dynamic page it finds. This can trap the spider in an endless loop, placing excessive load on your web site and generating high traffic.

Solution

The problem can be solved by uploading a robots.txt file to the root of your web site. The file below prevents search engines from following any URL that contains a ? (the common marker of a dynamically generated page), while still allowing URLs that end in a bare ?.

Insert this text into your robots.txt file:

User-agent: *
Allow: /*?$
Disallow: /*?
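To see how these two rules interact, here is a minimal Python sketch of the wildcard matching that major crawlers document for robots.txt: * matches any sequence of characters, $ anchors the end of the URL, and when both an Allow and a Disallow rule match, the longest (most specific) rule wins, with Allow winning ties. The rule_to_regex and is_allowed helpers are illustrative names, not part of any library.

```python
import re

# The two rules from the robots.txt file above.
RULES = [
    ("allow", "/*?$"),
    ("disallow", "/*?"),
]

def rule_to_regex(pattern):
    # Translate a robots.txt path pattern to a regex:
    # '*' matches any sequence of characters, '$' anchors the end.
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.compile(regex)

def is_allowed(path):
    # Longest matching rule wins; Allow wins ties. No match means allowed.
    best = ("allow", "")
    for kind, pattern in RULES:
        if rule_to_regex(pattern).match(path):
            longer = len(pattern) > len(best[1])
            tie = len(pattern) == len(best[1]) and kind == "allow"
            if longer or tie:
                best = (kind, pattern)
    return best[0] == "allow"

print(is_allowed("/products?sort=price"))  # False: contains a query string
print(is_allowed("/products?"))            # True: ends in a bare '?'
print(is_allowed("/about"))                # True: no '?' at all
```

So a URL like /products?sort=price is blocked by Disallow: /*?, while /products? is permitted because Allow: /*?$ is the longer, more specific match.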

Get in touch

For any additional help, give us a call on 0800 477 333 (8AM to 10PM, 7 days a week).