With Search Engine Optimization, your aim is to rank at the top of search results. But which pages of your website appear there also matters, and there is a way to influence it: you can control what search engine spiders are allowed to crawl and what they are not.
Some pages of your website are simply not useful to end users, and that is the most common reason to restrict them. The robots.txt file lets you tell search engine crawlers which parts of your website they should not crawl. The file must be named robots.txt and placed in the root directory of your domain. If you are working with subdomains, each subdomain needs its own separate robots.txt file.
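For example, assuming a site at example.com (a placeholder domain) with a blog subdomain, crawlers look for a separate file at the root of each host:

https://example.com/robots.txt
https://blog.example.com/robots.txt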
Let’s take a look at an example file:
# Rules for all crawlers
User-agent: *
# Block the /Images/ directory
Disallow: /Images/
# Block any URL path beginning with /search
Disallow: /search
In the above example, no crawler that obeys robots.txt will access any content under /Images/ or any URL whose path begins with /search. If you want search engines to index all the content on your website, you do not need a robots.txt file at all. One caution: never rely on robots.txt to protect sensitive information. The file is only advisory, and spammers and malicious bots can simply ignore it.
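And if you prefer to keep a robots.txt file in place while blocking nothing, an empty Disallow rule allows all crawling (this minimal sketch is equivalent to having no file at all):

User-agent: *
Disallow: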
SEO has a tip or technique for everything. Implement them with care!
Image Credit: https://tinyurl.com/t8err58