Robots.txt Generator


Default - All Robots are:
Crawl-Delay:
Sitemap: (leave blank if you don't have one)
Search Robots: Google
  Google Image
  Google Mobile
  MSN Search
  Yahoo
  Yahoo MM
  Yahoo Blogs
  Ask/Teoma
  GigaBlast
  DMOZ Checker
  Nutch
  Alexa/Wayback
  Baidu
  Naver
  MSN PicSearch
Restricted Directories: (the path is relative to root and must contain a trailing slash "/")

Now, create a 'robots.txt' file in your site's root directory, copy the text generated above, and paste it into that file.


About Robots.txt Generator

Robots.txt, or the Robots Exclusion Protocol, is a simple text file used by websites to communicate with web crawlers and other web robots. The file tells web robots which areas of the website should not be processed or scanned. Robots are typically used by search engines to categorize websites. Not all robots follow the standard; email harvesters, spambots, malware, and robots that scan for security vulnerabilities may even start with the parts of the website they have been told to stay out of. The standard is different from, but can be used in conjunction with, Sitemaps, a robot inclusion standard for websites.
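For instance, a single robots.txt file can combine exclusion rules with a Sitemap reference; in the sketch below, the blocked directory and the sitemap URL are only placeholders:

User-agent: *
Disallow: /private/
Sitemap: https://www.example.com/sitemap.xml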

What Is Robots.txt?

When a website owner wants to give instructions to web robots, they place a text file called robots.txt in the root of the web server's top-level directory (e.g. https://www.seogape.com/robots.txt). This file contains the instructions in a specific format (examples are given below). Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the website. If this file does not exist, web robots assume that the website owner did not provide any specific instructions and crawl the complete website.

A robots.txt file on a website works as a request that specific robots ignore specific files or directories when crawling the site. This could be, for example, out of a preference for privacy from search engine results, because the content of the chosen directories could be misleading or irrelevant to the categorization of the site as a whole, or out of a desire that an application only operate on certain data. Links to pages listed in robots.txt can still appear in search results if they are linked to from a page that is crawled.

How To Create A Robots.txt File

A robots.txt file can easily be made using this free SEO tool. To generate robots.txt online, simply enter the file and directory paths in the input column, manage the allow or disallow rules for the robots, include your sitemap as a reference, and set whether individual web robots are allowed or refused.
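As a rough illustration, a generated file that sets a crawl delay, restricts one directory, and references a sitemap might look like this (the delay value, directory path, and sitemap URL are placeholders; note that Crawl-delay is not part of the original standard and some crawlers, such as Googlebot, ignore it):

User-agent: *
Crawl-delay: 10
Disallow: /cgi-bin/
Sitemap: https://www.example.com/sitemap.xml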

Here is an explanation of what the different directives in a robots.txt file mean.

User-agent:

This directive specifies which robot the rules that follow apply to.

User-agent: *

This line applies the rules to all robots, because the wildcard "*" stands for every robot.

User-agent: Googlebot

This line means the rules that follow apply only to Googlebot.

Disallow:

The Disallow directive tells the robot which pages on the site must be excluded from crawling. If the Disallow directive has no value, no pages are disallowed.

EXAMPLES:

Here are some examples that will help you understand and use robots.txt.

This example allows all robots full access to a website:

User-agent: *
Disallow:

This example blocks all robots from the entire website:

User-agent: *
Disallow: /

This example blocks all robots from one specific directory:

User-agent: *
Disallow: /cgi-bin/

This example blocks all robots from one specific file in a directory:

User-agent: *
Disallow: /directory/file.html

Note that all other files in the specified directory will be processed.
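Many major crawlers, including Googlebot and Bingbot, also support an Allow directive, which is not part of the original standard. A sketch like the one below (the paths are placeholders) blocks a directory while re-opening a single file inside it:

User-agent: *
Disallow: /private/
Allow: /private/public-page.html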