SafeDNS Categorization Crawler Policy


Our goals and policy for the collection of web pages by the SafeDNS Categorization Crawler are briefly described below. If you have any questions, please email us. We appreciate your cooperation and support.


1. Goal

The main reason we at SafeDNS collect web pages is to correctly categorize Internet resources and to develop new technologies and products for SafeDNS.


2. Policy

Our crawler always respects common crawling norms, including the following:
Our crawler accesses each site page by page, with intervals between requests. Currently, we fetch only a limited number of pages from each server.

Although we interleave the crawling processes with processes for detecting host aliases, an aliased server may occasionally be accessed simultaneously under different host names.

The crawler always reads the robots.txt file and never crawls restricted pages.
You can give directives to the crawler in a robots.txt file at the root of your site. For example, the following directives forbid our crawler from retrieving any content from your site.

User-agent: SafeDNSBot

Disallow: /
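As an illustration, a site owner can check how such directives would be interpreted using Python's standard urllib.robotparser module. This is only a sketch for verifying rules locally; the example.com URL is a placeholder, and "SafeDNSBot" is the user-agent string shown above.

```python
from urllib.robotparser import RobotFileParser

# A robots.txt body containing the directives from this policy,
# which block SafeDNSBot from the entire site.
ROBOTS_TXT = """\
User-agent: SafeDNSBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# SafeDNSBot is disallowed everywhere on the site...
print(parser.can_fetch("SafeDNSBot", "https://example.com/page.html"))  # False
# ...while crawlers not matched by the User-agent line are unaffected.
print(parser.can_fetch("OtherBot", "https://example.com/page.html"))  # True
```

A compliant crawler performs the same check before fetching each URL and skips any page for which can_fetch returns False.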

We exercise great care in managing collected web pages.

We register collected web pages in databases at SafeDNS. Access to these databases is controlled to prevent unauthorized access.


3. Contact

For any query, comment, or request, please email us at

Please include the host name(s) and IP address(es) of your site in the email.