Create a well-formed robots.txt file to guide search engine crawlers and control which parts of your site they can request.
A robots.txt file tells search engine crawlers which pages or files they can or can't request from your site. It's used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google.
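A minimal sketch of such a file, placed at the root of the site (the paths and sitemap URL here are hypothetical placeholders):

```
# Applies to all crawlers: keep them out of these (example) directories
User-agent: *
Disallow: /admin/
Disallow: /tmp/

# Point crawlers at the sitemap (replace with your real URL)
Sitemap: https://www.example.com/sitemap.xml
```

Crawlers that honor the Robots Exclusion Protocol read this file before fetching other URLs, which is how it reduces crawl load; crawlers are not obligated to obey it, which is why it cannot guarantee privacy.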
To keep a page out of Google, use noindex directives or password-protect the page.
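For example, a noindex directive can be added as a meta tag in the page's `<head>`:

```html
<meta name="robots" content="noindex">
```

The equivalent for non-HTML files is the `X-Robots-Tag: noindex` HTTP response header. Note that the page must remain crawlable for the directive to be seen: if robots.txt blocks the page, the crawler never fetches it and never reads the noindex.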