Using meta tags to block search indexing of your site
To entirely prevent a page’s contents from being listed in the Google web index, even if other sites link to the page, use a noindex meta tag. When Googlebot next crawls the page, it will see the noindex meta tag and exclude the page from the web index.
The noindex meta standard is useful if you don’t have root access to your server, because it lets you control indexing on a page-by-page basis.
To prevent all robots from indexing a page on your site, place the following meta tag into the <head> section of your page:
<meta name="robots" content="noindex">
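For context, here is a minimal, hypothetical page showing where the tag belongs. The meta tag must appear inside the <head> element; crawlers may ignore robots meta tags placed in the <body>:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Example page</title>
  <!-- Tells all compliant crawlers not to index this page -->
  <meta name="robots" content="noindex">
</head>
<body>
  <p>Page content…</p>
</body>
</html>
```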
To allow other robots to index the page on your site, preventing only Google’s robots from indexing the page:
<meta name="googlebot" content="noindex">
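The content attribute accepts a comma-separated list of directives, so noindex can be combined with other standard robots directives. As a sketch, assuming you also want to stop Googlebot from following the page’s links:

```html
<!-- noindex:  keep the page out of the index
     nofollow: don't follow links found on the page -->
<meta name="googlebot" content="noindex, nofollow">
```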
When Google sees the noindex meta tag on a page, we will completely drop the page from our search results, even if other pages link to it. Other search engines, however, may interpret this directive differently, so a link to the page can still appear in their search results.
Note that because we have to crawl your page in order to see the noindex meta tag, there’s a small chance that Googlebot won’t see or respect it. If your page still appears in results, it’s probably because we haven’t crawled your site since you added the tag. (Also, if you’ve used your robots.txt file to block this page, we won’t be able to see the tag at all.)
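To illustrate that last caveat, a robots.txt rule like the following (the path is a hypothetical example) prevents Googlebot from fetching the page at all, so the noindex tag on it is never seen. Remove any such rule for pages whose noindex directive you want honored:

```
User-agent: *
Disallow: /private-page.html
```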