
Googlebot-Discovery

Operated by Google

Quick Facts

User-Agent: Googlebot-Discovery
Category: Google Bots
Operator: Google
Safety: Safe
Blocking Impact: Critical (removes your pages from Google Discover)
SEO Impact Score: 10/10

What is Googlebot-Discovery?

Googlebot-Discovery is used by Google to crawl content specifically for the Google Discover feed on mobile devices.

Googlebot-Discovery is one of Google's specialised crawlers, distinct from the general Googlebot. Each specialised crawler serves a single Google product (Images, Video, News, and so on); this one feeds Google Discover and identifies itself with the user-agent Googlebot-Discovery. Selectively blocking a specialised crawler disables only the corresponding Google feature for your site (for example, blocking Googlebot-Image removes your images from Google Image Search, and blocking Googlebot-Discovery removes your pages from the Discover feed). Always verify which Google product is affected before blocking.

What happens if you block Googlebot-Discovery?

⛔ **Critical Impact**: Blocking Googlebot-Discovery stops Google from crawling your content for the Google Discover feed. Within days or weeks your pages can drop out of Discover entirely, and for many publishers Discover is a major source of mobile traffic. Only do this intentionally, for example if you are deliberately opting out of Discover or decommissioning a domain. If you blocked Googlebot-Discovery by accident, remove the rule immediately and request recrawling via Google Search Console.
Never block it casually: you will lose your Google Discover visibility.

How to block Googlebot-Discovery with robots.txt

Target it with <code>User-agent: Googlebot-Discovery</code>. Matching is case-insensitive, and robots.txt is fetched separately from the root of each subdomain, so rules on www.example.com do not cover blog.example.com.

Block completely (robots.txt)
User-agent: Googlebot-Discovery
Disallow: /

Allow all (robots.txt)
User-agent: Googlebot-Discovery
Allow: /

Block private only (robots.txt)
User-agent: Googlebot-Discovery
Disallow: /private/
Disallow: /api/
Disallow: /admin/
Allow: /
Nginx server block
# Nginx: Hard-block Googlebot-Discovery
if ($http_user_agent ~* "Googlebot-Discovery") {
    return 403 "Bot blocked";
}
Apache .htaccess
# Apache: Hard-block Googlebot-Discovery
# (Apache 2.2 syntax; Apache 2.4 needs mod_access_compat for Order/Allow/Deny)
SetEnvIfNoCase User-Agent "Googlebot-Discovery" bad_bot
Order Allow,Deny
Allow from all
Deny from env=bad_bot
Meta robots tag
<meta name="robots" content="noindex, nofollow">
X-Robots-Tag header
X-Robots-Tag: noindex, nofollow
Note: the meta tag and header above apply to all compliant crawlers, not only Googlebot-Discovery. To target Google's crawlers alone, use <meta name="googlebot" content="noindex, nofollow">.
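
If your pages are served by an application rather than as static files, you can attach the X-Robots-Tag header in code. Below is a minimal sketch using Python's standard-library http.server; the port and page body are illustrative, not part of any Google documentation:

# Minimal sketch: serve a page with an X-Robots-Tag header using only
# Python's standard library. Port and body content are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoIndexHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Ask compliant crawlers not to index this page or follow its links.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.end_headers()
        self.wfile.write(b"<html><body>Not for indexing</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), NoIndexHandler).serve_forever()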

Is Googlebot-Discovery safe to allow?

Yes, Googlebot-Discovery is a **safe and legitimate** crawler. It is operated by Google, which publicly documents its crawlers and follows the Robots Exclusion Protocol (RFC 9309). The user-agent string Googlebot-Discovery is verifiable via reverse-DNS lookup on the crawling IP addresses. You can safely allow it unless you have a specific reason to keep your content out of Google Discover.
Verify by reverse-DNS lookup: legitimate Googlebot-Discovery requests resolve to a hostname ending in googlebot.com or google.com.
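
To script that check, here is a small sketch in Python using only the standard library: reverse-resolve the client IP, check the hostname, then forward-resolve to confirm. The sample IP is from a documentation range, and the forward check is simplified to a single address:

# Sketch: verify a claimed Googlebot-Discovery request by reverse DNS.
import socket

def is_google_crawler(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)   # reverse lookup
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False                            # not a Google hostname
    try:
        # Forward-confirm; production code should compare all returned addresses.
        return socket.gethostbyname(host) == ip
    except OSError:
        return False

print(is_google_crawler("203.0.113.7"))  # documentation-range IP, prints False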

What does Googlebot-Discovery do?

Googlebot-Discovery crawls pages so that Google can surface them in the Discover feed, the personalised content stream shown to users on mobile devices. Understanding this purpose helps you decide whether to allow or block it: allow it if Discover traffic matters to you, and block it only if you deliberately want your content out of that feed.

Frequently Asked Questions

What is the official user-agent string for Googlebot-Discovery?
The official user-agent string for Googlebot-Discovery is: Googlebot-Discovery. This is the exact string you must use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. You can verify that a request genuinely comes from Googlebot-Discovery by performing a reverse-DNS lookup on the source IP — legitimate bots resolve back to their operator's domain.
Is Googlebot-Discovery safe?
Yes. Googlebot-Discovery is operated by Google, respects robots.txt per RFC 9309, and its requests can be verified via reverse-DNS lookup. It is safe to allow unless you specifically want your content out of Google Discover.
Will blocking Googlebot-Discovery hurt my SEO?
Blocking Googlebot-Discovery has a critical impact on your Google Discover visibility: your pages will drop out of the Discover feed, and with them a potentially large share of mobile traffic. Do not block it unless you deliberately want out of Discover. If you blocked it by accident, remove the rule immediately and request recrawling via Google Search Console.
How do I block Googlebot-Discovery in robots.txt?
Add the following lines to your /robots.txt file:
User-agent: Googlebot-Discovery
Disallow: /
This instructs Googlebot-Discovery not to crawl any path on your site. The Disallow: / directive covers the entire domain including subfolders. To only block specific sections, replace / with the path (e.g., Disallow: /blog/). Note: robots.txt is publicly readable — any bot or human can inspect it at yourdomain.com/robots.txt.
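If you want to test the effect of these rules before a crawler does, Python's built-in urllib.robotparser can evaluate them locally. A minimal sketch (the example.com URLs are placeholders):

# Sketch: check what a robots.txt blocks, using Python's built-in parser.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: Googlebot-Discovery
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot-Discovery", "https://example.com/"))       # False
print(rp.can_fetch("Googlebot-Discovery", "https://example.com/blog/"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/"))              # True: no rule applies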
Does Googlebot-Discovery respect robots.txt?
Yes — Googlebot-Discovery is a well-behaved bot operated by Google. It fetches and parses /robots.txt before crawling any page, following RFC 9309.
How do I verify if Googlebot-Discovery is crawling my site?
Search your web server access logs for the string Googlebot-Discovery (case-insensitive grep: grep -i "googlebot-discovery" /var/log/nginx/access.log). You can also check the Crawl stats report in Google Search Console (Settings → Crawl stats) for Googlebot variants. For Googlebot-Discovery specifically, filter by user-agent in your log analysis tool (GoAccess, AWStats, etc.).
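To go beyond a raw grep, a short Python sketch can rank the paths the bot requests most often. The log path and the common "combined" log format are assumptions; adjust the regex if your format differs:

# Sketch: count requests per path from Googlebot-Discovery in an access log.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # assumed location
request_re = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[^"]*"')

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "googlebot-discovery" in line.lower():   # case-insensitive match
            match = request_re.search(line)
            if match:
                hits[match.group("path")] += 1

for path, count in hits.most_common(10):
    print(f"{count:6d}  {path}")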
What is the crawl frequency of Googlebot-Discovery?
Critical-impact search crawlers like Googlebot-Discovery typically crawl popular pages daily and less popular pages weekly. Note that Google's crawlers ignore the crawl-delay directive; crawl rate is managed automatically by Google, with limited controls available in Google Search Console.
Can I block Googlebot-Discovery from specific pages only?
Yes. Instead of a global Disallow: / you can restrict Googlebot-Discovery to specific paths:
User-agent: Googlebot-Discovery
Disallow: /private/
Disallow: /staging/
Allow: /
This allows Googlebot-Discovery everywhere except the listed paths. Path matching in robots.txt uses prefix matching — Disallow: /private/ blocks /private/page.html but NOT /public/private/.
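You can confirm this prefix-matching behaviour locally with Python's built-in parser. One caveat: urllib.robotparser applies rules in file order (first match wins), while Google documents longest-match precedence; for simple prefix rules like these the two agree:

# Sketch: prefix matching. /private/ is blocked, /public/private/ is not.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: Googlebot-Discovery
Disallow: /private/
Disallow: /staging/
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

agent = "Googlebot-Discovery"
print(rp.can_fetch(agent, "https://example.com/private/page.html"))  # False
print(rp.can_fetch(agent, "https://example.com/public/private/x"))   # True
print(rp.can_fetch(agent, "https://example.com/blog/post"))          # True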
