
DuckDuckBot

Operated by DuckDuckGo

Quick Facts

User-Agent: DuckDuckBot
Category: Search Engines
Operator: DuckDuckGo
Safety: Safe
Blocking Impact: Critical — blocking removes you from DuckDuckGo's search results
SEO Impact Score: 10/10

What is DuckDuckBot?

DuckDuckBot is the web crawler for DuckDuckGo. While DuckDuckGo largely sources its results from Bing, this bot handles favicon fetching and other specific indexing tasks.

DuckDuckBot is a production-grade search engine crawler operated by DuckDuckGo. It uses a distributed crawl infrastructure that respects crawl-delay directives, follows the Robots Exclusion Protocol (RFC 9309), and processes sitemaps to prioritise fresh content. If your site uses rate-limiting or WAF rules, allowlist the user-agent string DuckDuckBot so legitimate crawls are not throttled. Blocking impact is Critical: blocking removes you from DuckDuckGo's search results.
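If you rate-limit by IP in Nginx, you can exempt DuckDuckBot from the limit instead of maintaining an IP allowlist. A minimal sketch, assuming a single limit_req zone (the zone name perip and the rate are placeholders, not anything DuckDuckGo documents):

# Nginx sketch (http {} context): exempt DuckDuckBot from per-IP rate limiting.
# Requests whose key is an empty string are never counted against the zone.
map $http_user_agent $limit_key {
    ~*DuckDuckBot  "";
    default        $binary_remote_addr;
}
limit_req_zone $limit_key zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20;
    }
}

Pair this with source-IP verification (see below) so bad actors cannot dodge your limits simply by spoofing the user-agent.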

What happens if you block DuckDuckBot?

⛔ **Critical Impact** — Blocking DuckDuckBot stops DuckDuckGo from crawling and indexing your pages. Within days or weeks, pages may drop out of DuckDuckGo's search index entirely, resulting in a significant loss of organic search traffic. This is the most severe possible SEO consequence. Only do this intentionally, for example when taking a site private or decommissioning a domain. If you accidentally blocked DuckDuckBot, remove the rule immediately; DuckDuckGo offers no webmaster console for requesting re-indexing, so recovery waits on the crawler's next visit.
Never block this bot unintentionally — doing so removes your site from DuckDuckGo's search results.

How to block DuckDuckBot with robots.txt

<code>User-agent: DuckDuckBot</code> — matching is case-insensitive. robots.txt is fetched separately from the root of each subdomain, so rules on www.example.com do not apply to blog.example.com.

Block completely (robots.txt)
User-agent: DuckDuckBot
Disallow: /

Allow all (robots.txt)
User-agent: DuckDuckBot
Allow: /

Block private only (robots.txt)
User-agent: DuckDuckBot
Disallow: /private/
Disallow: /api/
Disallow: /admin/
Allow: /
Nginx server block
# Nginx: hard-block DuckDuckBot
if ($http_user_agent ~* "DuckDuckBot") {
    return 403 "Bot blocked";
}
Apache .htaccess
# Apache: hard-block DuckDuckBot (2.2-style directives)
SetEnvIfNoCase User-Agent "DuckDuckBot" bad_bot
Order Allow,Deny
Allow from all
Deny from env=bad_bot
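Order/Allow/Deny is Apache 2.2 syntax and only works on Apache 2.4 if mod_access_compat is loaded. A sketch of the native 2.4 equivalent using mod_authz_core:

# Apache 2.4: hard-block DuckDuckBot with Require
SetEnvIfNoCase User-Agent "DuckDuckBot" bad_bot
<RequireAll>
    Require all granted
    Require not env bad_bot
</RequireAll>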
Meta robots tag
<meta name="robots" content="noindex, nofollow">
X-Robots-Tag header
X-Robots-Tag: noindex, nofollow
Note: unlike the rules above, the meta tag and the header apply to every crawler that honours them, not only DuckDuckBot.
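To serve the header only to DuckDuckBot rather than to every visitor, an Nginx map works; a minimal sketch (the $robots_tag variable name is illustrative):

# Nginx sketch (http {} context): send X-Robots-Tag only on DuckDuckBot requests.
# add_header omits headers whose value is an empty string.
map $http_user_agent $robots_tag {
    ~*DuckDuckBot  "noindex, nofollow";
    default        "";
}
server {
    add_header X-Robots-Tag $robots_tag;
}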

Is DuckDuckBot safe to allow?

Yes, DuckDuckBot is a **safe and legitimate** crawler. It is operated by DuckDuckGo, which publicly documents the crawler and follows the Robots Exclusion Protocol (RFC 9309). DuckDuckGo publishes the IP addresses DuckDuckBot crawls from, so requests claiming this user-agent can be verified. You can safely allow it unless you have a specific reason to block it, such as keeping private or staging content out of search results.
Verify by source IP: legitimate DuckDuckBot requests come from the crawler IP addresses listed in DuckDuckGo's official documentation.
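To collect the source IPs claiming to be DuckDuckBot from your own logs for comparison against that published list, a one-liner like this works (the log path is an assumption; adjust to your setup):

# Shell sketch: unique source IPs sending a DuckDuckBot user-agent
grep -i "duckduckbot" /var/log/nginx/access.log | awk '{print $1}' | sort -u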

What does DuckDuckBot do?

Understanding DuckDuckBot's purpose helps you decide whether to allow or block it. In practice it fetches favicons, crawls pages for DuckDuckGo's own indexing tasks, and supplements the Bing-sourced results that make up most of DuckDuckGo's index.

Frequently Asked Questions

What is the official user-agent string for DuckDuckBot?
The official user-agent string for DuckDuckBot is: DuckDuckBot. This is the exact string to use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. You can verify that a request genuinely comes from DuckDuckBot by checking its source IP against the list of crawler IP addresses DuckDuckGo publishes.
Is DuckDuckBot safe?
Yes, DuckDuckBot is a **safe and legitimate** crawler. It is operated by DuckDuckGo, which publicly documents the crawler and follows the Robots Exclusion Protocol (RFC 9309). Requests can be verified against the crawler IP addresses DuckDuckGo publishes. You can safely allow it unless you have a specific reason to block it, such as keeping private or staging content out of search results.
Will blocking DuckDuckBot hurt my SEO?
⛔ **Critical Impact** — Blocking DuckDuckBot stops DuckDuckGo from crawling and indexing your pages. Within days or weeks, pages may drop out of DuckDuckGo's search index entirely, resulting in a significant loss of organic search traffic. This is the most severe possible SEO consequence. Only do this intentionally, for example when taking a site private or decommissioning a domain. If you accidentally blocked DuckDuckBot, remove the rule immediately; DuckDuckGo has no webmaster console, so recovery happens on the crawler's own schedule.
How do I block DuckDuckBot in robots.txt?
Add the following lines to your /robots.txt file:
User-agent: DuckDuckBot
Disallow: /
This instructs DuckDuckBot not to crawl any path on your site. The Disallow: / directive covers the entire domain including subfolders. To only block specific sections, replace / with the path (e.g., Disallow: /blog/). Note: robots.txt is publicly readable — any bot or human can inspect it at yourdomain.com/robots.txt.
Does DuckDuckBot respect robots.txt?
Yes — DuckDuckBot is a well-behaved bot operated by DuckDuckGo. It fetches and parses /robots.txt before crawling any page, following RFC 9309.
How do I verify if DuckDuckBot is crawling my site?
Search your web server access logs for the string DuckDuckBot (case-insensitive grep: grep -i "DuckDuckBot" /var/log/nginx/access.log). Note that Google Search Console only reports Googlebot activity and will not show DuckDuckBot; instead, filter by user-agent in your log analysis tool (GoAccess, AWStats, etc.).
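For a rough picture of how often it visits, tally hits per day from an Nginx combined-format log (a sketch; the log path and format are assumptions):

# Shell sketch: DuckDuckBot hits per day from an Nginx combined log
# $4 is the [day/month/year:time timestamp; substr strips the bracket
grep -i "duckduckbot" /var/log/nginx/access.log | awk '{print substr($4, 2, 11)}' | sort | uniq -c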
What is the crawl frequency of DuckDuckBot?
Critical-impact search crawlers like DuckDuckBot typically crawl popular pages daily and less popular pages weekly. DuckDuckGo provides no search console for managing crawl rate, but DuckDuckBot respects the crawl-delay directive in robots.txt, as sketched below.
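A minimal robots.txt sketch asking DuckDuckBot to pause between requests (the 5-second value is illustrative; crawl-delay is a non-standard directive, so behaviour varies by crawler):

User-agent: DuckDuckBot
Crawl-delay: 5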
Can I block DuckDuckBot from specific pages only?
Yes. Instead of a global Disallow: / you can restrict DuckDuckBot to specific paths:
User-agent: DuckDuckBot
Disallow: /private/
Disallow: /staging/
Allow: /
This allows DuckDuckBot everywhere except the listed paths. Path matching in robots.txt uses prefix matching — Disallow: /private/ blocks /private/page.html but NOT /public/private/.
How do I check if DuckDuckBot is blocked by my robots.txt?
Google's robots.txt report in Search Console only evaluates rules for Google's own crawlers, so use a third-party robots.txt checker that lets you set the user-agent to DuckDuckBot. You can also check manually by opening yourdomain.com/robots.txt and scanning for User-agent: DuckDuckBot or User-agent: * groups that Disallow your important URLs.
DuckDuckBot is blocked on my site — what do I do?
1. Open yourdomain.com/robots.txt and look for any User-agent: DuckDuckBot or User-agent: * Disallow rules covering your key pages.
2. Remove or narrow the blocking rules, and check server-level blocks (Nginx, Apache, WAF rules) that match the DuckDuckBot user-agent.
3. Re-test your most important URLs with a robots.txt checker set to the DuckDuckBot user-agent.
4. Wait for the next crawl: DuckDuckGo has no re-indexing request tool, so recovery typically takes days to a few weeks.
5. Monitor your access logs for returning DuckDuckBot requests.
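To confirm a server-level block is gone, replay a request with a DuckDuckBot user-agent (a sketch; this sends only the bare token that UA rules match on, not DuckDuckGo's full published string):

# Shell sketch: simulate a DuckDuckBot request against your site
curl -I -A "DuckDuckBot" https://yourdomain.com/
# HTTP 403 means a user-agent block (like the Nginx/Apache rules above) is still active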


Is DuckDuckBot blocked on your site?

Check instantly with our free AI Bot Checker

Check Your Website