Operated by Baidu
Baiduspider is the official web crawler for Baidu, China's leading search engine. It is essential for having your website indexed and visible to users in China.
Baiduspider is a production-grade search engine crawler operated by Baidu. It uses a distributed crawl infrastructure that respects Crawl-delay directives, follows the RFC 9309 (robots.txt) specification, and processes Sitemaps to prioritise fresh content. If your site uses rate-limiting or WAF rules, the user-agent string Baiduspider must be whitelisted. Blocking impact is Critical: blocking this crawler removes your site from Baidu's search results.
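If you do rate-limit traffic at the web server, one option is to exempt requests carrying the Baiduspider user-agent from the limit so legitimate crawls are not throttled. The snippet below is only a minimal Nginx sketch: the zone name, rate, and example.com are placeholder assumptions, and because the User-Agent header can be spoofed, you should combine this with the reverse-DNS verification described further down.

# Exempt requests whose User-Agent contains "Baiduspider" (case-insensitive)
# from a per-IP rate limit; an empty key is not counted by limit_req_zone.
map $http_user_agent $rl_key {
    default        $binary_remote_addr;
    ~*baiduspider  "";
}

limit_req_zone $rl_key zone=perip:10m rate=10r/s;

server {
    listen 80;
    server_name example.com;   # placeholder domain

    location / {
        limit_req zone=perip burst=20 nodelay;
        # ... your usual proxy or static-file configuration ...
    }
}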
The robots.txt token is <code>User-agent: Baiduspider</code>. Matching is case-insensitive, and robots.txt is fetched from the root of each subdomain separately, so blog.example.com and example.com each need their own file.
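Since Baiduspider honours Crawl-delay and reads Sitemaps (as noted above), a permissive robots.txt group for it might look like this sketch; the delay value, the /admin/ path, and the example.com sitemap URL are illustrative assumptions rather than Baidu recommendations:

User-agent: Baiduspider
Crawl-delay: 5
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml

The Sitemap line is not tied to any user-agent group; crawlers that support it read it from anywhere in the file.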
Understanding Baiduspider's purpose helps you decide whether to allow or block it. The crawler is verifiable via reverse-DNS lookup on the crawling IP addresses, and you can safely allow it unless you have a specific reason to block it (e.g., an AI training opt-out policy or SEO tool visibility concerns).
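A quick way to vet a suspicious request is the two-step lookup below. It is a sketch assuming a Unix shell with the standard host utility; the IP shown is just a placeholder taken from an access log, and legitimate Baiduspider hosts typically resolve to names under baidu.com or baidu.jp.

# 1. Reverse lookup: does the crawling IP map back to a Baidu hostname?
host 116.179.32.10
#    expected output resembles:
#    ... domain name pointer baiduspider-116-179-32-10.crawl.baidu.com.

# 2. Forward-confirm: the returned hostname should resolve back to the same IP.
host baiduspider-116-179-32-10.crawl.baidu.com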
The exact user-agent string is Baiduspider. This is the string to use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. You can verify that a request genuinely comes from Baiduspider by performing a reverse-DNS lookup on the source IP (as in the example above); legitimate bots resolve back to their operator's domain. To block Baiduspider entirely, add the following to your /robots.txt file:
User-agent: Baiduspider
Disallow: /

This instructs Baiduspider not to crawl any path on your site. The Disallow: / directive covers the entire domain, including subfolders. To block only specific sections, replace / with the path (e.g., Disallow: /blog/). Note: robots.txt is publicly readable; any bot or human can inspect it at yourdomain.com/robots.txt.

To see whether Baiduspider is crawling you at all, search your server access logs for its user-agent string (case-insensitive grep: grep -i "Baiduspider" /var/log/nginx/access.log). Google Search Console's Coverage → Crawl Stats report only covers Googlebot variants; for Baiduspider specifically, filter by user-agent in your log analysis tool (GoAccess, AWStats, etc.).

Instead of a blanket Disallow: /, you can restrict Baiduspider to specific paths:
User-agent: Baiduspider
Disallow: /private/
Disallow: /staging/
Allow: /

This allows Baiduspider everywhere except the listed paths. Path matching in robots.txt uses prefix matching: Disallow: /private/ blocks /private/page.html but NOT /public/private/.

You can check for an accidental block by fetching your robots.txt (for example, https://aicrawlercheck.com/robots.txt) and scanning for Baiduspider entries. If a block exists, immediately test it against your most important URLs using the Google Search Console URL Inspection tool. To remove an accidental block:

1. Visit yourdomain.com/robots.txt and look for any User-agent: Baiduspider or User-agent: * Disallow rules covering your key pages.
2. Remove or restrict the blocking rules.
3. Validate via Google Search Console → robots.txt Tester.
4. Request re-indexing using the URL Inspection tool.
5. Wait 1-2 weeks for re-crawl and monitor the Coverage report for recovery; the log-monitoring sketch below is one way to track this.
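To watch crawl activity come back after step 5, you can chart Baiduspider's daily hit count straight from your access logs. This one-liner is a sketch that assumes the default combined log format at /var/log/nginx/access.log; adjust the path and field positions for your setup.

# Count Baiduspider requests per day (field 4 holds the [dd/Mon/yyyy:... timestamp
# in the default combined log format).
grep -i "baiduspider" /var/log/nginx/access.log \
  | awk '{print substr($4, 2, 11)}' \
  | sort | uniq -c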
Check instantly with our free AI Bot Checker

Check Your Website