Operated by Apple
Applebot-Extended is a secondary user-agent used by Apple. It allows web publishers to control how their content is used for training Apple's generative AI models while still remaining in search results.
Applebot-Extended is not a standalone crawler. Apple's search crawler, Applebot, fetches pages and follows RFC 9309 (the robots.txt standard); when it does, it also checks for Applebot-Extended rules to decide whether the crawled content may be used to train Apple's generative AI models. Because Applebot-Extended never sends requests of its own, there is nothing to whitelist in rate-limiting or WAF rules: only the Applebot user-agent appears in your traffic. Blocking impact is Low for search visibility. Disallowing Applebot-Extended does not remove you from Apple search features such as Siri and Spotlight; it only opts your content out of AI training.
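For example, a site that wants to stay visible in Apple's search features while opting out of model training can allow the crawler and disallow only the training token. A minimal robots.txt sketch (adapt the paths to your site):

User-agent: Applebot
Allow: /

User-agent: Applebot-Extended
Disallow: /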
<code>User-agent: Applebot-Extended</code> — Matching is case-insensitive. Robots.txt is fetched from the root of each subdomain separately.
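Because each subdomain publishes its own rules, it is worth checking them all. A minimal Python sketch (the hostnames are placeholders; substitute your own):

# Fetch robots.txt from the root of each host and report
# whether it mentions Applebot-Extended at all.
from urllib.request import urlopen

for host in ("example.com", "blog.example.com", "shop.example.com"):
    url = f"https://{host}/robots.txt"
    try:
        body = urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except OSError as exc:  # covers HTTP and network errors
        print(f"{url}: fetch failed ({exc})")
        continue
    if "applebot-extended" in body.lower():
        print(f"{url}: has Applebot-Extended rules")
    else:
        print(f"{url}: no Applebot-Extended rules")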
Understanding Applebot-Extended's purpose helps you decide whether to allow or block it.
Applebot-Extended. This is the exact string to use in robots.txt rules targeting this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. Note that server-level rules in Nginx, Apache, or Cloudflare will never match Applebot-Extended, because it does not make HTTP requests; Apple's crawling is done by Applebot, which reads your Applebot-Extended rules. You can verify that a request genuinely comes from Apple by performing a reverse-DNS lookup on the source IP: legitimate Applebot requests resolve to hostnames under applebot.apple.com. To opt your whole site out of AI training, add these lines to your /robots.txt file:
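Since Applebot-Extended sends no traffic, the check below verifies Applebot requests. A minimal Python sketch of a forward-confirmed reverse DNS lookup, assuming Apple's documented applebot.apple.com domain (the sample IP is a placeholder taken from a log entry):

# Forward-confirmed reverse DNS: resolve the IP to a hostname,
# check it belongs to applebot.apple.com, then resolve the
# hostname back and confirm it lists the original IP.
import socket

def is_apple_crawler(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
    except OSError:
        return False
    if not host.endswith(".applebot.apple.com"):
        return False
    try:
        _, _, addrs = socket.gethostbyname_ex(host)    # forward confirmation
    except OSError:
        return False
    return ip in addrs

print(is_apple_crawler("17.0.0.1"))  # placeholder IP from your logs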
User-agent: Applebot-Extended
Disallow: /

This instructs Apple not to use any path on your site for AI training. The Disallow: / directive covers the entire domain, including subfolders. To restrict only specific sections, replace / with the path (e.g., Disallow: /blog/). Note: robots.txt is publicly readable; any bot or human can inspect it at yourdomain.com/robots.txt.

To confirm Apple is visiting your site, search your access logs for Applebot (case-insensitive grep: grep -i "Applebot" /var/log/nginx/access.log); Applebot-Extended itself never appears in logs because it does not crawl. You can also filter by user-agent in your log-analysis tool (GoAccess, AWStats, etc.).

Instead of blocking everything with Disallow: /, you can restrict Applebot-Extended to specific paths:
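A minimal Python sketch that tallies Apple crawler requests per user-agent string, assuming the Nginx combined log format and the default log path:

# Count requests whose user-agent mentions Applebot.
# In the combined format, the line ends with "referer" "user-agent".
import re
from collections import Counter

ua_re = re.compile(r'"([^"]*)" "([^"]*)"\s*$')
counts = Counter()
with open("/var/log/nginx/access.log") as log:
    for line in log:
        m = ua_re.search(line)
        if m and "applebot" in m.group(2).lower():
            counts[m.group(2)] += 1

for ua, n in counts.most_common():
    print(f"{n:6d}  {ua}")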
User-agent: Applebot-Extended
Disallow: /private/
Disallow: /staging/
Allow: /

This allows Applebot-Extended everywhere except the listed paths. Path matching in robots.txt uses prefix matching: Disallow: /private/ blocks /private/page.html but not /public/private/, as the sketch below demonstrates.
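You can test how a parser interprets these rules before publishing them. A minimal sketch using Python's standard-library urllib.robotparser (example.com is a placeholder):

# Check which URLs a robots.txt group permits for Applebot-Extended.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Applebot-Extended
Disallow: /private/
Disallow: /staging/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

for path in ("/private/page.html", "/public/private/", "/blog/post"):
    ok = parser.can_fetch("Applebot-Extended", f"https://example.com{path}")
    print(path, "allowed" if ok else "blocked")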
You can check whether your robots.txt already blocks Applebot-Extended by visiting yourdomain.com/robots.txt and scanning for User-agent: Applebot-Extended or User-agent: * groups whose Disallow rules cover your key pages. If a block exists, test it against your most important URLs (the robotparser sketch above works for this). To unblock:
1. Open yourdomain.com/robots.txt and locate the blocking rules.
2. Remove or restrict the blocking rules.
3. Validate the updated syntax with a robots.txt validator or the robotparser sketch above; Google Search Console's tools only report for Google's crawlers.
4. Wait for Applebot to re-fetch your robots.txt; crawlers typically refresh it within about a day.
5. Monitor your access logs for Applebot requests to confirm the new rules are being read.
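To confirm the fix from the outside, you can point Python's robotparser at the live file (yourdomain.com and the sample paths are placeholders):

# Fetch the live robots.txt and re-test key URLs after editing it.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://yourdomain.com/robots.txt")
parser.read()  # downloads and parses the live file

for url in ("https://yourdomain.com/", "https://yourdomain.com/blog/"):
    ok = parser.can_fetch("Applebot-Extended", url)
    print(url, "allowed" if ok else "blocked")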
Check instantly with our free AI Bot Checker
Check Your Website