Operated by Amazon
Amazon's general-purpose web crawler. It indexes content for Alexa answers and improves Amazon product search results. Its crawl traffic comes from IP ranges that are separate from the general AWS IP ranges.
Amazonbot is operated by Amazon and crawls the web to index content that improves services such as Alexa's question answering and Amazon product search. It uses the user-agent Amazonbot. Before blocking it, consider whether visibility in those services benefits your site; the crawler is not required for any AWS services you may host.
User-agent: Amazonbot. Matching is case-insensitive, and robots.txt is fetched from the root of each subdomain separately.
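To illustrate the per-subdomain rule, here is a small Python sketch (the URLs are placeholders) that derives which robots.txt file a crawler will consult for a given page:

from urllib.parse import urlsplit, urlunsplit

def robots_txt_url(page_url: str) -> str:
    # robots.txt lives at the root of each scheme + host pair, so
    # blog.example.com and www.example.com each need their own file.
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_txt_url("https://blog.example.com/posts/42"))
# -> https://blog.example.com/robots.txt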
Understanding Amazonbot's purpose helps you decide whether to allow or block it. Amazonbot is verifiable via a reverse-DNS lookup on the crawling IP addresses, and you can safely allow it unless you have a specific reason to block it (e.g., an AI training opt-out or SEO tool visibility).
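As a minimal sketch, here is a forward-confirmed reverse-DNS check in Python. The hostname suffix passed at the bottom is an assumption for illustration; Amazon's bot documentation lists the authoritative rDNS domain for Amazonbot.

import socket

def is_verified_bot(ip: str, expected_suffix: str) -> bool:
    """Forward-confirmed reverse DNS: the IP's PTR record must fall
    under the operator's domain, and that hostname must resolve back
    to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)        # reverse (PTR) lookup
    except OSError:
        return False                                     # no PTR record
    if not hostname.endswith(expected_suffix):
        return False                                     # wrong operator domain
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)  # forward (A) lookup
    except OSError:
        return False
    return ip in addresses                               # must round-trip

# ".amazonbot.amazon" is an assumed suffix -- check Amazon's docs.
print(is_verified_bot("203.0.113.7", ".amazonbot.amazon"))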
The exact user-agent token is Amazonbot. This is the string to use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. You can verify that a request genuinely comes from Amazonbot by performing a reverse-DNS lookup on the source IP, as legitimate bots resolve back to their operator's domain (see the check above). To block Amazonbot entirely, add the following to your /robots.txt file:
User-agent: Amazonbot
Disallow: /

This instructs Amazonbot not to crawl any path on your site. The Disallow: / directive covers the entire domain, including subfolders. To block only specific sections, replace / with the path (e.g., Disallow: /blog/). Note: robots.txt is publicly readable; any bot or human can inspect it at yourdomain.com/robots.txt.
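To sanity-check your rules before relying on them, you can test them with Python's standard-library robots.txt parser (example.com and the sample URL are placeholders):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder domain
rp.read()                                      # fetch and parse the live file

# Prints False once the full block above is deployed
print(rp.can_fetch("Amazonbot", "https://example.com/any/page.html"))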
To slow Amazonbot down instead of blocking it, set a crawl delay:

User-agent: Amazonbot
Crawl-delay: 10

This asks for a 10-second delay between requests; Crawl-delay is a non-standard directive, and not every crawler honors it. To check whether Amazonbot is visiting your site at all, search your access logs for the user-agent (case-insensitive grep: grep -i "Amazonbot" /var/log/nginx/access.log) or filter by user-agent in your log analysis tool (GoAccess, AWStats, etc.). Google Search Console's Crawl Stats covers only Googlebot variants, so it will not show Amazonbot traffic.
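Beyond a raw grep, here is a short sketch that tallies Amazonbot requests per day, assuming Nginx's default combined log format and the log path used above:

import re
from collections import Counter
from datetime import datetime

BOT = re.compile(r"amazonbot", re.IGNORECASE)
# Combined log format timestamps look like [12/Jan/2025:10:15:32 +0000]
DAY = re.compile(r"\[(\d{2}/\w{3}/\d{4})")

hits = Counter()
with open("/var/log/nginx/access.log") as log:
    for line in log:
        if BOT.search(line):
            stamp = DAY.search(line)
            if stamp:
                hits[stamp.group(1)] += 1

for day, count in sorted(hits.items(),
                         key=lambda kv: datetime.strptime(kv[0], "%d/%b/%Y")):
    print(f"{day}: {count} requests")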
Instead of blocking everything with Disallow: /, you can restrict Amazonbot to specific paths:
User-agent: Amazonbot
Disallow: /private/
Disallow: /staging/
Allow: /

This allows Amazonbot everywhere except the listed paths. Path matching in robots.txt uses prefix matching:
Disallow: /private/ blocks /private/page.html but NOT /public/private/.
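You can confirm the prefix-matching behavior with the standard-library parser (urllib.robotparser applies rules in file order, while some crawlers prefer the longest matching rule; for this example the outcomes agree):

from urllib import robotparser

rules = """\
User-agent: Amazonbot
Disallow: /private/
Disallow: /staging/
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Amazonbot", "/private/page.html"))  # False: /private/ is a prefix
print(rp.can_fetch("Amazonbot", "/public/private/"))    # True: not a prefix match
print(rp.can_fetch("Amazonbot", "/blog/post"))          # True: allowed by /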