Operated by Screaming Frog
The user-agent for the Screaming Frog SEO Spider desktop application. It is used by SEO professionals to audit websites.
Screaming Frog SEO Spider is a commercial website crawler operated by Screaming Frog and sold as desktop software for Windows, macOS, and Linux. SEO professionals run it to audit sites for technical issues such as broken links, redirect chains, duplicate content, and missing metadata, and to generate XML sitemaps. The user-agent Screaming Frog SEO Spider is well-known and respected in the SEO industry. Because each crawl is launched by an individual user from their own machine, there is no central Screaming Frog index: blocking the user-agent does not remove you from any shared platform, but it does stop compliant audits of your site, including ones run by your own team or agency. Weigh this trade-off carefully.
User-agent: Screaming Frog SEO Spider

Matching is case-insensitive. Robots.txt is fetched from the root of each subdomain separately.
Screaming Frog SEO Spider is not verifiable via reverse-DNS lookup: it is desktop software run from its users' own machines, so requests come from arbitrary IP addresses and the user-agent string is the only identifier. You can safely allow it unless you have a specific reason to block it (e.g., you do not want third parties running SEO audits against your site). Understanding Screaming Frog SEO Spider's purpose helps you decide whether to allow or block it.
The user-agent string is Screaming Frog SEO Spider. This is the exact string you must use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. Remember that the application lets its users change this string or ignore robots.txt entirely, so the rules below only affect crawls left at their default settings. To block Screaming Frog SEO Spider entirely, add the following to your /robots.txt file:
User-agent: Screaming Frog SEO Spider
Disallow: /

This instructs Screaming Frog SEO Spider not to crawl any path on your site. The Disallow: / directive covers the entire domain, including subfolders. To block only specific sections, replace / with the path (e.g., Disallow: /blog/).
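Because robots.txt is advisory and Screaming Frog users can switch compliance off, you may want to enforce the block at the web server instead. A minimal Nginx sketch, assuming the default user-agent string is in use (adapt the pattern for other tools):

# Deny any request whose User-Agent contains the Screaming Frog string.
# ~* makes the regex match case-insensitively; place this inside the
# relevant server { } block.
if ($http_user_agent ~* "screaming frog seo spider") {
    return 403;
}

Apache can do the same with a mod_rewrite RewriteCond on %{HTTP_USER_AGENT} (add the [NC] flag for case-insensitivity), and Cloudflare with a firewall rule matching the User-Agent header.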
Note: robots.txt is publicly readable; any bot or human can inspect it at yourdomain.com/robots.txt.

To confirm whether Screaming Frog SEO Spider is crawling your site, search your server access logs for its user-agent string (case-insensitive grep: grep -i "Screaming Frog SEO Spider" /var/log/nginx/access.log). You can also check Google Search Console → Coverage → Crawl Stats for Googlebot variants. For Screaming Frog SEO Spider specifically, filter by user-agent in your log analysis tool (GoAccess, AWStats, etc.).

If you would rather slow the crawler down than block it, you can request a delay between requests:

User-agent: Screaming Frog SEO Spider
Crawl-delay: 10

(a 10-second delay between requests; only crawlers that support the non-standard Crawl-delay directive will honour it).
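To gauge how heavily the crawler is hitting your site, you can extend the same grep into a quick shell sketch (assuming Nginx's default combined log format, where the client IP is the first field, and the log path from the example above):

# Count matching requests per client IP, busiest first.
grep -i "screaming frog seo spider" /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head

The output lists the client IPs sending the most matching requests, which is useful input for a rate limit or firewall rule.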
Instead of blocking everything with Disallow: /, you can restrict Screaming Frog SEO Spider to specific paths:
User-agent: Screaming Frog SEO Spider
Disallow: /private/
Disallow: /staging/
Allow: /

This allows Screaming Frog SEO Spider everywhere except the listed paths. Path matching in robots.txt uses prefix matching:
Disallow: /private/ blocks /private/page.html but NOT /public/private/.
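Finally, you can confirm that a server-level block works by impersonating the crawler with curl. The version suffix below is illustrative (the real default string varies by release), but substring matching, as in the Nginx sketch above, still catches it:

# Send a HEAD request with a Screaming Frog-style User-Agent; expect
# HTTP 403 from a server-level block. robots.txt rules alone will not
# affect this request.
curl -I -A "Screaming Frog SEO Spider/20.0" https://yourdomain.com/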