Operated by X (formerly Twitter). Twitterbot fetches content to generate 'Cards' (previews) when links are posted on the X platform.
Twitterbot is operated by X (formerly Twitter) to generate rich link previews when URLs are shared on the platform. It sends GET requests to your URL, reads `<meta property="og:...">` and `<meta name="twitter:...">` tags, and caches the result. Blocking Twitterbot means all links to your domain shared on X appear as plain text without a thumbnail, title, or description. This can reduce click-through rates from social referrals, but it has no direct SEO impact, since Twitterbot is not a search-engine crawler.
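To see roughly what Twitterbot sees, you can fetch a page with its user-agent and filter for the preview tags. A minimal sketch, assuming each meta tag sits on its own line in the HTML (the URL is a placeholder; `Twitterbot/1.0` is the commonly reported user-agent string):

```sh
# Fetch the page as Twitterbot would, then extract the og:/twitter: meta tags it reads
curl -s -A "Twitterbot/1.0" https://example.com/ \
  | grep -Eio '<meta[^>]+(property="og:|name="twitter:)[^>]*>'
```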
The string to match is `User-agent: Twitterbot`; matching is case-insensitive. Note that robots.txt is fetched from the root of each subdomain separately.
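Because the file is per-host, rules on your apex domain do not carry over to subdomains. A quick sketch for confirming what each host actually serves (hostnames are placeholders):

```sh
# Each host serves its own robots.txt; compare them side by side
curl -s https://www.example.com/robots.txt
curl -s https://blog.example.com/robots.txt
```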
Understanding Twitterbot's purpose helps you decide whether to allow or block it. Twitterbot is verifiable via a reverse-DNS lookup on the crawling IP addresses, and you can safely allow it unless you have a specific reason to block it (e.g., an AI-training opt-out or SEO-tool visibility).
The exact user-agent string is Twitterbot. This is the string you must use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. You can verify that a request genuinely comes from Twitterbot by performing a reverse-DNS lookup on the source IP: legitimate bots resolve back to their operator's domain.
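As a sketch of that check, assuming a source IP taken from your access logs (the IP and hostname below are hypothetical placeholders), a forward-confirmed reverse-DNS lookup looks like this:

```sh
# Step 1: reverse lookup of the suspect IP (placeholder address from TEST-NET-3)
host 203.0.113.50
# -> ...in-addr.arpa domain name pointer crawler.example-operator.com.

# Step 2: forward-confirm the returned hostname; the request is legitimate only
# if the hostname resolves back to the IP you started with
host crawler.example-operator.com
```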
To block Twitterbot entirely, add the following to your /robots.txt file:

```
User-agent: Twitterbot
Disallow: /
```

This instructs Twitterbot not to crawl any path on your site. The `Disallow: /` directive covers the entire domain, including subfolders. To block only specific sections, replace `/` with the path (e.g., `Disallow: /blog/`). Note that robots.txt is publicly readable: any bot or human can inspect it at yourdomain.com/robots.txt.

To slow Twitterbot down rather than block it, you can set a crawl delay:

```
User-agent: Twitterbot
Crawl-delay: 10
```

This asks for a 10-second delay between requests. Crawl-delay is a non-standard directive, and not every bot honors it.

To check whether Twitterbot is visiting at all, search your server logs for its user-agent (case-insensitive grep: `grep -i "Twitterbot" /var/log/nginx/access.log`). You can also check Google Search Console → Coverage → Crawl Stats for Googlebot variants; for Twitterbot specifically, filter by user-agent in your log-analysis tool (GoAccess, AWStats, etc.).
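Extending that grep, a short pipeline can summarize which pages Twitterbot requests most often. This assumes the default combined log format, where the request path is the seventh whitespace-separated field; adjust the log path for your server:

```sh
# Top paths requested by Twitterbot (combined log format: field 7 is the path)
grep -i "twitterbot" /var/log/nginx/access.log \
  | awk '{print $7}' | sort | uniq -c | sort -rn | head
```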
Instead of a blanket `Disallow: /`, you can restrict Twitterbot to specific paths:
```
User-agent: Twitterbot
Disallow: /private/
Disallow: /staging/
Allow: /
```

This allows Twitterbot everywhere except the listed paths. Path matching in robots.txt uses prefix matching: `Disallow: /private/` blocks /private/page.html but not /public/private/. If you want rich previews on X, allow Twitterbot and ensure your pages include `<meta property="og:title">`, `og:description`, and `og:image`.
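To confirm a given page exposes those tags, a small loop can flag any that are missing. A sketch, assuming the tags are written as `property="og:..."` attributes (the URL is a placeholder):

```sh
# Report whether each Open Graph tag that Cards rely on appears in the page source
url="https://example.com/article"   # placeholder URL
html=$(curl -s "$url")
for tag in og:title og:description og:image; do
  if echo "$html" | grep -qi "property=\"$tag\""; then
    echo "ok: $tag"
  else
    echo "MISSING: $tag"
  fi
done
```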