Operated by Unknown
A scraper bot associated with AI data aggregation. Specific ownership details are often opaque.
MyCentralAIScraperBot is an AI data-collection crawler operated by Unknown. It harvests web content to build or expand training datasets for large language models (LLMs). Unlike search crawlers, MyCentralAIScraperBot does not influence your page ranking in any search engine, so its user-agent can be blocked via robots.txt, meta tags (noai), or the emerging llms.txt standard without any SEO penalty. Keep in mind that robots.txt is voluntary; for hard enforcement, combine it with server-level IP blocking.
Understanding MyCentralAIScraperBot's purpose helps you decide whether to allow or block it. The bot is verifiable via reverse-DNS lookup on its crawling IP addresses, and you can safely allow it unless you have a specific reason to block it (e.g., an AI-training opt-out or SEO-tool visibility).
The exact user-agent string is MyCentralAIScraperBot. This is the string to use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. You can verify that a request genuinely comes from MyCentralAIScraperBot by performing a reverse-DNS lookup on the source IP: legitimate bots resolve back to their operator's domain.
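Since Unknown publishes no official IP ranges for MyCentralAIScraperBot, the IP and hostname below are placeholders; the two-step check itself is the standard verification pattern:

```
# 1. Reverse lookup: fetch the PTR record for an IP seen in your logs
#    (203.0.113.42 is a documentation address, not a real crawler IP).
host 203.0.113.42
# -> ... domain name pointer crawler.example-operator.com.  (hypothetical)

# 2. Forward-confirm: resolve that hostname back to an address.
#    If the answer does not include 203.0.113.42, treat the
#    user-agent as spoofed and block the IP, not just the string.
host crawler.example-operator.com
```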
To opt out of Unknown's LLM training datasets, the recommended approach is a disallow rule in your /robots.txt file:

```
User-agent: MyCentralAIScraperBot
Disallow: /
```

This instructs MyCentralAIScraperBot not to crawl any path on your site; the Disallow: / directive covers the entire domain, including subfolders. To block only specific sections, replace / with the path (e.g., Disallow: /blog/). Robots.txt is fetched from the root of each subdomain separately, so repeat the rule on every subdomain you serve, and note that the file is publicly readable: any bot or human can inspect it at yourdomain.com/robots.txt.

To confirm the bot is actually visiting, search your access logs for MyCentralAIScraperBot (case-insensitive grep: grep -i "MyCentralAIScraperBot" /var/log/nginx/access.log). Google Search Console → Coverage → Crawl Stats only covers Googlebot variants, so for MyCentralAIScraperBot specifically, filter by user-agent in your log-analysis tool (GoAccess, AWStats, etc.).

If you prefer to slow the bot down rather than block it, add a Crawl-delay directive (a 10-second delay between requests):

```
User-agent: MyCentralAIScraperBot
Crawl-delay: 10
```
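Crawl-delay is advisory and not every crawler honors it. If you want to enforce a comparable rate at the server, one option is Nginx's limit_req keyed on the user-agent; this is a minimal sketch under that assumption (zone name, rate, and burst values are illustrative):

```nginx
# http context: tag requests from this bot. An empty key is exempt
# from the limit, so all other clients pass through unthrottled.
map $http_user_agent $mycentral_key {
    default                    "";
    "~*mycentralaiscraperbot"  $binary_remote_addr;
}

# Roughly one request per 10 seconds per client IP,
# mirroring Crawl-delay: 10 above.
limit_req_zone $mycentral_key zone=mycentral:10m rate=6r/m;

server {
    location / {
        limit_req zone=mycentral burst=3 nodelay;
        limit_req_status 429;   # tell the bot to back off
        # ... your usual static/proxy configuration ...
    }
}
```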
Instead of a blanket Disallow: /, you can restrict MyCentralAIScraperBot to specific paths:

```
User-agent: MyCentralAIScraperBot
Disallow: /private/
Disallow: /staging/
Allow: /
```

This allows MyCentralAIScraperBot everywhere except the listed paths. Path matching in robots.txt uses prefix matching: Disallow: /private/ blocks /private/page.html but not /public/private/.

Because robots.txt is purely voluntary, layer additional defenses if you need stronger protection:
1. Add <meta name="MyCentralAIScraperBot" content="noai, noimageai, noindex"> to your pages (use content="noindex" alone if you only want to opt out of indexing).
2. Send the same directives as an HTTP header (X-Robots-Tag: noai, noimageai), which also covers non-HTML responses such as PDFs and images.
3. Add a /llms.txt file at your domain root (an emerging standard, similar to robots.txt but aimed at LLM crawlers).
4. Use Cloudflare WAF or Nginx to return 403 or 429 for this user-agent (see the sketch after this list).
5. Consider IP blocklists for Unknown's known crawler IP ranges.
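As a sketch of the server-block option in item 4, assuming Nginx (the same case-insensitive match works as a Cloudflare WAF rule on the User-Agent header):

```nginx
server {
    # Hard block: refuse any request whose User-Agent contains the
    # bot token (~* makes the regex case-insensitive).
    if ($http_user_agent ~* "mycentralaiscraperbot") {
        return 403;
    }

    # Softer alternative: serve the content but attach the opt-out
    # directives from item 2 to every response, HTML or not.
    add_header X-Robots-Tag "noai, noimageai";
}
```

You can verify the block by replaying the bot's user-agent yourself: curl -A "MyCentralAIScraperBot" -I https://yourdomain.com/ should come back 403 once the rule is live.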
Using multiple layers provides the strongest protection.

Check instantly with our free AI Bot Checker.
Check Your Website