
coccocbot-web

Operated by Cốc Cốc

Quick Facts

User-Agent: coccocbot-web
Category: Data Scrapers
Safety: Safe
Blocking Impact: Low — No SEO ranking impact
SEO Impact Score: 2/10

What is coccocbot-web?

The web crawler for Cốc Cốc, a popular browser and search engine in Vietnam.

coccocbot-web is a data aggregation crawler. Unlike search bots or AI crawlers, its purpose is typically to collect content for private datasets, price monitoring, or research. Blocking coccocbot-web via robots.txt or at the server level has no negative SEO impact, so if you see excessive crawl volume from this bot in your logs, a hard block is recommended.

What happens if you block coccocbot-web?

✅ **Minimal Impact** — Blocking coccocbot-web has no meaningful effect on your search engine rankings or organic traffic.
Generally safe to allow; provides legitimate crawling value.

How to block coccocbot-web with robots.txt

<code>User-agent: coccocbot-web</code> — Matching is case-insensitive. Robots.txt is fetched from the root of each subdomain separately.

Block completely (robots.txt)
User-agent: coccocbot-web
Disallow: /
Allow all (robots.txt)
User-agent: coccocbot-web
Allow: /
Block private only (robots.txt)
User-agent: coccocbot-web
Disallow: /private/
Disallow: /api/
Disallow: /admin/
Allow: /
Nginx server block
# Nginx: Hard-block coccocbot-web
if ($http_user_agent ~* "coccocbot-web") {
    return 403 "Bot blocked";
}
Apache .htaccess
# Apache: Hard-block coccocbot-web
SetEnvIfNoCase User-Agent "coccocbot-web" bad_bot
Order Allow,Deny
Allow from all
Deny from env=bad_bot
(Apache 2.2 syntax; on Apache 2.4+, use "Require all granted" plus "Require not env bad_bot" inside a <RequireAll> block instead.)
Meta robots tag
<meta name="robots" content="noindex, nofollow">
X-Robots-Tag header
X-Robots-Tag: noindex, nofollow
Note: the generic robots meta tag and X-Robots-Tag header apply to all compliant crawlers, not just coccocbot-web, and they control indexing rather than crawling.
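Before deploying robots.txt rules like the ones above, you can sanity-check them with Python's standard-library robotparser. A minimal sketch, mirroring the "block private only" example (the URLs are illustrative):

```python
import urllib.robotparser

# Rules mirroring the "block private only" example above
rules = """\
User-agent: coccocbot-web
Disallow: /private/
Disallow: /api/
Disallow: /admin/
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A disallowed path vs. an allowed path
print(rp.can_fetch("coccocbot-web", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("coccocbot-web", "https://example.com/blog/post"))          # True
```

This is a quick way to confirm that a rule change blocks exactly the paths you intend before the crawler next fetches the file.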

Is coccocbot-web safe to allow?

Yes, coccocbot-web is a **safe and legitimate** crawler. It is operated by Cốc Cốc, which publicly documents its crawler and follows the Robots Exclusion Protocol (RFC 9309). The user-agent string coccocbot-web is verifiable via reverse-DNS lookup on the crawling IP addresses. You can safely allow it unless you have a specific reason to block it (e.g., an AI training opt-out or SEO tool visibility concerns).
Verify by reverse-DNS lookup: legitimate coccocbot-web requests resolve to Cốc Cốc's domain.
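The reverse-DNS check can be scripted. Below is a minimal Python sketch of forward-confirmed reverse DNS; the ".coccoc.com" suffix is an assumption based on the operator's primary domain, so verify the expected hostname pattern against Cốc Cốc's official crawler documentation:

```python
import socket

def verify_crawler_ip(ip: str, expected_suffix: str) -> bool:
    """Forward-confirmed reverse DNS: PTR lookup, suffix check,
    then forward-resolve the hostname and confirm it maps back to the IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)        # reverse (PTR) lookup
        if not host.lower().endswith(expected_suffix):
            return False
        _, _, addrs = socket.gethostbyname_ex(host)  # forward confirmation
        return ip in addrs
    except OSError:                                   # no PTR record, bad IP, etc.
        return False

# Suffix is an assumed value -- check the operator's docs for the real one
print(verify_crawler_ip("127.0.0.1", ".coccoc.com"))  # False (localhost)
```

The forward-confirmation step matters: anyone can set a PTR record claiming to be a crawler, but only the real operator controls the forward DNS for its own hostnames.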

What does coccocbot-web do?

coccocbot-web collects web content for Cốc Cốc, the Vietnamese browser and search engine. Understanding that purpose helps you decide whether to allow or block it.

Frequently Asked Questions

What is the official user-agent string for coccocbot-web?
The official user-agent string for coccocbot-web is: coccocbot-web. This is the exact string you must use in robots.txt, Nginx, Apache, or Cloudflare firewall rules to target this bot. User-agent matching in robots.txt is case-insensitive, but the string must be spelled correctly. You can verify that a request genuinely comes from coccocbot-web by performing a reverse-DNS lookup on the source IP — legitimate bots resolve back to their operator's domain.
Is coccocbot-web safe?
Yes, coccocbot-web is a **safe and legitimate** crawler. It is operated by Cốc Cốc, which publicly documents its crawler and follows the Robots Exclusion Protocol (RFC 9309). The user-agent string coccocbot-web is verifiable via reverse-DNS lookup on the crawling IP addresses. You can safely allow it unless you have a specific reason to block it (e.g., an AI training opt-out or SEO tool visibility concerns).
Will blocking coccocbot-web hurt my SEO?
✅ **Minimal Impact** — Blocking coccocbot-web has no meaningful effect on your search engine rankings or organic traffic.
How do I block coccocbot-web in robots.txt?
Add the following lines to your /robots.txt file:
User-agent: coccocbot-web
Disallow: /
This instructs coccocbot-web not to crawl any path on your site. The Disallow: / directive covers the entire domain including subfolders. To only block specific sections, replace / with the path (e.g., Disallow: /blog/). Note: robots.txt is publicly readable — any bot or human can inspect it at yourdomain.com/robots.txt.
Does coccocbot-web respect robots.txt?
Yes — coccocbot-web is a well-behaved bot operated by Cốc Cốc. It fetches and parses /robots.txt before crawling any page, following RFC 9309.
How do I verify if coccocbot-web is crawling my site?
Search your web server access logs for the string coccocbot-web (case-insensitive grep: grep -i "coccocbot-web" /var/log/nginx/access.log). Note that Google Search Console's Crawl Stats report only covers Google's own crawlers, so it will not show coccocbot-web; instead, filter by user-agent in your log analysis tool (GoAccess, AWStats, etc.).
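The same check can be scripted. A small Python sketch over synthetic combined-log-format lines (the log lines and user-agent strings are illustrative, not the bot's verbatim UA):

```python
import re

# Synthetic access-log lines in combined log format (UA strings illustrative)
LOG_LINES = [
    '1.2.3.4 - - [10/May/2024:12:00:00 +0000] "GET / HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (compatible; coccocbot-web/1.0)"',
    '5.6.7.8 - - [10/May/2024:12:00:01 +0000] "GET /a HTTP/1.1" 200 128 '
    '"-" "Mozilla/5.0"',
]

# Case-insensitive match, equivalent to `grep -i`
bot_hits = [ln for ln in LOG_LINES if re.search(r"coccocbot-web", ln, re.IGNORECASE)]
print(len(bot_hits))  # 1
```

In practice you would read the lines from your real access-log path rather than an in-memory list.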
What is the crawl frequency of coccocbot-web?
coccocbot-web crawls at a moderate rate. If you notice excessive traffic in your logs, you can add a Crawl-delay directive:
User-agent: coccocbot-web
Crawl-delay: 10
(10-second delay between requests). Note that Crawl-delay is a non-standard extension, not part of RFC 9309, so support varies by crawler.
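Python's standard-library robotparser also parses this directive, which gives a quick syntax check for the rules above (a sketch):

```python
import urllib.robotparser

# The Crawl-delay group from the answer above
rules = """\
User-agent: coccocbot-web
Crawl-delay: 10
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# crawl_delay() returns the parsed delay for the matching user-agent group
print(rp.crawl_delay("coccocbot-web"))  # 10
```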
Can I block coccocbot-web from specific pages only?
Yes. Instead of a global Disallow: / you can restrict coccocbot-web to specific paths:
User-agent: coccocbot-web
Disallow: /private/
Disallow: /staging/
Allow: /
This allows coccocbot-web everywhere except the listed paths. Path matching in robots.txt uses prefix matching — Disallow: /private/ blocks /private/page.html but NOT /public/private/.
Is coccocbot-web causing high server load?
If coccocbot-web is generating excessive requests, you can:
1. Add Crawl-delay: 30 below the User-agent directive in robots.txt.
2. Rate-limit the user-agent via Nginx's limit_req_zone or Apache's mod_ratelimit.
3. Block it outright at the Cloudflare WAF with the rule: http.user_agent contains "coccocbot-web".
4. Use fail2ban to auto-block IPs exceeding request thresholds.
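For option 2, a hedged Nginx sketch (the zone name, rate, and burst are illustrative values, not recommendations from Cốc Cốc):

```nginx
# http {} context: the key is empty for other clients, so only coccocbot-web is limited
map $http_user_agent $coccoc_limit_key {
    default          "";
    ~*coccocbot-web  $binary_remote_addr;
}
limit_req_zone $coccoc_limit_key zone=coccoc_zone:10m rate=1r/s;

server {
    location / {
        # Allow short bursts; requests beyond 1 req/s get HTTP 503
        limit_req zone=coccoc_zone burst=5 nodelay;
    }
}
```

Unlike a 403 hard block, this keeps the bot allowed while capping the load it can generate.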

