Crawl Budget
The number of URLs a search engine crawler will fetch on a given site and the rate at which it fetches them.
Definition
Crawl budget combines how many URLs Googlebot wants to crawl on your site (crawl demand) with how fast it can fetch them without overloading your server (crawl capacity limit).
Google says crawl budget is generally not a concern for sites with fewer than a few thousand URLs; it becomes relevant for very large sites and for sites that publish new content frequently. Crawl budget wasted on duplicate, low-value, or non-indexable URLs can delay how quickly fresh or important pages are discovered. Typical improvements include consolidating duplicates, fixing soft 404s, closing off infinite crawl spaces (such as endless filter or calendar URLs), and keeping server response times fast.
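A quick way to see where crawl budget is actually going is to classify Googlebot's requests in the server access log. The sketch below is a minimal example of that idea; the `access.log` path, the query parameters treated as wasteful, and the three buckets are illustrative assumptions, not a complete audit.

```python
# Minimal sketch of a crawl-budget audit from a server access log.
# Assumptions (not from this entry): the log lives at access.log in combined
# format, and the query parameters below mark likely-duplicate URLs.
import re
from collections import Counter
from urllib.parse import parse_qs, urlparse

LOG_PATH = "access.log"                                   # assumed log location
WASTEFUL_PARAMS = {"sort", "color", "size", "sessionid"}  # example facet/tracking params

# Combined log format: ip - - [time] "METHOD /path HTTP/1.1" status bytes "referer" "user-agent"
LINE_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) .* "(?P<ua>[^"]*)"$')

def classify(path: str, status: str) -> str:
    """Bucket one crawled URL so wasted fetches stand out."""
    if status == "404" or status.startswith("5"):
        return "error response"
    if set(parse_qs(urlparse(path).query)) & WASTEFUL_PARAMS:
        return "parameterized (likely duplicate)"
    return "canonical content"

buckets = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if match and "Googlebot" in match.group("ua"):
            buckets[classify(match.group("path"), match.group("status"))] += 1

total = sum(buckets.values()) or 1
for bucket, count in buckets.most_common():
    print(f"{bucket:32s} {count:8d}  ({count / total:.1%} of Googlebot fetches)")
```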
Examples
Large e-commerce site with faceted navigation
A retailer with 50,000 products generates millions of filter-combination URLs. Without `noindex` or robots rules, Googlebot spends most of its crawl budget on near-duplicate filter pages instead of the canonical product pages.
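One common fix is to keep Googlebot out of the filter combinations with robots.txt, since a disallowed URL is never fetched at all (a `noindex` tag still costs a crawl before it can be seen). The sketch below checks such rules with Python's standard `urllib.robotparser`; the example.com domain, the `/products/filter/` path layout, and the sample URLs are assumptions for illustration, and the rules deliberately avoid the `*` and `$` wildcards that Googlebot supports but `urllib.robotparser` does not.

```python
# Hedged sketch: confirm that robots.txt rules block faceted filter URLs
# while leaving canonical product pages crawlable. Domain and path layout
# are illustrative assumptions.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /products/filter/
Allow: /products/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

urls = [
    "https://www.example.com/products/blue-running-shoe",          # canonical product page
    "https://www.example.com/products/filter/color-blue/size-10",  # near-duplicate facet page
]

for url in urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{'CRAWLABLE' if allowed else 'BLOCKED  '}  {url}")
```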
Server response time impact
A news site improves its average server response from 800ms to 200ms. Googlebot increases its crawl rate because the site can absorb more requests, and new articles get discovered hours sooner.
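Server response time is easy to spot-check yourself. The sketch below times a handful of key URLs with the Python standard library; the URLs and the sample size of 20 requests per URL are illustrative assumptions, and real monitoring would use your own page templates and much larger samples.

```python
# Minimal sketch: measure approximate server response times for a few key
# URLs. The URLs and sample size are illustrative assumptions; errors
# (e.g. a 404 on a sample URL) are not handled here.
import time
import urllib.request

SAMPLE_URLS = [
    "https://www.example.com/",             # assumed homepage
    "https://www.example.com/news/latest",  # assumed frequently updated section
]
REQUESTS_PER_URL = 20

for url in SAMPLE_URLS:
    timings_ms = []
    for _ in range(REQUESTS_PER_URL):
        start = time.perf_counter()
        # urlopen returns once the response headers arrive; reading one byte
        # of the body approximates time to first byte.
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read(1)
        timings_ms.append((time.perf_counter() - start) * 1000)
    average = sum(timings_ms) / len(timings_ms)
    print(f"{url}: avg {average:.0f} ms over {len(timings_ms)} requests")
```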
Sources
Related terms
Where QueryCatch uses this
Last updated: 2026-05-10