Crawl Error
A condition that prevents Googlebot from successfully fetching or processing a URL, such as a server failure, DNS issue or HTTP error response.
Definition
A crawl error is any issue that stops a search engine crawler from retrieving a page. Google reports these in Search Console's Page Indexing report, classifying URLs by reason, such as 'Server error (5xx)', 'Not found (404)', 'Redirect error', 'Blocked by robots.txt' or 'Soft 404'.
Crawl errors fall into two groups: site-wide problems (DNS failures, repeated 5xx responses, robots.txt fetch errors that block all crawling) and URL-level problems specific to individual pages. Site-wide issues can throttle Googlebot's overall crawl rate for a host, while URL-level issues affect indexing only for the URLs involved. Search Console groups affected URLs by reason so site owners can diagnose patterns. Many reasons are descriptive rather than evaluative: 'Crawled - currently not indexed', for example, means Google fetched the page successfully but chose not to index it.
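The status-based classification described above can be sketched in a few lines. This is an illustrative triage helper, not Google's logic; the bucket names loosely mirror Search Console's report labels, and `classify_fetch` is a hypothetical function introduced here for clarity.

```python
def classify_fetch(status):
    """Map an HTTP status code to an illustrative crawl-error bucket.

    status is the integer response code, or None when no response was
    received at all (e.g. a DNS or connection failure, a site-wide issue).
    """
    if status is None:
        return "site-level error (DNS or connection failure)"
    if 500 <= status <= 599:
        return "server error (5xx)"
    if status in (404, 410):
        return "not found (404)"
    if 300 <= status <= 399:
        return "redirect (check for loops or chains)"
    if 200 <= status <= 299:
        return "fetched OK (may still be a soft 404 if content is empty)"
    return "other client error ({})".format(status)

print(classify_fetch(503))   # server error (5xx)
print(classify_fetch(404))   # not found (404)
print(classify_fetch(None))  # site-level error (DNS or connection failure)
```

Note that a `None` status lands in the site-level bucket: when the server cannot be reached at all, every URL on the host is affected, which is why such failures can throttle crawling site-wide.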
Examples
Server error reported in Search Console
Search Console's Page Indexing report shows a spike of 'Server error (5xx)' URLs after a deployment. The team checks logs, finds a database connection bug and the URLs return to the indexed bucket once the fix ships.
Soft 404 detected by Google
A category page with no products returns 200 OK but the page reads 'No results found'. Google labels it 'Soft 404' in Search Console because the 200 status code contradicts the error-style content.
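The mismatch in the example above can be expressed as a simple heuristic: a 200 response whose body reads like an error page. This is a hypothetical sketch for illustration only; the phrase list is made up here and Google's actual soft-404 detection is far more sophisticated.

```python
# Illustrative phrases that suggest an error page; not Google's actual signals.
ERROR_PHRASES = ("no results found", "page not found", "not available")

def looks_like_soft_404(status, body):
    """Return True when a 200 response carries 'not found'-style content."""
    if status != 200:
        return False  # a real error status is not a *soft* 404
    text = body.lower()
    return any(phrase in text for phrase in ERROR_PHRASES)

print(looks_like_soft_404(200, "<h1>No results found</h1>"))  # True
print(looks_like_soft_404(404, "Not found"))                  # False (a real 404)
```

A page flagged this way is usually better served with a genuine 404 or 410 status, or with content that justifies the 200.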
Related terms
- 404 Error: An HTTP 404 "Not Found" response, indicating the requested URL does not exist on the server. Google drops 404 URLs from its index over time.
- Soft 404: A URL that returns an HTTP 200 status but displays content telling the user the page doesn't exist. Google treats it as a 404.
- HTTP Status Code: A three-digit response code returned by a server that tells the client (including search engine crawlers) the outcome of an HTTP request.
- robots.txt: A plain-text file at the root of a domain that tells crawlers which paths they may or may not request.
- Googlebot: The generic name for Google's web crawlers, the automated software that discovers and fetches pages for inclusion in Google Search.
- Indexing: The process by which a search engine analyses a fetched page and stores information about it so the page can later be returned in search results.
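The robots.txt behaviour described in the list above can be tried out with Python's standard urllib.robotparser module. A minimal sketch, with the rules supplied inline for illustration; a real crawler would fetch them from the site's /robots.txt URL, and example.com here is a placeholder.

```python
import urllib.robotparser

# Parse a tiny illustrative rule set instead of fetching a live robots.txt.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("Googlebot", "https://example.com/public/page"))   # True
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
```

If the robots.txt file itself cannot be fetched (for example, it returns a 5xx error), Google may pause crawling of the whole site, which is why robots.txt fetch errors count as site-wide crawl errors.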
Last updated: 12/05/2026