Plain-English Definition
A file, typically sitemap.xml, that lists important URLs and metadata to help crawlers discover and prioritize pages.
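As an illustration, a minimal sitemap following the sitemaps.org 0.9 protocol might look like this (the URL and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- loc is the only required child element of url -->
    <loc>https://example.com/page</loc>
    <!-- optional metadata hints for crawlers -->
    <lastmod>2024-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

The optional fields are hints, not commands; search engines may weigh or ignore them.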
Category
Crawling, Access, and Indexing
How bots fetch pages and how your site allows or blocks them—critical for both SEO and AI visibility.
Related concepts
Other terms in this category that are worth understanding alongside this one.
An automated program that fetches pages for discovery and indexing.
An identifier sent by a bot or browser; used in server rules and robots directives.
A file that provides crawling directives; can allow or disallow specific user agents and paths.
A proposed convention for providing an LLM-friendly site summary and key links in a compact format.
Whether bots can reach your pages through links and allowed paths.
Whether bots can successfully download the page (status codes, authentication, paywalls, blocks).
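Several of the concepts above come together in a robots.txt file. A minimal sketch, assuming a site at example.com with a private path to exclude, might look like this:

```text
# Applies to all user agents
User-agent: *
Disallow: /private/

# Point crawlers at the sitemap (absolute URL)
Sitemap: https://example.com/sitemap.xml
```

The Sitemap directive lets crawlers discover the sitemap even if it is not at a default location, tying crawl directives and URL discovery together in one file.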
Now see where you stand
Run a free audit on your site. Get a score on all 15 dimensions and a clear list of what to fix.