Pages tab filters (website workspace)
On a website’s Pages tab (/website/<id>), the left column lists URLs the crawler has stored for that site. Above the list you can search and apply filters so you can quickly find problem URLs or audit indexability—without exporting anything.
Search by URL or title
- Type in the search field to match URL or page title.
- Narrowing kicks in from two characters onward (single-character typing shows a hint but does not filter yet).
- Very long queries are clipped server-side so searches stay fast.
If nothing matches, check whether the URL has been crawled yet or falls outside your crawl scope; the empty state in the product explains that case.
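The search behavior above can be sketched as a simple matching predicate. This is a hypothetical illustration, not the product's actual code: the function name, the exact clipping length, and the field names are assumptions; only the two-character threshold and URL/title matching come from the description.

```python
MIN_QUERY_LEN = 2    # a single character shows a hint but does not filter
MAX_QUERY_LEN = 200  # hypothetical server-side clip to keep searches fast

def matches_search(page: dict, query: str) -> bool:
    """Return True if the page's URL or title matches the (clipped) query."""
    q = query[:MAX_QUERY_LEN].strip().lower()
    if len(q) < MIN_QUERY_LEN:
        return True  # below the threshold the list is not narrowed
    title = (page.get("title") or "").lower()
    return q in page["url"].lower() or q in title
```

For example, `matches_search({"url": "https://example.com/pricing", "title": "Pricing"}, "pric")` narrows to that page, while a one-character query leaves the list unfiltered.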
Indexability filter
Three modes narrow the list using each page's stored robots meta signal:
- All — no indexability filter.
- Index — pages treated as indexable (no noindex in robots meta, or robots meta absent).
- Noindex — pages whose robots meta indicates noindex.
This is a practical workspace filter—it reflects what we stored from the crawl, not a live Google index guarantee.
HTTP status filter
The dropdown limits rows by the HTTP status code stored for the page’s last crawl fetch:
- Any — no status filter.
- 200 only — successful responses.
- 3xx redirects — redirect responses.
- 4xx client errors — not found, gone, etc.
- 5xx server errors — server-side failures.
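The dropdown modes above amount to range checks on the stored status code. A minimal sketch (mode names and the function are hypothetical, not the product's API):

```python
def matches_status_filter(status: int, mode: str) -> bool:
    """Check a stored HTTP status against one of the dropdown modes."""
    if mode == "any":
        return True
    if mode == "200":
        return status == 200
    if mode == "3xx":
        return 300 <= status <= 399
    if mode == "4xx":
        return 400 <= status <= 499
    if mode == "5xx":
        return 500 <= status <= 599
    raise ValueError(f"unknown mode: {mode}")
```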
Combining filters
Search, indexability, and HTTP status filters apply together. If the list is empty, try relaxing one filter at a time (for example, back to All indexability or Any HTTP status).
The list loads in batches; use Load more when present to fetch additional rows with the same filters.
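Putting it together, the three filters combine as a logical AND over each stored row. The sketch below is a self-contained illustration under assumed field names (`url`, `title`, `robots_meta`, `status`) and thresholds; it is not the product's implementation.

```python
def filter_pages(pages, query="", index_mode="all", status_mode="any"):
    """Apply search, indexability, and HTTP status filters together (AND)."""
    ranges = {"200": (200, 200), "3xx": (300, 399),
              "4xx": (400, 499), "5xx": (500, 599)}

    def keep(p):
        # Search: only narrows from two characters onward (assumed clip at 200).
        q = query[:200].strip().lower()
        if len(q) >= 2:
            title = (p.get("title") or "").lower()
            if q not in p["url"].lower() and q not in title:
                return False
        # Indexability: noindex directive in stored robots meta, if any.
        noindex = "noindex" in (p.get("robots_meta") or "").lower()
        if index_mode == "index" and noindex:
            return False
        if index_mode == "noindex" and not noindex:
            return False
        # HTTP status: range check against the selected mode.
        if status_mode != "any":
            lo, hi = ranges[status_mode]
            if not (lo <= p["status"] <= hi):
                return False
        return True

    return [p for p in pages if keep(p)]
```

Relaxing one argument at a time (back to `"all"` / `"any"`) mirrors the troubleshooting advice above when the combined result is empty.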