Crawl scope and subdomains
When SEO Perception crawls a website, it follows internal links it finds in the HTML to discover more URLs, up to your plan’s page limit (see Tiers, pages, and data retention).
Default: same hostname as your Search Console property
Each site you add is tied to a Google Search Console property. We store a domain key from that property (URL-prefix host or sc-domain: apex — see Adding a website).
By default, a link is treated as internal only if its hostname falls within that scope: the property’s hostname (with the usual apex/www pairing) or the hostname of the page being crawled. Links to other subdomains (for example, blog.example.com when your property is example.com) are not queued automatically.
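The default rule can be sketched as a small hostname check. This is a hypothetical illustration, not SEO Perception’s actual code; the function name and parameters are invented for the example.

```python
from urllib.parse import urlsplit

def is_internal_default(link_url: str, scope_host: str, page_host: str) -> bool:
    """Sketch of the default scope rule: a link is internal only if its
    hostname is the property's host (apex/www pairing) or the current
    page's host. Hypothetical -- not the product's real implementation."""
    host = (urlsplit(link_url).hostname or "").lower()
    # Normalize the scope host to its apex, then allow the www variant too.
    apex = scope_host[4:] if scope_host.startswith("www.") else scope_host
    allowed = {apex, "www." + apex, page_host}
    return host in allowed
```

Under this sketch, `https://www.example.com/page` stays internal for a scope of `example.com`, while `https://blog.example.com/post` does not.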
Optional: crawl subdomains too
On the website overview, you can turn on Crawl subdomains too. When enabled, the apex/www classification stays the same, but any host under those bases is also treated as internal, so subdomain URLs can be discovered and crawled.
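With the option enabled, the check becomes a suffix match against the apex. Again a hypothetical sketch, with invented names, to show the shape of the rule:

```python
from urllib.parse import urlsplit

def is_internal_with_subdomains(link_url: str, base_host: str) -> bool:
    """Sketch of scope with 'Crawl subdomains too' enabled: the apex itself
    and any host ending in '.<apex>' count as internal. Hypothetical --
    not the product's real implementation."""
    host = (urlsplit(link_url).hostname or "").lower()
    apex = base_host[4:] if base_host.startswith("www.") else base_host
    # The leading dot prevents 'notexample.com' matching 'example.com'.
    return host == apex or host.endswith("." + apex)
```

Note the leading dot in the suffix check: without it, an unrelated host like `notexample.com` would incorrectly match `example.com`.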
URL-prefix vs domain property
- A URL-prefix property such as `https://shop.example.com/` uses that host as the scope; subdomains of `shop.example.com` follow the same rules (off by default).
- A Domain property (`sc-domain:example.com`) uses the registrable host `example.com`; arbitrary subdomains like `blog.example.com` are not crawled unless you enable the subdomain option.
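The two property types above reduce to a single scope host. A minimal sketch of that derivation, assuming the stored domain key is either an `sc-domain:` string or a URL prefix (the function name is invented for illustration):

```python
from urllib.parse import urlsplit

def scope_host(property_key: str) -> str:
    """Sketch: derive the crawl-scope host from a stored Search Console key.
    'sc-domain:example.com' yields the registrable apex; a URL-prefix key
    keeps its own hostname. Hypothetical -- not the product's real code."""
    if property_key.startswith("sc-domain:"):
        return property_key[len("sc-domain:"):]
    return urlsplit(property_key).hostname or ""
```

So `sc-domain:example.com` scopes to `example.com`, while `https://shop.example.com/` scopes to `shop.example.com` only.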
Changing the setting later
Turning off subdomain crawling stops new subdomain URLs from being added through link discovery. URLs already in your page list may still be crawled until you remove them or they age out of your workflow; we do not automatically delete existing rows when you change the setting.
For how crawling works in general, see How the crawler works.