Google’s John Mueller Explains How Pages Blocked by Robots.txt Are Ranked

Google’s John Mueller recently explained how query relevancy is determined for pages blocked by robots.txt.
Google has said it will still index pages that are blocked by robots.txt. But how does it know what types of queries to rank those pages for?
That’s the question that came up in yesterday’s Google Webmaster Central hangout:
In response, Mueller says Google obviously cannot look at a page’s content if crawling is blocked.
So Google finds other ways to compare the URL with other URLs, which is admittedly much harder when a page is blocked by robots.txt.
In most cases, Google will prioritize the indexing of other pages of a site that are more accessible and not blocked from crawling.
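For illustration, here is a minimal sketch of how a compliant crawler checks robots.txt rules before fetching a URL, using Python’s standard library. The domain, paths, and Disallow rule are hypothetical, and this is a generic compliance check, not a representation of Googlebot itself.

```python
from urllib import robotparser

# Hypothetical robots.txt rules that block crawling of a /private/ section.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = robotparser.RobotFileParser()
parser.parse(rules)

# A compliant crawler will not fetch URLs under /private/, even though those
# URLs can still end up indexed if other pages link to them.
print(parser.can_fetch("Googlebot", "https://www.example.com/private/page.html"))  # False
print(parser.can_fetch("Googlebot", "https://www.example.com/public/page.html"))   # True
```

The key point the sketch highlights is that robots.txt controls crawling, not indexing: the blocked URL can still be known to Google through links, even though its content can’t be fetched.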
Sometimes pages blocked by robots.txt will rank in search results if Google considers them worthwhile. That’s determined by the links pointing to the page.
In short, how Google figures out where to rank a blocked page comes down to the links pointing to it.
Ultimately, it wouldn’t be wise to block content with robots.txt and hope Google knows what to do with it.
But if you happen to have content that is blocked by robots.txt, Google will do its best to figure out how to rank it.
You can hear the full answer below, starting at the 21:49 mark: