Google's John Mueller answered a question about why Google indexes pages that are disallowed from crawling by robots.txt and why it's safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing to pages with noindex meta tags that are also blocked in robots.txt. What prompted the question is that Google is crawling the links to those pages, getting blocked by robots.txt (without seeing the noindex robots meta tag), then getting reported in Google Search Console as "Indexed, though blocked by robots.txt."

The person asked the following question:

"But here's the big question: why would Google index pages when they can't even see the content? What's the advantage in that?"

Google's John Mueller confirmed that if they can't crawl the page they can't see the noindex meta tag. He also made an interesting mention of the site: search operator, advising to ignore the results because "average" users won't see them.

He wrote:

"Yes, you're right: if we can't crawl the page, we can't see the noindex. That said, if we can't crawl the pages, then there's not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won't see them, so I wouldn't fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed; neither of these statuses cause issues to the rest of the site). The important part is that you don't make them crawlable + indexable."

Takeaways:

1. Mueller's answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those reasons is that it's not connected to the regular search index; it's a separate thing altogether.

Google's John Mueller discussed the site: search operator in 2021:

"The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes.

A site query is a specific kind of search that limits the results to a certain website. It's basically just the word site, a colon, and then the website's domain.

This query limits the results to a specific website. It's not meant to be a comprehensive collection of all the pages from that website."

2. A noindex tag without a robots.txt disallow is fine for these kinds of situations, where a bot is linking to non-existent pages that are getting discovered by Googlebot.

3. URLs with the noindex tag will generate a "crawled/not indexed" entry in Search Console, and those entries won't have a negative effect on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can't even see the content?

Featured Image by Shutterstock/Krakenimages.com
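To make the mechanism Mueller describes concrete, here is a minimal sketch using Python's standard-library robots.txt parser. It shows why a disallow rule prevents a crawler from ever reading a page's noindex tag: the fetch is refused before any HTML is downloaded. The domain, path, and user agent below are illustrative assumptions, not from the original article.

```python
# Minimal sketch: a robots.txt disallow keeps a crawler from ever seeing
# a page's noindex meta tag. Domain, path, and rules are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical site
rp.read()  # fetch and parse the live robots.txt

# The kind of bot-generated query-parameter URL described in the question.
url = "https://example.com/page?q=xyz"

if rp.can_fetch("Googlebot", url):
    # Only in this branch would a crawler download the HTML and be able to
    # read a <meta name="robots" content="noindex"> tag on the page.
    print("Crawl allowed: a noindex tag on the page would be seen and honored.")
else:
    # The fetch is refused before any HTML is downloaded, so a noindex tag
    # is invisible, which is how "Indexed, though blocked by robots.txt" happens.
    print("Blocked by robots.txt: a noindex tag on the page is never seen.")
```

This is why Mueller's recommendation for this situation is noindex without a robots.txt disallow: the crawler must be allowed to fetch the page in order to see the tag.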