Possible Reasons Why Google Is Not Indexing Your Pages

If Google is not indexing your website URLs or pages, this article can help you understand why your pages are missing from Google Search and how to resolve the issue. Sometimes Google will not index a page simply because you told Googlebot not to: a noindex, nofollow meta tag in the page header, or an equivalent HTTP response header sent by the server, may be in place without your knowledge until Googlebot reports through Search Console that the pages cannot be included in Google Search. If Google is not indexing any of your pages, you should also review Google's webmaster guidelines, optimize your website content, and follow the guidelines strictly.

If Google is not indexing your website and all of its pages or posts (for example, on WordPress), the simplest explanation is that you have blocked Googlebot. If Googlebot is denied access with a 401 or 403 status code, or runs into other server-side errors, it cannot crawl your pages and will not index your website.

There are still more reasons why Googlebot may not index all your pages. One of the first things to check is that your website has not been hacked or infected with malware: Google will not index a malware-infected site, for the safety of its users, and this is also why Google recommends serving your site over HTTPS to make the web more secure.

Possible Reasons Why Google Is Not Indexing Your Pages:

Meta Tag noindex, nofollow:

If you have placed a noindex, nofollow robots meta tag on a page, you are telling Google not to index that particular page, and Google will not index it. If you did this intentionally, that is fine: Google picked up the tag correctly, is not indexing the URL, and the URL will show as excluded in Search Console. If the noindex, nofollow tag was added by accident, remove it and allow Googlebot to index the page. Googlebot strictly follows and respects robots.txt rules and robots meta tags, and processes pages in the Google index accordingly.
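As a rough illustration, here is how you could scan a page's HTML for a noindex robots meta tag using only Python's standard library; the sample markup is made up, not taken from any real site:

```python
# Sketch: detect a "noindex" robots meta tag in a page's HTML.
# Feed in the raw HTML of any page you suspect is being excluded.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of <meta name="robots" ...> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                self.directives.append(attrs.get("content", "").lower())

def has_noindex(html: str) -> bool:
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)

# Example: a page that tells crawlers not to index it.
page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(has_noindex(page))  # True
```

If this returns True for a page you want indexed, remove the tag from your template and request a recrawl.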

Blocked by Robots.txt File:

If pages are blocked by robots.txt, Google will not crawl them. You can test whether a page is blocked with the robots.txt tester in Search Console. Note that a page blocked by robots.txt can still end up in the Google index: if Google finds enough information about the page from other sources, it can index the URL without ever loading it. To safeguard against this, add a noindex, nofollow meta tag to the page as well, so that when Googlebot does crawl the page and sees the noindex, nofollow tag, it will not index it.
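If you want to test robots.txt rules offline, Python's standard-library robots.txt parser can evaluate them the same way a crawler would; the rules below are an invented example, not your real robots.txt:

```python
# Sketch: test whether a robots.txt file blocks Googlebot from a URL,
# using Python's standard-library parser. Replace the rules string
# with the contents of your own robots.txt.
import urllib.robotparser

rules = """\
User-agent: Googlebot
Disallow: /private/
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public/page.html"))   # True
```

The robots.txt tester in Search Console does the same check, but against the robots.txt file Google actually fetched from your site.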

Blocked by Page Removal Tool:

If someone with access to your Search Console made a URL removal request, the page is currently blocked and will not be indexed. You can check who submitted the removal request and cancel it if it was made by mistake. Removal requests are only good for about 90 days; after that, Googlebot may go back, recrawl the URL, and process it into the Google index again. If you do not want the URL listed in the Google index at all, add a noindex, nofollow tag to the page.

Blocked Due to Unauthorized Request (401):

Googlebot does not have access to crawl a page when the request requires authorization and is answered with an HTTP 401 response code. If you want Googlebot to crawl and index the page, remove the authorization requirement so the page is publicly accessible.
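As a rough summary of how the status codes discussed in this article affect crawling, here is an illustrative mapping; it paraphrases the behavior described here and is not an official Google API:

```python
# Sketch: classify HTTP status codes by how Googlebot reacts to them,
# as described in this article. Illustrative only.
def crawlability(status: int) -> str:
    if 200 <= status < 300:
        return "crawlable"
    if status in (301, 302, 307, 308):
        return "redirect: the target URL may be indexed instead"
    if status in (401, 403):
        return "blocked: remove the authorization requirement to allow crawling"
    if 400 <= status < 500:
        return "client error: page will not be indexed"
    if 500 <= status < 600:
        return "server error: fix the server, then request a recrawl"
    return "unhandled status"

print(crawlability(401))
print(crawlability(200))
```

You can see the actual status code Google received for any URL in the URL Inspection tool.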

Not Indexed Due to Crawl Anomaly:

A crawl anomaly means an unspecified error occurred while fetching the page, typically a 4xx- or 5xx-level response code, and the page was therefore not indexed. To fix a crawl anomaly, fetch the page with Fetch as Google or the URL Inspection tool in the new Search Console and see what the server returns.

Crawled, Currently Not Indexed: URLs Not Indexed by Google

The page was crawled by Google but not indexed. According to Google Search, a URL with the status Crawled, currently not indexed may or may not be indexed in the future, and there is no need to resubmit it for crawling.

Duplicate Pages Will Not Be Indexed in Google (Alternate Page with Canonical):

The page will not be indexed because it is an alternate or duplicate version of a page that Google recognizes as canonical. This is correct behavior, so there is nothing you need to do.

Duplicate: Google Chose a Different Canonical than User:

This page is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical, so Google has indexed the URL it considers canonical rather than the one you declared.
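To see which canonical a page declares on your side, so you can compare it with the canonical Google chose (shown in the URL Inspection tool), a small standard-library sketch like this could help; the sample page markup is hypothetical:

```python
# Sketch: extract the rel="canonical" URL a page declares in its <head>.
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Remembers the href of the last <link rel="canonical"> seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            attrs = dict(attrs)
            if attrs.get("rel", "").lower() == "canonical":
                self.canonical = attrs.get("href")

page = '<html><head><link rel="canonical" href="https://example.com/post"></head></html>'
p = CanonicalParser()
p.feed(page)
print(p.canonical)  # https://example.com/post
```

If the declared canonical and Google's chosen canonical differ, consolidate signals (internal links, sitemap entries, redirects) toward the URL you prefer.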

Google Will Not Index Pages with a Redirect:

If the URL is a redirect, it will not be added to the index. If you have a 301 redirect in place and working, Google will index the redirect target instead.

Soft 404:

Pages that return a soft 404 will not be indexed. A soft 404 is a page that returns an HTTP 200 (success) status code but shows "not found" or error content to the user.
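The idea can be sketched in code: a soft 404 is a page whose server status is 200 but whose body looks like an error page. The phrase list below is a guess and would need tuning for your own "not found" template:

```python
# Sketch: a crude soft-404 check. The phrase list is illustrative;
# adapt it to the wording of your site's error template.
ERROR_PHRASES = ("page not found", "not found", "no longer available")

def looks_like_soft_404(status: int, body: str) -> bool:
    text = body.lower()
    return status == 200 and any(phrase in text for phrase in ERROR_PHRASES)

print(looks_like_soft_404(200, "<h1>Page Not Found</h1>"))  # True  (soft 404)
print(looks_like_soft_404(404, "<h1>Page Not Found</h1>"))  # False (a real 404)
```

The fix is to make genuinely missing pages return a real 404 or 410 status code, so Google does not have to guess.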


If your pages are not indexing in Google, these are the main reasons to check. It is also important to check regularly which pages Google is still indexing and which pages it is not.

Tips if Google Is Not Indexing Your Pages

How to Check Whether Pages Are Indexed:

Simply doing a site: search (for example, site:example.com) returns the pages of your site that are indexed in Google Search, and you can compare the count with the number of URLs on your website. Search Console tells you everything: how many pages are indexed, how many are excluded from Google Search, and which URLs are blocked by robots.txt. Ask Google to recrawl your website and provide a sitemap file so that Google can discover your URLs and index them in the Google SERPs.
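If you need to produce that sitemap by hand, a minimal sketch like this generates the basic XML format from a list of URLs (the URLs are placeholders); save the output as sitemap.xml at your site root and submit it in Search Console:

```python
# Sketch: build a minimal XML sitemap from a list of page URLs so
# Google can discover them. URLs below are placeholders.
from xml.sax.saxutils import escape

def build_sitemap(urls):
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        # escape() guards against &, <, > in URLs breaking the XML.
        lines.append(f"  <url><loc>{escape(url)}</loc></url>")
    lines.append("</urlset>")
    return "\n".join(lines)

sitemap = build_sitemap(["https://example.com/", "https://example.com/about"])
print(sitemap)
```

Most CMSs and WordPress SEO plugins generate this file automatically, so hand-rolling it is only needed for fully custom sites.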