Discovered Currently Not Indexed Status (Excluded):
"Discovered – currently not indexed" means that Google has successfully discovered your website URLs, either from your sitemap.xml file or while Googlebot was crawling links on your site, but has not yet crawled or indexed them. Search Console therefore reports that these URLs were discovered but are not part of the Google search index.
Even if you submit a URL in a sitemap, or Googlebot discovers it through your internal linking, it can remain unindexed for a while: Googlebot sees a lot of pages on your website and is simply not interested in indexing some of them for now. Pages often fall under this excluded status when they are near-duplicates of many other URLs on your site that are only subtly different (listings, pagination pages, and so on) and a variation of that content is already indexed on Google. The advice below paraphrases a few of the things John Mueller said on the Webmaster Central office-hours hangouts.
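For near-duplicate listing and pagination pages like these, one common remedy is to point the duplicates at the version you want indexed with a canonical tag. A minimal sketch (the URL is illustrative, not from this article):

```html
<!-- Placed in the <head> of a filtered or paginated listing page that
     duplicates the main listing, this tells Googlebot which version
     of the page you prefer to have indexed: -->
<link rel="canonical" href="https://example.com/shoes/" />
```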
How to fix Discovered Currently not indexed Status in Search Console:
There are three areas you may need to look at if you see "discovered – currently not indexed":
1 – First, make sure that you're not accidentally generating too many URLs.
2 – Make sure that internal linking is working well, and try to reduce the number of pages.
3 – Combine thin content to make it much stronger.
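Point 2 can be checked programmatically. As a rough sketch using Python's standard library (the page data and site URL below are made-up examples, not from this article), you can count how many internal pages link to each URL; pages with few or no internal links are the hardest for Googlebot to discover:

```python
from html.parser import HTMLParser
from collections import Counter
from urllib.parse import urljoin, urlparse


class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags in a page's HTML."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def internal_link_counts(pages, site="https://example.com"):
    """pages maps URL -> HTML source. Returns how many internal pages
    link to each URL; URLs with a count of zero have no internal links
    pointing at them, which makes discovery and crawling less likely."""
    counts = Counter()
    for page_url, html in pages.items():
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            target = urljoin(page_url, href)  # resolve relative links
            same_site = urlparse(target).netloc == urlparse(site).netloc
            if same_site and target != page_url:
                counts[target] += 1
    return counts
```

In practice you would feed this with HTML fetched from your own site and then review the URLs that collect few or no inlinks.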
Discovered Currently Not Indexed in Search Console:
When a URL is reported as "discovered – currently not indexed", it means Googlebot has successfully completed the discovery process for that URL, but the URL is not indexed yet. This status can change over time, once Google decides whether the URL should be indexed based on the content you are providing on those pages.
How to Avoid Discovered Currently Not Indexed in Search Console: Technical Issues
Always make your website URLs easy for Googlebot to discover and crawl, without server errors, soft 404s, or responses with no content. As John Mueller put it: if you're really seeing 99 percent of those pages not being indexed, first look at some of the technical things as well. In particular, make sure you're not accidentally generating URLs with subtly differing URL patterns, so that it's not a matter of Google refusing to index your content pages, but of those pages getting lost in a jungle of URLs that all look very similar but are subtly different. Things like URL parameters and upper/lower case can all lead to essentially duplicate content, and if Google has discovered a lot of these duplicate URLs, it may decide it doesn't actually need to crawl all of them because it already has some variation of the page indexed.
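To spot the duplicate URL patterns described above (parameters, upper/lower case), a small normalization pass can group raw URLs that collapse to the same page. This is a sketch under assumptions: the tracking-parameter list is hypothetical, and lowercasing the path is only safe if your server treats paths case-insensitively:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical set of parameters that do not change page content;
# adjust for your own site's tracking and session parameters.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref", "sessionid"}


def normalize(url):
    """Reduce a URL to a canonical form: lowercase the scheme, host, and
    path, drop tracking parameters, and sort the remaining parameters so
    that parameter order alone doesn't produce a 'new' URL."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k.lower() not in TRACKING_PARAMS]
    query.sort()
    return urlunsplit((
        parts.scheme.lower(),
        parts.netloc.lower(),
        parts.path.lower().rstrip("/") or "/",  # assumes case-insensitive paths
        urlencode(query),
        "",  # drop fragments; crawlers ignore them
    ))


def duplicate_groups(urls):
    """Group raw URLs by normalized form; groups with more than one
    member are likely duplicate-URL patterns worth consolidating."""
    groups = {}
    for url in urls:
        groups.setdefault(normalize(url), []).append(url)
    return {k: v for k, v in groups.items() if len(v) > 1}
```

Running `duplicate_groups` over the URL list from your server logs or sitemap shows which URL variations Google may be treating as duplicates of pages it has already indexed.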
What to Do If You See Discovered Currently Not Indexed:
If you see URLs in the "discovered – currently not indexed" section, list them out and check each one with the URL Inspection tool. If the tool reports errors for specific pages, fix those errors, run a live test in Search Console to confirm Googlebot can crawl the updated pages, and then request indexing so the URLs can move out of the excluded status. The live URL test also shows you how Googlebot renders your webpage, much like the old Fetch and Render tool did.
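Before inspecting URLs one by one, you can triage a list of them with a simple fetch-and-classify pass. This is an illustrative sketch, not the URL Inspection tool itself; the soft-404 threshold below is a made-up heuristic, since Google does not publish one:

```python
import urllib.request
import urllib.error

# Hypothetical threshold: a 200 response with almost no content is a
# candidate soft 404, which Google excludes from indexing.
MIN_CONTENT_BYTES = 512


def classify(status, body_length):
    """Map an HTTP status and response size to a triage action for
    'discovered currently not indexed' URLs."""
    if status >= 500:
        return "server error - fix before requesting indexing"
    if status in (404, 410):
        return "not found - remove from sitemap"
    if status == 200 and body_length < MIN_CONTENT_BYTES:
        return "possible soft 404 - add real content"
    if status == 200:
        return "ok - inspect in Search Console and request indexing"
    return "check manually"


def check_url(url):
    """Fetch a URL roughly the way a crawler would and classify it."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify(resp.status, len(resp.read()))
    except urllib.error.HTTPError as e:
        return classify(e.code, 0)
```

Anything classified as a server error or soft 404 should be fixed first; only the "ok" URLs are worth a live test and an indexing request.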