Page has noindex: why search engines won't index it
If a page tells Google "don't index me," Google obeys. The hard part is finding which layer of your stack added the directive.
The page is returning a noindex directive, either via a <meta name="robots" content="noindex"> tag, a Googlebot-specific equivalent (<meta name="googlebot" content="noindex">), or an X-Robots-Tag HTTP header. Once Google sees the directive on a crawl, it removes the URL from the search index, even if the page is otherwise high quality.
noindex is the strongest signal you can give Google to drop a URL. It overrides canonicals, internal links, and external backlinks. A misconfigured noindex on a key page can wipe organic traffic to that page within days.
- An SEO plugin (Yoast, Rank Math, All in One SEO) was set to noindex an entire post type or taxonomy.
- A framework (Next.js, Remix, Nuxt) is exporting a noindex robots value (e.g. robots: { index: false } in Next.js metadata) from a layout that wraps too many routes.
- A staging environment leaks noindex headers via X-Robots-Tag into production.
- A reverse proxy or CDN injects X-Robots-Tag: noindex for entire path prefixes.
- A WordPress "discourage search engines from indexing this site" toggle is left enabled.
- An SSR template renders meta robots noindex on error or empty data states.
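The framework case above deserves a concrete illustration, because layout-level metadata cascades. A hypothetical Next.js App Router sketch (the file path is illustrative; in a real file this would be `export const metadata`, shown here as a plain const so the sketch stands alone):

```typescript
// app/layout.tsx (illustrative path). In the Next.js App Router a robots
// setting in a layout applies to EVERY route that layout wraps, so a
// noindex intended for one section can silently cover the whole site.
const metadata = {
  robots: { index: false, follow: false },
};
```

Moving the same object down into a narrower layout, or into a single page file, limits the noindex to the routes that actually need it.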
- Open Noindex Checker and paste the page URL.
- Read whether the directive is in HTML, in the X-Robots-Tag header, or both.
- Note whether the directive is generic (robots) or targeted at Googlebot.
- If it's a header, the fix is at the server, CDN, or framework response layer, not in the HTML.
- If it's a meta tag, view the raw HTML source to see which template emits it.
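The checks above can be sketched in code. A minimal Node 18+ TypeScript sketch with no external dependencies (the function names are illustrative, not part of Noindex Checker):

```typescript
// Detect a noindex directive at either layer: the X-Robots-Tag response
// header, or a robots/googlebot meta tag in the HTML.

// An X-Robots-Tag value holds comma-separated directives, optionally
// scoped to a user agent, e.g. "googlebot: noindex, nofollow".
function hasNoindexInHeader(headerValue: string): boolean {
  return headerValue
    .toLowerCase()
    .split(",")
    .map((d) => d.trim())
    .some((d) => d === "noindex" || /^[a-z0-9_-]+:\s*noindex$/.test(d));
}

// Scan HTML for <meta name="robots"> or <meta name="googlebot"> tags
// whose content list includes a noindex token.
function hasNoindexInHtml(html: string): boolean {
  const metaTags = html.match(/<meta\b[^>]*>/gi) ?? [];
  return metaTags.some((tag) => {
    const isRobots = /name=["'](robots|googlebot)["']/i.test(tag);
    const content = /content=["']([^"']*)["']/i.exec(tag)?.[1] ?? "";
    return (
      isRobots &&
      content.toLowerCase().split(",").map((s) => s.trim()).includes("noindex")
    );
  });
}

// Fetch a URL and report which layer (if any) carries the directive.
async function checkNoindex(url: string) {
  const res = await fetch(url);
  const header = res.headers.get("x-robots-tag") ?? "";
  const html = await res.text();
  return {
    headerNoindex: hasNoindexInHeader(header),
    htmlNoindex: hasNoindexInHtml(html),
  };
}
```

If `headerNoindex` is true, look at the server, CDN, or middleware layer; if `htmlNoindex` is true, look at the template or metadata layer; both can be true at once.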
1. Locate the layer adding noindex
HTML noindex usually lives in your CMS, framework metadata, or layout. Header noindex usually lives in your server, CDN, or framework response middleware. Noindex Checker tells you which one.
2. Remove or invert the directive
Change content="noindex" to content="index, follow" (or remove the tag entirely). Remove X-Robots-Tag from server/CDN config. Disable the SEO plugin toggle that flagged the post type.
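For the framework case, the inverted form looks like this sketch (Next.js-style metadata assumed; as in a real file this would be `export const metadata`, and omitting the robots key entirely also works, since indexable is the default):

```typescript
// Explicitly indexable robots metadata, the inverse of a layout-level
// noindex. Shown as a plain const so the sketch stands alone.
const metadata = {
  robots: { index: true, follow: true },
};
```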
3. Be careful with noindex on staging
Many teams gate noindex behind environment variables. Make sure production never inherits a staging-only noindex header.
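One way to gate it is a single helper that fails closed, so a page is indexable only when the environment is explicitly production. A sketch (the variable name DEPLOY_ENV and its values are assumptions; use whatever your pipeline actually sets):

```typescript
// Derive robots metadata from the deployment environment so a
// staging-only noindex can never ship to production by accident.
type RobotsMeta = { index: boolean; follow: boolean };

function robotsForEnv(deployEnv: string | undefined): RobotsMeta {
  // Fail closed: an unknown or unset environment is treated as
  // non-production, so a misconfigured deploy stays noindexed.
  const isProduction = deployEnv === "production";
  return { index: isProduction, follow: isProduction };
}

// In a Next.js root layout this might be used as:
// export const metadata = { robots: robotsForEnv(process.env.DEPLOY_ENV) };
```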
4. Re-fetch and re-verify
Run Noindex Checker again to confirm both HTML and header layers are clean. Then request indexing in Search Console.
5. Wait for Google to recrawl
Removing noindex doesn't reindex the page instantly. Google needs to recrawl and re-evaluate. This typically takes days to a couple of weeks.
What is the difference between noindex and nofollow?
noindex tells search engines not to include a URL in the index. nofollow tells them not to pass link signals from the page's outgoing links. They are independent: a page can be either, both, or neither.
Can X-Robots-Tag noindex affect PDFs?
Yes. X-Robots-Tag is sent as an HTTP header, so it works for any resource type: HTML, PDF, images, JSON. That makes it the only practical way to noindex non-HTML files.
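Header-level noindexing for such files usually reduces to a small path rule in your server or middleware. A hypothetical helper (the path patterns are illustrative assumptions, not a real API):

```typescript
// Decide which X-Robots-Tag value, if any, a response should carry.
// PDFs (and anything under an internal prefix) get a header-level
// noindex, since they have no <head> to carry a meta robots tag.
function xRobotsTagFor(pathname: string): string | null {
  if (pathname.toLowerCase().endsWith(".pdf")) return "noindex";
  if (pathname.startsWith("/internal-docs/")) return "noindex";
  return null; // no header: the resource stays indexable
}
```

The same rule inverted is also how these files get re-indexed: once the path no longer matches, the header disappears and the resource becomes indexable again after a recrawl.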
How do I remove noindex in Next.js or WordPress?
In Next.js, audit your layout and page metadata exports for robots: { index: false }. In WordPress, check Settings → Reading for the "discourage search engines" toggle, and check your SEO plugin's content type settings.
Ready to diagnose your URL?
Noindex Checker runs the exact checks discussed above.