Everything You Need To Know About The X-Robots-Tag HTTP Header


SEO, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.

However, nearly every website has pages that you do not want included in this exploration.

For instance, do you actually want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages are not actively doing anything to drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.

Thankfully, Google allows site owners to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.

We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should absolutely read.

But in high-level terms, it is a plain text file that lives in your site's root and follows the Robots Exclusion Protocol (REP).

Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags contain directions for specific pages.

Some meta robots tags you might employ include:

  • index, which tells search engines to add the page to their index.
  • noindex, which tells them not to add the page to the index or include it in search results.
  • follow, which instructs search engines to follow the links on a page.
  • nofollow, which tells them not to follow links on the page.

And there are a whole host of others.
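For reference, page-level directives like these live in a meta tag inside the page's head element; the combination of values below is just illustrative:

```html
<!-- Tells search engines not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```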

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there is also another way to instruct search engine bots to noindex or nofollow pages: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”

While you can set indexing directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to control how your non-HTML files are crawled and indexed.
  • You want to serve directives site-wide rather than at the page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify instructions.

Maybe you don’t want a certain page to be cached and also want it to be unavailable after a particular date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
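As a sketch, such a combination could be set in an Apache .htaccess file like this (the filename and date below are illustrative placeholders, not values from any real configuration):

```apache
# Hypothetical example: don't cache this PDF, and drop it from results after the given date
<Files "quarterly-report.pdf">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>
```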

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as to apply parameters on a larger, global level.

To help you understand the difference between these directives, it is helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a handy cheat sheet to explain:

Crawler directives:

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer directives:

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.

  • Nofollow – allows you to specify links that should not pass on authority or PageRank.

  • X-Robots-Tag – allows you to control how specified file types are indexed.
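For illustration, a minimal robots.txt using those crawler directives might look like this (the paths and sitemap URL are placeholders):

```
User-agent: *
Allow: /
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml
```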

Where Do You Put The X-Robots-Tag?

Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or an .htaccess file.

The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let’s take a look.

Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
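For completeness, a sketch of the equivalent rule in Nginx, assuming the same image extensions, would be:

```nginx
# Serve a noindex X-Robots-Tag for common image file types
location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}
```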

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if crawler bots discover a URL that has both an X-Robots-Tag and a meta robots tag?

If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
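In other words, a robots.txt configuration like the following (the path is hypothetical) would be self-defeating if you also served a noindex X-Robots-Tag on those URLs, because crawlers would never fetch the pages and so would never see the header:

```
User-agent: *
Disallow: /private/
```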

Checking For An X-Robots-Tag

There are a few different methods that can be used to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about a URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
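You can also inspect a URL's headers from the command line. Here is a quick sketch using curl and grep; the URL is a placeholder, and the response below is simulated rather than fetched:

```shell
# In practice you would fetch real headers with:
#   curl -sI "https://example.com/document.pdf"
# Simulated response headers for demonstration:
headers='HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow'

# grep -i matches the header name case-insensitively
echo "$headers" | grep -i '^x-robots-tag'
```

If the header is present, this prints the X-Robots-Tag line; if nothing is printed, the URL is not serving one.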

Another method that can be used at scale, in order to pinpoint issues on sites with millions of pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/