Knowing the basics of JavaScript has become a vital skill for SEO specialists, although until recently the relationship between the two disciplines was a rather controversial topic.

The crucial question at the intersection of SEO and JavaScript is whether search engine robots can correctly perceive the content of a website and realistically evaluate the user experience.

While HTML (working alongside PHP, CSS, and so on) can be read directly by a crawler, a JavaScript-based website is not immediately accessible: Google’s crawlers must first process the DOM, and only then can they browse the website.

General Definitions

Before we look at JavaScript best practices, let’s take a quick look at the basic terminology:

  • JavaScript is a programming language used to make web pages dynamic and interactive. You can embed JavaScript directly in an HTML document or reference it from an external file (see the short example after this list).
  • HTML stands for Hypertext Markup Language. In simple terms, it is a content organizer: HTML provides the structure of a website (lists, titles, subtitles, paragraphs, etc.) and defines the static content.
  • AJAX stands for Asynchronous JavaScript and XML. Basically, it updates content without refreshing the entire page, letting web applications and servers communicate without interrupting the page the user is on. Note, however, that since the second quarter of 2018 Google no longer relies on its old AJAX crawling scheme to browse JavaScript-based websites.
  • An SEO professional should also have a basic understanding of the Document Object Model (DOM). You can think of the DOM as the representation of a page that the browser builds from the HTML and that JavaScript can modify; it is what Google explores and analyzes to understand web pages.
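
To make the first point more concrete, here is a minimal sketch of the two ways JavaScript can live in a page; the file path is only an example:

```html
<h1 id="title">Static heading</h1>

<!-- Inline: the script is embedded directly in the HTML document -->
<script>
  document.getElementById('title').textContent = 'Heading rewritten by JavaScript';
</script>

<!-- External: the HTML only references a separate script file (example path) -->
<script src="/assets/app.js"></script>
```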

First, Google receives the HTML document and identifies the JavaScript elements in its structure. The browser then builds and runs the DOM, and only at that point can the search engine render the page.

1. Let the search engines see your JavaScript

The robots.txt file should be configured to give search engine crawlers appropriate access to your site. If you prevent them from fetching your JavaScript, the page will look different to crawlers than it does to users.

This means that search engines will not get the full user experience, and Google may interpret this as cloaking.

The best approach is to provide crawlers with all the resources they need to see web pages in the same way as users.
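
To make this concrete, here is a minimal robots.txt sketch; the directory names are only examples, and the point is simply that script and style resources should stay crawlable:

```
# Problematic: crawlers cannot fetch the scripts and styles that build the page
# User-agent: *
# Disallow: /js/
# Disallow: /css/

# Better: leave JavaScript and CSS accessible, block only what must stay private
User-agent: *
Disallow: /admin/
```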

2. Internal links

Internal linking is a powerful SEO tool used to show search engines the architecture of your website and to point them to the most important web pages.

The most important tip here: use regular internal links, and do not try to replace them with JavaScript on-click events.

Yes, the destination URLs will probably still be found and crawled through on-click events, but web crawlers will not associate them with the overall navigation of your site.

Therefore, it would be better to implement internal links by using regular anchor tags in the HTML or DOM, to provide users with a better experience.
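
As an illustration (the URL is made up), the first snippet below is a plain anchor that crawlers can follow as part of your navigation, while the second exposes no crawlable href and relies entirely on a click handler:

```html
<!-- Preferred: a regular anchor tag in the HTML/DOM -->
<a href="/blog/javascript-seo-tips">JavaScript SEO tips</a>

<!-- Avoid: navigation that only happens inside a JavaScript click event -->
<span onclick="window.location.href = '/blog/javascript-seo-tips'">
  JavaScript SEO tips
</span>
```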

3. The structure of the URL

JavaScript-based sites used to include fragment identifiers in URLs, but hashes (#) and hashbangs (#!) are not recommended by Google.

A highly recommended method is to use the History API’s pushState() method. It updates the URL in the address bar and allows JavaScript-based sites to give each view its own clean URL.

A clean URL is also SEO-friendly: simple text that even unsophisticated users can easily understand.

Consider using pushState() for infinite scroll, so that the URL updates each time the user reaches a new part of the page. In a perfect scenario, the user can refresh the page and stay in exactly the same place.
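
A minimal sketch of that idea, assuming the content is split into numbered sections and that /page/2, /page/3, and so on also work as normal, directly loadable URLs (all names here are hypothetical):

```javascript
// Update the address bar as the user scrolls into the next section,
// without reloading the page.
let currentPage = 1;

window.addEventListener('scroll', () => {
  const nextSection = document.querySelector(`#page-${currentPage + 1}`);
  if (nextSection && nextSection.getBoundingClientRect().top < window.innerHeight) {
    currentPage += 1;
    // Each scroll position gets its own URL, so a refresh lands in the same place.
    history.pushState({ page: currentPage }, '', `/page/${currentPage}`);
  }
});

// Keep the state consistent when the user navigates back or forward.
window.addEventListener('popstate', (event) => {
  if (event.state && event.state.page) {
    currentPage = event.state.page;
  }
});
```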

4. Test your website

Google is able to crawl and understand many forms of JavaScript, although some of them may be harder to process than others.

However, it is always better to predict possible errors and problems and avoid them, so why not do some tests?

Follow these two basic steps to detect anything that is broken:

  • Check if the content of your web pages appears in the DOM.
  • Test a few pages to make sure Google is able to index your content.

It is crucial to know whether Google can access your JavaScript (i.e. that it is not blocked in robots.txt), see your content, and analyze it correctly. Therefore, consider manually checking individual items of your content and fetching them with Google to see whether the content appears.
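
One quick, informal way to run such a check is from the browser console, comparing the rendered DOM with the raw HTML source; the phrase below is just a placeholder for a piece of content you expect on the page:

```javascript
// Run in the browser console on the page you want to check.
// true means the phrase exists in the rendered DOM, even if it is
// missing from the raw HTML source (View Source).
const phrase = 'your important product description'; // hypothetical content
console.log(document.body.innerText.includes(phrase));
```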

You can test your site simply by using Google Search Console and its “Fetch as Google” feature.

5. HTML snapshots

First, what exactly is an HTML snapshot?

The snapshot contains the content of a page after all of its JavaScript has been analyzed, executed, and rendered.

What you should know is that Google still supports HTML snapshots, although it now considers them something to avoid.

HTML snapshots may still be necessary in some situations, so it is worth knowing how they work.

For example, if search engines cannot retrieve the JavaScript on your website, you can provide them with an HTML snapshot, which is better than not having your content indexed at all.

In a perfect world, a website would use some form of server-side user-agent detection and show the HTML snapshot to both robots and users.

Note that Google strives to see exactly the same experience as a user does, so the HTML snapshots you return to search engine robots should reflect what users actually see.
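
As a rough sketch only, not a recommendation of a specific stack: server-side user-agent detection could look something like the following Express-based example, where the bot check and file paths are simplified and hypothetical, and the snapshot is assumed to be pre-rendered so that it matches what users see:

```javascript
// Hypothetical Express server: serve a pre-rendered HTML snapshot to known
// crawlers and the regular JavaScript application to everyone else.
const express = require('express');
const path = require('path');

const app = express();
const BOT_PATTERN = /googlebot|bingbot|yandexbot|baiduspider/i; // simplified check

app.use((req, res) => {
  const userAgent = req.get('User-Agent') || '';
  if (BOT_PATTERN.test(userAgent)) {
    // Snapshot generated ahead of time, after the JavaScript has run.
    res.sendFile(path.join(__dirname, 'snapshots', 'index.html'));
  } else {
    res.sendFile(path.join(__dirname, 'public', 'index.html'));
  }
});

app.listen(3000);
```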

6. The latency of the site

When a browser builds the DOM from a received HTML document, it loads most resources in the order in which they appear in that document.

If a massive file sits at the top of an HTML document, the browser will load that huge file first, and all other information will only appear afterward, with a significant delay.

The key idea of Google’s “critical rendering path” is to load the information that is crucial for users first. In other words, place the most essential content above the fold.

If your JavaScript files or other non-essential resources are holding back page loading, you probably have render-blocking JavaScript, also known as perceived latency.

This means that your pages have the potential to appear faster, but the JavaScript code slows them down.

Check how long a page takes to load with PageSpeed Insights or similar tools, and analyze the results to see whether any JavaScript is blocking rendering.

Here are two ways to solve this problem:

  • Add the ‘async’ attribute to your script tags so the JavaScript loads asynchronously (see the sketch after this list).
  • Minimize the JavaScript elements embedded in the HTML document.
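
A minimal sketch of the first option; the file names are only examples:

```html
<!-- Without async, this script would block HTML parsing while it loads -->
<script async src="/js/analytics.js"></script>

<!-- defer also avoids blocking and preserves execution order between scripts -->
<script defer src="/js/vendor.js"></script>
<script defer src="/js/app.js"></script>
```

Note that async scripts run as soon as they finish downloading, in no guaranteed order, so defer is usually the safer choice when one script depends on another.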

When trying to improve the situation, keep in mind the basic rules of JavaScript.

For example, scripts must run in a certain order: if a script depends on another file, it can only run after that file has loaded.

You should always keep in touch with your IT team to make sure that any changes do not interrupt the user experience.

Conclusion

Search engines are constantly evolving, so they will probably render your JavaScript better and faster in the future.

For now, make sure your existing content can be crawled and rendered, and keep your site’s latency in check.