

JavaScript can improve a shopper’s buying experience, encourage interaction, and even bolster site performance in some cases. But where search engine optimization is concerned, JavaScript requires an extra degree of care.

Google’s web spider, Googlebot, is responsible for crawling more than 130 trillion pages. If each web page took just one second to load, Googlebot would have more than four million years’ worth of page loading and processing to fetch each page once.

Fortunately, Googlebot can crawl lots of pages at the same time. But, as Google’s Martin Splitt put it, “JavaScript requires an extra stage in the process, the rendering stage.”

“Googlebot executes JavaScript when rendering the page, but because this rendering stage is expensive it can’t always be done immediately,” Splitt said. “Separating indexing and rendering allows us to index content that is available without JavaScript as fast as possible, and to come back and add content that does require JavaScript at a later time.”

The facts that JavaScript must be processed separately and a little later are among several reasons ecommerce marketers will want to pay special attention to how and why JavaScript is employed. For example, while we know that Googlebot can eventually “see” content added with JavaScript, it may be the case that the content will take longer to be indexed and, therefore, take longer to appear in Google search results.

This may not be a problem for a product detail page. It is likely the page will change little over time and will be in place for a long time. Thus an extra few days may be worth the wait. But an online store might want a new sale page or a holiday buying guide to appear in Google’s index and relevant SERPs as soon as possible.

In July 2019, Google published a new, brief guide about JavaScript SEO. The guide describes the stages or steps Google takes to crawl, render, and index content that JavaScript adds to a page.

As we look at this process, it is important to understand that Googlebot will read and, presumably, index any conventional HTML content it finds. Thus the extra steps only apply to content that JavaScript adds to the page in the browser.

This diagram shows the steps Googlebot takes to parse and render page content. Each time Googlebot finds a new URL, it adds it to the crawl queue. First, Googlebot gets the address for a page - say the category page on an ecommerce store - from the crawl queue, and follows the URL. Assuming the page is not blocked via robots.txt, Googlebot will parse the page. On the diagram above, this is the “crawler” stage.

At the crawler stage, any new links (URLs) that Googlebot discovers are sent back to the crawl queue. The HTML content on the parsed page may then be indexed. At this point, the URL will be processed for JavaScript.
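To make that sequence concrete, here is a minimal sketch of the crawl queue and render queue in plain JavaScript. It uses made-up page data instead of real network requests, and the URLs, field names, and queue logic are hypothetical illustrations of the process Google describes, not Google’s actual implementation.

```javascript
// A toy model of the two-stage process: crawl and index HTML first,
// then defer JavaScript-dependent pages to a render queue for later.
// All URLs and page data below are hypothetical examples.
const pages = {
  'https://example-store.com/': {
    blockedByRobotsTxt: false,
    html: '<a href="https://example-store.com/category">Category</a>',
    needsJavaScript: false,
  },
  'https://example-store.com/category': {
    blockedByRobotsTxt: false,
    html: '<p>Category page shell</p>', // product list is added in the browser
    needsJavaScript: true,
  },
};

const crawlQueue = ['https://example-store.com/'];
const renderQueue = [];
const indexedUrls = new Set();

// Stage 1: the "crawler" stage from the diagram.
while (crawlQueue.length > 0) {
  const url = crawlQueue.shift();
  const page = pages[url];

  // Pages blocked via robots.txt are never parsed.
  if (!page || page.blockedByRobotsTxt) continue;

  indexedUrls.add(url); // plain HTML content can be indexed right away

  // Any new links discovered on the page go back to the crawl queue.
  for (const match of page.html.matchAll(/href="([^"]+)"/g)) {
    const link = match[1];
    if (!indexedUrls.has(link) && !crawlQueue.includes(link)) {
      crawlQueue.push(link);
    }
  }

  // Stage 2 happens later: JavaScript-added content waits for rendering.
  if (page.needsJavaScript) {
    renderQueue.push(url);
  }
}

console.log('Indexed from HTML alone:', [...indexedUrls]);
console.log('Waiting in the render queue:', renderQueue);
```

Run with Node.js, the sketch indexes both pages from their HTML but leaves the category page waiting in the render queue, mirroring how a page whose product list depends on JavaScript is handled in two passes.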
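And here is roughly what “content that JavaScript adds to the page in the browser” can look like on an ecommerce page. This is a hedged sketch: the surrounding markup and the /api/related-products endpoint are hypothetical examples, not any store’s real code.

```javascript
// Assume the server-rendered HTML already contains:
//   <h1>Holiday Gift Guide</h1>
//   <div id="related-products"></div>
// The heading is ordinary HTML that can be indexed at the crawl stage.
// The list built below only exists after this script runs in the browser
// (or in Google's renderer), so it waits for the rendering stage.
// "/api/related-products" is a made-up endpoint for illustration.
fetch('/api/related-products')
  .then((response) => response.json())
  .then((products) => {
    const container = document.getElementById('related-products');
    for (const product of products) {
      const item = document.createElement('p');
      item.textContent = product.name;
      container.appendChild(item);
    }
  });
```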
