You can check the original source code of a page by pressing Ctrl+U or by typing “view-source:” in the browser's address bar before the page address.
The raw HTML is what your server returns, while the JS version is the result of rendering by the client. The client can be a search engine bot or any visitor's browser.
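To illustrate the difference, here is a minimal hypothetical page: the server returns an almost empty HTML document, and the visible content only appears after the browser executes the script. The raw source (what you see via “view-source:”) would not contain the product text at all.

```html
<!-- What the server returns: the raw HTML is nearly empty -->
<html>
  <body>
    <div id="app"></div>
    <script>
      // Content is injected only after JS execution in the client
      document.getElementById('app').innerHTML =
        '<h1>Product name</h1><p>Product description</p>';
    </script>
  </body>
</html>
```

A bot that does not render JavaScript will only see the empty `<div id="app">`, which is why the JS and non-JS versions of a page can differ so drastically.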
When crawling your website, search engines first process the original source of the page. They may also use the non-JS titles, metadata, and content when indexing your pages. Therefore, pay attention to all the points described below.
To see whether your indexing rules have changed, open the JS crawling results and go to the “Content” – “JS vs HTML” – “Changed indexability” report. This table lists pages whose indexing rules differ between the original page source and the rendered JS version.
To compare titles, open the “Content” – “JS vs HTML” – “Changed titles” report to see a list of pages whose titles differ between the JS and non-JS versions.
You can also filter the elements found in the non-JS version. All of these elements must be valid for search bots to process them. For example, to check whether your canonical URLs are absolute, select “Original canonical URL” – “Is absolute”.
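For reference, here is what the absolute vs. relative distinction looks like in the markup (the domain is a placeholder):

```html
<!-- Absolute canonical URL: unambiguous for search bots -->
<link rel="canonical" href="https://example.com/category/page/">

<!-- Relative canonical URL: risky, may be resolved against
     an unexpected base and point to the wrong address -->
<link rel="canonical" href="/category/page/">
```

An absolute URL includes the protocol and domain, so the bot never has to guess which host the canonical points to.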
Also make sure that the links in the JS code are the same for robots and users. For a search engine to crawl your URLs, they must be placed in <a href=> links. URLs presented in any other way, as well as those that appear only after a user action (for example, hovering the cursor over a page element), are not crawled by search engines. Accordingly, these URLs cannot be indexed.
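A quick comparison of link markup (the URLs and the `goTo` handler are hypothetical):

```html
<!-- Crawlable: a standard anchor with an href attribute -->
<a href="https://example.com/catalog/">Catalog</a>

<!-- Not crawlable: the URL exists only inside a JS handler,
     so a search bot never discovers it -->
<span onclick="location.href='https://example.com/catalog/'">Catalog</span>
<a onclick="goTo('catalog')">Catalog</a>
```

Even if JavaScript attaches extra behavior to a link, keeping a real URL in the href attribute ensures both users and bots can follow it.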