There’s a lot of misinformation about SEO out there. Some of it is plain wrong; other advice may have been true ten years ago but no longer is. We’re now planning to put a lot of content online, so I wanted up-to-date information from people who have actually solved SEO problems.
So I just had a very interesting talk with Franz Enzenhofer from Full Stack Optimization.
Google tries hard to filter out duplicate content, so having unique content on our site gives us a clear advantage and hopefully a better rank than services that just crawl other pages and copy their content.
Also very interesting: Google tries to figure out how a page looks to the user. It renders CSS and, contrary to popular belief, does execute some JavaScript while crawling. It even executes some asynchronous AJAX requests, although it might do so a day later. Does this mean we can throw away server-side HTML generation and just use client-side rendering frameworks like AngularJS or Ext JS to build single-page applications that are still search engine optimized?
Unfortunately no, or at least not yet.
While Google does execute some JavaScript snippets, it will probably choke on any application that’s too complex or too large to download. And every page still has to be reachable via its own unique URL, as sketched below.
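To make the unique-URL requirement concrete, here is a minimal sketch of how a single-page application can give each view its own crawlable URL using the HTML5 History API. This is not from Franz, just an illustration under my own assumptions: `renderView` is a hypothetical placeholder for whatever the framework’s router actually does.

```typescript
// Hypothetical placeholder for the framework's view rendering
// (AngularJS, Ext JS, or anything else would do this via its router).
function renderView(path: string): void {
  document.title = `My App: ${path}`;
}

// Navigate without a full page reload, but record a real URL
// that a crawler or a shared link can request directly.
function navigate(path: string): void {
  history.pushState({ path }, "", path);
  renderView(path);
}

// Keep the back/forward buttons working by re-rendering
// whatever URL the browser restores.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { path?: string } | null;
  renderView(state?.path ?? window.location.pathname);
});
```

The important part is that every state of the app corresponds to a URL the server can also answer, so a crawler that doesn’t run the script still gets something meaningful back.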
Maybe in 5 years things will be different 😉