AngularJS SEO for static webpages (S3 CDN)

@Kaufman445

Posted in: #Ajax #AmazonS3 #Javascript #Seo

I've been looking into ways to improve SEO for AngularJS apps that are hosted on a CDN like Amazon S3 (i.e. simple storage with no backend). Most of the solutions out there (PhantomJS, prerender.io, seo.js, etc.) rely on a backend to recognise the ?_escaped_fragment_ URL that the crawler generates and then fetch the relevant page from elsewhere. Even grunt-html-snapshot ultimately needs you to do this, even though you generate the snapshot pages ahead of time.
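
For reference, this is roughly the URL transformation the crawler performs and that all of those backend solutions hook into; the sketch below is just an illustration with a made-up URL, not code from any of those tools:

```js
// Rough illustration of the AJAX-crawling scheme: the crawler turns a hash-bang
// URL into an _escaped_fragment_ query, and a prerendering backend would normally
// intercept that query and serve an HTML snapshot instead of the empty app shell.
// (Encoding here is approximate; the real scheme only escapes a few characters.)
function toCrawlerUrl(appUrl) {
  var parts = appUrl.split('#!');
  if (parts.length < 2) return appUrl;               // no hash-bang fragment
  return parts[0] + '?_escaped_fragment_=' + encodeURIComponent(parts[1]);
}

// toCrawlerUrl('https://example.com/#!/products/42')
// -> 'https://example.com/?_escaped_fragment_=%2Fproducts%2F42'
```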

One suggested solution basically relies on using Cloudflare as a reverse proxy, which seems a bit of a waste given that most of the security apparatus their service provides is totally redundant for a static site. Setting up a reverse proxy myself, as suggested here, also seems problematic, since it would require either i) routing all the AngularJS apps I need static HTML for through one proxy server, which would potentially hamper performance, or ii) setting up a separate proxy server for each app, at which point I may as well set up a backend, which isn't affordable at the scale I am working at.

Is there any way of doing this, or are statically hosted AngularJS apps with great SEO basically impossible until Google updates its crawlers?

(Reposted from StackOverflow).





1 Comment


 

@Connie744

I have no clue whether this is going to work well, but then I don't really know with AngularJS and SEO in general; there is very little evidence that any of it works the way it is supposed to.

I would suggest leaving PhantomJS aside; it is known to have issues and is not exactly lightweight. Since you don't want to write or set up a backend, I would instead open the app in a browser such as Firefox and use Element.innerHTML to capture the rendered HTML at any given moment in JavaScript, then use the Amazon S3 API to upload that content as a separate HTML page.
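
In case it helps, here is a very rough sketch of what that capture-and-upload step could look like with the AWS SDK for JavaScript in the browser. The bucket name, key, region, and credential setup are all assumptions on my part, not something I have tested against S3:

```js
// Sketch only: assumes the AWS SDK for JavaScript is loaded and the browser
// session has credentials allowed to write to the bucket (e.g. via Cognito).
var s3 = new AWS.S3({ region: 'us-east-1' });   // hypothetical region

// Capture the fully rendered markup once Angular has finished rendering the route.
// outerHTML of the root element includes the <html> wrapper that innerHTML would omit.
var snapshot = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;

s3.putObject({
  Bucket: 'my-app-snapshots',        // hypothetical bucket
  Key: 'snapshots/about.html',       // hypothetical key for the /about route
  Body: snapshot,
  ContentType: 'text/html'
}, function (err) {
  if (err) console.error('Snapshot upload failed', err);
});
```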

What remains is letting the crawler know to index that other page. This is the tricky part: since you don't want a backend, you can't serve the ?_escaped_fragment_ URL, as you said. I would just declare a canonical relation between the pages using a link tag. But remember, I am not completely sure it will work.
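
Again only a sketch: one way to express that relation would be to inject a rel="canonical" link into the snapshot before uploading it, pointing back at the app URL. The direction of the relation and the URL below are assumptions, not something I have verified with Google:

```js
// Hypothetical: tie the uploaded snapshot to the live app URL with a canonical
// link before uploading it. The URL is just an example.
var canonicalTag = '<link rel="canonical" href="https://example.com/#!/about">';
var snapshot = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
snapshot = snapshot.replace('</head>', canonicalTag + '\n</head>');
```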


