Why do big sites host their images/CSS on external domains?

@Bethany197

Posted in: #Cdn #Css #Images #MultipleDomains #StaticContent

Why do sites like Facebook, Twitter, and Google host their images and css on external domains such as:


Facebook: static.ak.fbcdn.net
Twitter: a0.twimg.com
Google: ssl.gstatic.com


Question(s):


Is it for performance, or for security?





4 Comments

Sorted by latest first

 

@Bryan171

The two-connection restriction is no longer an issue. While the HTTP/1.1 spec recommends it, all modern browsers allow at least six concurrent connections per hostname.



 

@Lengel546

Large sites move their static content (images, JavaScript, and CSS files) to a Content Delivery Network (CDN), because deploying that content across multiple, geographically dispersed servers makes pages load faster from the user's perspective.

As the CDN has a different domain name, it also provides domain sharding benefits.
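As a rough sketch of what that looks like in practice (the hostnames below are made-up placeholders, not any real site's setup), the page can map each asset path to one of a handful of static hostnames, always the same one for the same file so browser caching still works:

// Hypothetical static hostnames; in a real deployment these would point at a CDN.
const STATIC_HOSTS = [
  "static0.example-cdn.net",
  "static1.example-cdn.net",
  "static2.example-cdn.net",
  "static3.example-cdn.net",
];

// Pick a host with a stable hash of the asset path, so the same asset always
// resolves to the same hostname while different assets spread across all four.
function shardedUrl(assetPath: string): string {
  let hash = 0;
  for (const ch of assetPath) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple unsigned string hash
  }
  return `https://${STATIC_HOSTS[hash % STATIC_HOSTS.length]}${assetPath}`;
}

// shardedUrl("/img/logo.png") always yields the same one of the four hosts.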



 

@Rivera981

@toomanyairmiles is partially correct - the purpose of this technique is to allow parallel connections from the web browser to the server. The HTTP/1.1 spec recommends that browsers open at most two simultaneous connections to a single host, but many newer browsers can manage up to 60. Either way, the cap on simultaneous connections between the browser and the web server(s) is a major speed bottleneck.

From Google's resource:


The HTTP 1.1 specification (section 8.1.4) states that browsers should allow at most two concurrent connections per hostname (although newer browsers allow more than that: see Browserscope for a list). If an HTML document contains references to more resources (e.g. CSS, JavaScript, images, etc.) than the maximum allowed on one host, the browser issues requests for that number of resources, and queues the rest. As soon as some of the requests finish, the browser issues requests for the next number of resources in the queue. It repeats the process until it has downloaded all the resources. In other words, if a page references more than X external resources from a single host, where X is the maximum connections allowed per host, the browser must download them sequentially, X at a time, incurring 1 RTT for every X resources. The total round-trip time is N/X, where N is the number of resources to fetch from a host. For example, if a browser allows 4 concurrent connections per hostname, and a page references 100 resources on the same domain, it will incur 1 RTT for every 4 resources, and a total download time of 25 RTTs.
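To make that arithmetic concrete, here is a tiny TypeScript sketch of the ceil(N / X) estimate from the quote (a back-of-the-envelope model that ignores bandwidth, keep-alive reuse, and pipelining):

// Round trips needed when a browser fetches `resources` files from one
// hostname, `connectionsPerHost` at a time.
function roundTripsForHost(resources: number, connectionsPerHost: number): number {
  return Math.ceil(resources / connectionsPerHost);
}

console.log(roundTripsForHost(100, 4)); // 25 RTTs, matching Google's example
console.log(roundTripsForHost(100, 2)); // 50 RTTs under the old two-connection limit
console.log(roundTripsForHost(25, 4));  // ~7 RTTs per host if those 100 files are sharded across 4 hosts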


So the way to get around this is to "shard" the requests across different domains or hosts.

Again, from the same Google resource:


Balance parallelizable resources across hostnames.
Requests for most static resources, including images, CSS, and other binary objects, can be parallelized. Balance requests to all these objects as much as possible across the hostnames. If that's not possible, as a rule of thumb, try to ensure that no one host serves more than 50% more than the average across all hosts. So, for example, if you have 40 resources, and 4 hosts, each host should serve ideally 10 resources; in the worst case, no host should serve more than 15. If you have 100 resources and 4 hosts, each host should serve 25 resources; no one host should serve more than 38.
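That rule of thumb is easy to turn into a quick check; the sketch below simply encodes the 50%-above-average threshold from the quote:

// With `resources` static files spread over `hosts` hostnames, the average load
// is resources / hosts; no single host should serve more than ~1.5x that average.
function maxResourcesPerHost(resources: number, hosts: number): number {
  return Math.round((resources / hosts) * 1.5);
}

console.log(maxResourcesPerHost(40, 4));  // 15, as in the quoted example
console.log(maxResourcesPerHost(100, 4)); // 38, as in the quoted example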


But there's one more piece to the puzzle. Each request normally carries its own overhead, typically in the form of cookies. Static elements like images, CSS, and JavaScript don't need to transmit cookie data, so serving them from cookie-less (sub)domains can result in faster round trips:


Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies. This technique is especially useful for pages referencing large volumes of rarely cached static content, such as frequently changing image thumbnails, or infrequently accessed image archives. We recommend this technique for any page that serves more than 5 static resources. (For pages that serve fewer resources than this, it's not worth the cost of setting up an extra domain.)

To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources.
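To illustrate the "no cookies on the static domain" part, here is a minimal sketch of a static file server in TypeScript on Node.js (the ./static directory, the port, and the idea that a hostname like static.example.com is CNAMEd to this box are all assumptions for the example). The point is simply that it never emits a Set-Cookie header:

import { createServer } from "node:http";
import { createReadStream, existsSync } from "node:fs";
import { extname, join, normalize } from "node:path";

// Minimal content-type table for common static assets.
const MIME: Record<string, string> = {
  ".css": "text/css",
  ".js": "text/javascript",
  ".png": "image/png",
  ".jpg": "image/jpeg",
};

createServer((req, res) => {
  // Resolve the request inside ./static; the regex is a rough guard against "../" traversal.
  const safePath = normalize(req.url ?? "/").replace(/^(\.\.[/\\])+/, "");
  const filePath = join("static", safePath);
  if (!existsSync(filePath)) {
    res.writeHead(404);
    res.end("Not found");
    return;
  }
  res.writeHead(200, {
    "Content-Type": MIME[extname(filePath)] ?? "application/octet-stream",
    // Long cache lifetimes suit fingerprinted static assets.
    "Cache-Control": "public, max-age=31536000, immutable",
    // Deliberately no Set-Cookie header anywhere on this host.
  });
  createReadStream(filePath).pipe(res);
}).listen(8080);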



 

@Voss4911412

In the past, web browsers could only download two resources at a time from a given host (modern browsers allow six or more), so spreading resources across several domains lets the browser fetch them in parallel, which is faster than serving everything from a single domain. This applies to everything from images to JavaScript files.

Many companies also use a CDN, which serves each request from a server geographically close to the end user and further improves performance by reducing the round-trip time for resource requests.


