Mobile app version of vmapp.org

Rolling Updates in a Webserver Farm?

@Gail5422790

Posted in: #Architecture #SiteDeployment #Webfarm

Big websites (Amazon, Facebook, Yahoo etc.) don't schedule downtime for upgrades. Usually they are done "live" and rolled progressively through the server farm. They also have big infrastructure and teams to manage this.

Smaller websites usually take the entire site offline to update the database structure and upgrade the code running on the web servers. The downtime can be very minimal, but it's still an interruption to customers.

How did you make the jump to no-downtime rolling updates? What are the minimum requirements to get this done? What can we do to build applications that make this possible from the start?




3 Comments


 

@Mendez628

There's always some reason for scheduled downtime, but it can be minimized.

Depending on your infrastructure, different strategies can minimize downtime. Regular old updates ought not to require any downtime at all.

On a number of PHP-driven sites I manage, I maintain side-by-side copies of the codebase, let's say versions 1.0.0, 1.1.0, and 1.2.0:

/sites/site-1-0-0
/sites/site-1-1-0
/sites/site-1-2-0


And then create symlinks that the web server can use:

/sites/production --> /sites/site-1-1-0
/sites/staging --> /sites/site-1-2-0


This way, I can stage my code on the production server for last-minute sanity checks, and when I want to go live, I just:

$ rm /sites/production; ln -s /sites/site-1-2-0 /sites/production


The web server uses the symlinks in the DocumentRoot specification, so the cutover is practically instantaneous.
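One caveat with `rm` followed by `ln -s`: there is a brief window between the two commands where the production symlink does not exist. A sketch of an atomic variant, using GNU `mv -T` to rename a new link over the old one in a single step (the demo runs in a scratch directory; in real use the base would be `/sites`):

```shell
# Demo in a scratch directory; in real use SITES would be /sites.
SITES="${SITES:-$(mktemp -d)}"
mkdir -p "$SITES/site-1-1-0" "$SITES/site-1-2-0"
ln -s "$SITES/site-1-1-0" "$SITES/production"   # current live version

# Atomic cutover: build the new link under a temporary name, then
# rename it over the old one. mv -T is a single rename(2) call, so
# the "production" link never disappears, even for an instant.
ln -s "$SITES/site-1-2-0" "$SITES/production.tmp"
mv -T "$SITES/production.tmp" "$SITES/production"

readlink "$SITES/production"   # now points at site-1-2-0
```

The same two-step trick works for any "swap a symlink under a running server" cutover, not just web roots.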

There are, of course, gotchas here. One needs to ensure that external data is stored somewhere, er, external. You don't want to be writing temp files or storing user-generated content in the filesystem under the site-x-y-z directories.
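One common way to keep such data external (my own illustration, not something the layout above requires) is a "shared" directory that every release links into, so uploads survive a cutover between versions:

```shell
# Demo in a scratch directory; in real use SITES would be /sites.
SITES="${SITES:-$(mktemp -d)}"
mkdir -p "$SITES/shared/uploads" "$SITES/site-1-1-0" "$SITES/site-1-2-0"

# Each release gets an "uploads" symlink into the shared area, so
# user-generated content is the same no matter which version is live.
for rel in site-1-1-0 site-1-2-0; do
    ln -sfn "$SITES/shared/uploads" "$SITES/$rel/uploads"
done
```

A file written through one release's `uploads` path is immediately visible through the other's, which is exactly what you want when flipping the production symlink.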

Another alternative, if you've got multiple servers, is to make the cutover via routing. Some VPS vendors (Linode comes to mind) make it easy to take two virtual machines and swap their IP addresses. So you set up your new version on a new server, do whatever testing is necessary, and then swap IPs to deploy your update. The same issues about keeping non-code assets up to date apply, but some careful thinking and planning can make that a non-issue.

With more robust, load-balanced setups, strategies like those suggested in danlefree's answer work as well.



 

@Courtney195

Zero downtime is the webserver equivalent of "the design must look exactly the same in every browser." Scheduled downtime is OK: just schedule it and put up a static notice. Unless it actually costs you tons of hits or money, which, for a small website, by definition it does not. If you do not want anyone to see it, do it over the Fourth of July (or Thanksgiving, or another opportune moment); unless your website is the #1 Google search result for "firework burn treatment" or "Francis Scott Key lyrics," you will be just fine.

Doing this from the start usually leads to a much larger risk: over-engineering.



 

@Cody1181609

What are the minimum requirements to get this done?


Once you have at least two servers behind a load balancer, you can sequentially remove a server from the cluster, update it, and add it back to the cluster, completing the update with no downtime as far as the visitor is concerned.


What can we do to build applications that make this possible from the start?


Design your application with load-balancing requirements in mind.


