Preparing for a huge spike in traffic

Our company will be appearing on a prime time TV show this week, and they've told us we can expect around 200,000 visitors on our website all at once.
We normally only get about 100 visits per day, so I've no idea if we can handle that much traffic. We're hosted by 1and1.co.uk.
Are there any precautions we can take to prevent our site from being crippled?
I know this is an old but very good question, and I wish I'd had good information on this subject a few years back...
From time to time we have (school activity related) sites featured on TV networks. Since we operate on a very tight budget, "load balancing" is the solution. VPS boxes can be had pretty cheap these days and we just mirror/duplicate our content on 2-3 of them.
Look at this article and read about "round-robin".
More info about load testing can be found here.
When we first started trying to handle spikes, we simply had our content on 2-3 VPS boxes and added their records in our registrar settings so DNS rotated between them.
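The rotation that round-robin DNS gives you can be sketched in a few lines of Python; the IP addresses below are hypothetical placeholders for the mirrored VPS boxes:

```python
from itertools import cycle

# Hypothetical mirrors, each holding an identical copy of the site.
MIRRORS = ["198.51.100.10", "198.51.100.11", "198.51.100.12"]

# Round-robin DNS hands out these addresses in rotation;
# itertools.cycle models the rotation resolvers effectively see.
rotation = cycle(MIRRORS)

def next_server():
    """Return the next mirror in round-robin order."""
    return next(rotation)
```

Each new lookup lands on the next box in the list, so the load is spread roughly evenly across the mirrors without any dedicated load-balancer hardware.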
You haven't defined "all at once" very well. Let's say you're looking at 200,000 unique visitors in half an hour. That's 111 requests per second, not taking into account visitors that click through and open more pages (which you want, right?).
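That back-of-the-envelope figure is easy to verify:

```python
visitors = 200_000
window_seconds = 30 * 60  # half an hour

# Average request rate if the visitors spread evenly over the window.
requests_per_second = visitors / window_seconds
print(round(requests_per_second))  # prints 111
```

Real traffic won't be evenly spread, so plan for peaks well above that average.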
The first thing I would do is Google stories of people handling similar amounts of traffic. Many people will write about their experiences on their blogs to help others. You'll notice that it's extremely hard to find a story about someone doing it on shared hosting, and there's a reason for that. Look into solutions like Digital Ocean or Amazon Web Services, for starters, using the closest data centre to your audience. And I agree that offloading all your static resources to CloudFlare, even a free account, is an excellent idea.
Aside from that, test your code by adding timing scripts to the top and bottom of your pages, assuming they're dynamic. Assuming my assumption about numbers is correct, you'll need to be able to serve each page in under 10 milliseconds to maintain any sort of acceptable performance. If you're serving all requests through SSL by default, disable that for a couple of days while the storm passes.
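The timing-script idea can be sketched in Python as a small wrapper around whatever renders a page; the function names here are illustrative, not from any particular framework:

```python
import time

def timed_page(render):
    """Wrap a page-rendering function and report how long it took."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()            # "top of the page"
        body = render(*args, **kwargs)
        elapsed = time.perf_counter() - start  # "bottom of the page"
        # Under ~10 ms per page is the target discussed above.
        return body, elapsed
    return wrapper

@timed_page
def homepage():
    # Stand-in for your real template/database work.
    return "<html>...</html>"
```

Log the elapsed time per page and you'll quickly see which pages are nowhere near the budget.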
Also, 200,000 sounds very scary, but bear in mind that you don't need to be too scared (though you should be, a bit). For example, when Paper magazine published NSFW photos of Kim Kardashian, it took just four medium-size web servers and Amazon ELB to handle the load, according to this article (SFW). I definitely don't think your current setup will handle it, but you shouldn't exactly need sixteen web servers with 48 cores each powered by their own small nuclear generator.
Good luck rewriting your website, switching providers and migrating content to a CDN in less than a week.
As you may have realised from the other answers, these are the minimum things you need to do to get your site ready for a large increase in traffic. And if you're currently running on 1and1.co.uk, you probably don't have a strong team of network engineers, DBAs, programmers and front-end optimizers working for you.
It's not likely to happen, is it?
You've not said what you do with your website, whether it runs a shopping cart, or whether it could be implemented with static content. If the latter is the case, then you might just survive the tsunami if you scrape the whole site into static files and publish them in place of the regular site (do back up the current version first).
You should also be speaking to 1and1 (with your credit card in hand).
From my personal experience, I've learned that even the best VPS has its limitations. I'll keep this in layman's terms.
One of our sports websites was hosted on a VPS. During a match between Pakistan and India, we received over 70,000 hits. We had an InMotion Hosting VPS with 4GB RAM, a 2.something GHz processor, 1TB bandwidth, SSD storage and other fancy stuff that comes along. We had a paid version of Cloudflare activated as well.
Just halfway through the match, the website went down. It never came back up during the match, and we lost potentially 70,000+ more visitors. We later learned that our bandwidth had been consumed, and with the origin host down, a CDN is useless most of the time.
Lesson: alongside getting a VPS and using a CDN like Cloudflare, minimize your page size. The smaller, the better. You can make use of page caching and code minification, which come in very handy when handling traffic.
The best option is dedicated servers in a multi-server cluster; that will solve your problem.
If you're on 1and1, you're likely looking for cheap hosting. Cheap hosting means you tend to do everything on one box. A major pain point is that when you host everything on the same box, you're splitting resources between two important parts of your site:
Your web server (Apache, Nginx, etc)
Your database (MySQL, PostgreSQL, etc)
And being 1and1 there's a good chance you're using a control panel like Plesk or cPanel, which means you have an extra layer of things competing for resources. And the final nail in your coffin? You don't have a lot of resources. You have maybe 1 CPU (or a virtual CPU) and very little RAM (if you have more than 2GB I'll be surprised).
When we ditched 1and1 we went with a scalable hosting provider (Amazon Web Services in our case), and we did several things we couldn't do before:
Amazon has its own instances for databases (RDS) and so our database got resources to breathe. Most RDBMS systems live and breathe on RAM and that was something we could get plenty of. Now you can provision SSDs with high I/O as well, making the other DB choke point (writing data) less painful.
We got a load balancer with 2 web servers. With a hefty DB backend we didn't need high end front ends so we got two lower end servers.
We switched to something that could bring fully configured machines up on demand. Using something like Chef or Puppet, you can easily add new web servers, and it's 100% transparent to your end users if done right. AWS also has OpsWorks, so you can build your scripts directly into AWS.
Change your instance size on demand. This is a KEY piece for us. If the DB gets bogged down I can bring it down and relaunch it as a larger one in a couple of minutes. Yes, it would involve downtime but a few mins of downtime is better than hours of a horribly slow site. Totally scared of downtime? Keep a read replica in the wings, then bring that down, switch it to a larger instance, promote to master and you avoid any downtime for the cost of an extra machine.
AWS isn't the only game in town (Azure, Rackspace, etc) but make sure 1and1 can scale to meet your demand.
Consider load testing your site. There are free tools available such as JMeter, The Grinder, and Gatling, which can simulate large numbers of visitors to your site.
By testing the impact of heavy traffic ahead of time, you can determine whether any tuning you've done has been effective, and look at further tuning if not.
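JMeter, The Grinder and Gatling are full-featured tools, but the core idea behind a load test is simple enough to sketch in pure Python. This toy version spins up a throwaway local web server (standing in for your site) and fires concurrent requests at it from a worker pool:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Page(BaseHTTPRequestHandler):
    """Stand-in page so the sketch is self-contained; point a real
    test at your actual site instead."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

def load_test(url, total_requests, concurrency):
    """Fire total_requests at url from a pool of workers.
    Returns (successful_responses, elapsed_seconds)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(lambda _: urlopen(url).status,
                                 range(total_requests)))
    return statuses.count(200), time.perf_counter() - start

# Bind to port 0 to grab a free port, serve in a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok, elapsed = load_test(f"http://127.0.0.1:{server.server_port}/", 50, 10)
```

Divide the request count by the elapsed time to get requests per second, then compare that against the rate you expect on the night.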
Check with your ISP and see if there is a cap on your bandwidth. Upgrade your hosting plan if the bandwidth is insufficient for the amount of traffic you expect. You do not want to show a "Bandwidth Limit Exceeded" message to your visitors.
First of all, I'd recommend Cloudflare. You can create a free basic account, and it will route traffic via local data centres to minimise the number of server hops. Cloudflare's also great for caching content and has DDoS protection.
Other than that, try to trim the fat from your service layer. Make sure you don't have any overly bloated database queries bottlenecking your code, or any CPU intensive logic that could be simplified.
Also try to cache your database queries. Some great options for query caching are Redis or Memcached. Opcode caching is another consideration if you're using an interpreted language.
Don't underestimate how much bandwidth and load time you can save through compressing images as well!
Finally, consider monitoring performance with tools such as New Relic.
Best of luck!!
Source: One of the developers for the 12th most popular site in the UK according to Alexa.
During a high-traffic period, your server needs to handle every request visitors make to your website.
But a server can only hold a limited number of concurrent connections, so it's best to serve each page request as fast as possible.
Here are some suggestions to consider in these situations:
Application level improvements:
1. Minimize HTTP Requests to Speed Up Page Load Times.
a) Combine all JS files together in a single combined JS file, and all CSS files in a single combined CSS file.
b) Minify JS, and CSS files, so the file size will be reduced and it will be downloaded faster.
c) Use CSS Sprites - When you combine most or all of your images into a sprite, you turn multiple images requests into just one. Then you just use the background-image CSS property to display the section of the image you need.
d) Delay image downloads with lazy loading; this helps reduce the number of HTTP requests made on the initial page load.
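Steps a) and b) above, combining and minifying, can be sketched naively in Python. Real minifiers (terser, cssnano and the like) are far more thorough; this just concatenates sources, strips `/* ... */` comments and collapses whitespace:

```python
import re

def combine_and_minify(sources):
    """Concatenate several CSS/JS sources and apply naive minification:
    strip /* ... */ comments and collapse runs of whitespace.
    A real minifier does much more (renaming, dead-code removal, etc)."""
    combined = "\n".join(sources)
    no_comments = re.sub(r"/\*.*?\*/", "", combined, flags=re.DOTALL)
    return re.sub(r"\s+", " ", no_comments).strip()
```

The point is the request count: ten 5 KB stylesheets cost ten round trips; one combined, minified file costs one.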
2. Prepare lightweight pages which are expecting more visits:
a) Exclude decorative elements like images or Flash wherever possible; use text instead of images in the site navigation and chrome, and put most of the content in HTML.
b) Use static HTML pages rather than dynamic ones; the latter place more load on your servers. You can also cache the static output of dynamic pages to reduce server load.
Server level improvements:
1. Reduce server timeout values in consultation with your hosting provider (they shouldn't be set too low).
With lower timeouts, idle connections are released sooner, so the server can handle more concurrent connections.
2. Use third-party services like CloudFlare for static-content caching, and to protect your website from malicious users and attacks like DDoS.
3. Upgrade your server hardware - upgrade physical and virtual memory, and increase I/O and entry-process limits if required. Your hosting provider will be able to advise you.
4. Cache dynamic code - Use APC to cache PHP opcode.
5. Load Balancing - Distribute load across multiple load balancing servers.
Once all the required actions are taken, it's time to check whether the website is ready for a huge traffic spike.
There are third-party services like loadimpact.com which provide load testing with simulated traffic. The analysis will help you understand how much load your website can handle and what can be improved.
Also, during the traffic spike, avoid CPU-intensive operations such as website backup cron jobs.