Massive sites like Google, Facebook and Twitter don't necessarily run the 'best' servers: instead of a small number of high-powered machines, they run a huge number of smaller, cheaper ones. They expect hardware to fail and be replaced, and their code is written to tolerate that.
Some things that are typical of massive-scale sites:
They don't rely on SQL databases like MySQL. Instead they use key-value stores like HBase or Cassandra, because MySQL and other relational databases become too slow when the number of requests is huge (a rough access sketch follows after this list).
They cache as much as possible: HTML caching, as you say, and users' data held in memory using things like memcached.
Some sites, like Reddit, pre-cache pages before a user has even requested them.
They pre-calculate as much as possible: sites work out stuff like your number of friends (or whatever) ahead of time and cache that too, so as little as possible is done dynamically (the memcached sketch after this list shows that pattern).
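
As a rough illustration of the key-value store point, here's what a simple keyed read/write might look like with the Python cassandra-driver. The keyspace, table and column names here are made up for the example, not taken from any real deployment:

```python
# Key-value style access with Cassandra via the cassandra-driver package.
# The "social" keyspace and "user_profiles" table are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("social")

# Reads and writes are keyed on user_id, so each request hits a single
# partition instead of running joins across tables.
session.execute(
    "INSERT INTO user_profiles (user_id, name, friend_count) VALUES (%s, %s, %s)",
    (42, "alice", 250),
)
row = session.execute(
    "SELECT name, friend_count FROM user_profiles WHERE user_id = %s",
    (42,),
).one()
print(row.name, row.friend_count)
```

The design choice is that the table is laid out around the query you need to serve, so a lookup is a single partition read rather than an ad-hoc join.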
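
And a minimal cache-aside sketch for the pre-calculated friend count idea, using the pymemcache library. The key layout, the function names and the stubbed database helper are all invented for illustration:

```python
# Cache-aside sketch: serve a pre-computed value (a friend count) from
# memcached, and only fall back to the database on a cache miss.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def query_friend_count_from_db(user_id):
    # Stand-in for the expensive database query the cache is protecting.
    return 250

def get_friend_count(user_id):
    key = f"friend_count:{user_id}"
    cached = cache.get(key)            # None on a cache miss
    if cached is not None:
        return int(cached)
    count = query_friend_count_from_db(user_id)
    cache.set(key, str(count), expire=300)   # keep it hot for 5 minutes
    return count

print(get_friend_count(42))
```

Memcached here is purely an in-memory layer; the authoritative copy of the data still lives in the backing store.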
highscalability.com/ is a great site to learn more about this.