Multiple servers - crash safe

@Debbie626

Posted in: #Backups #Dns #Downtime #WebHosting

I have 2 servers on different networks. Each hosts a different website (one runs CentOS, the other Mint).

How do I ensure maximum uptime? If one server is overloaded, how do I automatically redirect traffic to the other server?

The website content changes daily. How do I get new content onto the other web server? (Basically I could post new content to both sites by hand, or create a cron job to download new content once a day.) Is there a better solution?

Is this done with DNS servers? I don't really understand the purpose of a slave zone.





2 Comments


@Murray432

The best way to do this is usually to load balance inside your network, where everything is under your control, e.g. with a load-balancing proxy, a floating IP, or internal routing. However, if these servers are on different, unaffiliated networks as you say, that is generally not practical.

The other way to achieve this is DNS failover, but it's less reliable because updates take time to take effect. Put simply, your DNS servers monitor the two web servers and only return A records for the ones that are online: typically domain.tld would return A records for both server 1 and server 2, but if server 1 were down it would return only server 2's.

This is not trivial to set up with most DNS servers, and there are reasons why there aren't many tools around for it. The A records must be set with a very low TTL (e.g. 5 minutes rather than the more common 1 day) so responses aren't cached for long if the server status changes.
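The record-selection logic a DNS failover setup implements can be sketched in a few lines. This is only an illustration of the decision described above, not a working DNS server; the server names, IPs (RFC 5737 documentation addresses), and the `health` input are all hypothetical stand-ins for your monitoring system:

```python
# Sketch of DNS-failover record selection: return A records only for
# servers the health monitor reports as up. Names/IPs are placeholders.

LOW_TTL = 300  # 5 minutes, so stale answers age out of resolver caches quickly

SERVERS = {
    "server1": "192.0.2.1",
    "server2": "198.51.100.1",
}

def a_records(health):
    """Return (ip, ttl) A records for servers reported online.

    `health` maps server name -> bool (True = online). If every server
    looks down, return all records anyway: answering something beats
    returning nothing at all.
    """
    up = [ip for name, ip in SERVERS.items() if health.get(name)]
    if not up:
        up = list(SERVERS.values())
    return [(ip, LOW_TTL) for ip in up]
```

With both servers healthy this returns both records; with server1 down it returns only server2's record, which is exactly the behaviour described above.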

Most of the tools I've seen for hosting it yourself are built for the database-driven PowerDNS server, or you can pay a DNS provider to run failover for you. See serverfault.com/questions/60553/why-is-dns-failover-not-recommended for more on the drawbacks of DNS failover.

As for the content, that's a separate issue, and it depends on how the content is generated. If it's database-driven, you'll first need to replicate the databases (you probably want master-master replication); otherwise, the code-free option is to run a tool like rsync from a regular cron job.
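For the rsync-on-cron option, a crontab entry on the secondary server might look like the following. The hostname and paths are placeholders for illustration only:

```shell
# Hypothetical crontab entry on the secondary server: pull the site's
# files from the primary once a night over SSH.
# m  h  dom mon dow  command
30   3  *   *   *    rsync -az --delete primary.example.com:/var/www/site/ /var/www/site/
```

Be careful with `--delete`: it removes files on the secondary that no longer exist on the primary, which keeps the copies identical but will happily propagate an accidental deletion.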

Just because you set up a theoretically redundant configuration doesn't mean you'll necessarily increase uptime. For example, if one of your servers is much more reliable than the other, it may be better to serve everything from that one and keep the other as a hot spare, if load sharing is not a priority.

Regardless of how you set up DNS failover, users can still try to connect to a down host in the window before it's detected and DNS is updated. On top of that, the old response can be cached for the TTL by the end user's ISP, and some resolvers refresh slowly, which delays the switch even further. It's not a simple or pain-free solution, and it's far from perfect.


@Murphy175

I found this with a quick search:


It's a simple solution. Buy hosting service from two different
providers, preferably in different geographic regions.

For the DNS of your site example.com, use the two different IP
addresses from the providers.

Example: ISP #1 assigns you IP 192.0.2.1, and ISP #2 assigns you IP
198.51.100.1.

Then your DNS entries would look like:

example.com  A  192.0.2.1

example.com  A  198.51.100.1

When you do an nslookup you will get 2 IPs instead of one. Requests
will then be load balanced between the two sites, and if one is down
the other will still be available.

Source: webmasters.stackexchange.com/a/6346/14331
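The "if one is down the other would be available" part depends on the client: when a browser receives several A records and the first address refuses the connection, it typically tries the next one. A minimal sketch of that retry logic, with a stubbed `connect` callable standing in for a real TCP connection attempt:

```python
# Sketch of the client-side failover that plain round-robin DNS relies
# on: try each returned A record in order until one accepts the
# connection. `connect` is a stand-in for a real TCP connect attempt.

def first_reachable(ips, connect):
    """Return the first IP that `connect` succeeds on, else None."""
    for ip in ips:
        if connect(ip):
            return ip
    return None
```

Note that the caveat from the previous answer still applies: the client only fails over after a connection attempt fails or times out, so a down server can still cost each new visitor a noticeable delay.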
