

Question regarding DNS Failover




Posted by starman2978, 03-24-2010, 01:19 PM
Hello WHT users. I have an important website and I cannot afford to have it down due to server issues. I have 2 servers located in different datacentres, so that if one server goes down the website can operate from the other. A friend suggested that I use DNS failover. I did some research and found that it can be done using DNSMadeEasy's failover DNS service. I am thinking of using it, but I have a question: most ISPs, web browsers and operating systems (Windows, for example) cache DNS records. So if one server goes down, DNSMadeEasy will point requests at the other server, but because of the caching my clients will still continue to connect to the dead server. Can this be avoided?

Posted by Crashus, 03-24-2010, 04:33 PM
1) Do you have static websites?
2) Set the TTL for your DNS zones to 20 minutes; it will help.
3) Use failover DNS within your 2 servers, maybe even without dnsmadeeasy.
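The TTL advice above works because well-behaved resolvers discard a cached answer once its TTL expires and then re-query, picking up the failover IP. A minimal sketch of that cache behaviour (the class name and IPs are illustrative, not part of any real resolver API):

```python
import time

class CachedRecord:
    """A resolver-side cache entry: the answer plus when it expires."""
    def __init__(self, ip, ttl, now=None):
        self.ip = ip
        self.expires_at = (now if now is not None else time.time()) + ttl

    def is_fresh(self, now=None):
        # The record is served from cache only until the TTL runs out.
        return (now if now is not None else time.time()) < self.expires_at

# With a 20-minute (1200 s) TTL, resolvers that honour the TTL will see a
# DNS change within at most 20 minutes of it being published.
record = CachedRecord("203.0.113.10", ttl=1200, now=0)
print(record.is_fresh(now=600))   # 10 minutes in: still served from cache
print(record.is_fresh(now=1300))  # past the TTL: resolver re-queries
```

As the thread notes, some ISP resolvers ignore TTLs and cache longer, so this is an upper bound on propagation only for compliant resolvers.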

Posted by marrtins, 03-24-2010, 06:11 PM
It is quite hard to design such a failover solution between datacenters (DC), because you have to take things like these into account:
*) Do you use a database? How do you plan to replicate it between datacenters so that the whole database is up to date in both DCs?
*) How do you plan to replicate the files on the filesystem between those DCs?
*) For DNS you could try adding multiple A records pointing to the IPs in both DCs. Of course, there must also be some clever DNS server which stops resolving to the IP of a dead server.
Maybe you should try to host your site on some cloud?

Posted by tulaweb, 03-24-2010, 06:57 PM
If I'm working on something where I expect to need to change the DNS on short notice, I set the TTL to 600 seconds (10 minutes) or less.

Posted by foobic, 03-24-2010, 07:08 PM
They should all cache your A records only for as long as the TTL you set, and (depending on the amount of DNS traffic you're willing to take) you can set the TTL as low as 5-15 minutes. Sure, some ISPs and browsers may cache the records for longer than this, but if it's a choice between getting 90% of visitors switched over within 30 minutes vs. having the website down for everyone until the server comes back up, I know which I'd choose. Using two A records is another option: you point the website at both servers at once, and most browsers will try one IP address and, if that fails, move on to the next. But this can take a minute or two, so many visitors won't wait for it, and unless it's a static site, keeping it running constantly on two servers can present you with lots of headaches. The best way to avoid problems: get a very reliable primary server.
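The multiple-A-record behaviour described above can be sketched as a simple try-each-address loop. This is a simulation of the client-side logic, not any browser's actual implementation; the IPs and the `fake_connect` probe are made up for the example (in practice the probe would be something like `socket.create_connection` with a short timeout):

```python
def connect_with_fallback(addresses, try_connect):
    """Try each A record in turn, as clients do with multiple A records.

    `try_connect` is any callable that attempts a connection to one IP
    and raises OSError on failure. Returns the first IP that accepted.
    """
    last_error = None
    for ip in addresses:
        try:
            try_connect(ip)
            return ip
        except OSError as exc:
            last_error = exc  # dead server: fall through to the next record
    raise last_error or OSError("no addresses to try")

# Simulated outage: the first server refuses connections, the second answers.
def fake_connect(ip):
    if ip == "198.51.100.1":
        raise OSError("connection refused")

print(connect_with_fallback(["198.51.100.1", "203.0.113.2"], fake_connect))
# → 203.0.113.2
```

The delay the post mentions comes from the connection timeout on the dead address: the client only moves to the second IP after the first attempt times out or is refused.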

Posted by starman2978, 03-25-2010, 02:57 AM
Thank you everyone for your suggestions. The site is based on PHP scripts with a MySQL database. The files never change, so I can keep copies of them on both servers; as for the database, it changes every week, so I have set up replication between the two MySQL servers. I will try using multiple A records. I also have a small VPS which I could use as an Apache reverse proxy.
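The reverse-proxy idea amounts to running a periodic health check on the VPS and routing to whichever server is up. A minimal sketch of that selection logic, with hypothetical hostnames and a stand-in health probe (in a real setup the probe would be a TCP connect or an HTTP GET against a status page):

```python
def pick_backend(backends, is_healthy):
    """Return the first healthy backend, preferring the primary.

    `backends` is an ordered list (primary first); `is_healthy` is any
    health probe. The reverse-proxy VPS would run a check like this
    periodically and send traffic to whatever it returns.
    """
    for backend in backends:
        if is_healthy(backend):
            return backend
    return None  # everything is down; serve a maintenance page instead

# Hypothetical hostnames for the two datacentre servers; here the probe
# reports the primary as down, so traffic shifts to the secondary.
servers = ["dc1.example.com", "dc2.example.com"]
print(pick_backend(servers, lambda host: host != "dc1.example.com"))
# → dc2.example.com
```

Note the trade-off this thread keeps circling: the proxy removes the DNS-caching problem entirely, but the VPS itself then becomes a single point of failure.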

Posted by madaboutlinux, 03-25-2010, 06:37 AM
To be honest, whatever TTL or number of A records you set, the ISP and local network/machine caches still play an important role. Even once requests are being served from the failover machine, some clients will keep using the old A records for some time, causing downtime for them. So instead of a failover, I would search for a reliable provider/datacenter and get a good master server that itself provides close to 100% uptime.


