

Multiple DNS Entries




Posted by BostonGuru, 03-16-2008, 09:31 PM
Hi guys, I've been searching through threads on WHT but haven't found anything really conclusive on this. If I were to list two A records for the same domain name on my nameserver, such as:

www.mydomain.com -> 123.234.123.234
www.mydomain.com -> 234.123.234.123

How would most browsers handle this? Would they only try the first entry, or would they try the first entry and, if that IP is unavailable, fall back to the second? Thanks!

Posted by david510, 03-16-2008, 09:49 PM
It will take the one that you added last.

Posted by Panopta, 03-16-2008, 10:18 PM
It really depends on the specific DNS client that's doing the lookup. Some will just take the first address they get from the DNS server; others will try to do more intelligent things. This approach is called Round Robin DNS - there are more details at http://en.wikipedia.org/wiki/Round_robin_DNS
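
To see what this looks like in practice, here is a rough Python sketch that resolves a multi-A name a few times and prints the order of the addresses returned (www.mydomain.com is the OP's placeholder name, and whether the order rotates depends entirely on the resolver you happen to be behind):

    import socket

    # Resolve the same name several times and print the address order each time.
    # With round robin DNS many caches rotate the order between queries, while
    # others always return a fixed order.
    for attempt in range(3):
        results = socket.getaddrinfo("www.mydomain.com", 80, proto=socket.IPPROTO_TCP)
        addresses = [sockaddr[0] for _family, _type, _proto, _name, sockaddr in results]
        print("Lookup", attempt + 1, addresses)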

Posted by plumsauce, 03-16-2008, 10:37 PM
The first thing to remember is that clients very rarely contact the DNS server directly, so you have an additional layer of indirection inserted. Some DNS caches will hand out one address at a time, rotating as they go. Others will hand out all records in the cache.

As far as browsers are concerned, later versions of IE will switch to any other known address for a FQDN. If none is known, a new lookup is issued immediately.

There are also managed DNS services that will manipulate the IP addresses given out based on the current state of the servers, or the geographic origin of the request. The software that does this is usually custom built in-house and not generally available publicly.
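
For a feel of what such a health-check-driven service does internally, here is a minimal Python sketch. Everything in it is an assumption for illustration: check_host is just a bare TCP probe, and update_dns_record is a hypothetical stub standing in for however a real service actually pushes records to its nameservers:

    import socket
    import time

    SERVERS = ["123.234.123.234", "234.123.234.123"]  # the OP's two example IPs

    def check_host(ip, port=80, timeout=5):
        """Return True if a TCP connection to ip:port succeeds within the timeout."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    def update_dns_record(name, ips):
        """Hypothetical stub: a real service would push this record set
        to its authoritative nameservers here."""
        print(name, "->", ips)

    while True:
        healthy = [ip for ip in SERVERS if check_host(ip)]
        # Publish only the servers that passed the check, but never publish
        # an empty record set if everything looks down.
        update_dns_record("www.mydomain.com", healthy or SERVERS)
        time.sleep(60)  # re-check once a minute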

Posted by InfiniteTech, 03-16-2008, 10:38 PM
Isn't round robin based on AUX/priority values? I'm not a DNS expert, but I have my own share of doubts.

Posted by BostonGuru, 03-16-2008, 10:40 PM
Thanks for the responses, guys. From what I understand, round robin is more for load balancing than for redundancy. I figure it would be easy to have a DNS server that checks which servers are online and adjusts the record's IP accordingly, but even if the records have low TTLs, IE still holds on to its DNS cache for at least half an hour. So I was thinking that if IE were able to cache multiple IPs, that might cover the potential half-hour gap.

Posted by plumsauce, 03-16-2008, 10:50 PM
In round robin of A records there is no preference level.

BTW, I just came back to note an inaccuracy in the posted Wikipedia article. It claims that round robin DNS is a means of geographically distributing load. It does nothing of the sort, unless of course you are playing anycast tricks. Round robin is just a simple means of load balancing, and not a very fine-grained one at that.

What the OP might really want is DNS failover, and there are managed DNS services that will handle this too. The best of them handle all of the features mentioned in this thread and let you turn them on and off according to your needs.

I am helping someone now who has just bought (rented, really) a VPS for the sole purpose of being available during scheduled maintenance and unexpected outages. He is planning on using such a service to point at his main production server cluster, which is locally load balanced, with the VPS in a hot standby role. In unexpected outages the service flips the switch automagically, and for scheduled maintenance he will flip the switch manually. So that poor old VPS is going to stand in for four servers sitting behind two load balancers on two IP addresses. But it is only going to be a partial presence, mainly to let his users know that they did reach the right site even if not all services are available at the time. It's a question of branding.

PS. No, IE does not hold for half an hour. It immediately goes out looking again on a failure - that's the latest that I have heard. Of course, the real way to know is to run a sniffer during a simulation.

Posted by foobic, 03-16-2008, 11:06 PM
The published behaviour of recent versions of Firefox is to try the first IP returned by the nameserver, then the second (if any), and so on. But as plumsauce stated, the nameserver your visitors use won't usually be yours, so they may not get the records in the same order. The best you can say is that most modern browsers will try all the IPs you provide. You can't fully control which one they try first, though there may be some bias towards the first one in the list as provided by your nameservers. To my mind it actually works quite well for website redundancy; the problem is the unpredictable load-(un)balancing side effect. As for the earlier suggestion that it will just take the record you added last: I don't believe that's the case. I've not seen any evidence of it.
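
That browser behaviour amounts to roughly the following sketch (Python for illustration only; the timeout value is an assumption, and real browsers are more sophisticated about retries and caching):

    import socket

    def connect_with_failover(host, port=80, timeout=10):
        """Try each address in the order the resolver returned it, falling
        through to the next on failure - roughly what modern browsers do."""
        last_error = None
        for family, socktype, proto, _name, sockaddr in socket.getaddrinfo(
                host, port, proto=socket.IPPROTO_TCP):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(timeout)
                sock.connect(sockaddr)
                return sock  # the first address that answers wins
            except OSError as exc:
                last_error = exc
        raise last_error or OSError("no usable address for " + host)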

Posted by dkitchen, 03-16-2008, 11:09 PM
No, that's absolutely wrong. It doesn't matter what order they are in; it will randomly hit one of the addresses, as has been explained elsewhere in the thread. We have many customers with multiple load balancers set up in a round robin configuration, and each load balancer takes almost exactly the same amount of traffic. DNS failover is usually pretty reliable - you will get the occasional entry cached for longer than you would like, but in general it works pretty well. The only real way to achieve 100% availability is by using anycast, though that's expensive... Dan

Posted by david510, 03-17-2008, 12:18 AM
What I meant was: we add a new entry alongside an existing one and reload via rndc. Now, if we query for the domain's IP from a fresh location, won't the new entry be returned? I am not referring to any failover technique or clustering - just a plain rndc reload.

Posted by dexxtreme, 03-17-2008, 06:02 AM
New requests for that DNS entry will get the new IPs; however, the old IP information will still be cached in any DNS servers that have previously loaded that record. The problem comes in when you have DNS servers that decide to ignore TTLs and hold on to the old information for longer than they should (as much as a week in some cases).

Posted by david510, 03-17-2008, 06:14 AM
Yes, that is why I mentioned "fresh location" earlier.

Posted by foobic, 03-17-2008, 07:35 AM
I was curious about this, so I've been testing it. I now have a domain using two A records on just one authoritative nameserver, set up to always deliver them in the same order. From every ISP nameserver I've tried I get the same result: both IPs are given, but their order reverses on each lookup. Only the OpenDNS nameservers provide them in the same fixed order as my own nameserver. So, until the world starts using OpenDNS, you really can't control the order in which your visitors receive your multiple A records.
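
If you want to repeat that test yourself, a sketch along these lines with the third-party dnspython package will do it (the resolver IP and domain are placeholders, not the ones I used):

    import dns.resolver  # third-party: pip install dnspython (2.x API)

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["208.67.222.222"]  # e.g. an OpenDNS resolver

    # Query the same name a few times and watch whether the answer
    # order rotates between lookups.
    for i in range(5):
        answer = resolver.resolve("www.mydomain.com", "A")
        print(i + 1, [rr.address for rr in answer])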

Posted by wKkaY, 03-17-2008, 08:03 AM
I've posted about this before. Try visiting these two:

http://failover-test1.doubleukay.com
http://failover-test2.doubleukay.com

Both #1 and #2 have two records each: one invalid and one valid. In #1, one address is unrouteable and simulates a server going down hard. Browsers attempting to connect to that IP will time out and then try the other IP. From experience, not many people wait long enough for this failover to happen, instead preferring to hit the refresh button. In #2, one address is reachable but does not have httpd running on it, so a "connection refused" error is returned. Browsers quickly fail over to the other IP.
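
The two failure modes look quite different from a socket's point of view, which is why the second test fails over so much faster. A Python sketch (both probe targets below are generic example values, not the actual test IPs above):

    import socket

    def probe(ip, port=80, timeout=5):
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return "connected"
        except socket.timeout:
            # Unrouteable address: packets just vanish, so the client sits
            # out the full timeout before trying the next A record.
            return "timed out"
        except ConnectionRefusedError:
            # Host is up but nothing listens on the port: the RST comes back
            # immediately, so browsers fail over to the next IP quickly.
            return "connection refused"

    print(probe("203.0.113.1"))    # TEST-NET address, typically unrouteable
    print(probe("127.0.0.1", 81))  # local port with (usually) no listener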

Posted by foobic, 03-17-2008, 08:18 AM
Right - it works quite well as a failover mechanism. The problem, for me, is what happens when both servers are up: you can't define a primary and a secondary, so traffic will go to them both. I think the only way to achieve that would be to block the secondary server (firewall port 80 or stop the webserver), but then you're relying 100% on this browser behaviour for everyday use - something I'm not totally comfortable with.

Posted by BostonGuru, 03-17-2008, 05:17 PM
Wow, thanks for all the information. Foobic, I had actually read somewhere that IE automatically caches DNS info for a minimum of 30 minutes, and Firefox for 60 seconds. I then tested it by creating DNS entries with TTLs of 5 minutes and seeing how long after the TTL expired I saw the changes in Firefox and IE. My findings agree with what I read. I think that using multiple A entries, even if it does take a while for the pages to load, may be a good plan B alongside other redundancy measures. One thing I am concerned about, though: I read that the lower the TTL, the higher the risk of DNS poisoning, and that a TTL of 5 minutes has a 90% chance of getting poisoned within the first hour. Is this true?

Posted by wKkaY, 03-17-2008, 05:28 PM
Yes, I have the same concerns. These days it's not just browsers that hit the webservers, but all sorts of critters like RSS readers, search engines, and proxies.

Posted by plumsauce, 03-17-2008, 11:53 PM
OK, your observations agreed with what you read. But did you test what you meant to test? For example, did you create a server-down situation in order to see how the client reacted? Were you pointed directly at your own DNS servers, or were you pointed at a cache - a cache that you do not control, and that may be ignoring short TTLs? Many ISPs, hoping to save money and load, will set up their caches to honour a "reasonable" (in their estimation) TTL while forcing longer values on an "unreasonable" one. In the end, playing DNS tricks is not as controlled as playing with a local load balancer. There will always be delays in the cutover; this is just a fact of life. But for many sites this is a fair tradeoff - there are not that many sites that cannot take a few minutes of apparent outage.
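
One concrete way to run that check is to compare the TTL your upstream cache hands out against the TTL you actually published - a sketch using the third-party dnspython package again (the domain is a placeholder):

    import dns.resolver  # third-party: pip install dnspython (2.x API)

    # Ask the system's configured (typically ISP) resolver twice in a row.
    # A cache that honours your TTL shows a value counting down from the
    # published figure; a cache that overrides short TTLs will show
    # something larger than what you set on the authoritative server.
    resolver = dns.resolver.Resolver()
    for _ in range(2):
        answer = resolver.resolve("www.mydomain.com", "A")
        print("TTL as seen through the cache:", answer.rrset.ttl)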

Posted by foobic, 03-18-2008, 12:55 AM
Well, it is true that browsers cache DNS records independently of the OS resolver - just one more variable to add in. Personally I've not seen IE cache for any longer than the TTL, but I guess it could be system- or version-dependent, or even a registry tweak. In Firefox, watch out for "performance tweaks" like the FasterFox plugin, which sets network.dnsCacheExpiration to 3600 - that'll really slow down your failover. Edit: meant to reply to the poisoning question too. Running a short TTL means more lookups, so presumably more chances for someone to poison your DNS, but a 90% chance? No, I don't think so! Last edited by foobic; 03-18-2008 at 12:59 AM.

Posted by dH2K, 03-18-2008, 01:07 PM
Hey, I've worked on a scalable, failover-supported architecture for the last half year. We would like to extend this system geographically by deploying components to multiple data centers. As far as I know, there are always trade-offs in these kinds of setups. The goal is to give continuous availability to our customers even if our master data center burns down or is destroyed by a Sunday afternoon Martian attack. I've collected the following approaches to reach this goal:

1. DNS failover (multiple A records)
2. GSLB (global server load balancing)
3. BGP + GSLB

All of the solutions above need 30-120 seconds to switch from the master to the slave data center, and we would like to eliminate this gap. In an active (data center) / standby (data center) setup, does anybody know how we can eliminate the switching time completely? I hope this is not a catch-22. Have a nice day, Tamas


