Performance Benefits of Google Public DNS

As web pages become more complex, referencing resources from numerous domains, DNS lookups can become a significant bottleneck in the browsing experience. Whenever a client needs to query a DNS resolver over the network, the latency introduced can be significant, depending on the proximity and number of nameservers the resolver has to query (more than 2 is rare, but it can happen). As an example, the following screenshot shows the timings reported by the Page Speed web performance measurement tool. Each bar represents a resource referenced from the page; the black segments indicate DNS lookups. In this page, 13 lookups are made in the first 11 seconds in which the page is loaded. Although several of the lookups are done in parallel, the screenshot shows that 5 serial lookup times are required, accounting for several seconds of the total 11-second page load time.
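
For readers who want to gauge this effect on their own pages, a minimal sketch like the following (in Python, with a hypothetical list of page domains) times each lookup through the system resolver; note that local OS caching will affect the numbers.

    import socket
    import time

    # Hypothetical list of domains referenced by one web page.
    page_domains = ["www.example.com", "static.example.com", "ads.example.net"]

    for name in page_domains:
        start = time.perf_counter()
        try:
            socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP)
            print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms")
        except socket.gaierror as exc:
            print(f"{name}: lookup failed ({exc})")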

There are two components to DNS latency:

  • Latency between the client (user) and DNS resolving server. In most cases this is largely due to the usual round-trip time (RTT) constraints in networked systems: geographical distance between client and server machines; network congestion; packet loss and long retransmit delays (one second on average); overloaded servers, denial-of-service attacks and so on.
  • Latency between resolving servers and other nameservers. This source of latency is caused primarily by the following factors:
    • Cache misses. If a response cannot be served from a resolver’s cache, but requires recursively querying other nameservers, the added network latency is considerable, especially if the authoritative servers are geographically remote.
    • Underprovisioning. If DNS resolvers are overloaded, they must queue DNS resolution requests and responses, and may begin dropping and retransmitting packets.
    • Malicious traffic. Even if a DNS service is overprovisioned, DoS traffic can place undue load on the servers. Similarly, Kaminsky-style attacks can involve flooding resolvers with queries that are guaranteed to bypass the cache and require outgoing requests for resolution.

We believe that cache misses are the dominant cause of DNS latency, and we discuss them further below.

Cache misses

Even if a resolver has abundant local resources, the fundamental delays associated with talking to remote nameservers are hard to avoid. In other words, assuming the resolver is provisioned well enough so that cache hits take zero time on the server side, cache misses remain very expensive in terms of latency. To handle a miss, a resolver has to talk to at least one, but often two or more, external nameservers. In operating the Googlebot web crawler, we have observed an average resolution time of 130 ms for nameservers that respond. However, a full 4-6% of requests simply time out, due to UDP packet loss and servers being unreachable. Once failures such as packet loss, dead nameservers, and DNS configuration errors are taken into account, the actual average end-to-end resolution time is 300-400 ms, with high variance and a long tail.
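
As a rough illustration of the miss penalty, the following sketch times a first lookup (likely a cache miss at the resolver) against an immediate repeat (likely a hit). It assumes the third-party dnspython library (version 2 or later); the test names and the 5-second timeout are arbitrary choices.

    import time
    import dns.exception
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]   # Google Public DNS
    resolver.lifetime = 5.0              # give up after 5 seconds

    def timed_lookup(name):
        """Return the resolution time in milliseconds, or None on timeout/error."""
        start = time.perf_counter()
        try:
            resolver.resolve(name, "A")
        except dns.exception.DNSException:
            return None
        return (time.perf_counter() - start) * 1000

    for name in ["www.example.org", "rarely-visited.example.net"]:
        first, repeat = timed_lookup(name), timed_lookup(name)
        print(f"{name}: first={first} ms (likely miss), repeat={repeat} ms (likely hit)")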

Though the cache miss rate may vary among DNS servers, cache misses are fundamentally difficult to avoid, for the following reasons:

  • Internet size and growth. Quite simply, as the Internet grows through the addition of new users and new sites, most content remains of marginal interest. While a few sites (and consequently DNS names) are very popular, most are of interest to only a few users and are accessed rarely; so the majority of requests result in cache misses.
  • Low time-to-live (TTL) values. The trend towards lower DNS TTL values means that resolutions need more frequent lookups.
  • Cache isolation. DNS servers are typically deployed behind load balancers which assign queries to different machines at random. This results in each individual server maintaining a separate cache rather than being able to reuse cached resolutions from a shared pool.

Mitigations

In Google Public DNS, we have implemented several approaches to speeding up DNS lookup times. Some of these approaches are fairly standard; others are experimental:

  • Provisioning servers adequately to handle the load from client traffic, including malicious traffic.
  • Preventing DoS and amplification attacks. Although this is mostly a security issue, and affects closed resolvers less than open ones, preventing DoS attacks also has a benefit for performance by eliminating the extra traffic burden placed on DNS servers. For information on the approaches we are using to minimize the chance of attacks, see the page on security benefits.
  • Load-balancing for shared caching, to improve the aggregated cache hit rate across the serving cluster.
  • Prefetching name resolutions, to overcome the limits of conventional, passive caching and to serve the majority of requests out of cache. We are experimenting with a DNS prefetching technique which we think offers a significant opportunity for DNS speed-up. Below, we give an overview of the benefits, limitations, and challenges in implementing prefetching, and how we hope to meet those challenges with additional techniques such as traffic prioritization and cache partitioning.
  • Providing global coverage for proximity to all users.

Provisioning serving clusters adequately

Caching DNS resolvers have to perform more expensive operations than authoritative nameservers, since many responses cannot be served from memory; instead, they require communication with other nameservers and thus demand a lot of network input/output. Furthermore, open resolvers are highly vulnerable to cache poisoning attempts, which increase the cache miss rate (such attacks specifically send requests for bogus names that can’t be resolved from cache), and to DoS attacks, which add to the traffic load. If resolvers are not provisioned adequately and cannot keep up with the load, this can have a very negative impact on performance. Packets get dropped and need to be retransmitted, nameserver requests have to be queued, and so on. All of these factors add to delays.

Therefore, it’s important for DNS resolvers to be provisioned for high-volume input/output. This includes handling possible DDoS attacks, for which the only effective solution is to over-provision with many machines. At the same time, however, it’s important not to reduce the cache hit rate when you add machines; this requires implementing an effective load-balancing policy, which we discuss below.

Scaling resolver infrastructure by adding machines can actually backfire and reduce the cache hit rate if load balancing is not done properly. In a typical deployment, multiple machines sit behind a load balancer that equally distributes traffic to each machine, using a simple algorithm such as round robin. The result is that each machine maintains its own independent cache, so that the cached content is isolated across machines. If each incoming query is distributed to a random machine, then depending on the nature of the traffic, the effective cache miss rate can be increased proportionally. For example, for names with long TTLs that are queried repeatedly, the cache miss rate can be increased by a factor equal to the number of machines in the cluster. (For names with very short TTLs, that are queried very infrequently, or that result in uncacheable responses (0 TTL and errors), the cache miss rate is not really affected by adding machines.)
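
To make the arithmetic concrete, here is a toy simulation (in Python, with entirely made-up traffic) of how per-machine caches inflate the miss rate for hot, long-TTL names compared with assigning queries by name.

    import itertools
    import random

    NUM_MACHINES = 8
    # 20 hot, long-TTL names queried 500 times each.
    queries = [f"popular-{i}.example.com" for i in range(20)] * 500
    random.shuffle(queries)

    def count_misses(assign):
        caches = [set() for _ in range(NUM_MACHINES)]
        misses = 0
        for name in queries:
            cache = caches[assign(name)]
            if name not in cache:
                misses += 1
                cache.add(name)   # assume the TTL outlives the whole trace
        return misses

    round_robin = itertools.cycle(range(NUM_MACHINES))
    print("round robin  :", count_misses(lambda name: next(round_robin)))
    print("hash by name :", count_misses(lambda name: hash(name) % NUM_MACHINES))

Run as-is, the round-robin policy misses roughly 8 times as often, because each hot name has to be pulled into every machine's cache, which matches the proportional increase described above.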

To boost the hit rate for highly cacheable names, it’s important to load-balance servers so that the cache is not fragmented. There are two ways to accomplish this: one is to use a global cache that is shared by all machines; the other is to partition the cache by name, so that all queries for one name are sent to the same machine. In Google Public DNS, we use both approaches. One pool of machines shares a small global cache containing the most popular names; these machines are load balanced without any affinity or stickiness. If a query cannot be satisfied from this cache, it is sent to another pool of machines that divide up the cache by (less popular) names. All queries for the same name are sent to the same machine, where the name is either cached or it isn’t.
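
A simplified sketch of that two-tier layout follows; the pool size, hash choice, and helper names are invented for illustration, not a description of Google's implementation.

    import hashlib

    HOT_CACHE = {}                 # tier 1: small shared cache of the most popular names
    PARTITIONED_POOL_SIZE = 16     # tier 2: machines that divide the remaining names
    partitioned_caches = [dict() for _ in range(PARTITIONED_POOL_SIZE)]

    def owner(name: str) -> int:
        """Pick the tier-2 machine for this name; stable across queries."""
        digest = hashlib.sha1(name.lower().encode()).digest()
        return digest[0] % PARTITIONED_POOL_SIZE

    def resolve_recursively(name: str) -> str:
        """Placeholder for real recursive resolution against other nameservers."""
        return "192.0.2.1"         # documentation address standing in for an answer

    def lookup(name: str) -> str:
        if name in HOT_CACHE:      # popular names: any front end can answer
            return HOT_CACHE[name]
        cache = partitioned_caches[owner(name)]   # less popular names: one owner machine
        if name not in cache:
            cache[name] = resolve_recursively(name)
        return cache[name]

The hash only needs to be stable: as long as every front end maps a given name to the same tier-2 machine, that machine's cache sees all the traffic for that name.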

Prefetching name resolutions

As the Internet grows, there is a limit to the cache hit rate a resolver can reach with regular, passive caching: there are relatively few very popular names, and caching the unpopular ones doesn’t help much, since they expire before they are requested again. However, we’d like to be able to serve less popular, less frequently requested names as quickly as we can serve popular names like http://www.gmail.com. Since it’s not acceptable to serve stale or extrapolated results for those names, we also need to ensure that the resolutions are always valid (i.e. have not passed the TTL expiry time).

To accomplish these goals, we are experimenting with aggressively prefetching and refreshing names independently of whether and when users ask for them. There are two major challenges to implementing prefetching properly:

  • Respecting other nameservers’ capacity limits. To ensure that we don’t overload other nameservers with prefetch queries, we impose rate limits on our outgoing requests. To ensure that this self-imposed cap on outgoing capacity is not exhausted by rogue traffic, we divide up the QPS-based quota for each of the nameservers into multiple traffic queues and strictly prioritize prefetch traffic over user traffic (a simplified sketch of such a split follows this list).
  • Managing our own servers’ capacity limits. As our servers don’t have infinite memory and CPU, we need to carefully select the names which we can prefetch. We discuss the challenges and our approaches to selecting names for prefetching below.

    In addition, to protect the cache from being eaten up by bogus entries from attackers, we separate the main cache into primary and secondary partitions and give read/write access to each partition according to traffic type.
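
The sketch below shows one way such a split could look: a token bucket per nameserver whose capacity is drained by strictly prioritized queues. The quota value, queue names, and class structure are hypothetical.

    import time
    from collections import deque

    class PrioritizedSender:
        """Token bucket for one nameserver, drained by strict-priority queues."""

        def __init__(self, qps):
            self.qps = qps                        # self-imposed per-nameserver cap
            self.tokens = qps
            self.last_refill = time.monotonic()
            # Strict priority: prefetch refreshes go out before user-driven fills.
            self.queues = {"prefetch": deque(), "user": deque()}

        def enqueue(self, query, traffic_type):
            self.queues[traffic_type].append(query)

        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.qps, self.tokens + (now - self.last_refill) * self.qps)
            self.last_refill = now

        def drain(self, send):
            """Send as many queued queries as the remaining quota allows."""
            self._refill()
            for queue in (self.queues["prefetch"], self.queues["user"]):
                while queue and self.tokens >= 1:
                    send(queue.popleft())
                    self.tokens -= 1

    sender = PrioritizedSender(qps=100)           # hypothetical 100 QPS quota
    sender.enqueue("www.example.com A", "user")
    sender.enqueue("cdn.example.net A", "prefetch")
    sender.drain(print)                           # the prefetch query goes out first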

To maximize the use of the fixed nameserver-QPS limit we have imposed, we need to weigh the benefit of each name record against its cost (nameserver capacity and local CPU and RAM), which is inversely proportional to its TTL. However, determining the benefit of a given name record is a complex task, as we’d like to achieve all of the following, sometimes competing, goals:

  • Minimizing the incidence of cache misses. This implies that we don’t need to independently prefetch popular names such as http://www.google.com and ad.doubleclick.com, since they are likely to be kept fresh by regular user traffic.
  • Serving popular names even when our servers are under DoS attacks that are exhausting our outbound capacity. This implies that we should, in fact, prefetch popular names.
  • Avoiding bogus names injected by deliberate pollution attacks. This implies that we can’t rely solely on user traffic as the source of names to select.
  • Keeping the set of selected names up to date. We need to be able to adjust the candidate names according to shifts in popularity, cost, etc.

The complexity of the name selection problem makes it impossible to solve online, so we have separated the prefetch system into two components: a pipeline component, which runs as an external, offline, periodic process that selects the names to commit to the prefetch system; and a runtime component, which regularly resolves the selected names according to their TTL windows.

The pipeline component synthesizes the names from a variety of sources, mainly the Google web search index and recent Google Public DNS server logs, and ranks them according to benefit. Benefit is determined by a combination of various factors, including hit rate, the cost of a cache miss, and popularity. The pipeline then factors in the TTL cost; re-ranks all the records; and selects all the names from the top of the list until the cost budget is exhausted. The pipeline continuously updates the list of selected names. The runtime component continuously resolves the selected name records and refreshes them according to their TTL settings.
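
A highly simplified sketch of the selection step might look like the following; every name, score, and budget value is invented. Benefit comes from an external ranking, refresh cost is roughly 1/TTL in queries per second, and names are taken from the top of the benefit-per-cost ranking until the refresh budget is spent.

    # Hypothetical budget of outgoing prefetch queries per second.
    REFRESH_QPS_BUDGET = 0.005

    candidates = [
        # (name, benefit score from logs / search index, TTL in seconds)
        ("cdn.example.com",   900.0,  300),
        ("api.example.net",   400.0,   60),
        ("img.example.org",   250.0, 3600),
        ("rare.example.info",   5.0,   30),
    ]

    def refresh_cost_qps(ttl_seconds):
        """Keeping a record fresh costs roughly one outgoing query per TTL window."""
        return 1.0 / ttl_seconds

    # Rank by benefit per unit of refresh cost, then take names from the top
    # of the list until the budget is exhausted.
    ranked = sorted(candidates, key=lambda c: c[1] / refresh_cost_qps(c[2]), reverse=True)

    selected, spent = [], 0.0
    for name, benefit, ttl in ranked:
        cost = refresh_cost_qps(ttl)
        if spent + cost <= REFRESH_QPS_BUDGET:
            selected.append(name)
            spent += cost

    print("names committed to the prefetch runtime:", selected)

The runtime side then only needs to walk the selected list and re-resolve each name shortly before its TTL expires, so cached answers never go stale.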

Providing global coverage for proximity to all users

For closed resolvers, this is not really an issue. For open resolvers, the closer your servers are located to your users, the less latency they will see at the client end. In addition, having sufficient geographical coverage can indirectly improve end-to-end latency, as nameservers typically return results optimized for the DNS resolver’s location. That is, if a content provider hosts mirrored sites around the world, that provider’s nameservers will return the IP address in closest proximity to the DNS resolver.

Google Public DNS is hosted in data centers worldwide, and uses anycast routing to send users to the geographically closest data center.

Note, however, that because nameservers geolocate according to the resolver’s IP address rather than the user’s, Google Public DNS has the same limitations as other open DNS services: that is, the server to which a user is referred might be farther away than one to which a local DNS provider would have referred them. This could cause a slower browsing experience for certain sites.