<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
      <title>Blogs on TurboBytes</title>
      <generator uri="https://hugo.spf13.com">Hugo</generator>
    <link>https://www.turbobytes.com/blog/index.xml/</link>
    <language>en-us</language>
    
    
    <updated>Mon, 16 Oct 2017 00:00:00 &#43;0000</updated>
    
    <item>
      <title>CDNetworks joins TurboBytes&#39; global Multi-CDN platform</title>
      <link>https://www.turbobytes.com/blog/cdnetworks-joins-turbobytes-multi-cdn/</link>
      <pubDate>Mon, 16 Oct 2017 00:00:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/cdnetworks-joins-turbobytes-multi-cdn/</guid>
      <description>

&lt;p&gt;
    We&#39;re excited to announce CDNetworks is joining our Multi-CDN platform! 
    Our customers&#39; businesses will benefit greatly from lower latency and higher availability across the globe, especially in Russia, the Middle East and emerging markets in Asia.
    Furthermore, customers who need to deliver content to users in China, but don&#39;t have an ICP license, can expect huge performance gains.
&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Russia&lt;/h2&gt;
&lt;p&gt;
    CDNetworks has &lt;a href=&#34;/products/optimizer/network-map/#cdn&#34;&gt;15 POPs in Russia&lt;/a&gt;, including in Moscow, Yekaterinburg and Novosibirsk.
    The strong POP presence translates to excellent performance for the tens of millions of users in Russia.
&lt;/p&gt;
&lt;p&gt;
    The chart below shows the average CDN response times in Russia for several CDNs, based on our millions of real-world performance tests in the past week.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cdn-performance-russia-oct-2017.png&#34; class=&#34;m-b-20&#34; width=&#34;490&#34; height=&#34;355&#34; alt=&#34;CDN performance in Russia&#34;&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;China&lt;/h2&gt;
&lt;p&gt;
    Need great performance in China but don&amp;rsquo;t have an ICP license?
    CDNetworks is your solution: it is the best CDN for delivering content &lt;i&gt;into&lt;/i&gt; mainland China from POPs outside the country.
&lt;/p&gt;
&lt;p&gt;
    ICP license holders can benefit from even better performance in China by utilizing CDNetworks&amp;rsquo; many POPs in mainland China.
    This service is not yet available through TurboBytes, but we&amp;rsquo;re actively looking into this.
&lt;/p&gt;
&lt;p&gt;
    The two charts below show the response times and fail ratio of &amp;ldquo;EdgeCast China&amp;rdquo; (available through TurboBytes, ICP required), CDNetworks and a few other CDNs.
    &amp;ldquo;EdgeCast China&amp;rdquo; performs best because content is delivered from POPs in Beijing and Shanghai, but CDNetworks greatly outperforms all the other CDNs.
    The average response time of CDNetworks is about 90 ms, while most other CDNs are &amp;gt;200 ms. More importantly, the reliability of CDNetworks is much higher, as shown in the failratio chart (lower is better).
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cdn-performance-china-oct-2017-.png&#34; class=&#34;m-b-20&#34; width=&#34;490&#34; height=&#34;355&#34; alt=&#34;CDN performance in China&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cdn-performance-failratio-china-oct-2017.png&#34; class=&#34;m-b-20&#34; width=&#34;490&#34; height=&#34;355&#34; alt=&#34;CDN fail ratio in China&#34;&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Portugal&lt;/h2&gt;
&lt;p&gt;
    CDNetworks is one of the few CDNs that has edge servers in the country, resulting in 30% lower response times.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cdn-performance-portugal-oct-2017.png&#34; class=&#34;m-b-20&#34; width=&#34;490&#34; height=&#34;355&#34; alt=&#34;CDN performance in Portugal&#34;&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Turkey&lt;/h2&gt;
&lt;p&gt;
    CDNetworks&amp;rsquo; two POPs, in Ankara and Istanbul, make a clear difference:
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cdn-performance-turkey-oct-2017.png&#34; class=&#34;m-b-20&#34; width=&#34;490&#34; height=&#34;355&#34; alt=&#34;CDN performance in Turkey&#34;&gt;
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>The real impact of the DDoS against Dyn</title>
      <link>https://www.turbobytes.com/blog/real-impact-of-ddos-against-dyn/</link>
      <pubDate>Sat, 22 Oct 2016 18:47:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/real-impact-of-ddos-against-dyn/</guid>
      <description>&lt;p&gt;
    Yesterday, October 21 2016, there was a large DDoS attack against Dyn, one of the leading authoritative DNS providers. 
    The attack started around 11 AM UTC and lasted for hours, severely hurting the reachability of big-name sites like Twitter, GitHub and PayPal. 
    It &lt;a href=&#34;https://status.fastly.com/incidents/50qkgsyvk9s4&#34;&gt;hurt the CDN Fastly&lt;/a&gt; too. 
    &lt;a href=&#34;http://www.nytimes.com/2016/10/22/business/internet-problems-attack.html&#34;&gt;Mainstream&lt;/a&gt; &lt;a href=&#34;https://www.washingtonpost.com/news/the-switch/wp/2016/10/21/someone-attacked-a-major-part-of-the-internets-infrastructure/&#34;&gt;media&lt;/a&gt; logically picked up on the story, and it seems the attack was carried out &lt;a href=&#34;https://www.flashpoint-intel.com/mirai-botnet-linked-dyn-dns-ddos-attacks/&#34;&gt;using the Mirai IoT botnet&lt;/a&gt;. The exact nature and scale of this attack are currently not known to the public.
    24 hours after the attack ended, it&#39;s still a &lt;a href=&#34;https://twitter.com/search?q=Dyn%20ddos&amp;src=typd&#34;&gt;much-discussed topic on Twitter&lt;/a&gt;.
&lt;/p&gt;
&lt;p&gt;
    This article is not about what exactly happened, who was behind the attack or why they did it. 
    We want to show the &lt;strong&gt;real&lt;/strong&gt; impact of the attack on the performance of Dyn&#39;s authoritative DNS service globally, using our unique &lt;a href=&#34;/blog/introducing-rum-for-dns/&#34;&gt;RUM for DNS&lt;/a&gt; data. 
    TurboBytes monitors the real-world performance of authoritative DNS providers from across the globe, 24/7, by running tests in the browsers of millions of people that are connected to thousands of networks.
&lt;/p&gt;

&lt;h2&gt;First wave: it was not just US-East and the impact was huge&lt;/h2&gt;
&lt;p&gt;
    &lt;a href=&#34;https://www.dynstatus.com/incidents/5r9mppc1kb77&#34;&gt;Dyn official report&lt;/a&gt;: &lt;i&gt;&#34;On Friday October 21, 2016 at approximately 11:10 UTC, Dyn came under attack by a large Distributed Denial of Service (DDoS) attack against our Managed DNS infrastructure in the US-East region. Customers affected may have seen regional resolution failures in US-East and intermittent spikes in latency globally. Dyn&#39;s engineers were able to successfully mitigate the attack at approximately 13:20 UTC, and shortly after, the attack subsided.&#34;&lt;/i&gt;
&lt;/p&gt;
&lt;p&gt;
    This text tells very little about the level of pain Dyn customers had. You &lt;i&gt;may&lt;/i&gt; read it as if only &lt;i&gt;some&lt;/i&gt; customers&#39; sites were unreachable for &lt;i&gt;some&lt;/i&gt; users (in US-East) and there was an &lt;i&gt;occasional&lt;/i&gt; slow response on other continents.&lt;br&gt;
    Our RUM for DNS data clearly shows it was pretty bad:
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/dns-rum-us-ny-failratio-dyn-vs-others-20161021.png&#34;&gt;
    The chart above shows the Failratio of Dyn and other DNS providers as measured through recursive resolvers (Google Public DNS, OpenDNS and ISP resolvers) in the state of New York. Our tests are initiated in the browser and use a random subdomain to force the recursive to get the response from the authoritative.&lt;br&gt;
    During the two-hour attack, the Failratio averaged 45% and peaked at 80%. In our book, that qualifies as an outage.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/dns-rum-us-ny-responsetime-dyn-vs-others-20161021.png&#34;&gt;
    &lt;i&gt;If&lt;/i&gt; Dyn&#39;s servers responded at all, they often did so very slowly.
&lt;/p&gt;
&lt;p&gt;
    From our data we can confirm the problems in the US were limited to US-East. But what happened outside the US?
    Most countries were just fine, but many users in Germany and France definitely noticed Dyn failing during the first wave of the attack:
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/dns-rum-germany-failratio-dyn-vs-others-20161021.png&#34;&gt;
    &lt;br&gt;
    &lt;img src=&#34;/images/dns-rum-france-failratio-dyn-vs-others-20161021.png&#34;&gt;
&lt;/p&gt;
    
&lt;h2&gt;Second wave: 5 hours of big pain across continents&lt;/h2&gt;
&lt;p&gt;
    &lt;a href=&#34;https://www.dynstatus.com/incidents/5r9mppc1kb77&#34;&gt;Dyn official report&lt;/a&gt;: &lt;i&gt;&#34;At roughly 15:50 UTC a second DDoS attack began against the Managed DNS platform. This attack was distributed in a more global fashion. Affected customers may have seen intermittent resolution issues as well as increased global latency. At approximately 17:00 UTC, our engineers were again able to mitigate the attack and service was restored.&#34;&lt;/i&gt;&lt;br&gt;
    That 17:00 end time may be a typo, as the &lt;a href=&#34;https://www.dynstatus.com/incidents/nlr4yrr162t8&#34;&gt;original status post&lt;/a&gt; states the attack lasted much longer and the incident was marked as Resolved at 22:17 UTC.&lt;br&gt;
    Our data shows the second wave indeed started at roughly 15:50 UTC and ended at ~ 20:30 UTC.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/dns-rum-us-failratio-dyn-vs-others-wave2-20161021.png&#34;&gt;
    &lt;br&gt;
    &lt;img src=&#34;/images/dns-rum-us-failratio-dyn-vs-others-wave2b-20161021.png&#34;&gt;
    In the US the Failratio averaged 32% and again peaked at 80%. 
    Dyn&#39;s performance looked different in other countries, like the United Kingdom and France:
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/dns-rum-uk-failratio-dyn-vs-others-wave2-20161021.png&#34;&gt;
    &lt;br&gt;
    &lt;img src=&#34;/images/dns-rum-france-failratio-dyn-vs-others-wave2-20161021.png&#34;&gt;
    In both of these major European markets, the Failratio peaked at 100% and this peak lasted 30 minutes. That means no query to Dyn got a response! 
    After the peak the Failratio dropped to ~ 10% in the United Kingdom, but in France it stabilized at a still very high 60%. 
    The France chart also shows the performance of other DNS providers was degraded for two hours. 
    After taking a closer look at our data it became clear this pattern showed up only on AS3215, which belongs to Orange, the biggest consumer ISP in France. 
    &lt;img src=&#34;/images/dns-rum-france-as3215-failratio-dyn-vs-others-wave2-20161021.png&#34;&gt;
    Dyn was completely down from AS3215: the Failratio was stable at 100%. Ouch.
    This must have impacted millions of people!&lt;br&gt;
    Interestingly, for two hours the other major DNS providers also suffered on Orange&#39;s network. 
    Is this related to the attack against Dyn? Maybe. We don&#39;t know. 
    &lt;img src=&#34;/images/dns-rum-brasil-failratio-dyn-vs-others-wave2-20161021.png&#34;&gt;
    The websites of Dyn customers suffered in Brazil as well.
&lt;/p&gt;

&lt;h2&gt;Pulse: instant diagnostics from eyeball networks&lt;/h2&gt;

&lt;p&gt;
    During the second wave, we ran some &lt;a href=&#34;https://pulse.turbobytes.com/&#34;&gt;Pulse&lt;/a&gt; tests to view the real-world DNS behaviour from 80 machines around the globe, most connected to consumer ISP networks.
&lt;/p&gt;
&lt;p&gt;
    The &lt;a href=&#34;https://pulse.turbobytes.com/results/580a5178ecbe402e2201a74c/&#34;&gt;first test&lt;/a&gt; ran at 17:33:44 UTC and showed a &lt;strong&gt;70.1% error rate&lt;/strong&gt; when our Pulse agents tried to query for &lt;code&gt;soundcloud.com&lt;/code&gt; directly from Dyn&#39;s servers (&lt;code&gt;ns1.p20.dynect.net.&lt;/code&gt;, &lt;code&gt;ns2.p20.dynect.net.&lt;/code&gt;, &lt;code&gt;ns3.p20.dynect.net.&lt;/code&gt;, &lt;code&gt;ns4.p20.dynect.net.&lt;/code&gt;).
&lt;/p&gt;
&lt;p&gt;
    The &lt;a href=&#34;https://pulse.turbobytes.com/results/580a51ceecbe402e2201a74e/&#34;&gt;second test&lt;/a&gt; ran at 17:35:10 UTC. This time the Pulse agents queried for &lt;code&gt;soundcloud.com&lt;/code&gt; via Google Public DNS, OpenDNS and their ISP resolvers. &lt;strong&gt;The error rate was 40.6%&lt;/strong&gt;.
&lt;/p&gt;
&lt;p&gt;
    The record in question has a TTL of 600 (10 minutes), so logically the error rate was lower when querying recursive resolvers because some could serve the response from cache and did not have to reach out to Dyn. Another possible explanation for the difference in error rates is that in an effort to mitigate attack traffic, Dyn may have blocked end-user IPs from hitting their nameservers in order to give recursives a higher chance of reaching Dyn. In hindsight, we should have run more Pulse tests using random subdomains to conclusively test recursives&#39; reachability to Dyn.
&lt;/p&gt;
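&lt;p&gt;
    For illustration, a similar two-sided check can be reproduced in a few lines of Node.js. The sketch below (TypeScript, assuming Node 18+ and network access; it is not how Pulse agents work internally, and the choice of nameserver and public resolver is just an example) first asks one of Dyn&#39;s authoritative nameservers for &lt;code&gt;soundcloud.com&lt;/code&gt; directly, and then asks a public recursive resolver for the same name:
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Sketch only: compare a direct authoritative query with a query via a recursive resolver.
// Assumes Node 18+; run with ts-node or compile with tsc.
import { resolve4, Resolver } from &#39;node:dns/promises&#39;;

async function queryVia(serverIp: string, name: string) {
  const r = new Resolver();
  r.setServers([serverIp]);      // send this lookup to one specific DNS server
  return r.resolve4(name);       // rejects on SERVFAIL, timeouts, etc.
}

async function main() {
  // 1) Direct: find the IP of an authoritative nameserver, then ask it for the A record.
  const nsIps = await resolve4(&#39;ns1.p20.dynect.net&#39;);
  console.log(&#39;authoritative answer:&#39;, await queryVia(nsIps[0], &#39;soundcloud.com&#39;));

  // 2) Recursive: ask Google Public DNS, which may still answer from cache (TTL 600).
  console.log(&#39;recursive answer:&#39;, await queryVia(&#39;8.8.8.8&#39;, &#39;soundcloud.com&#39;));
}

main().catch(function (err: any) { console.error(&#39;lookup failed:&#39;, err.code || err); });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;
    A comparison like this from a single vantage point only tells a small part of the story, of course; the value of Pulse is that the same queries run from many eyeball networks at once.
&lt;/p&gt;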

&lt;p&gt;&lt;h2&gt;Closing remarks&lt;/h2&gt;
&lt;p&gt;
    This attack was a big one. The impact was tremendous because it lasted for several hours and it targeted a leading DNS provider, bringing down many popular websites and online services across multiple continents.
    TurboBytes is in a unique position to see the real-world impact for users on consumer ISP networks. Our RUM for DNS data does not lie.
&lt;/p&gt;
&lt;p&gt;
    On April 2, 2015, we wrote the blog post &lt;a href=&#34;/blog/why-use-two-dns-providers/&#34;&gt;Why You Should Use Two DNS Providers&lt;/a&gt;. It explains how recursive resolvers work and why using more than one DNS provider makes your website or online service much more reliable and resilient against an attack like this one against Dyn.
&lt;/p&gt;
&lt;p&gt;
    The TurboBytes tool Pulse can come in handy when you want to diagnose DNS performance on eyeball networks worldwide.
    Next time you think something may be wrong, visit &lt;a href=&#34;https://pulse.turbobytes.com/&#34;&gt;https://pulse.turbobytes.com/&lt;/a&gt;.
&lt;/p&gt;
&lt;p&gt;
    Consider using OpenDNS as your primary resolver, from home and the office.
    Their founder David Ulevitch tweeted yesterday about a cool feature that helps you experience the Internet as you expect it to be even if a DNS provider like Dyn is down:&lt;br&gt;
    &lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Pro-tip: OpenDNS users generally see the Internet as they should. We do a good job of handing &amp;quot;last known good&amp;quot; IPs when we can&amp;#39;t resolve.&lt;/p&gt;&amp;mdash; ☁ David Ulevitch ☁ (@davidu) &lt;a href=&#34;https://twitter.com/davidu/status/789515590297800704&#34;&gt;October 21, 2016&lt;/a&gt;&lt;/blockquote&gt;
    &lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Level3 and SwiftServe boost TurboBytes&#39; Multi-CDN performance</title>
      <link>https://www.turbobytes.com/blog/level3-swiftserve-boost-multi-cdn-performance/</link>
      <pubDate>Tue, 16 Feb 2016 16:00:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/level3-swiftserve-boost-multi-cdn-performance/</guid>
      <description>&lt;p&gt;
    We&#39;re excited to announce two great CDN providers are joining our Multi-CDN platform: Level3 and SwiftServe. 
    Our customers&#39; businesses will benefit greatly from lower latency and higher availability across the globe, especially in Russia, Latin America, the Middle East and emerging markets in Asia. 
    Together, Level3 and SwiftServe add 75 new POP locations to our &lt;a href=&#34;/products/optimizer/network-map/#cdn&#34;&gt;Multi-CDN network map&lt;/a&gt;.
&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Level3&lt;/h2&gt;
&lt;p&gt;
    Level3 owns and operates the world&#39;s largest Tier-1 network and provides a range of products and services, including a CDN.
    The Level3 CDN is used by companies like Apple and Netflix to deliver web objects, downloads and video to users worldwide.
    A big strength of the Level3 CDN is its high number of POPs in locations where most other CDNs are not present.
&lt;/p&gt;
&lt;p&gt;
    The chart below shows the daily median response time (TTFB) of our 5 CDNs in Russia, South Africa, Qatar and Chile,
    based on our millions of real-world performance tests in the past weeks.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cdn-performance-ru-za-qa-cl.png&#34; class=&#34;m-b-20&#34; width=&#34;660&#34; height=&#34;355&#34; alt=&#34;CDN performance in Russia, South Africa, Qatar, Chile&#34;&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;SwiftServe&lt;/h2&gt;
&lt;p&gt;
    SwiftServe is a high-growth CDN provider with offices in Singapore and the UK and a strong focus on Asia.
    They have partnerships with major ISPs in Vietnam, Thailand, Indonesia and the United Arab Emirates, which results in best-in-class performance in those markets. SwiftServe is launching new POPs in mainland China and India soon.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cdn-performance-vn-th-id-ae.png&#34; class=&#34;m-b-20&#34; width=&#34;658&#34; height=&#34;355&#34; alt=&#34;CDN performance in Vietnam, Thailand, Indonesia, United Arab Emirates&#34;&gt;
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>CloudFront outage in the UK on BT lasted 6.5 hours</title>
      <link>https://www.turbobytes.com/blog/cloudfront-outage-bt-lasted-7-hours/</link>
      <pubDate>Tue, 25 Aug 2015 13:00:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/cloudfront-outage-bt-lasted-7-hours/</guid>
      <description>&lt;p&gt;
    CloudFront had a bad day in the UK yesterday, causing many websites to be unavailable to the millions of people on the BT network. The outage lasted 6.5 hours, during the day. Ouch.&lt;br&gt; We&#39;ll show you the problem was in DNS, and how you - a CloudFront user/customer - can easily and quickly do diagnostics on your CDN, from many consumer ISP networks across the globe.
&lt;/p&gt;
&lt;p&gt;
    It&#39;s not the first time CloudFront has had DNS-related availability problems. On Nov 27 2014 &lt;a href=&#34;/blog/cloudfront-cdn-global-outage/&#34;&gt;CloudFront had a major global outage&lt;/a&gt; lasting ~ 90 minutes. 
    Yesterday&#39;s CloudFront outage in Great Britain was different, because it occurred on a single ISP network only, but it lasted for a very long time. What happened? In short: the DNS resolvers of BT sent &#39;empty&#39; responses for &lt;code&gt;&amp;lt;something&amp;gt;.cloudfront.net&lt;/code&gt; queries, resulting in browsers and apps not being able to connect to a CloudFront CDN server.
&lt;/p&gt;
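&lt;p&gt;
    An &#39;empty&#39; answer like that is easy to detect programmatically. The sketch below (TypeScript on Node 18+; the resolver IP is a placeholder from the TEST-NET range and the CloudFront hostname is the example distribution name from the AWS docs, so substitute your own values) distinguishes a normal answer, a NOERROR response with an empty ANSWER section, and an outright failure:
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Sketch only: detect a NOERROR response that carries no ANSWER section.
// The resolver IP below is a placeholder - point it at the resolver you want to test.
import { Resolver } from &#39;node:dns/promises&#39;;

async function checkResolver(resolverIp: string, name: string) {
  const r = new Resolver();
  r.setServers([resolverIp]);
  try {
    const addrs = await r.resolve4(name);
    console.log(resolverIp, &#39;answered with&#39;, addrs);
  } catch (err: any) {
    if (err.code === &#39;ENODATA&#39;) {
      // NOERROR but no ANSWER section: the browser gets no IP address to connect to.
      console.log(resolverIp, &#39;sent an empty answer for&#39;, name);
    } else {
      console.log(resolverIp, &#39;lookup failed:&#39;, err.code);
    }
  }
}

checkResolver(&#39;192.0.2.53&#39;, &#39;d111111abcdef8.cloudfront.net&#39;);
&lt;/code&gt;&lt;/pre&gt;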

&lt;p&gt;&lt;h2&gt;RUM does not lie&lt;/h2&gt;
&lt;p&gt;
    Here at TurboBytes, we constantly monitor performance of CDNs with RUM (Real User Monitoring) from within browsers of people at home and at work, everywhere in the world. We use this data to power our Multi-CDN service.
    Our non-blocking JS executes after page load and then, silently in the background, fetches a 15 KB object from a few CDNs and beacons the load time details to our servers. If the 15 KB object fails to load within 5 seconds, we beacon a Fail.
&lt;/p&gt;
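&lt;p&gt;
    For illustration, a test of this kind boils down to something like the sketch below (simplified, not our production snippet; the object URL and beacon endpoint are hypothetical, and it assumes the test object is served with permissive CORS headers):
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Simplified sketch of a RUM-style CDN test: fetch a small object with a 5 s budget,
// then beacon the result. The URLs are placeholders and the object is assumed to allow CORS.
async function testCdn(objectUrl: string, beaconUrl: string) {
  const started = performance.now();
  let ok = false;
  try {
    // AbortSignal.timeout() aborts the fetch if it takes longer than 5000 ms.
    const res = await fetch(objectUrl, { cache: &#39;no-store&#39;, signal: AbortSignal.timeout(5000) });
    await res.arrayBuffer();        // make sure the full object body actually arrived
    ok = true;
  } catch {
    ok = false;                     // timed out or network error: this becomes a Fail
  }
  const elapsed = Math.round(performance.now() - started);
  navigator.sendBeacon(beaconUrl, JSON.stringify({ url: objectUrl, ok: ok, ms: elapsed }));
}

testCdn(&#39;https://cdn-under-test.example.com/rum/15kb.bin&#39;, &#39;https://rum.example.com/beacon&#39;);
&lt;/code&gt;&lt;/pre&gt;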
&lt;p&gt;
    In the chart below (UTC time zone), a vertical blue line was drawn for every test that passed (browser fetched 15 KB object from CDN within 5000 ms) and a vertical red line was drawn for every test that failed. Before and after the CloudFront outage it&amp;rsquo;s clear there is a lot more blue than red. During the outage it&amp;rsquo;s almost all red.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/cloudfront-outage-uk-bt-as2856-20150824-failratio.png&#34; class=&#34;m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34; alt=&#34;CloudFront Failratio in UK on BT network - Aug 24 2015&#34;&gt;
    The Failratio jumps around 10:10 UTC and you can clearly see it&amp;rsquo;s a hard &amp;lsquo;break&amp;rsquo;. In the following 6.5 hours some &amp;lsquo;ok&amp;rsquo; beacons do come in (likely because some users on BT have configured their machines to use Google Public DNS or OpenDNS), but by far most beacons show failure. Around 16:40 UTC the problem was fixed.
&lt;/p&gt;
&lt;h3&gt;AWS status page&lt;/h3&gt;
&lt;p&gt;
    AWS first reported the problem about two and a half hours after it started. That&amp;rsquo;s not great.
    Also, and surprisingly, it was classified as &amp;lsquo;Informational message&amp;rsquo; and not as &amp;lsquo;Performance issues&amp;rsquo; or &amp;lsquo;Service disruption&amp;rsquo;.&lt;br&gt;
    &lt;img src=&#34;/images/cloudfront-bt-aws-status-page.png&#34; width=&#34;321&#34; height=&#34;294&#34; class=&#34;m-t-20&#34;&gt;&lt;br&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;How Pulse helped us quickly diagnose the problem&lt;/h2&gt;
&lt;p&gt;
    We first heard about the CloudFront outage from a tweet by &lt;a href=&#34;https://twitter.com/OpenRent/status/635822393345318912&#34;&gt;@OpenRent&lt;/a&gt;, who responded to a site visitor and linked to Pulse test results. We then ran an HTTPS test on Pulse against their CloudFront endpoint and &lt;a href=&#34;https://pulse.turbobytes.com/results/55db2e50ecbe400bf800143a/&#34;&gt;all seemed fine&lt;/a&gt;. The @OpenRent person gave a good hint as to why that HTTPS test did not show the CloudFront problem: &amp;ldquo;&amp;hellip; or the Agent is using a different public DNS&amp;rdquo;. Ah, yes, that must be it! We then quickly ran a &lt;a href=&#34;https://pulse.turbobytes.com/results/55db3196ecbe400bf800143b/&#34;&gt;DNS test&lt;/a&gt; (scroll down to agent 128-Lee-Armstrong, located in Portsmouth) and this gave insight into what was going on: BT resolvers sent a NOERROR response without an ANSWER section, meaning the client (browser/app) gets no IP address to connect to.
&lt;/p&gt;
&lt;p&gt;
    Searching Twitter for &amp;ldquo;Cloudfront BT&amp;rdquo; led us to some tweets by &lt;a href=&#34;https://twitter.com/lovell&#34;&gt;@Lovell&lt;/a&gt;, a nice guy from London who runs an &lt;a href=&#34;https://dimens.io/&#34;&gt;image resizing web service&lt;/a&gt;. Lovell&amp;rsquo;s tweets &lt;a href=&#34;https://twitter.com/lovell/status/635802576559194112&#34;&gt;one&lt;/a&gt; and &lt;a href=&#34;https://twitter.com/lovell/status/635801366582194177&#34;&gt;two&lt;/a&gt; gave more insight into what was going on with CloudFront on BT:
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/lovell-tweet-1.png&#34;&gt;&lt;br&gt;&lt;img src=&#34;/images/lovell-tweet-2.png&#34; class=&#34;m-t-20&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    We could have (and should have) run Pulse tests to verify Lovell&amp;rsquo;s findings about that single NS being unavailable, but we didn&amp;rsquo;t, so let&amp;rsquo;s safely assume Lovell is right.
&lt;/p&gt;
&lt;h3&gt;Shameless plug&lt;/h3&gt;
&lt;p&gt;
    TurboBytes Pulse has grown from 10 to 80+ agents (test locations) in a few months&amp;rsquo; time, and we&amp;rsquo;re always looking for more agents. Agent hosts get access to the Pulse API, so if you want that and you can install the Pulse software on a Linux machine or Raspberry Pi that is connected to a consumer ISP network, please &lt;a href=&#34;https://pulse.turbobytes.com/host/&#34;&gt;reach out to us&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Introducing TurboBytes Pulse</title>
      <link>https://www.turbobytes.com/blog/introducing-turbobytes-pulse/</link>
      <pubDate>Mon, 08 Jun 2015 12:20:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/introducing-turbobytes-pulse/</guid>
      <description>&lt;p&gt;
    &lt;img src=&#34;/images/pulse-agent-pic.jpg&#34; width=&#34;480&#34; height=&#34;293&#34; alt=&#34;TurboBytes Pulse agent&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    Is DNS working correctly everywhere for your domains? 
    Are your servers and CDNs serving the desired responses?  
    Are your users taking efficient network paths to your sites and apps?&lt;br&gt;
    If you regularly need these questions answered, you&#39;re going to love our new service.
&lt;/p&gt;
&lt;p&gt;
    &lt;a href=&#34;https://pulse.turbobytes.com/&#34;&gt;TurboBytes Pulse&lt;/a&gt; enables you to easily &amp;amp; quickly collect DNS, HTTP(S) and Traceroute responses from computers around the world. 
    Most of these &#39;agents&#39; are connected to consumer ISP networks. Pulse is free and open source! 
&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;How Pulse works&lt;/h2&gt;
&lt;p&gt;
    Pulse is a collection of test machines (we call these agents) and the CNC, the command and control center.
    Users send a test request to the CNC and agents are then instructed to run the test and send back the results.
    All communication between the CNC and the agents is encrypted with TLS.&lt;br&gt;
    Learn more about the DNS, HTTP and Traceroute tests in the &lt;a href=&#34;https://pulse.turbobytes.com/faq/&#34;&gt;Pulse FAQ&lt;/a&gt;.
&lt;/p&gt;
&lt;p&gt;
    Currently tests can only be initiated from the Pulse website, but soon (end of June 2015) we&amp;rsquo;ll release the API to TurboBytes customers and everybody who &lt;a href=&#34;https://pulse.turbobytes.com/host/&#34;&gt;hosts an agent&lt;/a&gt;.
&lt;/p&gt;
&lt;p&gt;
    Pulse currently has 20+ agents, including in San Francisco, Seattle, Vancouver, New York, Manchester, Goteborg, Utrecht, Taipei and Sydney.
    New agents come online every week and we expect to grow to 100 agents within a few months.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/pulse-agents-world-map.png&#34; width=&#34;640&#34; height=&#34;296&#34; alt=&#34;TurboBytes Pulse locations map&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Why we developed Pulse&lt;/h2&gt;
&lt;p&gt;
    Pulse is built primarily as a service to our Multi-CDN customers: it enables them to quickly find out whether the CDNs are behaving correctly everywhere.&lt;br&gt;
    For example: a TurboBytes customer receives an email from a website visitor about not being able to load the site.
    Some questions that will come to mind are: is the CDN completely failing in that user&amp;rsquo;s country? Is it failing everywhere? Is one particular file not served correctly?
    Pulse makes it easy to &lt;strong&gt;quickly&lt;/strong&gt; get those questions answered and gain insight into the behavior of the CDNs at that time.
&lt;/p&gt;
&lt;p&gt;
    The secondary purpose of Pulse is internal: as a Multi-CDN provider, we need a way to know when new and updated configs have been provisioned globally across all the CDN POPs. If the API of CDN X tells us provisioning is done, has it really completed across all their POPs?
    We&amp;rsquo;ve found out it&amp;rsquo;s best to run some verification checks against all POPs &amp;hellip;
&lt;/p&gt;
&lt;p&gt;
    So why did we build our own? Couldn&amp;rsquo;t we simply have integrated with an existing service like Pingdom or Catchpoint?
    We had two good reasons to build Pulse:
    &lt;ul class=&#34;inline-list&#34;&gt;
        &lt;li&gt;&lt;strong&gt;Real-World&lt;/strong&gt;: we don&amp;rsquo;t want to test from datacenters, but from homes and offices connected to consumer ISP networks. That is where your users are, right?&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Control&lt;/strong&gt;: we have strong views on what Pulse must be able to do and what not.&lt;/li&gt;
    &lt;/ul&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Future&lt;/h2&gt;
&lt;p&gt;
    Pulse is a baby and will grow to maturity over time. We have big plans set out for our new precious!
&lt;/p&gt;
&lt;h3&gt;More agents&lt;/h3&gt;
&lt;p&gt;
    Brazil, Spain, Hong Kong, Israel, South Africa. These are just a few countries where we will have Pulse agents soon.
    Our goal is to have at least one agent connected to each of the top 5 consumer ISP networks in all major countries.
    In the United States, agents will live in many states. Just one agent on Comcast is not enough, right?&lt;br&gt;
    Do you want to have full access to all of Pulse? &lt;a href=&#34;https://pulse.turbobytes.com/host/&#34;&gt;Host an agent!&lt;/a&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;API&lt;/h3&gt;
&lt;p&gt;
    We always build the API first and then build interfaces on top of it, so the Pulse API already exists.
    Giving users access to Pulse via the API is high on our to-do list, but we first need to polish it a bit and implement things like key management, rate limiting and queueing. We expect to have the API ready in June.&lt;br&gt;
    &lt;strong&gt;Important:&lt;/strong&gt; API access is a feature we will make available only to TurboBytes customers and Pulse agent hosts.
    Everybody else can use Pulse freely via the web UI at &lt;a href=&#34;https://pulse.turbobytes.com/&#34;&gt;pulse.turbobytes.com&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Features&lt;/h3&gt;
&lt;p&gt;
    We don&amp;rsquo;t want to tell you everything, but lots of good and useful things are coming to Pulse.
    Something is currently in the making for HTTP/2 &amp;hellip;&lt;br&gt;
    Stay tuned via &lt;a href=&#34;https://twitter.com/TurboBytesPulse&#34;&gt;@TurboBytesPulse&lt;/a&gt; on Twitter.
&lt;/p&gt;
&lt;p&gt;
    We always welcome your thoughts, ideas and feedback. Please share below in the comments section or send an email to &lt;a href=&#34;mailto:pulse@turbobytes.com&#34;&gt;pulse@turbobytes.com&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Join us at Velocity 2015 in Santa Clara</title>
      <link>https://www.turbobytes.com/blog/velocity-us-2015-join-us/</link>
      <pubDate>Mon, 20 Apr 2015 10:00:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/velocity-us-2015-join-us/</guid>
      <description>&lt;p&gt;
    The &lt;a href=&#34;http://goo.gl/yfU6Tq&#34;&gt;Velocity conference&lt;/a&gt;, organized by O&#39;Reilly, is the place to be to learn how to make your site or app faster and stronger, by attending talks from industry experts and by networking at the evening events, over lunch, or really any other time. 
    We&#39;ve been to Velocity many times and it&#39;s always super useful and great fun. 
&lt;/p&gt;
&lt;img src=&#34;/images/velocity-us-2015-300x250.jpg&#34; class=&#34;m-l-20 fl-r&#34; width=&#34;300&#34; height=&#34;250&#34; alt=&#34;Velocity 2015 Santa Clara&#34;&gt;
&lt;p&gt;
    This year&#39;s first Velocity takes place May 27-29 in sunny Santa Clara and we can&#39;t wait to be there. 
    Speakers this year include performance experts Tammy Everts (SOASTA), Ilya Grigorik (Google) and Mark Zeman (SpeedCurve). 
    &lt;a href=&#34;http://velocityconf.com/devops-web-performance-2015/public/schedule/speakers&#34;&gt;Dozens of other experts&lt;/a&gt; will be on stage sharing their knowledge and learnings.
&lt;/p&gt;
&lt;p&gt;
    ... and TurboBytes will be on stage too! 
    We&#39;re honored to give the talk &lt;a href=&#34;http://velocityconf.com/devops-web-performance-2015/public/schedule/detail/41785&#34;&gt;Preparing for CDN Failure: Why and How&lt;/a&gt;, together with our friends at &lt;a href=&#34;http://www.mobify.com/&#34;&gt;Mobify&lt;/a&gt;.
&lt;/p&gt;
&lt;p&gt;
    The discount code 20TURBO gives you 20% off: &lt;a href=&#34;http://goo.gl/YkStvL&#34;&gt;register now&lt;/a&gt;!
&lt;/p&gt;
&lt;p&gt;
    We hope to see you in May. 
&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Global DNS outage at Zerigo: the real-world perspective</title>
      <link>https://www.turbobytes.com/blog/global-dns-outage-zerigo/</link>
      <pubDate>Tue, 14 Apr 2015 13:00:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/global-dns-outage-zerigo/</guid>
      <description>&lt;p&gt;
    On April 11 2015, during US daytime, Zerigo suffered a global DNS outage due to a DDoS attack. 
    The &lt;a href=&#34;http://zerigostatus.com/&#34;&gt;Zerigo status page&lt;/a&gt; informs us that the attack hit their origin nameservers &lt;sup&gt;[&lt;a href=&#34;#origin&#34;&gt;1&lt;/a&gt;]&lt;/sup&gt;, and gives the impression the problems started before 15:24 UTC and ended at 18:30 UTC.  
    But Zerigo customers kept complaining on &lt;a href=&#34;https://twitter.com/search?f=realtime&amp;q=zerigo%20dns&amp;src=typd&#34;&gt;Twitter&lt;/a&gt; even hours later. 
    What happened, really? Our data tells the story.
&lt;/p&gt;

&lt;p&gt;
    Here at TurboBytes we closely monitor the real-world response time and availability of many authoritative DNS providers, including Zerigo, with our &lt;a href=&#34;/blog/introducing-rum-for-dns/&#34;&gt;RUM for DNS&lt;/a&gt;. Every day we run millions of measurements from across the globe, and our data for April 11 clearly shows: &lt;strong&gt;Zerigo DNS performance was very poor for about 8 hours globally&lt;/strong&gt;.
&lt;/p&gt;

&lt;h2&gt;The real-world response times and fail ratio&lt;/h2&gt;

&lt;p&gt;
    The DDoS attack hit the Zerigo origin nameservers hard worldwide, resulting in very slow responses and poor availability:
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Global-Zerigo-ResponseTimeMedian-20150411.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;Zerigo DNS median response time globally&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Global-Zerigo-FailRatio-20150411.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;Zerigo DNS fail ratio globally&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    &lt;small&gt;Browsers beacon a Fail when the authoritative was too slow, down or sent a bad response.&lt;br&gt;
    Fail Ratio = % of measurements that failed. &lt;a href=&#34;/blog/introducing-rum-for-dns/&#34;&gt;More info&lt;/a&gt;.&lt;/small&gt;
&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;14:40 UTC: it all starts&lt;/h3&gt;
&lt;p&gt;
    Zerigo first mentions the DDoS attack at 15:24 UTC, but it started well before that, at ~ 14:40 UTC.
    Fail Ratio jumped and response times went sky high.
    Between 14:45 and 16:40 the median response time was well over 4 seconds (!) and Fail Ratio was between 40% and 60%.
&lt;/p&gt;
&lt;p&gt;
    In reality the Fail Ratio was even higher, because the number of beacons we received from browsers running the performance tests for Zerigo dropped by ~ 12%. This is easily explained by the fact that our tests don&amp;rsquo;t have a set time limit, and if a test takes a long time, some users will navigate to the next page before the test completes or fails.
&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;16:50 UTC: all is well again, but not for long&lt;/h3&gt;
&lt;p&gt;
    Two hours after Zerigo DNS performance went bad, performance is suddenly back to normal.
    The Zerigo status page: &lt;i&gt;&amp;ldquo;Valued customers, please be advised that the DDOS attack has been mitigated and all Zerigo services are now restored&amp;rdquo;.&lt;/i&gt;
    However, within 10 minutes performance goes bad again, and it gets even worse than before.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;17:20 UTC: Zerigo DNS completely unavailable&lt;/h3&gt;
&lt;p&gt;
    About 30 minutes after performance was back to normal levels, Zerigo DNS is completely down.
    Fail Ratio hits the 100% mark and response times are very, very high.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;18:00 UTC: performance improved&lt;/h3&gt;
&lt;p&gt;
    Luckily for Zerigo customers, performance starts to improve soon and at ~ 18:00 UTC response times are much lower and the Fail Ratio is down to ~ 25%.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;18:45 UTC: response times back to normal&lt;/h3&gt;
&lt;p&gt;
    Global median response time is ~ 260 ms and this is normal for Zerigo.
    But Fail Ratio is still a whopping 25% and it stays like that for over 2 hours.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;21:05 UTC: everything looks good again, but not for long&lt;/h3&gt;
&lt;p&gt;
    Fail Ratio is back to normal &amp;hellip; but only for about 10 minutes.
    At 21:30, it&amp;rsquo;s above 20%.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;22:30 UTC: stable&lt;/h3&gt;
&lt;p&gt;
    Finally, all problems are gone and Zerigo DNS performance is back to normal levels.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Performance not the same on all networks&lt;/h2&gt;
&lt;p&gt;
    So far we&amp;rsquo;ve taken a global view on Zerigo&amp;rsquo;s DNS performance.
    The charts for individual countries look more or less the same as the ones we showed above.
    When browsing through our data some more, we did spot some interesting things though.
&lt;/p&gt;
&lt;p&gt;
    Perhaps one would expect performance to be the same on all networks, because the DDoS attack hit Zerigo origin nameservers directly.
    If those nameservers are down due to an overload in traffic, this impacts performance on all networks the same, right?
    Well, this turns out not to be the case.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h4&gt;US - AS7922 (Comcast)&lt;/h4&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-US-AS7922-Zerigo-FailRatio-20150411.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;Zerigo DNS fail ratio in US on AS7922&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    In the global chart we saw that after ~ 18:00 UTC the Fail Ratio was at ~ 25%.
    On AS7922 however, Zerigo DNS remains almost completely unavailable until 22:30.
    Fail Ratio does not get below 60%, except for that short 10 minute dip at ~ 21:05.&lt;br /&gt;
    On some networks in other countries we see the same.
&lt;/p&gt;
&lt;p&gt;
    From our data it is clear that resolvers on AS7922 had a harder time getting a response from Zerigo&amp;rsquo;s nameservers than for example Google Public DNS resolvers (see below). Why? We don&amp;rsquo;t know.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h4&gt;US - AS15169 (Google)&lt;/h4&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-US-AS15169-Zerigo-FailRatio-20150411.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;Zerigo DNS fail ratio in US on AS15169&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    AS15169 is Google&amp;rsquo;s network. The Google Public DNS resolvers live on this network.
    After 18:00 UTC, the Fail Ratio stays relatively low, well below the levels we saw on AS7922.
    This is what our data shows for most networks.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;h3&gt;Don&amp;rsquo;t expect timely and insightful information from your provider&lt;/h3&gt;
&lt;p&gt;
    Zerigo could have done a better job of informing their customers during the DNS performance problems.
    Many customer tweets went unanswered and the info on the status page was not awesome.
    We give a shoutout here to DNSimple.
    They suffered a global DDoS attack in December, communicated about it in a proactive, professional way &lt;a href=&#34;http://dnsimplestatus.com/incidents/v0x4h75gxf7x&#34;&gt;during the outage&lt;/a&gt; and wrote a lengthy &lt;a href=&#34;http://blog.dnsimple.com/2014/12/incident-report-ddos/&#34;&gt;post-mortem&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;All DNS providers have performance issues&lt;/h3&gt;
&lt;p&gt;
    DDoS attacks happen and will continue to happen.
    It&amp;rsquo;s likely Zerigo will be hit again, and it&amp;rsquo;s likely other DNS providers will be under attack too.
    And then there are all sorts of other causes of poor DNS performance (BGP route leaks, broken peering links, &amp;hellip;).
    What can you as a customer of managed DNS do? One thing: use two DNS providers.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Consider using two DNS providers&lt;/h3&gt;
&lt;p&gt;
    Here at TurboBytes we use two DNS providers (NSONE and AWS Route53) for extra reliability and speed. Read our blog post about &lt;a href=&#34;/blog/why-use-two-dns-providers/&#34;&gt;Why You Should Use Two DNS Providers&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h4&gt;Notes:&lt;/h4&gt;
&lt;p&gt;
    &lt;a name=&#34;origin&#34;&gt;&lt;/a&gt;&lt;a href=&#34;http://www.zerigo.com/news/new-functionality&#34;&gt;Since February 2015&lt;/a&gt;, Zerigo uses &lt;a href=&#34;https://www.cloudflare.com/virtual-dns&#34;&gt;CloudFlare Virtual DNS&lt;/a&gt;, which means CloudFlare proxies the DNS traffic and caches the results.
    If CloudFlare receives a query for an expired or uncached record, it will query the Zerigo origin.
    Many DNS records have a TTL lower than 5 minutes (&lt;a href=&#34;https://00f.net/2012/05/10/distribution-of-dns-ttls/&#34;&gt;source&lt;/a&gt;), so if the origin nameservers are down, that will cause real-world problems for Zerigo customers.&lt;br&gt;
    TurboBytes measures authoritative DNS performance using a wildcard A record, so with Zerigo, we always hit their origin nameservers.
&lt;/p&gt;
&lt;p&gt;
    &lt;strong&gt;April 14, 13:50 UTC&lt;/strong&gt;: our data shows Zerigo DNS performance has been steadily getting worse in the past 48 hours, with Fail Ratio peaking at 10% at 23:00 UTC on April 13. We&amp;rsquo;ll keep a close eye out and update this blog post when it makes sense.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;
    We always welcome your thoughts, ideas and feedback. Please share below in the comments section and don&#39;t forget to check out our &lt;a href=&#34;/reports/dns-performance/&#34;&gt;&lt;strong&gt;Authoritative DNS Performance Reports&lt;/strong&gt;&lt;/a&gt;.
&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Why You Should Use Two DNS Providers</title>
      <link>https://www.turbobytes.com/blog/why-use-two-dns-providers/</link>
      <pubDate>Thu, 02 Apr 2015 15:55:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/why-use-two-dns-providers/</guid>
      <description>&lt;p&gt;
    Most domains have one DNS provider configured as authoritative.
    A handful of the big players use two. For example Amazon, TripAdvisor and IMDB use Dyn and UltraDNS.
    LinkedIn, AOL and eBay use their own nameservers in combination with those of a third party DNS provider.
&lt;/p&gt;

&lt;p&gt;
    Here at TurboBytes we use NSONE and AWS Route53 in our Multi-CDN platform, and our friends at MaxCDN recently started using these providers as well.
    We had two reasons for using two DNS providers:
    &lt;ol&gt;
        &lt;li&gt;improve the reliability of our service&lt;/li&gt;
        &lt;li&gt;improve DNS lookup times&lt;/li&gt;
    &lt;/ol&gt;
    The combined network map of NSONE and AWS Route53 is impressive (&lt;a href=&#34;/products/optimizer/network-map/#dns&#34;&gt;view&lt;/a&gt;) and most resolvers have a built-in mechanism for quick failover and response time optimization (SRTT; more on this later).
&lt;/p&gt;

&lt;p&gt;
    In this article we&#39;ll show you - using our &lt;a href=&#34;/blog/introducing-rum-for-dns/&#34;&gt;RUM for DNS&lt;/a&gt; performance data - that using two DNS providers indeed results in lower response times and higher reliability, for example in Brazil, as you can see in our &lt;a href=&#34;/reports/dns-performance/#BR&#34;&gt;Authoritative DNS Performance Reports&lt;/a&gt;.   
&lt;/p&gt;

&lt;h2&gt;NSONE and AWS Route53 in Brazil&lt;/h2&gt;

&lt;p&gt;
    We&#39;ll take a look at two possible situations:
    &lt;ol&gt;
        &lt;li&gt;Performance of both providers is &#39;as usual&#39;&lt;/li&gt;
        &lt;li&gt;Performance of one provider is significantly worse than normal&lt;/li&gt;
    &lt;/ol&gt;
&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Case 1: Performance of both providers is &amp;lsquo;as usual&amp;rsquo;&lt;/h3&gt;
&lt;p&gt;
    Both providers have a POP in Sao Paulo and AWS Route53 also has a POP in Rio de Janeiro.
    From that alone you&amp;rsquo;d expect Route53 to outperform NSONE in terms of response time. And that is indeed the case:
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Public-BR-Combo-ResponseTimeMedian-20150401.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;DNS response time in Brazil&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;
    On the days when both Route53 and NSONE had a &#39;normal day&#39; (i.e. not March 25 and 26), you can see in the chart that
    &lt;ol&gt;
        &lt;li&gt;Route53 was faster than the combo&lt;/li&gt;
        &lt;li&gt;the combo&#39;s response time was almost on par with Route53: the difference is small&lt;/li&gt;
    &lt;/ol&gt;
    This can be explained by the SRTT (Smoothed Round Trip Time) mechanism that most resolvers have: the resolver figures out which nameserver gives the best performance and sends most - but not all - of its queries to that nameserver.
&lt;/p&gt;
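&lt;p&gt;
    To make the idea concrete, here is a toy sketch of SRTT-style nameserver selection. It is not any specific resolver&#39;s implementation, and the smoothing weights and probe rate are made-up values for illustration:
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Toy sketch of SRTT-style nameserver selection; real resolvers differ in the details.
const srtt: { [ns: string]: number } = {
  &#39;ns1.provider-a.example&#39;: 40,   // smoothed RTT estimate in ms (made-up starting values)
  &#39;ns1.provider-b.example&#39;: 25,
};

function pickNameserver(): string {
  const names = Object.keys(srtt);
  // Now and then, probe a random nameserver so a recovered server can win traffic back.
  if (Math.random() &amp;lt; 0.05) {
    return names[Math.floor(Math.random() * names.length)];
  }
  // Otherwise send the query to the nameserver with the lowest smoothed RTT.
  let best = names[0];
  for (const ns of names) {
    if (srtt[ns] &amp;lt; srtt[best]) {
      best = ns;
    }
  }
  return best;
}

function recordSample(ns: string, measuredRtt: number): void {
  // Exponentially smooth each new measurement into the running estimate.
  srtt[ns] = 0.7 * srtt[ns] + 0.3 * measuredRtt;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;
    With a feedback loop like this, a provider that degrades quickly loses query share to the healthier provider, which is exactly the pattern visible in the charts below.
&lt;/p&gt;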

&lt;p&gt;
    One may expect that the SRTT mechanism also results in a low Fail Ratio. And it does: in this 14-day time frame, the Fail Ratio of the combo was clearly better than that of a single provider.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Public-BR-Combo-FailRatio-20150401.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;DNS fail ratio in Brazil&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    &lt;small&gt;Our RUM beacons a Fail when the authoritative was too slow, down or sent a bad response. &lt;a href=&#34;/blog/introducing-rum-for-dns/&#34;&gt;More info&lt;/a&gt;.&lt;/small&gt;
&lt;/p&gt;

&lt;p&gt;
    In the first chart you saw that on March 25 and March 26, Route53&#39;s response time increased by ~ 30% and became worse than NSONE&#39;s response time.
    Let&#39;s look a bit more closely at dual-provider DNS performance when one of the providers degrades.
&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Case 2: one provider has degraded performance&lt;/h3&gt;
&lt;p&gt;
    So far we&amp;rsquo;ve been taking a country-level view. Let&amp;rsquo;s zoom in on a few networks to get better insight into what happened in Brazil in the past 14 days, and especially on March 25 and 26.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h4&gt;AS28573 (NET Servi&amp;ccedil;os de Comunica&amp;ccedil;&amp;atilde;o S.A.)&lt;/h4&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Public-BR-AS28573-Combo-ResponseTimeMedian-20150401.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;DNS response time in Brazil on AS28573&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    About 31% of our performance tests in Brazil were initiated by end users connected to AS28573, and most of them used resolvers on the same network.
    Route53&amp;rsquo;s response time was not very consistent, but on most days it was better than NSONE&amp;rsquo;s.
    On March 25 and 26, Route53&amp;rsquo;s response time jumped and so did the response time of the combo, but by much less.
    Do the resolvers here do SRTT? One can argue it&amp;rsquo;s not crystal clear. Let&amp;rsquo;s zoom in on those two days in March:&lt;/p&gt;

&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Public-BR-AS28573-Combo-ResponseTimeMedian-20150325-20150327.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;DNS response time in Brazil on AS28573 - March 25 and 26&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    And now we know: resolvers on AS28573 do SRTT.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h4&gt;AS18881 (Global Village Telecom)&lt;/h4&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Public-BR-AS18881-Combo-ResponseTimeMedian-20150401.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;DNS response time in Brazil on AS18881&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    AS18881 was the #2 network in Brazil, with 17% of our performance tests initiated by end users connected to that network. Again, most of them used resolvers on the same network. Normally, Route53 and NSONE median response times are not too far apart (5 - 15 ms difference), and it&amp;rsquo;s clear the resolvers here do SRTT and favored Route53. If the resolvers did not do SRTT, on March 21 the blue line would have peaked too just like NSONE did. Thank you SRTT!
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h4&gt;AS7738 (Telemar Norte Leste S.A.)&lt;/h4&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/DNSRUM-Public-BR-AS7738-Combo-ResponseTimeMedian-20150401.png&#34; width=&#34;640&#34; height=&#34;320&#34; alt=&#34;DNS response time in Brazil on AS7738&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    The #3 network in Brazil for us at that time and - together with AS28573 - the network that made a difference on March 25 and 26: Route53 response times jumped from ~ 130 ms to ~ 215 ms daily median, SRTT kicked in and most queries flowed to NSONE.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h4&gt;Query Share&lt;/h4&gt;
&lt;p&gt;
    So far we&amp;rsquo;ve made statements about resolvers doing SRTT based on performance data only, but we can do better.&lt;br&gt;
    In our RUM for DNS, we have a way to track the &amp;lsquo;Query Share&amp;rsquo; of each provider in the combo: we know what % of responses were served by NSONE and what % by Route53. See the table below for the Query Share per provider for the 3 aforementioned ASNs for March 31.
&lt;/p&gt;
&lt;p&gt;
    &lt;table class=&#34;simple-table&#34;&gt;
        &lt;thead&gt;
            &lt;tr&gt;
                &lt;th&gt;ASN&lt;/th&gt;
                &lt;th&gt;NSONE query share&lt;/th&gt;
                &lt;th&gt;Route53 query share&lt;/th&gt;
                &lt;th&gt;Fail %&lt;/th&gt;
            &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
            &lt;tr&gt;
                &lt;td&gt;AS28573&lt;/td&gt;
                &lt;td&gt;58.51%&lt;/td&gt;
                &lt;td&gt;41.38%&lt;/td&gt;
                &lt;td&gt;0.11%&lt;/td&gt;
            &lt;/tr&gt;
            &lt;tr&gt;
                &lt;td&gt;AS18881&lt;/td&gt;
                &lt;td&gt;20.36%&lt;/td&gt;
                &lt;td&gt;79.59%&lt;/td&gt;
                &lt;td&gt;0.05%&lt;/td&gt;
            &lt;/tr&gt;
            &lt;tr&gt;
                &lt;td&gt;AS7738&lt;/td&gt;
                &lt;td&gt;23.15%&lt;/td&gt;
                &lt;td&gt;76.77%&lt;/td&gt;
                &lt;td&gt;0.08%&lt;/td&gt;
            &lt;/tr&gt;
        &lt;/tbody&gt;
    &lt;/table&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Conclusions &amp;amp; Summary&lt;/h2&gt;
&lt;h3&gt;Assume all DNS providers have performance degradations&lt;/h3&gt;
&lt;p&gt;
    We&amp;rsquo;ve shown data here for just two providers, NSONE and AWS Route53, but you can be assured the other providers have performance hiccups too (hint: more blog posts coming soon).
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;SRTT is awesome&lt;/h3&gt;
&lt;p&gt;
    Queries from your end users to your authoritative nameservers go through resolvers, and many resolvers have a built-in mechanism for failover and RTT/latency optimization. Isn&amp;rsquo;t that just great? We think it is.&lt;br&gt;
    FYI, not all resolvers do SRTT (hint: more blog posts coming soon).
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Performance matters, but it&amp;rsquo;s not everything&lt;/h3&gt;
&lt;p&gt;
    We acknowledge there are several valid reasons not to use more than one DNS provider, but if you care a lot about DNS performance and the reliability of your sites and apps, you should consider using two DNS providers.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;
    We always welcome your thoughts, ideas and feedback. Please share below in the comments section and don&#39;t forget to check out our &lt;a href=&#34;/reports/dns-performance/#BR&#34;&gt;&lt;strong&gt;Authoritative DNS Performance Reports&lt;/strong&gt;&lt;/a&gt;.
&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Introducing RUM for DNS</title>
      <link>https://www.turbobytes.com/blog/introducing-rum-for-dns/</link>
      <pubDate>Tue, 31 Mar 2015 15:00:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/introducing-rum-for-dns/</guid>
      <description>&lt;p&gt;
    How good is the response time and availability of your authoritative DNS, &lt;i&gt;really&lt;/i&gt;?
&lt;/p&gt;
&lt;p&gt;
    To answer that question, it&#39;s important to query authoritative nameservers through the resolvers that people use at home, in the office, on mobile and at the local Starbucks. TurboBytes does just that, monitoring the &lt;strong&gt;real-world performance of authoritative DNS providers&lt;/strong&gt; from across the globe, 24/7, by running tests in the browsers of millions of people that are connected to thousands of networks.&lt;br&gt;
    We&#39;re excited to announce our RUM for DNS!
&lt;/p&gt;
&lt;p&gt;
    In this article you&#39;ll read about why we built RUM for DNS, our test methodology and the benefits of RUM (Real User Monitoring) versus synthetic monitoring. 
    But maybe you want to skip all that and take a look at some of the data? View the past 14 days performance of CloudFlare, AWS Route53, Dyn and others in our &lt;a href=&#34;/reports/dns-performance/&#34;&gt;&lt;strong&gt;Authoritative DNS Performance Reports&lt;/strong&gt;&lt;/a&gt;.
&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Why we developed RUM for DNS&lt;/h2&gt;
&lt;p&gt;
    TurboBytes runs a global Multi-CDN platform: it closely monitors CDN performance (with RUM) and makes sure traffic is always routed to the best-performing CDN.
    Our platform constantly switches CDNs by changing low-TTL CNAME records. Needless to say, our DNS needs to be awesome, with excellent performance and all the features we need.
&lt;/p&gt;
&lt;p&gt;
    We had been using Dyn&amp;rsquo;s DNS platform since 2012 and never had issues with performance, but we did run into must-have functional requirements that Dyn could not meet. Last year in Q2 we started looking into alternatives to Dyn and obviously performance was a key evaluation criterion. We had to be sure the performance of our new DNS provider(s) was good across the globe and we wanted to have a &lt;strong&gt;real-world view on authoritative DNS performance&lt;/strong&gt;, and not benchmark performance based on a handful of tests from a handful of datacenters.
    We needed &amp;lsquo;RUM for DNS&amp;rsquo;, so we built it.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;How we measure real-world authoritative DNS performance&lt;/h2&gt;
&lt;p&gt;
    All TurboBytes Multi-CDN customers add our non-blocking JavaScript snippet to their webpages, which executes after page load and then silently in the background runs tests to measure performance of a few CDN and DNS providers.
&lt;/p&gt;
&lt;p&gt;
    We want to give you some insight into what our JS code does for the DNS performance tests and the big challenge we ran into, but we&amp;rsquo;ll start by laying out our requirements.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Our requirements for RUM for DNS&lt;/h3&gt;
&lt;ol&gt;
    &lt;li&gt;can measure both response time and fail ratio (availability)&lt;/li&gt;
    &lt;li&gt;timing data is accurate&lt;/li&gt;
    &lt;li&gt;works with all resolvers, including resolvers that do &lt;a href=&#34;http://en.wikipedia.org/wiki/DNS_hijacking#Manipulation_by_ISPs&#34;&gt;NXDOMAIN hijacking&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;our JS has no negative impact on the user experience&lt;/li&gt;
    &lt;li&gt;works at least in Chrome and Chrome for Mobile&lt;/li&gt;
    &lt;li&gt;is scalable and future-proof&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
    We&amp;rsquo;re happy to say our solution meets all these requirements.
&lt;/p&gt;
&lt;h3&gt;The challenge&lt;/h3&gt;
&lt;p&gt;
    The key challenge we quickly ran into was this: there is no way with JavaScript to instruct the browser to &amp;lsquo;do just a DNS lookup and let me know how long that took&amp;rsquo;.
    We first played a bit with dynamically inserting a dns-prefetch link element into the DOM but that was a dead end, simply because the browser does not expose how long the DNS lookup took. It did not take long to decide the only way forward was to use the &lt;a href=&#34;http://www.w3.org/TR/resource-timing/&#34;&gt;Resource Timing API&lt;/a&gt;. This API exposes timing information for webpage resources. The API is implemented in IE10+, FF35+, Chrome, Opera and the default browser in Android 4.4+.
    We tested the behaviour of the API in all those browsers and found out that it had serious issues in IE and Firefox (DNS lookup data is missing, wrong or unreliable), and so we implemented a check in our JS to only run our RUM for DNS tests in Chrome and Opera for now.
&lt;/p&gt;
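&lt;p&gt;
    To make that concrete, here is a minimal sketch (not our production code) of how the DNS lookup time of a fetched object can be read from the Resource Timing API. Note that for cross-origin resources the timing attributes are only exposed when the server sends a Timing-Allow-Origin header:
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Minimal sketch, not production code: fetch a tiny object and read its
// DNS lookup time from the Resource Timing API.
// url must be an absolute URL, because timing entries are keyed by full URL.
function measureDnsLookup(url, callback) {
  var img = new Image();
  img.onload = img.onerror = function () {
    var entries = performance.getEntriesByName(url);
    if (!entries.length) { return callback(null); }
    var entry = entries[0];
    // domainLookupEnd - domainLookupStart is the DNS lookup time in ms.
    // Both are 0 for cross-origin resources without Timing-Allow-Origin.
    callback(entry.domainLookupEnd - entry.domainLookupStart);
  };
  img.src = url;
}
&lt;/code&gt;&lt;/pre&gt;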
&lt;p&gt;
    The good thing about using the Resource Timing API is that in Chrome and Opera the data is reliable and accurate: we always get the real DNS lookup time.
    But we also want to reliably detect whether the authoritative DNS was too slow, unreachable, down or sent a bad response, so we can track the Fail Ratio/Availability too, not just the response time. Read the next section to find out how we accomplished this.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Our solution&lt;/h3&gt;
&lt;p&gt;
    TurboBytes&amp;rsquo; RUM for DNS test methodology in a nutshell:
    &lt;ul class=&#34;inline-list&#34;&gt;
        &lt;li&gt;fetch a very small object from a TurboBytes webserver - going through resolver only - and get the DNS Lookup Time from the Resource Timing API&lt;/li&gt;
        &lt;li&gt;if successful, do the same but now going through the authoritative DNS&lt;/li&gt;
    &lt;/ul&gt;
    &lt;img src=&#34;/images/rum-for-dns-diagram.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;245&#34; alt=&#34;RUM for DNS diagram&#34; style=&#34;border:1px solid #c1c6c8; padding:0.2em;&#34;&gt;
&lt;/p&gt;
&lt;h4&gt;Resolver HIT test&lt;/h4&gt;
&lt;p&gt;
    Before we run tests that go to authoritative, we always first do a test hitting an FQDN with a 24-hour TTL A record: the resolver hardly ever goes to authoritative because it has the response in cache.
    We have developed a way in JavaScript to force the browser/OS to go to the resolver, and not use the DNS response from its local cache (magic!).
    This test must complete within 5000 ms. If not, it&amp;rsquo;s likely our web server is not in good shape and we then don&amp;rsquo;t run any performance tests hitting authoritative. If the resolver HIT test does complete within 5000 ms, we know two things:
    &lt;ol&gt;
        &lt;li&gt;the time it takes to get a response from resolver (nice to have)&lt;/li&gt;
        &lt;li&gt;our web server is reachable and responding well&lt;/li&gt;
    &lt;/ol&gt;
    We&amp;rsquo;re good to go and run tests hitting the authoritative DNS.
&lt;/p&gt;
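&lt;p&gt;
    To sketch that gating logic: the helper and constant names below (runFetchTest, HIT_TEST_URL, runAuthoritativeTests) are hypothetical, invented purely for illustration.
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Sketch only: runFetchTest(), HIT_TEST_URL and runAuthoritativeTests()
// are hypothetical names used for illustration.
var HIT_LIMIT_MS = 5000;

runFetchTest(HIT_TEST_URL, function (durationMs) {
  if (durationMs === null || durationMs &gt; HIT_LIMIT_MS) {
    // Our web server may be in bad shape: skip the authoritative tests.
    return;
  }
  // Server is fine and we know the resolver response time: proceed.
  runAuthoritativeTests();
});
&lt;/code&gt;&lt;/pre&gt;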
&lt;h4&gt;Resolver MISS test&lt;/h4&gt;
&lt;p&gt;
    Unlike the Resolver HIT test, the Resolver MISS tests don&amp;rsquo;t have a time limit: browsers and resolvers do their own retries and have their own timeout limits, so we just let the test run. If the authoritative can&amp;rsquo;t be reached, is very slow or sends a bad response (not a NOERROR), then at some point the browser will receive a SERVFAIL response from the resolver and our JS will beacon a Fail for the authoritative. The test can&amp;rsquo;t have failed because of our web server, because just a few seconds earlier we ran the Resolver HIT test and from that we know our server is reachable and responding just fine.
    After monitoring performance of several DNS providers for a few months, spotting jumps in Fail Ratio and talking to DNS providers about this, we know for a fact that our Fail Ratio metric is solid.
&lt;/p&gt;&lt;/p&gt;
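&lt;p&gt;
    As an illustration (again a sketch, with a hypothetical beacon() helper standing in for our real reporting code), a MISS test boils down to waiting for the browser to report success or failure and then beaconing the outcome:
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Sketch only: no client-side time limit; the browser and resolver do
// their own retries and timeouts. beacon() is a hypothetical helper.
function runMissTest(url) {
  var img = new Image();
  img.onload = function () {
    var entries = performance.getEntriesByName(url);
    var dnsMs = entries.length ?
        entries[0].domainLookupEnd - entries[0].domainLookupStart : null;
    beacon({ ok: true, dns: dnsMs });  // authoritative answered in time
  };
  img.onerror = function () {
    beacon({ ok: false });             // e.g. resolver returned SERVFAIL
  };
  img.src = url;
}
&lt;/code&gt;&lt;/pre&gt;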

&lt;p&gt;&lt;h2&gt;Benefits of RUM versus synthetic monitoring&lt;/h2&gt;
&lt;p&gt;
    There are three important benefits of our RUM for DNS compared to the synthetic monitoring done by for example &lt;a href=&#34;http://dnsperf.com&#34;&gt;dnsperf.com&lt;/a&gt;, &lt;a href=&#34;http://www.solvedns.com/dns-comparison/&#34;&gt;SolveDNS&lt;/a&gt; and &lt;a href=&#34;https://cloudharmony.com/status-of-dns&#34;&gt;CloudHarmony&lt;/a&gt;:
    &lt;ul class=&#34;inline-list&#34;&gt;
        &lt;li&gt;&lt;strong&gt;Relevance&lt;/strong&gt;: our tests run in the browsers of millions of Internet users and go through real-world resolvers to authoritative&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Reach&lt;/strong&gt;: our tests run on many networks and through many different resolvers&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Test frequency&lt;/strong&gt;: our tests run often, not just once per 5 or 15 minutes&lt;/li&gt;
    &lt;/ul&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;In the spotlight: The Netherlands&lt;/h3&gt;
&lt;p&gt;
    To give you a feel for our relevance, reach and test frequency, here are some numbers for March 29 2015 based on beacons received from clients in The Netherlands for a single DNS provider:
    &lt;table class=&#34;simple-table&#34;&gt;
        &lt;thead&gt;
            &lt;tr&gt;
                &lt;th&gt;Metric&lt;/th&gt;
                &lt;th&gt;Value&lt;/th&gt;
            &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
            &lt;tr&gt;
                &lt;td&gt;Beacons&lt;/td&gt;
                &lt;td&gt;98134&lt;/td&gt;
            &lt;/tr&gt;
            &lt;tr&gt;
                &lt;td&gt;Unique Client IPs&lt;/td&gt;
                &lt;td&gt;84981&lt;/td&gt;
            &lt;/tr&gt;
            &lt;tr&gt;
                &lt;td&gt;Unique Client networks (ASNs)&lt;/td&gt;
                &lt;td&gt;146&lt;/td&gt;
            &lt;/tr&gt;
            &lt;tr&gt;
                &lt;td&gt;Unique Resolver IPs&lt;/td&gt;
                &lt;td&gt;920&lt;/td&gt;
            &lt;/tr&gt;
            &lt;tr&gt;
                &lt;td&gt;Unique Resolver networks (ASNs)&lt;/td&gt;
                &lt;td&gt;237&lt;/td&gt;
            &lt;/tr&gt;
        &lt;/tbody&gt;
    &lt;/table&gt;
    2.2% of all tests went through Google Public DNS (AS15169) and 0.5% of tests hit the authoritative via OpenDNS (AS36692).&lt;/p&gt;

&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Future&lt;/h2&gt;
&lt;h3&gt;More DNS providers&lt;/h3&gt;
&lt;p&gt;
    VeriSign, UltraDNS, DNS Made Easy: those are just some of the DNS providers we want to add to our RUM for DNS tracking.
    Who would you like to see added? Let us know on &lt;a href=&#39;http://twitter.com/turbobytes&#39;&gt;Twitter&lt;/a&gt;!&lt;br&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Increase reach and test frequency&lt;/h3&gt;
&lt;p&gt;
    In some countries and on some networks we want to run tests more often. Over the course of the next few months we&amp;rsquo;ll increase the test count there.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Blog posts about DNS performance&lt;/h3&gt;
&lt;p&gt;
    We want to regularly publish blog posts about findings from our RUM for DNS data and things related to (authoritative) DNS performance.
    In the next article we will probably put the spotlight on the NSONE-Route53 combo.
&lt;/p&gt;
&lt;p&gt;
    We always welcome your thoughts, ideas and feedback. Please share below in the comments section and don&amp;rsquo;t forget to check out our &lt;a href=&#34;/reports/dns-performance/&#34;&gt;&lt;strong&gt;Authoritative DNS Performance Reports&lt;/strong&gt;&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>TurboBytes&#39; Multi-CDN prevents 60 minutes of downtime</title>
      <link>https://www.turbobytes.com/blog/multi-cdn-prevents-60-minutes-downtime/</link>
      <pubDate>Tue, 16 Dec 2014 14:00:34 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/multi-cdn-prevents-60-minutes-downtime/</guid>
      <description>&lt;p&gt;
    No CDN is excellent all the time everywhere. They all have their bad days.
&lt;/p&gt;
&lt;p&gt;
    Last week we showed how several &lt;a href=&#34;/blog/cdn-performance-problems-france/&#34;&gt;CDNs struggle to deliver great performance in France&lt;/a&gt; and on Nov 27 &lt;a href=&#34;/blog/cloudfront-cdn-global-outage/&#34;&gt;CloudFront had a major global outage&lt;/a&gt; lasting ~ 90 minutes, allegedly caused by DNS issues. 
    Today we write about another CDN whose performance recently went bad as a result of DNS issues (Highwinds), and we show how our Multi-CDN platform provided great value by reducing downtime by 60 minutes.
&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;DNS breaks: Failratio spikes&lt;/h2&gt;
&lt;p&gt;
    Here at TurboBytes, we monitor performance of CDNs with RUM (Real User Monitoring) all the time from across the globe, and we use the data to power our Multi-CDN service.
    Our non-blocking JS executes after page load and then silently, in the background, fetches a 15 KB object from a few CDNs and beacons the load-time details to our servers. If the 15 KB object fails to load within 5 seconds, we beacon a Fail.
&lt;/p&gt;
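&lt;p&gt;
    For illustration, here is a minimal sketch of that pass/fail test. The beacon() helper and the use of a plain image fetch are assumptions made for this example; the real snippet is more involved:
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Sketch only: fetch a small test object and beacon a Fail if it has not
// fully loaded within 5000 ms. beacon() and the URL are placeholders.
function runCdnTest(url) {
  var settled = false;
  var start = Date.now();
  var img = new Image();

  function finish(ok) {
    if (settled) { return; }
    settled = true;
    beacon({ ok: ok, ms: Date.now() - start });
  }

  setTimeout(function () { finish(false); }, 5000); // 5 second limit
  img.onload = function () { finish(true); };
  img.onerror = function () { finish(false); };
  img.src = url;
}
&lt;/code&gt;&lt;/pre&gt;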
&lt;p&gt;
    On Sunday Dec 14 2014 around 14:20 UTC, the Failratio of Highwinds jumped globally and it was not until well over an hour later that the Failratio returned to a normal level.
    Let&amp;rsquo;s take a look at what happened with Highwinds and see how TurboBytes&amp;rsquo; Multi-CDN platform performed. We zoom in on France because in that country the TurboBytes Multi-CDN platform was routing traffic to Highwinds when the problem started.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Multi-CDN triumphs over single CDN&lt;/h2&gt;
&lt;p&gt;
    In the charts below, a vertical blue line was drawn for every test that passed (browser fetched the 15 KB object from the CDN within 5000 ms) and a vertical red line was drawn for every test that failed to finish within 5000 ms. Before the problem started, it&amp;rsquo;s clear there is a lot more blue than red, and during the outage, red has the upper hand.
&lt;/p&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/highwinds-outage-france-20141214-failratio.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34; alt=&#34;Highwinds Failratio in France - Dec 14 2014&#34;&gt;
    You can see the Failratio increase starting around 14:22 and after 10 minutes it&amp;rsquo;s almost all red, but some &amp;lsquo;ok&amp;rsquo; beacons do keep coming in (our guess is some resolvers will serve a stale response if they can&amp;rsquo;t get to the authoritative DNS). Around 15:25 - about an hour after the issue started - we started receiving a lot more &amp;lsquo;ok&amp;rsquo; beacons and the Failratio declined. All in all the downtime lasted about 70 minutes.
&lt;/p&gt;
&lt;p&gt;
    How did TurboBytes&amp;rsquo; Multi-CDN service perform? Did we switch away from Highwinds quickly?
    &lt;img src=&#34;/images/turbobytes-france-20141214-failratio.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34; alt=&#34;TurboBytes Failratio in France - Dec 14 2014&#34;&gt;
    As it turned out, we switched &lt;i&gt;to&lt;/i&gt; Highwinds right before the issue started at 14:22.
    In the next few minutes, more and more traffic started flowing to Highwinds but most traffic was still going to the CDN we previously mapped to.
    Apparently most DNS resolvers in France were still handing out the CNAME to that previous CDN (we use a DNS TTL of 300 seconds).
    At 14:25, our platform decided to switch away from Highwinds and it automatically updated our authoritative DNS.
    It took about 5 minutes, due to the DNS TTL, for traffic to stop flowing to Highwinds altogether.
    Conclusion: &lt;strong&gt;TurboBytes prevented ~60 minutes of downtime in France&lt;/strong&gt;.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;A look outside France&lt;/h3&gt;
&lt;p&gt;
    As mentioned, Highwinds suffered from DNS issues not just in France but globally.
    Below you see charts for Highwinds and TurboBytes in The Netherlands and Global. &lt;br&gt;
    Note: the charts for TurboBytes don&amp;rsquo;t provide great value, because we were not routing traffic to Highwinds in The Netherlands at the time of the issue and a global comparison makes little sense (our platform doesn&amp;rsquo;t make routing decisions at a global level), but we thought it best to show them here anyway.
    &lt;img src=&#34;/images/highwinds-outage-netherlands-20141214-failratio.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34; alt=&#34;Highwinds Failratio in NL - Dec 14 2014&#34;&gt;
    &lt;img src=&#34;/images/turbobytes-netherlands-20141214-failratio.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34; alt=&#34;TurboBytes Failratio in NL - Dec 14 2014&#34;&gt;
    &lt;img src=&#34;/images/highwinds-outage-global-20141214-failratio.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34; alt=&#34;Highwinds Failratio globally - Dec 14 2014&#34;&gt;
    &lt;img src=&#34;/images/turbobytes-global-20141214-failratio.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34; alt=&#34;TurboBytes Failratio globally - Dec 14 2014&#34;&gt;
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;Some final words&lt;/h2&gt;
&lt;h3&gt;How we will improve our Multi-CDN service&lt;/h3&gt;
&lt;p&gt;
    We&amp;rsquo;ve closely analyzed our data and what happened in our platform before, during and after the issue with Highwinds, and from that analysis we have defined a few ways to optimize our service, with the objective of switching away from a bad CDN more quickly. One thing we can do is lower the DNS TTL. That is easy and was already on our To Do list. Another way to improve has to do with how our platform processes the incoming beacons and makes switching decisions: process the data more quickly and make an equally good decision with less data.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Highwinds proactively informed customers: +1&lt;/h3&gt;
&lt;p&gt;
    We can&amp;rsquo;t know from our RUM data &lt;i&gt;why&lt;/i&gt; Highwinds was failing, and we did not run any other tests during the time of the incident.
    We know it was DNS because Highwinds told us.
    Highwinds informed customers about the incident not long after it started and proactively kept them informed until it was resolved.
    Highwinds deserves credit for this behavior.
    Unfortunately, it&amp;rsquo;s not common for CDN providers to inform customers about content delivery performance degradations.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h3&gt;Not all Highwinds endpoints were broken&lt;/h3&gt;
&lt;p&gt;
    We don&amp;rsquo;t know how many endpoints of Highwinds were unavailable due to the DNS issue.
    TurboBytes has two endpoints with Highwinds: one for HTTP-only traffic and one for SSL-enabled traffic.
    Our HTTP-only endpoint was impacted but the other was just fine.
&lt;/p&gt;
&lt;p&gt;
    Was your Highwinds endpoint broken on Dec 14? If so, how did you spot it and what action did you take to mitigate the problem?
    We always welcome your thoughts, ideas and feedback. Please share below in the comments section.
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>CDNs struggle to deliver great performance in all of France</title>
      <link>https://www.turbobytes.com/blog/cdn-performance-problems-france/</link>
      <pubDate>Tue, 09 Dec 2014 09:21:00 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/cdn-performance-problems-france/</guid>
      <description>&lt;p&gt;
    About two weeks ago, &lt;a href=&#34;/blog/cloudfront-cdn-global-outage/&#34;&gt;CloudFront had a major global outage&lt;/a&gt; lasting ~ 90 minutes.
    That was an exceptionally widespread outage and those don&#39;t happen too often, but regional CDN performance degradations are by no means exceptional. 
    In the spotlight today: France.
&lt;/p&gt;
&lt;p&gt;
    France is a country where CDNs struggle to deliver solid performance consistently to everybody. 
    The performance problems vary from an occasional hiccup to frequent slowdowns and elevated failratio.
    Let&#39;s look at a recent timeframe of 5 days, Nov 29 - Dec 03 2014.
&lt;/p&gt;
&lt;h2&gt;Five days, five CDNs struggling&lt;/h2&gt;
&lt;p&gt;
    &lt;img src=&#34;/images/france-cdns-failing-20141129-20141203.png&#34; width=&#34;640&#34; height=&#34;460&#34; alt=&#34;CDN performance failratio in France - Nov Dec 2014&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    The chart shows the Failratio in France for CDNetworks, Akamai, Limelight, Tata Communications (formerly: Bitgravity) and Fastly, based on a sample of our RUM data. 
    Our RUM sends a Fail beacon if the browser could not completely fetch a highly cacheable 15 KB static object from CDN within 5 seconds.
&lt;/p&gt;
&lt;p&gt;
    The data shows elevations in the Failratio for Akamai, CDNetworks, Limelight and Tata Communications on multiple days for part(s) of the day.
    Fastly is a different case: its Failratio was elevated all day, every day.
    Let&#39;s zoom in on Akamai, CDNetworks and Fastly to show the problems occur not in all of France, but only on one or a few networks.
&lt;/p&gt;
&lt;h2&gt;Akamai on AS8228 and AS15557&lt;/h2&gt;
&lt;p&gt;
    Akamai&#39;s performance on two networks of &lt;a href=&#34;http://www.sfr.com&#34;&gt;SFR&lt;/a&gt; is much worse than on the other networks in France. 
    The charts below give a clear picture. A vertical blue line was drawn for every test that passed (browser fetched the 15 KB object from the CDN within 5000 ms) and a vertical red line was drawn for every test that failed to finish within 5000 ms. 
    The top chart is for all of France, with the two SFR networks excluded. 
    There is a lot more blue than red and this is what you expect from a CDN.
    &lt;img src=&#34;/images/akamai-fr-excl-AS8228-AS15557-20141130.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;240&#34; alt=&#34;Akamai performance failratio in France - Nov 30 2014&#34;&gt;
    For &lt;a href=&#34;http://bgp.he.net/AS8228&#34; title=&#34;AS8228 CEGETEL-AS Societe Francaise du Radiotelephone S.A&#34;&gt;AS8228&lt;/a&gt; and &lt;a href=&#34;http://bgp.he.net/AS15557&#34; title=&#34;AS15557 LDCOMNET Societe Francaise du Radiotelephone S.A&#34;&gt;AS15557&lt;/a&gt; however, the charts look very different.
    Our RUM took more measurements on AS15557 than on AS8228, so the &#39;shades&#39; of red and blue are different, but we see about the same on both networks: 
    between ~18:15 and 21:15, there is more red than blue.
    &lt;img src=&#34;/images/akamai-fr-as15557-20141130.png&#34; class=&#34;m-t-20&#34; width=&#34;640&#34; height=&#34;240&#34; alt=&#34;Akamai performance failratio on AS15557 - Nov 30 2014&#34;&gt;
    &lt;img src=&#34;/images/akamai-fr-as8228-20141130.png&#34; class=&#34;m-t-20&#34; width=&#34;640&#34; height=&#34;240&#34; alt=&#34;Akamai performance failratio on AS8228 - Nov 30 2014&#34;&gt;
&lt;/p&gt;
&lt;h2&gt;CDNetworks on AS3215&lt;/h2&gt;
&lt;p&gt;
    &lt;a href=&#34;http://www.orange.fr/&#34;&gt;Orange&lt;/a&gt; is a large ISP in France and on Dec 3 2014, we received the highest number of beacons per ASN from clients on their &lt;a href=&#34;http://bgp.he.net/AS3215&#34; title=&#34;AS3215 Orange S.A.&#34;&gt;AS3215&lt;/a&gt; network. The first chart below shows CDNetworks&#39; Failratio for France, excluding AS3215. That looks normal.
    &lt;img src=&#34;/images/cdnetworks-fr-excl-as3215-.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;240&#34; alt=&#34;CDNetworks performance failratio in France - Dec 2014&#34;&gt;
    On AS3215, the CDN&#39;s performance is normal for most of the day, but between ~19:45 and ~21:00 there is a lot more red. Not good.
    &lt;img src=&#34;/images/cdnetworks-fr-as3215-.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;240&#34; alt=&#34;CDNetworks performance failratio on AS3215 - Dec 2014&#34;&gt;    
&lt;/p&gt;
&lt;h2&gt;Fastly on AS12322&lt;/h2&gt;
&lt;p&gt;
    And then there is Fastly. Fastly&#39;s Failratio is fine on all the major networks in France, except on &lt;a href=&#34;http://bgp.he.net/AS12322&#34; title=&#34;AS12322 PROXAD Free SAS&#34;&gt;AS12322&lt;/a&gt; of ISP &lt;a href=&#34;http://www.free.fr/&#34;&gt;Free&lt;/a&gt;.
    Two charts again, the top showing France excl. AS12322 and below that is a chart for AS12322 only. The Failratio on that network is a stunning 50% throughout the day.
    &lt;img src=&#34;/images/fastly-fr-excl-as12322-20141201.png&#34; class=&#34;m-t-20&#34; width=&#34;640&#34; height=&#34;240&#34; alt=&#34;Fastly performance failratio in France excl As12322 - Dec 1 2014&#34;&gt;
    &lt;img src=&#34;/images/fastly-fr-as12322-20141201.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;240&#34; alt=&#34;Fastly performance failratio in France on As12322 - Dec 1 2014&#34;&gt;
    What is most striking: Fastly&#39;s performance on that network has been that bad for a long time. The chart below - for all of France, not just AS12322 - shows Fastly&#39;s Failratio having been about the same at least since Nov 9. 
    &lt;img src=&#34;/images/france-cdns-failing-20141109-20141203-.png&#34; class=&#34;m-t-20&#34; width=&#34;640&#34; height=&#34;370&#34; alt=&#34;CDN performance failratio in France - Nov 2014&#34;&gt;
&lt;/p&gt;
&lt;h2&gt;Other CDNs have their bad moments too&lt;/h2&gt;
&lt;p&gt;
    As the last chart shows, it&#39;s not just Fastly, Akamai and CDNetworks. We already mentioned Limelight and Tata Communications earlier in this article, and you can see EdgeCast is not fully in the clear either, with Failratio elevations on Nov 10 and Nov 13. Let&#39;s conclude with this statement: &lt;em&gt;&#34;no CDN delivers excellent performance everywhere all the time&#34;&lt;/em&gt;. Also true: &lt;em&gt;&#34;All CDNs suffer from significant performance degradations every now and then&#34;&lt;/em&gt;. 
&lt;/p&gt;
&lt;p&gt;
    Do you have solid insight in the real world performance of your CDN(s)? Do you use one CDN or multiple? Speak up, we love feedback on our blog and interaction with our readers. Please share below in the comments section.
&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Global outage of AWS CloudFront CDN on Nov 26 2014</title>
      <link>https://www.turbobytes.com/blog/cloudfront-cdn-global-outage/</link>
      <pubDate>Thu, 27 Nov 2014 15:30:34 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/cloudfront-cdn-global-outage/</guid>
      <description>&lt;p&gt;
    The CDN of Amazon Web Services, CloudFront, experienced a major global outage yesterday due to DNS issues. 
    The CloudFront outage lasted circa 90 minutes, starting at ~ 00:15 UTC (04:15 PM PST). 
    On &lt;a href=&#34;https://twitter.com/search?f=realtime&amp;q=cloudfront%20down&amp;src=typd&#34;&gt;Twitter&lt;/a&gt; and other social media channels (incl. &lt;a href=&#34;https://news.ycombinator.com/item?id=8665367&#34;&gt;Hacker News&lt;/a&gt;) people started talking about it, and the news was picked up by &lt;a href=&#34;http://thenextweb.com/insider/2014/11/27/amazon-cloudfront-outage-causing-issues-many-services/&#34;&gt;The Next Web&lt;/a&gt;, &lt;a href=&#34;http://www.forbes.com/sites/benkepes/2014/11/26/in-response-to-azure-outages-amazon-has-its-day-of-doom-aws-cloudfront-suffers-global-issue/&#34;&gt;Forbes&lt;/a&gt; and other media. The AWS status page did not mark CloudFront as being in trouble until 45 minutes after the problems started, and surprisingly the incident was only marked as &#39;informational&#39; rather than with a more severe status.&lt;br&gt;
    &lt;img src=&#34;/images/cloudfront-outage-20141126-aws-status-page.png&#34; class=&#34;m-t-20&#34; width=&#34;432&#34; height=&#34;160&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    The CloudFront outage had a big impact around the globe. 
    Thousands of websites delivered a poor user experience and undoubtedly suffered from a drop in conversion, clicks, sales etc. 
    &lt;!-- Also, some ad networks could not deliver ads and analytics trackers and widgets failed to load too. --&gt;
    But the impact does not stop there: banner ads, analytics trackers and widgets did not load, as quite a few of these &#39;third party content&#39; providers use CloudFront CDN. 
&lt;/p&gt;
&lt;h2&gt;Real world performance of CloudFront&lt;/h2&gt;
&lt;p&gt;
    Here at TurboBytes, we monitor performance of CDNs with RUM (Real User Monitoring) all the time from all across the globe, and use the data to power our Multi-CDN service. 
    Our customers add our non-blocking JS to their site, which executes after page load. It then silently in the background fetches a 15 KB object from a few CDNs and beacons the load time details to our servers. If the 15 KB object failed to load within 5 seconds, we beacon a Fail.
&lt;/p&gt;
&lt;p&gt;
    Our RUM clearly shows how big the CloudFront outage was.
    &lt;img src=&#34;/images/cloudfront-global-outage-20141126-failratio.png&#34; class=&#34;m-t-20&#34; width=&#34;640&#34; height=&#34;360&#34;&gt;
    The Failratio went sky high, but CloudFront did not reach 1 (=fail all the time). Why not? Well, our RUM data does not tell us exactly what was going on, but from all the info we gathered online, it seems the authoritative DNS of cloudfront.net was not responding *most of the time*. Resolvers often do retries, and apparently, sometimes, one of the authoritative DNS servers would send a good response and the browser would then be able to connect to CloudFront. If indeed the DNS lookup was successful, it was on average much slower than normal:
    &lt;img src=&#34;/images/cloudfront-global-outage-20141126-dns-median.png&#34; class=&#34;m-t-20&#34; width=&#34;640&#34; height=&#34;360&#34;&gt;
&lt;/p&gt;
&lt;p&gt;
    While looking at the data, we created a visualization that perhaps makes it even clearer how bad things went:
    &lt;img src=&#34;/images/cloudfront-global-outage-20141126-scatter.png&#34; class=&#34;m-t-20 m-b-20&#34; width=&#34;640&#34; height=&#34;230&#34;&gt;
    In this chart, a vertical blue line was drawn for every test that passed (browser fetched the 15 KB object from CloudFront within 5000 ms) and a vertical red line was drawn for every test that failed to finish within 5000 ms. Before the problems started, it&#39;s clear there is a lot more blue than red, and during the outage, red has the upper hand. 
&lt;/p&gt;
&lt;p&gt;
    Was your business impacted by this CloudFront outage? How will you prepare for a similar fail of your CDN in the future? We welcome your thoughts, ideas and feedback. Please share below in the comments section.

&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Meet us at Velocity Barcelona</title>
      <link>https://www.turbobytes.com/blog/meet-us-velocity-barcelona/</link>
      <pubDate>Tue, 14 Oct 2014 10:41:24 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/meet-us-velocity-barcelona/</guid>
      <description>&lt;p&gt;
Want to learn how to make your site faster and stronger? The O’Reilly Velocity Conference, happening 17 -19 November in Barcelona, Spain, is the place where DevOps, web operations, and performance professionals (from Fortune 500 companies to exciting startups) gather for a legendary learning and networking experience that explores why a faster, stronger web is no longer an option, but a necessity. Hear from the best speakers in the industry, who&#39;ll delve into topics ranging from hardcore math and statistics to monitoring, clustering, analytics, and organizational culture.
&lt;/p&gt; 
&lt;p&gt;
The TurboBytes team will be present on the two conference days (Tue and Wed) and we&#39;d love to talk to you about CDN and improving web performance.
&lt;/p&gt;
&lt;p&gt;
Velocity is great for meeting new people and learning from the pros.
&lt;ul style=&#34;list-style:disc; padding-left:40px; margin-bottom:20px;&#34;&gt;
&lt;li&gt;Fantastic lineup of speakers, incl. &lt;a href=&#34;http://velocityconf.com/velocityeu2014/public/schedule/speaker/88031?cmp=mp-velocity-confreg-home-vleu14_turbobytes&#34;&gt;Ilya Grigorik&lt;/a&gt;, &lt;a href=&#34;http://velocityconf.com/velocityeu2014/public/schedule/speaker/137186?cmp=mp-velocity-confreg-home-vleu14_turbobytes&#34;&gt;Andrew Betts&lt;/a&gt;, &lt;a href=&#34;http://velocityconf.com/velocityeu2014/public/schedule/speaker/94006?cmp=mp-velocity-confreg-home-vleu14_turbobytes&#34;&gt;Tammy Everts&lt;/a&gt;, and &lt;a href=&#34;http://velocityconf.com/velocityeu2014/public/schedule/speakers?cmp=mp-velocity-confreg-home-vleu14_turbobytes&#34;&gt;many more&lt;/a&gt;
&lt;li&gt;Lots of networking opportunities, parties, meeting new people, recruiting team members ...
&lt;li&gt;20% discount with our special discount code TURBO
&lt;/ul&gt;
We hope to see you in Barcelona. &lt;a href=&#34;http://oreil.ly/1pWXdFj&#34;&gt;Register for Velocity Barcelona 2014&lt;/a&gt; with 20% discount (discount code: TURBO).
&lt;/p&gt;
&lt;p&gt;
Until October 20, you have a chance to win a &lt;a href=&#34;http://www.aaronpeters.nl/blog/velocity-barcelona-2014-raffle&#34;&gt;free 2-day Velocity Barcelona conference pass&lt;/a&gt;! 
&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>SuperTag goes global, powered by TurboBytes</title>
      <link>https://www.turbobytes.com/blog/supertag-goes-global-powered-by-turbobytes/</link>
      <pubDate>Wed, 11 Jun 2014 13:36:52 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/supertag-goes-global-powered-by-turbobytes/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://supertag.datalicious.com/&#34;&gt;SuperTag&lt;/a&gt; is the leading tag management platform in Australia and rapidly expanding its business on other continents. SuperTag needed a strong content delivery solution and was already convinced multi-CDN is the only way forward. After testing with various solution providers, SuperTag selected TurboBytes.
&lt;/p&gt;
&lt;p&gt;
&#34;We were using a multi-CDN solution but wanted better: easy to use, excellent performance globally including in South America, and we wanted the solution to include the CDNs: a one-stop shop&#34;, says SuperTag lead engineer Jeremie Leca.
&lt;/p&gt;
&lt;p&gt;
SuperTag is a great customer for TurboBytes. They truly care about performance and content delivery speed &amp;amp; reliability. We feel honored they have chosen to work with us. 
&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;SuperTag gives customers insight into CDN usage&lt;/h2&gt;
&lt;p&gt;
The team at SuperTag is always busy making their service better for their customers.
In a &lt;a href=&#34;http://blog.datalicious.com/supertag-release-v2-8-4/&#34;&gt;recent release&lt;/a&gt;, SuperTag implemented a tracker that allows the customer to see how much of the SuperTag CDN their account is using. TurboBytes provides SuperTag with the raw access logs from the CDNs and SuperTag presents the data in an easy-to-use interface.
&lt;/p&gt;&lt;/p&gt;

&lt;p&gt;&lt;h2&gt;About SuperTag&lt;/h2&gt;
&lt;p&gt;
SuperTag was created by Datalicious approximately 4 years ago and currently employs 15 people (and they&amp;rsquo;re &lt;a href=&#34;http://www.datalicious.com/about/jobs&#34;&gt;hiring&lt;/a&gt;!).
SuperTag features include enterprise-level workflow and security, support for synchronous and asynchronous tags, and the ability to implement complex web tagging solutions such as media attribution, cross-domain tracking and conversion tracking.
Learn more about SuperTag on &lt;a href=&#34;http://supertag.datalicious.com&#34;&gt;supertag.datalicious.com&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>SoundCloud executes multi-CDN strategy with TurboBytes</title>
      <link>https://www.turbobytes.com/blog/soundcloud-executes-multi-cdn-strategy-turbobytes/</link>
      <pubDate>Tue, 24 Sep 2013 14:22:41 &#43;0000</pubDate>
      
      <guid>https://www.turbobytes.com/blog/soundcloud-executes-multi-cdn-strategy-turbobytes/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://soundcloud.com/&#34;&gt;SoundCloud&lt;/a&gt;, the world&#39;s leading social sound platform, has selected TurboBytes to help them improve the speed and reliability of their content delivery globally. TurboBytes&#39; multi-CDN platform constantly measures the real world performance of SoundCloud&#39;s CDNs and makes sure traffic is automatically routed to the best performing CDN.
&lt;/p&gt;
&lt;p&gt;
&#34;SoundCloud has a global audience of millions of users on desktop and mobile and they expect our content to load fast, always. We have researched using multiple CDNs for some time and recently decided to actually do it with TurboBytes, starting with our mobile site. Their platform works smoothly and effectively and the team is great&#34;, says SoundCloud VP of Engineering Alexander Grosse.
&lt;/p&gt;
&lt;p&gt;
We are very excited SoundCloud has selected TurboBytes to help execute their multi-CDN strategy. We have met the SoundCloud people regularly at conferences and it&#39;s always a pleasure to talk about technology, CDN and performance. We feel honored they have chosen to work with us. 
&lt;/p&gt;
                

&lt;p&gt;&lt;h2&gt;About SoundCloud&lt;/h2&gt;
&lt;p&gt;
SoundCloud is the world&amp;rsquo;s leading social sound platform where anyone can create sounds and share them everywhere.
Recording and uploading sounds to SoundCloud lets people easily share them privately with their friends or publicly to blogs, sites and social networks.
SoundCloud is headquartered in Berlin, Germany and has offices in San Francisco, London and Sofia.
Learn more about SoundCloud on &lt;a href=&#34;http://soundcloud.com/&#34;&gt;soundcloud.com&lt;/a&gt;.
&lt;/p&gt;&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>