The problem with speedtests

Imagine this scenario. Outside your house, the most awesome superhighway has been built. It has a speed limit of 120 miles per hour. You calculate that at those speeds you can get to and from work 20 minutes earlier. Life is good. Monday morning comes, you hop in your 600-horsepower Nissan GT-R, put on some new leather driving gloves, and crank up some good driving music. You pull onto the dedicated on-ramp from your house and are quickly cruising at 120 miles an hour. You make it into work before most anyone else. Life is good.

Near the end of the week, you notice more and more of your neighbors and co-workers using this new highway. Things are still fast, but you can’t get up to speed like you could earlier in the week. As you ponder why, you notice you are coming up on the off-ramp to your work. Traffic is backed up. Everyone is trying to get to the same place. As you wait in line to get off the superhighway, you notice folks passing you by, continuing down the road at high speed. You surmise your off-ramp must be congested because it is getting used more now.

Speedtest servers work the same way. A speedtest server is a destination on the information superhighway. Man, that’s an old term. To understand how these servers work, we need a quick understanding of how the Internet works. The Internet is basically a bunch of virtual cities connected together. Your local ISP delivers a signal to you via wireless, fiber, or some other medium. When it leaves your house, it travels to the ISP’s equipment, is aggregated with your neighbors’ traffic, and is sent over faster lines to larger cities. It’s just like a road system. You may get access via a gravel road, which turns into a 2-lane blacktop, which then may turn into a 4-lane highway, and finally a superhighway. The roads you take depend on where you are going. Your ISP may not have much control over how the traffic flows once it leaves their network.
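If you want to see the “roads” your own traffic takes, running a traceroute toward two different destinations will usually show two different paths. Here is a minimal sketch of that idea; it assumes a Unix-like machine with the standard traceroute tool installed, and the hostnames are placeholders, not real speedtest servers.

```python
# A minimal sketch: run the system traceroute tool toward two different
# destinations to see how the "roads" (router hops) differ per destination.
# Assumes a Unix-like system with `traceroute` installed; the hostnames
# below are placeholders, not real speedtest servers.
import subprocess

for destination in ("speedtest-a.example.net", "speedtest-b.example.net"):
    print(f"--- path to {destination} ---")
    # -n skips DNS lookups so each hop prints quickly as an IP address
    subprocess.run(["traceroute", "-n", destination], check=False)
```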

Bottlenecks can happen anywhere. Anything from fiber optic cuts, oversold capacity, and routing issues to plain old unexpected usage. Why are these important? All of these can affect your results and can be totally out of your control and your ISP’s. They can also be totally your ISP’s fault.

They can also be your fault, just like your car can be. An underpowered router may struggle to keep up with your connection. Much like a moped on the above superhighway can’t keep up with a 600-horsepower car, your router might not be able to keep up either. Other things, such as computer viruses and low-performing components, can cause issues as well.

Just about any network can become a speedtest.net node, or a node for one of the other speedtest sites. These networks have to meet minimum requirements, but there is no indicator of how utilized these servers are. A network could put one up, and it could be 100 percent utilized when you go to run a test. This doesn’t mean your ISP is slow, just that the off-ramp to that server is slow.
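One way to see this for yourself is to test against more than one destination and compare. The sketch below is not the Ookla client; it simply times the same-sized download from two different test servers, with placeholder URLs you would swap for real test files hosted on different networks.

```python
# A minimal sketch showing why one server's result isn't the whole story:
# time the same download from two different destinations and compare.
# The URLs are placeholders, not real test servers.
import time
import urllib.request

TEST_FILES = [
    "https://server-one.example.net/100MB.bin",
    "https://server-two.example.net/100MB.bin",
]

for url in TEST_FILES:
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        size_bytes = len(response.read())
    elapsed = time.monotonic() - start
    mbps = (size_bytes * 8) / elapsed / 1_000_000
    print(f"{url}: {mbps:.1f} Mbps")
```

If one destination is far slower than the others, the bottleneck is likely near that destination, not on your connection.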

The final thing we want to talk about is the utilization of your Internet pipe from your ISP. This is something most don’t take into consideration. Let’s go back to our on-ramp analogy. Your ISP is selling you a connection to the information superhighway. Say they are selling you a 10 meg download connection. If you have a device in your house streaming an HD Netflix stream, which is typically 5 megs or so, that means you only have 5 megs available for a capacity test while that HD stream is happening. A speedtest only tests your currently available capacity. Many folks think a speedtest somehow stops all the traffic on your network, runs the test, and then starts the traffic back up. It doesn’t work that way. Your available capacity is only tested at that point in time. The same is true for any point between you and the speedtest server. Remember our earlier analogy about slowing down when you got to work because there were so many people trying to get there. They exceeded the capacity of that destination. However, that does not mean your connection is slow; people were zooming past you on their way to less congested destinations.
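The math here is simple but worth spelling out. A back-of-the-envelope sketch using the numbers from the paragraph above:

```python
# A speedtest can only see whatever capacity is left over at that moment,
# not the full pipe. Numbers are the illustrative ones from the article.
provisioned_mbps = 10   # what the ISP sold you
netflix_hd_mbps = 5     # rough bitrate of one HD stream (approximate)

available_mbps = provisioned_mbps - netflix_hd_mbps
print(f"Capacity a speedtest could see right now: ~{available_mbps} Mbps")
```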

This is why results from a single server should be taken with a grain of salt. They are a useful tool, but not an absolute. The speedtest server is just a destination. That destination can have bottlenecks that others don’t. Even after this long article, there are many other factors we didn’t touch on, like peering, the access technology used, and rate limits, which can also affect your Internet speed to a given destination.

“Glue addresses” in networking

Imagine this scenario. You have bought an IP or DIA circuit from someone who is going to provide your network with bandwidth. Typically this company will make the connection, IP-wise, over a /30 or even a /29 of IP space. I have called this the “glue address” for many years. This is the IP address that binds (the glue reference) you to the other provider’s network. They can route IP blocks to you over that glue address, or you can establish BGP across it, but it is the static address that binds the two networks together.
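To make that concrete, here is a minimal sketch using Python’s ipaddress module. A /30 leaves exactly two usable addresses, one for each side of the hand-off; the prefix below is documentation space, not a real assignment.

```python
# A /30 "glue" subnet has two usable host addresses: one for the
# provider's side, one for the customer's side of the hand-off.
import ipaddress

glue = ipaddress.ip_network("198.51.100.0/30")
provider_side, customer_side = list(glue.hosts())

print(f"Glue subnet:      {glue}")
print(f"Provider address: {provider_side}")
print(f"Customer address: {customer_side}")
# Routed blocks or a BGP session would then point at the far side's
# glue address as the next hop / neighbor.
```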

Some network folks call this a peering address. This isn’t wrong, but it can imply you are doing BGP peering across the address. You aren’t always doing BGP across the glue address.

#routinglight #packetsdownrange

Remote Peering

Martin J. Levy from Cloudflare did a presentation about remote peering possibly being a bad thing. In this presentation, he brings up several valid points.

https://www.globalpeeringforum.org/pastEvents/gpf14/presentations/Wed_2_MartinLevy_remote_peering_is_bad_for.pdf

Some thoughts of my own.

Yes, remote peering is happening. One thing touched upon is layer3 vs. layer2 traffic. We at MidWest-IX only allow remote peering at a layer2 level unless it is groups like routeviews.org or other non-customer traffic situations.

Many providers are overselling their backbone and transit links. This oversubscription means access to content networks, in places that do not have an exchange or do not have the content locally, can suffer through no fault of the ISP or the content provider. We have situations with content folks like Netflix, who do not join for-profit IXes at the moment, keeping the content further away from customers. These customers are reaching Netflix through the same transit connections many other providers are. This can result in congested ports and poor quality for the customer. The ISP is left trying to find creative ways to offload that traffic. An Internet Exchange is ideal for these companies because cross-connect charges within data centers are on the rise.
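For a sense of scale, here is a rough sketch of how oversubscription is usually reasoned about: total capacity sold versus the actual uplink. The numbers are made up for illustration, not taken from any real ISP.

```python
# Oversubscription: how much capacity has been sold versus what the
# uplink can actually carry. All figures are illustrative.
customers = 1000
sold_per_customer_mbps = 100      # plan speed sold to each customer
uplink_capacity_mbps = 10_000     # a single 10 Gbps transit/backbone link

sold_total_mbps = customers * sold_per_customer_mbps
ratio = sold_total_mbps / uplink_capacity_mbps
print(f"Oversubscription ratio: {ratio:.0f}:1")
# Fine when average use is low; painful when everyone hits the same
# congested path to a content network at peak hours.
```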

When we first turned up MidWest-IX, now known as FD-IX, in Indianapolis, we used a layer2 connection to Chicago to bring some of the most needed peers down to our members. This connection allowed us to kick-start our IX. We had one member who, after peering with their top talkers, actually saw an increase in bandwidth. The data told the member that their upstream providers had a bottleneck issue. They had suspected this for a while, but this confirmed it. Either the upstream provider had a congested link, or their peering ports were getting full.

As content makes its way closer, remote peering becomes less and less of an issue. There are many rural broadband companies just now getting layer2 transport back to carrier hotels. These links may stretch a hundred miles or more to reach the data center. The rural broadband provider will probably never get a carrier hotel close to them. As they grow, they might be able to afford to host caching boxes. The additional cost and pipe size to fill the caches is also a determining factor. For many, the cost of hosting and filling multiple cache boxes outweighs the latency of a layer2 circuit back to a carrier hotel.
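That latency is smaller than many people assume. Here is a quick sanity check, assuming light travels roughly 200,000 km/s in fiber (about two-thirds of its speed in a vacuum); real circuits add switch and serialization delay and rarely follow a straight line.

```python
# Rough propagation delay for hauling traffic ~100 miles back to a
# carrier hotel over fiber. Assumes ~200,000 km/s in glass.
circuit_miles = 100
circuit_km = circuit_miles * 1.609
fiber_speed_km_per_s = 200_000

one_way_ms = circuit_km / fiber_speed_km_per_s * 1000
print(f"One-way propagation: ~{one_way_ms:.2f} ms")   # about 0.8 ms
print(f"Round trip:          ~{one_way_ms * 2:.2f} ms")  # about 1.6 ms
```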

I think remote peering is necessary to bypass full links, and it gives the ISP more control over their bandwidth. In today’s race to cut corners to improve the bottom line, having more control over your own network is a good thing. By doing a layer2 remote peer you might actually cut down on your latency, even if your upstream ISP is peered or has cache boxes.