Totally anecdotal evidence, but I was in a rural NY house served by DSL for the past 6 months. The DSL had consistent packet loss between 4 and 6%. The only video service that could handle this level of packet loss well was Amazon Prime. Netflix couldn't even load its browse screen until the past two weeks, when something changed and suddenly Netflix could handle the high packet loss as well as Amazon Prime.
Separate anecdote - I worked on an in-flight satellite wifi project and I was surprised at how well both YouTube and Netflix worked over a medium-bandwidth/high-latency connection.
Granted, we had specific QoS/traffic shaping to improve reliability without gobbling up all the bandwidth (streaming Netflix was an advertised feature of the wifi service), but it still seemed like magic.
When Plex rolled out its auto-quality/auto-bandwidth adjustment, it actually worked very well over airplane satellite wifi as well. I watched a few things from my own server.
I'm amazed that service allowed streaming though...
YouTube has gotten way better in the past couple of years. When they first launched DASH streaming, it was terrible on high-latency international connections. If a US-based content creator uploaded a video and you were the first to view it in your region, you could actually notice how it was populating the CDN and it was unwatchable without disabling DASH and using the old-fashioned buffered player. These days it's flawless for me in nearly every situation.
This sounds like an MTU issue. TCP takes care of mere (e.g. probabilistic) packet loss OK. MTU issues have actually crept back up because TLS exacerbates any underlying MTU problems. IPv6 doubly so (when any hops - especially yours - don't follow path MTU discovery requirements).
TCP doesn't take care of packet loss. What TCP does is make sure your data is not lost, even if you have 99% packet loss. On the flip side, that means that if TCP can't deliver a single packet (say, one out of a billion), the whole stream stalls on that one packet...
Which is why TCP is a horrible choice for any streaming service and a horrible choice for lossy connections, and I would be quite surprised if Netflix relied on it. UDP is the perfect choice for streaming, since video decoders can handle packet loss pretty well. The rest you can achieve with a good tradeoff between Reed-Solomon codes and key framing.
Aren’t MTU issues typically only up to a router? As in, even if the parent had a different MTU than Netflix uses, it wouldn’t matter since their router or the ISP’s router will transform packets between their appropriate MTUs?
And if this is true, then how could it be that Amazon works without problem and Netflix doesn’t?
If I had to guess they probably had timeouts that were too aggressive. Client timeouts are a very hard problem because it is difficult to tell the difference between "working, but slowly" and "something went wrong, the best bet is to try again".
Back in the day we used to have timeouts based on individual reads/writes, which often better answer "is this HTTP request making progress". However, the problem with these sorts of timeouts is they don't compose well, so most people end up having an end-to-end deadline.
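A minimal Go sketch of the two styles, assuming a hypothetical download URL (nothing service-specific): an end-to-end deadline via context.WithTimeout alongside a per-read "is it still making progress?" stall timer.

    package main

    import (
        "context"
        "io"
        "net/http"
        "time"
    )

    // progressReader pushes a stall deadline out every time a Read makes
    // progress, so a slow-but-working transfer survives while a stalled one
    // gets cancelled.
    type progressReader struct {
        r     io.Reader
        idle  time.Duration
        timer *time.Timer
    }

    func (p *progressReader) Read(b []byte) (int, error) {
        n, err := p.r.Read(b)
        if n > 0 {
            p.timer.Reset(p.idle) // progress was made, reset the stall timer
        }
        return n, err
    }

    func main() {
        // End-to-end deadline: the whole request is cancelled after 30s,
        // no matter how much progress it is making.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        req, _ := http.NewRequestWithContext(ctx, "GET", "https://example.com/big-file", nil) // hypothetical URL
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return
        }
        defer resp.Body.Close()

        // Per-read timeout: cancel only if no bytes arrive for 10 seconds.
        stall := time.AfterFunc(10*time.Second, cancel)
        defer stall.Stop()
        body := &progressReader{r: resp.Body, idle: 10 * time.Second, timer: stall}

        if _, err := io.Copy(io.Discard, body); err != nil {
            // stalled, hit the end-to-end deadline, or a real transport error
        }
    }

The end-to-end deadline is still the simpler thing to compose across call layers, which is exactly the trade-off described above.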
I doubt Netflix is doing anything tricky with UDP anywhere in their stack.
QUIC doesn't count because it's not tricky.
I'd love to see a source for this but seeing as YouTube works great over regular HTTP and TCP, I doubt anyone else is out in the weeds trying some custom UDP solution and reinventing wheels.
Slightly unrelated, but does the packet loss happen all the time or only when you're close to the line's maximum?
Used to have similar problems with an ADSL line, but found if I limited the line (both up and down) I could find a magic number where the packet loss went away. (Well, most of the time :))
Though it did need to be tuned for different times of the day, i.e. high-congestion times needed it to be lower.
Though technically it shouldn't be your problem :(
This is normal if your router doesn't prioritize control traffic. A rate limit allows all the ACKs to leave your network normally instead of getting queued up.
I'd believe it. When you know that there is going to be packet loss (whether from the user's spotty internet or from internal load-shedding), building your applications to be as resilient as possible to it makes sense. The infrastructure experimentation platform mentioned in the article is probably helpful for sniffing out potential trouble-spots in applications.
Any chance there weren't any line filters on the POTS equipment? I haven't had DSL in years but when I did I had to have filters on any telephone devices connected to the same line.
How did you measure 4 to 6% packet loss? Do you have scripts that ping some server and collect packet-loss data? I would like to collect such data for my home network and am curious.
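Not the original poster, but a minimal sketch of one way to collect that data, assuming a Linux/macOS ping and an arbitrary target host: shell out to the system ping once a minute and scrape the loss percentage from its summary line.

    package main

    import (
        "log"
        "os/exec"
        "regexp"
        "time"
    )

    func main() {
        lossRe := regexp.MustCompile(`([\d.]+)% packet loss`)
        for {
            // 20 probes per sample and 8.8.8.8 as the target are arbitrary choices.
            out, err := exec.Command("ping", "-c", "20", "8.8.8.8").CombinedOutput()
            // ping exits non-zero when packets are lost, but the summary line
            // is usually still present, so parse regardless of err.
            if m := lossRe.FindSubmatch(out); m != nil {
                log.Printf("packet loss: %s%%", m[1])
            } else {
                log.Printf("could not parse ping output (err=%v)", err)
            }
            time.Sleep(time.Minute)
        }
    }

Logging that to a file and graphing it afterwards is enough to see whether the loss tracks time of day or line utilization.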
There has been the good kind of capitalism going on between the video streaming services before. Earlier on I remember Netflix was way better than Amazon, but Amazon has upped their game since.
When deciding what mechanism to employ to load shed, you should keep in mind the layer at which you are load shedding. Modern distributed systems are composed of many layers. You can do it at the load balancer, at the OS level, or in the application logic. This becomes a trade-off: the closer you get to the core application logic, the more information you have to make a decision. On the other hand, the closer you get, the more work you have already performed and the more cost there is to throwing away the request.
You may employ techniques more complex than a simple bucketing mechanism, such as closely observing the degree to which clients are exceeding their baseline. However, these techniques aren't free. Even the cost of simply throwing away the request can overwhelm your server, and the more steps you add before the shedding decision, the lower the maximum load you can tolerate before going to 0 availability. It's important to understand at what point this happens when designing a system that takes advantage of this technique.
For example, if you do it at the OS level, it is a lot cheaper than leaving it to the server process. If you choose to do it in your application logic, think carefully about how much work is done for the request before it gets thrown away. Are you validating a token before you make your decision?
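A minimal sketch of that trade-off at the application layer, assuming a plain Go net/http service (the handler wiring and the in-flight limit are made up): the shedding decision is a single atomic counter check, made before any token validation or body parsing has spent work on the request.

    package main

    import (
        "net/http"
        "sync/atomic"
    )

    // shed rejects requests beyond maxInFlight before any expensive work
    // (token validation, body parsing, backend calls) has been done on them.
    func shed(maxInFlight int64, next http.Handler) http.Handler {
        var inFlight int64
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if atomic.AddInt64(&inFlight, 1) > maxInFlight {
                atomic.AddInt64(&inFlight, -1)
                w.WriteHeader(http.StatusServiceUnavailable) // cheap rejection: no auth, no parsing
                return
            }
            defer atomic.AddInt64(&inFlight, -1)
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Token validation and the real work happen only here, after the
            // shedding decision has already been made.
            w.Write([]byte("ok"))
        })
        http.ListenAndServe(":8080", shed(512, backend))
    }

Doing the same thing at the OS or network level is cheaper still, but it can no longer see which requests are worth keeping.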
You touch on the key thing that people sometimes overlook. Whatever you are doing to serve errors has to be strictly less expensive than serving successes. If your load shedding error path does things like logging synchronously to a file (as you might get from a logging library that synchronizes outputs for warnings and errors, but not information), taking a lock to update a global error counter, or formatting stack traces in exceptions, it's possible that load shedding will _cause_ the collapse of your service instead of preventing it.
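For illustration, a sketch of what a deliberately cheap rejection path can look like in Go (the route and sampling ratio are arbitrary): a canned status code, a lock-free counter, and sampled rather than per-request logging.

    package main

    import (
        "log"
        "net/http"
        "sync/atomic"
    )

    var shedCount atomic.Int64

    // serveShedError avoids the pitfalls above: no synchronous per-request
    // logging, no stack trace formatting, no global mutex.
    func serveShedError(w http.ResponseWriter, r *http.Request) {
        if n := shedCount.Add(1); n%1000 == 0 { // log one line per thousand rejections
            log.Printf("shed %d requests so far", n)
        }
        w.WriteHeader(http.StatusServiceUnavailable)
    }

    func main() {
        http.HandleFunc("/overloaded", serveShedError) // stand-in wiring for the sketch
        http.ListenAndServe(":8080", nil)
    }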
+1. Additionally, if you end up in a scenario where you don't even have enough capacity in a given layer to fail quickly, your only options are to either increase capacity or throttle load pre-server (either in the network or in clients).
A lot of websites will now fail requests early based on a timeout, forcing users to refresh the page. I have to wonder if ad-based sites enjoy this behavior because it could lead to more ad impressions. Talking about you, reddit.
I think you’re talking about SPAs specifically. Many have race conditions in frontend code that are not revealed on fast connections, or when all resources load with the same speed/consistency. Open the developer console next time it happens; I bet you’ll find a “foo is not a function” or similar error caused by something not having initialized yet and the code not properly awaiting it. If an SPA’s core loop errors out, loading will halt, or even a previously loaded or partially loaded page will become blank or partially so. Refreshing it will load already-retrieved resources from cache and often “fixes” the problem.
You see it in backend code too. For example, Golang's context.WithTimeout is used to time out HTTP requests and database calls that may be taking too long. This is particularly irksome with microservices, where multiple services are running timeouts that interfere with one another (a sketch of how those deadlines stack follows below).
It is becoming de rigueur to quell 99th-percentile latency spikes (i.e. 1 in 100 requests taking substantially longer) by terminating the requests, which may not always be in the best interest of the user even if it is convenient for the devops teams and their promotion packets.
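A minimal sketch of how those nested context.WithTimeout deadlines interact, assuming a hypothetical downstream URL: a derived context always inherits whichever deadline is earlier, so a caller's tighter budget silently overrides the callee's own timeout.

    package main

    import (
        "context"
        "net/http"
        "time"
    )

    // callDownstream gives its own call a 2s budget, but if the incoming
    // context already has a shorter deadline, that one wins.
    func callDownstream(ctx context.Context) error {
        ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
        defer cancel()

        req, err := http.NewRequestWithContext(ctx, "GET", "http://downstream.internal/api", nil) // hypothetical service
        if err != nil {
            return err
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err // "context deadline exceeded" here may be the caller's deadline, not ours
        }
        return resp.Body.Close()
    }

    func main() {
        // The edge service hands every request a 1s end-to-end deadline...
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
        defer cancel()
        // ...so the 2s timeout inside callDownstream never actually applies.
        _ = callDownstream(ctx)
    }

Stack a few services like this and every layer thinks it still has budget left while an outer deadline is about to fire, which is where the interference comes from.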
It's surprising to me how slow reddit is on mobile. If only there was a way of serving content so that the browser can start to render before the full payload has been served.
I’m wondering if it’s more of a “Chaos Injector” component/service that reads configuration data from the Chaos Control Plane on what to target, with parameters on how/when to do so. That would make the arrow make sense in my mind given it sounds like that’s a solid pattern for scaling these data/control plane flows: https://aws.amazon.com/builders-library/avoiding-overload-in...
This. It's an internal system called ChAP, Chaos Automation Platform. It has the ability to target failure down to specific RPC calls in single instances, using platform components that services consume as the mechanism for doing that injection.
Seems like pretty standard browser/app handover behaviour to me, although the app not working is a massive fail and should - hopefully - flag up automatically as a critical issue on Medium's side.
Obvious suggestion but not made in snark: uninstall the medium app? I’ve had to do that for lots of poorly developed apps or apps developed not in sync with the web frontend.
Edit: it is a bad link and I can see why this would happen if you had the Medium app installed. It’s a “branded” Medium post (i.e. appears on the Netflix-owned domain) but clicking the link redirects you to medium.com then redirects you back to the cname.
How is it corporate-speak? Sounds just like standard thoughtful naming. If I was working on a module that did this I would be happy to name it this even if it never got mentioned in any corporate context.
Thank you to the engineers and developers!
I'd imagine this is largely due to MSS clamping rather than actual MTU-caused packet loss.
I assume the browse screen is based entirely on TCP?
I'm struggling to understand why packet loss would prevent it from loading -- it should be slower but TCP should handle re-transmission, no?
Or is Netflix doing something tricky with UDP even in their browsing UX?
Looks like the arrow goes the wrong direction.
Seems like a pretty bad Medium bug.
"Load Shedding".
Shout-out to my fellow South Africans.