bklyn11201 · 5 years ago
Totally anecdotal evidence, but I was in a rural NY house served by DSL for the past 6 months. The DSL has consistent packet loss between 4 and 6%. The only video service that could handle this level of packet loss well was Amazon Prime. Netflix couldn't even load its browse screen until the past two weeks, where something changed, and suddenly Netflix could handle the high packet loss as well as Amazon Prime.

Thank you to the engineers and developers!

madeofpalk · 5 years ago
Separate anecdote - I worked on an in-flight satellite wifi project and I was surprised at how well both YouTube and Netflix worked over a medium-bandwidth/high-latency connection.

Granted, we had specific QoS/traffic shaping to improve reliability without gobbling up all the bandwidth (streaming Netflix was an advertised feature of the wifi service), but it still seemed like magic.

seized · 5 years ago
When Plex rolled out its auto-quality/auto-bandwidth adjustment, it actually worked very well over airplane satellite wifi as well. I watched a few things from my own server.

I'm amazed that service allowed streaming though...

kalleboo · 5 years ago
YouTube has gotten way better in the past couple of years. When they first launched DASH streaming, it was terrible on high-latency international connections. If a US-based content creator uploaded a video and you were the first to view it in your region, you could actually notice how it was populating the CDN and it was unwatchable without disabling DASH and using the old-fashioned buffered player. These days it's flawless for me in nearly every situation.
closeparen · 5 years ago
Wow! You actually stream Netflix to an airplane? I always guessed that inflight VOD services had the movies stored in a server on the plane.
ComputerGuru · 5 years ago
This sounds like an MTU issue. TCP takes care of mere (eg probabilistic) packet loss ok. MTU issues have actually crept back up because TLS exacerbates any underlying MTU problems. IPv6 doubly so (when any hops - especially yours - don’t follow path MTU discovery requirements).
marta_morena_28 · 5 years ago
TCP doesn't take care of packet loss. What TCP does is make sure your packets are not lost, even if you have 99% packet loss. On the flip-side, that means that if TCP can't deliver a single packet (say out of a billion), the whole stream stops at this one packet...

Which is why TCP is a horrible choice for any streaming service and a horrible choice for lossy connections, and I would be quite surprised if Netflix relied on it. UDP is the perfect choice for streaming, since video decoders can handle packet loss pretty well. The rest you can achieve with good tradeoff between Reed-Solomon codes and key framing.
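To make the Reed-Solomon point concrete: the simplest forward error correction is a single XOR parity packet per group, which lets a receiver rebuild any one lost packet without a retransmit. This is a minimal illustrative sketch (not anything Netflix is known to do); Reed-Solomon generalizes the same idea to recover multiple losses per group.

```python
def make_parity(packets: list[bytes]) -> bytes:
    """XOR all packets in a group together (assumes equal lengths)."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received: dict[int, bytes], parity: bytes) -> bytes:
    """Rebuild the single missing packet from the survivors plus parity."""
    missing = bytearray(parity)
    for p in received.values():
        for i, b in enumerate(p):
            missing[i] ^= b
    return bytes(missing)

packets = [b"pkt0", b"pkt1", b"pkt2"]
parity = make_parity(packets)
# Packet 1 is lost in transit; recover it from packets 0, 2 and parity.
survivors = {0: packets[0], 2: packets[2]}
assert recover(survivors, parity) == b"pkt1"
```

The trade-off the comment alludes to is bandwidth overhead (one extra packet per group) versus never stalling the stream waiting for a retransmit.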

stingraycharles · 5 years ago
Aren’t MTU issues typically only up to a router? As in, even if the parent had a different MTU than Netflix uses, it wouldn’t matter since their router or the ISP’s router will transform packets between their appropriate MTUs?

And if this is true, then how could it be that Amazon works without problem and Netflix doesn’t?

xxpor · 5 years ago
>TCP takes care of mere (eg probabilistic) packet loss ok.

I'd imagine this is largely due to MSS clamping rather than actual MTU caused packet loss.
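For readers unfamiliar with MSS clamping: a middlebox rewrites the MSS option in TCP SYN packets so endpoints never emit segments larger than the path can carry. The arithmetic is just path MTU minus the IP and TCP headers. A minimal sketch, assuming option-less headers:

```python
IPV4_HEADER = 20   # bytes, assuming no IP options
IPV6_HEADER = 40
TCP_HEADER = 20    # bytes, assuming no TCP options

def clamped_mss(path_mtu: int, ipv6: bool = False) -> int:
    """Largest TCP payload per segment that fits in one IP packet."""
    ip = IPV6_HEADER if ipv6 else IPV4_HEADER
    return path_mtu - ip - TCP_HEADER

# A PPPoE DSL link loses 8 bytes of MTU (1500 -> 1492):
assert clamped_mss(1492) == 1452            # typical DSL IPv4 MSS
assert clamped_mss(1500) == 1460            # plain Ethernet IPv4
assert clamped_mss(1500, ipv6=True) == 1440
```

When the clamp is missing, full-size segments exceed the path MTU and get silently dropped wherever PMTUD is broken, which looks exactly like persistent packet loss.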

legulere · 5 years ago
Isn’t streaming done usually via UDP?
ikiris · 5 years ago
... no it doesn't. Like not even close.
crazygringo · 5 years ago
> Netflix couldn't even load its browse screen until the past two weeks

I assume the browse screen is based entirely on TCP?

I'm struggling to understand why packet loss would prevent it from loading -- it should be slower but TCP should handle re-transmission, no?

Or is Netflix doing something tricky with UDP even in their browsing UX?

kevincox · 5 years ago
If I had to guess they probably had timeouts that were too aggressive. Client timeouts are a very hard problem because it is difficult to tell the difference between "working, but slowly" and "something went wrong, the best bet is to try again".

Back in the day we used to have timeouts based on individual reads/writes, which often better answer "is this HTTP request making progress?". However, the problem with these sorts of timeouts is that they don't compose well, so most people end up having an end-to-end deadline.
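The distinction between the two timeout styles can be sketched as follows; this is an illustrative simulation (the chunks come from an iterator rather than a real socket, and a real implementation would arm a timer around the blocking read itself):

```python
import time

def read_with_progress_timeout(chunks, idle_timeout: float) -> bytes:
    """Fail only if no chunk arrives for idle_timeout seconds:
    answers 'is this request still making progress?'."""
    body = b""
    last = time.monotonic()
    for chunk in chunks:              # each next() may block on the network
        now = time.monotonic()
        if now - last > idle_timeout:
            raise TimeoutError("no progress")
        last = now
        body += chunk
    return body

def read_with_deadline(chunks, deadline: float) -> bytes:
    """Fail once the wall-clock deadline passes, however steady the
    individual reads are -- simpler to compose across call layers."""
    body = b""
    for chunk in chunks:
        if time.monotonic() > deadline:
            raise TimeoutError("deadline exceeded")
        body += chunk
    return body

# A slow-but-steady stream keeps passing the progress check, but a
# too-aggressive end-to-end deadline would kill it mid-transfer.
slow = iter([b"a", b"b", b"c"])
assert read_with_progress_timeout(slow, idle_timeout=1.0) == b"abc"
```

On a high-loss DSL link, a transfer that is "working, but slowly" trips the end-to-end deadline even though every read is making progress, which matches the browse-screen symptom described upthread.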

ReactiveJelly · 5 years ago
I doubt Netflix is doing anything tricky with UDP anywhere in their stack.

QUIC doesn't count because it's not tricky.

I'd love to see a source for this but seeing as YouTube works great over regular HTTP and TCP, I doubt anyone else is out in the weeds trying some custom UDP solution and reinventing wheels.

timhaak · 5 years ago
Slightly unrelated, but does the packet loss happen all the time or only when close to the maximum of the line?

Used to have similar problems with an ADSL line, but found if I limited the line (both up and down) I could find a magic number where the packet loss went away. (Well, most of the time :))

Though it did need to be tuned for different times of the day, i.e. high-congestion times needed it to be lower.

Though technically it shouldn't be your problem :(

baq · 5 years ago
This is normal if your router doesn’t prioritize control traffic. A rate limit allows all the ACKs to normally leave your network instead of getting queued up.
bklyn11201 · 5 years ago
It happens nearly all the time. We use very little DSL bandwidth but are quite rural (miles from primary telephone infrastructure).
fkskdkgif · 5 years ago
Dropped packets are often a symptom that the MTU value is set too high. That would be uncorrelated to congestion, though.
tyrust · 5 years ago
I'd believe it. When you know that there is going to be packet loss (whether from the user's spotty internet or from internal load-shedding), building your applications to be as resilient as possible to it makes sense. The infrastructure experimentation platform mentioned in the article is probably helpful for sniffing out potential trouble-spots in applications.
epc · 5 years ago
Any chance there weren't any line filters on the POTS equipment? I haven't had DSL in years but when I did I had to have filters on any telephone devices connected to the same line.
umbs · 5 years ago
How did you measure 4 to 6% packet loss? Do you have scripts to ping some server and you are collecting packet loss data? I would like to collect such data for my home network and am curious.
wtallis · 5 years ago
Smokeping is one of the better-known tools for tracking latency and loss over time: https://oss.oetiker.ch/smokeping/
lostmsu · 5 years ago
The simple ping command actually prints loss statistics.
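For ongoing collection like the parent asks about, one simple approach is to run `ping -c 100 <host>` from cron and parse the summary line. A minimal sketch of the parsing step, assuming the Linux iputils output format (macOS wording differs slightly):

```python
import re

def packet_loss_pct(ping_output: str) -> float:
    """Extract the loss percentage from a `ping` statistics summary."""
    m = re.search(r"([\d.]+)% packet loss", ping_output)
    if m is None:
        raise ValueError("no summary line found")
    return float(m.group(1))

sample = """\
--- 8.8.8.8 ping statistics ---
100 packets transmitted, 95 received, 5% packet loss, time 99163ms
"""
assert packet_loss_pct(sample) == 5.0
```

Logging that number with a timestamp every few minutes gives a crude but serviceable loss history for a home connection.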
gameswithgo · 5 years ago
There has been the good kind of capitalism going on between the video streaming services. Earlier on I remember Netflix was way better than Amazon, but Amazon has upped their game since.
kache_ · 5 years ago
When deciding what mechanism to employ to shed load, you should keep in mind the layer at which you are load shedding. Modern distributed systems are composed of many layers. You can do it at the load balancer, at the OS level, or in the application logic. This becomes a trade-off: the closer you get to the core application logic, the more information you have to make a decision. On the other hand, the closer you get, the more work you have already performed and the more it costs to throw away the request.

You may employ techniques more complex than a simple bucketing mechanism, such as closely observing the degree to which clients are exceeding their baseline. However, these techniques aren’t free. Even the cost of throwing away the request can overwhelm your server - and the more steps you add before the shedding part, the lower the maximum throughput you can tolerate before going to 0 availability. It’s important to understand at what point this happens when designing a system that takes advantage of this technique.

For example, if you do it at the OS level, it is a lot cheaper than leaving it to the server process. If you choose to do it in your application logic, think carefully about how much work is done for the request before it gets thrown away. Are you validating a token before you are making your decision?

jeffbee · 5 years ago
You touch on the key thing that people sometimes overlook. Whatever you are doing to serve errors has to be strictly less expensive than serving successes. If your load shedding error path does things like logging synchronously to a file (as you might get from a logging library that synchronizes outputs for warnings and errors, but not information), taking a lock to update a global error counter, or formatting stack traces in exceptions, it's possible that load shedding will _cause_ the collapse of your service instead of preventing it.
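One common way to keep the error path strictly cheaper is to precompute the rejection response once and sample the logging instead of logging every rejection. A hedged sketch (the sampling rate and response bytes are made up for illustration):

```python
import itertools

# Precompute the rejection once; the shed path does no string
# formatting, no allocation, and no synchronous file logging.
SHED_RESPONSE = b"HTTP/1.1 503 Service Unavailable\r\nRetry-After: 1\r\n\r\n"

_shed_count = itertools.count()

def shed(conn) -> None:
    conn.sendall(SHED_RESPONSE)         # constant bytes, no per-error work
    n = next(_shed_count)
    if n % 1000 == 0:                   # sample: log 1 in 1000 rejections
        print(f"shed {n} requests so far")

class FakeConn:
    """Stand-in for a socket, for demonstration."""
    def __init__(self):
        self.sent = b""
    def sendall(self, data):
        self.sent += data

c = FakeConn()
shed(c)
assert c.sent.startswith(b"HTTP/1.1 503")
```

The failure mode described above - the error path taking locks, formatting stack traces, or writing synchronously - inverts this and makes rejection the most expensive thing the server does.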
joatmon-snoo · 5 years ago
+1. Additionally, if you end up in a scenario where you don't even have enough capacity in a given layer to fail quickly, your only options are to increase capacity or to throttle load pre-server (either in the network or in clients).
tmpz22 · 5 years ago
A lot of websites will now fail requests early based on a timeout, forcing users to refresh the page. I have to wonder if ad-based sites enjoy this behavior, because it could lead to more ad impressions. Talking about you, Reddit.
ComputerGuru · 5 years ago
I think you’re talking about SPAs specifically. Many have race conditions in frontend code that are not revealed on fast connections or when all resources load with the same speed/consistency. Open the developer console next time it happens; I bet you’ll find a “foo is not a function” or similar error caused by something not having initialized yet and the code not properly awaiting it. If an SPA’s core loop errors out, loading will be halted, or even a previously loaded or partially loaded page will become blank or partially so. Refreshing will load already-retrieved resources from cache and often “fixes” the problem.
tmpz22 · 5 years ago
You see it in backend code too. For example Golang's context.WithTimeout is used to time out http requests and database calls that may be taking too long. This is particularly irksome with microservices where multiple services are running timeouts that interfere with one another.

It is becoming fashionable to quell 99th-percentile latency spikes (i.e. 1 in 100 requests takes substantially longer) by terminating the requests, which may not always be in the best interest of the user, even if it is convenient for the devops teams and their promotion packets.

anitil · 5 years ago
It's surprising to me how slow reddit is on mobile. If only there was a way of serving content so that the browser can start to render before the full payload has been served.
annoyingnoob · 5 years ago
According to the diagram, Netflix is injecting chaos into the chaos control panel. Is that right?

Looks like the arrow goes the wrong direction.

khalilravanna · 5 years ago
I’m wondering if it’s more of a “Chaos Injector” component/service that reads configuration data from the Chaos Control Plane on what to target, with parameters on how/when to do so. That would make the arrow make sense in my mind given it sounds like that’s a solid pattern for scaling these data/control plane flows: https://aws.amazon.com/builders-library/avoiding-overload-in...
TheSwordsman · 5 years ago
This. It's an internal system called ChAP, Chaos Automation Platform. It has the ability to target failure down to specific RPC calls in single instances, using platform components that services consume as the mechanism for doing that injection.
abalone · 5 years ago
For me this link just opens the Medium app and fails to load the article. I had to force it to open in a browser.

Seems like a pretty bad Medium bug.

Cthulhu_ · 5 years ago
Seems like pretty standard browser/app handover behaviour to me, although the app not working is a massive fail and should - hopefully - flag up automatically as a critical issue on Medium's side.
ComputerGuru · 5 years ago
Obvious suggestion but not made in snark: uninstall the medium app? I’ve had to do that for lots of poorly developed apps or apps developed not in sync with the web frontend.

Edit: it is a bad link and I can see why this would happen if you had the Medium app installed. It’s a “branded” Medium post (i.e. it appears on the Netflix-owned domain), but clicking the link redirects you to medium.com and then back to the CNAME.

herodoturtle · 5 years ago
Hah.

"Load Shedding".

Shout-out to my fellow South Africans.

perryizgr8 · 5 years ago
And Indians.

lacker · 5 years ago
I love the phrase "Prioritized Load Shedding" as corporate-speak for "dropping the less-important traffic."
therealdrag0 · 5 years ago
How is it corporate-speak? Sounds just like standard thoughtful naming. If I was working on a module that did this I would be happy to name it this even if it never got mentioned in any corporate context.
dboreham · 5 years ago
Dropping some traffic to avoid complete melt down.
dkarp · 5 years ago
A lot of which is non-essential as well as NON_CRITICAL, like tracking and telemetry
tofuahdude · 5 years ago
Would you actually name this feature "drop less important traffic"?
kbar13 · 5 years ago
engineer-speak for dropping requests that are less noticeable to users when dropped in order to give services room to recover
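The mechanism boils down to priority buckets with per-bucket shed thresholds: as utilization climbs, the lowest-priority bucket gets dropped first. A minimal sketch, where the tier names are loosely modeled on the NON_CRITICAL label mentioned above and the thresholds are invented for illustration:

```python
from enum import IntEnum

class Priority(IntEnum):
    CRITICAL = 0        # playback itself
    DEGRADED = 1        # prefetch, non-blocking requests
    NON_CRITICAL = 2    # tracking and telemetry

def should_shed(priority: Priority, utilization: float) -> bool:
    """Shed the lowest-priority traffic first as utilization climbs."""
    thresholds = {
        Priority.NON_CRITICAL: 0.60,   # telemetry goes first
        Priority.DEGRADED: 0.80,
        Priority.CRITICAL: 0.95,       # only in extremis
    }
    return utilization >= thresholds[priority]

assert should_shed(Priority.NON_CRITICAL, 0.65)      # drop telemetry
assert not should_shed(Priority.CRITICAL, 0.65)      # keep playback
assert should_shed(Priority.CRITICAL, 0.99)
```

The effect is exactly what the thread describes: under moderate overload users never notice, because only traffic they can't see gets dropped.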