coretx · a year ago
Decreasing the gamut before encoding is not my definition of "advanced video encoding". I'd like to call it "ghetto video encoding" instead. What bothers me most about it is that many people are slowly getting used to it. It also makes me feel sorry for the people who worked hard on the production. Very few people will ever see the true quality of their work.
maxsilver · a year ago
To be fair, the new test footage that Netflix released (into Creative Commons, even) explicitly tries to protect against that to some degree. Some shots in the new footage use HDR to crank brightness up to a requested 4000 nits on the subject, while still having a mathematical-black background. (You can even compare the raw *.mxf to, say, YouTube's 1080p version of it, and see the difference.)

Variety's writeup has a bit more technical detail on it - https://variety.com/2016/digital/news/netflix-meridian-imf-t... .

It's pretty clever, many of the shots seem clearly designed to stress-test common encoder issues and failures -- they put thought and care into it, and it shows.
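
If anyone wants to reproduce that comparison, a quick way is to pull the same frame out of both files and eyeball them side by side. A rough sketch (assuming ffmpeg is on PATH; the filenames are placeholders for whatever copies you downloaded, and the HDR master will look washed out without tone mapping):

    import subprocess

    # Grab the same frame (30 seconds in, here) from the master and from a
    # streamed copy, scaled to a common 1080p, for a side-by-side look.
    for src, out in [("meridian_master.mxf", "frame_master.png"),
                     ("meridian_stream_1080p.mp4", "frame_stream.png")]:
        subprocess.run(["ffmpeg", "-y", "-ss", "30", "-i", src,
                        "-frames:v", "1", "-vf", "scale=1920:1080", out],
                       check=True)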

999900000999 · a year ago
Maybe Netflix will lower its standards for production as well. Right now if you want to create content for Netflix you need to buy like a $30,000 camera.

What's the point of all that, if I can shoot 4K on a $2,000 camera and it's going to get compressed heavily anyway.

I actually think streaming would be best for lower-budget productions; I'm not fully engaged when I watch something on Netflix anyway. It's kind of strange to see them spend $200 million or so on a movie knowing that I'm doing a million other things while it's on.

adrianparsons · a year ago
> Right now if you want to create content for Netflix you need to buy like a $30,000 camera.

Just noting that the official guide[0] lists a number of cheaper cameras, including one that retails for $2k (the Panasonic BGH1).

0: https://partnerhelp.netflixstudios.com/hc/en-us/articles/360...

rajnathani · a year ago
But for the internet and compression compute of 8-15 years from now, today's content will be streamed better. It's probably not too much to ask if the cameras for it are available today.
kranke155 · a year ago
Netflix standards are so dumb. They asked for a 4K camera before Alexa had widely available 4K outside of the huge Alexa 65, and they got a bunch of shows that look terrible for it since the whole industry had standardized around Alexa pipes and people were forced to use other cameras.
ThrowawayTestr · a year ago
Chroma subsampling is a crime and I hate it.
dusted · a year ago
I wonder if these giant companies constantly trying to squeeze extra microcents out of everything is why pretty much all online video now looks horrible, despite us having the sharpest displays, fastest processors, and fastest Internet in the history of mankind.
notRobot · a year ago
Yes, yes it is. I've personally started to torrent stuff even when it's on streaming services I subscribe to, because the quality is just so much better, even though both are called 1080p.
graynk · a year ago
If it’s a TV show - I’d expect it to be the same quality, because most of those are WEB-DL from those same services, no?

maxsilver · a year ago
Yes, it's crazy to me that Netflix puts so much effort into improving encode quality, and then they don't let most people see that version -- almost no one gets to see the "good" versions of their encodes.

Whether it's because Netflix locks the good high-bitrate encodes to the most expensive subscription plans many people aren't on, or because their "Roku 4K" or "Fire TV 4K" or whatever is actually negotiating the low-bitrate 1080p H.264 stream, or because their internet connection was auto-detected as too slow, or their device doesn't have enough memory to buffer nicely, or whatever.

I bet Netflix is absolutely capable of delivering something really nice. But in my experience, Netflix quality as delivered looks about on par with "YouTube free 1080p" -- so much so that even plugging in a 1080p-only Blu-ray is a huge improvement even non-techie folks can notice.

felixg3 · a year ago
1080p Blu-ray vs Netflix 2160p Dolby Vision?

I'd choose the Blu-ray any day.

inhumantsar · a year ago
I found out the other day that many Android set-top boxes (and I'd assume smart TVs) aren't Netflix certified and don't get to run a native app, but rather a packaged form of their web app.
tracker1 · a year ago
I think it varies and depends a lot on your connection speed and latency to the nearest source. There's a lot of effort to give a "good enough" quality across different screen sizes, throughputs, and devices.

Even if you look at torrent content, you can find a given movie anywhere from a few GB (1080p) to a few dozen GB at 4K, with higher relative throughput and quality. Not to mention the encoder settings.

Personally, I don't go for the highest-quality encodes because I don't see the difference well enough. I tend to favor x265 over x264 only because the blur degradation is less annoying than blotchiness imo.

When you're streaming, the encoding level will swap out a number of times, even at the same resolution, depending on what your connection is delivering. You may well do better with a 1080p stream than a 4K stream, depending on your screen and your device's upscaling behavior. 4K takes at least 4x the bandwidth and has tighter latency needs. Others can explain it much better than I can, but it's not malicious; it's partly cost, sure, but it's also a matter of what your connection can handle.
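
As a rough illustration of the kind of trade-off the player is making, here's a toy rendition picker (the ladder numbers below are invented, not any service's real ones):

    # Toy ABR rendition picker -- bitrates are made up for illustration.
    LADDER = [
        # (label, resolution, video bitrate in kbit/s)
        ("4K",    (3840, 2160), 15000),
        ("1080p", (1920, 1080),  3000),
        ("720p",  (1280,  720),  1800),
        ("480p",  ( 854,  480),   900),
    ]

    def pick_rendition(measured_kbps, safety=0.8):
        """Highest rung that fits the measured throughput, with headroom
        so the buffer doesn't drain on a throughput dip."""
        budget = measured_kbps * safety
        for label, res, kbps in LADDER:
            if kbps <= budget:
                return label, res, kbps
        return LADDER[-1]  # fall back to the lowest rung

    print(pick_rendition(4500))  # -> ('1080p', (1920, 1080), 3000)

Real players also factor in buffer level, screen size, and codec support, but that's the gist of why the same title can look different from minute to minute.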

ruszki · a year ago
Netflix, Disney+, and others don’t allow better quality than 1080p on laptops. I think Disney+ doesn’t even allow more than 720p. So they are never 4K. It’s quite annoying and obvious on large monitors.

Edit: Netflix changed this sometime in the past half year: https://help.netflix.com/en/node/30081

islewis · a year ago
> Reed Hastings announced that the company was expanding into almost every country around the world — including markets with subpar broadband infrastructure and consumers who primarily accessed the internet from their mobile phone.

The optimistic take here is that the purpose of this encoding work is to make Netflix accessible to consumers in ALL countries, not just consumers with beefy internet.

IMO the realistic take is that Netflix knows it can do this while saving $$$ by serving lower quality video to consumers who can't tell the difference.

whalesalad · a year ago
I’d imagine a substantial amount of Netflix’s operating costs are in bandwidth. I bet a little bit of optimization goes a long way.
kalupa · a year ago
Someone's gotta pay for it, I guess. Unfortunately, in this case, it's us and our eyeballs.
cainxinth · a year ago
4k resolution at a VHS bitrate.
bbstats · a year ago
Yeah everything online looks so terrible
refulgentis · a year ago
This is some really glorified PR that's either dumbed down or extremely excessive in attributing basic insights and algorithms to Netflix itself

I got into programming/software by encoding my cough well-sourced cough movies/TV shows to MP4 for my iPod video.

Maybe it's insufferably geeky detail, but the slow decade-long march described as "gee, each movie is different" and "gee, each scene is different", followed by the Herculean work of FAANGers insufficiently appreciated by creative types, was solved by VBR (variable bit rate) years upon years earlier.

Once you're getting to "we'll use ML as a VBR algorithm!", that's original, but the problems described and their solution were understandable and solvable by an 18-year-old non-programmer in 2007 with free software.

VBR wasn't some niche thing either; it's a very, very obvious optimization I've never seen a codec miss, from MP3 audio to MP4 video. There are no caveats here, no haughtiness, no flippant "Dropbox is rsync + my weekend" dismissiveness on my part. It wasn't news to _anyone_; it's a very obvious optimization that was applied by everyone.

I'd be veeeeery curious if there was much contribution here beyond using x264, occasionally with patches, and then engineering a pipeline around it
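
For anyone who never touched this stuff: the 2007-era version of "each movie/scene is different" was just a quality-targeted VBR encode, something like this (a sketch assuming ffmpeg with libx264 is installed; filenames are placeholders):

    import subprocess

    # Single-pass CRF encode: the rate control automatically spends more bits
    # on complex scenes and fewer on easy ones -- that's the VBR part.
    subprocess.run([
        "ffmpeg", "-i", "source.mkv",
        "-c:v", "libx264", "-preset", "slow",
        "-crf", "20",          # constant quality target, bitrate floats
        "-c:a", "copy",
        "out_crf20.mp4",
    ], check=True)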

garyclarke27 · a year ago
I detest Netflix and other streamers' recent fashion for using letterbox formats, completely wasting a significant chunk of my precious screen real estate. Why do idiot directors think this is a good thing? These will never be shown in cinemas, and even there the best format is IMAX, which has even more vertical space than 16:9 TVs.
cqqxo4zV46cp · a year ago
As a professional industry, Hollywood is probably the worst offender when it comes to being so obsessed with itself that it genuinely can’t tell when it’s doing something in service of pandering to and perpetuating its own weird culture.

I kind of look at this in the same way as the fact that so many movies or TV shows can’t help but eventually have a plot where someone makes a movie or a TV show, maybe at a stretch a stage production. And, somehow, all the characters know a whole lot about stage productions.

appletrotter · a year ago
Idiot directors? I was going to rebut your comment but on second read I’d rather just point out how dismissive and inflammatory your comment is.
kranke155 · a year ago
As long as it’s not too aggressive (Scope) I don’t mind it. The aspect ratio can have a feel to it.
dusted · a year ago
4:3 > *.
3836293648 · a year ago
Nah, 3:2 > 16:10 > 4:3 > 16:9
fragmede · a year ago
Meanwhile, Gen Z is watching things vertically on TikTok.
misiek08 · a year ago
Funny how they care about encoding quality only by sponsoring articles like this, but not by really improving the encoding quality <3 Dark scenes are just awful; the rest gets awful only in 4k. In 1080p it's just bad.

Even better, they "researched" better metrics like pVMAF so they can again show how good they are, in theory.
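
(If you want to check numbers like that yourself, this is roughly how you'd compute a VMAF score for an encode against a reference, assuming an ffmpeg build with libvmaf and two clips with matching resolution and frame count; filenames are placeholders:)

    import subprocess

    # Score the encode under test against a pristine reference; the VMAF
    # result is printed to ffmpeg's log output.
    subprocess.run([
        "ffmpeg",
        "-i", "distorted.mp4",   # first input = the encode being judged
        "-i", "reference.mp4",   # second input = the reference
        "-lavfi", "libvmaf",
        "-f", "null", "-",
    ], check=True)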

spaceywilly · a year ago
Maybe it’s just me, but I would much rather watch a higher bitrate 1080p copy of a movie than a horribly encoded 4k copy. I also wish that we had made 1080p60 commonplace for sports before trying to make the jump to 4k. Seems like the industry has just focused on “big number better” instead of making a product that actually looks better.
brisketbbq · a year ago
Yeah, the industry focused on 4k (and soon to be 8k) for some reason. I guess the same reason there's a decrease in good acting and an increase in special effects.
kmeisthax · a year ago
Per-shot encoding sounds like something that should be handled by multipass encoding. Presumably it isn't - so I'm wondering what is failing in those encoders to make it necessary to tweak settings that much.

Or are they just aggressively searching for corners to cut to save bits?
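
For context, my rough mental model of "per-shot" is something like this sketch (not Netflix's actual pipeline; the shot boundaries and CRF values below are made up, and would really come from shot detection plus a per-shot quality search):

    import subprocess

    # Hypothetical shot list: (start_seconds, duration_seconds, crf).
    SHOTS = [
        (0.0,  12.5, 26),   # static dialogue shot: cheap, higher CRF is fine
        (12.5,  4.0, 18),   # fast action / confetti: needs more bits
        (16.5, 20.0, 23),
    ]

    # Encode each shot with its own settings (input seeking here is
    # keyframe-aligned, so not frame-exact -- good enough for a sketch);
    # the pieces would then be stitched back together, e.g. with the
    # concat demuxer.
    for i, (start, dur, crf) in enumerate(SHOTS):
        subprocess.run([
            "ffmpeg", "-y", "-ss", str(start), "-t", str(dur), "-i", "source.mkv",
            "-c:v", "libx264", "-preset", "slow", "-crf", str(crf),
            "-an", f"shot_{i:03d}.mp4",
        ], check=True)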

refulgentis · a year ago
You're 100% right, the first instinct a programmer would have as a codec dev is "gee we def don't need that many bits to encode pure black frames"

I can't wrap my mind around this article and exactly what's going on, because AFAIK VBR has been the default ~forever. You don't even need multipass for that, because you just look ahead N frames. It was received wisdom, statistically backed, as far back as 2007 that multipass was a waste of time because it was only marginally, like .0001%, better than just doing VBR, and that using CBR was really dumb and wrong unless you had very specific edge-case technical needs. I can't even remember what those were, maybe poorly designed early H.264 decoders or people feeding into legacy broadcast systems.
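
If you never touched this stuff, the distinction looks like this (a sketch; placeholder filenames, assuming ffmpeg with libx264):

    import subprocess

    # Two-pass encode: pass 1 writes a stats file, pass 2 uses it to hit an
    # exact average bitrate. Only really worth the bother when output size is
    # the hard constraint; otherwise single-pass CRF (plain VBR) gets you there.
    common = ["-c:v", "libx264", "-preset", "slow", "-b:v", "3000k"]

    subprocess.run(["ffmpeg", "-y", "-i", "source.mkv", *common,
                    "-pass", "1", "-an", "-f", "null", "-"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", "source.mkv", *common,
                    "-pass", "2", "-c:a", "copy", "out_2pass.mp4"], check=True)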

Little more context here: https://news.ycombinator.com/item?id=40770734

Havoc · a year ago
I'm a little surprised that we haven't seen something more AI driven yet. (yes yes buzzword I know)

I.e. slice it not just into scenes but also into objects, and allocate bitrate at that level: faces and objects in the foreground get more. It seems we now have pretty small models that can do that sort of thing (see Apple's and MS's recent ones), so it should be feasible at scale.

I'd imagine you can also train an LLM on patterns that encoders choke on...chequered patterns etc.

cornstalks · a year ago
> i.e. Face and objects in foreground get more.

Encoders have been doing that for decades at this point.

AI models are actively being researched for multiple applications in video encoding: generating predicted frames, generating fine details, choosing tools and parameters for each frame, narrowing the search space, etc.
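
For a concrete non-AI example: ffmpeg's addroi filter lets you hand the encoder a region of interest to spend extra bits on (a sketch; the box below is hard-coded purely for illustration, where a real pipeline would feed in per-frame detector output):

    import subprocess

    # Lower the quantiser offset for a centered region (e.g. where a face
    # detector found a face) so it gets more bits; the rest of the frame is
    # left to the normal rate control.
    subprocess.run([
        "ffmpeg", "-i", "source.mkv",
        "-vf", "addroi=x=iw/4:y=ih/4:w=iw/2:h=ih/2:qoffset=-1/5",
        "-c:v", "libx264", "-crf", "22",
        "-an", "out_roi.mp4",
    ], check=True)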

Havoc · a year ago
I know they do all sorts of clever predictive sorcery but I meant specifically what this image shows:

https://deeplobe.ai/exploring-object-detection-applications-...

Literally getting closer to understanding the context of the scene and what matters to a human rather than understanding the pixels

coldsmoke · a year ago
We've experimented with that at SVT (Sweden's public service broadcaster). Or rather, the video team has; I'm just a web dev, so I don't know the details other than what's in this blog post:

From the eyes of the viewer: Attention-aware video compression for improved low bitrate video https://medium.com/the-svt-tech-blog/from-the-eyes-of-the-vi...

sovereign_bits · a year ago
Had to create an account just to ask more about this! Interesting approach, but the link is from last year and only talks about a research project.

Did you ever test this on a larger scale? And do you know if it is only applicable to one encoder?

kmeisthax · a year ago
After he got kicked out of MPEG, Leonardo Chiariglione opened his own standards organization[0] called MPAI, which is focused primarily on... standardizing machine learning techniques for video coding tools.

I'm not sure if they've actually shipped anything yet, though.

[0] With blackjack, and hookers: https://blog.chiariglione.org/the-mpai-framework-licence-app...

42lux · a year ago
You mean transformer/diffusion based when you're talking about AI in this context? Because there are a lot of ML papers out there for encoding video. Stadia (RIP), Parsec, Epic (Unreal Pixel Streaming), Meta, Apple, and Nvidia are doing amazing stuff for their streaming services/VR/AR plays.
Havoc · a year ago
Yeah. This essentially

https://deeplobe.ai/exploring-object-detection-applications-...

If the model can figure out the key objects in the scene at a somewhat deeper level, then that can drive which parts the encoder needs to spend more bits on.

FrenchDevRemote · a year ago
The algorithms exist, but they require way too much processing power, and you also need to have a model already downloaded.
karmakaze · a year ago
Then we can have many six-fingered characters and other hallucinations. It's all fiction anyway, so what's one more? I hope it either gets great fast or fades out as a fad.
Havoc · a year ago
No, not a model to generate it. A model to steer the encoder and tell it which parts to invest more bitrate into.
rowanG077 · a year ago
This is supremely easy to avoid. I'm not even sure how you arrived at thinking this will be a concern.