For those of you who are new to movie codecs, here is some info that might be useful:
Roughly, videos are not streams of images shown one after the other (well, Motion JPEG is, but ignore that). They are "key frames", i.e. full images, plus sets of blocks with motion vectors that move those blocks around to make a moving image. So you'll see I, P, and B frames; each has a different role, either providing a full image from which the following frames are reconstructed, or the blocks and vectors that make the in-between frames.
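If it helps, here's a toy Python sketch of the motion-compensation idea. It's my own illustration, not any real codec's layout; the 8-pixel block size and the predict_frame helper are made up. A predicted frame is built by copying blocks from the previous frame, each shifted by its motion vector, so the encoder only has to transmit the vectors plus a small residual:

    import numpy as np

    BLOCK = 8  # toy block size; real codecs use several sizes

    def predict_frame(prev_frame, motion_vectors):
        # Build a predicted frame by copying each BLOCK x BLOCK block
        # from prev_frame, shifted by that block's (dy, dx) vector.
        h, w = prev_frame.shape
        pred = np.zeros_like(prev_frame)
        for by in range(0, h, BLOCK):
            for bx in range(0, w, BLOCK):
                dy, dx = motion_vectors[by // BLOCK][bx // BLOCK]
                # Clamp the source position so the block stays in-frame.
                sy = min(max(by + dy, 0), h - BLOCK)
                sx = min(max(bx + dx, 0), w - BLOCK)
                pred[by:by + BLOCK, bx:bx + BLOCK] = \
                    prev_frame[sy:sy + BLOCK, sx:sx + BLOCK]
        return pred

    prev = np.random.rand(32, 32)
    current = np.roll(prev, 1, axis=1)           # whole scene shifts 1px right
    vectors = [[(0, -1)] * 4 for _ in range(4)]  # each block points 1px back
    predicted = predict_frame(prev, vectors)
    # Small residual = a cheap frame: only vectors + this gets sent.
    print(np.abs(current - predicted).mean())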
Another key piece is the discrete cosine transform.
Most image and video codecs (JPEG, MPEG, and AV1 too) use the DCT or a related technique. It's a very simple idea at its core: the algorithm looks for an equation that, when plotted, looks roughly like the original bitmap. The set of possible equations is picked so they can produce a lot of different complex patterns while being compactly represented.
In more detail, a sum of sinusoids (cosines, in the DCT's case) can describe discrete data, and vice versa. It is related to how the sum of square waves produced by a digital-to-analog converter is exactly the same as the original analog waveform, within the sampling-rate and bit-depth limits. Once the data is transformed into a sum of sinusoids, you can drop the minor terms with relatively little effect on the output. The more terms you drop, the higher the compression, the blobbier and more repetitive the pattern, and the blurrier and less detailed the result.
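To make that concrete, here's a small Python demo. It's just a sketch: scipy's dctn/idctn are real functions, but the keep_largest helper and the test pattern are mine. Take an 8x8 block, DCT it, zero all but the largest-magnitude coefficients, and transform back; the fewer terms you keep, the worse the reconstruction:

    import numpy as np
    from scipy.fft import dctn, idctn

    x, y = np.meshgrid(np.arange(8), np.arange(8))
    block = np.sin(x / 3.0) + 0.5 * np.cos(y / 2.0)  # smooth 8x8 pattern

    coeffs = dctn(block, norm='ortho')

    def keep_largest(coeffs, fraction):
        # Zero out all but the largest-magnitude DCT coefficients.
        flat = np.sort(np.abs(coeffs).ravel())
        cutoff = flat[int(len(flat) * (1 - fraction))]
        return np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

    for fraction in (0.5, 0.25, 0.05):
        approx = idctn(keep_largest(coeffs, fraction), norm='ortho')
        err = np.abs(approx - block).mean()
        print(f"keep {fraction:4.0%} of terms -> mean error {err:.4f}")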
This technique is also used in lossy audio compression.
Yeah. And that's why TV stores really like slow-motion shots or static landscapes to show off the TV. Complex motion causes "HDTV blur" as the encoder struggles to describe it within the limited number of bits it's allowed to use.
Stuff like static, film grain, and particles like snow or rain all suck up bits from the same encoding budget.
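You can see why with the same scipy DCT as above (again just an illustrative sketch with made-up test blocks): a smooth gradient packs almost all of its energy into a handful of coefficients, while noise spreads energy evenly across all of them, so there are no minor terms left to drop:

    import numpy as np
    from scipy.fft import dctn

    rng = np.random.default_rng(1)
    x, y = np.meshgrid(np.arange(8), np.arange(8))
    smooth = (x + y).astype(float)   # gentle gradient: compacts well
    noise = rng.normal(size=(8, 8))  # film-grain stand-in: it doesn't

    for name, block in (("smooth", smooth), ("noise ", noise)):
        energy = np.abs(dctn(block, norm='ortho')).ravel() ** 2
        top6 = np.sort(energy)[::-1][:6]
        print(f"{name}: top 6 of 64 terms hold "
              f"{top6.sum() / energy.sum():.0%} of the energy")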
This could be a problem for video game streaming, and it could affect the artistic decisions a game studio makes: drawing a billion tiny particles on a local GPU will look crisp and cool, but asking a hardware encoder to compress them for consumer Internet (or phone Internet) might be too much. I think streamers have run into this problem already.
https://en.wikipedia.org/wiki/AV1
"Why Snow and Confetti Ruin YouTube Video Quality" by Tom Scott probably explains it nicer than I can https://www.youtube.com/watch?v=r6Rp-uo6HmI&pp=ygUaYnJlYWtpb...
Noise is really hard to compress!