His comment immediately after describes exactly what happened:
> Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks.
Big companies abused the setup that he was responsible for. Gentlemen's agreements to work together for the benefit of all got gamed into patent landmines and it happened under his watch.
Even many of the big corps involved called out the bullshit, notably Steve Jobs refusing to release a new QuickTime until they fixed some of the most egregious parts of MPEG-4 licensing way back in 2002.
From the ZDNet (Ziff Davis) article (https://www.zdnet.com/article/apple-shuns-mpeg-4-licensing-t...):
> QuickTime 6 media player and QuickTime Broadcaster, a free application that aims to simplify using MPEG-4 in live video feeds over the Net.
More context for this: Chiariglione has been extremely vocal that FRAND patent royalties are entirely necessary for the development of video compression tools, and believes royalty-free standards outpacing the ones that cost money represents the end of innovation in video codecs.
To be clear, Chiariglione isn't opposed to royalty-free standards at all; he just wants them to be deliberately worse, so that people who need better compression will pay independent researchers for it. His MPEG actually wound up trying to make such a standard: IVC. You've never heard of MPEG IVC because Samsung immediately claimed ownership over it, and ISO patent policy does not allow MPEG to require disclosure of which specific patents are implicated (so there was no way to know what to remove), so long as the owner agrees to negotiate a license through a patent pool.
You might think at this point that Chiariglione is on the side of the patent owners, but he's actually not. In fact, it's specifically those patent owners that pushed him out of MPEG.
In the 90s, patent owners were making bank off MPEG-2 royalties, but having trouble monetizing anything newer. A patent pool never actually formed for H.263, and the one for MPEG-4 couldn't agree on a royalty-free rate for Internet streaming[0]. H.264 is practically royalty-free for online video, but that only happened because Google bought On2[1] and threatened to make YouTube serve VP8 exclusively. The patent owners very much resent this state of affairs and successfully sabotaged efforts at MPEG to make dedicated royalty-free codecs.
The second and more pressing issue (to industry, not to us) is that H.265 failed to form a single patent pool. There are actually three of them, thanks to skulduggery by Access Advance to force people to pay for the same patent license twice by promising a sweetheart licensing deal[2] to Samsung. I'm told H.266 is even more insane, mostly because Access Advance is forcing people to buy licenses in a package deal to cover up the fact that they own very little of H.266.
Chiariglione is only pro-patent-owner in the narrow sense that he believes research needs to be 'paid for'. His attempt to keep patent owners honest got him sidelined and marginalized in ISO, which is why he left. He's since made his own standards organization, with blackjack and hookers^Wartificial intelligence. MPAI's patent policy requires companies to agree to 'framework licenses' - i.e. to promise to actually negotiate with MPAI's own patent pool specifically. No clue if they've shipped anything useful.
Meanwhile, the rest of the Internet video industry coalesced around Google and Xiph's AV1 proposal. They somehow manage to do without direct royalty payments for AV1, which to me indicates that this research didn't need to be 'paid for' after all. Though, the way Chiariglione talks about AV1, you'd think it's some kind of existential threat to video encoding...
[0] Practically speaking, this meant MPEG-4 ASP was predominantly used by pirates, as legit online video sites that worked in browsers were using Flash based players, and Flash only supported H.263 and VP6.
[1] The company that made VP3 (Theora) and VP6
[2] The idea is that Samsung and other firms are "net implementer" companies. They own some of H.265, but they need to license the rest of it from MPEG-LA. So Access Advance promised those companies a super-low rate on the patents they need if they all pulled out of MPEG-LA, and they make it up by overcharging everyone else, including making them pay extra if they'd already gotten licenses from MPEG-LA before the Access companies pulled out of it.
As someone who hasn't had any exposure to the human stories behind MPEG before, it feels to me like it's been a force for evil since long before 2020. Patents on H.264, H.265, and even MP3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs were broken.
Possibly, nothing. Codec development is slow and expensive. Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born, and it's hardly a robust strategy. Plus free codecs were often built by acquiring companies that had previously been using IP licensing as a business model rather than from-scratch development.
I avoided a career in codecs after spending about a year in college learning about them. The patent minefield meant I couldn't meaningfully build incremental improvements on what existed, and the idea of diligently dancing around existing patents and then releasing something which intentionally lacked state-of-the-art ideas wasn't compelling.
Codec development is slow and expensive because you can't just release a new codec; you have to dance around patents.
IP law, especially defence against submarine patents, makes codec development expensive.
In the early days of MPEG, codec development was difficult, because most computers weren't capable of encoding video, and the field was in its infancy.
However, by the end of '00s computers were fast enough for anybody to do video encoding R&D, and there was a ton of research to build upon. At that point MPEG's role changed from being a pioneer in the field to being an incumbent with a patent minefield, stopping others from moving the field forward.
I disagree. Video is such a large percentage of internet traffic and licensing fees are so high that it becomes possible for any number of companies to subsidize the development cost of a new codec on their own and still net a profit. Google certainly spends the most money, but they were hardly the only ones involved in AV1. At Mozilla we developed Daala from scratch and had reached performance competitive with H.265 when we stopped to contribute the technology to the AV1 process, and our team's entire budget was a fraction of what the annual licensing fees for H.264 would have been. Cisco developed Thor on their own with just a handful of people and contributed that, as well. Many other companies contributed technology on a royalty-free basis. Outside of AV1, you regularly see things like Samsung's EVC (or LC-EVC, or APV, or...), or the AVS series from the Chinese.... If the patent situation were more tenable, you would see a lot more of these.
The cost of developing the technology is not the limitation. I would argue the cost to get all parties to agree on a common standard and the cost to deploy it widely enough for people to rely on it is much higher, but people manage that on a royalty-free basis for many other standards.
> Free codecs only came along … and it's hardly a robust strategy
Maybe you don’t remember the way that the gif format (there was no jpeg, png, or webp initially) had problems with licensing, and then years later there were scares about it potentially becoming illegal to use gifs. Here’s a mention of some of the problems with Unisys, though I didn’t find info about these scares on Wikipedia’s GIF or CompuServe pages:
https://www.quora.com/Is-it-true-that-in-1994-the-company-wh...
Similarly, the awful history of digital content restriction technology in general (DRM, etc.). I’m not against companies trying to protect assets, but data assets have always been inherently prone to “use”, whether that use was intended by the one that provided the data or not. The problem has always been about the means of dissemination, not that the data itself needed to be encoded with a lock that anyone with the key (or the means to get or make one) could unlock, nor that it should need to call home, effectively preventing the user from legitimately being able to use the data.
> Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born, and it's hardly a robust strategy
I don't know about video codecs, but MP3 (also part of MPEG) came out of Fraunhofer and was paid for with German tax money. It should not have been patented in the first place (and wasn't in Germany).
This is the sort of project that should be developed and released via open source from academia.
Audio and video codecs, document formats like PDF, are all foundational to computing and modern life from government to business, so there is a great incentive to make it all open, and free.
This is impossible to know. Not that long ago, something like Linux would have sounded like a madman's dream to someone with your perspective. It turns out great innovations happen outside the capitalist for-profit context, and denying that is very questionable. If anything, those kinds of setups often hinder innovation. How much better would Linux be if it was mired in endless licensing agreements and monthly rates, had a board full of Fortune 500 types, and billed each user a patent fee? Or any form of profit-incentive 'business logic'?
If that stuff worked better, Linux would have failed entirely; instead, nearly everyone interfaces with a Linux machine probably hundreds if not thousands of times a day in some form. Maybe millions, if we consider how complex just accessing internet services is and the many servers, routers, mirrors, proxies, etc. one encounters in just a trivial app refresh. If not Linux, then the open Mach/BSD derivatives iOS uses.
Then, looking even earlier than the ascent of Linux, we had all manner of free/open stuff informally in the 70s and 80s: shareware, open culture, etc. That led to today, where this entire medium only exists because of open standards, open source, and volunteering.
Software patents are a net loss for society. For-profit systems are less efficient than open non-profit systems. No system with a 'middle-man' is better than one that goes out of its way to eliminate the middle-man rent-seeker.
Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them. JVET has an attendance of about 350 such engineers each meeting (four times a year).
Not to mention the computer clusters to run all the coding sims, thousands and thousands of CPUs are needed per research team.
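For a rough sense of scale, here is a back-of-envelope calculation using only the figures quoted above (the headcount and salary range; this is my arithmetic, not actual JVET budget data):

```python
# Hypothetical back-of-envelope from the figures quoted above:
# ~350 engineers attending JVET, each costing roughly $100k-$300k per year.
engineers = 350
low_salary, high_salary = 100_000, 300_000

low_total = engineers * low_salary    # 35,000,000
high_total = engineers * high_salary  # 105,000,000
print(f"salary alone: ${low_total / 1e6:.0f}M-${high_total / 1e6:.0f}M per year")
# And that's before the CPU clusters for coding simulations, travel, and overhead.
```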
People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.
MPEG was reorganized because this Leonardo guy became an obstacle, and he’s been angry about it ever since. Other than that, I’d say business as usual in the video coding realm.
Who would write a web server? Who would write Curl? Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year? People don't understand that these companies have huge R&D budgets!
(The answer is that most of the work would be done by companies who have an interest in video distribution - eg. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)
> It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.
I'm not opposed to codecs having patents but Chiariglione set up a system where each codec has as many patent holders as possible and any one of those patent holders could hold the entire world hostage. They should have set up the patent pool and pricing before developing each codec and not allowed any techniques in the standard that aren't part of the pool.
> Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them.
How about governments? Radar, lasers, microwaves - all offshoots of US military R&D.
There's nothing stopping either the US or European governments from stepping up and funding academic progress again.
The really silly part is that even if you have a license from MPEG LA for your product, you still have to put in a notice like this:
THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER TO (I) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (II) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM
It's unclear whether this license covers videoconferencing for work purposes (where you are paid, but not specifically to be on that call). It seems to rule out remote tutoring.
MPEG LA probably did not have much choice here because this language requirement (or language close to it) for outgoing patent licenses is likely part of their incoming patent license agreements. It's probably impossible at this point to renegotiate and align the terms with how people actually use video codecs commercially today.
But it means that you can't get a pool license from MPEG LA that covers commercial videoconferencing, you'd have to negotiate separately with the individual patent holders.
MPEG-7 includes a binary XML standard [0] which is quite useful IMHO in comparison to others (I think it is used in DVB metadata streams). But beyond patents, it is hard even to find open documentation of BiM. I think the group was technically quite competent in comparison with other standards groups, but the business models around it really turn me off.
[0] https://mpeg.chiariglione.org/standards/mpeg-7/reference-sof...
EDIT: Here is the Wikipedia page of BiM, which evidently even made it into an ISO standard [1]
[1] https://en.m.wikipedia.org/wiki/BiM
> "Patents on h264, h265, and even mp3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs was broken."
Has AV1 solved this, to some extent? Although there are patent claims against it (patents for technologies that are fundamental to all the modern video codecs), it still seems better than the patent & licensing situation for h264 / h265.
This might be an oversimplification, but as a consumer, I think I see a catch-22 for new codecs. Companies need a big incentive to invest in them, which means the codec has to be technically superior and safe from hidden patent claims. But the only way to know if it's safe is for it to be widely used for a long time. Of course, it can't get widely used without company support in the first place. So, while everyone waits, the technology is no longer superior, and the whole thing fizzles out.
Not all codecs are equal, and to be honest, most are probably not optimized for or suitable for today's applications; otherwise Google wouldn't have invented their own codec (which then got widely adopted, fortunately).
Yes, because MPEG got there first, and now their dominance is baked into silicon with hardware acceleration. It's starting to change at last, but we have a long way to go. That way would be a lot easier if their patent portfolio just died.
The fact h264 and h265 are known by those terms is key to the other part of the equation: the ITU Video Coding Experts Group has become the dominant forum for setting standards going back to at least 2005.
> all the investments (collectively hundreds of millions USD) made by the industry for the new video codec will go up in smoke and AOM’s royalty free model will spread to other business segments as well.
He is not a coder, not a researcher; he is only part of the worst game there is in this industry: making money from patents and "standards" you need to pay for to use, implement, or claim compatibility with.
The article does not give much beyond what you already read in the title. What obscure forces, and how? Isn’t it an open-standards non-profit organisation? Then what could possibly hinder it?
Maybe because technologically closed standards became better, and a nonprofit project has no resources to compete with commercial standards?
The USB alliance has been able to work things out, so maybe compression standards could be developed in a similar way?
From Leonardo, who founded MPEG, on the page linked:
"Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks."
MPEG's patent policy was not its own. The Moving Picture Experts Group was Working Group 11 of Sub Committee 2 of ISO-IEC Joint Technical Committee 1. Other Working Groups in SC2 were the Joint Photographic Experts Group, the Joint Bi-Level Image Group, and the Multimedia/Hypermedia Experts Group. Therefore, the patent policy was the ISO-IEC policy for including patented technology in their standards.
MPEG was also joint with the video conferencing standards group within the CCITT (now the International Telecommunication Union), which generally required FRAND declarations from patent holders.
My recollection is that MPEG-LA was set up as a clearing house so that implementers could go to one licensing organization, rather than negotiating with each patent owner individually.
All the patents for MPEG 1 and MPEG 2 must have expired by now.
Besides patent gridlock, there is a fundamental economic problem with developing new video coding algorithms. It's very difficult to develop an algorithm that will halve the bit rate for the same quality, to get it implemented in hardware products and software, and to introduce it broadly in the existing video services infrastructure. Plus, doubling the compression is likely to more than double the processing required. On the other hand, within a couple of years the network engineers will double the bit rate for the same cost, and the storage engineers will double the storage for the same cost. They, like processing, follow their own Moore's Law.
So reducing the cost by improving codecs is more expensive and takes more effort and time than just waiting for the processor, storage and networking cost reductions. At least that's been true over the 3 decades since MPEG 2.
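The race described above can be made concrete with deliberately made-up but plausible rates (both numbers below are illustrative assumptions, not data): suppose a new codec generation halves the bit rate roughly every 8 years, while network/storage cost per bit halves every 2 years on its own.

```python
# Illustrative assumption: one codec generation (~8 years) halves the bitrate,
# while bandwidth/storage cost per bit halves every ~2 years regardless.
years_per_codec_generation = 8
codec_factor = 2.0                                      # 2x from the new codec
infra_factor = 2.0 ** (years_per_codec_generation / 2)  # 16x from infrastructure
print(f"codec: {codec_factor:.0f}x  infrastructure: {infra_factor:.0f}x")
# Over the same period, infrastructure improvement dwarfs the codec's 2x,
# which is the economic argument made above.
```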
In general, lossless compression works by predicting the next (letter/token/frame) and then encoding the difference from the prediction in the data stream succinctly. The better you predict, the less you need to encode, the better you compress.
The flip side of this is that all fields of compression have a lot to gain from progress in AI.
It is like upscaling. If you could train AI to "upscale" your audio or video you could get away with sending a lot less data. It is already being done with quite amazing results for audio.
The release of VP3 as open source predates Google's later acquisition of On2 (2010) by nearly a decade.
(I know nothing about the legal side of all this, just remembering the time period of Ubuntu circa 2005-2008).
No, just no. We've had free community codec packs for years before Google even existed. Anyone remember CCCP?
And regarding ”royalty-free” codecs, please read this: https://ipeurope.org/blog/royalty-free-standards-are-not-fre...
Just check pirated releases of TV shows and movies.
I remember this same guy complaining that investments in the MPEG extortionist group would disappear because they couldn't fight against AV1.
He was part of a patent mafia and is only lamenting that he lost power.
Hypocrisy in its finest form.
https://blog.chiariglione.org/a-crisis-the-causes-and-a-solu...
If you're interested in this, it's a good idea to read about the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) and go from there.
Fabrice Bellard's nncp (mentioned in a different comment) leads.
http://prize.hutter1.net/
https://bellard.org/nncp/
DCVC-RT (https://github.com/microsoft/DCVC) - a deep-learning-based video codec that claims to deliver 21% more compression than H.266.
One of the compelling edge-AI use cases is to create deep-learning-based audio/video codecs on consumer hardware.
One of the large/enterprise AI use cases is to create a coding model that generates deep-learning-based audio/video codecs for consumer hardware.