VFX artist and developer here, who's deep into this stuff, and it is really not there. It's an island unto itself, barely controllable and barely usable with other media. They are just now getting around to generating alpha channels, with virtually none of the existing AI video or image generation pipelines able to incorporate and work with alpha channels. This is just one of several hundred aspects of incompatibility. It seriously appears as if no one on any of the AI video generation research teams has professional media production experience, or even bothered to look at existing media production data standards; what they are building tool-wise is incompatible in every possible respect.
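For anyone unfamiliar with why alpha channels matter so much here: compositing pipelines rely on the standard "over" operator, which blends a foreground onto a background using per-pixel alpha. A tool that emits only RGB forces lossy workarounds like chroma keying. A minimal NumPy sketch (toy data, straight alpha assumed, not premultiplied):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Straight-alpha 'over' composite: out = fg*a + bg*(1-a)."""
    a = fg_alpha[..., None]  # broadcast alpha across the RGB channels
    return fg_rgb * a + bg_rgb * (1.0 - a)

# Toy 1x2 image: one fully opaque red pixel, one half-transparent red pixel,
# composited over a blue background.
fg = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
alpha = np.array([[1.0, 0.5]])
bg = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])

out = over(fg, alpha, bg)
# The opaque pixel stays red; the half-transparent one blends with the blue.
```

Without a real alpha channel from the generator, there is no `fg_alpha` to feed this step, so the foreground cannot be cleanly layered over other media.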
If anyone is curious, a Meta data scientist published a great piece about what LLMs are actually doing (and are therefore actually able to do), and how that reality is papered over by the chat-bot interface. It's a long but very engaging read.
This was really interesting, thank you. I found myself playing "would I really ignore this color, or would I ignore any color given the contrasting castles etc. that are drawing your attention?" in all the pictures.
If Data from Star Trek (or Ava from Ex Machina) walked out of a lab, we’d have no problem accepting that AGI had been accomplished. Or if the scenario in the movie Her played out with the Samantha OS, we’d be forced to admit not only to AGI, but to the evolution of ASI as well. However, there are no such examples in the real world, and after months of overhyping ChatGPT, we still don’t have anything like Data. So it’s not shifting the definition, it’s recognizing that accomplishing a single intelligent task isn’t general intelligence.
Won’t the industry change to adopt that massive price cut/productivity gain?