As an aside, I do not personally feel the future of generative modeling is in generative art or creating new Pokémon or things like that: categories which broadly seem like neat tricks without real-world usefulness, or at least adoption.
[This comment brought to you by a thread [1] from earlier today]
I think the people calling it a dumb treadmill or saying it can be replaced by YouTube and a used exercise bike are clueless.
During non-rush times, though, when tables didn’t get refilled as people left, it made a lot more sense to play up dessert.
https://images.ctfassets.net/95kuvdv8zn1v/6h1C7lPC79OLOlddEE...
They and their VC backers are clearly betting that radar + lidar + imaging will be the ultimately successful approach to full self-driving cars. It is the completely opposite design and engineering philosophy from Tesla, which attempts "full self driving" with camera sensors alone and categorically rejects lidar.
It is interesting to me that right now this is sitting on the HN homepage directly adjacent to: "Tesla to recall vehicles that may disobey stop signs (reuters.com)"
Personally I'm also working on an industrial application, using a CycleGAN-based system to augment real-world data (e.g. training a network to "paint" an object so we can apply traditional computer vision techniques, such as an HSV filter, to locate the object). It's quite promising for this kind of application, albeit hard to fine-tune.
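For readers unfamiliar with the second half of that pipeline, here is a minimal sketch of what "locate the object with an HSV filter" could look like, assuming the network has painted the target a distinctive color (here, green). The function names, thresholds, and the pure-NumPy HSV conversion are illustrative assumptions, not the commenter's actual code:

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB->HSV for a float image in [0, 1]; all channels in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    maxc = img.max(axis=-1)
    minc = img.min(axis=-1)
    delta = maxc - minc
    s = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    safe = np.maximum(delta, 1e-12)  # guard against gray pixels (delta == 0)
    h = np.where(maxc == r, (g - b) / safe,
        np.where(maxc == g, 2.0 + (b - r) / safe,
                            4.0 + (r - g) / safe))
    h = (h / 6.0) % 1.0
    h = np.where(delta == 0, 0.0, h)
    return np.stack([h, s, maxc], axis=-1)

def locate_by_hue(img, h_lo, h_hi, s_min=0.5):
    """Return the (row, col) centroid of pixels whose hue is in [h_lo, h_hi]."""
    hsv = rgb_to_hsv(img)
    mask = (hsv[..., 0] >= h_lo) & (hsv[..., 0] <= h_hi) & (hsv[..., 1] >= s_min)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# Toy image: gray background with a pure-green 2x2 patch at rows 4-5, cols 6-7.
img = np.full((10, 10, 3), 0.5)
img[4:6, 6:8] = [0.0, 1.0, 0.0]
print(locate_by_hue(img, 0.25, 0.42))  # green hue is ~1/3 -> (4.5, 6.5)
```

In a real system one would likely use OpenCV's color conversion and `cv2.inRange` instead, but the idea is the same: once the network normalizes appearance, the downstream localization step is trivially simple and debuggable.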
I think your use case is extremely promising, assuming it results in better-quality output than just running a modern object detector. Another use case I don’t have bandwidth for, but which would likely be very marketable, is similar to what you’re describing but allowing the use of traditional algorithms like SIFT or SURF across modalities.