At this point in time, the money, infrastructure, and amount of work required are very high compared to the returns you get out of it. VR/AR/XR/MR as a whole is at a pivotal stage and still not very efficient. As far as I know, you can expect something like that by the end of the decade.
What are the real-life, actually useful use cases for this tech?
I can imagine one in manufacturing: detecting defects or layout mismatches.
Is there any open source project that uses an image recognition library to achieve a genuinely useful task? All I've seen from board partners seems to amount to very simple demos, where a labeled box is drawn around an object. Who is actually using that information, how, and for what?
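For what it's worth, here's a minimal sketch (not any specific open source project) of what "using the box information" can look like beyond drawing it: count detections of one class per frame and trigger downstream logic when a threshold is exceeded. The model weights, class name, and threshold are illustrative assumptions.

```python
import cv2
from ultralytics import YOLO  # pip install ultralytics

MODEL_WEIGHTS = "yolov8n.pt"   # assumption: generic pretrained COCO model
CLASS_OF_INTEREST = "person"   # assumption: stand-in for "defect", etc.
MAX_ALLOWED = 3                # assumption: alert threshold

model = YOLO(MODEL_WEIGHTS)
cap = cv2.VideoCapture(0)      # or a video file / RTSP stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    # Map each detected box to its class name and count the one we care about.
    labels = [results.names[int(c)] for c in results.boxes.cls]
    count = labels.count(CLASS_OF_INTEREST)
    if count > MAX_ALLOWED:
        # This is where the detections become actionable: log, stop a
        # conveyor, send a webhook, etc., instead of just drawing boxes.
        print(f"ALERT: {count} '{CLASS_OF_INTEREST}' detections in frame")

cap.release()
```

In the manufacturing example above, the same loop would count defect detections per part and flag the part for rejection rather than printing an alert.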
I was also part of the Kinect craze and made three demos (mostly games) using their SDK, and I still have a very hard time defending this tech in the eyes of coworkers who only see it as surveillance tech.
Do you have any suggestions on how to proceed? So far, I've procured a Jetson, five cameras, a stand to mount and calibrate the modules, and a camera array HAT to connect four cameras to the Jetson. I was also looking at VPUs, NPUs, and other hardware, but I'm struggling to identify compatible options. How can I move forward and build such a model to test and validate within three months?
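As a possible first milestone for a three-month plan, just getting all four cameras streaming into one process is worth doing before worrying about extra accelerators. A minimal sketch, assuming the HAT exposes CSI cameras via nvarguscamerasrc (sensor IDs 0 to 3) and that your OpenCV build has GStreamer support; resolution and framerate are placeholders to adjust.

```python
import cv2

def csi_pipeline(sensor_id: int, width: int = 1280, height: int = 720, fps: int = 30) -> str:
    # Standard Jetson CSI capture pipeline: camera -> NVMM buffer -> BGR frames for OpenCV.
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=true"
    )

# Assumption: four cameras on the array HAT, enumerated as sensor IDs 0..3.
caps = [cv2.VideoCapture(csi_pipeline(i), cv2.CAP_GSTREAMER) for i in range(4)]

while True:
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            frames = None
            break
        frames.append(frame)
    if frames is None:
        break
    # Feed `frames` into calibration capture or inference here;
    # showing camera 0 is just a sanity check.
    cv2.imshow("cam0", frames[0])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()
```

Once frames are flowing, the usual next steps would be per-camera intrinsic calibration with your stand, then a small labeled dataset and a baseline detector, which leaves time in the three months for validation.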