I’ll “yes, and” here… beyond AR/VR, a more powerful use case is multi-modal learning (with RL), an area where Meta is probably the leader, IMO.
Example paper here: “Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning”
https://arxiv.org/abs/2301.10931
This, IMO, is the pathway to AGI: it combines all sense-plan-do data into a time-coordinated stream and mimics how humans pass learning on to children, via demonstration recording and behavior authoring.
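To make “time-coordinated stream” concrete, here’s a minimal sketch (all names hypothetical, not from any particular dataset or lab) of pairing demonstrated actions with the nearest preceding sensory sample by timestamp:

```python
from dataclasses import dataclass
from typing import List
import bisect

@dataclass
class SenseSample:
    t: float        # timestamp (seconds)
    rgb: bytes      # egocentric camera frame
    audio: bytes    # microphone chunk
    imu: tuple      # accelerometer/gyro reading

@dataclass
class ActionSample:
    t: float
    label: str      # demonstrated behavior, e.g. "reach", "grasp"

def align(sense: List[SenseSample], actions: List[ActionSample]):
    """Pair each demonstrated action with the nearest preceding
    sensory sample, yielding (perception, behavior) tuples."""
    times = [s.t for s in sense]  # assumed already sorted by time
    for a in actions:
        i = max(bisect.bisect_right(times, a.t) - 1, 0)
        yield sense[i], a
```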
If we can build robots with locomotion and dexterous manipulation, egocentric exploration, and a behavior-authoring loop that uses human demonstrations and trajectory reinforcement - well, we’ll have the AI we’ve all been talking about.
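And a rough sketch of what that behavior-authoring loop could look like, assuming a common two-stage setup (behavior cloning from recorded demonstrations, then a REINFORCE-style update on whole trajectories); this is illustrative only, not any specific pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny policy over 64-dim observation features and 8 discrete actions
# (both sizes are arbitrary placeholders).
policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 8))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def clone_behavior(demos):
    """Stage 1: supervised imitation of recorded human demonstrations.
    demos: iterable of (obs, action), obs a (64,) float tensor,
    action a scalar long tensor."""
    for obs, action in demos:
        loss = F.cross_entropy(policy(obs)[None], action[None])
        opt.zero_grad(); loss.backward(); opt.step()

def reinforce_trajectory(trajectory, ret):
    """Stage 2: reinforce a whole trajectory with its scalar return
    (a vanilla REINFORCE update, used here as an assumption)."""
    logps = [F.log_softmax(policy(obs), dim=-1)[a] for obs, a in trajectory]
    loss = -ret * torch.stack(logps).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```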
Probably the most exciting area of research that most people don’t know or care about.
That’s why head-mounted, all-day egocentric AR is so important - it gives eyes, ears, and sense perception to our learning systems, with human-directed egocentric behaviors guiding the whole thing. Just like pushing your kid down the street in the stroller.
Applications of embodied AI are very interesting. Additionally, a lot of hard problems like this are increasingly being solved in simulation; see Wayve's GAIA world model.