Coming from a senior Oculus lead, the most interesting thing about this write-up for me is what it lacks: it says almost nothing about the software stack / operating system. It's still 100% hardware at one end and end-user applications at the other, with no discussion of the platform in between, which to me is actually the highest-value proposition Apple is bringing here.
In short: Apple has made a fully realized spatial operating system, while Meta has made an app launcher for immersive Unity/Unreal apps for vanilla Android. You can get away with an app launcher when all you want to support is fully immersive apps that don't talk to each other. But that fails completely if you are trying to build a true operating system.
Think about what has to exist to, say, intelligently copy and paste parts of a 3D object made by one application into a 3D object made by another, the same way you would copy a flat image from Photoshop into a Word document. The operating system has to truly understand 3D concepts internally. Meta is building these features, but it is stuck in a really weird space, trying to wedge them in between Android underneath and Unity/Unreal at the application layer. Apple has had the advantage of green-field engineering it exactly how they want it, from the ground up.
For me personally, it's definitely the platform. Requiring a Meta / Facebook account for already-purchased Oculuses, retroactively bricking devices and deleting software which was bought before that requirement, has put Oculus firmly in the "hardware I will never consider in my life" camp.
It's an incredible amount of goodwill to burn for a company with so little to spare, and I'm surprised it hasn't come up yet in this thread or in the blog post. Meta has fundamental trust issues.
That isn’t really what the parent is talking about at all…
If your issue is with the device requiring connection to an external account, the Vision Pro requires an Apple ID, which will tie it to way more of your digital things than a Facebook login.
Thanks for your comment. I love just a few things on my Quest 2, and several times a week I take ten minute breaks for ping pong, something meditative, tai chi, etc.
You reminded me of the negative aspects of the Meta/Facebook corporate mass; they should clean up their act on privacy, etc. for VR, the same way they have basically purchased goodwill in the AI community by releasing LLM model weights.
Apologies for going off topic, but Apple similarly needs to trade a little profit to buy themselves a better "look", because they are looking a little tarnished too.
100% this. I paid the increasingly common "privacy and control premium" for a Valve Index (which I'm very happy with) to avoid the entanglements of borrowing a headset from Meta for a large, up front, non-refundable fee.
You're comparing a $300 product from a company that profits on analyzing their customers to a $3500 product from a hardware company.
This is not a fair comparison. They're motivated differently.
Furthermore, the "anti-account" viewpoint is making a privacy issue out of a pinch or friction point. Accounts are required for both devices. If you bought a device which allowed you to buy apps, the experience would be horrible without an account. If most people are willing and it's a better experience, it makes sense to force everyone into the same rails to reduce implementation cost. If it increases revenue, there's yet another reason to do it. It's ridiculous to be in an ideological minority and expect a company to bend to that when it's not in their best interest.
While I prefer Apple products because <yada yada>, Meta and Apple are doing the same thing here. The only difference is that Apple has higher current trustworthiness. This is also the reason they can release a $3500 headset.
I’ve had a Quest 1 for years, it always required an Oculus account, and since rebranding as Meta it now requires a Meta account, which my Oculus account was converted into.
It’s not bricked and hasn’t deleted my software, I’m curious what exactly you’re referring to with that.
> Requiring a Meta / Facebook account for already-purchased Oculuses, retroactively bricking devices and deleting software which was bought before that requirement
You always needed an Oculus account, and they didn't brick anything. You did have to migrate from an Oculus to a Meta account, but a Facebook account was never required on a Quest 1 (or 3). Is a Meta account really that different from an Oculus account?
The Quest 1 has been deprecated, yes, but not bricked.
This has been my main complaint with Oculus all the way since the Rift days. At that point I assumed it was forthcoming within a few years yet here we are 8 years later and somehow it's not all that different. I don't understand how Oculus/Meta isn't drastically ahead at this point on software.
Actually, MSFT made the same blunder with HoloLens. Well... they did start to build some of the core spatial context (and had a fabulous head start). But somewhere along the way they yielded to Unity/Unreal. This was mind-boggling to me, as giving away the keys to the platform to another party was literally the founding story of Microsoft (with IBM having made the blunder). I wonder if engineering leadership recalls that history when making such strategic goofs.
What's more insulting, after the announcement of the vaporware known as "Infinite Office", is Meta's total lack of attention to their PC software. The work-related features of Quest would be near non-existent if it weren't for third parties.
Well it's easy to understand why. How could they build an MR ecosystem when their latest device is just barely MR?
They can only just now move towards MR with the Quest 3 and really it'll need another generation to be MR native.
They have a good relationship with developers and focused on what their current hardware is capable of, which is running one VR app. They spent the last 8 years on that use case and I think that was the right choice given the hardware realities at the time.
To me the interesting bit is that even a VR executive a decade-plus into working in the field doesn't find this device compelling enough to own.
I get that the thesis is that this version is the devkit etc, but viable consumer product status (read: enough adoption for the device to be profitable) seems very far away
> viable consumer product status (read: enough adoption for the device to be profitable) seems very far away
He mentions a few short term use cases for the current hardware.
For example: Productivity on the go (A laptop with the headset for multiple virtual displays) and Live Sports.
> Apple Immersive on Vision Pro is a transformative experience in terms of video quality and its ability to deliver a real sense of presence. Watching a game in high-resolution VR has the potential to be legitimately better than a regular 4K TV broadcast by enabling hardcore fans to feel much closer to the action
I think there are more than enough higher income people who would pay 5k just for a thing to watch a movie in private, with much better immersion than any alternative, on a plane.
Apple is enamored with vertical integration which gives them control on a whole other level compared to their competitors; feels like history repeating.
What's different with AVP compared to previous products is that it starts off even better thanks to Apple's own custom chips. There's also the amazing network effects of their ever-growing ecosystem.
Competitors don't have all this, so they will struggle to compete on the high-end. The intention of Apple is clearly indicated by the price of AVP, they want the profits at the top, let the rest fight over the scraps at the bottom with crummy privacy-invasive software and poor integration/interoperability.
It also 'starts off better' because they refined its components throughout the rest of their ecosystem over the last half decade (or more?). If you look at a variety of unprompted UI changes in iOS and tvOS, or hardware changes in iDevices, they now look like field tests at scale: ways of learning before bringing these new, now-proven things together.
It's a way of development seen almost nowhere else.
Or I'm giving them too much credit ... but I don't think so. I think it's evident they seeded hard parts throughout the rest to learn at massive scale.
I like that Apple is focusing on 3D widgets as an app primitive, but is it really that hard to put that into Oculus/Android? Android actually does have widgets. What about the OS precludes it from doing what Apple has done?
There are some hard decisions around forcing everyone into their custom material, which Apple made so that they can handle the rendering more deeply... but is that really a core OS thing? It doesn't seem to need a new kernel.
It’s not so much difficulty as system architecture. Oculus just doesn’t have an OS layer, at least not in the sense of a platform that helps applications share resources and interact with each other.
The Oculus platform is more like a classic video game console: there are system APIs, but they are designed to be used by single-tasking applications.
And for the user, the Oculus system UI is really an app launcher / task switcher.
It’s not better or worse, just a very different design philosophy.
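To make that concrete, here's roughly what the system-level primitive looks like on visionOS: a bounded 3D volume is just another scene type the OS knows how to place and composite alongside other apps' windows in the shared space. A minimal sketch (the "Globe" asset name is a hypothetical bundled model):

    import SwiftUI
    import RealityKit

    @main
    struct GlobeApp: App {
        var body: some Scene {
            // A volumetric window: the OS owns placement, scaling, and
            // compositing of this volume next to other apps' content.
            WindowGroup {
                Model3D(named: "Globe")  // hypothetical USDZ asset in the bundle
            }
            .windowStyle(.volumetric)
            .defaultSize(width: 0.4, height: 0.4, depth: 0.4, in: .meters)
        }
    }

Nothing in the Quest runtime corresponds to this: a Unity app owns the whole frame, so there's no scene type for the system itself to compose.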
Damn. I never thought of it that way. You need that OS layer. That should be Meta's core competency if they want to win. Games are something that runs on top of other people's platforms. I thought Zuckerberg did all this to stop being a layer on top of somebody else's stack, but all they did with Oculus was the exact same thing.
That is what always bugged me about the pivot to “meta”. They never had to find product market fit to succeed. They were never hungry. They could just throw money until something clicked… but money alone doesn’t make a revolutionary product. You need somebody hungry enough to see the world in a different way and then execute the fuck out of it.
Dunno how this relates to apple though. They have equal amounts of cash to throw at problems until they are “solved”. Perhaps the “operating system” is a solved problem already to some extent and maybe there isn’t anything truly new?
I get what you are saying, but that is why the Vision Pro is still an over-engineered dev kit for a half-baked OS.
At least the Meta Quest, for example, has a lot of content and VR games. The Vision Pro doesn't seem to have much use apart from curiosity, because such a system hasn't been fully built out. It seems like a device that won't really be ready for prime time for a couple of years yet.
Thinking about it: if Apple had launched the Vision Pro with something like "you can play Half-Life: Alyx on it", the way they did with Death Stranding and the M2/M3 chips, they might have had a larger buyer pool.
This was my immediate takeway from using Quest 3 as well. Zuck has stated forever that he wants a platform, yet when it comes time to do the hard work, we just end up with an Android distro running a React app.
I was saying the same thing back during the first five years of the iPhone. There were so many ostensibly serious people who thought that BlackBerry or Nokia would have an “iPhone killer” just around the corner, and it’s like… do you chumps have any idea how difficult it is to build an operating system?
He talks about how "Gaze & pinch is an incredible UI superpower and major industry ah-ha moment" but... if that's really the case, then it's quite an indictment of the VR industry:
> The hardware needed to track eyes and hands in VR has been around for over a decade, and it’s Apple’s unique ability to bring everything together in a magical way that makes this UI superpower the most important achievement of the entire Vision Pro product, without a shadow of doubt.
So they had all the pieces, but only Apple put it together and realized that you'd need a VR equivalent of point-and-click? If that's actually true, it's sad.
It's almost exactly the same kind of conceptual transition that Apple made happen with keyboardless smartphones, too, which adds an extra sort of funny element to it.
Putting it together is not as simple as it seems. I think it was an immense engineering and design effort from Apple to get it to the point where it feels effortless and obvious.
Not only do they have two cameras per eye and all the hardware for wide-angle, out-of-view hand tracking, they also had to consider:
Privacy: the user’s gaze is never delivered to your process when your native UI reacts to their gaze. Building this infrastructure to be performant, bug free and secure is a lot of work. Not to mention making it completely transparent for developers to use
Design: they reconsidered every single iOS control in the context of gaze and pinch, and invented whole new UI paradigms that work really well with the existing SDK. You can insert 3D models into a SwiftUI scroll view, and scroll them, and it just works (they even fade at the cut off point)
Accessibility: there is a great deal of thought put into alternative navigation methods for users who cannot maintain consistent gaze
In addition to this they clearly thought about how to maintain “gazeable” targets in the UI. When you drag a window closer or farther it scales up and down maintaining exactly the same visual size, trying to ensure nothing gets too small or large to gaze at effectively
There are so many thousands of design and engineering decisions that went into making gaze and pinch based navigation work so simply, so I can understand how it hasn’t been done this effectively until now
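To make the privacy and design points concrete, here's a rough sketch of how this surfaces in SwiftUI on visionOS (the "ToyRocket" asset name is hypothetical): the app only declares that a view can receive a gaze highlight; the system draws the highlight out-of-process, and the app learns nothing until a pinch arrives as an ordinary tap. The same scroll view can host a 3D model that lays out like any other view:

    import SwiftUI
    import RealityKit

    struct GazeList: View {
        var body: some View {
            ScrollView {
                VStack(spacing: 24) {
                    ForEach(0..<5) { i in
                        Button("Item \(i)") {
                            // Fires on pinch; raw gaze data never reaches the app.
                            print("selected \(i)")
                        }
                        .hoverEffect(.highlight)  // system-rendered gaze feedback
                    }
                    // A 3D model participating in 2D layout and scrolling.
                    Model3D(named: "ToyRocket") { model in  // hypothetical asset
                        model.resizable().scaledToFit()
                    } placeholder: {
                        ProgressView()
                    }
                    .frame(height: 200)
                }
            }
        }
    }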
> So they had all the pieces, but only Apple put it together
It's very difficult to change a mindset or culture at big companies. Existing VR companies were too invested in using a controller. Similarly, back in the early smartphone days, all the big companies thought that smartphones must have a physical keyboard.
Apple leveraged their existing OS X stack; for Meta this would mean either heavily forking Android or starting their own OS. Both would take 5+ years to get meaningful traction. Remember Google Fuchsia: the code repo went public in 2016, the initial release was in 2021, and it's still not anywhere near where it'd need to be for a VR headset.
> Apple has made a fully realized spatial operating system
I'm not sure what you mean precisely. Apple doesn't seem to have done more than windows with persistent positions. This isn't nothing, but it's also not something that has tremendous value for a headset that you only wear 30 or 45 minutes at a time.
And they have little to no management of these floating windows. I'm really not holding my breath for Apple to come up with breakthrough window management, given what they've done over the past decade.
If you think not in terms of potential and promises but of actual value to the user right now, you can understand why Meta hasn't bothered with the gimmick.
Is this a big lead for Apple? Perhaps; the world mapping could be something difficult to reproduce. Or Meta could be at roughly the same point but decided not to go there.
The user is pointing out that the real fridge behind them is reflected by the surface of the virtual object in front of them. And consider, on top of that, that the fridge is not visible to the headset at that moment: it is captured in the 3D spatial model that was created of the room. None of this is a pre-rendered or rigged or specifically engineered scenario; it's just what the operating system does by default. So one app that is totally unknown to another app can introduce reflections into the objects it displays. This is just so far beyond what can happen on the Quest platform by any means at all. And it can only happen because the 3D spatial modeling is integrated deeply into the native rendering stack, not just layered on the surface of each app.
Other people will come up with the right way to do spatial windowing… but they'll fuck up somehow and Apple will take it, refine it, polish it, and "win".
And as an educator who taught a VR development course at university a few times since 2018: setting up and maintaining 15 Oculus Quest Mk I headsets was an absolute pain, with accounts that I had to set up, etc. Sure, it worked somewhat like an Android phone, but there was no real fast pass for users like me; a lot of the features and the UI changed over time, and it ultimately felt like the platform took itself too seriously and therefore had no problem wasting my time.
When designing a concept, the core difference is always whether your design respects the user, or whether it does not and tries to make them do things: spend more time on the platform, spend more money on the platform, etc.
To support copy&paste all you need is a common format for 3D objects, like HTML is for 2D documents; glTF might be a reasonable candidate.
The problem with the Apple approach is that there are no apps and games, and there probably won't be many, given it's a $3500 device with few users over which Apple exerts its tyrannical grip (or if there are, they will be ports of Unity/Unreal, PC, or Android VR apps, not using any of the special features the Apple OS may have).
People have been trying to solve this problem since the VRML days in the '90s. I suspect you are underestimating the complexity of 3D data by a pretty huge extent.
3D is much, much, much more complicated than 2D, especially if you're trying to interchange between arbitrary applications that may have divergent needs.
Start with this little thought experiment.
What do you mean by 3d object?
Do you mean a set of polygons, like in a traditional triangle mesh?
A volume, like voxels? A set of 3d surfaces?
Do you need to model interior details, or only the exterior envelope? Do you need to be able to split or explode it at some arbitrary level of detail? Do we need to encode sharp edges or creases in some way?
etc, etc, etc
This is before you have touched materials, texturing, lighting, any of that.
It goes back to what I have mentioned elsewhere in this thread that Apple always thinks product first. Hardware specs are only ever in service of the product Apple is trying to deliver. If they could never list specs, they wouldn't. The industry forces some capitulation which is why Apple ever talks about specs at all.
Oculus is a gaming device and doesn't have "productivity" ambitions. I believe that is because 3D glasses have very limited productivity applications. But who knows; for people who think a 13" laptop is okay for work, the Vision Pro may become something better at a comparable price.
> I believe that is because 3D glasses have very limited productivity applications.
AR has tremendous productivity applications if the device is small and wearable enough. Imagine being up in your attic running cables and seeing a projection of the floor plan of your house so you can see where the different rooms in your house are. Or driving a car, except all the blind spots disappear and are filled in with vehicle-mounted camera feeds, with unobtrusive overlays for navigation or to highlight potential safety hazards. Imagine assembling some IKEA furniture except instead of puzzling through the instruction book, you have an app that can recognize all the pieces using machine vision and simply show you what to do. Imagine never forgetting a name or a face, because every time you see even a distant acquaintance, your glasses can run facial recognition and make their name pop up by their face in real life. Imagine noticing a weird rash on your arm, but as soon as you look at it, your glasses immediately diagnose it as a potential MRSA infection and pop up a notification allowing you to call an urgent care clinic that’s open right this second.
Oculus has expanded its applications beyond just gaming. There are also productivity applications and tools available for Oculus devices, like virtual desktops.
> Apple has had the advantage of green field engineering it exactly how they want it to be from the ground up.
It's not quite green field on the software side, albeit mostly. Clearly they already have experience re-platforming a whole operating system multiple times. The underpinnings of macOS power everything from desktops to smartphones to watches to tablets, all with diverging user interfaces. They had a solid first-party foundation on which to build the interface they want; Facebook is ultimately a third party to Android and is having to solve the same Android hardware-integration problems as everyone else.
> Think about what has to exist, to say, intelligently copy and paste parts of a 3D object made by one application into a 3D object made by another, the same way you would copy a flat image from photoshop into a Word document.
I own an AVP and this isn't something that can be done with it, to the best of my knowledge. Please explain how this is possible with the existing OS and apps.
I have seen people download 3D model files (STL), position/scale them in front of themselves, and then walk around the model. I am not sure if they added anything to the Vision Pro, but it was pretty impressive. I would not be surprised if it can handle common 3D formats and render them straight into your AR environment out of the box.
Possible with the existing OS? Definitely. It's just clipboard data.
Possible with the current apps? None of them support a standard partial copy of 3D objects, but they do allow copy-pasting full objects between apps, AFAIK. E.g., I can drag a USDZ file from a message into Keynote.
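For the full-object case, here's a sketch of what that amounts to; UIPasteboard and UTType.usdz are real APIs, while the file name is hypothetical. Notably, this is exactly what the grandparent's semantic partial copy is not:

    import UIKit
    import UniformTypeIdentifiers

    // Copy a complete 3D object to the shared pasteboard as USDZ data.
    func copyModel() {
        guard let url = Bundle.main.url(forResource: "rocket", withExtension: "usdz"),
              let data = try? Data(contentsOf: url) else { return }
        UIPasteboard.general.setData(data, forPasteboardType: UTType.usdz.identifier)
    }

    // Any other app can paste the whole object back out...
    func pasteModel() -> Data? {
        UIPasteboard.general.data(forPasteboardType: UTType.usdz.identifier)
    }
    // ...but "paste just the left wing into my scene" would need shared
    // scene-graph semantics across apps, not just a shared file format.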
None of this matters compared to the number of apps available. Given the high price of the Vision Pro and the resulting low sales, it would make little business sense for app developers to invest in creating apps for it instead of for the Quest 3.
This is a really interesting point and plenty of products have died on the hill of not realising that "content is king".
What makes it particularly interesting is that the visionOS App Store so far seems to have had quite an anemic reception from developers. Barely any novel non-toy apps have been released for it, with eight months since devs got access last year and two months in the open dev ecosystem. It's possible the tsunami is just around the corner, but it has to be said that this seems to be diverging heavily from the launch of the original iPhone App Store. It was always going to be a question, since the user base is minuscule compared to every other headset, and iOS devs mostly have negligible experience developing full-scale VR/AR applications, which is actually a very steep learning curve. So the barriers are high and the incentive relatively small.
If Apple fails to attract devs to its store it will create a huge problem for them that they are pretty unused to having. I wonder how they will approach a situation like that, since their culture is not used to dealing with that as a problem these days.
> None of this matters compared to the number of apps available.
There's a reason that iPadOS launched a multi-window mode, with arbitrary window sizes, curved corners, and the same curved handle bar, several iPadOS iterations back.
Doing that ensured that visionOS could launch with all iPad apps opted in: almost any iPad app that respects the ability to run in iPadOS's Stage Manager mode with multiple windows works beautifully OOTB on visionOS.
In fact, with any iPad app run on visionOS, if you "pull" the app close to you at iPad size, you can touch it as if it were an iPad screen, and your fingers and touches work as if touching an iPad.
The only apps that don't work as if native are those doing something special with multi-touch or touch gestures, but most apps "just work". It's pretty wild.
The press keeps comparing Vision Pro to macOS. No, the 2D pass-through mode is a room-sized iPad Stage Manager: infinite iPads.
> Given the high price of the Vision Pro and the resulting low sales, it would make little business sense for app developers to invest in creating apps for it instead of for the Quest 3.
For sure. That’s exactly how it’s played out in iOS vs Android. No developer makes anything for the higher priced, small market iOS, right?
Perhaps it matters because of apps. With an OS that natively supports spatial features, it could be easier to expand the functionality of existing iOS apps or interact with them in an AR/VR context.
It is high time to stop praising Apple for their software ecosystem. I am still impressed by their level of engineering, but it hasn't done much in terms of real use cases since the iOS days.
Case in point: the whole iPad and "what is a computer" campaign. It is hilarious when half-baked mouse support is celebrated in a tablet. Despite using similar hardware, Apple refuses to treat their tablets up to their true potential.
Despite working on an RTOS for the AVP, there are no signs that the headset stack will be exploitable by professionals, the way the Mac used to be. For the foreseeable future it will remain a good software demo built on top of nice displays.
I worked at Oculus for close to a year on a team that was charged with building a platform. The trouble at Oculus was that there were multiple warring platform efforts.
> between Android underneath and Unity/Unreal at the application layer.
So they want to build a new kind of device and a new kind of experience, and they seriously think they can do that by just plugging together ready-made parts built by others? No wonder this is going nowhere.
> Apple has made a fully realized spatial operating system
Said that out loud to a group of techies and they laughed so hard one of them fell out of their seat.
Apple put the iPad on your face. And that's pretty much it.
The few VP users that haven't returned the device don't use any of the "spatial" features like controlling the UI by pointing in space, since it's so inaccurate that it gives Swype a run for its money.
I think the description of the Vision Pro as a dev kit is spot on. It's also a beta product in many ways.
Apple knows full well that this is not a mass-market product; they have made no attempt to make it even remotely affordable to most people. But they also know that every aspect of the hardware will improve over the next decade, and as it does, they will have ironed out many of the problems with how we use AR/VR and will be ready for it, based on the real-life experiences of actual owners rather than years of in-house testing.
The screen will get bigger (I hear the FoV is pretty poor at the moment), CPU/power/thermal performance will improve, battery density will go up, cameras and sensors will get better and cheaper, and many parts will get smaller/lighter. And in that time they will learn by doing, making it better/cheaper/lighter and working on the software and interaction model.
Hopefully at the same time it will really spur on the rest of the industry and we will see more competition and experimentation.
I can't see myself buying something like this for the next 10 years at least. But something like the VPro that is better, smaller and lighter and doesn't cost the earth could be quite tempting for late adopters like me.
I thought it was pretty good - but nowhere near ideal of course.
But I discovered I could dispense with both of Apple's "Light Seal Cushions" and simply line the light seal with some 1/8" adhesive foam. It took a little experimenting to avoid hard pressure points and make it comfortable.
It is now very comfortable with the following benefits:
1. The field of view is noticeably wider. Yay! The immersion improvement feels cognitively significant.
2. I realized that greater peripheral vision downward is more important than upward. Being more aware of what's below makes us feel safer, and down is also where our hands and keyboards live.
So I arranged the padding to wear the headset slightly lower, allocating all the increased vertical FOV downward.
3. The combination of being 1/4 inch or so closer to the face, and firmer padding, reduced the feeling of weight on the front of my head.
Warning - literally: I get an occasional popup warning that my eyes are too close to the lenses, the danger being that if I were to fall I could potentially hit my eyes. I stay seated most of the time, but occasionally walk through rooms, so it is worth being careful.
I use my Vision Pro for 10 hours a day on many days, comfortably. I had to switch to the two-strap support to do this. But I have ordered an adapter that allows the original behind-the-head cushion strap to be used with a second cushion strap over the head. I anticipate that working even better, given how much more surface area the weight will be distributed across. (Also, turning a knob is easier for adjustments than messing with velcro.)
Also, I got some thicker (in width), lighter foam to add more light sealing around the edges.
This feels like a real upgrade, a year or so before Apple will release a bump.
Ten years ago was pre-Oculus Rift. We weren't saying anything about VR headsets.
Five years ago was pre-Valve Index. We didn't have CPUs in a headset, nor a battery; cameras were only used for tracking. The things we were saying would improve were "screen door effect" and "tracking", both of which have.
First Oculus headset versus Quest 3 right now? Quest 3 wins. There's progress. Perhaps not as fast as we'd want. There is progress though. I suspect that progress will continue.
There were a lot of tablets before the iPad but it wasn’t until the iPad that tablets really took off as a serious market segment. Ditto with AirPods and Bluetooth earbuds.
In the past, once Apple started pouring R&D money into a specific product type, the entire industry around it tends to advance very quickly. I’m optimistic about the Vision Pro, and I actually think the n+2 Meta release will be much better off for it.
The problem with every headset over the last ten years was that they just didn’t have the technical commitment to really overcome the major problems. Making good VR is a VERY tech heavy endeavor. Apple has shown that they can make hardware which really begins to solve those issues (good resolution and quality pass through being one, having a nice OS being another). The AVP is truly starting to make a difference in what is possible with VR. It’s not just more optimism.
Apple has a history of sticking with things like this. The Watch and HomePod were both seen as overpriced and sold poorly in their first iterations. Apple leadership had the confidence to see those initial versions through. This was part of the internal culture during my time at Apple.
I know this is anecdotal, but my brother has used the Vision Pro daily ever since we got it. He had an Oculus DK1 and an Oculus Quest 2 (I bought the Quest 2 off him), and when he isn't wearing the Vision Pro, I have worn it too; the best experience on it is media consumption. It's really an iPad with a ridiculous screen.
Given my age, I would say the same applies to all headsets I have seen since 1994, when I saw someone using one to play Doom at the Lisbon computer fair.
> Apple knows full well that this is not a mass-market product.
I really feel like too many people are ignoring this. If anyone understands how to play the long game with a new product it is Apple. I mean the Apple TV was a "Hobby" for many years.
Has Apple ever put out a "Pro" version of a product before the "normal" version?
I think the clear sharing of technology between this and other products also helps: from the M2 chip, to iOS, to (I assume) much of the AR tech they have been showing off on the iPhone for the last several years.
I honestly kinda wonder how much of ARKit came directly from work on the Vision Pro.
This version of the Vision Pro was never going to be a massive product; I expect they knew they would get returns (and they hoped to mitigate that with the in-store demos, but that only does so much). But it is laying the groundwork for a long-term investment.
There are many headsets much more expensive than that. They aren't as mainstream but for the hobbyist who is willing to pay more for better, they are definitely out there.
My problem with the Vision Pro is that it doesn't do enough that's new. The iPhone and the MacBook Air let you use a computer in places where you previously couldn't and made that accessible to normal people. The Vision Pro isn't much better than the Quest at bringing the technology to the masses in a user-friendly package.
A good measure of an Apple product is whether you can pitch a version of it to your grandma or to a dad who can't open a PDF file. If something only appeals to tech enthusiasts, it is not a good Apple product (except for the professional-line products intended for serious work by professionals, which the Vision Pro isn't).
One of the things I remember about it was that someone, maybe Jeff Atwood, spent a fortune on a maxed-out model with an SSD.
Even though the SSD was much tinier than the hard drive, and the processor in the machine was slow and underclocked to manage heat, in combination with the SSD it was fast at compiling code and ran smoother than a normal MacBook Pro.
The flipside is I think the hard drive was a 4200 RPM model and performed absolutely abysmally.
But if you didn’t need much computing power, say you’re a writer, it was extremely small and lightweight and easy to carry. It’s not surprising it changed the industry.
I think of the Vision Pro as the 1984 Mac of VR/AR. The original Mac was next to useless and, adjusted for inflation, cost twice what the Vision Pro does. But it changed the world.
(Yes, it was near useless. The original Mac had 128k of RAM, which allowed for only the smallest of applications. It wasn't until the Mac 512k came out that people could do real work on these machines.)
The next rev of Vision products from Apple, or maybe the rev after that, will just be leaps and bounds beyond what anyone else is doing in the space. No new paradigm of computing truly begins until Apple starts it.
Also, much like the Apple Watch, they took their best guess at what it’s good for but don’t really know yet. So they’ve kind of tried to prepare for everything.
It’s going to be interesting to see what it really shines at as more developers make different kinds of apps.
Personally, I'm betting it's not even a dev kit for future VR devices. Though we will almost certainly get an iteration or two of those first, I think their true goal is AR glasses. There's so much unnecessary stuff here if the goal isn't that, mainly the ridiculous outward-facing eye screen, which makes sense if it's there to simulate, as closely as possible, the experience of using AR glasses.
I can't remember if it was here or somewhere else that I saw the point made that the current Vision Pro is the worst one Apple will ever make. All future Vision* products will likely be technical improvements on the current model. So if the Vision Pro is good right now, it can really only be refined and get better.
The original Apple Lisa, which cost $9,995 in 1983, would cost approximately $27,905 in today's dollars, adjusting for inflation over the period up to 2023.
I am a happy and avid user of Meta's recent Ray-Ban smart glasses, as I'm a sunglasses wearer (I think a huge part of the population is too) and use my phone to take pics a lot. Now I can do that reliably through Meta's glasses, and Zuckerberg just showed the latest Meta glasses beta, where you can ask "what mountain am I looking at?" and it audibly tells you.
I'm betting Apple will release similar smart glasses in the next year or two.
> I think the description of the VisionPro as a dev kit is spot on.
A devkit prepared for lock down. It's the old formula: let devs make the platform great, then pull a Sherlock or increase the platform fees. Count me out.
iOS non-game apps aren't exactly innovative. It's Gmail. It's YouTube. It's social-media photo and video scrolls. It's been more than a decade, and the most innovative thing I can find in the top downloads and top grossing, Duolingo, is also kind of a game; if you remove the game part, there's not much left, is there? All the innovations in Google Maps are tied up in backend technologies that aren't specific to the phone at all; indeed, they predate it.
Once they figured out the touchscreen keyboard and accelerated web browsing, sort of everything else fell into place. Then the Retina display was introduced, and software improved the camera. I don't know what role 3rd-party devs played in all of this, but that list of innovations happened years ago.
They don't Sherlock games.
Even then, is Apple going to approve a game with guns on the AVP? Time will tell. Beat Saber is an innovative game, but it leaned on the basic premise of people doing something illegal: uploading unlicensed maps and tracks.
Hacker News commenters don't know much about making games - even when they work for huge game studios! - and they don't know much about VR - even when they work at companies making VR headsets!
Hugo must certainly be aware of the Varjo XR-3, which is actually the most comparable device to the AVP and even more expensive, yet there were developers at Apple on the AVP team who had never heard of it, and many Oculus developers hadn't either.
At the end of the day this is a love letter to halo product positioning, coupled with relentless vendor lock-in applied to helpless consumers. I agree with you that the locked-down nature of the product makes it as DoA for developers as the Apple Watch was. People forget that the first apps for iPhones were delivered via jailbreaks made by hackers who had unlimited access; that, plus Steam, ultimately teed up the limited things you can do in iOS apps today. Not brilliant strategy.
Aren't they already charging the 30% App Store tax for the privilege of running software on the device you bought? I doubt the fees are going to go up from there.
My two cents (as an owner of a Vision Pro): 1) this is a beta product, and I wouldn't recommend it to anyone who is not a developer or in a position to create apps for it; 2) this is going to have as big an impact on humanity as the computer or the internet did, and every engineer or aspiring entrepreneur should pay attention to it.
This device is magical. Yes, it's too heavy; yes, there are not enough apps; yes, sharing it with other people is a hassle. Every one of those issues will be solved within the next two generations.
When the screen is 50% better and it weighs 50% less, 20% of knowledge workers will be tuned in daily. Technology isn't a zero-sum game, but I genuinely believe the impact will be bigger than AI's. AI changes the way you work; Apple Vision will change the way you live, which will change the things you consume, which will change the work that needs to be done to serve you. Empires will rise and fall. Travel to beaches will go up; travel to tier-3 towns and cities with minor attractions could get obliterated. Most TVs will disappear. My grandchildren will be able to experience memories with me long after I'm dead. After this, looking at my smartphone seems so analog. Anyway, I could go on for hours.
Lastly, on the topic of pricing, there is a great book titled Positioning by Al Ries that highlights the rationale behind Apple's pricing strategy. They are a second mover in this space; they needed to come out of the gate as the absolute gold standard of this technology and position it as the best money can buy. They delivered on both the tech and the pricing. Long after the price comes down, they'll still own that position of being the best money can buy. When you own the position, it takes a lot to lose it. It will be almost impossible for Meta to take that from them now. A big mistake on Meta's part.
Having tried the VP out for a week, I categorically disagree that VR is going to have remotely this level of impact on humanity. Having to wear something on your head to compute is fatiguing and isolating. You can’t bring your existing peripherals and audio equipment unless they’re proprietary and wireless. Passthrough video is nauseating and blurry even at the highest refresh rates. The paradigm is also intrinsically inaccessible to the blind and disabled.
The immersiveness for entertainment is neat, but this seems like a relatively minor use case in the grand scheme of things. And with gesture based controls, gaming is pretty much a non-starter.
Do you really envision a future of families hanging out together on the couch, each member with their own VR goggles…? Halfway through my return period, I realized that I much preferred reality.
FWIW, I love my Quest for the occasional gaming romp, but little else.
100% agree. Until this tech has the same form factor as a pair of glasses and no wires it will be stuck in the realms of gaming and porn. Not a small market, perhaps, but hardly a seismic shift in life on Earth.
Did you own prior VR headsets? If so, what is different about this one?
I also have been telling people that VR is magical, and that while both the current hardware and the current software have issues, it will get better, and that this stuff is going to have a big impact on humanity... only, I've been telling people this for just shy of a decade now ;P. I don't see the Apple Vision Pro as either the product that made any of this magical or the product that made it viable; it is just another incrementally improved product in a category that has been on the verge of getting it right for, well, forever.
I agree. It's obvious this technology has a big future; the question is, outside of the current $3500+ niche market, when? The Oculus Rift was created in 2011, id Software was demoing it at E3 2012, and in 2016 consumer versions of the HTC Vive and Oculus Rift launched, as did Microsoft HoloLens and Google Daydream.
As it improves, I don't question that it has a future, and some people like it so much they're working on it and writing apps now. The question is how many years from now before a larger mass of people are using it?
The OP talks about the Internet, but the Internet started sometime between the first ARPANET transmission in 1969 and the 1974 paper on the Internet protocol. Up to 1993 most people were not consciously using the Internet; it was a rather niche thing. With the release of the Netscape web browser in 1994, the inclusion of IP networking in Windows 95, etc., this began to change.
One could say this about AI as well. Up until 2019, Norvig and Russell's CS textbook "Artificial Intelligence: A Modern Approach" had only a ten page sub-chapter on neural networks. The now recognized as groundbreaking 2012 ImageNet competition victory by Krizhevsky, Sutskever and Hinton, was greeted with skepticism and disinterest on this forum ( https://news.ycombinator.com/item?id=4611830 ). The ideas for connectionist AI were there starting with McCulloch and Pitts 1943 paper, so that's a lot of development where things still weren't really happening until around 2012 or even 2019.
Some people can see the potential in things decades in advance, but they can take time to move into the mainstream, and it is difficult to determine how long that can take.
I bought an HTC vive 7+ years ago and have middling opinions on its viability long-term as a major game platform. VR is fun and novel, but once the novelty wears off it can be tedious.
It's worth doing a vision pro demo in an Apple store if you can. I'm not planning to buy one in its current state, but it was immediately obvious that this was entirely new territory. It moved a piece of technology from fiction to reality in my mind. The eye tracking is hard to believe without having experienced it.
I'm not ready to say that this device will be responsible for as large a technology shift as the previous person suggests, but I think it will play a large, perhaps pivotal role on our way to the next thing.
Disclaimer: I know a lot of people compare this to a device made by Meta that, I surmise, was meant to do a lot of similar things; I have not tried it and may be attributing first-mover credit to Apple for things Meta may have done similarly well. My comment is made from the perspective of somebody who has never experienced eye tracking in any capacity before.
Other headsets are like a DC film and the Vision Pro is like a Marvel film before Endgame. The differences (that matter in the GP's context) are qualitative rather than quantitative, and a big distinction is the shift away from VR gaming, which most incumbents and current VR fans won't "get" nearly as fast as the public will (once things get going).
Somewhat ironically, I have no interest in showing people my AVP; I use it for hours every day, but showing it to people feels weird, the same way showing your neighbor your workstation and monitors feels weird. (Part of that, of course, is that I'm not going to give them my laptop to run Mac Virtual Display, which means they won't fully "get" it anyhow.)
In contrast the quest devices are practically begging to be shown off but lie in disuse a majority of the time, especially if you don’t have time for gaming.
> this is going to have as big of an impact on humanity as the computer or the internet
I don't see how anyone can say this with a straight face. VR/AR just isn't there and won't be there for a lot longer than people think. Everyone always acts like we're -so close- but then we just aren't. HoloLens and Oculus are over a decade old, and honestly, when I first demoed both ages ago, they were incredible (especially the HoloLens). If I put on a headset now, sure, it's better, but it's still stuck on that same idea and seems to fundamentally misunderstand how humans interact. Apple's newest product is no different. I think the future of wearable devices looks a lot more like the Meta Ray-Bans with some sort of small HUD than a full computer strapped to your face that you can't wear anywhere (without being socially ostracized or robbed).
One way to look at VR headsets is that they're a better computer display: as large and immersive as you want, without physical limitation. People prefer Retina displays and will pay for multi-monitor setups, so we know people value this.
We also know that people are perfectly willing to wear something on their face to be able to see better, and willing to carry something around in their pockets to be more connected -- glasses and phones.
So I think the only real barrier is technical feasibility: can you make it small enough, light enough, power efficient enough, and, of course, affordable enough?
AVP and Oculus clearly aren't there yet. Relative to smartphones, I think we're at the "Palm III" or maybe "Palm V" level of things. I personally have no idea if a path forward to the "iPhone 3GS/iPhone 4" level even really exists for VR headsets.
But if it does and we get there, I think there's no doubt that these headsets (probably just goggles at that point) will take over the world, like phones have.
Totally agree. Best thing about working a life-long career with computers is that you can get away from them quite easily, leave your phone at home, turn off your monitor.
Every time I see companies intent on increasing my time spent with a computer, on watching their ads, on buying another thing I don't need... I know that I should be doing the exact opposite.
Businesses are not their target market and never have been. They dominated the mobile phone market with the iPhone, and the iPad is still the gold-standard tablet compared to the dozens of shoddy Android versions out there (coming from someone who leans more Android than Apple with their tech). I can see the GP's point regardless of the rollout. Before the VR app stores reach critical mass, Apple will have plenty of time to delight users and capture market share. They have tons of cash and great execution.
50% of our employees choose Macs over PCs. When you wear an Apple Vision Pro and look at your MacBook, it transfers the screen to the Vision Pro and lets you duplicate and extend it. No other PC will have this level of integration.
> When the screen is 50% better and it weighs 50% less, 20% of knowledge workers will be tuned in daily.
I don't believe that for a second. Even 50% lighter and it's still going to be more uncomfortable than not wearing one. All so you can have a blurrier screen?
If/when it gets to the weight/comfort of normal glasses then sure. But that's at least a decade away. Probably more.
If they can't fix the ever-so-slight blurriness for everyone, it will continue to be a niche product, simply because only a small niche will accept such a thing, or only a small niche of people has eyes that can deal with it. BTW, I consider my Meta Quest 3 a terrible product in almost every way, from hardware to software: the blurriness, the weight, the app availability, the PC compatibility, having to buy extra cables, the strap, the passthrough, even the simplest interactions like clicking a button... everything was bad.
As the article mentions, this is an intentional thing they've done to cover up the remaining vestiges of the screen door effect. The better the screens get, the less and less they'll have any reason to do it.
> this is going to have as big of an impact on humanity as the computer or the internet did
I see statements like this about AI too. How can something be bigger than the technology it is built on? The AVP is a computer. AI is a computer accessed through the internet... any computing device that accesses another computing device is further extending the "impact of computers and the internet." I don't get it. Maybe I'm too caught up on the semantics.
If you look at it as purely additive, then sure. But if you look at it like this: the era of the horse-drawn carriage ended with the motor car, and the carriage doesn't get to claim any of the car's influence despite being a precursor, then no.
VR is the most extreme case of a product that a subset of technophiles absolutely love and the rest of the population is decidedly meh on. I think that you are vastly overestimating the number of people who want to spend a large portion of their daily life with large goggles strapped to their head, isolating them from the people and things in their immediate environment.
This isn't to say that VR won't improve and find some useful niches, but the idea that it could have anywhere near the global impact of smart phones seems wildly optimistic to me.
> I think that you are vastly overestimating the number of people who want to spend a large portion of their daily life with large goggles strapped to their head
For some reason this illogical phrasing, which totally excludes the value proposition, has become the standard argument for why VR/AR won't succeed. It makes no sense. You haven't expressed any part of the value proposition of the product, only a negative aspect of it, so why are you assessing its value based on that? I could assess the value of a car as "a metal box you are trapped inside of for hours on end" or a phone as "a fragile object that requires constant charging" and decide these products will just never make it.
Of course people don't want goggles strapped to their face. But we already know they will do it given a value proposition, because humans do exactly that: they wear glasses and sunglasses and hats quite happily, sometimes all day long.
> I think that you are vastly overestimating the number of people who want to spend a large portion of their daily life with large goggles strapped to their head isolating them from the people and things in their immediate environment.
I wish you were right. But, man, look around. People are immersed in their mobile devices. No one gives a shit about their surroundings anymore.
I experienced AR for the first time with the Vision Pro, and I came away with the same impressions; it’s not quite ready for mainstream but “magical” really is apt. I was a big skeptic on AR/VR in general before, but the demo for Vision Pro convinced me. I wrote about it here.[1]
AI does a whole lot more than that. I imagine that the vast majority of content people consume will at some point be created by AI perhaps with some marginal human input. Probably before your VR vision becomes a reality.
> this is going to have as big of an impact on humanity as the computer or the internet did and every engineer or aspiring entrepreneur should pay it attention
I predict it will go the way of 3D TVs. Most people don't want to walk around with half a kilogram of computers inside goggles strapped to their faces.
You have lived in your bubble for too long. Just talk to people around you, including other software engineers or tech enthusiasts, and I'm sure most won't feel about the product the way you do. I'll put it simply: most people DON'T want to put this thing on their face. Not the Oculus Quest. Not the Apple Vision Pro.
I am perhaps missing your point, but if people will be able to experience incredible VR worlds, couldn't people happily spend their time sitting in tiny windowless rooms rather than physically travelling to a beach?
I think there might end up being a greater division between "IRL places you really need to be IRL to enjoy" and "IRL places that are good enough in VR."
A beach seems like the former, and cities and minor attractions seem like the latter.
His point about the significant motion blur / image-quality issues that exist with passthrough is my biggest complaint with the device hardware-wise.
I got the prescription lens inserts that Apple had suggested, and when I first put on the device I thought that either my eye doctor had gotten my prescription wrong, or something was defective with my device.
The blur is distracting, and looking farther away makes it more obvious, as objects in the background move around a lot more when you turn your head than items really close to your eyes.
He also says you can read your screens through passthrough, but I've found that not really to be the case, at least for devices like the iPhone or Apple Watch.
I've had to take my Vision Pro off many times not only to read a phone notification, but also for anything that requires Face-ID (which doesn't work well when the Vision Pro is covering your face, which feels like an Apple ecosystem fail).
I'm still enjoying it, and I bought it knowing it was a V1 product, but it also shows how far we have to go, even with a ton of engineering put into a product.
This actually helps me a little bit. I've also seen people say they can read screens, and that's not my experience. I also have the lens inserts, and I suspect that part of the problem is how they implement the prescription. I'm not knowledgeable enough about lenses to say this with confidence (please correct me!), but I wonder if this is because Apple prioritizes a farther away focal point for the inserts, so you literally can't focus on anything close up.
I've noticed that I can almost read things if I hold my phone a little farther away, but I wouldn't call it usable by any stretch. I've considered getting contacts for the first time just to test all of this, but I'm extremely turned off by the upkeep of them (to say nothing of the idea of touching my eyeball to put them in).
> I wonder if this is because Apple prioritizes a farther away focal point for the inserts, so you literally can't focus on anything close up
VR is interesting here because all manufacturers (that I know of) use a fixed focal distance for the screens. This is why you need inserts in the first place, even though the lenses are right in front of your eyes. For example, on the Valve Index this is set to ~6 ft, so if you can see up to 6 ft perfectly, you do not need inserts.
Moving your phone around doesn't change this number; it's a relationship between the lenses and the screens.
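To put rough numbers on it (standard optics, not anything Apple- or Valve-specific): a virtual image at ~6 ft ≈ 1.8 m asks the eye for about 1/1.8 ≈ 0.55 diopters of focus. A -2.00 D myope has a far point of 1/2.00 = 0.5 m, so a 1.8 m virtual image is beyond what their unaided eyes can focus on, no matter where they hold their phone; hence the prescription inserts.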
Stupid question: Are you wearing it in an environment with good lighting? Passthrough camera performance suffers a lot in low light, and the blur increases with the longer exposure.
I can read screens "fine" while wearing it (fine as in, I can read them, I wouldn't want to do it for an extended amount of time).
Counter-anecdote: I didn't find the blur distracting at all, and I was able to read my iPhone 13 mini perfectly fine. I believe your experience, of course.
> but also for anything that requires Face-ID (which doesn't work well when the Vision Pro is covering your face, which feels like an Apple ecosystem fail).
At some point, don't we need to accept the laws of physics and biology? The Vision Pro covers a significant part of your face, much more than a pair of glasses. Face ID already has the challenge of needing to recognize you in tons of different conditions (lighting, pale/tan skin, wildly different hairstyles, facial hair, etc.) while, for security reasons, almost never letting someone else impersonate you. Is it really possible to keep that level of forgiveness and accuracy if Face ID only gets to consider the bottom half of your face?
I think the complaint is that the Vision Pro can’t authenticate the wearer to the iPhone. Just like using your Apple Watch to unlock a Mac, the Vision Pro will probably eventually be able to set up a trusted relationship between the user and their phone and fix this issue. That’s what I understood from “ecosystem fail”.
(1) Quest is a game console
Quest is clearly a gaming device. It boots straight into the store. It re-opens the store at every opportunity. This means trying to use it for non-gaming means you're constantly reminded you're on a VR game console. It'd be like trying to use a PS5 for work. I kind of get why they did it but I hate it. I'm happy my iPhone does not boot into the store and does not go back to the store every time I go to the home screen.
(2) Quest doesn't care about productivity
This is kind of the same as (1), but... I actually try to use my Rift-S with my desktop for certain activities almost daily. I press the right controller's system button, close the home tab, close the library tab, close the store tab, and open the desktop. Then I interact with a few desktop apps. I copy or move files, and I run certain apps that I've written that work reasonably well with point-and-click controls.
But, I tried accessing my desktop on a Quest via the various Link options (wired and wireless) and it crashed 1 time in 3. This is clearly a niche feature to Meta. Meta expects you're just playing games from their store.
--
You can certainly argue Meta didn't get those things wrong, but for me, they are wrong. The more productivity I can do on a Quest, the more useful it is to me and many others. There are more phones than game consoles. IMO they should make the device general purpose, not sell a "pro" version for productivity. Phone game sales far outstrip console game sales. They don't need to design the Quest around a game console experience.
I'm going to die on the hill that gaze sucks. I really hate focus follows gaze and not having a workable keyboard or buttons makes it hard to actually use this for productivity. It's fine for an alt mode type thing but as the main form of input I kind of hate it.
That said, forcing your hands to be full of controllers also sucks, but at least you can play a game. I don't have a solution, but it needs to get solved for these things to be seamless enough to wear and useful enough to want to. And I would call myself an enthusiast.
I liked the thumb gesture idea they presented in Peripheral. Enough movement to be discernible by a camera, but really only requires a small amount of energy and dexterity.
Though ideally we probably want some sort of “force gloves” that allow us to feel the weight of and manipulate objects in a virtual world. We need way higher resolution on inputs than we have today, though, which currently amounts to an x/y/z and some button presses.
IMO, for AR purposes gaze is a huge improvement over just pinching (à la HoloLens). I agree, though, that for productivity a more tactile device is needed. In practice I end up connecting my keyboard via Bluetooth, or just end up mirroring my MacBook, when I need to do real work.
For work purposes, the privacy factor is a huge benefit -- nobody in the room can peek at your "screen"
^ Agree with this take. If you're slow with tech, then this is amazing! And I think Apple has been great at making interfaces that are natural to use and easy to get into if you're not tech-savvy. But for someone like me who's used to multitasking, or performing operations without looking at what I'm actually operating on, it sucks. I tried using Keynote to create diagrams in Vision Pro and the experience was so infuriating: I had to painfully look at everything and keep my focus on it while repeatedly trying to snap my fingers as the Vision Pro failed to detect it half of the time. I just want controllers.
Can’t you just use a mouse and keyboard as controllers? It’s hard to imagine any hardware peripheral that would improve on them for making a keynote or we’d already be using it on our desktops.
I like my Oculus Rift. But the software is so bad. It is confusing; after a month of not using it, I don't know where each setting is. Sometimes I misconfigure it, and there is no easy way to reset and move everything back into view. The hardware is fine for my needs (playing Alyx), but the software looks like something bought from seven sources and glued together.
It's gone through so much reshuffling and clearly been kicked around internally between managers/PMs.
The earlier versions were much easier to use, and the later ones can become quite a nightmare to set up, navigating the Oculus/Meta/Facebook account silliness; ultimately it all feels a lot jankier than the very early versions of the software on both Rift and Quest.
I think if Zuck believes in this going forward, it would be wise to focus on removing some of this platform-bureaucracy friction; it took me 15-20 minutes to get my Quest up and running and logged in after not using it for 18 months.
Yeah it's wildly, embarrassingly bad - to such a point it feels like Zuck must not be using it. Maybe most of his attention is on the AI stuff and the Ray Bans?
There's a ton of half-baked old ideas in the UI, it's extremely confusing. Even basic stuff like trying to add my dad as a friend is super hard to do.
Literally every person I've helped set it up has also had to do a full manual restore (holding down hardware buttons to reset from a boot menu) because the app fails to connect during initial setup or there's some bug with adding payment.
Someone really needs to go into that team and rip lots of stuff out.
Their touch controllers and basic UI navigation are good though.
Is there something inherently difficult or novel about VR operating software that would make it difficult to design or implement controls? I can understand tracking hands is difficult, but I mean the kinds of problems you're describing.
It seems crazy that after investing so much in the 'hard part', the VR hardware and software itself, they'd drop the ball on what seems mundane - basic design of UI controls.
No I don't think so. They just miss the Apple attitude of UI.
It's like the Windows settings system. I'm using Win11/WSL, and while it works, the Windows settings are a mixture of Win95/Aero/3rd-party plugins (Nvidia)/Win11 and some things that just look like Win3.1.
Googling how to change DNS settings:
Step 1: Open the Control Panel. ...
Step 2: Open Network and Sharing Center. ...
Step 3: Choose the connection. ...
Step 4: Change adapter settings. ...
Step 5: Choose Internet Protocol Version 4 (TCP/IPv4) ...
Step 6: Click on Properties. ...
So I don't think it has anything to do with VR.
It’s a new UI paradigm, not just a new UI toolkit or something like that. I think those are quite difficult, I can only think of like 4 in the computer space: text & terminals, windows mouse and keyboard, menu driven (consoles & cable boxes), and smartphone.
We don’t come up with totally new ways to interact with our computers often. And Facebook has never stood out for their UI brilliance.
Maybe I'm old fashioned but one of the biggest barriers for me adopting VR/AR is there isn't a socially acceptable way to "duck out" of a VR experience you're just not into. I've been given a couple of demos of headsets by friends and more often than not by the end I feel trapped -- you're strapped into a thing that fully occupies your visual field, yet it's obvious and socially awkward when you take it off.
And your eyes are both covered so there isn't a good way to non-verbally communicate waning interest levels...I suppose a solution is to simply care less if I hurt my friend's feelings but I'd also like a way to spend time with friends without feeling trapped.
At least in an f2f or video call meeting that I'm bored of, I can zone out or look at my phone or tap on my laptop, or do anything but stare at the slides. With eye tracking, the headset knows (and presumably everyone else could know as well) when you're tuned out.
The eye tracking thing also kind of weirds me out from a privacy standpoint. It's already bad enough that web pages track how long you engage with different portions of content. Now they know what parts of images I stare at, and can algorithmically feed me content based solely on my gaze alone. Does that prospect not weird anyone else out? Or are most normal users like "plug me into AR TikTok, but with gaze mechanics now!"
I don’t get it. Why not just tell them? What do you do if someone hands you a controller for a tv game equivalent?
> What do you do if someone hands you a controller for a tv game equivalent?
I'm not the GP, but as they said you have all your other body language for communication. In VR-world, you have your eye direction and hands - not even your full eyes, with all the muscles around them that may communicate more than anything else on your body. I suppose you could flip them off. :)
It's an interesting point about how VR avatars, for all their 'presence', are very limited. VR video chat seems much better.
> And your eyes are both covered so there isn't a good way to non-verbally communicate waning interest levels...I suppose a solution is to simply care less if I hurt my friend's feelings but I'd also like a way to spend time with friends without feeling trapped.
I hadn’t considered those antisocial patterns that the technology is foisting on users, but I suppose one can also just directly communicate verbally. That could work in low-context cultures but not so well in high-context cultures.
> The eye tracking thing also kind of weirds me out from a privacy standpoint.
Indeed. While I am less worried about Apple “going after me in a direct targeted nefarious way”, I don’t appreciate more levers for technology to manipulate my behavior, emotions, or interactions.
I vaguely remember there being some stuff from Apple about this when it launched - the apps don't have a lot of access to that raw tracking data.
Had a quick google... the doc here has more details: https://www.apple.com/privacy/docs/Apple_Vision_Pro_Privacy_...
That's a very interesting comment, because I've definitely felt that just playing a game on a console (portable or plugged into a TV) or on PC. Sometimes a game gets so intense that I can't stop, and I have to force myself to get out of the world. I remember getting a massive headache playing Elden Ring and forcing myself to just stop the game.
Also, I've played Catan in VR and felt like I couldn't leave "like that". I had to finish the game and shake hands with the strangers I played with before leaving the game, it just felt like I was really there and it would have been rude to leave the game like that.
I did freak out the first time I joined a cinema room (can't remember the name of the game) as when I looked on my left side I saw people staring at me in the cinema room (which made me yell and it made everyone laugh, indicating to me that even my mic was on!)
While I prefer Apple products because <yada yada>, Meta and Apple are doing the same thing here. The only difference is that Apple has higher current trustworthiness. This is also the reason they can release a $3500 headset.
It’s not bricked and hasn’t deleted my software, I’m curious what exactly you’re referring to with that.
You always needed an Oculus account, and they didn't brick anything. You did have to migrate from an Oculus to a Meta account, but a Facebook account was never required on a Quest 1 (or 3). Is a Meta account really that different from an Oculus account?
The Quest 1 has been deprecated, yes, but not bricked.
They can only just now move towards MR with the Quest 3 and really it'll need another generation to be MR native.
They have a good relationship with developers and focused on what their current hardware is capable of, which is running one VR app. They spent the last 8 years on that use case and I think that was the right choice given the hardware realities at the time.
I get that the thesis is that this version is the devkit etc, but viable consumer product status (read: enough adoption for the device to be profitable) seems very far away
He mentions a few short term use cases for the current hardware.
For example: Productivity on the go (A laptop with the headset for multiple virtual displays) and Live Sports.
> Apple Immersive on Vision Pro is a transformative experience in terms of video quality and its ability to deliver a real sense of presence. Watching a game in high-resolution VR has the potential to be legitimately better than a regular 4K TV broadcast by enabling hardcore fans to feel much closer to the action
It's very obviously better to wait a little longer for a future version.
Apple is enamored with vertical integration which gives them control on a whole other level compared to their competitors; feels like history repeating.
What's different with AVP compared to previous products is that it starts off even better thanks to Apple's own custom chips. There's also the amazing network effects of their ever-growing ecosystem.
Competitors don't have all this, so they will struggle to compete on the high-end. The intention of Apple is clearly indicated by the price of AVP, they want the profits at the top, let the rest fight over the scraps at the bottom with crummy privacy-invasive software and poor integration/interoperability.
It's a way of development seen almost nowhere else.
Or I'm giving them too much credit ... but I don't think so. I think it's evident they seeded hard parts throughout the rest to learn at massive scale.
There are some hard decisions around forcing everyone into the custom material system Apple made so that they can handle the rendering more deeply... but is that really a core OS thing? It seems like it doesn't need a new kernel for that.
The Oculus platform is more like a classic video game console; there are system APIs, but they are designed to be used by single-tasking applications.
And for the user, the Oculus system UI is really an app launcher / task switcher.
It’s not better or worse, just a very different design philosophy.
That is what always bugged me about the pivot to “meta”. They never had to find product market fit to succeed. They were never hungry. They could just throw money until something clicked… but money alone doesn’t make a revolutionary product. You need somebody hungry enough to see the world in a different way and then execute the fuck out of it.
Dunno how this relates to apple though. They have equal amounts of cash to throw at problems until they are “solved”. Perhaps the “operating system” is a solved problem already to some extent and maybe there isn’t anything truly new?
At least the Meta Quest, for example, has a lot of content and VR games. The Vision Pro doesn't seem to have much use apart from its curiosity value, because such a system hasn't been fully built out. It seems like a device that isn't really ready for prime time for a couple of years yet.
> The hardware needed to track eyes and hands in VR has been around for over a decade, and it’s Apple’s unique ability to bring everything together in a magical way that makes this UI superpower the most important achievement of the entire Vision Pro product, without a shadow of a doubt.
So they had all the pieces, but only Apple put it together and realized that you'd need a VR equivalent of point-and-click? If that's actually true, it's sad.
Not only do they have two cameras per eye and all the hardware for wide-angle, out-of-view hand tracking, they had to consider:
Privacy: the user’s gaze is never delivered to your process when your native UI reacts to their gaze. Building this infrastructure to be performant, bug-free, and secure is a lot of work, not to mention making it completely transparent for developers to use.
Design: they reconsidered every single iOS control in the context of gaze and pinch, and invented whole new UI paradigms that work really well with the existing SDK. You can insert 3D models into a SwiftUI scroll view, and scroll them, and it just works (they even fade at the cut-off point) -- see the sketch after this list.
Accessibility: there is a great deal of thought put into alternative navigation methods for users who cannot maintain consistent gaze.
In addition to this they clearly thought about how to maintain “gazeable” targets in the UI. When you drag a window closer or farther it scales up and down, maintaining exactly the same visual size, trying to ensure nothing gets too small or large to gaze at effectively.
There are so many thousands of design and engineering decisions that went into making gaze-and-pinch navigation work so simply, so I can understand how it hasn’t been done this effectively until now.
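A rough sketch of what that looks like from the developer's side (my own illustration, not Apple sample code; "toy_car" is a placeholder asset name). The app only declares that a view is interactive; the gaze highlight is composited by the system outside the app's process:

    import SwiftUI
    import RealityKit

    // The app never sees gaze coordinates. It opts views into interactivity,
    // visionOS renders the gaze-driven hover highlight out of process, and
    // only the pinch arrives, as an ordinary event.
    struct GazeDemo: View {
        var body: some View {
            ScrollView {
                // A 3D model participating in a plain SwiftUI scroll view,
                // as described above.
                Model3D(named: "toy_car")
                    .hoverEffect() // system-rendered highlight; no gaze data in-app

                Button("Open") {
                    // Fires on gaze + pinch, delivered like a plain tap --
                    // the first moment the app learns anything gaze-related.
                }
            }
        }
    }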
It's very difficult to change a mindset or culture in big companies. Existing VR companies were too invested in using a controller. Similarly, back in the early smartphone days, all the big companies thought that smartphones must have a physical keyboard.
If everything is designed for your controller, the eye interface may not work well due to lack of software optimization.
Which means it’s just an expensive battery hogging extra weight you don’t need.
Apple has a stronger combination of hardware design, software implementation skills, and UX expertise, than any company in the world.
This entire time, they could have built a real OS, solidifying their first mover advantage.
https://www.engadget.com/meta-dissolves-ar-vr-os-team-204708...
I'm not sure what you mean precisely. Apple doesn't seem to have done more than windows with persistent positions. This isn't nothing, but it's also not something that has tremendous value for a headset that you only wear 30 or 45 minutes at a time.
And they have little to no management of these floating windows. I'm really not holding my breath for Apple to come up with breakthrough windows management given what they've done for the past decade.
If you don't think in terms of potential and promises, but of actual value to the user right now, I'd understand why Meta hasn't bothered with the gimmick.
Is this a big lead for Apple? Perhaps; the world mapping could be something difficult to reproduce. Or Meta could be at roughly the same point but decided not to go there.
Take a look at this Reddit post for example:
https://www.reddit.com/r/VisionPro/comments/1ba5hbd/the_most...
The user is pointing out that the real fridge behind them is reflected by the surface of the virtual object in front of them. And consider, on top of that, that the fridge is not visible to the headset at that moment. It is captured in the 3D spatial model that was created of the room. None of this is a pre-rendered or rigged or specifically engineered scenario. It's just what the operating system does by default. So one app that is totally unknown to another app can introduce reflections into the objects it displays. This is just so far beyond what can happen on the Quest platform by any means at all. And it can only happen because the 3D spatial modeling is integrated deeply into the native rendering stack - not just layered on the surface of each app.
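To make that concrete, here is a minimal RealityKit sketch (mine, not from the post) of why apps get this for free. Nothing in the code references the room; in mixed immersion the system lights and reflects virtual content using its own reconstruction of the real environment:

    import SwiftUI
    import RealityKit

    // A bare metallic sphere in a RealityView. The fridge can show up in its
    // reflections without this app ever knowing a fridge exists, because the
    // environment lighting comes from the OS's spatial model of the room.
    struct ShinySphere: View {
        var body: some View {
            RealityView { content in
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.15),
                    materials: [SimpleMaterial(color: .white, isMetallic: true)]
                )
                content.add(sphere)
            }
        }
    }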
When designing a concept the core difference is always whether your design respects the user or whether it does not and tries to make them do things, spend more time on the platform, spend more money on the platform, etc.
The problem with the Apple approach is that there are no apps and games, and there probably won't be many, given it's a $3,500 device with few users that Apple exerts its tyrannical grip over (or if there are, they will be ports of Unity/Unreal, PC, or Android VR apps, not using any of the special features that the Apple OS may have).
3D is much, much, much more complicated than 2D, especially if you're trying to interchange between arbitrary applications that may have divergent needs.
Start with this little thought experiment.
What do you mean by 3d object?
Do you mean a set of polygons, like in a traditional triangle mesh? A volume, like voxels? A set of 3d surfaces?
Do you need to model interior details, or only the exterior envelope? Do you need to be able to split or explode it at some arbitrary level of detail? Do we need to encode sharp edges or creases in some way?
etc, etc, etc
This is before you have touched materials, texturing, lighting, any of that.
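To make the mismatch concrete, here's a toy sketch (all type names made up) of three common native representations and why a shared pasteboard can't convert between them losslessly:

    // Three common answers to "what is a 3D object?"
    struct Vertex { var x, y, z: Float }
    struct TriangleMesh { var vertices: [Vertex]; var indices: [UInt32] }
    struct VoxelGrid    { var resolution: Int; var occupancy: [Bool] }
    struct NURBSPatch   { var controlPoints: [[Vertex]] }

    enum SceneObject {
        case mesh(TriangleMesh)      // game engines, sculpting tools
        case voxels(VoxelGrid)       // medical imaging, some editors
        case surfaces([NURBSPatch])  // CAD packages
    }

    // A pasteboard must pick an interchange form. Tessellating a NURBS patch
    // discards exact curvature; voxelizing a mesh discards sharp edges.
    // Neither direction round-trips, so "paste part of a 3D object" forces
    // real design decisions onto the OS.
    func pasteAsMesh(_ object: SceneObject) -> TriangleMesh? {
        switch object {
        case .mesh(let m): return m
        case .voxels:      return nil // would need marching cubes
        case .surfaces:    return nil // would need tessellation
        }
    }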
AR has tremendous productivity applications if the device is small and wearable enough. Imagine being up in your attic running cables and seeing a projection of the floor plan of your house so you can see where the different rooms in your house are. Or driving a car, except all the blind spots disappear and are filled in with vehicle-mounted camera feeds, with unobtrusive overlays for navigation or to highlight potential safety hazards. Imagine assembling some IKEA furniture except instead of puzzling through the instruction book, you have an app that can recognize all the pieces using machine vision and simply show you what to do. Imagine never forgetting a name or a face, because every time you see even a distant acquaintance, your glasses can run facial recognition and make their name pop up by their face in real life. Imagine noticing a weird rash on your arm, but as soon as you look at it, your glasses immediately diagnose it as a potential MRSA infection and pop up a notification allowing you to call an urgent care clinic that’s open right this second.
Or the quest pro for that matter, even though it flopped.
It's not quite green field on the software side, albeit mostly. Clearly they already have experience re-platforming a whole operating system multiple times. The underpinnings of macOS power everything from desktops to smartphones to watches to tablets, all with diverging user interfaces. They had a solid first-party foundation on which to build the interface they want; Facebook is ultimately a third party to Android and is having to solve the same Android hardware-integration problems as everyone else.
I own an AVP and this isn't something that can be done with it, to the best of my knowledge. Please explain how this is possible with the existing OS and apps.
Possible with the current apps? None of them support a standard partial copy of 3D objects, but they do allow copy-pasting full objects between apps, AFAIK. E.g., I can drag a USDZ file from a message into Keynote.
What makes it particularly interesting is that the VisionOS App Store so far seems to have had quite an anemic reception from developers. Barely any novel non-toy apps have been released for it, with 8 months since devs got access last year and 2 months in the open dev ecosystem. It's possible the tsunami is just around the corner, but it has to be said that this seems to be diverging heavily at this point from the launch of the original iPhone App Store. It was always going to be a question, since the user base is minuscule compared to every other headset and iOS devs mostly have negligible experience in developing full-scale VR/AR applications, which is actually a very steep learning curve. So the barriers are high and the incentive relatively small.
If Apple fails to attract devs to its store it will create a huge problem for them that they are pretty unused to having. I wonder how they will approach a situation like that, since their culture is not used to dealing with that as a problem these days.
There's a reason that iPadOS launched a multi-window mode with arbitrary window sizes and the same curved handle bar on the curved corner, several iPadOS iterations back.
Doing that ensured that VisionOS could launch with all iPad apps opted in: most any iPad app that respects iPadOS's "stage manager" mode with multiple windows works beautifully OOTB on VisionOS.
In fact, with any iPad app run on VisionOS, if you "pull" the app close to you at iPad size, you can touch it as if it were an iPad screen, and your fingers and touch work as if touching an iPad.
The only apps that don't work as if native are those doing something special with multi-touch or touch gestures, but most apps "just work". It's pretty wild.
The press keeps comparing Vision Pro to macOS. No, the 2D passthrough mode is a room-sized iPad stage manager: infinite iPads.
For sure. That’s exactly how it’s played out in iOS vs Android. No developer makes anything for the higher priced, small market iOS, right?
Case in point - the whole iPad and "what is a computer" campaign. It is hilarious when half-baked mouse support is celebrated in a tablet. Despite using similar hardware, Apple refuses to let their tablets live up to their true potential.
Despite working on an RTOS for the AVP, there are no signs that the headset stack will be exploitable by professionals the way the Mac used to be. For the foreseeable future it will remain a good software demo built on top of nice displays.
So OS is probably not as important as you think.
So they want to build a new kind of device and a new kind of experience, and they seriously think they can do that by just plugging together ready-made parts built by others? No wonder this is going nowhere.
Granted, an iPad is better than an app launcher, but so far I don’t think the software is really “killer” in any specific way.
Most of the in depth reviews I’ve seen mostly praise the screen resolution and the movie experience.
Said that out loud to a group of techies and they laughed so hard one of them fell out of their seat.
Apple put the iPad on your face. And that's pretty much it.
The few VP users that haven't returned the device don't use any of the "spatial" features like controlling the UI by pointing in space, since it's so inaccurate that it gives Swype a run for its money.
Apple know full well that this is not a mass market product, they have made no attempt to make it even remotely affordable to most people. But they also know that every aspect of the hardware will improve over the next decade, and as it does they will have ironed out many of the problems with how we use AR/VR and will be ready for it, based on the real life experiences of actual owners, rather than years of in-house testing.
The screen will get bigger (I hear FoV is pretty poor at the moment), the CPU/power/thermal performance will improve, battery density goes up, cameras and sensors get better and cheaper and many parts will get smaller/lighter. And in that time they will learn by doing, making it better/cheaper/lighter and working on the software and interaction model.
Hopefully at the same time it will really spur on the rest of the industry and we will see more competition and experimentation.
I can't see myself buying something like this for the next 10 years at least. But something like the VPro that is better, smaller and lighter and doesn't cost the earth could be quite tempting for late adopters like me.
I thought it was pretty good - but nowhere near ideal of course.
But I discovered I could dispense with both of Apple's "Light Seal Cushions" and simply line the light seal with some 1/8-inch adhesive foam. It took a little experimenting to avoid hard pressure points and make it comfortable.
It is now very comfortable with the following benefits:
1. The field of view is noticeably wider. Yay! The immersion improvement feels cognitively significant.
2. I realized that greater peripheral vision downward is more important than upward. Being more aware of down makes us feel safer, and down is also where our hands and keyboards live.
So I arranged the padding to wear the head set slightly lower, allocating all the increased vertical FOV downward.
3. The combination of being 1/4 inch or so closer to the face, and firmer padding, reduced the feeling of weight on the front of my head.
Warning - literally. I get an occasional popup warning that my eyes are too close to the lenses. The danger is that if I were to fall, I could potentially hit my eyes. I stay seated most of the time, but occasionally walk through rooms, so it is worth being careful.
I use my Vision Pro for 10 hours a day on many days, comfortably. I had to switch to the two-strap support to do this. But I have ordered an adapter that allows the original behind-the-head cushion strap to be used with a second cushion strap over the head. I anticipate that working even better, given how much more surface area the weight will be distributed across. (Also, turning a knob is easier for adjustments than messing with velcro.)
Also, got some thicker (in width) lighter foam, to add some more light seal around the edges.
This feels like a real upgrade, a year or so before Apple will release a bump.
Can you comfortably code all day in that? If you don't mind my asking, is it purely novelty or is it genuinely better than coding on a 5K monitor?
Btw it’ll be a few years before they update the Vision Pro
Ten years ago was pre-Oculus Rift. We weren't saying anything about VR headsets.
Five years ago was pre-Valve Index. We didn't have CPUs in a headset, nor a battery. Cameras were only used for tracking. The things we were saying would improve were the "screen door effect" and "tracking", both of which have.
In the past, once Apple started pouring R&D money into a specific product type, the entire industry around it tends to advance very quickly. I’m optimistic about the Vision Pro, and I actually think the n+2 Meta release will be much better off for it.
Having even a few people saying they are already doing that on the Vision Pro seems significant.
I really feel like too many people are ignoring this. If anyone understands how to play the long game with a new product it is Apple. I mean the Apple TV was a "Hobby" for many years.
Has Apple ever put out a "Pro" version of a product before the "normal" version?
I think the clear sharing of technology between this and other products also helps: from using the M2 tech, to iOS, to (I assume) much of the AR tech they have been showing off for the last several years on iPhone.
I honestly kinda wonder how much of ARKit was directly made from work on the Vision Pro?
This version of Vision Pro was never going to be a massive product, I expect they knew they would get returns (and they hoped to mitigate that with the in store demos but that only does so much). But it is setting the ground work for a long term investment.
Yes they introduced the MacBook Pro before the MacBook.
HomePod is another one.
The Lisa before the (original) Mac
Now if apple puts out an Apple Vision SE or non-pro or whatever at $1,999 it will be seen as an absolute steal.
And yet -- as Steve demonstrated -- it fit in an envelope.
THIS is Apple launching a new product line (and trust me I'm not a fan boi).
And shortly after (2-3 years?) the MBAs were powerful, cheap, and barely spun up their fans.
Mind you the MBA was maybe Steve's last obsession. What is Tim's thinking these days?
Still it all seems very Apple like...
A good measure of an Apple product is whether you can pitch a version of it to your grandma or your dad who can’t open a PDF file. If something only appeals to tech enthusiasts, it is not a good Apple product (except for the professional-line products intended for serious work by professionals, which the Vision Pro isn’t).
Even though the SSD was much tinier than the hard drive and the processor in the machine was slow and under clocked to be able to manage heat, in combination with the SSD it was fast compiling code and ran smoother than a normal MacBook Pro.
The flipside is I think the hard drive was a 4200 RPM model and performed absolutely abysmally.
But if you didn’t need much computing power, say you’re a writer, it was extremely small and lightweight and easy to carry. It’s not surprising it changed the industry.
(Yes, it was near useless. The original Mac had 128k of RAM, which allowed for only the smallest of applications. It wasn't until the Mac 512k came out that people could do real work on these machines.)
The next rev of Vision products from Apple, or maybe the rev after that, will just be leaps and bounds beyond what anyone else is doing in the space. No new paradigm of computing truly begins until Apple starts it.
It’s going to be interesting to see what it really shines at as more developers make different kinds of apps.
I can't remember if it was here or somewhere else I saw the point made that the current Vision Pro is the worst one Apple will ever make. All future Vision* products will likely be technically improved from the current model. So if the Vision Pro is good right now it really can only be refined and get better.
The original Apple Lisa, which cost $9,995 in 1983, would cost approximately $27,905 in today's dollars, adjusting for inflation over the period up to 2023.
So the VisionPro seems downright cheap.
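For what it's worth, a back-of-envelope check (assuming CPI-U annual averages of roughly 99.6 for 1983 and 304.7 for 2023 -- my figures, not the commenter's) lands in the same ballpark; the exact number depends on the month and index used:

    // Rough inflation adjustment via a CPI ratio.
    let cpi1983 = 99.6          // assumed CPI-U annual average
    let cpi2023 = 304.7         // assumed CPI-U annual average
    let lisaPrice1983 = 9_995.0

    let adjusted = lisaPrice1983 * (cpi2023 / cpi1983)
    print(Int(adjusted.rounded()))  // ~30577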
I'm betting Apple will release smart glasses similar to Meta's in the next year or two.
A devkit prepared for lock down. It's the old formula: let devs make the platform great, then pull a Sherlock or increase the platform fees. Count me out.
iOS non-game apps aren't exactly innovative. It's Gmail. It's YouTube. It's social media photo and video scrolls. It's been more than a decade, and the most innovative thing I can find on top downloads and top grossing, Duolingo, is also kind of a game, like if you remove the game part of it it's kind of not much, is it? All the innovations in Google Maps are kind of tied up in backend technologies that aren't specific to the phone at all, indeed predated it.
Once they figured out touchscreen keyboard and accelerated web browsing sort of everything else fell into place. Then the retina display was introduced, and software improved the camera. I don't know what roles 3rd party devs played in all of this, but those list of innovations happened years ago.
They don't Sherlock games.
Even then, is Apple going to approve a game with guns on the AVP? Time will tell. Beat Saber is an innovative game, but it leaned on the basic premise of people doing something illegal: uploading unlicensed maps and tracks.
Hacker News commenters don't know much about making games - even when they work for huge game studios! - and they don't know much about VR - even when they work at companies making VR headsets!
Hugo must certainly be aware of the Varjo XR3, which is actually the most comparable device to the AVP and even more expensive, but there were developers at Apple on the AVP team who never heard of it, and many more Oculus developers.
At the end of the day this is a love letter to halo product positioning coupled with relentless vendor lock-in applied to helpless consumers. I agree with you that the locked down nature of the product makes it as DoA for developers as the Apple Watch was. People forget that the first apps for iPhones were delivered via jailbreaks made by hackers, who had unlimited access, and that plus Steam ultimately teed up what limited things you can do in iOS apps today, not brilliant strategy.
This device is magical. Yes it's too heavy, yes there are not enough apps, yes sharing it with other people is a hassle. Every one of those issues will be solved in the next 2 generations.
When the screen is 50% better and it weighs 50% less, 20% of knowledge workers will be tuned in daily. Technology isnt a zero sum game, but I genuinely believe the impact will be bigger than AI. AI changes the way you work, Apple Vision will change the way you live, which will change the things you consume, which will impact the work that needs to be done to serve you. Empires will rise and fall. Travel to beaches will go up, travel to tier 3 towns and cities with minor attractions could get obliterated. Most TVs will disappear. My grand children will be able to experience memories with me long after I'm dead. After this, looking at my smartphone seems so analog. Anyway, I could go on for hours
Lastly, on the topic of pricing, there is a great book titled Positioning by Al Ries that highlights the rationale behind Apple's pricing strategy. They are a second mover in this space; they needed to come out of the gate as the absolute gold standard of this technology and position it as the best money can buy. They delivered on both the tech and the pricing. Long after the price comes down, they'll still own that position of being the best money can buy. When you own the position, it takes a lot to lose it. It will be almost impossible for Meta to take that from them now. Big mistake on Meta's part.
The immersiveness for entertainment is neat, but this seems like a relatively minor use case in the grand scheme of things. And with gesture based controls, gaming is pretty much a non-starter.
Do you really envision a future of families hanging out together on the couch, each member with their own VR goggles…? Halfway through my return period, I realized that I much preferred reality.
FWIW, I love my Quest for the occasional gaming romp, but little else.
This feels like a comment from Blackberry or MS making fun of the first iPhone.
I also have been telling people that VR is magical and that while both current hardware and the current software has issues it will get better and that this stuff is going to have a big impact on humanity... only, I've been telling people this for just shy of a decade now ;P. I don't see the Apple Vision Pro as either the product that made any of this magical nor is it the product that made any of this viable... it is just yet another incrementally-improved product in a category that has been on the verge of getting it right for, well, forever.
As it improves, I don't question that it has a future, and some people like it so much they're working on it and writing apps now. The question is how many years from now before a larger mass of people are using it?
The OP talks about the Internet, but the Internet started some time between the first ARPAnet transmission in 1969 and the 1974 paper on the Internet protocol. Out to 1993 most people were not consciously using the Internet; it was a rather niche thing. With the release of the Netscape web browser in 1994, the inclusion of IP networking capability in Windows 95, etc., this began to change.
One could say this about AI as well. Up until 2019, Norvig and Russell's CS textbook "Artificial Intelligence: A Modern Approach" had only a ten page sub-chapter on neural networks. The now recognized as groundbreaking 2012 ImageNet competition victory by Krizhevsky, Sutskever and Hinton, was greeted with skepticism and disinterest on this forum ( https://news.ycombinator.com/item?id=4611830 ). The ideas for connectionist AI were there starting with McCulloch and Pitts 1943 paper, so that's a lot of development where things still weren't really happening until around 2012 or even 2019.
Some people can see the potential in things decades in advance, but they can take time to move into the mainstream, and it is difficult to determine how long that can take.
It's worth doing a vision pro demo in an Apple store if you can. I'm not planning to buy one in its current state, but it was immediately obvious that this was entirely new territory. It moved a piece of technology from fiction to reality in my mind. The eye tracking is hard to believe without having experienced it.
I'm not ready to say that this device will be responsible for such a large technology shift as the previous person, but I think it will play a large, perhaps pivotal, role on our way to the next thing.
Disclaimer: I know a lot of people compare this to a device that I can surmise was meant to do a lot of similar things made by Meta; I have not tried this and may be attributing first-mover credit to Apple for things that Meta may have done similarly well. My comment is made from the perspective of somebody who has never experienced eye tracking in any capacity before.
Somewhat ironically, I have no interest in showing people my AVP, I use it for hours everyday but showing it to people feels weird in the same way showing your neighbor your workstation and monitors feels weird. (Part of that is, of course, that I’m not going to give them my laptop to run Mac virtual display, which means they won’t fully “get” it anyhow.)
In contrast the quest devices are practically begging to be shown off but lie in disuse a majority of the time, especially if you don’t have time for gaming.
I don't see how anyone can say this with a straight face. VR/AR just isn't there and won't be there for a lot longer than people think. Everyone always acts like we're -so close- but then we just aren't. Hololens and Oculus are over a decade old, and honestly when I first demo'd both ages ago, they were incredible (especially Hololens.) If I put on a headset now, sure it's better but it's still stuck on that same idea and seems to fundamentally misunderstand how humans interact. Apple's newest product is no different. I think the future of wearable devices looks a lot more like the Meta Ray-bans with some sort of small HUD than a full computer strapped to your face that you can't wear anywhere (without being socially ostracized or robbed).
We also know that people are perfectly willing to wear something on their face to be able to see better, and willing to carry something around in their pockets to be more connected -- glasses and phones.
So I think the only real barrier is technical feasibility: can you make it small enough, light enough, power efficient enough, and, of course, affordable enough?
AVP and Oculus clearly aren't there yet. Relative to smartphones, I think we're at the "Palm III" or maybe "Palm V" level of things. I personally have no idea if a path forward to the "iPhone 3GS/iPhone 4" level even really exists for VR headsets.
But if it does and we get there, I think there's no doubt that these headsets (probably just goggles at that point) will take over the world, like phones have.
Every time I see an intent by companies to increase my time spent with a computer, to watch their ads, to buy another thing I don't need... I know that I should be doing the exact opposite.
I don't believe that for a second. Even 50% lighter and it's still going to be more uncomfortable than not wearing one. All so you can have a blurrier screen?
If/when it gets to the weight/comfort of normal glasses then sure. But that's at least a decade away. Probably more.
I'm not so sure about that.
Even after 36 years of wearing glasses (my current ones weigh barely 20 grams), they still bother me, and I fiddle with them constantly.
As the article mentions, this is an intentional thing they've done to cover up the remaining vestiges of the screen door effect. The better the screens get, the less and less they'll have any reason to do it.
I see statements like this about AI too. How can something be bigger than the technology it is built on? The AVP is a computer. AI is a computer accessed through the internet... any computing device that accesses another computing device is further extending the "impact of computers and the internet." I don't get it. Maybe I'm too caught up on the semantics.
This isn't to say that VR won't improve and find some useful niches, but the idea that it could have anywhere near the global impact of smart phones seems wildly optimistic to me.
For some reason this illogical phrasing that totally excludes the value proposition has become the standard phrasing for arguing why VR/AR won't succeed. It makes no sense. You haven't expressed any part of the value proposition of the product, only a negative aspect of it, so why are you assessing its value based on that? I could assess the value of a car as "a metal box you are trapped inside of for hours on end" or a phone as "a fragile object that requires constant charging" and decide these products will just never make it.
Of course people don't want goggles strapped to their face. But we already know they will do it, given a value proposition because humans do exactly that : they wear glasses and sunglasses and hats quite happily, sometimes all day long.
I wish you were right. But, man, look around. People are immersed in their mobile devices. No one gives a shit about their surroundings anymore.
AI does a whole lot more than that. I imagine that the vast majority of content people consume will at some point be created by AI perhaps with some marginal human input. Probably before your VR vision becomes a reality.
I predict it will go the way of 3D TVs. Most people don't want to walk around with half a kilogram of computers inside goggles strapped to their faces.
I am perhaps missing your point, but if people will be able to experience incredible VR worlds, couldn't people happily spend their time sitting in tiny windowless rooms rather than physically travelling to a beach?
A beach seems like the former, and cities and minor attractions seem like the latter.