The display may be 3,386 PPI, but wow my laptop screen has never looked sharper after taking the AVP off my head. Screen sharing from my Mac looks distractingly bad, which is a huge bummer. (AVP apps, by contrast, look fantastic.) No matter how I scale or where I place the space, it just never looks very good.
I'm really hoping this improves because it's nice to turn my 14" MacBook Pro into a theater screen of sorts alongside native apps. For the moment, though, AVP doesn't seem very useful apart from watching movies (which is amazing).
It's not surprising. The AVP has a PPD of 34. A 14" MBP screen with a resolution of 3024x1964 already has a PPD of 66 at a close 14" viewing distance, and it gets higher as you move further away [1]. "Retina" is considered 60+ PPD at 20/20 vision. Of course, a real display has no resampling or micro-OLED smearing to overcome either.
To achieve the same effective pixel count as a MBP, a virtual screen at the AVP's PPD would have to be resized to 90° horizontal FOV, which is just too big to be comfortable, equivalent to sitting 6" from a 14" MBP screen.
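Back-of-the-envelope, the math works out (a quick Python sketch; the ~11.9" panel width and treating PPD as uniform across the FOV are my assumptions):

    import math

    def avg_ppd(h_pixels, width_in, distance_in):
        # average pixels per degree across the horizontal field of view
        fov_deg = math.degrees(2 * math.atan(width_in / (2 * distance_in)))
        return h_pixels / fov_deg

    print(round(avg_ppd(3024, 11.9, 14)))   # ~66 PPD at 14"
    print(round(avg_ppd(3024, 11.9, 24)))   # higher as you move back

    # FOV a 34-PPD headset needs to show all 3024 columns, and the
    # equivalent laptop viewing distance subtending the same angle:
    fov = 3024 / 34                                          # ~89 degrees
    print(round((11.9 / 2) / math.tan(math.radians(fov / 2)), 1))  # ~6"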
The technology just isn't there yet. We need at least another doubling of PPD. See you in 10 years.
[1] https://qasimk.io/screen-ppd/
Keep it up, good man.
That's the great irony of Apple trying to make AR work as a monitor replacement: their existing users will be some of the hardest to please in terms of resolution, since they've been spoiled by Retina displays as standard on every Mac for the last decade.
I think we need to withhold analysis based on the 34 PPD number. In the article you can see that the rendering projection is quite distorted, with the apparent effect of using more pixels per degree for the center part. They don't account for this when calculating the 34 number. Of course a "fisheye" lens would also be "distorted" and allocate an equal number of pixels to each angle, so it's hard to tell by just eyeballing things. I wouldn't be surprised if the actual PPD number in the center, where it matters, is higher.
I don't know why it took a week, but I finally have you to confirm my suspicions, after the nebulous "it's a bit out of focus, I don't know" crap from every review I came across.
Agreed. I find my AVP actually quite bad for using my Mac. I use an MBP instead of an MBA because 120Hz makes a big difference. The MBP screen within AVP is probably something like 30Hz (likely because it's using AirPlay, which doesn't have enough bandwidth). And I can't change the resolution either! It's kind of like working on a TV — I don't see why it's any better than the 14" MBP screen.
Overall, the AVP is a disappointment for me. Most of the new UX patterns I find far worse than keyboard shortcuts. (For example, a window manager is _much_ easier to use than having to pinch to drag windows around. Likewise, Vimium is much easier to use than looking at elements and pinching at them.) I don't consume much content (e.g. TV), but what content I do consume feels lower resolution to me. The demos they have (e.g. the Alicia Keys video) feel nothing like real life to me. As a parallel to what you said — real life has never looked better (after taking off the AVP), haha.
You can change the resolution, though? If you go to Displays on your Mac, you can change the resolution of the AVP mirroring. I find it works much better at "1080p" than the default, since it makes small text much more readable.
You can combine paradigms for the best of both worlds (e.g. physical keyboard + your apps). I imagine that IDE experiences will only improve as time goes on.
> Screen sharing from my Mac looks distractingly bad, which is a huge bummer. (AVP apps, by contrast, look fantastic.) No matter how I scale or where I place the space, it just never looks very good.
I think a big part of the difference is that your feed is a rotated and skewed raster graphic. That might work fine for photographs and movies, but UIs and text will quickly suffer from washed-out edges when you apply transformations to otherwise "perfect" pixels.
The resolution of the pre-transformed image would have to be vastly higher to counteract the fuzziness.
An even better solution would be if the UI could be rendered directly to the AVP rather than being a screen capture.
There are so many layers of compression, scaling, and transformation going on that it's never going to look great.
macOS renders to a virtual 5120x2880 display, which is downscaled to 4K, which is then streamed to the Vision Pro at whatever compression and bitrate, where it's finally scaled to whatever size you display it at.
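You can see the effect of just the last hop with a toy numpy sketch: one non-integer bilinear rescale (nothing AVP-specific, pure resampling) already smears a 1-px checkerboard, which is roughly what small text strokes are:

    import numpy as np

    def bilinear_resample(img, scale):
        h, w = img.shape
        ys = np.linspace(0, h - 1, int(h * scale))
        xs = np.linspace(0, w - 1, int(w * scale))
        y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
        x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
        wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
        top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
        bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
        return top * (1 - wy) + bot * wy

    checker = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
    soft = bilinear_resample(checker, 0.8)   # one non-integer rescale
    print(checker.std(), soft.std())         # contrast falls well below 0.5

Every extra scale/rotate/reproject in the chain compounds this.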
I for one don't notice any quality/resolution difference between native apps and a mirrored Mac screen.
I agree it feels "worse" than a physical screen - it is certainly less crisp. But I ran an experiment, and created a doc with font sizes from 1-15 pt, then viewed it on my physical 24" 4k monitor, and on a mirrored Mac display in the AVP set to the same (virtual) dimensions and distance away.
Interestingly, text became illegible (without leaning closer in either case) for me at around the 6pt mark for both of them. Objectively speaking, the size of font I can read is pretty equivalent, although I find myself preferring fonts about 2pts larger on the AVP than on my physical monitor.
I suspect this is because the physical display benefits from precise pixel alignment and subpixel rendering, which the AVP can't. But the "average density" feels pretty equivalent... it somehow feels more like an issue of optical sharpness than of "pixel size" (even if I know the two are related).
For comparison purposes, can you share the size of your monitor and how far away you sit from it? Trying to decide if it's worth getting one of these (which would be a bit of a pain since I'm in Canada).
I wonder how such different opinions can exist about things that should be pretty much the same for everyone.
The virtual display doesn't look as good as the laptop screen, but it looks as good or better than the 1440p monitor I normally use at my desk.
Of all the major issues with the AVP, the fact that they not only avoided wired connections to anything but made them impossible (even though, laughably, they made the battery pack external) is really the most stupid thing ever.
For driving the display from more powerful hardware, wireless will never overcome the bandwidth problem (heck, it's already hard enough to do 5K at high frame rates with a wire), no matter how good the compression is.
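The raw numbers back that up (assumed figures: a 5K 60 Hz stream at 30-bit color, nothing AVP-specific):

    w, h, hz, bpp = 5120, 2880, 60, 30
    print(w * h * hz * bpp / 1e9)   # ~26.5 Gbit/s uncompressed

Thunderbolt 3/4 carries 40 Gbit/s; real-world Wi-Fi throughput is an order of magnitude below that, hence the aggressive compression.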
It makes no sense that they built and advertise this "feature" when it could have been so much better with a Thunderbolt/USB-C input on the battery pack...
It doesn't bode well for the future of the device, because it seems it will be largely locked down and limited like the iPad, but with even less enticing use cases...
Given that the screen inside the AVP is basically 4K, your screen would then have to fill your entire vision to have the same resolution. So it's almost like having your nose touching the screen.
Matching the 60° FOV of a 27" monitor at close distance, it's equivalent to 1080p or less. It gets even worse at FOVs equivalent to more reasonable distances.
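Quick sanity check on that claim (34 being the commonly cited average PPD figure):

    ppd, fov_deg = 34, 60
    print(ppd * fov_deg)   # 2040 horizontal pixels: roughly 1080p territory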
Whatever the strengths and weaknesses of the Apple Vision Pro as a product, this is a marvel of engineering. The AVP seems to include the most advanced output of several industries (semiconductors, displays, materials, assembly, etc.).
Also textiles. The solo knit band is a technical marvel. A single 3D knit with embedded mechanical parts and hidden cables to adjust tension. It’s not my wheelhouse but I’d love to read a textiles expert’s take on it.
It's a shame the solo knit band seems to be form-over-function, it looks cool and a ton of work clearly went into developing it, but nearly everyone seems to find the much simpler dual loop band to be more practical.
e: I stand corrected, the reviewers I've watched mostly preferred the dual loop but there's plenty of solo knit fans here. I guess that YMMV factor is why they include both.
I agree. As much as tech communities tend to dump on achievements, we really should applaud this work. I can't wait to see breakdowns of versions 2, 3, 4. As they master their own creation and manufacturing process, it will be neat to see how much simpler it becomes.
I do wonder about the actual material constraints. My thought is, will it even be possible to make something so slight that it becomes an everyday piece of tech like the iPhone, or is it destined for entertainment and casual work?
Batteries are heavy, the optical aspect is bulky, heat dissipation is challenging. I haven't seen any tech on the near horizon that can answer those challenges. I wonder what Apple has in their R&D plans.
Similar, but all the components in it are a notch better — higher resolution, better pass-through video, more precise hand tracking, and much faster CPU.
However, my impression from iFixit's teardown is that there's a ton of tiny boards, screws, and ribbon cables. I bet in v2 they'll figure out how to package it even more densely.
Early VR of this wave had some remarkable displays, not in the sense of "wow, they are so good" but rather "wow, HMD displays were pure trash before". The Quest 1, while a very silly device, was a pretty good inside-out HMD. The Q2 was a lot better.
All of those devices still suffer from the same issue: they can only use whatever Qualcomm can make for them. The XR2 Gen 2 is slower than the flagship Snapdragon of the same generation, because of how "inefficient" Qualcomm SoCs are compared to what Apple makes for itself. (Not that it's a direct comparison: the M2 is meant for desktops, while the XR2 Gen 2 is based on a phone SoC, so they were created with different power and thermal budgets in mind.)
I find it interesting how Apple went about the "number crunching" part of the AVP by making a separate coprocessor just for that. Yeah, the XR2 Gen 2 has similar capabilities on the SoC, but I want to see benchmarks.
That screen is simply incredible as a piece of technology.
I am not sure that I agree. Engineering is a world of tradeoffs. Juicero said "we are going to overbuild every component even if it means the company goes out of business". Welp.
Most impressive to me is when every unnecessary component has been removed, and every remaining component is as simple as possible. (The example that sticks in my mind is the humble $5 drip coffee pot. No moving parts except the switch to turn it on!)
> Same as juicero, many people bash the product but the engineering is world class.
Absolutely not. Juicero was absolutely terrible engineering. Any mechanical engineering intern can whip up a big chonking hunk of a juicer or any other kind of product in SolidWorks.
Filleting every edge does not mean "engineering".
The vast majority of the work and difficulty in engineering is NOT getting something to work, but doing so while also working within constraints. There were no constraints in the Juicero. It was impressively manufactured. But terribly engineered. Which is why the entire thing went down despite an absurd level of fundraising.
Even if the engineering was "world class", it was completely pointless seeing as people could get most of the juice out of the packs by just squeezing them with their bare hands.
Though I get the impression they put most of their real engineering effort into their obnoxious juice-DRM scam, and everything else was just overbuilt to make it feel premium.
Disappointing that more analysis isn't put into the lens effect on PPD.
SimulaVR (a competitor) has an extremely good blog post about this [1], where they argue that the PPD in the central 30 degrees is far more important than the average PPD over the whole headset (because you very rarely turn your eyes more than 15 degrees off center, and your eyes have the highest resolution right in front of them), justifying the same sort of lenses that Apple appears to be using. They claim that this allows them to reallocate pixels for ~45% higher "foveal PPD" than would otherwise be expected.
[1] https://simulavr.com/blog/ppd-optics/
> “Friend of iFixit Karl Guttag has a blog post explaining why using the Vision Pro as a replacement for a good monitor is ‘ridiculous.’
> …
> This, says Karl, makes for a virtual Mac display with low enough PPD to see individual pixels—not quite the standard desktop display”
So rather than just trying it for themselves and realizing Karl is wrong, they quote this uncritically?
For what it’s worth, there’s nothing wrong with Karl’s math. He’s just not taking into account the fact that the Vision Pro isn’t mounted in a fixed position and that your brain integrates information over time.
In practice it certainly is as clear as my 4K display when I use it for mirroring.
It’s definitely not as good as a 5K studio display, but then Apple never makes that claim.
Karl Guttag has a history of getting a lot of excruciatingly difficult details right, and missing the literal big picture right in front of him. I think he makes great technical contributions to the field (at least from my perspective as a layman), but every one of his analyses that I have read ends with something to the effect of "and this is why it is physically impossible for AR/VR to ever work", which is far beyond skepticism. It makes it hard to take the rest of his analysis seriously because it shows an irrational bias against the technology.
Yeah. It’s weird. Even the analysis of passthrough is so off. If I look out of my window with AVP on, it’s obvious that I’m looking at a comparatively low res screen, but if I look at digital content, passthrough seems perfect because it’s not in my fovea, and given that the point of the device is to interact with digital content, passthrough is definitely at the level of being good enough.
One of my friends who got a Vision Pro is disappointed in its use as a monitor, but only because he has multiple large monitors on his Mac, yet you can only set up one 4k display inside the Vision Pro, so it ends up being effectively much smaller and lower resolution for him.
Slightly different problem, but related. I'm sure it'll get there over time.
This was going to be my complaint with the headset as well. I use 2x 49" super ultrawide displays, as well as 2x 27" 4k displays. Lots of real estate.
But then I started to think harder about what I was consuming display space with, and started to peel those things out into their Vision or iPad app equivalents. I'll not get into the obvious trade-offs of using iPad apps, but it largely suits the need.
Once I had Slack, Outlook, a Terminal, and TablePlus floating separately from the Mac display, I started to see a path toward potentially not caring if I even had the Mac display.
https://www.macrumors.com/2024/02/06/vision-pro-virtual-disp...
It does that as well as promised, but nothing more.
That big span of numbers should be fairly revealing of its display capabilities, with all the lines and columns.
How is the fringing and aliasing as you look at different angles?
Is a large PDF filled with math and tables comfortable to spend a few hours with, taking notes?
Eh... it's extremely good and more than I expected, but it's not quite 4k quality IMO. Not because I can see the pixels (I can't), but just because of very slight inconsistency across the field of vision, and motion blur when I move my head.
Saying "4K" and "5K" while talking about PPD, without mentioning display size, is sort of useless. A 22" 4K display is going to look different than an 85" 4K display.
Karl Guttag: "As per last time, since I don’t have an AVP."
For a device like this, where so much more than raw hardware impacts the user experience, anyone who doesn't own one should have their opinion immediately discounted.
https://kguttag.com/
> The Vision Pro has another small problem for spectacles wearers. Contrary to some reports, Apple says that corrective lenses are available for most conditions, including astigmatism (which we weren’t sure about in part one), and they also offer bifocals, and progressives. But if you have a prism value included in your prescription, you’re out of luck. Prism correction is used to correct for diplopia, or double vision. The easiest way to see if your vision prescription is supported is to use ZEISS’s online tool.
As someone who has prism in their glasses, that sucks. I do have custom lenses with prism for my Quest 2. Not sure why Apple can't offer it as well, especially since the lenses come from ZEISS.
I think it's because of eye-tracking issues. If you need a prism, then you probably have other issues with how your eyes move around, and they haven't developed for that yet. They are thinking about it though, as shown by the accessibility options. I think they will figure it out eventually.
I suspect I might need prism lenses, but there's already an eye tracking option that tells it to only use my left eye. Used that option during the in-store demo, and it worked pretty well. If the image in the right eye were adjusted to match my condition then that would've been even better, but my brain already mostly ignores the image from the right eye so it didn't matter that it wasn't.
One thing the article doesn't get quite right: you can't compare the PPD of a virtual display directly to an IRL screen. The virtual pixels are not mapped 1:1 to device pixels. Your perspective is constantly shifting slightly, so the device pixels each virtual pixel gets sampled into are constantly changing. This gives your eyes and brain a lot more data to sample from, and means the perceived resolution can be higher than the actual resolution.
I've noticed this as well. I kind of think of it like when you look through a chain-link fence, there's a lot of visual data that gets blocked.
But if you're in a car driving past, the chain link fence gets blurred out and you can see everything again.
Something similar happens with the constant resampling happening in a VR headset. The hardware pixels are constantly shifting over the image pixels, as your head constantly moves just from breathing and whatnot. So not only does the signal-over-time counteract any blurriness from resampling in a single frame, but I find myself wondering if it enhances the resolution slightly.
(After all, you can buy 1080p projectors that "jiggle" the image by half a pixel diagonally, several times a frame, to effectively double the resolution and claim it's halfway to 4K. And what is the constant movement of pixel resampling if not a similar jiggling?)
I'm curious: is there an image processing term for this effect? The effective signal resolution from a constantly shifting pixel grid over time, and how to calculate it mathematically.
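The terms I've seen are "temporal supersampling" or "multi-frame super-resolution" (astronomy calls the reconstruction variant "drizzle"). Here's a toy 1-D numpy sketch of the alias-suppression half of it, with made-up numbers:

    import numpy as np

    rng = np.random.default_rng(0)
    # "scene": fine stripes (~10 px), finer than the coarse grid below
    truth = (np.sin(np.linspace(0, 120 * np.pi, 1200)) >= 0).astype(float)

    def frame(offset_px):
        # point-sample the scene every 12 px, shifted by a small offset
        idx = np.arange(96) * 12 + int(offset_px)
        return truth[idx]

    static = frame(0)   # display and eye bolted in place
    jittered = np.mean([frame(rng.integers(0, 20)) for _ in range(300)], axis=0)

    # The fixed grid aliases the stripes into a false coarse pattern; the
    # jittered average converges on the true 50% gray instead. It can't
    # conjure the stripes back, but it stops showing you a pattern that
    # isn't there.
    print(static.std().round(2))     # ~0.5: strong false pattern
    print(jittered.std().round(2))   # ~0.03: alias averaged away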
This is an interesting perspective. Genuine question: would it not be the other way around, seeming lower than the actual resolution when slightly moving? Why?
What matters is illuminated pixel aperture size, in the end. The smaller the pixel aperture, the less smearing you get from overlapping.
While it still can't fully resolve shapes smaller than a pixel, it definitely helps with identifying forms.
Could this be used to simulate a higher pixel density? Maybe doing something crazy like vibrating the display in some kind of circular motion, or something silly like that?
The issue is that the original Mac pixels get 3D-projected, and thus interpolated, onto the VP panel's pixel grid. This invariably loses some sharpness and detail. Even just 2D-rotating a Mac screenshot by some non-orthogonal angle will make it blurrier.
I am not convinced. Your eyes are literally magnifying the pixels compared to viewing a modern 2K display at a distance. You may have dense PPD in the AVP, but being that close to the pixels doesn't do them any favours.
I mean, if you can see a pixel you can see a pixel, there is no getting around that. It's like once you notice a blemish you can't not notice it anymore. From what I hear from users is you can absolutely see the pixels.
I think the key detail here is that each eye is getting its own set of pixels for looking at the same virtual content. This can lead to more detail, because the pixels aren't 100% redundant (they're both looking at the content from a slightly different angle.)
If I'm looking through the AVP at a "square inch" of my virtual Mac display, it may be that each eye is getting say, a 100x100 grid of pixels in this angular area. But since each eyepiece is giving me a slightly different projection of that same inch of space, the pixels themselves are going to be of subtly different values, essentially contributing more information to what I'm looking at. This is a lot different from a "real" display, when both of my eyes are staring at the same physical pixels. I think the idea is that my brain will be combining this information into a perceptual image in a way that will appear slightly more detailed than the equivalent-sized pixels in meatspace.
> It's like once you notice a blemish you can't not notice it anymore
Interesting that you say this... because when you move your head towards a virtual object to inspect any pixels you saw, the image literally gets clearer (because you're getting "closer" to it) and you don't see the pixels any more. IME this goes a long way towards tricking my brain into thinking the pixels aren't real and therefore aren't there. (I've been using my AVP for real work all day today and I've been mostly very happy with it. The resolution is absolutely the least of my concerns, and it looks subjectively phenomenal to me.)
Living with the Vision Pro is much easier than I expected. I thought it would feel bulky to manage with the external battery — it hasn't. I put the battery in my pocket and pop into visionOS without any hassle. I found the max time is about 1.5 hours before I feel slight eye fatigue and physical pressure from the headset.
Is this with the dual loop, or the cloth band?
The potential of the display is what convinces me that Vision will be a breakthrough device.
If Apple has a roadmap to keep scaling up the resolution, while only incurring the fixed manufacturing cost of producing a small 2cm display, there will soon be no other device category able to offer such image quality at so low a cost: not laptop screens, desktop monitors, TVs, or movie theatres.
[1] https://www.sony.com/content/sony/en/en_us/SCA/company-news/...
Long term, I don't think they _want_ Vision to be a breakthrough device. It's only supposed to be a stepping stone towards a true AR/MR device. The ultimate goal, likely not possible for decades, is a contact-lens-based solution. The main goal is a normal-looking pair of glasses capable of true AR/MR (not Google Glass or Nreal-style attempts with hardware that's not really capable of a truly usable experience).
Long-term wide adoption can't really happen with a VR device; the pitfalls (weight, long-term pain, hair being messed up, red marks on your face, poor battery life, etc.) are too high a barrier.
You're confused. Vision Pro is a true AR/MR device. It uses AR passthrough. A lot of people for a long time have assumed there's some inherent reason people would never accept passthrough AR devices. A few people like myself have been consistent saying that's not the case, that passthrough AR is actually the right solution. Vision Pro is just the first version of what I expect to be a long line of passthrough AR devices from Apple.
That's been their plan in the past, but I wonder if they'll reconsider. You can't do stuff like virtual environments in AR glasses. Apple may never care about games, but there are other VR features which users might really love.
I wouldn't be surprised if the headset continues even in the future where amazing perfect AR glasses are possible.
I don't think the physics will ever be there in terms of battery/energy consumption for such future devices, meaning I don't think Apple (or any other company) will ever be able to build a processing device that doesn't have to depend on a big enough battery.
You could try to miniaturise it all (up to a point) and include it as part of the device itself, similar to Google Glass, but who would want such an energy "bucket" so close to one's eyes and face? That's just disaster (and a big lawsuit) waiting to happen.
> The main goal is a normal looking pair of glasses capable of true AR/MR
You can't consume media (e.g. watch sports or movies) or do any real work with those glasses, which is a huge part of why Apple built the Vision Pro in the first place.
And the battery technology doesn't exist for them to last anything longer than a few minutes in such a form factor unless they're tethered.
I thought Sony makes the display (and cameras for that matter). It's odd that there are two critical pieces of tech that need a lot of R&D for the Vision Pro to keep progressing, and Apple has nothing to do with them. The screens need to get brighter and have higher resolution. The cameras need to work faster in low light and capture more accurate colors.
If I understand your point, I don’t think it makes as much of a difference as it may seem. Display panels are typically made from large “mother glass” panels out of which smaller display panels are cut. So you can make many different sized panels from a single type of mother glass (all with the same PPI of course). And the TV/monitor industry has delivered remarkable improvements in quality and cost very quickly, so it’s not as if our current methods aren’t working.
It’s an interesting thought though, let’s see how it develops.
The process used to make Micro-OLED displays is very different to the processes used for large OLEDs or LCDs, it has more in common with silicon manufacturing than conventional display manufacturing. AIUI that means the panel size will always be limited by the maximum reticle size of the silicon process, which is usually at most ~800mm².
That lines up with the 27.5x24mm size of the panels in the Vision Pro - that's 660mm², already close to the largest silicon die you can make.
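The arithmetic checks out against the usual reticle ceiling (26 x 33 mm is the standard maximum field for modern lithography steppers; applying it to their process is my assumption):

    panel = 27.5 * 24    # Vision Pro micro-OLED panel area: 660 mm^2
    reticle = 26 * 33    # standard max reticle field:       858 mm^2
    print(panel, panel / reticle)   # 660.0, ~77% of the limit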
After using it since launch, the displays are still startling every time I step back to really look at them. They look so good. There are a few things that could be improved — let me look at my phone, or project a digital display over my phone screen — but the UI and virtual displays you see in Vision Pro feel just like reality. It's startling, and bodes well for future Vision Pro devices (and other VR headsets, too).
... will never be manifested, because Apple won't let you have it. Use your imagination to tease you, but you won't ever get it. Lock'n'lol. iPad paperweight. Maybe next time, tiger.