When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”
But most people didn’t care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, and Amazon Basics. A small group still prizes the artisanal version, but the majority just wants something that works.
Granted, it's not an apples-to-apples comparison since Codex has the advantage of working in a fully scaffolded codebase where it only has to paint by numbers, but my overall experience has been significantly better since switching.
Normal method:
* Search for a recipe
* Leave my phone on a stand and glance at it if I forget a step
Meta glasses:
* Put the glasses on (there's a reason I got LASEK: wearing glasses sucks)
* Talk into the void, trying to figure out both how to describe my problem and the format I want the LLM to use for its response
* Correct it when it misreads one of my ingredients
* Hope the RNG gods give me a decent recipe
Or basically any of the things shown off for Apple's headset. Strap on a giant headset just so I can... browse photos? Or take a video call where the other person can't even see my face?
With glasses, you have to aim your head at whatever you want the AI to see. With a phone, you just point the camera while your hands stay free. Even in Meta’s demo, the presenter had to look back down at the counter because the AI couldn’t see the ingredients.
It feels like the same dead end we saw with Rabbit and the Humane pin—clever hardware that solves nothing the phone doesn’t already do. Maybe there’s a niche if you already wear glasses every day, but beyond that it’s hard to see the case.