He is clearly not being rational there, but I could see how his aesthetic tastes might correlate pretty well with robust software. I suppose saying no to new features is a good default heuristic: these additions could easily have added more problems than they solved, and then you have more surface area to maintain.
That being said, this old-school ideology of maintainers dumping full responsibility on the user for applying the API "properly" is rather unreasonable. It often sounds like they enjoy keeping all these footguns they refuse to fix, because it lets them feel superior: the club of greybeards who have memorised every esoteric pitfall, simply because they were along for the journey, stays neatly separated from the masses.
If the core developers/maintainers have put in thousands of hours over several years and a patch comes along, whether to accept it is rightfully at the discretion of those doing 80-95% of the work.
But as Negotiating Rationally discusses, we value our own work more highly than others do, and there's some emotional attachment. We need to learn to let that go, try to find the best solution, and stay open to the bigger picture.
https://en.m.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
https://www.simonandschuster.com/books/Negotiating-Rationall...
I was among the nerds who swore they'd never use a touch keyboard, and I refused to buy a smartphone without a physical keyboard until 2011. Yes, typing on a screen was awful at first. But then text prediction and haptics got better, and we invented swipe keyboards. Today I'm nearly as fast and comfortable on a touch keyboard as I am on a physical one on a "real" computer.
My point is that input devices get better. We know when something can be improved, and we invent better ways of interacting with a computer.
If you think we can't improve voice input to the point where it feels quicker, more natural, and more comfortable to use than a keyboard, you're mistaken. We're still in the very early stages of this wave of XR devices.
In the past couple of years alone, text-to-speech and speech recognition systems have improved drastically. Today it's possible to hold a nearly natural-sounding conversation with an AI. Where do you think we'll be 10 years from now?
> Imagine, for example, trying to navigate your emails by speech only. Disaster.
That's because you're imagining navigating a list on a traditional 2D display with voice input. Why wouldn't we adapt our GUIs to work better with voice, or other types of input?
Many XR devices support eye tracking. This works well for navigation _today_ (see some visionOS demos). Where do you think we'll be 10 years from now?
So I think you're, understandably, holding traditional devices in high regard, and underestimating the possibilities of a new paradigm of computing. It's practically inevitable that XR devices will become the standard computing platform in the near future, even if it seems unlikely today.
AR will always be somewhat awkward until you can physically touch and interact with virtual objects the way you can with material things. It's useful, sure, but not a replacement.
Haptic feedback is probably my favorite iPhone user experience improvement on both the hardware and software side.
However, no voice input will ever let me enter text faster than I can type on my keyboard, and even with the most advanced systems, I will always be able to type longer and with less fatigue than I could speak, having ten fingers but only one set of vocal cords.
All options are going to be valid and useful for a very long time.