This could have come in any form; a platform, for instance, as the author points out.
I have a couple of ideas. How about a permissions kit, something where you sign off on permissions before or during execution (rough sketch below)? Or how about locked-down execution sandboxes specifically for agentic loops? Also: why is there not yet (or ever?) a model trained on their development code/forums/manuals/data?
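To make the permissions-kit idea concrete, here is a minimal TypeScript sketch of a gate that prompts the user before an agent tool call runs. Everything here (Permission, ToolCall, askUser, runGated) is a hypothetical name for illustration, not any real OpenClaw API.

    // Capabilities an agent tool might request. Hypothetical set for the sketch.
    type Permission = "fs:read" | "fs:write" | "net:fetch" | "shell:exec";

    interface ToolCall {
      name: string;
      needs: Permission[];          // permissions this tool requires
      run: () => Promise<string>;   // the actual tool action
    }

    // Approvals the user has granted so far in this session.
    const granted = new Set<Permission>();

    // Stand-in for a real prompt (CLI readline, OS dialog, etc.).
    // Stubbed to deny everything so the sketch is safe to run as-is.
    async function askUser(p: Permission): Promise<boolean> {
      console.log(`Agent requests permission: ${p}. Allow? (stubbed: deny)`);
      return false;
    }

    // Ask for each missing permission before letting the tool execute.
    async function runGated(call: ToolCall): Promise<string> {
      for (const p of call.needs) {
        if (!granted.has(p)) {
          if (!(await askUser(p))) {
            throw new Error(`Permission denied: ${p} (tool: ${call.name})`);
          }
          granted.add(p);
        }
      }
      return call.run();
    }

An agentic loop would route every tool invocation through runGated, so nothing touches the filesystem, network, or shell without an explicit, per-capability sign-off.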
Even before OpenClaw, I could see the writing on the wall. The AI ecosystem is not congruent with Apple's walled garden, in many ways because Apple has turned its back on those 'misfits' its early ad copy praised.
This 'misfit' mentality is what I like so much about the OpenClaw community. It was visible from its very beginning in the devil-may-care disregard for privacy and security.
Reality is the exact opposite: young, innovative, rebellious, often hyper-motivated folks are sprinting from idea to implementation, while executives are "told by a few colleagues" that something new, "the future of foo," is on the rise.
If you use OpenClaw, that's fantastic. If you have an idea for how to improve it, well, it is open source, so go ahead and submit a pull request.
Telling Apple "you should do what I am probably too lazy to do myself" is the kind of entitlement blogging I have nearly zero respect for.
Apparently it's easier to give unsolicited advice to public companies than to build something. Ask the interns at EY and McKinsey.
Maybe the author left out something very real: Apple is a walled-garden monopoly with a locked-down ecosystem, right down to the devices. And they are not alone in this. Companies like these stifle innovation; demanding more from them is not entitlement.