I'd also need an alternate, safe source for common apps like Uber, Lyft, Slack, Kindle, Doordash, my banking/credit card apps, and a host of others that I use regularly. (And, no, "just use their website" is not acceptable; their website experiences are mostly crap.)
Way back I used to run CyanogenMod on my Android phones, and it was trivially easy to get every single app I needed working. Now it's a huge slog to get everything working on a non-Google-blessed OS, and I expect some things I use regularly just won't work. I hate hate hate this state of affairs. It makes me feel like I don't actually own my phone. But I've gotten so used to these apps and features that doing without them would reduce my quality of life (I know that sounds dramatic, but I'm lacking a better way to put it).
Path 1: a ZK-proof attestation certificate marketplace implemented by GrapheneOS (or similar) that proves device safety in a privacy-preserving way convincing enough for 3rd-party liability insurance markets to buy in. Banks etc. could stay indifferent at first, but they couldn't ignore the market once it got big enough. This would mean we could root any device with aggressive hacking and then "apologize" for it with ZK-proof certs proving it's still in good hands - and banking apps wouldn't need to care. No need for hard chains of custody like in the Google security model.
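A toy sketch of the shape of that flow, in Python (every name here is hypothetical - it's only to make the roles concrete, not a real API): the bank app checks a zero-knowledge claim about device state plus an insurer's countersignature, and never learns which OS or root method is underneath.

    # All names hypothetical: a stand-in for how a banking app might consume
    # a ZK attestation cert with no Google-rooted chain of custody.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ZkAttestation:
        claim: str          # e.g. "verified-boot-equivalent integrity holds"
        proof: bytes        # opaque zero-knowledge proof over device state
        insurer_sig: bytes  # countersignature from a liability-insurance market

    def verify_zk_proof(claim: str, proof: bytes) -> bool:
        # Stand-in check; a real system would run a succinct-proof verifier here.
        return proof == b"proof-of:" + claim.encode()

    def bank_app_accepts(att: ZkAttestation, trusted_insurers: set) -> bool:
        # The app verifies only (1) the proof against the public claim and
        # (2) that an insurer it trusts underwrites the claim. Which ROM or
        # root method produced the device state is never revealed.
        return verify_zk_proof(att.claim, att.proof) and att.insurer_sig in trusted_insurers

    att = ZkAttestation("integrity holds", b"proof-of:integrity holds", b"insurer-A")
    print(bank_app_accepts(att, {b"insurer-A"}))  # True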
Path 2: Don't even worry too hard about 3rd-party devices or full OSes; we just need to make the option viable enough to shame Google into adopting the same ZK certificate schemes defensively. If they're accessing user data through ZK-proof certs instead of just downloading EVERYTHING, then they're significantly neutered as a Big Brother force, and for once we'd be able to actually trust them. They'd still have app-marketplace centrality, but if and when phones are partitioned with ZK-proof security, 3rd-party monitoring of how those decisions get made becomes very public (we'd see the same things Google sees), so we could similarly shame them via alternatives into adopting reasonable default behaviors. Similar to Linux/Windows - Windows woulda been a lot more evil without the alternative next door.
Longer discussion (opinion not sourced from AI though): https://chatgpt.com/share/68ad1084-eb74-8003-8f10-ca324b5ea8...
I have been reprimanded, though, and have collectively spent far longer tediously combing over said quick prototype code than the time originally provided to work on it - as proof of my incompetence! Does that count?
Ahh, sweet summer child, if I had a nickel for every time I've heard "just hack something together quickly, that's throwaway code", that ended up being a critical lynchpin of a production system - well, I'd probably have at least like a buck or so.
Obviously, to emphasize, this kind of thing happens all the time with human-generated code, but LLMs make the issue a lot worse because they let you generate a ton of eventual mess so much faster.
Also, I do agree with your primary point (my comment was a bit tongue in cheek) - it's very helpful to know what should be core and what can be thrown away. It's just that in the real world, whenever "throwaway" code starts getting traction and usage, the powers that be are rarely OK with "Great, now let's rebuild/refactor with production usage in mind" - it's more like "faster faster faster".
Because this is the first pass on any project, any component, ever. Design is done in iterations. One can and should throw out the original rough lynchpin and replace it with a more robust solution once it becomes evident that it is essential.
If you know that ahead of time and want to make it robust early, the answer is still rarely a single diligent one-shot to perfection - you absolutely should take multiple quick, rough iterations to think through the possibility space before settling on your choice. Even that is quite conducive to LLM coding - and the resulting synthesis, after attacking it from multiple angles, is usually the strongest of all. You should still go over it all with a fine-toothed comb at the end, and understand exactly why each choice was made, but the AI helps immensely in narrowing down the possibility space.
Not to rag on you though - you were being tongue in cheek - but we're kidding ourselves if we don't accept that like 90% of the code we write is rough throwaway code at first, and only a small portion gets polished into critical form. That's just how all design works.
Have you read Gödel, Escher, Bach: An Eternal Golden Braid?
"The scope of contracts extends beyond basic validation. One key observation is that a contract is considered fulfilled if both the LLM’s input and output are successfully validated against their specifications. This leads to a deep implication: if two different agents satisfy the same contract, they are functionally equivalent, at least with respect to that specific contract.
This concept of functional equivalence through contracts opens up promising opportunities. In principle, you could replace one LLM with another, or even substitute an LLM with a rule-based system, and as long as both satisfy the same contract, your application should continue functioning correctly. This creates a level of abstraction that shields higher-level components from the implementation details of underlying models."
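A minimal sketch of that interchangeability in Python (all names are mine, not from the quoted piece): the contract validates input and output, and both a rule-based agent and a stubbed LLM agent satisfy it, so either can sit behind the same call site.

    # Sketch only: a "contract" as paired input/output validation. Any two
    # implementations that pass both checks are equivalent under it.
    from dataclasses import dataclass
    from typing import Callable, Protocol

    @dataclass
    class Contract:
        valid_input: Callable[[str], bool]
        valid_output: Callable[[str, str], bool]

    class Agent(Protocol):
        def run(self, prompt: str) -> str: ...

    def call_under_contract(agent: Agent, contract: Contract, prompt: str) -> str:
        # The contract is "fulfilled" iff both validations pass.
        if not contract.valid_input(prompt):
            raise ValueError("input violates contract")
        answer = agent.run(prompt)
        if not contract.valid_output(prompt, answer):
            raise ValueError("output violates contract")
        return answer

    class RuleBasedAgent:
        def run(self, prompt: str) -> str:
            return {"2+2": "4"}.get(prompt, "unknown")

    class FakeLlmAgent:  # stands in for a real model call
        def run(self, prompt: str) -> str:
            return "4" if "2+2" in prompt else "unknown"

    arithmetic = Contract(
        valid_input=lambda p: len(p) < 100,
        valid_output=lambda p, a: a.isdigit() or a == "unknown",
    )
    print(call_under_contract(RuleBasedAgent(), arithmetic, "2+2"))  # -> 4
    print(call_under_contract(FakeLlmAgent(), arithmetic, "2+2"))    # -> 4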
Sign (Signum) - The thing which points
Locus - The thing being pointed to
Sense (Sensus) - The effect/sense in the interpreter
Also known by: Representation/Object/Interpretation, Symbol/Referent/Thought, Signal/Data/User, Symbol/State/Update. The same pattern has been independently identified many times throughout history, always ending up with the same triplet under different names.
What you're describing above is the "Locus" - the essential object being pointed to, fulfilled by different contracts/LLMs/systems but with the same essential thing always being alluded to. There's an elegant stability to it from a systems-design POV. It makes strong sense to build around Loci as the indexes/keys being pointed towards, with various implementations (Signs) attempting to fulfil them. I'm building a similar system atm.
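A toy sketch of what I mean by that keying, in Python (my construction, not a description of anyone's actual system): the Locus is the stable key, and Signs are interchangeable implementations registered against it.

    # Toy sketch: Loci as stable keys, Signs as swappable implementations.
    from typing import Callable, Dict, List

    registry: Dict[str, List[Callable[[str], str]]] = {}

    def sign_for(locus: str):
        """Register an implementation (a Sign) against a stable Locus key."""
        def register(fn: Callable[[str], str]) -> Callable[[str], str]:
            registry.setdefault(locus, []).append(fn)
            return fn
        return register

    @sign_for("summarize")
    def summarize_rule_based(text: str) -> str:
        return text.split(".")[0] + "."  # first sentence as a crude summary

    @sign_for("summarize")
    def summarize_llm_stub(text: str) -> str:
        return "LLM summary of: " + text[:40]  # placeholder for a model call

    # Callers index by Locus; which Sign fulfils it is an implementation detail.
    for sign in registry["summarize"]:
        print(sign("First point. Second point."))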
For once, as developers we are actually using computers the way normal people always wished they worked - the way that used to turn them away in frustration. We now need to blend our precise formal approach with these capabilities to make it all actually work the way it always should have.
I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.
Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.
It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt” — because it just doesn’t do what most (possibly uninformed) people think it does.
But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.
I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.
What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.
Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.
What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.
[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...
Having plenty of initial discussion, and distilling that into requirements documents aimed at modularized components which can each be tackled separately, is key.