Readit News
dogcomplex commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
dogcomplex · 8 days ago
lmao nope burn in hell old programming. What is emerging is a thousand times better than that dumpster fire
dogcomplex commented on Google will allow only apps from verified developers to be installed on Android   9to5google.com/2025/08/25... · Posted by u/kotaKat
kelnos · 6 months ago
If an alternative, privacy-focused OS like Graphene can support contactless payments (universal, like Google Wallet does it, not having to install an app per bank or card), and can 100% reliably get around apps requiring SafetyNet (or whatever they call it now) attestation, then I'd start using it.

I'd also need an alternate, safe source for common apps like Uber, Lyft, Slack, Kindle, Doordash, my banking/credit card apps, and a host of others that I use regularly. (And, no, "just use their website" is not acceptable; their website experiences are mostly crap.)

Way long ago I used to run CyanogenMod on my Android phones, and it was trivially easy to get every single app I needed working. Now it's a huge slog to get everything working on a non-Google-blessed OS, and I expect some things I use regularly just won't work. I hate hate hate this state of affairs. It makes me feel like I don't actually own my phone. But I've gotten so used to using these apps and features that it would reduce my quality of life (I know that sounds dramatic, but I'm lacking a better way to put it) to do without.

dogcomplex · 6 months ago
For those watching this stuff, there are two other promising paths, both using ZK proofs, which might defuse the tradeoff we've been stuck in. Banking apps etc. aren't willing to eat the liability of devices that are rooted or running alternate OSes, and Google's been banking on the exclusivity that comes from being both the hardware and the security provider.

Path 1: a ZK-proof attestation certificate marketplace implemented by GrapheneOS (or similar) that proves safety in a privacy-preserving way, convincingly enough for third-party liability insurance markets to buy in. Banks etc. can stay indifferent, and they wouldn't ignore the market if it got big enough. This would mean we could root any device with aggressive hacking and then apologize for it with ZK-proof certs proving it's still in good hands - and banking apps wouldn't need to care. No need for hard chains of custody like the Google security model requires.

Path 2: Don't even worry too hard about third-party devices or full OSes; we just need to make the option viable enough to shame Google into adopting the same ZK certificate schemes defensively. If they're reading user data through ZK-proof certs instead of just downloading EVERYTHING, they're significantly neutered as a Big Brother force and for once we'd be able to actually trust them. They'd still have app marketplace centrality, but if and when phones get subdivided with ZK-proof security, third-party monitoring of how those decisions get made becomes very public (we'd see the same things Google sees), so we could similarly shame them via alternatives into adopting reasonable default behaviors. Similar to Linux/Windows - Windows woulda been a lot more evil without the alternative next door.
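
Rough sketch of what the bank-side check in Path 1 could look like, in Python - every name here is invented for illustration, and a plain HMAC over a policy claim stands in for the actual zero-knowledge proof, which is the hard part I'm hand-waving:

    import hmac, hashlib, json

    # Placeholder trust anchor shared with the attestation marketplace; a real
    # scheme would verify the marketplace's signature plus an actual ZK proof.
    MARKETPLACE_KEY = b"attestation-marketplace-demo-key"

    def issue_certificate(policy_id: str) -> dict:
        # Marketplace side: vouch that some (unnamed) device satisfies policy_id.
        claim = json.dumps({"policy": policy_id, "satisfied": True}).encode()
        tag = hmac.new(MARKETPLACE_KEY, claim, hashlib.sha256).hexdigest()
        return {"claim": claim.decode(), "tag": tag}

    def bank_accepts(cert: dict, required_policy: str) -> bool:
        # Bank side: check the voucher without learning anything about the device.
        claim = cert["claim"].encode()
        expected = hmac.new(MARKETPLACE_KEY, claim, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, cert["tag"]):
            return False
        payload = json.loads(claim)
        return payload.get("policy") == required_policy and payload.get("satisfied") is True

    cert = issue_certificate("no-hooks-into-banking-app")
    print(bank_accepts(cert, "no-hooks-into-banking-app"))  # True

The point is just the shape of the interface: the bank learns "policy satisfied, vouched for by whoever carries the liability" and nothing about the device or its owner.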

Longer discussion (opinion not sourced from AI though): https://chatgpt.com/share/68ad1084-eb74-8003-8f10-ca324b5ea8...

dogcomplex commented on The current state of LLM-driven development   blog.tolki.dev/posts/2025... · Posted by u/Signez
KallDrexx · 6 months ago
I would love to work at the places you have been, where you're given enough time to throw out the prototype and do it properly. In my almost 20 years of professional experience this has never been the case; prototype and exploratory code has only ever been given minimal polishing time before reaching production and staying in use.
dogcomplex · 6 months ago
We are all too well aware of the tragedy that is modern software engineering lol. Sadly I too have never seen that situation where I was given enough time to do the requisite multiple passes for proper design...

I have, however, been reprimanded and made to spend - collectively - far longer tediously combing over said quick prototype code than the time originally provided to write it, as proof of my incompetence! Does that count?

dogcomplex commented on The current state of LLM-driven development   blog.tolki.dev/posts/2025... · Posted by u/Signez
throwawaybob420 · 6 months ago
Judging from all the comments here, it's going to be amazing seeing the fallout of all the LLM-generated code in a year or so. The number of people who seemingly relish the ability to stop thinking and let the model generate giant chunks of their code base is, uh, something else lol.
dogcomplex · 6 months ago
lol yep, we've never before had codebases hacked together by juniors running major companies in production - nope, never
dogcomplex commented on The current state of LLM-driven development   blog.tolki.dev/posts/2025... · Posted by u/Signez
hn_throwaway_99 · 6 months ago
> It entirely depends on the exposure and reliability the code needs.

Ahh, sweet summer child, if I had a nickel for every time I've heard "just hack something together quickly, that's throwaway code", that ended up being a critical lynchpin of a production system - well, I'd probably have at least like a buck or so.

Obviously, to emphasize, this kind of thing happens all the time with human-generated code, but LLMs make the issue a lot worse because they let you generate a ton of eventual mess so much faster.

Also, I do agree with your primary point (my comment was a bit tongue in cheek) - it's very helpful to know what should be core and what can be thrown away. It's just that in the real world, whenever "throwaway" code starts getting traction and usage, the powers that be are rarely OK with "Great, now let's rebuild/refactor with production usage in mind" - it's more like "faster faster faster".

dogcomplex · 6 months ago
> Ahh, sweet summer child, if I had a nickel for every time I've heard "just hack something together quickly, that's throwaway code", that ended up being a critical lynchpin of a production system - well, I'd probably have at least like a buck or so.

Because this is the first pass on any project, any component, ever. Design is done with iterations. One can and should throw out the original rough lynchpin and replace it with a more robust solution once it becomes evident that it is essential.

If you know that ahead of time and want to make it robust early, the answer is still rarely a single diligent one-shot to perfection - you absolutely should take multiple quick, rough iterations to think through the possibility space before settling on your choice. Even that is quite conducive to LLM coding - and the resulting synthesis, after attacking the problem from multiple angles, is usually the strongest of all. You should still go over it all with a fine-toothed comb at the end, and understand exactly why each choice was made, but the AI helps immensely in narrowing down the possibility space.

Not to rag on you though - you were being tongue in cheek - but we're kidding ourselves if we don't accept that something like 90% of the code we write is rough throwaway code at first and only a small portion gets polished into critical form. That's just how all design works.

dogcomplex commented on Supreme Court's ruling practically wipes out free speech for sex writing online   ellsberg.substack.com/p/f... · Posted by u/blurbleblurble
dogcomplex · 7 months ago
The Supreme Court is eroding the credibility of the institution of law faster than laws can be made. Do they really want to see how the public reacts to overreach?
dogcomplex commented on SymbolicAI: A neuro-symbolic perspective on LLMs   github.com/ExtensityAI/sy... · Posted by u/futurisold
VinLucero · 8 months ago
Agreed. This is a very interesting discussion! Thanks for bringing it to light.

Have you read Gödel, Escher, Bach: An Eternal Golden Braid?

dogcomplex · 7 months ago
Of course! And yes, a Locus appears to be very close in concept to a strange attractor. I'm especially interested in the idea of the holographic principle, where each node has its own low-fidelity map of the rest of the (graph?) system and can self-direct its own growth and positioning. It becomes more of a marketplace of meaning, and useful for the fuzzier edges of entity relationships that we're working with now.
dogcomplex commented on SymbolicAI: A neuro-symbolic perspective on LLMs   github.com/ExtensityAI/sy... · Posted by u/futurisold
futurisold · 8 months ago
One last comment here on contracts; an excerpt from the linked post that I think is extremely relevant for LLMs - maybe it triggers an interesting discussion here:

"The scope of contracts extends beyond basic validation. One key observation is that a contract is considered fulfilled if both the LLM’s input and output are successfully validated against their specifications. This leads to a deep implication: if two different agents satisfy the same contract, they are functionally equivalent, at least with respect to that specific contract.

This concept of functional equivalence through contracts opens up promising opportunities. In principle, you could replace one LLM with another, or even substitute an LLM with a rule-based system, and as long as both satisfy the same contract, your application should continue functioning correctly. This creates a level of abstraction that shields higher-level components from the implementation details of underlying models."

dogcomplex · 8 months ago
Anyone interested in this from a history / semiotics / language-theory perspective should look into the triad concepts of:

Sign (Signum) - The thing which points
Locus - The thing being pointed to
Sense (Sensus) - The effect/sense in the interpreter

Also known by: Representation/Object/Interpretation, Symbol/Referent/Thought, Signal/Data/User, Symbol/State/Update. The same pattern has been independently identified many times throughout history, always ending up with the same triplet under different names.

What you're describing above is the "Locus" - the essential object being pointed to, fulfilled by different contracts/LLMs/systems, with the same essential thing always being alluded to. There's an elegant stability to it from a systems design pov. It makes strong sense to build around those as the indexes/keys being pointed towards, with various implementations (Signs) attempting to achieve them. I'm building a similar system atm.
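
To make that concrete, here's a minimal Python sketch of the substitutability idea, with the contract playing the role of the Locus - all names below are invented for illustration, none of this is SymbolicAI's actual API:

    from typing import Protocol

    class SentimentContract(Protocol):
        # The Locus: the spec that any implementation's input/output must satisfy.
        def classify(self, text: str) -> str:
            """Must return one of: 'positive', 'negative', 'neutral'."""
            ...

    class RuleBasedSentiment:
        # One Sign: a dumb rule-based implementation.
        def classify(self, text: str) -> str:
            lowered = text.lower()
            if any(w in lowered for w in ("great", "love", "excellent")):
                return "positive"
            if any(w in lowered for w in ("awful", "hate", "terrible")):
                return "negative"
            return "neutral"

    class LLMSentiment:
        # Another Sign: stand-in for an LLM-backed implementation (prompt call elided).
        def classify(self, text: str) -> str:
            return "neutral"

    def run_with_contract(impl: SentimentContract, text: str) -> str:
        # The contract check: the output is validated against the spec, not the backend.
        label = impl.classify(text)
        assert label in {"positive", "negative", "neutral"}, "contract violated"
        return label

    # Either backend can be swapped in; callers only ever depend on the contract.
    print(run_with_contract(RuleBasedSentiment(), "I love this"))
    print(run_with_contract(LLMSentiment(), "I love this"))

Same Locus, two Signs - anything upstream stays oblivious to which one is doing the pointing.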

dogcomplex commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
gdubs · 8 months ago
One thing that I find truly amazing is just the simple fact that you can now be fuzzy with the input you give a computer, and get something meaningful in return. Like, as someone who grew up learning to code in the 90s it always seemed like science fiction that we'd get to a point where you could give a computer some vague human level instructions and get it more or less do what you want.
dogcomplex · 8 months ago
If anything we now need to unlearn the rigidity - being too formal can make the AI overly focused on certain aspects, and is in general poor UX. You can always tell legacy man-made code because it is extremely inflexible and requires the user to know terminology and usage implicitly lest it break, hard.

For once, we developers are actually using computers the way normal people always wished they worked, back when they kept trying and were turned away frustrated. We now need to blend our precise formal approach with these capabilities to make it all actually work the way it always should have.

dogcomplex commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
matthewsinclair · 8 months ago
I think this article is pretty spot on — it articulates something I’ve come to appreciate about LLM-assisted coding over the past few months.

I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.

Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.

It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt” — because it just doesn’t do what most (possibly uninformed) people think it does.

But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.

I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.

What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.

Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.

What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.

[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...

dogcomplex · 8 months ago
"Mech suit" is apt. Gonna use that now.

Having plenty of initial discussion and distilling it into requirements documents aimed at modularized components that can each be tackled separately is key.
