iepathos commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
iepathos · a day ago
If AI is good enough that juniors wielding it outproduce seniors, then the juniors are just... overhead. The company would cut them out and let AI report to a handful of senior architects who actually understand what's being built. You don't pay humans to be a slow proxy for a better tool.

If the tools get good enough to not need senior oversight, they're good enough to not need junior intermediaries either. The "juniors with jetpacks outpacing seniors" future is unrealistic and unstable - it either collapses into "AI + a few senior architects" or "AI isn't actually that reliable yet."

iepathos commented on TikTok's 'addictive design' found to be illegal in Europe   nytimes.com/2026/02/06/bu... · Posted by u/thm
Mordisquitos · 3 days ago
Maybe it isn't any different to Facebook, I don't know. Why would it matter if Facebook isn't any different from TikTok in the context of this news?
iepathos · 3 days ago
Apparent hypocrisy and injustice in government policy are ugly things that should be pointed out and eliminated through public awareness and scrutiny.
iepathos commented on TikTok's 'addictive design' found to be illegal in Europe   nytimes.com/2026/02/06/bu... · Posted by u/thm
nolroz · 3 days ago
How'd you kick it?
iepathos · 3 days ago
Get a life that's more interesting than washing dishes 4-8 hours a day.
iepathos commented on A sane but bull case on Clawdbot / OpenClaw   brandon.wang/2026/clawdbo... · Posted by u/brdd
okinok · 5 days ago
> all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One of the differences in risk here would be that I think you've got some legal protection if your human assistant misuses it or it gets stolen. But with the OpenClaw bot, I'm unsure whether any insurance company or bank will side with you if the bot drains your account.

iepathos · 5 days ago
Thought the same thing. There is no legal recourse if the bot drains the account and donates it all to charity. The legal system's response to that is: don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this, and rightfully so. If he wanted to guard himself somewhat, he'd give the bot only a credit card he could cancel or stop payments on - exactly the minimum he'd give a human assistant.
iepathos commented on Ask HN: Do you have any evidence that agentic coding works?    · Posted by u/terabytest
iepathos · 19 days ago
The default output from AI is much like the default output from experienced devs prioritizing speed over architecture to meet business objectives. Just like experienced devs, LLMs accept technical debt as leverage for velocity. This isn't surprising - most code in the world carries technical debt, so that's what the models trained on and learned to optimize for.

Technical debt, like financial debt, is a tool. The problem isn't its existence, it's unmanaged accumulation.

A few observations from my experience:

1. One-shotting - if you're prompting once and shipping, you're getting the "fast and working" version, not the "well-architected" version. Same as asking an experienced dev for a quick prototype.

2. AI can output excellent code - but it takes iteration, explicit architectural constraints, and often specialized tooling. The models have seen clean code too; they just need steering toward it.

3. The solution isn't debt-free commits. The solution is measuring, prioritizing, and reducing only the highest-risk tech debt - the equivalent of focusing on bottlenecks with performance profiling. Which code is high-risk? Where's the debt concentrated? Poorly factored code with good test coverage is low-risk. Poorly tested code in critical execution paths is high-risk. Your CI pipeline needs to check debt automatically for you, just as it lints and verifies your tests pass.
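
A minimal sketch of what that CI gate could look like - the `debt-tool` command and JSON fields below are hypothetical placeholders, not any real tool's interface:

```python
import json
import subprocess
import sys

# Hypothetical CLI and output schema, for illustration only.
report = json.loads(subprocess.run(
    ["debt-tool", "analyze", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout)

# Gate on the risky combination: high-complexity code that is poorly tested.
flagged = [
    item for item in report["items"]
    if item["risk"] == "high" and item["coverage"] < 0.6
]

for item in flagged:
    print(f"high-risk debt: {item['file']} (coverage {item['coverage']:.0%})",
          file=sys.stderr)

# Fail the build only on high-risk debt, not on any debt at all.
sys.exit(1 if flagged else 0)
```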

I built https://github.com/iepathos/debtmap to solve this systematically for my projects. It measures technical debt density to prioritize risk, but more importantly for this discussion: it identifies the right context for an LLM to understand a problem without looking through the whole codebase. The output is designed to be used with an LLM for automated technical debt reduction. And because we're measuring debt before and after, we have a feedback loop - enabling the LLM to iterate effectively and see whether its refactoring had a positive impact or made things worse. That's the missing piece in most agentic workflows: measurement that closes the loop.
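
The shape of that loop, as a sketch (again with a hypothetical `debt-tool` command and score field - debtmap's actual CLI and output format may differ):

```python
import json
import subprocess

def debt_score() -> float:
    # Hypothetical command and field name, for illustration only.
    out = subprocess.run(
        ["debt-tool", "analyze", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["total_debt_score"]

def run_agent_refactor() -> None:
    """Placeholder: invoke whatever agent applies the refactoring."""

before = debt_score()
run_agent_refactor()
after = debt_score()

# Measurement closes the loop: keep the change only if it actually helped.
if after < before:
    print(f"debt reduced {before:.1f} -> {after:.1f}, keeping change")
else:
    print("no improvement, revert and let the agent try again")
```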

To your specific concern about shipping unreviewed code: I agree it's risky, but the review focus should shift from "is every line perfect?" to "where are the structural risks, and are those paths well-tested?" If your code has low complexity everywhere, is well tested (always review the tests), and passes everything, ask yourself what you actually gain from investing further time over-engineering the lesser tech debt away. You can't eliminate all tech debt, but you can keep it from compounding in the places that matter.

iepathos commented on The Code-Only Agent   rijnard.com/blog/the-code... · Posted by u/emersonmacro
iepathos · 21 days ago
The "code witness" concept falls apart under scrutiny. In practice, the agent isn't replacing ripgrep with pure Python, it's generating a Python wrapper that calls ripgrep via subprocess. So you get:

- Extra tokens to generate the wrapper

- New failure modes (encoding issues, exit code handling, stderr bugs)

- The same underlying tool call anyway

- No stronger guarantees - actually weaker ones, since you're now trusting both the tool AND the generated wrapper

The theoretical framing about "proofs as programs" and "semantic guarantees" sounds impressive, but the generated wrapper doesn't provide stronger semantics than rg alone - it provides strictly weaker ones. This holds for pretty much any CLI tool you have the AI wrap in Python instead of calling the battle-tested tool directly.
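
Concretely, the wrapper these agents emit tends to look like this (a hypothetical but representative sketch):

```python
import subprocess

def search(pattern: str, path: str = ".") -> list[str]:
    # The generated "witness" still shells out to ripgrep underneath.
    result = subprocess.run(
        ["rg", "--line-number", pattern, path],
        capture_output=True, text=True,
    )
    # Failure modes the bare `rg` invocation never had:
    # - rg exits 1 for "no matches"; a naive check=True would crash on that
    # - text=True decodes with a default encoding; non-UTF-8 output can raise
    # - warnings rg prints to stderr on success are silently dropped here
    if result.returncode > 1:
        raise RuntimeError(f"rg failed: {result.stderr.strip()}")
    return result.stdout.splitlines()

print(search("TODO"))
```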

For actual development work, the artifact that matters is the code you're building, which we're already tracking in source control. Nobody needs a "witness" of how the agent found the right file to edit, and if they do, agents have parseable logs. Direct tool calls are faster and more reliable, and the intermediate exploration steps are ephemeral scaffolding anyway.

iepathos commented on You Need to Ditch VS Code   jrswab.com/blog/ditch-vs-... · Posted by u/kugurerdem
iepathos · a month ago
Research on calculator use in early math education (notably the Hembree & Dessart meta-analysis of 79 studies) found that students given calculators performed better at math - including on paper-and-pencil tests without calculators. The hypothesis is that calculators handle computation, freeing cognitive bandwidth and time for problem-solving and conceptual understanding. Problem-solving and higher-level concepts matter far more than memorizing multiplication and division tables.

I think about this often when discussing AI adoption with people. It's also relevant to this VS Code discussion, which is adjacent to the broader AI-assisted development debate. The post conflates tool proficiency with understanding. You can deeply understand Git's DAG model while never typing git reflog. Conversely, you can memorize every terminal command and still design terrible systems.

The scarce resource for most developers isn't "knows terminal commands" - it's "can reason about complex systems under uncertainty." If a tool frees up bandwidth for that, it's a net win. Not to throw shade at hyper-efficient terminal users - I live in the terminal and recommend it - but using it instead of an IDE won't by itself make you a better programmer. Reasoning about complex systems isn't what you gain from living in a terminal. You gain efficiency, flexibility, and nerd cred - all valuable, but none of them are systems thinking.

The auto-complete point in the post is particularly ironic given how much terminal users depend on completion, and most vim users rely heavily on auto-complete too. Auto-complete doesn't limit your effectiveness; the evidence points the opposite way.

iepathos commented on Rob Pike goes nuclear over GenAI   skyview.social/?url=https... · Posted by u/christoph-heiss
CerryuDu · a month ago
> non-profits

I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise.

> open source foundations

Those dreams end. (Speaking from experience.)

> education, healthcare tech

Not self-sustaining. These sectors are not self-sustaining anywhere, and therefore are highly tied to politics.

> small companies solving real problems

I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another.

> The "we all have to" framing is a convenient way to avoid examining your own choices.

This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back.

> And it's telling that this framing always seems to appear when someone is defending their own employer.

(I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.)

> You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer")

I did!

> so you clearly believe these distinctions matter even though Google itself is an AI company

Yes, I do believe that.

Google has created Docs, Drive, Mail, Search, Maps, Project Zero. It's not all terribly bad from them; there is some "only moderately bad", and even morsels of "borderline good".

iepathos · a month ago
Thanks for the thoughtful reply.

The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice.

The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely to feed their ad machine), why are AI companies "unquestionably cancer"? You're claiming Google's useful products excuse its harms, but AI companies' useful products don't. That's not a principled line; it's just where you've personally decided to draw it.

iepathos commented on Rob Pike goes nuclear over GenAI   skyview.social/?url=https... · Posted by u/christoph-heiss
CerryuDu · a month ago
Don't be ridiculous. Google has been doing many things, some of those even nearly good. The super talented/prolific/capable have always gravitated to powerful maecenases. (This applies to Haydn and Händel, too.) If you uncompromisingly filter potential employers by "purely a blessing for society", you'll never find an employment that is both gainful and a match for your exceptional talents. Pike didn't make a deal with the devil any more than Leslie Lamport or Simon Peyton Jones did (each of whom had worked for 20+ years at Microsoft, and has advanced the field immensely).

As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer.

iepathos · a month ago
> As IT workers, we all have to prostitute ourselves to some extent.

No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.

And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.

iepathos commented on Europeans' health data sold to US firm run by ex-Israeli spies   ftm.eu/articles/europe-he... · Posted by u/Fnoord
Jaygles · 2 months ago
If services offered a paid version that guaranteed privacy, such that I stay anonymous and only data points that are strictly necessary to provide the service are persisted in the company's servers, I would happily pay.

And I mean guaranteed in a way that gives me legal recourse against the company if they go back on their word or screw up.

iepathos · 2 months ago
What specific legal recourse beyond what already exists? You can already sue for breach of contract if a company violates its privacy policy. The real problems are (1) detecting violations in the first place, and (2) proving and quantifying damages. A 'guarantee' doesn't solve either.
