Readit News
jatora commented on This is not the future   blog.mathieui.net/this-is... · Posted by u/ericdanielski
orthecreedence · 5 days ago
Another post arguing against the profit mechanism and modern commercial capitalism without realizing it. It's arguing against the symptoms. The problem is a system that incentivizes creating markets where a market was not needed and convincing people it will make their lives better. Yes, the problem is cultural, but it's also deeply ingrained in our economic protocol now. You can't scream into the void about specific problems and expect change unless you look at the root causes.
jatora · 5 days ago
Following this thread takes you into political territory and governmental/regulatory capture, which I believe is the root issue, and one that cannot be solved under late-stage capitalism.

We are headed towards (or already in) corporate feudalism and I don't think anything can realistically be done about it. Not sure if this is nihilism or realism but the only real solution I see is on the individual level: make enough money that you don't have to really care about the downsides of the system (upper middle class).

So while I agree with you, I think I just disagree with the bit you said about "can't expect anything to change without-" and would just say: can't expect anything to change except through the inertia of what's already in place.

jatora commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
fasteo · 9 days ago
>>> Already, the average ChatGPT Enterprise user says AI saves them 40–60 minutes a day

If this is what AI has to offer, we are in a gigantic bubble

jatora · 9 days ago
This seems pretty huge. Not sure by what metric it wouldn't be civilizationally gigantic for everyone to save that much time per day.
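For scale, a back-of-envelope calculation; every figure below is an assumption for illustration, not a number from the announcement:

```python
# Back-of-envelope only; the user count is assumed, not reported anywhere.
minutes_saved_per_day = 50          # midpoint of the quoted 40-60 minutes
users = 10_000_000                  # hypothetical number of daily users
workdays_per_year = 250

hours_per_year = minutes_saved_per_day / 60 * users * workdays_per_year
person_years = hours_per_year / 2000  # ~2,000 working hours per person-year
print(f"{hours_per_year:,.0f} hours/year, roughly {person_years:,.0f} person-years")
```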
jatora commented on Has the cost of building software dropped 90%?   martinalderson.com/posts/... · Posted by u/martinald
y0eswddl · 12 days ago
do you have a link to the latest survey? my google-fu is failing me at the moment
jatora · 12 days ago
your google-fu isn't failing. there are simply only a couple of large studies on this, and of those, zero have a useful methodology.
jatora commented on Google Antigravity   antigravity.google/... · Posted by u/Fysi
recitedropper · a month ago
For seasoned maintainers of open source repos, there is explicit evidence it does slow them down, even when they think it sped them up: https://arxiv.org/abs/2507.09089

Cue: "the tools are so much better now", "the people in the study didn't know how to use Cursor", etc. Regardless if one takes issue with this study, there are enough others of its kind to suggest skepticism regarding how much these tools really create speed benefits when employed at scale. The maintenance cliff is always nigh...

There are definitely ways in which LLMs, and agentic coding tools scaffolded on top, help with aspects of development. But to say that anyone who claims otherwise is either being disingenuous or doesn't know what they are doing is not an informed take.

jatora · a month ago
I have seen this study cited often enough to have a copy-paste response for it. And no, there are not a bunch of other studies that have any sort of conclusive evidence to support this claim either. I have looked and would welcome any with good analysis.

"""

1. The sample is extremely narrow (16 elite open-source maintainers doing ~2-hour issues on large repos they know intimately), so any measured slowdown applies only to that sliver of work, not “developers” or “software engineering” in general.

2. The treatment is really “Cursor + Claude, often in a different IDE than participants normally use, after light onboarding,” so the result could reflect tool/UX friction or unfamiliar workflows rather than an inherent slowdown from AI assistance itself.

3. The only primary outcome is self-reported time-to-completion; there is no direct measurement of code quality, scope of work, or long-term value, so a longer duration could just mean “more or better work done,” not lower productivity.

4. With 246 issues from 16 people and substantial modeling choices (e.g., regression adjustment using forecasted times, clustering decisions), the reported ~19% slowdown is statistically fragile and heavily model-dependent, making it weak evidence for a robust, general slowdown effect (see the sketch just below).

"""

Any developer (one who was a developer before March 2023) who is actively using these tools and understands the nuances of prompting (how to search the vector space) is being sped up substantially.

jatora commented on Google Antigravity   antigravity.google/... · Posted by u/Fysi
absoluteunit1 · a month ago
I didn’t make this claim.

I also have a personal rule that I will try something actively for at least 4 months before making my decision about it (a programming language, new tools, or in this case AI-assisted coding).

I made the claim that in my area of expertise, I have found that most of the time it is faster to write something myself than to write out a really detailed md file / prompt. It becomes more tedious to express myself via natural language than it is with code when I want something very specific done.

In these types of cases, writing the code myself allows me to express the thing I want faster. Also, I like to code with the AI autocomplete, but while it can be useful I sometimes disable it because it's distracting and consistently incorrect with its predictions.

jatora · a month ago
claim that i claimed you claimed: "for any coder to claim AI tools slow them down"

---

claim you made: "One thing I've noticed though is that when actually coding (without the use of AI; maybe a bit of tab auto-complete) I'm actually way faster when working in my domain than I am when using AI tools."

---

You did make that claim but I'm aware my approach would bring the defensiveness out of anyone :P

jatora commented on AI is a front for consolidation of resources and power   chrbutler.com/what-ai-is-... · Posted by u/delaugust
SoftTalker · a month ago
I'm a SWE, DBA, SysAdmin, I work up and down the stack as needed. I'm not using LLMs at all. I really haven't tried them. I'm waiting for the dust to settle and clear "best practices" to emerge. I am sure that these tools are here to stay but I am also confident they are not in their final form today. I've seen too many hype trains in my career to still be jumping on them at the first stop.
jatora · a month ago
I hope I am never this slow to adapt to new technologies.
jatora commented on Google Antigravity   antigravity.google/... · Posted by u/Fysi
absoluteunit1 · a month ago
Same here - completely relate.

One thing I've noticed though is that when actually coding (without the use of AI; maybe a bit of tab auto-complete) I'm actually way faster when working in my domain than I am when using AI tools.

Every time I use AI tools in my domain-expertise area, I find they end up slowing me down: introducing subtle bugs, making me provide an insane amount of context and details (at which point it becomes way faster to do it myself).

Just code and chill man - having spent the last 6 months really trying everything (all these context engineering strategies, agents, CLAUDE.md files in every directory, etc.), it really is still more productive to just code yourself if you know what you're doing.

The thing I love most though is having discussions with an LLM about an implementation, having it write some quick unit tests and performance tests for certain base cases, having it write a quick shell script, etc. Things like this are amazing and make me really enjoy programming, since I save time and can focus on doing the actual fun stuff.

jatora · a month ago
it is absolutely poor skill, or disingenuous at best, for any coder to claim AI tools slow them down lol.
jatora commented on SlopStop: Community-driven AI slop detection in Kagi Search   blog.kagi.com/slopstop... · Posted by u/msub2
dvfjsdhgfv · a month ago
So we have two universes. One is pushing generated content down our throats - from social media to operating systems - and another universe where people actively decide not to have anything to do with it.

I wonder where the obstinacy on the part of certain CEOs comes from. It's clear that although such content does have its fans (mostly grouped in communities), people at large just hate artificially generated content. We had our moment, it was fun, it is no more, but these guys seem obsessed with promoting it.

jatora · a month ago
not exactly nothing to do with it - they still use generative AI to assist search

and saying 'it is no more'... sigh. such a weird take. the world's coming for you

jatora commented on The Case That A.I. Is Thinking   newyorker.com/magazine/20... · Posted by u/ascertain
marcus_holmes · 2 months ago
Yes, I've seen the same things.

But: they don't learn. You can add stuff to their context, but they never get better at doing things and don't really understand feedback. An LLM given a task a thousand times will produce similar results a thousand times; it won't get better at it, or even quicker at it.

And you can't ask them to explain their thinking. If they are thinking, and I agree they might, they don't have any awareness of that process (like we do).

I think if we crack both of those then we'd be a lot closer to something I can recognise as actually thinking.

jatora · 2 months ago
This is just wrong though. They absolutely learn in-context in a single conversation within context limits. And they absolutely can explain their thinking; companies just block them from doing it.
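A minimal sketch of what "learn in-context" means here, assuming the official openai Python client; the model name and the date-format scenario are just placeholders:

```python
# Illustration of in-context learning: earlier turns in the same conversation
# change later behavior, with no weight updates involved.
# Assumes the `openai` client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Format all dates as YYYY-MM-DD."},
    {"role": "assistant", "content": "Understood, I'll use YYYY-MM-DD."},
    {"role": "user", "content": "Actually, switch to DD.MM.YYYY from now on."},
    {"role": "assistant", "content": "Got it, DD.MM.YYYY it is."},
    # The correction above now shapes every later answer in this conversation:
    {"role": "user", "content": "What date is three days after 2025-01-30?"},
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)  # expected to follow the DD.MM.YYYY convention
```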
jatora commented on Claude Code on the web   anthropic.com/news/claude... · Posted by u/adocomplete
jswny · 2 months ago
I find Codex CLI to be very good too, but it’s missing tons of features that I use in Claude Code daily that keep me from switching full time.

- Good bash command permission system

- Rollbacks coupled with conversation and code

- Easy switching between approval modes (Claude has a keybind that makes this easy)

- Ability to send messages while it’s working (Codex just queues them up for after it’s done, Claude injects them into the current task)

- Codex is very frustrating when I have to keep allowing it to run the same commands over and over; in Claude this works well once I approve it to run a command for the session

- Agents (these are very useful for controlling context)

- A real plan mode (crucial)

- Skills (these are basically just lazy loaded context and are amazing)

- The sandboxing in Codex is so confusing; commands fail all the time because they try to log to some system directory or use internet access, which is blocked by default and hard to figure out

- Codex prefers python snippets to bash commands which is very hard to permission and audit

When Codex gets to feature parity, I’ll seriously look at switching, but until then it’s just a really good model wrapped in an okay harness

jatora · 2 months ago
to fix having to approve commands over and over: use WSL. Codex does not play nice with permissions/approvals on native Windows; WSL solves that completely
