Readit News
typpilol commented on Sprinkling self-doubt on ChatGPT   justin.searls.co/posts/sp... · Posted by u/ingve
schneems · 2 days ago
The article states what inputs they used and the output they observed: more tokens used and more time spent before returning an answer. That's a data point you can test. It may not be at the zoom level or with the exact content you're looking for, but I don't feel your criticism sticks here.

> testing where they change one word here or there and compare

You can be that person. You can write that post. Nothing is stopping you.

typpilol · a day ago
The point is... a decent article would have included all of that.

You're missing the forest for the trees with your response.

typpilol commented on Sprinkling self-doubt on ChatGPT   justin.searls.co/posts/sp... · Posted by u/ingve
typpilol · 2 days ago
This article is so sparse on details it's basically useless.

Does telling the AI to "just be correct" actually work? I have no idea after reading this article, because there are no details at all about what changed, the types of prompts used, etc.

typpilol commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
popalchemist · 3 days ago
No more so than regurgitating an entire book. While it could technically be possible in the case of certain repos that are ubiquitous on the internet (and therefore overrepresented in training data to the point that they are "regurgitated" verbatim, in whole), it is extremely unlikely and would only occur after deliberate prompting. The NYT suit against OpenAI shows (in discovery) that the NYT was only able to get partial results after deliberately prompting the model with portions of the text they were trying to force it to regurgitate.

So. Yes, technically possible. But impossible by accident. Furthermore, when you make this argument you reveal that you don't understand how these models work. They do not simply compress all the data they were trained on into a tiny storable version. They are, in effect, matrices of weights used to predict the most likely next token (read: 2-3 Unicode characters) given some input.

So the model does not "contain" code. It "contains" a way of doing calculations for predicting what text comes next.
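
To make "predicting what text comes next" concrete, here is a toy sketch of next-token selection (the vocabulary and logit values are invented for illustration, not taken from any real model):

    // A model emits a score (logit) per vocabulary token; softmax turns
    // the scores into probabilities, and greedy decoding picks the max.
    // All values here are made up for illustration.
    const vocab = ["return", " x", ";", "\n"];
    const logits = [2.1, 0.3, -1.0, 0.5];

    const exps = logits.map(Math.exp);
    const total = exps.reduce((a, b) => a + b, 0);
    const probs = exps.map((e) => e / total);

    // Greedy decoding: take the single most likely next token.
    const next = vocab[probs.indexOf(Math.max(...probs))];
    console.log(next); // "return"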

Finally, let's say the model does spit out, not entire works, but a handful of lines of code that appear in some codebase.

This does not constitute copyright infringement, as the lines in question a) represent a tiny portion of the whole work (and copyright only protects against the duplication of whole works or significant portions of a work), and b) there are a limited number of ways to accomplish a certain function, so it is not only possible but inevitable that two devs working independently could arrive at the same implementation. Using an identical implementation of a part of a work (which is what this case would be) is therefore no more illegal than the use of a certain chord progression, melodic phrasing, or drum rhythm. Courts have ruled on this thoroughly.

typpilol · 2 days ago
It's also why some companies do clean room design.
typpilol commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
ponector · 3 days ago
The best usage is to ask the LLM to explain existing code, or to search a legacy codebase.
typpilol · 3 days ago
I've found this to be not very useful in large projects, or in projects that are heavily modularized or fragmented across many files.

Sometimes it can't trace down all the data paths, and by the time it does, its context window is running out.

That seems to be the biggest issue I see in my daily use, anyway.

typpilol commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
ruszki · 3 days ago
Some gave. Some even recorded it and showed it, because they thought they were good with it. But they weren't good at all.

They were slower than coding by hand if you wanted to keep up quality. Some were almost as quick as copy-pasting from the code just above the generated block, but their quality was worse. They even left some bugs in the code during their reviews.

So the "different world" probably comes down to what counts as an acceptable level of quality. I know a lot of coders who don't give a shit whether what they're doing makes sense, or what their bad solution will cause in the long run. They ignore everything except the "done" state next to their tasks in Jira. They will never solve complex bugs; they simply don't care enough. At a lot of places they are the majority. For them, an LLM can be an improvement.

Claude Code the other day made a test for me which mocked everything out from the live code. Everything was green, everything was good. On paper. A lot of people simply wouldn't care to review it properly. That thing can generate a few thousand lines of semi-usable code per hour; it's not built to review what it produces. Serena MCP, for example, is specifically built not to review what it does; its creators state as much.
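
To illustrate that failure mode, here's a hypothetical Jest-style test in the same spirit (every name is invented): the unit under test is itself mocked away, so the test can only pass.

    // Nothing real is exercised: the assertion checks our own mock.
    // Green on paper, worthless in practice.
    import { jest, test, expect } from "@jest/globals";

    // The "unit under test" is just a mock with a canned response.
    const chargeCustomer = jest.fn(async (_id: string, _amountCents: number) => ({
      status: "ok",
    }));

    test("charges the customer", async () => {
      const result = await chargeCustomer("cust_123", 500);
      expect(result.status).toBe("ok"); // always passes
    });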

typpilol · 3 days ago
Honestly I think LLMs shine best when you're first getting into a language.

I just recently got into JavaScript and TypeScript, and being able to ask the LLM how to do something and get some sources and linked examples is really nice.

However, using it in a language I'm much more familiar with really decreases the usefulness. Even more so when your codebase is mid-to-large-sized.

typpilol commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
brushfoot · 3 days ago
I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?

Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.

typpilol · 3 days ago
Honestly, the best way to get good code, at least with TypeScript and JavaScript, is to have like 50 ESLint plugins.

That way the linter constantly yells at Sonnet 4 and gets the code into at least a better state.

If anyone is curious, I have a massive ESLint config for TypeScript that really gets good code out of Sonnet; a trimmed sketch of the idea is below.

But before I started doing this, the code it wrote was so buggy, and it was constantly trying to duplicate functions into separate files, etc.
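
As promised, a trimmed sketch of the idea (a typescript-eslint flat config; the rule picks here are examples, not my full set):

    // eslint.config.mjs -- a trimmed example, not the whole config.
    // Strict, type-aware rules give Sonnet immediate, specific feedback.
    import tseslint from "typescript-eslint";

    export default tseslint.config(
      ...tseslint.configs.strictTypeChecked,
      {
        languageOptions: {
          parserOptions: { projectService: true },
        },
        rules: {
          // Catch the duplicated and dead code agents tend to produce.
          "no-duplicate-imports": "error",
          "@typescript-eslint/no-unused-vars": "error",
          "@typescript-eslint/no-floating-promises": "error",
          "@typescript-eslint/no-explicit-any": "error",
        },
      },
    );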

typpilol commented on Zedless: Zed fork focused on privacy and being local-first   github.com/zedless-editor... · Posted by u/homebrewer
echelon · 3 days ago
You can leave LLM Q&A on the table if you like, but tab auto complete is a godlike power.

I'm auto-completing crazy complex Rust match branches for record transformation. 30 lines of code, hitting dozens of fields and mutations, all with a single keystroke. And then it knows where my next edit will be.

I've been programming for decades and I love this. It's easily a 30-50% efficiency gain when plumbing fields or refactoring.

typpilol · 3 days ago
Honestly, I find it useful for simple things, like having to change something across a ton of columns that you can't hit with an easy find-and-replace.

Really is game-changing

typpilol commented on Zedless: Zed fork focused on privacy and being local-first   github.com/zedless-editor... · Posted by u/homebrewer
3836293648 · 3 days ago
I feel like everyone praising AI is a webdev with extremely predictable problems that are almost entirely boilerplate.

I've tried throwing LLMs at every part of the work I do, and they've been useless at everything beyond explaining new libraries or acting as a search engine. Any time one tries to write any code at all, the output has been worthless.

But then I see so many praising all it can do and how much work they get done with their agents and I'm just left confused.

typpilol · 3 days ago
Can I ask what kind of area you work in?
typpilol commented on OpenMower – An open source lawn mower   github.com/ClemensElflein... · Posted by u/rickcarlino
bluGill · 5 days ago
The problem with GPS is that it doesn't work (reliably) under trees, which many lawns have. Since trees are common in yards, they need something else anyway, and at that point you can get rid of the GPS.
typpilol · 5 days ago
Need to get multi-GNSS like my Garmin cycling computer has.

It's so accurate it's scary. It shows me what side of the road I'm on and even how far into the lane I am lol

typpilol commented on How Keeta processes 11M financial transactions per second with Spanner   cloud.google.com/blog/top... · Posted by u/xescure
ramraj07 · 6 days ago
How is it a distributed blockchain if it runs exclusively on their Google Cloud account?
typpilol · 6 days ago
Distributed between the data center walls, I guess.
