petetnt commented on GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it   garymarcus.substack.com/p... · Posted by u/kgwgk
petetnt · 17 days ago
GPT-5 is just OpenAI getting started. Just wait and see what GPT-6 is capable of, and imagine that GPT-6 is just OpenAI getting started: if GPT-6 were a high school student, GPT-7 would be an expert with a master's degree. But GPT-7 is just OpenAI getting started.
petetnt commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
mg · a month ago
You are right. I always wish for more specifics too when we talk about code here.

The library was https://mediabunny.dev/

Before that, I used my own proprietary code for media encoding/decoding. I also tested a WASM port of ffmpeg for a while.

Mediabunny's documentation might be fine for some developers, but personally I prefer a reference where I have a list of all functions and their specifications.

Yes, I understand the library much better now.

petetnt · a month ago
Personally, looking at the documentation, I would say that "no good documentation" is highly misleading, because the documentation it provides is incredibly detailed, from quick starts to in-depth explanations, offers a lot of examples, and has very high-quality typings with inline documentation. Not to mention the code itself is documented thoroughly. Sure, it doesn't have an API reference, but you get that from the typings; that's what I usually do - just check the imports first and go from there.
petetnt commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
mg · a month ago
In the 90s a friend told me about the internet. And that he knows someone who is in a university and has access to it and can show us. An hour later, we were sitting in front of a computer in that university and watched his friend surfing the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library that would have required me to dive deep down into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite it using the library. It succeeded at the first attempt. The rewrite itself was small enough that I could read all code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
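A minimal sketch of that "one prompt, one file" workflow (the prompt wording and the example inputs below are illustrative, not taken from the linked post):

```python
# "One prompt one file. No code edits.": paste the entire library and the
# entire program into a single prompt and ask for one complete rewritten
# file, instead of iterating on edits agent-style.
def build_prompt(library_src: str, program_src: str) -> str:
    return (
        "Rewrite the program below so that it uses the library below.\n"
        "Return one complete file, no explanations.\n\n"
        "=== library ===\n" + library_src + "\n\n"
        "=== program ===\n" + program_src + "\n"
    )

# Illustrative inputs; in practice these would be whole source files.
prompt = build_prompt("export const mul = (a, b) => a * b;",
                      "console.log(3 * 4);")
print(prompt.splitlines()[0])
# → Rewrite the program below so that it uses the library below.
```

The prompt is then pasted into the model as-is, and the single file it returns replaces the original wholesale, which is what makes the full diff reviewable in one sitting.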

petetnt · a month ago
There’s always a distinct lack of names in posts like this. What was the library, and what was the program being changed to use it? You say it had ”no good documentation”, but it clearly has some sort of documentation, considering the LLM did such a good job on the rewrite. Do you understand the ”large library” now?
petetnt commented on Project Vend: Can Claude run a small shop? (And why does that matter?)   anthropic.com/research/pr... · Posted by u/gk1
rossdavidh · 2 months ago
Anyone who has long experience with neural networks, LLM or otherwise, is aware that they are best suited to applications where 90% is good enough. In other words, applications where some other system (human or otherwise) will catch the mistakes. This phrase: "It is not entirely clear why this episode occurred..." applies to nearly every LLM (or other neural network) error, which is why it is usually not possible to correct the root cause (although you can train on that specific input and a corrected output).

For some things, like say a grammar correction tool, this is probably fine. For cases where one mistake can erase the benefit of many previous correct responses, and more, no amount of hardware is going to make LLMs the right solution.

Which is fine! No algorithm needs to be the solution to everything, or even most things. But much of people's intuition about "AI" is warped by the (unmerited) claims in that name. Even as LLMs "get better", they won't get much better at this kind of problem, where 90% is not good enough (because one mistake can be very costly) and problems need discoverable root causes.
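The "one mistake can be very costly" point can be made concrete with a quick expected-value check (the gain and cost figures below are made up for illustration):

```python
# Per-response expected value for a system with success probability p:
# each success yields `gain`, each failure costs `cost`.
def expected_value(p_success: float, gain: float, cost: float) -> float:
    return p_success * gain - (1 - p_success) * cost

# Cheap mistakes (e.g. a grammar tool a human reviews): 90% is a net win.
print(round(expected_value(0.90, gain=1.0, cost=2.0), 2))   # → 0.7
# Costly mistakes (one error erases many wins): 90% is a net loss.
print(round(expected_value(0.90, gain=1.0, cost=50.0), 2))  # → -4.1
```

With the same 90% success rate, the sign of the expected value flips entirely on how expensive a single failure is, which is the asymmetry the comment above is pointing at.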

petetnt · 2 months ago
The only job in the world where a 90% success rate is acceptable is telemarketing, and that has been run by bots since the 90s.
petetnt commented on Largest punk archive to find new home at MTSU's Center for Popular Music   mtsunews.com/worlds-large... · Posted by u/gnabgib
petetnt · 3 months ago
The MRR archive is probably the single most important collection in the field of punk music, and I am glad that it has found a (hopefully) permanently safe home. It would be amazing if the Center for Popular Music digitized the materials - green tape and all - and indexed them for the public.

There are so many things there that probably nobody has seen or heard in decades, not to mention letters, notes, and other additions alongside the records and flyers.

Also, it's a fuzzy feeling to imagine that one of my recordings is now lying around in a box at MTSU, waiting for someone to discover it, possibly decades from now.

Support your local libraries and archives and all the librarians and archivists!

petetnt commented on Claude Code: An Agentic cleanroom analysis   southbridge-research.noti... · Posted by u/hrishi
elliotec · 3 months ago
Better start now! It’s incredible and unbelievable how productive it is. In my opinion it still takes someone with a staff level of engineering experience to guide it through the hard stuff, but it does in a day with just me what multiple product teams would take months to do, and better.

I’m building a non-trivial platform as a solo project/business and have been working on it since about January. I’ve gotten more done in two nights than I did in 3 months.

I’m sure there are tons of arguments and great points against what I just said, but it’s my current reality and I still can’t believe it. I shelled out the $100/mo after one night of blowing through the $20 credits I used as a trial.

It does struggle with design and front end. But don’t we all.

petetnt · 3 months ago
Designers and frontend developers don’t struggle with those. That’s why they are designers and frontend developers.

Before those 3 months you mentioned, how much time did you spend coding on average (at work, or as a hobby), percentage-wise?

petetnt commented on The David Lynch Collection   juliensauctions.com/en/au... · Posted by u/Duanemclemore
Duanemclemore · 3 months ago
"A rare look behind the red curtain of one of the most influential artists of our time..."

Even if you're in the "just looking" category like me, this is such a great glimpse into the life and creative process of a true original. I loved going through this because it runs the gamut from the completely banal, like light stands, to the personal, like custom furniture he made by hand. And then there's the just plain wacky stuff - a couple of Mr. Coffee coffeemakers currently going for $1,250!?

Anyway - thought I might not be the only David Lynch fan out there, and you may get a kick out of this.

petetnt · 3 months ago
If anyone wants another glimpse behind the red curtain, I highly recommend the documentary David Lynch: The Art Life from 2016. It focuses on David Lynch's upbringing and formative years as an artist, up until the release of Eraserhead, which completely changed the direction of his career (or maybe rocketed it forward?). It also shows Lynch working in his workshop, with some of the tools you see in this auction too.

The collection has some absolute grails for any fan too, like the original script for Twin Peaks, back when it was still titled Northwest Passage.

RIP to a true master.

petetnt commented on Watching AI drive Microsoft employees insane   old.reddit.com/r/Experien... · Posted by u/laiysb
jeswin · 3 months ago
I find it amusing that people (even here on HN) expect a brand-new tool (among the most complex ever) to perform adequately right off the bat. It will require a period of refinement, just like any other tool or process.
petetnt · 3 months ago
People have grown to expect at least adequate performance from products that cost up to 39 dollars a month (plus additional costs) per user. In the past this would have been called a tech demo at best.
petetnt commented on Watching AI drive Microsoft employees insane   old.reddit.com/r/Experien... · Posted by u/laiysb
petetnt · 3 months ago
GitHub has spent billions of dollars building an AI that struggles with things like whitespace-related linting errors on one of the most mature repositories available. That would probably be okay for a hobbyist experiment, but they are selling this as a groundbreaking product that costs real money.
petetnt commented on GitHub Copilot Coding Agent   github.blog/changelog/202... · Posted by u/net01
doug_durham · 3 months ago
I find your comment calling technical documentation "AI slop" strange. It isn't a choice between finely crafted prose and banal text; it's documentation that exists versus documentation that doesn't, or documentation that is hopelessly out of date. In my experience, LLMs do a wonderful job of translating from code to documentation. They even do a good job inferring the reasons for design decisions. I'm all in on LLM-generated technical documentation. If I want well-written prose, I'll read literature.
petetnt · 3 months ago
Documentation is not just translating code to text - I don't doubt that LLMs are wonderful at that; it's what they understand. They don't understand users, though, and that's what separates a great documentation writer from someone who merely documents.

u/petetnt

Karma: 1072 · Cake day: January 24, 2016
About
[ my public key: https://keybase.io/petetnt; my proof: https://keybase.io/petetnt/sigs/7MdM5iTKMVmYISiCVWhvTQCEFO_D76CwDZH4zDZY9KA ]