Readit News
stoneyhrm1 commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
illiac786 · 16 days ago
Yeah I think that throwing more and more compute at the same training data produces smaller and smaller gains.

Maybe quantum compute would be a significant enough computing leap to meaningfully move the needle again.

stoneyhrm1 · 16 days ago
What exactly is being moved? It's trained on human data; you can't make code more perfect than what humans have already written.
stoneyhrm1 commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
somenameforme · 16 days ago
If AGI is ever achieved, it would open the door to recursive self improvement that would presumably rapidly exceed human capability across any and all fields, including AI development. So the AI would be improving itself while simultaneously also making revolutionary breakthroughs in essentially all fields. And, for at least a while, it would also presumably be doing so at an exponentially increasing rate.

But I think we're not even on the path to creating AGI. We're creating software that replicates and remixes human knowledge at a fixed point in time. And so it's a fixed target that you can't really exceed, which would itself already entail diminishing returns. Pair this with the fact that it's based on neural networks, which also invariably reach a point of sharply diminishing returns in essentially every field they're used in, and you have something that looks much closer to what we're doing right now - where all competitors will eventually converge on something largely indistinguishable from each other in terms of ability.

stoneyhrm1 · 16 days ago
> revolutionary breakthroughs in essentially all fields

This doesn't really make sense outside computers. Since AI would be training itself, it needs to have the right answers, but as of now it doesn't really interact with the physical world. The most it could do is write code, and check things that have no room for interpretation, like speed, latency, percentage of errors, exceptions, etc.
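
To be concrete, here's a toy sketch (in Python; the candidate function, test cases, and latency threshold are all made up for illustration) of the kind of check I mean, where generated code is scored purely on measurable things like correctness and latency, with no room for interpretation:

    import time

    def candidate_sort(xs):
        # stand-in for model-generated code being evaluated
        return sorted(xs)

    def evaluate(fn, cases, max_latency_s=0.01):
        """Score a candidate purely on objective metrics: error rate and latency."""
        errors = 0
        worst_latency = 0.0
        for inp, expected in cases:
            start = time.perf_counter()
            try:
                out = fn(list(inp))
            except Exception:
                errors += 1
                continue
            worst_latency = max(worst_latency, time.perf_counter() - start)
            if out != expected:
                errors += 1
        return {
            "error_rate": errors / len(cases),
            "worst_latency_s": worst_latency,
            "passes": errors == 0 and worst_latency <= max_latency_s,
        }

    cases = [((3, 1, 2), [1, 2, 3]), ((), []), ((5, 5), [5, 5])]
    print(evaluate(candidate_sort, cases))

Those numbers are unambiguous, which is exactly why code is the easy case.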

But what other fields would it do this in? How can it make strides in biology? It can't dissect animals, and it can't figure out more about plants than what humans feed into the training data. Regarding math, math is human-defined. Humans said "addition does this", "this symbol means that", etc.

I just don't understand how AI could ever surpass anything humans have known before, when it lives by the rules defined by us.

stoneyhrm1 commented on I'm Archiving Picocrypt   github.com/Picocrypt/Pico... · Posted by u/jaden
rikafurude21 · 18 days ago
I don't think many people would be excited at the thought of going from handcrafted artisan knitting to babying machines in the knitting factory. You need a certain type of autism to be into the latter.
stoneyhrm1 · 18 days ago
I'd think it would be more autistic to continue to use and have interest in something that's been superseded by something far easier and more efficient.

Who would you think is weirder, the person still obsessed with the horse and buggy, or the person obsessed with cars?

stoneyhrm1 commented on I'm Archiving Picocrypt   github.com/Picocrypt/Pico... · Posted by u/jaden
stoneyhrm1 · 18 days ago
I understand the author's sentiment, but industries don't exist solely because somebody wants them to. I mean, sure, hobbies can exist, but you won't be paid well (or even at all) to work with them.

Software engineering pays because companies want people to develop software. It pays so well because it's hard, but the coding portion is becoming easier. Vibe coding and AI are here to stay; the author can choose to ignore them and go preach to a dying field (specifically, writing code, not CS), or embrace them. We should be happy we no longer need to type out if statements and for loops 20 times and can instead focus on high-level architecture.

stoneyhrm1 commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
cmenge · 2 months ago
I kinda agree with both of you. It might be a required abstraction, but it's a leaky one.

Long before LLMs, I would talk about classes / functions / modules like "it then does this, decides the epsilon is too low, chops it up and adds it to the list".

The difference, I guess, is that it was only to a technical crowd and nobody would mistake this for anything it wasn't. Everybody knew that "it" didn't "decide" anything.

With AI being so mainstream and the math being much more elusive than a simple if..then, I guess it's just too easy to take this simple speaking convention at face value.

EDIT: some clarifications / wording

stoneyhrm1 · 2 months ago
I mean, you can boil anything down to its building blocks and make it seem like it didn't 'decide' anything. When you as a human decide something, your brain and its neurons just made some connections, with an output signal sent to other parts that resulted in your body 'doing' something.

I don't think LLMs are sentient or any bullshit like that, but I do think people are too quick to write them off before really thinking about how a nn 'knows' things similarly to how a human 'knows' things: it is trained and reacts to inputs and outputs. The body is just far more complex.

stoneyhrm1 commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
grey-area · 2 months ago
On the contrary, anthropomorphism IMO is the main problem with narratives around LLMs - people are genuinely talking about them thinking and reasoning when they are doing nothing of that sort (actively encouraged by the companies selling them) and it is completely distorting discussions on their use and perceptions of their utility.
stoneyhrm1 · 2 months ago
I thought this too but then began to think about it from the perspective of the programmers trying to make it imitate human learning. That's what a nn is trying to do at the end of the day, and in the same way I train myself by reading problems and solutions, or learning vocab at a young age, it does so by tuning billions of parameters.

I think these models do learn similarly. What does it even mean to reason? Your brain knows certain things, so it comes to certain conclusions, but it only knows those things because it was 'trained' on those things.

I reason that my car will crash if I go 120 mph on the other side of the road because previously I have 'seen' inputs where a car going 120 mph has a high probability of producing a crash, and similarly have seen inputs where a car driving on the other side of the road produces a crash. Combining the two tells me a crash is highly probable.
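
To make 'trained' concrete, here's a toy sketch (a single linear model in plain Python instead of a real network; the numbers are made up for illustration), where the 'knowledge' is nothing but parameters nudged by exposure to examples:

    import random

    # Target relationship the "model" is never told explicitly: y = 2x + 1.
    data = [(x, 2 * x + 1) for x in range(-5, 6)]

    w, b = random.random(), random.random()  # start with arbitrary parameters
    lr = 0.01                                # learning rate

    for epoch in range(2000):
        for x, y in data:
            pred = w * x + b
            err = pred - y
            # gradient descent: nudge the parameters to shrink the squared error
            w -= lr * err * x
            b -= lr * err

    print(f"learned w={w:.2f}, b={b:.2f}")   # converges toward w=2, b=1
    print("prediction for x=7:", w * 7 + b)  # the model now 'knows' roughly 15

Scale that loop up to billions of parameters and text instead of numbers, and that's roughly the sense in which I mean an LLM 'knows' things.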

stoneyhrm1 commented on AI note takers are flooding Zoom calls as workers opt to skip meetings   washingtonpost.com/techno... · Posted by u/tysone
mft_ · 2 months ago
I've led cross-functional teams in multiple organisations (albeit not in tech) and I'd argue it's a bit more complex than that. Regular team meetings can cover multiple needs, e.g.:

* Keeping everyone working on a complex project updated on progress

* Keeping everyone 'aligned' - (horrible corporate word but) essentially all working together effectively towards the same goals (be they short or long term)

* Providing a forum for catching and discussing issues as they arise

* A degree of project management - essentially, making sure that people are doing as they said they would

* Information sharing (note I prefer to cancel meetings if this is the only regular purpose)

* Some form of shared decision-making (depending on the model you have for this) and thus shared ownership

If a meeting 'owner' is sensitive to not wasting people's time and regularly shortens or cancels meetings, it can be done well, I believe.

stoneyhrm1 · 2 months ago
Spoken like a true project manager that every engineer hates.
stoneyhrm1 commented on Writing Code Was Never the Bottleneck   ordep.dev/posts/writing-c... · Posted by u/phire
andrelaszlo · 2 months ago
My most recent example of this is mentoring young, ambitious, but inexperienced interns.

Not only did they produce about the same amount of code in a day as they used to produce in a week (or two), but several other things made my work harder than before:

- During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).

- The time spent on trivial issues went down a lot, to almost zero, but the remaining issues were much more subtle and time-consuming to find and describe.

- Many bugs were of a new kind (to me): the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be. This breakdown of pattern-matching compared to "organic" code made the overhead much higher. Spending decades reviewing code and answering Stack Overflow questions often makes it possible to pinpoint not just a bug but how the author got there in the first place and how to help them avoid similar things in the future.

- A simple but bad (inefficient, wrong, illegal, ugly, ...) solution is a nice thing to discuss, but the LLM-assisted junior dev often cooks up something much more complex, which can be bad in many ways at once. The culture of slowly growing a PR from a little bit broken, thinking about design and other considerations, until it's high quality and ready for a final review doesn't work the same way.

- Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.

This led to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves. The junior dev would feel (I assume) much more productive and competent, but the response to their work would eventually lack most of the usual enthusiasm or encouragement from senior devs.

How do people work with these issues? One thing that worked well for me initially was to always require a lot of (passing) tests, but eventually these tests would suffer from many of the same problems.

stoneyhrm1 · 2 months ago
> "good catch, I'll fix that"

I see this a lot and have even done so myself. I think a lot of people in the industry are a bit too socially aware and think that if they start a discussion they look like they're trying too hard.

It's stupid, yes, but plenty of times I've started discussions only to be brushed off or not even replied to, and I believed it was because my responses were too long and nobody actually cared.
