Readit News
sponnath commented on Google Antigravity   antigravity.google/... · Posted by u/Fysi
sponnath · a month ago
How are a compiler and an LLM equivalent abstractions? I'm also seriously doubtful of the 10x claim any time someone brings it up when AI is being discussed. I'm sure they can be 10x for some problems, but they can also be -10x. They're not as consistently predictable (and good) as compilers are.

The "learn to master it or become obsolete" sentiment also doesn't make a lot of sense to me. Isn't the whole point of AI as a technology that people shouldn't need to spend years mastering a craft to do something well? It's literally trying to automate intelligence.

sponnath commented on 'Attention is all you need' coauthor says he's 'sick' of transformers   venturebeat.com/ai/sakana... · Posted by u/achow
mrieck · 2 months ago
My workflow is transformed. If yours isn’t you’re missing out.

Days that I’d normally feel overwhelmed from requests by management are just Claude Code and chill days now.

sponnath · 2 months ago
I've tried to make AI work but a lot of times the overall productivity gains I do get are so negligible that I wouldn't say it's been transformative for me. I think the fact that so many of us here on HN have such different experiences with AI goes to show that it is indeed not as transformative as we think it is (for the field at least). I'm not trying to invalidate your experience.
sponnath commented on 'Attention is all you need' coauthor says he's 'sick' of transformers   venturebeat.com/ai/sakana... · Posted by u/achow
mountainriver · 2 months ago
Software, and it’s wildly positive.

Takes like this are utterly insane to me

sponnath · 2 months ago
Wouldn't say it's transformative.
sponnath commented on State of AI Report 2025   stateof.ai/... · Posted by u/SMAAART
jeetsundareep · 2 months ago
On the contrary; it is prominent to leverage the synergy of ontological orthogonalities to maximize a diverse approach.
sponnath · 2 months ago
This is peak pseudo-intellectualism.
sponnath commented on AGENTS.md – Open format for guiding coding agents   agents.md/... · Posted by u/ghuntley
stingraycharles · 4 months ago
It is. README is for humans, AGENTS / etc is for LLMs.

Document how to use and install your tool in the readme.

Document how to compile and test, along with architecture decisions, coding standards, repository structure, etc., in the agents doc.
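
A minimal AGENTS.md following that split might look like this (the commands, paths, and section names below are purely illustrative, not part of any spec):

```markdown
# AGENTS.md

## Build & test
- `npm install`, then `npm run build`
- Run `npm test` before committing; all tests must pass

## Architecture
- `src/core/` holds the domain logic; `src/cli/` is a thin wrapper around it
- Prefer adding features in `src/core/` and exposing them via the CLI

## Coding standards
- TypeScript strict mode; no default exports
- Keep functions under ~50 lines; add a unit test for each new module
```

The README for the same project would instead cover what the tool does and how end users install and run it.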

sponnath · 4 months ago
Why would these things not be relevant for humans?
sponnath commented on GPTs and Feeling Left Behind   whynothugo.nl/journal/202... · Posted by u/Bogdanp
jvanderbot · 4 months ago
I'm convinced the vast difference in outcome with LLM use is a product of the vast difference in jobs. For front-end work it's just amazing. Spits out boilerplate and makes alterations without any need of help. For domain-specific backend work, for example robotics, it's bad. It tries to puke out bespoke A*, or invents libraries and functions. I'm way better off hand-coding these things.

The problem is this is classic Gell-Mann Amnesia. I can have it restyle my website with zero work, even adding StarCraft 2 or NBA Jam themes, but ask it to work on a planning or estimation problem and I'm annoyed by its quality. It's probably bad at both, but I don't notice. If we have 10 specializations required on an app, I'm only mad about 10%. If I want to make an app entirely outside my domain, yeah, sure, it's the best ever.

sponnath · 4 months ago
I don't think it's correct to generalize front-end work like this. I've found it very underwhelming for the kind of front-end stuff I do. It makes embarrassing mistakes. I've found it quite useful for a lot of the braindead code I need to write for CRUD backends though.

It's good at stuff that most competent engineers can get right while also having the sort of knowledge breadth an average engineer would lack. You really need to be a domain expert to accurately judge its output in specific areas.

sponnath commented on ChatGPT agent: bridging research and action   openai.com/index/introduc... · Posted by u/Topfi
levocardia · 5 months ago
Maybe this is the "bitter lesson of agentic decisions": hard things in your life are hard because they involve deeply personal values and complex interpersonal dynamics, not because they are difficult in an operational sense. Calling a restaurant to make a reservation is trivial. Deciding what restaurant to take your wife to for your wedding anniversary is the hard part (Does ChatGPT know that your first date was at a burger-and-shake place? Does it know your wife got food poisoning the last time she ate sushi?). Even a highly paid human concierge couldn't do it for you. The Navier–Stokes smoothness problem will be solved before "plan a birthday party for my daughter."
sponnath · 5 months ago
I would even argue that the hard parts of being human don't need to be automated. Why are we all in a rush to automate everything, including what makes us human?

sponnath commented on I don't think AGI is right around the corner   dwarkesh.com/p/timelines-... · Posted by u/mooreds
ninetyninenine · 5 months ago
Alright, let’s get this straight.

You’ve got people foaming at the mouth anytime someone mentions AGI, like it’s some kind of cult prophecy. “Oh it’s poorly defined, it’s not around the corner, everyone talking about it is selling snake oil.” Give me a break. You don’t need a perfect definition to recognize that something big is happening. You just need eyes, ears, and a functioning brain stem.

Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension.

And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.

And don’t give me that “it sounds profound but it’s really just crap” line. That’s 90 percent of academia. That’s every self-help book, every political speech, every guy with a podcast and a ring light. If sounding smarter than you while being wrong disqualifies a thing, then we better shut down half the planet.

Look, you’re not mad because it’s dumb. You’re mad because it’s not that dumb. It’s close. Close enough to feel threatening. Close enough to replace people who’ve been coasting on sounding smart instead of actually being smart. That’s what this is really about. Ego. Fear. Control.

So yeah, maybe it’s not AGI yet. But it’s smarter than the guy next to you at work. And he’s got a pension.

sponnath · 5 months ago
Something big is definitely happening but it's not the intelligence explosion utopia that the AI companies are promising.

> Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension.

My experience with LLMs has been all over the place. They're insanely good at comprehending language. As a side effect, they're also decent at comprehending complicated concepts like math or programming, since most of human knowledge is embedded in language. This does not mean they have a thorough understanding of those concepts. It is very easy to trip them up. They also fail in ways that are not obvious to people who aren't experts in whatever is the subject of their output.

> And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.

I feel like this is handwaving away the shortcomings a bit too much. It does not screw up in the same way humans do. Not even close. Besides, I think computers should rightfully be held to a higher standard. We already have programs that can automate tasks that human brains would find challenging and tedious to do. Surely the next frontier is something with the speed and accuracy of a computer combined with the adaptability of human reasoning.

I don't feel threatened by LLMs. I definitely feel threatened by some of the absurd amount of money being put into them though. I think most of us here will be feeling some pain if a correction happens.

sponnath commented on Techno-feudalism and the rise of AGI: A future without economic rights?   arxiv.org/abs/2503.14283... · Posted by u/lexandstuff
mitthrowaway2 · 5 months ago
For most of us non-dualists, the human brain is an existence proof. Doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
sponnath · 5 months ago
The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.

u/sponnath

Karma: 44 · Joined September 22, 2023