Posted by u/atleastoptimal a month ago
Ask HN: What would convince you to take AI seriously?
Recently OpenAI announced that an AI model/system they developed achieved a gold medal at the IMO. The IMO is a very difficult exam; only the best high schoolers in the world even qualify, let alone win gold. Those who do often go on to cutting-edge mathematical research, like Terence Tao, who won the Fields Medal in 2006. It has also been rumored that DeepMind achieved the same result with a yet-to-be-released model.

Now, success on a tough math exam isn't "automating all human labor," but it is certainly a benchmark many thought AI would not achieve easily. Even so, many are claiming it isn't really a big deal, and that humans will still be far smarter than AIs for the foreseeable future.

My question is: if you are in the aforementioned camp, what would it take for you to adopt a frame of mind roughly analogous to "It is realistic that AI systems will become smarter than humans, and could automate all human labor and cognitive output within a single-digit number of years"?

Would it require seeing a humanoid robot perform some difficult task? (The Metaculus definition of AGI requires that a robot be able to satisfactorily assemble a circa-2021 Ferrari 312 T4 1:8-scale automobile model, or the equivalent.) Would it involve a Turing test of sufficient rigor? I'm curious what people's personal definition of "ok, this is really real" is.

heavyset_go · a month ago
I would start worrying if AI models can understand, reason, learn and incorporate new information on-the-fly without retraining or just stuffing information in context windows, RAG, etc. The worry would also depend on the economics of the entire model lifecycle, as well as the current state of mechanical automation.

We aren't getting that with next-token generators. I don't think we'll get there by throwing shit at the wall and seeing what sticks, either; I think we'll need a deeper understanding of the mind and of what intelligence actually is before we can implement it on our own, virtually or otherwise.

Similarly, we're pretty good at creating purpose-built machines, but when it comes to general/universal-purpose ones, the field is still in its infancy. The hand is still the most useful universal tool we have. It's hard to compete with the human mind + body when it comes to adapting to and manipulating the environment with purpose. There are quite literally billions of them, they create themselves, and their labor is really cheap, too.

There's my serious answer.

exabrial · a month ago
Have it admit it doesn’t know instead of sounding like a Reddit thread full of “experts” trying to one-up each other.
ofrzeta · a month ago
How could this even be possible with the current architectures? A statistical machine that statistically produces an utterance about what other utterances it is capable of producing?
exabrial · a month ago
I have no idea; I was just answering the question. I’m also slightly annoyed by its proliferation into literally everything while providing zero actual value, and by every company ignoring the insane environmental cost.
strken · a month ago
Four things: do meaningful novel work, learn over the course of a long conversation without poisoning its context, handle lies and accidental untruths, and generally be able to onboard itself and begin doing useful work for a business given the same resources as a new hire.

This isn't an exhaustive list, it's just intended to illustrate when I'd start seriously thinking AGI was possible with incremental improvements.

I take AI seriously in the sense that it's useful, can solve problems, and represents a lot of value for a lot of businesses, but not in the sense that I think the current methods are headed for AGI without further breakthroughs. I'm also not an expert.

andy99 · a month ago
[Withdrawing my comment, I don't think the original post was in good faith]

From OP's other comments:

> A lot of people here have an emotional aversion to accepting AI progress. They’re deep in the bargaining/anger/denial phase.

atleastoptimal · a month ago
Do you think my personal interpretation of people's sensibilities with respect to the subject matter of a question invalidates the question itself? I was noting how many smart people dismiss concrete evidence of AI progress. I feel it's useful to note the potential ego-preserving elements of certain beliefs since they prevent otherwise smart people from accepting reality.

I too wish AI progress wasn't happening as fast as it is. As a software developer, I want to imagine a future where my skills are useful. However, I haven't seen much convincing evidence or argument on this site that appropriately critiques short-term AI timelines without resorting to logical fallacies, name-calling, ad hominems, or other tired attempts at zingers that contribute nothing to the discourse.

ranger_danger · a month ago
> display intelligence

Defined as what, by who?

> a constrained problem that someone made up

How is that different from what humans do when asking questions?


al_borland · a month ago
It isn’t about how well AI can answer solved problems. Can it invent the future?

And let’s say AI does automate all human labor… what’s the plan? That happening will lead to chaos without some massive changes in how society is organized and functions. That change itself will be chaotic, massively disruptive, and painful for millions. If someone can’t answer that question, they have no business hyping up the end of human involvement in civilization.

cjoelrun · a month ago
Unless it’s their business to make/use said AI? Which will likely be a lot of businesses.
al_borland · a month ago
It’s still a bad plan. Who is going to buy their stuff, with what money, when all jobs are replaced by robots and AI?

Capitalism is driving this hype around cost cutting with AI, but capitalism requires people have capital to buy various goods and services. Where is that going to come from when unemployment hits 100%? Who are the customers?

Why would anyone be excited about this future before solving for this problem?

jfengel · a month ago
My AI professor, in the early 90s, described AI like this:

"In the 60s, we wanted to build computers that acted like people. Not just people, but smart people. So what do smart people do? We play chess! So we spent a lot of time beating chess and learned basically nothing about AI."

Beating the Math Olympiad strikes me as much the same. They're solving "hard" problems, but not solving easy ones.

I want a robot that can clean a toilet. Hand it a brush, send it into the room, and get it clean. Then have it make the bed, without crushing any of the stuff strewn haphazardly about. Something humans do for minimum wage because anybody at all can do it.

Physical manipulation of the real world isn't strictly required for AI, but as a test it rules out solving the hard problem while skipping the 'easy' one. The real world is very unforgiving of automata in uncontrolled circumstances, something that animals (not just humans) handle with minimal effort.

devn0ll · a month ago
When it starts curing diseases for real, like one or two sessions and you're done.

Because then I'll know they have been using it for real human benefit without trying to get humans hooked on recurring costs.

When it starts solving actual human problems like climate, or starts filling in our gaps of knowledge in science. When it starts lifting humans up to higher ground instead of replacing them to make a buck.

alganet · a month ago
In my opinion, that's a silly question.

Why do I even need to make up my mind about it?

rfarley04 · a month ago
It's ok (good?) to not have opinions about everything. Something we'll probably never see from an AI (as it's defined and built today).