Readit News
ilrwbwrkhv · 5 months ago
It will never be. I don't know why people keep trying to turn it into a research scientist. It's a great helper, but it has no original insight, and breakthroughs happen through original insight. LLMs are simply a conditional probability net over existing data, so they can never have an original insight. I don't know why this is so hard to accept.
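To make "conditional probability net" concrete, here is a rough sketch of what inference looks like; GPT-2 via Hugging Face transformers is just my illustrative pick, nothing more:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # gpt2 is purely an illustrative model choice
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Breakthroughs happen through", return_tensors="pt").input_ids
    for _ in range(10):
        logits = model(ids).logits[0, -1]      # scores for the next token
        probs = torch.softmax(logits, dim=-1)  # P(token | everything so far)
        next_id = torch.multinomial(probs, 1)  # sample from that distribution
        ids = torch.cat([ids, next_id[None]], dim=1)
    print(tok.decode(ids[0]))

Every token is drawn from a distribution conditioned on what came before; nothing in the loop steps outside the training data.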
dauhak · 5 months ago
This makes no sense. You can describe the brain reductively enough and make it sound like it can't have an original insight either. Transformers are, in theory, expressive enough function approximators; there's no reason why a future one couldn't have novel insights.

This is such a weird misconception I keep seeing: the fact that the loss function during training minimises cross-entropy (i.e. maximizes the probability of the correct token) doesn't mean the model can't do "real" thinking. If circuitry that does "real" thinking is the best solution available to SGD, then that is obviously what SGD will find.
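To be concrete about the objective, here's a toy sketch of mine; the shapes and numbers are made up:

    import torch
    import torch.nn.functional as F

    # toy shapes, not a real model
    vocab, seq = 50_000, 128
    logits = torch.randn(seq, vocab)           # model outputs, one row per position
    targets = torch.randint(0, vocab, (seq,))  # the actual next tokens

    # Minimizing this is exactly maximizing log P(correct next token | context).
    loss = F.cross_entropy(logits, targets)

The loss constrains what the outputs must be, not what internal circuitry produces them.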

goatlover · 5 months ago
And why is there even a desire to replace research scientists? Presumably this is the kind of job humans find meaningful and are good at. I don't understand framing AI as a replacement for humans rather than as a smart tool for humans to make use of.
dailykoder · 5 months ago
Why is there even a desire to replace software developers? Presumably this is the kind of job humans find meaningful

Why is there even a desire to replace car manufacturers? Presumably this is the kind of job humans find meaningful

[...]

dantheman · 5 months ago
Why? To increase productivity and improve the human condition. If AI can do research, then technological and scientific progress will accelerate dramatically.
ninetyninenine · 5 months ago
Talk to the person who pays the scientist. Anyone who works, works for someone, and whoever you work for is the person who wants to replace you.
dekhn · 5 months ago
You're conflating LLMs with AI. Strictly speaking, from what we understand of physics, chemistry, biology, computing, and mathematics, there is no coherent argument that you could not build an AI which could be an effective research scientist (which I define as: a system which produces novel hypotheses based on current scientific knowledge that are likely to be true, and is capable of evaluating their likelihood quickly enough to be relevant to human endeavors.)

I imagine that such a system would probably include at least one component that looked much like an LLM.

Not all breakthroughs happened due to original insight; many came from tediously improving techniques through fairly mundane means, or from advances in other areas.

dgfitz · 5 months ago
> which I define as: a system which produces novel hypotheses based on current scientific knowledge that are likely to be true, and is capable of evaluating their likelihood quickly enough to be relevant to human endeavors.

Produces hypotheses which are likely to be true? Pardon my ignorance, but have we even proven gravity to be true yet? Sure, I think gravity exists and is true; however, your definition of AI seems like Swiss cheese.

ninetyninenine · 5 months ago
The biggest problem with LLMs isn't that they lack original insight. It's that their insight is so original that we call it hallucination.

We like to think humans are the most creative things on the face of the earth, and we don't like to attribute creativity to LLMs. The sad reality is that LLMs are likely more creative than humans.

awofford · 5 months ago
I think the distinction is that hallucinations are incorrect. You can be super creative building a new chair, but if you can’t sit in it, it’s not a chair.
pclmulqdq · 5 months ago
Most humans are also too creative, but we have moderating impulses that tell us as much. Very few humans have the skill of riding the cutting edge without going too far off either side of it, and most can only do that in a very narrow subfield.
great_psy · 5 months ago
I wouldn't be so dismissive. Research is just a loop: hypothesis, experiment, collect data, form a new hypothesis. There is some creativity required for scientific breakthroughs, but 99.9% of scientists don't need that creativity. They just need grit and stamina.
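Sketched as code (the function names are placeholders of mine, nothing standard):

    def research(hypothesis, run_experiment, propose, supported):
        # placeholder callables: run_experiment(), propose(), supported()
        # loop until the data supports the current hypothesis
        while not supported(hypothesis):
            data = run_experiment(hypothesis)       # experiment + data collection
            hypothesis = propose(hypothesis, data)  # new hypothesis from the results
        return hypothesis

All the disputed creativity hides inside propose(); the rest of the loop is grit and stamina.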
didericis · 5 months ago
I wouldn't be so dismissive of the objection.

That loop involves far more flexible, goal-oriented attention, more intrinsic/implicit understanding of plausible cause and effect based on context, and more novel idea creation than it seems.

You can only brute-force, with combinatorics and probabilities, areas that have been well mapped by human attention; piggybacking off lots of human-digested data is just a clever way of avoiding those issues. Research is, by definition, novel human attention directed at a given area, so it can't benefit from that strategy the way well-trodden domains can.

XenophileJKO · 5 months ago
I think the whole idea of "original insight" is doing a lot of heavy lifting here.

Most innovation is derivative, either from observation or from cross-application. People aren't sitting in isolation chambers their whole lives and coming up with things in the absence of input.

I don't know why people think a model would have to manifest a theory in the absence of input.

wholinator2 · 5 months ago
And insight. Insight can be gleaned from a comprehensive knowledge of all previous trials and the pattern that emerges. But the big insights can also come from simple random attempts people make because they don't know something is impossible. While AI _may_ be capable of the first type, it certainly won't be capable of the second.
radioactivist · 5 months ago
I think this comment is significantly more dismissive of science and scientists than the original comment was of AI.
ZYbCRq22HbJ2y7 · 5 months ago
Awfully bold to claim that 99.9% of scientists lack the need for "creativity". Creativity in methodology creates gigantic leaps away from reliance on grit and stamina.

maxlin · 5 months ago
Yeah, that's exactly what a HUMAN would say ...
gwern · 5 months ago
Sounds like they only evaluated GPT-4o and weaker LLMs, as of mid-last year?

aaviator42 · 5 months ago
I was thinking last night. Shouldn't software we make help people instead of replacing them? Why must innovation be in the direction of and at the cost of replacing humans?
pj_mukh · 5 months ago
"I was thinking last night. Shouldn't <all innovation> we make help people instead of replacing them? Why must innovation be in the direction of and at the cost of replacing humans?"

-Humans when electricity replaced lamplighter jobs [1]

[1]: https://sloanreview.mit.edu/article/learning-from-automation...

frotaur · 5 months ago
I really don't care if jobs are replaced, so long as people are still able to make a living.

It really becomes a problem if you replace humans wholesale and don't come up with something, such as UBI, that still allows them to make a living.

I think that is the big difference between the lamplighter situation, and the situation at hand.

hackable_sand · 5 months ago
True then, still true now.
lamename · 5 months ago
It doesn't have to be. But often the executives or investors who stand to profit the most from innovation also have strong public-facing influence over the narrative. Employees cost a ton, so it's self-serving both to promote the product to like-minded people and to hype the product itself.
devit · 5 months ago
That's what it does.

"Replacement" is only a problem for people who are dependent on someone else being dependent on them.

xigency · 5 months ago
> "Replacement" is only a problem for people who are dependent on someone else being dependent on them.

Not so. Replacement is a huge problem for people who have others depending on them to cover the cost of living.

It can also be quite dangerous in a game where the costs of losing include homelessness or death.

In fact, it might be desirable to some political figures to drive up enlistment numbers by putting more people in such precarious situations.

But what do I know, I read a book and an AI can do that for you now... so... don't think too much about it.

Avicebron · 5 months ago
We've been optimizing for the wrong metrics? Infinite growth was fine when the map still had "here be dragons"; taken to absurdity, you get the profit-driven, neurotic company architecture that optimizes for the goal of optimizing. Every single person can be playing the right cards, but the end goal is to move value from A to B with A's lost value never considered, and when A's lost value is people with lives, we can pick up and move, but the next A is already there. Aka: no more dragons.

edit for clarity

ArthurStacks · 5 months ago
Another headline to correct: "Whiny, desperate scientists, fearing their grift is up, try to claim that AI research isn't that good."
matznerd · 5 months ago
How can we trust a human to run the study? Isn't there a bias? It needs an AI prompted to be a research scientist as a co-author for it to be balanced.
theamk · 5 months ago
... and if that AI doesn't give you the answer you want, re-run it with minor prompt modifications.
0x5f3759df-i · 5 months ago
People are really over-indexing on current AI capabilities.

We’re barely 2 years on from ChatGPT’s initial release, and we’ve gone from “this thing can put words together in a semi-coherent way” to “this thing produces undergrad-level research papers on anything you ask about”.

Where will we be in another 2 years? Probably not at AGI, but there’s no sign this is slowing down.

theamk · 5 months ago
I dunno. I remember reading about glue on pizza almost a year ago... and today I was talking to GitHub tech support, and their AI bot (presumably the latest and greatest, with the best minds programming it) suggested a command that does not exist. And Google's AI summary is still hilariously bad for any moderately complex question.

I don't see AI yielding consistently accurate answers anytime soon, and certainly not within 2 years.

0x5f3759df-i · 5 months ago
The best models are not GitHub’s support bot (Microsoft isn’t even creating its own models) or Google’s AI summary.

If you haven’t used Claude 3.7 extended thinking to write code or ChatGPT Deep Research to investigate a topic, you are not seeing what the capabilities are at the cutting edge.

https://aider.chat/docs/leaderboards/

None of it is perfect, obviously, and it’s not going to take everyone’s job next year. But people are not updating their thinking properly if they haven’t used the latest paid models.