geor9e · 7 months ago
Too late. I added a 5-minute cron job for Cursor's compose tab in agent mode that keeps replying "keep going, think of more fixes and features, random ideas are fine, do it all for me". I won't pull the plug.
djohnston · 7 months ago
How do you programmatically interact with Cursor??
geor9e · 7 months ago
Same way I botted RuneScape as a child: by simulating user inputs with any macro app.
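For the curious, roughly the shape of it with pyautogui. Purely a sketch: the screen coordinates, file path, and cron line are made up for illustration, so find your own coordinates with pyautogui.position() and adjust for your setup.

  # nudge_cursor.py, run every 5 minutes from cron, e.g.:
  #   */5 * * * * DISPLAY=:0 python3 /home/me/nudge_cursor.py
  # Simulates user input to Cursor's compose tab, old-school macro style.
  import pyautogui

  PROMPT = ("keep going, think of more fixes and features, "
            "random ideas are fine, do it all for me")

  # Click where the compose input box sits on screen (hypothetical x, y).
  pyautogui.click(1600, 1000)

  # Type the nudge and submit it.
  pyautogui.write(PROMPT, interval=0.02)
  pyautogui.press("enter")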
voisin · 7 months ago
You’ve created a monster
rickydroll · 7 months ago
you say monster, I say plastic pal who's fun to be with
fragmede · 7 months ago
AGI confirmed.
upghost · 7 months ago
This is a purely procedural question, not supporting or critiquing in any way, other than to note that this reads like an editorial in the format of a scientific paper. The question is: are there rules about what constitutes a paper, or can you put whatever you want in there as long as you follow "scientific paper format"?
dhruvbatra · 7 months ago
This looks like ICML formatting (and the submission deadline just passed).

ICML25 has an explicit call for position papers: https://icml.cc/Conferences/2025/CallForPositionPapers

upghost · 7 months ago
Wow, great observation. Thank you. Makes sense. I'd never heard of a "position paper" before.
mark_l_watson · 7 months ago
I really enjoy Margaret Mitchell’s podcast (she is the first author on the paper), and perhaps I missed something important in the paper, but:

Shouldn’t we treat separately autonomous agents that we write ourselves, or purchase to run on our own computers, on our own data, and that use public APIs for data?

If Margaret is reading this thread, I am curious what her opinion is.

For autonomous agents controlled by corporations and governments, I mostly agree with the paper.

in3d · 7 months ago
I'd recommend looking for other sources of information if you're relying on someone who co-authored the paper that introduced the most misleading and uninformed term of the LLM era: "stochastic parrot".
currymj · 7 months ago
it was a pretty defensible term at the time the paper came out, in the context of how LLMs were being trained and used.

in this paper, it's clear that the authors don't think modern LLM-based systems are just stochastic parrots.

bamboozled · 7 months ago
People are going to be developing these no matter what. Whether it wipes us out or not is just up to fate, really.
esafak · 7 months ago
We can constrain their use, as with nuclear materials.
fizx · 7 months ago
Nuclear materials have the advantages of being rare, dangerous to handle, and hard to copy over the internet.
johanneskanybal · 7 months ago
No, not really. There's no power in the world that could restrain this in its current form even mildly, much less absolutely. Why do you think that would be even slightly possible?
roenxi · 7 months ago
Despite doing a pretty decent job of containing the risk, we're still on the clock until something terrible happens with nuclear war. Humanity appears to be well on track to killing millions to billions of people, rolling the dice fairly regularly while waiting for a 1% chance to materialize.

If we only handle AI that well, doom is probable. AI has economic uses, unlike nuclear weapons, so there will be a thriving black market dodging the safety concerns.

redeux · 7 months ago
At some point, probably in the near future, it will be much simpler to create an autonomous AI agent than a nuclear bomb.
bamboozled · 7 months ago
Look at who has access to the US nuclear codes now. I don’t believe it’s as constrained as you think.
gcanyon · 7 months ago
It is a lot easier to detect illicit nuclear work than illicit AI work.
ASalazarMX · 7 months ago
In the unlikely event that we develop fully autonomous agents capable of crippling the world, that would mean we had also developed fully autonomous agents capable of keeping it safe.

Unless the first one is so advanced no other can challenge it, that is.

grayfaced · 7 months ago
How did you jump to that conclusion? An agent is limited by the capabilities under its control. We have the technological ability to cripple the world now, and we don't have the technological means to prevent it. Give one AI control of the whole US arsenal and the objective of ending the world. Give another AI the capabilities of the rest of the world and the objective of protecting it. Would you feel safe?
wendyshu · 7 months ago
Fallacious
satisfice · 7 months ago
No one should be allowed to develop software with bugs that lead to unlawful harm to others. And if they do anyway, they should be punished lawfully.

The thing with autonomous AI is that we already know it cannot be made safe in a way that satisfies lawmakers who are fully informed about how it works… unless they are bribed, I suppose.

Animats · 7 months ago
Most of the arguments presented also apply to corporations.

There's no mention of externalities; that is, whether the costs of AI errors are borne by the operator of the AI or by a third party.

numba888 · 7 months ago
Hmm... an agent cannot do self-supervised learning without actually doing it. The trick is to keep it in a sandbox.
asdasdsddd · 7 months ago
This has to be the least interesting paper I've ever read, with the most surface-level thinking.

> • Simple→Tool Call: Inaccuracy propagated to inappropriate tool selection.

> • Multi-step: Cascading errors compound risk of inaccurate or irrelevant outcomes.

> • Fully Autonomous: Unbounded inaccuracies may create outcomes wholly unaligned with human goals.

Just... lol