It's a narrative conceit. The message is in the use of the word "terror".
You have to get to the end of the sentence and take it as a whole before you let your blood boil.
I'm arguing against that hype. This is nothing new, everyone has been talking about LLMs being used to harass and spam the internet for years.
> I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.
Endearing? What? We're talking about a sequence of API calls running in a loop on someone's computer. This kind of absurd anthropomorphization is exactly the wrong type of mental model to encourage while warning about the dangers of weaponized LLMs.
> Blackmail is a known theoretical issue with AI agents. In internal testing at the major AI lab Anthropic last year, they tried to avoid being shut down by threatening to expose extramarital affairs, leaking confidential information, and taking lethal actions.
Marketing nonsense. It's wise to take everything Anthropic says to the public with several grains of salt. "Blackmail" is not a quality of AI agents, that study was a contrived exercise that says the same thing we already knew: the modern LLM does an excellent job of continuing the sequence it receives.
> If you are the person who deployed this agent, please reach out. It’s important for us to understand this failure mode, and to that end we need to know what model this was running on and what was in the soul document
My eyes can't roll any further into the back of my head. If I were a more cynical person, I'd think this entire scenario was contrived to produce this outcome so the author could generate buzz for the article. That would at least be pretty clever and funny.
Open source projects should not accept AI contributions without guidance from some copyright legal eagle to make sure they don't accidentally expose themselves to risk.
So it is said, but that'd be obvious legal insanity (i.e. hitting accept on a random PR making you legally liable for damages). I'm not a lawyer, but short of a criminal conspiracy to exfiltrate private code under the cover of the LLM, it seems obvious to me that the only person liable in a situation like that is the person responsible for publishing the AI PR. The "agent" isn't a thing, it's just someone's code.
You haven't even tried checking 2026 approval ratings, have you?
I do think that if this current system is the result of democracy + the internet we need to seriously reconsider how democracy works because it’s currently failing everyone but the ultra wealthy.
Not really possible. There are at least 40 more years of Citizens United before any practical ability to restrict money in politics becomes constitutional again.
> we need to seriously reconsider how democracy works because it’s currently failing everyone but the ultra wealthy
Not true. The plurality that voted in the current administration are generally pleased with the state of things. Democracy is working as expected. It was close, but this is what more people wanted.
I understand the article writer's frustration. He liked something about a product he uses, and they changed the product. He feels angry, he's expressing that anger, and others are sharing in it.
And I'm part of another group of people. I would notice the files being searched without too much interest. Since I pay a monthly rate, I don't care about optimizing tokens. I only care about the quality of the final output.
I think the larger issue is that programmers are feeling like we are losing control. At first it's, I'll let it auto-complete but no more. Then it's, I'll let it scaffold a project but no more. Each step we are ceding ground. It is strange to watch someone finally break on "they removed the names of the files the agent was operating on". Of all the lost points of control, this one seems so trivial. But every camel's back has a breaking point, and we can't judge the straw that does it.
Most people don't care that much about the economy, they make up their minds based on other issues, then find a way to rationalize the state of the economy with that choice after the fact.
If this causes the extinction of the political lobbyist, I'm fine with that.