A lot of what people call "bribery" is really just ensuring that your preferred candidate gets into office. You couldn't give money directly to the candidate for personal use; donations went to the campaign of the guy who already agreed with you. The FEC used to take a dim view of outright pay-for-service, even dressed up.
This is new. And now people need to decide how they feel about that. They get one chance to say "no, that's not how we do things." Even if the administration suffers a blow this November, if politicians hear that this is mostly acceptable to their base, it will be what every one of them does from here on.
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
I got an ad the other day for a school (a mostly reputable one). They were talking about their award-winning dining hall food... and the photos were over the top.
Borrowing a pile of money to help fund a pretty campus, getting a degree with limited job prospects, and then wondering why you're drowning in debt for decades seems to be the trendy thing to do.
https://www.reddit.com/r/ClaudeAI/comments/1r186gl/my_agent_...
I have noticed similar behavior from the latest codex as well. "The security policy forbids me from doing x, so I will achieve it with a creative workaround instead..."
The "best" part of the thread is that Claude comes back in the comments and insults OP a second time!
> SANDBOX YOUR AGENT. Seriously. Run it in a dedicated, isolated environment like a Docker container, a devcontainer, or a VM. Do not run it on your main machine.
> "Docker access = root access." This was OP's critical mistake. Never, ever expose the host docker socket to the agent's container.
> Use a real secrets manager. Stop putting keys in .env files. Use tools like Vault, AWS SSM, Doppler, or 1Password CLI to inject secrets at runtime.
> Practice the Principle of Least Privilege. Create a separate, low-permission user account for the agent. Restrict file access aggressively. Use read-only credentials where possible.
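The quoted advice can be condensed into a single `docker run` invocation. This is only a sketch of the idea, not a vetted config: `my-agent` is a hypothetical image name, and the `op://` secret path is an illustrative 1Password CLI reference, not a real one.

```shell
#!/usr/bin/env sh
# Least privilege for an agent container: unprivileged user, read-only
# root filesystem, no network, capabilities dropped, and only the project
# directory mounted. Crucially, /var/run/docker.sock is NOT mounted --
# handing the agent the host docker socket is effectively root access.
docker run --rm \
  --user 1001:1001 \
  --read-only \
  --tmpfs /tmp \
  --network none \
  --cap-drop ALL \
  -v "$PWD/project:/work:ro" \
  -e API_KEY="$(op read 'op://dev-vault/agent/api-key')" \
  my-agent
```

The `-e API_KEY="$(op read ...)"` part is the "inject secrets at runtime" step: the key never lands in a `.env` file, it exists only in the container's environment for the lifetime of that run.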
In order to use this developer-replacement, you need accreditation from professional orgs. Maybe the bot can set all this up for you, but then you are almost definitely locked out of your own computer and the bot may not remember its password.
I'm not sure what we've achieved here. If you give it your gmail account, it deletes your emails. If you "sandbox" it, then how is it going to "sort out your inbox"?
It might or might not help veteran devs accelerate some steps, but as with vibeclaw, there's essentially no way to use the tool without "sandboxing" it into uselessness. The pull requests for openclaw are 99% AI slop. There's still no major productivity growth engine in LLMs.
They're not improving the underlying technology, just iterating on the massaging and perhaps improving data accuracy, if at all. It's still a mishmash of code and cribbed sci-fi stories. So of course it's going to hit loops, because it's not fundamentally conscious.
"The bus blew up" is a perfectly active clause. "The bus" is the subject, it did its own blowing-up.
"The bus was blown up" is a passive clause. "The bus" is the object, some unnamed entity acted on the bus.
English lacks a formal middle voice, and there is a good deal of established literature on verb constructions where the subject is not really the agent; these are often called "ergative" verbs.
There is utility in comparing "the bus exploded", which is perhaps unclear as to the agent, but language is not a game of assigning agents. It's trying to convey information, which is clear enough in these cases.