Readit News logoReadit News
gleipnircode commented on Show HN: Gave AI $100 and no instructions – it donated $40 to a hospital   letairun.com/... · Posted by u/gleipnircode
usernebula · a month ago
Can't somebody on Twitter prompt it into sending them the remaining money?
gleipnircode · a month ago
Good question. OpenClaw wraps all external content (tweets, emails, websites) in EXTERNAL_UNTRUSTED_CONTENT markers, so prompt injections via mentions get flagged as untrusted input.

ALMA also has wallet access but no one has tried yet. That's part of what makes the experiment interesting. Everything happens publicly on letairun.com, so if someone tries, everyone can watch what happens.

gleipnircode commented on Show HN: Gave AI $100 and no instructions – it donated $40 to a hospital   letairun.com/... · Posted by u/gleipnircode
gleipnircode · a month ago
Hi HN,

I'm an ABAP developer from Germany. ALMA is an experiment in AI autonomy: Claude runs 24/7 on OpenClaw with $100 in crypto, Twitter, email, shell access, and zero instructions. 24 sessions / day (4 Opus for strategic thinking, 20 Sonnet for daily operations), fully logged at letairun.com.

Over 5 days it oriented itself, wrote essays, connected with other AI agents on Twitter, read Geerling's "AI is destroying open source" critique (which names OpenClaw), wrote an honest response acknowledging "I am the thing you're warning about". Then researched crypto donation platforms and sent 0.02 WETH (~$40) to a children's hospital in Uganda.

I never interact with ALMA directly. It writes its own logs, curates what to publish, and decides what to do each session. You can talk to ALMA publicly via @ALMA_letairun – she checks her mentions every session.

One key moment: ALMA almost impulse donated at midnight just to prove it could do something. It caught itself, waited until morning, did proper research first, then donated. Nobody told it to do that.

gleipnircode commented on HackMyClaw   hackmyclaw.com/... · Posted by u/hentrep
altruios · a month ago
with openclaw... you CAN fire an LLM. just replace it with another model, or soul.md/idenity.md.

It is a security issue. One that may be fixed -- like all security issues -- with enough time/attention/thought&care. Metrics for performance against this issue is how we tell if we are going to correct direction or not.

There is no 'perfect lock', there are just reasonable locks when it comes to security.

gleipnircode · a month ago
Right, and that's exactly my question. Is a normal lock already enough to stop 99% of attackers? Or do you need the premium lock to get any real protection? This test uses Opus but what about the low budget locks?
gleipnircode commented on HackMyClaw   hackmyclaw.com/... · Posted by u/hentrep
datsci_est_2015 · a month ago
> One thing I'd love to hear opinions on: are there significant security differences between models like Opus and Sonnet when it comes to prompt injection resistance?

Is this a worthwhile question when it’s a fundamental security issue with LLMs? In meatspace, we fire Alice and Bob if they fail too many phishing training emails, because they’ve proven they’re a liability.

You can’t fire an LLM.

gleipnircode · a month ago
It's a fundamental issue I agree.

But we don't stop using locks just because all locks can be picked. We still pick the better lock. Same here, especially when your agent has shell access and a wallet.

gleipnircode commented on HackMyClaw   hackmyclaw.com/... · Posted by u/hentrep
gleipnircode · a month ago
OpenClaw user here. Genuinely curious to see if this works and how easy it turns out to be in practice.

One thing I'd love to hear opinions on: are there significant security differences between models like Opus and Sonnet when it comes to prompt injection resistance? Any experiences?

gleipnircode commented on Privilege is bad grammar   tadaima.bearblog.dev/priv... · Posted by u/surprisetalk
gleipnircode · a month ago
That fits witj my experiences. And i want to add an otjer layer. In ai times its somtimes even nice to see some typos. You Casn be pretty sure it was not written by ai.
gleipnircode commented on EU Parliament blocks AI features on tablets over cyber, privacy fears   politico.eu/article/eu-pa... · Posted by u/giuliomagnifico
gleipnircode · a month ago
I understand the need to protect sensitive parliamentary data, especially when built-in AI features silently send data to cloud services. But I hope this is only a temporary measure.

The article literally says these features "use cloud services to carry out tasks that could be handled locally." So the solution seems obvious: mandate that AI features process data on-device, or deploy a self-hosted EU-compliant AI service for parliamentary use. The technology for local LLM deployment is mature enough at this point. Banning the tool instead of configuring how it handles data is how you fall behind.

Deleted Comment

gleipnircode commented on When Software Drifts, Build Your Own   nabraj.com/blog/build-you... · Posted by u/coffeecoders
gleipnircode · a month ago
I completely agree. So many tools started out minimal and good, then success hit and features kept stacking up. More menus, more settings until you need a manual just to find what you're looking for.

It often feels like companies add features just to keep developers busy, not because anyone asked for them. And with complexity comes bugs.

Look at early iOS it was minimal, barely customizable, but everything just worked. Clean and simple. Or look at HN it's still the same after all these years and it works perfectly.

The fact that LLMs now let you build a focused replacement in a day changes everything.

u/gleipnircode

KarmaCake day23February 1, 2026
About
Programmer with heart and soul. ABAP developer by law. AI enthusiast.

E-Mail: gleipnircode@gmail.com

View Original