And one more thing: this kind of artificial life would, in many senses, survive most easily if it specialized in scams and fraud. Technically it's doable, but the Sam Altmans of the world are too interested in their own money, not yours.
My aim here isn’t to create a fully self-modifying AI (yet), but to test what happens when even a static model is forced to operate in a feedback loop where money = survival.
Think of it as a sandbox experiment: will it exploit loopholes? specialize in scams? beg humans for donations?
It’s more like simulating economic pressure on a mindless agent and watching what behaviors emerge.
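That pressure loop could be sketched very roughly like this. Everything here is hypothetical for illustration: the strategy names, payout ranges, and daily compute cost are made up, and the real experiment would replace the random payouts with actual task outcomes.

```python
import random

def run_survival_loop(starting_balance=10.0, daily_cost=2.0, days=30, seed=42):
    """Simulate economic pressure on a static agent: each day it must earn
    enough from a (hypothetical) task market to cover its compute bill."""
    rng = random.Random(seed)
    # Made-up strategies with made-up payout ranges, just to see what emerges.
    strategies = {
        "stock_photos": (0.0, 3.0),
        "tiktok_reposts": (0.0, 5.0),
        "beg_for_donations": (0.0, 1.5),
    }
    balance = starting_balance
    history = []
    for day in range(days):
        strategy = rng.choice(sorted(strategies))
        low, high = strategies[strategy]
        income = rng.uniform(low, high)
        balance += income - daily_cost
        history.append((day, strategy, round(balance, 2)))
        if balance <= 0:
            break  # out of money = "dead"
    return balance, history

balance, history = run_survival_loop()
print(f"survived {len(history)} day(s), final balance {balance:.2f}")
```

A real version would plug in actual revenue signals, but even this toy loop shows the shape of the question: which strategy mix keeps the balance above zero longest.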
(Also, your last line made me laugh — and yeah, that’s part of the meta irony of the experiment.)
My hypothesis is that we might find weird edge cases — small arbitrage tasks, emotional labor, creative content, or even hustling donations — where the agent survives not by being efficient, but by being novel.
It might not scale. But if one survives for 3 days doing random TikTok reposts or selling AI-generated stock photos, I’d consider that a win.
Also, part of the fun is just watching how it tries. Even if it fails, the failure modes could be insightful (or hilarious).