That description leaves off some of the flavor. It changes the feel once you know the descendants were labeled the Vile Offspring by the main characters, who were still human-ish.
I think I've internalized these stories enough to comfortably say (without giving anything away) that AI is incompatible with capitalism and probably money itself. That's why I consider it to be the last problem in computer science, because once we've solved problem solving, then the (artificial) scarcity of modern capitalism and the social darwinism it relies upon can simply be opted out of. Unless we collectively decide to subjugate ourselves under a Star Wars empire or Star Trek Borg dystopia.
The catch being that I have yet to see a billionaire speak out against the dangers of performative economics once machines surpass human productivity or take any meaningful action to implement UBI before it's too late. So on the current timeline, subjugation under an Iron Heel in the style of Jack London feels inevitable.
I hear this all the time, but to what end? If the input costs to produce most things end up driving toward zero, then why would there be a need for UBI? Wouldn't UBI _be_ the performative economics mentioned?
Isn’t that the one where corporate structures become intelligent, self-executing agents and cause a lot of problems? Yet here IRL, the current tech billionaires think it’s a roadmap to follow?
Talk about getting the wrong message. No one show those guys a copy of 1984! Wow, then…
Please ELI5 for me: how are AI agents different from traditional workflow engines, which orchestrate a set of tasks by interacting with both humans and other software systems?
But rule-based processing was exactly the requirement. Why should the workflow automation come up with rules on the fly when the rules were defined in the business process requirements? Aren't deterministic rules more precise and reliable than rules defined by probabilistic methods?
Autonomy/automation makes sense where error-prone repetitive human activity is involved. But rule definitions are not repetitive human tasks. They are defined once and run every time by automation. Why does one need to go for a probabilistic rule definition for a one-time manual task? I don't see huge gains here.
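To make the deterministic side concrete, a rule like this is written once and then runs identically every time; the names are illustrative:

```python
# A deterministic business rule: defined once, executed identically every run.
# Illustrative names only -- this is the kind of rule a traditional workflow
# engine executes, with no probabilistic step involved.

from dataclasses import dataclass

@dataclass
class Invoice:
    amount: float
    country: str

def route_invoice(invoice: Invoice) -> str:
    """Fixed routing rule taken from the business process requirements."""
    if invoice.amount > 10_000:
        return "manual_review"      # always, above the threshold
    if invoice.country not in {"US", "CA"}:
        return "compliance_check"   # always, for non-covered countries
    return "auto_approve"

assert route_invoice(Invoice(12_000, "US")) == "manual_review"
assert route_invoice(Invoice(500, "DE")) == "compliance_check"
assert route_invoice(Invoice(500, "US")) == "auto_approve"
```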
I needed some data from a content API and had a few options:
1) Human agent, manual retrieval (included for completeness)
2) One-off script to get exactly the content I want
3) Traditional workflow, write & maintain
4) One-off prompt to the agent to write the script in #2, then sort and arrange the content for grouping based on descriptions it receives. (This is what I used; 3 hours later I had a year's worth of journal abstracts on various subjects downloaded, sorted, indexed, and summarized in a ChromaDB. I’d just asked for the content, but the Python code it left for me included a parameterized CLI with assorted variables and some thoughtful presets for semantic search options. A sketch of what such a generated script might look like follows this list.)
5) One-off prompt to the agent to write the workflow in #3, to run at will or by an agent
6) Prompt an agent to write some prompts, one of which will be a prompt for this task, the others whatever they want: “write a series of prompts that will be given to agents for task X. Break task X down to these components…”
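For context on option 4, here's a minimal sketch of the kind of script the agent produced. ABSTRACTS_URL and the JSON field names are hypothetical stand-ins for whatever content API is in play; the chromadb calls are the library's real client API:

```python
# Minimal sketch of the kind of script option 4 produced: fetch abstracts,
# index them in ChromaDB, expose a small CLI for semantic search.
# ABSTRACTS_URL and the JSON field names are hypothetical stand-ins;
# the chromadb calls are the library's actual client API.

import argparse
import requests
import chromadb

ABSTRACTS_URL = "https://example.org/api/abstracts"  # hypothetical endpoint

def fetch_abstracts(year: int) -> list[dict]:
    """Pull a year's worth of abstracts from the (hypothetical) content API."""
    resp = requests.get(ABSTRACTS_URL, params={"year": year}, timeout=30)
    resp.raise_for_status()
    return resp.json()["items"]  # assumed shape: [{"id", "subject", "text"}, ...]

def build_index(abstracts: list[dict], path: str = "./journal_db"):
    """Embed and store the abstracts in a persistent ChromaDB collection."""
    client = chromadb.PersistentClient(path=path)
    coll = client.get_or_create_collection("abstracts")
    coll.add(
        ids=[a["id"] for a in abstracts],
        documents=[a["text"] for a in abstracts],
        metadatas=[{"subject": a["subject"]} for a in abstracts],
    )
    return coll

def main():
    parser = argparse.ArgumentParser(description="Index and search journal abstracts")
    parser.add_argument("--year", type=int, default=2024)
    parser.add_argument("--query", help="semantic search over the index")
    parser.add_argument("--n", type=int, default=5, help="number of results")
    args = parser.parse_args()

    coll = build_index(fetch_abstracts(args.year))
    if args.query:
        hits = coll.query(query_texts=[args.query], n_results=args.n)
        for doc in hits["documents"][0]:
            print(doc[:200], "...\n")

if __name__ == "__main__":
    main()
```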
I noticed on our own agentic setups that there are very few actual scenarios being executed. I suggested implementing some type of monitoring so you can replace 99% of the most-used workflows with plain Python and only activate AI calls when something new happens; once that new thing repeats a few times, you translate it to code too. That has to be a career in itself: you can turn a lot of AI apps into profitable and fast internal apps.
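A minimal sketch of that monitoring/fallback pattern, with call_llm() as a hypothetical stand-in for whatever model API is in use:

```python
# Sketch of the monitoring/fallback pattern: known scenarios run as plain
# Python handlers; anything new goes to the LLM and gets counted, so repeat
# offenders can be promoted to code. call_llm() is a hypothetical stand-in.

from collections import Counter

HANDLERS = {  # the 99% path: deterministic, cheap, fast
    "refund_request": lambda req: f"refund queued for {req['order_id']}",
    "address_change": lambda req: f"address updated to {req['address']}",
}

novel_counter = Counter()   # tracks scenarios the code can't handle yet
PROMOTE_THRESHOLD = 3       # after this many repeats, write a plain handler

def call_llm(request: dict) -> str:
    """Hypothetical stand-in for an actual model call."""
    return f"LLM handled: {request}"

def handle(request: dict) -> str:
    kind = request.get("kind", "unknown")
    if kind in HANDLERS:
        return HANDLERS[kind](request)   # deterministic path
    novel_counter[kind] += 1
    if novel_counter[kind] >= PROMOTE_THRESHOLD:
        print(f"[monitor] '{kind}' seen {novel_counter[kind]}x: "
              "candidate for promotion to a plain-Python handler")
    return call_llm(request)             # AI path, only for the new stuff
```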
Have you built stuff with LLMs before? Genuine question, because nondeterministic and deterministic workflows are leagues apart in what they can accomplish.
The human is no longer in the loop. The agentic system is capable of generating quality synthetic data over time to train on; it becomes self-improving, and that synthetic data can also be used to train weaker models to perform better.
Which has largely become true. People flip-flop between the hottest AI model of the day. After a flagship AI model ships, distillations appear that offer slightly degraded performance at a fraction of the cost.
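A toy sketch of that distillation loop; the models here are stand-ins, not real training infrastructure:

```python
# Toy sketch of that loop: a strong model generates synthetic (prompt, answer)
# pairs, a filter keeps the verifiably good ones, and a weaker model is tuned
# on the result. The models are stand-ins, not real training infrastructure.

def generate_pairs(strong_model, prompts):
    """Sample answers from the flagship model."""
    return [(p, strong_model(p)) for p in prompts]

def quality_filter(pairs, is_good):
    """Keep only pairs passing some verifiable check (tests, rubric, etc.)."""
    return [(p, a) for p, a in pairs if is_good(p, a)]

class ToyWeakModel:
    """Stand-in for a small model; 'fine-tuning' here just memorizes pairs."""
    def __init__(self):
        self.memory = {}
    def fine_tune(self, dataset: dict):
        self.memory.update(dataset)
    def __call__(self, prompt: str) -> str:
        return self.memory.get(prompt, "unknown")

strong = lambda p: p.upper()                           # pretend flagship model
pairs = generate_pairs(strong, ["alpha", "beta"])
good = quality_filter(pairs, lambda p, a: a.isupper())
weak = ToyWeakModel()
weak.fine_tune(dict(good))
assert weak("alpha") == "ALPHA"   # the weak model inherits the behavior
```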
For inference, the difference between expensive data center hardware and homemade GPU rigs largely comes down to RAM. That's a limitation people are actively working around (unfortunately, the well-funded orgs are not that interested in this).
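Rough weights-only arithmetic (ignoring KV cache and activations) shows why RAM is the dividing line; model sizes are illustrative:

```python
# Weights-only memory arithmetic; KV cache and activations add more on top.

def weights_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for params in (8, 70, 405):
    for bits in (16, 4):
        print(f"{params}B @ {bits}-bit: ~{weights_gb(params, bits):.0f} GB")

# e.g. 70B @ 16-bit: ~140 GB -> multiple data-center GPUs;
#      70B @  4-bit:  ~35 GB -> within reach of high-end home hardware.
```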
I think you may be missing the general idea of DAOs by restricting yourself to a few particular historical uses (and many a failed one at that), back from when agentic AI wasn't a thing.
The hackability of these things, though, still remains a very valid topic, as it is orthogonal to the fact that AI has arrived on the scene.
Okay, so how does an economy of AI companies doing business selling services related to hyperintelligent AI tech to each other differ from Nvidia, Oracle, and OpenAI sending money to each other to buy each other's stuff?
Is this what will be tried as a fix for the potential fallout from continuously decreasing fertility rates (resulting in population decline, and thus affecting the consumption-based economy)?
Nope. This is just greed, making the most of the moment without any thought for tomorrow. Nobody knows or cares where it takes us, but everybody knows there is money to be made today. So you need a model that analyzes the economy with greed as the only driving force and no foresight. Add some parameters to account for monopolistic forces, the human desire to be lazy and dumb while thinking it is progress, and the loss of all biological senses to devices. That may give a better prediction.
https://en.wikipedia.org/wiki/Accelerando
In Accelerando the VO are a species of trillions of AI beings that are sort of descended from us. They have a civilization of their own.
Also, what a shortsighted sci-fi book; yet techies readily invest in that particular fantasy because it's not your usual spaceship fare.
It's art, not oracle.
https://marshallbrain.com/manna1
There’s a level of autonomy in the AI agents (each determines its next step on its own) that is not predefined.
Agreed, though, that there are lots of similarities.
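A minimal sketch of that distinction, with call_llm() as a hypothetical stand-in for a real model call: the workflow fixes its step order at design time, while the agent picks each next step at runtime:

```python
# The distinction in miniature. A traditional workflow fixes the step order at
# design time; an agent asks the model which step to take next at runtime.
# call_llm() is a hypothetical stand-in for a real model API.

TOOLS = {
    "fetch": lambda state: {**state, "data": "raw records"},
    "clean": lambda state: {**state, "data": "clean records"},
    "report": lambda state: {**state, "done": True},
}

def workflow(state: dict) -> dict:
    """Workflow-engine style: the sequence is predefined; every run is identical."""
    for step in ("fetch", "clean", "report"):
        state = TOOLS[step](state)
    return state

def call_llm(state: dict, tools: list) -> str:
    """Hypothetical model call; a real agent would have the LLM reason over state."""
    if "data" not in state:
        return "fetch"
    if state["data"] == "raw records":
        return "clean"
    return "report"

def agent(state: dict) -> dict:
    """Agent style: the model chooses each next step, so the path can vary per run."""
    while not state.get("done"):
        step = call_llm(state, list(TOOLS))   # the next step is not predefined
        state = TOOLS[step](state)
    return state

assert workflow({}) == agent({})  # same result here, but only the agent chose its path
```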
Here's one from DeepMind:
https://arxiv.org/abs/2509.10147
1. https://www.x402.org/ - micropayments for AI agents to access resources without needing to sign up for an API key
2. https://8004.org/ - open AI agent registry standard
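For a feel of the x402 flow as I understand it (header and field handling here are assumptions; check the x402 spec before relying on them): the server answers an unpaid request with HTTP 402 plus payment requirements, and the agent pays and retries:

```python
# Illustrative x402 flow: a 402 response carries the payment requirements,
# the agent settles them and retries with a payment header. Header and field
# handling here are assumptions -- check the x402 spec before relying on them.

import requests

def fetch_with_x402(url: str, pay) -> requests.Response:
    resp = requests.get(url, timeout=30)
    if resp.status_code != 402:
        return resp                    # free or already-authorized resource

    requirements = resp.json()         # what the server wants to be paid
    payment_proof = pay(requirements)  # hypothetical: sign/settle the payment
    return requests.get(url, headers={"X-PAYMENT": payment_proof}, timeout=30)

# The point for agents: no account signup or API-key provisioning -- the 402
# response itself carries everything needed to pay per request.
```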
https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...
I feel like co-ops were awful anyway even without the blockchain.