I am sure that is exactly what is happening right now. We just haven't heard about it yet, but we will soon start to see LLM-found exploits abused in the wild.
Is this arguably a good thing? If security engineers could run these things on their own systems, it would be a hell of a way to harden them.
> The findings expose a troubling asymmetry: at 0.1% vulnerability rates, attackers achieve on-chain scanning profitability at a $6,000 exploit value, while defenders require $60,000, raising fundamental questions about whether AI agents inevitably favor exploitation over defense.
Prior to AI, and outside the context of crypto, it often wasn't "worth it" to fix security holes; the cheaper play was to bite the bullet, claim victimhood, sue if possible, and hide behind compliance.
If automated exploitation changes that equation, and even a low probability of success is worth trying because pentesting is no longer bottlenecked by meatspace, it may incentivise writing secure code, in some cases.
Perversely enough, AIs may crank out orders of magnitude more insecure code at the same time.
I hope this means fuzzing as a service becomes absolutely necessary. I think automated exploitation is a good thing for improved security overall, cracked eggs and all.
If I'm understanding the paper correctly, they're assuming that defenders are also scanning deployed contracts with the intention of ultimately reporting bug bounties. And they get the $6,000/$60,000 numbers by assuming that the bug bounty in their model is 1/10th of the exploit value.
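To make that arithmetic concrete, here's a back-of-the-envelope sketch of the asymmetry as I read it. The 0.1% rate and the 1/10 bounty ratio are from the paper as quoted above; the per-contract scanning cost is an illustrative number back-solved so the break-even values come out to $6,000 and $60,000, not a figure taken from the paper.

```python
# Rough sketch of the attacker/defender break-even asymmetry.
# Assumptions: 0.1% of scanned contracts are exploitable (from the paper),
# the bounty is 1/10 of the exploit value (from the paper), and a per-contract
# scanning cost of $6 (illustrative, chosen to match the quoted numbers).
vuln_rate = 0.001        # fraction of scanned contracts that are exploitable
scan_cost_usd = 6.0      # assumed cost to scan one contract

# Expected spend to find one exploitable contract.
cost_per_find = scan_cost_usd / vuln_rate        # $6,000

# The attacker keeps the full exploit value, so they break even at $6,000.
attacker_breakeven = cost_per_find

# The defender only collects a bounty of 10% of the value, so the exploit must
# be worth 10x as much before scanning pays for itself.
defender_breakeven = cost_per_find / 0.10        # $60,000

print(attacker_breakeven, defender_breakeven)    # 6000.0 60000.0
```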
This kind of misses the point, though. In the real world, engineers would use AI to audit/test the hell out of their contracts before they're even deployed. They could also deploy the contracts to a testnet and try to actually exploit them running in the wild.
So, while this is all obviously a danger for existing contracts, it seems like it would still be a powerful tool for testing new contracts.
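As a concrete (and heavily simplified) illustration of that workflow, here's roughly how a team might dry-run an AI-generated exploit candidate against a local fork of the chain before the contract ever handles real funds. This is my sketch, not the paper's tooling: the fork node at localhost:8545 (e.g. something like `anvil --fork-url <RPC>`), the zeroed target address, and the empty calldata are all placeholders.

```python
# Dry-run a candidate exploit transaction on a local fork, using web3.py.
# Nothing here touches mainnet; the fork node replays state locally.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local fork node (assumed)

TARGET = "0x0000000000000000000000000000000000000000"  # placeholder: contract under test
attacker = w3.eth.accounts[0]                           # pre-funded, unlocked fork account

candidate_calldata = bytes.fromhex("")  # placeholder: AI-generated exploit calldata

balance_before = w3.eth.get_balance(attacker)
tx_hash = w3.eth.send_transaction({
    "from": attacker,
    "to": TARGET,
    "data": candidate_calldata,
    "gas": 3_000_000,
})
w3.eth.wait_for_transaction_receipt(tx_hash)
balance_after = w3.eth.get_balance(attacker)

# If the balance grew (net of gas), the candidate is worth a human's attention.
print("profitable on fork:", balance_after > balance_before)
```

On a fork you can iterate like this as fast as the model can generate candidates, which is the whole appeal of doing it before deployment.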
Not at the moment. Running this stuff is expensive, and getting funding for defense is hard. A key tenet of the article is that the economics currently favor the attackers.
Eventually there will probably also be AI agents that prey on people using personalized strategies to steal their money.
AI agents, crypto, and viruses could all blend together to create really annoying things. For example, an AI agent could infect your computer, monitor your activity to see if you're doing anything suspicious, and then blackmail you.
Why stop at the digital when you can go further with the biological? I think computer viruses will make the jump at some point and become part of an actual biological virus.
Cue Ghost in the Shell in 3... 2... 1...
My prediction is that at some point in time there will be an actual living Shiba Inu with some code of Doge in its actual DNA.
I've always wondered why North Korea doesn't employ a fleet of people developing smart contract scanners. Every paper on the topic boasts about finding some number of exploitable smart contracts with insanely high balances, so why hasn't that money been taken by North Korea already?
- The pre-print paper: AI Agent Smart Contract Exploit Generation - https://arxiv.org/abs/2507.05558
- An associated research institution: UC Berkeley Center for Responsible, Decentralized Intelligence - https://rdi.berkeley.edu/
Can't wait for millions of AI agents to prey in nanoseconds on any bug, misspecification, user error, etc...
Doesn't seem like that good of a thing on balance :)
/Technology/ inevitably favors exploitation over defense.
-- attributed to the IRA after the Brighton hotel bombing narrowly missed Margaret Thatcher
That's the current state of web3 security, unfortunately. The on-chain security arms race is rather under-explored in academia.