The question more than 100 years ago was "If anyone can put together a lamp and plug it in, how will we keep our houses from getting burned down by the wires in the wall?"
The solution then, and the solution now, is to limit what can be delivered to a load, electrical or computational, before plugging things in.
In our homes, we have circuit breakers or fuses, along with electrical codes and their enforcement.
In our computers, we could have the PowerBox[1], which, much like the breakers in a house, serves to limit what resources are given to a program. Unlike AppArmor and similar mechanisms, this is done at run time, and it takes the place (often seamlessly, from the user's point of view) of the file-handling dialog boxes the user would encounter anyway. The actual application code changes required would be minimal.

PS: Yes, this has been brought up before[2]

[1] https://wiki.c2.com/?PowerBox

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
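To make the pattern in [1] concrete: the browser's File System Access API is probably the closest mainstream example of a powerbox, since the page never sees raw paths and only gets a handle the user explicitly granted through the picker it was going to show anyway. A minimal sketch (the API calls are real; the surrounding function is just illustrative):

    // Powerbox pattern: the app asks the user agent for a file
    // instead of opening paths itself. The dialog the user would
    // see anyway doubles as the grant of authority.
    async function openUserChosenFile(): Promise<string> {
      // The browser shows its own picker; the page learns nothing
      // about the filesystem beyond the one handle it was granted.
      const [handle] = await window.showOpenFilePicker({
        types: [{ description: "Text files", accept: { "text/plain": [".txt"] } }],
      });
      const file = await handle.getFile();
      return await file.text(); // read access limited to this one file
    }

The only application change is swapping a raw open() call for the picker call, which is why the required code changes stay minimal.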
> Still talking about writing slack bots and APIs

Brother, in 2025, we are faced with two models achieving gold-medal performance at the IMO. I don't know why more people aren't talking about this. It's way, waaay more serious than vibing up a React form.

At this point, I'm not even sure whether these models are simply "statistical parrots" or not. But the idea that the output of these systems, or whatever supposedly succeeds them by 2035, would carry vulnerabilities that run-of-the-mill LLMs could exploit is far-fetched, that much I know.
Given how we're already hacked by handing megacorps our data, and already have PII leaks, we don't need to wait until 2035 for vibe-based security. I'm guessing 90% of security/IT orgs don't really know anything about security but could evangelize/upsell a security company's software for you.
I had to take 5 anti-phishing training videos in the last year because my IT org kept improving “security”. This was before LLMs gained popularity.
This is a neat short story, but it falls short of being good science fiction by assuming that the only change is the rise of vibe coding.
If apps started getting hacked instantly, why would people continue using them, short of ones they're required to use by the government, their jobs, or maybe their banks?

Once it's big and has a significant user base processing personal or sensitive data, watch out for the lawsuits and increased regulation.
I will add this, though: I once worked on the UK government's Trusted Software Initiative.
We concluded that small companies just don't have the resources to secure things well. The only way we could move the needle at all was through better security training and awareness in the short term. But the fundamental economic point remains the same, and it is certainly magnified by vibe coding.
On the flip side, we can start using AI to find security problems too.
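As a hedged sketch of what that might look like (the model name, prompt, and CI wiring here are assumptions; only the OpenAI SDK and Node calls are real APIs), even something as blunt as piping a diff through an LLM can surface low-hanging issues for triage:

    // Hypothetical CI step: ask an LLM to flag likely security
    // problems in a diff. Treat output as triage hints, not a verdict.
    import OpenAI from "openai";
    import { execSync } from "node:child_process";

    const diff = execSync("git diff origin/main...HEAD").toString();
    const client = new OpenAI(); // reads OPENAI_API_KEY from the env

    const review = await client.chat.completions.create({
      model: "gpt-4o-mini", // assumption: any capable model would do
      messages: [
        {
          role: "system",
          content:
            "You are a security reviewer. List injection, authz, and " +
            "secret-handling risks in this diff, with line references.",
        },
        { role: "user", content: diff },
      ],
    });

    console.log(review.choices[0].message.content);

The economics point cuts both ways here: this is nowhere near a replacement for a security team, but it is cheap enough that small companies might actually run it.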