https://library.sciencemadness.org/library/books/ignition.pd...
The fact that an AI coding assistant could "delete our production database without permission" suggests there were no meaningful guardrails, access controls, or approval workflows in place. That's not an AI problem - that's just staggering negligence and incompetence.
Replit has nothing to apologize for, just like the CEO of Stihl doesn't need to address every instance of an incompetent user cutting their own arm off with one of their chainsaws.
Edit:
> The incident unfolded during a 12-day "vibe coding" experiment by Jason Lemkin, an investor in software startups.
We're in a bubble.
Creating a database without accidentally and permanently deleting it is one of the capabilities it should have.
It will make them more deterministic, but it will not make them fully deterministic. This is a crucial distinction.
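Assuming the comment refers to running an LLM with greedy decoding (temperature 0): even then, outputs are not fully deterministic across hardware or batch sizes, because floating-point addition is not associative, so a different summation order in a parallel reduction can flip which logit comes out largest. A minimal sketch of the underlying effect:

```python
# Floating-point addition is not associative: grouping the same three
# values differently produces two different results. This is one reason
# "temperature 0" reduces, but does not eliminate, nondeterminism.
x = (0.1 + 0.2) + 0.3   # 0.6000000000000001
y = 0.1 + (0.2 + 0.3)   # 0.6
print(x == y)           # prints False
```

The same three numbers, summed in a different order, compare unequal; at model scale, millions of such reorderings can change an argmax.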
Could use an “auctioneer” voice to play back text at 10x speed.
It would be cool to have something like an LLM-based link-title classifier that hides clickbait links, or something like that.
And gives each link a score based on how interesting it will likely be to you.
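A toy sketch of that idea, with the important caveat that the scoring function below is a crude keyword heuristic standing in for the actual LLM call; the phrase list, function names, and threshold are all made up for illustration:

```python
# Stand-in for an LLM-based clickbait classifier: score a link title
# from 0 to 1, then hide titles above a threshold. A real version would
# replace clickbait_score() with a call to a language model.
CLICKBAIT_PHRASES = (
    "you won't believe",
    "this one trick",
    "what happened next",
    "shocking",
)

def clickbait_score(title: str) -> float:
    """Return a 0..1 clickbait-iness score (keyword heuristic)."""
    t = title.lower()
    hits = sum(phrase in t for phrase in CLICKBAIT_PHRASES)
    return min(1.0, hits / 2)

def filter_links(titles: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only titles scoring below the clickbait threshold."""
    return [t for t in titles if clickbait_score(t) < threshold]
```

The interest score the comment mentions could be produced the same way, with the model prompted against a profile of the reader's past upvotes.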
Fraud, threats, impersonation, etc etc.
The hype for AGI has certainly deflated; I haven't heard anything in a while about it being right around the corner, or about its implications. The hype and doom now seem to come only from software devs; the front-page news articles about AGI have pretty much stopped for me.
("front page news" to me means the Google News US, Business, and Technology tabs)
https://www.businessinsider.com/microsoft-ai-ceo-mustafa-sul...