the fda started with a noble mission, but it's been getting heavy handed. or, better cliched, slow handed with getting things certified.
you can solve this one of two ways: drop regulation or increase staffing.
so many institutions have unnecessary fluff and tremendous red tape (why do i need an environmental review to stick a shed in my backyard??). our modern lives have too much regulation.
let's hope for the best.
the old system is holding back drugs.. there would have been more ozempics, more breakthroughs, had the fda not been so slow. companies have a strong incentive not to release bad drugs now.. lawyers are not cheap and law firms know money can be made.. it's not the 1930s anymore.. (okay, it's still the 1930s in certain parts of the world, that's a criticism)
typing this out hoping to convince you that any regulation reduction is good reduction, i thought of a third fda option: the fda lets everyone go hog wild initially but looks at the top consumed products and checks them for safety and efficacy each year.
I am not.
From an energy-efficiency perspective, the human brain is a very, very effective computational machine. Computers are not. Think about the scale of infrastructure a network of computers would need to achieve similar capabilities: its energy consumption would be enormous. With big infrastructure comes a high need for maintenance. That is costly and requires a lot of people just to keep it from breaking down. With a lot of people in one place, there are socioeconomic costs: production and transportation need to be built around such a center. And if you have a centralized system, you are vulnerable to attack from adversaries. In short, I do not think we are even close to what the author is afraid of. We are only just beginning to understand what would actually be needed to start building AI, if it is possible at all.
That said, the article doesn't assume such a thing will happen soon, just that it may happen at some time in the future. That could be centuries away - I would still argue the end result is something to be concerned about.
And I envy such skill, because I like to think of myself as not entirely stupid, yet I could never write or speak this way; I simply don't have an aptitude for it.
But it did. Painting used to be a trade where you could sell your skills for reasons other than purely aesthetic ones, simply because there was no other way to document the world around you. It isn't anymore, because of cameras. Professional oil portraiture isn't a career in 2025.
About the substance, I agree that there are fair grounds for concern, and it's not just about mathematics.
The best-case scenario is the rejection and prohibition of uses of AI that fundamentally threaten human autonomy. It is theoretically possible to get there, but since capital and power are pro-AI[^1], it would require a social revolution that upends the current world order. Even if one were to happen, the results wouldn't last for long, unless said revolution were so utterly radical that it would set us on a return trajectory to the middle ages (I have something of the sort published somewhere, check my profile!).
I'm an optimist when it comes to the enabling power of AI for a select few. But I'm a pessimist otherwise: if the richest nation on Earth can't educate its citizens, what hope is there that humans will be able to supervise and control AI for long? Given our current trajectory, if nothing changes, we are set for civilizational catastrophe.
[^1]: Replacing expensive human labor is the most powerful modern economic incentive I know of. Money wants, money gets.
According to the researchers, “the triggers are not contextual so humans ignore them when instructed to solve the problem”—but AIs do not.
Not all humans, unfortunately: https://en.wikipedia.org/wiki/Age_of_the_captain