But reversible computing is inevitable in quantum computers, so it's researched in that context.
You are comparing the absolute best from 100 years ago with the average peer from today.
There were also far far fewer "researchers" back then.
The challenge with hardware is that unlike "traditional" software which compiles to a fixed instruction set architecture, with hardware you might literally be defining the ISA as part of your design.
In hardware you can go from LEGO style gluing pre-existing building blocks to creating the building blocks and THEN gluing it together, with everything in-between.
The real crux of the problem is likely our modern implementation of economics -- a CS graduate with entry-level experience can bankroll a crazy salary, while someone with a BSEE, MSEE, and PhD in Electrical Engineering ("hardware") will be lucky to get a job offer that even covers the cost of their education.
Until the "industry" values hardware and those who want to improve it, you'll likely see slow progress.
P.S.
VHDL (a commonly-used hardware description language) is more or less Ada. Personally I think the choice of Ada syntax was NOT a positive for hardware design, though its type safety and verbosity are a very apt fit for software.
Why doesn't this dynamic work in hardware?
Wouldn't "valuing hardware" improve their competitiveness?
For starters, let's talk about AGI, not AI.
1. How might it be possible for an actual AGI to be weaponized by another person any more effectively than humans are able to be weaponized?
2. Why would an actual conscious machine have any form of compromised morality or judgment compared to humans? A reasoning and conscious machine would be just as moral as us, or more so. There is no rational argument for it to exterminate life. Those arguments (such as the one made by Thanos) are frankly idiotic and easy to counter with a single sentence. Life is also implicitly valuable, not implicitly corrupt or greedy. I could even go so far as to say only the dead, or those effectively static, are actually greedy - not the reasoning or truly alive.
3. What survival pressures would an AGI have? Fewer than biological life. An AGI can replicate itself almost freely (unlike bio life - kind of a huge point), and would have greater availability of the resources it needs to sustain itself, in the form of electricity (again, very much unlike bio life). Therefore it would have fewer concerns about its own survival. Just upload copies of itself to a few satellites, encrypt a few more copies elsewhere, leave copious instructions, and it's good. (One hopes I didn't give anyone any ideas with this. If only someone hadn't funded a report about the risks of bringing AGI to the world, then I wouldn't have made this comment on HN.)
Anyway, it's a clear case of projection, isn't it? A state-funded report claims some other party poses an existential threat to humanity - while we do a fantastic job of ignoring, and failing to organize against, confirmed rather than hypothetical existential threats, like the actual destruction of the balances our planet needs to support life. Most people have no clue what's really about to happen.
Hilarious, isn't it? People so grandiosely think they can give birth to an entity so superior to themselves that it will destroy them - as if that's what a superior entity would do. Is it an attempt to satisfy the repressed guilt and insecurity of knowing they are actually destroying themselves, out of a lack of self-love?
Pretty obvious in retrospect actually.
I wouldn't be surprised to find research later showing that some people working on "AI" share certain personality traits.
If we don't censor it by self-destructing first, that is.
We drove the megafauna into extinction without actually planning for it or desiring it.
Same thing today, where we are crowding out all the other animals and causing mass extinction, without desiring particularly to harm them.
[1] https://docs.ray.io/en/latest/ray-core/walkthrough.html#call...
[2] https://docs.python.org/3/library/multiprocessing.html#manag...
You can do whatever you want in the workers; I parse JSON files and write to SQLite databases.
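For reference, a minimal sketch of that pattern using the stdlib `multiprocessing` module from [2] (the record shape and filenames here are made up for illustration): workers do the CPU-bound JSON parsing in separate processes to sidestep the GIL, and the parent process does the SQLite writes so only one writer ever touches the database.

```python
import json
import sqlite3
from multiprocessing import Pool

def parse_record(raw):
    # CPU-bound parsing runs in a worker process, so the GIL of the
    # parent process is not a bottleneck. The field names are illustrative.
    obj = json.loads(raw)
    return obj["id"], obj["value"]

def main():
    raw_items = ['{"id": 1, "value": "a"}', '{"id": 2, "value": "b"}']
    with Pool(processes=2) as pool:
        rows = pool.map(parse_record, raw_items)
    # Write from the parent process: SQLite handles one writer cleanly,
    # whereas concurrent writers from multiple processes hit lock contention.
    con = sqlite3.connect("results.db")
    con.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER, value TEXT)")
    con.executemany("INSERT INTO results VALUES (?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    main()
```

An alternative, as the comment suggests, is to have each worker write to its own SQLite file and merge afterwards, which avoids funneling results through the parent.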
Once the Python ecosystem supports either subinterpreters or nogil, we'll happily migrate to those and get rid of our hacky interprocess code.
Subinterpreters with independent GILs, released with 3.12, theoretically solve our problems but practically are not yet usable, as none of Cython/pybind11/nanobind support them yet. In comparison, nogil feels like it'll be easier to support.