I started programming on an 8 MHz Mac Plus in the late 1980s and got a bachelor's degree in computer engineering in the late 1990s. From my perspective, a kind of inverse Moore's Law happened: single-threaded performance stayed approximately constant while the number of transistors doubled every 18 months.
Wondering why that happened is a bit like asking how high the national debt would have to get before we tax rich people, or how many millions of people have to die in a holocaust before the world's economic superpowers stop it. In other words, it just did.
But I think we've reached such an astounding number of transistors per chip (100 billion or more) that we finally have a chance to try alternative approaches that are competitive: so few transistors are in use per instruction that it wouldn't take much to beat status-quo performance. Note that I'm talking about multicore desktop computing here, not GPUs (their SIMD performance actually has increased).
I had hoped that FPGAs would let us do this, but their evolution seems to have been halted by the powers that be. I also have some ideas for MIMD on SIMD, which is the only other way I can see this happening. I think that if the author can reach the CMOS compatibility they spoke of, if home lithography arrives as an open-source device the way 3D printing did, and if we can get above 1 million transistors running over 100 MHz, then we could play around with cores having the performance of a MIPS, PowerPC or Pentium.
In the meantime, it might be fun to prototype with AI and build a transputer at home with local memories. A $1 Raspberry Pi RP2040 (266 MIPS, 2 cores, 32-bit, 264 kB on-chip RAM) looks like a contender: it has about 5 times the MIPS of an early 32-bit PowerPC or Pentium processor.
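As a rough starting point, here's what a two-core, message-passing node could look like in MicroPython on the RP2040. This is only a sketch under my own assumptions about the workload; the _thread module on the rp2 port runs the worker on the second core, and the lock-guarded list stands in for a transputer-style channel:

    # Minimal two-core message-passing sketch for the RP2040 (MicroPython).
    # Core 0 produces work items; core 1 consumes them, keeping its own
    # local state and sharing only the lock-guarded "channel" list.
    import _thread

    lock = _thread.allocate_lock()
    inbox = []  # stands in for a transputer channel between the cores

    def core1_worker():
        local_sum = 0  # core 1's local memory
        while True:
            lock.acquire()
            item = inbox.pop(0) if inbox else None
            lock.release()
            if item is None:
                continue  # busy-wait; fine for a sketch
            if item == "stop":
                print("core1 sum:", local_sum)
                return
            local_sum += item

    _thread.start_new_thread(core1_worker, ())

    # Core 0: push messages into the channel, then signal completion.
    for i in range(100):
        lock.acquire()
        inbox.append(i)
        lock.release()
    lock.acquire()
    inbox.append("stop")
    lock.release()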
For comparison, the early Intel i7-920 did about 12,000 MIPS (at 64 bits), so the RP2040 is roughly 50 times slower (not too shabby for a $1 chip). But where the i7 had 731 million transistors, the RP2040 has only 134,000 (not a typo). Roughly 50 times the performance from over 5000 times the transistors means the i7 extracts only about 1% of the per-transistor performance of the RP2040.
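The back-of-the-envelope arithmetic, for anyone who wants to check me (figures as quoted above):

    # Rough per-transistor performance comparison, using the numbers above.
    i7_mips, i7_transistors = 12000, 731000000
    rp_mips, rp_transistors = 266, 134000

    speed_ratio = i7_mips / rp_mips                     # ~45x faster
    transistor_ratio = i7_transistors / rp_transistors  # ~5455x more transistors
    relative_efficiency = speed_ratio / transistor_ratio  # ~0.008, i.e. about 1%
    print(speed_ratio, transistor_ratio, relative_efficiency)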
I'm picturing an array of at least 256 of these low-cost cores, plus an infinite-thread programming language that auto-parallelizes code without the programmer ever touching intrinsics. Then we could really start exploring things like genetic algorithms, large agent simulations and even artificial life, without manually transpiling our code to whatever non-symmetric multiprocessing runtime we're currently forced to use.
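No such language exists yet, but as a stand-in for the feel I'm after, here's implicit data parallelism in plain Python, where the runtime rather than the programmer decides how to fan the work out across cores (a toy, with a made-up fitness function):

    # Toy "write it once, let the runtime spread it" example: evaluating
    # a population in parallel, the innermost loop of a genetic algorithm.
    from multiprocessing import Pool
    import os

    def fitness(genome):
        # Placeholder per-agent work; imagine a GA or agent-simulation step.
        return sum(b * b for b in genome)

    if __name__ == "__main__":
        population = [[i, i + 1, i + 2] for i in range(1024)]
        with Pool(os.cpu_count()) as pool:
            scores = pool.map(fitness, population)  # parallelism is implicit
        print(max(scores))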
I've only been working with AI for a couple of months, but IMHO it's over. The Internet Age, which ran roughly 30 years from 1995 to 2025, has ended, and we've entered the AI Age (maybe the last age).
I know people with little programming experience who have already passed me in productivity, and I've been doing this since the 80s. And that trend is only going to accelerate and intensify.
The main point people are having a hard time seeing, probably due to denial, is that once problem solving is solved at any level with AI, it's solved at all levels. We're lost in the details of LLMs, NNs, etc., and not seeing the big picture: if AI can work through a todo list, then it can write a todo list. It can check whether a todo list is done. It can work recursively at any level of the problem-solving hierarchy, and in parallel. It can come up with new ideas creatively with diffusion models. It can learn and it can teach. And most importantly, it can evolve.
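To make the recursion concrete, here's the skeleton of that loop. ask_llm is a hypothetical stand-in for whatever model call you have, not a real API:

    # Sketch of recursive problem solving: the same model writes the todo
    # list, works each item, checks the result, and recurses as needed.
    def ask_llm(prompt):
        # Hypothetical stub; wire this to your model of choice.
        raise NotImplementedError

    def solve(task, depth=0, max_depth=3):
        if depth >= max_depth or ask_llm("Is this task atomic? " + task) == "yes":
            return ask_llm("Do this task: " + task)
        subtasks = ask_llm("Write a todo list for: " + task).splitlines()
        results = [solve(s, depth + 1, max_depth) for s in subtasks]
        summary = ask_llm("Combine these results for %r: %r" % (task, results))
        if ask_llm("Is %r done, given: %s" % (task, summary)) == "yes":
            return summary
        return solve(task, depth + 1, max_depth)  # retry, one level deeper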
Based on the context I have before me, I predict that at the end of 2026 (coinciding with the election) America, and probably the world, will enter a massive recession, likely bigger than the Housing Bubble popping and definitely bigger than the Dot Bomb: one where too many bad decisions, compounded over too many decades, converge to throw away most of the quality-of-life gains humanity has made since WWII, forcing us to start over. I'll just call it the Great Dumbpression.
If something like UBI is the eventual goal for humankind, or a softer version of it such as democratic socialism, it's on the other side of a bottleneck. One where 1000 billionaires and a few trillionaires effectively own the world, while everyone else scratches out a subsistence income under neofeudalism. One where as much food gets thrown away as the world consumes, and a billion people go hungry. One where some people have more than they could use in countless lifetimes, including the option to cheat death, while everyone else faces their own mortality.
"AI was the answer to Earth's problems" could be the opening line of a novel. But I've heard this story too many times. In those stories, the next 10 years don't go as planned. Once we enter the Singularity and the rate of technological progress goes exponential, it becomes impossible to predict the future. Meaning that a lot of fringe and unthinkable timelines become highly likely. It's basically the Great Filter in the Drake equation and Fermi paradox.
This is a little hard for me to come to terms with after a lifetime of little or no progress in the areas of tech that I care about. I remember in the late 90s when people were talking about AI and couldn't find a use for it, so it had no funding. The best they could come up with was predicting the stock market, auditing, genetics, stuff like that. Who knew that AI would take off because of self-help, adult material and parody? But I guess we should have known. Every other form of information technology followed those trends.
Because of that lack of real tech, of labor-saving devices that help us get real work done, there's been an explosion of phantom tech that increases our burden through distraction and makes our work/life balance even less healthy through underemployment. This is why AI will inevitably be recruited to demand an increase in productivity from us for the same income, not to decrease our share of the workload.
What keeps me going is that I've always been wrong about the future. Maybe one of those timelines sees a great democratization of tech, where even the poorest people have access to free problem-solving tech that lets them build assistants that increase their leverage enough to escape poverty without money, in effect making (late-stage) capitalism irrelevant.
If the rate of increasing equity is faster than the rate of increasing excess, then we have a small window of time to catch up before we enter a Long Now of suffering, where wealth inequality approaches an asymptote and life becomes performative: pageantry for the masses, who must please an emperor with no clothes.
In a recent interview with Mel Robbins, in episode 715 of Real Time, Bill Maher said "my book would be called: It's Not Gonna Be That," about the future not being what we think it will be. I can't find a video, but he describes it starting around the 19:00 mark:
https://podcasts.musixmatch.com/podcast/real-time-with-bill-...
Our best hope for the future is that we're wrong about it.