Of my graduating class, very few are designing hardware. Most are writing code in one form or another. There were very few EE jobs available that didn't underpay and lock you into an antiquated skillset, whether in renewables, MRI, nuclear, controls, etc.
We had enough exposure to emerging growth areas (computer vision, reinforcement learning, GPUs) to learn useful skills, and those all had free and open source systems to study after graduation, unlike chip design.
The company sponsoring this article is a contributor to that status quo. The complete lack of grassroots support for custom chips in North America, including a dearth of open source design tools or any community around them, has made chip design a non-starter for upskilling. Nobody graduates from an EE undergrad with real capability in the field, so unless you did graduate studies, you probably just ended up learning more and more software skills.
But the relentless off-shoring of hardware manufacturing is likely the ultimate cause. These days, most interesting EE roles I see require fluency in Mandarin.
The next big step is continual learning, which enables long-term adaptive planning and "re-training" during deployment. AIs with continual learning will have a larger portion of their physical deployment devoted to the unique memories they developed through individual experience. The line between history, input context, and training corpus will blur, and deployed agents will go down long paths of self-differentiation by choosing what to train themselves on; eventually we'll end up with a diaspora of uniquely adapted agents.
Right now inference consists of one massive set of weights and biases duplicated for every consumer, plus a tiny unique memory file that gets loaded in as context to "remind" the AI of the experiences it had (or did it?) with this one user or deployment. Clearly, this is cheap and useful to scale up initially, but nobody wants to spend the rest of their life with an agent that is just a commodity image.
In the future, I think we'll realize that adding more encyclopedic knowledge is not a net benefit for most common agents (but we will provide access to niche knowledge behind "domain-specific" gates, like an MoE model but possibly via MCP call), and instead allocate a lot more physical capacity to storing and processing individualized knowledge. Agents will slow down on becoming more book smart, but will become more street smart. Whether this "street smart" knowledge ever gets relayed back to a central corpus probably depends mostly on the incentives for the agent.
Certainly my biggest challenge after a year of developing an industrial R&D project with AI assistance is that it needs way, way more than 400k tokens of context to understand the project properly. The emerging knowledge graph tools are a step in the right direction, but they're not nearly integrated enough. From my perspective, we're facing a fundamental limitation: as long as we're on the Transformers architecture with O(n^2) attention scaling, I will never get a sufficiently contextualized model response. Period.
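To make the scaling complaint concrete, here's a toy back-of-the-envelope sketch (my own illustration, not any real model's numbers; the head dimension is an assumed placeholder) of why quadratic attention bites at long context: the score matrix alone is n x n, so doubling the context roughly quadruples that work.

```python
def attention_score_flops(n_tokens: int, d_head: int = 128) -> int:
    """Rough FLOPs to form the n x n attention score matrix (Q @ K^T):
    each of the n*n entries is a length-d dot product (~2*d FLOPs)."""
    return 2 * n_tokens * n_tokens * d_head

base = attention_score_flops(100_000)
doubled = attention_score_flops(200_000)
print(doubled / base)  # quadratic scaling: 2x the tokens, ~4x the work
```

Linear-time architectures avoid this n^2 term, which is why they're attractive for the very long, accumulating contexts described above.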
You might notice this yourself if you ask Claude 4.5 (knowledge cutoff Jan 2025) to ramp up on geopolitical topics over the past year. It is just not physically possible in 400k tokens. Architectures like Mamba or HOPE or Sutton's OAK may eventually fix this, and we'll see a long-term future resembling Excession, where individual agents develop in enormously different ways, even if they came from the same base image.
It feels like that would be a much simpler way to get to net zero than having to reinvent all of the infrastructure.
So much simpler that I wonder why anyone would keep trying on hydrogen. Which makes me darkly suspect that the goal is to take our attention off the solution that's already being deployed, i.e. wind and solar.
H2 does not make any sense whatsoever.
Generating several "competing constructs" used to be a wanton misappropriation of resources, but now it's not only viable... it's cheap. Comparing these constructs, however, has not necessarily become easier.
Leaders need to avoid accepting a plan merely because it has the surface appearance of competence (they will all appear internally coherent), and instead spend more effort determining which difficult questions have been left unasked. As we learned from HHGttG, the answer is the easy part.