The entire institution is a pay-to-play criminal outfit, throw it in the garbage along with its employees.
don't care. prove effective context length or gtfo.
There is no chance that early fusion plants will be small enough to justify building them in the same building as a factory. They will start large.
> For example, aluminum requires ~14-17MWh to produce 1 ton
The Hall–Héroult process runs at 950 °C, just below the melting point of copper and close to twice the temperature of steam entering the turbines. That heat is not something that can be piped around casually: as a gas it will always be at very high pressure, because lowering the pressure cools it down. Molten salt or something similar is required to transport that much heat as a liquid, and every pipe glows orange. Any industrial process using it would effectively be part of the power plant, simply because of how difficult it is to transport that heat away.
Also NB that the Hall–Héroult process is for producing aluminum from ore; recycling is the primary way we make aluminum.
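For scale, a back-of-envelope check (my own assumed plant size, not a figure from the parent): at the quoted ~14-17 MWh per ton, even a mid-size primary smelter draws on the order of a gigawatt of continuous power, i.e. roughly one whole power plant's worth.

```python
# Rough sanity check of smelter power draw. The 500,000 t/yr plant
# size is an assumption for illustration; the MWh/t figure is the
# midpoint of the ~14-17 MWh/t range quoted above.
MWH_PER_TON = 15
TONS_PER_YEAR = 500_000
HOURS_PER_YEAR = 8_760

avg_power_mw = MWH_PER_TON * TONS_PER_YEAR / HOURS_PER_YEAR
print(f"average continuous draw: {avg_power_mw:.0f} MW")
```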
Industrial parks centered around power plants might become a thing in the future, viewed as essential infrastructure investment.

Heat transport could become an entire sub-industry unto itself, adding efficiency and cost savings for conglomerates that choose to partner with companies that invest in and build power plants.
The engineering challenges are so massive that even if they can be solved, which is far from certain, at what cost? With a dense, high-energy plasma, you're dealing with a turbulent fluid where any imperfection in your magnetic confinement will likely damage the container.
People get caught up on cheap or free fuel and the fact that stars do this. The fuel cost is irrelevant if the capital cost of a plant is billions and billions of dollars. That has to be amortized over the life of the plant. Producing 1GW of power for $100 billion (made up numbers) is not commercially viable.
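Running the parent's (explicitly made-up) numbers through a crude amortization makes the point concrete; the 40-year life and 90% capacity factor below are my own assumptions:

```python
# Capital cost per kWh, ignoring fuel, O&M, and financing entirely.
# $100B and 1 GW are the made-up numbers from the comment above;
# lifetime and capacity factor are assumed for illustration.
capital_usd = 100e9
power_kw = 1e6              # 1 GW expressed in kW
years = 40
capacity_factor = 0.9

lifetime_kwh = power_kw * years * 8_760 * capacity_factor
capital_per_kwh = capital_usd / lifetime_kwh
print(f"capital cost alone: ${capital_per_kwh:.2f}/kWh")
```

That comes out to roughly $0.32/kWh from capital alone, several times typical wholesale electricity prices, before a single dollar of fuel or maintenance.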
And stars solve the confinement problem with gravity and by being really, really large.
Neutron loss remains one of the biggest problems. Not only does this damage the container (i.e. "neutron embrittlement"), but it's a significant energy loss for the system, and so-called aneutronic fusion tends to rely on rare fuels like helium-3.
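To see why the neutron loss isn't a rounding error: in the standard D-T reaction the yield splits into a 3.5 MeV alpha and a 14.1 MeV neutron, so the neutron carries the large majority of the energy out of the plasma.

```python
# Energy split of the D-T fusion reaction (well-established values):
# D + T -> He-4 (3.5 MeV) + n (14.1 MeV), 17.6 MeV total.
alpha_mev = 3.5
neutron_mev = 14.1

neutron_fraction = neutron_mev / (alpha_mev + neutron_mev)
print(f"neutron share of D-T yield: {neutron_fraction:.0%}")
```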
And all of this to heat water to create steam and turn a turbine.
I see solar as the future. No moving parts, and it's the only common form of direct electricity generation (no thermal cycle). Cheap and getting cheaper, and there are solutions for the lack of generation at night (e.g. batteries, long-distance power transmission).
Depleted uranium is one example, but it has terrible implications: the radioactive pollution that would result, disposal costs and risks, etc.
Surprised there's not more research into metamaterials and alloys that are neutron-resistant, neutron-moderating, or neutron-absorbing.
Especially if I refuse to debate him and instead hurl insults at him and viciously deride him.
The same is true of the ordinary and the middle-of-the-road people when it comes to fascism.
The best way to create fascists is to attack and histrionically go after non-fascists and demand they conform to our way of thought.
Just by being left-wing and going after people out of disgust over their opinions, I've accidentally alienated more people and created more fascists than any of these limp-wristed right-wing conservatives could ever hope to create.
I only realized it years later.
Radicalism begets radicalism.
Model innovation is effectively converging and slowing down considerably. The big companies doing the research in this space are not making leap after leap with each release, and the downstream open-source projects are coming close to, or in fact matching, the same quality (e.g. DeepSeek or Llama), which is why it's becoming a commodity.
Around the edges, model innovation - particularly speed-ups in returning accurate results - will help companies differentiate, but fundamentally all this tech is shovels in search of miners, i.e. you aren't really going to make money hand over fist simply by being an LLM provider.
In other words, this latest innovation has hit commodity status within a few short years of going mainstream, and the winners are going to be the companies that build products on top of this tech. As the tech continues to commoditize, the value proposition for pure research companies drops considerably relative to application builders.
To me this leaves a central question: when does it hit a relative equilibrium, where the technology and the applications on top of it have largely exhausted their ability to add utility to applicable situations? That's the next question, and I think the far more important one.
One other thing, at the end of the article they wrote:
>Ultimately, businesses won’t rearrange themselves around AI — the AI systems will have to meet businesses where they are.
This is demonstrably untrue. CEOs are champing at the bit to reorganize their businesses around AI, as in, AI doing things humans used to do with the same effective results or better, so they can reduce staff across the board while supposedly maintaining the same output.
Look at the leaked Shopify memo for an example, or the "I can vibe code with an LLM, making software engineers obsolete" trend that has taken off as of late, if LinkedIn is to be believed.
Model providers and model labs stop open-sourcing and publishing their innovations and papers, and start patenting instead.
"who made you this way?"
"you did."
- american politics circa 2025
There are already examples of this in the wild: language and vision models not just performing scientific experiments, but coming up with new hypotheses on their own, designing experiments from scratch, laying out plans for carrying them out, instructing human helpers to execute them, gathering data, and validating or invalidating the hypotheses.
The open question is whether we can derive a process, come up with data, and train models such that they can 1. detect when a task or question is outside the training distribution, and 2. come up with a process for exploring that new distribution such that they (eventually) arrive at an acceptable answer, if not a good one.
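For point 1, one crude but well-known baseline is to flag inputs where the model's peak softmax confidence is low. The function names and the threshold below are illustrative assumptions, a sketch of the idea rather than anything from the thread:

```python
# Max-softmax-probability OOD baseline: near-uniform logits suggest the
# model hasn't seen anything like this input; a confident peak suggests
# in-distribution. Threshold of 0.5 is an arbitrary illustrative choice.
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def looks_out_of_distribution(logits, threshold=0.5):
    """Flag an input as suspect when peak confidence is below threshold."""
    return max(softmax(logits)) < threshold

# A confidently classified input vs. a near-uniform (uncertain) one:
print(looks_out_of_distribution([8.0, 0.1, 0.2]))   # False
print(looks_out_of_distribution([0.9, 1.0, 1.1]))   # True
```

Real systems layer much more on top of this (calibration, ensembles, density estimates over activations), but the detect-then-explore split the comment describes starts with exactly this kind of signal.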