Seems to me like a cleaner, more "futuristic" weapon. The "hobby" version of this weapon photographed in the article already looks quite clean.
Makes you wonder if we're training LLMs the hard way. For example, if computers had been invented before Calculus, we'd have been using "Numerical Integration" (summing up thin rectangular slices to approximate areas, etc.) and "Numerical Differentiation" (ditto for approximating slopes).
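The "hard way" above can be sketched in a few lines. This is just an illustration of the brute-force approach; the function, interval, and step sizes are arbitrary choices of mine.

```python
# Brute-force "pre-Calculus" numerics: approximate an integral by summing
# thin rectangles, and a derivative by a finite difference.

def numerical_integral(f, a, b, n=100_000):
    """Approximate the area under f on [a, b] by summing n thin rectangles."""
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

def numerical_derivative(f, x, h=1e-6):
    """Approximate the slope of f at x with a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Integrating x^2 from 0 to 1 comes out near 1/3, and the slope of x^2
# at x=3 comes out near 6 -- the answers Calculus hands you exactly.
print(numerical_integral(lambda x: x * x, 0.0, 1.0))
print(numerical_derivative(lambda x: x * x, 3.0))
```

Calculus collapses each of those loops into a closed-form answer, which is the analogy being drawn to gradient-descent training.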
So I wonder if we're simply in a pre-Calculus-like phase of NN/Perceptrons, where we haven't yet realized there's a mathematical way to "solve" a bunch of equations simultaneously and arrive at the best (or some local minimum) model weights for a given NN architecture and set of training data.
From a theoretical standpoint it IS a black-box problem like this: the set of training data goes in, and an array of model weights comes out. If I were to guess, I'd bet there'll be some kind of "random seed" we can add as input, and for each seed we'd get a different local minimum/maximum for the model weights.
But I'm not a mathematician and there may be some sort of PROOF that what I just said can definitely never be done?
Far from unlimited. Almost all fusion power plant concepts are thermal power plants. These directly contribute to global warming, and you can’t scale up by more than a couple of orders of magnitude without causing significant climate change.
In the end you’d need to use panels to radiate the waste heat directly to space, but then you have similar land use limitations as solar panels.
There’s no reason to think we actually need that energy. Advanced deep geothermal is probably much easier than fusion. With that you’ve got geothermal near the poles, solar everywhere else and wind/hydro/wave/tide to supplement based on needs and availability.
Helion's fusion power plant concept is not thermal, so we can probably make a lot more electricity without heating up the planet too much. In the end, though, it too will be limited by the heat added to the planet.
This is one of the reasons I suggest people buy a house rather than renting. The financials may not always make sense, but it forces savings in a way that many people would not otherwise do.
How many people do you think would manually pay a large chunk of their paycheck into a savings account every month before deciding whether they can afford an extra order of curly fries?
Rent eats first, and having a mortgage lets you actually save some of that (once you get past the substantially interest-heavy early years).
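To see how interest-heavy those early years are, here's a rough sketch. The loan size, rate, and term are made-up illustrative numbers, not a claim about any real market.

```python
# Standard fixed-rate mortgage amortization: in year one, most of each
# payment goes to interest, and only a sliver to principal (i.e. savings).

def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a fully amortizing loan."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def first_year_split(principal, annual_rate, years):
    """Return (interest_paid, principal_paid) over the first 12 payments."""
    pay = monthly_payment(principal, annual_rate, years)
    r = annual_rate / 12
    balance = principal
    interest = 0.0
    for _ in range(12):
        i = balance * r          # interest owed this month
        interest += i
        balance -= pay - i       # the rest of the payment reduces the balance
    return interest, 12 * pay - interest

# Hypothetical: $300k at 6% over 30 years.
interest, principal_paid = first_year_split(300_000, 0.06, 30)
print(round(interest), round(principal_paid))
```

With those numbers, roughly four-fifths of the first year's payments are interest, which is the "rent eats first" dynamic in slow reverse.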
> The 401k was an experiment, and that experiment has largely failed.
Retirement accounts are not the problem. It’s a general lack of financial education. People can barely calculate simple interest. Compound interest is even less natural. Calculating the present value of a fixed payment annuity in 25 years? We’re a slim minority that can do that from scratch.
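For what it's worth, the calculations named above are short once written down. The rates, amounts, and horizon here are arbitrary illustrative assumptions of mine:

```python
# Compound vs. simple interest, and the present value of a fixed-payment
# annuity -- the textbook formulas, nothing more.

def compound(principal, annual_rate, years):
    """Future value with annual compounding."""
    return principal * (1 + annual_rate) ** years

def annuity_present_value(payment, annual_rate, years):
    """Present value of a fixed payment received once a year for `years` years."""
    r = annual_rate
    return payment * (1 - (1 + r) ** -years) / r

# $10,000 at 5% for 25 years, compound vs. simple:
print(compound(10_000, 0.05, 25))     # compound grows multiplicatively
print(10_000 * (1 + 0.05 * 25))       # simple grows linearly

# What is a $40k/year stream lasting 25 years worth today at 5%?
print(annuity_present_value(40_000, 0.05, 25))
```

The point stands: most people will never derive these from scratch, which is exactly why rules of thumb carry so much weight.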
Some of the best savers are the ones who do not even understand the financial constructs, but take the savings rates and methods as gospel, faithfully putting away percentages of their salaries that are orders of magnitude higher than everyone else's.
> The answer is probably a more generous and better funded Social Security, except that fund keeps getting raided for other purposes.
Social Security is claimed to be "money you paid in". But that cannot be true if what you get out grossly exceeds what you paid in; the system simply would not work without another revenue source.
People have to pay massively more into it over their entire working lives.
And if someone wants a pension the financial product exists. Except nobody wants to actually pay for it because an inflation adjusted guaranteed annuity for life is incredibly expensive! So the present value of it divided out would be more than people are willing to actually pay.
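A back-of-the-envelope sketch of why such an annuity is so expensive. The figures are purely illustrative assumptions on my part: $50k/year in today's dollars, a 30-year payout, and a 1% real (after-inflation) discount rate.

```python
# Discounting an inflation-adjusted payment stream at a *real* rate gives
# its cost in today's dollars. Real insurers also charge for longevity
# risk, so actual quotes run higher than this.

def real_annuity_cost(payment, real_rate, years):
    """Present value of a fixed real payment, discounted at the real rate."""
    return payment * (1 - (1 + real_rate) ** -years) / real_rate

print(round(real_annuity_cost(50_000, 0.01, 30)))
```

At those assumptions the lump sum needed is well over a million dollars, which is the sticker shock being described.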
Since nobody can really say what a good AI department does, companies seem to be driven by credentialism, loading up on machine learning PhDs and masters so they can show their board and investors that they are ready for the AI revolution. This creates economic pressure to write such papers, the vast majority of which will amount to nothing.
It made me a bit nostalgic trying it out again, though. I used Atom for about a year before switching to VS Code, and I remember the vibrant community around it. It definitely fulfilled its goal of being hackable, since there were extensions that completely extended the UI in some pretty neat/silly ways.