DoctorOetker · a year ago
I presume what comes next is the down-to-earth "Deep Mathematics", "Deep Differential Algorithms and Differential Datastructures", and "Deep Physics" sobering-up.

There is a gradual historical transfer of problems: they first reside in a vague "Aristotelian logic" phase, then pass through ever more formal phases, first quantitative-descriptive, and only then the normative formalization (i.e. figures of merit etc.).

When physicists and engineers apply RMAD (reverse-mode automatic differentiation) to known and understood (but computationally intensive) total potential / Lagrangian / total-figure-of-merit... functions, people don't call it AI, not even necessarily machine learning.
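
A minimal sketch of that distinction, in Python with the jax library (the potential, constants, and step size below are invented for illustration): reverse-mode AD computes the gradient of an explicit, hand-written energy function, and plain gradient descent walks to the minimum. Nobody calls this AI, yet it is mechanically the same thing that trains an LLM.

    # Toy sketch, assuming JAX: reverse-mode AD on an explicit potential,
    # minimized by plain gradient descent. All numbers are invented.
    import jax
    import jax.numpy as jnp

    def potential(x):
        # "formally specified" energy: an anharmonic well
        return 0.5 * jnp.sum(x ** 2) + 0.1 * jnp.sum(x ** 4)

    grad_fn = jax.grad(potential)  # reverse-mode autodiff, i.e. backprop

    x = jnp.array([2.0, -1.5, 0.7])
    for _ in range(200):
        x = x - 0.05 * grad_fn(x)  # step downhill toward the minimum

    print(x)  # near the zero-energy configuration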

When RMAD is used to optimize vague intuitionistic reasoning, as in LLMs, we do call it AI.

Given the gradual transfer of problems from the vague domain to the explicit domain (formalize or fossilize), "after AI" comes the very same RMAD, but applied to find optima for systems that are formally specified and understood in the reductionist sense but not in the emergent sense (i.e. a computer does not behave like a giant transistor).

christkv · a year ago
Spending on training might be getting close to its peak, but I can't imagine that spending on inference will slack off. For inference, though, I imagine it will be all about power efficiency, to bring the cost down over time for end users and businesses.

wegfawefgawefg · a year ago
This is a worthless article completely devoid of substance. The author just wants to see it fail and is virtue signaling about potential harms or something.

The truth is we do not know if AI will keep growing or stagnate, in the same way we don't know if Moore's law will keep going or die each year.

The only way AI stops getting better is if computers stop delivering more compute per kWh.

VieEnCode · a year ago
There are plenty of legitimate points made in the article. It also seems uncontroversial at this point to claim that companies are struggling to justify the expenses and inflated share prices brought about by the rush to implement LLMs in as many use cases as possible.

As far as I am aware, the author is right to claim that there is no solution to AI hallucination on the horizon, which is a severely limiting problem. Also, I understand we are reaching the limits of the useful training data available. Both factors suggest current AI improvement simply cannot follow the same trajectory as transistors under Moore's law.

wegfawefgawefg · a year ago
I just don't see hallucination as that big of a problem. I use Copilot and GPT daily and only have that issue on occasion. It's a once-every-few-hours issue.

The average person just doesn't care about hallucination that much and probably doesn't even notice half the time.

Costs for gigantic models are really high now, but unless Moore's law stops they will come down. If Moore's law does stop, we have a bigger problem. All the other small models that were already useful 10 years ago have gotten so cheap that you can train them on a MacBook and deploy them on microcontrollers (sound/image identification and detection across fleets of microphones/cameras). That was big ML 8 years ago.

LLMs are not all of AI. There are tons of use cases other than just chatbots. I think people forget that there are models doing depth alignment, trajectory planning, car tracking, license plate reading, etc., and doing it well.

Everyone is just burned out on the relentless advertising of GPT and is conflating that with all of AI. While this man is getting mad at sama for being cringe and hoping the AI world crashes and burns, the general technology is just propagating outwards to people who are actually using it.

tivert · a year ago
> The truth is we do not know if AI will keep growing or stagnate, in the same way we don't know if Moore's law will keep going or die each year.

> The only way AI stops getting better is if computers stop delivering more compute per kWh.

You do not know that any more than you know "if Moore's law will keep going or die each year." It's a reasonable possibility that AI stops getting better because the current techniques hit a wall that can't be solved with more/cheaper compute, and better techniques aren't forthcoming.

I suspect you "believe" in AI, and so are applying double standards to support it.

wegfawefgawefg · a year ago
I am agnostic to its success or continued development.

I do not think there is any hard evidence to indicate it will suddenly stop getting better. We have only seen consistent progress, and the cost of inference per kWh keeps going down. Additionally, the industry is figuring out minified models. A few years ago GPT-3-scale models filled datacenters; now they fit on an iPhone. We will probably see them sneak onto drone flight controllers, into toys as ambulation chips, into cheap security cameras, etc.
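
One piece of that shrinkage, sketched in plain Python/NumPy (the layer size and scheme are simplified for illustration; real pipelines are far more involved): post-training quantization stores weights as int8 codes plus a scale factor instead of float32, which alone cuts memory roughly 4x.

    # Toy sketch of post-training weight quantization; NumPy assumed,
    # sizes invented. float32 weights -> int8 codes + one scale factor.
    import numpy as np

    w = np.random.randn(4096, 4096).astype(np.float32)  # one "layer"

    scale = np.abs(w).max() / 127.0                      # per-tensor scale
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    w_hat = w_q.astype(np.float32) * scale               # dequantize at use time

    print(w.nbytes / w_q.nbytes)     # 4.0: int8 is a quarter the size
    print(np.abs(w - w_hat).max())   # worst-case rounding error, ~scale/2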

The idea that a new technology has never required a jump in the scale of supply chains and energy consumption is false (see steel/computers/shipping/etc.).

As GPUs keep improving each year, the cost of training a model keeps coming down. More companies will have the opportunity to train models. Four years ago you needed a hundred thousand dollars to train some of those toy DeepMind RL agents. Now you can rent an H100/A100 for $1-5k for a month and go ham. Give it 10 years and nerdy high schoolers will be buying old H100s for their desks, just like we bought PowerPC server blades with half a terabyte of RAM for 10 bucks on Craigslist. (Unless Moore's law stops, but then we have a bigger problem.)

I do not speculate on whether AI will turn into a god or something like that. It just seems more likely to me that we are going to get more AI, not less. I genuinely do not see real evidence of a wall. People just want it to fail.
