DAT-style content updates and signature-based prevention are very archaic. Directly loading content into memory and a hard-coded list of threats? I was honestly shocked that CS was still doing DAT-style updates in an age of ML and real-time threat feeds. A number of vendors have offered alternatives for almost a decade. We use one, and we only have to run updates a couple of times a year.
SMH. The '90s want their endpoint tech back.
You might not agree with him all the time, but he has some good arguments and seems sincere in his criticisms. Better Offline is worth a listen.
If in 10 years all we have are better chatbots and image generators, I'd say it was a bubble, and I don't see anything that says that's definitely not the path (though I'm not in the weeds of AI, so maybe it's just not obvious yet).
For example, in healthcare (because... day job), you will be interacting with an AI as the first step for your visits/appointments, AI will work with you to fill out your forms/history, your chart will be created by AI, your x-ray and lab results will be read by AI first, and your discharge instructions will be created on the fly with AI... etc. etc. etc. This tech is deploying today. Not in a year, today. The only things holding it up are cost and staff training.
The "Great Minds And Great Leaders" types are rushing to warn about the risks, as are a large number of people who spend a lot of time philosophizing.
But the actual scientists on the ground -- the PhDs and engineers I work with every day, who have been in this field, at the bench, doing the work on the latest generation of generative models (and previous generations, in some cases for decades)? They almost all roll their eyes aggressively at these sorts of prognostications. I'd say 90+% of them either laugh or roll their eyes.
Why is that?
Personally, I'm much more on the side of the silent majority here. I agree with Altman's pushback against the regulatory-capture criticisms: they're probably unfair, or at least inaccurate.
What I actually think is going on here is something more about Egos than Greatness or Nefarious Agendas.
Ego, not intelligence or experience, is often the largest differentiator between the bench scientist or mid-level manager/professor persona and the CEO/famous professor persona. (The other important thing, of course, is that the former is the group doing the actual work.)
I think that most of our Great Minds and Great Leaders -- in all fields, really -- are not actually our best minds and best leaders. They are, instead, simply our Biggest Egos. And that those people need to puff themselves up by making their areas of ownership/responsibility/expertise sound Existentially Important.
AI will be transformative, but it's more likely to follow previous transformations: unintended consequences, sure, but largely an increase in the standard of living, productivity, and economic opportunity.
Best bit of the article.
Question for someone who knows more about this stuff: how likely is it that you get the same response to the same prompt with GPT? Does it have some kind of random seed applied behind the scenes?
-edit- Thank you for the responses. TIL.
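For anyone landing here later: yes, the output is sampled, so randomness is injected at generation time via the temperature and top_p parameters, and the OpenAI API also exposes a best-effort seed parameter. Here's a minimal sketch with the OpenAI Python client (the model name and seed value are just placeholders I picked); note that even with temperature 0 and a fixed seed, identical outputs aren't guaranteed, because the backend configuration can change between calls:

    # Minimal sketch using the OpenAI Python client (pip install openai).
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,        # near-greedy decoding: favor the most likely token
            seed=42,              # best-effort reproducibility, not a hard guarantee
        )
        # system_fingerprint identifies the backend config used for this call;
        # if it differs between calls, outputs may differ even with the same seed.
        print(resp.system_fingerprint)
        return resp.choices[0].message.content

    print(ask("Name three prime numbers."))
    print(ask("Name three prime numbers."))  # often, but not always, identical

Comparing the system_fingerprint across two calls is how you tell whether the backend changed out from under you; if it did, the same seed can still produce different text.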