It's up to you to learn not to doomscroll once it burns through your personal feed and starts showing you garbage.
Five years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language, which was something clearly beyond what computers would ever be able to do. The implication was that there was some magic sauce in human brains that couldn't be replicated in silicon (by us). That 'facility with language' argument has clearly fallen apart over the last 3 years and been replaced with what appears to be a different magic sauce, composed of the phrase 'not really thinking' and the whole 'just repeating what it's heard / stochastic parrot' argument.
I don't think LLMs think or will reach AGI through scaling, and I'm skeptical we're particularly close to AGI in any form. But I feel like it's a matter of incremental steps. There isn't some magic chasm that needs to be crossed. When we get there, I think we will look back and see that 'legitimately thinking' wasn't anything magic. We'll look at AGI and instead of saying "isn't it amazing computers can do this," we'll say "wow, is that all there is to thinking like a human?"
This is the definition of the word ‘novel’.
I don't need an AGI. I do need a secretary-type agent that deals with all the simple yet laborious non-technical tasks that keep infringing on my quality engineering time. I'm CTO of a small startup, and the amount of non-technical bullshit I need to deal with is enormous. Some examples of random crap I deal with: figuring out contracts, their meaning/implications for a given situation, and deciding on a course of action; customer offers; price calculations; scraping invoices from emails and online SAAS accounts; formulating detailed replies to customer requests; HR legal work; corporate bureaucracy; financial planning; etc.
A lot of this stuff can be AI-assisted (and we get a lot of value out of AI tools for this), but context engineering is taking up a non-trivial amount of my time. Also, most tools are completely useless at modifying structured documents. Refactoring a big code base? No problem. Adding structured text to an existing structured document? Hardest thing ever. The state of the art here is an ff-ing sidebar that suggests markdown-formatted text you might copy/paste. Tool quality is very primitive. And then you find yourself just stripping all formatting and reformatting it manually, because the tools really suck at this.
This doesn’t sound like bullshit you should hand off to an AI. It sounds like stuff you would care about.
In general, if you find yourself thinking a group of people are "just being dramatic" then you're probably missing context.
"I certainly condemn the killing of innocent civilians". No you don't. That's BS you're telling yourself so you can feel unconflicted about what should be a simple moral calculus.
I'm not sure what certifies your morals: God, logic, or whatever it is that tells you the way you live your life is justified.
I do know that I have morals, though.
"Don't murder people" is pretty easy for me to justify categorically.
If you have to put a big [*] next to that which says "unless my boss tells me to kill someone, in which case it's okay," then you really don't have any morals.
That math is easy for most folks to do.
The thing that probably keeps you from being able to do that math is some relative certainty that you personally will never have to sit on the "risk/benefit analysis" board for these kinds of murders.
But that's an error.