I was recently at a big three-letter pharmacy company, and I can't be specific, but just let me say this: they're always on the edge of having their main websites go down for one reason or another. It's a constant battle.
How is adding more AI complexity going to help any of that when they don't even have a competent enough workforce to manage the complexity as it is today?
You mention VR--that's another huge flop. I got my son a VR headset for Christmas in like 2022. It was cool, but he couldn't use it for long or he got nauseous. I was like, "okay, this is problematic." I really liked it in some ways, but sitting around with that goofy thing on your head wasn't a strong selling point at all. It just wasn't.
If AI can't start doing things with accuracy and cleverness, then it's not useful.
I've thought a lot about where this belief comes from: the general Hacker News skepticism towards AI, and especially towards big tech's promotion of and alignment with it in recent years. I believe it's due to fear of irrelevance and loss of control.
The type I've seen most passionately dismissive of the utility of LLMs is the veteran, highly "tech-for-tech's-sake" software/hardware person, far closer to Wozniak than to Jobs on the Steve spectrum. These types typically earned their stripes working in narrow intersections of mission-critical domains: open-source software, systems development, low-level languages, and the like.
To these people, an all-purpose oracle capable of massive data ingestion and effortless inference represents a death knell for their relative status and value. AI's likely trajectory heralds a world where intelligence and technical ability are commodified and ubiquitous, robbing a sense of purpose and security from those whose purpose and security depend on their position in a rare echelon of intellect.
This increasingly likely future is made all the more infuriating by the annoyances of the present reality of AI. The fact that AI is so inescapable despite the glaring security flaws it introduces, the slop it propagates through the information commons, and the particularly irksome brand of overconfidence it emboldens in the VC world is preemptive insult to injury in the lead-up to a reality where AI will nevertheless control everything.
I can't believe these types I've seen on this site aren't smart enough to see the forest for the trees on this matter. My Occam's razor conclusion is that most are smart enough; they're just emotionally invested in anticipating a future where the grand promises of AI fizzle out and it's back to business as usual. For many, this is a salve necessary to remain reasonably sane.
I read a story about 14-year-olds who are adopting AI boyfriends. They spend 18 hours a day in conversation with chatbots. Their parents are worried because they are withdrawing from school and losing their friends.
I hate second-guessing emails that I've read, wondering if my colleagues are even talking to me or if they are using AI. I hate the idea that AI will replace my job.
Even if it unlocks "economic value"--what does that even mean? We'll live in fucking Blade Runner, but at least we'll all have a ton of money?
I agree, nobody asked what I wanted. But if they did, I'd tell them: I don't want it. I don't want any of it.
Excuse me, I'll go outside now and play with my dogs and stare at a tree.
Not just by companies. We see this from enthusiastic consumers as well, on this very forum. Or it might just be astroturfing; it's hard to tell.
The mantra is that in order to extract value from LLMs, the user must have a certain level of knowledge and skill in how to use them. "Prompt engineering", now reframed as "context engineering", has become the practice that separates those who feel these tools waste more of their time than they help from those who feel they're many times more productive. The tools themselves are never the issue. Clearly it's the user who lacks skill.
This narrative permeates blog posts and discussion forums. It was recently reinforced by a misinterpretation of a METR study.
To be clear: using any tool to its full potential does require a certain skill level. What I'm objecting to is the blanket statement that people who don't find LLMs to be a net benefit to their workflow simply lack the skills to use them. This is insulting to smart and capable engineers with many years of experience working with software. LLMs are not some alien technology that requires a degree to use correctly. Understanding how they work, feeding them the right context, and being familiar with the related tools and concepts do not require an engineering specialization. Anyone claiming they do is trying to sell you something: either LLMs themselves, or the idea that they're more capable than those criticizing this technology.
I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt.
My work is pushing these tools hard, and it is taking a huge toll on me. I'm constantly hearing how life-changing this is, but I cannot replicate it no matter what I do.
I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like either I'm extremely unskilled or everyone else is gaslighting me, with basically nothing in between.
I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to get right. Sometimes it won't even run!
The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!
It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do; I feel I'm falling behind and being singled out.
I probably need to change employers to get away from AI usage metrics at this point, but it feels like everyone everywhere is guzzling the AI hype. It feels hopeless.
There is so much money needed to solve other problems, especially in health.
I don't blame the newcomers; I blame Zuckerberg.
> i used chatgpt for the first time today and have some lite rage if you wanna hear it. tldr: it wasn't correct. i thought of one simple task that it should be good at and it couldn't do that.
> (The Kangxi radicals are neatly in order in Unicode, so you can just ++ through 'em. The CJKs are not. I couldn't see any clear mapping, so I asked GPT to do it. Big mess I had to untangle manually; anyway, it woulda been faster to look them up by hand (there's 214).)
> The big kicker was like, it gave me 213. And I was like, "why is one missing?" Then I put it back in and said count how many numbers are here, and it said 214, and there just weren't. Like, come on, you SHOULD be able to count.
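For what it's worth, the mapping the commenter wanted actually ships with Unicode: each code point in the Kangxi Radicals block (U+2F00 through U+2FD5, all 214 in radical order) carries a compatibility decomposition to its corresponding CJK unified ideograph. A minimal Python sketch of the quoted task (my illustration, not anything the commenter ran):

```python
import unicodedata

# Kangxi Radicals block: U+2F00..U+2FD5, 214 code points in radical order.
KANGXI_START, KANGXI_END = 0x2F00, 0x2FD5

mapping = {}
for cp in range(KANGXI_START, KANGXI_END + 1):
    radical = chr(cp)
    # NFKC applies the compatibility decomposition, turning a Kangxi
    # radical into its corresponding CJK unified ideograph
    # (e.g. U+2F00 KANGXI RADICAL ONE -> U+4E00).
    mapping[radical] = unicodedata.normalize("NFKC", radical)

print(len(mapping))           # 214 -- nothing silently dropped
first = mapping[chr(KANGXI_START)]
print(f"U+{ord(first):04X}")  # U+4E00
```

A dozen lines of ordinary code does, deterministically and verifiably, what the model fumbled.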
If you can make the language models actually interface with what we've been able to do with computers for decades, I imagine many paths open up.
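That interfacing already has a common shape: tool calling, where the model emits a structured request and deterministic code computes the exact answer. A hypothetical sketch of the pattern (the tool names and dispatch format are illustrative, not any particular vendor's API):

```python
import json

# Deterministic "tools" the model can delegate to instead of guessing.
TOOLS = {
    "count_items": lambda items: len(items),
    "codepoint": lambda ch: f"U+{ord(ch):04X}",
}

def dispatch(tool_call: str) -> str:
    """Execute a model-emitted tool request and return the exact result,
    which would be fed back into the model's context verbatim."""
    call = json.loads(tool_call)
    result = TOOLS[call["name"]](*call["args"])
    return json.dumps({"name": call["name"], "result": result})

# Instead of the model counting radicals itself, it asks:
print(dispatch('{"name": "count_items", "args": [["\\u2f00", "\\u2f01", "\\u2f02"]]}'))
# -> {"name": "count_items", "result": 3}
```

Counting, sorting, code-point lookup: the model only has to decide which decades-old capability to invoke, not to reproduce it token by token.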