The difference, it seems, is that I’ve been looking at these tools and thinking about how I can use them in creative ways to accomplish a goal - not just treating them like a magic button that solves every problem without fine-tuning.
To give you a few examples:
- There is something called the Picture Superiority Effect: humans remember images better than words alone. I have been interested in applying this to language learning – imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now, with Midjourney or other image generators, I can produce effectively unlimited unique images for $30 a month. That simply wasn’t possible before.
- I have been working on a list of ways to use AI for “thinking” about or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check whether this idea is original; rephrase this argument as a series of Socratic dialogues. And so on. This has been immensely helpful in evaluating my own personal essays and ideas, and before AI tools it, again, was not really possible unless I hired someone to critique my work. (A rough sketch of how this could be scripted appears below.)
The key to both of these use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to have them do all the work for me.
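To make the second use case concrete, here is a rough sketch of how it could be scripted. The OpenAI Python SDK, the model name, and the exact prompt wordings are illustrative assumptions on my part, not a recommendation of any particular tool; any LLM API would do:

```python
# Rough sketch: run a fixed set of "thinking" prompts over an essay.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

ANALYSIS_PROMPTS = [
    "Analyze the assumptions in this piece.",
    "Find related concepts with genealogical links to its main ideas.",
    "Is the central idea original? If not, point to prior art.",
    "Rephrase the core argument as a series of Socratic dialogues.",
]

def critique(essay: str) -> dict[str, str]:
    """Ask the model each analysis question about the essay."""
    results = {}
    for prompt in ANALYSIS_PROMPTS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable model works
            messages=[
                {"role": "system", "content": "You are a critical reader and editor."},
                {"role": "user", "content": f"{prompt}\n\n---\n\n{essay}"},
            ],
        )
        results[prompt] = response.choices[0].message.content
    return results

if __name__ == "__main__":
    with open("essay.txt") as f:
        essay_text = f.read()
    for prompt, answer in critique(essay_text).items():
        print(f"## {prompt}\n{answer}\n")
```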
AI receives so much funding and support from the wealthy because they believe they can use it to replace humans and cut labor costs. I strongly suspect that AI being available to us at all is merely a plot to get us to train and troubleshoot the tech for them so it can imitate us more perfectly. Then, once the tech is "good enough", it will rapidly become too expensive for normal people to use, and thus inaccessible.
Companies are already mass-firing their staff in favor of AI agents even though those agents don't do the job well. Imagine how it will be when they do.
It is important that this be repeated ad nauseam with AI, since there seem to be so many "true believers" willing to distort the material reality of AI products.
At this point, I am not convinced that it can ever "get better". These problems seem inherent and fundamental to the technology, and while they could perhaps be mitigated to an acceptable level, we shouldn't bother: traditional algorithms are far easier on compute and the environment, and far more reliable. There really isn't any advantage or benefit.
And people just sit around, unimpressed, and complain that ... what ... it isn't a superintelligence that understands everything perfectly? This is the most amazing technology I've experienced as a 50+ year old nerd who has been sitting deep in tech for basically my whole life. This is the stuff of science fiction, and while there totally are limitations, the speed at which it is progressing is insane. And people are like, "Wah, it can't write code like a senior engineer with 20 years of experience!"
Crazy.
That's how I see LLMs and the hype surrounding them.