I can't think of anything off the top of my head that isn't just it doing the things that make it generative AI. (It's better at generating an image I describe to it, etc., but that's not something any other technology does.)
And it’s not just them. To me this trend screams “valuations are too high”, and maybe hints at “progress might start to stagnate soon”.
Why all the defensiveness? Whatever genetic aspects of our personalities and behaviours there are, there's still a pretty big component of just learning patterns. Language acquisition is like that: it's an innate capacity, but the languages we're exposed to as kids shape what patterns of language use we fall into.
LLMs are far from perfect, but used well they can be a very useful tool that adds significant value in spite of their flaws. Large numbers of people and businesses are extracting huge value from the use of LLMs every single day. Some people are building what will become wildly successful businesses around LLM technology.
Yet in the face of this we still see a population of naysayers who appear intent on rubbishing LLMs at any cost. To me that seems like pretty bad-faith dialogue.
I’m aware that a lot of the positive rhetoric, particularly early on after the first public release of ChatGPT, was overstated - sometimes heavily so - but taking one set of shitty arguments and rhetoric and responding to it with the polar opposite, but equally shitty, arguments and rhetoric for the most part only serves to double the quantity of shitty arguments and rhetoric (and, adding insult to injury, often does so in the name of “balance”).
I can understand the incentive for researchers to make provocative claims about the abilities or limitations of LLMs at a moment in time when there's a lot of attention, money and froth circling a new technology.
I'm a little more stumped on the incentive for people (especially in tech?) to have strong negative opinions about the capabilities of LLMs. It's as if folks feel the need to hold some imaginary line around the sanctity of "true reasoning".
I'd love to see someone rigorously test human intelligence with the same kinds of approaches. You'd end up finding that humans in fact suck at reasoning, hallucinate frequently and show all kinds of erratic behaviour in our processing of information. Yet somehow, we find other humans incredibly useful in our day-to-day lives.
What a narrow worldview.
The people creating new generative AI models are inventing new words. I think their topic of research and the new words they are creating have high utility.
The authors of this paper, on the other hand, don't appear to me to be applying discipline and rigour to solving hard problems. They are, however, trying to associate the words they have created in a discipline with little objective utility with the words of a discipline that has high utility.
This strikes me as annoying and absurd. Why try to make the crossover unless you are trying to catch some shine off a discipline that is getting a lot of well-justified attention?
I'm still waiting for Ilya to publish his first paper on gender studies...