If you can sell guns and porn in kindergarten, well yes, you live in a very, very “liberal” society - one that is a dystopian hellhole, that is. Unless there is something very wrong with you, you do not want to live in such a society either. Therefore we have laws, regulations, social norms and taxation to limit unwanted behaviour as well as to protect those in the most precarious positions. We all know, for instance, how mental illness affects the likelihood of addiction, or how such a simple thing as _pain_ made legions of people across the USA opioid addicts.
So no, it is not just a few junkies who fail to realise.
Possible values for A = heroin, alcohol, tobacco, weed, porn, TV… B = addictive, causes cancer, has an effect on brain health, spreads HIV… C = using, consuming, eating, injecting…
This “people realizing” does not seem to work with other highly addictive chemicals or electronic media, since healing oneself from addiction requires far more than just “realizing” it is bad for you and for society. Perhaps there is a reason why we limit by law the sale of tobacco, drugs, alcohol and other highly addictive substances.
Before mechanisation, something like 50x more people worked in the agricultural sector compared to today. So tractors certainly left a huge number of people without work. Our society adapted to this change and sucked these people into the industrial sector.
If an LLM worked like a tractor, it would force 49 out of 50 programmers (or, more generically, blue-collar workers) to leave their industry. Is there a place for them to work instead? I don't know.
But none of this changed how food grows, or the fact that you need somebody who bloody well knows what they are doing to produce it. Especially with how mechanised it is today.
However, I do not believe an LLM to be a tractor. More like a slightly different hammer. You still need to hit the nail.
I guess in the end it's my take on a system that I want to use for regular meeting notes with others and that I've never been able to find in any other tool, so I built it.
Good for you, Australia. I hope the EU follows suit soon.
For example, Inkscape has this and it is easy to use.
Point is, even basic visual design is far from intuitive.
The writer could be very accomplished when it comes to developing - I don’t know - but they clearly don’t understand a single thing about visual arts or culture. I probably could center those text boxes after fiddling with them for maybe ten seconds - I have studied art since I was a kid. My bf could do it instantly, without thinking for a second; he is a graphic designer. You might think that you are able to see what “looks good” since, hey, you have eyes, but no, you can’t. There are a million details you will miss, or maybe you will feel something is off but cannot quite say why. This is why you have graphic designers, who are trained to do it. They can also use generative tools to make something genuinely stunning, unlike most of us. Why? Skills.
This is the same difference as why the guy in the story who can’t code can’t code even with an LLM, whereas the guy who can is able to code even faster with these new tools. If you use LLMs basically for auto-completion (which is what transformer models really are for), you can work with a familiar codebase very quickly, I’m sure. I’ve used them to generate SQL statements, which I can’t be bothered to type myself, and the result was perfect. If I try to generate something I don’t really understand or know how to do, I’m lost, staring at some horrible gobbledygook that is never going to work. Why? Skills.
There is no verification engineering. There are just people who know how to do things, who have studied their whole lives to get those skills. And no, you will not replace a real hardcore professional with an LLM. LLMs are just tools, nothing else. A tractor replaced the horse in turning the field, but you still need a farmer to drive it.
If you read an English public school essay by a pupil who has not done the reading, the effect is very similar: a lot of complex sentences peppered with non-Celtic words, but utterly without meaning. In simple terms, the writer does not know what the hell they are talking about, although they know how to superficially string words together into a structured and coherent text. Even professional writers do this when they have a deadline and not a single original idea of what to write about.
But we do not write just to fart language onto paper or a screen; we write to convey a meaning, a message. To communicate. One can of course find meaning in tea leaves and whatnot, but truly it is a communal experience to write with an intention and to desperately try to pass one’s ideas and emotions forward to one’s fellow human.
This is what is lacking in the millions of GPT-generated LinkedIn posts, because in the end they are just structure without content, empty shells. Sometimes, of course, one can get something genuinely good by accident, but it is fairly rare. Usually it is just a flexing of syntax, in a way both tepid and without heart. And it is unlikely that LLMs can overcome this hurdle, since people writing without intent cannot either. They are just statistical models guessing words, after all.