As far as I can tell, the AGI fantasy is the usual counter-argument to this. "AI is going to make everything 10x more productive", "AI is going to invent new efficient energy sources for us", etc. And it's always "just around the corner", which makes the problems of the current generation of LLMs seem (soon to be) irrelevant.
I love this angle, and would take it further. I'm starting to think about AI in the same way that we think about food ethics.
Some people are vegan, some people eat meat. Usually, these two parties get on best when they can at least understand each other's perspectives and demonstrate an understanding of the kinds of concerns the other might have.
When talking to people about AI, I feel much more comfortable when people acknowledge the concerns, even if they're still using AI in their day-to-day.
> To me, overusing AI, destroying ecosystems, covering up fuck-ups, and hating minorities are all “bad” for the same reason, which I can mostly sum up as a belief that traumatizing others is “bad”. You cannot prove that AI overuse is “bad” to a person who doesn’t think in this framework, like a nihilist that treats others’ lives like a nuisance.
But you haven't defined the framework. You know a bunch of people for whom AI is bad for a bunch of handwavey reasons, not from any underlying philosophical axioms. You are doing what you are accusing others of. In my ethical framework the other stated things can be shown to be bad; it is not as clear for AI.
If you want to take a principled approach you need to define _why_ AI is bad. There have been cultures and religions across time that have done this for other emerging technologies: the Luddites, the Amish, etc. They have good ethical arguments for this, and it's possible they are right.
It's not hard to formulate why "AI" is bad, at least in its current form. It destroys the education system, it's dangerous for the environment, things like deepfakes drive us further towards post-truth, it decreases product quality, it's replacing artists and similar professions rather than technical ones without creating new jobs in the same area, it increases inequality, and so on.
Of course, none of these are caused by the technology itself, but rather by people who drive this cultural shift. The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.
> The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.
OK - so your framework is "harm minimization". This is kind of a negative utilitarian philosophy. Not everyone thinks this way and you cannot really expect them to either. But an argument _for_ AI from a negative utilitarian PoV is also easy to construct. What if AI accelerates the discovery of anti-cancer treatments or revolutionizes green tech? What if AI can act as a smart resource allocator and enable small hi-tech sustainable communes? These are not things you can easily prove AI won't enable, even within your framework.
" and hating minorities are all “bad” for the same reason, "
What does this have to do with AI?
Also, why hedge everything you're about to say with a big disclaimer?:
> Disclaimer: I believe that what I’m saying in this post is true to a certain degree, but this sort of logic is often a slippery slope and can miss important details. Take it with a grain of salt, more like a thought-provoking read than a universal claim. Also, it’d be cool if I wasn’t harassed for saying this.
The author's general concern about externalization of downsides.
> Also, why hedge everything you're about to say with a big disclaimer?
Because people are extremely rude on the internet. It won't make much of a difference to the actual nitpicking, as I'm sure we'll see; it's more of a sad recognition of the problem.
> Also, why hedge everything you're about to say with a big disclaimer?:
Because her previous (Telegram-only) post on a similar topic has attracted a lot of unfounded negative comments that were largely vague and toxic, rather than engaging with her specific points directly and rationally.
She even mentions it later in this post (the part about “worse is better”). Have you not read that? Ironically, you're acting exactly like those people who complain without having read the post.
> What does this have to do with AI?
It's literally explained in the same sentence, right after the part that you quote. Why don't you engage more specifically with that explanation? What's unclear about it?
One point which I consider worth making is that LLMs have helped a lot of people solve real-world problems, even if the solutions are sometimes low quality. The reality is that in many cases the only choice is between a low-quality solution and no solution at all. Lots of problems are too small or niche to be able to afford hiring a team of skilled programmers for a high-quality solution.
- "a low-quality solution, but you also spend extra time (sometimes - other people's time) on learning to solve the problem yourself"
- "a high-quality solution, but you've spent years on becoming an expert is this domain"
It's good that you brought this up.
Often, learning to solve a class of problems is simply not a priority. Low-quality vibe-coded tools are usually a means to an end. And the end goals that they achieve are often not even the most important end goals that you have. Digging so deep into the details of those is not worth it. Those are temporary, ad-hoc things.
In the original post, the author references our previous discussion about "Worse Is Better". It's a very relevant topic! Over there, I actually made a very similar point about priorities. "Worse" software is "better" when it's just a component of a system where the other components are more important. You want to spend as much time as possible on those other components, and not on the current component.
A (translated) example that I gave in that thread:
> In the 1970s, K&R were doing OS research. Not PL research. When they needed to port their OS, they hacked a portable low-level language for that task. They didn't go deep into "proper" PL research that would take years. They ported their OS, and then returned straight to OS research and achieved breakthroughs in that area. As intended.
> It's very much possible that writing a general, secure-by-design instrument would take way more time than adding concrete hacks on the application level and producing a result that's just as good (secure or whatever) when you look at the end application.
To be fair, the post says that "_overusing_ AI [is] bad" and "problems [are] caused by _widespread_ AI use" (emphasis mine).
I believe they aren't against all AI use, and aren't against the use that you describe. They are against knowingly cutting corners and pushing the cost onto the users (when you have an option not to). Or onto anything else, be it the environment or the job market.
Our mental models are inadequate to think about these new tools. We bend and stretch our familiar patterns and try to slap them on these new paradigms to feel a little more secure. It’ll settle down once we spend more time with the tech.
The discourse is crawling with overgeneralization and tautologies because of this. This won't do us any good.
We need to take a deep breath, observe, theorize, experiment, share our findings, and repeat the cycle.
It's easy to argue that this selection is happening in business and politics. But I don't see how it could be relevant to reproduction, on an evolutionary scale.
The hypothesis, paraphrasing, is approximately this: human society rewards sociopathic humans, who filter to the top of politics, corporate control, positions of influence, etc. Watts goes even further and throws in a hypothesis that even consciousness may be an artifact which humans will evolve out of. Next are my own thoughts: local short-term selection is not enough to exert evolutionary pressure, and in general evolution doesn't work like this. But I suspect that a lot of empathy traits and adjacent characteristics are not genetic but a product of education, with some exceptions. So if the whole of society (not only CEOs and presidents) starts rewarding sociopathic behavior, parents may educate their kids accordingly and the loop becomes self-reinforcing. Some small examples we can see today: union busting wherever unions exist; cheating culture, where cheating is normal and encouraged; extreme competitiveness turning into deathmatches (à la the South Korean university insanity, where everyone studies much longer hours than needed, constantly escalating the situation).
I haven't read Blindsight, though.

Oh, well. I guess I'll have to translate my original Telegram comment.
---
I agree with the connections that you make in this post. I like it.
But I disagree that purely-technical discussions around LLMs are "meaningless" and "miss the point". I think appealing to reason through "it will make your own work more pleasant and productive" (for example, if you don't try to vibecode an app that you'll need to maintain later) is an activity that has a global positive effect too.
Why? Because the industry has plenty of cargo cults that don't benefit you, not even at someone else's expense! This pisses me off the most. Irrationality. Selfishness is at least something that I can understand.
I'll throw in the idea that cultivating rationality helps cultivate a compassionate society. No matter how you look at it, most people have compassion in them. You don't even need to "activate" it. But I feel like, due to their misunderstanding of a situation, or due to logical fallacies, people's compassion often manifests as actions that only make everything worse. The problem isn't that people don't try to help others. A lot of people try, but do it wrong :(
A simple example: most of the politically-active people with a position that's opposite to yours. (The "yours" in this example is relative and applicable to anyone; I don't mean the author specifically.)
In general, you should fight the temptation to perceive people around you as (even temporarily) ill-intentioned egoists. Most of the time, that's not the case. "Giving the benefit of the doubt" is a wonderful rule of thumb. Assume ignorance and circumstances, rather than selfishness. And try to give people tools and opportunities, instead of trying to influence their moral framework.
I'll also throw in another idea. If a problem has an (ethical) selfish solution, we should choose that. Why? Because it doesn't require any sacrifices, which drastically lowers the friction. Sacrifices are a last resort. Sacrifices don't scale well. Try to think more objectively about whether that's the most efficient solution to the injustice that bothers you. Sacrifices let you put yourself on a moral pedestal, but they don't always lead to the most humane outcomes. It's not a zero-sum game.
1. Who's gonna pay back the investors their trillions of dollars and with what?
2. Didn't we have to start thinking about reducing energy consumption like at least a decade ago?