> Distressingly, the people most passionate about AI often express a not-so-subtle disdain for humanity.
I have noticed something similar: those who are ultra-passionate about AI are often Extremely Online, and it seems like their values tilt too far away from humanity for my taste. The use of AI is treated almost as an end in and of itself, which perpetuates a maximalist AI vision. This is probably also why they give off that weird vibe of having outsourced their personalities.
Regardless of whether this is true, it is still nothing more than an ad hominem.
I would argue that the most passionate AI optimism and pessimism both stem from a conviction that it is the inevitable next step in evolution. Given the potency that implies, it is hard not to take an extreme position on it.
The positions in between seem to be of the form "everything will stay largely the same, but with a bit more automation", which strikes me as naive rather than level-headed, imho.
I can't read the tone of this post, but "AI", as a marketing term for the last ~45 years, has had little to do with rigor. These are workers producing profit, not scientists. Ethics has nothing to do with it. Scientists deal in empiricism, not sales.
> But there is a feedback loop: If you change the incentive structures, people’s behaviors will certainly change, but subsequently so, too, will those incentive structures.
This is a good point, and somewhat subtle too. Something that worries me is the acceleration of the feedback loop. The Internet, social media, smartphones, and now generative AI all changed how information is generated, consumed, and distributed, and changing that reshapes the incentive structures and behaviors of the people interacting with that information.
But information spreads ever faster, in greater volume and with more noise, so the incentive landscape keeps shifting continuously to keep up, without giving people time to adapt and develop immunity against the viral, parasitic memes that each landscape births.
And so the (meta)game keeps changing under our feet, accelerating towards chaos or, more worryingly, towards meta-stable ideologies that can survive the continuous bombardment of an adversarial memetic environment. I say worryingly because those ideologies must, almost by definition, be totalizing and highly hostile to anything outside them.
So yeah, interesting times.
The problem is quite the opposite: a large part of the incentive structure is effectively static. Our biological makeup hardly changes, so we're still drawn to all kinds of primitive things. Without strong cultural overrides we are sitting ducks, ready to be exploited by click and engagement bait.
With an analogy: Connecting an average human to social media is like connecting a Windows 95 machine to the internet.
Tech companies, at least those that weren't founded on AI, have a significant number of people internally who hold exactly the opinions in this blog post.
Outside of the tech ecosystem, most people I encounter either don't care about AI or are vaguely positive (and use it to write emails, etc.). There are exceptions, writers and artists for example, but they're in the minority. To be clear, I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I realise it may not seem like it, but most big tech companies are not designing for high-earning Valley software engineers, because they are not a big market. They're designing for the world, and the world hasn't made its mind up about AI yet.
Counterpoint: Dark patterns are also pervasive in more niche applications, including B2Bs targeting tech startups. Some of it is culturally endemic.
Obsession with one-size-fits-all, metrics-driven development, and UX exclusively aiming for the lowest common denominator are also part of this problematic incentive structure you allude to.
> I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I don't think the "we" here was intended to include the general population.
Could the current use of "AI" also be considered a dark pattern?
In the way that dark patterns get you to use or pay for a product/service you might not want, because you are too confused or frustrated to stop, or the cost/time tradeoff of figuring out how to stop isn't worth it.
In the case of "AI" in products/services, this would be the way that using such an assistant atrophies your skills and knowledge until you become dependent on the product/service.
I've tolerated AI autocomplete in VSCode, but I am a bee's dick away from turning it off, because it so often generates a huge chunk of code that is ALMOST correct, and determining where it is wrong is as much a chore as writing it myself would have been. It's like having a junior-coder sidekick who doesn't take any feedback. Not great.
> What I found most interesting is that the multiple choice options included a lot of “I found the Terminator movies scary”, “I read too much Ray Kurzweil”, and/or “I am or was a SF Bay Area rationalist” undertones, but actual ethical objections were strangely absent.
After reading Sarah Wynn-Williams' book and seeing the current state of democracy in the USA and some European countries (apart from the fact that democracies are too slow to regulate anyway), I see little hope for the future.
Try reading some Sarah Kendzior [0] if you want to lose whatever shreds of hope you had left (or if you'd like to base any droplets of hope on a more accurate worldview).
Wynn-Williams, and the author of this post, both severely underestimate how dark the tech bros' vision for the future actually is and how far along they are. Envision Snow Crash, but without the humor.
0 - https://sarahkendzior.substack.com/p/ten-articles-explaining...
When I see the "German spy agency labels AfD as ‘confirmed rightwing extremist’ force" headline, which will now lead to the removal of AfD members and sympathizers from the civil service, I at least have a little hope for Germany and, by extension, Europe. A little hope that the end of the capitalist era will not end in fascism, and that another way is now open for a discussion involving not only elites (aka tech bros and old white males).
> I do not actually believe “The Singularity” is a realistic threat due to every system that exhibits exponential growth encountering carrying capacities, which converts it into an S-curve.
Depending on the parameters of the curve, an S-curve may be effectively the same as an exponential curve. For instance, if the IQ of AIs plateaus at 500 rather than increasing exponentially toward infinity, we may not be around to see the plateau.
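To make the point concrete, here is a minimal sketch (the growth rate, carrying capacity, and starting value are made-up illustrative numbers, not anything claimed here): early on, a logistic S-curve is numerically almost indistinguishable from pure exponential growth, and the two only separate as the plateau approaches.

```python
import math

r = 1.0    # illustrative growth rate (assumption)
K = 500.0  # illustrative carrying capacity, the "IQ plateau" (assumption)
x0 = 1.0   # illustrative starting value (assumption)

def exponential(t: float) -> float:
    # Unbounded exponential growth: x(t) = x0 * e^(r t)
    return x0 * math.exp(r * t)

def logistic(t: float) -> float:
    # Logistic growth with x(0) = x0 and carrying capacity K:
    # x(t) = K / (1 + (K/x0 - 1) * e^(-r t))
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in range(7):
    e, s = exponential(t), logistic(t)
    print(f"t={t}: exponential={e:8.2f}  logistic={s:8.2f}  ratio={s / e:.3f}")

# For small t the two curves agree to within a few percent; the gap only
# widens as the logistic value climbs toward the plateau K.
```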
> For instance, if the IQ of AIs plateaus at 500 rather than increasing exponentially toward infinity, we may not be around to see the plateau.
If the premise for this is, "because we might not survive each other," rather than the AI being specifically an extinction event for humanity, then I think we agree.
It's not even clear "superintelligence" is a meaningful concept. It could be that the broad conflicts in our society are largely intractable and arbitrary, in which case "superintelligence" can do little but complain about the contradictions it's passed (something I suspect many on this forum can understand).
Perhaps the calculator is as close as we'll ever get to "superintelligence".
My only concern about AI is that it will get stuck again for another few decades and I'll become elderly before I see what's next.
Twenty years ago my dream was to see a nimble robot running up a mountain path, live. I hope that moment is not another 20 years away. The future comes so horribly slowly.
Yes, there was something truly special in the movie Mother, when that heavy-ass robot was thumping down the hallway. Make sure to watch the Boston Dynamics outtakes where they break an ankle afterward, to alleviate some existential tension.
I love the movie "I Am Mother". Terribly chilling depiction of "the end justifies the means", "might makes right", and possibly "the road to hell is paved with good intentions" from the POV of the subdued party (which is humanity in this case). It's also great because it makes you constantly re-evaluate the characters' intentions and the truthfulness of what they are saying.
I looked it up, having not seen the movie.