I think a more accurate and more useful framing is:
Game theory is inevitable.
Because game theory is just math, the study of how independent actors react to incentives.
The specific examples called out here may or may not be inevitable. It's true that the future is unknowable, but it's also true that the future is made up of 8B+ independent actors and that they're going to react to incentives. It's also true that you, personally, are just one of those 8B+ people and your influence on the remaining 7.999999999B people, most of whom don't know you exist, is fairly limited.
If you think carefully about those incentives, you actually do have a number of significant leverage points with which to change the future. Many of those incentives are crafted out of information and trust, people's beliefs about what their own lives are going to look like in the future if they take certain actions, and if you can shape those beliefs and that information flow, you alter the incentives. But you need to think very carefully, on the level of individual humans and how they'll respond to changes, to get the outcomes you want.
The statement "Game theory is inevitable. Because game theory is just math, the study of how independent actors react to incentives." implies that the "actors" are humans. But that's not what game theory assumes.
Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
Yes, game theory is not a predictive model but an explanatory/general one. Additionally, not everything is a game, just as in statistics not everything has a probability distribution. Both can be applied speculatively to great effect, but they are ultimately abstract models.
1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph
2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc.
3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority
TLDR, think in terms of the algebra of incentives, not in terms of squeaky wheelism and moral exhortation
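To make that algebra concrete, here's a toy critical-mass model in Python, in the spirit of Schelling's tipping games. All the numbers are invented for illustration; the only point is the shape of the incentives:

    # Toy sketch: payoffs are invented, not measured from any real platform.
    def payoff(switch: bool, fraction_switched: float) -> float:
        # Staying on the incumbent ad-funded platform is worth a constant 1.0.
        # Switching is worth the value of the new network, which grows with
        # adoption (network effects), minus a fixed switching cost.
        if not switch:
            return 1.0
        network_value = 2.0 * fraction_switched
        switching_cost = 0.5
        return network_value - switching_cost

    # Heroic non-participation: one person switches alone and loses.
    print(payoff(True, 0.0))   # -0.5, versus 1.0 for staying

    # A coalition rewrites the payoffs: past ~75% committed adoption
    # (via a law, an open standard, a default), switching dominates.
    for f in (0.25, 0.5, 0.75, 1.0):
        print(f, payoff(True, f))  # break-even at 0.75, 1.5 at full adoption

Below the threshold the individually rational move is to stay put; above it, the better equilibrium is self-enforcing. Coalition-level levers (laws, standards, interoperability) are exactly the tools that move that threshold or push adoption past it.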
Game theory is still inevitable. Its application to humans may be non-obvious.
In particular, the "games" can operate on the level of non-human actors like genes, or memes, or dollars. Several fields generate much more accurate conclusions when you detach yourself from an anthropocentric viewpoint, e.g. evolutionary biology was revolutionized by the idea of genes as selfish actors rather than humans trying to pass along their genes; in particular, it explains such concepts as death, sexual selection, and viruses. Capitalism and bureaucracy both make a lot more sense when you give up the idea of them existing for human betterment and instead take the perspective of them existing simply for the purpose of existing (i.e. those organizations that survive are, well, those organizations that survive; there is nothing morally good or bad about them, but the filter that they passed is simply that they did not go bankrupt or get disbanded).
But underneath those, game theory is still fundamental. You can use it to analyze the incentives and selection pressures on the system, whether they are at sub-human (eg. viral, genetic, molecular), human, or super-human (memetic, capitalist, organizational, bureaucratic, or civilizational) scales.
Perhaps you don't intend this, but I intuit that you're implying that game theory's inevitability leads to the inevitability of many things the author claims aren't inevitable.
To me, this inevitability only is guaranteed if we assume a framing of non-cooperative game theory with idealized self-interested actors. I think cooperative game theory[1] better models the dynamics of the real world. More important than thinking on the level of individual humans is thinking about the coalitions that have a common interest to resist abusive technology.
>I think cooperative game theory[1] better models the dynamics of the real world.
If cooperative coalitions to resist undesirable abusive technology models the real world better, why is the world getting more ads? (E.g. One of the author's bullet points was, "Ads are not inevitable.")
Currently in the real world...
- Ad frequency goes up: more ad interruptions in TV shows, native ads embedded in podcasts, sponsor segments in Youtube vids, etc
- Ad space goes up: ads on refrigerator screens, gas pump touch screens, car infotainment systems, smart TVs, Google Search results, ChatGPT UI, computer-generated virtual ads in sports broadcasts overlaid on courts and stadiums, etc
What is the cooperative coalition that makes "ads not inevitable"?
Both cooperative and non-cooperative games are relevant. Actually, I think that one of the most intriguing parts of game theory is understanding under what conditions a non-cooperative game becomes a cooperative one [1] [2].
The really simple finding is that when you have both repetition and reputation, cooperation arises naturally. Because now you've changed the payoff matrix; instead of playing a single game with the possibility of defection without consequences, defection now cuts you off from payoffs in the future. All you need is repeated interaction and the ability to remember when you've been screwed, or learn when your counterparty has screwed others.
This has been super relevant for career management, eg. you do much better in orgs where the management chain has been intact for years, because they have both the ability and the incentive to keep people loyal to them and ensure they cooperate with each other.
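A minimal sketch of that finding, using standard prisoner's dilemma payoffs and tit-for-tat as the simplest possible stand-in for reputation (strategies and round counts are illustrative):

    # Standard prisoner's dilemma payoffs; strategies are illustrative.
    PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror the opponent's last move:
        # the simplest form of "remember when you've been screwed".
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def score(a, b, rounds):
        ha, hb, total = [], [], 0
        for _ in range(rounds):
            ma, mb = a(hb), b(ha)  # each strategy sees the other's past moves
            total += PAYOFFS[(ma, mb)]
            ha.append(ma); hb.append(mb)
        return total  # a's score

    # One-shot game: defection wins, 5 vs 3.
    print(score(always_defect, tit_for_tat, 1), score(tit_for_tat, tit_for_tat, 1))
    # Repeated game: defection pays once, then forfeits the future, 14 vs 30.
    print(score(always_defect, tit_for_tat, 10), score(tit_for_tat, tit_for_tat, 10))

Same game, same players; only the shadow of the future changed.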
I'll just take the very first example on the list, Internet-enabled beds.
Absolutely a cooperative game - nobody was forced to build them, nobody was forced to finance them, nobody was forced to buy them. These were all willing choices, all going in the same direction. (Same goes for many of the other examples)
Game theory is not inevitable, neither is math. Both are attempts to understand the world around us and predict what is likely to happen next given a certain context.
Weather predictions are just math, for example, and they are always wrong to some degree.
Because the models aren't sophisticated enough (yet). There's no voodoo here.
I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
Math is almost the definition of inevitability. Logic doubly so.
Once there's a sophisticated enough human model to decipher our myriad of idiosyncrasies, we will all be relentlessly manipulated, because it is human nature to manipulate others. That future is absolutely inevitable.
Might as well fall into the abyss with open arms and a smile.
I think it's hubris to believe that you can formulate the correct game-theoretic model to make significant statements about what is and is not inevitable.
Game theory is only as good as the model you are using.
Now couple the fact that most people are terrible at modeling with the fact that they tend to ignore implicit constraints… the result is something less resembling science and more resembling religion.
The concept of Game Theory is inevitable because it's studying an existing phenomenon. Whether or not the researchers of Game Theory correctly model that is irrelevant to whether the phenomenon exists or not.
The models such as Prisoner's Dilemma are not inevitable though. Just because you have two people doesn't mean they're in a dilemma.
---
To rephrase this, Technology is inevitable. A specific instance of it (ex. Generative AI) is not.
In a world ruled by game theory alone, marketing is pointless. Everyone already makes the most rational choice and has all the information, so why appeal to their emotions, build brand awareness or even tell them about your products? Yet companies spend a lot of money on marketing, and game theory tells us that they wouldn't do that without reason.
Game theory makes a lot of simplifying assumptions. In the real world most decisions are made under constraints, and you typically lack a lot of information and can't dedicate enough resources to each question to find the optimal choice given the information you have. Game theory is incredibly useful, especially when talking about big, carefully thought out decisions, but it's far from a perfect description of reality
> Game theory makes a lot of simplifying assumptions.
It does because it's trying to get across the point that although the world seems impossibly complex it's not. Of course it is in fact _almost_ impossibly complex.
This doesn't mean that it's redundant for more complex situations; it only means that to increase its accuracy you have to deepen the model.
This argument has the unspoken premise that in large part, people's core identity is reacting to external influences. I believe that while responding to influences is part of human existence, the richness of the individual transcends such an explanation for all their actions. The phrase "game theory is inevitable" reads like the perspective of an aristocrat looking down on the masses - enough vision to see the interplay of things, and enough arrogance to assume they can control it.
Yes it's one thing to say that game theory is inevitable, but defection is not inevitable. In fact, if you consider all levels of the organization of life, from multicellularity to large organisms, to families, corporations, towns, nations, etc, it all exists because entities figured out how to cooperate and prevent defection.
If you want to fix these things, you need to come up with a way to change the nature of the game.
Game theory assumes that all the players agree on the pay-offs. However this is often not the case in real world situations. Robert McNamara (the ex-US secretary of defence) said that he realized after the Vietnam war that the US and the Vietnamese saw the war completely differently, even years after the war had ended (see the excellent documentary 'Fog of War').
Game theory is a model that's sometimes accurate. Game theorists often forget that humans are bags of thinking meat, and that our thinking is accomplished by goopy electrochemical processes.
Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.
Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.
People can and do act against the own self interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.
I do partly disagree, because game theory is based on an economic (and, as mentioned, reductionist) view of a human, namely homo oeconomicus, which rests on the bold assumption, asserted by a few men in history, that we all act with pure egoism and zero altruism. That assumption is nowadays highly critiqued and can be challenged.
It is beyond question that game theory is highly useful and simplifies interactions between agents to the extent that we can model them mathematically, but only under those underlying assumptions. And those assumptions need not be true; as a matter of fact, there are studies on how models like homo oeconomicus have produced a self-fulfilling reality, by making people think in the ways given by the model and adjust to the model, rather than the model approximating us, as it ideally should. Hence, I don't think you can plainly limit or frame this reality as a product of game theory.
The greatest challenge facing humanity is building a culture where we are liberated to cooperate toward the greatest goals without fear of another selfish individual or group taking advantage to our detriment.
Yes, the mathematicians will tell you it's "inevitable" that people will cheat and "enshittify". But if you take statistical samplings of the universe from an outsider's perspective, you would think it would be impossible for life to exist. Our whole existence is built on disregard for the inevitable.
Reducing humanity to a bunch of game-theory optimizing automatons will be a sure-fire way to fail The Great Filter, as nobody can possibly understand and mathematically articulate the larger games at stake that we haven't even discovered.
> Game theory is inevitable.
> Because game theory is just math, the study of how independent actors react to incentives.
That's not how mathematics works. "it's just math therefore it's a true theory of everything" is silly.
We cannot forget that mathematics is all about models, models which, by definition, do not account for even remotely close to all the information involved in predicting what will actually occur in reality. Game Theory is a theory about a particular class of mathematical structures. You cannot reduce all of existence to just this class of structures, and if you think you can, you'd better be ready to write a thesis on it.
Couple that with the inherent unpredictability of human beings, and I'm sorry but your Laplacean dreams will be crushed.
The idea that "it's math so it's inevitable" is a fallacy. Even if you are a hardcore mathematical Platonist you should still recognize that mathematics is a kind of incomplete picture of the real, not its essence.
In fact, the various incompleteness theorems illustrate directly, in mathematics' own terms, that the idea that a mathematical perspective or any logical system could perfectly account for all of reality is doomed from the start.
One person has more impact than you think. Many times it's one person speaking what's on the mind of many, and that speaking out can bring the courage to do what needs to be done to the many people that are sitting on the fence. The Andor TV series really taught me that. I'm working on a presentation on surveillance capitalism that I plan to show to my community. It's going to be an interesting future. Some will side with the Empire and others will side with the Rebellion.
You realize surveillance capitalism is what caused the Andor TV show (and more broadly the entire Star Wars franchise) to exist at all, right? Gigantic corporate entities have made a lot of money from monetizing the Star Wars franchise.
I'll say frankly that I personally object to Star Wars on an aesthetic level - it is ultimately an artistically-flawed media franchise even if it has some genuinely compelling ideas sometimes. But what really bothers me is that Star Wars in its capacity as a modern culturally-important story cycle is also intellectual property owned by the Disney corporation.
The idea that the problems of the world map neatly to a conflict between an evil empire and a plucky rebellion is also basically propagandistic (and also boring). It's a popular storytelling frame - that's why George Lucas wrote the original Star Wars movies that way. But I really don't like seeing someone watch a TV series using the Star Wars intellectual property package and then use the story the writers chose to write - writers ultimately funded by Disney - as a basis for how they see themselves in the world politically.
> if you can shape those beliefs and that information flow, you alter the incentives
Selective information dissemination, persuasion, and even disinformation are for sure the easiest ways to change the behaviors of actors in the system. However, the most effective and durable way to "spread those lies" is for them to be true!
If you can build a technology which makes the real facts about those incentives different than what it was before, then that information will eventually spread itself.
For me, the canonical example is the story of the electric car:
All kinds of persuasive messaging, emotional appeals, moral arguments, and so on have been employed to convince people that it's better for the environment if they drive an electric car than a polluting, noisy, smelly, internal-combustion gas-guzzling SUV. Through the 90s and early 2000s, this saw a small number of early adopters and environmentalists adopting niche products and hybrids for the reasons that were persuasive to them, while another slice of society deleted their catalytic converters and "rolled coal" in their diesels for their own reasons, and the average consumer kept driving an ICE vehicle somewhere in the middle of the status quo.
Then lithium battery technology and solid-state inverter technology arrived in the 2010s and the Tesla Model S was just a better car - cheaper to drive, more torque, more responsive, quieter, simpler, lower maintenance - than anything the internal combustion engine legacy manufacturers could build. For the subset of people who can charge in their garage at home with cheap electricity, the shape of the game had changed, and it's been just a matter of time (admittedly a slow process, with a lot of resistance from various interests) before EVs were simply the better option.
Similarly, with modern semiconductor technology, solar and wind energy no longer require desperate pleas from the limited political capital of environmental efforts, it's like hydro - they're just superior to fossil fuel power plants in a lot of regions now. There are other negative changes caused by technology, too, aided by the fact that capitalist corporations will seek out profitable (not necessarily morally desirable) projects - in particular, LLMs are reshaping the world just because the technology exists.
Once you pull a new set of rules and incentives out of Pandora's box, game theory results in inevitable societal change.
"in formal experiments the only people who behaved exactly according to the mathematical models created by game theory are economists themselves, and psychopaths" [1]
Game theory applied to the world is a useful simplification; reality is messy. In reality:
* Actors have access to limited computation
* The "rules" of the universe are unknowable and changing
* Available sets of actions are unknowable
* Information is unknowable, continuous, incomplete, and changes based on the frame of reference
* Even the concept of an "Actor" is a leaky abstraction
There's a field of study called Agent-based Computational Economics which explores how systems of actors behaving according to sets of assumptions behave. In this field you can see a lot of behaviour that more closely resembles real world phenomena, but of course if those models are highly predictive they have a tendency to be kept secret and monetized.
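For a flavor of what those models look like, here's a minimal agent-based sketch in the spirit of the El Farol bar problem (a classic example from this field; all parameters here are invented). Each agent has bounded memory and a crude prediction rule, and attends a venue only if it expects attendance under capacity:

    import random

    N, CAPACITY, MEMORY, ROUNDS = 100, 60, 5, 20
    history = [random.randint(0, N) for _ in range(MEMORY)]  # seed attendance data

    for t in range(ROUNDS):
        attendance = 0
        for agent in range(N):
            # Bounded rationality: average a random subset of recent history,
            # plus idiosyncratic noise, instead of solving the game.
            sample = random.sample(history[-MEMORY:], 3)
            prediction = sum(sample) / 3 + random.gauss(0, 5)
            if prediction < CAPACITY:
                attendance += 1
        history.append(attendance)

    # Attendance overshoots and undershoots capacity rather than settling
    # into the clean equilibrium a textbook analysis would predict.
    print(history[MEMORY:])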
So for practical purposes, "game theory is inevitable" is only a narrowly useful heuristic. It's certainly not a heuristic that supports technological determinism.
I mean, in an ideal system we would have political agency greater than the sum of individuals, which would pressure/curtail the rise of abusive actors taking advantage of power and informational asymmetry to gain more power (wealth) and influence (wealth) in order to gain more wealth.
It feels like the only aspect of game theory at work here is opportunity cost. For example, why shouldn't you make AI porn generation software? There are moral reasons against it, but usually most put them aside because someone else is going to get the bag first. The items on that exhaustive list the author enumerated are all in some way byproducts of the break-things-move-fast-say-sorry-later philosophy. You need ID for the websites because you did not give a shit and wanted to get the porn out there first and foremost. Now you need IDs.
You need to track everyone and everything on the internet because you did not want to cap your wealth at a reasonable price for the service. You are willing to live with accumulated sins because "it's not as bad as murder". The world we have today has way more to do with these things than anything else. We do not operate as a collective, and naturally, we don't get good outcomes for the collective.
what i'm reading here then is that those 7.999999999B others are braindead morons.
OP is 100% correct. Either you accept that the vast majority are mindless automatons (not hard to get onboard with that honestly, but still, seems an overestimate), or there's some kind of structural imbalance, an asymmetry that's actively harmful and not the passive outcome of 8B independent actors.
I do disagree that some of these were not inevitable. Let me deconstruct a couple:
> Tiktok is not inevitable.
TikTok the app and company, not inevitable. Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable. Short form video follows the gradual escalation of most-engaging content formats, with a legacy stretching from short-form text on Twitter to short-form photos on Instagram and Snapchat. Global content discovery is a natural next experiment after the extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
The author doesn't mean that the technologies weren't inevitable in the absolute sense. They mean that it was not inevitable that anyone should use those technologies. It's not inevitable that anyone will use Tiktok; I've never used Tiktok, so the author is right in that regard.
If you disavow short form video as a medium altogether, something I'm strongly considering, then you can. It does mean you have to make sacrifices; for example, Youtube doesn't let you disable their short form video feature, so it is unavoidable for people who decide they don't want to drop Youtube. That is still a choice though, so it is not truly inevitable.
The larger point is that there are always people pushing some sort of future, sketching it as inevitable. But the reality is that there always remains a choice, even if that choice means you have to make sacrifices.
The author is annoyed at people throwing in the towel and declaring AI is inevitable, when the author apparently still sees a path to not tolerating AI. Unfortunately the author doesn't really constructively show that path, so the whole article is basically a luddite complaint.
Has that ever worked at scale in history?
This strikes me as the same as people who take a stand by not ordering from Amazon or not using whichever service, they make their life somewhat harder and the world doesn't notice.
Even worse, the people taking a stand signal to others that they do it, but most others think that the cost outweighs the benefit, and don't like being judged. Groups in which everyone signals and judges like that suck and devolve into purity spiraling, so few people sustain it, and the people taking a stand get bitter.
Re Tiktok, what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
> what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
This is not a new thing. TV monetizes human attention. Tiktok is just an evolution of TV. And Tiktok comes from China, which has a very different society. If short-form algo slop video can thrive in both liberal democracies and a heavily censored society like China, then it's probably somewhat inevitable.
This appears to be overthinking it: sure it's inevitable that when zero trust systems are shown to be practicable, they will be explored. But, like a million other ideas that nobody needed to spend time on, selling NFTs should've been relegated to obscurity far earlier than what actually happened.
> Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable.
Just objectively false, and it assumes that the path humans took to allow this is the only path that could have unfolded.
Much of this tech could have been regulated early on, preventing garbage like short-form slop from existing.
So in short, none of what you are describing is "inevitable". Someone might come up with it, and others can group together and say: "We aren't doing that, that is awful".
Which is exactly what happened though?
I never engaged with most of what the author laments - one thing I found hard was to exist in society without a smartphone, but that's more down to current personal circumstances than inevitability.
My personal experience is that most people don't mind these things, for example short form content: most of my friends genuinely like that sort of content and I can to some extent also understand why. Just like heroin or smoking, it will take some generations to regulate it (and tbf we still have problems with those two even though they are arguably much worse)
You're taking the meaning of the word "inevitable" too literally.
Something might be "inevitable" in the sense that someone is going to create it at some point whether we like it or not.
Something is also not "inevitable" in the sense that we will be forced to use it or will not be able to function in society. <-- this is what the author is talking about
We do not need to tolerate being abused by the elites or use their terrible products because they say so. We can just say no.
> Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers. These things don't have any utility otherwise and no reason to exist outside of those.
Scams and fraud are such potent drivers that perhaps it was inevitable, but one could imagine a more competent regulatory regime that nipped this stuff in the bud.
nb: avoiding financial regulations and money laundering are forms of fraud
> The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers.
The idea of a cheap, universal, anonymous digital currency itself is old (e.g. eCash and Neuromancer in the '80s, Snow Crash and Cryptonomicon in the '90s).
It was inevitable that someone would try implementing it once the internet was widespread - especially as long as most banks are rent-seeking actors exploiting those relying on currency exchanges, as long as many national currencies are directly tied to failing political and economic systems, and as long as the un-banking and financial persecution of undesirables was a threat.
Doing it so extremely decentralized and with the whole proof-of-work shtick tacked on top was not inevitable and arguably not a good way to do it, nor was the cancer that has grown on top of it all...
I think you could say it's inevitable because of the size of both the good AND bad opportunities. I agree with you and the original point of the article that there COULD be a better way. We are reaping tons of bad outcomes across social media, crypto, and AI, due to poor leadership (from every side really).
Imagine new coordination technology X. We can remove any specific tech reference to remove prior biases. Say it is a neutral technology that could enable new types of positive coordination as well as negative.
3 camps exist.
A: The grifters. They see the opportunity to exploit and individually gain.
B: The haters. They see the grifters and denigrate the technology entirely. Leaving no nuance or possibility for understanding the positive potential.
C: The believers. They see the grift and the positive opportunity. They try and steer the technology towards the positive and away from the negative.
The basic formula for where the technology ends up is -2A - B + C. It's a bit of a broad-strokes brush, but you can probably guess where to bin our current political parties into these negative categories. We need leadership which can identify and understand the positive outcomes and push us towards those directions. I see very little strength anywhere, from the tech leaders to the politicians to the social media mob, to get us there. For that, we all suffer.
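Reading that broad-strokes formula literally (only the -2A - B + C weighting comes from above; the camp sizes are invented for illustration):

    # Hypothetical camp shares; the weighting is the parent comment's heuristic.
    def outcome(grifters: float, haters: float, believers: float) -> float:
        return -2 * grifters - haters + believers

    # Grift counts double, so even a believer majority can be fully
    # neutralized by modest grifter and hater camps:
    print(outcome(grifters=0.2, haters=0.2, believers=0.6))  # 0.0, a wash

Hence the emphasis on leadership that shrinks camps A and B rather than just growing C.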
> These things don't have any utility otherwise and no reason to exist outside of those.
Lol. Permissionless payments certainly have utility. Making it harder for governments to freeze/seize your assets has utility. Buying stuff the government disallows, often illegitimately, has value. Currency that can't be inflated has value.
And outside of pure utility, they have tons of ideological reasons to exist beyond scams and fraud. Your inability to imagine those, or your dismissal of them, is telling as to your close-mindedness.
It was all inevitable, by definition, as we live in a deterministic universe (shock, I know!)
But further, the human condition has been developing for tens of thousands of years, and efforts to exploit the human condition for a couple of thousand (at least), so do we expect that a technology around for a fraction of that time would escape all of the inevitable 'abuses' of it?
What we need to focus on is mitigation, not lament that people do what people do.
The point is that regulation could have made Bitcoin and NFTs never cause the harm they have inflicted and will inflict, but the political will is not there.
> Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable.
I doubt that. There is a reason the videos get longer again.
So people could have ignored the short form from the beginning. And wasn't the matching algorithm the real killer feature that amazed people, not the length of the videos?
I've got a hypothesis that the reason short-form video like TikTok became dominant is because of the decline in reading instruction (eg. usage of whole-word instruction over phonics) that started in 1998-2000. The timing largely lines up: the rise of video content started around 2013, just as these kids were entering their teenage years. Media has significant economies of scale and network effects (i.e. it is much more profitable to target the lowest common denominator than any niche group), and so if you get a large number of teenagers who have difficulty with reading, media will adjust to provide them content that they can consume effortlessly.
Anecdotally, I hear lots of people talking about the short attention span of Zoomers and Gen Alpha (which they define as 2012+; I'd actually shift the generation boundary to 2017+ for the reasons I'm about to mention). I don't see that with my kid's 2nd-grade classmates: many of them walk around with their nose in a book and will finish whole novels. They're the first class after phonics was reintroduced in the 2023-2024 kindergarten year; every single kid knew how to read by the end of kindergarten. Basic fluency in skills like reading and math matters.
Even if that's true, that sub-minute videos are not the apex content, that only goes to prove inevitability. Every idea will be tested and measured; the best-performing ones will survive. There can't be any coordination or consensus like "we shouldn't have that" - the only signal is, "is this still the most performant medium + algorithm mix?"
Shorts are everywhere because it is the most addictive form of media, easy to consume, no effort required to follow through.
More generally I think the problems we got into were inevitable. They are the result of platforms optimizing for their own interests at the expense of both creatives and users, and that is what any company would do.
All the platforms enshittified, they exploit their users first, by ranking addictive content higher, then they also influence creatives by making it clear only those who fit the Algorithm will see top rankings. This happens on Google, YT, Meta, Amazon, Play Store, App Store - it's everywhere. The ranking algorithm is "prompting" humans to make slop. Creatives also optimize for their self interest and spam the platforms.
Agree with OP. This reminds me of fast food in the 90s. Executives rationalized selling poison as "if I don't, someone else will" and they were right until they weren't.
Society develops antibodies to harmful technology but it happens generationally. We're already starting to view TikTok the way we view McDonalds.
But don't throw the baby out with the bath water. Most food innovation is net positive but fast food took it too far. Similarly, most software is net positive, but some apps take it too far.
Perhaps a good indicator of which companies history will view negatively are the ones where there's a high concentration of executives rationalizing their behavior as "it's inevitable."
Obesity rates have never been higher and the top fast food franchises have double digit billions in revenue. I don’t think there is any redemption arc in there for public health since the 90s.
those statistics really gloss over / erase the vast cultural changes that have occurred. america / the west / society's relationship to fast food and obesity is dramatically different than it was thirty years ago.
Agree and disagree. It is also possible to take a step back and look at the very large picture and see that these things actually are somewhat inevitable. We do exist in a system where "if I don't do it first, someone else will, and then they will have an advantage" is very real and very powerful. It shapes our world immensely. So, while I understand what the OP is saying, in some ways it's like looking at a river of water and complaining that the water particles are moving in a direction that the levees pushed them. The levees are actually the bigger problem.
We are the levees in your metaphor and we have agency. The problem is not that one founder does something before another and gains an advantage. The problem is the millions of people who buy or use the harmful thing they create - and that we all have control over. If we continue down this path we'll end up at free will vs determinism and I choose to believe the future is not inevitable.
This post rhymes with a great quote from Joseph Weizenbaum:
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
That reminds me of water use in California. We frequently have droughts, and the messaging is always to reduce water usage. I have friends who turn the shower off while soaping up just to save a few gallons out of civic duty. Meanwhile a few companies are using more water than every residential user combined to grow alfalfa, half of which gets shipped overseas. Like ban one company from selling livestock feed to Asia/Saudi Arabia and the drought for 40 million people is solved.
but people just throw their hands up: "looks like another drought this year! That's California!".
Perhaps we need more collective action & coordination?
I don’t see how we could politically undermine these systems, but we could all do more to contribute to open source workarounds.
We could contribute more to smart tv/e-reader/phone & tablet jailbreak ecosystems. We could contribute more to the fediverse projects. We could all contribute more to make Linux more user friendly.
I admire volunteer work, but I don't think we should focus too hard on paths forward that summarize to "the volunteers need to work harder". If we like what they're doing we should find ways to make it more likely to happen.
For instance, we could forbid taxpayer money from being spent on proprietary software and on hardware that is insufficiently respectful of its user, and we could require that 50% of the money not spent on the now forbidden software instead be spent on sponsorships of open source contributors whose work is likely to improve the quality of whatever open alternatives are relevant.
Getting Microsoft and Google out of education would be huge re: denormalizing the practice of accepting eulas and letting strangers host things you rely on without understanding how they're leveraging that position against your interests.
I do think AI involvement in programming is inevitable; but at this time a lot of the resistance is because AI programming currently is not the best tool for many jobs.
To better the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
For you it's "purely for pleasure," for me it's for money, health and fire protection. I heat my home with my wood stove to bypass about $1,500/year in propane costs, to get exercise (and pleasure) out of cutting and splitting the wood, and to reduce the fuel load around my home. If those reasons went away I'd stop.
That's a good metaphor for the rapid growth of AI. It is driven by real needs from multiple directions. For it to become evitable, it would take coercion or the removal of multiple genuine motivators. People who think we can just say no must be getting a lot less value from it than me day to day.
You may be saving money but wood smoke is very much harmful to your lungs and heart according to the American Lung and American Heart Associations + the EPA. There's a good reason why we've adopted modern heating technologies. They may have other problems but particulate pollution is not one of them.
> For people with underlying heart disease, a 2017 study in the journal Environmental Research linked increased particulate air pollution from wood smoke and other sources to inflammation and clotting, which can predict heart attacks and other heart problems.
> A 2013 study in the journal Particle and Fibre Toxicology found exposure to wood smoke causes the arteries to become stiffer, which raises the risk of dangerous cardiac events. For pregnant women, a 2019 study in Environmental Research connected wood smoke exposure to a higher risk of hypertensive disorders of pregnancy, which include preeclampsia and gestational high blood pressure.
The industrial revolution was pushed down the throats of a lot of people who were sufficiently upset by the developments that they invented communism, universal suffrage*, modern* policing, health and safety laws, trade unions, recognisably modern* state pensions, the motor car (because otherwise we'd be knee-deep in horse manure), zoning laws, passports, and industrial-scale sewage pumping.
I do wonder who the AI era's version of Marx will be, and what their version of the Communist Manifesto will say. IIRC, the previous times this has been said on HN, someone pointed out Ted Kaczynski's manifesto.
* Policing and some pensions and democracy did exist in various fashions before the industrial revolution, but few today would recognise their earlier forms as good enough to deserve those names today.
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
I think this is a variant of "every action is normative of itself". Using AI states that use of AI is normal and acceptable. In the same way that for any X doing X states that X is normal and acceptable - even if accompanied by a counterstatement that this is an exception and should not set a precedent.
Yeah, following the OP's logic, if I think this obsession with purity tests and politicizing every tool choice is more toxic than an LLM could ever be, then I should actively undermine that norm.
So I guess I'm morally obligated to use LLMs specifically to reject this framework? Works for me.
I really don't like the "everything is political" sentiment. Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
To clarify, I don't think pushing an ideology you believe in by posting a blog post is a bad thing. That's your right! I just think I have to read posts that feel like they have a very strong message with more caution. Maybe they have a strong message because they have a very good point - that's very possible! But often times, I see people using this as a way to say "if you're not with me, you're against me".
My problem here is that this idea that "everything is political" leaves no room for a middle ground. Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
All that to say, maybe I'm totally wrong, I don't know. I'm open to an argument against mine, because there's a very good chance I'm missing the point.
Your introductory paragraph comes across very much like "people who want to change the status quo are political and people who want to maintain it are not"; which is clearly nonsense. "how things are is how they should be" is as much of an ideology, just a less conspicuous one given the existing norms.
>Is my choice to write some boilerplate code using gen AI truly political?
I am much closer to agreeing with your take here, but as you recognise, there are lots of political aspects to your actions, even if they are not conscious. Not intentionally being political doesn't mean you are not making political choices; there are many more that your AI choice touches upon; privacy issues, wealth distribution, centralisation, etc etc. Of course these choices become limited by practicalities but they still exist.
I believe that one point of the author precisely is that there seems to be no room for middle ground left in the tech space:
Resisting the status quo of hostile technology is an endless uphill battle. It requires continuous effort, mostly motivated by political or at least ideological reasons.
Not fighting it is not the same as being neutral, because not fighting it supports this status quo. It is the conscious or unconscious surrender to hostile systems, whose very purpose is to lull you into apathy through convenience.
I don't think you're wrong so much as you've tread into some semantic muddy water. What did the OP mean by 'inevitable', 'political' or 'everything'? A lot hangs on the meaning. A lot of words could be written defending one interpretation over another, and the chance of changing anyone's mind on the topic seems slim.
> Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
This is also my core reservation against the idea.
I think that the belief only holds weight in a society that is rife with opposing interpretations about how it ought to be managed. The claim itself feels like an attempt to force someone toward the interests of the one issuing it.
> Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
Apparently yes it is. This is all determined by your impressions on generative AI and its environmental and economic impact. The problem is that most blog posts are signaling toward a predefined in-group either through familiarity with the author or by a preconceived belief about the subject where it’s assumed that you should already know and agree with the author about these issues. And if you don’t you’re against them.
For example: I don't agree that everything is inevitable. But as I read the blog post in question, I surmised that it's an argument against the idea that human beings are at the absolute will of technological progress. And I can agree with that much. So this influences how I interpret the claim "nothing is inevitable", in addition to the title of the post and in conjunction with the rest of the article (and this all is additionally informed by all the stuff I'm trying to express to you that surrounds this very paragraph).
I think that this speaks to the present problem of how "politics" is conflated to additionally refer to one's worldview, culture, etc., in and of itself, instead of something distinct from, but not necessarily separable from, these things.
Politics ought to point toward a more comprehensive way of seeing the world, but this isn't the case for most people today, and I suspect that many people who claim to have comprehensive convictions are only 'virtue signaling'.
A person with comprehensive convictions about the world and how humans ought to function in it can better delineate the differences and necessary overlap between politics and other concepts that run downstream from their beliefs. But what do people actually believe in these days? That they can summarize in a sentence or two and that can objectively/authoritatively delineate an “in-group” from an “out-group” and that informs all of their cultural, political, environmental and economic considerations, and so on...
Online discourse is being cleaved into two sides vying for digital capital over hot air. The worst position you can take is a critical one that satisfies neither opponent.
You should keep reading all blog posts with a critical eye toward the appeals embedded within the medium. Or don’t read them at all. Or read them less than you read material that affords you with a greater context than the emotional state that the author was in when they wrote the post before they go back to releasing software communiques.
Amen to this. There's a certain mindset that seems depressingly prevalent in tech, which I can't even call fatalism because it doesn't really even have much of a negative-emotion dimension. It's just a sort of amorality that tries to move with trends and is puzzled by even the idea that we would choose to do things because they are good or bad, because we think they will lead to better or worse futures, etc., rather than just because they'll make more money or because they're more technically satisfying, or even just because everyone else seems to be doing them.
There's a lot of bad stuff going on; even more dangerous is the idea that we can't do anything about it; but more dangerous still is the idea that there's no reason to even think in terms of what we "should" do, and that we just have to accept our current position and trajectory without question.
I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's against their self-interest, which means they subconsciously find bad arguments to support their own interest.
However, tech people who think AI is bad, or not inevitable, are really hard to understand. It's almost like Bill Gates saying "we are not interested in the internet". This is pretty much being against the internet, industrialization, the printing press or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or not being able to grasp it.
So if I produce something (art, product, game, book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI: does it matter? Like how does it matter? Is it because they "stole" all the art in the world? But somehow when a person is "influenced" by people, ideas, and art in a less efficient way, we almost applaud that, because what else, reinvent the wheel again forever?
Apologies, but I'm copy/pasting a previous reply of mine to a similar sentiment:
Art is an expression of human emotion. When I hear music, I am part of that artist's journey and struggles. The emotion in their songs comes from their first break-up, or an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, me and my friends played a lot of StarCraft. We only played cooperatively against the AI. Until one day me and a friend decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said "man, that was intense!".
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
Yet humans are the ones enacting an AI for art (of some kind). Is it therefore not art because, even though a human initiated the process, the machine completed it?
If you argue that, then what about kinetic sculptures, what about pendulum painting, etc? The artist sets them in motion but the rest of the actions are carried out by something nonhuman.
And even in a fully autonomous sense; who are we to define art as being artefacts of human emotion? How typically human (tribalism). What's to say that an alien species doesn't exist, somewhere...out there. If that species produces something akin to art, but they never evolved the chemical reactions that we call emotions...I suppose it's not art by your definition?
And what if that alien species is not carbon-based? Is it then much of a stretch to call what an eventual AGI produces art?
My definition of art is a superposition: everything and nothing is art at the same time, because art is in the eye of the beholder. When I look up at the night sky, that's art, but no human emotion produced it.
No doubt, but if your StarCraft experience against AI was "somehow" exactly the same, gave you the same joy, and you could not even say whether it was AI or other players, does that matter? I get this is kind of a Truman Show-ish scenario, but does it really matter? If the end results are the same, does it still matter? If it does, why? I get the emotional aspect of it, but in practice you wouldn't even know. Now, is AI at that point for any of these? Possibly not. We can tell AI right now in many interactions and art forms, because it's hollow, and it's just "perfectly mediocre".
It's kind of the sci-fi cliche, can you have feelings for an AI robot? If you can what does that mean.
We lost that like 100 years ago. Sitting and watching someone perform music in an intimate setting rarely happens anymore.
If you listen to an album by your favorite band, it is highly unlikely that your feelings/emotions and interpretations correlate with what they felt. Feeling a connection to a song is just you interpreting it through the lens of your own experience, the singer isn't connecting with listeners on some spiritual level of shared experience.
I am not an AI art fan, it grosses me out, but if we are talking purely about art as a means to convey emotions around shared experiences, then the amalgamation is probably closer to your reality than a famous musician's. You could just as easily impose your feelings around a breakup or death on an AI-generated classical piano song, or a picture of a tree, or whatever.
So are photos that are edited via Photoshop not art? Are they not art if they were taken on a digital camera? What about electronic music?
You could argue all these things are not art because they used technology, just like AI music or images... no? Where does the spectrum of "true art" begin and end?
I think your view makes sense. On the other hand, Flash revolutionized animation online by allowing artists to express their ideas without having to exhaustively render every single frame, thanks to algorithmic tweening. And yeah, the resulting quality was lower than what Disney or Dreamworks could do. But the ten thousand flowers that bloomed because a wall came down for people with ideas but not time utterly redefined huge swaths of the cultural zeitgeist in a few short years.
I strongly suspect automatic content synthesis will have similar effect as people get their legs under how to use it, because I strongly suspect there are even more people out there with more ideas than time.
I hear the complaints about AI being "weird" or "gross" now and I think about the complaints about Newgrounds content back in the day.
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLMs keep fundamentally not grasping that concept. They think that output exists in a pure functional vacuum.
I don't know if I'm misinterpreting the word "influence", but low-effort internet memes have a lot more cultural impact than a lot of high-effort art. Also there's botnets, which influence political voting behaviour.
I see what you're getting at, but I think a better framing would be: there's an implicit understanding amongst humans that, in the case of things ostensibly human-created, a human found it worth creating. If someone put in the effort to write something, it's because they believed it worth reading. It's part of the social contract that makes it seem worth reading a book or listening to a lecture even if you don't receive any value from the first word.
LLMs and AI art flip this around because potentially very little effort went into making things that potentially take lots of effort to experience and digest. That doesn't inherently mean they're not valuable, but it does mean there's no guarantee that at least one other person out there found it valuable. Even pre-AI it wasn't an iron-clad guarantee of course -- copy-writing, blogspam, and astroturfing existed long before LLMs. But everyone hates those because they prey on the same social contract that LLMs do, except in a smaller scale, and with a lower effort-in:effort-out ratio.
IMO though, while AI enables malicious / selfish / otherwise anti-social behavior at an unprecedented scale, it also enables some pretty cool stuff and new creative potential. Focusing on the tech rather than those using it to harm others is barking up the wrong tree. It's looking for a technical solution to a social problem.
Well, the LLMs were trained on data that required human effort to write; it's not just random noise. So the result they give is, indirectly and probabilistically, regurgitated human effort.
I'm paying infrastructure costs for our little art community, with chatbots crawling our servers and ignoring robots.txt, mining the work of our users so they can make copies, and being told that I just don't get it because this is such a paradigm shift. Pretty great.
Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful; it's just that I don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or signing up for anything Meta-owned.
That's pretty much what they said about photographs at first. I don't think you'll find a lot of people who argue that there's no art in photography now.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
> I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's just against their self-interest, which means they subconsciously find bad arguments to support their own interest.
This is an extremely crude characterisation of what many people feel. Plenty of artists oppose copyright-ignoring generative AI and "get" it perfectly, even use it in art, but in ways that avoid the lazy gold-rush mentality we're seeing now.
I hear you; that's not a problem of AI but a problem of copyright and other stuff. I suppose they'd be enraged if an artist replicated their art too closely, rightly or wrongly. Isn't it flattery that your art is literally copied millions of times? I guess not when it doesn't pay you, which is a separate issue from AI in my opinion. Theoretically we could have a model trained only on public domain work, which would have addressed that concern.
Just as you cannot put piracy back into the bag for movies and TV shows, you cannot put AI back into the bag it came from. Bottom line: this is happening (more like it has happened), so now let's think about what that means and find a way forward.
A prime example is voice acting. I hear why voice actors are mad if someone can steal their voice. But why not work on a legal framework to sell your voice for royalties or whatever? I mean, if we can get that lovely voice of yours without you spending your weeks, and you are still compensated fairly for it, I don't see how this is a problem. And I know this is already happening, as it should.
I see this kind of talk as an extension of OP's rant. You talk as if the mass theft by these growing LLM companies were inevitable. Hogwash, and absolutely wrong. It isn't inevitable and (in my opinion) it shouldn't be.
I work at a company trying very hard to incorporate AI into pretty much everything we do. The people pushing it tend to have little understanding of the technology, while the more experienced technical people see a huge mismatch between its advertised benefits and actual results. I have yet to see any evidence that AI is "paradigm shifting" much less "revolutionary." I would be curious to hear any data or examples you have backing those claims up.
In regards to why tech people should be skeptical of AI: technology exists solely to benefit humans in some way. Companies that employ technology should use it to benefit at least one human stakeholder group (employees, customers, shareholders, etc). So far what I have seen is that AI has reduced hiring (negatively impacting employees), created a lot of bad user interfaces (bad for customers), and cost way more money to companies than they are making off of it (bad to shareholders, at least in the long run). AI is an interesting and so far mildly useful technology that is being inflated by hype and causing a lot of damage in the process. Whether it becomes revolutionary like the Internet or falls by the wayside like NFTs and 3D TV's is unknowable at this point.
Case in point: subscription costs for everything going up to justify the "additional value" that AI is bringing _whether you use it or not_.
This would have been an additional, more expensive, subscription tier in the past.
Anecdote: Literally this morning, krisp.ai (noise cancellation software that succumbed to slop-itis two years ago and added AI notetaker and meeting summarization stuff to their product that's really difficult to turn off, which is insulting seeing how most people purchased this tool JUST FOR NOISE CANCELLING, but I digress) sent an email to their customers (me) announcing that they would no longer offer a free tier and will, instead, offer 14-day trials with all features enabled.
Why?
"As AI has become central to everyday work, we’ve seen that most people preferred the unlimited workflow once they tried it."
Big tech senior software engineer working on a major AI product speaking:
I totally agree with the message in the original post.
Yes, AI is going to be everywhere, and it's going to create amazing value and serious challenges, but it's essential to make it optional.
This is not only for the sake of users' freedom. This is essential for companies creating products.
This is Minority Report, until it is not.
AI has many modes of failure, exploitability, and unpredictability. Some are known and many are not. We have fixes for some, and band-aids for some others, but many are not even known yet.
It is essential to make AI optional, to have a "dumb" alternative to everything delegated to a Gen AI.
These options should be given to users, but also, and maybe even more importantly, be baked into the product as an actively maintained and tested plan B.
The general trend of cost cutting will not be aligned with this. Many products will remove, intentionally or not, the non-AI paths, and when the AI fails (not if), they will regret this decision.
This is not a criticism of AI or of the shift toward it; it's a warning for anyone who does not take seriously the fundamental unpredictability of generative AI.
When people talk about AI, they aren't talking about the algorithms and models. They're talking about the business. If you can honestly stand up and look at the way the AI companies and related businesses are operating and not feel at least a little unease, you're probably Sam Altman.
> However, tech people who think AI is bad, or not inevitable, are really hard to understand.
I disagree. It's really not. Popular AI is extremely powerful and capable of a lot of things. It's also being used for nefarious purposes at the cost of our privacy and, in many cases, livelihoods.
> So if I produce something (art, a product, a game, a book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI, does it matter? How does it matter?
We don't live in a vacuum.
Every work that someone mostly generated from a prompt is work that another person (or people) didn't get to create. This was "fine" when the scope of the automation was small, as it gave people time to re-skill or apply their skills elsewhere. This is not fine when those with capital are talking about using this for EVERY POSSIBLE skill. This is even less fine when you consider how the systems that learned how to produce these works were literally trained on stolen data!
Yes, there are plenty of jobs that are safe from today's AI. That doesn't stop the possibility from being a threat, however.
I also disagree that the crop of AI art that exists today is "good." Some of what's out there is pretty novel, but a vast, vast majority of it looks extremely same-y. Same color hues, same styles (see also: the pervasive Studio Ghibli look), DEFINITELY same fonts, etc. It's also kind-of low res, so it always looks sloppy when printed on large format media. That's before the garbled text that gets left in. Horrible look IMO.
AI-generated audio is worse. The soundstage is super compressed and the output sounds low-bandwidth. It does work well for lo-fi, however (I'm sure lo-fi artists will disagree).
I'm sure all of this will get better as time goes on and more GPUs are sacrificed for better training.
Yes. The work of art should require skills that took years to hone, and innate talent. If it was produced without such, it is a fraud; I've been deceived.
But in fact I was not deceived in that sense, because the work is based on talent and skill: that of numerous unnamed, unattributed people.
It is simply a low-effort plagiarism, presented as an original work.
> I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's just against their self-interest, which means they subconsciously find bad arguments to support their own interest.
Yeah, no. It's presumptuous to say that these are the only reasons. I don't think you understand at all.
> So if I produce something (art, a product, a game, a book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI, does it matter? How does it matter?
Because to me, and many others, art is a form of communication. Artists toil because they want to communicate something to the world; people consume art because they want to be spoken to. It's a two-way street of communication. Every piece created by a human carries a message, one that's sculpted by their unique life experiences and journey.
AI-generated content may look nice on the surface, but fundamentally it says nothing at all. There is no message or intent behind a probabilistic algorithm putting pixels onto my screen.
When a person encounters AI content masquerading as human-made, it's a betrayal of expectations. There is no two-way communication, the "person" on the other side of the phone line is a spam bot. Think about how you would feel being part of a social group where the only other "people" are LLMs. Do you think that would be fulfilling or engaging after the novelty wears off?
If I look at a piece of art that was made by a human who earned money for making that art, then it means an actual real human out there was able to put food on their table.
If I look at a piece of "art" produced by a generative AI that was trained on billions of works from people in the previous paragraph, then I have wasted some electricity even further enriching a billionaire and encouraging a world where people don't have the time to make art.
I think we're all just sick of having everything upended and forced on us by tech companies. This is true even if it is inevitable. It occurred to me lately that modern tech and the modern internet have sort of turned into something which is evil in the way that advertising is evil (this is aside from the fact, of course, that the internet is riddled with ads).
Modern tech is 100% about trying to coerce you: you need to buy X, you need to be outraged by X, you must change X in your life or else fall behind.
I really don't want any of this, I'm sick of it. Even if it's inevitable I have no positive feelings about the development, and no positive feelings about anyone or any company pushing it. I don't just mean AI. I mean any of this dumb trash that is constantly being pushed on everyone.
Well, you don't need to, and no tech company can force you to.
> you must change X in your life or else fall behind
This is not forced on you by tech companies, but by the rest of society adopting that tech because they want to. Things change as technology advances. Your feeling of entitlement that you should not have to make any change that you don't want to is ridiculous.
I honestly cannot agree more with this, while still standing behind what I said on the parent comment.
As someone who's been in tech for more than 25 years, I started to hate tech because of all the things you've said. I loved what tech meant, and I hate what it became (to the point that I got out of the industry).
But the majority of these problems disappear if we talk about offline models, open models. Some of that has already happened, and we know more will happen; it's just a matter of time. In that world, how can any of us say "I don't want a good amount of the knowledge of the whole fucking world on my computer, without even needing an internet connection, or paying someone, or seeing ads"?
I respect it if your stand is like a vegetarian saying "I'm ethically against eating animals"; I have no argument against that. It's not my ethical line, but I respect it. Beyond that point, however, what's the legitimate argument? Shall we make humanity worse by rejecting this paradigm-shifting, world-changing thing? Do we think about the people who are going to be able to read any content in the world in their own language, even if their language is a very obscure one that no one cares to auto-translate? I mean, what AI means for humanity is huge.
What tech companies and governments do with AI is horrific and scary. However, governments will do it nonetheless, and tech companies will be supported by those powers nonetheless. Therefore AI is not the enemy; let's aim our criticism and actions at the real enemies.
Well, it's not really about AI then, is it? It's about millennia of human evolution and the intrinsically human behaviours we've evolved.
Like greed. And apathy. Those are just some of the things that have enabled billionaires and trillionaires. Is it ever gonna change? Well it hasn't for millions of years, so no. As long as we remain human we'll always be assholes to each other.
> I can only say that being against this is either self-interest or not being able to grasp it.
So we're just waving away the carbon cost, centralization of power, privacy fallout, fraud amplification, and the erosion of trust in information? These are enormous society-level effects (and there are many more to list).
Dismissing AI criticism as simply ignorance says more about your own.
> I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's just against their self-interest, which means they subconsciously find bad arguments to support their own interest.
Running this paragraph through Gemini returns a list of the fallacies employed, including Attacking the Motive: "Even if the artists are motivated by self-interest, this does not automatically make their arguments about AI's negative impacts factually incorrect or 'bad.'"
Just as a poor person is more aware, through direct observation and experience, of the consequences of corporate capitalism and financialisation, an artist at the coal face of the restructuring of the creative economy by massive 'IP owners' and IP pirates (i.e. the companies training on their creative work without permission) is likely far more in touch with the consequences of actually existing AI than a tech worker who is financially incentivised to view them benignly.
> The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me.
This is a strange kind of anti-naturalistic fallacy. A paradigm shift (or indeed a revolution) is not in itself a good thing. One paradigm shift that has occurred in recent geopolitics, for example, is the normalisation of state murder, i.e. extrajudicial assassination in the drone war, or the current US government's use of missile attacks on alleged drug traffickers. One can cite countless other negative paradigm shifts.
> if I produce something (art, a product, a game, a book) and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI, does it matter?
1) You haven't produced it.
2) Such a thing - a beautiful product of AI that is not identifiably artificial - does not yet, and may never exist.
3) Scare quotes around intellectual property theft aren't an argument. We can abandon IP rights - in which case hurrah, tech companies have none - or we can in law at least, respect them. Anything else is legally and morally incoherent self justification.
4) Do you actually know anything about the history of art, any genre of it whatsoever? Because suggesting originality is impossible and 'efficiency' of production is the only form of artistic progress suggests otherwise.
> I understand artists etc. talking about AI in a negative sense, because they don't really get it completely, or it's just against their self-interest, which means they subconsciously find bad arguments to support their own interest.
> But somehow, if a person is "influenced" by people, ideas, and art in a less efficient way, we almost applaud that, because what else, reinvent the wheel again forever?
I understand AI perfectly fine, thanks. I just reject the illegal vacuuming up of everyone's art for corporations while things like sampling in music remain illegal. This idea that everything must be efficient comes from the bowels of Silicon Valley and should die.
> However, tech people who think AI is bad, or not inevitable, are really hard to understand. It's almost like Bill Gates saying "we are not interested in the internet". This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or not being able to grasp it.
Again, the problem is less the tech itself than the corporations who have control of it. Yes, I'm against corporations gobbling up everyone's data for ads and AI surveillance. I think you might be the one who doesn't understand that not everything is roses and there might be more weeds in the garden than flowers.
It's not only corps, though; AI at this point includes many open models, and we'll have more of them as we go if needed. Just as the original hacker culture gave rise to the open source movement, AI will follow the same path.
When LLMs first took off, people were talking about how governments would control them, but anyone who knows the history of personal computing and hacker culture knew that's not the way things go in this world.
Do I enjoy corpos making money off of anyone's work, including obvious things like literally pirating books to train their models (Meta)? Absolutely not. However, you are blaming the wrong thing here: it's not the technology's fault, it's how governments are always corrupted and side with money instead of their people. We should be lashing out at them, not at each other, not at the people who use AI, and certainly not at the people who innovate and build it.
> I can only say that being against this is either self-interest or not being able to grasp it.
I'm massively burnt out, what can I say? I can grasp new tech perfectly fine, but I don't want to. I quite honestly can't muster enough energy to care about "revolutionary" things anymore.
If anything I resent having to deal with yet more "revolutionary" bullshit.
As someone else put it succinctly, there's art and then there's content. AI generated stuff is content.
And not to be too dismissive of copywriters, but old Buzzfeed style listicles are content as well. Stuff that people get paid pennies per word for, stuff that a huge amount of people will bid on on a gig job site like Fiverr or what have you is content, stuff that people churn out by rote is content.
Creative writing on the other hand is not content. I won't call my shitposting on HN art, but it's not content either because I put (some) thought into it and am typing it out with my real hands. And I don't have someone telling me what I should write. Or paying me for it, for that matter.
Meanwhile, AI doesn't do anything on its own. It can be made to simulate doing stuff on its own (by running continuously / unlimited, or by feeding it a regular stream of prompts), but it won't suddenly go "I'm going to shitpost on HN today" unless told to.
To me, it matters because most serious art requires time and effort to study, ponder, and analyze.
The more stuff that exists in the world that superficially looks like art but is actually meaningless slop, the more likely it is that your time and effort is wasted on such empty nonsense.
The problem is that LLMs are just parrots who swoop into your house and steal everything, then claim it as theirs. That's not art, that's thievery and regurgitation. To resign oneself that this behavior is ok and inevitable is sad and cowardly.
To conflate LLMs with a printing press or the internet is dishonest; yes, it's a tool, but one which degrades society in its use.
Contemporary AI is bad in the same way a Walther P.38 is bad: it's a tool designed by an objectively, ontologically evil force specifically for their evil ends. We live in a world where there are no hunting rifles, no pea-shooters for little old women to protect themselves, no sport pistols. Just the AI equivalent of a weapon built for easy murder by people whose express end is taking over the world.
...Okay, now maybe take that and dial it back a few notches of hyperbole, and you'll have a reasonable explanation for why people have issues with AI as it currently exists. People are not wrong to recognize that, just because some people use AI for benign reasons, the people and companies that have formed a cartel for the tech mainly see those benign reasons as incidental to becoming middle men in every single business and personal computing task.
Of course, there is certainly a potential future where this is not the case, and AI is truly a prosocial, democratizing technology. But we're not there, and we will have a hard time getting there with Zuckerberg, Altman, Nadella, and Musk at the helm.
As someone who spends quite a bit of time sketching and drawing for my own satisfaction, it does matter to me when something is created using AI.
I can tell whether something is a matte painting, Procreate, watercolor, or some other medium. I have enough taste to distinguish between a neophyte and an expert.
I know what it means to be that good.
Sure, most people couldn’t care less, and they’re happy with something that’s simply pleasant to look at.
But for those people, it wouldn’t matter even if it weren’t AI-generated. So what is the point?
You created something without having to get a human to do it. Yaay?
Except we already have more content than we know what to do with, so what exactly are we gaining here? Efficiency?
Generative AI was fed on the free work and joy of millions, only to mechanically regurgitate content without attribution. To treat creators as middlemen in the process.
Yaay, efficient art. This is really what is missing in a world with more content than we have time to consume.
The point of markets, of progress, is the improvement of the human condition. That is the whole point of every regulation, every contract, and every innovation.
I am personally not invested in a world that is worse for humanity.
I mean, we have already stopped caring about the dumb stock photos at the beginning of every blog post, so we already don't care about meaningless filler; yet it's still happening because there is an audience for it.
Art can be about many things; we have a lot of tech-oriented art (think of the demoscene). No one gives a shit about art that evokes nothing for them. Therefore, if AI art evokes nothing, who cares; and if it does evoke something, is it suddenly bad because it's AI? How?
Actually, I think AI will force a good number of mediums to their logical conclusion: if what you do is mediocre and not original, and AI can do the same or better, then that's on you. Once you pass that threshold, that's when the world cherishes you as a recognized artist. Then again, you can be an artist even if 99.9% of the world thinks what you produce is absolute garbage; that doesn't change what you do and what it means to you. Again, nothing to do with AI.
Game theory is inevitable.
Because game theory is just math, the study of how independent actors react to incentives.
The specific examples called out here may or may not be inevitable. It's true that the future is unknowable, but it's also true that the future is made up of 8B+ independent actors and that they're going to react to incentives. It's also true that you, personally, are just one of those 8B+ people and your influence on the remaining 7.999999999B people, most of whom don't know you exist, is fairly limited.
If you think carefully about those incentives, you actually do have a number of significant leverage points with which to change the future. Many of those incentives are crafted out of information and trust, people's beliefs about what their own lives are going to look like in the future if they take certain actions, and if you can shape those beliefs and that information flow, you alter the incentives. But you need to think very carefully, on the level of individual humans and how they'll respond to changes, to get the outcomes you want.
Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph
2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc.
3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority
TLDR, think in terms of the algebra of incentives, not in terms of squeaky wheelism and moral exhortation
[1] https://en.wikipedia.org/wiki/Focal_point_(game_theory)
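To make point 2 concrete, here's a toy sketch in Python; the payoff numbers are invented purely for illustration and model no real market. It shows how rewriting the payoffs for everyone at once can dissolve a bad equilibrium that no individual could escape alone:

    # Symmetric 2x2 coordination game. payoff[i][j] = payoff to a player
    # choosing strategy i while the other player chooses strategy j.
    # All numbers are made up for illustration only.
    STRATEGIES = ["stay_on_ad_platform", "switch_to_open_protocol"]

    def pure_nash_equilibria(payoff):
        eq = []
        for i in range(2):
            for j in range(2):
                # (i, j) is an equilibrium if neither player gains by deviating alone.
                if payoff[i][j] >= payoff[1 - i][j] and payoff[j][i] >= payoff[1 - j][i]:
                    eq.append((STRATEGIES[i], STRATEGIES[j]))
        return eq

    # Status quo: a lone switcher loses the social graph (payoff 0), so
    # (stay, stay) persists even though (switch, switch) would be better.
    before = [[3, 3],
              [0, 4]]

    # After an interoperability rule: a lone switcher keeps the graph, so
    # switching becomes the obvious move for a self-interested player.
    after = [[3, 3],
             [3.5, 4]]

    print(pure_nash_equilibria(before))  # both equilibria exist; we're stuck at the bad one
    print(pure_nash_equilibria(after))   # only (switch, switch) survives

The intervention targets the payoff matrix itself rather than any one player's virtue, which is the difference between engineering a new Schelling point and moral exhortation.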
In particular, the "games" can operate on the level of non-human actors like genes, or memes, or dollars. Several fields generate much more accurate conclusions when you detach yourself from an anthropocentric viewpoint, eg. evolutionary biology was revolutionized by the idea of genes as selfish actors rather than humans trying to pass along their genes; in particular, it explains such concepts as death, sexual selection, and viruses. Capitalism and bureaucracy both make a lot more sense when you give up the idea of them existing for human betterment and instead take the perspective of them existing simply for the purpose of existing (i.e. those organizations that survive are, well, those organizations that survive; there is nothing morally good or bad about them, but the filter that they passed is simply that they did not go bankrupt or get disbanded).
But underneath those, game theory is still fundamental. You can use it to analyze the incentives and selection pressures on the system, whether they are at sub-human (eg. viral, genetic, molecular), human, or super-human (memetic, capitalist, organizational, bureaucratic, or civilizational) scales.
To me, this inevitability only is guaranteed if we assume a framing of non-cooperative game theory with idealized self-interested actors. I think cooperative game theory[1] better models the dynamics of the real world. More important than thinking on the level of individual humans is thinking about the coalitions that have a common interest to resist abusive technology.
[1]: https://en.wikipedia.org/wiki/Cooperative_game_theory
If cooperative coalitions resisting undesirable, abusive technology model the real world better, why is the world getting more ads? (E.g., one of the author's bullet points was, "Ads are not inevitable.")
Currently in the real world...
- Ad frequency goes up: more ad interruptions in TV shows, native ads embedded in podcasts, sponsor segments in YouTube vids, etc.
- Ad space goes up: ads on refrigerator screens, gas pump touch screens, car infotainment systems, smart TVs, Google Search results, the ChatGPT UI, computer-generated virtual ads in sports broadcasts overlaid on courts and stadiums, etc.
What is the cooperative coalition that makes "ads not inevitable"?
The really simple finding is that when you have both repetition and reputation, cooperation arises naturally. Because now you've changed the payoff matrix; instead of playing a single game with the possibility of defection without consequences, defection now cuts you off from payoffs in the future. All you need is repeated interaction and the ability to remember when you've been screwed, or learn when your counterparty has screwed others.
This has been super relevant for career management, eg. you do much better in orgs where the management chain has been intact for years, because they have both the ability and the incentive to keep people loyal to them and ensure they cooperate with each other.
[1] https://en.wikipedia.org/wiki/Tit_for_tat
[2] https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation
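As a minimal simulation of that claim (standard textbook payoffs; nothing here models any specific real interaction):

    # Iterated prisoner's dilemma: repetition plus memory makes cooperation pay.
    PAYOFF = {            # (my move, their move) -> my payoff
        ("C", "C"): 3,    # mutual cooperation
        ("C", "D"): 0,    # exploited sucker
        ("D", "C"): 5,    # one-time temptation
        ("D", "D"): 1,    # mutual defection
    }

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror the opponent's last move.
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strat_a, strat_b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (300, 300): cooperation compounds
    print(play(always_defect, tit_for_tat))  # (104, 99): win round one, then
                                             # get cut off from future payoffs

Unconditional defection wins the first round and then forfeits every payoff after it, which is exactly the changed payoff matrix described above.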
Absolutely a cooperative game - nobody was forced to build them, nobody was forced to finance them, nobody was forced to buy them. These were all willing choices, all going in the same direction. (The same goes for many of the other examples.)
Weather predictions are just math, for example, and they are always wrong to some degree.
I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
Math is almost the definition of inevitability. Logic doubly so.
Once there's a sophisticated enough human model to decipher our myriad of idiosyncrasies, we will all be relentlessly manipulated, because it is human nature to manipulate others. That future is absolutely inevitable.
Might as well fall into the abyss with open arms and a smile.
Just because we weren't able to discover all of the law of physics, doesn't mean they don't apply to us.
Now couple the fact that most people are terrible at modeling with the fact that they tend to ignore implicit constraints… the result is something that resembles religion more than science.
The concept of Game Theory is inevitable because it's studying an existing phenomenon. Whether or not the researchers of Game Theory correctly model that is irrelevant to whether the phenomenon exists or not.
The models such as Prisoner's Dilemma are not inevitable though. Just because you have two people doesn't mean they're in a dilemma.
---
To rephrase this, Technology is inevitable. A specific instance of it (ex. Generative AI) is not.
Game theory makes a lot of simplifying assumptions. In the real world most decisions are made under constraints, and you typically lack a lot of information and can't dedicate enough resources to each question to find the optimal choice given the information you have. Game theory is incredibly useful, especially when talking about big, carefully thought out decisions, but it's far from a perfect description of reality
It does because it's trying to get across the point that although the world seems impossibly complex it's not. Of course it is in fact _almost_ impossibly complex.
This doesn't mean that it's redundant for more complex situations; it only means that to increase its accuracy you have to deepen the model.
Whether that word is reductionism is an exercise left to Chomsky?
> It's also true that you, personally, are just one of those 8B+ people
Unless you communicate and coordinate!
If you want to fix these things, you need to come up with a way to change the nature of the game.
It is not a physics theory, that works regardless of our will.
Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.
Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.
People can and do act against their own self-interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.
It is beyond question that game theory is highly useful and simplifies reality to an extent that we can mathematically model interactions between agents, but only under our underlying assumptions. And these assumptions need not be true; as a matter of fact, there are studies on how models like homo oeconomicus have created a self-fulfilling reality by making people think in the ways given by the model, adjusting themselves to the model rather than the other way around, when ideally the model should approximate us. Hence, I don't think you can plainly frame this reality as a product of game theory.
Yes, the mathematicians will tell you it's "inevitable" that people will cheat and "enshittify". But if you take statistical samplings of the universe from an outsider's perspective, you would think it would be impossible for life to exist. Our whole existence is built on disregard for the inevitable.
Reducing humanity to a bunch of game-theory optimizing automatons will be a sure-fire way to fail The Great Filter, as nobody can possibly understand and mathematically articulate the larger games at stake that we haven't even discovered.
That's not how mathematics works. "it's just math therefore it's a true theory of everything" is silly.
We cannot forget that mathematics is all about models, models which, by definition, do not account for even remotely close to all the information involved in predicting what will actually occur in reality. Game Theory is a theory about a particular class of mathematical structures. You cannot reduce all of existence to just this class of structures, and if you think you can, you'd better be ready to write a thesis on it.
Couple that with the inherent unpredictability of human beings, and I'm sorry but your Laplacean dreams will be crushed.
The idea that "it's math so it's inevitable" is a fallacy. Even if you are a hardcore mathematical Platonist you should still recognize that mathematics is a kind of incomplete picture of the real, not its essence.
In fact, the various incompleteness theorems illustrate directly, in mathematics' own terms, that the idea that a mathematical perspective or any logical system could perfectly account for all of reality is doomed from the start.
It's not a question about the one person who cannot influence the 7.9B. The question is disparity, much like income disparity.
What happens when fewer and fewer people influence greater and greater numbers? Understanding the risk in that is the point.
But don't forget, we discovered Nash Equilibrium, which changed how people behaved, in many interesting scenarios.
Also, from a purely Game Theoretic standpoint... All kinds of atrocities are justifiable, if it propagates your genes...
I'll say frankly that I personally object to Star Wars on an aesthetic level - it is ultimately an artistically-flawed media franchise even if it has some genuinely compelling ideas sometimes. But what really bothers me is that Star Wars in its capacity as a modern culturally-important story cycle is also intellectual property owned by the Disney corporation.
The idea that the problems of the world map neatly to a conflict between an evil empire and a plucky rebellion is also basically propagandistic (and also boring). It's a popular storytelling frame - that's why George Lucas wrote the original Star Wars movies that way. But I really don't like seeing someone watch a TV series using the Star Wars intellectual property package and then using the story the writers chose to write - writers ultimately funded by Disney - as a basis for how they see themselves in the world politically.
Selective information dissemination, persuasion, and even disinformation are for sure the easiest ways to change the behaviors of actors in the system. However, the most effective and durable way to "spread those lies" are for them to be true!
If you can build a technology which makes the real facts about those incentives different than what it was before, then that information will eventually spread itself.
For me, the canonical example is the story of the electric car:
All kinds of persuasive messaging, emotional appeals, moral arguments, and so on have been employed to convince people that it's better for the environment if they drive an electric car than a polluting, noisy, smelly, internal-combustion gas guzzling SUV. Through the 90s and early 2000s, this saw a small number of early adopters and environmentalists adopting niche products and hybrids for the reasons that were persuasive to them, while another slice of society decided to delete their catalytic converters and "roll coal" in their diesels for their own reasons, while the average consumer was still driving an ICE vehicle somewhere in the middle of the status quo.
Then lithium battery technology and solid-state inverter technology arrived in the 2010s and the Tesla Model S was just a better car - cheaper to drive, more torque, more responsive, quieter, simpler, lower maintenance - than anything the internal combustion engine legacy manufacturers could build. For the subset of people who can charge in their garage at home with cheap electricity, the shape of the game had changed, and it's been just a matter of time (admittedly a slow process, with a lot of resistance from various interests) before EVs were simply the better option.
Similarly, with modern semiconductor technology, solar and wind energy no longer require desperate pleas from the limited political capital of environmental efforts, it's like hydro - they're just superior to fossil fuel power plants in a lot of regions now. There are other negative changes caused by technology, too, aided by the fact that capitalist corporations will seek out profitable (not necessarily morally desirable) projects - in particular, LLMs are reshaping the world just because the technology exists.
Once you pull a new set of rules and incentives out of Pandora's box, game theory results in inevitable societal change.
[1] http://en.wikipedia.org/wiki/The_Trap_(television_documentar...
* Actors have access to limited computation
* The "rules" of the universe are unknowable and changing
* Available sets of actions are unknowable
* Information is unknowable, continuous, incomplete, and changes based on the frame of reference
* Even the concept of an "Actor" is a leaky abstraction
There's a field of study called Agent-based Computational Economics which explores how systems of actors behaving according to sets of assumptions behave. In this field you can see a lot of behaviour that more closely resembles real world phenomena, but of course if those models are highly predictive they have a tendency to be kept secret and monetized.
So for practical purposes, "game theory is inevitable" is only a narrowly useful heuristic. It's certainly not a heuristic that supports technological determinism.
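For a taste of that agent-based flavor, here's a toy El Farol-style attendance game in Python; it's a caricature sketched for illustration, not an actual research model. Each agent predicts attendance with its own fixed weighted average of a short memory window: limited computation, partial information, no knowledge of the "rules".

    import random

    N, CAPACITY, MEMORY = 100, 60, 4
    attendance = [50] * MEMORY  # seed history

    def make_agent():
        # A fixed, normalized weight vector over the last MEMORY observations.
        w = [random.random() for _ in range(MEMORY)]
        s = sum(w)
        return [x / s for x in w]

    agents = [make_agent() for _ in range(N)]

    for week in range(20):
        recent = attendance[-MEMORY:]
        going = sum(1 for w in agents
                    if sum(wi * hi for wi, hi in zip(w, recent)) < CAPACITY)
        attendance.append(going)
        print(f"week {week:2d}: {going} attend")

    # If attendance were ever constant, every convex-weighted prediction would
    # equal it, so either everyone would go or nobody would: a contradiction.
    # The aggregate keeps fluctuating even though each agent is trivially simple.

That kind of emergent, never-settling dynamic is exactly what a one-shot equilibrium analysis misses.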
You need to track everyone and everything on the internet because you did not want to cap your wealth at a reasonable price for the service. You are willing to live with accumulated sins because "its not as bad as murder". The world we have today has way more to do with these things than anything else. We do not operate as a collective, and naturally, we don't get good outcomes for the collective.
OP is 100% correct. Either you accept that the vast majority are mindless automatons (not hard to get on board with that, honestly, but it still seems an overestimate), or there's some kind of structural imbalance, an asymmetry that's actively harmful and not the passive outcome of 8B independent actors.
> Tiktok is not inevitable.
TikTok the app and company, not inevitable. Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable. Short form video follows gradual escalation of most engaging content formats, with legacy stretching from short-form-text in Twitter, short-form-photo in Instagram and Snapchat. Global content discovery is a natural next experiment after extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
If you disavow short-form video as a medium altogether, something I'm strongly considering, then you can. It does mean you have to make sacrifices; for example, YouTube doesn't let you disable their short-form video feature, so it is inevitable for people who decide they don't want to drop YouTube. That is still a choice, though, so it is not truly inevitable.
The larger point is that there are always people pushing some sort of future, sketching it as inevitable. But the reality is that there always remains a choice, even if that choice means you have to make sacrifices.
The author is annoyed at people throwing in the towel and declaring AI inevitable, when the author apparently still sees a path to not tolerating AI. Unfortunately the author doesn't really constructively show that path, so the whole article is basically a luddite complaint.
Co-ordination problems are the hardest problems.
This is not a new thing. TV monetizes human attention. Tiktok is just an evolution of TV. And Tiktok comes from China, which has a very different society. If short-form algo-slop video can thrive in both liberal democracies and a heavily censored society like China, then it's probably somewhat inevitable.
Just objectively false, and it assumes that the path humans took to allow this is the only path that could have unfolded.
Much of this tech could have been regulated early on, preventing garbage like short-form slop, from existing.
So in short, none of what you are describing is "inevitable". Someone might come up with it, and others can group together and say: "We aren't doing that, that is awful".
My personal experience is that most people don't mind these things, for example short-form content: most of my friends genuinely like that sort of content, and I can to some extent also understand why. Just like heroin or smoking, it will take some generations to regulate it (and to be fair, we still have problems with those two even though they are arguably much worse).
Something might be "inevitable" in the sense that someone is going to create it at some point whether we like it or not.
Something is also not "inevitable" in the sense that we will be forced to use it or you will not be able to function in society. <-- this is what the author is talking about
We do not need to tolerate being abused by the elites or use their terrible products because they say so. We can just say no.
What I don't like about this sort of article is that it fails to come up with _any_ meaningful ideas on how to convince others to "just say no".
The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers. These things don't have any utility otherwise and no reason to exist outside of those.
Scams and fraud are such potent drivers that perhaps it was inevitable, but one could imagine a more competent regulatory regime that nipped this stuff in the bud.
nb: avoiding financial regulations and money laundering are forms of fraud
The idea of a cheap, universal, anonymous digital currency itself is old (e.g. eCash and Neuromancer in the '80s, Snow Crash and Cryptonomicon in the '90s).
It was inevitable that someone would try implementing it once the internet was widespread - especially as long as most banks are rent-seeking actors exploiting those relying on currency exchanges, as long as many national currencies are directly tied to failing political and economic systems, and as long as the un-banking and financial persecution of undesirables was a threat.
Doing it in such an extremely decentralized way, with the whole proof-of-work shtick tacked on top, was not inevitable and arguably not a good way to do it; nor was the cancer that has grown on top of it all...
Imagine new coordination technology X. We can remove any specific tech reference to remove prior biases. Say it is a neutral technology that could enable new types of positive coordination as well as negative.
3 camps exist.
A: The grifters. They see the opportunity to exploit and individually gain.
B: The haters. They see the grifters and denigrate the technology entirely. Leaving no nuance or possibility for understanding the positive potential.
C: The believers. They see the grift and the positive opportunity. They try and steer the technology towards the positive and away from the negative.
The basic formula for where the technology ends up is -2A - B + C. It's a bit of a broad-strokes brush, but you can probably guess where to bin our current political parties among these categories. We need leadership which can identify and understand the positive outcomes and push us in those directions. I see very little strength anywhere, from the tech leaders to the politicians to the social media mob, to get us there. For that, we all suffer.
Lol. Permissionless payments certainly have utility. Making it harder for governments to freeze/seize your assets has utility. Buying stuff the government disallows, often illegitimately, has value. Currency that can't be inflated has value.
And outside of pure utility, they have tons of ideological reasons to exist beyond scams and fraud. Your inability to imagine those, or your dismissal of them, is telling as to your close-mindedness.
But further: the human condition has been developing for tens of thousands of years, and efforts to exploit the human condition for a couple of thousand (at least). So do we expect that a technology that has been around for a fraction of that time would escape all of the inevitable 'abuses' of it?
What we need to focus on is mitigation, not lament that people do what people do.
I doubt that. There is a reason the videos are getting longer again.
So people could have ignored the short form from the beginning. And wasn't the matching algorithm the real killer feature that amazed people, not the length of the videos?
Anecdotally, I hear lots of people talking about the short attention span of Zoomers and Gen Alpha (which they define as 2012+; I'd actually shift the generation boundary to 2017+ for the reasons I'm about to mention). I don't see that with my kid's 2nd-grade classmates: many of them walk around with their nose in a book and will finish whole novels. They're the first class after phonics was reintroduced in the 2023-2024 kindergarten year; every single kid knew how to read by the end of kindergarten. Basic fluency in skills like reading and math matters.
More generally I think the problems we got into were inevitable. They are the result of platforms optimizing for their own interests at the expense of both creatives and users, and that is what any company would do.
All the platforms enshittified, they exploit their users first, by ranking addictive content higher, then they also influence creatives by making it clear only those who fit the Algorithm will see top rankings. This happens on Google, YT, Meta, Amazon, Play Store, App Store - it's everywhere. The ranking algorithm is "prompting" humans to make slop. Creatives also optimize for their self interest and spam the platforms.
Society develops antibodies to harmful technology but it happens generationally. We're already starting to view TikTok the way we view McDonalds.
But don't throw the baby out with the bath water. Most food innovation is net positive but fast food took it too far. Similarly, most software is net positive, but some apps take it too far.
Perhaps a good indicator of which companies history will view negatively are the ones where there's a high concentration of executives rationalizing their behavior as "it's inevitable."
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
but people just throw their hands up: "looks like another drought this year! That's California!".
I don’t see how we could politically undermine these systems, but we could all do more to contribute to open source workarounds.
We could contribute more to smart tv/e-reader/phone & tablet jailbreak ecosystems. We could contribute more to the fediverse projects. We could all contribute more to make Linux more user friendly.
For instance, we could forbid taxpayer money from being spent on proprietary software and on hardware that is insufficiently respectful of its user, and we could require that 50% of the money not spent on the now forbidden software instead be spent on sponsorships of open source contributors whose work is likely to improve the quality of whatever open alternatives are relevant.
Getting Microsoft and Google out of education would be huge re: denormalizing the practice of accepting eulas and letting strangers host things you rely on without understanding how they're leveraging that position against your interests.
France and Germany are investing in open source (https://chipp.in/news/france-and-germany-launch-docs-an-open...), though perhaps not as aggressively as I've proposed. Let's join them.
To better the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
That's a good metaphor for the rapid growth of AI. It is driven by real needs from multiple directions. For it to become evitable, it would take coercion or the removal of multiple genuine motivators. People who think we can just say no must be getting a lot less value from it than I do day to day.
> For people with underlying heart disease, a 2017 study in the journal Environmental Research linked increased particulate air pollution from wood smoke and other sources to inflammation and clotting, which can predict heart attacks and other heart problems.
> A 2013 study in the journal Particle and Fibre Toxicology found exposure to wood smoke causes the arteries to become stiffer, which raises the risk of dangerous cardiac events. For pregnant women, a 2019 study in Environmental Research connected wood smoke exposure to a higher risk of hypertensive disorders of pregnancy, which include preeclampsia and gestational high blood pressure.
https://www.heart.org/en/news/2019/12/13/lovely-but-dangerou...
I like the metaphor of burning wood, I also think it's going to be left for fun.
I do wonder who the AI era's version of Marx will be, what their version of the Communist Manifesto will say. IIRC, previous times this has been said this on HN, someone pointed out Ted Kaczynski's manifesto.
* Policing and some pensions and democracy did exist in various fashions before the industrial revolution, but few today would recognise their earlier forms as good enough to deserve those names today.
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
I think this is a variant of "every action is normative of itself". Using AI states that use of AI is normal and acceptable. In the same way that for any X doing X states that X is normal and acceptable - even if accompanied by a counterstatement that this is an exception and should not set a precedent.
So I guess I'm morally obligated to use LLMs specifically to reject this framework? Works for me.
To clarify, I don't think pushing an ideology you believe in by posting a blog post is a bad thing. That's your right! I just think I have to read posts that feel like they have a very strong message with more caution. Maybe they have a strong message because they have a very good point - that's very possible! But often times, I see people using this as a way to say "if you're not with me, you're against me".
My problem here is that this idea that "everything is political" leaves no room for a middle ground. Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
All that to say, maybe I'm totally wrong, I don't know. I'm open to an argument against mine, because there's a very good chance I'm missing the point.
> Is my choice to write some boilerplate code using gen AI truly political?
I am much closer to agreeing with your take here, but as you recognise, there are lots of political aspects to your actions, even if they are not conscious. Not intentionally being political doesn't mean you are not making political choices; there are many more that your AI choice touches upon: privacy issues, wealth distribution, centralisation, etc. Of course these choices become limited by practicalities, but they still exist.
Resisting the status quo of hostile technology is an endless uphill battle. It requires continuous effort, mostly motivated by political or at least ideological reasons.
Not fighting it is not the same as being neutral, because not fighting it supports this status quo. It is the conscious or unconscious surrender to hostile systems, whose very purpose is to lull you into apathy through convenience.
This is also my core reservation against the idea.
I think that the belief only holds weight in a society that is rife with opposing interpretations about how it ought to be managed. The claim itself feels like an attempt to force someone toward the interests of the one issuing it.
> Is my choice to write some boilerplate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
Apparently yes it is. This is all determined by your impressions on generative AI and its environmental and economic impact. The problem is that most blog posts are signaling toward a predefined in-group either through familiarity with the author or by a preconceived belief about the subject where it’s assumed that you should already know and agree with the author about these issues. And if you don’t you’re against them.
For example—I don’t agree that everything is inevitable. But I as I read the blog post in question I surmised that it’s an argument against the idea that human beings are not at the absolute will of technological progress. And I can agree with that much. So this influences how I interpret the claim “nothing is inevitable” in addition to the title of the post and in conjunction with the rest of the article (and this all is additionally informed by all the stuff I’m trying to express to you that surrounds this very paragraph).
I think that this speaks to the present problem of how “politics” is conflated to additionally refer to one’s worldview, culture, etc., in and of itself, instead of something distinct from, but not necessarily separable from, these things.
Politics ought to point toward a more comprehensive way of seeing the world, but this isn’t the case for most people today, and I suspect that many people who claim to have comprehensive convictions are only "virtue signaling".
A person with comprehensive convictions about the world and how humans ought to function in it can better delineate the differences and necessary overlap between politics and other concepts that run downstream from their beliefs. But what do people actually believe in these days that they can summarize in a sentence or two, that can objectively or authoritatively delineate an “in-group” from an “out-group”, and that informs all of their cultural, political, environmental, and economic considerations, and so on...
Online discourse is being cleaved into two sides vying for digital capital over hot air. The worst position you can take is a critical one that satisfies neither opponent.
You should keep reading all blog posts with a critical eye toward the appeals embedded within the medium. Or don’t read them at all. Or read them less than you read material that affords you a greater context than the emotional state the author was in when they wrote the post, before they go back to releasing software communiques.
There's a lot of bad stuff going on; even more dangerous is the idea that we can't do anything about it; but more dangerous still is the idea that there's no reason to even think in terms of what we "should" do, and that we just have to accept our current position and trajectory without question.
However, tech people who think AI is bad, or not inevitable, are really hard to understand. It’s almost like Bill Gates saying “we are not interested in the internet”. This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or an inability to grasp it.
So if I produce something (art, a product, a game, a book) and it’s good, and it’s useful to you, fun to you, beautiful to you, and you cannot really determine whether it’s AI, does it matter? Like, how does it matter? Is it because they “stole” all the art in the world? But somehow, if a person is “influenced” by people, ideas, and art in a less efficient way, we almost applaud that, because what else, reinvent the wheel forever?
Art is an expression of human emotion. When I hear music, I am part of those artists’ journeys and struggles. The emotion in their songs comes from their first break-up, or an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, my friends and I played a lot of StarCraft. We only played cooperatively against the AI. Until one day a friend and I decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said, "man, that was intense!".
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
Yet humans are the ones enacting an AI for art (of some kind). Is it therefore not art because, even though a human initiated the process, the machine completed it?
If you argue that, then what about kinetic sculptures, what about pendulum painting, etc? The artist sets them in motion but the rest of the actions are carried out by something nonhuman.
And even in a fully autonomous sense; who are we to define art as being artefacts of human emotion? How typically human (tribalism). What's to say that an alien species doesn't exist, somewhere...out there. If that species produces something akin to art, but they never evolved the chemical reactions that we call emotions...I suppose it's not art by your definition?
And what if that alien species is not carbon-based? Is it therefore much of a stretch to call what an eventual AGI produces art?
My definition of art is a superposition: everything and nothing is art at the same time, because art is art in the eye of the art's beholder. When I look up at the night sky, that's art, but no human emotion produced that.
It's kind of the sci-fi cliche: can you have feelings for an AI robot? And if you can, what does that mean?
If you listen to an album by your favorite band, it is highly unlikely that your feelings/emotions and interpretations correlate with what they felt. Feeling a connection to a song is just you interpreting it through the lens of your own experience, the singer isn't connecting with listeners on some spiritual level of shared experience.
I am not an AI art fan, it grosses me out, but if we are talking purely about art as a means to convey emotions around shared experiences, then the amalgamation is probably closer to your reality than a famous musician's. You could just as easily impose your feelings around a breakup or death on an AI-generated classical piano song, or a picture of a tree, or whatever.
You could argue all these things are not art because they used technology, just like AI music or images... no? Where does the spectrum of "true art" begin and end?
I strongly suspect automatic content synthesis will have a similar effect as people get their legs under them and figure out how to use it, because I strongly suspect there are even more people out there with more ideas than time.
I hear the complaints about AI being "weird" or "gross" now and I think about the complaints about Newgrounds content back in the day.
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLMs keep fundamentally failing to grasp that concept. They think that output exists in a pure functional vacuum.
LLMs and AI art flip this around because potentially very little effort went into making things that potentially take lots of effort to experience and digest. That doesn't inherently mean they're not valuable, but it does mean there's no guarantee that at least one other person out there found it valuable. Even pre-AI it wasn't an iron-clad guarantee, of course: copywriting, blogspam, and astroturfing existed long before LLMs. But everyone hates those because they prey on the same social contract that LLMs do, except at a smaller scale and with a lower effort-in:effort-out ratio.
IMO though, while AI enables malicious / selfish / otherwise anti-social behavior at an unprecedented scale, it also enables some pretty cool stuff and new creative potential. Focusing on the tech rather than those using it to harm others is barking up the wrong tree. It's looking for a technical solution to a social problem.
Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful, it's just that I don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or signing up for anything Meta-owned.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
This is an extremely crude characterisation of what many people feel. Plenty of artists oppose copyright-ignoring generative AI and "get" it perfectly, even use it in art, but in ways that avoid the lazy gold-rush mentality we're seeing now.
Just like you cannot put piracy back in the bag for movies and TV shows, you cannot put AI back into the bag it came from. Bottom line: this is happening (more like, it has happened). Now let's think about what that means and find a way forward.
A prime example is voice acting. I hear why voice actors are mad if someone can steal their voice. But why not work on a legal framework to sell your voice for royalties or whatever? I mean, if we can get that lovely voice of yours without you spending your weeks on it, and you're still compensated fairly, I don't see how this is a problem. And I know this is already happening, as it should.
In regards to why tech people should be skeptical of AI: technology exists solely to benefit humans in some way. Companies that employ technology should use it to benefit at least one human stakeholder group (employees, customers, shareholders, etc.). So far, what I have seen is that AI has reduced hiring (negatively impacting employees), created a lot of bad user interfaces (bad for customers), and cost companies way more money than they are making off of it (bad for shareholders, at least in the long run). AI is an interesting and so far mildly useful technology that is being inflated by hype and causing a lot of damage in the process. Whether it becomes revolutionary like the Internet or falls by the wayside like NFTs and 3D TVs is unknowable at this point.
This would have been an additional, more expensive, subscription tier in the past.
Anecdote: Literally this morning, krisp.ai (noise cancellation software that succumbed to slop-itis two years ago and added AI notetaker and meeting summarization stuff to their product that's really difficult to turn off, which is insulting seeing how most people purchased this tool JUST FOR NOISE CANCELLING, but I digress) sent an email to their customers (me) announcing that they would no longer offer a free tier and will, instead, offer 14-day trials with all features enabled.
Why?
"As AI has become central to everyday work, we’ve seen that most people preferred the unlimited workflow once they tried it."
This bait-and-switch era is tiring.
I totally agree with the message in the original post. Yes, AI is going to be everywhere, and it's going to create amazing value and serious challenges, but it's essential to make it optional.
This is not only for the sake of users' freedom. This is essential for companies creating products.
This is a minority report, until it is not.
AI has many modes of failure, exploitability, and unpredictability. Some are known and many are not. We have fixes for some and band-aids for some others, but many are not even known yet.
It is essential to make AI optional, to have a "dumb" alternative to everything delegated to a Gen AI.
These options should be given to users, but also, and maybe even more importantly, be baked into the product as an actively maintained and tested plan B.
The general trend of cost cutting will not be aligned with this. Many products will remove, intentionally or not, the non-AI paths, and when the AI fails (not if), they will regret this decision.
This is not a criticism of AI or of the shift in trends toward it; it's a warning for anyone who does not take seriously the fundamental unpredictability of generative AI.
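To make the "plan B" idea concrete, here is a minimal sketch in Python of one way it could look; every name in it is a hypothetical illustration, not an API from any product mentioned in this thread:

# Hypothetical sketch: an AI feature that is strictly opt-in, with a "dumb"
# deterministic fallback kept as an actively maintained, tested plan B.

def summarize_with_llm(text: str) -> str:
    # Stand-in for a generative AI call; in practice it can time out,
    # hallucinate, or be disabled entirely by the user.
    raise RuntimeError("AI backend unavailable")

def summarize_extractive(text: str, max_sentences: int = 3) -> str:
    # The non-AI path: predictable, cheap, and easy to test.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def summarize(text: str, use_ai: bool = False) -> str:
    # AI is optional, and any AI failure falls through to the plain path.
    if use_ai:
        try:
            return summarize_with_llm(text)
        except Exception:
            pass
    return summarize_extractive(text)

print(summarize("First point. Second point. Third point. Fourth point.", use_ai=True))
# Prints "First point. Second point. Third point." because the fallback answered.

The specifics don't matter; what matters is that the non-AI path is the default, is exercised by tests, and cannot be quietly removed by cost cutting.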
I disagree. It's really not. Popular AI is extremely powerful and capable of a lot of things. It's also being used for nefarious purposes at the cost of our privacy and, in many cases, livelihoods.
> So if I produce something (art, a product, a game, a book) and it’s good, and it’s useful to you, fun to you, beautiful to you, and you cannot really determine whether it’s AI, does it matter? Like, how does it matter?
We don't live in a vacuum.
Every work that someone mostly generates from a prompt is work that another person (or people) didn't get to create. This was "fine" when the scope of the automation was small, as it gave people time to re-skill or apply their skills elsewhere. It is not fine when those with capital are talking about using this for EVERY POSSIBLE skill. It is even less fine when you consider that the systems that learned how to produce these works were literally trained on stolen data!
Yes, there are plenty of jobs that are safe from today's AI. That doesn't stop the threat of what's possible, however.
I also disagree that the crop of AI art that exists today is "good." Some of what's out there is pretty novel, but a vast, vast majority of it looks extremely same-y. Same color hues, same styles (see also: the pervasive Studio Ghibli look), DEFINITELY same fonts, etc. It's also kind of low-res, so it always looks sloppy when printed on large-format media. That's before the garbled text that gets left in. Horrible look IMO.
AI-generated audio is worse. The soundstage is super compressed and the output sounds low-bandwidth. This works great for lo-fi, though (I'm sure lo-fi artists will disagree).
I'm sure all of this will get better as time goes on and more GPUs are sacrificed for better training.
Yes. A work of art should require skills that took years to hone, and innate talent. If it was produced without such, it is a fraud; I've been deceived.
But in fact I was not deceived in that sense, because the work is based on talent and skill: that of numerous unnamed, unattributed people.
It is simply low-effort plagiarism, presented as original work.
Yeah, no. It's presumptuous to say that these are the only reasons. I don't think you understand at all.
> So if I produce something (art, a product, a game, a book) and it’s good, and it’s useful to you, fun to you, beautiful to you, and you cannot really determine whether it’s AI, does it matter? Like, how does it matter?
Because to me, and many others, art is a form of communication. Artists toil because they want to communicate something to the world; people consume art because they want to be spoken to. It's a two-way street of communication. Every piece created by a human carries a message, one that's sculpted by their unique life experiences and journey.
AI-generated content may look nice on the surface, but fundamentally it says nothing at all. There is no message or intent behind a probabilistic algorithm putting pixels onto my screen.
When a person encounters AI content masquerading as human-made, it's a betrayal of expectations. There is no two-way communication, the "person" on the other side of the phone line is a spam bot. Think about how you would feel being part of a social group where the only other "people" are LLMs. Do you think that would be fulfilling or engaging after the novelty wears off?
If I look at a piece of art that was made by a human who earned money for making that art, then it means an actual real human out there was able to put food on their table.
If I look at a piece of "art" produced by a generative AI that was trained on billions of works from people in the previous paragraph, then I have wasted some electricity even further enriching a billionaire and encouraging a world where people don't have the time to make art.
Modern tech is 100% about trying to coerce you: you need to buy X, you need to be outraged by X, you must change X in your life or else fall behind.
I really don't want any of this, I'm sick of it. Even if it's inevitable I have no positive feelings about the development, and no positive feelings about anyone or any company pushing it. I don't just mean AI. I mean any of this dumb trash that is constantly being pushed on everyone.
Well, you don't have to, and no tech company can force you to.
> you must change X in your life or else fall behind
This is not forced on you by tech companies, but by the rest of society adopting that tech because they want to. Things change as technology advances. Your feeling of entitlement that you should not have to make any change that you don't want to is ridiculous.
As someone who's been in tech for more than 25 years, I started to hate tech because of all the things you've mentioned. I loved what tech meant, and I hate what it became (to the point that I got out of the industry).
But the majority of these concerns disappear if we talk about offline models and open models. Some of that has already happened, and we know more will happen; it's just a matter of time. In that world, how can any of us say "I don't want a good amount of the knowledge in the whole fucking world on my computer, without even having internet, paying someone, or seeing ads"?
I respect it if your stand is like a vegetarian saying "I'm ethically against eating animals"; I have no argument against that. It's not my ethical line, but I respect it. Beyond that point, however, what's the legitimate argument? Shall we make humanity worse just by rejecting this paradigm-shifting, world-changing thing? Do we think about the people who will be able to read any content in the world in their own language, even if it's a very obscure one that no one cares to auto-translate? What AI means for humanity is huge.
What tech companies and governments do with AI is horrific and scary. However, governments will do it nonetheless, and tech companies will be supported by those powers nonetheless. Therefore AI is not the enemy; let's aim our criticism and actions at the real enemies.
Like greed. And apathy. Those are just some of the things that have enabled billionaires and trillionaires. Is it ever gonna change? Well it hasn't for millions of years, so no. As long as we remain human we'll always be assholes to each other.
So we're just waving away the carbon cost, centralization of power, privacy fallout, fraud amplification, and the erosion of trust in information? These are enormous society-level effects (and there are many more to list).
Dismissing AI criticism as simply ignorance says more about your own.
All of those things had positive as well as negative consequences. It's not entirely unreasonable to argue against any of them, at least in part.
Running this paragraph through Gemini returns a list of the fallacies employed, including "Attacking the Motive": "Even if the artists are motivated by self-interest, this does not automatically make their arguments about AI's negative impacts factually incorrect or 'bad.'"
Just as a poor person is more aware, through direct observation and experience, of the consequences of corporate capitalism and financialisation, an artist at the coal face of the restructuring of the creative economy by massive 'IP owners' and IP pirates (i.e. the companies training on their creative work without permission) is likely far more in touch with the consequences of actually existing AI than a tech worker who is financially incentivised to view them benignly.
> The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me.
This is a strange kind of anti-naturalistic fallacy. A paradigm shift (or indeed a revolution) is not in itself a good thing. One paradigm shift that has occurred recently in geopolitics, for example, is the normalisation of state murder, i.e. extrajudicial assassination in the drone war, or the current US govt's use of missile attacks on alleged drug traffickers. One can generate countless other negative paradigm shifts.
> if I produce something (art, a product, a game, a book) and it’s good, and it’s useful to you, fun to you, beautiful to you, and you cannot really determine whether it’s AI, does it matter?
1) You haven't produced it.
2) Such a thing - a beautiful product of AI that is not identifiably artificial - does not yet, and may never, exist.
3) Scare quotes around intellectual property theft aren't an argument. We can abandon IP rights - in which case hurrah, tech companies have none - or we can in law at least, respect them. Anything else is legally and morally incoherent self justification.
4) Do you actually know anything about the history of art, any genre of it whatsoever? Because suggesting originality is impossible and 'efficiency' of production is the only form of artistic progress suggests otherwise.
> I understand artists etc. talking about AI in a negative sense, because they don’t really get it completely, or it’s against their self-interest, which means they subconsciously find bad arguments to support their own interest
> But somehow, if a person is “influenced” by people, ideas, and art in a less efficient way, we almost applaud that, because what else, reinvent the wheel forever?
I understand AI perfectly fine, thanks. I just reject the illegal vacuuming up of everyone's art for corporations while things like sampling in music remain illegal. This idea that everything must be efficient comes from the bowels of Silicon Valley and should die.
> However, tech people who think AI is bad, or not inevitable, are really hard to understand. It’s almost like Bill Gates saying “we are not interested in the internet”. This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or an inability to grasp it.
Again, the problem is less the tech itself than the corporations who control it. Yes, I'm against corporations gobbling up everyone's data for ads and AI surveillance. I think you might be the one who doesn't understand that not everything is roses, and there might be more weeds in the garden than flowers.
When LLMs first took off, people were talking about how governments would control them, but anyone who knows the history of personal computing and hacker culture knew that's not the way things go in this world.
Do I enjoy corpos making money off of anyone's work, including obvious things like literally pirating books to train their models (Meta)? Absolutely not. However, you are blaming the wrong thing here; it's not the technology's fault, it's how governments are always corrupted and side with money instead of their people. We should be lashing out at them, not at each other, not at the people who use AI, and certainly not at the people who innovate and build it.
Actual AI? Sure. The LLM slop we currently refer to as AI? lol, lmao even
I'm massively burnt out, what can I say? I can grasp new tech perfectly fine, but I don't want to. I quite honestly can't muster enough energy to care about "revolutionary" things anymore.
If anything I resent having to deal with yet more "revolutionary" bullshit.
And not to be too dismissive of copywriters, but old Buzzfeed style listicles are content as well. Stuff that people get paid pennies per word for, stuff that a huge amount of people will bid on on a gig job site like Fiverr or what have you is content, stuff that people churn out by rote is content.
Creative writing on the other hand is not content. I won't call my shitposting on HN art, but it's not content either because I put (some) thought into it and am typing it out with my real hands. And I don't have someone telling me what I should write. Or paying me for it, for that matter.
Meanwhile, AI doesn't do anything on its own. It can be made to simulate doing stuff on its own (by running continuously / unlimited, or by feeding it a regular stream of prompts), but it won't suddenly go "I'm going to shitpost on HN today" unless told to.
To me, it matters because most serious art requires time and effort to study, ponder, and analyze.
The more stuff that exists in the world that superficially looks like art but is actually meaningless slop, the more likely it is that your time and effort is wasted on such empty nonsense.
To conflate LLMs with a printing press or the internet is dishonest; yes, it's a tool, but one which degrades society in its use.
...Okay, now maybe take that and dial it back a few notches of hyperbole, and you'll have a reasonable explanation for why people have issues with AI as it currently exists. People are not wrong to recognize that, just because some people use AI for benign reasons, the people and companies that have formed a cartel around the tech mainly see those benign reasons as incidental to becoming middlemen in every single business and personal computing task.
Of course, there is certainly a potential future where this is not the case, and AI is truly a prosocial, democratizing technology. But we're not there, and we will have a hard time getting there with Zuckerberg, Altman, Nadella, and Musk at the helm.
I know what it means to be that good.
Sure, most people couldn’t care less, and they’re happy with something that’s simply pleasant to look at.
But for those people, it wouldn’t matter even if it weren’t AI-generated. So what is the point?
You created something without having to get a human to do it. Yaay?
Except we already have more content than we know what to do with, so what exactly are we gaining here? Efficiency?
Generative AI was fed on the free work and joy of millions, only to mechanically regurgitate content without attribution, treating creators as middlemen in the process.
Yaay, efficient art. This is really what is missing in a world with more content than we have time to consume.
The point of markets, of progress, is the improvement of the human condition. That is the whole point of every regulation, every contract, and every innovation.
I am personally not invested in a world that is worse for humanity.
Art can be about many things; we have a lot of tech-oriented art (think of the demoscene). No one gives a shit about art that evokes nothing for them, so if AI art evokes nothing, who cares? And if it does evoke something, is it suddenly bad because it's AI? How?
Actually, I think AI will force a good number of mediums to their logical conclusion: if what you do is mediocre and not original, and AI can do the same or better, then that's on you. Once you pass that threshold, that's when the world cherishes you as a recognized artist. Again, you can be an artist even if 99.9% of the world thinks what you produce is absolute garbage; that doesn't change what you do and what it means to you. Again, nothing to do with AI.