They know that LLMs as a product are racing towards commoditization. Bye bye profit margins. The only way to win is regulation allowing a few approved providers.
They are more likely trying to race towards wildly overinflated government contracts because they aren't going to profit how they're currently operating without some of that funny money.
It is unclear. Everyday I seem to read contradictory headlines about whether or not inference is profitable.
If inference has significant profitability and you're the only game in town, you could do really well.
But without regulation, as a commodity, the margin on inference approaches zero.
None of this even speaks to recouping the R&D costs it takes to stay competitive. If they're not able to pull up the ladder, these frontier model companies could have a really bad time.
Yeah, but we can self-host them. At this point in the span of it, it's more about infrastructure and compute power to meet demand and Google won because it has many business models, massive cashflow, TPUs, and the infrastructure to build expanding on their current, which would take new companies ~25 years to map out compute, data centers and have a viable, tangible infrastructure all while trying to figure out profits.
I'm not sure about how the regulation of things would work, but prompt injections and whatever other attacks we haven't seen yet where agents can be hijacked and made to do things sounds pretty scary.
It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO
Who is "we", and what are the actual capabilities of the self-hosted models? Do they do the things that people want/are willing to pay money for? Can they integrate with my documents in O365/Google Drive or my calendar/email in hosted platforms? Can most users without a CS degree and a decade of Linux experience actually get them installed or interact with them? Are they integratable with the tools they use?
Statistically close to "everyone" cannot run great models locally. GPUs are expensive and niche, especially with large amounts of VRAM.
>It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO
However it is arguable that thought is relatable with conscienceness. I’m aware non-linguistic thought exists and is vital to any definition of conscienceness, but LLMs technically dont think in words, they think in tokens, so I could imagine this getting closer.
The bottleneck for commoditization is hardware. The manufacture of the hardware required is led by tmsc and samsung being a close second. The tooling required for manufacture is centralized with ASML and several other smaller players like Zeiss and the design of the product centers around nvidia though there are players like AMD who are attempting to catch up.
It is a complex supply chain but each section of the chain is held by only a few companies. Hopefully this is enough competition to accelerate the development of computational technologies that can run and train these LLMs at home. I give it a decade or more.
Another way to win is through exclusive access to high quality training data. Training data quality and quantity represent an upper bound on LLM performance. That's why the frontier model developers are investing some of their "war chests" in purchasing exclusive rights to data locked up behind corporate firewalls, and even hiring human subject matter experts in order to create custom proprietary training data in certain strategic domains.
That's a good line but it only works if market forces don't commoditize you first. Blithely saying "commoditize your complement" is a bit like saying "draw the rest of the owl."
Someone once told me about being a new journalist, on the city beat. They said something like: I wasn't surprised to find that bribery was going on; I was just surprised the bribes were so small.
Someone always makes this kind of comment and I've always found it being pretty middle-brow. I think there's a classic comment that the USA spends more on potato chips than on lobbying.
I think the best explanation is that there's a pushing a rope phenominum where there's still a limit on the amount of money the American political corruption system can absorb.
We have long had robust laws that prevent people outright paying for political or regulatory outcomes (which used to be much stronger under honest services). We also had robust laws limiting the amount of campaign contributions.
As that system has been torn down, we've seen the amount of money flowing in increase.
The thing is, lobbying isn't literally a bribe. Lobbying involves all sorts of expensive activities, like compiling reports, doing research, etc in addition to the more sketchy semi-bribe stuff.
All that adds up. How many FTE lobbyists is a few million dollars? It just doesn't seem like all that much if they are trying to do a hard core lobbying campaign.
> there's still a limit on the amount of money the American political corruption system can absorb.
Wikipedia tells me [1] to win a seat in the House of Representatives (one of 435) costs on average $2.79 million, while a seat the Senate (one of 100) costs $26.53 million.
So the political system can absorb $3.8 billion in 'donations' from 'supporters'. That might sound small compared to nvidia's $4.3 trillion market cap, but to a lot of folks $3.8 billion is serious money.
And that's just spending by the winners - often there will be a loser who spent almost as much.
That calls to mind this NYT article about the small time grifts that Eric Adams and associates were up to, including a small acting role in Godfather of Harlem in exchange for canceling a planned bike lane.
At that point nobody will care though. People pushing for regulation (not uniquely) want power- those that can write the regulation will be in a position to exert a lot of power over a lot of people/companies, making it an attractive thing to push for.
They need to be more worried about creating a viable economic model for the present AI craze. Right now there’s no clear path to making any of the present insanity a profitable endeavor. Yes NVIDIA is killing it, but with money pumped in from highly upside down sources.
Things will regulate themselves pretty quickly when the financial music stops.
Do you mean that they need to find better ways to create value by using AI, or that they need better ways to extract value from end-users of AI?
I'd argue that "value creation" is already at a decent position considering generative AI and the usecase as "interactive search engine" alone.
Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.
Preventing smaller entities (or private persons even) from just doing their own thing and making their own models seems like the biggest difficulty long term to me (from the perspective of the "rent seeking" tech giant).
> I'd argue that "value creation" is already at a decent position considering generative AI and the usecase as "interactive search engine" alone.
> Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.
Not at the actual price it's going to cost though. The cost of an "interactive search" (LLM) vs a "traditional search" (Google) is exponentially higher. People tolerate ads to pay Google for the service, but imagine how many ads would ChatGPT need, or how much it will have to cost, to compensate an e.g. 10x difference. Last time I read about this a few months ago, ChatGPT were losing money on their paid tier because the people paying for it were using it a lot.
It's more likely that ChatGPT will just be spamming ads sprinkled in the responses (like you ask for a headphone comparison, and it gives you the sponsored brand one, from a sponsored vendor, with an affiliate link), and hope it's enough.
I don’t disagree that then AI is of “value.” The issue at the moment is the whole thing is being kept alive by hype and circular financing.
There’s not anywhere near enough money entering in from outside (ie consumers and businesses buying this stuff) to remotely support the amount of money being spent. Not even close. Not even “we just need to scale more.” It’s presently one big spectacular burning pile of cash with no obvious way forward other than throwing more cash on the burning pile.
This is crazy for me with how inaccurate Google’s AI summaries are. They’ve basically just added a chunk of lies to the top of every search page that I have to scroll past.
The music is just getting started. The way it is going, AI will be inevitable. Companies are CONVINCED it’s adopt AI or die, whether it is effective or not.
The race is to be the first to make a self-improving model (and have the infrastructure it will demand).
This is a winner-takes-all game, that stands a real chance of being the last winner-takes-all game humans will ever play. Given that, the only two choices are either throw everything you can at becoming the winner, or to sit out and hope no one wins.
The labs know that substantial losses will be had, they aren't investing in this to get a return, they are investing in it to be the winner. The losers will all be financially obliterated (and whoever sat out will be irrelevant).
I doubt they are sweating to hard though, because it seems overwhelmingly likely that most people would pay >$75/mo for LLM inference monthly (similar to cell phone costs), and at that rate without going hard on training, the models are absolute money printers.
There is zero evidence that the current approach will ever lead to a self-improving model, or that current GPU/TPU infrastructure is even capable of running self-improving models.
I believe that the right regulation makes a difference, but I honestly don't know what that looks like for AI. LLMs are so easy to build/use and that trend is accelerating. The idea of regulating AI is quickly becoming like the idea of regulating hammers. They are ubiquitous general purpose tools and putting legislation specifically about hammers would be deeply problematic for, hopefully, obvious reasons. Honest question here, what is practical AND effective here? Specifically, what problems can clearly be solved and by what kinds of regulations?
The most sane version of regulation IMO is the (already passed) EU AI Act. It's less about control of AI itself, more about controlling inputs/outputs. Tell users when they're interacting with an AI, mark/disclaimer AI-generated content, don't use AI in high-risk scenarios, etc. Along the lines of "we don't regulate hammers, but we regulate you hitting people with a hammer".
I haven't read that regulation, but the way you describe it makes me immediately think of cookie banners. Everything is pretty quickly getting 'ai' in it. If the definition just narrows down to LLMs, even there we have big questions. Does speech recognition count? Whisper uses cross attn and transformer blocks to generate text. You could easily call it an LLM but I doubt anyone would use it that way. What about services that use LLMs in their back-end to monitor logs for problems. Does that count? Again, I am actually for regulations but I just don't know where to start. My best, very early and likely deeply flawed, thought is that we create enhanced punishments for crimes when an LLM is used. So a company that illegally harvests your data and processes it with an LLM would get bigger fines and penalties because LLMs were involved. That kind of thing. The idea here is that bigger tools get bigger punishments. Again, not well thought out but there may be something here.
Why should a user care whether the entity they're interacting with meets some arbitrary political definition of "AI"? Does it matter whether an article that I'm reading was written by AI or by a monkey randomly banging on a keyboard? Regulations seem totally pointless, just another excuse to shovel taxpayer money to a bunch of bureaucrats with fancy degrees who are incapable of finding real jobs.
The problem is that the conversation gets sidetracked on ASGI while ignoring the very real actual harms that are already currently happening - like AI "companions" addicting and harming kids, deepfakes, IP-theft, etc.
I also worry about malicious AI used to socially engineer and attack people at scale. We have an aging population that is already getting constantly getting scammed by really amateur attacks, what happens when the AI can perfectly emulate your grandkid's voice?
I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
What we should be doing is surfacing well defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending like people's opinions on this topic are relevant, it's just a matter of pumping enough money and flooding the zone.
Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic. Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a magical communist utopia, but nobody has ever pressed them for details. If they really believe this then they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.
> I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
This is being screamed from the rooftops by nearly the entire creative community of artists, photographers, writers, and other people who do creative work as a job, or even for fun.
The difference between the 99% of individual creatives and the 1% is that the 1% has entire portfolios of IP - IP that they might not have even created themselves - as well as an army of lawyers to protect that IP.
> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
Artists are not primarily in the 1% though, it's not only patents that are IP theft.
"Stop the laundering of responsibility/liability" - the risk that you can run someone over with a software controlled car and it's not a crime "because AI" whereas a human doing the same thing would be in jail. Image detection leading to false arrests, etc. It's harder to sue because the immediate party can say "it wasn't us, we bought this software product and it did the bad thing!"
I strongly feel that regulation needs to curb this, even if it leads to product managers going to jail for what their black box did.
> This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
They already do this[1]. Why should there be an exception carved out for AI type jobs?
------------------------------
[1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
So politicians are supposed to create "non bullshit" jobs out of thin air?
The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice sounding words?
It's less about who is right and more about economic interests and lobbying power. There's a vocal minority that is just dead set against AI using all sorts of arguments related to religion, morality, fears about mass unemployment, all sorts of doom scenarios, etc. However, this is a minority with not a lot of lobbying power ultimately. And the louder they are and the less of this stuff actually materializes the easier it becomes to dismiss a lot of the arguments. Despite the loudness of the debate, the consensus is nowhere near as broad on this as it may seem to some.
And the quality of the debate remains very low as well. Most people barely understand the issues. And that includes many journalists that are still getting hung up on the whole "hallucinations can be funny" thing mostly. There are a lot of confused people spouting nonsense on this topic.
There are special interest groups with lobbying powers. Media companies with intellectual properties, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public that isn't that well informed and has sort of caught on to the notion that chat gpt is now definitely a thing that is sometimes mildly useful.
And there are the AI companies that are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending so they are getting relatively little push back from politicians. Political Washington and California run on obscene amounts of lobbying money. And the AI companies can provide a lot of that.
>There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
There's a ton other points intersecting with regulation. Either directly related by AI, or made significantly more relevant by it.
Just from the top of my head:
- information processing: Is there private data AI should never be able to learn from? We restrict collection but it might be unclear whether model training counts as storage.
- related to the former, what kind of dystopian practices should we ban? AI can probably create much deeper profiles inferring information from users than our already worrrying tech, even without storing sensitive data. If it can use conversations to deduce I'm in risk of a shorter lifespan, can the owners communicate that data to insurance companies?
- healthcare/social damage: what is the long term effects of people having an always available yes men, a substitution for social interaction, a cheating tool, etc? should some people be kept from access? (minors, mentally ill, whatever). Should access, on the other hand, become a basic right if it realistically makes a lef-behind person unable to compete with others who have it?
- National security. Is a country's economy being reliant in a service offered somewhere else? Worse even, is this fact draining skills from the population that might not able to be easily recovered when needed?
- energy/resources impact: Are we ready to have an enormous increase in usage of energy and/or certain goods? should we limit usage until we can meet the demand without struggle?
- consumer protections: Many companies just offer 'flat' usage, freely being able to change the model behind the scenes for a worse one when needed or even adapt user limits on their server load. Which of these are fair business practices?
- economy risks: What is the maximum risk we can take of the economy being made dependent to services that aren't yet profitable? Is there any steps that need to be taken to keep us from the potential bust if costs can't be kept up with?
- monopoly risks: we could end up with a single company being able to offer literally any intellectual work as a service. Whoever gets this tech might become the most powerful entity in the world. Should we address this impact through regulation before such an entity rises and becomes impossible to tame?
- enabling crime: can an army of AI hackers disrupt entire countries? how is this handled?
- impact on job creation: If AIs can practically DDOS job offer forms, how is this handled to keep access fair? Same for a million other places that are subjected to AI spam.
your point "It's on politicians to help people adapt to a new economic reality" brings a few:
- Should we tax AI using companies? if they produce the same employing fewer people, tax extraction suffers and the non-taxed money does not make it back to the people. How do we compensate? And how do we remake
- How should we handle entire professions being put to pasture at once? Lost employment is a general problem if it's a large enough amount of people.
- how should the push of intellectual work be rethought if it becomes extremely cheap relative to manual work? is the way we train our population in need of change?
You might have strong opinions on most of these issues, but there is clearly A LOT of important debates that aren't being addressed.
> "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they’ll outcompete you anyway. So even with regulations the jobs aren't actually saved.
The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
Musk wants extreme law and order and will beat down any protests. His X account is full of posts that want to fill up prisons. This is the highlight so far:
Notice that the retweeted Will Tanner post also denigrates EBT. Musk does not give a damn about UBI. The unemployed will do slave labor, go to prison, or, if they revolt, they will be hanged. It is literally all out there by now.
Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care, health care.
Doesn't quite align with UBI, unless he envisions the AI companies directly giving the UBI to people (when did that ever happen?)
Like every other self-serving rich “Libertarian,” they want a small government when it stands to get in their way, and a large one when they want their lifestyle subsidized by government contracts.
> Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care, health care.
This would be a 19th century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.
Algorithmic Accountability. Not just for AI, but also social media, advertising, voting systems, etc. Algorithm Impact Assessments need to become mandatory.
All major cloud providers have high profit margins in the range of 30-40%.
How much does it cost to train a cutting edge LLM? Those costs need to be factored into the margin from inferencing.
Buying hard drives and slotting them in also has capex associated with it, but far less in total, I'd guess.
If inference has significant profitability and you're the only game in town, you could do really well.
But without regulation, as a commodity, the margin on inference approaches zero.
None of this even speaks to recouping the R&D costs it takes to stay competitive. If they're not able to pull up the ladder, these frontier model companies could have a really bad time.
I'm not sure about how the regulation of things would work, but prompt injections and whatever other attacks we haven't seen yet where agents can be hijacked and made to do things sounds pretty scary.
It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO
Who is "we", and what are the actual capabilities of the self-hosted models? Do they do the things that people want/are willing to pay money for? Can they integrate with my documents in O365/Google Drive or my calendar/email in hosted platforms? Can most users without a CS degree and a decade of Linux experience actually get them installed or interact with them? Are they integratable with the tools they use?
Statistically close to "everyone" cannot run great models locally. GPUs are expensive and niche, especially with large amounts of VRAM.
However it is arguable that thought is relatable with conscienceness. I’m aware non-linguistic thought exists and is vital to any definition of conscienceness, but LLMs technically dont think in words, they think in tokens, so I could imagine this getting closer.
It is a complex supply chain but each section of the chain is held by only a few companies. Hopefully this is enough competition to accelerate the development of computational technologies that can run and train these LLMs at home. I give it a decade or more.
Dead Comment
Someone once told me about being a new journalist, on the city beat. They said something like: I wasn't surprised to find that bribery was going on; I was just surprised the bribes were so small.
I think the best explanation is that there's a pushing a rope phenominum where there's still a limit on the amount of money the American political corruption system can absorb.
We have long had robust laws that prevent people outright paying for political or regulatory outcomes (which used to be much stronger under honest services). We also had robust laws limiting the amount of campaign contributions.
As that system has been torn down, we've seen the amount of money flowing in increase.
All that adds up. How many FTE lobbyists is a few million dollars? It just doesn't seem like all that much if they are trying to do a hard core lobbying campaign.
Wikipedia tells me [1] to win a seat in the House of Representatives (one of 435) costs on average $2.79 million, while a seat the Senate (one of 100) costs $26.53 million.
So the political system can absorb $3.8 billion in 'donations' from 'supporters'. That might sound small compared to nvidia's $4.3 trillion market cap, but to a lot of folks $3.8 billion is serious money.
And that's just spending by the winners - often there will be a loser who spent almost as much.
[1] https://en.wikipedia.org/wiki/Campaign_finance_in_the_United...
https://www.nytimes.com/2025/08/22/nyregion/new-york-city-co...
Things will regulate themselves pretty quickly when the financial music stops.
I'd argue that "value creation" is already at a decent position considering generative AI and the usecase as "interactive search engine" alone.
Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.
Preventing smaller entities (or private persons even) from just doing their own thing and making their own models seems like the biggest difficulty long term to me (from the perspective of the "rent seeking" tech giant).
> Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.
Not at the actual price it's going to cost though. The cost of an "interactive search" (LLM) vs a "traditional search" (Google) is exponentially higher. People tolerate ads to pay Google for the service, but imagine how many ads would ChatGPT need, or how much it will have to cost, to compensate an e.g. 10x difference. Last time I read about this a few months ago, ChatGPT were losing money on their paid tier because the people paying for it were using it a lot.
It's more likely that ChatGPT will just be spamming ads sprinkled in the responses (like you ask for a headphone comparison, and it gives you the sponsored brand one, from a sponsored vendor, with an affiliate link), and hope it's enough.
There’s not anywhere near enough money entering in from outside (ie consumers and businesses buying this stuff) to remotely support the amount of money being spent. Not even close. Not even “we just need to scale more.” It’s presently one big spectacular burning pile of cash with no obvious way forward other than throwing more cash on the burning pile.
All they need to do is start adding in sponsored results (and the ability to purchase keywords), and AI becomes insanely profitable.
Deleted Comment
This is a winner-takes-all game, that stands a real chance of being the last winner-takes-all game humans will ever play. Given that, the only two choices are either throw everything you can at becoming the winner, or to sit out and hope no one wins.
The labs know that substantial losses will be had, they aren't investing in this to get a return, they are investing in it to be the winner. The losers will all be financially obliterated (and whoever sat out will be irrelevant).
I doubt they are sweating to hard though, because it seems overwhelmingly likely that most people would pay >$75/mo for LLM inference monthly (similar to cell phone costs), and at that rate without going hard on training, the models are absolute money printers.
Dead Comment
https://artificialintelligenceact.eu/
I also worry about malicious AI used to socially engineer and attack people at scale. We have an aging population that is already getting constantly getting scammed by really amateur attacks, what happens when the AI can perfectly emulate your grandkid's voice?
Dead Comment
I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
What we should be doing is surfacing well defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending like people's opinions on this topic are relevant, it's just a matter of pumping enough money and flooding the zone.
Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic. Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a magical communist utopia, but nobody has ever pressed them for details. If they really believe this then they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.
There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
This is being screamed from the rooftops by nearly the entire creative community of artists, photographers, writers, and other people who do creative work as a job, or even for fun.
The difference between the 99% of individual creatives and the 1% is that the 1% has entire portfolios of IP - IP that they might not have even created themselves - as well as an army of lawyers to protect that IP.
Artists are not primarily in the 1% though, it's not only patents that are IP theft.
I strongly feel that regulation needs to curb this, even if it leads to product managers going to jail for what their black box did.
They already do this[1]. Why should there be an exception carved out for AI type jobs?
------------------------------
[1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.
So politicians are supposed to create "non bullshit" jobs out of thin air?
The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice sounding words?
And the quality of the debate remains very low as well. Most people barely understand the issues. And that includes many journalists that are still getting hung up on the whole "hallucinations can be funny" thing mostly. There are a lot of confused people spouting nonsense on this topic.
There are special interest groups with lobbying powers. Media companies with intellectual properties, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public that isn't that well informed and has sort of caught on to the notion that chat gpt is now definitely a thing that is sometimes mildly useful.
And there are the AI companies that are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending so they are getting relatively little push back from politicians. Political Washington and California run on obscene amounts of lobbying money. And the AI companies can provide a lot of that.
There's a ton other points intersecting with regulation. Either directly related by AI, or made significantly more relevant by it.
Just from the top of my head:
- information processing: Is there private data AI should never be able to learn from? We restrict collection but it might be unclear whether model training counts as storage.
- related to the former, what kind of dystopian practices should we ban? AI can probably create much deeper profiles inferring information from users than our already worrrying tech, even without storing sensitive data. If it can use conversations to deduce I'm in risk of a shorter lifespan, can the owners communicate that data to insurance companies?
- healthcare/social damage: what is the long term effects of people having an always available yes men, a substitution for social interaction, a cheating tool, etc? should some people be kept from access? (minors, mentally ill, whatever). Should access, on the other hand, become a basic right if it realistically makes a lef-behind person unable to compete with others who have it?
- National security. Is a country's economy being reliant in a service offered somewhere else? Worse even, is this fact draining skills from the population that might not able to be easily recovered when needed?
- energy/resources impact: Are we ready to have an enormous increase in usage of energy and/or certain goods? should we limit usage until we can meet the demand without struggle?
- consumer protections: Many companies just offer 'flat' usage, freely being able to change the model behind the scenes for a worse one when needed or even adapt user limits on their server load. Which of these are fair business practices?
- economy risks: What is the maximum risk we can take of the economy being made dependent to services that aren't yet profitable? Is there any steps that need to be taken to keep us from the potential bust if costs can't be kept up with?
- monopoly risks: we could end up with a single company being able to offer literally any intellectual work as a service. Whoever gets this tech might become the most powerful entity in the world. Should we address this impact through regulation before such an entity rises and becomes impossible to tame?
- enabling crime: can an army of AI hackers disrupt entire countries? how is this handled?
- impact on job creation: If AIs can practically DDOS job offer forms, how is this handled to keep access fair? Same for a million other places that are subjected to AI spam.
your point "It's on politicians to help people adapt to a new economic reality" brings a few:
- Should we tax AI using companies? if they produce the same employing fewer people, tax extraction suffers and the non-taxed money does not make it back to the people. How do we compensate? And how do we remake - How should we handle entire professions being put to pasture at once? Lost employment is a general problem if it's a large enough amount of people. - how should the push of intellectual work be rethought if it becomes extremely cheap relative to manual work? is the way we train our population in need of change?
You might have strong opinions on most of these issues, but there is clearly A LOT of important debates that aren't being addressed.
This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they’ll outcompete you anyway. So even with regulations the jobs aren't actually saved.
The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
https://xcancel.com/elonmusk/status/1992599328897294496#m
Notice that the retweeted Will Tanner post also denigrates EBT. Musk does not give a damn about UBI. The unemployed will do slave labor, go to prison, or, if they revolt, they will be hanged. It is literally all out there by now.
Doesn't quite align with UBI, unless he envisions the AI companies directly giving the UBI to people (when did that ever happen?)
This would be a 19th century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.
Dead Comment