The difference is that crypto as a sector is a dead end, while there will absolutely be multiple trillion+ dollar "AI" companies at some point in the future. It may not happen anytime soon, and it may not be any of the companies in existence today, but the overall bet (and its associated hype) is a valid - even necessary - one.
In that sense the current state of AI is less like crypto and more like the dotcom bubble of the early 00s. No one really understands the underlying tech well enough, but everyone wants to be involved. There are companies surging in valuation just by adding AI to their name. While all of this will correct itself, the underlying tech (the web in 2000, AI today) will eventually prove to be world changing.
The marketers from crypto have transitioned to AI... With their outlandish hype and embellishment of capability, they are flooding the internet with SEO spam about AI, while it really isn't ready for prime time. This is the new reality: it will cause burnout and backlash against innovation, and it bilks investors and causes rampant, harmful overvaluation and inflation bubbles in IT that ruin trust in tech.
Now replace “crypto” with “social media” and “AI” with “crypto”, then read your comment again and tell me how this is not what a crypto fanboy would write two years ago?
The difference is that crypto (blockchain) didn't solve any problems that a majority, or even significant part, of people actually have.
It allows tracking ownership of digital assets in a distributed way, but it does so in an overly complicated way, intermixed with minting new coins, which only makes sense for the actual Bitcoin-type currencies.
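To make the "tracking ownership in a distributed way" point concrete, here is a toy sketch of hash-linked ownership records (asset names and structure are invented for illustration, not any real chain): tamper-evidence without a central ledger, at the cost of extra machinery that a centralized issuer simply doesn't need.

```python
import hashlib
import json

def record_hash(record):
    """Deterministic hash of a transfer record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def transfer(chain, asset, new_owner):
    """Append a transfer, linking it to the previous record's hash."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"asset": asset, "owner": new_owner, "prev": prev})
    return chain

def verify(chain):
    """Check every link; editing any record breaks all hashes after it."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != record_hash(chain[i - 1]):
            return False
    return True

chain = []
transfer(chain, "sword#42", "alice")
transfer(chain, "sword#42", "bob")
print(verify(chain))           # True: history is intact
chain[0]["owner"] = "mallory"  # tamper with an old record
print(verify(chain))           # False: the chain detects it
```

A game company that controls its own database gets the same ownership tracking with none of this: the hash chaining only buys you anything when no single party is trusted.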
Any company that creates digital assets, say games, wants to be in complete control. Being distributed isn't a benefit.
Moving around money anonymously has been largely used for crime. Sure, there are other users, but not enough.
And the speculation and manipulation of blockchain currency rates has made them so volatile that they are not really usable as currency.
AI (or whatever we end up calling it; machine learning is a better term than AI) can actually solve problems, or ease everyday tasks, for everybody. It's not a matter of who'll use it, it's what it'll be used for the most.
The difference is that social media as a sector is a dead end, while there will absolutely be multiple trillion+ dollar "crypto" companies at some point in the future. It may not happen anytime soon, and it may not be any of the companies in existence today, but the overall bet (and its associated hype) is a valid - even necessary - one.
In that sense, the current state of crypto is less like social media and more like the dotcom bubble of the early 00s. No one really understands the underlying tech well enough, but everyone wants to be involved. There are companies surging in valuation just by adding "crypto" to their name. While all of this will correct itself, the underlying tech (the web in 2000, crypto today) will eventually prove to be world changing.
It's important to note that this is a hypothetical scenario where we're replacing terms to compare different industries. The statements above are not a reflection of actual market conditions or predictions about the future success of specific sectors.
People need to remember what year Bitcoin first became popular and what year it is today. Comparing crypto of today to AI of today is unfair. You need to compare crypto when it first came out:
- “digital cash”, “buy anything with minimal fees”
- “banking for the unbanked”
- “digital gold”, “not subject to inflation”
- “trade cash for crypto with anyone on the street”
- “no merchant fees”
All of these statements were true when Bitcoin first came out, and all of them provided value to some, albeit not everyone. Most of these statements are not true today. 6-7 years later, we can barely buy anything with crypto; the fees are most definitely not minimal, and the “merchant fees” pale in comparison; the market is fragmented beyond imagination. Given taxation and KYC requirements, crypto today is anything but “banking for the unbanked” or “digital cash”, and you most definitely can’t trade it with random strangers on the street unless you want to get yourself arrested. Due to its volatility and correlation with the market, crypto also does not appear to be a safe haven today, despite what theory would have us believe.
All that to say, there was great hope for crypto when it first came out, but over time its utility diminished. I can draw some parallels with the AI of today, but whether the future will turn out the same, we can only guess.
Other criticisms aside, fees are <10 cents for many transactions on Eth rollups right now, and will drop another 10x in the next six months if not more. (Only true since around the last month.) USDC is nice for international money transfers, though still niche. I don’t think anyone cares if you trade crypto on the streets, just don’t get rubber hosed.
The space has suffered a lot from adverse selection towards grifters. UST alone captured many of the earnest attempts towards payments and took them down with it.
I can be sympathetic to many folks who have publicly dismissed LLMs as vapid hype in recent months -- tech has claimed many fundamental breakthroughs before; it was reasonable to expect this would be another illusory one too -- but come on, even science fiction authors can't tell the difference between GPT-4 and autocomplete? It just feels willfully unimaginative of them, and perhaps motivated by insecurity.
Use some of the creativity we've all seen you exhibit to ask GPT-4 questions that require complex reasoning inside a world model, rather than regurgitating its training set. It's not going to get it right all of the time, but the fact that it can get it right any of the time is astonishing, and it's going to get quickly better, perhaps even purely through compute spend increase rather than algorithmic breakthrough.
Maybe it is important to ask whether that creativity is truly unique, or whether in those cases it is already encoded in the training data. Humans in general might not actually be that creative. Or variations are something that could be broken.
You should try web searching for your question before asking it. If it shows up, don't ask it. If it doesn't show up, and the complaint is that it's possible to solve the question by studying past answers to different questions and applying them to this new one, then that sounds like reasoning to me.
Meanwhile, one of the two has an efficient, greener method of achieving consensus: one blockchain (Ethereum) switched from wasteful proof-of-work to greener proof-of-stake, making that possible.
On the other hand, after years of deep learning there are still no viable, efficient methods for training, fine-tuning, or inference that don't require ever more data centers as the data and models scale, and the hype is currently contributing to increased waste of water and resources, all for the sake of incinerating the planet to produce broken, hallucinating black-box AI models.
Being ‘useful’ isn’t an excuse for not finding greener, more efficient alternatives that significantly lower emissions, rather than continuing to burn the planet over the new ‘hype’.
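The consensus contrast above can be sketched in miniature (difficulty, stake values, and names are illustrative, not Ethereum's actual protocol): proof-of-work pays for agreement with a brute-force hashing race, while proof-of-stake replaces the race with a single weighted draw.

```python
import hashlib
import random

def proof_of_work(block, difficulty=2):
    """Brute-force a nonce until the hash starts with `difficulty` zero
    hex digits; every failed attempt is computation (energy) spent."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # attempts so far ~ work burned
        nonce += 1

def proof_of_stake(stakes, seed):
    """Pick a validator with probability proportional to stake:
    one weighted random draw, no hashing race at all."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

nonce = proof_of_work("block-1")
print(nonce >= 0)  # the nonce count grows ~16x per extra difficulty digit
print(proof_of_stake({"alice": 32, "bob": 64}, seed=7) in {"alice", "bob"})
```

The asymmetry is the point: PoW's cost scales exponentially with difficulty by design, whereas the PoS draw costs the same no matter how much stake is at play.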
I agree with your point on AI - the hallucinating black box isn’t going to change the world in any meaningful, fundamental way. But positioning crypto as the antithesis of wasteful, useless tech (esp. Ethereum) is truly laughable, as no one uses it for anything either.
Right. The blockchain at least has semi-novel ideas around hashing and distributed trust, AI is just spitting out the same words we've seen for years!
I'm not a crypto or AI advocate, but it's easy to see how "technology with overestimated social consequences" could get tiring for everyone who isn't an investor. The "fundamental difference" between cryptocurrency and AI is that one is built on reproducible and accountable logging and the other is guesswork. Trying to champion one or the other feels like pointless virtue signalling really.
I also have to admit I really don't like Cory Doctorow's other works (I especially despise the reductive theory of "enshittification") but his thesis is 100% the truth. People need new buzzwords to cling to, and AI is the one with the most popular demo. The usefulness is an accessory to the marketing and positioning of AI-powered products.
> Trying to champion one or the other feels like pointless virtue signalling really.
No, I mean something very specific - I can do things at work with LLMs that I couldn't do a year ago without them. I can't say the same about anything crypto related.
If you think that AI is just spitting out words we've seen, or that it's guesswork, I don't know how you can even discuss it in good faith.
To most people the main use of crypto is to make money go up. This is a dubious use case.
To most people the main use of AI is:
- Code suggestions
- Code analysis
- Document summarization
- Document creation
- Clipart generation
- Audio transcription
- Audio synthesis
- Realistic chat bots
- ...
These are not dubious use cases.
The people who have been defrauded by crypto number in the millions. The number of people who have been defrauded by AI is zero.
Even if AI is wrong 90% of the time, 10% of the time it makes someone's job easier. Making someone's job easier is objectively more valuable than making someone's bank account number go up. One adds wealth to society. The other redistributes existing wealth.
I can go on about the differences but I'll let you tell me about how Ethereum has a really good logging system first.
I do think “AI” is a hype-bubble, but machine learning, and specifically the sort of LLMs we’re seeing now, are definitely not a hype-bubble.
We don’t really need general AI, nor are we going to be able to make a true flawless general AI anytime soon. AI was always a really vague marketing term, and can be applied to anything for instant relevancy boost. Bubble territory.
The LLMs we have now are extremely effective though. There’s real use case here for automating writing, and some menial tasks that human beings were unfairly burdened with, but were impossible to automate till now.
That’ll stick around. (Considering that ML concepts have been around for decades, you could even say they already have!)
No, I know, the distinction I’m making is that “AI on everything” marketing is going to be the hype bubble.
There are a lot of knee-jerk products out there from businesses that were scared of being left behind on the wave of AI, and that don’t actually add much. “AI powered” could mean an in-house-trained deep learning network. Or an API call to ChatGPT. Or a single function someone made by tweaking sliders in the Desmos calculator till the curve had the right vibe. It’s the slapping of “AI” over “black box magic process” that’s going to implode.
But there’s obviously massive work being done in ML that will change the world. Something actually worth the hype.
> The LLMs we have now are extremely effective though. There’s real use case here for automating writing, and some menial tasks that human beings were unfairly burdened with, but were impossible to automate till now.
Some people take issue with that assertion. The biggest problem with LLMs is you can’t trust them and have to verify everything they output - and often writing it yourself and verifying is easier than verifying someone/something else’s work.
Right now, no sane corporation will even let an LLM run the helplines they outsourced to India. Imagine an LLM hallucinating a dangerous “solution” to a customer problem, resulting in loss of property, injuries, or even death. It’s a massive lawsuit waiting to happen.
A bursting bubble? Not in my opinion. I'm already able to capture value in ways that were thought too difficult just 6 months ago. That's not vaporware.
WeWork also captured unique value for thousands of people before their company collapsed and everything fell apart. Your product doesn't need to be vaporware to be a bubble.
This is a terribly ignorant take and disappointing to see coming from a science fiction author.
> AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete — not our new robot overlord.
This is just argument by assertion. We have no good definition of intelligence, so I have no clue how he can be so confident. "Machine learning doesn't learn" is a crazy take, since "backprop + gradient descent does learn" is close to the most well-supported thing you can say about the past few years of algorithmic progress.
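To make "backprop + gradient descent does learn" concrete in the literal sense, here is a minimal sketch (toy data, learning rate, and step count are invented for illustration): a single parameter recovers the rule that generated the data purely by following the error gradient, the same mechanism backprop drives across millions of parameters.

```python
def learn_slope(xs, ys, lr=0.01, steps=1000):
    """Fit y = w * x by descending the gradient of mean squared error."""
    w = 0.0  # start knowing nothing about the data
    n = len(xs)
    for _ in range(steps):
        # d/dw of mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]  # generated by the rule y = 3x
w = learn_slope(xs, ys)
print(round(w, 3))  # converges to 3.0
```

Whether this counts as "intelligence" is the open question; that the parameter demonstrably improves from data is not.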
> sophisticated autocomplete
Aside from this being an incredibly reductive sneer that clearly isn't true if you've honestly tried using ChatGPT, etc., his citation for this is a podcast, which I'm positive Doctorow would not accept as sufficient for basically any other technical topic.
I love Ted Chiang's stories, and some of his takes on AI progress are cited here. However, I also found his extensive conversation with the Financial Times (earlier this month, so published after this) disappointing along similar lines. The thread running through both of these is a complete lack of a positive vision for the future, replaced by an almost smug cynicism that asserts any more technological progress is simply hype, a grift, and bad. Are there any current science fiction authors with a positive vision of the future?
> This is just argument by assertion. We have no good definition of intelligence, so I have no clue how he can be so confident.
Without concrete definitions your assertions are just as correct as theirs. But they have the evidence of absurd tech-bro hype of past technologies to draw on.
> I love Ted Chiang's stories, and some of his takes on AI progress are cited here. However, I also found his extensive conversation with the Financial Times (earlier this month, so published after this) disappointing along similar lines.
"I love Ted Chiang's stories because they jive with my preconceived notions, but I like him less when he says things that I don't believe"
> The thread running through both of these is a complete lack of a positive vision for the future, replaced by an almost smug cynicism that asserts any more technological progress is simply hype, a grift, and bad. Are there any current science fiction authors with a positive vision of the future?
Plenty. They talked about flying cars and living on the moon. Instead we got stagnant wages and a social-media Skinner box. All of those wonderfully positive predictions didn't pan out.
The current "AI" is probably somewhat useful in many cases, mostly for producing content that does not need to be perfect, or even correct. And it can already do that: you could get text from one model and then a few images from another, and spam the resulting content online.
Now, the valuation hype is the big question. Will there be moats, or will it be commodity technology? Maybe the server farms will make some money and everyone else a marginal profit?
No sed/shitcoin required.
There isn't any kind of distinction - when people say AI in 2023 they mean ML, LLMs, image models etc.