Anecdotally, our company's next couple of quarters are projected to be a bloodbath. Spending is down everywhere, nearly all of our customers are pushing for huge cuts to their contracts, and in turn literally any cost we can jettison to keep jobs is being pushed through. We're hearing the same from our customers.
AI has been the only new investment our company has made (half-hearted at that). I definitely get the sense that everyone is pretending things are fine to investors while they play musical chairs.
Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons: On one hand, the economy is legitimately growing and shares are becoming more valuable. But on the other hand, people and corporations could be cutting spending en masse so there's extra cash to flood the stock markets and drive up prices regardless of future earnings.
I work for one of the largest packaging companies in the world. Customers across the board in the US are cutting back on how much packaging they need due to presumably lower sales volume. Make of that information what you will.
My eBay sales have been way down this year too, and so far Q4 is not looking good at all. People are cutting back across the board, and it's going to be very ugly once Wall Street stops plugging its ears and covering its eyes.
This is an indicator that is very close to the time of sale. If you don't mind sharing, how did whatever you saw during 2020/2021 correlate with retail sales?
Car manufacturers, right at the beginning of COVID, started cutting orders of components from their suppliers, thinking that demand was going to drop due to a COVID-induced recession.
> Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons
Reason #1 is lower interest rates, which increase the present value of future cash flows in DCF models. A professor who does not mention that does not know what they are talking about.
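A minimal sketch of that mechanism, with cash flows and rates invented for illustration: the same future cash flows are worth more today when the discount rate falls, independent of any change in fundamentals.

```python
# Present value of a fixed stream of future cash flows at two discount rates.
# All figures here are made up for illustration only.

def present_value(cash_flows, rate):
    """Discount each year's cash flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [100] * 10  # $100/year for 10 years; fundamentals unchanged

pv_high = present_value(flows, 0.05)  # 5% discount rate
pv_low = present_value(flows, 0.02)   # 2% discount rate

print(round(pv_high, 2))  # 772.17
print(round(pv_low, 2))   # 898.26
```

Identical cash flows, ~16% higher valuation, purely from the lower rate: that is the channel the professor reportedly left out.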
Likewise on all downward business signals at my employer. I was thankfully in school during 09, but this easily feels like the biggest house of cards I have ever experienced as an adult.
An econ professor expounding on the stock market, lol. As someone with a couple of dumb fancy econ degrees and whose career has been quite good in the stock market, this comment made me laugh.
Almost all my money goes to mortgage, shit from China, food, and the occasional service. It does make me wonder some times how it all works. But it's been working like this for a long time now.
Real estate. The US economy floats on the perpetually increasing cost of land. That's where your mortgage money goes: to a series of financial instruments that allow others to benefit from the eternally rising value of "your" property.
Pretty sure the stagnation has a cause beginning in 2025, and that has to do with things like: Canada refusing to buy ALL American liquor in retaliation. China refusing to buy ANY soybeans in retaliation. In retaliation for what, you might ask?
I leave that as an exercise for the reader. If you are unable to answer that question honestly to yourself you need to seriously consider that your cognitive bias might be preventing you from thinking clearly.
Depends on which side of the tariffs an economy happens to be, and where, geopolitically.
AI, or whatever a mountain of processors churning all of the world's data will be called later, still has no use case other than total domination, for which it has brought a kind of lame service to all of the totally dependent go-along-to-get-along types, but nothing approaching an actual guaranteed answer for anything useful and profitable. Lame, lame, infinitely lame tedious shit that has prompted most people to stop even trying, and so a huge amount of genuine human inspiration and effort is gone.
I think that this concern is valid, but there are deeper, more foundational issues facing the US that have led to the sum of the issues mentioned in the post.
We can say that if this rotten support beam fails the US is in trouble but the real issue is what caused the rot in the first place.
The effective removal of regulations via bribes to election winners and a lack of enforcement, plus the explicit repeal of regulations meant to reduce corruption and insider trading. AI is not required to create these systemic exploitations, and they are far more efficient at extracting value than any AI system.
I think a better metaphor for interconnected economies is that of chains always breaking at their weakest link.
Sure, well done, your link in the chain didn't break… but your anchor is still stuck on the bottom of the ocean and you're on your spare anchor (with a shorter chain) until you get back to harbour.
> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast that we are way out over our skis in terms of what is being promised, which, in the realm of exciting new field of academic research is pretty low-stakes all things considered... to being terrifying when we bet policy and economics on it.
That isn't overly prescient or anything... it feels like the alarm bells started a while ago... but wow, the absolute "all in" of the bet is really starting to feel like there is no backup. With the cessation of EV tax credits, the slowdown in infra spending, healthcare subsidies, etc., the portfolio of investment feels much less diverse...
Especially compared to China, which has bets in so many verticals, battery tech, EVs, solar, then of course all the AI/chips/fabs. That isn't to say I don't think there are huge risks for China... but geez does it feel like the setup for a big shift in economic power especially with change in US foreign policy.
I'll offer two counter-points, weak but worth mentioning. With regard to China, there's no value to extract by on-shoring manufacturing -- many verticals are simply uninvestable in the US because of labor costs, and the gap in cost to manufacture is so large it's not even worth considering. I think there's a level of introspection the US needs to contend with, but that ship has sailed. We should be forward-looking in what we can do outside of manufacturing.
For AI, the pivot to profitability was indeed quick, but I don't think it's as bad as you may think. We're building the software infrastructure to accommodate LLMs into our work streams, which makes everyone more efficient and productive. As foundational models progress, the infrastructure will reap the benefits à la Moore's law.
I acknowledge that this is a bullish thesis, but I'll tell you why I'm bullish: I'm basically a high-tech Luddite -- the last piece of technology I adopted was Google in 1996. I converted from vim to VS Code + Copilot (and now Cursor) because of LLMs -- that's how transformative this technology is.
> which makes everyone more efficient and productive
There is something bizarre about an economic system that pursues productivity for the sake of productivity even as it lays off the actual participants in the economic system
An echo of another commenter who said that it's amazing that AI is now writing comments on the internet.
Which is great, but it actively makes the internet a worse place for everyone and eventually causes people to simply stop using your site
Somewhat similar to AI making companies more productive - you can produce more than ever, but because you’re more productive, you don’t hire enough and ultimately there aren’t enough people to consume what you produce
> many verticals are simply uninvestable in the US because of labor costs and the gap of cost to manufacture is so large it's not even worth considering.
I think this is covered in a number of papers from think tanks related to the current administration.
The overall plan, as I understood it, is to devalue the dollar while keeping the monetary reserve status. A weaker dollar will make it competitive for foreign countries to manufacture in the US. The problem is that if the dollar weakens, investors will fly away. But the AI boom offsets that.
For now it seems to work: the dollar has lost more than 10% year to date, but the AI boom kept investors in the US stock market. The trade agreements will protect the US for a couple of years as well. But ultimately it's a time bomb for the population, which will wake up in 10 years with half their present purchasing power in non-dollar terms.
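The arithmetic behind that "half their purchasing power" claim is quick to sketch. The loss rates below are assumptions for illustration, not forecasts:

```python
import math

def years_to_halve(annual_loss):
    """Years until purchasing power halves at a constant annual loss rate."""
    return math.log(0.5) / math.log(1 - annual_loss)

print(round(years_to_halve(0.10), 1))  # 6.6 years at a 10%/year loss
print(round(years_to_halve(0.07), 1))  # 9.6 years at a 7%/year loss
```

So if this year's roughly 10% slide were to continue, halving would take well under a decade; even a sustained 7% annual loss gets there in about ten years.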
I think an interesting way to measure the value is to argue "what would we do without it?"
If we removed "modern search" (Google) and had to go back to say 1995-era AltaVista search performance, we'd probably see major productivity drops across huge parts of the economy, and significant business failures.
If we removed the LLMs, developers would go back to Less Spicy Autocomplete and it might take a few hours longer to deliver some projects. Trolls might have to hand-photoshop Joe Biden's face onto an opossum's body like their forefathers did. But the world would keep spinning.
It's not just that we've had 20 years more to grow accustomed to Google than LLMs; it's that having a low-confidence answer or an excessively florid summary of a document is not really that useful.
Another thing to note about China: while people love pointing to its public transit as an example of a country that's done so much right, its (over)investment in this domain has led to a concerning explosion of local-government debt obligations, which isn't usually well represented in the overall debt-to-GDP ratios many people quote. I only mention that to note that things are not all the propaganda suggests they might be in China. The big question everyone is asking is: what happens after Xi? Even the most educated experts on the matter do not have an answer.
I, too, don't understand the OP's point about quickly pivoting to value extraction. Every technology we've ever invented was immediately followed by capitalists asking "how can I use this to make more money". LLMs are an extremely valuable technology. I'm not going to sit here and pretend that anyone can correctly guess exactly how much we should be investing in this right now in order to properly price how much value they'll be generating in five years. Except it's so critical to point out that the "data center capex" numbers everyone keeps quoting are, in a very real (and, sure, potentially scary) sense, quadruple-counting the same hundred billion dollars. We're not actually spending $400B on new data centers; Oracle is spending $nnB on Nvidia, who is spending $nnB to invest in OpenAI, who is spending $nnB to invest in AMD, who Coreweave will also be spending $nnB with, who Nvidia has an $nnB investment in... and so forth. There's a ton of duplicate accounting going on when people report these numbers.
It doesn't grab the same headlines, but I'm very strongly of the opinion that there will be more market corrections in the next 24 months, overall stock market growth will be pretty flat, and by the end of 2027 people will still be opining on whether OpenAI's $400B annual revenue justifies a trillion dollars in capex on new graphics cards. There's no catastrophic bubble burst. AGI is still only a few years away. But AI eats the world nonetheless.
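The circular-deal point can be made concrete with a toy model. The company pairings follow the comment above, but every dollar figure here is invented for illustration:

```python
# Toy model of circular AI deal announcements (all dollar figures invented).
# Each announced deal re-counts capital that may already be inside the loop.
deals = [
    ("Oracle", "Nvidia", 100),     # hypothetical $B per deal
    ("Nvidia", "OpenAI", 100),
    ("OpenAI", "AMD", 100),
    ("Coreweave", "AMD", 100),
]

headline_total = sum(amount for _, _, amount in deals)  # what gets reported
net_new_capital = 100  # if a single $100B tranche circulates through the loop

print(headline_total)                    # 400
print(headline_total / net_new_capital)  # 4.0, i.e. the "quadruple-counting"
```

Under these assumed numbers, the headline "$400B of capex" and the actual external capital differ by a factor of four, which is exactly the duplicate-accounting worry.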
>> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast
lol, I read this a few hours ago, maybe without enough caffeine, but I read it as "my comment from 70 *years* ago" because I thought you somehow were at the Dartmouth Summer Research Project on Artificial Intelligence workshop in 1956!
I somehow thought "Damn... already back there, at the birth of the field they thought it was too fast". I was entirely wrong and yet in some convoluted way maybe it made sense.
Gotta thank AI -- it's keeping my portfolio from collapsing, at least for now. But yeah, I totally see the point: AI investment might be one of the few things holding up the U.S. economy, and it doesn't even have to fail spectacularly to cause trouble. Even a "slightly disappointing" AI wave could ripple across markets and policy.
> more than a fifth of the entire S&P 500 market cap is now just three companies — Nvidia, Microsoft, and Apple — two of which are basically big bets on AI.
These 3 companies have been heavyweights since long before AI. Before AI, you couldn't get Nvidia cards due to crypto, or gaming. Apple is barely investing in AI. Microsoft has been the most important enterprise tech company for my entire lifetime.
Nvidia market cap has increased about 10x since the crypto-shortage years. It wasn't small before, but there's a big difference between ~1% of the market and ~10% of the market in terms of systemic risk.
Also, as of last year about 80% of their revenues were from data center GPUs designed specifically for "AI", and that's undoubtedly continuing to grow as a share of their revenues.
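The systemic-risk arithmetic here is simple enough to sketch. Weights and drawdowns below are illustrative, not actual index data:

```python
def index_hit(weight, drawdown):
    """First-order index loss if one constituent with `weight` falls by `drawdown`."""
    return weight * drawdown

# The same hypothetical 50% single-stock fall at a 1% vs a 10% index weight.
print(index_hit(0.01, 0.50))  # 0.005 -> a 0.5% index loss
print(index_hit(0.10, 0.50))  # 0.05  -> a 5% index loss
```

Same stock-level event, ten times the index-level damage: weight, not just volatility, is what turns one company's bad year into a market-wide drawdown.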
You're missing the point. Whether one buys it or not, the author is saying those companies, whatever their history, have pushed a significant amount of their... chips into a bet on AI.
One of the most frustrating things regarding the potential of an AI bubble was a very smart and intelligent researcher being incredibly bullish on AI on Twitter, because if you extrapolate graphs measuring AI's ability to complete long-duration tasks (https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...) or other benchmarks, then by 2026 or 2027 you've basically invented AGI.
I'm going to take his statements at face value and assume that he really does have faith in his own predictions and isn't trying to fleece us.
My gripe with this statement is that this prediction is based on proxies for capability that aren't particularly reliable. To elaborate, the latest frontier models score something like 65% on SWE-bench, but I don't think they're as capable as a human that also scored 65%. That isn't to say that they're incapable, but just that they aren't as capable as an equivalent human. I think there's a very real chance that a model absolutely crushes the SWE-bench benchmark but still isn't quite ready to function as an independent software engineering agent.
So a lot of this bullishness basically hinges on the idea that if you extrapolate some line on a graph into the future, then by next year or the year after all white-collar work can be automated. Terrifying as that is, this all hinges on the idea that these graphs, these benchmarks, are good proxies.
There's a huge disconnect between what the benchmarks are showing and the day-to-day experience of those of us using LLMs. According to SWE-bench, I should be able to outsource a lot of tasks to LLMs by now. But practically speaking, I can't get them to reliably do even the most basic of tasks. Benchmaxxing is a real phenomenon. Internal private assessments are the most accurate source of information that we have, and those seem to be quite mixed for the most recent models.
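The extrapolation being criticized in this thread can be sketched in a few lines. The doubling period and baseline below are assumptions for illustration, loosely inspired by the METR post, not measured values:

```python
# Naive trend extrapolation of the "task horizon an AI can complete."
# DOUBLING_MONTHS and START_HOURS are assumed, illustrative parameters.
DOUBLING_MONTHS = 7   # assumed doubling period of the task horizon
START_HOURS = 1.0     # assumed horizon at month 0

def horizon_hours(months_from_now):
    """Task horizon implied by a straight-line fit on a log scale."""
    return START_HOURS * 2 ** (months_from_now / DOUBLING_MONTHS)

for months in (0, 12, 24, 36):
    print(months, round(horizon_hours(months), 1))
```

Even if such a line fits the historical data well, the objection above stands: the quantity being extrapolated is a benchmark proxy, and nothing in the fit guarantees that the proxy tracks real-world capability.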
How ironic that these LLMs appear to be overfitting to the benchmark scores. Presumably these researchers deal with overfitting every day, but they can't recognize it right in front of them.
>> by next year or the year after all white-collar work can be automated
Work generates work. If you remove the need for 50% of the work then a significant amount of the remaining work never needs to be done. It just doesn't appear.
The software that is used by people in their jobs will no longer be needed if those people aren't hired to do their jobs. There goes Slack, Teams, GitHub, Zoom, Powerpoint, Excel, whatever... And if the software isn't needed then it doesn't need to be written, by either a person or an AI. So any need for AI Coders shrinks considerably.
Even in the unlikely event AI somehow delivers on its valuations and thereby doesn't disappoint, the implied negative externalities for the population (mass worker redundancy, inequality that makes even our current scenario look rosy, skyrocketing electricity costs) mean that America's and the world's future looks like a rocky road.
I don't think many businesses are at the stage where they can actually measure whatever AI is delivering.
At one business I know, they fired most senior developers and mandated that junior developers use AI. Stakeholders were happy, as finally they could see their slides in action -- but at the cost of the code base becoming unreadable and the remaining senior employees leaving.
So on paper, everything is better than ever: cheap workers deliver work fast. But I suspect that in a few months' time it will all collapse.
Most likely they'll be looking to hire for a complete rewrite, or they'll go under.
In light of this scenario, AI is a false economy.
When standard of living increases significantly, inequality often also increases. The economy is not a zero sum game. Having both rising inequality and rising living standards is generally the thing to aim for.
Both parties seem to agree we should build more electric capacity, and that does seem like an excellent thing to invest in -- so why aren't we?
As the cost of material goods decreases, they will become near free. IMO demand for human-produced goods and experiences will increase.
Not necessarily. If $x is enough to get you 10x more software engineering effort, people may be willing to increase their spending on software engineering rather than decrease it.
Solar is extremely cheap and battery costs are dropping quickly; IMO you may see US neighborhoods, especially rural ones, disconnecting from the grid and rolling their own solutions.
This China rare-earth thing may slow down the battery price drop somewhat, but not for long, because plenty of chemistries don't rely on rare earths, and there will soon be plenty of old EV packs with some life left in them serving as grid storage.
I personally hope AI doesn't quite deliver on its valuations, so we don't lose tons of jobs; but instead of a market crash, the money will rotate into quantum and CRISPR technologies (both soon to be trillion-dollar-plus industries). People who bet big on AI might lose out some but not be wiped out. That's best-casing it, though.
Other than collapsing the internet when every pre-quantum algorithm is broken (nice jobs for the engineers who need to scramble to fix everything, I guess) and even more uncrackable comms for the military. Drug and chemistry discovery could improve a lot?
And to be quite honest, the prospect of a massive biotech revolution is downright scary rather than exciting to me because AI might be able to convince a teenager to shoot up a school now and then, but, say, generally-available protein synthesis capability means nutters could print their own prions.
Better healthcare technology in particular would be nice, but rather like food, the problem is that we already can provide it at a high standard to most people and choose not to.
Quantum has already peaked in hype. It doesn't scale, like, at all. It can't be used for abstract problems. We don't even know the optimal foundation on which to start developing. It is now in fusion territory. Fusion is also objectively useful, with immense depth of research potential. It's just that humans are too dumb for it, for now, and so we will do it at scale centuries later.
CRISPR would clash with the religious fundamentalists slowly coming back to power in all Western countries. Potentially it will even be banned, like abortion.
I like this, because I hate the idea that we should either be rooting for AI to implode and cause a crash, or for it to succeed and cause a crash (or take us into some neo-feudal society).
"quantum" and "biotech" have been wishful thinking based promises for several years now, much like "artificial intelligence"
we need human development, not some shining new blackbox that will deliver us from all suffering
we need to stop seeking redemption and just work on the very real shortcomings of modern society... we don't even have scarcity anymore but the premise is still being upheld for the benefit of the 300 or so billionaire families...
Sometimes volume and total $ are not the same.
Guess what happened next?
Remember, when a bear is chasing you and some others, you don't have to be faster than the bear to escape.
If there's another European debt crisis (for example) does the bear eat Europe and any US issues go away?
For example?
It happened ten years ago, it's just that perceptions haven't changed yet.
Maybe consumer staples (Walmart, Pepsi etc.)? Dollar stores?
Microsoft's share price has more than doubled since 2023.
And if they aren't, oh wow.
A bit off-topic, but as time goes by, I believe we can be very intelligent in some aspects and very, very naive and/or wrong in others.
https://www.julian.ac/blog/2025/09/27/failing-to-understand-...
- Where we have intelligent computers and robots that can take over most jobs
- A smarter LLM that can help with creative work but limited interaction with the physical world
- Something else we haven't imagined yet
Depending on where we end up, the current investment could provide a great ROI or a negative one.
You said it right here. No one is going to give up energy at such a cheap rate anymore. Those days are over. Darkness for the US is coming.
scarcity isn't real anymore, it is enforced politically for the benefit of the owning class