Readit News
kbos87 · 2 months ago
Companies like Nvidia and OpenAI base their answers to any questions about economic risk on their own best interests and a pretty short view of history. They are fighting like hell to make sure they are among a small set of winners while waving away the risk or claiming that there's some better future for the majority of people on the other side of all this.

To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.

When AI finally does cause massive disruption to white collar work, what happens then? Do we really think that most of the American economy is just going to downshift into living off a meager universal basic income allotment (assuming we could ever muster the political will to create a social safety net)? Who gets the nice car and the vacation home?

Once people are robbed of what remaining opportunities they have to exercise agency and improve their life, it isn't hard to imagine that we see some form of swift and punitive backlash, politically or otherwise.

lend000 · 2 months ago
Friends and others who have described the details of their non-technical white collar work to me over the last 15 or so years have typically evoked the unspoken response... "Hmm, I could probably automate about 50-80% of your job in a couple weeks." That's pre-AI. And yet years later, they would still have similar jobs with repetitive computer work.

So I'm quite confident the future will be similar with AI. Yes, in theory, it could already replace perhaps 90% of the white collar work in the economy. But in practice? It will be a slow, decades-long transition as old-school / less tech savvy employers adopt the new processes and technologies.

Junior software engineers trying to break into high-paying tech jobs will be hit the hardest, IMO, since employers are tech savvy, the supply of junior developers is as high as ever, and juniors will simply take too long to add more value than using Claude unless you have a lot of money to burn on training them.

brookst · 2 months ago
I’m very skeptical of claims that all things will always do this and never do that, etc.

IMO Jensen and others don’t know where AI is going any more than the rest of us. Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.

danans · 2 months ago
> Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.

Absent some form of meaningful redistribution of the economic and power gains that come from AI, the techno-feudalist dystopia becomes a more likely outcome (though not a certain outcome), based on a straightforward extrapolation of the last 40 years of increasing income and wealth inequality. That trend could be arrested (as it was just after WW2), but that probably won't happen by default.

kbos87 · 2 months ago
Fair point and I absolutely acknowledge that the future AI will usher in is still very much an unknown. I do think it's worth recognizing that there is one part of the story that is very predictable because it's happened over and over again - the part where some sort of innovation creates new efficiencies and advantages. I think it's fair to debate the extent to which AI will completely disrupt the white collar working class, but to whatever extent it does, I don't think there's much argument about where the benefit will accrue under our current economic system.
ben_w · 2 months ago
> To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.

Indeed.

> When AI finally does cause massive disruption to white collar work, what happens then? Do we really think that most of the American economy is just going to downshift into living off a meager universal basic income allotment (assuming we could ever muster the political will to create a social safety net)? Who gets the nice car and the vacation home?

Completely impossible to forecast, even without the backdrop of all the other changes driven by the very tech that makes people unemployable.

For the latter: one task that is currently done by humans is making robots. Either that continues to be done in part by humans, or it becomes fully automated. If it's fully automated, essentially nothing stops exactly one lone philanthropist (or anarchist hacker) from telling one to recursively make copies of itself until every human on earth has their own.

Unfortunately, I suspect the realpolitik of such tech will be horrific in the same way that the trenches of WW1 provided an unexpected and horrific use case for all the chemical industry that was in the process of ending famine in Europe, and likewise the manufacturing industry that was in the process of giving everyone modern conveniences such as "electricity", "indoor plumbing", and "affordable home refrigeration".

zer00eyz · 2 months ago
> and a pretty short view of history

Great, let's see an example!

> To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.

Except that innovation has led to more jobs, new industries, more prosperity, and fewer working hours. The stark example of this: you aren't a farmer: https://modernsurvivalblog.com/systemic-risk/98-percent-of-a...

Your shirt isn't a week's or a month's income: https://www.bookandsword.com/2017/12/09/how-much-did-a-shirt...

Go back to the 1960s, when automation was new. It was an expensive, long-running failure for GM to put in those first robotic arms. Today there are people who have CNC shops in their garage. The cost of starting that business up is in the same price range as the pickup truck you might put in there. You no longer need accountants or payroll, and you're not spending as much time doing these things yourself; it's all software. You don't need to have a retail location or wholesale channels: build your website and app, leverage marketplaces and social media. The reality is that it is cheaper and easier than ever to be your own business... and lots of people are figuring this out and thriving.

> Do we really think that most of the American economy is just going to downshift

No, I think my fellow Americans are going to scream and cry and hold on to dying ways of life -- see coal miners.

willis936 · 2 months ago
I struggle to see how AI innovation falls into the "automate creation of material goods" camp and not the "stratification of wealth" camp.
TeMPOraL · 2 months ago
> Except that innovation has lead to more jobs, new industries, and more prosperity and fewer working hours.

For other people's kids.

This is the critical point so many are still missing: whatever benefits come from jobs being automated away, they do not come to people whose jobs were automated. Those people are fucked over for life. And to a large degree, so are their kids - the new careers are available for picking, but you're not going to be first in line with your peers, if you're struggling from the shock after your household suddenly dropped two levels down on the economic ladder.

sbierwagen · 2 months ago
>Who gets the nice car and the vacation home?

AI will crash the price of manufactured goods. Since all prices are relative, the price of rivalrous goods will rise. A car will be cheap. A lakeside cabin will be cheap. A cottage in the Hamptons will be expensive. Superbowl tickets will be a billion dollars each.

>meager universal basic income allotment

What does a middle class family spend its money on? You don't need a house within an easy commute of your job, because you won't have one. You don't need a house in a good school district, because there's no point in going to school. No need for the red queen's race of extracurriculars that look good on a college application, or to put money in a "college fund", because college won't exist either.

The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.

amazingman · 2 months ago
The main flaw in your framing is that physical resources are still scarce. All prices are not relative in the sense you're building your projections on.
bigbadfeline · 2 months ago
> AI will crash the price of manufactured goods.

Quite the opposite: persistent inflation has been with us for a long time despite automation. It's not driven by labor cost (even mainstream econ knows this); it's driven by monopolization, which corporate AI facilitates and shifts into overdrive.

> The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.

AI will blow up only what its controllers tell it to; that control is the crux of the problem. AI-driven monopolization allows a few controllers to keep the multitudes in their crosshairs and do whatever they want, with whomever they want. J. Huang will make sure they have the GPUs they need.

> You don't need a house within an easy commute of your job, because you won't have one.

Remote work has been a thing for quite some time, but remote housing is still rare anyway - a house provides access not only to jobs and school but also to medical care, supply lines, and social interaction. There are places in Montana and the Dakotas that see specialist doctors only once a week or month, because the doctors fly in weekly from places as far away as Florida.

> You don't need a house in a good school district, because there's no point in going to school... and college won't exist either.

What you're describing isn't a house, it's a barn! Can you lactate? Because if you can't, nobody is going to provide you with a stall in the glorious AI barn.

timewizard · 2 months ago
> base their answers to any questions about economic risk on their own best interests and a pretty short view of history.

We used to just call that lying.

> When AI finally does cause massive disruption to white collar work

It has to exist first. Currently you have a chat bot that requires terabytes of copyrighted data to function and shows sublinear increases in performance for exponential increases in cost. These guys genuinely seem to be arguing over a dead end.

> what happens then?

What happened when gasoline engines removed the need for large pools of farm labor? It turns out people are far more clever than a "chat bot", and entire new economies were invented.

> that we see some form of swift and punitive backlash, politically or otherwise.

Or people just move onto the next thing. It's hilarious how small imaginations become when "AI" is being discussed.

klipklop · 2 months ago
> Once people are robbed of what remaining opportunities they have to exercise agency and improve their life, it isn't hard to imagine that we see some form of swift and punitive backlash, politically or otherwise.

Now you understand why the government is rushing to implement a surveillance state and deputize tech execs to speed this process up. The "control grid" goes in first and the middle-class job market is snuffed out after.

zozbot234 · 2 months ago
"Massive disruption" of what kind? Current AI abilities make white-collar work more productive and potentially higher-paid, not less.
kbos87 · 2 months ago
Why would my employer pay me more for using their AI? I am already massively more productive at work using AI. I'm not getting paid more, and I'm not working fewer hours. The road we are headed down is one where all of the economic benefits go straight to the owning class.
Arainach · 2 months ago
Productivity per capita is dramatically up since the 1970s. Wages are flat. Employers are greedy and short-sighted.

Employers would rather pay more to hire someone new who doesn't know their business than give a raise to an existing employee who's doing well. They're not going to pay someone more because they're more productive, they'll pay them the same and punish anyone who can't meet the new quota.

uhhhhhhh · 2 months ago
Companies are actively not hiring, expecting AI to compensate and still deliver growth. I have seen these same companies giving smaller raises and fewer promotions, and eliminating junior positions.

The endgame isn't more employees or paying them more. It's paying fewer people, or no skilled people when possible.

That's a fairly massive disruption.

ffsm8 · 2 months ago
You seem to have the same opinion as kbos87 then, because given your higher productivity, do you honestly think there will not be fewer job openings from your employer going forward?

What you just said as a rebuttal was pretty much his point; you just didn't internalize what the productivity gains mean at the macro level, only looking at the select few who will continue to have a job.

VWWHFSfQ · 2 months ago
The USA is presently in the midst of a massive offshoring of software jobs, which will only continue to accelerate as AI becomes better. These are "white collar" jobs that will never come back.
jaredklewis · 2 months ago
Salaries are determined by the replacement cost of the employee in question, not their productivity. How does AI increase wages?
knowitnone · 2 months ago
So if they are more productive, does that not mean companies will need fewer staff? Why would they give you more pay when they can so easily replace you? Remember, you're not doing much of the work anymore, so expect lower pay.
bradgessler · 2 months ago
I did a thought experiment where, at scale, if each human was given maximum agency over the observable universe, we’d each manage 250 galaxies.

That comes out to about 25 trillion stars and 40 trillion planets.

Give or take a few orders of magnitude, I’m confident humans will either figure out how to expand into that space or squabble over rationing our remaining resources on earth.

hnlmorg · 2 months ago
That backlash is already happening. Which is why we are seeing the rise in right wing extremism. People are voting for change. The problem is they’re also voting for the very establishment they’re protesting against.
willis936 · 2 months ago
Surveys aren't revealing that AI legislation is a top 3 issue for constituents on either side. It might as well be under the noise floor politically.
Aurornis · 2 months ago
AI doesn’t really register on polls of voter priorities.
imperialdrive · 2 months ago
Finally gave Claude a go after trying OpenAI a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead, at least for my daily flavor which is PowerShell. No way a double-digit amount of jobs aren't at stake. This stuff feels like it is really starting to take off. Incredible time to be in tech, but you gotta be clever and work hard every day to stay on the ride. Many folks got comfortable and/or lazy. AI may be a kick in the pants. It is for me anyway.
WXLCKNO · 2 months ago
I've been trying every flavor of AI powered development and after trying Claude Code for two days with an API key, I upgraded to the full Max 20x plan.

Cursor, Windsurf, Roo Code / Cline, they're fine but nothing feels as thorough and useful to me as Claude Code.

The Codex CLI from OpenAI is not bad either; there's just something satisfying about the LLM straight up using the CLI.

solumunus · 2 months ago
It really is night and day. Most of them feel like cool toys; Claude Code is a genuine workhorse. It immediately became completely integral to my workflow. I own a small business and I can say with absolute confidence this will reduce the number of devs I need to hire going forward.
wellthisisgreat · 2 months ago
hey can you explain the appeal of Claude Code vs Cursor?

I know the context window part and Cursor RAG-ing it, but isn't IDE integration a true force multiplier?

Or does Claude Code do something similar with "send to chat" / smart (Cursor's TAB feature) autocomplete etc.?

I fired it up but it seemed like just Claude in terminal with a lot more manual copy-pasting expected?

I tried all the usual suspects in AI-assisted programming, and Cursor's TAB is too good to give up vs Roo / Cline.

I do agree Claude's the best for programming, so I would love to use its full-featured version.

dandaka · 2 months ago
Claude Code works surprisingly well and is also cheaper, compared to Windsurf and Cline + Sonnet 4. The rate of errors dropped dramatically for my side projects, from "I have to check most changes" to "I have not written a line".
GardenLetter27 · 2 months ago
I find it's good if you can get a really clean context, but on IRL problems with 100k+ lines of code that's extremely hard to manage.

It absolutely aced an old take-home test I had though - https://jamesmcm.github.io/blog/claude-data-engineer/

But note the problems it got wrong are troubling, especially the off-by-one error the first time as that's the sort of thing a human might not be able to validate easily.

neilfrndes · 2 months ago
Yup, Claude Code is the real deal. It's a massive force multiplier for me. I run a small SaaS startup. I've gotten more done in the last month than the previous 3 months or more combined. Not just code, but also emails, proposals, planning, legal etc. I feel like I'm working in slo-mo when Claude is down (which unfortunately happens every couple of days). I believe that tools like Claude Code will help smaller companies disproportionately.
finlayson_point · 2 months ago
how are you using claude code for emails? with an MCP connection, or just taking the output from the terminal?
Aurornis · 2 months ago
> Finally gave Claude a go after trying OpenAI a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead,

I’ve been avoiding LLM-coding conversations on popular websites because so many people tried it a little bit 3-6 months ago, spotted something that didn't work right, and then wrote it off completely.

Everyone who uses LLM tools knows they're not perfect: they hallucinate sometimes, their solutions to some problems will be laughably bad, and all the other things that come with LLMs.

The difference is some people learn the limits and how to apply them effectively in their development loop. Other people go in looking for the first couple failures and then declare victory over the LLM.

There are also a lot of people frustrated with coworkers using LLMs to produce and submit junk, or angry about the vibe coding glorification they see on LinkedIn, or just feel that their careers are threatened. Taking the contrarian position that LLMs are entirely useless provides some comfort.

Then in the middle, there are those of us who realize their limits and use them to help here and there, but are neither vibe coding nor going full anti-LLM. I suspect that’s where most people will end up, but until then the public conversations on LLMs are rife with people either projecting doomsday scenarios or claiming LLMs are useless hype.

unshavedyak · 2 months ago
I purchased Max a week ago and have been using it a lot. A few experiences so far:

- It generates slop in high volume if not carefully managed. It's still working, tested code, but easily illogical. This tool scares me if put in the hands of someone who "just wants it to work".

- It has proven to be a great mental block remover for me. A tactic i've often had in my career is just to build the most obvious, worst implementation i can if i'm stuck, because i find it easier to find flaw in something and iterate than it is to build a perfect impl right away. Claude makes it easy to straw man a build and iterate it.

- All the low stakes projects i want to work on but i'm too tired to after real work have gotten new life. It's updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.

- It seems incapable of seeing the larger picture of why classes of bugs happen. Eg on a project i'm Claude Code "vibing" on, it's made a handful of design mistakes that started to cause bugs. It will happily try to fix individual issues all day rather than re-architect to make a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory though, so perhaps i can get it to reconsider this behavior.

- Robust linting, formatting and testing tools for the language seem necessary. My pet peeve is how many spaces the LLM will add in. Thankfully cargo-fmt clears up most LLM gunk there.

levocardia · 2 months ago
Nvidia is also very mad about Anthropic's advocacy for chip export controls, which is not mentioned in this article. Dario has an entire blog post explaining why preventing China from getting Nvidia's top of the line chips is a critical national security issue, and Jensen is, at least by his public statements, furious about the export controls. As it currently stands, Anthropic is winning in terms of what the actual US policy is, but it may not stay that way.
KerrAvon · 2 months ago
Jensen is right, though. If we force China to develop their own technology they’ll do that! We don’t have a monopoly on talent or resources. The US can have a stake at the table or nothing at all. The time when we, the US, could do protectionism without shooting ourselves in the foot is well and truly over. The most we can do is inconvenience China in the short term.
orangecat · 2 months ago
> The most we can do is inconvenience China in the short term.

If scaling holds up enough to make AGI possible in the next 5-10 years, slowing down China by even a few years is extremely valuable.

nickysielicki · 2 months ago
> If we force China to develop their own technology they’ll do that!

They’re going to do that anyway. They already are. The reason that they want to buy these cards in the first place is because developing these accelerators takes time. A lot of time.

sorcerer-mar · 2 months ago
Should we also give them the plans for all of our military equipment then, by the same logic?

Neither side is obviously right.

dsign · 2 months ago
Why look at five years and say "everything is gonna be fine in five years, thus, everything is gonna be fine and we should keep this AI thing going"?

It's early days and nobody knows how things will go, but to me it looks like in the next century or so humans are going the way of the horse, at least when it comes to jobs. And if our society doesn't change radically, let's remember that the only way most people have of feeding and clothing themselves is to sell their labor.

I'm an AI pessimist-pragmatist. If the thing with AI gets really bad for wage slaves like me, I would prefer to have enough savings to put AIs to work in some profitable business of mine, or to do my healthcare when disease strikes.

quonn · 2 months ago
> It's early days and nobody knows how things will go, but to me it looks that in the next century or so

How is it early days? AI has been talked about since at least the 50s, neural networks have been a thing since the 80s.

If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?

Chances are, if the current wave does not achieve strong AI then there will be another AI winter, and what people will research in 30 or 40 or 100 years is not something that our current choices can affect.

Therefore the interesting question is what happens short-term not what happens long-term.

dsign · 2 months ago
I said that one hundred years from now humans would have likely gone the way of the horse. It will be a finished business, not a thing starting. We may take it with some chill, depending on how we value our species and our descendants and the long human history and our legacy. It's a very individual thing. I'm not chill.

There's no comparing the AI we have today with what we had 5 years ago. There's a huge qualitative difference: the AI we had five years ago was reliable but uncreative. The one we have now is quite unreliable but creative at a level comparable to a person. To me, it's just a matter of time before we finish putting the two things together - and we have already started. Another AI winter of the sort we had before seems highly unlikely to me.

falcor84 · 2 months ago
> How is it early days?

When you have exponential growth, it's always early days.

Other than that I'm not clear on what you're saying. What is in your mind the difference between how we should plan for the societal impact of AI in the short vs the long term?

fmbb · 2 months ago
We have only been selling our labor for a couple of hundred years. Humanity has been around for hundreds of thousands of years.

We will manage. Hey, we can always eat the rich!

dsign · 2 months ago
>> we can always eat the rich!

As long as they are not made out of silicon....

pixl97 · 2 months ago
"Dinosaurs have been around 100 million years and they will be around 100 million more" --Dinosaurs 65.1 million years ago.
davemp · 2 months ago
> …to me it looks that in the next century or so humans are going the way of the horse, at least when it comes to jobs.

I’m not sure. I think we can extrapolate that repetitive knowledge work will require much less labor. For actual AGI capable of applying rigor, I don't think it's clear that the computational requirements are achievable without a massive breakthrough. Also, for general-purpose physical tasks, humans are still pretty dang efficient at ~100 watts and self-maintaining.

jjfoooo4 · 2 months ago
The AI-executives-predicting-AI-doomsday trend has been pretty tiresome, and I'm glad it's getting pushback. It's impossible to take it seriously given the Anthropic CEO's incentives: to thrill investors and to shape regulation of competitors.

The biggest long term competitor to Anthropic isn't OpenAI, or Google... it's open source. That's the real target of Amodei's call for regulation.

scuol · 2 months ago
Just this morning, I had Claude come up with a C++ solution that would have undefined behavior that even a mid-level C++ dev could have easily caught (assuming iterator stability in a vector that was being modified) just by reading the code.

These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It just seems pretty clear that no LLM actually has a "mental model" of how things work that can avoid the obvious pitfalls amongst the reams of buggy C++ code.

Maybe this is different for JS and Python code?
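For what it's worth, the same bug class exists in Python, just with a different failure mode: mutating a container while looping over it gives silently wrong results instead of undefined behavior. A minimal sketch (a hypothetical example, not the code from the Claude session above):

```python
# Removing elements from a list while iterating over it: roughly the
# Python analog of C++ iterator invalidation. The for-loop advances an
# internal index, so deleting an element shifts the rest left and the
# element that slides into the current slot is silently skipped.
nums = [1, 2, 2, 3]
for n in nums:
    if n == 2:
        nums.remove(n)  # mutates the list mid-iteration

print(nums)  # [1, 2, 3] -- one of the 2s survives

# The safe pattern builds a new list instead of mutating in place:
cleaned = [n for n in [1, 2, 2, 3] if n != 2]
print(cleaned)  # [1, 3]
```

So Python at least keeps the behavior defined (wrong answers, not UB), but the model still has to know not to write the first version.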

jsrozner · 2 months ago
This is exactly right. LLMs do not build appropriate world models. And no... Python and JS have similar failure cases.

Still, sometimes it can solve a problem like magic. But since it does not have a world model it is very unreliable, and you need to be able to fall back to real intelligence (i.e., yourself).

rangestransform · 2 months ago
> assuming iterator stability in a vector that was being modified

This is the crux of an interview question I ask, and you’d be amazed how many experienced cpp devs require heavy hints to get it

unshavedyak · 2 months ago
I agree, but i think the thing we often miss in these discussions is how much LLMs have potential to be productivity multipliers.

Yea, they still need to improve a bit - but i suspect there will be a point at which individual devs could be getting 1.5x more work done in aggregate. So if everyone is doing that much more work, it has potential to "take the job" of someone else.

Yea, software is being needed more and more and more, so perhaps it'll just make us that much more dependent on devs and software. But i do think it's important to remember that productivity always has potential to replace devs, and LLMs imo have huge potential in productivity.

scuol · 2 months ago
Oh I agree it can be a multiplier for sure. I think it's not "AI will take your job" but rather "someone who uses AI well will take your job if you don't learn it".

At least for C++, I've found it does a very mediocre job of suggesting project code (it has a tendency to drop in subtle bugs all over the place, so you basically have to carefully review it instead of just writing it yourself), but for asking things in Copilot like "Is there any UB in this file?" (not that it will be perfect, but sometimes it'll point something out), and especially for writing tests, I absolutely love it.

skerit · 2 months ago
Sonnet or Opus? Well, I guess they both still can do that. But I'm just keeping on asking it to review all its code. To make sure it works. Eventually, it'll catch its errors.

Now this isn't a viable way of working if you're paying for this token-by-token, but with the Claude Code $200 plan ... this thing can work for the entire day, and you will get a benefit from it. But you will have to hold its hand quite a bit.

mistrial9 · 2 months ago
a difference emerges when an agent can run code and examine the results. Most platforms are very cautious about this extension. Recent MCP does define toolsets and can enable these feedback loops in a way that can be adopted by markets and software ecosystems.
phamilton · 2 months ago
(not trolling) Would that undefined behavior have occurred in idiomatic rust?

Will the ability to use AI to write such a solution correctly be enough motivation to push C++ shops to adopt rust? (Or perhaps a new language that caters to the blindspots of AI somehow)

There will absolutely be a tipping point where the potential benefits outweigh the costs of such a migration.

ddaud · 2 months ago
I agree. That mental model is precisely why I don’t use LLMs for programming.
fassssst · 2 months ago
It’s another league for JS and python, yes.
pepinator · 2 months ago
This is where one can notice that LLMs are, after all, just stochastic parrots. If we don't have a reliable way to systematically test their outputs, I don't see many jobs being replaced by AI either.
mistrial9 · 2 months ago
> just stochastic parrots

this is flatly false for two reasons -- one is that all LLMs are not equal; the models and capacities are quite different, by design. Secondly, much of the standardized LLM testing probes sequence-of-logic or other "reasoning" capacity. Stating the stochastic-parrots fallacy is basically proof of not looking at the battery of standardized tests that are common in LLM development.

zozbot234 · 2 months ago
> undefined behavior that even a mid-level C++ dev could have easily caught (assuming iterator stability in a vector that was being modified)

This is not an AI thing; plenty of "mid-level" C++ developers could have made that same mistake. New code should not be written in C++.

(I do wonder how Claude AI does when coding Rust, where at least you can be pretty sure that your code will work once it compiles successfully. Or Safe C++, if that ever becomes a thing.)

sampullman · 2 months ago
It does alright with Rust, but you can't assume it works as intended if it compiles successfully. The issue with current AI when solving complex or large scale coding problems is usually not syntax, it's logical issues and poor abstraction. Rust is great, but the borrow checker doesn't protect you from that.

I'm able to use AI for Rust code a lot more now than 6 months ago, but it's still common to have it spit out something decent looking, but not quite there. Sometimes re-prompting fixes all the issues, but it's pretty frustrating when it doesn't.

bugglebeetle · 2 months ago
I haven’t tried with the most recent Claude models, but for the last iteration, Gemini was far better at Rust and what I still use to write anything in it. As an experiment, I even fed it a whole ebook on Rust design patterns and a small script (500 lines) and it was able to refactor to use the correct ones, with some minor back and forth to fix build errors!
steveklabnik · 2 months ago
I use Claude Code with Rust regularly and am very happy with it.
jeffreygoesto · 2 months ago
Go ahead and modify a Python dict while iterating over it, then.
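To be fair, CPython at least detects the dict case and raises an exception rather than silently corrupting anything. A small sketch (hypothetical example):

```python
# CPython guards dict iteration: changing the dict's size while
# iterating raises RuntimeError instead of invoking undefined behavior.
d = {"a": 1, "b": 2}
try:
    for k in d:
        d[k + "_copy"] = d[k]  # grows the dict mid-loop
except RuntimeError as e:
    print(e)  # dictionary changed size during iteration

# Safe pattern: snapshot the keys first, then mutate freely.
d = {"a": 1, "b": 2}
for k in list(d):
    d[k + "_copy"] = d[k]
print(sorted(d))  # ['a', 'a_copy', 'b', 'b_copy']
```

A loud crash versus C++'s silent UB is exactly the contrast being pointed at here.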
Spivak · 2 months ago
Hey now, let's not criticize the Anthropic CEO just yet. He made a totally not just pulling a number out of his ass prediction, but a prediction that's nonetheless falsifiable.

> that 50% of all entry-level white-collar jobs could be wiped out by artificial intelligence, causing unemployment to jump to 20% within the next five years

I'm not a betting woman but I feel extremely confident taking the other end of this bet.

tossandthrow · 2 months ago
The other end is that we continue at roughly 3% unemployment over the next 5 years.

I am curious to hear why you think that?

seadan83 · 2 months ago
False dichotomy: we don't have to continue at 3% for the 20% prediction to be wrong.

So far, I've seen jobs lost to tariffs. I've yet to see a job lost to AI. Observations are not evidence, but so far there is no evidence I see that shows AI to be a stronger macro economic force than say recessions, tariffs (trade wars) or actual wars.

rectang · 2 months ago
The Anthropic CEO wants companies to lay off workers and pay Anthropic to do the work instead. Is Anthropic capable enough to replace those workers, and will it actually happen? Such pronouncements should be treated with the skepticism you'd apply to any sales pitch.