Readit News
dcastonguay · 4 months ago
My current opinion is that AI is just a thing that is going to further amplify our existing tendencies (on a person-to-person basis). I've personally found it to be extremely beneficial in research, learning, and the category of "things that take time because of the stuff you have to mechanically go through and not because of the brainpower involved". I have been able to spend so much more time on things that I feel require more human thinking and on stuff I generally enjoy more. It has been wonderful and I feel like I've been on a rocket ship of personal growth.

I've seen many other people who have essentially become meatspace analogues for AI applications. It's sad to watch this happen while listening to many of the same people complain about how AI will take their jobs, without realizing that they've _given_ AI their jobs by ensuring that they add nothing to the process. I don't really understand yet why people don't see that they are doing this to themselves.

password54321 · 4 months ago
The post was largely about young people who are growing up with these tools and are still at the stage of developing their habits and tendencies. Personally I am glad I learnt programming before LLMs, even if it meant tedious searches on Stack Overflow, because I didn't feel like I was coming up against a wave of new technology when it came to future job searches. Having done so, I can understand and appreciate the intrinsic value of learning these things, but western culture is largely about extrinsic values, which may lead to future generations missing out on learning certain skills.
jstanley · 4 months ago
Personally I am glad I learnt programming before StackOverflow! Precisely because it meant I had to learn to figure things out myself.

I still use StackOverflow and LLMs, but if those things were available when I was learning I would probably not have learnt as much.

agentcoops · 4 months ago
I agree with you on the question of extrinsic values and do not envy people who are starting college right now, trying to make decisions about an extraordinarily unclear future. I recently became a father and I try to convince myself that in eighteen years we'll at least finally know whether it has all been hype or not.

However, on the intrinsic value of these new tools when developing habits and skills, I just think about the irreplaceable role that the (early) internet played in my own development and how incredible an LLM would have been. People are still always impressed whenever a precocious high schooler YouTubes his way to an MVP SaaS launch -- I hope and expect the first batch of LLM-accompanied youth to emerge will have set their sights higher.

Deleted Comment

mnky9800n · 4 months ago
It fulfills Steve Jobs' promise that a computer is a bicycle for your mind. It is crazy to me that all these people think they are going to lose their ability to think. If you didn't lose your ability to think to scrolling social media, then you aren't going to lose it to AI. However, I think a lot of people did lose their ability to think by scrolling social media, and that is problematic. What people need to realize is that they have agency over what they put in their minds, and they probably shouldn't take in massive amounts of algorithmically determined content without first considering whether it's going to push them in a particular direction of beliefs, purchases, or lifestyle choices.
mtillman · 4 months ago
1. https://www.researchgate.net/publication/255603105_The_Effec...

2. https://pubmed.ncbi.nlm.nih.gov/25509828/

3. https://www.researchgate.net/publication/392560878_Your_Brai...

I’m pretty convinced it should be used to do things humans can’t do instead of things humans can do well with practice. However, I’m also convinced that Capital will always rely on Labor to use it on their behalf.

jhbadger · 4 months ago
Being a "bicycle for the mind" is a fine thing for technology to be. The problem is that, just as with bicycles, that's too much work for a lot of people, and they would prefer "cars for the mind" in which they have to do nothing.
MattRix · 4 months ago
It’s less like a bicycle for the mind and more like a bus. Sure, you’re gonna get there quickly, but you’ll end up at the same place as a bunch of other people, and you won’t remember the route you took to get there.
AlecSchueler · 4 months ago
> If you didn't lose your ability to think to scrolling social media ...

Didn't we?

program_whiz · 4 months ago
Honestly I feel my skills atrophying if I rely on AI too much, and many people I interact with are much weaker still (trying to vibe code without ever learning). To take your analogy further, having a single speed bike lets you go further faster and doesn't have a big impact on your "skills" (physical in this case), but deferring all transport to cars, and then to an electric scooter so you never have to walk definitely will cause your endurance / physical ability to walk to disappear. We are creatures that require constant use of and exercise of our capabilities or the system crumbles. Especially for high-skill activities (language, piano, video games, programming), proficiency can wane extremely quickly without constant practice.
bgwalter · 4 months ago
Except that generative "AI" is a tricycle for the mind that prevents you from ever learning how to ride a bicycle.
zelphirkalt · 4 months ago
The problem is that many people are also incapable of adding much to the process. We had that kind of situation long before "AI". There are tons of gatekeepers and other types of people out there who are a net negative wherever they are employed, either by doing bad work that someone else needs to clean up, or by destroying work culture with shortsighted dogmatism about how things must work, clueless middle management, and so on. We need to leave these people somewhere. Maybe with "AI" a few more of them are revealed, but the problem stays the same: where do we put all these people? What task can we give them that is not dehumanizing, where they will be a net positive instead of a net negative to society? Or do we leave it all up to chance, hoping that one day they will find something they are actually good at that doesn't result in a net negative for humanity? What is the future for all these people? UBI and letting them figure it out doesn't look like such a bad idea.
catigula · 4 months ago
I think what you will find is that many people fundamentally don't care about their job for various reasons, chief among them most likely that they don't feel fairly compensated, and thus outsourcing their labor to AI isn't the fundamental identity transplant you think it is.
walleeee · 4 months ago
> I don't really understand yet why people don't see that they are doing this to themselves.

Maybe it has something to do with the purveyors of these products

- claiming they will take the jobs

- designing them to be habit-forming

- advertising them as digital mentats

- failing to advertise risks of using them

rimeice · 4 months ago
I'm undecided on this. Initially I was on the “this is bad, we’re outsourcing our thinking” bandwagon; now, after using AI for lots of different types of tasks for a while, I feel like I’ve generally learnt so much, so much more quickly. Would I recall it all without my new crutch? Maybe not, but I may not have learnt it in the first place without it.
zdragnar · 4 months ago
Think of it like alcohol.

Some people benefit from the relaxing effects of a little bit. It helped humanity get through ages of unsafe hygiene by acting as a sanitizer and preservative.

For some people, it is a crutch that inhibits developing safe coping mechanisms for anxiety.

For others it becomes an addiction so severe, they literally risk death if they don't get some due to withdrawal, and death by cirrhosis if they keep up with their consumption. They literally cannot live without it or with it, unless they gradually taper off over days.

My point isn't that AI addiction will kill you, but that what might be beneficial might also become a debilitating mental crutch.

JumpCrisscross · 4 months ago
> Think of it like alcohol

Better analogy is processed food.

It makes calories cheaper, it’s tasty, and in some circumstances (e.g. endurance sports or backpacking) it materially enhances what an ordinary person can achieve. But if you raise a child on it, to where it’s what they reach for by default, they’re fucked.

techjamie · 4 months ago
It comes down to how you use it, whether you're just getting an answer and moving on, or if you're getting an answer and then increasing your understanding on why that's the correct answer.

I was building a little roguelike-ish sort of game for myself to test my understanding of Raylib. I was using as few external resources as possible outside of the cheatsheet for functions, including avoiding AI initially.

I ran into my first issue when trying to determine line of sight. I was naively simply calculating a line along the grid and tagging cells for vision if they didn't hit a solid object, but this caused very inconsistent sight. I tried a number of things on my own and realized I had to research.

All of the search results I found used raycasting, but I wanted to see if my original idea had merit and didn't want to do raycasting. Finally, I gave up my search and gave Copilot a function to fill in, and it used Bresenham's line algorithm. It was exactly what I was looking for, and it also taught me why my approach didn't work consistently: there's a small margin of error when calculating a line across a grid, which Bresenham accounts for.
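For anyone curious what the difference is: naively rounding points along a float slope accumulates error and can tag different cells depending on direction, while Bresenham keeps an integer error term. A minimal sketch of the idea (not the commenter's actual game code; the function names and the `has_line_of_sight` helper are made up for illustration):

```python
def bresenham_line(x0, y0, x1, y1):
    """Yield the grid cells on the line from (x0, y0) to (x1, y1).

    Instead of rounding a floating-point slope, Bresenham tracks an
    integer error term, so cell selection is exact and consistent.
    """
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield (x0, y0)
        if x0 == x1 and y0 == y1:
            return
        e2 = 2 * err
        if e2 >= dy:  # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:  # step vertically
            err += dx
            y0 += sy

def has_line_of_sight(start, end, solid_cells):
    """True if no solid cell lies strictly between start and end."""
    for cell in bresenham_line(*start, *end):
        if cell not in (start, end) and cell in solid_cells:
            return False
    return True
```

For example, `list(bresenham_line(0, 0, 3, 1))` gives `[(0, 0), (1, 0), (2, 1), (3, 1)]`, and blocking any interior cell makes `has_line_of_sight` return `False`.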

Most people, however, won't take interest in why the AI answer might work. So while it can be a great learning tool, it can definitely be used in a brainless sort of way.

wizzwizz4 · 4 months ago
This reminds me of my experience using computer-assisted mathematical proof systems, where the computer's proof search pointed me at the Cantor–Schröder–Bernstein theorem, giving me a great deal of insight into the problem I was trying to solve.

That system, of course, doesn't rely on generative AI at all: all contributions to the system are appropriately attributed, etc. I wonder if a similar system could be designed for software?

juped · 4 months ago
Now imagine how much better

- the code

- your improvement in knowledge

would have been if you had skipped copilot and described your problem and asked for algorithmic help?

jacquesm · 4 months ago
You are not necessarily typical.
kannanvijayan · 4 months ago
Discussing this in terms of anecdotes about whether people will use these tools to learn or lean on them as mental crutches seems to be the wrong framing.

Stepping back - the way fundamental technology gets adopted by populations always has a distribution between those that leverage it as a tool, and those that enjoy it as a luxury.

When the internet blew up, the population of people that consumed web services dwarfed the population of people that became web developers. Before that when the microcomputer revolution was happening, there were once again an order of magnitude more users than developers.

Even old tech - such as written language - has this property. The number of readers dwarfs the number of writers. And even within the set of all "writers", if you were to investigate most text produced, you'd find that the vast majority of it falls into that long tail of insipid banter, gossip, diaries, fanfiction, grocery lists, overwrought teenage love letters, etc.

The ultimate consequences of this tech will depend on the interplay between those two groups - the tool wielders and the product enjoyers - and how that manifests for this particular technology in this particular set of world circumstances.

add-sub-mul-div · 4 months ago
Right. It doesn't matter how smart you still are if the majority of society turns into Idiocracy. Second, we're all at risk of blind spots in estimating how disciplined we're being about using the shortcut machine the right way. Smart people like me, you, grandparent aren't immune to that.
jrflowers · 4 months ago
> the “this is bad, we’re outsourcing our thinking”

> Would I recall it all without my new crutch? Maybe not

This just seems like you’ve shifted your definition of “learning” to no longer include being able to remember things. Like “outsourcing your thinking isn’t bad if you simply expect less from your brain” isn’t a ringing endorsement for language models

drbojingle · 4 months ago
Agreed. I've engaged with different tech since moving things along is now easier.
tkgally · 4 months ago
That’s the problem, I think: Using AI will make some people stupider overall, it will make other people smarter overall, and it will make many people stupider in some ways and smarter in other ways.

It would have been nice if the author had not overgeneralized so much:

https://claude.ai/share/27ff0bb4-a71e-483f-a59e-bf36aaa86918

I’ll let you decide whether my use of Claude to analyze that article made me smarter or stupider.

Addendum: In my prompt to Claude, I seem to have misgendered the author of the article. That may answer the question about the effect of AI use on me.

jacquesm · 4 months ago
> That’s the problem, I think: Using AI will make some people stupider overall, it will make other people smarter overall, and it will make many people stupider in some ways and smarter in other ways.

And then:

> It would have been nice if the author had not overgeneralized so much

But you just fell into the exact same trap. The effect on any individual is a reflection of that person's ability in many ways and on an individual level it may be all of those things depending on context. That's what is so problematic: you don't know to a fine degree what level of competence you have relative to the AI you are interacting with so for any given level of competence there are things that you will miss when processing an AI's output. The more competent you are the better you are able to use it. But people turn to AI when they are not competent and that is the problem, not that when they are competent they can use it effectively. And despite all of the disclaimers that is exactly the dream that the AI peddlers are selling you. 'Your brain on steroids'. But with the caveat that they don't know anything about your brain other than what can be inferred from your prompts.

A good teacher will be able to spot their own errors; here the pupil is supposed to be continuously on the lookout for utter nonsense the teacher utters with great confidence. And the closer it gets to being good at some stuff, the more leeway it will get for the nonsense as well.

zdw · 4 months ago
For an example of where this already happened, look at the number of people who don't have the slightest inkling of how to plan a route or navigate without GPS and mapping software.

Sure, having a real-time data source is nice for avoiding construction/traffic, and I'd use a real-time map, but going beyond that to be spoon fed your next action over and over leads to dependency.

port11 · 4 months ago
I always thought I had a good sense of navigation, until I realised it was getting quite bad.

More or less at the same time I found “Human Being: Reclaim 12 Vital Skills We’re Losing to Technology”, and the chapter on navigation hit me so hard I put the book down and refused to read any more until my navigation skills improved.

They're quite good now. I sit on the toilet staring at the map of my city, which I now know quite well. I no longer navigate with my phone.

I'm scared about the chapter on communication, which I'm going through right now.

I do think we're losing those skills, and offloading more thinking to technology will further erode your own abilities. Perhaps you think you'll spend more time in high-cognition activities, but will you? Will all of us?

johnnyanmac · 4 months ago
>Perhaps you think you'll spend more time in high-cognition activities, but will you? Will all of us?

When I can get a full time job again, I plan to. I was trying to learn how to 3d model before the tech scene exploded 3 years ago. I'm probably not trying to take back all 12 factors (I'm fine with where my writing is as of now, even if it is subpar), but I am trying to focus on what parts are important to me as a person and not take any shortcuts out of them.

1718627440 · 4 months ago
> and the chapter on navigation hit me so hard I

Don't leave us hanging, what were they saying?

WillAdams · 4 months ago
And for the societal cost of that see stories such as:

https://www.npr.org/2011/07/26/137646147/the-gps-a-fatally-m...

and for the way this mindset erodes values and skills:

https://www.marinecorpstimes.com/news/your-marine-corps/2018...

IshKebab · 4 months ago
The "societal cost" you linked literally says this happens with paper maps too. The cause is incorrect maps, not GPS.

(And of course, idiotic behaviour... but GPS doesn't cause that.)

Overall GPS has been an absolutely enormous benefit for society with barely any downside other than nostalgia for map reading.

heisenbit · 4 months ago
GPS allowed me to go where I would have been hesitant to venture before.
PessimalDecimal · 4 months ago
Can you draw this analogy out a bit? What hesitation has an LLM helped you overcome?
sotix · 4 months ago
My mom can't get to our lake house anymore without GPS even though she's been driving there for forty years. To be fair, a wild amount of construction and development has occurred around it in the past 15 years, but the road signs still point you in the right direction.
foxglacier · 4 months ago
It seems arbitrary to set the limit of how much pointless busywork we need as just the amount you're used to. Maybe maps are already dumbing it down too much and we should work out directions from a textual description of landmarks and compass bearings that's not specific to our route? In my opinion, dependency on turn-by-turn directions is fine because we do actually have the machines to do it for us. We're equally dependent on all sorts of useful things that free us to think about something actually useful that really can't be done for us. For example, consumer law means we can walk into a shop and buy something without negotiating a contract with each seller and working out all the ways we might get cheated.

Maybe the place to draw the line is different for each individual and depends on if they're really spending their freed-up time doing something useful or wasting it doing something unproductive and self-destructive.

AndrewKemendo · 4 months ago
But this is just proof that we ceded this ground long ago.

The ballad of John Henry was written in the 1840s

“Does the engine get rewarded for its steam?” That was the anti-automation line back then

If you gave up anything that was previously called “AI” we would not have computers, cars, airplanes or any type of technology whatsoever anywhere

johnnyanmac · 4 months ago
>“Does the engine get rewarded for its steam?” That was the anti-automation line back then

Sure, and it was wrong because it turns out the conductor does get rewarded. Given train strikes that had to be denied as recently as a few years ago, it's clear that's an essential role 150 years later.

With how they want to frame AI as replacing labor, who's being rewarded long term for its thinking? Who's really being serviced?

LPisGood · 4 months ago
Why do you consider relying on navigation apps to be over dependency? Planning a route is basically an entirely useless skill for most people, and if they do need to for some odd reason, it’s pretty easy.
zdw · 4 months ago
In my observation of others, it's not an easy skill unless you've done it before. Most people have little idea where they are and no idea what they would do next if the turn-by-turn tech failed on them. I'd argue it is useful the first time you want to take a scenic route, or optimize for things other than shortest travel time, like a loop bike route that avoids major streets.

Not to say that apps aren't useful in replacing the paper map, or doing things like adding up the times required (which isn't new - there used to be tables in the back of many maps with distances and durations between major locations).

crazygringo · 4 months ago
And so what? Why not be dependent?

I grew up with a glove box full of atlases in my car. On one job, I probably spent 30 minutes a day planning the ~4h of driving I'd do daily to different sites. Looking up roads in indexes, locating grid numbers, finding connecting roads spanning pages 22-23, 42-43, 62-63, and 64-65. Marking them and trying not to get confused with other markings I'd made over the past months. Getting to intersections and having no idea which way to turn because the angles were completely different from on the map (yes this is a thing with paper maps) and you couldn't see any road signs and the car behind you is honking.

What a waste of time. It didn't make me stronger or smarter or a better person. I don't miss it the same way I don't miss long division.

tdrz · 4 months ago
> It didn't make me stronger or smarter or a better person.

Yes, it did.

jncfhnb · 4 months ago
I don’t have an inkling of how to navigate. I don’t really see the problem.
1718627440 · 4 months ago
I just can't comprehend how people can accept that. I mean, sure, you can use your handheld computer, but if I didn't know where I am and what I need to do to get to where I intend to be, I would feel very alone, lost and abandoned, like Napoleon on Elba. In a completely foreign city, I often just look at a map before the journey for the general direction and then just keep walking, without thinking much. That works quite well, because street design is actually logical, and where it isn't, there are signs. Often you wouldn't even need a map: just look at the signs, and they will tell you.
dagmx · 4 months ago
Atrophy has really been an issue in my recent hiring cycles for good senior engineers.

80% of senior candidates I interview now aren’t able to do junior level tasks without GenAI helping them.

We’ve had to start doing more coding tests to weed out these skill gaps as a result, and I try to make my coding tests as indicative as possible of our real work and of the work they currently do.

But these people are struggling to work with basic data structures without an LLM.

So then I put coding aside, because maybe their skills are directing other folks. But no, they’ve also become dependent on LLMs to ideate.

That 80% is no joke. It’s what I’m hitting actively.

And before anyone says "well then let them use LLMs": no. Firstly, we’re making new technologies and APIs that LLMs really struggle with, even with purpose-trained models. Furthermore, if I’m doing that, then why am I paying for a senior? How are they any different from someone more junior or cheaper if they have become so atrophied?

Narciss · 4 months ago
I was actually thinking about this the other day while vibe coding for a side project.

I am a lead engineer, but I’ve been using AI in much of my code recently. If you were to ask me to code anything manually right now, I could do it, but it would take a bit to acclimate to writing code line by line. By “a bit”, I mean maybe a few days.

Which means that if we were to do a coding interview without LLMs, I would probably flop without me doing a bit of work beforehand, or at least struggle. But hire me regardless, and I would get back on track in a few days and be better than most from then on.

Careful not to lose talent just because you are testing for little used but latent capabilities.

dagmx · 4 months ago
In your scenario though, how do you avoid hiring based on blind faith?

How do I know you aren’t just a lead with a very good team to pick up the slack?

How do I separate you from the 20 other people saying they’re also good?

Why would I hire someone who can’t hit the ground running faster than someone else who can?

Furthermore, why would I hire someone who didn’t prepare at all for an interview, even if just mentally?

How do you avoid just hiring based on vibes? Bear in mind every candidate can claim they’re part of impressive projects so the resume is often not your differentiator.

rurp · 4 months ago
Expecting senior job applicants to have regained basic coding skills seems reasonable to me. I would be skeptical of an applicant who hadn't made the level of effort you're describing before applying.
OptionOfT · 4 months ago
The problem becomes distinguishing someone like you, who has the skill but hasn't recently used it vs someone who doesn't have the skill.
woooooo · 4 months ago
Leetcode was always a skill mostly practiced for interviews though, right? Arguably it's a better signal now, in the era of vibecoding, that someone can do it themselves if they have to. It used to be "yeah, of course I'm responsible in my job, I use a library for this stuff". But in this era, maybe performative leetcode has more value as a signal that you can really guide the AI.
roxolotl · 4 months ago
Isn’t the solution to tell the interviewee that they will have to write some code without llm support? In the case of someone like you I’d hope they’d take that as notice to spend a tiny bit of time getting back up to speed. If it really is just a day or two then it shouldn’t be an issue
risyachka · 4 months ago
>> and I would get back on track in a few days

That's the issue. How can one be sure you can actually get back on track - or that you were never on the track in the first place and are just an AI slopper?

That's why in an interview you need to show skills. And on the actual job you can use AI.

Deleted Comment

sktrdie · 4 months ago
> then why am I paying for a senior ?

Because they know how to talk to the AI. That's literally the skill that differentiates seniors from juniors at this point. And it's a skill you gain only by knowing the problem space and having banged your head against it multiple times.

zelphirkalt · 4 months ago
The actual skill is having knowledge and knowing when to not trust the AI, because it is hallucinating or bullshitting you. Having worked on enough projects to have a good idea about how things should be structured and what is extensible and what not. What is maintainable and what not. The list goes on and on. A junior rarely has these skills and especially not, when they rely on AI to figure things out.
johnnyanmac · 4 months ago
>That's literally the skill that differentiates seniors from juniors at this point.

If your product has points where LLMs falter, this is a useless metric.

>and having banged your head at it multiple times.

And would someone who relied on an LLM be doing this?

htrp · 4 months ago
Except most junior devs will be better than sr devs at wholehearted ai adoption
spaceballbat · 4 months ago
Grade inflation has spilled over into the corporate world. I’ve interviewed people titled “principal” who would barely qualify as “senior” a few decades ago.
ben_w · 4 months ago
"Senior" was already a weird title, given it could have been anything from 3-10 years of experience even back in 2021.

I've seen people with 10 years experience blindly duplicate C++ classes rather than subclass them, and when questioned they seemed to think the mere existence of `private:` access specifiers justified it. There were two full time developers including him, and no code review, so it's not like any of the access specifiers even did anything useful.

theshrike79 · 4 months ago
There are specific cultures where titles and steady title progression are Really Important.

Coming from a place where they definitely aren't, I found this hilarious.

I've had meetings with Principal Architects with less experience than me (title: Backend Programmer).

Bigger organisations really should standardise their titles to specific experience/responsibility/capability milestones so people from other sides of the org can use the title to estimate the skill level of the other person they're talking with.

k__ · 4 months ago
It's a matter of opinion what you should know and what you can easily google or ask an LLM.
dagmx · 4 months ago
If someone needs to continuously google how to use the basic data structures in a language they use every day, then I worry about their ability for knowledge retention as a whole.
alchemism · 4 months ago
Pair a senior with an agent LLM, pair a junior with an agent LLM, measure the output over some cycles. You will find your answer in the data, one way or another.
dagmx · 4 months ago
Truthfully, in my experience, they both end up performing near the level of the LLM. It’s an averaging factor not an uplifting one.
drdaeman · 4 months ago
> aren’t able to do junior level tasks without GenAI helping them

I’m assuming “unable” means not complete lack of knowledge how to approach it, but lack of detail knowledge. E.g. a junior likely remembers some $algorithm in detail (from all the recent grind), while a senior may no longer do so but only know that it exists, what properties it has (when to use, when to not use), and how to look it up.

If you don’t think of something regularly, memory of that fades away, becomes just a vague remembrance, and you eventually lose that knowledge - that’s just how we are.

However, consider that not doing junior-level tasks means it was unnecessary for the position and the position was about doing something else. It’s literally a matter of specialization and nomenclature mismatch: “junior” and “senior” are frequently not different levels of same skill set, but somewhat different skill sets. A simple test: if at your place you have juniors - check if they do the same tasks as seniors do, or if they’re doing something different.

Plus the title inflation - demand shifts and title-catching culture had messed up the nomenclature.

dagmx · 4 months ago
I don’t test rote algorithmic knowledge in our coding tests. Candidates can pick their language

Candidates can ask for help, and can google/LLM as well if they can't recall methods. I just do not allow them to paste the whole problem into an LLM; I need to see them work through the problem themselves so I can see how they think and approach issues.

This therefore also requires that they know the language they picked well enough to do simple tasks, including iterating over iterables.

wyre · 4 months ago
Why not let them use LLMs? LLMs are a tool for the job, so you want to find the candidate who can most effectively use that tool in your role. If LLMs struggle with your technologies and APIs, then a developer who can use an LLM for development with good results should be a desirable thing, right?

Can the senior developer understand and internalize your codebase? Can they solve complex problems? If you're paying them to be a senior developer, it likely isn't worth their time to concern themselves with basic data structures when they are trying to solve more complex problems.

johnnyanmac · 4 months ago
You literally asked the question the GP pre-emptively answered. Read their last paragraph.

>Can the senior developer understand and internalize your codebase?

Would you trust someone who needs LLMs in the hiring phase to be able to do these higher-order tasks if they can't nail down the fundamentals?

athrowaway3z · 4 months ago
What do you consider senior?

We seem to have had significant title inflation in the last 5 years, and everybody seems to be at least a senior.

With no new junior positions opening up, I'm not even sure I blame them.

dagmx · 4 months ago
A senior to me is someone who can tackle complex problems without significant supervision , differentiated from a lead in that they still need guidance on what the overall tasks are. They should be familiar enough with their tech stacks that I can send them to meetings (after the requisite on-boarding time) to represent the team if needed (though I try and not overload my engineers with meetings) and answer feasibility questions for our projects. They don’t need constant check ins on their work. I should be able to bounce ideas of them readily on how to approach bigger problems.

A junior is someone who needs more granular direction or guidance. I’d only send them to meetings paired with a senior. They need close-to-daily check-ins on their work. I include them in all the same things the seniors do for exposure, but do not expect the same level of technical strength at this point in their careers.

I try not to focus on years of experience necessarily, partly because I was supervising teams at large companies very early in my career.

selinkocalar · 4 months ago
This is a real concern. We use AI for a lot of our development work now and I've noticed people will be less likely to dig deep into problems before asking Claude.

The trick is using AI to handle the grunt work while still maintaining critical-thinking skills. But it's so easy to slip into autopilot mode.

rcxdude · 4 months ago
Are you sure you're not just mostly seeing the candidates who are using LLMs to pass through the earlier screening phases with flying colors despite their lack of skills? (i.e., they haven't atrophied, they just weren't very good to begin with). There's always a lot of unqualified applicants to a job but LLMs can make them way more effort to filter out.
dagmx · 4 months ago
We have the coding test as the second phase now for specifically that reason, and might have more once they’re doing the full interview set.
bongodongobob · 4 months ago
That's what I said about compiled languages. No one knows how to optimize assembly anymore, they just let the compiler do it.
erichocean · 4 months ago
> Firstly, we’re making new technologies and APIs that LLMs really struggle

LLMs absolutely excel at this task.

Source: Me, been doing it since early July with Gemini Pro 2.5 and Claude Opus.

So good, in fact, that I have no plans to hire software engineers in the future. (I have hired many over my 25 years developing software.)

dagmx · 4 months ago
So you’ve made a decision based on three months of use.

I am legitimately interested in your experience though. What are you creating where you can see the results in that time frame to make entire business decisions like that?

I would really like to see those kinds of productivity gains myself.

mitthrowaway2 · 4 months ago
Despite the magazine being named the Argument, this article falls into the typical pattern of claiming "the problem isn't X, it's Y", and then spending the rest of the article body building support for Y, but never once making any argument that refutes X.
svat · 4 months ago
Typically, such articles can be charitably interpreted as saying "I worry more about Y than X", rather than as literally making two separate claims (that X isn't a problem, and that Y is). So as a reader if you're trying to get value out of the article, you can focus on evaluating Y, and ignore X that the article does not address.

In this particular case, the article is even explicit about this:

> While we have no idea how AI might make working people obsolete at some imaginary date, we can already see how technology is affecting our capacity to think deeply right now. And I am much more concerned about the decline of thinking people than I am about the rise of thinking machines.

So the author is already explicitly saying that he doesn't know about X (whether AI will take jobs), but prefers to focus in the article on Y (“the many ways that we can deskill ourselves”).

mitthrowaway2 · 4 months ago
I agree that may be the charitable interpretation. But so often these points are phrased in a way that directly attempts to dismiss someone else's concerns ("the real danger with AI isn't the thing you're worried about, it's the thing I'm worried about!"). I feel like they shouldn't be doing that unless they're going to present some kind of reasoning that supports both pillars of that claim.

There's nothing stopping them from simply saying "an under-recognized problem with AI is Y, let me explain why it should be a concern".

Framing the article as an attack on X, when in fact the author hasn't put even five minutes of thought into evaluating X, is just a clickbait strategy. People go in expecting something that undermines X, but they don't get it.

jncfhnb · 4 months ago
Refuting X is not necessary if it’s a subjective perspective

mitthrowaway2 · 4 months ago
That would only come close to working as an argument if X and Y were mutually exclusive, but this article doesn't even bother to make the case that they are, nor is there any reason why they would be.
daxfohl · 4 months ago
The author may have hit on the answer without realizing it. He's in the gym doing pull-ups. Is his lat strength necessary for some important part of his survival? Highly unlikely. Pre-20th century, when most work was highly physical, would he have gone out after work to research and test out efficient lat exercises? Probably not either.

If we're not using our brains for work, then maybe it'll actually increase our deliberateness to strengthen them at home. In fact, I can't imagine that not to be the case. I mean, it's possible we turn into a society of mindless zombies, but fundamentally, at least among some reasonable percentage of the population, I have to believe there is an innate desire to learn and understand and build, and a relationship-building aspect to it as well.

legacynl · 4 months ago
> The author may have hit on the answer without realizing it. He's in the gym doing pull-ups. Is his lat strength necessary for some important part of his survival? Highly unlikely. Pre-20th century, when most work was highly physical, would he have gone out after work to research and test out efficient lat exercises? Probably not either.

Yeah, but don't you kind of prove his point though? When we created machines to do all the manual labor for us, we got fat and unhealthy. Only recently have we really begun to understand how important exercise (manual labor) is for us.

I fear that the obvious result of a laissez-faire attitude is that in 20-30 years we find out there was actual benefit to using our minds: reading, writing, etc.

In the case of our muscles, we were lucky that we didn't need them in order to identify the health benefits of exercise, so we could come up with a solution. In the case of our brains, we might not be as lucky, so maybe be prudent and assume it's something beneficial?

kristianc · 4 months ago
I always think, for pieces like these that claim atrophy: well, yes, but what about the things you would never even have tried without it? The barrier to many things isn't becoming lazy once you're already halfway proficient; it's getting started in the first place. AI dramatically lowers the getting-started cost of almost everything.

If the argument is that people shouldn't be able to get started on those things without having to slog through a lot of mindless drudgework, then people should be honest about that rather than dress it up in analogies.

legacynl · 4 months ago
> but what about the things that you would have never even tried without it

The problem with that is that there's a lot of cases where a total newbie engaging with some subject could lead to problems. They have a false confidence in their abilities, while not knowing what they don't know.

What if you want to try chemistry and ask the AI what you need to know? Since you don't know anything about chemistry, you don't know if the answers are complete or correct. You also don't know about the dangers you need to ask about, what precautions to take, etc.

The same could be said about many different subjects: rock-climbing, home-improvement, electrical work, car maintenance, etc.

You might argue that it would still be perfect for low-risk subjects, but how would a total newbie reliably assess the risks of anything they know nothing about?

kristianc · 4 months ago
Sure, if someone’s first use case for AI is synthesising chlorine gas in their shed, that’s a separate issue.

Most of us are talking about writing, coding, or analysis, not hazardous materials though.

simianwords · 4 months ago
Why does everyone talk about this part of the issue but never the other one? Society can’t progress if we can’t delegate the boring stuff and work on complicated stuff. How else will it progress?

Do you think we could have progressed to this level if we were still calculating stuff by hand? Log tables??

walleeee · 4 months ago
"This level" presumably being when the accumulation of progress past at last overwhelms us in all its complication?
vivzkestrel · 4 months ago
who decides what is boring stuff? there are prompts on github to build entire applications from scratch. if you are going to go in this direction, what happens to the critical thinking pathways inside your brain that no longer have to work?
simianwords · 4 months ago
Who decided doing arithmetic is boring and we need calculators?