stevage · 3 months ago
I guess we're all trying to figure out where we sit along the continuum from anti-AI Luddite to all-in.

My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

I'm happy to use Copilot to auto-complete, and ask a few questions of ChatGPT to solve a pointy TypeScript issue or debug something, but stepping back and letting Claude or something write whole modules for me just feels sloppy and unpleasant.

tobr · 3 months ago
I tried Cursor again recently. Starting with an empty folder, asking it to use very popular technologies that it surely must know a lot about (Typescript, Vite, Vue, and Tailwind). Should be a home run.

It went south immediately. It was confused about the differences between Tailwind 3 and 4, leading to a broken setup. It wasn't able to diagnose the problem, and just got more confused even with patient guidance from me. Worse, it was unable to apply basic file diffs or deletes reliably. In trying to diagnose whether this is a known issue with Cursor, it decided to search for bug reports - great idea, except it tried to search the codebase for them, which, I remind you, only contained code that it had written itself over the past half hour or so.

What am I doing wrong? You read about people hyping up this technology - are they even using it?

EDIT: I want to add that I did not go into this antagonistically. On the contrary, I was excited to have a use case that I thought must be a really good fit.

windows2020 · 3 months ago
My recent experience has been similar.

I'm seeing that the people hyping this up aren't programmers. They believe the reason they can't create software is that they don't know the syntax. They whip up a clearly malfunctioning and incomplete app with these new tools and are amazed at what they've created. The deficiencies will sort themselves out soon, they believe. And then programmers won't be needed at all.

cube2222 · 3 months ago
Just trying to help explain the issues you've been hitting, not to negate your experience.

First, you might've been using a model like Sonnet 3.7, whose knowledge cutoff doesn't include Tailwind 4.0. The model should know a lot about the tech stack you mentioned, but it might not know the latest major revisions if they were very recent. If that is the case (you used an older model), then you should have better luck with a model like Sonnet 4 / Opus 4 (or by providing the relevant updated docs in the chat).

Second, Cursor is arguably not the top-tier hotness anymore. Since it's flat-rate subscription based, its default mode has to be pretty thrifty with the tokens it uses. I've heard (I don't use Cursor) that Max Mode[0] improves on that (there you pay based on tokens used), but I'd recommend just using something like Claude Code[1], ideally with its VS Code or IntelliJ integration.

But in general, new major versions of SDKs or libraries will give you a worse experience. Stable software fares much better.
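
To make the Tailwind case concrete: the setup changed substantially between 3 and 4, so a model with a 3.x-era knowledge cutoff tends to keep emitting the old-style config. Roughly (a from-memory sketch, so treat the details as approximate):

    // Tailwind 3 style: a JS config file plus @tailwind directives in your CSS.
    // tailwind.config.js
    module.exports = {
      content: ["./index.html", "./src/**/*.{vue,js,ts}"],
      theme: { extend: {} },
      plugins: [],
    };
    // Tailwind 4 style: usually no JS config at all; you add the
    // @tailwindcss/vite plugin and a single `@import "tailwindcss";`
    // line in your CSS instead.

An older model will happily mix the two, which is exactly the kind of broken setup described above.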

Overall, I find AI extremely useful, but it's hard to know which tools and even which ways of using these tools are the current state of the art without being immersed in the ecosystem. And those are changing pretty frequently. There's also a ton of over-the-top, overhyped marketing, of course.

[0]: https://docs.cursor.com/context/max-mode

[1]: https://www.anthropic.com/claude-code

varjag · 3 months ago
I had some success doing two front-end projects: one in 2023 using a local Mixtral 7B model and one just this month with Codex. I am an experienced programmer (35 years coding, 28 professionally). I hate web design and I never cared to learn JavaScript.

The first project was a simple touch based control panel that communicates via REST/Websocket and runs a background visual effect to prevent the screen burn-in. It took a couple of days to complete. There were often simple coding errors but trivial enough to fix.

The second is a 3D wireframe editor for distributed industrial equipment site installations. I started by just chatting with o3 and got the proverbial 80% within a day. It includes orbital controls, manipulation and highlighting of selected elements, and property dialogs. Very soon it became too unwieldy for the laggy OpenAI chat UI, so I switched to Codex to complete most of the remaining features.

My way with it is mostly:

- ask for no fancy frameworks: my projects are plain JavaScript, which I don't really know; it makes no sense to pile React and TypeScript, which I'm even less familiar with, on top of it

- explain what I want by defining the data structures I believe are the best fit for the internal representation (see the sketch after this list)

- change and test one thing at a time, implement a test for it

- split modules/refactor when a subsystem gets over a few hundred LOC, so that the reasoning can remain largely localized and hierarchical

- make o3 write an LLM-friendly general design document and a description of each module. Codex uses it to check the assumptions.
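
To illustrate the data-structure point, here is a made-up sketch (names invented, not the actual project) of the kind of internal representation I'd write down before asking for any features:

    // Hypothetical wireframe-editor state, spelled out as plain-JS JSDoc
    // typedefs so both the model and I can reason about it up front.
    /** @typedef {{id: string, x: number, y: number, z: number}} Node */
    /** @typedef {{id: string, from: string, to: string}} Edge */
    /** @typedef {{nodes: Node[], edges: Edge[], selectedIds: string[]}} Scene */

Everything else (rendering, selection, property dialogs) is then specified as operations over that structure.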

As mentioned elsewhere, the code is mediocre at best, and it feels a bit like comparing C compiler output to my hand-written assembly back in the day. It works tho, and it doesn't look to be terribly inefficient.

gs17 · 3 months ago
> It was confused about the differences between Tailwind 3 and 4

I have the same issue with Svelte 4 vs 5. Adding some notes to the prompt used for that project sort of helps.
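
For example, a couple of lines like these in the project notes/rules (adjust to your setup):

    This project uses Svelte 5 with runes ($state, $derived, $props).
    Do not use Svelte 4 patterns such as `export let` props or `$:` reactive statements.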

steveklabnik · 3 months ago
Tailwind 4 has been causing Claude a lot of problems for me, especially when upgrading projects.

I managed to get it to do one just now, but it struggled pretty hard, and still introduced some mistakes I had to fix.

tomtomistaken · 3 months ago
> What am I doing wrong?

Wrong tool..

pandler · 3 months ago
In addition to not enjoying it, I also don’t learn anything, and I think that makes it difficult to sustain anything in the middle of the spectrum between “I won’t even look at the code; vibes only” and advanced autocomplete.

My experience has been that it's difficult to mostly vibe with an agent but still be an active participant in the codebase. That feels especially true when I'm using tools, frameworks, etc. that I'm not already familiar with. At the same time, the vibing part of the process doesn't give me any deeper understanding or experience I could use to help guide or troubleshoot. Same thing for maintaining existing skills.

daxfohl · 3 months ago
It's like trying to learn math by reading vs by doing. If all you're doing is reading, it robs you of the depth of understanding you'd gain by solving things yourself. Going down wrong paths, backtracking, finally having that aha moment where things click, is the only way to truly understand something.

Now, for all the executives who are trying to force-feed their engineering team to use AI for everything, this is the result. Your engineering staff becomes equivalent to a mathematician who has never actually done a math problem, just read a bunch of books and trusted what was there. Or a math tutor for your kid who "teaches" by doing your kid's homework for them. When things break and the shit hits the fan, is that the engineering department you want to have?

timr · 3 months ago
> My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

I am the opposite. After a few decades of writing code, it wasn't "fun" to write yet another file parser or hook widget A to API B -- which is >99% of coding today. I moved into product management because while I still enjoy building things, it's much more satisfying/challenging to focus on the higher-level issues of making a product that solves a need. My professional life became writing specs, and reviewing code. It's therefore actually kind of fun to work with AI, because I can think technically, but I don't have to do the tedious parts that make me want to descend into a coma.

I couldn't care less whether I'm writing a spec for a robot or a spec for a junior front-end engineer. They're both going to screw up, and I'm going to have to spend time explaining the problem again and again... at least the robot never complains and tries really hard to do exactly what I ask, instead of slacking off, doing something more intellectually appealing, getting mired in technical complexity, etc.

prmph · 3 months ago
After like the 20th time explaining the same (simple) problem that the AI is unable to fix, you just might change your mind [1]. At that point you just have to jump in and get dirty.

Do this a few times and you start to realize it's kind of worse than just being in the driver's seat for the coding right from the start. For one thing, when you jump in, you are working with code that is probably architected quite differently from the way you'd do it, and you haven't developed the deep mental model needed to work with the code effectively.

Not to say the LLMs are not useful, especially in agent mode. But the temptation is always to trust and task them with more than they can handle. Maybe we need an agent that limits the scope of what you can ask it to do, to keep you involved at the necessary level.

People keep thinking we are at the level where we can forget about the nitty gritty of the code and rise up the abstraction level, when this is nothing close to the truth.

[1] Source: me last week trying really hard to work like you are talking about with Claude Code.

dlisboa · 3 months ago
You touched on the significant thing that separates most of the AI code discourse into two extremes: some people just don't like programming and see it as a simple means to an end, while others love the process of actually crafting code.

Similar to the differences between an art collector and a painter. One wants the ends, the other desires the means.

icedchai · 3 months ago
Same. After doing this for decades, so much programming work is tedious. Maybe 5% to 20% of the work is interesting. If I can get a good chunk of that other 80%+ built out quickly with a reasonable level of quality, then we're good.
getnormality · 3 months ago
Your use case seems relatively well-suited to AI. Even an unreliable technology like LLMs could be useful for automating a task that is mundane, well-defined, and easy to review for accuracy.
kiitos · 3 months ago
> After a few decades of writing code, it wasn't "fun" to write yet another file parser or hook widget A to API B -- which is >99% of coding today.

If this is your experience of programming, then I feel for you, my dude, because that sucks. But it is definitely not my experience of programming. And so I absolutely reject your claim that this experience represents "99% of programming" -- that stuff is rote and annoying and automate-able and all that, no argument, but it's not what any senior-level engineer worth their salt is spending any of their time on!

xg15 · 3 months ago
> that I don't entirely understand

That's the bigger issue in the whole LLM hype that irks me. The tacit assumption that actually understanding things is now obsolete, as long as the LLM delivers results. And if it doesn't we can always do yet another finetuning or try yet another magic prompt incantation to try and get it back on track. And that this is somehow progress.

It feels like going back to pre-Enlightenment times and collecting half-rationalized magic spells instead of having a solid theoretical framework that lets you reason about your systems.

AnimalMuppet · 3 months ago
Well... I'm torn here.

There is a magic in understanding.

There is a different magic in being able to use something that you don't understand. Libraries are an instance of this. (For that matter, so is driving a car.)

The problem with LLMs is that you don't understand, and the stuff that it gives you that you don't understand isn't solid. (Yeah, not all libraries are solid, either. LLMs give you stuff that is less solid than that.) So LLMs give you a taste of the magic, but not much of the substance.

Kiro · 3 months ago
I'm the opposite. I haven't had this much fun programming in years. I can quickly iterate, focus on the creative parts and it really helps with procrastination.
9d · 3 months ago
Considering the actual Vatican literally linked AI to the apocalypse, and did so in the most official capacity[1], I don't think avoiding AI has to be Luddism.

[1] Antiqua et Nova p. 105, cf. Rev. 13:15

9d · 3 months ago
Full link and relevant quote:

https://www.vatican.va/roman_curia/congregations/cfaith/docu...

> Moreover, AI may prove even more seductive than traditional idols for, unlike idols that “have mouths but do not speak; eyes, but do not see; ears, but do not hear” (Ps. 115:5-6), AI can “speak,” or at least gives the illusion of doing so (cf. Rev. 13:15).

It quotes Rev. 13:15 which says (RSVCE):

> and it was allowed to give breath to the image of the beast so that the image of the beast should even speak, and to cause those who would not worship the image of the beast to be slain.

9d · 3 months ago
I emphasize that it's the Vatican because they are the most theologically careful of all. This isn't some church with a superstitious pastor who jumps to conclusions about the rapture at a dime drop. This is the Church which is hesitant to say literally anything about the book of Revelation at all, which is run by tired men who just want to keep the status quo so they can hopefully hit retirement without any trouble.
doctoboggan · 3 months ago
> and then have to try to review

I think (at least by the original definition[0]) this is not vibe coding. You aren't supposed to be reviewing the code, just execute and pray.

[0]: https://xcancel.com/karpathy/status/1886192184808149383

throwawayk7h · 3 months ago
Perhaps we're feeling too much pressure to pick an extreme stance. Can we firmly establish a middle-ground party? I feel like a lot of people here fit into a norm of "AI is very useful, but may upend many people's lives, and not currently suitable for every task" category. (There may be variations of course depending on how worried you are about FOOM.)

We need a catchy name.

rowanseymour · 3 months ago
This was my experience until recently... now I'm quite enjoying assigning small PRs to Copilot and working through them via the GitHub PR interface. It's basically like managing a junior programmer, but cheaper and faster. Yes, that's not as much fun as writing code, but there isn't time for me to write all the code myself.
cloverich · 3 months ago
Can you elaborate on the "assign PRs" bit?

I use Cursor / ChatGPT extensively and am ready to dip into more of an issue / PR flow but not sure what people are doing here exactly. Specifically for side projects, I tend to think through high level features, then break it down into sub-items much like a PM. But I can easily take it a step further and give each sub issue technical direction, e.g. "Allow font customization: Refactor tailwind font configuration to use CSS variables. Expose those CSS variables via settings module, and add a section to the Preferences UI to let the user pick fonts for Y categories via dropdown; default to X Y Z font for A B C types of text".
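
(To make that example concrete, the Tailwind piece of it is tiny; a hypothetical sketch with invented names, not anything an agent actually produced for me:)

    // tailwind.config.js (v3 style): point the font utilities at CSS variables
    // so a settings module can swap fonts at runtime by updating the variables.
    module.exports = {
      theme: {
        extend: {
          fontFamily: {
            body: ["var(--font-body)", "sans-serif"],
            heading: ["var(--font-heading)", "serif"],
          },
        },
      },
    };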

Usually I spend a few minutes discussing w/ ChatGPT first, e.g. "What are some typical idioms for font configuration in a typical web / desktop application?" Once I get that idea solidified I'd normally start coding, but I could just as easily hand this part off for simple-ish stuff and start ironing out the next feature. In the time I'd usually have planned the next 1-2 months of side project work (which happens, say, in 90-minute increments 2x a week), the agent could knock out maybe half of them. For a project I'm familiar with, I expect I can comfortably review and comment on a PR with much less mental energy than it would take to re-open my code editor for my side project, after an entire day of coding for work + caring for my kids. Personally I'm pretty excited about this.

_aavaa_ · 3 months ago
> Luddite

The luddites were not against progress or the technology itself. They were opposed to how it was used, for whose benefit, and for whose loss [0].

The AI-Luddite position isn't anti-AI; it's (among other things) anti the mass copyright theft from creators to train something with the explicit goal of putting them out of a job, without compensation. All while producing an objectively inferior product but passing it off as a higher-quality one.

[0]: https://www.hachettebookgroup.com/titles/brian-merchant/bloo...

Garlef · 3 months ago
> Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun.

Same for me. But maybe that's ultimately a UX issue? And maybe things will straighten out once we figure out how to REALLY do AI-assisted software development.

As an analogy: most people wouldn't want to dig through machine code / compiler output. At least not without proper tooling.

So: Maybe once we have good tools to understand the output it might be fun again.

(I guess this would include advances in structuring/architecting the output)

username223 · 3 months ago
> As an analogy: most people wouldn't want to dig through machine code / compiler output. At least not without proper tooling.

My analogy is GUI builders from the late 90s that let you drag elements around, then generated a pile of code. They worked sometimes, but God help you if you wanted to do something the builder couldn't do, and had to edit the generated code.

Looking at compiler output is actually more pleasant. You profile your code, find the hot spots, and see that something isn't getting inlined, vectorized, etc. At that point you can either convince the compiler to do what you want or rewrite it by hand, and the task is self-contained.

cratermoon · 3 months ago
Tim doesn't address this in his essay, so I'm going to harp on it: "AI will soon be able to...". That phrase is far too load-bearing. The part of AI hype that says, "sure, it's kinda janky now, but this is just the beginning" has been repeated for 3 years now, and everything has been just around the corner the entire time. It's the first step fallacy, saying that if we can build a really tall ladder now, surely we'll soon be able to build a ladder tall enough to reach the moon.

The reality is that we've seen incremental and diminishing returns, and the promises haven't been met.

layer8 · 3 months ago
The compiler analogy doesn’t quite fit, because the essential difference is that source code is (mostly) deterministic and thus can be reasoned about (you can largely predict in detail what behavior code will exhibit even before writing it), which isn’t the case for LLM instructions. That’s a major factor why many developers don’t like AI coding, because every prompt becomes a non-reproducible, literally un-reasonable experiment.
bitwize · 3 months ago
I think that AI assistance in coding will become enjoyable for me once the technology exists for AI to translate my brainwaves into text. Then I could think my code into the computer, greatly speeding up the OODA loop of programming.

As it is, giving high-level directives to an LLM and debugging the output seems like a waste of my time and a hindrance to my learning process. But that's how professional coding will be done in the near future. 100% human written code will become like hand-writing a business letter in cursive: something people used to be taught in school, but no one actually does in the real world because it's too time-consuming.

Ultimately, the business world only cares about productivity and what the stopwatch says is faster, not whether you enjoy or learn from the process.

9d · 3 months ago
> It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

Writing code is a really fun creative process:

1. Conceive an exciting and useful idea

2. Comprehend the idea fully from its top to its bottom

3. Translate the idea into specific instructions utilizing known mechanics

4. Find the beautiful middleground between instruction and abstraction

5. Write lots and lots of code!

6. Find where your conception was flawed and fix it as necessary.

7. Repeat steps 2-6 until the thing works just as you dreamed or you give up.

It's maybe the most fun and exciting mixture of art and technology ever.

9d · 3 months ago
I forgot to say the second part:

Using AI is the same as code-review or being a PM:

1. Have an ideal abstraction

2. Reverse engineer an actual abstraction from code

3. Compare the two and see if they match up

4. If they don't, ask the author to change or fix it until it does

5. Repeat steps 2-4 until it does

This is incredibly not fun, because it's not a creative process.

You're essentially just an accountant or calculator at this point.

tsumnia · 3 months ago
Sounds like me 20 years ago learning Java

It's new tech. We're all "giraffes on roller skates" whenever we start something new. Find out where you can use it in your life and use it. Where you can't or don't want to, don't. Try not to get deterred by analysis paralysis when there's something that doesn't make sense. In time, you'll get it.

gs17 · 3 months ago
> My main issue with vibe coding etc is I simply don't enjoy it.

I almost enjoy it. It's kind of nice getting to feel like management for a second. But the moment it hits a bug it can't fix and you have to figure out its horrible mess of code, any enjoyment is gone. It's really nice for "dumb" changes like renumbering things or very basic refactors.

tptacek · 3 months ago
When the agent spins out, why don't you just take the wheel and land the feature yourself? That's what I do. I'm having trouble integrating these two skeptical positions of "LLMs suck all the joy out of actually typing code into an editor" and "LLMs are bad because they sometimes force you to type code into an editor".
potatolicious · 3 months ago
Yeah, I will say now that I've played with the AI coding tools more, it seems like there are two distinct use cases:

1 - Using coding tools in a context/language/framework you're already familiar with.

This one I have been having a lot of fun with. I am in a good position to review the AI-generated code, and also examine its implementation plan to see if it's reasonable. I am also able to decompose tasks in a way that the AI is better at handling vs. giving it vague instructions that it then does poorly on.

I feel more in control, and it feels like the AI is stripping away drudgery. For example, for a side project I've been using Claude Code with an iOS app, a domain I've spent many years in. It's a treat - it's able to compose a lot of boilerplate and do light integrations that I can easily write myself, but find annoying.

2 - Using coding tools in a context/language/framework you don't actually know.

I know next to nothing about web frontend frameworks, but for various side projects wanted to stand up some simple web frontends, and this is where AI code tools have been a frustration.

I don't know what exactly I want from the AI, because I don't know these frameworks. I am poorly equipped to review the code that it writes. When it fails (and it fails a lot) I have trouble diagnosing the underlying issues and fixing it myself - so I have to re-prompt the LLM with symptoms, leading to frustrating loops that feel like two cave-dwellers trying to figure out a crashed spaceship.

I've been able to stand up a lot of stuff that I otherwise would never have been able to, but I'm 99% sure the code is utter shit, and I'm not in a position to really quantify or understand the shit in any way.

I suppose if I were properly "vibe coding" I shouldn't care about the fact that the AI produced a katamari ball of code held together by bubble gum. But I do care.

Anyway, for use case #1 I'm a big fan of these tools, but it's really not the "get out of learning your shit" card that it's sometimes hyped up to be.

saratogacx · 3 months ago
For case 2, I've had a lot of luck starting by asking the LLM "I have experience in X, Y, and Z technologies; help me translate this project into those terms, and list anything this code does that doesn't align with the typical use of the technologies it uses." This has given me a great "intro" that moves me closer to being able to understand.

Once I've done that and asked a few follow-up questions, I feel much better about diving into the generated code.

thadt · 3 months ago
On Learning:

My wife, a high school teacher, remarked to me the other day “you know, it’s sad that my new students aren’t going to be able to do any of the fun online exercises that I used to run.”

She’s all but entirely removed computers from her daily class workflow. Almost to a student, “research” has become “type it into Google and write down whatever the AI spits out at the top of the page” - no matter how much she admonishes them not to do it. We don’t even need to address what genAI does to their writing assignments. She says this is prevalent across the board, both in middle and high school. If educators don’t adapt rapidly, this is going to hit us hard and fast.

MarkusQ · 3 months ago
That's because research had already become "look up the answer somebody else found" years ago. If you want to force them to do real research, ask them things no AI knows because no one knows. Ask them to find the exact center of the classroom. Or how many peas the cafeteria throws away each year, on average. Or any of a thousand other questions that no one knows the answer to.
bgwalter · 3 months ago
I notice a couple of things in the pro-AI [1] posts: All start writing in a lengthy style like Steve Yegge at his peak. All are written by ex-programmers who are on the management/founder side now. All of them cite programmer friends who claim that AI is useful.

It is very strange that no real open source project uses "AI" in any way. Perhaps these friends work on closed source and say what their manager wants them to say? Or they no longer care? Or they work in "AI" companies?

[1] He does mention return on investment doubts and waste of energy, but claims that the agent nonsense works (without public evidence).

orangecat · 3 months ago
I'm a programmer, not a manager. I don't have a blog. AI is useful.

> It is very strange that no real open source project uses "AI" in any way.

How do you know? Given the strong opposition that lots of people have I wouldn't expect its use to be actively publicized. But yes, I would expect that plenty of open source contributors are at the very least using Cursor-style tab completion or having AIs generate boilerplate code.

> Perhaps these friends work on closed source and say what their manager wants them to say?

"Everyone who disagrees with me is paid to lie" is a really tiresome refrain.

bwfan123 · 3 months ago
There is a large number of wannabe hands-on coders who have moved on to become management - and they all either have coder-envy or coder-hatred.

To them, gen-AI is a savior. Earlier, they felt out of the game; now, they feel like they can compete. Earlier they were wannabe coders. Now they are legit.

But, this will last only until they accept a chunk of code put out by co-pilot and then spend the next 2 days wrangling with it. At that point, it dawns on them what these tools can actually do.

senko · 3 months ago
What’s “real open source” to you? I have a niche project useful to a small audience (grocery prices crawler for Croatian stores), 70ish stars, 10 or so forks, a few contributors, AGPL licensed.

I used AI a lot (vibe coding for spikes and throwaway tools, AI-assisted coding for prod code, chatgpt sessions to optimize db schema and queries, etc). I’d say some 80% or more of the code was written by Claude and reviewed by me.

It has not only sped up the development, but as a side project, I would never even have finished it (deployed to prod with enough features to be useful) without AI.

Now you can say that doesn’t count because it’s a side project, or because I’m bullish on AI (I am, without jumping on the hype train), or because it’s too small, or because I haven’t blogged about it, or because anecdotes are not data, and I will readily admit I’m not a true Scotsman.

cesarb · 3 months ago
> It is very strange that no real open source project uses "AI" in any way.

Using genAI is particularly hard on open source projects due to worries about licensing: if your project is under license X, you don't want to risk including any code with a license incompatible with X, or even under a license compatible with X but without the correct attribution.

It's still not settled whether genAI can really "launder" the license of the code in its training set, or whether legal theories like "subconscious copying" would apply. In the latter case, using genAI could be very risky.

rjsw · 3 months ago
At least in my main open source project, use of AI is prohibited due to potentially tainting the codebase with stuff derived from other GPL projects.
zurfer · 3 months ago
Using AI in real projects is not super simple but if you lean into it, it can accelerate things.

Anecdotally, check this out: https://github.com/antiwork/gumroad/graphs/contributors

Devin is an AI agent.

strict9 · 3 months ago
Angst is the best way to put it.

I use AI every day, I feel like it makes me more productive, and I'm generally supportive of it.

But the angst is something else. When nearly every tech-related startup seems to be about making FTEs redundant via AI, it leaves me with a bad feeling for the future. Same with the impact on students and learning.

Not sure where we go from here. But this feels spot on:

>I think that the best we can hope for is the eventual financial meltdown leaving a few useful islands of things that are actually useful at prices that make sense.

fellowniusmonk · 3 months ago
All the angst is 100% manufactured by policy. LLMs wouldn't be hated if they didn't dovetail with the end of ZIRP, Section 174 specifically targeting engineering roles to be tax losers so that others could be tax winners, and macroeconomic uncertainty (which compounds the problems of 174).

If our roles hadn't been specifically targeted by government policy for reduction, as a way to buoy government revenues and prop up the budgetary bottom line in the face of decreasing taxes for favored parties, there would be far less to be angsty about.

This is simply policy-induced multifactorial collapse.

And LLMs get to take the blame from engineers because that is the excuse being used. Pretty much every old school hacker who has played around with them recognizes that LLMs are impressive and sci-fi, it's like my childhood dream come true for interface design.

I cannot begin to say how fucking stupid the people in charge of these policies are. I'm an old head; I know exactly the type of 80s executive that actively likes to see the nerds suffer because we're all irritating poindexters to them.

The pattern of actively attacking the freedoms and sabotaging the incomes of knowledge workers is not remotely rare, and it's often done this stupidly and at the expense of a country's economic footing and ability to innovate.

bob1029 · 3 months ago
I agree that some kind of meltdown/crash would be the best possible thing to happen. There are too many players not adding any value to the ecosystem at this point. MCP is a great example of this - Complexity merchants inventing new markets to operate in. We need something severe to scare off the bullshit artists for a while.

How many civil engineering projects could we have completed ahead of schedule and under budget if we applied the same amount of wild-eyed VC and genius tier attention to the problems at hand?

pzo · 3 months ago
MCP is currently used only by real power users, and mostly only in software dev settings, but I can see it being used by regular users in the future. There is no decent MCP client for non-tech-savvy users yet. But I think if browsers build in a better implementation of it, it will get used. Think of what Perplexity Comet or The Browser Company's Dia are trying to do. It's still very early for MCP.
perplex · 3 months ago
> I really don’t think there’s a coherent pro-genAI case to be made in the education context

My own personal experience is that Gen AI is an amazing tool to support learning, when used properly.

Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.

jplusequalt · 3 months ago
>Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.

Since we're using anecdotes, let me leave one as well--it's been my experience that humans choose the path of least resistance. In the context of education, I saw a large percentage of my peers during K-12 do the bare minimum to get by in the classes, and in college I saw many resorting to Chegg to cheat on their assignments/tests. In both cases I believe it was the same motivation--half-assing work/cheating takes less effort and time.

Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on Tik Tok.

But wait, this isn't an anecdote, it's already happening! Here's an excellent article that details the damage these tools are already causing to our students https://www.404media.co/teachers-are-not-ok-ai-chatgpt/.

>[blank] is an amazing tool ... when used properly

You could say the same thing about a myriad of controversial things that currently exist. But we don't live in a perfect world--we live in a world where money is king, and often times what makes money is in direct conflict with utilitarianism.

ryandrake · 3 months ago
> Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on Tik Tok.

I think schools are going to have to very quickly re-evaluate their reliance on "having done homework" and using essays as evidence that a student has mastered a subject. If an LLM can easily do something, then that thing is no longer measuring anything meaningful.

A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work.

dowager_dan99 · 3 months ago
>> an amazing tool to support learning, when used properly.

how can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI? The same way social media and mobile apps made the internet easy, mindless clicking, LLMs make school a mechanical task. It feels like your argument is similar to LLMs helping experienced, senior developers code more effectively, while eliminating many chances to grow the skills needed to join that group. Sounds like you already know how to learn and use AI to enhance that. My 12-yr-old is not there yet and may never get there.

lonelyasacloud · 3 months ago
>> how can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI?

For every person/child that just wants the answer there will be at least some that will want to know why. And these endlessly patient machines are very good at feeding that curiosity.

rightbyte · 3 months ago
> My 12-yr-old is not there yet and may never get there.

Wouldn't class room exams enforce that though? Like, imagining LLMs like an older sibling or parent that would help pupils cheat on essays.

SkyBelow · 3 months ago
The issue with education in particular is a much deeper one; gen AI has ripped the bandages off and exposed the wound to the world, while also greatly accelerating its decay, but it was not responsible for creating it.

What is the purpose of education? Is it to learn, or to gain credentials that you have learned? Too much of education has become the latter, to the point we have sacrificed the former. Eventually this brings down both, as a degree gains a reputation of no longer signifying the former ever happened.

Our existing systems that check for learning before granting the degree that shows an individual learned were largely not ready for the impact of genAI, and teachers and professors have adapted poorly. Sometimes due to a lack of understanding of the technology, often due to their hands being tied.

GenAI used to cheat is a great detriment to education, but a student using genAI to learn can benefit greatly, as long as they have matured enough in their education process to have critical thinking to handle mishaps by the AI and to properly differentiate when they are learning and when they are having the AI do the work for them (I don't say cheat here because some students will accidentally cross the line and 'cheat' often carries a hint of mens rea). To the mature enough student interested in learning more, genAI is a worthwhile tool.

How do we handle those who use it to cheat? How do we handle students who are too immature in their education journey to use the tool effectively? Are we ready to have a discussion about learners who only care about the degree and see the education needed to earn it as just a means to an end? How do teachers (and increasingly professors) fight back against the pressure of systems that optimize for granting credentials and simply assume the education will be behind those credentials (Goodhart's law, anyone)? Those questions don't exist because of genAI, but genAI greatly increased our need to answer them.

murrayb · 3 months ago
I think he is talking about education as in school/college/university rather than learning?

I too am finding AI incredibly useful for learning. I use it for high-level overviews and to help guide me to resources (online formats and books) for deeper dives. Claude has so far proven to be an excellent learning partner; no doubt other models are similarly good.

strict9 · 3 months ago
That is my take. Continuing education via prompt is great, I try to do it every day. Despite years of use I still get that magic feeling when asking about some obscure topic I want to know more about.

But that doesn't mean I think my kids should primarily get K-12 and college education this way.

Aperocky · 3 months ago
Computers and the internet have been around for 20 years, and yet the evaluation systems of our education have largely remained the same.

I don't hold my breath on this.

icedchai · 3 months ago
Where are you located? The Internet boom in the US happened in the mid-90's. My first part-time ISP job was in 1994.
schmichael · 3 months ago
> I really don’t think there’s a coherent pro-genAI case to be made in the education context.

I think it’s simple: the reign of the essay is over. Educators must find a new way to judge a student’s understanding.

Presentations, artwork, in class writing, media, discussions and debates, skits, even good old fashioned quizzes all still work fine for getting students to demonstrate understanding.

As the son of two teachers I remember my parents spending hours in the evenings grading essays. While writing is a critical skill, and essays contain a good bit of information, I’m not sure education wasn’t overindexing on them already. They’re easy to assign and grade, but there’s so much toil on both ends unrelated to the core subject matter.

thadt · 3 months ago
I posit that of the various uses of student writing, the most important isn't communication or even assessment, but synthesis. Writing forces you to grapple with a subject in a way that clarifies your thinking. It's easy to think you understand something until you have to explain or apply it.

Skipping that entirely, or using a LLM to do most of it for you, skips something rather important.

schmichael · 3 months ago
> Writing forces you

I agree entirely with you except for the word "forces." Writing can cause synthesis. It should. It should be graded to encourage that...

...but all of that is a whole lot of work for everyone involved: student and teacher alike.

And that kind of synthesis is in no way unique to essays! All of the other mediums I mention can make synthesis more readily apparent than paragraphs of (often very low quality) prose. A clever meme lampooning the "mere merchant" status of the Medici family could demonstrate a level of understanding that would take paragraphs of prose to convey.

ryandrake · 3 months ago
I'd also say that the era of graded homework in general is over, and using "proof of toil" assignments as a meaningful measurement of a student's progress/mastery.
jplusequalt · 3 months ago
Wholeheartedly agree. I can't help but think that proponents of LLMs are not seriously considering the impact it will have on our ability to communicate with each other, or to reason on our own accord without the assistance of an LLM.

It confounds me how these people would trust the same companies who fueled the decay of social discourse via the internet with the creation of AI models which aim to encroach on every aspect of our lives.

Workaccount2 · 3 months ago
For me it threatens to be like spell check. Twenty years ago, when I was still in school and still hand-writing many assignments, my spelling was very good.

Nowadays it's been a long time since my brain totally checked out on spelling. Everything I write in every case has spell check, so why waste neurons on spelling?

I fear the same will happen on a much broader level with AI.

kiitos · 3 months ago
What? Who is spending any brain cycles on spelling? When you write a word, you just write the word, the spelling is... intrinsic? automatic? certainly not something that you have to, like, actively think about?
soulofmischief · 3 months ago
Some of us realize this technology was inevitable and are more focused on figuring out how society evolves from here instead of complaining and trying to legislate away math and prevent honest people from using these tools while criminals freely make use of them.
dowager_dan99 · 3 months ago
This is a really negative and insulting comment towards people who are struggling with a very real, very emotional response to AI, and super-concerned about both the real and potential negatives that the rabid boosters won't even acknowledge. You don't have to "play the game" to make an impact, it's valid to try and challenge the math and change the rules too.
jplusequalt · 3 months ago
>Some of us realize this technology was inevitable

How was any of this inevitable? Point me to which law of physics demanded we reach this state of the universe. These companies actively choose to train these models, and by framing their development as "inevitable" you are helping absolve them of any of the negative shit they have/will cause.

>figuring out how society evolves from here instead of complaining and trying to legislate away math

Could you not apply this exact logic to the creation of nuclear weaponry--perhaps the greatest example of tragedy of the commons?

>prevent honest people from using these tools while criminals freely make use of them

What is your argument here? Should we suggest that everyone learn how to money launder to even the playing field against criminals?

bgwalter · 3 months ago
DDT was a very successful insecticide that was outlawed due to its adverse effects on humans.
harimau777 · 3 months ago
Have they come up with anything? So far I haven't seen any solutions presented that are both politically viable and don't result in people being even more under the thumb of late stage capitalism.
collingreen · 3 months ago
If only there were more nuances and options between those two extremes! Oh well, back to the anti math legislation pits I guess.
throwawaybob420 · 3 months ago
It’s not angst to see the people who run the companies we work for “encourage” us to use Claude to write our code knowing full well it’s their attempt to see if they really can fire us without a hit in “productivity”.

It’s not angst to see students throughout the entire spectrum end up using ChatGPT to write their papers, summarize 3 paragraphs, and use it to bypass any learning.

It’s not angst to see people ask a question to an LLM and talk what it says as gospel.

It’s not angst to understand the environmental impact of all this stupid fucking shit.

It’s not angst to see the danger in generative AI not only just creating slop, but further blurring the lines of real and fake.

It’s not angst to see the vast amount of non-consensual porn being generated of people without their knowledge.

Feel like I’m going fucking crazy here, just day after day of people bowing down at the altar and legit not giving a single fuck about what happens after rofl

bluefirebrand · 3 months ago
Hey for what it's worth, you aren't alone

This is a really wild and unpredictable time, and it's ok to see the problems looming and feel unsettled at how easily people are ignoring the potential oncoming train

I would suggest taking some time for yourself to distance yourself from this as much as you can for your own mental health

Ride this out as best you can until things settle down a bit. You aren't alone