Typing is not fun. It robs me of my craft of holding my pencil and feeling it press against the paper with my hand... LLMs are merely a tool to achieve a similar end result. The different aspects of software development are an art. But even with LLMs, I critique and care about the code just as much as if I were writing it line by line myself. I have had more FUN being able to get all of my ideas on paper with LLMs than I have had over years of banging my head against a keyboard going down the rabbit hole on production bugs.
It's not about typing, it's about writing. You don't type, you write. That's the paradigm. You can write with a pen or you can type on a keyboard. Different ways, same goal. You write.
Yesterday I had a semi-coherent idea for an essay. I told it to an LLM and asked for a list of authors and writings where similar thoughts have been expressed - and it provided a fantastic bibliography. To me, this is extremely fun. And reading similar works to help articulate an idea is absolutely part of writing.
"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.
I write what I want the LLM to do. Generating a satisfactory prompt is sometimes as much work as writing the code myself - it just separates the ideation from the implementation. LLMs are the realization of the decades-long search for natural language programming, dating at least as far back as COBOL. I personally think they are great - not 100% of the time, just as a tool.
A director is the most important person to the creation of a film. The director delegates most work (cameras, sets, acting, costumes, makeup, lighting, etc.), but can dive in and take low-level/direct control of any part if they choose.
Have you actually done some projects with e.g. Claude Code?
Completely greenfield, entirely up to yourself?
Because IME, you're completely wrong.
I mean, I get where you're coming from if you imagine it like the literal vibe coding this started as, but that's just a party trick and falls off quickly as the project gets more complex.
To be clear, simple features in an existing project can often be done simply - with a single prompt making changes across multiple files - but that only works under _some circumstances_, and for bigger features / more in-depth architecture, real work is still necessary to get the project to match your ideas.
And that part needs you to tell the LLM how it should do it - because otherwise you're rolling the dice on whether it's gonna be a clusterfuck after the next 5 changes.
100% this. I've had more fun using Claude Code because I get to spend more of my time doing the fun parts (design, architecture, problem solving, etc) and less time typing, fixing small compilation errors, or looking up API docs to figure out that query parameters use camelCase instead of underscores.
I'd rather spend my time designing and writing code than spend it debugging and reformatting whatever an LLM cobbled together from Stack Overflow and GitHub. 'Design, architecture, problem solving, etc' all take a backseat when the LLM barfs out all the code and you have to either spend your time convincing it to output what you could have written yourself anyway or play QA fixing its slop all day long.
My place for that is in the shower.
I had one of those shower epiphanies a couple mornings ago... And I fed it into a couple LLMs while I was playing a video game (taking some time over the holidays to do that), and by the afternoon I had that idea as working code: ~4500 LOC with that many more in tests.
People keep saying "I want LLMs to take out the laundry so I can do art, not me doing the laundry while LLMs do art." This is an example of LLMs doing the coding so I can rekindle a joy of gaming, which feels like it's leaning in the right direction.
Unironically this: isn't writing on paper more fun than typing? Isn't painting with real paint and canvas more satisfying than with a stylus and an iPad? Isn't it more fun to make a home-cooked meal for your family than ordering out? Who stomps into the holiday celebration and tells mom that it'd be a lot more efficient to just get catering?
Isn't there something good about being embodied and understanding a medium of expression rather than attempting to translate ideas directly into results as quickly as possible?
My family eats out at a nice steak restaurant every Christmas; no one wants to cook. None of us like to cook.
If you get your enjoyment from the process of cooking, by all means cook. But if you enjoy being with people and just eating food, catering is better.
If your goal is to get your thoughts onto the medium as efficiently as possible, use a stylus or a keyboard. If you enjoy the process of writing your thoughts down, use a fountain pen.
Or the easiest comparison: coffee. Do you want your fix of caffeine as fast as possible? Grab some gas station slop on the go for 0.99€. But if you're more about relaxing and slowly enjoying the process of selecting the optimal beans for this particular day, grinding them to perfection, and brewing them just right with a pour-over technique or a fancy Italian espresso machine you refurbished yourself - then do that.
Same with code. I want to solve a problem I have or a client has. I get enjoyment from solving the problem. Having to tell the computer how to do that with a programming language is just a boring intermediate step on the way.
Radical change in the available technology is going to require radical shifts in perspective. People don't like change, especially if it involves degrading their craft. If they pivot and find the joy in the new process, they'll be happy, but people far more often prefer to be "right" and miserable.
I have some sympathy for them, but AI is here to stay, and it's getting better, faster, and there's no stopping it. Adapt and embrace change and find joy in the process where you can, or you're just going to be "right" and miserable.
The sad truth is that nobody is entitled to a perpetual advantage in the skills they've developed and sacrificed for. Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.
AI is going to eat everything - there will be no domain in which it is better for humans to perform work than it will be to have AI do it. I'd even argue that for any given task, we're pretty much already there. Pick any single task that humans do and train a multibillion dollar state of the art AI on that task, and the AI is going to be better than any human for that specific task. Most tasks aren't worth the billions of dollars, but when the cost drops down to a few hundred dollars, or pennies? When the labs figure out the generalization of problem categories such that the entire frontier of model capabilities exceeds that of all humans, no matter how competent or intelligent?
AI will be better, cheaper, and faster in any and every metric of any task any human is capable of performing. We need to figure out a better measure of human worth than the work they perform, and it has to happen fast, or things will get really grim. For individuals, that means figuring out your principles and perspective, decoupling from "job" as meaning and purpose in life, and doing your best to surf the wave.
> Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.
My meaning could be in higher purposes; however, I still need a job to enable/pursue those things. If AI takes the meaning out of your craft, it takes out the ability to use it to pursue higher-order principles as well, for most people - especially if you aren't in the US/big tech scene with significant equity to "make hay while the sun is still shining".
>LLMs are merely a tool to achieve a similar end result.
Advanced tools are never "merely tools".
Tools that are pushed onto people, that come to be expected just to participate in social/professional life, and that take over knowledge-based tasks and creative aspects, are even less "merely tools".
We are not talking of a hammer or a pencil here. An LLM user doesn't outsource typing, they outsource thinking.
I just had Claude Code fine-tune a reranker model to improve it significantly across a large set of evals. I chose the model to fine-tune and the loss function, created the underlying training dataset for the re-ranking task, and designed the evals. What thinking did I outsource exactly?
I guess I did not waste time learning the failure-prone arcana of how to schedule training jobs on HuggingFace, but that also seems to me like a net benefit.
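For the curious, the shape of that kind of fine-tune is roughly the sketch below - not my actual code; the base checkpoint, the pointwise BCE loss, and the toy pairs are stand-ins for illustration:

    # Rough sketch of fine-tuning a cross-encoder reranker on (query, doc, label) pairs.
    # Model name, loss, and data here are illustrative, not the real setup.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    base = "cross-encoder/ms-marco-MiniLM-L-6-v2"   # assumed base reranker
    tok = AutoTokenizer.from_pretrained(base)
    model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)
    opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loss_fn = torch.nn.BCEWithLogitsLoss()          # the "loss function" choice

    train = [("how do I reset my password", "Click 'Forgot password' on the login page.", 1.0),
             ("how do I reset my password", "Our offices are closed on Sundays.", 0.0)]

    model.train()
    for query, doc, label in train:
        batch = tok(query, doc, return_tensors="pt", truncation=True)
        score = model(**batch).logits.squeeze(-1)   # relevance score for the pair
        loss = loss_fn(score, torch.tensor([label]))
        loss.backward(); opt.step(); opt.zero_grad()

The evals would then just be the same scoring loop run over held-out queries, comparing ranking metrics before and after.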
I was about to write something really emotional and clearly lacking any kind of self-reflection; then I read your comment again, and I admit a large part of this is true.
I feel like it may be something inherently wrong with the interface more than with the actual expression of the tool. I'm pretty sure we are in some painful era where LLMs, quite frankly, help a ton with an absurd amount of stuff - and I underline "ton" and "stuff" because it really is about "everything".
But it also generates a lot of frustration; I'm not convinced by the conversational status quo, for example, and I could easily see something inspired directly by what you said about drawing. There is something here about the experience - and it's really difficult to work on, because it's inherently personal and may require actually spending time and accumulating frustration to finally be able to express it through something else.
Ok, time to work lmao
Speaking as someone who despises writing freehand, and loves typing... what? I understand what you're trying to say, but you lost me very quickly, I'm afraid. Whatever tool I use to write, I'm still making every choice along the way, and that's true if I'm dictating, using a stylus to press into a clay tablet, or any other medium. An LLM is writing for me based on prompts; it's more analogous to hiring a very stupid person to write for you, and has very little to do with pens or keyboards.
I wholeheartedly agree. I'm not saying LLMs are 'bad'. I'm not saying they are not useful. But to me personally they take out the fun parts from my profession.
My role changes from coming up with solutions to babysitting a robotic intern. Not 100% of course. And of course an agent can be useful like 'intellisense on steroids'. Or an assistant who 'ripgreps' for me. There are advantages for sure. But for me the advantages don't match the disadvantages. LLMs take the heart out of what made me like programming: building stuff yourself with your near infinite lego box of parts and coming up with ideas yourself.
I'm only half convinced the LLMs will become as important to coding as they seem. And I'm hoping a sane balance will emerge at the other end of the hype. But if it goes where OpenAI etc. want it to go, I think I'll have to re-school to become an electrician or something...
As I mentioned in another comment, they smell blood in our profession, and as entities dependent on investor/VC/seed money rounds they want it. There's a reason every new model that comes out has a blog post with "best at coding" often in the main headline - it's also a target that people outside of tech don't really care about IMO, unlike, for example, art and writing.
Tbh if it wasn't for coding disruption I don't think the AI boom would of really been that hyped up.
For one thing, LLMs aren't terrible at grammar.
> LLMs take the heart out of what made me like programming: building stuff yourself with your near infinite lego box of parts and coming up with ideas yourself.
I feel like that's all I'm doing with LLMs. Just in the last hour I realized that I wanted an indexed string intern pool instead of passing string literals. The LLM refactored everything and then I didn't have to worry about that lego piece anymore.
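For anyone who hasn't run into the term, an indexed intern pool is tiny - something like this sketch, which is illustrative only and not the code the LLM produced:

    # Minimal string intern pool: each distinct string gets a stable integer id,
    # so the rest of the code can pass cheap ints around instead of string literals.
    class InternPool:
        def __init__(self):
            self._ids = {}       # str -> int
            self._strings = []   # int -> str

        def intern(self, s):
            idx = self._ids.get(s)
            if idx is None:
                idx = len(self._strings)
                self._ids[s] = idx
                self._strings.append(s)
            return idx

        def lookup(self, idx):
            return self._strings[idx]

    pool = InternPool()
    assert pool.intern("color") == pool.intern("color")   # same id on repeat
    assert pool.lookup(pool.intern("color")) == "color"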
Good for you that it feels that way to you. But then you need to check how the LLM refactored stuff. Because (a) you never know if it's actually correct, and (b) the code needs to be maintained into the future, so you need to know how it works and be able to fix bugs in it. And then you are suddenly spending a lot more time understanding someone else's code - someone you can't discuss the 'why' of the code with over a coffee. Of course the same goes for library code, but to me understanding bugs in a library (and reporting/fixing them) feels more useful than understanding the one-off output of an LLM. And for a library the coffee part might not fly either, but at least you can discuss stuff with the original author(s). I'm not saying my feeling is the absolute truth, it's very subjective.
I guess mechanics must feel the same about modern computerized cars, where suddenly the injection timing is no longer a mechanical gadget you can tweak by experience, but some locked down black box you don't have control over.
Also I really dislike that (for now) using an LLM means selling your soul to some dubious company. Even if you use only the free tier you still need to upload your code and let the LLM do whatever with it. If an LLM is an indispensable part of being a programmer, everybody will be held hostage by the large tech firms (even more...).
edit:
I suddenly thought of the famous aphorism by Brian Kernighan:
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
I fear we will end up with programs nobody understands anymore.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning - and many of these are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.
If that is what you've been doing, with a love for coding, I can well empathise with how the world is changing underneath your feet.
LLMs code for you. They write for you.
For those who have swallowed the AI panacea hook, line and sinker - those who say it's made me more productive, or that I no longer have to do the boring bits and can focus on the interesting parts of coding - I say: follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that right now it is a comfortable position for you financially or socially, but your future you, just a few short months from now, may be dramatically impacted.
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job, a dream dreamt from a young age, is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.
I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and he can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.
Someone smart said that AI should replace tasks, not jobs.
There are infinite analogies for this whole thing, but it mostly distills down to artisans and craftsmen in my mind.
Artisans build one chair to perfection: every joint is meticulously measured and uses traditional handcrafted Japanese joinery, and not a single screw or nail is used unless it's absolutely necessary. It takes weeks to build one, and each is a unique work of art.
It also costs 2000€ for a chair.
Craftsmen optimise their process for output: instead of selling one 2000€ chair a month, they'd rather sell a hundred for 20€ each. They have templates for cutting every piece, jigs for quickly attaching different components, and they use screws and nails to speed up the process instead of meticulous handcrafted joinery.
It's all about where you get your joy in "software development". Is it solving problems efficiently or crafting a beautiful elegant expressive piece of code?
Neither way is bad, but pre-LLM both people could do the same tasks. I think that's coming to an end in the near future. The difference between craftsmen and artisans is becoming clearer.
There is a place for people who create that beautiful hyper-optimised code, but in many (most) cases just a craftsman with an agentic LLM tool will solve the customer's problem with acceptable performance and quality in a fraction of the time.
In the long run I think it's pretty unhealthy to make one's career a large part of one's identity. What happens during burnout or retirement or being laid off if a huge portion of one's self depends on career work?
Economically it's been a mistake to let wealth get stratified so unequally; we should have reintroduced, and still need to reintroduce, high progressive tax rates on income, and potentially implement wealth taxes, to reduce the necessity of guessing at a high-paying career more than 5 years in advance. That simply won't be possible to do accurately with coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.
Practically, once automation begins producing more value than 25% or so of human workers we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI; your landlord is not going to want 2X the number of burgers and couches you get distributed as they'd happily double rent in dollars.
Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence] especially in healthcare/wellness) it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this but they'll basically be optional participation for people who want to do that.
For a prototype, maybe - but something production-ready requires almost the same amount of effort as it used to, if you care about good design and code quality.
It really doesn't. I just ditched my WordPress/WooCommerce webshop for a custom one that I made in 3 days with Claude, in C#/Blazor. It is better in every single way than my old webshop, and I have control over every aspect of it. It's totally production ready.
The code is as good as or even better than what I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of Playwright tests ensuring things don't break over time, and proving that things actually work.
I didn't have to mess with any of the HTML/CSS, which is usually what makes me give up my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.
3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, invoicing, CI/CD, and all the custom features that I used to dream of.
Sure, it's not a crazy innovative project, but it brings me a ton of value and liberates me from these overengineered, "generic", bulky CMSes. I don't have to pay $50 for a stupid plugin (that wouldn't really fit my needs anyway) anymore.
The future is both really exciting and scary.
You can be as specific as you want with an LLM, you can literally tell it to do “clean code” or use a DI framework or whatever and it’ll do it. Is it still work? Yes. But once you start using them you’ll realize how much code you actually write is safely in the realm of boilerplate and the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.
Here's a bunch of examples: moving code around, abstracting common functionality into a function and then updating all call sites, moving files around, pattern-matching off an already existing pattern in your code. Sometimes it can be fun and zen, or you'll notice another optimization along the way... but most of the time it's boring work an agent can do 10x faster than you.
My overwhelming experience is that the sort of developers unironically using the phrase "vibe coding" are not interested in or care about good design and code quality.
Perhaps for the inexperienced or timid. Code quality is "it compiles" and design is "it performs to spec". Does properly formatted code matter when you no longer have to read it?
I'm not sure I'm having more fun, at least not yet, since for me the availability of LLMs takes away some of the pleasure of needing to use only my intellect to get something working. On the other hand, yes, it is nice to be able to have Copilot work away on a thing for my side project while I'm still focused on my day job. The tradeoff is definitely worth it, though I'm undecided on whether I am legitimately enjoying the entire process more than I used to.
You don't have to use LLMs the whole time. For example, I've gotten a lot done with AI and still had the time over the holidays to spend on a long-time side project... organically coding the big fun thing:
Replacing Dockerfiles and Compose with CUE and Dagger
I don't do side projects, but the LLM has completely changed the calculus about whether some piece of programming is worthwhile doing at all. I've been enjoying myself automating all sorts of admin/ops stuff that hitherto got done manually because there was never a clear 1/2 day of time to sit down and write the script. Claude does it while I'm deleting email or making coffee.
For you, maybe. In my experience, the constant need to babysit LLMs to avoid the generation of verbose, unmaintainable slop is exhausting, and I'd rather do everything myself. Even with all the meticulously detailed instructions, it feels like a slot machine - sometimes you get lucky and the generated code is somewhat usable. Of course, it also depends on the complexity and scope of the project and/or the tasks that you are automating.
It is clearly an emotional question. My comment on here saying I enjoyed programming with an LLM has received a bunch of downvotes, even though I don't think the comment was derogatory towards anyone who feels differently.
People seem to have a visceral reaction towards AI, where it angers them enough that even the idea that people might like it upsets them.
Largely agree. Thoreau said for every 1000 hacking at the leaves of evil, there was 1 hacking at the roots.
Web programming is not fun. Years ago, a colleague who had pivoted in the early years said "Web rots your brain" (we had done some cool work together in real time optical food sorting).
I know it (web programming) gives a lot of people meaning, purpose, and a paycheck - the chance to become a specialist in an arcane art that is otherwise unplumbable by others. At first it was just programming in general, but it has bifurcated into back end, front end, db, distributed, devops, meta, api, etc. The number of programmers I meet nowadays who are at startups that eventually "pivot" to making tools for the tool wielders is impressive (e.g. "we tried to make something for the general public, but that didn't stick; on the way, though, we learned how to make a certain kind of pick axe, and we're really hoping we can get some institutional set of axe wielders at a big digging corporation to buy into what we're offering"). Instead of "Software is eating the world", the real story these days may be "Software is eating itself".
Mired with a mountain of complexity we've created as a result of years of "throw it at the wall and ship what sticks", we're now doubling down on "stochastic programming". We're literally, mathematically, embracing "this probab[i]l[it]y works". The usefulness/appeal of LLMs is an indictment and a symptom. Not a cause.
I'm constantly surprised by developers who like LLMs because "it's great for boilerplate". Why on earth were you wasting your time writing boilerplate before? These people are supposed to be programmers. Write code to generate the boilerplate, or abstract it away.
I suppose the path of least resistance is to ignore the complexity, let the LLM deal with it, instead of stepping back and questioning why the complexity is even there.
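To make "write code to generate the boilerplate" concrete: it can be as small as a throwaway script like this (the field names are made up for the example):

    # Emit a repetitive stub from a short spec instead of hand-typing it.
    FIELDS = {"user_id": "int", "email": "str", "created_at": "str"}

    def render_dataclass(name, fields):
        lines = ["from dataclasses import dataclass", "", "@dataclass", f"class {name}:"]
        lines += [f"    {field}: {typ}" for field, typ in fields.items()]
        return "\n".join(lines)

    print(render_dataclass("User", FIELDS))

Whether that's ever less effort than just typing the boilerplate once is, of course, the real question.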
> Write code to generate the boilerplate, or abstract it away.
That doesn’t make any sense. I want to consider what you’re saying here but I can’t relate to this idea at all. Every project has boilerplate. It gets written once. I don’t know what code you’d write to generate that boilerplate that would be less effort than writing the boilerplate itself…
I was recently talking to a colleague I went to school with, and they said the same thing, but for a different reason. We both did grad studies with a focus on ML, and at the time ML as a field seemed to be moving so fast. There was a lot of excitement around AI again, finally, after the 'AI winter'. It was easy to participate in bringing something new to the field, and there were so many unique and interesting models coming about every day. There was genuine discussion about a viable path to AGI.
Now, basically every new "AI" feature feels like a hack on top of yet another LLM. And sure the LLMs seem to keep getting marginally better, but the only people with the resources to actually work on new ones anymore are large corporate labs that hide their results behind corporate facades and give us mere mortals an API at best. The days of coding a unique ML algorithm for a domain specific problem are pretty much gone -- the only thing people pay attention to is shoving your domain specific problem into an LLM-shaped box. Even the original "AI godfathers" seem mostly disinterested in LLMs these days, and most people in ML seem dubious that simply scaling up LLMs more and more will be a likely path to AGI.
It seems like there's more excitement around AI for the average person, which is probably a good thing I suppose, but for a lot of people who were into the field, they're not really that fun anymore.
In terms of programming, I think they can be pretty fun for side projects. The sort of thing you wouldn't have had time to do otherwise. For the sort of thing you know you need to do anyway and need to do well, I notice that senior engineers spend more time babysitting them than benefitting from them. LLMs are good at the mechanics of code and struggle with the architecture / design / big picture. Seniors don't really think much about the mechanics of code, it's almost second nature, so they don't seem to benefit as much there. Juniors seem to get a lot more benefit because the mechanics of the code can be a struggle for them.
> Now, basically every new "AI" feature feels like a hack on top of yet another LLM.
LLM user here with no experience of ML besides fine-tuning existing models for image classification.
What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum and other approaches will win through - even just for other areas?
Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?
Well, one of the inherent issues is assuming that text is the optimal modality for everything we try to use an LLM for. LLMs are statistical engines designed to predict the most likely next token in a sequence of words. Any 'understanding' they do is ultimately incidental to that goal, and once you look at them that way a lot of the shortcomings we see become more intuitive.
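Concretely, "predict the most likely next token" is just this - a minimal sketch, with the transformers library and the gpt2 checkpoint standing in for whatever frontier model you like:

    # Score every vocabulary item as the possible next token and look at the top few.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The joy of programming is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits        # [batch, seq_len, vocab_size]
    probs = logits[0, -1].softmax(dim=-1)      # distribution over the next token
    top = probs.topk(5)
    print([tok.decode([i]) for i in top.indices.tolist()])

Everything an LLM "does" is sampled from distributions like that one, token by token.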
There are a lot of problems LLMs are really useful for, because generating text is what you want to do. But there are tons of problems for which we would want some sort of intelligent, learning behaviour that does not map to language at all. There are also a lot of problems that can "sort of" be mapped to a language problem but make pretty extraneous use of resources compared to an (existing or potential) domain-specific solution. For purposes of AGI, you could argue that trying to express "general intelligence" via language alone is fundamentally flawed altogether - although that quickly becomes a debate about what actually counts as intelligence.
I pay less attention to this space lately, so I'm probably not the most informed. Everyone seems so hyped about LLMs that I feel like a lot of other progress gets buried, but I'm sure it's happening. There are some problem domains that are obviously solved better with other paradigms currently: self-driving tech, recommendation systems, robotics, game AIs, etc. Some of the exciting stuff that can likely solve some problems better in the future is the work on world models, graph neural nets, multimodality, reinforcement learning, alternatives to gradient descent, etc. I think it's a debate whether or not LLMs are a local maximum, but many of the leading AI researchers seem to think so - Yann LeCun, for example, recently said LLMs 'are not a path to human-level AI'.
It’s now moving faster than ever. Huge strides have been made in interpretability, multi modality, and especially the theoretical understanding of how training interacts with high dimensional spaces. E.g.: https://transformer-circuits.pub/2022/toy_model/index.html
Thanks, this seems interesting. I'll give it a read. I admittedly don't keep tabs as much as I should these days. I feel like every piece of AI news is about LLMs. I suppose I should know other people are still doing interesting things :)
But, if you are in a work situation where LLMs are forced upon you in very high doses, then yes - I understand the feeling.
> For me, the joy of programming is understanding a problem in full depth, so that when considering a change, I can follow the ripples through the connected components of the system.
>The joy of management is seeing my colleagues learn and excel, carving their own paths as they grow. Watching them rise to new challenges. As they grow, I learn from their growth; mentoring benefits the mentor alongside the mentee.
I fail to grasp how using LLMs precludes either of these things. If anything, doing so allows me to more quickly navigate and understand codebases. I can immediately ask questions or check my assumptions against anything I encounter.
Likewise, I don’t find myself doing less mentorship, but focusing that on higher-level guidance. It’s great that, for example, I can tell a junior to use Claude to explore X,Y, or Z design pattern and they can get their own questions answered beyond the limited scope of my time. I remember seniors being dicks to me in my early career because they were overworked or thought my questions were beneath them. Now, no one really has to encounter stuff like that if they don’t want to.
I’m not even the most AI-pilled person I know or on my team, but it just seems so staggeringly obvious how much of a force multiplier this stuff has become over the last 3-6 months.