Readit News
gombosg commented on I was interviewed by an AI bot for a job   theverge.com/featured-vid... · Posted by u/speckx
ossa-ma · 2 days ago
Perfectly encapsulates the state of the job market. Interviewing is genuinely a hellscape at this point and I've experienced many interviews where there was a complete breakdown of etiquette/guidelines and good faith.

One was so bad I had to write about it: https://ossama.is/writing/betrayed

gombosg · 2 days ago
I'm sorry for your experience, but loved the painting at the end... :)
gombosg commented on We should revisit literate programming in the agent era   silly.business/blog/we-sh... · Posted by u/horseradish
monsieurbanana · 4 days ago
Those two are not linked. I could buy that maybe human-readable code will be the minority.

But what does ephemeral code even mean? That we will throw everything out the window at every release cycle and recreate it from scratch with LLMs based on specs? That's not happening.

gombosg · 4 days ago
I think you're right; ephemeral code would be the concept that you have (I'm hand-waving) "the spec", which specifies what the code should do, so the AI could regenerate the code at any time based on it.

I'm also baffled by this concept and fundamentally believe that code _should be_ the ground truth (the spec), hence it should be human readable. That's what "clean code" would be about, choosing tools and abstractions so that code is consumable for humans and easy to reason about, debug and extend.

If we let go of that and rely on LLMs entirely... I'm not sure where that would land, since computers ultimately execute the code, not the plain-language "specs" (and the company is liable for the results of that code being executed).

gombosg commented on The happiest I've ever been   ben-mini.com/2026/the-hap... · Posted by u/bewal416
fortzi · 13 days ago
OP might love tech, but he sure doesn’t sound like he loved the craft.

Describing it as sitting in front of a rectangle, moving all rectangles around is so reductive.

gombosg · 13 days ago
Exactly, by that logic every desk or office job just means sitting next to a box?
gombosg commented on Layoffs at Block   twitter.com/jack/status/2... · Posted by u/mlex
gombosg · 15 days ago
I still don't get it.

If AI really improves efficiency and allows the company's employees to produce more, better products faster and thus increase the competitiveness of a company... then why does said company fire (half of!) its staff instead of, well, producing more, better products faster, thus increasing its competitiveness?

Am I naive, or is AI a lie when it's named as the cause?

Why is it that we employees are gaslit with the FOMO of "if you don't adopt AI to produce more, then you'll be replaced by employees who do", while these executives don't feel that "if you fire half of your employees for whatever reason, you'll be outcompeted by companies who... simply didn't"?

gombosg commented on Writing code is cheap now   simonwillison.net/guides/... · Posted by u/swolpers
chr15m · 17 days ago
That's a good point. I myself am the easiest person to fool.

I knocked together a quick analysis of my commit graphs going back several years, if you're interested: https://mccormick.cx/gh/

My average leading up to 2023 was around 2k commits per year. In 2023 I started using ChatGPT and hit my highest commit count so far that year at 2,600. In 2024 I moved to a different country, which broke my productivity. I started using aider at the end of 2024, and in 2025 I again hit my highest commits ever at 2,900. This year is looking pretty solid.

From this it looks to me like I'm at least 1.4x more productive than before.

As a freelancer I have to track issues closed and hours pretty closely so I can give estimates and updates to clients. My baseline was always "two issues closed per working day". These are issues I create myself (full stack, self-managed freelancer) so the average granularity has stayed roughly constant.

This morning I closed 8 issues on a client project. I estimate I am averaging around 4 issues per working day these days. I know this because I have to actually close the issues each day. So on that metric my productivity has roughly doubled.

I believe those studies for sure. I think there is nuance to using these tools well, and I think a lot of people are going backwards and introducing more bugs than progress through vibe coding. I do not think I have gone backwards, and the metrics I have available seem to agree with that assessment.
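Those ratios are easy to sanity-check. A quick back-of-the-envelope sketch, using only the figures quoted in the comment above (nothing else is measured):

```python
# Back-of-the-envelope check of the commit and issue figures quoted above.
baseline_commits = 2000            # approx. yearly commits before 2023
record_years = {2023: 2600, 2025: 2900}  # record years after ChatGPT / aider

for year, commits in sorted(record_years.items()):
    print(f"{year}: {commits / baseline_commits:.2f}x the pre-LLM baseline")

# Issues closed per working day: baseline 2, current average ~4.
print(f"Issue throughput: {4 / 2:.1f}x")
```

This reproduces the "at least 1.4x" (2,900 / 2,000 = 1.45) and "roughly doubled" (4 / 2) claims from the figures given.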

gombosg · 16 days ago
Love your approach and that you actually have "before vs. after" numbers to back it up!

I personally also use AI in a similar way, strongly guiding it instead of vibe-coding. It reduces frustration because it surely "types" faster and better than me, including figuring out some syntax nuances.

But often I jump in and do some parts by myself. Either "starting" something (creating a directory, file, method etc.) to let the LLM fill in the "boring" parts, or "finishing" something by me filling in the "important" parts (like business logic etc.).

I think it's way easier to retain authorship and codebase understanding this way, and it's more fun as well (for me).

But in the industry right now there is a heavy push for "vibe coding".

gombosg commented on How to effectively write quality code with AI   heidenstedt.org/posts/202... · Posted by u/i5heu
whynotminot · a month ago
Of course software hasn’t been delivered fast enough. There is so so so much of the world that still needs high quality software.
gombosg · a month ago
I think there are four fundamental issues here for us...

1. There are actually fewer software jobs out there, with huge layoffs still going on, so software engineering as a profession doesn't seem to profit from AI.

2. The remaining engineers are expected by their employers to ship more. Even if they can manage that using AI, there will be higher pressure and higher stress on them, which makes their work less fulfilling, more prone to burnout etc.

3. Tied to the previous: this increases workism, measuring engineers by some output benchmark alone and treating them more like factory workers than like expert, free-thinking individuals (often with higher education degrees). Which again degrades the profession as a whole.

4. Measuring developer productivity hadn't really been cracked before either, and even after AI there is not a lot of real data proving that these tools actually make us more productive, whatever that means. There is only anecdotal evidence: "I did this in X time, when it would otherwise have taken me Y time." But it's well known that estimating software delivery timelines is next to impossible, meaning the estimate of "Y" is probably flawed.

So a lot of things going on apart from "the world will surely need more software".

gombosg commented on How to effectively write quality code with AI   heidenstedt.org/posts/202... · Posted by u/i5heu
carlmr · a month ago
>but the code to me is a forcing mechanism into ironing out the details, and I don't get that when I'm writing a specification.

This is so on point. The spec-as-code people try again and again, but reality always punches holes in their spec.

A spec that wasn't exercised in code is like a drawing of a car: no matter how detailed the drawing is, you can't drive it, and it hides 90% of the complexity.

To me the value of LLMs is not so much in the code they write. They're usually too verbose and start building weird things when you don't constantly micromanage them.

But you can ask very broad questions, iteratively refine the answer, critique what you don't like. They're good as a sounding board.

gombosg · a month ago
I love using LLMs as rubber ducks as well: what does this piece of code do? How would you do X with Y? And so on.

The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being deprecated entirely, at least according to its proponents. They say that using LLMs as advisors is already outdated and that we should be doing fully agentic coding, just nudging the LLM, since otherwise we're losing out on 'productivity'.

gombosg commented on How to effectively write quality code with AI   heidenstedt.org/posts/202... · Posted by u/i5heu
vbezhenar · a month ago
> why wouldn’t he want you to use that to great efficiency

Because I deny that? It's not fun for me.

> would a carpenter shop accept employees rejecting the power saw in favour of a hand saw to retain their artisanal capability?

Why not? If that makes enough money to keep going.

You might argue that in a theoretical ideal market, companies that don't utilize every possible trick to improve productivity (including AI) will lose to the competition. But let's be real: a lot of companies are horribly inefficient, and that does not make them bankrupt. The world of producing software is complicated.

I know that I deliver. When I'm asked to write code, I deliver it and I'm responsible for it. I enjoy the process and I can support this code. I can't deliver with AI. I don't know what it'll generate. I don't know how much time it would take to iterate to the result that I precisely want. So I can no longer be responsible for my own output. Or I'd spend more time babysitting the AI than it would take me to write the code myself. That's my position. Maybe I'm wrong, they'll fire me and I'll retire, who knows. AI hype is real, and my boss often copy-pastes from ChatGPT, asking me to argue with it. That's super stupid and irritating.

gombosg · a month ago
I can totally relate to your experience.

I started this career because I liked writing code. I no longer write a lot of code as a lead, but I use writing code to learn, to gain a deeper understanding of the problem domain etc. I'm not the type who wants to write specs for every method and service but rather explore and discover and draft and refactor by... well, coding. I'm amazed at creating and reading beautiful, stylish, working code that tells a story.

If that's taken away, I'm not sure how I could retain my interest in this profession. Maybe I'll need to find something else, but after almost a decade this will be a hard shift.

gombosg commented on The Codex app illustrates the shift left of IDEs and coding GUIs   benshoemaker.us/writing/c... · Posted by u/straydusk
kwindla · a month ago
Yes, but also ... the analogy to assembly is pretty good. We're moving pretty quickly towards a world where we will almost never read the code.

You may read all the assembly that your compiler produces. (Which, awesome! Sounds like you have a fun job.) But I don't. I know how to read assembly and occasionally do it. But I do it rarely enough that I have to re-learn a bunch of stuff to solve the hairy bug or learn the interesting system-level thing that I'm trying to track down if I'm reading the output of the compiler. And mostly even when I have a bug down at the level where reading assembly might help, I'm using other tools at one or two removes to understand the code at that level.

I think it's pretty clear that "reading the code" is going to go the way of reading compiler output. And quite quickly. Even for critical production systems. LLMs are getting better at writing code very fast, and there's no obvious reason we'll hit a ceiling on that progress any time soon.

In a world where the LLMs are not just pretty good at writing some kinds of code, but very good at writing almost all kinds of code, it will be the same kind of waste of time to read source code as it is, today, to read assembly code.

gombosg · a month ago
I think this analogy to assembly is flawed.

Compilers predictably transform one kind of programming language code to CPU (or VM) instructions. Transpilers predictably transform one kind of programming language to another.

We introduced various instruction architectures, compiler flags, reproducible builds, checksums exactly to make sure that whatever build artifact that's produced is super predictable and dependable.

That reproducibility is how we can trust our software and that's why we don't need to care about assembly (or JVM etc.) specifics 99% of the time. (Heck, I'm not familiar with most of it.)
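The kind of check that underpins that trust can be sketched in a few lines (the file names and "artifact" contents here are made up for illustration):

```python
import hashlib
import os
import tempfile

def sha256(path):
    """Checksum of a build artifact."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Simulate two independent builds of the same source. A reproducible
# toolchain emits byte-identical artifacts, so the checksums match.
with tempfile.TemporaryDirectory() as d:
    for name in ("build-a.bin", "build-b.bin"):
        with open(os.path.join(d, name), "wb") as f:
            f.write(b"deterministic compiler output")
    assert sha256(os.path.join(d, "build-a.bin")) == \
           sha256(os.path.join(d, "build-b.bin"))
```

No comparable guarantee exists for an LLM: the same prompt can yield different code on every run, so there is nothing to checksum against.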

Same goes for libraries and frameworks. We can trust their abstractions because someone put years or decades into developing, testing and maintaining them and the community has audited them if they are open-source.

It takes a whole lot of hand-waving to get from this point to LLMs - which are stochastic by nature - transforming natural-language instructions (even if you call them "specs", it's fundamentally still a text prompt!) into dependable code "that you don't need to read", i.e. a black box.

gombosg commented on The Gorman Paradox: Where Are All the AI-Generated Apps?   codemanship.wordpress.com... · Posted by u/ArmageddonIt
bossyTeacher · 3 months ago
At this point, the question we should all be worried about is what is going to happen once the biggest investors see and internalize these articles? Will the economy withstand the collapse of the AI industry and temporary damage to adjacent tech sectors or will this combined with the dodgy loans taken by Meta/Amazon/Alphabet pull the wider economy into a recession?
gombosg · 3 months ago
I think that even if AI won't be as good for tech as initially promised, it still has penetration potential in the wider economy.

OK, I don't have numbers to back this up, but I wouldn't be surprised if most of the investment and actual AI use was not in tech (software engineering) but in other use cases.

u/gombosg

Karma: 253
Cake day: July 16, 2018
About: https://gombosg.com