bagacrap commented on If you're remote, ramble   stephango.com/ramblings... · Posted by u/lawgimenez
xandrius · 21 days ago
Re-reading your own message should definitely ring a bell: either you lack trust in others, or you have a twisted perspective that work is the goal of life.

Unless you are the one paying that person and they are not performing per their contract, you should be happy when someone takes an extra day off to chill, since it creates an environment where you could take one too if you so wished.

bagacrap · 10 days ago
found the guy who's sick every other Monday ^
bagacrap commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
tempodox · 10 days ago
That's what I don't understand about AI coding fans. Instead of using a language that was designed to produce executable code, they insert another translation stage with a much murkier and fuzzier language. So you have to learn a completely new interface that is less fit for the task for the benefit of uncertain outcomes. And woe betide you if you step outside the most mainstream of mainstreams, where there's not an overabundance of training data.
bagacrap · 10 days ago
It's because the AI coding fans already don't know programming languages, so they're learning a new language/interface either way.

That, and their software doesn't actually have any users, I find.

bagacrap commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
Transfinity · 10 days ago
> LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing as to whether to fix the code or the tests; and when it gets frustrating, they just delete the whole lot and start over.

I feel personally described by this statement. At least on a bad day, or if I'm phoning it in. Not sure if that says anything about AI - maybe just that the whole "mental models" part is quite hard.

bagacrap · 10 days ago
So LLMs are always phoning it in, on a bad day, etc. Great.

I recently tried to get AI to refactor some tests, which it proceeded to break. Then it iterated a bit till it had gotten the pass rate back up to 75%. At this point it declared victory. So yes, it does really seem like a human who really doesn't want to be there.

bagacrap commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
generalizations · 10 days ago
These LLM discussions really need everyone to mention what LLM they're actually using.

> AI is awesome for coding! [Opus 4]

> No AI sucks for coding and it messed everything up! [4o]

Would really clear the air. People seem to be evaluating the dumbest models (apparently because they don't know any better?) and then deciding the whole AI thing just doesn't work.

bagacrap · 10 days ago
My experience is that AI enthusiasts will always say, "well you just used the wrong model". And when no existing model works well, they say, "well in 6 months it will work". The utility of agentic coding for complex projects is apparently unfalsifiable.
bagacrap commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
emilecantin · 10 days ago
Yeah, I think it's pretty clear to a lot of people that LLMs aren't at the "build me Facebook, but for dogs" stage yet. I've had relatively good success with more targeted tasks, like "Add a modal that does this, take this existing modal as an example for code style". I also break my problem down into smaller chunks, and give them one by one to the LLM. It seems to work much better that way.
bagacrap · 10 days ago
I can already copy paste existing code and tweak it to do what I want (if you even consider that "software engineering"). The difference being that my system clipboard is deterministic, rather than infinitely creative at inventing new ways to screw up.
bagacrap commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
JimDabell · 10 days ago
LLMs can’t build software because we are expecting them to hear a few sentences, then immediately start coding until there’s a prototype. When they get something wrong, they have a huge amount of spaghetti to wade through. There’s little to no opportunity to iterate at a higher level before writing code.

If we put human engineering teams in the same situation, we’d expect them to do a terrible job, so why do we expect LLMs to do any better?

We can dramatically improve the output of LLM software development by using all those processes and tools that help engineering teams avoid these problems:

https://jim.dabell.name/articles/2025/08/08/autonomous-softw...

bagacrap · 10 days ago
There are a lot of human engineers who do a fine job in these situations, akshwally.

If it isn't easy to give commands to LLMs, then what is the purpose of them?

bagacrap commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
roxolotl · 17 days ago
Yea maybe it’s naive but I’ve started leaning towards preferring the devil I know. It also helps that Gemini is great.
bagacrap · 17 days ago
Plus it's the mega-monopoly that is already being scrutinized by the government. Every tech company seems to start out with too much credibility, which it has to whittle down little by little before we really hold it accountable.
bagacrap commented on If you're remote, ramble   stephango.com/ramblings... · Posted by u/lawgimenez
kmarc · 21 days ago
Indeed, I, as a fully remote, probably overworked person, sometimes wonder if I'm a loser just because I never

* pick up Becky from school

* feel under the weather today so I'll be offline and "take it easy" (then you never hear from me again that day)

* sorry "traffic jam" (10:00am)

* sorry "train canceled"

* will leave a bit early (2pm) for [insert random reason] appointment

While all these can be completely valid reasons, it's just funny hearing one of these daily. On a side note, I also kinda like my job and am not interested in slacking.

bagacrap · 21 days ago
I do tend to be a bit suspicious of the one-day "under the weather" events.

However I do think we need to make extra room for parents (I am not one, yet). I'm going to need a doctor who's younger than me when I'm 80+

Folks could always just disappear instead of announcing these things, but is that better? And as a senior on my team, I over announce certain stuff to let the other team members know that WLB is ok.

bagacrap commented on The untold impact of cancellation   pretty.direct/impact... · Posted by u/cbeach
praptak · 22 days ago
No, that's not what I concluded. I used the word "perhaps" to indicate that accusers telling the truth is a possibility. And it's one that cannot be ruled out even if we believe 100% of what the author wrote in this article.
bagacrap · 21 days ago
There is an assertion at stake here along the lines of "he took advantage of me" which is either objectively true or is not objectively true. It may be impossible for us to know what his actual motivation was at the time the undisputed events took place, but if everyone agreed his intentions were pure, then he wouldn't be ostracized. So there is no way for everyone's statements to be in perfect alignment with objective reality.
bagacrap commented on Lina Khan points to Figma IPO as vindication of M&A scrutiny   techcrunch.com/2025/08/02... · Posted by u/bingden
evolve2k · 22 days ago
What a stupid comment at the end of the article. The vindication is of having the company exist in the market in such a way as to encourage competition.
bagacrap · 21 days ago
"Figma is a massive success [...] because of the company’s innovative growth"

Given that this sell side analyst defines success as growth, this seems rather like a tautology.

u/bagacrap · Karma: 3443 · Cake day: August 6, 2011