coffeeaddict1 commented on A few random notes from Claude coding quite a bit last few weeks   twitter.com/karpathy/stat... · Posted by u/bigwheels
inerte · 12 days ago
You don't simply put a body in a seat and get software. There are entire systems enabling this trust: college, resume, samples, referral, interviews, tests and CI, monitoring, mentoring, and performance feedback.

And accountability can still exist? Is the engineer who created or reviewed a Pull Request using Claude Code less accountable than one who used PICO?

coffeeaddict1 · 12 days ago
> And accountability can still exist? Is the engineer who created or reviewed a Pull Request using Claude Code less accountable than one who used PICO?

The point is that in the human scenario, you can hold the human agents accountable. You cannot do that with AI. Of course, you as the orchestrator of agents will be accountable to someone, but you won't have the benefit of holding your "subordinates" accountable, which is what you do in a human team. IMO, this renders the whole situation vastly different (whether good or bad I'm not sure).

coffeeaddict1 commented on A few random notes from Claude coding quite a bit last few weeks   twitter.com/karpathy/stat... · Posted by u/bigwheels
atonse · 13 days ago
> LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

I’ve always said I’m a builder even though I’ve also enjoyed programming (but for an outcome, never for the sake of the code)

This perfectly sums up what I’ve been observing between people like me (builders) who are ecstatic about this new world and programmers who talk about the craft of programming, sometimes butting heads.

One viewpoint isn’t necessarily more valid, just a difference of wiring.

coffeeaddict1 · 12 days ago
But how can you be a responsible builder if you don't trust the LLMs to do the "right thing"? Suppose you're the head of a software team and you've picked the best candidates for a given project. In that scenario, I can see how you can trust the team members to orchestrate the implementation of your ideas and intentions without being intimately familiar with the details. Can we place the same trust in LLM agents? I'm not sure. Even if one could somehow prove that LLMs are very reliable, the fact that AI agents aren't accountable beings renders the whole situation vastly different from the human equivalent.
coffeeaddict1 commented on Unrolling the Codex agent loop   openai.com/index/unrollin... · Posted by u/tosh
adam_patarino · 16 days ago
I’ve never understood checkpoints / forks. When do you use them?
coffeeaddict1 · 16 days ago
Usually, I tell the agent to try out an idea, and if I don't like the implementation or approach, I want to undo the code changes. Then I start again, feeding it more information so it can execute a different idea, or the same one with a better plan. This also helps keep the context window small.
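For what it's worth, the try/reject/retry loop above can be approximated today without built-in checkpoint support, using an ordinary git branch as the checkpoint. A minimal sketch (the branch name `agent-attempt` and the temp-repo setup are purely illustrative, not Codex features):

```shell
# Set up a throwaway repo to demonstrate the checkpoint pattern.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m init

echo "original" > app.txt
git add app.txt
git -c user.email=a@b -c user.name=t commit -q -m "baseline"

# Checkpoint: branch off before letting the agent touch the tree.
git checkout -q -b agent-attempt
echo "agent rewrite" > app.txt        # the agent's (rejected) change
git add app.txt
git -c user.email=a@b -c user.name=t commit -q -m "agent attempt"

# Rejected: return to the checkpoint and discard the attempt,
# then re-prompt the agent with more context for a fresh try.
git checkout -q -
git branch -q -D agent-attempt
```

After the rollback, `app.txt` again contains `original` and no trace of the attempt remains, so the next try starts from a clean baseline.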
coffeeaddict1 commented on Unrolling the Codex agent loop   openai.com/index/unrollin... · Posted by u/tosh
coffeeaddict1 · 16 days ago
What I really want from Codex is checkpoints à la Copilot. There are a couple of issues [0][1] open about this on GitHub, but it doesn't seem to be a priority for the team.

[0] https://github.com/openai/codex/issues/2788

[1] https://github.com/openai/codex/issues/3585

coffeeaddict1 commented on Claude's new constitution   anthropic.com/news/claude... · Posted by u/meetpateltech
spicyusername · 18 days ago

    objective truth

    moral absolutes
I wish you much luck on linking those two.

A well written book on such a topic would likely make you rich indeed.

    This rejects any fixed, universal moral standards
That's probably because we have yet to discover any universal moral standards.

coffeeaddict1 · 18 days ago
> That's probably because we have yet to discover any universal moral standards.

This is true. Moral standards don't seem to be universal throughout history. I don't think anyone can debate this. However, this is different from claiming there is no objective morality.

In other words, humans may exhibit varying moral standards, but that doesn't mean those standards are in correspondence with moral truths. Killing someone may or may not have been considered wrong in different cultures, but that doesn't tell us much about whether killing is indeed wrong or right.

coffeeaddict1 commented on The struggle of resizing windows on macOS Tahoe   noheger.at/blog/2026/01/1... · Posted by u/happosai
drob518 · a month ago
Yea, the programmers aren’t to blame here. In fact some of the visual effects they have achieved are pretty cool. The designers are at fault because they prioritized visuals over usability. Literally nobody I know thinks “Liquid Glass” has been an improvement. The feedback is universally negative.
coffeeaddict1 · a month ago
I hate it too, but to my surprise, all of my colleagues (with an iPhone) said they love it because it looks great.
