Readit News
101011 commented on After my dad died, we found the love letters   jenn.site/after-my-dad-di... · Posted by u/eatitraw
schneems · 4 months ago
I agree. To me, it's like a blameless retro. You can either seek understanding or seek blame, but not both at once.

The author seemingly had a lot of judgement and blame for the dad before finding this out. It sounds like they are seeking understanding. I think the last line makes that clear:

> the evening we found the love letters. his entire life, and mine as well

And it's not to say someone can't attach judgement to characters, or that no one should hold blame. But I think it's important to honor what the author is seeking.

101011 · 4 months ago
> You can either seek understanding or seek blame, but not both at once.

This is the first time I've heard this statement (not necessarily the idea), but I found it incredibly beautiful in its simplicity - thanks for sharing!

Do you know of any origins for it? With some searching I found some adjacent threads in Stoicism and Buddhism, but nothing quite the same.

101011 commented on Anthropic agrees to pay $1.5B to settle lawsuit with book authors   nytimes.com/2025/09/05/te... · Posted by u/acomjean
mbrochh · 6 months ago
After their recent change in tune to retain data for longer and to train on our data, I deleted my account.

Try to do that. There is no easy way to delete your account. You need to reach out to their support via email. Incredibly obnoxious dark pattern. I hate OpenAI, but everything with Anthropic also smells fishy.

We need more and better players. I hope that xAI will give them all some good competition, but I have my doubts.

101011 · 6 months ago
I think it's fair to call out a dark pattern for account deletion (which, for better or worse, is common practice) - but the data training and data retention settings can both be disabled. I was much more surprised that they DIDN'T train on user data for as long as they did, when every other LLM provider was sucking in as much data as it could (OpenAI, Google, Meta, and xAI - although Meta gets a pass in my head for providing the open-weight models).

Anthropic has made AI safety a central pillar of their ethos and has shared a lot of information about what they're doing to responsibly train models... personally, I found a lot of corporate-speak on this topic from OpenAI, but very little actual information.

101011 commented on Counter-Strike: A billion-dollar game built in a dorm room   nytimes.com/2025/08/18/ar... · Posted by u/asnyder
vehemenz · 7 months ago
Even though I played CS at the highest level for the time, CAL-i, I always thought the maps were too small for 5v5 and that competitive would have benefitted from 7v7 or 8v8. That was how pubs were, and the dynamics were better. I think 5v5 won out due to practicality.
101011 · 7 months ago
Hey, I was CAL-i too! At least...that's what we told people in IRC :P
101011 commented on Ask HN: Anyone struggling to get value out of coding LLMs?    · Posted by u/bjackman
101011 · 10 months ago
If I could offer another suggestion beyond what's been discussed so far - try Claude Code. They are doing something different from the other offerings in how they manage context with the LLM, and the results are quite different from everything else.

Also, the big difference with this tool is that you spend more time planning. Don't expect it to one-shot; you need to think about how you go from epic to task first, THEN let it execute.

101011 commented on Running Qwen3 on your macbook, using MLX, to vibe code for free   localforge.dev/blog/runni... · Posted by u/avetiszakharyan
rcarmo · 10 months ago
I purposefully used exactly the same thing I did with Claude and Gemini to see how the models dealt with ambiguity. It shouldn't have degraded the chain of thought to the point where it starts looping.
101011 · 10 months ago
The trick isn't to generate a litmus test for agentic development - it's to change your workflow to game-plan solutions and decompose problems (like you would break a Jira epic into stories), and THEN have it build something for you.
101011 commented on Running Qwen3 on your macbook, using MLX, to vibe code for free   localforge.dev/blog/runni... · Posted by u/avetiszakharyan
chuckadams · 10 months ago
Anyone know of a setup, perhaps with MCP, where I can get my local LLM to work in tandem on tasks, compress context, or otherwise act in concert with the cloud agent I'm using with Augment/Cursor/whatever? It seems silly that my shiny new M3 box just renders the UI while the cloud LLM alone refactors my codebase, I feel they could negotiate the tasks between themselves somehow.
101011 · 10 months ago
This is the closest I've found that's akin to Claude Code: https://aider.chat/
101011 commented on What makes code hard to read: Visual patterns of complexity (2023)   seeinglogic.com/posts/vis... · Posted by u/homarp
titzer · a year ago
The last one is better as:

SELECT * FROM authors WHERE author_id IN (SELECT author_id FROM books WHERE pageCount > 1000);

But I think you're missing the point. The functional/procedural style of writing is sequentialized and potentially slow. It's not transactional, doesn't handle partial failure, and isn't parallelizable (without heavy lifting from the language - maybe LINQ can do this? but definitely not in Java).

With SQL, you push the entire query down into the database engine and expose it to the query optimizer. And SQL is actually supported by many, many systems. And it's what people have been writing for 40+ years.

101011 · a year ago
Agreed on the revised SQL!

But I don't think I missed the point: the original text talks about measuring complexity as a function of operators, operands, and nested code. The true one-to-one mapping is more complex than the snippet in the original comment I replied to.
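To make the comparison concrete, a rough procedural one-to-one mapping might look like the following sketch (hypothetical in-memory Book/Author records, not from the article) - even this tidy version carries more operators, operands, and nesting than the SQL:

```java
import java.util.*;
import java.util.stream.*;

record Book(int authorId, int pageCount) {}
record Author(int authorId, String name) {}

class LongBookAuthors {
    // Procedural equivalent of:
    //   SELECT * FROM authors WHERE author_id IN
    //     (SELECT author_id FROM books WHERE pageCount > 1000);
    static List<Author> find(List<Book> books, List<Author> authors) {
        // "Subquery": collect the ids of authors with a 1000+ page book
        Set<Integer> ids = books.stream()
                .filter(b -> b.pageCount() > 1000)
                .map(Book::authorId)
                .collect(Collectors.toSet());
        // "Outer query": keep only the matching author records
        return authors.stream()
                .filter(a -> ids.contains(a.authorId()))
                .collect(Collectors.toList());
    }
}
```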

101011 commented on What makes code hard to read: Visual patterns of complexity (2023)   seeinglogic.com/posts/vis... · Posted by u/homarp
titzer · a year ago
SELECT DISTINCT author FROM books WHERE pageCount > 1000;
101011 · a year ago
In fairness, if this was in a relational data store, the same code as above would probably look more like...

SELECT DISTINCT authors.some_field FROM books JOIN authors ON books.author_id = authors.author_id WHERE books.pageCount > 1000

And if you wanted to grab the entire authors record (like the code does) you'd probably need some more complexity in there:

SELECT * FROM authors WHERE author_id IN ( SELECT DISTINCT authors.author_id FROM books JOIN authors ON books.author_id = authors.author_id WHERE books.pageCount > 1000 )

101011 commented on API design note: Beware of adding an "Other" enum value   devblogs.microsoft.com/ol... · Posted by u/luu
janci · a year ago
How that works when you need to distinguish between "no value provided" and "a value that is not in the list" - in some applications they have different semantics.
101011 · a year ago
You could treat it as a nullable Option<SomeType>.

In practice, as it relates to enums, I don't usually see 'no value provided' as a frequently used case - it's more likely that 'no value provided' maps to a more informative enum value.
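One way to sketch the distinction the parent asks about (hypothetical Color enum; everything here is illustrative, not a specific library's API):

```java
import java.util.Optional;

enum Color { RED, GREEN, BLUE }

// Distinguishes "no value provided" (both fields empty) from
// "a value not in the list" (raw string kept, parsed value empty).
record ColorField(Optional<Color> parsed, Optional<String> raw) {
    static ColorField missing() {
        return new ColorField(Optional.empty(), Optional.empty());
    }

    static ColorField of(String value) {
        try {
            return new ColorField(Optional.of(Color.valueOf(value)), Optional.of(value));
        } catch (IllegalArgumentException e) {
            // Unrecognized value: keep the raw string around for diagnostics
            return new ColorField(Optional.empty(), Optional.of(value));
        }
    }
}
```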

101011 commented on API design note: Beware of adding an "Other" enum value   devblogs.microsoft.com/ol... · Posted by u/luu
coin · a year ago
Just call it "unknown" or "unspecified" or better yet use an optional to hold the enum.
101011 · a year ago
This ended up being the preferred pattern we moved to.

If, like us, you were passing the object between two applications: the owning API would serialize the enum value as a string, and we had a client helper method that would parse the string back into an Optional enum value.

If the owning service started sending a new string value between services, it wouldn't break any downstream clients, because the clients would just end up with an empty Optional.
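The pattern described above might look roughly like this on the client side (a sketch with a hypothetical Status enum; names are illustrative, not our actual code):

```java
import java.util.Optional;

enum Status { ACTIVE, INACTIVE }

class StatusParser {
    // Client-side helper: the owning API serializes the enum as a plain
    // string; unknown (newly added) values parse to Optional.empty()
    // instead of throwing, so they don't break downstream clients.
    static Optional<Status> parse(String raw) {
        try {
            return Optional.of(Status.valueOf(raw));
        } catch (IllegalArgumentException | NullPointerException e) {
            return Optional.empty();
        }
    }
}
```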

u/101011 · Karma: 310 · Cake day: April 10, 2015