Readit News
peheje commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
peheje · 3 days ago
I get the grief about AI, but I don't share it.

After ten years of professional coding, LLMs have made my work more fun. Not easier in the sense of being less demanding, but more engaging. I am involved in more decisions, deeper reviews, broader systems, and tighter feedback loops than before. The cognitive load did not disappear. It shifted.

My habits have changed. I stopped grinding algorithm puzzles because they started to feel like practicing celestial navigation in the age of GPS. It is a beautiful skill, but the world has moved on. The fastest path to a solution has always been to absorb existing knowledge. The difference now is that the knowledge base is interactive. It answers back and adapts to my confusion.

Syntax was never the job. Modeling reality was. When generation is free, judgment becomes priceless.

We have lost something, of course. There is less friction now, which means we lose the suffering we often mistook for depth. But I would rather trade that suffering for time spent on design, tradeoffs, and problems that used to be out of reach.

This doesn't feel like a funeral. It feels like the moment we traded a sextant for a GPS. The ocean is just as dangerous and just as vast, but now we can look up at the stars for wonder, rather than just for coordinates.

peheje commented on Oban, the job processing framework from Elixir, has come to Python   dimamik.com/posts/oban_py... · Posted by u/dimamik
simonw · 13 days ago
> Oban allows you to insert and process jobs using only your database. You can insert the job to send a confirmation email in the same database transaction where you create the user. If one thing fails, everything is rolled back.

This is such a key feature. Lots of people will tell you that you shouldn't use a relational database as a worker queue, but they inevitably miss out on how important transactions are for this - it's really useful to be able to say "queue this work if the transaction commits, don't queue it if it fails".

Brandur Leach wrote a fantastic piece on this a few years ago: https://brandur.org/job-drain - describing how, even if you have a separate queue system, you should still feed it by logging queue tasks to a temporary database table that can be updated as part of those transactions.

peheje · 12 days ago
This resonates so much. I spent years in an org watching "domain events" vanish into the ether because of the Dual Write Problem. We had these high-performance, sharded, distributed monsters that were "fast" on paper, but they couldn't guarantee a simple message would actually send after a record was saved.

Moving back to a rock-solid SQL-backed approach solved it overnight. But since there are no more "1% glitches," people have forgotten there was ever a fire. It’s a thankless win. The organization now thinks the system is "easy" and the "async purists" still lobby for a separate broker just to avoid "polluting" the DB. They’d rather trust complex, custom-built async logic than the most reliable part of their stack. (The transactional outbox pattern is essential, I just prefer mine backed by the same ACID guarantees as my data).
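The outbox idea can be sketched in a few lines. This is a minimal illustration using SQLite; the table and column names (users, outbox, kind, payload, done) are hypothetical, not from any real system. The point is that the user row and the "send confirmation email" job land in one transaction, so there is no dual-write window.

```python
import sqlite3

# Minimal sketch of the transactional outbox pattern (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute(
    "CREATE TABLE outbox "
    "(id INTEGER PRIMARY KEY, kind TEXT, payload TEXT, done INTEGER DEFAULT 0)"
)

def register_user(email: str) -> None:
    # One transaction: the user row and its outbox job commit together
    # or roll back together. The sqlite3 connection used as a context
    # manager commits on success and rolls back on exception.
    with conn:
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        conn.execute(
            "INSERT INTO outbox (kind, payload) VALUES (?, ?)",
            ("confirmation_email", email),
        )

def drain_outbox():
    # A separate worker polls for pending jobs, hands them off
    # (to an email sender or an external broker), and marks them done.
    jobs = conn.execute(
        "SELECT id, kind, payload FROM outbox WHERE done = 0"
    ).fetchall()
    for job_id, kind, payload in jobs:
        # ... hand off to the actual sender here ...
        conn.execute("UPDATE outbox SET done = 1 WHERE id = ?", (job_id,))
    conn.commit()
    return jobs

register_user("ada@example.com")
print(drain_outbox())
```

If register_user raises mid-transaction, neither row exists afterwards, which is exactly the guarantee the dual-write setups couldn't give.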

It’s tricky stuff. I'm an application dev, not a DB internalist, but I've realized that a week spent actually learning isolation levels and commit-ordering saves you a year of "distributed system" debugging. Even when teams layer an ORM like Entity Framework on top to "hide" the complexity, that SQL reality is still there. It’s not magic; it’s just ACID, and it’s been there the whole time.

peheje commented on Maybe comments should explain 'what' (2017)   hillelwayne.com/post/what... · Posted by u/zahrevsky
tetha · a month ago
This however misses an important point: 3 is not in our control. 3 in general is controlled by math people, and that 3 in particular is probably in the hands of a legal/regulation department. That's much more important information to highlight.

For example, at my last job, we shoved all constants managed by the balancing teams into a static class called BalancingTeam, to make it obvious that these values are not in our control. Tests, if written (a big if), should revolve around these constants so they don't become brittle.

peheje · a month ago
I like the idea of using a LegalConstants namespace or Code Owners to signal that we don't own those values.

However, I’d argue that being 'out of our control' is actually the main reason to test them. We treat these as acceptance tests. The goal isn't flexibility, it is safety. If a PR changes a regulated value, the test failure acts as a tripwire. It forces us to confirm with the PM (and the Jira ticket) that the change is intentional before merging. It catches what code structure alone might miss.
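A tripwire like that can be as small as a pinned assertion. This is a hypothetical sketch (the constant name and value are made up for illustration), not anyone's real compliance code:

```python
# Hypothetical regulated constant, owned by legal/compliance, not this team.
SETTLEMENT_WINDOW_DAYS = 3

def test_settlement_window_is_the_regulated_value():
    # Intentionally pinned. A failure here is not a bug to "fix" by
    # editing the test: it forces confirmation with the PM and the
    # ticket that mandates the new value before the change can merge.
    assert SETTLEMENT_WINDOW_DAYS == 3
```

Run it under pytest or any runner; the value is that a diff touching the constant cannot pass CI silently.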

peheje commented on Maybe comments should explain 'what' (2017)   hillelwayne.com/post/what... · Posted by u/zahrevsky
tetha · a month ago
> That's explaining "what" but also implicitly "why" - because that's how double-entry works and that's the tolerance banks allow for settlement delays. You can't really extract that into a method name without it becoming absurd.

That's why I've also started to explicitly decompose constants if possible. Something like `ageAlertThresholdHours = backupIntervalHours + approxBackupDurationHours + wiggleRoomHours`. Sure, this takes 4 constants and is longer to type than "28 hours".

However, it communicates how I think about this, how these 28 hours come about, and how to tweak them if the alerting is being annoying. With a bare "28 hours", it's easy to guess that you'd bump it to 29 if it throws false positives. But decomposed like this, you can see that the backup is supposed to take, say, 3 hours, except now it takes 4 due to growth of the system. That lets you apply an "obviously correct" fix of bumping up the approximate backup duration based on monitoring data.

peheje · a month ago
I don't necessarily disagree with providing context, but my concern is that comments eventually lie. If the business rule evolves (say the window moves to 5 days) the comment becomes a liability the moment someone updates the code but forgets the prose.

The comment also leaves me with more questions: how do you handle multiple identical amounts in that window? I would still have to read the implementation to be sure.

I would prefer encoding this in an Uncle Bob style test. It acts as living documentation that cannot get out of sync with the code and explains the why through execution. For example:

  test("should_match_debit_with_credit_only_if_within_three_day_settlement_window", () => {
      const debit = A_Transaction().withAmount(500.00).asDebit().on(JANUARY_1);
      const creditWithinWindow = A_Transaction().withAmount(500.00).asCredit().on(JANUARY_4);
      const creditOutsideWindow = A_Transaction().withAmount(500.00).asCredit().on(JANUARY_5);
  
      expect(Reconciliation.tryMatch(debit, creditWithinWindow)).isSuccessful();
      expect(Reconciliation.tryMatch(debit, creditOutsideWindow)).hasFailed();
  });
This way, the 3-day rule is a hard requirement that fails the build if broken, rather than a suggestion in a comment block.

peheje commented on Show HN: Use Claude Code to Query 600 GB Indexes over Hacker News, ArXiv, etc.   exopriors.com/scry... · Posted by u/Xyra
adammarples · a month ago
This is obviously cool, and I don't want to take away from that, but using a shortcut to make training a bit faster is qualitatively different from producing an AI which is actually more intelligent. The more intelligent AI can recursively produce a more intelligent one and so on, hence the explosion. If it's a bit faster to train but the same result then no explosion. It may be that finding efficiencies in our equations is low hanging fruit, but developing fundamentally better equations will prove impossible.
peheje · a month ago
Agreed. This is a small step :) And humans are still definitely in the loop.
peheje commented on Show HN: Use Claude Code to Query 600 GB Indexes over Hacker News, ArXiv, etc.   exopriors.com/scry... · Posted by u/Xyra
adammarples · a month ago
When AI gets so good it can improve on itself
peheje · a month ago
Actually, this has already happened in a very literal way. Back in 2022, Google DeepMind used an AI called AlphaTensor to "play" a game where the goal was to find a faster way to multiply matrices, the fundamental math that powers all AI.

To understand how big this is, you have to look at the numbers:

The Naive Method: This is what most people learn in school. To multiply two 4x4 matrices, you need 64 multiplications.

The Human Record (1969): For over 50 years, the "gold standard" was Strassen’s algorithm, which used a clever trick to get it down to 49 multiplications.

The AI Discovery (2022): AlphaTensor beat the human record by finding a way to do it in just 47 steps.

The real "intelligence explosion" feedback loop happened even more recently with AlphaEvolve (2025). While the 2022 discovery only worked for specific "finite field" math (mostly used in cryptography), AlphaEvolve used Gemini to find a shortcut (48 steps) that works for the standard complex numbers AI actually uses for training.

Because matrix multiplication accounts for the vast majority of the work an AI does, Google used these AI-discovered shortcuts to optimize the kernels in Gemini itself.

It’s a literal cycle: the AI found a way to rewrite its own fundamental math to be more efficient, which then makes the next generation of AI faster and cheaper to build.
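For context, the trick behind those counts is visible at the 2x2 level: Strassen's 1969 construction multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8, and applying it recursively to 4x4 blocks gives 7 x 7 = 49, the record AlphaTensor later beat with 47 (over finite fields). A minimal sketch:

```python
def strassen_2x2(A, B):
    # Strassen (1969): 7 multiplications instead of the naive 8.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    # Schoolbook method: 8 multiplications.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]], same as naive_2x2(A, B)
```

Saving one multiplication per 2x2 block compounds under recursion, which is why shaving 49 down to 47 or 48 matters at the scale of AI training.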

https://deepmind.google/blog/discovering-novel-algorithms-wi... https://www.reddit.com/r/singularity/comments/1knem3r/i_dont...

peheje commented on Show HN: 22 GB of Hacker News in SQLite   hackerbook.dosaygo.com... · Posted by u/keepamovin
3eb7988a1663 · a month ago
While I suspect DuckDB would compress better, given the ubiquity of SQLite, it seems a fine standard choice.
peheje · a month ago
The data is dominated by big, unique TEXT columns; I'm unsure how those could compress much better when grouped by column, but it would be interesting to know.
peheje commented on Gemini 3 Flash: Frontier intelligence built for speed   blog.google/products/gemi... · Posted by u/meetpateltech
hubraumhugo · 2 months ago
You can get your HN profile analyzed and roasted by it. It's pretty funny :) https://hn-wrapped.kadoa.com
peheje · 2 months ago
This is great. I literally "LOL'd".
peheje commented on Gemini 3 Flash: Frontier intelligence built for speed   blog.google/products/gemi... · Posted by u/meetpateltech
caminanteblanco · 2 months ago
Does anyone else understand what the difference is between Gemini 3 'Thinking' and 'Pro'? Thinking "Solves complex problems" and Pro "Thinks longer for advanced math & code".

I assume that these are just different reasoning levels for Gemini 3, but I can't even find mention of there being 2 versions anywhere, and the API doesn't even mention the Thinking-Pro dichotomy.

peheje · 2 months ago
I think:

Fast = Gemini 3 Flash without thinking (or very low thinking budget)

Thinking = Gemini 3 Flash with high thinking budget

Pro = Gemini 3 Pro with thinking

peheje commented on Using LLMs at Oxide   rfd.shared.oxide.computer... · Posted by u/steveklabnik
peheje · 2 months ago
I know I'm walking into a den of wolves here and will probably get buried in downvotes, but I have to disagree with the idea that using LLMs for writing breaks some social contract.

If you hand me a financial report, I expect you used Excel or a calculator. I don't feel cheated that you didn't do long division by hand to prove your understanding. Writing is no different. The value isn't in how much you sweated while producing it. The value is in how clear the final output is.

Human communication is lossy. I think X, I write X' (because I'm imperfect), you understand Y. This is where so many misunderstandings and workplace conflicts come from. People overestimate how clear they are. LLMs help reduce that gap. They remove ambiguity, clean up grammar, and strip away the accidental noise that gets in the way of the actual point.

Ultimately, outside of fiction and poetry, writing is data transmission. I don't need to know that the writer struggled with the text. I need to understand the point clearly, quickly, and without friction. Using a tool that delivers that is the highest form of respect for the reader.

u/peheje

Karma: 280 · Cake day: November 12, 2017