Readit News
patcon commented on Show HN: I was curious about spherical helix, ended up making this visualization   visualrambling.space/movi... · Posted by u/damarberlari
srean · 3 days ago
These used to be super important in early oceanic navigation. It is easier to maintain a constant bearing throughout the voyage, so that's the plan sailors would try to stick close to. These paths are called loxodromic curves, or rhumb lines.

https://en.m.wikipedia.org/wiki/Rhumb_line

Mercator maps made it easier to compute what that bearing ought to be.

https://en.m.wikipedia.org/wiki/Mercator_projection
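
To make that concrete, here's a rough Python sketch (the helper name and the endpoints are made-up, purely for illustration): on a Mercator chart the rhumb line is a straight line, so the constant bearing is just atan2 of the longitude difference against the difference in Mercator y, the so-called isometric latitude.

  import math

  def rhumb_bearing(lat1, lon1, lat2, lon2):
      """Constant bearing (degrees clockwise from north) of the rhumb line
      between two points given in degrees."""
      phi1, phi2 = math.radians(lat1), math.radians(lat2)
      dlon = math.radians(lon2 - lon1)
      # take the shorter way around the globe
      if dlon > math.pi:
          dlon -= 2 * math.pi
      elif dlon < -math.pi:
          dlon += 2 * math.pi
      # difference in Mercator y-coordinate (isometric latitude)
      dpsi = math.log(math.tan(math.pi / 4 + phi2 / 2)
                      / math.tan(math.pi / 4 + phi1 / 2))
      return math.degrees(math.atan2(dlon, dpsi)) % 360

  # e.g. roughly Lisbon to New York: comes out a couple of degrees north of due west
  print(rhumb_bearing(38.7, -9.1, 40.7, -74.0))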

This configuration is a mathematical gift that keeps giving. Look at it in a polar projection and you get a logarithmic spiral. Look at it side on and you get a wave packet. Its mathematics is so interesting that Erdős had to have a go at it [0].
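
For the spiral claim, a small numpy sketch (assuming the polar view is a stereographic projection from the pole, with an arbitrary bearing picked for the demo): parametrize the loxodrome by its isometric latitude, project it, and log(radius) comes out linear in longitude, which is exactly a logarithmic spiral.

  import numpy as np

  beta = np.radians(80)                 # constant bearing from north (arbitrary choice)
  lat = np.linspace(0.0, 1.4, 500)      # latitudes in radians, stopping short of the pole

  psi = np.log(np.tan(np.pi / 4 + lat / 2))   # isometric latitude (Mercator y)
  lon = np.tan(beta) * psi                    # loxodrome condition: dlon/dpsi = tan(beta)

  # stereographic projection from the pole sends latitude phi to radius exp(psi)
  r = np.exp(psi)

  # log(r) is linear in lon, i.e. a logarithmic spiral; slope should equal 1/tan(beta)
  slope = np.polyfit(lon, np.log(r), 1)[0]
  print(slope, 1 / np.tan(beta))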

On a meta note, today seems to be spherical geometry day on HN.

https://news.ycombinator.com/item?id=44956297

https://news.ycombinator.com/item?id=44939456

https://news.ycombinator.com/item?id=44938622

[0] Paul Erdős, "Spiraling the Earth with C. G. J. Jacobi", Am. J. Phys. 68(10), 888 (2000).

https://pubs.aip.org/aapt/ajp/article-abstract/68/10/888/105...

patcon · 3 days ago
Jeez Erdos. This man was so prolific he was still publishing 4 years after he died :o
patcon commented on Vibe coding creates a bus factor of zero   mindflash.org/coding/ai/a... · Posted by u/AntwaneB
p1necone · 3 days ago
If you're using LLMs to shit out large swathes of unreviewed code, you're doing it wrong, and your project is indeed doomed to become unmaintainable the minute it goes down a wrong path architecturally, or you hit a bug with complex causes, or whatever.

Where LLMs excel is in situations like:

* I have <special snowflake pile of existing data structures> that I want to apply <well known algorithm> to - bam, half a day's work done in 2 minutes.

* I want to set up test data and the bones of unit tests for <complicated thing with lots of dependencies> - bam, half a day's work done in 2 minutes (note I said to use the LLM for a starting point - don't generate your actual test cases with it, at least not without very careful review; I've seen a lot of really dumb AI-generated unit tests).

* I want a visual web editor for <special snowflake pile of existing data structures> that saves to an SQLite db and has a separate backend API - bam, 3 days' work done in 2 minutes.

* I want to apply some repetitive change across a large codebase that's just too complicated for a clever regex - bam, work you literally would never have bothered to do before, done in 2 minutes.

You don't need to solve hard problems to massively increase your productivity with LLMs, you just need to shave yaks. Even when it's not a time saver, it still lets you focus mental effort on interesting problems rather than burning out on endless chores.

patcon · 3 days ago
I like the spirit of these, but there are waaaay more. You only mentioned the ones for professional and skilled coders who have another option. What about all the sub-examples for people all the way from "technically unskilled" to "baby-step coders"? There's a bunch of things they can now just do and get in front of people without us.

Going from "thing in my head that I need to pay someone $100/h to try" to "thing a user can literally use in 3 minutes that will make that hypothetical-but-nonexistent $100/h person cry"... there is way more texture of roles in that territory than your punchy comment gives credit for. No one cares if it's maintainable if they now know what's possible, and that matters 1000x more than future maintenance concerns. People spend years working up to this step that someone can now simply jank out* in 3 minutes.

* to jank out. verb. 1. to crank out via vibe-coding, in the sense of productive output.

patcon commented on Why Metaflow?   docs.metaflow.org/introdu... · Posted by u/savin-goyal
thomasingalls · 7 days ago
What do people do to curate/version/transform their raw datasets these days? I am vaguely aware of the "chuck it all into S3" strategy for hanging onto raw data, and related strategies where instead of S3 it's a DB of some flavor. What are folks doing for record-keeping on what today's raw data contains vs tomorrow's?

And the next step - a curated dataset has a time-bound provenance - what are folks doing to keep track of the transformation/cleaning steps that make the raw data useful at the time it's being processed? Does this bit fall under the purview of Metaflow, or is this different tooling?

Or maybe my assumptions are off base! Curious about what other teams are doing with their datasets.

patcon · 7 days ago
I've been exploring Kedro and Kedro-Viz lately, in case that's in the vicinity of your question. It ties most closely with MLflow for artifacts, but storing locally works fine too.
patcon commented on GPT-5 vs. Sonnet: Complex Agentic Coding   elite-ai-assisted-coding.... · Posted by u/intellectronica
patcon · 15 days ago
> One continuous difference: while GPT-5 would do lots of thinking then do something right the first time, Claude frantically tried different things — writing code, executing commands, making pretty dumb mistakes [...], but then recovering. This meant it eventually got to correct implementation with many more steps.

Sounds like Claude muddles. I consider that the stronger tactic.

I sure hope GPT-5 is muddling on the backend, else I suspect it will be very brittle.

Re: https://contraptions.venkateshrao.com/p/massed-muddler-intel...

> Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root ["godding through"] method fails entirely.

patcon commented on The Whispering Earring   croissanthology.com/earri... · Posted by u/ZeljkoS
abeppu · 16 days ago
I think there's a meaningful difference between a tool that reminds you to take a beat before speaking vs being told what to say. For example, cues that help you avoid an impulsive reaction of anger are, I think, a step away from being a reactive automaton.
patcon · 16 days ago
My sensibility is that agency is about "noticing". The content of the information seems perhaps less important than the attention-allocation mechanism that directs our attention to something.

If you write all your own words, but without the ability to direct your attention to the thing that needed words conjured around it, did you really do anything important at all? (Yes, that's perhaps controversial :) )

patcon commented on LLM Inflation   tratt.net/laurie/blog/202... · Posted by u/ingve
djoldman · 17 days ago
> Bob needs a new computer for his job.... In order to obtain a new work computer he has to create a 4 paragraph business case explaining why the new computer will improve his productivity.

> Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM.... The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.” The manager approves the request.

"LLM inflation" as a "bad" thing often reflects a "bad" system.

In the case described, the bad system is the expectation that one has to write, or is more likely to obtain a favorable result from writing, a 4 paragraph business case. Since Bob inflates his words to fill 4 paragraphs and the manager deflates them to summarise, it's clear that the 4 paragraph expectation/incentive is the "bad" thing here.

This phenomenon of assigning the cause of "bad" things to LLMs is pretty rife.

In fact, one could say that the LLM is optimizing given the system's requirements: it makes it a lot easier to get around this bad framework.

patcon · 17 days ago
I love this article for how it gets the thinking, and I love your response.

I've been aware of a similar dynamic in politics, where the collective action/intelligence of the internet destroyed all the old signals politicians used to rely on. Emails don't mean anything like what letters used to mean. Even phone calls are automated now. Your words and experience matter more in a statistical, big-data sense than individually.

---

This puts me in sci-fi world-building mode, wondering what the absurd extension is... maybe it's just proving burned time investment. So maybe in an imagined world where LLMs are available to all as extensions of thought via neural implant, you can't be taken seriously for even the simplest direct statements unless you prove your mind sat and did nothing (aka wasted its time) for some arbitrary period. So if you sat in the corner and registered inactive boredom for 2h, and attached a non-renewable proof of that to your written words, then people would take your perspective seriously, because you expended (though not "gave") your limited attention/time on the request for some significant amount of time.

patcon commented on Show HN: An open-source e-book reader for conversational reading with an LLM   github.com/shutootaki/boo... · Posted by u/takigon
patcon · 17 days ago
It's interesting that, if this became commonplace, it could be much easier to get value out of poorly written books...

Some people have deep knowledge but don't have the skills to untangle context and lay out the right learning path for a reader. These people likely bell-curve around certain neurotypes, which perhaps hold certain sorts of knowledge more strongly.

Right now, those people shouldn't publish. But if LLMs could augment poorly structured content (not incorrect content, just poorly structured), that would perhaps open things up for more people to share their wisdom.

Anyhow, just thinking out loud here. I'm sure there are some massive downsides coming to mind for people reading this :)

patcon commented on The Geological Sublime   harpers.org/archive/2025/... · Posted by u/prismatic
rriley · a month ago
This reminded me of The Overstory by Richard Powers: once you start seeing the world on the time and spatial scales of trees, everything shifts. Human timelines feel like flickers. It's not just a change in perspective; it's a change in what even counts as meaningful.
patcon · a month ago
This story was so unexpectedly emotional. I'd never read a book where I so palpably felt that the main characters were in fact the non-conscious ones.
patcon commented on The Geological Sublime   harpers.org/archive/2025/... · Posted by u/prismatic
turnsout · a month ago

  > As for the amber stream pouring into my gas tank as I stand at the self-service pump on my way to Walden, I now take it and all the other plant-based fossil fuels to be an infinity of petrified sunlight, best understood through the compound lens of the Lyell-Darwin eye.
This is the most nihilistic essay I've read in a long time. It contemplates climate change and the extinction of humanity with a lyrical nonchalance that is misanthropic at best. Keep pumping that liquid sunshine, Lewis.

Every single one of us needs to wake the fuck up. The author is right that the planet itself will be fine without us. If we want to survive as a species, we can't bask in decadence and romanticize the decline.

patcon · a month ago
I think humans have a coping mechanism to find some beauty in things they're forced to participate in, good or bad.

Maybe there's some small edge of anti-fragility in that -- we seem more willing to confront beauty and inspect its contours.

patcon commented on US AI Action Plan   ai.gov/action-plan... · Posted by u/joelburget
rwmj · a month ago
Definitely a risk, and already happening, but I presume mostly closed source AIs are used for this? Like, people using the ChatGPT APIs to generate spam; or Grok just doing its normal thing. Don't see how the open vs closed debate has much to do with it.
patcon · a month ago
You can't see how a hosted private model (which can monitor usage and adapt its mechanisms accordingly) has a different risk profile than an open-weight model (which is unmonitorable and becomes runnable on more and more hardware every month)?

One can become more controlled and wrangle in the edge-cases, and the other has exploding edges.

You can have your politics around the value of open-source models, but I find it hard to argue that there aren't MUCH higher risks with the lack of containment of open-weight models.

u/patcon

Karma: 3176 · Cake day: November 11, 2010
About
https://nodescription.net/notes https://github.com/patcon https://twitter.com/patcon_ https://hypha.coop https://civictech.ca [ my public key: https://keybase.io/patcon; my proof: https://keybase.io/patcon/sigs/PEhRXmqSserZmdbUByVJg4i7Pi80yFhN98UteG1ZiV0 ]