Readit News
danenania commented on Waitgroups: What they are, how to use them and what changed with Go 1.25   mfbmina.dev/en/posts/wait... · Posted by u/mfbmina
unsnap_biceps · 9 hours ago
buffered channels won't help here. That's just how many results can be buffered before the remaining results can be added to the channel. It doesn't wait until all of them are done before returning a result to the consumer.
danenania · 8 hours ago
> It doesn't wait until all of them are done before returning a result to the consumer.

Right, but it prevents goroutine leaks. In these situations I'm usually fine with bailing on the first error, but I grant that's not always desirable. If it's not, I would collect and join errors and return those along with partial results (if those are useful).

danenania commented on Waitgroups: What they are, how to use them and what changed with Go 1.25   mfbmina.dev/en/posts/wait... · Posted by u/mfbmina
c0balt · 10 hours ago
You would probably benefit from errgroup, https://pkg.go.dev/golang.org/x/sync/errgroup

But channels already do the waiting part for you.

danenania · 9 hours ago
Thanks! Looking into errgroup.
danenania commented on Waitgroups: What they are, how to use them and what changed with Go 1.25   mfbmina.dev/en/posts/wait... · Posted by u/mfbmina
javier2 · 10 hours ago
How do you handle err here? If you return early, the goroutines will leak
danenania · 9 hours ago
Ah, good point—should be using a buffered channel to avoid that:

  errCh := make(chan error, len(urls))

danenania commented on Waitgroups: What they are, how to use them and what changed with Go 1.25   mfbmina.dev/en/posts/wait... · Posted by u/mfbmina
danenania · 10 hours ago
I like WaitGroup as a concept, but I often end up using a channel instead for clearer error handling. Something like:

  errCh := make(chan error)
  for _, url := range urls {
    go func(url string) {
      _, err := http.Get(url)
      errCh <- err
    }(url)
  }

  for range urls {
    err := <-errCh
    if err != nil {
      // handle error
    }
  }
Should I be using WaitGroup instead? If I do, don't I still need an error channel anyway—in which case it feels redundant? Or am I thinking about this wrong? I rarely encounter concurrency situations that the above pattern doesn't seem sufficient for.

danenania commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
api · a day ago
Go passes on a lot of ideas that are popular in academic language theory and design, with mixed but I think mostly positive results for its typical use cases.

Its main virtues are low cognitive load and encouraging simple straightforward ways of doing things, with the latter feeding into the former.

Languages with sophisticated powerful type systems and other features are superior in a lot of ways, but in the hands of most developers they are excuses to massively over-complicate everything. Sophomore developers (not junior but not yet senior) love complexity and will use any chance to add as much of it as they can, either to show off how smart they are, to explore, or to try to implement things they think they need but actually don't. Go somewhat discourages this, though devs will still find a way of course.

Experienced developers know that complexity is evil and simplicity is actually the sign of intelligence and skill. A language with advanced features is there to make it easier and simpler to express difficult concepts, not to make it more difficult and complex to express simple concepts. Not every language feature should always be used.

danenania · a day ago
Oh yeah. Said another way, it discourages nerd-sniping, which in practice is a huge problem with functional programming and highly expressive type systems.

You end up creating these elegant abstractions that are very seductive from a programmer-as-artist perspective, but usually a distraction from just getting the work done in a good enough way.

You can tell that the creators of Go are very familiar with engineer psychology and what gets them off track. Go takes away all shiny toys.

danenania commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
theshrike79 · 2 days ago
The comparative strictness and simplicity of Go also make it a good option for LLM-assisted programming.

Every single piece of Go 1.x code scraped from the internet and baked in to the models is still perfectly valid and compiles with the latest version.

danenania · a day ago
Yep, and Go’s discouragement of abstraction and indirection are also good qualities for LLM coding.
danenania commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
thallavajhula · 2 days ago
I’m loving today. HN’s front page is filled with some good sources: no sensationalism or AI-doom preaching, just more realistic experiences.

I’ve completely turned off AI assist on my personal computer and only use AI assist sparingly on my work computer. It is so bad at compound work. AI assist is great at atomic work. The rest should be handled by humans, using AI wisely. It all boils down back to human intelligence. AI is only as smart as the human handling it. That’s the bottom line.

danenania · 2 days ago
The way I've been thinking about it is that the human makes the key decisions and then the AI connects the dots.

What's a key decision and what's a dot to connect varies by app and by domain, but the upside is that generally most code by volume is dot connecting (and in some cases it's like 80-90% of the code), so if you draw the lines correctly, huge productivity boosts can be found with little downside.

But if you draw the lines wrong, such that AI is making key decisions, you will have a bad time. In that case, you are usually better off deleting everything it produced and starting again rather than spending time to understand and fix its mistakes.

Things that are typically key decisions:

- database table layout and indexes

- core types

- important dependencies (don't let the AI choose dependencies unless it's low consequence)

- system design—caches, queues, etc.

- infrastructure design—VPC layout, networking permissions, secrets management

- what all the UI screens are and what they contain, user flows, etc.

- color scheme, typography, visual hierarchy

- what to test and not to test (AI will overdo it with unnecessary tests and test complexity if you let it)

- code organization: directory layout, component boundaries, when to DRY

Things that are typically dot connecting:

- database access methods for crud

- API handlers

- client-side code to make API requests

- helpers that restructure data, translate between types, etc.

- deploy scripts/CI and CD

- dev environment setup

- test harness

- test implementation (vs. deciding what to test)

- UI component implementation (once client-side types and data model are in place)

- styling code

- one-off scripts for data cleanup, analytics, etc.

That's not exhaustive on either side, but you get the idea.

AI can be helpful for making the key decisions too, in terms of research, ideation, exploring alternatives, poking holes, etc., but imo the human needs to make the final choices and write the code that corresponds to these decisions either manually or with very close supervision.

danenania commented on Illinois limits the use of AI in therapy and psychotherapy   washingtonpost.com/nation... · Posted by u/reaperducer
hathawsh · 10 days ago
Here is what Illinois says:

https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...

I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.

Am I wrong? This sounds good to me.

danenania · 10 days ago
While I agree it’s very reasonable to ban marketing of AI as a replacement for a human therapist, I feel like there could still be space for innovation in terms of AI acting as an always-available supplement to the human therapist. If the therapist is reviewing the chats and configuring the system prompt, perhaps it could be beneficial.

It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.

danenania commented on Study mode   openai.com/index/chatgpt-... · Posted by u/meetpateltech
tootyskooty · 22 days ago
Thanks a lot! I did do some of these things (namely Reddit) and that worked well, just the number of places that allow posting is limited and I don't want to get too spammy. Will continue there.

Main conceptual issue I've been having with other marketing (e.g. influencer) is that there isn't a well-defined audience to market this to. Usually edtech targets students/schools and Periplus doesn't fit there too well. Need to find what works I guess.

I'll spend more time on it from now on :). Thanks again.

re: mobile playback controls -> on it

danenania · 14 days ago
Hey, I've been running into some bugs with audio playback. Where should I report these?
danenania commented on GPT-5: Key characteristics, pricing and system card   simonwillison.net/2025/Au... · Posted by u/Philpax
morleytj · 16 days ago
It's cool and I'm glad it sounds like it's getting more reliable, but given the types of things people have been saying GPT-5 would be for the last two years you'd expect GPT-5 to be a world-shattering release rather than incremental and stable improvement.

It does sort of give me the vibe that the pure scaling maximalism really is dying off though. If the approach is on writing better routers, tooling, comboing specialized submodels on tasks, then it feels like there's a search for new ways to improve performance(and lower cost), suggesting the other established approaches weren't working. I could totally be wrong, but I feel like if just throwing more compute at the problem was working OpenAI probably wouldn't be spending much time on optimizing the user routing on currently existing strategies to get marginal improvements on average user interactions.

I've been pretty negative on the thesis of only needing more data/compute to achieve AGI with current techniques though, so perhaps I'm overly biased against it. If there's one thing that bothers me in general about the situation though, it's that it feels like we really have no clue what the actual status of these models is because of how closed off all the industry labs have become + the feeling of not being able to expect anything other than marketing language from the presentations. I suppose that's inevitable with the massive investments though. Maybe they've got some massive earthshattering model release coming out next, who knows.

danenania · 16 days ago
Isn’t reasoning, aka test-time compute, ultimately just another form of scaling? Yes it happens at a different stage, but the equation is still 'scale total compute > more intelligence'. In that sense, combining their biggest pre-trained models with their best reasoning strategies from RL could be the most impactful scaling lever available to them at the moment.

u/danenania

Karma: 8024 · Cake day: October 19, 2010
About
Founder of Plandex: an open source AI coding agent - https://plandex.ai | dane@plandex.ai

Also founder of EnvKey (YC W18): the simple, secure, open source configuration and secrets manager - https://www.envkey.com | dane@envkey.com

@Danenania on the twitters.
