But channels already do the waiting part for you.
errCh := make(chan error)
for _, url := range urls {
    go func(url string) {
        resp, err := http.Get(url) // http.Get returns (*http.Response, error), so send just the error
        if err == nil {
            resp.Body.Close()
        }
        errCh <- err
    }(url)
}
for range urls {
    err := <-errCh
    if err != nil {
        // handle error
    }
}
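For comparison, here is a rough sketch of what I think the WaitGroup version would look like (same urls as above; note the error channel, buffered here, is still needed):

var wg sync.WaitGroup
errCh := make(chan error, len(urls)) // buffered so sends never block
for _, url := range urls {
    wg.Add(1)
    go func(url string) {
        defer wg.Done()
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
        }
        errCh <- err
    }(url)
}
wg.Wait()
close(errCh)
for err := range errCh {
    if err != nil {
        // handle error
    }
}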
Should I be using WaitGroup instead? If I do, don't I still need an error channel anyway—in which case it feels redundant? Or am I thinking about this wrong? I rarely encounter concurrency situations that the above pattern doesn't seem sufficient for.
Go's main virtues are low cognitive load and encouraging simple, straightforward ways of doing things, with the latter feeding into the former.
Languages with sophisticated powerful type systems and other features are superior in a lot of ways, but in the hands of most developers they are excuses to massively over-complicate everything. Sophomore developers (not junior but not yet senior) love complexity and will use any chance to add as much of it as they can, either to show off how smart they are, to explore, or to try to implement things they think they need but actually don't. Go somewhat discourages this, though devs will still find a way of course.
Experienced developers know that complexity is evil and that simplicity is the real sign of intelligence and skill. A language with advanced features is there to make it easier and simpler to express difficult concepts, not to make it more difficult and complex to express simple concepts. Not every language feature needs to be used just because it exists.
You end up creating these elegant abstractions that are very seductive from a programmer-as-artist perspective, but they're usually a distraction from just getting the work done in a good-enough way.
You can tell that the creators of Go are very familiar with engineer psychology and what gets them off track. Go takes away all shiny toys.
Every single piece of Go 1.x code scraped from the internet and baked into the models is still perfectly valid and compiles with the latest version.
I’ve completely turned off AI assist on my personal computer and only use AI assist sparingly on my work computer. It is so bad at compound work. AI assist is great at atomic work; the rest should be handled by humans, using AI wisely. It all boils down to human intelligence. AI is only as smart as the human handling it. That’s the bottom line.
What's a key decision and what's a dot to connect varies by app and by domain, but the upside is that generally most code by volume is dot connecting (and in some cases it's like 80-90% of the code), so if you draw the lines correctly, huge productivity boosts can be found with little downside.
But if you draw the lines wrong, such that AI is making key decisions, you will have a bad time. In that case, you are usually better off deleting everything it produced and starting again rather than spending time to understand and fix its mistakes.
Things that are typically key decisions:
- database table layout and indexes
- core types
- important dependencies (don't let the AI choose dependencies unless it's low consequence)
- system design—caches, queues, etc.
- infrastructure design—VPC layout, networking permissions, secrets management
- what all the UI screens are and what they contain, user flows, etc.
- color scheme, typography, visual hierarchy
- what to test and not to test (AI will overdo it with unnecessary tests and test complexity if you let it)
- code organization: directory layout, component boundaries, when to DRY
Things that are typically dot connecting:
- database access methods for CRUD
- API handlers
- client-side code to make API requests
- helpers that restructure data, translate between types, etc.
- deploy scripts and CI/CD
- dev environment setup
- test harness
- test implementation (vs. deciding what to test)
- UI component implementation (once client-side types and data model are in place)
- styling code
- one-off scripts for data cleanup, analytics, etc.
That's not exhaustive on either side, but you get the idea.
AI can be helpful for making the key decisions too, in terms of research, ideation, exploring alternatives, poking holes, etc., but imo the human needs to make the final choices and write the code that corresponds to these decisions either manually or with very close supervision.
https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...
I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.
Am I wrong? This sounds good to me.
It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.
The main conceptual issue I've been having with other marketing (e.g. influencer marketing) is that there isn't a well-defined audience to market this to. Usually edtech targets students/schools, and Periplus doesn't fit there too well. Need to find what works, I guess.
I'll spend more time on it from now on :). Thanks again.
re: mobile playback controls -> on it
It does sort of give me the vibe that pure scaling maximalism really is dying off, though. If the focus is now on writing better routers, tooling, and combining specialized submodels on tasks, then it feels like there's a search for new ways to improve performance (and lower cost), suggesting the other established approaches weren't working. I could totally be wrong, but I feel like if just throwing more compute at the problem were working, OpenAI probably wouldn't be spending much time optimizing user routing across existing strategies to get marginal improvements on average user interactions.
I've been pretty negative on the thesis of only needing more data/compute to achieve AGI with current techniques, though, so perhaps I'm overly biased against it. If there's one thing that bothers me in general about the situation, it's that it feels like we really have no clue what the actual status of these models is, because of how closed off all the industry labs have become + the feeling of not being able to expect anything other than marketing language from the presentations. I suppose that's inevitable with the massive investments, though. Maybe they've got some massive earth-shattering model release coming out next, who knows.
Right, but it prevents goroutine leaks. In these situations I'm usually fine with bailing on the first error, but I grant that's not always desirable. If it's not, I would collect and join errors and return those along with partial results (if those are useful).
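For the bail-on-first-error case, a rough sketch of what I mean using golang.org/x/sync/errgroup (the fetchAll name and the use of http.DefaultClient are just for illustration):

func fetchAll(ctx context.Context, urls []string) error {
    g, ctx := errgroup.WithContext(ctx)
    for _, url := range urls {
        url := url // per-iteration copy; unnecessary on Go 1.22+
        g.Go(func() error {
            req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
            if err != nil {
                return err
            }
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                return err // first error cancels ctx, so the remaining requests stop instead of leaking
            }
            return resp.Body.Close()
        })
    }
    return g.Wait() // first non-nil error, or nil
}

For the collect-everything case, the same shape with a WaitGroup and errors.Join over a slice of errors works too, at the cost of always waiting for every request.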