It’s still mindless consumption if you don’t interact with the material in any meaningful way (follow up questions, application, try to refute it, evaluate a hypothesis you had before watching it, …)
Assumption. Big ass assumption.
Pilots are trained until actions are instinctual and certain memory items are almost unconscious. But pilots are still people, and people are fallible, make mistakes, and sometimes act unreasonably. Intent cannot be determined without clear evidence or statements, because that's not how thoughts locked away in people's minds work.
> It's not arrogance to assume the most likely conclusion is true
You don't know this. It is beyond anyone's capability to know and is therefore pure speculation. That is the definition of arrogance.
https://www.reddit.com/r/dataisbeautiful/comments/e1jrvw/oc_...
The preceding section does mention studies that show a cause and effect relationship between e.g., income and fertility, but the effect is surprisingly small. The authors conclude the section with:
> “Pro-natal incentives do work: more money does yield more babies… But it takes a lot of money. Truth be told, trying to boost birth rates to replacement rate purely through cash incentives is prohibitively costly.”
So far I've only skimmed the paper, but here's an interesting quote:
> Among respondents of a 2018 survey conducted for the New York Times, the desire to “have more leisure time” is offered as the leading reason for not having children among adults who...
If your assumption is that economic reasons cause the decline in fertility rates, it's tempting (and natural!) to view every alternative explanation in the context of economics. In other words: all alternative explanations are symptoms of economic problems, so the root cause remains money.
But quotes like this can also be interpreted as people changing their priorities regardless of income or worries about housing. Maybe, freed of traditional role expectations, people would rather watch Netflix all day in their single-person households.
> Underpinning these policies is an assumption that poorer women are more likely to respond to incentives to have more children. Indeed, their fertility rates do seem more elastic than those of professional women. Whereas the fertility rates of older, college-educated women have remained fairly steady over the past six decades, most of the collapse in fertility in America and Britain since 1980 stems from younger and poorer women having fewer children, particularly from unplanned pregnancies.
https://www.economist.com/leaders/2025/06/19/why-magas-pro-n...
Just like with many of these topics, most sources seem to contradict each other.
I've been using Zed and Claude Sonnet 4 (and sometimes trying Opus) heavily over the past weeks. For small edits where I have lots of unit tests, the results were great. So great that they worry me with regard to job security. For exploring a new programming domain it was also somewhat useful. I work a lot with the TypeScript compiler API right now, and it has almost no documentation. Since the AI can see into every GitHub repository out there, it's much better, and more efficient, at learning APIs from other people's code. On the other hand, that means I don't do that learning myself, and I'm forced to rely entirely on how the AI presents the TypeScript compiler API to me. Are there better methods I could use? Who knows.
Where it's abysmal is code architecture. Sometimes it's almost comical: it adds an if statement to handle one highly specific edge case in a program that only makes sense if it solves the general case. This didn't happen often, though.
The hardest part was forcing it to reuse existing code from the same file. My use case is transforming a TypeScript AST into a GraphQL AST. The code is one big switch statement with lots of recursive calls. The AI would often add 300 lines of code duplicating logic that already exists somewhere else.
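To give a sense of the shape of that code, here's a minimal sketch (the helper name and the handful of cases are my illustration, not the actual implementation):

```ts
import ts from "typescript";
import { Kind, TypeNode } from "graphql";

// Helper for the common case of a plain named GraphQL type.
function namedType(name: string): TypeNode {
  return { kind: Kind.NAMED_TYPE, name: { kind: Kind.NAME, value: name } };
}

// One big switch over TypeScript syntax kinds, recursing for nested types.
function toGraphQLType(node: ts.TypeNode): TypeNode {
  switch (node.kind) {
    case ts.SyntaxKind.StringKeyword:
      return namedType("String");
    case ts.SyntaxKind.NumberKeyword:
      return namedType("Float");
    case ts.SyntaxKind.BooleanKeyword:
      return namedType("Boolean");
    case ts.SyntaxKind.ArrayType:
      // The recursive call is the point: the AI kept re-implementing
      // this logic inline instead of reusing the switch.
      return {
        kind: Kind.LIST_TYPE,
        type: toGraphQLType((node as ts.ArrayTypeNode).elementType),
      };
    default:
      throw new Error(`Unhandled syntax kind: ${ts.SyntaxKind[node.kind]}`);
  }
}
```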
In the end I rewrote the whole thing from scratch. At around 900 lines of code the AI was starting to really struggle. When I wanted to take over, I realized that I didn't have the in-depth knowledge to do so. And trying to understand the code the AI had written proved futile.
Ultimately that's on me: I should have been more diligent reviewing the dozens of 300-line changes the AI threw at me over the course of a day. But I wasn't, because reviewing is really, really hard, for many reasons. And AI makes it even harder.
Am I therefore nuts? I find this whole article extremely one-sided. Surely, given the sheer amount of both positive and negative press, the answer is somewhere in the middle.
I can’t have a hook that talks to a real API in one environment but to a fake one in another. I’d have to use Jest-style mocking, which is more like monkey patching.
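For example, the Jest route would look roughly like this (the module path and function name are hypothetical):

```ts
// Jest swaps out the module at import time -- monkey patching, effectively,
// rather than injecting the fake through the hook's own interface.
jest.mock("./api", () => ({
  fetchUser: jest.fn().mockResolvedValue({ id: 1, name: "Test User" }),
}));
```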
From the point of view of a React end user, there’s also no list of effects that I can access. I can’t see which effects or hooks a component carries around, which ones weren’t yet evaluated, and so on.
> LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing as to whether to fix the code or the tests; and when it gets frustrating, they just delete the whole lot and start over. This is exactly the opposite of what I am looking for. Software engineers test their work as they go. When tests fail, they can check in with their mental model to decide whether to fix the code or the tests, or just to gather more data before making a decision. When they get frustrated, they can reach for help by talking things through. And although sometimes they do delete it all and start over, they do so with a clearer understanding of the problem.
My experience is based on using Cline with Anthropic's Sonnet 3.7 to do TDD on Rails, and it has been very different. I instruct the model to write tests before any code, and it does. It works in small enough chunks that I can review each one. When tests fail, it tends to reason very well about why and fixes the appropriate place. It is very common for the LLM to consult more code as it goes to learn more.
It's certainly not perfect, but it works about as well as, if not better than, a human junior engineer. Sometimes it can't solve a bug, but human junior engineers get into the same situations too.
I say capture logs without overriding console methods -> they override console methods (see the sketch below).
YOU ARE NOT ALLOWED TO CHANGE THE TESTS -> test changed
Or they insert various sleep calls into a test to work around race conditions.
This is all from Claude Sonnet 4.
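For the log capturing case, the override it keeps producing looks roughly like this (a sketch, not the model's verbatim output):

```ts
// Exactly what was forbidden: monkey patching console.log to capture output.
const captured: string[] = [];
const originalLog = console.log;
console.log = (...args: unknown[]) => {
  captured.push(args.map(String).join(" "));
  originalLog(...args);
};
```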