1. AI is making unit tests nearly free. It's a no-brainer to ask Copilot/Cursor/insert-your-tool-here to include tests with your code. The bonus is that it forces better habits like dependency injection just to make the AI's job possible. This craters the "cost" side of the equation for basic coverage.
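To make the dependency-injection point concrete, here's a minimal sketch (the function and names are illustrative, not from any real codebase): once the clock is injected instead of read globally, an AI-generated test can pin it to a fixed value.

```python
# Sketch: dependency injection makes a function trivially testable.
# make_greeting and its behavior are hypothetical examples.

from datetime import datetime, timezone

def make_greeting(name: str, now=None) -> str:
    """Inject the clock instead of calling datetime.now() internally."""
    now = now or datetime.now(timezone.utc)
    period = "morning" if now.hour < 12 else "afternoon"
    return f"Good {period}, {name}"

# A unit test can pin the injected dependency to a known value:
fixed = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
assert make_greeting("Ada", now=fixed) == "Good morning, Ada"
```

Without the injected `now`, the same test would flake depending on when it runs, which is exactly the kind of structure an AI assistant nudges you toward when you ask it to generate tests alongside the code.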
2. At the same time, software is increasingly complex: a system spanning a frontend, a backend, 3rd-party APIs, mobile clients, etc. A million passing unit tests and 100% coverage mean nothing in a world where a tiny contract change breaks the whole app. In our experience the thing that gives us the most confidence is black-box, end-to-end testing that exercises the system exactly as a real user would see it.
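A minimal sketch of the black-box idea, using only the Python standard library: spin up a tiny HTTP service and probe it purely through its public interface, the way a client would. The endpoint and payload here are hypothetical stand-ins for a real service.

```python
# Sketch: a black-box test that knows nothing about internals --
# only the URL and the expected response contract.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Stand-in for the real service; the payload is illustrative."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "version": 1}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence request logging
        pass

# Port 0 lets the OS pick a free port; server_port reports it.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urlopen(url) as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()

assert status == 200
assert payload["status"] == "ok"
```

The same test passes or fails regardless of how the backend is implemented, which is what makes it resilient to refactors and sensitive to the contract changes that actually break users.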
Using the Lindy Effect for guidance, I've built a stack/framework that works across 20 years of different versions of these languages, which increases the chances of it continuing to work without breaking changes for another 20 years.
From my side I think it's more useful to focus on surfacing issues early. We want to know about bugs, slowdowns, and regressions before they hit users, so everything we write uses TDD. But because unit tests are coupled to the environment, they "rot" together with it. So we usually set up monitoring, integration, and black-box tests super early on and keep them running as long as the project is online.
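One cheap way to keep an integration check running for the life of a project is a lightweight contract assertion: it fails loudly when a response shape drifts, even while every unit test still passes. The schema and field names below are illustrative.

```python
# Sketch: a minimal contract check against an expected response shape.
# EXPECTED is a hypothetical schema, not from any real API.

EXPECTED = {"id": int, "email": str, "active": bool}

def check_contract(payload: dict, schema=EXPECTED) -> list:
    """Return a list of human-readable contract violations."""
    errors = []
    for field, typ in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], typ):
            errors.append(f"{field}: expected {typ.__name__}")
    return errors

assert check_contract({"id": 7, "email": "a@b.c", "active": True}) == []
# A "tiny contract change" (a renamed field) is caught immediately:
assert check_contract({"id": 7, "mail": "a@b.c", "active": True}) == [
    "missing field: email"
]
```

Wired into monitoring and run against the live system, a check like this catches the cross-service drift that unit tests, by construction, can never see.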