LLMs are like lane assist for cars. They help with common scenarios, but you still need to watch for edge cases and are ultimately responsible for the vehicle.
The difference is that companies expect massive productivity boosts from AI tooling. It's as if, when lane assist was invented, we had also doubled all the speed limits, and now everyone is stressed trying to make sure the car stays in its lane.
The BCG framing makes it sound like a cognitive load problem, but I think it's more unreliability fatigue. When your AI does eight things right and then confidently does the ninth wrong, you spend mental energy second-guessing everything. Supervising an unreliable system is more exhausting than just doing the task yourself.
AI-assisted programming is going to be worse, because the AI comes back with the same work faster than a human would, so the supervision never lets up.
https://en.wikipedia.org/wiki/Automation_bias