With a take-home I can demonstrate how I would perform at work. I can sit on it, think things over in my head, come up with an attack plan, and execute it. I can more clearly demonstrate how I think about problems and what value I bring. Using a take-home as a test indicates to me that a company cares a bit more about its hiring pipeline and is being careful not to put candidates under arbitrary pressure.
Either way, when you fail, chances are that you will not get any meaningful feedback other than "we have decided to move forward with other candidates".
If you had done a take-home, how could you know where you went wrong?
If you had done a leetcode question, you could look it up after the interview and usually learn from your mistakes.
With leetcode you usually don't need the interviewer's feedback to improve. You don't even need the interview. And after a certain point you won't need that much time to prepare.
Failing a take-home is an entirely different thing. It's a huge loss in time and mental energy.
I've only done 3 of those in my career, and only because the projects sounded interesting. 1 of those 3 resulted in a job offer, which I can now confidently say in hindsight was the worst job of my career (...so far!).
I'm now leaning towards just filtering out companies that do take-homes, because it signals to me that they don't care about their candidates' time, and how a company treats its candidates is usually a good indicator of how it treats its employees.
I recall reading advice that you should just verbalize all your thoughts, and I've found that this is not optimal.
It's okay to take some moments of silence and talk strategically. For example, I'll tell the interviewer that I'll be silent for a few minutes while I read and understand the question. I'll then talk out loud to confirm with them that I've correctly understood it. From there I'll talk on and off as I narrow towards a solution.
While writing code and debugging, I'll start talking if I get stuck on something.
So the idea is to use talking in a way that doesn't slow you down and may even help you solve the problem faster.
The entire idea of measuring worker performance is not only dehumanizing but also particularly flawed when it comes to knowledge work.
It's like trying to determine which car is faster by racing through rush-hour traffic, or ignoring the fact that each car is on a different incline.
Knowing the right people and being in the right place at the right time can often make or break one's career.
Yet we're expected to accept these factors, which are largely outside our control, as a measurement of our own "performance". The unlucky get insult added to injury. The lucky get a dose of ego or fear (depending on their level of self-awareness).
It seems like corporate gaslighting to me.
Think about it: if they were good at evaluation, you could remove all humans from the loop and have recursively self-improving AGI.
Nice to see an article that makes a more concrete case.
Think about when ChatGPT gives you the side-by-side answers and asks you to rate which is "better".
Now consider the consequence of this at scale with different humans with different needs all weighing in on what "better" looks like.
This is probably why LLM-generated code tends to have excessive comments. Those comments would probably get it a higher rating, but you as a developer may not want that. It also hints at why there's inconsistency in coding styles.
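The aggregation effect can be sketched with a toy simulation (every number and rater profile here is made up for illustration, not taken from any real RLHF pipeline): raters with different tastes pick between a heavily commented answer and a terse one, and the majority preference, not any individual developer's, determines which style "wins".

```python
import random

random.seed(0)

# Two hypothetical candidate answers to the same coding question,
# differing only in how many explanatory comments they contain.
answers = {
    "verbose": {"comments": 10},  # heavily commented
    "terse": {"comments": 1},     # minimal comments
}

def score(likes_comments: bool, answer: str) -> int:
    """One rater's score: comment lovers reward comments, others penalize them."""
    c = answers[answer]["comments"]
    return c if likes_comments else -c

def preferred(likes_comments: bool) -> str:
    """Side-by-side comparison: which answer does this rater pick as 'better'?"""
    if score(likes_comments, "verbose") > score(likes_comments, "terse"):
        return "verbose"
    return "terse"

# Assume (arbitrarily) that 70% of raters like explanatory comments.
raters = [random.random() < 0.7 for _ in range(10_000)]
votes = {"verbose": 0, "terse": 0}
for likes in raters:
    votes[preferred(likes)] += 1

print(votes)
```

Even though a sizable minority prefers terse code, the verbose style collects more votes in aggregate, so a model trained on these ratings drifts toward it, and no single rater's taste is faithfully reproduced.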
In my opinion, the most important skill for developers today is not in writing the code but in being able to critically evaluate it.