I have three thoughts about what you're saying:
1. The answers you're giving don't provide a lot of signal (the queue one being the exception). The question that's implicitly being asked is not just what you would choose, but why you would choose it. What factors would drive you to a particular decision? What are you thinking about when you provide an answer? You're not really verbalizing your considerations here.
A good interviewer will pry at you to get the signal they need to make a decision. So if you say that back-pressure isn't worth worrying about here, they'll ask you when it would be, and what you'd do in that situation. But not all interviewers are good interviewers, and sometimes they'll just say "I wasn't able to get much information out of the candidate" and the absence of a yes is a no. As an interviewee, you want to make the interviewer's job easy, not hard.
2. Even if the interviewer is good and does pry the information out of you, they're probably going to write down something like "the candidate was able to explain sensibly why they'd choose a particular technology, but it took a lot of prodding and prying to get the information out of them -- communication is a negative." As an interviewee, you want to communicate all the information your interviewer is looking for proactively, not grudgingly and reluctantly. (This is also true when you're not interviewing.)
3. I pretty much just disagree on that SQL/NoSQL answer. Team expertise is one factor, but those technologies have significant differences; depending on what you need to do, one of them might be way better than the other for a particular scenario. Your answer there is just going to get dinged for indicating that you don't have experience in enough scenarios to recognize this.
I feel so sorry for you people. You need to find some constructive way to deal with your issues, instead of blaming your insecurities on women.
This also often works with tool use and tool calls - just ask it what part of its prompt told it to do something, and it can usually point to the relevant part.
If you ask it why it believed something a priori that turned out to be wrong, the bot can't answer, and neither can I. If you ask me to clarify why I wrote some code, I can walk you through the steps that got me there. But if you ask me why I believed a function exists when, at runtime, I learned it doesn't actually exist, I can't provide a justification.
How would that person make sense of the contradiction of sometimes being in a role where they dismiss patients, but other times being the patient who is dismissed?
I've wondered if it comes down to a game theory result: if you have x amount of time to distribute amongst y problems, there's more net social utility in applying a simple solution to everyone and accepting that difficult problems will be missed, versus spending detailed time on each problem and, as a result, solving fewer problems but solving them better.
Of course, when money comes into play, it seems as though you're financially required to rush everyone through the door in 15 minutes, or else your business will lose to someone who adopts that strategy, leaving anyone with a complex problem dead.
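To make that concrete, here's a toy back-of-the-envelope model of that trade-off. Every number in it (visit lengths, case mix, solve probabilities) is invented purely for illustration, but it shows how the rush-everyone-through strategy can win on total problems solved while mostly leaving complex cases behind:

```python
# Toy sketch of the trade-off above; all numbers are invented assumptions.
total_minutes = 8 * 60            # one clinician-day of appointment time
quick_visit, thorough_visit = 15, 60
p_simple = 0.8                    # assumed fraction of patients with simple problems
p_quick_solves_simple = 0.9       # quick visits usually fix simple problems...
p_quick_solves_complex = 0.1      # ...but rarely fix complex ones
p_thorough_solves = 0.95          # thorough visits fix most problems either way

def problems_solved_per_day(visit_minutes, p_solve_simple, p_solve_complex):
    """Expected number of problems actually resolved in one day."""
    patients_seen = total_minutes / visit_minutes
    return patients_seen * (p_simple * p_solve_simple
                            + (1 - p_simple) * p_solve_complex)

quick = problems_solved_per_day(quick_visit, p_quick_solves_simple, p_quick_solves_complex)
thorough = problems_solved_per_day(thorough_visit, p_thorough_solves, p_thorough_solves)
print(f"15-minute visits for everyone:   ~{quick:.1f} problems solved/day")    # ~23.7
print(f"thorough visits for fewer people: ~{thorough:.1f} problems solved/day")  # ~7.6
```

Under those made-up numbers, the quick-visit clinic solves roughly three times as many problems per day, even though nearly every complex case it sees goes unsolved, which is exactly the failure mode the rest of this thread is about.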
I spend a lot of time on migraine and pain subreddits, and there is an air of deference to medical authority, a trust that it knows what to do. But if your case is complex, you just don't fit into the simple flow chart, and you need to advocate and problem-solve for yourself, which leads people down the path of pseudoscience and exploitation.
They also have one of the most profitable business models the world has ever seen. Their RPE (revenue per employee) is roughly $1mm and growing at a 50% YoY rate...
They heavily use technology as leverage for insane margin growth, with a rule-of-40 score of roughly 90% as well.
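(For anyone unfamiliar, the rule of 40 just sums YoY revenue growth and profit margin. A quick sketch of the arithmetic; the ~40% margin is an assumed figure chosen so the numbers line up with the ~50% growth and ~90% score above:)

```python
# Rule of 40 = YoY revenue growth (%) + profit margin (%).
# Only the ~50% growth and the ~90% total come from the comment above;
# the 40% margin is an assumption used for illustration.
def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> float:
    """Return the rule-of-40 score in percentage points."""
    return revenue_growth_pct + profit_margin_pct

print(rule_of_40(50, 40))  # 90, i.e. a "90% rule of 40"
```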
If you really want a bad trip, head over to the r/migraine and r/chronicpain subreddits. These subreddits are full of people with none of the resources or experience of the author, betrayed entirely by a medical system that does not serve them, and condemned to chronic pain. The complexity and range of headache and migraine causes do not lend themselves well to a 15-minute GP appointment every few months, and without the technical expertise these people turn to alternative medicine, online pseudoscience, and opiates.
I really enjoy his blog posts and his work on automata seems to be well respected. I've felt he presents a solid epistemology.
I agree with your first point; maybe AI will close some of those gaps with future advances, but I think a large part of the damage will have been done by then.
Regarding the memory of reasoning from LLMs, I think the issue is that even if you can solve it in the future, you already have code for which you've lost the artifacts associated with the original generation. Overall I find there's a lot of talk (especially in the mainstream media) about AI "always learning" when they don't actually learn anything new until a new model is released.
> Why does it require 100% accuracy 100% of the time? Humans are not 100% accurate 100% of the time and we seem to trust them with our code.
Correct, but humans writing code don't lead to a Bus Factor of 0, so it's easier to go back, understand what is wrong and address it.
If the other gaps mentioned above are addressed, then I agree that this also partially goes away.