It is often thought of as a prediction about what level of contribution is expected, but that's basically a farce. In every organization I've seen that distinguishes between junior and senior developers, the only difference between them is that senior developers are paid more. They don't do more, manage more, design with more foresight, or anything like that.
The exception is that developers who are brand new to a job function as junior engineers until they get calibrated and learn some parts of the codebase in which they can be effective. But this is just as true for a brand new senior dev with 10 years experience in your company's primary domain application as it is for a kid straight out of a bachelor's program whose only prior work experience has been theoretical REUs or something.
Basically, it's just a status tool.
We can certainly speculate about whose opinions are more correct or valid, but if we just objectively consider that performing well in the interviewer's eyes is what gets you hired, then what the interviewer thinks matters and what the interviewee thinks doesn't.
I do think you raise an interesting question, though. I wonder how varied the scores would be if many interviewers saw the same interview performance.
After all, most people fail many job interviews before landing an offer, but is that because of variation in the interviewee's performance or because of differences among interviewers?
Really? Who would sell the last gallon of water on Earth?
In my experience, the problem is that interviewers have no idea how to correctly value a candidate's performance. Maybe the candidates are closer to being well-calibrated, but their self-assessments don't match up with the interviewers' because the interviewers don't know how to gauge what they are looking for?
Assuming that an interviewer knows how to measure a candidate's response, even for extremely quantitative questions with well-defined answers, is highly suspect to me. I think virtually no one knows how to do that effectively.
Frankly speaking, if your company doesn't need a data engineer, it won't hire one or move you into that role. And if you're experiencing this pushback, it likely doesn't need one -- data engineers often develop ETL pipelines or data warehouses, both of which are very useful if your company has a data team and useless if it does not.
That said, you may want to move closer to my role. There's actually a shortage of data-savvy people who can also write production software, and you would nicely complement a more research-inclined data scientist or analyst -- someone with far more experience with research/analysis than development.
I run into the same shortage-at-price-X problem in the field you describe. I'm a machine learning engineer with experience in MCMC methods, but I also have a lot of low-level Python and Cython experience, some intermediate experience with database internals, and lots of experience writing well-crafted code for production systems.
There are basically zero companies willing to pay what I'm seeking (a salary based on my previous job and a few offers I got around the time I took that job). In fact, in some of the more expensive cities, the real wage offered is far lower than in other markets.
I've seen reputable, multi-billion dollar companies offering in the $140k range for this type of role in New York. That's wildly below anything reasonable for this sort of thing in New York. I've seen companies in Minneapolis offering $130k for the same kind of job -- and even that is still too low for Minneapolis! The same has been true in San Francisco as well.
Because these companies value you mostly for looking good on paper and serving as office ornamentation when investors stroll through, and because they view you as an arbitrary work receptacle closer to a software janitor than a statistical specialist, their whole mindset is about how to drive wages down.
Frankly, given the stresses of the job and the risk of burnout, I think it's actually a terrible time to be in the machine learning / computational stats employment field, despite all of the interesting new work and advances being made. The intellectual side is good, but the quality of jobs is through the floor.
I guess this means that the entire profession consists of janitors and plumbers.
> The root cause of this weakness is that the assert mechanism is designed purely for testing purposes, as is done in C++.
However, C and C++ are perhaps unique in how much undefined behavior is possible and in how easy it is to trigger: inserting into a vector while iterating over it, for instance, or dereferencing an uninitialized pointer.
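A minimal sketch of the vector case (the values and loop here are purely illustrative): if `push_back` reallocates the vector's storage, the iterators the range-for loop is holding dangle, and the behavior is undefined.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // Undefined behavior: push_back may reallocate v's storage,
    // invalidating the iterators the range-for loop is using.
    for (int x : v) {
        if (x == 2) {
            v.push_back(42);
        }
        std::cout << x << '\n';  // may read freed memory after the push_back
    }
}
```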
That's why many C++ experts believe in runtime assertions in production. Crashing the application with a core dump is generally preferable to trashing memory, corrupting your database, or launching the missiles.
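For illustration, one common shape this takes is a hand-rolled always-on assertion macro (`ALWAYS_ASSERT` is a hypothetical name here, not a standard facility) that, unlike `assert` from `<cassert>`, is not compiled out when `NDEBUG` is defined:

```cpp
#include <cstdio>
#include <cstdlib>

// Still fires in release/production builds, regardless of NDEBUG.
// A failed check aborts, producing a core dump for post-mortem
// debugging, instead of letting the process run on in a corrupted state.
#define ALWAYS_ASSERT(cond)                                        \
    do {                                                           \
        if (!(cond)) {                                             \
            std::fprintf(stderr, "assertion failed: %s (%s:%d)\n", \
                         #cond, __FILE__, __LINE__);               \
            std::abort();                                          \
        }                                                          \
    } while (0)

double withdraw(double balance, double amount) {
    // Fail fast on an impossible input rather than corrupting state.
    ALWAYS_ASSERT(amount >= 0.0 && amount <= balance);
    return balance - amount;
}
```

The design choice is to treat a violated invariant as unrecoverable: crash immediately and loudly rather than limp along past the point where the program's state stopped making sense.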
I imagine they would say that your framing of crashing vs. e.g. launching the missiles is a false dilemma: ideally you neither crash nor incorrectly launch the missiles.
I'm not a C++ developer, so I can't say this with certainty, and I actually agree more with your view. I'm just relaying that, in my experience across many language communities, C++ seems adamantly opposed to what you're describing.
The software is junk software. There's no other secret thing going on -- no misdirection or duplicitous motives. A certain class of high-paying customers responds more to the Goldman brand name -- or at least believes it buys them cachet with regulators or investors. For that class of customers, vetting the reliability and quality of the tech stack is at best an afterthought. Since that pile of money exists for Goldman to target, they do target it.
I advocate that more people prioritize vetting the technology. If they did, they would see it is not of sufficient quality to justify its use, let alone paying to perpetuate it elsewhere. But I'm not naive -- the political approach will always matter more to a wide range of people than a more objective assessment will.