If the interview for a coding job doesn't involve actual coding, how do you know that the candidate can code? What you may get are people who are just really good at selling themselves. Maybe a good fit for the sales department, but not so much for the technical position you are hiring for.
LeetCode is not perfect, but no test is.
As for the "memorization" aspect: you can certainly memorize solutions, but you can't memorize every character of every solution and regurgitate it perfectly. You will need to make some generalizations just to fit everything into your brain, and as you type it back, you will probably misremember something and have to fix the bug or bridge the gap. Those are useful, real-life coding skills.
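To make that concrete, here's a rough sketch in Python (two-sum is just my pick of a classic problem; the code is illustrative, not any particular "accepted" solution). What you actually memorize is the one-line idea, and the details are exactly where bugs creep back in as you reconstruct it:

    # Classic "two sum": return the indices of two numbers adding to target.
    # The memorized part is the idea -- single pass, hash map of values
    # seen so far -- not the exact characters below.
    def two_sum(nums, target):
        seen = {}  # value -> index where we first saw it
        for i, n in enumerate(nums):
            complement = target - n
            if complement in seen:
                return [seen[complement], i]
            seen[n] = i  # insert *after* checking, or an element can
                         # pair with itself ([3, 2], target 6 -> [0, 0])
        return []

    assert two_sum([2, 7, 11, 15], 9) == [0, 1]

The check-before-insert ordering is exactly the kind of detail you misremember and then have to debug, which is the point.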
LeetCode problems seldom resemble real-life coding scenarios, which in theory is what you are supposed to be most concerned with.
Why not present them with a real problem that you actually had to solve for your business? And ask them to walk you through how they would try to solve that? Perhaps as part of a take-home assignment?
"LeetCode test or nothing" is a false dichotomy. Accepting a test you admit is lacking, without even attempting to look into alternatives, doesn't seem logical.
>You can certainly memorize solutions
You can most certainly memorize the high-level steps to LC problems. In fact, I would argue this is already the status quo. You may still learn something about how they approach a problem this way, but if they are good at "selling themselves," they can make all of that "seem" real as well.
LeetCode questions as interview questions are mostly theater. Most people who do well on them aren't actually "solving them" on the fly from scratch; they just happen to have seen the exact same problem before and retrace the steps they've memorized to get to the answer. Testing whether someone can regurgitate the memorized solution to a math problem doesn't tell you much about how they will perform on a truly novel, non-contrived constraint problem, which is what most dev work entails.
Perhaps if you are working at Bespoke Algorithms 'R Us, benchmarking this would have more value to your org, but for most dev roles at most companies it is hard to see it as more than a compliance exercise, or maybe even a tool to weed out those with families who can't devote hours per day to LC memorization.
Also, the LSAT is not static. It has changed over the years as instruction methods and demographics have changed, so it is not a reliable measure of aptitude in any way.
Whether someone gets an A or a C in a physics course likely has no bearing on the needs of an employer looking for a Python whiz for data science.
Maybe the bigger problem is colleges offering undergraduate majors that lack demand, coupled with in-demand majors not having enough relevant course content for the job market. Or maybe employers have just grown more unreasonable in their expectations for graduates over time?
It's hard to make the argument that these students are less qualified. Look at the acceptance rates of top schools over time: they're essentially at or near all-time lows for the majority of them.
Tangentially, if there were a war today, would the US be able to produce as much as it did in WW2?
However, this might not matter as much now as it did in the past, because nuclear weapons are the primary deterrent in war these days, and because our standing fleet of aircraft, aircraft carriers, nuke subs, tanks, etc. is essentially second to none. Additionally, what we do have is highly capable and extremely specialized, which in my opinion means we don't really need as many (quality over quantity). Take, for example, the F-35, which doesn't really have an equal in the skies: we have over 630 of them, with a goal of around 2,500. China has only 300 J-20s, which are basically a copy of the older F-22, and Russia has only 22 non-test Su-57s. Would we have a realistic need to build 1,000 of them within a year?
Due to many factors, primarily free trade and globalization, it's unlikely we will ever see that non-automated manufacturing capacity return. If needed, we could probably mobilize the economy via the Defense Production Act to force more manufacturing capacity, though it's hard to imagine we would currently need to.
Yes, but... there is something else to be said here. One of the things we have evolved to do, without necessarily appreciating it, is to intuit the behavior of other humans through theory of mind. If AVs consistently act "unexpectedly," this injects a lot more uncertainty into the system, especially when they interact with humans.
"Acting unexpectedly" is one of the aspects that makes dealing with mentally ill people anxiety-producing. I don't think most of us would automatically want to share the roads with a bunch of mentally ill drivers, even if, statistically, they were better than neurotypical drivers. There's something to be said about these scenarios regarding trust being derived from understanding what someone else is likely thinking.
Edit: the other thing that needs to be said is that tech in society is governed by policy, and people don't generally accept policy based on statistical arguments alone. If you expect people to accept policies that allow AVs without addressing the trust issue, it might be a painful ride.
"Acting Unexpectedly" can often mean following the actual laws and general guidelines for safe and/or defensive driving. I would hazard a guess that sometimes doing the intuitive thing is, in reality, unsafe and/or against the law. If the car does this in 99% of circumstances, and still gets rear-ended, who is really the problem here?