Pretty much all of what's covered here should be part of a good algorithms class.
A lot of folks do whiteboard interviews wrong. They often expect to get the exact implementation of an algorithm the candidate found in a textbook, and for code on the board to compile. This isn't the point of whiteboarding; doing this only promotes rote memorization. A good whiteboard interview should be a toy problem that can be solved in several different ways using different strategies or data structures. The idea is to see how the candidate will break down the problem. Is the candidate able to formulate test cases, write a simple implementation, verify their code, and correct the implementation should it fail a test? On the more meta side of things, is the candidate able to take feedback and explain why a certain strategy was chosen? Of course, it's not representative of real-world engineering, but it's a good way to peek at someone's ability to debug and reason about programs; these abilities translate well into debugging and design.
I think most people would agree that this is indeed the intention of white board interviews.
Unfortunately this has not been my recent interviewing experience with any of the big tech companies.
I've had 4 "on-site" days with big tech companies and an order of magnitude more tech screens with big techs and unicorns over the past year. The vast majority were questions straight from leetcode, and interviewers pretty much expect you to flawlessly write up the correct answer without much discussion.
In one particular example, a company asked me to implement "3 sum closest" (https://leetcode.com/problems/3sum-closest/description/), which I didn't know by heart. So I started with the plain 3 sum question.
Instead of discussing how this could be modified to satisfy the more general requirement, or giving me hints to see how I take feedback, the interviewer simply said: "well, that's a different question, isn't it?" And failed me right then.
So from my personal experience, these interviews have indeed largely become a memorization exercise + some luck that they ask you a question you have seen/memorized before.
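For what it's worth, "3 sum closest" really is only a small tweak to the plain 3-sum skeleton, which is exactly why a discussion would have worked. A sketch (sort plus two pointers, tracking the closest running sum instead of testing for an exact match):

```python
def three_sum_closest(nums, target):
    """Return the sum of three elements of nums closest to target.

    Same sort + two-pointer skeleton as plain 3-sum; the only change
    is tracking the closest sum seen so far instead of checking for
    an exact hit on zero.
    """
    nums = sorted(nums)
    best = nums[0] + nums[1] + nums[2]
    for i in range(len(nums) - 2):
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if abs(s - target) < abs(best - target):
                best = s
            if s < target:
                lo += 1
            elif s > target:
                hi -= 1
            else:
                return s  # an exact match can't be beaten
    return best
```

The only lines that differ from plain 3-sum are the `best` bookkeeping and the early return on an exact match.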
I've come to agree. The fact that there are candidates who study questions leaked online means the bar ends up getting raised every year. Five years ago the two-pointer solution to linked-list cycle detection might have been a clever thing; now if you haven't seen it, you're behind.
The good news is that you have a very material advantage if you, say, memorize the top 100 questions on leetcode. The bad news is that real life is just playing out Goodhart's Law:
> "When a measure becomes a target, it ceases to be a good measure."
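The two-pointer cycle detection mentioned above is short enough to derive once you've seen the idea, which is part of why it stopped being a differentiator. A sketch with a minimal node class:

```python
class Node:
    """Minimal singly-linked list node for illustration."""
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    """Floyd's tortoise-and-hare: advance one pointer by 1 and another
    by 2; they meet iff the list has a cycle. O(1) extra space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```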
The interviewer should have explained what he wanted and meant by the "3 sum closest". Especially with ESL candidates, I can't expect them to have the same names for some common problems.
The best interviews, however, typically start with a problem the candidate has never seen, because asking relevant questions to figure out a problem is an important skill.
Many of us discovered in college that our peers who had to learn to study in high school often thrived while 'smarter' kids struggled.
Similarly, I have seen a lot of 'smart' developers who never learned to break down a problem. They end up making solutions that only they will ever understand. They won't be able to mentor anyone. If you get more than a few of these people on a team, you'll see a bimodal distribution in productivity, where nobody is really effective until they've been on the team for 3 or 4 years.
You want people whose answer to "how do you eat an elephant?" is "one bite at a time", not "grow bigger jaws", and watching how (or if) they break down the problem and offer multiple solutions is probably the best you're going to do in an interview setting.
The rest can be mitigated by harping on Bus Factors early and often, and insisting on diversity (ie, the bus factor for each part of the system should not be the same 3 people).
> Pretty much all of what's covered here should be part of a good algorithms class.
My own instinct (and a careful reading of the itemized list in that repo) tells me you're fudging there - and more than a tad.
And that's precisely the point - coding interviews have become like Russian roulette. A decent chunk of the material is basically trivial, some of it borderline... but a steady portion of it just isn't, and (not too often - but far too often) outlandishly so. As in, not impossibly, but clearly not reflective of the kind of problem-solving one needs to be able to perform under do-or-die conditions -- and right now, dammit! -- in an actual, real, development job.
> The idea is to see how the candidate will break down the problem.
Unfortunately this just isn't true. Perhaps the interviewer thinks, to himself, that that's what it's about.
But in practice, these almost always end up being pass-or-fail tests. And if you take just a bit too long, or temporarily forget whether a certain datatype is immutable or not, or even take a bit too much time explaining what you're doing rather than furiously typing or writing your knuckles off... or, horror of horrors, you give a perfectly reasonable answer, but not the one the interviewer "likes"... then forget it, you're screwed.
> But in practice, these almost always end up being pass-or-fail tests. And if you take just a bit too long, or temporarily forget whether a certain datatype is immutable or not, or even take a bit too much time explaining what you're doing rather than furiously typing or writing your knuckles off... or, horror of horrors, you give a perfectly reasonable answer, but not the one the interviewer "likes" ...
It does seem like an opportunity for startups to catch some amazing talent by being more open-minded in these types of interviews. Generally startups have a much smaller interview committee, sometimes even just 1-2 people, so they can better avoid these silly criteria.
I'd say roughly 3/4 of the problems in the link are... actually very straightforward to implement on a whiteboard, even if you're focusing on precision. There are some in there, like balanced binary tree, that I doubt even Knuth could get right from memory, but I'd hope that nobody would ever actually expect somebody to recite the details of an AVL implementation off the top of their head.
I had a classmate who insisted he found a bug in the R-B tree implementation in CLR, but I was too busy with something else to insist that he prove it to me.
I don't want coworkers who think implementing a sort algorithm by hand is an asset. It's not. It's a skill that has to be managed meticulously, and we rarely if ever do anything meticulously. It's not an asset, it's a liability. Understanding how to balance multiple constraints at once is an asset, and sort algorithms are one of the ways we are introduced to this.
Mastery requires that you see the scaffolding of your education as something to be jettisoned when the project (you) is finished. A lot of very productive people never achieve mastery.
In the same head space as sorting, a lot of people can't implement a custom comparator properly. Sorting is worthless if you can't decide what to sort by. Most people learn about comparators after learning about sort algorithms, so you can in some cases argue your fellow interviewers out of an AVL tree and into sorting customers by name and registration date. Or taking a list of inventory sorted alphabetically and grouping it instead by price range.
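A sketch of both of those examples with hypothetical data (the record shapes and the $50 price bands are made up for illustration); in Python a tuple key does the work of a hand-rolled comparator:

```python
from itertools import groupby

# Hypothetical customer records: (name, registration date).
customers = [
    ("Ng", "2021-03-01"),
    ("Adams", "2020-07-15"),
    ("Ng", "2019-11-30"),
]

# Sort by name, then registration date: a tuple key expresses the
# multi-field comparison directly.
by_name_then_date = sorted(customers, key=lambda c: (c[0], c[1]))

# Hypothetical inventory: (item, price), initially alphabetical.
inventory = [("apple", 1.20), ("banana", 0.50), ("desk", 89.0), ("monitor", 149.0)]

def price_band(item):
    # Bucket prices into $50-wide bands: 0 for $0-49.99, 1 for $50-99.99, ...
    return int(item[1] // 50)

# groupby requires its input to be sorted by the same key it groups on.
by_band = {band: [name for name, _ in items]
           for band, items in groupby(sorted(inventory, key=price_band),
                                      key=price_band)}
```

The grouping step is the part people trip on: `groupby` only merges adjacent runs, so forgetting the sort silently produces fragmented groups.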
I feel leetcode and all these websites actually made the interviews harder. As an interviewer, if you pick a question that's too easy or common on leetcode, you risk candidates who just know it from having done it before.
So I feel interviewers have made their questions harder and harder. Also, leetcode has allowed them to easily find hard questions to ask.
So on one hand those websites try to prepare you to pass an interview, but they've also made it so more people are now prepared. The interview no longer assesses the candidate's talent, only how much they prepared, so more imaginative and more difficult problems now need to be asked.
The best interview for a company isn't something generic like solving an algorithm.
It's solving a business-logic problem, something that actually relates to the job.
For example, at a software company that builds logistics systems, a problem to solve could be how to ship N different products from X to Y, and how that would work when interacting with external systems for orders, shipping, and product information.
Those kinds of problems make it much easier to tell whether someone is a good fit, because they're essential problems for the company to solve.
The balanced binary tree is a rough one. But checking that a tree is balanced would be a nice whiteboard problem.
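Agreed, and the balance check is a nice example precisely because it admits several reasonable approaches. One sketch: a single post-order pass that returns -1 as a sentinel height for "already unbalanced", keeping it O(n):

```python
class TreeNode:
    """Minimal binary tree node for illustration."""
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def is_balanced(root):
    """True iff every node's left and right subtree heights differ by
    at most 1. One post-order traversal; -1 propagates "unbalanced"
    upward so no subtree is measured twice."""
    def height(node):
        if node is None:
            return 0
        left = height(node.left)
        if left == -1:
            return -1
        right = height(node.right)
        if right == -1 or abs(left - right) > 1:
            return -1
        return 1 + max(left, right)
    return height(root) != -1
```

The naive version (recompute heights at every node) is the obvious first answer, and the conversation about getting from O(n log n) to O(n) is exactly the kind of discussion a whiteboard problem should invite.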
> It’s more about systems, rather than logic, which (personally) I find more in-tune with my mental makeup.
And honestly I feel this a lot, especially comparing leetcode-type stuff with my own personal projects.