The best interview process I've ever been a part of involved pair programming with the person for a couple of hours, after the tech screening, a phone call with a member of the team. You never failed to know within a few minutes whether the person could do the job and be a good coworker. This process worked so well that it created the best, most productive team I've worked on in 20+ years in the industry, despite that company's other dysfunctions.
The problem with it is the same curse that has rotted so much of software culture—the need for a scalable process with high throughput. "We need to run through hundreds of candidates per position, not a half dozen, are you crazy? It doesn't matter if the net result is better, it's the metrics along the way that matter!"
I dislike pair programming interviews - as they currently exist - because they usually feel like a time-crunched exam. You don't realistically have the freedom to think as you would in actual pair programming. i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview, but it's pretty realistic of real-life work and entirely a non-problem. It's probably even good to test for at interview: how does a person work when they aren't working with an oracle that already knows the answer (i.e. the interviewer)?
Pair programming with the person for a couple hours, maybe even on an actual feature, would probably work, assuming the candidate is compensated for their time. I can imagine it'd especially work for teams working on open source projects (Sentry, Zed, etc). Might not be as workable for companies whose work is entirely closed source.
Indeed, the other problem is what you mention: it doesn't scale to high throughput.
> i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview
In all pair programming interviews I have run (which I will admit have been only a few) I would fail myself as an interviewer if I was not able to guide the interviewee away from a dead end within 15 minutes.
If the candidate wasn't able to understand the hints I was giving them, or just kept driving forward, then they would fail.
That's definitely up to the interviewer, in whom a lot of discretion and trust has been placed. I think a lot of it also comes down to the culture of the company—whether they're cutthroat or supportive. As you get better people into the company, hopefully this improves over time. I know that when we did it, it was never about nailing it on the first try; it was literally about proving you knew how to program and were not an asshole. So, not the equivalent of inverting a binary tree on a whiteboard. The kinds of problems we worked on in the interviews weren't leetcode-type problems; they were real tickets from our current project. Sometimes it was just doing stuff like making a new component or closing a bug, but those were the things we really did, so it felt like a better test.
I did a couple of rounds of this with my manager as the interviewer. Personally I really liked the process, and the feedback I got from the candidates was positive (but then again it always would be).
What worked well for me was that I made it very clear to my manager, a man who I trust, that I would not be able to provide him with a boolean pass/fail result. I couldn't provide him any objective measure of their ability or performance. What I could do was hang out with the candidates for an hour while we discussed some concepts I thought were important in my position. From that conversation I would be able to provide him a ranking along with a personal evaluation of whether I would personally like to work with the candidate.
I prepared some example problems that I had worked through myself a bit. Then I went into the interviews with those problems and let the candidates direct those same explorations of the problem. Some of them took me on detours I hadn't taken on my own. Some of them needed a little nudge at times. I never took specific notes, but just allowed my brain to get a natural impression of the person. I was there to get to know them, not administer an exam.
I feel like the whole experience worked super well. It felt casual, but also very telling. It was almost like a focused date. Afterwards I discussed my impression of the candidates with my manager to ensure the things I was weighing were comparable to what he desired.
All in all it was a very human process. It must have taken an enormous amount of trust from my manager to allow me the discretion to make a subjective judgment. I was extremely surprised at how clearly I was able to delineate the people, but also how that delineation shifted depending on which axis we evaluated. A simple pass/fail technical interview would have missed that image of a full person.
When I was doing interviews at my old job we did an hour of pair programming and we very intentionally designed the exercise such that
1) if you're qualified to do what we're hiring you for this stuff should be your bread and butter. We were doing restful web apps so the prompt might be "A restful API that takes in a string and returns that same string but backward." If you chase your tail on that for 15 minutes, you're not right for this job.
2) We weren't looking for a specific implementation. You want to make it a GET call with a query param that returns just the string? Neat. A POST with request and response DTOs? Also neat. We're not gonna ding you for doing it one way rather than the other, but we're definitely gonna talk to you about why you made the choice that you did and maybe try to tease out another choice you could have made so that we can ask you to compare them. Again, the only wrong answer here is one you can't defend (both shapes are sketched below).
3) I'm not here to test your ability to memorize a particular language's syntax, your IDE's autocomplete or your ability to google. If you can articulate that you're trying to create a POST endpoint I'll give you that the correct annotation with Spring is @PostMapping or little things like that.
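To make the two acceptable shapes concrete, here's a minimal sketch, assuming a Spring Boot app (controller and DTO names are invented for illustration, not from the original interview):

```java
import org.springframework.web.bind.annotation.*;

@RestController
public class ReverseController {

    // Option 1: GET with a query param, returning the bare string.
    @GetMapping("/reverse")
    public String reverseViaQuery(@RequestParam String input) {
        return new StringBuilder(input).reverse().toString();
    }

    // Option 2: POST with request and response DTOs.
    public record ReverseRequest(String input) {}
    public record ReverseResponse(String reversed) {}

    @PostMapping("/reverse")
    public ReverseResponse reverseViaBody(@RequestBody ReverseRequest request) {
        String reversed = new StringBuilder(request.input()).reverse().toString();
        return new ReverseResponse(reversed);
    }
}
```

Either shape passes; the interesting part is the follow-up conversation about why you picked one, what you gave up or gained, and when a request body earns its keep.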
I do 1hr pair programming interviews for my company and you have to strike a balance between letting candidates think through the problem even when you think it won't work (to see their thought process and maybe be surprised at their approach working/see how quickly they can self-correct) and keeping them on track so that the interview still provides a good signal for candidates who are less familiar with that specific task/stack.
I'm also not actually testing for pair programming ability directly, more so the ability to complete practical tasks / work in a specific area, collaborate, and communicate. If you choose a problem that is general/expandable enough that good candidates for the position are unlikely to go down bad rabbit holes (e.g. for a senior fullstack role, create a minimal frontend and API server that talk to each other) it works just fine. Actually, with these kinds of problems it's kind of good if your candidates end up "studying" them like with leetcode, because it means they are just learning how to do the things that they'll do on the job.
> maybe even on an actual feature
I don't think this would work unless the feature were entirely self-contained. If your workaround is to give the candidate an OSS project they need to study beforehand, I think that would bias candidates' performance in ways that aren't aligned with who you want to hire (e.g. how desperate they are for the role and how much time outside of work they are willing to put into your interview).
> if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview
Eh, if it's a reasonable bad end and you communicate through it, I wouldn't see it as a fail. Particularly if it's something I would have tried, too. (If it's something I wouldn't have thought of and it's a good idea, you're hired.)
I've (unfortunately) been interviewing the last two months and the main pattern that I've noticed is that a) big companies have terrible interview processes while b) small companies and startups are great at interviewing.
Big companies need to hire tons of people and interview even more so they need some sort of scalable process for it. An early stage startup can just ask you about your past projects and pair program with you for an hour.
> small companies and startups are great at interviewing
Small companies have the benefit of the pressure to fill a role to get work done, the lack of bureaucratic baggage to "protect" the company from bad hires, and generally don't have enough staff to suffer empire-building.
Somewhere along the line the question changes from "can this candidate do the job that other people in this office are already doing?" to "can this candidate do the job of this imaginary archetype I've invented with seemingly impossible qualities".
I hear this all the time, but I have yet to experience it. It may be because the small companies that I interview with are all startups, but I have yet to get a callback from any other kind of small company. And the startups I do interview with have full FAANG interview loops.
There seems to be a weird selection bias: if you're FAANG or FAANG-adjacent, these small companies aren't interested.
> The best interview process I've ever been a part of involved pair programming with the person for a couple of hours... You never failed to know within a few minutes whether the person could do the job
There is something funny about the "best interview process" taking "a couple hours" despite giving you the answer "within a few minutes". Seems like even the best process is a little broken.
Lightly ironic indeed! Though I’m not sure “broken” is exactly the word I’d choose.
I can only speak for myself, but I imagine myself as a candidate approaching a “couple of hours” project or relationship differently than I would a “few minutes” speed round. For that matter I can think of people I know professionally who I only know through their behavior “on stage” in structured and stylized meetings of a half hour or an hour—and I don’t feel like I have any sense at all of how they would be as day-to-day coworkers.
If we sat down to work together, you’d probably have a sense in the first few minutes of whether or not we were going to work out—but that would be contingent on us going into it with the attitude that we were working together as colleagues toward a goal.
That's mainly because there were multiple pairing sessions, and even if you knew the person was going to pass, there are still a couple more people who need to meet them, and a schedule to make sure they're available to do that. Plus due diligence, etc.
Nor am I saying it was a perfect system, just the best I've seen in terms of results.
I've been interviewing recently and got through to the last round (of five...) with an interesting company. I knew the interview involved pairing, but I didn't expect: two people sitting behind me while I sat on a wobbly stool at a standing desk, trying to use a sticky wired mouse and a non-UK keyboard, and a very bright monitor (I normally use dark mode). They had a screen recorder running, and a radio was on in the next room.
I totally bombed it. I suspect just the two people behind me would have been enough though.
The biggest victims of these non-scalable processes are people without a good network. As an intl PhD student, I am that person.
So now I have this weird dynamic: I get interview calls only from FAANG companies, the ones with the manpower to run your so-called "cursed" scalable interviews. But the smaller companies or startups, the ones who are a far better fit for my specialized skills, never call me. You need to either "know someone" or be from a big school, or there is zero chance.
Pairing on something close to whatever real work they'd be doing, but familiar to the applicant, is my favorite way to evaluate someone (e.g. choose a side project, pre-agree on adding a feature).
I don't care if someone uses modern tools (like AI assists), google, etc. - e.g. "open book" - as that's how they want to work. Evaluating their style of thinking / solving problems, comms, and output is the win.
Very few people doing this sort of interview (they tend to be our best, most empathetic developers) are likely to cut a multi-hour planned process short after a few minutes. It will eat at least an hour of their (very expensive & valuable) time.
Also, how am I supposed to filter the hundreds of AI-completed assessments? Who gets this opportunity?
We didn't do assessments (if by that you mean take home assignments). This was partly a solution to that, since nobody thought they were a good idea. If you mean the phone screen, I think that would be a problem, yep, but it wasn't an issue back in 2016. Having them pair would weed out cheaters, but we would have to figure out a way to weed them out during the screening, I agree.
We also did not require the employees doing the interview to be our most senior team members. They probably did it more often than most people, but often because they volunteered to do it. Anyone on the team would be part of the loop, which helped with scheduling. And, remember, we were working on actual tickets, so in a lot of cases it actually helped having the candidate there as a pairing partner.
For a little extra detail, the way we actually did it was to have 2-3 pairing sessions of up to 2 hours apiece. At the end of the day, all the team members who paired with the candidate had to give them the thumbs up.
I've been a proponent of pair programming since the early days of Agile, when it was still seen as part of extreme programming. Unfortunately, it’s not often employed in workplace settings.
With that said, would your perception of the interview remain positive if the outcome had been negative?
A common challenge across all interviews is a mismatch in personal dynamics, which can significantly impact the experience for both participants.
Consider a scenario where a senior developer, who prefers simplicity, is paired with a mid-level developer who is still captivated by complexity.
Or a "just start typing" person is with a "mull it over first" person. By the time I am typing code, I want to have 90% of it already completely worked out (at least till I type a "c.Lock()" and suddenly realize I hadn't considered thus and so synchronization issue.
This. Interviewing for a sr dev position with a web app, backend stack is the bog standard java, spring, SQL abstracted away via JPA. We did a first screen, then the tech interview was two of their senior devs shoulder surfing me as I built a simple API. We chatted, I built, they asked questions, I defended my decisions (sometimes successfully, sometimes gracefully conceding defeat), they left knowing that I was who my resume said I was and the reminder that popped up in the middle of the interview to feed my sourdough starter showed them that I'm a culture fit.
I think you're onto something with that last paragraph but I want to try being a bit more generous with why things are the way they are. The question seems to be "When there are hundreds of applicants how do we give everyone a fair shake without hiring an entire team of devs who do nothing but interview?" From that perspective the intentionality is different and even sensible but the end product is likely to be the same. Even when someone is chasing a metric it's because someone else wants what's best and has decided that metric is a sensible way to make that happen. At the end of the day they really do want to hire the best candidate out of a pool whose size is extremely variable and that's challenging.
Similarly, in general the best interviews I've ever been part of (as either interviewer or interviewee) turn into discussions where people's experience, opinions, and stories get aired (going both ways). You eventually get a good sense of each other, and things get more relaxed when you both realize that you know what you're talking about (this is harder for Jr roles, though).
Being peppered with questions very rarely gives any insight.
For junior roles, you want to interview for intelligence and, shall we say, an interest in learning rather than specific skills.
Even for senior roles, that's what I want to interview for, although it is true at times a business case can be made for someone that is good at some specific complex skill and doesn't need to listen to other people to do ok work.
I work at a company which has 11 engineers and competes with companies with 100s. The hiring process was a screening call with the CTO to not waste the prospective team's time, then a call with 2 of my prospective colleagues to gauge competence and cultural fit. Since then I have been involved in hiring most of the team I work with now. The CTO is one of the most competent engineers I have ever met and he designed this process. He also has very high EQ. One of the points I sell to prospective hires is him as a person to work with, as well as our team. He has also flatly denied people I suggested before and that's fine.
I have been here 5 years now and I'm working with the most competent team I have ever worked with. My takeaway from this is that hiring doesn't need to be commoditised and scale; it just needs to find good people and give them an opportunity to show you that you do or don't want to work with them.
Was this an in-person pair programming session or remote via shared desktop?
Because the latter is garbage. You can't see the other person or read their body language. It also gives off "proctored test" vibes. Or worse than vibes - where you're expected to clear your room, swing your camera around, and never look away.
I used to love getting to know the interviewer and doing things like that, but IMO the market has shifted fundamentally on both ends, and this is no longer effective for most SaaS roles. This is anecdotal for the US/Canada tech market over the past 10 years, so YMMV.
Developers Side:
Since developers don't have job security anymore (at least for those who work on common languages like Go, Python, Java and Typescript) they are better off learning and keeping in touch with leetcode and system design questions, looking for new opportunities and interviewing in "batch mode" when looking for a job. The idea is to clear as many interviews as possible using the same concepts, get in and make money asap before you get laid off. No incentive for collaboration or for fulfilling but esoteric stuff like Haskell and Scala. Career security > Job security.
Companies Side:
On the other end, software companies have less trust in developers staying long term, so they want to make the interview process as quick and risk-free as possible. In essence they are betting that by perusing hundreds of resumes and hiring someone who seemingly knows CS concepts, they can get some value out of them before they leave. Standardized tests/vetting > team fit.
TL;DR: The art is gone from this job; it's become akin to management consulting or investment banking. Quality and UX seem to be regressing across the board as a result.
I think what makes it work is that our code pair is pretty low stakes. I was told that I didn’t have to finish the problem and I was free to use whatever tools or language I needed. They just wanted to see how I work and collaborate.
That's what we did: pair program on some real production code and tickets. This way the person could get a feel for what they potentially were walking into, and you get a good idea of how they think and approach problems.
Teams are really sleeping on code reviews as an assessment tool. As in having the candidate review code.
A junior, mid, senior, staff are going to see very different things in the same codebase.
Not only that, as AI generated code becomes more common, teams might want to actively select for devs that can efficiently review code for quality and correctness.
I went through one interview with a YC company that had a first round code review. I enjoyed it so much that I ended up making a small open source app for teams that want to use code reviews: https://coderev.app (repo: https://github.com/CharlieDigital/coderev)
This is harder than it sounds, although I agree in a vacuum the idea is a good one.
So much value of the code review comes from having actual knowledge of the larger context. Mundane stuff like formatting quirks and obvious bad practices should be getting hoovered up by the linters anyways. But what someone new may *not* know is that this cruft is actually important for some arcane reason. Or that it's important that this specific line be super performant and that's why stylistically it's odd.
The real failure mode I worry about here is how much of this stuff becomes second nature to people on a team. They see it as "obvious" and forget that it's actually a nuance of their specific circumstances. So then a candidate comes in and misses something "obvious", and, well, here's the door.
You can do code review exercises without larger context.
An example from the interview: the code included a Python web API and SQL schema. Some obvious points I noticed were no input validation, string concatenation for database access (SQL injection), no input scrubbing (XSS), some missing indices suggested by the call pattern, a few bad data type choices (e.g. integer for user ID), and a possible infinite loop in one case.
You might be thinking about it in the wrong way; what you want to see is that someone can spot these types of logic errors that either a human or AI copilot might produce regardless of the larger context.
The juniors will find formatting and obvious bad practices; the senior and staff will find the real gems. This format works really well for stratification.
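To make the stratification concrete, here's a hypothetical snippet in the spirit of that exercise - rendered in Java/JDBC rather than the original Python, with invented names - showing the injection-prone version next to the fix a reviewer should ask for:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserQueries {
    private final Connection conn;

    public UserQueries(Connection conn) {
        this.conn = conn;
    }

    // Review bait: string concatenation makes this injectable, and nothing
    // validates the caller-supplied value first.
    public ResultSet findByEmailUnsafe(String email) throws SQLException {
        String sql = "SELECT id, email FROM users WHERE email = '" + email + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // The reviewed version: a parameterized query, so the driver handles
    // quoting and the value can never terminate the SQL string.
    public ResultSet findByEmail(String email) throws SQLException {
        PreparedStatement ps =
            conn.prepareStatement("SELECT id, email FROM users WHERE email = ?");
        ps.setString(1, email);
        return ps.executeQuery();
    }
}
```

A junior spots the concatenation; a senior also asks whether email is indexed and why the method leaks a raw ResultSet.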
It's not so hard. One of the interview stages I did somewhere well known used this.
Here's the neural net model your colleague sent you. They say it's meant to do ABC, but they found limitation XYZ. What is going on? What changes would you suggest and why?
Was actually a decent combined knowledge + code question.
I like the code review approach and tried it a few times when I needed to do interviews.
The great thing about code reviews is that there are LOTS of ways people can improve code. You can start with the basics like can you make this code run at all (i.e. compile) and can you make it create the right output. And there's also more advanced improvements like how to make the code more performant, more maintainable, and less error-prone.
Also, the candidates can talk about their reasoning about why or why not they'd change the code they're reviewing.
For example, you'd probably view the candidates differently based on their responses to seeing a code sample with a global variable; a sketch of such a sample follows the example responses below.
Poor: "Everything looks fine here"
Good: "Eliminate that global variable. We can do that by refactoring this function to..."
Better: "I see that there's a global variable here. Some say they're an anti-pattern, and that is true in most but not all cases. This one here may be ok if ..., but if not you'll need to..."
100%. It is more conducive to a conversational exchange that actually gives you better insight into how a developer thinks, much more so than leetcode.
Coding for me is an intensely focused activity and I work from home to boot so most of the time, I'm coding in complete silence. It's very awkward to be talking about my thought process while I'm coding, but not talking is just as awkward!
Some of the most interesting interviews that I felt accurately assessed my skills (even non-live ones) were debugging and code review assessments. I didn't get offers from those companies because I failed the leetcodes they did later in the process, but I felt the review interviews were a good way to be assessed.
Yep, I've done a lot of SQL interviews, and it is always interesting to see the folks who crashed and burned at code review and killed it at writing individual queries. Sometimes, unexpectedly, the opposite happened: the person would fly through a code review and do really subpar work on the writing, a signal I usually took to mean that the person was nervous as hell in the interview.
The two folks who showed this behavior I hired anyway (they were contractors, so nbd), and they were excellent hires, so I really love the code review approach for climbing up Bloom's taxonomy.
I feel like identifying problems is the most important skill for success. Especially with AI, (but even before that) SEs are more often "editing" rather than "writing" code, and most of your time is either fixing odd states or anticipating them.
I don't know. A cold code review on a codebase they never saw is not super informative about how the candidate would interact with you and the code once they're in known territory.
I loved the idea of code review interviews - I've had several good ones - until yesterday, when I had my first bad code review interview.
They asked me to review a function for a residential housing payment workflow, which I'm unfamiliar with, from an actual snippet of their bad production code (which has since been rewritten). In Go, which I've never used (I've never professionally used a language without built-in exception handling, for example).
I had to spend more than half of my time asking questions to get enough context - about Go error-handling techniques, about the abstractions they were using (for which we only had the import statements), and about the way the external system was structured to handle these requests - just to review the hundred lines of code they shared.
I was able to identify a bunch of things incidentally, like making all of the DB changes part of a transaction so that we don't get inconsistent state, or breaking up the function into sub-functions because the names were extremely long, but this was so far outside my area of expertise and comfort zone that I felt like I was shooting in the dark.
So just like any other interview style, they can be done very poorly.
Yeah it's really tempting when you discover an interesting fact to think "that would make an interesting interview question" and turn the interview into some kind of pub quiz. Happens with all forms of technical interview though. I mean 90% of leetcode questions are "this one weird trick".
I was asked by an SME to code on a whiteboard for an interview (in 2005? I think?). I asked if I could have a computer, they said no. I asked if I would be using a whiteboard during my day-to-day. They said no. I asked why they used whiteboards, they said they were mimicking Google's best practice. That discussion went on for a good few minutes and by the end of it I was teetering on leaving because the fit wasn't good.
I agreed to do it as long as they understood that I felt it was a terrible way of assessing someone's ability to code. I was allowed to use any programming language because they knew them all (allegedly).
The solution was a pretty obvious bit-shift. So I wrote memory registers up on the board and did it in Motorola 68000 Assembler (because I had been doing a lot of it around that time), halfway through they stopped me and I said I'd be happy to do it again if they gave me a computer.
I work for a FAANG subsidiary. We pay well below average salary and equity. We finally got one nice perk, a very good 401k match. A few months later it was announced that the 401k match would be scaled back "to come in line with what our parent company offers". I thought about asking "will we be getting salaries or equity in line with what our parent company offers?" but that would have been useless. Management doesn't care. I'm job hunting.
Folks getting mad about whiteboard interviews is a meme at this point. It misses the point. We CAN'T test you effectively on your programming skill base. So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.
It isn't that your interviewer knew all the languages, but that the language didn't matter.
I didn't get this until I was giving interviews. The instructions on how to give them are pretty clear. The goal isn't to "solve the puzzle" but instead to demonstrate you can reason about it effectively, communicate your knowledge and communicate as part of problem solving.
I know many interviewers also didn't get it, and it became just "do you know the trick to my puzzle". That pattern of failure is a good reason to deprecate whiteboard interviews, not "I don't write on a whiteboard when I program in real life".
> We CAN'T test you effectively on your programming skill base. So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.
Except, that's not what happens. In basically every coding interview in my life, it's been a gauntlet: code this leetcode medium/hard problem while singing and tapdancing backwards. Screw up in any way -- or worse (and also commonly) miss the obscure trick that brings the solution to the next level of algorithmic complexity -- and your interview day is over. And it's only gotten worse over time, in that nowadays, interviewers start with the leetcode medium as the "warmup exercise". That's nuts.
It's not a one off. The people doing these interviews either don't know what they're supposed to be looking for, or they're at a big tech company and their mandate is to be a severe winnowing function.
> It isn't that your interviewer knew all the languages, but that the language didn't matter.
I've done enough programming interviews to know that using even a marginally exotic language (like, say, Ruby) will drastically reduce your success rate. You either use a language that your interviewer knows well, or you're adding a level of friction that will hurt you. Interviewers love to say that language doesn't matter, but in practice, if they can't know that you're not making up the syntax, then it dials up the skepticism level.
> So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.
Everybody says that, but the reality is they don't, imho. If you don't pass the pet-question quiz, "they don't know how to program" or are a "faker", etc.
I've seen this over and over and if you want to test a real conversation you can ask about their experience. (I realize the challenge with that is young interviewers aren't able to do that very well with more experienced people.)
> The goal isn't to "solve the puzzle" but instead to demonstrate you can reason about it effectively, communicate your knowledge and communicate as part of problem solving.
...while being closely monitored in a high-stakes performance in front of an audience of strangers judging them critically.
+1 to all this. It still surprises me how many people, even after being in the industry for years, think the goal of any interview is to “write the best code” or “get the right answer”.
What I want to know from an interview is if you can be presented an abstract problem and collaboratively work with others on it. After that, getting the “right” answer to my contrived interview question is barely even icing on the cake.
If you complain about having to have a discussion about how to solve the problem, I no longer care about actually solving the problem, because you’ve already failed the test.
> I was asked by an SME to code on a whiteboard for an interview (in 2005? I think?). I asked if I could have a computer, they said no. I asked if I would be using a whiteboard during my day-to-day. They said no. I asked why they used whiteboards, they said they were mimicking Google's best practice.
This looks more like a culture fit test than a coding test.
I am so happy that you did this. We vote with our feet and sadly, too many tech folks are unwilling to use their power or have golden handcuff tunnel vision.
> I was allowed to use any programming language because they knew them all (allegedly).
After 30 years of doing this, I find that typically the people who claim to know a lot often know very little. They're insecure in their ability so much that they've tricked themselves into not learning anything.
What is the functional difference between copying an AI answer and copying a StackOverflow answer, in terms of it being "cheating" during an interview?
I think the entire question is missing the forest for the trees. I have never asked a candidate to write code in any fashion during an interview. I talk to them. I ask them how they would solve problems, chase down bugs, or implement new features. I ask about concepts like OOP. I ask about what they've worked on previously, what they found interesting, what they found frustrating, etc.
Languages are largely teachable, it's just syntax and keywords. What I can't teach people is how to think like programmers need to: how to break down big, hard problems into smaller problems and implement solutions. If you know that, I can teach you fucking Swift, it isn't THAT complicated and there's about 5 million examples of "how do I $X" available all over the Internet.
Company A wants to hire an engineer, an AI could solve all their tech interview questions, so why not hire that AI instead?
There's very likely a real answer to that question, and that answer should shape the way that engineer should be assessed and hired.
For example, it could be that the company wants the engineer to do some kind of assessment whether a feature should be implemented at all, and if yes, in what way. Then you could, in an interview, give a bit of context and then ask the candidate to think out loud about an example feature request.
It seems to me the heart of the problem is that companies aren't very clear about what value the engineers add, and so they have trouble deciding whether a candidate could provide that value.
The even bigger challenge is that hiring experts in any domain requires domain knowledge, but hiring has been shifted to HR. They aren't experts in anything, and for some years they made do with formulaic approaches, but that doesn't cut it anymore. So now if your group wants to get it done, and done well, you have to get involved yourself, and it's a lot of work on top of your regular tasks. Maybe more work because HR is deeply involved.
Well, unless you know sufficiently senior people. But I suspect that is a deeply unsatisfactory answer to many people in this forum.
My long term last, only technically-adjacent, job came through a combination of knowing execs, having gone to the same school as my ultimate manager, and knowing various other people involved. (And having a portfolio of public work.)
I saw this at the big corporate (not faang/tech) place I work at. Engineers run and score interviews, but we don't make the final decision. That goes to HR and the hiring manager, who usually has no technical background.
HR are experts in HR, which is to say they have a broader view of the institutional needs and legal requirements of hiring and staffing than you do. It's always annoying when that clashes with your vision, but dismissing their entire domain is unlikely to help you avoid running into that dynamic again and again.
I've never seen hiring completely in the domain of HR. HR filters incoming candidates and checks for culture fit etc, but technical competency is checked by engineers/ML folks. I can't imagine an HR person checking if someone understands neural networks.
> Company A wants to hire an engineer, an AI could solve all their tech interview questions, so why not hire that AI instead?
Interview coding questions aren't like the day-to-day job, because of the nature of an interview.
In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.
It also has to be a problem that's solvable within about 40 minutes.
The problem needs to test that the candidate meets the company's hiring bar, while also having enough nuance that there's an opportunity for absolutely great candidates to impress me.
And the problem has to be possible to state unambiguously. Can't have a candidate solving the problem, but failing the interview because there was a secret requirement and they failed to read my mind.
And of course, if we're doing it in person on a whiteboard (do people do that these days?) it has to be solvable without any reference to documentation.
> In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.
If you send me a rubric I can pre-load whatever you want to talk about. If you tell me what you're trying to build and what you need help with, I can show up with a game plan.
You need to make time for a conversation on the intricacies of voucher calculation and global sales tax law if you want to find people jazzed about the problem space.
> In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.
Proving whether they are technically capable of the job seems rather silly. Look at their resume, look at their online works, ask them questions about it. Use probing questions to understand the depth of their knowledge. I don't get why we are over-engineering interviews. If I have 10+ years of experience, with some proof through chatting that I am, in fact, a professional software engineer, isn't that enough?
> Interview coding questions aren't like the day-to-day job, because of the nature of an interview.
You have missed his point. If the interview questions are such that AI can solve them, they are the wrong questions being asked, by definition. Unless that company is trying to hire a robot, of course.
One of the best interviews I've encountered as a candidate wasn't exactly a pair programming session but it was similar. The interviewer pulled up a webpage of theirs and showed me a problem with it, and then asked how I would approach fixing it. We worked our way through many parts of their stack and while it was me driving most of the way we ended up having a number of interesting conversations that cropped up organically at various points. It was scheduled for an hour and the time actually flew by.
I felt like I got a good sense of what he would be like to work with and he got to see how I approached various problems. It avoided the live coding problems of needing to remember a bunch of syntax trivia on the spot and having to focus on a quick small solution, rather than a large scalable one that you need more often for actual work problems.
“Real problems” aren’t something that can be effectively discussed in the time span of an interview, so companies concoct unreal problems that are meant to be good indicators.
Tech interviews in general need to be overhauled, and if they were it’d be less likely that AI would be as helpful in the process to begin with (at least for LLMs in their current state).
Current LLMs can do some basic coding and stitch it together to form cool programs, but they struggle at good design work that scales. Design-focused interviews paired with a soft-skill focus are a better measure of how a dev will do in the workplace in general. Yet most interviews are just "if you can solve this esoteric problem we don't use at all at work, you are hired". I'd take a bad solution with a good design over a good solution with a bad design any day, because the former is always easier to refactor and iterate on.
AI is not really good at that yet; it’s trained on a lot of public data that skews towards worse designs. It’s also not all that great at behaving like a human during code reviews; it agrees too much, is overly verbose, it hallucinates, etc.
I want to hire people who can be given some problem and will go off and work on it and come to me with questions when specs are unclear or there's some weird thing that cropped up. AI is 100% not that. You have to watch it like a 15-year-old driver.
A company wants to hire someone to perform tasks X, Y and Z. It's difficult to cleanly evaluate someone's ability to do these tasks in a short amount of time, so they do their best to construct a task A which is easy to test, and such that most people who can do A can also do X, Y and Z.
Now someone comes along and builds a machine that can do A. It turns out that while for humans, A was a good indicator of X, Y and Z, for the machine it is not. A is easy for the machine, but X, Y and Z are still difficult.
This isn't a sign that the company was wrong to ask A, nor is it a sign that they could just hire the machine.
This is a great point. Though what if the answer is that the company can hire that AI to solve a significant fraction of its actual problems? People who do the assessments and decide what features should look like are often called managers (product, engineering, etc.).
For a while I’ve been skeptical that the rate of hiring of engineers would change significantly because of LLMs, but I’m starting to feel like maybe I’m wrong and it’s already changing and companies are looking toward AI to lower costs and require fewer humans. In that case they are probably still going to want people who are technically exceptional - maybe even more so - but are able and willing to create, integrate, and babysit AI generated code, and also do PM and EM style feature management.
If companies are slowing hiring due to AI, I would expect interviews to get worse before they get better.
> For example, it could be that the company wants the engineer to do some kind of assessment whether a feature should be implemented at all, and if yes, in what way. Then you could, in an interview, give a bit of context and then ask the candidate to think out loud about an example feature request.
Maybe now, or maybe in a year or two, AI coding tools will be good enough that a single semi-technical person can be Product Manager for a small product, and implement all the feature through AI/LLM tools.
Probably not for something of the complexity of Google Maps, but for a simpler website with some interactive elements, that could work.
But then, this was just an example. There can be lots of reasons that companies still need engineers, my point was that they need to think about these reasons, and then use these reasons to decide how to select their engineers.
In most companies every engineer above a junior level is expected to pass features and bugfixes through their common sense filter and provide feedback. Product managers and designers aren't infallible and sometimes lack knowledge about the system or product that an engineer might have.
You can't just take requirements and churn out code without a critical eye at what you're doing.
I've accidentally been using an AI-proof hiring technique for about 20 years: ask a junior developer to bring code with them and have them explain it verbally. You can then talk about what they would change, how they would change it, what they would do differently, and, if they've used patterns (on purpose or by accident), what the benefits/drawbacks are, etc. If they're a senior dev, we give them - on the day - a small but humorously nasty chunk of code and ask them to reason through it live.
Works really well, and it mimics what we find is the most important bit about coding.
I don't mind if they use AI to shortcut the boring stuff in the day-to-day, as long as they can think critically about the result.
Yep. I've also been using an AI-proof interview for years. We have a normal conversation, they talk about their work, and I do a short round of well-tested technical questions (there's no trivia, let's just talk about some concepts you probably encounter fairly regularly given your areas of expertise).
You can tell who's trying to use AI live. They're clearly reading, and they don't understand the content of their answers, and they never say "I don't know." So if you ask a followup or even "are you sure" they start to panic. It's really obvious.
Maybe this is only a real problem for the teams that offload their interviewing skills onto some leetcode nonsense...
This is a fine way. I’ll say that the difference between a senior and a principal is that the senior might snicker but the principal knows that there’s a chance the code was written by a founder.
And if the Principal is good, they should stand up and say exactly why the code is bad. If there's a reason to laugh because it is cliche bad, they should say so.
If someone gave me code with

    if (x = 7) { ... }

as part of a C eval, yeah, you'll get a sarcastic response back, because I know it is testing code.
What I think people ignore is that personality matters, especially at the higher levels. If you are a Principal SWE you have to be able to stand up to a CEO and say "No, sir. I think you are wrong. This is why." In a diplomatic way. Or sometimes less than diplomatically, depending on the CEO.
One manager that hired me was trying to figure me out. So he said (and I think he was honest at the time): "You got the job as long as you aren't an axe murderer."
To which I replied deadpan: "I hope I hid the axe well." (To be clear to all reading, I have never killed someone, nevermind with an axe! Hi FBI, NSA, CIA and pals!)
Got the job, and we got along great, I operated as his right hand.
Nowadays I am on the other side of the fence: I am the interviewer. We are not a FAANG, so we just use a SANE interview process. Single interview, we ask the candidate about his CV and what his expectations are, what are his competences and we ask him to show us some code he has written. That's all. The process is fast and extremely effective. You can discriminate weak candidates in minutes.
How do you expect them to get access to the proprietary internal Git codebase, and approval from their employer's lawyers to show it to third parties during the interview?
Sounds like you're only selecting FOSS devs and nothing more.
Most people have still written code for school or a hobby project. Maybe I'm missing empathy, but I cannot understand how some developers have no code to show.
If that's the case however, just let them make a small project over the weekend and then do another interview where you ask stuff about what they've made. It's not that deep
My worst code is always what I wrote yesterday. Often what's missing is context, unless I comment ad nauseam. Sure, I didn't write complete tests, obey open/closed principles, or abstract into factory functions. The code I send from my hobby projects is likely a mess, because finishing on my own time, under my own unpaid constraints, wills it to be so.
Maybe you forked a library because of reasons. You can tour the original repo and explain the problems. I have at least one of those examples for each time the legal or confidentiality department stepped in.
That process might work for your company precisely because you are not FAANG. You don't get hundreds of applicants that are dying to get in, so people don't have that strong of a motivation to do anything it takes (including lying) to get the job.
I’ve worked at a company with 150,000 employees. The interview process was pretty much as described here. There is absolutely no reason a Big Co needs to operate any differently.
We do this too, works fine. We ask open ended questions like, "What's your favorite thing you've done in your career and why?" and "What was the most challenging project in your career and why?" If you listen, you can get a lot of insight from just those two questions. If they don't give enough detail, we'll probe a little.
Our "gotcha," which doesn't apply to most languages anymore is, "What's the difference between a function and a procedure." It's a one sentence answer, but people who didn't know it would give some pretty enlightening answers.
Edit: From the replies I can see people are a little defensive about not knowing it. Not knowing it is ok, because it was a question I asked people 20 years ago, relevant to a language long dead in the US. I blame the defensiveness on how FUBAR the current landscape is. Giving a nuanced answer to show your depth of knowledge is actually preferred. A one-sentence answer is the minimum.
I'm editing this because HN says I'm posting too fast, which is super annoying, but what can I do?
> We do this too, works fine. We ask open ended questions like, "What's your favorite thing you've done in your career and why?" and "What was the most challenging project in your career and why?" If you listen, you can get a lot of insight from just those two questions. If they don't give enough detail, we'll probe a little.
The problem is: there is a very negative incentive to give honest answers. If I were to answer these questions honestly, I'd bring up some very interesting theorems (related to some deep algorithmic topics) that I proved in my PhD thesis. Yes, I would have loved to stay in academia, but I switched to industry because of the bad job prospects in academia - and this is not what interviewers want to hear. :-(
> "What's the difference between a function and a procedure." It's a one sentence answer
The terminology here differs quite a lot in different "programming communities". For example, Wikipedia says: "Procedure (computer science), also termed a subroutine, function, or subprogram",
i.e. there is no difference. On the other hand, Pascal programmers strongly distinguish between functions and procedures; here functions return a value, but procedures don't. Programmers who are more attracted to type theory (think Haskell) would rather consider "procedures" to be functions returning a unit type. If you rather come from a database programming background, (stored) procedures vs functions are quite different concepts.
I could go on and on. What I want to point out is that this topic is much more subtle than a "one sentence answer".
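For what it's worth, here's the Pascal-style distinction transplanted into Java terms (Java itself calls both of these "methods"; the example is purely illustrative):

```java
public class Rectangle {
    // A "function" in the Pascal sense: computes and returns a value.
    static int area(int width, int height) {
        return width * height;
    }

    // A "procedure" in the Pascal sense: performs an effect, returns nothing.
    // A type theorist would say it returns the unit type, spelled void here.
    static void printArea(int width, int height) {
        System.out.println(area(width, height));
    }
}
```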
Here's an interesting thought on your "gotcha" - I'm 57 years old, been programming as a career for over 30 years, a lot of languages and I have no idea what the difference is.
> Single interview, we ask the candidate about *his* CV and what *his* expectations are, what are *his* competences and we ask *him* to show us some code *he* has written
You... might want to think about what implicit biases you might be bringing here
> There should be a ton of companies out there just dying to hire someone with that kind of experience.
heh.. they are probably dead already?
I have even longer years.. But this time I have been looking since.. September? Applying 1-2 per day, on average.. Widening the fishing net each month.. ~2% showed some interest.. but no bingo.
I am with you! Been programming since I was 10 and have 20 YoE. Many of my prototypes have grown into full-fledged products, I have 40+ published papers, and I am regularly sought out for advice and help by those who know me. Everywhere I have been, I am always told I am a good catch.
However, I won't do leet coding. I want to hear about why I should come work for you: what about my work makes you think I could help you with your problem? Then let's have a talk about your problems and where I can create value for you.
My experience in hiring is that leet coders are good one-trick ponies, but long term they don't become technical peers.
Part of the problem is there just aren't a lot of people out there who can correctly judge that level of experience, and looking up the spectrum tends to simply look weird.
What I've been thinking about leetcode medium/hard problems as a 30-45 minute tech interview (as there are a few minutes of pleasantries and 10 minutes reserved for questions) is that you are really only likely to reveal two camps of people - taking in good faith that they are not "cheating". One approaches the problem from first principles; the other knows the solution already.
Take the maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
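For reference, the trick itself is tiny once you know it - a minimal sketch of Kadane's O(n) solution, with illustrative names:

```java
public class MaxSubarray {
    // Kadane's algorithm: at each index, either extend the best subarray
    // ending at the previous index or start fresh at the current element.
    static int maxSubarraySum(int[] a) {
        int best = a[0];     // best sum seen anywhere so far
        int current = a[0];  // best sum of a subarray ending at the current index
        for (int i = 1; i < a.length; i++) {
            current = Math.max(a[i], current + a[i]);
            best = Math.max(best, current);
        }
        return best;
    }
}
```

Deriving that invariant from scratch under a 40-minute clock is a very different task from recalling it.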
I often see comments like: this person had this huge storied resume but couldn't code their way out of a paper bag. Now having been that engineer stuck in a paper bag a few times, I think this is a very narrow way to view others.
I don't know the optimal way to interview engineers. I do know the style of interview that I prefer and excel at[0], but I wouldn't be so naive as to think that the style that works for me would work for all. Often I chuckle about an anecdote from the fabled I.P. Sharp: Ian Sharp would set a light meter on his desk and measure how wide an interviewee's eyes would get when he explained APL to them. A strange way to interview, but is it any less strange than interviewing people via leetcode problems?
0: I think my ideal tech screen interview question is one that 1) has test cases; 2) the test cases gradually ramp up in complexity; 3) the complexity isn't revealed all at once - the interviewer "hides their cards," so to speak; 4) is focused on a data structure rather than an algorithm, such that the algorithm falls out naturally rather than serving as the focus; 5) gives the candidate the opportunity to weigh tradeoffs, make compromises, and cut corners given the time frame; 6) doesn't combine big ideas (i.e. you shouldn't have to parse complex input and do something complicated with it); pick a single focus. Interviews I have participated in and enjoyed like this: construct a Set class (union, difference, etc.); implement an RPN calculator (ramp up the complexity by introducing multiple arities); create a range function that works like the Python range function (for junior engineers, this one involves a function with different behavior based on arity).
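As one concrete example of that footnote's shape, here's a hypothetical starting point for the RPN calculator question (the interviewer would "hide their cards" by introducing new operators and arities only after this much works):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RpnCalculator {
    // Evaluates a space-separated RPN expression, e.g. "3 4 + 2 *" -> 14.
    static int eval(String expression) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : expression.trim().split("\\s+")) {
            switch (token) {
                case "+", "-", "*", "/" -> {
                    int b = stack.pop(); // right operand was pushed last
                    int a = stack.pop();
                    stack.push(switch (token) {
                        case "+" -> a + b;
                        case "-" -> a - b;
                        case "*" -> a * b;
                        default  -> a / b;
                    });
                }
                default -> stack.push(Integer.parseInt(token));
            }
        }
        return stack.pop();
    }
}
```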
> Take the maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
This is something that drives me nuts in academia when it comes to exam questions. I once took an exam that asked us to invent vector clocks from whole cloth, basically, having only knowledge of a basic Lamport clock for context. I think one person got it--and that person had just learned about vector clocks in a different class. Given some time, it's possible I could have figured it out. But on an exam, you've got like 10-15 minutes per question.
The funny thing about it is that I do the same damn thing from the other side all the time when working with students. It's incredibly tempting once you know the solution to a problem (especially if you didn't "solve" it yourself, but had the solution presented to you already) to present the question as though it has an obvious solution and expect somebody else to immediately solve it.
I'm aware of the effect, I've experienced it many times, and I still catch myself doing it. I've never interviewed a candidate for a job, but I can only imagine how tempting it would be to fall into that trap.
When I'm interviewing a candidate, I'm often asking myself whether a question is just something I happen to know and therefore expect the candidate to know too, or whether it's crucial to doing the job.
Sometimes it may not be fair to expect a random developer to be familiar with a specific concept. But at the same time it might be critical to the kind of work we're doing.
That’s an assumption. Perhaps following a dead end for a while, realizing it, and pivoting is a valuable, positive signal?
I prepared some example problems that I had worked through myself a bit. Then I went into the interviews with those problems and let the candidates direct those same explorations of the problem. Some of them took me on detours I hadn't taken on my own. Some of them needed a little nudge at times. I never took specific notes, but just allowed my brain to form a natural impression of the person. I was there to get to know them, not administer an exam.
I feel like the whole experience worked super well. It felt casual, but also very telling. It was almost like a focused date. Afterwards I discussed my impression of the candidates with my manager to ensure the things I was weighing were somewhat comparable to what he desired.
All in all it was a very human process. It must have taken an enormous amount of trust from my manager to allow me the discretion to make a subjective judgment. I was extremely surprised at how clearly I was able to delineate the people, but also how that delineation shifted depending on which axis we evaluated. A simple pass/fail technical interview would have missed that image of a full person.
1) If you're qualified to do what we're hiring you for, this stuff should be your bread and butter. We were doing restful web apps, so the prompt might be "A restful API that takes in a string and returns that same string but backward." If you chase your tail on that for 15 minutes, you're not right for this job.
2) We weren't looking for a specific implementation. You want to make it a GET call with a query param that returns just the string? Neat. A POST with request and response DTOs? Also neat. We're not gonna ding you for doing it one way rather than the other but we're definitely gonna talk to you about why you made the choice that you did and maybe try to tease out another choice you could have made so that we can ask you to compare them. Again, the only wrong answer here is one you can't defend.
3) I'm not here to test your ability to memorize a particular language's syntax, your IDE's autocomplete, or your ability to google. If you can articulate that you're trying to create a POST endpoint, I'll give you that the correct annotation in Spring is @PostMapping, and other little things like that.
I'm also not actually testing for pair programming ability directly, more the ability to complete practical tasks / work in a specific area, collaborate, and communicate. If you choose a problem that is general/expandable enough that good candidates for the position are unlikely to go down bad rabbit holes (eg for a senior fullstack role, create a minimal frontend and api server that talk to each other) it works just fine. Actually, with these kinds of problems it's kind of good if your candidates end up "studying" them like with leetcode, because it means they are just learning how to do the things that they'll do on the job.
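To make the earlier "string but backward" prompt concrete, here's a minimal sketch of one acceptable shape, using Flask purely for illustration (the shop in question was Spring; the route and names are my own, not theirs):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # One defensible choice: a GET with a query param. A POST with request
    # and response DTOs would be just as acceptable; what matters in the
    # interview is being able to explain why you picked one over the other.
    @app.get("/reverse")
    def reverse():
        text = request.args.get("text", "")
        return jsonify({"reversed": text[::-1]})

    if __name__ == "__main__":
        app.run()  # curl 'localhost:5000/reverse?text=hello' -> {"reversed": "olleh"}

Either variant passes; the follow-up conversation about the tradeoff is the actual test.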
> maybe even on an actual feature
I don't think this would work unless the feature were entirely self-contained. If your workaround is to give the candidate an OSS project they need to study beforehand, I think that would bias candidates' performance in ways that aren't aligned with who you want to hire (eg how desperate are they for the role and how much time outside of work are they willing to put into your interview).
Eh, if it's a reasonable bad end and you communicate through it, I wouldn't see it as a fail. Particularly if it's something I would have tried, too. (If it's something I wouldn't have thought of and it's a good idea, you're hired.)
Big companies need to hire tons of people and interview even more so they need some sort of scalable process for it. An early stage startup can just ask you about your past projects and pair program with you for an hour.
Small companies have the benefit of the pressure to fill a role to get work done, the lack of bureaucratic baggage to "protect" the company from bad hires, and generally don't have enough staff to suffer empire-building.
Somewhere along the line the question changes from "can this candidate do the job that other people in this office are already doing?" to "can this candidate do the job of this imaginary archetype I've invented with seemingly impossible qualities".
There seems to be a weird selection bias: if you're FAANG or FAANG-adjacent, these small companies aren't interested.
If a startup can spend 20 man-hours filling a single position, why can't a big company spend 1000 man-hours filling 50 positions?
But you don't! You only need to find the first person who is good enough to do the job. You do not need to find the best person.
There is something funny about the "best interview process" taking "a couple hours" despite giving you the answer "within a few minutes". Seems like even the best process is a little broken.
I can only speak for myself, but I imagine myself as a candidate approaching a “couple of hours” project or relationship differently than I would a “few minutes” speed round. For that matter I can think of people I know professionally who I only know through their behavior “on stage” in structured and stylized meetings of a half hour or an hour—and I don’t feel like I have any sense at all of how they would be as day-to-day coworkers.
If we sat down to work together, you’d probably have a sense in the first few minutes of whether or not we were going to work out—but that would be contingent on us going into it with the attitude that we were working together as colleagues toward a goal.
Nor am I saying it was a perfect system, just the best I've seen in terms of results.
I've been interviewing recently and got through to the last round (of five...) with an interesting company. I knew the interview involved pairing, but I didn't expect: two people sitting behind me while I sat on a wobbly stool at a standing desk, trying to use a sticky wired mouse and a non-UK keyboard, and a very bright monitor (I normally use dark mode). They had a screen recorder running, and a radio was on in the next room.
I totally bombed it. I suspect just the two people behind me would have been enough though.
So now I have this weird dynamic: I get interview calls only from FAANG companies, the ones with the manpower to do your so-called "cursed" scalable interviews. But the smaller companies or startups, the ones who are a far better fit for my specialized skills, never call me. You need to either "know someone" or be from a big school, or there is zero chance.
I don't care if someone uses modern tools (like AI assists), google, etc - e.g. "open book" - as that's how they want to work. Evaluating their style of thinking / problem solving, comms, and output is the win.
Additionally it’s rarely the hiring that makes a great team - it’s the long term commitment and investment in training.
Also, how am I supposed to filter the hundreds of AI-completed assessments? Who gets this opportunity?
We also did not require the employees doing the interview to be our most senior team members. They probably did it more often than most people, but often because they volunteered to do it. Anyone on the team would be part of the loop, which helped with scheduling. And, remember, we were working on actual tickets, so in a lot of cases it actually helped having the candidate there as a pairing partner.
For a little extra detail, the way we actually did it was to have 2-3 pairing sessions of up to 2 hours apiece. At the end of the day, all the team members who paired with the candidate had to give them the thumbs up.
Idk if I'm even being sarcastic here.
With that said, would your perception of the interview remain positive if the outcome had been negative?
A common challenge across all interviews is a mismatch in personal dynamics, which can significantly impact the experience for both participants.
Consider a scenario where a senior developer, who prefers simplicity, is paired with a mid-level developer who is still captivated by complexity.
I think you're onto something with that last paragraph but I want to try being a bit more generous with why things are the way they are. The question seems to be "When there are hundreds of applicants how do we give everyone a fair shake without hiring an entire team of devs who do nothing but interview?" From that perspective the intentionality is different and even sensible but the end product is likely to be the same. Even when someone is chasing a metric it's because someone else wants what's best and has decided that metric is a sensible way to make that happen. At the end of the day they really do want to hire the best candidate out of a pool whose size is extremely variable and that's challenging.
Then why spend a couple hours?
Being peppered with questions very rarely gives any insight.
Even for senior roles, that's what I want to interview for, although at times a business case can be made for someone who is good at some specific complex skill and doesn't need to listen to other people to do OK work.
I have been here 5 years now and I'm working with the most competent team I have ever worked with. My takeaway from this is that hiring doesn't need to be commoditised and scaled; it just needs to find good people and give them an opportunity to show you that you do or don't want to work with them.
>You never failed to know within a few minutes whether the person could do the job
Did I misunderstand something, or is your best interview process to take multiple hours from someone when you've decided within minutes?
Because the latter is garbage. You can't see the other person or read their body language. It also gives off "proctored test" vibes. Or worse than vibes - where you're expected to clear your room, swing your camera around, and never look away.
The downside we found was inconsistency. Candidates 1 and 2 get work of variable difficulty. How do we decide who did better?
Developers Side: Since developers don't have job security anymore (at least for those who work on common languages like Go, Python, Java and Typescript) they are better off learning and keeping in touch with leetcode and system design questions, looking for new opportunities and interviewing in "batch mode" when looking for a job. The idea is to clear as many interviews as possible using the same concepts, get in and make money asap before you get laid off. No incentive for collaboration or for fulfilling but esoteric stuff like Haskell and Scala. Career security > Job security.
Companies Side: On the other end software companies have less trust in developers staying long term so they want to make the interview process as quick and risk free as possible. In essence they are betting that by perusing 100s of resumes and hiring someone who seemingly knows CS concepts they can get some value out of them before they leave. Standardized tests/vetting > team fit.
TL;DR: The art is gone from this job; it's become akin to management consulting or investment banking. Quality and UX seem to be regressing across the board as a result.
Not sure how those are similar.
I think what makes it work is that our code pair is pretty low stakes. I was told that I didn’t have to finish the problem and I was free to use whatever tools or language I needed. They just wanted to see how I work and collaborate.
Teams are really sleeping on code reviews as an assessment tool. As in having the candidate review code.
A junior, mid, senior, staff are going to see very different things in the same codebase.
Not only that, as AI generated code becomes more common, teams might want to actively select for devs that can efficiently review code for quality and correctness.
I went through one interview with a YC company that had a first round code review. I enjoyed it so much that I ended up making a small open source app for teams that want to use code reviews: https://coderev.app (repo: https://github.com/CharlieDigital/coderev)
So much value of the code review comes from having actual knowledge of the larger context. Mundane stuff like formatting quirks and obvious bad practices should be getting hoovered up by the linters anyways. But what someone new may *not* know is that this cruft is actually important for some arcane reason. Or that it's important that this specific line be super performant and that's why stylistically it's odd.
The real failure mode I worry about here is how much of this stuff becomes second nature to people on a team. They see it as "obvious" and forget that it's actually a nuance of their specific circumstances. So then a candidate comes in and misses something "obvious", and, well, here's the door.
An example from the interview: the code included a Python web API and SQL schema. Some obvious points I noticed were no input validation, string concatenation for database access (SQL injection), no input scrubbing (XSS), some missing indices based on the call pattern, a few bad data type choices (e.g. integer for user ID), and a possible infinite loop in one case.
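For anyone who hasn't seen the concatenation flaw in the wild, a minimal before/after sketch, using sqlite3 with hypothetical table and names (not the actual interview code):

    import sqlite3

    def get_user_unsafe(conn: sqlite3.Connection, user_id: str):
        # The flaw a reviewer should flag: concatenating raw input lets a
        # crafted value like "1' OR '1'='1" rewrite the query (SQL injection).
        return conn.execute(
            "SELECT * FROM users WHERE id = '" + user_id + "'"
        ).fetchone()

    def get_user_safe(conn: sqlite3.Connection, user_id: str):
        # Parameterized query: the driver treats the input as data, not SQL.
        return conn.execute(
            "SELECT * FROM users WHERE id = ?", (user_id,)
        ).fetchone()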
You might be thinking about it in the wrong way; what you want to see is that someone can spot these types of logic errors that either a human or AI copilot might produce regardless of the larger context.
The juniors will find formatting and obvious bad practices; the senior and staff will find the real gems. This format works really well for stratification.
Here's the neural net model your colleague sent you. They say it's meant to do ABC, but they found limitation XYZ. What is going on? What changes would you suggest and why?
Was actually a decent combined knowledge + code question.
The great thing about code reviews is that there are LOTS of ways people can improve code. You can start with the basics like can you make this code run at all (i.e. compile) and can you make it create the right output. And there's also more advanced improvements like how to make the code more performant, more maintainable, and less error-prone.
Also, the candidates can talk about their reasoning about why or why not they'd change the code they're reviewing.
For example, you'd probably view the candidates differently based on their responses to seeing a code sample with a global variable.
Poor: "Everything looks fine here"
Good: "Eliminate that global variable. We can do that by refactoring this function to..."
Better: "I see that there's a global variable here. Some say they're an anti-pattern, and that is true in most but not all cases. This one here may be ok if ..., but if not you'll need to..."
Coding for me is an intensely focused activity and I work from home to boot so most of the time, I'm coding in complete silence. It's very awkward to be talking about my thought process while I'm coding, but not talking is just as awkward!
The two folks who showed this behavior I hired anyway (they were contractors, so nbd) and they were excellent hires, so I really love the code review approach for climbing up Bloom's taxonomy.
So yeah, I think it's the opposite: explicitly testing for their ability to read code is probably kinda important.
They asked me to review a function for a residential housing payment workflow, which I'm unfamiliar with, from an actual snippet of their bad production code (which has since been rewritten), in Go, which I've never used (I've never professionally used a language that doesn't have exception handling built in, for example).
I had to spend more than half of my time asking questions to get enough context - about Go error handling techniques, about the abstractions they were using (for which we only had the import statements), and about the way the external system was structured to handle these requests - just to review the hundred lines of code they shared.
I was able to identify a bunch of things incidentally, like making all of the DB changes as part of a transaction so that we don't get inconsistent state, or breaking the function up into sub-functions because the names were extremely long, but this was so far outside my area of expertise and comfort zone that I felt like I was shooting in the dark.
So just like any other interview style, they can be done very poorly.
Language and domain experience are things I'd like to know about after an interview process.
I guess it would degrade to stackoverflow-like problems eventually, but still interesting.
https://codereview.stackexchange.com
It would be interesting, but I agree it would need to be content moderated to some extent.
Each code sample should have multiple things wrong. The best people will find most (not necessarily all) of them. The mediocre will find a few.
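A hypothetical example of what a seeded sample might look like, with the planted issues marked (in a real screen the comments would obviously be absent):

    def load_users(path):
        f = open(path)                    # seeded: file handle is never closed
        users = {}
        for line in f.readlines():        # seeded: reads the whole file into memory
            name, age = line.split(",")   # seeded: raises on blank or malformed lines
            users[name] = int(age)        # seeded: silently overwrites duplicate names
        return users

Juniors tend to catch the mechanical ones; seniors start asking whether the function should exist in this form at all.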
I agreed to do it as long as they understood that I felt it was a terrible way of assessing someone's ability to code. I was allowed to use any programming language because they knew them all (allegedly).
The solution was a pretty obvious bit-shift. So I wrote memory registers up on the board and did it in Motorola 68000 Assembler (because I had been doing a lot of it around that time). Halfway through they stopped me, and I said I'd be happy to do it again if they gave me a computer.
They offered me the job. I went elsewhere.
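The actual problem isn't given, but for anyone wondering what an "obvious bit-shift" answer tends to look like, the classic case is multiplying or dividing by a power of two (a hypothetical illustration, not the interview question):

    def times_eight(n: int) -> int:
        return n << 3    # shifting left by 3 multiplies by 2**3

    def div_four(n: int) -> int:
        return n >> 2    # shifting right by 2 floor-divides by 2**2

    assert times_eight(5) == 40
    assert div_four(23) == 5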
Folks getting mad about whiteboard interviews is a meme at this point. It misses the point. We CAN'T test you effectively on your programming skillbase. So we test on a more relevant job skill, like whether you can have a real conversation (with a whiteboard to help) about how to solve the problem.
It isn't that your interviewer knew all the languages, but that the language didn't matter.
I didn't get this until I was giving interviews. The instructions on how to give them are pretty clear. The goal isn't to "solve the puzzle" but instead to demonstrate you can reason about it effectively, communicate your knowledge and communicate as part of problem solving.
I know many interviewers also didn't get it, and it became just "do you know the trick to my puzzle". That pattern of failure is a good reason to deprecate whiteboard interviews, not "I don't write on a whiteboard when I program in real life".
Except, that's not what happens. In basically every coding interview in my life, it's been a gauntlet: code this leetcode medium/hard problem while singing and tapdancing backwards. Screw up in any way -- or worse (and also commonly) miss the obscure trick that brings the solution to the next level of algorithmic complexity -- and your interview day is over. And it's only gotten worse over time, in that nowadays, interviewers start with the leetcode medium as the "warmup exercise". That's nuts.
It's not a one off. The people doing these interviews either don't know what they're supposed to be looking for, or they're at a big tech company and their mandate is to be a severe winnowing function.
> It isn't that your interviewer knew all the languages, but that the language didn't matter.
I've done enough programming interviews to know that using even a marginally exotic language (like, say, Ruby) will drastically reduce your success rate. You either use a language that your interviewer knows well, or you're adding a level of friction that will hurt you. Interviewers love to say that language doesn't matter, but in practice, if they can't know that you're not making up the syntax, then it dials up the skepticism level.
Everybody says that, but in reality they don't, imho. If you don't pass the pet question quiz, "they don't know how to program" or they're a "faker", etc.
I've seen this over and over and if you want to test a real conversation you can ask about their experience. (I realize the challenge with that is young interviewers aren't able to do that very well with more experienced people.)
And do you frame the problem like that when giving interviews? Or the candidates are led to believe working code is expected?
...while being closely monitored in a high-stakes performance in front of an audience of strangers judging them critically.
What I want to know from an interview is if you can be presented an abstract problem and collaboratively work with others on it. After that, getting the “right” answer to my contrived interview question is barely even icing on the cake.
If you complain about having to have a discussion about how to solve the problem, I no longer care about actually solving the problem, because you’ve already failed the test.
This looks more like a culture fit test than a coding test.
I am so happy that you did this. We vote with our feet and sadly, too many tech folks are unwilling to use their power or have golden handcuff tunnel vision.
After 30 years of doing this, I find that typically the people who claim to know a lot often know very little. They're insecure in their ability so much that they've tricked themselves into not learning anything.
Today? Now that's when it is tricky. How can we know you are not one of these prompt-"engineer" copy-pasters? That's the issue being discussed.
20 years and many new technologies of difference.
I think the entire question is missing the forest for the trees. I have never asked a candidate to write code in any fashion during an interview. I talk to them. I ask them how they would solve problems, chase down bugs, or implement new features. I ask about concepts like OOP. I ask about what they've worked on previously, what they found interesting, what they found frustrating, etc.
Languages are largely teachable, it's just syntax and keywords. What I can't teach people is how to think like programmers need to: how to break down big, hard problems into smaller problems and implement solutions. If you know that, I can teach you fucking Swift, it isn't THAT complicated and there's about 5 million examples of "how do I $X" available all over the Internet.
There's very likely a real answer to that question, and that answer should shape the way that engineer should be assessed and hired.
For example, it could be that the company wants the engineer to do some kind of assessment whether a feature should be implemented at all, and if yes, in what way. Then you could, in an interview, give a bit of context and then ask the candidate to think out loud about an example feature request.
It seems to me the heart of the problem is that companies aren't very clear about what value the engineers add, and so they have trouble deciding whether a candidate could provide that value.
Well, unless you know sufficiently senior people. But I suspect that is a deeply unsatisfactory answer to many people in this forum.
My long term last, only technically-adjacent, job came through a combination of knowing execs, having gone to the same school as my ultimate manager, and knowing various other people involved. (And having a portfolio of public work.)
Not everywhere. At my company, HR owns the process but we -- the hiring tech team -- own the content of interviews and the outcomes.
Interview coding questions aren't like the day-to-day job, because of the nature of an interview.
In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.
It also has to be a problem that's solvable within about 40 minutes.
The problem needs to test that the candidate meets the company's hiring bar - while also having enough nuance that there's an opportunity for absolutely great candidates to impress me.
And the problem has to be possible to state unambiguously. Can't have a candidate solving the problem, but failing the interview because there was a secret requirement and they failed to read my mind.
And of course, if we're doing it in person on a whiteboard (do people do that these days?) it has to be solvable without any reference to documentation.
If you send me a rubric I can pre-load whatever you want to talk about. If you tell me what you're trying to build and what you need help with, I can show up with a game plan.
You need to make time for a conversation on the intricacies of voucher calculation and global sales tax law if you want to find people jazzed about the problem space.
Proving whether they are technically capable of a job seems rather silly. Look at their resume, look at their online works, ask them questions about it. Use probing questions to understand the depth of their knowledge. I don't get why we are over-engineering interviews. If I have 10+ years of experience, with some proof through chatting that I am, in fact, a professional software engineer, isn't that enough?
You have missed his point. If the interview questions are such that AI can solve them, they are the wrong questions being asked, by definition. Unless that company is trying to hire a robot, of course.
I felt like I got a good sense of what he would be like to work with and he got to see how I approached various problems. It avoided the live coding problems of needing to remember a bunch of syntax trivia on the spot and having to focus on a quick small solution, rather than a large scalable one that you need more often for actual work problems.
Let's not pretend 95% of companies aren't asking asinine interview questions (though I understand the reasons why) that LLMs can easily solve.
Current LLMs can do some basic coding and stitch it together to form cool programs, but they struggle at good design work that scales. Design-focused interviews paired with a soft-skill focus are a better measure of how a dev will be in the workplace in general. Yet most interviews are just “if you can solve this esoteric problem we don’t use at all at work, you are hired”. I’d take a bad solution with a good design over a good solution with a bad design any day, because the former is always easier to refactor and iterate on.
AI is not really good at that yet; it’s trained on a lot of public data that skews towards worse designs. It’s also not all that great at behaving like a human during code reviews; it agrees too much, is overly verbose, it hallucinates, etc.
I think if it was socially acceptable they'd just do the latter.
Now someone comes along and builds a machine that can do A. It turns out that while for humans, A was a good indicator of X, Y and Z, for the machine it is not. A is easy for the machine, but X, Y and Z are still difficult.
This isn't a sign that the company was wrong to ask A, nor is it a sign that they could just hire the machine.
For a while I’ve been skeptical that the rate of hiring of engineers would change significantly because of LLMs, but I’m starting to feel like maybe I’m wrong and it’s already changing and companies are looking toward AI to lower costs and require fewer humans. In that case they are probably still going to want people who are technically exceptional - maybe even more so - but are able and willing to create, integrate, and babysit AI generated code, and also do PM and EM style feature management.
If companies are slowing hiring due to AI, I would expect interviews to get worse before they get better.
So a Product Manager?
Maybe now, or maybe in a year or two, AI coding tools will be good enough that a single semi-technical person can be Product Manager for a small product, and implement all the feature through AI/LLM tools.
Probably not for something of the complexity of Google Maps, but for a simpler website with some interactive elements, that could work.
But then, this was just an example. There can be lots of reasons that companies still need engineers, my point was that they need to think about these reasons, and then use these reasons to decide how to select their engineers.
You can't just take requirements and churn out code without a critical eye at what you're doing.
Works really well, and it mimics what we find is the most important bit about coding.
I don't mind if they use AI to shortcut the boring stuff in the day-to-day, as long as they can think critically about the result.
You can tell who's trying to use AI live. They're clearly reading, and they don't understand the content of their answers, and they never say "I don't know." So if you ask a followup or even "are you sure" they start to panic. It's really obvious.
Maybe this is only a real problem for the teams that offload their interviewing skills onto some leetcode nonsense...
If someone gave me code with

    if (x = 7) { ... }

as part of a C eval (an assignment where a comparison was obviously intended, so it clobbers x and the branch always runs)? Yeah, you'll get a sarcastic response back, because I know it is testing code.
What I think people ignore is that personality matters. Especially at the higher levels. If you are a Principal SWE you have to be able to stand up to a CEO and say "No, sir. I think you are wrong. This is why." In a diplomatic way. Or sometimes less than diplomatically, depending on the CEO.
One manager that hired me was trying to figure me out. So he said (and I think he was honest at the time): "You got the job as long as you aren't an axe murderer."
To which I replied deadpan: "I hope I hid the axe well." (To be clear to all reading, I have never killed someone, nevermind with an axe! Hi FBI, NSA, CIA and pals!)
Got the job, and we got along great, I operated as his right hand.
How do you expect them to get access to the proprietary internal Git codebase, and approval from their employer's lawyers to show it to third parties during the interview?
Sounds like you're only selecting FOSS devs and nothing more.
If that's the case however, just let them make a small project over the weekend and then do another interview where you ask stuff about what they've made. It's not that deep
Our "gotcha," which doesn't apply to most languages anymore is, "What's the difference between a function and a procedure." It's a one sentence answer, but people who didn't know it would give some pretty enlightening answers.
Edit: From the replies I can see people are a little defensive about not knowing it. Not knowing it is ok, because it was a question I asked people 20 years ago, relevant to a language long dead in the US. I blame the defensiveness on how FUBAR the current landscape is. Giving a nuanced answer to show your depth of knowledge is actually preferred. A one-sentence answer is the minimum.
I'm editing this because HN says I'm posting too fast, which is super annoying, but what can I do?
The problem is: there is a very negative incentive to give honest answers. If I were to answer these questions honestly, I'd bring up some very interesting theorems (related to some deep algorithmic topics) that I proved in my PhD thesis. Yes, I would have loved to stay in academia, but I switched to industry because of the bad job prospects in academia - this is not what interviewers want to hear. :-(
> "What's the difference between a function and a procedure." It's a one sentence answer
The terminology here differs quite a lot in different "programming communities". For example
> https://en.wikipedia.org/w/index.php?title=Procedure&oldid=1...
says: "Procedure (computer science), also termed a subroutine, function, or subprogram",
i.e. there is no difference. On the other hand, Pascal programmers strongly distinguish between functions and procedures; here functions return a value, but procedures don't. Programmers who are more attracted to type theory (think Haskell) would rather consider "procedures" to be functions returning a unit type. If you rather come from a database programming background, (stored) procedures vs functions are quite different concepts.
I could go on and on. What I want to point out is that this topic is much more subtle than a "one sentence answer".
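To make the Pascal-style distinction concrete in a language most readers here share, a small sketch (Python has only functions, so a "procedure" is approximated by one that returns None and exists for its side effects):

    import math

    def area(r: float) -> float:
        # Function in the Pascal sense: called for the value it returns.
        return math.pi * r * r

    def print_area(r: float) -> None:
        # "Procedure": called for its effect; the return value is just None.
        print(f"area = {area(r):.2f}")

    print_area(2.0)  # area = 12.57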
My answer would be along the lines of "It's 2025, no one has talked about procedures for 20+ years"
You... might want to think about what implicit biases you might be bringing here
Deleted Comment
I have 26 years of solid experience, been writing code since I was 8.
There should be a ton of companies out there just dying to hire someone with that kind of experience.
But I'm not perfect, no one is; and faking doesn't work very well for me.
heh.. they are probably dead already?
I have even more years.. but this time I have been looking since.. September? Applying 1-2 per day, on average.. widening the fishing net each month.. ~2% showed some interest.. but no bingo.
"overqualified" is about half of the "excuses" :/
Time to plant tomatoes maybe..
Not that I mind growing tomatoes, quite the opposite :)
However, I won't do leet coding. I want to hear about why I should come work for you. What about my work makes you think I could help you with your problem? Then let's have a talk about your problems and where I can create value for you.
My experience in hiring is that leet coders are good one-trick ponies, but long term they don't become technical peers.
Take maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
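For anyone who hasn't met it, Kadane's algorithm really is only a few lines once you know it, which is rather the point; a minimal sketch:

    def max_subarray(nums: list[int]) -> int:
        # Kadane's algorithm: O(n) maximum subarray sum.
        best = cur = nums[0]
        for x in nums[1:]:
            # Either extend the running subarray or start fresh at x.
            cur = max(x, cur + x)
            best = max(best, cur)
        return best

    assert max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6  # 4 + -1 + 2 + 1

Trivial to type once you've seen it, and very hard to derive cold in 30 minutes while narrating your thought process.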
I often see comments like: this person had this huge storied resume but couldn't code their way out of a paper bag. Now having been that engineer stuck in a paper bag a few times, I think this is a very narrow way to view others.
I don't know the optimal way to interview engineers. I do know the style of interview that I prefer and excel at[0], but I wouldn't be so naive as to think that the style that works for me would work for all. Often I chuckle about an anecdote from the fabled I.P. Sharp: Ian Sharp would set a light meter on his desk and measure how wide an interviewee's eyes would get when he explained APL to them. A strange way to interview, but is it any less strange than interviewing people via leetcode problems?
0: I think my ideal tech screen interview question is one that 1) has test cases; 2) ramps the test cases up gradually in complexity; 3) doesn't reveal the complexity all at once - the interviewer "hides their cards," so to speak; 4) is focused on a data structure rather than an algorithm, such that the algorithm falls out naturally rather than serving as the focus; 5) gives the candidate the opportunity to weigh tradeoffs, make compromises, and cut corners given the time frame; and 6) doesn't combine big ideas (i.e. you shouldn't have to parse complex input and do something complicated with it) - pick a single focus. Interviews I have participated in and enjoyed like this: construct a Set class (union, difference, etc.); implement an RPN calculator (ramp up the complexity by introducing multiple arities); create a range function that works like the Python range function (for junior engineers, this one involves a function with different behavior based on arity).
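As one concrete version of the RPN example, a minimal sketch where the "multiple arities" ramp is modeled as a per-operator arity (the names and operator set are my own, not the interviewer's):

    import operator

    # Each token maps to (arity, function); adding new arities is the ramp-up.
    OPS = {
        "+": (2, operator.add),
        "-": (2, operator.sub),
        "*": (2, operator.mul),
        "/": (2, operator.truediv),
        "neg": (1, operator.neg),
    }

    def rpn(tokens: list[str]) -> float:
        stack: list[float] = []
        for tok in tokens:
            if tok in OPS:
                arity, fn = OPS[tok]
                args = [stack.pop() for _ in range(arity)][::-1]
                stack.append(fn(*args))
            else:
                stack.append(float(tok))
        (result,) = stack  # a well-formed expression leaves exactly one value
        return result

    assert rpn("3 4 + 2 *".split()) == 14.0
    assert rpn("5 neg".split()) == -5.0

The nice property is that each "card" the interviewer reveals (a new operator, a new arity, error handling for short stacks) extends the same small core instead of replacing it.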
This is something that drives me nuts in academia when it comes to exam questions. I once took an exam that asked us to invent vector clocks from whole cloth, basically, having only knowledge of a basic Lamport clock for context. I think one person got it--and that person had just learned about vector clocks in a different class. Given some time, it's possible I could have figured it out. But on an exam, you've got like 10-15 minutes per question.
The funny thing about it is that I do the same damn thing from the other side all the time when working with students. It's incredibly tempting once you know the solution to a problem (especially if you didn't "solve" it yourself, but had the solution presented to you already) to present the question as though it has an obvious solution and expect somebody else to immediately solve it.
I'm aware of the effect, I've experienced it many times, and I still catch myself doing it. I've never interviewed a candidate for a job, but I can only imagine how tempting it would be to fall into that trap.
When I'm interviewing a candidate, I'm often asking myself whether a question is just something I happen to know and therefore expect the candidate to know too, or whether it's crucial to doing the job.
Sometimes it may not be fair to expect a random developer to be familiar with a specific concept. But at the same time it might be critical to the kind of work we're doing.