Have you gone back to in-person whiteboards? More focus on practical problems? I really have no idea how the traditional tech interview is supposed to work now when problems are trivially solvable by GPT.
The last time I used a leetcode-style interview was in 2012, and it resulted in a bad hire (who just happened to have trained on the questions we used). I've hired something like 150 developers so far; here's what I ended up with after a few years of trial and error:
1. Use recruiters and your network: Wading through the sheer volume of applications was nasty even before COVID; I don't even want to imagine what it's like now. A good recruiter or a recommendation can save a lot of time.
2. Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.
3. Put the candidate at ease - nervous people don't interview well, which is another problem with non-trivial tasks in technical interviews. I rarely do any live coding; when I do, it's pairing, and mostly for management roles, e.g. to probe how they handle disagreement. Developers mostly shine when not under pressure, and I try to see that side of them.
4. Talk through past and current challenges, technical and otherwise. This is by far the most powerful part of the interview IMHO. Had a bad manager? Cool, what did you do about it? I'm not looking for them to have resolved whatever issue we talk about; I'm trying to understand who they are and how they'd fit into the team.
I've been using this process for almost a decade now, and currently don't think I need to change anything about it with respect to LLMs.
I kinda wish it were more merit-based, but I haven't found a way to do that well yet. Maybe it's me, or maybe it's just not feasible. The work I tend to be involved in seems way too multifaceted for a single standard test to reliably predict how well a candidate will do on the job. My workaround is to rely on intuition for the most part.
When I was interviewing candidates at IBM, I came up with a process I was really happy with. It started with a coding challenge involving several public APIs, in fact the same coding challenge that was given to me when I interviewed there.
What I added:
1. Instead of asking "do you have any questions for me?" at the very end, we started with that general discussion.
2. A few days ahead, I emailed the candidate the problem and said they were welcome to do it as a take-home problem, or we could work on it together. I let them know that if they did it ahead of time, we would do a code review and I would ask them about their design and coding choices. Or if they wanted to work on it together, they should consider it a pair programming session where I would be their colleague and advisor. Not some adversarial thing!
3. This is the innovation I am proud of: a segment at the beginning of the interview called "teach me something". In my email I asked the candidate to think of something they would like to teach me about. I encouraged them to pick a topic unrelated to our work, or it could be programming related if they preferred that. Candidates taught me things like:
• How to choose colors of paint to mix that will get the shade you want.
• How someone who is bilingual thinks about different topics in their different languages.
• How the paper bill handler in an ATM works.
• How to cook pork belly in an air fryer without the skin flying off (the trick is punching holes in the skin with toothpicks).
I listed these in more recent emails as examples of fun topics to teach me about. And I mentioned that if I were asked this question, I might talk about how to tune a harmonica, and why a harmonica player would want to do that.
This was fun for me and the candidate. It helped put them at ease by letting them shine as the expert in some area they had a special interest in.
"Teach me something" is how the test prep companies would interview instructors. Same idea—have a candidate explain a topic they are totally comfortable with—but they were more focused how engaging the lesson was, how the candidate handled questions they didn't expect, etc. moreso than I expect you would if you use this in a IC coding interview. It's a neat idea though, I can imagine lots of different ways it would be handy.
Kudos to you - this sounds like a fantastic interview format.
I especially like that you're prepping them very clearly in advance, giving them every opportunity to succeed, and clearly setting a tone of collaboration.
In-person design and coding challenges are a pressure cooker, and not real-world. Giving people the choice, however, seems like a great way to strike the balance.
Honestly, I'm really just commenting here so that this shows up in my history, so I can borrow heavily from this format next time I need to interview! :) Thanks again for sharing.
That sounds really cool. I wish I ran into more job interviews like the one you describe. The adversarial interviewing really hurts the entire feel of the process.
I immediately want to learn about all these cool things you listed.
I work as a developer and as an interviewer (both freelance). Now I want to integrate your point 3. into my interviews, but not to choose better candidates, just to learn new stuff I never thought about before.
It's your fault that I now see this risk coming at me in my professional life: I could get addicted to "teach me something".
'Hey candidate, we have 90 minutes. Just forget about that programming nonsense and teach me cool stuff'
I spent a decade and change doing adult instruction in CAD. Early on, many were still transitioning off 2D hand drawings. Boy, was that an art! I got to do a few the very old school way and have serious respect for the people who can produce a complex assembly drawing that way. Same for many shapes that may feature compound curves, for example.
But I digress!
Asking them what they wanted or would teach me was very illuminating and frankly, fun all around! I was brought a variety of subjects and not a single one was dull!
Seems I had experiences similar to yours.
One of my questions was influenced by someone I respected highly, who asked, "what books are on your shelf at home?"
This one almost always brought out something about the candidate I would have had no clue about otherwise. As time went on, it became more about titles, because fewer people maintain a physical bookshelf!
I love this. I hope you don't mind, but I'm going to steal it for when I am back in the saddle interviewing folks. We need much more of this and a lot less of the adversarial BS.
(3) is biasing the process strongly in favour of people who spin a good story. If you're looking for a certain team culture then OK but this is going to neatly screen out anyone who just wants to do their job well and doesn't particularly know how to sell the extra-curriculars they have.
This post is some unintentional satire of how IBM operates (nobody invents anything at IBM anymore).
The opening question is a copy of what was done before (probably by someone who doesn't work at IBM anymore), and all the new stuff is stolen from outsiders.
> In my email I encouraged the candidate to teach me something unrelated to our work, or programming related if they preferred that.
What if I decide to teach you something about the Quran and you don't hire me?
Perhaps this is just urban legend, but from the stories I've heard about hiring at FAANG-type companies, there are people out there whose only goal in interviewing is to bait you into a topic they can sue you over.
The worst instance I have heard of is when an interviewer asked about the books the candidate liked to read (since the candidate had said in their resume that they're an avid reader) and he just said that he liked to read the Bible. After not getting hired for failing the interviews, he accused the company of religious discrimination.
I'm by no means an expert in US law, and I don't know the people this happened to directly, so maybe it's just one big fantasy story, but it doesn't seem that far-fetched that:
- If you are a rich corporation then people will look to bait you into a lawsuit
- If you give software engineers (or any non-lawyers) free rein on how to conduct interviews, then some of them can be baited into a situation the company could be sued over far more easily than with a rigid leetcode interview
I think nothing came of the fellow who liked reading the Bible, but I would imagine the legal department has a say in what kind of interview process a rich company can and cannot use.
As someone with a pretty long career already, and who's comfortable talking about it, I was a bit surprised that in three interviews last year nobody asked a single thing about any of my previous work. One was live coding and tech trivia, the other two were extensive take-home challenges.
To their credit, I think they would have hired "the old guy" if I'd aced their take-homes, but I was a bit rusty and not super thrilled about their problem areas anyway so we didn't get that far. And honestly it seems like a decent system for hiring well-paid cogs in your well-oiled machine for the short term.
Your method sounds like what we were trying to do ten years ago, and it worked pretty well until our pipeline dried up. I wish you, and your candidates, continued success with it: a little humanity goes a long way these days.
So, did you find a company that you are happy with (interviewing or otherwise)? I would be really interested to know how you are dealing with tech landscape changes lately, and your plans for staying in tech ...
>>Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.
The biggest problem with take-home tests isn't the people who drop out because they can't finish the assignment, but that the people who do finish now expect to get hired.
95% of people don't finish the assignment; 5% do. Teams think that submitting the assignment with the full feature set, unit tests, an onsite code review, and onsite implementation of an additional feature still shouldn't mean a hire (if nothing else, there just aren't enough positions to fill). From the candidate's perspective, it's pointless to spend a week doing everything the hiring team asked for and still receive a 'no'.
I think if you are not ready to pay 2x-5x market comp you shouldn't use take-home assignments to hire people. It's too much work to put in only to receive a rejection at the end. Upping the comp absolutely makes sense, since working for a week for a shot at earning 3x more is a reasonable trade.
Most of the time, those take-home tests cannot be done in 2 hours. I remember one where I wasn't even done with the basic setup after 2 hours of installing various software/libraries and debugging issues with them.
We did a lot of these assignments, and no one assumed they would be hired just for completing one. It's about how you communicate your intent. I always told candidates that the goal of the task is 1. to see some code and check that some really basic stuff is on point, and 2. to see that they can argue with someone about their code.
How does that process handle people who have been out of work for a few years and can pass a take-home technical challenge (without a LLM) but cannot remember a convincing level of detail on the specifics of their past work? I’ve been experiencing your style of interview a lot and running up against this obstacle, even though I genuinely did the work I’m claiming to have done.
Especially people with ADHD don’t remember details as long as others, even though ADHD does not make someone a bad hire in this industry (and many successful tech workers have it).
I do prefer take-home challenges to live coding interviews right now, or at least a not-rushed live coding interview with some approximate advance warning of what to expect. That gives me time to refresh my rust (no programming language pun intended) or ramp up on whichever technologies are necessary for the challenge, and then to complete the challenge well even if taking more time than someone who is currently fresh with the perfectly matched skills might need. I want the ability to show what I can do on the job after onboarding, not what I can do while struggling with long-term unemployment, immigration, financial, and family health concerns. (My fault for marrying a foreigner and trying to legally bring her into the US, apparently.)
And, no, my life circumstances do not make it easy for me to follow the common suggestion of ramping up in my “spare time” without the pressures of a job or a specific interview task. That’s completely different from what I can do on the job or for a specific interview’s technical challenge.
This is slightly tangential to your questions, but to address the "remembering details about your past work" part, I've long encouraged the developers I mentor to keep a log/doc/diary of their work.
If nothing else, it's a useful tool when doing reviews with your manager, or when you're trying to evaluate your personal growth.
It comes in really handy when you're interviewing or looking for a new job.
It doesn't have to be super detailed either. I tell people to write 50% about what the work was, and 50% about what the purpose/value of the work was. That tends to be a good mix of details and context.
Writing in it once a month, or once a sprint, is enough. Even if it's just a paragraph or two.
I can't say I've interviewed someone to whom this applies - unfortunately! It probably just doesn't get surfaced by my channels.
I would definitely not expect someone out of work for a while to have any meaningful side projects. I mean, if they do, that's cool, but I bet the sheer stress of not having a job kills a lot of creativity and productivity. Haven't been there, so I can only imagine.
For such a candidate, I'd probably want to offer them a time limited freelance gig rather quickly, so I can just see what they work like. For people who are already in a job, it's more important to ensure fit before they quit there, but your scenario opens the door to not putting that much pressure on the interview process.
Just a suggestion, but I have done 3-5 projects a year for a long, long time, and as an executive (for almost a decade) had dozens of projects annually I was overseeing and/or contributing to. I am not a fan of LinkedIn, but I did eventually start logging at least some of the projects I do in their "projects" section. That helps me remember and revisit some of the projects, and when you go back 10 years later it's sort of a joyful walk down memory lane sometimes.
I feel like take-home tests are meaningless and I always have. Even more so now with LLMs, though 9/10 times you can tell if it's an LLM: people don't normally put trivial comments in the code, such as
> // This line prevents X from happening
I've seen a number of those. The issue here is that you've already wasted a lot of time with a candidate.
So being firmly against take-home tests and even leetcode, I think the only viable option is a face-to-face interview with a mixture of general CS questions (e.g. what is a hashmap, its benefits and drawbacks, what is a readers-writer lock, etc.) and some domain-specific questions: "You have X scenario (insert details here), which causes a race condition; how do you solve it?" A sketch of what I mean by the latter is below.
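For instance, here's a minimal Python sketch of that kind of race-condition question (the counter scenario is made up for illustration, not a question I actually use):

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment():
        global counter
        for _ in range(100_000):
            counter += 1  # read-modify-write: threads can interleave here

    def safe_increment():
        global counter
        for _ in range(100_000):
            with lock:  # serialises the read-modify-write, fixing the race
                counter += 1

    def run(worker):
        global counter
        counter = 0
        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print(run(unsafe_increment))  # often below 400000: updates get lost
    print(run(safe_increment))    # always 400000

A candidate who can explain why counter += 1 isn't atomic, and when they'd reach for a readers-writer lock instead of a plain mutex, tells me more than any leetcode puzzle would.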
> I feel like take home tests are meaningless and I always have. Even more so now with LLMs
This has been discussed many times already here. You need to set an "LLM trap" (like an SSH honeypot) by asking the candidate to explain the code they wrote. Also, you can wait until the code review to ask them how they would unit test the code. Most cheaters will fall apart in the first 60 seconds. It is such an obvious tell. And if they used an LLM but can explain the code very well, then they will be a good programmer on your team, where an LLM is simply one more tool in their arsenal.
I am starting to think that we need two types of technical interview questions: Old school (no LLMs allowed) vs new school (LLMs strongly encouraged). Someone under 25 (30?) is probably already making great use of LLMs to teach themselves new things about programming. This reminds me of when young people (late 2000s/early 2010s) began to move away from "O'Reilly-class" (heh, like a naval destroyer class) 500 page printed technical books to reading technical blogs. At first, I was suspicious -- essentially, I was gatekeeping on the blog writers. Over time, I came to appreciate that technical learning was changing. I see the same with LLMs. And don't worry about the shitty programmers who try to skate by only using LLMs. Their true colours will show very quickly.
Can I ask a dumb question? What are some drawbacks of using a hash map? Honestly, I am nearly neck-bearded at this point, and I would be surprised by this question in an interview. Mostly, people ask how do they work (impl details, etc.) and what are some benefits over using linear (non-binary) search in an array.
If you are evaluating how well people code without LLMs you are likely filtering for the wrong people and you are way behind the times.
For most companies, the better strategy would be to explicitly LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish, in the same time. If they accomplish only 1X, that's a bad sign that they haven't learned anything in 3 years about how to work faster with new power tools.
A good analogy of 5 years ago would be forcing candidates to write in assembly instead of whatever higher level language you actually use in your work. Sure, interview for assembly if that's what you use, but 95% of companies don't need to touch assembly language.
To add: I can very well imagine this process isn't suitable for FAANG, so I can understand their university-exam-style approach to a degree. It's easy to armchair-criticise, but I don't know if I could come up with something better at their scale. These days, I'm mostly engaged by startups to help them build an initial team, and I acknowledge that's different from what a lot of other folks hire for.
Why not? Plenty of large organizations hire this way. My first employer is bigger than any FAANG company by head count, and they hired this way. Why is big tech different?
> Put the candidate at ease - nervous people don't interview well
This is great advice. I have great success with it. I give the same 60 second speech at the start of each interview. I tell candidates that I realise that tech interviews are stressful -- "In 202X, the 'tech universe' is infinitely wide and deep. We can always find something that you don't know. If you don't have experience in a topic that we raise, let us know. We will move to a new topic. All, yes, all people that we interviewed had at least one topic where they had no experience, or none recent." Also, it helps to do "interview ramp-up", where you start with some very quick wins to build up confidence with the candidate. It is OK to tell them "I will push a bit harder here" so they know you are not being a jerk... only trying to dig deeper on their knowledge.
Putting candidate at ease is definitely important.
Another reason:
If you're only say one of four interviewers, and you're maybe not the last interviewer, you really want the candidate to come out of your interview feeling like they did well or at least ok enough, so that they don't get tilted for the next interview. Because even if they did really poorly in your interview, maybe it's a fluke and they won't fail the rest of the loop.
Which is then a really difficult skill as an interviewer - how do you make sure someone thinks they did well even if they did very poorly? Ain't easy if there's any technical guts in the interview.
I sure as shit didn't get any good at that until I'd conducted like 100+ interviews, but maybe I'm just a slow learner haha
I’ve done the “at home” test for ML recently for a small AI consulting firm. It's a nice approach and got me to the next round, but the way the company evaluated it was to go through the questions and ask "fundamental ML bingo" questions. I don't think I had a single discussion about the company in the entire interview process. I was told up front "we probably won't get to the third question because it will take time to discuss theory for the first two".
If you're a company that does this, please dogfood your problems and make sure the interview makes the effort feel valued. It also smells weird if you claim it's representative of a typical engineering discussion. We all know that consultancy is wrangling data, bad data, and really bad data. If you're arguing over which optimiser to choose, I'd say there are better ways to waste your customer's money.
On the other hand, I like leetcode interviews. They're a nice equalizer, and I do think getting good at them improves your coding skill. The point is to not ask ludicrously obscure hard problems that need tricks. I like the screen share + remote IDE. We used Code, which was nice; it even had tests integrated, so there wasn't the whiteboard pressure to get everything right in your head. You also know instantly if your solution works, which is a nice confidence boost if you get it on the first try, plus you can see how candidates would actually debug, etc.
Wow, that was a great write-up. Can I interview with you? Lol, everything you wrote was really spot on with my own interview experiences. I tend to get super nervous during interviews and have choked up in many that asked for live coding on crazy algorithm problems. The state of hiring seems to be really bad right now. But I'll take your advice and try to get in contact with some recruiters.
I would never show an interviewer how I code, let alone allow them to give me 2 hours to solve a problem. It's contradictory that you know developers "mostly shine when not under pressure", yet you set 2 hours to solve a problem (sounds like you don't want to pay them for the take-home). The positive feedback you're getting for your flawed process just shows how much worse other, more common, recruiters are.
> 2. Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.
You need to do this to establish some baseline competency... for junior hires with no track record?
Recruiters have been notoriously bad in my experience. Relying on network has the potential to create bias and avoid good candidates that simply don't have an "in".
I've let people use GPT in coding interviews, provided that they show me how they use it. In the end, I'm interested in knowing how a person solves a problem and thinks about it. Do they just accept whatever crap the GPT gives them, or can they take a critical approach to it, etc.
So far, everyone that elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI. So far my opinion is that if you have a good interview process, you can clearly see who the good candidates are, with or without AI.
Earlier this past week I asked Copilot to generate some Golang tests and it used some obscure assertion library that had a few hundred stars on GitHub. I had to explicitly ask it to generate idiomatic tests and even then it still didn't test all of the parameters that it should have.
At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.
LLMs are still not ready for prime-time. They churn out code like an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday.
I feel like I am taking crazy pills that other devs don't feel this way. How bad are the coders who think these AIs are giving them superpowers? The PRs with AI code are so obvious, and when you ask the devs why, they don't even know. They just say, well, the AI picked this, as if that means something in and of itself.
I don't follow this take. ChatGPT outputted a bug subtle enough to be overlooked by you and your colleague and your test suite, and that means it's not ready for prime time?
The day when generative AI might hope to completely handle a coding task isn't here yet - it doesn't know your full requirements, it can't run your integration tests, etc. For now it's a tool, like a linter or a debugger - useful sometimes and not useful other times, but the responsibility to keep bugs out of prod still rests with the coder, not the tools.
> an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday
Is that the LLM's fault, or SQLAlchemy's for having that API in the first place? Or was that a gap in your testing strategy? If I'm reading it right, flush() only emits the pending SQL inside the still-open transaction as an intermediate step; nothing is committed (and commit() calls flush() under the hood). The sketch below illustrates the difference.
I think we're in a period similar to self-driving cars, where the LLMs are pretty good, but not perfect; it's those last few percent that break it.
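To make the flush()/commit() distinction concrete, here's a minimal sketch (assuming SQLAlchemy 2.0-style declarative mapping and a throwaway in-memory SQLite database):

    from sqlalchemy import Integer, String, create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        name: Mapped[str] = mapped_column(String(50))

    engine = create_engine("sqlite://")  # throwaway in-memory database
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(name="alice"))
        session.flush()  # emits the INSERT inside the open transaction,
                         # so the generated primary key is now available
        assert session.get(User, 1).name == "alice"
        # ...but the session closes without commit(), so it rolls back

    with Session(engine) as session:
        print(session.query(User).count())  # 0: the row was never committed

That one-word difference is easy to skim past in review, which is presumably exactly how it reached preprod.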
> At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.
I've had ChatGPT do the same thing with code involving SQLAlchemy.
You can't tell us that LLMs aren't ready for prime time in 2025 after you tried Copilot twice last year.
New better models are coming out almost daily now and it's almost common knowledge that Copilot was and is one of the worst. Especially right now, it doesn't even come close to what better models have to offer.
Also the way to use them is to ask for small chunks of code or questions about code after you gave them tons of context (like in Claude projects for example).
"Not ready for prime time" is also just factually incorrect. It is already being used A LOT. To the point that there are rumors that Cursor is buying so much compute from Anthropic that they are making their product unstable, because nvidia can't supply them hardware fast enough.
I imagine most of the things that make AI genuinely useful for seniors aren't great uses in a coding interview anyway.
"Oh, I don't remember how to do parameterized testing in junit, okay, I'll just copy-paste like crazy, or make a big for-loop in this single test case"
"Oh, I don't remember the API call for this one thing, okay, I'll just chat with the interviewer, maybe they remember - or I'll just say 'this function does this' and the interviewer and I will just agree that it does that".
Things more complicated than that that need exact answers shouldn't exist in an interview.
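For what it's worth, that parameterized-testing pattern is one lookup away. A minimal sketch, shown in pytest rather than JUnit since the idea is the same: one test body over a table of input/expected pairs, instead of copy-paste or a for-loop inside a single case.

    import pytest

    # Each tuple becomes its own test case with its own pass/fail report.
    @pytest.mark.parametrize("value, expected", [
        (0, 0),
        (2, 4),
        (-3, 9),
    ])
    def test_square(value, expected):
        assert value ** 2 == expected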
> Things more complicated than that that need exact answers shouldn't exist in an interview.
Agreed, testing for arcane knowledge is pointless in a world where information lookup is instant, and we now have AI librarians at our fingertips.
Critical thinking, capacity to ingest and process new information, fast logic processing, software fundamentals and ability to communicate are attributes I would test for.
An exception though is proving their claimed experience; you can usually tease that out with specifics about the tools.
We do the same thing. It's perfectly fine for candidates to use AI-assistive tooling provided that they can edit/maintain the code and not just sit in a prompt the whole time. The heavier a candidate relies on LLMs, the worse they often do. It really comes down to discipline.
To me it's the lack of skill. If the LLM spits out junk you should be able to tell. ChatGPT-based interviews could work just as well to determine the ability to understand, review and fix code effectively.
This has been my experience as well. The ones that have most heavily relied on GPT not only didn't really know what to ask, but couldn't reason about the outputs at all since it was frequently new information to them. Good candidates use it like a search engine - filling known gaps.
Yea I agree. I don't rely on the AI to generate code for me, I just use it as a glorified search engine. Sure I do some copypasta from time to time, but it almost always needs modification to work correctly... Man does AI get stuff wrong sometimes lol
I can't really imagine it being useful in a way where it writes the logical part of the code for you. Unless you're being lazy, you still need to think through all the edge cases when it generates the code, which seems harder to me.
I like that you’re openminded to allow candidates to be who they are and judge them for the outcome rather than using a prescribed rigid method to evaluate them. Im not looking to interview right now but I’d feel very comfortable interviewing with someone like you, I’d very likely give out my best in such an interview. Id probably choose not to use an LLM during the interview unless I wanted to show how I brainstormed a solution.
Same thing here. The interview is basically representative of what we do, but it also depends on the level of seniority. I ask people just to share their screen with me and use whatever they want / feel comfortable with. Google, ChatGPT, call your mom, I don't care, as long as you walk me through how you're approaching the thing at hand. We've all googled tar xvcxfgzxfzcsadc, what's that permission for .pem, is it 400, etc. No shame in anything, and we all use all of these things throughout the day. Let's simulate a small task at hand and see where we end up. Similarly, I've seen a bias where people leaning more on LLMs do worse than those just googling or, gasp, opening the documentation.
Yes, current Google search is somehow worse than it was around COVID or before. Using ChatGPT as a search engine can save time sometimes, and if you're somewhat knowledgeable, you can pinpoint the key info and cross-check with a Google search.
What does effective use look like? I have attempted messing around with a couple of options, but was always disappointed with the output. How do you properly present a problem to a LLM? Requiring an ongoing conversation feels like a tech priest praying to the machine spirit.
When I was last interviewing people (several years ago now), I’d let them use the internet to help them on anything hands on. I was astounded by how bad some people were at using a search engine. Some people wouldn’t even make an attempt.
My company, a very very large company, is transitioning back to only in-person interviews due to the rampant amount of cheating happening during interviews.
As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing, then watch their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.
So far 3 of the 11 people we interviewed have been clearly using ChatGPT for the >>behavioral<< part of the interview (like, just chatting about background, answering questions about their experience). I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
We actually allow using AI in our in-person technical interviews, but our questions are worded to fail safety checks. We'll talk about smuggling nuclear weapons, violent uprising, staging a coup, manufacturing fentanyl, etc. (within the context of system design) and that gives us really good mileage on weeding out those who are just transcribing what we say into AI and reading the response.
> I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
I'm genuinely curious what questions you ask during the behavioral interview. Most companies ask questions like "recall a time when...", and I know people who struggle with these kinds of questions despite being good teammates, either because they find it difficult to explain the situation, or due to stress. And the recruitment process is not a "basic conversation" — as a recruiter you're in a far more comfortable position. I find it hard to believe anyone would use an LLM if you ask them a question like "what were your responsibilities in your last role", but I do see how they might've primed the chat to help them communicate an answer to a question like "tell me about a situation when you had a conflict with your manager".
I think you (your company) and many other commenters here are just trying too hard.
I just recently led several interview rounds for a software engineering role, and we have not had any issues with LLM use. What we do for the technical interview part is very simple: a live whiteboarding design task where we try to identify what the candidate's focus is, and we might pivot at any time or dig deeper into particular topics. Sometimes we will even go as detailed as talking about the particular algorithms the candidate would use.
In general, I found that this type of interview is the most fun for both sides. The candidates don't feel pressure that they must do the only right thing as there is a lot of room for improvisation; the interviewers don't get bored with repetitive interviews over and over as new candidates come by with different perspectives. Also, there is no room for LLM use because the candidate has to be involved in drawing on the whiteboard and showing their technical presentation skills, which are very important for developers.
> if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
I wouldn’t be surprised if the effect of Google Docs and Gmail forcing full AI, is a generation of people who can’t even talk about themselves, and can’t articulate even a single email.
Is it necessary? Perhaps. Will it make the world boring? Yes.
What actually happens to the interviewee? Do they suddenly go blank when they realise the LLM has replied "I'm sorry, I cannot assist you with this", or do they try to make something up?
So depressing to hear that “because of rampant cheating”
As a person looking for a job, I’m really not sure what to do. If people are lying on their resumes and cheating in interviews, it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
Here's the thing: 95% of cheaters still suck, even when cheating. It's hard to imagine how people can perform so badly while cheating, yet they consistently do. All you need to do to stand out is not be utterly awful. Worrying about what other people are doing is more detrimental to your performance than anything else is. Just focus on yourself: being broadly competent, knowing your niche well, and being good at communicating how you learn when you hit the edges of your knowledge. Those are the skills that always stand out.
I don't know, I kind of feel like leetcode interviews are a situation where the employer is cheating. I mean, you're admittedly filtering out a great number of acceptable candidates knowing that if you just find 1 in a 1000, that'll be good enough. It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.
In my opinion, if a prospective employee is able to successfully use AI to trick me into hiring them, then that is a hell of a lot closer to the actual work they'll be hired to do (compared to leetcode).
I say, if you can cheat at an interview with AI, do it.
> it feels like there’s nothing I can do except do the same.
Why does it feel like that when you’re replying to someone who already points out that it doesn’t work? Cheating can prevent you from getting a job, and it can get you fired from the job too. It can also impede your ability to learn and level up your own skills. I’m glad you haven’t done it yet, just know that you can be a better candidate and increase your chances by not cheating.
Using an LLM isn’t cheating if the interviewer allows it. Whether they allow it or not, there’s still no substitute for putting in the work. Interviews are a skill that can (and should) be practiced. Candidates are rarely hired for technical skill alone. Attitude, communication, curiosity, and lots of other soft skills are severely underestimated by so many job seekers, especially those coming right out of school. A small amount of strengthening your non-code abilities can improve your odds much faster than leetcode ever will. And if you have time, why not do both?
Note also "And the majority of the time the answer is incorrect anyway."
I haven't looked for development-related jobs this millennium, but it's unclear to me how effective a crutch AI is for interviews, at least for well-designed and well-run ones. Maybe in some narrow domains for junior people.
As a few of us have written elsewhere, I consider not having in-person interviews past an initial screen sheer laziness and companies generally deserve whoever they end up with.
> it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
Never buy into this mentality. Because once you do, it never goes away. After the interview, your coworkers might cheat, so you cheat too. Then your business competitors might cheat, so you cheat too. And on and on.
Sounds cheesy, but keep being honest. Eventually companies will realize (as we did years ago) that automating recruiting gets you automated candidates.
But YMMV. I have 9 years of experience and can still get interviews the old-fashioned way.
When I was interviewing entry level programmers at my last job, we gave them an assignment that should only take a few hours, but we basically didn't care about the code at all.
Instead, we were looking to see if they followed instructions, and if they left anything out.
I never had a chance to test it out, since we hadn't hired anyone new in so long, but ChatGPT/etc would almost always fail this exam because of how bad it is at making sure everything was included.
And bad programmers also failed it. It always left us with a few candidates that paid attention, and from there we figure if they can do that, they can learn the rest. It seemed to work quite well.
I was recently laid off from that company, and now I'm realizing that I really want to see what current-day candidates would turn in. Oh well.
For those tests I never follow the rules; I just make something quick and dirty because I refuse to spend unpaid hours. In the interview, the first question is why I didn't follow the instructions, and they think my reason is fair.
Companies seem to think that we program just for fun and ask us to make a full-blown app... also underestimating the time candidates actually spend making it.
The industry (all industries really) might want to reconsider online applications, or at least privilege in-person resume drop-offs because the escalating ai application/evaluation war that's happening doesn't seem to be helping anyone.
That's fine. The ones who are "good cheaters" are probably smarter than many honest people. Think back to school: your smartest peers were often cheating anyway, despite having organically taught you the material earlier on. Those kinds of cheaters do it to turn an A into an A+, not because they don't understand the material.
Interviews aren’t about solving problems. The interviewer isn’t interested in a problem’s solution, they’re interested in seeing how you get to the answer. They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning. They already know how to use AI, they don’t need you for that. They want to know that you’ll contribute to the team. Wanting to use AI probably sends the wrong message, and is more likely to get you left out of the next round of interviews than it is to get you called back.
Imagine you need to hire some people, and think about what you’d want. That’ll answer your question. Do you want people who don’t know but think AI will solve the problems, or do you want people who are capable of thinking through it and coming up with new solutions, or of knowing when and why the AI answer won’t work?
If you've been given the problem of "without using AI, answer this question", and you use an AI, you haven't solved the problem.
The ultimate question that an interview is trying to answer is not "can this person solve this equation I gave them?", it's usually something along the lines of "does this person exhibit characteristics of a trustworthy and effective employee?". Using AI when you've been asked not to is an automatic failure of trust.
This isn't new or unique to AI, either. Before AI people would sometimes try to look up answers on Google. People will write research papers by looking up information on Wikipedia. And none of those things are wrong, as long as they're done honestly and up front.
If you are pretending to have knowledge and skills you don't have, you are cheating. And if you have the required knowledge and skill, AI is a hindrance, not a help; you can solve the problem easily without it. So is using AI cheating? IDK, but logically you wouldn't use AI unless you were cheating.
For the goal of the interview - showing your knowledge and skills - you are failing miserably. People know what LLMs can do, the interview is about you.
Some can be quite good at the cheating: At least good enough to get through multiple layers. I've been in hiring meetings where I was the only one of 4 rounds that caught the cheating, and they were even cheating in behaviorals. I've also been in situations with a second interviewer, where the other interviewer was completely oblivious even when it was clear I was basically toying with the guy reading from the AI, leading the conversation in unnatural ways.
Detection of AI in remote interviews, behavioral and technical, just has to be taught today if you are ever interviewing people that don't come from in-network recommendations. Completely fake candidates are way too common.
I'm at the same company I think. I don't get why we can't just use some software that monitors clicking away or tabbing away from the window, and just tell candidates explicitly that we are monitoring them, and looking away or tabbing away will appear suspect.
I haven’t been doing that much interviewing, but in the dozen or so candidates I’ve had I don’t think a single one has tried to use AI. I almost wish they would, as then at least I’d get past the first half of the question…
I'm using AI for interview screeners for nontechnical roles that require knowledge work. The AI interviewing app is very, very basic; it's just a wrapper put together by an eng, with enough features to prevent cheating.
Start with recording the session and blocking right-click, and you are halfway there. It's not hard.
The AI app has helped me surface top candidates. I don't even look at resumes anymore. There's no point. I interview the top 10 out of 200, and then do my references and select.
I mean, they could be googling things; I've definitely googled stuff during an interview. I do think in-person interviews are important though; I did some remote final interviews with Amazon and they were all terrible.
My startup got acquired last year so I haven't interviewed anyone in a while, but my technical interview has always been:
- share your screen
- download/open the coding challenge
- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare
My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
I recently interviewed for my team and tried this same approach. I thought it made sense because I want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job.
It proved to be awkward and clumsy very quickly. Some candidates resisted it since they clearly thought it would make them judged harsher. Some candidates were on the other extreme and basically tried asking ChatGPT the problem straight up, even though I clarified up front "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."
After just the initial batch of candidates it became clear it was muddying things too much, so I simply forbade using it for the rest of the candidates, and those interviews went much smoother.
Over the years, I've walked from several "live coding" interviews. Arguably though, if you're looking for "social coders" maybe the interview is working as intended?
But for me, it's just not how my brain works. If someone is watching me, I'll be so self-conscious the entire time you'll get a stream of absolute nonsense that makes me look like I learned programming from YouTube last night. So it's not worth the time.
You want some good programming done? I need headphones, loud music, a closed door and a supply of Diet Coke. I'll see you in a few hours.
Did you tell them that you “want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job”? Just curious.
> "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."
No, it's not "obvious" whatsoever. Actually it's obviously confusing: why you are allowing them to use ChatGPT but forbidding them from asking the questions directly? Do you want an employee who is productive at solving problems, or someone who guess your intentions better?
If AI is an issue for you then just ban it. Don't try to make the interview a game of who outsmart who.
I've had a few people chuck the entire problem into ChatGPT, and it was still very much useful in a few ways:
- You get to see how they then review the generated code, do they spot potential edge cases which the AI missed?
- When I ask them to make a change not in the original spec, a lot of them completely shut down because they either didn't understand the code generated well enough, or they themselves didn't really know how to code.
And you still get to see people who _do_ know how to use AI well, which at this point is a must for its overall productivity benefits.
The trick is to phrase the problem in a way that GPT-4 will always give an incorrect answer (due to the vagueness of your problem) and such that multiple rounds of guiding/correcting are needed to solve it.
There's more than one possible AI on the other end, so crafting something that will not annoy a typical candidate, but will lead every AI astray seems pretty difficult.
I did this while hiring last year and the number of candidates who got stuff wrong because they were too proud to just look up the answer was shocking.
> I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
So many people are the opposite that I would literally never tell you.
And this works.
what can we do to help that?
I’ve had interviews where AI use was encouraged as well.
But so many casual tirades against it don't make me want to ever try being forthcoming. Most organizations are realistically going to be 10 years behind the curve on this.
Screen share or in person are what I think the best ways are. They're not great options, just the best available.
I do not want AI. The human is the value add.
I understand that people won't feel super comfortable with this, and I try not to roast the candidate with leetcode. It should be a conversation where I surface technical reality and understanding.
I'm not doing any coding challenges that aren't real-world.
If I see anything remotely challenging, I dip out. Interviewing is just a numbers game nowadays, so I don't waste time on interviews that seem like they're gonna burn me out for the rest of the day. Granted, I have 11 years of experience.
The difficulty of your questions has to change drastically if candidates are using good tooling. Many a problem that would take a reasonable candidate half an hour to figure out is 'free' for Claude, so your question might not show any signal. And if you tweak your questions to be sure they're not auto-solved by a strong enough AI, then you'd better say AI is semi-required, because the difficulty level of the question you need to ask goes up quite a bit.
Some of the questions in our interview loop have been posted on GitHub... which means every AI has trained on them specifically. They are, therefore, useless if you have AI turned on. And if you interview enough people, someone will post your question on GitHub, so it has a pretty short shelf life before it's in the training data and instantly solved.
Part of my resume review process is trying to decide if I can trust the person. If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate.
Once you get to the interview process, it's very clear if someone thinks they can use AI to help with the interview process. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response to my question 30 seconds later.
AI-proof interviewing is easy if you know what you're talking about. Look at the candidates resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
Good interviews are a conversation, a dialog to uncover how the person thinks, how they listen, how they approach problems and discuss. Also a bit detail knowledge, but that's only a minor component in the end. Any interview where AI in its current form helps is not good anyway. Keep in mind that in our industry, the interview goes both ways. If the candidate thinks your process is bad then they are less inclined to join your company because they know that their coworkers will have been chosen by a subpar process.
That said, I'm waiting for an "interview assistant" product. It listens in to the conversation and silently provides concise extra information about the mentioned subjects that can be quickly glanced at without having to enter anything. Or does this already exist?
Such a product could be useful for coding too. Like watching over my shoulder and seeing: aha, you are working with so-and-so library, let me show you some key parts of the API in this window; or, you are trying to do this-and-that, let me give you some hints. Not as intrusive as current assistants that try to write code for you, just some proactive lookup without having to actively seek out information. Anybody know a product for that?
That might be good for newbie developers, but for the rest of us it'll end up being the Clippy of AI assistants. If I want to know more about an API I'm using, I'll Google (or ask ChatGPT) for details; I don't need an assistant trying to be helpful and either treating me like a child, or giving me info that may be right but which I don't need at the moment.
The only way I can see that working is if it spends hundreds of hours watching you to understand what you know and don't know, and even then it'll be a bit of a crap shoot.
This, and tbh this has always been the best way. Someone who has projects, whether personal or professional, and has the capability to discuss those projects in depth and with passion will usually be a better employee than a leet code specialist.
Doesn't even have to be a project per se, if they can discuss some sort of technical topic in depth (i.e. the sort of discussion you might have when discussing potential solutions to a problem) then that's a great sign imo.
My resume has a bunch of personal projects on there as well as work experience and the project experience seems to not help at all. Just rejections after rejections.
Agreed. This is why, while I won't ding an applicant for not having a public GitHub, I'm always happy when they do: usually they'll have some passion projects on there that we can discuss.
I have 23 years of experience and I am almost invisible on GitHub. In all those years I've been fired from 4 contracts due to various disconnects (one culture misfit, two underperformances due to an illness I wasn't aware of at the time, and one because the company literally restructured over the weekend and fired 80% of all engineers), and I have been contracting a lot in the last 10 years (we're talking 17-19 gigs).
If you look solely at my GitHub you'd likely reject me right away.
I wish I had the time and energy for passion projects in programming. I so wish it was so. But commercial work has all but destroyed my passion for programming, though I know it can be rekindled if I can ever afford to take a properly long sabbatical (at least 2 years).
I'll more agree with your parent / sibling comments: take a look at the resume and look for bad signs like too vanilla / AI language, too grandiose claims (though when you are experienced you might come across as such so 50/50), or almost no details, general tone etc.
And the best indicator is a video call conversation, I found as a candidate. I am confident in what I can do (and have done), I am energetic and love to go for the throat of the problems on my first day (provided the onboarding process allows for it) and it shows -- people have told me that and liked it.
If we're talking passion, I am more passionate about taking a walk with my wife and discussing the current book we're reading, or getting to know new people, or going to the sauna, or wondering what's the next meetup we should be going to, stuff like that. But passion + work, I stand apart by being casual and not afraid of any tech problems, and by prioritizing being a good teammate first and foremost (several GitHub-centric items come to mind: meaningful PR comments and no minutiae, good commit messages, proper textual comment updates in the PR when f.ex. requirements change a bit, editing and re-editing a list of tasks in the PR description).
I already do too much programming. Don't hold it against me if I don't live on the computer and thus have no good GitHub open projects. Talk to me. You'll get much better info.
Also because most people are busy with actual work and don't have the time to have passion projects. Some people do, and that's great, but most people are simply not passionate about labor, regardless of what kind of labor it is.
To add to this, lots of senior people in the consulting world are brought in under escalations. They often have to hide the fact that they are an external resource.
Also if you have a novel or disclosure sensitive passion project, GitHub may be avoided even as a very conservative bright line.
As stated above I think it can be good to find common points to enhance the interview process, but make sure to not use it as a filter.
I really hate those who ask for GitHub profiles. Mine is pseudonymous and I don't want to share it with my employer or anyone else I don't want to. Privacy aside, I don't understand why a company would even expect candidates to have done unpaid contributions in the first place. Can't the candidate have other hobbies to enjoy or learn?
> If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate
So you just subjectively say "this resume is too perfect, it must be bullshit"? How the fuck is any actual, qualified engineer supposed to get through your gauntlet of subjectivity?
You'd be surprised at how good you can get at sniffing out slop, especially when it's the type prompted by fools who think it'll get them an easy win. Often the actual content doesn't even factor in - what triggers my mental heuristics is usually meta stuff like tone and structure.
I'm sure some small % of people get away with it by using LLaMA-FooQux-2552-Finetune-v3-Final-v1.5.6 or whatever, but realistically, the majority is going to be obvious to anyone that's been force-fed slop as part of their job.
I am imagining an AI saying my CV is AI-generated, when in reality, I do not even use Auto-correct or Auto-suggest when I (type)write! :-)
> AI-proof interviewing is easy if you know what you're talking about. Look at the candidates resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
Generally, this is how to figure out if a candidate is full of crap or not. When they say they did a thing, ask them questions about that thing.
If they can describe their process, the challenges, and how they solved those challenges, and all of it passes the sniff test, then even if they are bullshitting, they did crazy research, and that's worth something too.
There are much more sophisticated methods than that now with AI, like speech-to-text piped into an LLM. It's getting harder and harder to detect interviewees cheating.
I think GP's point is that this says as much about the interview design and interviewer skill as it does about the candidate's tools.
If you do a rote interview that's easy to game with AI, it will certainly be harder to detect them cheating.
If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
There are candidates running speech-to-text who avoid the noticeable delays, but it's still possible to do the right kind of digging, the kind the AI will almost always refuse to do, because it's way too polite.
It's as if we were testing for replicants in Blade Runner: The AI response will rarely figure out you are aiming to look for something frustrating, that they are actually proud of, or figure out when you are looking for a hot take you can then disagree with.
The traditional tech interview was always designed to optimize for reliably finding someone who was willing to do what they were told even if it feels like busywork. As a rule someone who has the time and the motivation to brush up on an essentially useless skill in order to pass your job interview will likely fit nicely as a cog in your machine.
AI doesn't just change the interviewing game by making it easy to cheat on these interviews, it should be changing your hiring strategy altogether. If you're still thinking in terms of optimizing for cogs, you're missing the boat—unless you're hiring for a very short term gig what you need now is someone with high creative potential and great teamwork skills.
And as far as I know there is no reliable template interview for recognizing someone who's good at thinking outside the box and who understands people. You just have to talk to them: talk about their past projects, their past teams, how they learn, how they collaborate. And then you have to get good at understanding what kinds of answers you need for the specific role you're trying to fill, which will likely be different from role to role.
The days of the interchangeable cog are over, and with them easy answers for interviewing.
Have you spent a lot of time trying to hire people? I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees. This perspective smells completely like "If I were in charge, things would be so much better." Guess what? If you were to take your idea and try to lead this change across a 100 people engineering org, there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.
"talk about their past projects, their past teams, how they learn, how they collaborate"
You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
- “big” tech companies like Google, Amazon, Microsoft came up with these types of tech interviews. And there it seems pretty clear that for most of their positions they are looking for cogs
- The vast majority of tech companies have just copied what “big” tech is doing, including tech interviews. These companies may not be looking for cogs, but they are using an interview process that’s not suitable for them
- Very few companies have their own interview process suitable for them. These are usually small companies, and therefore the number of engineers at such companies is too small to be taken into account (most likely, less than 1% of the audience here works at such companies)
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
This is the job of a good interviewer. I've run the gamut from terrible to great answers to the exact same questions depending on the interviewer. If you literally just ask that question out of the blue, you'll either get a bad or a rehearsed response. If you establish some rapport, and ask it in a more natural way, you'll get a more natural answer.
It's not easy, but neither is being on the other side of the interview, and that's never been accepted as an excuse.
> I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.
The council itself is made of "busywork" worker bees. Slave hiring slaves - the vast majority of IT interviewers and candidates are idiot savants - they know very little outside of IT, or even realize that there is more to life than IT.
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
This was the norm until perhaps the last 10-15 years of Software Engineering.
> I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.
I didn't say that. I said that this style of interview was designed to hire pluggable cogs. As others have noted, that was the correct move for Big Tech and was cargo culted into a bunch of other companies that didn't know why their interviews were shaped the way they were.
> there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.
In answer to your original question: yes, I'm actively involved in hiring at a 100+ person engineering org that hires this way. And no, we're not looking to figure out how to hire compliant people, we're hiring engineers who will push back and do what works well, not just act because an executive says so.
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
Only if you suck at making people comfortable and at understanding different (potentially awkward) communication styles. You don't have to discriminate against people for being awkward, that's a choice you can make. You can instead give them enough space to find their train of thought and pursue it, and it does work—I recently sat in on an interview like that with someone who fits your description exactly, and we strongly recommended him.
> what you need now is someone with high creative potential and great teamwork skills.
That’s exactly what we always needed, long before LLMs arrived. That’s why all the interviews I’ve seen or give already were designed to have conversations.
I’m agreeing with you, but I’ve never seen these ‘interchangeable cog’ interviews you’re talking about.
Right, I agree. The leetcode interviews are a bad fit for almost every company—they only made sense in the Googles and Microsofts that invented them and actually did want to optimize for cogs.
https://leetcodewizard.io/
I think every interviewer and hiring manager ought to know about or be trained on these tools; your intuition about a candidate's behaviour isn't enough. Otherwise, we will soon reach a tipping point where honest candidates are at a severe disadvantage.
Tbh I’m very happy these tools exist. If your company wants to ask dumb formulaic leetcode questions and doesn’t care about the candidate’s actual ability then this is what you deserve. If they can automate the interview so well then they should also be able to automate the job right? Or are your interview questions not representative of what the job actually entails?
I understand this sentiment for experienced developers. It is an imperfect signal. But what is in your opinion a better signal for junior or new grads?
Every alternative I can think of is either worse, or sounds nice but is impractical to implement at scale.
I don’t know about you, but most interviewers out there don’t have the ability to judge the technical merit of a bullshitter’s contribution to a class or internship project in half an hour, especially if it’s in a domain the interviewer has no familiarity with. And by the way, not all of them are completely dumb; they do know computer science, just perhaps not as well as an honest competitor.
> Or are your interview questions not representative of what the job actually entails?
100% of all job interviews are a proxy. It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.
A leetcode interview either is or isn't a meaningful proxy. AI tools either do or don't invalidate that proxy.
Personally I think leetcode interviews are an imperfect but relatively decent proxy. And that AI tools render that proxy invalid.
Hopefully someday someone invents a better interview scheme that can reliably and consistently scale to thousands of interviews, hundreds of hires, and dozens of interviewers. That’d be great. Alas, it’s a really hard and unsolved problem!
I'm really happy it's finally broken, though. Dumbest fad our industry ever had.
"Cheating" on leetcode is a net positive for society. Unironically.
I think this is the first interview cheating tool I've seen that feels morally justified to me. I wonder if it will actually change company behavior at all.
anyone I know who actually got a job through leetcode style in the last 2 years cheated. they would get their friends to mirror monitor and then type the answers in from chatgpt LOL
I strongly disagree. This is nothing. You can sort out if someone is using something like this to cheat. You have a conversation. You can ask conceptual questions about algorithms and time complexity and figure out their level and see how their sophistication matches their solution on the LeetCode problem or whatever. Now, if you have really bad intuition or understanding of human behavior then yeah it would probably be hard but in that case being a good interviewer is probably hopeless anyway.
The key is having interviewers who know what they are talking about, so in-depth, meandering discussions can be had about personal and work projects, which usually makes it clear whether the applicant knows their stuff. Leetcode was only ever a temporary interview technique, and this 'AI' prominence in the public domain has simply sped up its demise.
You ask a rote question and you'll get a rote answer while the interviewee is busy looking at a fixed point on the screen.
You then ask a pointed question about something they know or care about, and suddenly their face lights up, they're animated, and they are looking around.
It's a huge tell.
You know, this makes me wonder if a viable remote interview technique, at least until real-time deepfaking gets better, would be to have people close their eyes while talking to them. For somebody who knows their stuff it'll have zero impact; for someone relying entirely on GPT, it will completely derail them.
This is the way. We do an intro call, an engineering chat (exactly as you describe), a coding challenge, and 2 team chat sessions in person. At the end of that, we usually have a good feeling about how sharp the candidate is, if they like to learn and discover new things, and what their work ethic is. It's not bulletproof, but it removes a lot of noise from the signal.
The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
> The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
Do you state this upfront or is it some hidden requirement? Generally I'd expect an interview coding exercise to not be done with AI, but if it's a hidden requirement that the interviewer does not disclose, it is unfair to be penalized for not reading their minds.
> as it's that much of a productivity boost when used right
Frankly, if an interviewer told me this, I would genuinely wonder why what they're building is such a simple toy product that an LLM can understand it well enough to be productive.
I listed these in more recent emails as examples of fun topics to teach me about. And I mentioned that if I were asked this question, I might talk about how to tune a harmonica, and why a harmonica player would want to do that.
This was fun for me and the candidate. It helped put them at ease by letting them shine as the expert in some area they had a special interest in.
I especially like that you're prepping them very clearly in advance, giving them every opportunity to succeed, and clearly setting a tone of collaboration.
In-person design and coding challenges are a pressure cooker, and not real-world. However, giving people the choice seems like a great way to achieve the balance.
Honestly, I'm really just commenting here so that this shows up in my history, so I can borrow heavily from this format next time I need to interview! :) Thanks again for sharing.
I work as a developer and as an interviewer (both freelance). Now I want to integrate your point 3. into my interviews, but not to choose better candidates, just to learn new stuff I never thought about before.
It's your fault that I now see this risk in my professional life, coming at me. I could get addicted to "teach me something". 'Hey candidate, we have 90 minutes. Just forget about that programming nonsense and teach me cool stuff'
But I digress!
Asking them what they wanted or would teach me was very illuminating and frankly, fun all around! I was brought a variety of subjects and not a single one was dull!
Seems I had experiences similar to yours.
One of my questions was influenced by someone who I respected highly asking, "what books are on your shelf at home?"
This one almost always brought out something about the candidate I would have had no clue about otherwise. As time went on, it became more about titles, because fewer people maintain a physical bookshelf!
I wish more bosses were like you.
The opening question is a copy of what was done before (probably by someone who doesn't work at IBM anymore) and all the new stuff is stolen from outsiders.
What if I decide to teach you something about the Quran and you don't hire me?
Perhaps this is just urban legend but from the stories I've heard hiring for FAANG-type companies there are people out there interviewing with their only goal being baiting you into a topic they can sue you over.
Worst instance I have heard of is when an interviewer asked about the books the candidate liked to read (since the candidate said they're an avid reader in their resume) and he just said that he liked to read the Bible. After not getting hired for failing the interviews he accused the company of religious discrimination.
I'm by no means an expert in US law and don't know the people this happened to directly, so maybe it's just one big fantasy story, but it doesn't seem that far-fetched that:
- If you are a rich corporation then people will look to bait you into a lawsuit
- If you give software engineers (or any non-lawyers) free rein on how to conduct interviews then some of them can be baited into a situation that the company could be sued over way more easily than a rigid leetcode interview
I think nothing came of the fellow who liked reading the Bible but I would imagine the legal department has a say in what kind of interview process a rich company can and can not use.
To their credit, I think they would have hired "the old guy" if I'd aced their take-homes, but I was a bit rusty and not super thrilled about their problem areas anyway so we didn't get that far. And honestly it seems like a decent system for hiring well-paid cogs in your well-oiled machine for the short term.
Your method sounds like what we were trying to do ten years ago, and it worked pretty well until our pipeline dried up. I wish you, and your candidates, continued success with it: a little humanity goes a long way these days.
Wishing you all the best for your career!
The biggest problem with take-home tests is not the people who don't show up because they couldn't finish the assignment, but that those who do now expect to get hired.
95% of people don't finish the assignment. 5% do. Teams think that submitting the assignment with a 100% feature set, unit test cases, an onsite code review, and onsite implementation of an additional feature still shouldn't mean a hire (if nothing else, there are just not enough positions to fill). From the candidate's perspective, it's pointless to spend a week doing everything the hiring team asked for and still receive a 'no'.
I think if you are not ready to pay 2x-5x market comp, you shouldn't use take-home assignments to hire people. It's too much work to do only to receive a rejection at the end. Upping the comp absolutely makes sense: working for a week for a chance at earning 3x more is a reasonable trade.
Uhhh yeah. That would really piss me off. Like reviewbombing glassdoor type of pissed.
Especially people with ADHD don’t remember details as long as others, even though ADHD does not make someone a bad hire in this industry (and many successful tech workers have it).
I do prefer take-home challenges to live coding interviews right now, or at least a not-rushed live coding interview with some approximate advance warning of what to expect. That gives me time to refresh my rust (no programming language pun intended) or ramp up on whichever technologies are necessary for the challenge, and then to complete the challenge well even if taking more time than someone who is currently fresh with the perfectly matched skills might need. I want the ability to show what I can do on the job after onboarding, not what I can do while struggling with long-term unemployment, immigration, financial, and family health concerns. (My fault for marrying a foreigner and trying to legally bring her into the US, apparently.)
And, no, my life circumstances do not make it easy for me to follow the common suggestion of ramping up in my “spare time” without the pressures of a job or a specific interview task. That’s completely different from when I can do on the job or for a specific interview’s technical challenge.
If nothing else, it's a useful tool when doing reviews with your manager, or when you're trying to evaluate your personal growth.
It comes in really handy when you're interviewing or looking for a new job.
Julia Evans calls it a "brag doc": https://jvns.ca/blog/brag-documents/
It doesn't have to be super detailed either. I tell people to write 50% about what the work was, and 50% about what the purpose/value of the work was. That tends to be a good mix of details and context.
Writing in it once a month, or once a sprint, is enough. Even if it's just a paragraph or two.
I would definitely not expect someone out of work for a while to have any meaningful side projects. I mean, if they do, that's cool, but I bet the sheer stress of not having a job kills a lot of creativity and productivity. Haven't been there, so I can only imagine.
For such a candidate, I'd probably want to offer them a time limited freelance gig rather quickly, so I can just see what they work like. For people who are already in a job, it's more important to ensure fit before they quit there, but your scenario opens the door to not putting that much pressure on the interview process.
> // This line prevents X from happening
I've seen a number of those. The issue here is that you've already wasted a lot of time with a candidate.
So, being firmly against take-home tests or even leetcode, I think the only viable option is a face-to-face interview with a mixture of general CS questions (e.g. what is a hashmap, its benefits and drawbacks, what is a readers-writer lock, etc.) and some domain-specific questions: "You have X scenario (insert details here), which causes a race condition; how do you solve it?"
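For the race-condition question, a minimal sketch of the kind of answer being fished for, in Python (the counter and function names here are hypothetical, purely for illustration):

    import threading

    count = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global count
        for _ in range(n):
            count += 1  # read-modify-write: two threads can interleave and lose updates

    def safe_increment(n):
        global count
        for _ in range(n):
            with lock:  # serialize the critical section so the update is atomic
                count += 1

    threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(count)  # 200000 with the lock; the unsafe version can come up short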
I am starting to think that we need two types of technical interview questions: Old school (no LLMs allowed) vs new school (LLMs strongly encouraged). Someone under 25 (30?) is probably already making great use of LLMs to teach themselves new things about programming. This reminds me of when young people (late 2000s/early 2010s) began to move away from "O'Reilly-class" (heh, like a naval destroyer class) 500 page printed technical books to reading technical blogs. At first, I was suspicious -- essentially, I was gatekeeping on the blog writers. Over time, I came to appreciate that technical learning was changing. I see the same with LLMs. And don't worry about the shitty programmers who try to skate by only using LLMs. Their true colours will show very quickly.
Can I ask a dumb question? What are some drawbacks of using a hash map? Honestly, I am nearly neck-bearded at this point, and I would be surprised by this question in an interview. Mostly, people ask how do they work (impl details, etc.) and what are some benefits over using linear (non-binary) search in an array.
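For what it's worth, the answers usually being fished for are worst-case collision behavior and memory overhead. A minimal sketch, assuming CPython (the Collider class is a contrived illustration):

    import sys

    class Collider:
        # Every instance hashes to the same bucket, so dict operations
        # degrade from O(1) average to O(n) worst case.
        def __hash__(self):
            return 1
        def __eq__(self, other):
            return self is other

    d = {Collider(): i for i in range(1000)}  # each insert probes an ever-longer chain

    # Memory overhead: a dict keeps a sparse table plus per-entry hashes,
    # so it is noticeably bigger than a flat list of the same values.
    print(sys.getsizeof(dict.fromkeys(range(1000))))
    print(sys.getsizeof(list(range(1000))))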
For most companies, the better strategy would be to explicitly LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish, in the same time. If they accomplish only 1X, that's a bad sign that they haven't learned anything in 3 years about how to work faster with new power tools.
A good analogy from 5 years ago would be forcing candidates to write in assembly instead of whatever higher-level language you actually use in your work. Sure, interview for assembly if that's what you use, but 95% of companies don't need to touch assembly language.
Another reason:
If you're only say one of four interviewers, and you're maybe not the last interviewer, you really want the candidate to come out of your interview feeling like they did well or at least ok enough, so that they don't get tilted for the next interview. Because even if they did really poorly in your interview, maybe it's a fluke and they won't fail the rest of the loop.
Which is then a really difficult skill as an interviewer - how do you make sure someone thinks they do well even if they do very poorly? Ain't easy if there's any technical guts in the interview.
I sure as shit didn't get any good at that until I'd conducted like 100+ interviews, but maybe I'm just a slow learner haha
If you're a company that does this, please dog food your problems and make sure the interview makes the effort feel valued. It also smells weird if you claim it's representative of a typical engineering discussion. We all know that consultancy is wrangling data, bad data and really bad data. If you're arguing over what optimiser we're choosing I'd say there's better ways to waste your customer's money.
On the other hand, I like leetcode interviews. They're a nice equalizer, and I do think getting good at them improves your coding skill. The point is to not ask ludicrously obscure hard problems that need tricks. I like the screen share + remote IDE. We used Code, which was nice, and they even had tests integrated, so there wasn't the whiteboard pressure to get everything right in your head. You also know instantly if your solution works, and it's a nice confidence boost if you get it on the first try; plus you can see how candidates would actually debug, etc.
You need to do this to establish some baseline competency... for junior hires with no track record?
Recruiters have been notoriously bad in my experience. Relying on network has the potential to create bias and avoid good candidates that simply don't have an "in".
So far, everyone that elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI. So far my opinion is if you have a good interview process, you can clearly see who are the good candidates with or without ai.
At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.
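For anyone who hasn't hit this: flush() emits the pending SQL inside the still-open transaction, while commit() actually ends it. A minimal sketch of the failure mode, assuming SQLAlchemy 2.0-style declarative models (the User model is hypothetical, not the commenter's actual code):

    from sqlalchemy import Integer, String, create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        name: Mapped[str] = mapped_column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(name="alice"))
        # flush() sends the INSERT, so queries on THIS session see the row
        # and naive tests pass...
        session.flush()
        # ...but without commit(), the transaction rolls back when the
        # session closes, and the row silently disappears.

    with Session(engine) as session:
        print(session.query(User).count())  # 0 -- the "saved" user is gone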
LLMs are still not ready for prime-time. They churn out code like an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday.
The day when generative AI might hope to completely handle a coding task isn't here yet - it doesn't know your full requirements, it can't run your integration tests, etc. For now it's a tool, like a linter or a debugger - useful sometimes and not useful other times, but the responsibility to keep bugs out of prod still rests with the coder, not the tools.
That's...oddly specific.
I think we're in a period similar to self-driving cars, where the LLMs are pretty good, but not perfect; it's those last few percent that break it.
I've had ChatGPT do the same thing with code involving SQLAlchemy.
That sounds like a lot of developers I've worked with.
Give examples and let it extrapolate.
Newer, better models are coming out almost daily now, and it's almost common knowledge that Copilot was and is one of the worst. Especially right now, it doesn't even come close to what better models have to offer.
Also the way to use them is to ask for small chunks of code or questions about code after you gave them tons of context (like in Claude projects for example).
"Not ready for prime time" is also just factually incorrect. It is already being used A LOT. To the point that there are rumors that Cursor is buying so much compute from Anthropic that they are making their product unstable, because nvidia can't supply them hardware fast enough.
"Oh, I don't remember how to do parameterized testing in junit, okay, I'll just copy-paste like crazy, or make a big for-loop in this single test case"
"Oh, I don't remember the API call for this one thing, okay, I'll just chat with the interviewer, maybe they remember - or I'll just say 'this function does this' and the interviewer and I will just agree that it does that".
Things more complicated than that, where exact answers are needed, shouldn't exist in an interview.
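And indeed, the parameterized-testing detail above is a one-line lookup. The same idea in pytest terms, as a rough sketch (test name and inputs are made up):

    import pytest

    # One test body, many inputs -- instead of copy-pasting test cases
    # or hiding a for-loop inside a single test.
    @pytest.mark.parametrize("raw, expected", [
        ("42", 42),
        ("-7", -7),
        ("  13 ", 13),
    ])
    def test_parse_int(raw, expected):
        assert int(raw) == expected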
Agreed, testing for arcane knowledge is pointless in a world where information lookup is instant, and we now have AI librarians at our fingertips.
Critical thinking, capacity to ingest and process new information, fast logic processing, software fundamentals and ability to communicate are attributes I would test for.
An exception though is proving their claimed experience, you can usually tease that out with specifics about the tools.
To me it's the lack of skill. If the LLM spits out junk you should be able to tell. ChatGPT-based interviews could work just as well to determine the ability to understand, review and fix code effectively.
I’m basically pair programming with a wizard all day who periodically does very stupid things.
I would love to go through mock interviews for myself with this approach just to have some interview-specific experience.
>> So far, everyone that elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI.
Thanks for sharing your experience! Makes sense actually.
As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing, then watching their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.
We actually allow using AI in our in-person technical interviews, but our questions are worded to fail safety checks. We'll talk about smuggling nuclear weapons, violent uprising, staging a coup, manufacturing fentanyl, etc. (within the context of system design) and that gives us really good mileage on weeding out those who are just transcribing what we say into AI and reading the response.
I'm genuinely curious what questions you ask during the behavioral interview. Most companies ask questions like "recall a time when...", and I know people who struggle with these kinds of questions despite being good teammates, either because they find it difficult to explain the situation, or due to stress. And the recruitment process is not a "basic conversation" — as a recruiter you're in a far more comfortable position. I find it hard to believe anyone would use an LLM if you ask them a question like "what were your responsibilities in your last role", but I do see how they might've primed the chat to help them communicate an answer to a question like "tell me about a situation when you had a conflict with your manager".
I love the idea of embedding sensitive topics that ChatGPT and other LLMs will steer clear of, within the context of a coding question.
Have you ever had any candidate laugh?
Any candidates find it offensive?
I just recently led several interview rounds for a software engineering role, and we have not had any issue with LLM use. What we do for the technical interview part is very simple: a live whiteboarding design task where we try to identify what the candidate's focus is, and we might pivot at any time or dig deeper into particular topics. Sometimes we will even go as detailed as talking about the particular algorithms the candidate would use.
In general, I found that this type of interview is the most fun for both sides. The candidates don't feel pressure that they must do the only right thing as there is a lot of room for improvisation; the interviewers don't get bored with repetitive interviews over and over as new candidates come by with different perspectives. Also, there is no room for LLM use because the candidate has to be involved in drawing on the whiteboard and showing their technical presentation skills, which are very important for developers.
I wouldn’t be surprised if the effect of Google Docs and Gmail forcing full AI, is a generation of people who can’t even talk about themselves, and can’t articulate even a single email.
Is it necessary? Perhaps. Will it make the world boring? Yes.
As a person looking for a job, I’m really not sure what to do. If people are lying on their resumes and cheating in interviews, it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
But to this day I haven’t done either.
In my opinion, if a prospective employee is able to successfully use AI to trick me into hiring them, then that is a hell of a lot closer to the actual work they'll be hired to do (compared to leetcode).
I say, if you can cheat at an interview with AI, do it.
Why does it feel like that when you’re replying to someone who already points out that it doesn’t work? Cheating can prevent you from getting a job, and it can get you fired from the job too. It can also impede your ability to learn and level up your own skills. I’m glad you haven’t done it yet, just know that you can be a better candidate and increase your chances by not cheating.
Using an LLM isn’t cheating if the interviewer allows it. Whether they allow it or not, there’s still no substitute for putting in the work. Interviews are a skill that can (and should) be practiced. Candidates are rarely hired for technical skill alone. Attitude, communication, curiosity, and lots of other soft skills are severely underestimated by so many job seekers, especially those coming right out of school. A small amount of strengthening your non-code abilities can improve your odds much faster than leetcode ever will. And if you have time, why not do both?
I haven't looked for development-related jobs this millennium, but it's unclear to me how effective a crutch AI is for interviews--at least for well-designed and run interviews. Maybe in some narrow domains for junior people.
As a few of us have written elsewhere, I consider not having in-person interviews past an initial screen sheer laziness and companies generally deserve whoever they end up with.
Never buy into this mentality. Because once you do, it never goes away. After the interview, your coworkers might cheat, so you cheat too. Then your business competitors might cheat, so you cheat too. And on and on.
But YMMV. I have 9 years and still can get interviews the old fashioned way.
Instead, we were looking to see if they followed instructions, and if they left anything out.
I never had a chance to test it out, since we hadn't hired anyone new in so long, but ChatGPT/etc would almost always fail this exam because of how bad it is at making sure everything was included.
And bad programmers also failed it. It always left us with a few candidates that paid attention, and from there we figure if they can do that, they can learn the rest. It seemed to work quite well.
I was recently laid off from that company, and now I'm realizing that I really want to see what current-day candidates would turn in. Oh well.
Companies seem to think that we program just for fun and ask to make a full blown app... also underestimating the time candidates actually spend making it.
Remember that you are only catching the candidates who are bad at cheating.
Imagine you need to hire some people, and think about what you’d want. That’ll answer your question. Do you want people who don’t know but think AI will solve the problems, or do you want people who are capable of thinking through it and coming up with new solutions, or of knowing when and why the AI answer won’t work?
The ultimate question that an interview is trying to answer is not "can this person solve this equation I gave them?", it's usually something along the lines of "does this person exhibit characteristics of a trustworthy and effective employee?". Using AI when you've been asked not to is an automatic failure of trust.
This isn't new or unique to AI, either. Before AI people would sometimes try to look up answers on Google. People will write research papers by looking up information on Wikipedia. And none of those things are wrong, as long as they're done honestly and up front.
In most interview tasks you are not solving the task "with" AI.
It's the AI that solves the task while you watch it do it.
Detection of AI in remote interviews, behavioral and technical, just has to be taught today if you are ever interviewing people that don't come from in-network recommendations. Completely fake candidates are way too common.
Start with recording the session and blocking right-click, and you are halfway there. It's not hard.
The AI app has helped me surface top candidates. I don't even look at resumes anymore. There's no point. I interview the top 10 out of 200, and then do my references and select.
- share your screen
- download/open the coding challenge
- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare
My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
It proved to be awkward and clumsy very quickly. Some candidates resisted it since they clearly thought it would make them judged harsher. Some candidates were on the other extreme and basically tried asking ChatGPT the problem straight up, even though I clarified up front "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."
After just the initial batch of candidates it became clear it was muddying things too much, so I simply forbade using it for the rest of the candidates, and those interviews went much smoother.
But for me, it's just not how my brain works. If someone is watching me, I'll be so self-conscious the entire time you'll get a stream of absolute nonsense that makes me look like I learned programming from YouTube last night. So it's not worth the time.
You want some good programming done? I need headphones, loud music, a closed door and a supply of Diet Coke. I'll see you in a few hours.
No, it's not "obvious" whatsoever. Actually, it's obviously confusing: why are you allowing them to use ChatGPT but forbidding them from asking it for the solution directly? Do you want an employee who is productive at solving problems, or someone who guesses your intentions better?
If AI is an issue for you, then just ban it. Don't try to make the interview a game of who can outsmart whom.
- You get to see how they then review the generated code: do they spot potential edge cases which the AI missed?
- When I ask them to make a change not in the original spec, a lot of them completely shut down, because they either didn't understand the generated code well enough, or they themselves didn't really know how to code.
And you still get to see people who _do_ know how to use AI well, which at this point is a must for its overall productivity benefits.
Too many people are the opposite of that, so I would literally never tell you.
And this works.
what can we do to help that?
I’ve had interviews where AI use was encouraged as well.
but so many casual tirades against it don't make me want to ever try being forthcoming. most organizations are realistically going to be 10 years behind the curve on this
I do not want AI. The human is the value add.
I understand that people won't feel super comfortable with this, and I try not to roast the candidate with leetcode. It should be a conversation where I surface technical reality and understanding.
if i see anything remotely challenging i dip out. interviewing is just a numbers game nowadays so i dont waste time on interviews if they seem like they're gonna burn me out for the rest of the day. granted i have 11 years experience
Some of the questions in our interview loop have been posted in github... which means every AI has trained on them specifically. They are, therefore, useless if you have AI turned on. And if you interview enough people, someone will post on github, and therefore your question will have a pretty short shelf life before it's in training and instantly solved.
Once you get to the interview process, it's very clear if someone thinks they can use AI to help with the interview process. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response to my question 30 seconds later.
AI-proof interviewing is easy if you know what you're talking about. Look at the candidates resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
That said, I'm waiting for an "interview assistant" product. It listens in to the conversation and silently provides concise extra information about the mentioned subjects that can be quickly glanced at without having to enter anything. Or does this already exist?
Such a product could be useful for coding too. Like watching over my shoulder and seeing: aha, you are working with so-and-so library, let me show you some key parts of the API in this window; or, you are trying to do this-and-that, let me give you some hints. Not as intrusive as current assistants that try to write code for you, just some proactive lookup without having to actively seek out information. Anybody know a product for that?
The only way I can see that working is if it spends hundreds of hours watching you to understand what you know and don't know, and even then it'll be a bit of a crap shoot.
This was 2-3 years ago in a remote interview. The candidate would hear the question, BS us a bit and then sometimes provide a good answer.
But then if we asked follow up questions they would blow those.
They also had odd 'AV issues' which were suspicious.