I think the more general point, rather than "build something great that someone at Apple notices", is that graduate and internship programmes are scale businesses. They get thousands of applicants and will hire at least dozens; some FAANGs are probably hiring hundreds. So yes, we're all special snowflakes, but that's not what this process is for. It's about harsh rounds of easy-to-design problems to winnow the field.
This totally changes when you're a specific candidate with skills Apple values. What this guy experienced wasn't an interview process: they had already decided to hire him before what he described as the interview process began. They already knew he was good; he experienced the talent-acquisition process, the bit that happens when the company has already decided to hire you and is now doing everything it can to sell you on the job.
If someone outside of recruiting contacts you about a job at a company, you will not experience a normal interview process, because they're already quite likely to want to hire you.
This is my experience. Path in the door for every job I've had since I finished undergrad in the mid-aughts:
- 1st job .. I answered a craigslist ad.
- 2nd .. HN who's hiring post.
- 3rd .. reached out to an engineer on the team on linkedin.
- 4th .. referral from ex-coworker's partner.
- 5th .. browsed the website of the VC firm that's funded a few of my previous companies, and I reached out to the CEO on linkedin.
With the current amount of noise on the supply side, there's no alternative for them. Even the finest resume can't replace the confidence that comes from direct experience of a candidate, whether from having worked with them before or from having watched them do a job well.
This is a weird take, because the whole point of the last decade of leetcode was that people with special skills or that built something great can't even get hired. Which was dumb, but it reinforced the supposed uniformity of that process.
I'm glad people aren’t just continuing the practice simply because they experienced the practice.
The comment you're replying to is saying there's 2 processes, and it matters which one you're in.
System A optimizes for min(p(hire|bad_candidate)). System B optimizes for max(p(hire|good_candidate)).
System A has resume screens, long lead times, and Leetcode out the wazoo. System B has a few convos, short lead times, and review of actual previous work product.
Horror stories come from great candidates in system A. This article is describing how to get into system B.
To be honest, building something on your own time is very different, in that it's much easier.
You start from scratch, and you can pick something you're interested in building and feel you can succeed at. You make up your own requirements as you please, so things aren't ambiguous and the requirements aren't constantly changing.
There are no legacy constraints, there are no time constraints, and it doesn't require finding ways to deliver faster by leveraging other developers.
You get to spend as much time as you need, go watch tutorials, take a hiatus, come back to it when refreshed, fail multiple times and try again, etc.
You can choose the programming language, framework, library ecosystem, that you're more familiar with and comfortable with.
And so on.
So I've found that someone being able to show some cool personal projects isn't always a great way to know whether they'll be good in a work environment and on a team.
> This is a weird take because the whole point of the last decade of leetcode was that people with special skills or that built something great can't even get hired.
No, it looks like you're missing the point. Leetcode is not for people with special skills. It's for people with NO SKILLS other than "can code". Well if the only thing you can do is write code, you better be really good at writing code.
Most of this is correct, but FAANGs have also been having their managers (e.g. EMs and PMs) do the outreach to make it look authentic, only to forward you to a recruiter and a standard 6-hour interview round. (I know because I am not special and have fallen, or almost fallen, for this trap dozens of times.)
Always nice when the startup CEO emails you directly via CRM automation about your unique potential at the company and when you agree to chat he hands you off to the "senior technical recruiter". Always makes me feel extra special :)
Something I've noticed over the past 5 or so years is a "hobby project" is increasingly becoming the standard. Your GitHub portfolio is almost as important as your resumé itself in many cases. It starts the conversation and grabs attention with potential employers, but doesn't act in lieu of a technical evaluation. Almost every competitive applicant at the intern level has some tool or webapp that they've built out extensively.
I can confirm that AWS has done this as well, I know of at least 1 person who has been hired in this manner, and it was because they were excellent OSS contributors.
As a data point on the scale of internships, Amazon hired approximately 15,000 interns last summer (not including winter interns). While I don't have a number for how many get full time offers, a large portion do since the internship serves as on-the-job training.
The problem is that "leetcode" is definitely a thing that exists in FAANG interviews.
The biggest stumbling block with leetcode is that you shouldn't be programming like that in real life.
"make a linked list" no, just use a library like everyone else.
"Implement addition in python but with string inputs", "no you can't use the built in x" All of that "clever" shit should be filtered out at PR/MR/diff review time.
All of this "clever shit" is exceptionally bad programming. We don't live in the 80s anymore; we don't need to make our own sort algorithms. Just. Use. A. Library.
I'm not going to defend "leetcode grinding" and its pathologies but TBF this is problematic:
> "make a linked list" no, just use a library like everyone else.
> "Implement addition in python but with string inputs", "no you can't use the built in x" All of that "clever" shit should be filtered out at PR/MR/diff review time.
Sure, those statements (just use an existing library) are what you'll do in practice especially as a beginner. But there is value in asking these kinds of questions.
One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution as it allocates memory (which has to be freed) in what can be done in a single quick pass. By asking you such a question the interviewer can learn if you have some idea of the tradeoffs and why something might be a good or bad idea.
These are the kinds of decisions that can have order of magnitude impacts on runtime, which can have huge impact on the company's costs.
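To make that tradeoff concrete, here's a minimal Python sketch (my own illustration, not from the comment) of the two counting approaches:

```python
# Two ways to count even numbers in a list.

def count_evens_filter(xs):
    # Allocates an intermediate list proportional to the match count,
    # just to throw it away after measuring its length.
    return len([x for x in xs if x % 2 == 0])

def count_evens_single_pass(xs):
    # One pass over the data, constant extra memory.
    return sum(1 for x in xs if x % 2 == 0)

data = [3, 4, 7, 8, 10]
print(count_evens_filter(data))       # 3
print(count_evens_single_pass(data))  # 3
```

Both return the same answer; the second just avoids building a throwaway list, which is exactly the kind of tradeoff this style of question probes.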
Likewise, doing arithmetic with strings as inputs exposes interesting but not super complicated questions about how arithmetic works, how you parse a numeric value out of its written representation (I detest the word "conversion" for this), and what the interesting bounds are. If the job is really so high-level that you don't have to remember there's an actual machine involved, then yes, the questions aren't useful. But how many jobs are really that abstract?
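For illustration, a sketch of what an answer might look like: grade-school addition over decimal strings, with no `int()` conversion (assumes non-negative inputs; this is my example, not one from the thread):

```python
def add_decimal_strings(a: str, b: str) -> str:
    """Add two non-negative decimal strings digit by digit, without int()."""
    i, j = len(a) - 1, len(b) - 1
    carry = 0
    digits = []
    while i >= 0 or j >= 0 or carry:
        da = ord(a[i]) - ord('0') if i >= 0 else 0
        db = ord(b[j]) - ord('0') if j >= 0 else 0
        carry, d = divmod(da + db + carry, 10)
        digits.append(chr(d + ord('0')))
        i -= 1
        j -= 1
    return ''.join(reversed(digits))

print(add_decimal_strings("987", "46"))  # 1033
```

The carry handling and the uneven lengths are where the "how does arithmetic actually work" signal comes from.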
> We don't live in the 80s anymore. we don't need to make our own sort algorithms.
No, but sometimes you have to make a choice based on the data to be operated on.
> One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution as it allocates memory (which has to be freed) in what can be done in a single quick pass. By asking you such a question the interviewer can learn if you have some idea of the tradeoffs and why something might be a good or bad idea.
> These are the kinds of decisions that can have order of magnitude impacts on runtime, which can have huge impact on the company's costs.
Then the questions should be more like: what data structure, library, method, or steps would you use to solve this problem, and why? What are the tradeoffs of your choices?
All of this can be answered in fairly high-level pseudocode and still show that they understand the concepts.
> Sure, those statements (just use an existing library) are what you'll do in practice especially as a beginner
The more code you write, the more you have to maintain. Sure, of course there are times where you need to re-implement something from scratch. But those times are rare (or should be). Making something from scratch without strong justification is a strong signal, just not a positive one.
Now, as you point out, "when would you write x from scratch" is a great question to ask someone. I am very wary of people who are willing to overspend innovation tokens. But that's a design/systems/culture question, not a coding question.
the signal I want from a coding test is the following:
o Can they demonstrate an understanding of the programming language?
o Do they create readable code?
o Do they ask questions?
o Do they follow the style of the program they are modifying?
o Do they say "I don't know"?
o Do they push back on weird or silly requirements?
All of those things make working with someone much easier. None of them requires "implement an algorithm that only exists in computer science courses". They can all be tested by something like:
"here is a program that fetches a json document, but it breaks often, please make it more reliable"
That is far more realistic, and much closer to what they'll actually be doing.
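As a sketch of the kind of fix such a task invites, here's a generic retry-with-backoff wrapper in Python, demonstrated on a stand-in for the flaky fetch (`flaky_fetch` is made up for the example; a real version would wrap the actual network call):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# A stand-in for a flaky JSON fetch: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"status": "ok"}

print(with_retries(flaky_fetch))  # {'status': 'ok'}
```

The point of the exercise isn't the retry loop itself; it's seeing whether the candidate asks what "breaks often" actually means before reaching for one.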
The dirty little secret about most coding jobs is that the important thing is actually getting something functional and stable for the business, so it can make money. After that, it's responding to feature/bug requests. As a programmer, your job is to convert business logic into computer logic. 99% of the time, this doesn't mean making artisan libraries that improve some tiny metric by 20x.
> Sure, those statements (just use an existing library) are what you'll do in practice especially as a beginner.
I think it's really not about "beginner" vs. "expert", but more about the specificity of your role. If you're tasked with making general cloud services, it's probably fine. As your role gets closer to core systems/algorithms engineering, obviously that changes.
> But there is value in asking these kinds of questions.
There is value in knowing, but the dynamics of an interview make things like this harder to ask and harder to answer.
> One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution as it allocates memory (which has to be freed) in what can be done in a single quick pass.
In the real world the validity of this solution depends on its input and use. Also in the real world, especially if you're not on one of the above-mentioned specialist teams, it's usually more important to be readable and maintainable instead of just purely performance oriented. So a single pass solution that's harder to grasp at a glance becomes less desirable than multiple passes as long as your performance requirements can afford it.
> By asking you such a question the interviewer can learn if you have some idea of the tradeoffs and why something might be a good or bad idea.
Totally, but the issue becomes "can you figure out an algorithm on a whiteboard" instead of the (as you've already agreed) correct path of "can you work through the tradeoffs of one implementation vs. another". I think this question could pretty easily be presented in a way that isn't a quiz and allows the programmer to demonstrate their problem-solving ability without seeming adversarial.
> One is "do you know what's going on under the hood?"
Why should I need to? Let's take Microsoft's .NET sort implementation.
It will intelligently determine the optimal sort for the conditions it's facing.
Heapsort? Mergesort? Quicksort? Radix sort?
And the most heavily optimized version of each.
This is what data scientists do. We don't need programmers standing at whiteboards writing bubble sort and being told it's wrong. In the real world, you call .Sort() and get on with real work.
Sure, you can hand-roll your own sort for hot-path cases, but that's likely been taken into account anyway!
The problem, though, is how do you evaluate for real work without a portfolio of experience? Real work projects take weeks, perhaps months, to design, build, review and release. How do you test for that, really?
I agree leetcode style interviews are artificial, but I think they persist because few people have identified and popularised effective alternatives. At least with leetcode you've shown people in front of you have been prepared to go learn arcane stuff and apply it. It's not good, but it's better than nothing.
I was once asked to do the Gilded Rose kata[1] for one ~200 headcount Series D "startup", and it not only resembled real work - it's a refactoring problem - but it also showed their engineering culture. When I joined, I found smart people who weren't leetcode robots, but thoughtful engineers trying to solve tricky engineering problems. I "stole" Gilded Rose to use in other companies when interviewing until I joined my current employer (a FAANG, heavily prescribed process), with great success. I would like to see more katas that are as good at testing real world skills.
Also, something I've only ever been interviewed for twice in 25+ years, which I think is underplayed: pair programming and handling code review feedback. Do this more, please. If you hire people not knowing how they're going to respond to a principal engineer telling them "we need you to think about 4 other things that weren't in the original scope given to you by the PM, can you please go deal with them", why are you surprised when toys are subsequently thrown out of metaphorical prams?
It seems fairly straightforward to just ask verbal questions?
For example, if the position is mostly coding C, then just get them to explain how some X interacts with some Y, and how that interacts with some Z. Maybe with a copy of K&R, and get them to point to the relevant page.
Do that a few times, maybe use a whiteboard, and I think any experienced C developer will be able to tell who's faking it and who really understands within 2 or 3 iterations.
Even if there are no experienced folks in the company, it will take longer, but after an hour or two it'd be pretty difficult for anyone to fake a solid understanding of hundreds of pages of K&R.
This seems like a better way to evaluate real-world coding skills. Did you use it in a standard 45-60 min interview or as a take home problem? It seems like it'd take a decent amount of time to read the requirements and just introduce the task.
Refactoring is the best kind of test! It has a slight bias for programming languages but the bias can be mitigated by having refactorable libraries in many languages.
What’s funny is that in most positions the hard work has nothing to do with any of that. It’s communication, empathy, navigating people-problems and organizational dysfunction, and working with/around awful systems that you can’t change, while limiting their blast-radius to protect the business (and your own sanity).
You can always just google how to invert a binary tree or whatever and get a solid answer in no time. Which means it’s easy shit. Unless you’re one of a few devs doing PhD level work in the industry, that CS-heavy “mathy” stuff’s not the hardest thing you’re dealing with, and being good at that indicates almost nothing about your ability to deal with the hard problems you will encounter on the job.
(or else you’re greenfielding at a startup and are doing everything on easy-mode anyway, aside from problems you create for yourself on purpose or by accident)
In fairness, the ratio of "just write code" to everything you listed above changes quite drastically as your seniority increases.
To be sure, there will always be some senior/staff/principal engineers who sit in a room by themselves and code all day without talking to anyone, but, typically, the more senior you get in an organization the more your job consists of things other than writing code. When you're a junior or a mid, however, writing code makes up the vast majority of your day.
Communication, empathy and navigating people are easy: there's a deluge of people capable of doing that. Writing code is hard. Not many people can code at all; even fewer are any good at it. This is why you can easily be unemployed with a BA in Communications but have little problem getting a job straight out of high school if you can code.
Sure, you can pretend developers don't read and write code. Just like people who complain about leetcode pretend the questions are hard and that performance is uncorrelated somehow.
All of this "clever shit" is exceptionally bad programming.
In a company like Google (or Google 15 years ago really) there are problems with many possible solutions, but some solutions are better than others. The aim of leetcode recruitment is that it filters for people who can not only solve hard computer science problems, but also recognize what problem they're solving and find good solutions rather than just solutions.
"I can implement this with a library" is only a good solution if the library is solving the problem in the right way. If the library is solving the problem but solving it in an inefficient-but-working way that is not good programming. At the end of the spectrum where Google exists you need developers who know the difference.
The problem with leetcode is that it doesn't actually filter for people who can understand problems in a general sense. It filters for people who know leetcode problems.
> If the library is solving the problem but solving it in an inefficient-but-working way that is not good programming.
Yep. And even then, you need to know what you’re looking for. Years ago I wanted a library which implemented occlusion on vector shapes for plotter art. I had no idea what terms to search for because I don’t have a strong enough background in graphics. Turns out there are algorithms which help - but it was years before I found a good enough library.
And if you know what to search for, there are so many terrible implementations out there. Look for priority queues on npm. Half the libraries implement custom binary trees or something, despite the fact that using a heap (an array) is way faster and simpler. If you don’t know why a heap is better, how could you possibly evaluate whether any given library is any good?
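For what it's worth, Python's standard `heapq` module shows how little code the array-backed heap needs:

```python
import heapq

# A binary heap stored in a plain list: push and pop are O(log n),
# with no tree nodes or pointers to allocate.
pq = []
for priority, task in [(3, "low"), (1, "urgent"), (2, "normal")]:
    heapq.heappush(pq, (priority, task))  # tuples compare by priority first

order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
print(order)  # ['urgent', 'normal', 'low']
```

Knowing why this beats a hand-rolled binary tree (contiguous memory, trivial bookkeeping) is exactly the evaluation skill being described.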
You do know we hire people who have to _write_ these libraries, and they can't just defer to something else with no consideration to time or space constraints. None of the questions are designed to be brainteasers or 'gotchas' but to get you to start discussing the problem and show your depth of knowledge/expertise.
If I ask you a question along the lines of "write me a function to tell if two number ranges intersect" and your solution is to grab a library instead of writing a simple predicate...then perhaps the role is not a good fit.
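A sketch of the kind of simple predicate presumably being asked for (assuming closed ranges; the exact spec would be clarified in the interview):

```python
def ranges_intersect(a_start, a_end, b_start, b_end):
    """True if closed ranges [a_start, a_end] and [b_start, b_end] overlap."""
    return a_start <= b_end and b_start <= a_end

print(ranges_intersect(1, 5, 4, 9))  # True
print(ranges_intersect(1, 3, 4, 9))  # False
```

One line of logic; reaching for a dependency here says more about the candidate than the code does.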
"Use a library for everything" is how we ended up with left-pad on npm.
> "Use a library for everything" is how we ended up with left-pad on npm.
Bollocks; that's because JavaScript doesn't have a standard library.
> You do know we hire people who have to _write_ these libraries
I know, because I'm there. I'm working in VR/AR, and we have a number of people who are world experts at what they do. Do I go off and re-make a SLAM library because I don't like the layout? No, because I have to ship something usable in a normal time frame. I can't drop 3 months replicating someone else's work because I think the layout is a bit shit, or because I can't be bothered to read the docs (obviously there are no docs, but that's a different problem).
But, and I cannot stress this enough, having more than one person making similar or related libraries is a disaster. Nothing gets shipped and everything is devoted to "improving" the competing libraries.
Adding 1 to an ASCII decimal string is a great icebreaker and I always use it. It is not leetcode. It is a filter for people who have actually encountered computers before. Even though our industry is maturing, there are still a huge number of candidates who present themselves without ever once having written a program. I find the question gives a massive signal. Either the candidate aces it in 15 seconds, or they immediately ask a bunch of on-point clarifying questions, or they are stumped.
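One possible shape of an answer, sketched in Python (handling the carry is the interesting part; this sketch assumes a non-negative decimal input):

```python
def increment_decimal_string(s: str) -> str:
    """Add 1 to a non-negative decimal string, e.g. '199' -> '200'."""
    digits = list(s)
    i = len(digits) - 1
    while i >= 0:
        if digits[i] == '9':
            digits[i] = '0'   # carry propagates left
            i -= 1
        else:
            digits[i] = chr(ord(digits[i]) + 1)
            return ''.join(digits)
    return '1' + ''.join(digits)  # all nines: '999' -> '1000'

print(increment_decimal_string("199"))  # 200
print(increment_decimal_string("999"))  # 1000
```

The clarifying questions the comment mentions (negative numbers? leading zeros? non-digit input?) are exactly the edge cases this sketch punts on.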
I have written countless programs but would not be able to answer that without first turning to google. You might have screened out plenty of valid applicants with this
The only justification is that it's a filtering mechanism. You likely won't use any of this, but it's a proxy for certain characteristics you might find desirable in a candidate: some baseline intelligence and motivation.
That you have the capacity to sit down, rote-memorise a bunch of different techniques, and practice completing these kinds of tests indicates the likelihood of success in the absence of other hard evidence of future performance. In a way it serves the same function as university entrance exams.
That these companies continue to do this for experienced candidates and not just recent graduates without a track record in employment is more perplexing.
> That these companies continue to do this for experienced candidates and not just recent graduates without a track record in employment is more perplexing.
This assumes that they don't get a similar number of people applying to the experienced positions who are lacking skills.
For example, I know some Java devs who could put down 5+ years of professional experience but are still very junior when it comes to problem-solving skills. They are capable of following precise instructions, and there's enough work of that nature to do, but an open-ended "there's a bug that throws a NullPointerException when running in test but not in dev or production" is something they're not capable of tackling.
If they were to apply to a senior Java developer position on the strength of their years of experience, and there was no code test as part of it, they might be able to talk their way through parts of it, and there's a risk they could get hired.
For a senior java developer position at Google, would it be surprising if they got 500 applicants with 5+ years of experience?
How would you meaningfully filter that down to a set of candidates who are likely to be able to fill the role?
I think a fair number of Leetcode 'medium' problems are reasonable as augmented versions of FizzBuzz. In most cases, someone who can code should at least be able to write a brute force solution to one of these problems and explain why their brute force solution is slow. Where I think Leetcode-style questions become problematic is when they're used as test of whether someone can invent and implement a clever algorithm under time pressure. Unless you're hiring someone to do specifically that, then I think this style of Leetcode interviewing is of limited value.
Also, some Leetcode problems just have unnecessarily unfriendly descriptions. For example this one, which is quite simple to implement, but which has an unnecessarily obscure problem description: https://leetcode.com/problems/count-and-say/
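For reference, the problem itself really is short to implement once decoded from the description; a Python sketch using `itertools.groupby`:

```python
from itertools import groupby

def count_and_say(n: int) -> str:
    """n-th term of the look-and-say sequence: '1', '11', '21', '1211', ..."""
    term = "1"
    for _ in range(n - 1):
        # Read off each run of identical digits as "<count><digit>".
        term = "".join(f"{len(list(g))}{d}" for d, g in groupby(term))
    return term

print([count_and_say(i) for i in range(1, 6)])  # ['1', '11', '21', '1211', '111221']
```

Most of the difficulty is in parsing the problem statement, not in the five lines of code.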
some companies/teams use leetcode hard level problems to specifically select for ACM ICPC level talent to solve their particular problems, writing very tight compute kernels in constrained environment and unbounded input
My understanding is that they use it to filter candidates a bit. FAANG positions get loads of applicants, so they have to find efficient ways of thinning the herd a bit. The correlation between people who are willing to grind leetcode and who turn out to be good engineers is high enough that it's worth it for them to keep as a tool to sort the massive pile of applicants they get.
Of course, hiring someone purely on how good they are at leetcode would be... dumb.
> The correlation between people who are willing to grind leetcode and who turn out to be good engineers is high enough that it's worth it for them to keep as a tool to sort the massive pile of applicants they get.
One "subtlety" this misses is that leetcode-style trivia tests only work for those who are willing _and able_ to grind leetcode etc.
There are many who have the aptitude and experience but have e.g. children, or interests outside of programming which means they do not have the required free time to spend rehearsing for this kind of interview.
> "make a linked list" no, just use a library like everyone else.
If you cannot sketch out a simple linked list on a whiteboard, you’re not an engineer and have no business being hired as one at a place like Apple, full stop.
I was asked to sketch out a hash table on the whiteboard. It was trivial, because a trivial hash table is trivial.
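For the record, the whiteboard-level sketch being asked for really is small; a minimal Python version of a singly linked list (my own sketch):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next  # reference to the next node, or None at the tail

class LinkedList:
    """Minimal singly linked list: push to the front, iterate in order."""
    def __init__(self):
        self.head = None

    def push_front(self, value):
        self.head = Node(value, self.head)

    def __iter__(self):
        node = self.head
        while node:
            yield node.value
            node = node.next

lst = LinkedList()
for v in [3, 2, 1]:
    lst.push_front(v)
print(list(lst))  # [1, 2, 3]
```

If this takes more than a few minutes at a whiteboard, that's the signal the commenter is talking about.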
> you’re not an engineer and have no business being hired as one at a place like Apple
Well, unless you have an engineering degree, you're not actually an engineer, but we'll gloss over that. (Big hint: CS isn't an engineering discipline, otherwise testing, requirements gathering and cost analysis would be a big part of the course.)
I was once asked to implement a distributed fault tolerant hashtable on a whiteboard. I'd still never make one from scratch unless I really really needed to. (and I've worked on distributed datastores....)
but that wasn't part of the coding test, that was part of the design test.
Which is my point: re-implementing things for the hell of it is an antipattern. Rote learning of toy implementations of some everyday function doesn't give you a good signal on whether the candidate understands why you should use x over y.
and again, my point is this: If I catch someone re-implementing something in a code review, they need to have a really good fucking reason to do it, otherwise it'll be rejected.
My team is doing things for which libraries do not exist, things that are very much leetcode-esque, deeply into theoretical CS, lots of applied science. Some other teams here are doing data plumbing others do UI stuff.
Our team can get to be very picky about people because ... well, we need to be.
If you can't explain in detail how a linked list works, that's an enormous red flag that you've probably never studied data structures. It's a FizzBuzz level question.
I'm sure some companies have lousy interview practices, but algorithms and data structures are super important if you want to hire someone who's actually competent, who can think for themselves.
It also doesn't resemble "real" work with regards to gathering requirements, clarifying ambiguity, weighing up trade-offs, making sure code is clear, etc. It tends to be a rote regurgitation exercise.
LeetCode has no correlation with great software development. Great development is more about paying attention to end users, having an eye for usability and design, great communication, and the ability to execute. The current LeetCode churn explains why Google hasn't produced anything worth using since Brin & Page checked out, and why so much of software today is absolute garbage, but it is what it is, I suppose. Better to screen for IQ and desperation than, you know... people who love developing great software. Let them keep doing what they're doing while treating their own people like disposable garbage during hard times (like Google & Meta just did), and we'll see what they churn out over the next decade or so (I'm not expecting anything from either company).
And there are much greater selective pressures in India and China to excel at this than there are in the US, raising this irrelevant bar to absurd territory if you value your time.
The reason they do this is the desire for some objective way to collect information about a candidate. So they run multiple rounds of standardized technical interview questions, from which they can compare interview feedback. The rounds are run by different interviewers so they can control for the bias of any one interviewer, and the resulting standardized data can be compared across multiple candidates.
Imo this process is flawed though because just one or two rounds of technical interviewing gives you enough information about whether the candidate can code. After that you need to understand how the candidate thinks since most of the job is spent doing things that aren’t coding. These are better probed by design questions, asking the candidate to critique some code, asking the candidate to explain a project from their resume and then propose alternatives and trade offs.
Too often you get people who pass this interview process that can code at a basic level but hinder your team by giving poor feedback, having a fixed mindset, being a bad communicator, not being able to unblock themselves etc.
It’s fine if you’re just looking to grow a new grad, though former interns are better.
Hiring should be a team-level decision. Product roadmap should be an exec-level decision. Google has it completely backwards. That's how you end up with hordes of people gaming the hiring process by spending months on leetcode, and three chat apps from Google competing against each other.
Teams at Google are very fluid, there are many of them, and they change regularly. So it makes a lot of sense to be sure a candidate could be successful in a wide variety of teams.
Agreed. At our company the technical interview involves walking a dev through a working app that uses the same stack we develop with. There are bugs to solve, potential optimizations that exist and the dev explains and walks through the client/server/database portions (leaning into one or the other as needed). No tricky questions or puzzles... just actual code (a simplified subset but still real, working code).
As long as you're young and willing to put the work in (which is demonstrated by building things like nice apps), the software industry will always have space for you. No commitments, high stamina, inexperience in contract negotiation, willingness to believe in corporate ideals, willingness to live in corporate accommodation... what's not to like?
Microsoft and Apple were the first companies to understand this truth so deeply, so I'm not surprised Apple is still into these practices. If you are young, use them to your advantage; just - please go in with eyes wide open, because the industry doesn't really have your best interest at heart and never will.
This is just "one" experience, certainly not the norm. Even as an experienced engineer of 15 years, I had to solve leetcode (LC) style problems on a whiteboard a few years ago. For better or worse (depending on whose perspective you take), LC lets companies judge you on a common framework: solving algorithmic problems using computer science fundamentals (data structures and algorithms) in a language of your choice. Obviously, the problems vary in difficulty, and many are trick problems, which is what makes them frustrating, but it's the unwritten sad truth of this industry, and I believe it's here to stay _if_ you're vying for very high salaries.
I think it makes sense to use LC or something like it. Is there a better way to judge the 100 random developers who've applied to your position?
- They may not have formal education, you can't judge on that.
- They may not have active GitHub projects, you can't judge on that.
- They may not be active on social media or have any kind of fame, you can't judge on that.
- They may not have built anything they can show off like this `Find` app, you can't judge on that.
So what can you judge them on? LC makes that pretty simple: "can they answer some standardised questions about algorithms and data structures, showing that they have at least some basic knowledge of what's going on in computers?"
It's not without downsides, but I also struggle to see a better option that can scale to the armies of devs that Amazon, Google, etc, all hire.
Especially for Senior Developers I think systems design questions are more interesting. They usually don't just have one right answer and you need to discuss the advantages and disadvantages of the different approaches. They require knowledgeable interviewers of course.
Then again, I'm not working in a huge corp, so I don't know if this scales.
The problem with leet code is what it really measures is how long you have spent on leetcode. Yes, you can solve the problems if you have experience, but you are not going to look as good as someone who had done that specific problem on leetcode and could just write down the optimal answer.
Some of the highest salaries I know are in core ML at big tech or some of the foundational labs like Allen AI. While whiteboard coding interviews are often part of the process, it is generally pretty basic LC.
The crux of the evaluation often revolves around: have you built something amazing before? For most new grads it is the work done as part of their PhD thesis, but for non-PhDs it often boils down to amazing past work. And it’s a similar process for more experienced folks, except the reliance on LC fades even more.
If this works well for cutting edge ML why shouldn't it work for everything else?
I know senior engineers at Apple who didn’t do a single technical interview. They got the role purely through networking like the OP did. Apple is pretty unique in this regard among big tech - teams have a lot of leeway in how they hire.
On the flip side, I’m fairly sure the hardware / system software teams have rigorous hiring processes.
Does Apple attract the best software engineers? If I were a talented hardware engineer, chip designer, or hardware designer, I'd want to work at Apple because Apple attracts the best people in those fields. I imagine that Google, Microsoft, Meta, and Amazon get the lion's share of the best software engineers. No one has ever said Apple services, like the iCloud and Apple Music backends, or Siri, are quality pieces of software engineering.
Apple has pockets of excellence. It used to be most of the company; something about how Jobs ran it inspired wildly overqualified people to apply, and not even for very high salaries. At NeXT, infamously, a PhD (EDIT: in comp-sci, or physics?) worked as a janitor simply because he wanted to be around Jobs and cool, smart people.
Currently entropy is getting to Apple in many ways. iTunes and Apple Music didn't use to be mediocre, but are now. Siri is very mediocre. OS quality is declining although still pretty good. Hardware division is excellent, but most hardware divisions are.
Under Cook, Apple products have become bland and overpriced in most categories (the M_ processors are a huge exception), but it's also wildly successful. The stock price is 7x what it was when Cook took over. He's one of the most successful CEOs of all time.
Unfortunately the only conclusion we can draw is that consumers don't care about:
- software quality
- bugs
- voice assistants
...and do care about:
- hardware quality
- integrated ecosystems
- cameras
- battery life
iMessage and APNS are among the largest and most reliable webscale services run by anyone. Apple's cloud product design is crap (really, you're gonna try to sell me a $5 storage plan ten minutes after I've bought a $2000 phone?!), but iMessage and APNS are used by hundreds of millions of devices daily and are down less than S3. Credit where credit is due.
Best? Probably not. But good ones willing to take a high salary, for sure. The best work for niche companies where they have comparable or even better conditions. An ex-colleague just rejected an offer from Apple and picked a very specialized 30-person company.
Siri is the worst piece of software I'm forced to interact with daily. iOS is also near the top.
GP didn't seem to be saying they didn't think about that software. They were saying they didn't have a good reputation, which is true.
Apple is actually astonishingly bad at software considering the amount of money they have to do it properly. Even Google eventually got Android to a place where it's very resilient and rarely buggy, but iOS has bugs for me almost daily where I have to reboot to fix it.
This is a great post, and it's good to hear from someone who is doing great things so early. I know HN is often down on self-taught or bootcamp programmers, but I'm happy to see people who willed themselves through.
On this point regarding the lack of leet code at Apple versus other big tech:
> Is it a coincidence that Apple was the only Big Tech company that didn’t do layoffs?
Yeah, I do think it's probably a coincidence. On the whole, a higher focus on leet code in hiring is probably a negative thing, but I doubt it has such big effects as to cause layoffs.
I do think there is a large gap between truly self-taught and bootcamp programmers. Most self-taught programmers have bounced through several languages, frameworks, and different attempts to "crack" programming. Bootcampers in my experience have a very narrow slice of knowledge and, crucially, almost no understanding. This is a generalisation and I'm sure there are counterexamples, but by and large this has been my experience.
I actually prefer hiring and working with self-taught programmers, especially juniors, over CS grads. They usually have fewer bad habits and a stronger appetite to work things out for themselves.
Though this could just be my own bias, not having graduated with a CS degree; anecdata and all that.
I don't think it's a coincidence. To my knowledge, Apple also wasn't trying to hire like crazy over the pandemic. So it's no surprise that when that "hot streak" ended, they weren't affected as strongly.
And as others have said, Apple relies much more heavily on contracting than the rest of Big Tech.
I was speaking about the connection between leet code in hiring and layoffs. Unless you mean that the cash on hand is due to not using leet code in interviews, in which case I'd love to hear that theory!
Team-dependent hiring is not “unique” to Apple, as the author posits. This used to be the norm before Google. It is fairly well known that Apple has a “hire different” approach that heavily values specific experience, which frustrates the Leetcode monkeys on Blind.
I think "unique among big tech companies" and "used to be the norm" can both be true!
It definitely has pros and cons though: I've worked on a handful of teams at Apple and was very lucky to have been able to work on super fun and interesting stuff on all of them, but an outsider confronted with 1000+ vague listings on jobs.apple.com is effectively playing the lottery of whether they'll get to interview for a team that's a good match for them.
I can't speak for Apple, but team-specific hiring does have some big downsides. It allows some managers to bring in friends and family who are not really qualified. I once worked with a team where every single person lived in the same town and had some personal connection to the manager; they were not good. Without some standardization or minimum bar, it leaves teams to figure out how they hire on their own, and that might not be good.
This totally changes when you're a specific candidate with skills Apple values. What this guy experienced wasn't an interview process. They had already decided to hire him before what he described as the interview process. They already know he's good, he experienced the talent acquisition process. The bit that happens when the company has already decided to hire you and is now doing everything it can to sell you on the job.
If someone outside of recruiting contacts you about a job at a company, you will not experience a normal interview process, because they're already quite likely to want to hire you.
Hiring managers will hire someone they already know or find themselves 9 times out of 10.
Do something that gets noticed or get to know people and network. Sending out cold resumes will have the lowest success rate of those three options.
I'm glad people aren’t just continuing the practice simply because they experienced the practice.
System A optimizes for min(p(hire|bad_candidate)). System B optimizes for max(p(hire|good_candidate)).
System A has resume screens, long lead times, and Leetcode out the wazoo. System B has a few convos, short lead times, and review of actual previous work product.
Horror stories come from great candidates in system A. This article is describing how to get into system B.
You start from scratch, and you can pick something you're interested in building and feel you can succeed at. You make up your own requirements as you please, so things aren't ambiguous and requirements aren't constantly changing.
There are no prior legacy constraints, there are no time constraints, and it doesn't require finding ways to deliver faster by leveraging other developers.
You get to spend as much time as you need, go watch tutorials, take a hiatus, come back to it when refreshed, fail multiple times and try again, etc.
You can choose the programming language, framework, library ecosystem, that you're more familiar with and comfortable with.
And so on.
So I've found that someone being able to show some cool personal projects isn't always a great way to know if they'll be good in a work environment and on a team.
No, it looks like you're missing the point. Leetcode is not for people with special skills. It's for people with NO SKILLS other than "can code". Well if the only thing you can do is write code, you better be really good at writing code.
it will open many doors :)
The biggest stumbling block with leetcode is that you shouldn't be programming like that in real life.
"make a linked list" no, just use a library like everyone else.
"Implement addition in python but with string inputs", "no you can't use the built in x" All of that "clever" shit should be filtered out at PR/MR/diff review time.
All of this "clever shit" is exceptionally bad programming. We don't live in the 80s anymore. We don't need to make our own sort algorithms. Just. Use. A. Library.
> "make a linked list" no, just use a library like everyone else.
> "Implement addition in python but with string inputs", "no you can't use the built in x" All of that "clever" shit should be filtered out at PR/MR/diff review time.
Sure, those statements (just use an existing library) are what you'll do in practice especially as a beginner. But there is value in asking these kinds of questions.
One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution, as it allocates memory (which has to be freed) for what can be done in a single quick pass. By asking such a question the interviewer can learn whether you have some idea of the tradeoffs and why something might be a good or bad idea.
These are the kinds of decisions that can have order of magnitude impacts on runtime, which can have huge impact on the company's costs.
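That tradeoff is easy to make concrete; a quick Python sketch (the list here is made up for illustration):

```python
# Two ways to count how many times a value occurs in a list.
data = [7, 1, 7, 3, 7, 2]

# Filter-then-measure: builds a throwaway list just to take its length.
count_filtered = len([x for x in data if x == 7])

# Single pass: a generator feeds sum() one match at a time; no extra list.
count_streamed = sum(1 for x in data if x == 7)

assert count_filtered == count_streamed == 3
```

Both give the same answer; the second avoids materialising the intermediate list, which is exactly the kind of tradeoff the question probes.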
Likewise, doing arithmetic with strings as inputs would expose interesting but not super complicated questions about how arithmetic works, how you parse a numeric value out of its written representation (I detest the word "conversion" for this), and what the interesting bounds are. If the job is really so high-level that you don't have to think about the fact that there's an actual machine involved, then yes, the questions aren't useful. But how many jobs are really so abstract?
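As a sketch of the string-arithmetic direction (a toy function of my own, not from the thread): parsing a non-negative decimal string is just repeated shift-and-add.

```python
def parse_decimal(s: str) -> int:
    """Parse a non-negative decimal string into an int, one digit at a time."""
    if not s:
        raise ValueError("empty string")
    value = 0
    for ch in s:
        if not "0" <= ch <= "9":
            raise ValueError(f"not a decimal digit: {ch!r}")
        # Shift the accumulated value one decimal place, then add the new digit.
        value = value * 10 + (ord(ch) - ord("0"))
    return value
```

Writing this out quickly surfaces the bounds questions the comment mentions: empty input, non-digit characters, and (in lower-level languages) overflow.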
> We don't live in the 80s anymore. we don't need to make our own sort algorithms.
No, but sometimes you have to make a choice based on the data to be operated on.
> These are the kinds of decisions that can have order of magnitude impacts on runtime, which can have huge impact on the company's costs.
Then the questions should be more like: what data structure, library, method, or steps would you use to solve this problem, and why? What are the tradeoffs of your choices, etc.?
All of this can be answered in fairly high level pseudocode too and still show that they understand the concepts etc.
The more code you write, the more you have to maintain. Sure, of course there are times where you need to re-implement something from scratch. But those times are rare (or should be). Making something from scratch without strong justification is a strong signal, just not a positive one.
Now, as you point out, "when would you write x from scratch" is a great question to ask someone. I am very wary of people who are willing to overspend innovation tokens. But that's a design/systems/culture question, not a coding question.
The signal I want from a coding test is the following:
o Can they demonstrate an understanding of the programming language?
o Do they create readable code?
o Do they ask questions?
o Do they follow the style of the program they are modifying?
o Do they say "I don't know"?
o Do they push back on weird or silly requirements?
All of those things make working with someone much easier. None of those questions requires "implement an algorithm that only exists in Computer Science courses". They can all be answered by something like:
"here is a program that fetches a json document, but it breaks often, please make it more reliable"
That is far more realistic, and much closer to what they'll actually be doing.
The dirty little secret about most coding jobs is that the important thing is getting something functional and stable for the business, so it can make money. After that, it's responding to feature/bug requests. As a programmer, your job is to convert business logic into computer logic. 99% of the time, this doesn't mean making artisan libraries that improve some tiny metric by 20x.
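A hedged sketch of what an answer to that "fetch a JSON document, but make it reliable" exercise might look like (the function name, retry policy, and defaults are all my own):

```python
import json
import time
from urllib.request import urlopen
from urllib.error import URLError


def fetch_json(url: str, retries: int = 3, timeout: float = 5.0, backoff: float = 0.5):
    """Fetch and decode a JSON document, retrying transient failures."""
    last_error = None
    for attempt in range(retries):
        try:
            with urlopen(url, timeout=timeout) as resp:
                return json.load(resp)
        except (URLError, TimeoutError, json.JSONDecodeError) as exc:
            last_error = exc
            if attempt + 1 < retries:
                # Exponential backoff between attempts: backoff, 2x, 4x, ...
                time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"giving up on {url} after {retries} attempts") from last_error
```

The discussion then writes itself: which errors are retryable, whether the backoff should be jittered, what a sensible timeout is. That is far closer to day-to-day work than reimplementing a sort.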
I think it's really not about "beginner" vs. "expert", but moreso about the specificity of your role. If you're tasked with making general cloud services, it's probably fine. As your role gets closer to core systems/algorithms engineering, obviously that changes.
> But there is value in asking these kinds of questions.
There is value in knowing, but the dynamics of an interview make things like this harder to ask and harder to answer.
> One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution as it allocates memory (which has to be freed) in what can be done in a single quick pass.
In the real world the validity of this solution depends on its input and use. Also, in the real world, especially if you're not on one of the above-mentioned specialist teams, it's usually more important to be readable and maintainable than purely performance-oriented. So a single-pass solution that's harder to grasp at a glance becomes less desirable than multiple passes, as long as your performance requirements can afford it.
> By asking you such a question the interviewer can learn if you have some idea of the tradeoffs and why something might be a good or bad idea.
Totally, but the issue becomes "can you figure out an algorithm on a whiteboard" instead of the (as you've already agreed) correct path of "can you work through the tradeoffs of one implementation vs. another". I think this question could pretty easily be presented in a way that isn't a quiz and still allows the programmer to demonstrate their problem-solving ability without seeming adversarial.
Why should I need to? Let's take Microsoft's .NET sort implementation.
It will intelligently determine which sort is optimal given the conditions it's facing.
Heapsort? Mergesort? Quicksort? Radix sort?
And the most heavily optimized version of each.
This is what data scientists do. We don't need programmers standing at whiteboards writing BubbleSort and being told it's wrong. In the real world, you call .Sort() and get on with real work.
Sure, you can hand-roll your own sort for hot-path cases but that's likely been taken into account anyway!
I agree leetcode-style interviews are artificial, but I think they persist because few people have identified and popularised effective alternatives. At least with leetcode, the people in front of you have shown they were prepared to go learn arcane stuff and apply it. It's not good, but it's better than nothing.
I was once asked to do the Gilded Rose kata[1] for one ~200 headcount Series D "startup", and it not only resembled real work - it's a refactoring problem - but it also showed their engineering culture. When I joined, I found smart people who weren't leetcode robots, but thoughtful engineers trying to solve tricky engineering problems. I "stole" Gilded Rose to use in other companies when interviewing until I joined my current employer (a FAANG, heavily prescribed process), with great success. I would like to see more katas that are as good at testing real world skills.
Also, something I've only ever been interviewed for twice in 25+ years, which I think is underplayed: pair programming and handling code review feedback. Do this more, please. If you hire people not knowing how they're going to respond to a principal engineer telling them "we need you to think about 4 other things that weren't in the original scope given to you by the PM, can you please go deal with them", why are you surprised when toys are subsequently thrown out of metaphorical prams?
[1] https://github.com/emilybache/GildedRose-Refactoring-Kata
For example, if the position is mostly coding C, then just get them to explain how some X interacts with some Y and how that interacts with some Z. Maybe with a copy of K&R, and get them to point to page so-and-so.
Do that a few times, maybe use a whiteboard, and I think any experienced C developer will be able to tell who's faking it and who really understands within 2 or 3 iterations.
Even if there are no experienced folks in the company it will take longer, but after an hour or two it'd be pretty difficult for anyone to fake a solid understanding of hundreds of pages of K&R.
Astrology persists, but that's not a testament to its efficacy.
You can always just google how to invert a binary tree or whatever and get a solid answer in no time. Which means it’s easy shit. Unless you’re one of the few devs doing PhD-level work in the industry, that CS-heavy “mathy” stuff is not the hardest thing you’re dealing with, and being good at it indicates almost nothing about your ability to deal with the hard problems you will encounter on the job.
(or else you’re greenfielding at a startup and are doing everything on easy-mode anyway, aside from problems you create for yourself on purpose or by accident)
To be sure, there will always be some senior/staff/principal engineers who sit in a room by themselves and code all day without talking to anyone, but, typically, the more senior you get in an organization the more your job consists of things other than writing code. When you're a junior or a mid, however, writing code makes up the vast majority of your day.
In a company like Google (or Google 15 years ago really) there are problems with many possible solutions, but some solutions are better than others. The aim of leetcode recruitment is that it filters for people who can not only solve hard computer science problems, but also recognize what problem they're solving and find good solutions rather than just solutions.
"I can implement this with a library" is only a good solution if the library is solving the problem in the right way. If the library solves the problem but in an inefficient-but-working way, that is not good programming. At the end of the spectrum where Google sits, you need developers who know the difference.
The problem with leetcode is that it doesn't actually filter for people who can understand problems in a general sense. It filters for people who know leetcode problems.
Yep. And even then, you need to know what you’re looking for. Years ago I wanted a library which implemented occlusion on vector shapes for plotter art. I had no idea what terms to search for because I don’t have a strong enough background in graphics. Turns out there are algorithms which help - but it was years before I found a good enough library.
And if you know what to search for, there are so many terrible implementations out there. Look for priority queues on npm. Half the libraries implement custom binary trees or something, despite the fact that using a heap (an array) is way faster and simpler. If you don’t know why a heap is better, how could you possibly evaluate whether any given library is any good?
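The heap point is easy to see with Python's stdlib: `heapq` treats a plain list as a binary heap, so there are no node objects or pointer chasing at all.

```python
import heapq

# A priority queue is just a flat list maintained as a binary heap.
pq: list = []
heapq.heappush(pq, (5, "low"))
heapq.heappush(pq, (1, "urgent"))
heapq.heappush(pq, (3, "normal"))

# Pops come out in priority order; push and pop are O(log n) array operations.
order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
assert order == ["urgent", "normal", "low"]
```

If you don't know why the flat-array layout beats a pointer-based tree (locality, no allocation per node), you can't evaluate the npm libraries the comment describes.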
If I ask you a question along the lines of "write me a function to tell if two number ranges intersect" and your solution is to grab a library instead of writing a simple predicate...then perhaps the role is not a good fit.
"Use a library for everything" is how we ended up with left-pad on npm.
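The range question above is exactly the kind of thing that should be a one-line predicate rather than a dependency (closed ranges assumed; the function name is mine):

```python
def ranges_intersect(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
    """True if the closed ranges [a_start, a_end] and [b_start, b_end] overlap."""
    return a_start <= b_end and b_start <= a_end
```

For example, ranges_intersect(1, 5, 4, 9) is True and ranges_intersect(1, 3, 4, 6) is False; reaching for a package here is the left-pad failure mode.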
Bollocks; that's because JavaScript doesn't have a standard lib.
> You do know we hire people who have to _write_ these libraries
I know, because I'm there. I'm working in VR/AR. We have a number of people who are world experts at what they are doing. Do I go off and re-make a SLAM library because I don't like the layout? No, because I have to ship something usable in a normal time frame. I can't just drop 3 months replicating someone else's work because I think the layout is a bit shit, or because I can't be bothered to read the docs (I mean, obviously there are no docs, but that's a different problem).
But, and I cannot stress this enough, having more than one person making similar or related libraries is a disaster. Nothing gets shipped and everything is devoted to "improving" the competing libraries.
That you have the capacity to sit down, rote-memorise a bunch of different techniques, and practice completing these kinds of tests indicates the likelihood of success in the absence of other hard evidence of likely future performance. In a way it serves the same function as university entrance exams.
That these companies continue to do this for experienced candidates and not just recent graduates without a track record in employment is more perplexing.
This assumes that they don't get a similar number of people lacking skills applying to the experienced positions.
For example, I know some Java devs who could put down 5+ years of professional experience but are still very junior when it comes to problem-solving skills. They are capable of following precise instructions, and there's enough work of that nature to do - but an open-ended "there's a bug that causes a null pointer exception when running in test but not in dev or production" is something that they're not capable of handling.
If they were to apply to a position that said "senior Java developer" based on their years of experience and there was no code test as part of it, they might be able to talk their way through parts of it, and there is a risk that they could get hired.
For a senior java developer position at Google, would it be surprising if they got 500 applicants with 5+ years of experience?
How would you meaningfully filter that down to a set of candidates who are likely to be able to fill the role?
Also, some Leetcode problems just have unnecessarily unfriendly descriptions. For example this one, which is quite simple to implement, but which has an unnecessarily obscure problem description: https://leetcode.com/problems/count-and-say/
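For anyone put off by that description: the problem just asks for the look-and-say sequence, where each term reads off the runs of digits in the previous one. A sketch of how short the actual implementation is:

```python
from itertools import groupby


def count_and_say(n: int) -> str:
    """Return the nth term of the look-and-say sequence (1-indexed)."""
    term = "1"
    for _ in range(n - 1):
        # Describe each run of identical digits as "<run length><digit>".
        term = "".join(f"{len(list(run))}{digit}" for digit, run in groupby(term))
    return term
```

The first five terms are 1, 11, 21, 1211, 111221; decoding the problem statement is harder than writing the loop.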
Of course, hiring someone purely on how good they are at leetcode would be... dumb.
One "subtlety" this misses is that leetcode-style trivia tests only work for those who are willing _and able_ to grind leetcode etc.
There are many who have the aptitude and experience but have e.g. children, or interests outside of programming which means they do not have the required free time to spend rehearsing for this kind of interview.
If you cannot sketch out a simple linked list on a whiteboard, you’re not an engineer and have no business being hired as one at a place like Apple, full stop.
I was asked to sketch out a hash table on the whiteboard. It was trivial, because a trivial hash table is trivial.
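For context, the whiteboard version amounts to something like this (a toy sketch: fixed bucket count, separate chaining, no resizing):

```python
class HashTable:
    """Toy hash table: fixed number of buckets, collisions handled by chaining."""

    def __init__(self, num_buckets: int = 16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # Map the key's hash onto one of the fixed buckets.
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # overwrite an existing key in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```

The interesting follow-ups (load factor, resizing, hash quality) are conversation, not code, which is why the trivial version is trivial.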
Well, unless you have an engineering degree, you're not actually an engineer, but we'll gloss over that. (Big hint: CS isn't an engineering discipline; otherwise testing, requirements gathering, and cost analysis would be a big part of the course.)
I was once asked to implement a distributed fault tolerant hashtable on a whiteboard. I'd still never make one from scratch unless I really really needed to. (and I've worked on distributed datastores....)
but that wasn't part of the coding test, that was part of the design test.
Which is my point: re-implementing things for the hell of it is an antipattern. Rote learning of toy implementations of some everyday function doesn't give you a good signal on whether the candidate understands why you should use x over y.
And again, my point is this: if I catch someone re-implementing something in a code review, they need to have a really good fucking reason to do it, otherwise it'll be rejected.
People always make weird arguments about LC, but what's the point?
Just put effort into it or not, no excuses needed
Our team can get to be very picky about people because ... well, we need to be.
I'm sure some companies have lousy interview practices, but algorithms and data structures are super important if you want to hire someone who's actually competent, who can think for themselves.
If you don't do that in a leetcode interview, you will not make it, or you will be graded as a junior/entry-level engineer.
No senior engineer will pass without doing what you described during the leetcode interview.
There are much greater selective pressures in India and China to excel at this than there are in the US, raising this irrelevant bar to absurd territory if you value your time.
You thought the point of leetcode was to simulate live work conditions? lol
Imo this process is flawed, though, because just one or two rounds of technical interviewing give you enough information about whether the candidate can code. After that you need to understand how the candidate thinks, since most of the job is spent doing things that aren’t coding. These are better probed by design questions, asking the candidate to critique some code, or asking the candidate to explain a project from their resume and then propose alternatives and trade-offs.
Too often you get people who pass this interview process who can code at a basic level but hinder your team by giving poor feedback, having a fixed mindset, being bad communicators, not being able to unblock themselves, etc.
It’s fine if you are just looking to grow a new grad though, although former interns are better.
What did he get out of that in the end? Just more janitor work?
I dunno, I don't think the size of their cash-on-hand account is luck.