There's no shortage of criticism of leetcode-style interview questions, but I found the system design interviews even more asinine.
I have never in my career had to do anything like designing a large scale system. Maybe I'm inadequate, maybe I've been insufficiently motivated, but it hasn't happened. If that's a requirement, say so and don't waste the time of applicants who don't know what a ring tokenizer is.
As it was, it turned into a ridiculous charade session where I watched a bunch of videos and regurgitated them as though I knew what I was talking about. "Oh yes, I'd use a column oriented database and put a load balancer in front".
Without any real-world experience it's just a bunch of BS. I'd never let someone like me design a large scale system - not even close. I don't want to design large scale systems; it sounds boring, and like the type of job where you're expected to be on call 24/7.
I've worked with the Linux kernel, I've written device drivers, I've programmed in everything from C to Go, and that's what I want to keep doing. Why put me through this?
> I have never in my career had to do anything like designing a large scale system.
Giving large scale system design interview questions for a role where someone never has to work with large scale systems would be a weird cargo cult choice.
However, when a job involves working with large scale systems, it's important to understand the bigger picture even if you're never going to be the one designing the entire thing from scratch. Knowing why decisions were made and the context within which you're operating is important for being able to make good decisions.
> I've worked with the Linux kernel, I've written device drivers, I've programmed in everything from Fortran to Go, and that's what I want to keep doing. Why put me through this?
If you were applying to a job for Linux kernel development, device driver development, and Fortran then I wouldn't expect your interviewers to ask about large scale distributed web development either. However, if you're applying to a job that involves doing large scale web development, then your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.
Oddly, knowing the limitations of last year's designs can just as often limit you to last year's solutions. That is to say, the reasons things were done a certain way in the past almost always come down to resource constraints.
Yes, it is good to understand constraints. It is also incredibly valuable to be respectful of the constraints that folks were working on before you got there. Even better to be mindful of the constraints you are working on today, as well. With an eye for constraints coming down the line.
But the evidence is absurdly clear that large systems are grown far more effectively than they are designed. My witticism in the past was that none of the large companies were built with the architectures we now claim are required for growth and success. Worse, many of them were actively built with derision for the "best practices" coming from larger companies. Consider: do you know all of the design choices and reasons behind such things as the old Java GlassFish server?
Even more amusing is watching the slow tread of JSON down the path that was already covered by XML, in particular the attempts at schemas and namespaces.
You missed the key statement in the commenter's post:
"If that's a requirement just say so"
Clearly the roles they're applying for are not concerned with the ab initio design of large-scale systems. Which is why they said what they said. They're not whining for the sake of whining.
Your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.
A drop-in substitute, no. But an engineer who has the wherewithal to truly master the grisly low-level stuff can easily ramp up reasonably quickly in the large scale stuff as well, if needed. To not understand this is to not understand what makes good engineers tick.
We get the fact that, yeah, sometimes, for certain roles, a certain level of battle-tested skill is needed in a given domain. Nonetheless, there's an epidemic of overtesting (in everything from algorithms, to system design, to "culture fit") coursing through the industry's veins at present -- combined with a curious (and sometimes outright bizarre) inability of these companies to think about what's truly required for the roles, to explain in simple, plain English what those requirements are in the job description, and to design the interview process accordingly.
I have really good analytical skills, which I leverage to tackle issues in large scale systems piecewise. I have to suspend my skeptical mind and switch to blue hat thinking to come up with something from scratch. Then I take it apart and iterate over it. I don't think large scale system design is a straightforward process, and pretending it is may very well lead to living in interesting times.
Agreed 100%. I spent ten years at Google, got promoted three times, never did any distributed systems stuff. When I decided to leave about 18 months ago, trying to cram for these interviews and memorize how WhatsApp works was the worst part of interviewing. But I jumped through the hoop and got a couple offers, neither of which involve doing distributed systems work. Those were definitely my weakest interviews, and in the case of my current employer, I literally said to my HM, "I've never done this kind of work, and I'm not going to shine in these interviews. Here's the kind of work I have done and I'd love to talk to you about it instead." I still had to do the sys design interview, but I think maybe that helped get it down-weighted?
I never did any system design, and got offers at Meta and Google (I even aced the system design at Google). It's not a very discriminating interview I believe. And I found it quite fun to prepare.
I'm also a systems programmer for the most part (kernel, hardware bring-up, low level firmware, performance tuning of embedded systems), and I've been invited a couple of times to FAANG interviews, and all of them had this system design interview with NoSQL column databases and load balancers behind nginx proxies of some sort. Problem is, it's pretty far from my field.
Are you going to ask a cloud architect who connects frontends to backends with databases at scale a lot of questions about how to write a high-performance PCI Express network driver in the Linux kernel too?
I would like to be hired by a specialized hiring team at those big companies instead of going through the general hiring process, where you're expected to be a technical God who knows everything at expert level. I rejected all those interview requests.
I have designed large scale systems, but I often feel most of my system design interviewers haven’t and are much more pedantic/nitpicky than is reasonable. Leetcode is better because most people at least understand the questions they’re asking.
Last time I did a system design interview I mentioned database triggers as a way to maintain some kind of data invariant, which flustered my interviewer and I guess made it so their canned follow up questions didn’t work, so they asked if I could think of any other approach (the one they had in mind). I couldn’t and it made the interview very painful.
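For what it's worth, the trigger idea mentioned above is a perfectly real technique. Here is a minimal sketch using Python's built-in sqlite3 (the table, trigger name, and "balance never goes negative" invariant are all invented for illustration):

```python
import sqlite3

# Hypothetical example: enforce a data invariant (non-negative balance)
# with a BEFORE UPDATE trigger instead of application-side checks.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("""
    CREATE TRIGGER no_negative_balance
    BEFORE UPDATE ON accounts
    WHEN NEW.balance < 0
    BEGIN
        SELECT RAISE(ABORT, 'balance cannot go negative');
    END
""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")  # allowed

try:
    # This would drive the balance to -130, so the trigger aborts it.
    conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 70
```

The same pattern works in most RDBMSes; whether it's a good fit depends on how visible you want the invariant to be to application developers, which is presumably what the canned follow-up was fishing for.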
You just reminded me of the time I was explaining how to do consensus failover, and the interviewer asked me to do distributed single thread design. Great use of everyone's time. The market is full of LARPers who wish they had the chance to design something real, and they will take it out on you.
I find system design interviews generally provide significantly more signal than coding interviews, but still less signal than a lunch interview.
It's a test of breadth and depth and it takes an apt, possibly experienced interviewer to navigate the candidate's domain knowledge effectively. The goal isn't to build some elaborate, buzzwordy house of cards, but to understand where tools are appropriate and where they are not, to think on your feet and work with an interviewer to build a system that makes sense. And, just like the real world, criteria should shift and change as you flesh out the design.
In particular people trip over the same things every time: reaching for things they do not understand, not understanding fundamental properties of infrastructure (CPUs, memory, networking), and cache invalidation.
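On the cache invalidation point: a minimal sketch of the usual shape of the problem, assuming nothing beyond the standard library. A TTL cache bounds staleness, while explicit invalidation on write shrinks the stale window; candidates often reach for caching without being able to say which of the two they rely on:

```python
import time

# Illustrative TTL cache, not production code. The classic pitfall:
# between a write to the source of truth and the cached entry expiring
# (or being explicitly invalidated), readers see stale data.
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy expiry on read
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def invalidate(self, key):
        # Explicit invalidation after a write keeps the stale window small.
        self._store.pop(key, None)

cache = TTLCache(ttl_seconds=60)
cache.put("user:1", {"name": "Ada"})
assert cache.get("user:1") == {"name": "Ada"}
cache.invalidate("user:1")  # e.g. right after updating the database row
assert cache.get("user:1") is None
```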
When I interview folks I always preface the prompt with an offer to provide advice or information, acting as if I were a trusted colleague or stakeholder.
If you want to be low level, then I'd question why we're conducting a large system design interview anyways. We could certainly frame it as a small system design instead, and focus on the universe contained within the injection-molded exoskeleton of the widget.
If you say "column oriented" I'm going to ask you to explain why. I'm going to challenge what the load balancer is doing or what you expect it to do, and why.
Building large systems well in the real world, and watching them scale up under load with grace (and without contorting your opex to have only lunar aspirations), is somewhat akin to watching your child ride their bike for the first time after the training wheels are off. It feels good. Just like seeing your hardware in the field produce a low failure yield.
There is satisfaction in doing good work, or at least there should be.
Careful with that line of thinking. There's a significant body of research showing that people feel like a "chat about tech" interview provides the best signal, but it empirically performs the worst, with a roughly 50% correlation to on-the-job performance. You're better off flipping a coin, because at least then you're not biased.
source: https://en.wikipedia.org/wiki/Noise:_A_Flaw_in_Human_Judgmen...
> If you say "column oriented" I'm going to ask you to explain why. I'm going to challenge what the load balancer is doing or what you expect it to do, and why.
And I would have happily repeated something from a video I'd watched on YouTube two nights earlier. It's a cram test.
I built one large scale system in my career (when I worked at Google, I made a folding@home screensaver that used up idle cycles in production).
When I built it I ignored 95% of what Google knew about large scale system design, because that knowledge was really about building scalable web services and storage systems, while I needed to build a batch scheduler which could handle many millions of tasks simultaneously. We depended on a few core scalable resources available in production (Borg, Chubby, Bigtable, Colossus) and tried as hard as possible to avoid spamming them with excessive traffic without adding lots of complex caches and other hard-to-debug systems. In fact, "simplicity" was the primary design goal. The system worked, it scaled, and if I'd followed all the normal Google guidelines, it wouldn't have (because scientific computation and web load balancers differ). Not sure what to take away from that.
These days in system design interviews I usually focus on limiting the use cases for the system so that I can architect something that: has linear resource consumption for linear workloads over 2-3 orders of magnitude; is simple enough that a small group of engineers can understand the whole system and debug it when it breaks; and doesn't try to optimize for future use cases (clearly documenting the limits of the system) or accommodate too many oddball one-off user requests.
I have designed a few systems, but my issue with the system design interview is that this is not how it works. There is never a blank page in real life like it is in one of these interviews, and the stuff that's actually on the page matters more than the stuff you're "supposed" to say in these sessions.
Yes, column-oriented databases absolutely do work better for OLAP use cases, but is it better enough for the specific use case to be worth introducing a new database technology into the organization, or would a new database within the existing managed psql instance be good enough for now? Those detailed organizational questions usually matter more in the first few iterations of systems than "principled" architectural concerns.
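The "better for OLAP" claim itself is easy to make concrete. A toy sketch (not a real database; all names invented) of why an aggregate over one column favors columnar layout: the row store must visit every row even when only one field matters, while the column store scans one contiguous array, which is also what makes compression and vectorized execution effective in real column stores:

```python
# The same 3-column table stored row-wise vs column-wise.
rows = [(i, i * 2, i * 3) for i in range(1_000)]   # row-oriented
columns = {                                         # column-oriented
    "a": [r[0] for r in rows],
    "b": [r[1] for r in rows],
    "c": [r[2] for r in rows],
}

# Row store: touch every row tuple just to read field "b".
total_row = sum(r[1] for r in rows)

# Column store: scan only the "b" column, roughly 1/3 of the data here.
total_col = sum(columns["b"])

assert total_row == total_col  # same answer, very different I/O profile
```

Whether that I/O advantage justifies a new database technology in the org is, as the comment says, the question that actually matters.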
The useful kind of design is: What is the next best iteration of this system? Maybe with an appendix at the end discussing some ideas for the iteration after that. Sometimes that next iteration is actually the first iteration of the system, in which case you should definitely not be drawing 20 boxes with different components of how it will fit together, you should be looking for the simplest possible thing that could work.
One of the fun things at big successful companies is that there are actually a lot of systems that are quite a few iterations in, with a stable baseline of usage properties. With these, it actually can make sense to draw a bunch of boxes with different components targeting different well-known pain points in a way that avoids trading off important existing capabilities. But again, that's exactly the opposite of a blank page, and no amount of digging into the interviewer's toy system design question can get deep enough for that.
All of my answers to these questions - which have always been very well-received - have been over-engineered solutions that I'd never actually pursue in a real job. But interviewers aren't really prepared for questions like "what frameworks are already being used and familiar to most teams at the company?" followed by "since we already have familiarity with postgresql+rails+react, we should set up a new non-replicated but periodically backed up database in that database instance, start with a few tables with some reasonable relationships, use activerecord and its migrations, and implement the front-end in react, then we should launch it and see what bottlenecks and pain points arise".
I get it, these interviews are trying to see if you have the knowledge and chops to solve those pain points and bottlenecks as they arise, but I'm sorry, "do an up-front design of a huge fancy system" just doesn't answer that question.
I was asked how I would replace a large scale but inadequate logging infrastructure, and I started with "In a way that minimally disrupts everything currently in place for monitoring and alerting".
I'm not sure how well that answer played out but it's still the correct answer.
I'm shocked, even though I shouldn't be, at this part of the technical interview for devs, and that devs can pass these interviews without demonstrating practical experience.
I’ve had interviews and jobs at three smaller companies where I was actually coming in as an architect (2016-2020). But they wanted to know about my real world experience.
Luckily, at my second job out of college way back in 1999, I actually had to manage servers as well as develop, so I could talk both theory and practice.
From 1999-2012, managing infrastructure was part of my job at two companies.
I’ve never interviewed at BigTech for a software engineering position. But I did do a slight pivot and interview (and presently work) in cloud consulting at BigTech specializing in “application modernization” - cloud DevOps + development.
Sure I had the one initial phone screen where I had to talk theory about system design. But my entire loop consisted of my walking through my past system design experience - and not all centered around AWS.
And yes I can talk intelligently about all of the sections that the page covers including your example of columnar vs row databases. But I wouldn’t expect that from most devs.
I was never on call at my last job. We had “follow the sun” support. But our site was only business critical during the day. One of the first things I insisted on with my CTO is that we hire a managed service provider for non-business-hours support.
Sort of related: at most tech companies, the difference between mid and Senior is not coding ability. It’s system design and “scope” and “impact”
Interesting take. Different strokes for different folks? You aren't right or wrong here, it's preference.
That being said I am in the opposite camp and I find that more and more, the systems that I am building and maintaining are large and distributed.
Despite what a lot of commenters here on HN will say - yes there absolutely are businesses out there that need tooling like Kubernetes and huge column databases.
IMO being able to think about the wider implications for smaller subsets of a system is an important ability. That being said, if your organization allows ICs to make technical decisions without any sort of review from someone whose job it is to architect large systems, idk, that review seems like something you should have.
It also depends greatly on what 'layer' of the stack you work in.
There's a difference between talking about the wider implications of a system and acting like software can be planned top-down with any sense of accuracy.
If you want to ask me about the wider implications for smaller subsystems, then ask me about the Linux kernel. How did system design become the only way to demonstrate this particular skill?
I think they're BS also. What you should be looking for is whether the person understands how the project(s) they have worked on fit into a larger system. For instance, I have a high level understanding of how the other systems at the company I work for operate. I know what they do, I know what data they ingest and emit. I know how the data I consume is generated. I know how it all fits together and I can talk conversantly about it. I know when a change request comes in whether it affects other teams, and I speak up when it does.
I think this is what you should look for in most candidates except high level system engineers and jr developers.
If you apply to a generalist SWE role at FAANG, you're expected to have some knowledge about these things since you're likely to encounter them. I didn't have such experience either, but it's now part of my day to day job.
If you apply for a targeted specialized role, you may get a system design question that relates to your domain. If there's a general system design interview, it should have less weight in the process. It's still an indicator of how candidates communicate and think through a design. Plus, these kinds of things should be general knowledge for a computer scientist nowadays.
Interviewee: Long answer with novice RDBMS choices.
Interviewer (implicitly): I will judge you based on how you can "problem solve" this many-to-many conundrum.
Interviewee: Use caching, scaling, edge nodes, Kafka, because....
....
HIRED
....
Interviewee spends the first 6 months writing a test framework to automatically run tests of deployments of an internal tool that runs a Python ingestion pipeline. The next 6 months: figuring out how to help users add type hints in their IDE for pipelines to help catch errors faster, and how to roll that out to 100+ ICs in the company. Sadly fails.
Never ever writes a line of code to make "twitter", learns a ton on how to work with people.
Not sure what the point is here, but ... in my experience as a hiring manager, I haven't seen huge success in testing whether a candidate can do the exact thing they will be doing in the first 6 months (in software engineering).
I had better results in asking questions that bias towards a certain set of behavior traits (yes, this has other long-term problems) and not certain skills.
I understand that some candidates don't like it, because they are obviously good at something and want to be asked questions in that field. But ... I already know you are good at it because you said so on your CV, so I'm not wasting time on that (yes, if you are a good liar it takes us longer to find out).
And the culture is also promoting gameable interviews that tell the interviewer absolutely nothing about a candidate except that they managed to parrot an answer from an online guide or DDIA.
I am sure a skilled interviewer can tell the difference between someone who knows their system design and someone who does not. But it's paradoxical to have a guide prepare someone on so many topics, of which maybe 1% overlaps with what any one person has actually done in their life.
When I see something like this I wonder if there are people out there who actually go chapter-by-chapter from start to finish, spending dozens of hours learning the content a random GitHub repo claimed was important.
You can always go for the fundamentals and go through DDIA. It is heavy on foundations but does not really give examples of specific real world systems. For that, Alex Xu’s books are probably the most popular.
This repo is a very high level collection of topics with short descriptions rather than a learning source. You can skim it a couple of minutes. But if you're curious about a topic, why wouldn't you read a free resource on it if the quality seems good?
“quality seems good” is the point of contention here; how would you know? You’re either already knowledgeable about this topic and therefore necessarily didn’t learn it from here or you’re not and therefore unqualified to judge its quality at all.
Until recently I was a principal engineer at Amazon, so maybe my opinion has some weight.
The system design interview is more about the interviewee asking questions: taking time to understand the problem, asking about product features or SLAs, understanding functional and non-functional requirements.
Then it's about the candidate showing some knowledge, showing they can think and reason beyond the immediate coding task: demonstrate the ability to make judgments, simplify where possible, discuss costs and trade-offs.
This interview is not about the candidate building some system at scale themselves. Building and supporting one has trials and lessons you only learn by doing and failing, not through interview prep or YouTube videos.
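The "understand requirements and SLAs first" step described above usually starts with back-of-envelope estimation. A sketch, with every number invented for illustration rather than taken from any real rubric:

```python
# Hypothetical capacity estimation for a read-heavy service.
daily_active_users = 10_000_000
writes_per_user_per_day = 2
read_write_ratio = 100      # assumed read-heavy workload
seconds_per_day = 86_400
peak_factor = 3             # rough peak-to-average multiplier

write_qps = daily_active_users * writes_per_user_per_day / seconds_per_day
read_qps = write_qps * read_write_ratio
peak_read_qps = read_qps * peak_factor

print(f"avg write QPS ~ {write_qps:.0f}")     # ~231
print(f"avg read QPS  ~ {read_qps:.0f}")      # ~23,148
print(f"peak read QPS ~ {peak_read_qps:.0f}")
```

Numbers like these are only there to drive the judgment calls that follow: whether a single database suffices, whether caching is warranted, and so on.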
The best technical interviews I've been in as an interviewee have been those where the expectations are clear. In your example:
"We're not expecting you to create Twitter in 15 minutes, but we want to understand how you think about the challenges and key considerations of building a large system like Twitter"
Many interviewers fail to provide enough context and that leaves the interpretation of the prompt too wide open. At that point, the interview has failed since whether a candidate can provide an answer that is aligned with the expectations of the interviewer has an element of chance to it.
> Many interviewers fail to provide enough context and that leaves the interpretation of the prompt too wide open.
Yes, but this is not a defect as you're viewing it.
In the real world at Amazon, your job is to deal with ambiguity. The hand-holding phase where you're given or told exactly what to do is maybe 1-2 years for a college-level hire. You work with ambiguity or you move out.
If you do not want the ambiguity challenge, then Amazon is not the best fit for you. It's not for everyone, and Amazon certainly has big problems in its culture. I'm not defending any of it, just telling you what it is.
> Many interviewers fail to provide enough context and that leaves the interpretation of the prompt too wide open
So do clients/customers. What's your point? That an interview to assess whether a developer can elicit requirements should be easier than dealing with an actual customer?
I decline non big tech interviews that involve some kind of "system design". I've worked with so-called staff engineers who designed systems with textbook antipatterns (I literally studied them). This idea that you need to regurgitate some bloated mess onto the cloud to solve even the simplest problems needs to stop.
This is not how people learn. And I always found these books more useful after I had actually worked on a system than before. Do you know why people ask such questions at any company? It's because there is a rubric at every company, and half the people don't have a clue what else to ask: either they have no other good ideas, or they did it themselves once and it was a nice problem, without realizing that the experience that taught them never actually teaches the same to the new guy.
Skill acquisition is not additive; it is about comparing choices and critically analyzing risks on things you *have already worked on*. It's as much about knowing what won't work, i.e. subtractive. If you have never worked on a distributed system and are asked to work on some part of it, the reasonable way it happens is:
1. You will initially be asked to start working on a small part.
2. You will/should have someone to help you if you have no clue. This is called seeking feedback, and it is done in Google Docs where people critique your design and you iterate.
3. You should have the time and sense to read up when you do (1), (2).
4. Maintain mental sanity and find mentors to help you out.
Notice that system design interview prep will help you with exactly *zero* of the steps above. For all other situations where you are forced to do more than the above, run away in the opposite direction.
I think if someone just published a cheat sheet on the actual rubric, it would be so thoroughly gamed that it would prove the point that these things are pointless.
I've seen people cram their heads with all kinds of prep problems like designing an elevator, a parking lot or a social media site, and I have never understood the rationale behind such questions. Design isn't something you can do in an hour. Design is also a trial and error thing. Slight changes in data flow might change the entire system architecture.
I have never in my career had to do anything like designing a large scale system. Maybe I'm inadequate, maybe I've been insufficiently motivated, but it hasn't happened. If that's a requirement, say so and don't waste the time of applicants who don't know what a ring tokenizer is.
As it was, it turned into a ridiculous charade session where I watched a bunch of videos and regurgitated them as though I knew what I was talking about. "Oh yes, I'd use a column oriented database and put a load balancer in front".
Without any real-word experience it's just a bunch of BS. I'd never let someone like me design a large scale system - not even close. I don't want to design large scale systems, it sounds boring and like the type of job where you're expected to be on call 24/7.
I've worked with the Linux kernel, I've written device drivers, I've programed in everything from C to Go, and that's what I want to keep doing. Why put me through this?
Giving large scale system design interview questions for a role where someone never has to work with large scale systems would be a weird cargo cult choice.
However, when a job involves working with large scale systems, it's important to understand the bigger picture even if you're never going to be the one designing the entire thing from scratch. Knowing why decisions were made and the context within which you're operating is important for being able to make good decisions.
> I've worked with the Linux kernel, I've written device drivers, I've programed in everything from Fortran to Go, and that's what I want to keep doing. Why put me through this?
If you were applying to a job for Linux kernel development, device driver development, and Fortran then I wouldn't expect your interviewers to ask about large scale distributed web development either. However, if you're applying to a job that involves doing large scale web development, then your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.
Yes, it is good to understand constraints. It is also incredibly valuable to be respectful of the constraints that folks were working on before you got there. Even better to be mindful of the constraints you are working on today, as well. With an eye for constraints coming down the line.
But, evidence is absurdly clear that large systems are grown far more effectively than they are designed. My witticism in the past that none of the large companies were built with the architectures that we seem to claim are required for growth and success. Worse, many of them were actively done with derision for "best practices" coming from larger companies. Consider, do you know all of the design choices and reasons behind such things as the old Java GlassFish server?
Even more amusing, is to watch the slow tread of JSON down the path that was already covered by XML. In particular the attempts at schemas and namespaces.
"If that's a requirement just say so"
Clearly the roles they're applying for are not concerned with the ab initio design of large-scale systems. Which is why they said what they said. They're not whining for the sake of whining.
Your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.
A drop-in substitute, no. But an engineer who has the wherewithal to truly master the grisly low-level stuff can easily ramp up reasonably quickly in the large scale stuff as well, if needed. To not understand this is to not understand what makes good engineers tick.
We get the fact that, yeah, sometimes, for certain roles a certain level of battle-tested skills are needed in any domain. Nonetheless, there's an epidemic of overtesting (from everything to algorithms, to system design, to "culture fit") coursing through the industry's veins at present. Combined with a curious (and sometimes outright bizarre) inability of these companies to think about what's truly required for the roles -- and to explain in simple, plain English terms what these requirements are in the job description, and to design the interview process accordingly.
Deleted Comment
Are you going to ask a cloud architect who connects frontend to backend with databases at scale a lot of questions about how to write a pci express network driver in linux kernel with great performance too?
I would like to be hired by a specialized hiring team in those big companies instead of going through the general hiring process that you're expected to be a technical God who knows everything at expert level.
I rejected all those interview requests.
Deleted Comment
Last time I did a system design interview I mentioned database triggers as a way to maintain some kind of data invariant, which flustered my interviewer and I guess made it so their canned follow up questions didn’t work, so they asked if I could think of any other approach (the one they had in mind). I couldn’t and it made the interview very painful.
It's a test of breadth and depth and it takes an apt, possibly experienced interviewer to navigate the candidate's domain knowledge effectively. The goal isn't to build some elaborate, buzzwordy house of cards, but to understand where tools are appropriate and where they are not, to think on your feet and work with an interviewer to build a system that makes sense. And, just like the real world, criteria should shift and change as you flesh out the design.
In particular people trip over the same things every time: reaching for things they do not understand, not understanding fundamental properties of infrastructure (CPUs, memory, networking), and cache invalidation.
When I interview folks I always preface the prompt with an offer to provide advice or information, acting as if I were a trusted colleague or stakeholder.
If you want to be low level, then I'd question why we're conducting a large system design interview anyways. We could certainly frame it as a small system design instead, and focus on the universe contained within the injection-molded exoskeleton of the widget.
If you say "column oriented" I'm going to ask you to explain why. I'm going to challenge what the load balancer is doing or what you expect it to do, and why.
Building large systems in the real world well, and watching them scale up under load with grace (and without contorting your opex to have only lunar aspirations) is somewhat akin to watching your child ride their bike for the first time after the training wheels are off. It feels good. Just like seeing your hardware in the field produce a low failure yield.
There is satisfaction in doing good work, or at least there should be.
Careful with that line of thinking. There’s a significant body of research showing that people feel like a “chat about tech” interview provides the best signal, but it empirically performs the worst with a roughly 50% correlation to on-the-job performance. You’re better off flipping a coin because at least then you’re not biased.
source: https://en.wikipedia.org/wiki/Noise:_A_Flaw_in_Human_Judgmen...
And I would have happily repeated something from a video I'd watched on YouTube two nights earlier. It's a cram test.
When I built it I ignored 95% of what Google knew about large scale system design because that knowledge was really about building scalable web services and storage systems, while I needed to build a batch scheduler which could handle many millions of tasks simultaneously. We depended on a few core scalable resources available in production (borg, chubby, bigtable, colossus) and tried as hard as possible to avoid spamming them with excessive traffic without adding lots of complex caches and other hard-to-debug systems. In fact, "simplicity" was the primary design goal. The system worked, it scaled, and if I'd followed all the normal Google guidelines, it wouldn't have (because scientific computation and web load balancers differ). Not sure what to take away from that.
These days in system design interviews I usually focus on limiting the use cases for the system so that I can architect something that: consumes resources linearly for linear workloads over 2-3 orders of magnitude, is simple enough that a small group of engineers can understand the whole system and debug it when it breaks, and doesn't try to optimize for future use cases (clearly documenting the limits of the system) or accommodate too many oddball one-off user requests.
Yes, column-oriented databases absolutely do work better for OLAP use cases, but is it better enough for the specific use case to be worth introducing a new database technology into the organization, or would a new database within the existing managed psql instance be good enough for now? Those detailed organizational questions usually matter more in the first few iterations of systems than "principled" architectural concerns.
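The row-vs-column point can be made concrete with a toy sketch (plain Python lists standing in for the two storage layouts, not a real database): an OLAP-style aggregate over one field touches every row in a row store, but only one contiguous column in a column store, which is why the latter scans, compresses, and vectorizes better.

```python
# Row store: each record is stored together; reading one field
# still means touching and decoding every whole row.
rows = [
    {"region": "us", "amount": 120},
    {"region": "eu", "amount": 80},
    {"region": "us", "amount": 50},
]
total_row_store = sum(r["amount"] for r in rows)

# Column store: each field is stored as one contiguous array, so
# an aggregate over "amount" scans only the bytes it actually needs.
columns = {"region": ["us", "eu", "us"], "amount": [120, 80, 50]}
total_col_store = sum(columns["amount"])

assert total_row_store == total_col_store == 250
```

Both layouts give the same answer, of course; the organizational question in the parent comment is whether the scan-efficiency difference is worth operating a second database technology.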
The useful kind of design is: What is the next best iteration of this system? Maybe with an appendix at the end discussing some ideas for the iteration after that. Sometimes that next iteration is actually the first iteration of the system, in which case you should definitely not be drawing 20 boxes with different components of how it will fit together, you should be looking for the simplest possible thing that could work.
One of the fun things at big successful companies is that there are actually a lot of systems that are quite a few iterations in, with a stable baseline of usage properties. With these, it actually can make sense to draw a bunch of boxes with different components targeting different well-known pain points in a way that avoids trading off important existing capabilities. But again, that's exactly the opposite of a blank page, and no amount of digging into the interviewer's toy system design question can get deep enough for that.
All of my answers to these questions - which have always been very well-received - have been over-engineered solutions that I'd never actually pursue in a real job. But interviewers aren't really prepared for questions like "what frameworks are already being used and familiar to most teams at the company?" followed by "since we already have familiarity with postgresql+rails+react, we should set up a new non-replicated but periodically backed up database in that database instance, start with a few tables with some reasonable relationships, use activerecord and its migrations, and implement the front-end in react, then we should launch it and see what bottlenecks and pain points arise".
I get it, these interviews are trying to see if you have the knowledge and chops to solve those pain points and bottlenecks as they arise, but I'm sorry, "do an up-front design of a huge fancy system" just doesn't answer that question.
I'm not sure how well that answer played out but it's still the correct answer.
I am shocked, even though I shouldn't be, that this is part of the technical interview for devs, and that devs can pass these interviews without demonstrating practical experience.
I’ve had interviews and jobs at three smaller companies where I was actually coming in as an architect (2016-2020). But they wanted to know about my real-world experience.
Luckily, at my second job out of college way back in 1999, I actually had to manage servers as well as develop, so I could talk both theory and practice.
From 1999 to 2012 I managed infrastructure as part of my role at two jobs.
I’ve never interviewed at BigTech for a software engineering position. But I did do a slight pivot and interview (and presently work) in cloud consulting at BigTech specializing in “application modernization” - cloud DevOps + development.
Sure, I had the one initial phone screen where I had to talk theory about system design. But my entire loop consisted of walking through my past system design experience - and not all of it centered around AWS.
And yes I can talk intelligently about all of the sections that the page covers including your example of columnar vs row databases. But I wouldn’t expect that from most devs.
I was never on call at my last job. We had “follow the sun” support. But our site was only business critical during the day. One of the first things I insisted on with my CTO was that we hire a managed service provider for non-business-hours support.
Sort of related: at most tech companies, the difference between mid-level and Senior is not coding ability. It’s system design and “scope” and “impact”.
That being said I am in the opposite camp and I find that more and more, the systems that I am building and maintaining are large and distributed.
Despite what a lot of commenters here on HN will say - yes there absolutely are businesses out there that need tooling like Kubernetes and huge column databases.
It also depends greatly on what 'layer' of the stack you work in.
I think this is what you should look for in most candidates, except high-level systems engineers and junior developers.
Nothing stops the interviewer from asking you even more relevant system design questions like:
* How would you build a Linux kernel from scratch?
* How would you design a common interface for any device drivers (that you are familiar with)?
If you apply for a targeted, specialized role, you may get a system design question that relates to your domain. If there's a general system design interview, it should have less weight in the process. It's still an indicator of how candidates communicate and think through a design. Plus, these kinds of things should be general knowledge for a computer scientist nowadays.
If you apply for a specialized targeted role.... you still get the same generalized interview loop
Interviewee: Long answer with novice RDBMS choices.
Interviewer (implicitly): I will judge you based on how you can "Problem Solve" the many-to-many conundrum.
Interviewee: Use caching, scaling, edge nodes, Kafka because....
....
HIRED
....
Interviewee spends the first 6 months writing a test framework to automatically run deployment tests for an internal tool that runs a Python ingestion pipeline. Spends the next 6 months figuring out how to help users add type hints in the IDE for pipelines to catch errors faster, and how to roll that out to 100+ ICs in the company. Sadly fails.
Never ever writes a line of code to build "twitter", but learns a ton about how to work with people.
....
Interviews again: "How do you design twitter"?
I understand that some candidates don't like it, because they are obviously good at something and want to be asked questions in that field. But... I already know you are good at it because you said so on your CV, so I'm not wasting time on that (yes, if you are a good liar it takes us longer to find out).
So your example, working as intended?
I am sure a skilled interviewer can tell the difference between a candidate who knows his system design and one who does not. But it's paradoxical to have a guide prepare someone on so many topics, of which only 1% is anything one person has actually done in their life.
> I had better results in asking questions that bias towards a certain set of behavior traits
I am curious. What do you ask? Like do you ask tech centric questions around Proactiveness, Empathy, Motivation, Conflict Resolution etc?
Are you always this dismissive? How do you learn?
A system design interview is more about the interviewee asking questions: taking time to understand the problem, asking about product features or SLAs, and understanding the functional and non-functional requirements.
Then it's about the candidate showing some knowledge, showing they can think and reason beyond the immediate coding task: demonstrating the ability to make judgments, simplify where possible, and discuss costs and trade-offs.
This interview is not about the candidate having built some system at scale themselves. Building and supporting one has trials and lessons you only learn by doing and failing, not through interview prep or YouTube videos.
The best technical interviews I've been in as an interviewee have been those where the expectations are clear. In your example:
Many interviewers fail to provide enough context, and that leaves the interpretation of the prompt too wide open. At that point, the interview has failed, since whether a candidate can provide an answer aligned with the expectations of the interviewer has an element of chance to it.
Yes, but this is not a defect the way you're viewing it.
In the real world at Amazon, your job is to deal with ambiguity. The hand-holding phase, where you're given or told exactly what to do, lasts maybe 1-2 years for a college-level hire. You work with ambiguity or you move out.
If you don't want the ambiguity challenge, then Amazon is not the best fit for you. It's not for everyone, and Amazon certainly has big problems in its culture. I'm not defending any of it, just telling you what it is.
So do clients/customers. What's your point? That an interview to assess whether a developer can elicit requirements should be less hard than dealing with an actual customer?
Skill acquisition here is not additive; it is about comparing choices and critically analyzing risks on things you *have already worked on*. It's as much about knowing what won't work, i.e., it's subtractive. If you have never worked on a distributed system and are asked to work on some part of one, the reasonable way it happens is:
1. You will be asked to initially to start working on a small part.
2. You will/should have someone to help you if you have no clue. This is called seeking feedback, and it is done in Google Docs, where people critique your design and you iterate.
3. You should have the time and sense to read up when you do (1), (2).
4. Maintain mental sanity and find mentors to help you out.
Notice that a system design interview prep will help you with exactly *zero* of the steps above. For all other situations where you are forced to do more than the above, run away in the opposite direction.
I think if someone just published a cheat sheet of the actual rubric, it would be so thoroughly gamed that it would prove the point that these things are pointless.