I'm glad to see this getting roasted in the comments, as it's a really good example of how companies put out self-serving pseudo-statistical nonsense in an effort to promote themselves.
There's no effort to quantify what "technical" or "communication" skills are - these are left to the interpretation of the interviewer. It makes no effort to show where these engineers are going, what the interview process they're completing looks like, what impact demographics had on this, etc.
I find this stuff repugnant. It perpetuates the myth that there's something really special about Silicon Valley engineers, while making only lazy and perfunctory efforts to examine any alternative explanations than "this is where the rockstar ninja coders work." Shameful.
"There's no effort to quantify what "technical" or "communication" skills are - these are left to the interpretation of a the interviewer."
And yet, these scores are measuring something, and averaged across tens of thousands of technical interviews, you have enough statistical power to average out the particularities of each interviewer.
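To make the intuition concrete, here's a toy simulation (my own sketch, not the article's methodology): if every interviewer has a personal quirk, a single interview is noisy, but the mean over thousands of interviews converges. The caveat is that a bias shared by all interviewers would not average out.

    import random

    random.seed(0)

    # Toy model: observed score = true skill + interviewer bias + noise.
    true_skill = 3.0                                     # on a 1-4 scale
    biases = [random.gauss(0, 0.5) for _ in range(500)]  # per-interviewer quirks

    def one_interview():
        return true_skill + random.choice(biases) + random.gauss(0, 0.3)

    single = one_interview()
    averaged = sum(one_interview() for _ in range(10_000)) / 10_000
    print(f"single interview: {single:.2f}  mean of 10k: {averaged:.2f}")
    # Random interviewer variation shrinks with n; a *shared* bias would not.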
I'm sorry you find the results repugnant, but the results are what they are. And the article did have a large section on limitations of their analysis.
It's not the results I find repugnant, it's the assumption that the results have any real world validity. The _something_ they're measuring is as likely to be demographic biases as it is technical or communication skills.
> It perpetuates the myth that there's something really special about Silicon Valley engineers
Does this myth still have much traction? If anything, my general regard for engineers in the Bay Area has steadily declined in the last few years. There are so many really worthless folks who have only figured out how to look like they have a clue, but go any deeper and they flail. I know I'm painting with a broad brush, and that's not fair, but most of the great engineers I work with are at various other places around the country, not California.
Personally, as a (now relocated) Bay Area native, I agree. Overall, I think there's still a lot of prestige attached to large Silicon Valley firms, even if some of the gloss has justifiably started to fade.
This article certainly assumes that myth is still in place.
Hey, it's not our fault! Where else are posers gonna flock to pose? I swear, even with all the remote work, more than 95%* of the people that worked here 5 years ago still do. We live here.

* made up statistic.
I know a ton of engineers. Of them all, those working at FAANG are profoundly less skilled than the others. It’s impossible to miss; it’s that obvious. Maybe my social network is an outlier, but I really, really doubt it.
Author here. Yes, the skills are left to the interpretation of the interviewer, but most of our interviewers are senior engineers at FAANG. We've done quite a bit of work internally to make sure our interviewers are well calibrated, and we have a living calibration score for each one (calibration is based on how the interviewees they interview end up doing in real interviews).
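For the curious, a minimal sketch of what a calibration score like this could look like; this is an illustration of the idea, not our production code, and the numbers are invented. It treats calibration as how well an interviewer's mock scores rank candidates' real-interview outcomes (an AUC-style pairwise check):

    # (mock score 1-4, passed a real interview?) for one interviewer's candidates
    mock = [(4, True), (2, False), (3, True), (1, False), (4, True)]

    def calibration(pairs):
        # fraction of (passed, failed) candidate pairs the mock score ranks
        # correctly; ties count half (equivalent to AUC)
        passed = [s for s, ok in pairs if ok]
        failed = [s for s, ok in pairs if not ok]
        if not passed or not failed:
            return None
        hits = sum((p > f) + 0.5 * (p == f) for p in passed for f in failed)
        return hits / (len(passed) * len(failed))

    print(calibration(mock))  # 1.0 here: mock scores perfectly rank outcomes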
The interviews in question are a mix of algorithmic interviews and systems design interviews.
Also, if I ever use "rockstar" or "ninja" in my posts, I hope someone finds me on the street and punches me in the face. I'd deserve it.
FAANG interviewer here. I've conducted many hundreds of interviews for multiple FAANG companies. The totality of my training in how to interview is about 4 hours (combining the training from each company). I have zero confidence in the usefulness of calibrating my scores and expect that neither I nor anyone else who does this would reliably give the same person the same score most of the time, outside of a small percentage of outlier candidates.
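If anyone wanted to test that claim, the standard tool is an inter-rater agreement statistic. A quick sketch (scores invented purely for illustration) using Cohen's kappa, where 1.0 is perfect agreement and 0 is chance-level:

    from collections import Counter

    # two interviewers scoring the same 8 candidates (hypothetical data)
    rater_a = [4, 3, 2, 4, 1, 3, 2, 4]
    rater_b = [3, 3, 1, 4, 2, 2, 2, 3]

    def cohens_kappa(a, b):
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        ca, cb = Counter(a), Counter(b)
        expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
        return (observed - expected) / (1 - expected)

    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.17: poor agreement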
Is there any statistical reason I should assume an interviewer at FAANG has some kind of special insight into what good technical and communication skills are? Were any attempts made to adjust for demographic biases? Is it possible, for instance, that Dropbox engineers are disproportionately taller white dudes with nice hair? I know that sounds a bit unserious, but all of those factors would increase the likelihood of a higher score.
I appreciate that you never used "rockstar" or "ninja". That dig was a bit unfair of me.
> "but most of our interviewers are senior engineers at FAANG"
Wait, so the result is that "interviewers from FAANG companies rate highly interviewees from FAANG (and FAANG-adjacent) companies"?
Or maybe more causally: "those who have already passed FAANG-style interviews are more likely to pass interviews conducted by FAANG people"?
I appreciate the mission here - but if the idea is to give people a fair shot even if they don't have the pedigree of FAANG, building FAANG interview styles into the system seems counter to your stated goals. If anything, these results are concerning - you can interpret the findings in (at least) two ways:
- these companies hire or produce superior engineers, the results you got are indicative of a broader higher caliber of engineer in those companies.
- the interviewing exercise is optimizing for "people who can pass/have already passed FAANG-style interviews", which rolls in all of the myriad biases of FAANG hiring and perpetuates them.
While there may not be any reliable measure of communication skills across the industry, the fact that the data is based on scores given by a large number of people means, by definition, that it's accurate.

Think about it carefully - if people rate people as being good at communication, then there is no reason to quantify it any other way. There are some obvious flaws here, like the quantity of data and its normalization, but it's basically a tautology.
"Think about it carefully - if people rate people as being good at communication, then there is no reason to quantify it any other way. There are some obvious flaws here, like the quantity of data and it's normalization, but it's basically a tautology."
So if a lot of people rate you as a great leader, you are a great leader? Even if you lie to their faces and deliver terrible results? Objective reality doesn't matter?
So the greatest leader in the world is in North Korea?
Communication is a clearly measurable skill. Just because a lot of people have been sold a lie, that doesn't make it true.
> It perpetuates the myth that there's something really special about Silicon Valley engineers
Why are you sure it's a myth? My prior would be to believe that engineers at the most exclusive companies with the highest hiring bars that pay 3-5 times more than average would be better programmers. The article is just one data point confirming what intuitively should be true.
If Silicon Valley engineers are no better than anywhere else then someone should notify the execs at FAANG, I'm sure they'd be interested to know they are dramatically overpaying for talent.
I don't understand what is so controversial about this. SV companies recruit the best talent from around the world, and it's where the best talent wants to work. Similarly, the best financial talent is in NYC and London, the best actors are in Hollywood, etc.
There are capable people who don't want to live where a 1,000 sqft home costs $1M. Never mind all of the other problems with the region. Selection bias doesn't prove anything about the people you aren't selecting for.
> It perpetuates the myth that there's something really special about Silicon Valley engineers
In my [disillusioned] experience, this holds true: Silicon Valley engineers are very good at building throwaway MVPs that they won't have to maintain for more than 3 years.
I've been very disillusioned by the quality of the software written by Silicon Valley companies, but in hindsight it makes sense: the "move fast and break things" development culture resonates with the "raise VC money every 18 months" business culture, which then looks for an exit in 5 years tops. There is no incentive on either the dev or the business side to really develop good software.
While I agree that this particular exercise is riddled with problems, I simply cannot imagine Hacker News rolling over and accepting evidence-based answers to questions of this nature, regardless of where the data came from or what the methodology was.
> There’s also the issue of selection bias. Maybe we’re just getting people who really feel like they need practice and aren’t an indicative slice of engineers at that company.
Or that your interview preparation platform prepares candidates better for Dropbox's interview process than it does for Microsoft's. Or that the people who were confident in their interview skills for Facebook decided not to use your platform. Or that these companies have different interview processes and selection criteria (they obviously do) so ranking "best" based on performance on different tests doesn't tell you that much.
There are hundreds of different ways to slice this data and come up with different hypotheses about what's actually occurring.
Author here. The data is mostly drawn from how people who work at these companies do in mock interviews rather than how our users do in real interviews with these companies.
> At interviewing.io, we’ve hosted over 100K technical interviews, split between mock interviews and real ones.

You should probably change the title of the blog, as most gainfully employed people interpret 'best performers' as people who are very good at performing their job and/or trained for a specific Circus Act.
Something like "We analyzed 100K technical interviews to see which companies employ the people that we feel best performed in our mock interviews" would be more authentic
There's still the selection bias of who volunteers to do these mock interviews. Probably it's the people who want practice interviewing: at Dropbox those are the top performers who want to "move up" to a Google, while at Google it's the people who aren't cutting it and know they're going to have to find another job soon.
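That story is easy to sanity-check with a toy model (entirely made up, just to show the mechanism): give both companies identical skill distributions, let different slices volunteer, and the "results" flip.

    import random

    random.seed(1)

    # Both companies drawn from the SAME skill distribution.
    dropbox = [random.gauss(0, 1) for _ in range(100_000)]
    google = [random.gauss(0, 1) for _ in range(100_000)]

    # Different slices volunteer for mock interviews.
    dropbox_volunteers = sorted(dropbox)[-5000:]  # top performers moving "up"
    google_volunteers = sorted(google)[:5000]     # strugglers expecting a search

    mean = lambda xs: sum(xs) / len(xs)
    print(f"Dropbox volunteer mean: {mean(dropbox_volunteers):+.2f}")
    print(f"Google volunteer mean:  {mean(google_volunteers):+.2f}")
    # Identical populations, opposite-looking leaderboard.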
The data and charts in the article look pretty nice!
One of the things I learned from my years in research/academia is that Design of Experiments is in itself a pretty complicated task. Most experiments/studies are invalidated by a huge number of confounding factors and correlations that are not accounted for in the experiments.

A cursory visit to the r/science comments would show a lot of people who do science for a living providing valid criticism of published, peer-reviewed scientific studies due to flawed Design of Experiments procedures.

Having lived all this first hand makes me EXTREMELY resistant to taking seriously the data, analysis, and conclusions of the linked article.
Other than that, the effort is appreciated and I like the ideas behind interviewing.io.
Dropbox has been known as a hard place to interview since 2014. The tech companies that had the hardest technical interviews were Quora, Palantir, and Dropbox (honorable mention: Fog Creek) - this was from a few years back, so things may have changed. Just because a company makes it extremely difficult to get in does not mean the company is all-around awesome, pays great, or employs the greatest engineers. It optimizes for people who generally come straight out of an elite CS program with those learned concepts fresh in their mind, and for people who grind away on leetcode or who are great at interviewing. Of the three companies I mentioned above, I would not work for any of them now.
Are you suggesting that these jobs don't mainly consist of banging out dynamic programming and DFS/BFS traversal algorithms as quickly as possible while a stranger stares at you? From interview experience, I assume this is what engineers at these companies mostly do all day.
My brother is an MD and I showed him some Leetcode prep material since I am studying for interviews. Specifically, a problem from Grokking the Coding Interview.
His response - “It would be helpful if they then showed how that is used in real world code“. I had to tell him I don’t think these kinds of scenarios come up often in real-world use cases, haha. They are essentially coding puzzles to filter for people who are good at solving coding puzzles, which may or may not directly translate to being good at writing application code.
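For anyone who hasn't seen this genre, here's a typical drill of the kind that course teaches (a standard sliding-window puzzle; this particular problem is an illustration, not necessarily the one I showed him):

    def max_subarray_sum(nums, k):
        # classic sliding-window drill: max sum of any k consecutive elements
        window = sum(nums[:k])
        best = window
        for i in range(k, len(nums)):
            window += nums[i] - nums[i - k]  # slide: add newest, drop oldest
            best = max(best, window)
        return best

    print(max_subarray_sum([2, 1, 5, 1, 3, 2], 3))  # 9 (the window 5 + 1 + 3)

Elegant, fun even - and still not something you'd write at work more than once a year.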
Yeah "we are more selective and exclusive than Google and Facebook" was Palantir's whole schtick when they originally took off. And a lot of very smart engineers bought it.
Turns out it takes a lot more than a high leetcode bar for your interviews to run a successful company.
There's a local agency here in the city I currently reside in that touts that their hiring percentage is lower than Harvard's acceptance rate. They're telling you up front it's really hard to get a job there and it's a very "bougie" place to have on your resume - as if they're on the same level as the FAANG companies.

The interview process is fairly long, accompanied by an office tour which touts the numerous "freebies" they offer their employees. Then you find out after the entire process that you'll be getting paid $40-50K LESS THAN the industry average. At the time I interviewed with them, I had five years of experience in UI/UX and mobile development. What they offered me was essentially what I was making as an entry-level dev.
It was easy to turn them down. No amount of free Red Bull is going to pay my mortgage.
Since a good bit before 2014, even. They used to recruit very heavily from MIT (while Palantir was overrepresented at Stanford and Quora at Harvard--all of these reflecting the almae matres of the founders). Note that this was before the popularity of Leetcode and the whole cottage industry around trying to game algorithmic-type interviews. I'm not sure if similar companies founded today would push these algorithm-heavy interviews as hard, since they've probably lost some signal now & prevailing attitudes have changed a bit.
At any rate, it doesn't surprise me at all that Dropbox engineers do better than FANG engineers on these technical metrics. The average Dropbox engineer is almost certainly a bit smarter and a bit better at algorithms than the average FANG engineer. Of course those attributes don't automatically translate into being a better engineer, though, nor do they automatically translate into company success or anything.
Kudos to interviewing.io for sharing this analysis. I agree with the many issues in methodology and analysis that others have raised here, and I agree there's a risk that a face-value reading of the blog post is highly misleading. But this is true for all data, and pooh-poohing the analysis without crediting the sharing just leads to less sharing. To be clear, I'm supportive of the criticism, but let's also give credit where it's due.

Technical interview performance is a high-stakes field for which almost all data is cloistered in individual hiring companies or secretive gatekeepers. In my mind, all efforts, even imperfect ones, to share data are a great step here. We should encourage them to continue to share, including pursuing options to anonymize the raw data for open analysis. The field deserves more transparency and open discussion to push us all to be better.
This is super interesting! Worth noting that another possible title is "... where the best performers were trying to leave during the study timeframe".
Really interesting to see Dropbox so high - I'd be curious to see some other data to corroborate that they (at least used to) employ the best engineers.
From my time interviewing, I've seen clusters of very good candidates often be more reflective of which top companies were having a hard time, internally or externally. There was a while where my company hired a lot of people from Uber; right now we're getting Amazon and Facebook/Meta.
"best performers work" With a title like that you really just cannot take this study seriously lol. Not to say it's not interesting but that is one crazy claim, at best title should be "most effective interviewees." Also, I work in FANG but signed up for this website and can't even participate, so how you chose all these candidates is also questionable.
The author confuses successful products with high-performing tech and high-performing engineers. I’ve met many brilliant indie engineers who build highly performant code that almost no one knows about, yet it provides them with a steady stream of income, and they would never work at FAANG or unicorns.
I don't see this claim being made anywhere. The article claims the opposite: the best performers on their technical interview happen to be from Dropbox/Google etc.
I was thinking the best performers may very well be at one of those companies, but there's a pretty good chance that they never did any kind of coding interview. Maybe they were hired through a merger or acquisition or perhaps their reputation or portfolio is more than enough.
And the implication is that the 'quality' of engineers at the companies is actually reversed - the top performers at Dropbox are looking to leave, while the underperformers at FANG are struggling and leaving.
As an ex-Dropboxer, Dropbox asks legitimately tough questions. I only got in because I was asked the exact set of questions I could figure out the answers to; once I joined and went through interview training, I realized I would have failed about half of the questions that Dropboxers ask.

I'm also not sure how it is at other companies (I'm at Google but haven't gone through interview training yet), but Dropbox's rubrics are also pretty strict. Doing "well" on a question normally requires getting through multiple parts with close to zero bugs that you don't catch yourself.

You just described the entirety of the tech hiring experience. Sure, some people are bona fide geniuses who can crack the hardest problems in their sleep. The majority of tech workers, however, are ones who got lucky with the specific combination of questions asked in the interview. Maybe they had seen the question before. Maybe the solution just "clicked". This is why the best strategy for acing such interviews is simple: just apply to a lot of companies.
Of course, but this is a gradient. A struggling startup may ask a question that 80% of engineers are capable of answering (through luck, exposure, experience - whatever dimensions).

Dropbox asks difficult questions, and it's hard to discern why. I don't believe the problems at Dropbox are particularly difficult relative to its peers. I don't think they innovate at a clip that's outsized, etc. But they do this, and their engineering culture focuses on it.
> Also, if I ever use "rockstar" or "ninja" in my posts, I hope someone finds me on the street and punches me in the face.
Are you sure you'll never find yourself blogging about bands' flamboyant frontmen (or frontwomen), or covert agents in feudal Japan?
> but most of our interviewers are senior engineers at FAANG
This means absolutely nothing. It's a well known fact that SV titles hold next to no weight.
> SV companies recruit the best talent from around the world and it's where the best talent wants to work.
The second-best performing company on "technical" and "problem solving" was Bloomberg - literally the opposite of Silicon Valley.
> "we are more selective and exclusive than Google and Facebook" was Palantir's whole schtick when they originally took off. And a lot of very smart engineers bought it.
Not because you get good work done, but because people will apply because you look good on a resume.
> We should encourage them to continue to share, including pursuing options to anonymize the raw data for open analysis.
Good luck with that.
> would be curious to see some other data to corroborate that they (at least used to) employ the best engineers

But where would the best go? To where the worst engineers are, to appear even smarter? Or just a bit below that, to avoid a toxic workplace?