> Guidance is expected; a great interview should be more of a conversation than a one-sided question and one-sided answer.
I had a particularly awful interview at Google where the interviewer scoffed at me needing assistance.
And in an interview at Twitter with a xoogler they asked me what I knew about number theory and I said "nothing" and they said they majored in it and proceeded to ask me number theory questions. For a Django tooling position. He asked me how many prime numbers there were and I said "optimus prime" and he looked at me and asked if I was serious and if he should record my answer. I said yes followed by "autobots roll out"
Such a waste of my time to fly out there.
In such situations I tend to ask that question directly:
"It was my understanding that you looking for someone who will help you with Django tooling. I can do that. Although number theory sounds interesting, I never felt the need of diving into it in order to solve any Django-related issue. I would be happy to learn more about how you think number theory relates to Django tooling should I start working here."
When people try to be important like that, it is paramount to take them at face value. In a lot of geek environments this happens all the time. People trying to push that one niche topic they really know about, because this is their comfort zone. The problem is: that one niche topic will rarely be a natural fit for any given conversation. More often than not just asking how they think this relates to the conversation at hand is enough to tame them a little bit.
Many times the stuff people say will contradict the stated (or implicit) goals of a conversation. In this case the chosen topic was one with which the interviewer felt at home.
So another approach would have been to make it even more about them by, instead of answering their question, showing that you did your research: "Ah, number theory! I saw you majored in number theory. I admittedly never had the need to dive too much into it — what role does number theory play at $Companyname?"
I would add here that you may ask, in a polite way, how number theory relates to the position at hand (here, Django). This way, the interviewer has to open up specifically. There might be something there that is important and interesting, or it might simply be a bluff.
This doesn't always work - many software engineers believe that you should be a bit of a generalist. However, if you ask the question about Django tooling, you are at least keeping the door open.
Asking the "what role does number theory play" question is a way to send a message and lose your chance at the job, but so does "optimus prime."
I remember being told once in an interview "I'll be your google, your stackoverflow, so anything you'd usually search for on there ask me".
Eventually I got to the point where I forgot how to do something simple, the kind of thing you'd usually just google to refresh your memory. I decided to take his advice seriously and asked him, to which he basically responded by rephrasing my question as another question back at me.
I can tell you now there is no chance Stackoverflow would be as big as it is if the responses you got back were riddles.
That sounds like a lame pick-up line, and they sound like a crazy stalker!
I agree in principle, but also see how the interviewer might have felt that there's a good opportunity there to assess your reasoning skills about that thing. I actually like it when an interview takes some small detours.
I had an interviewer say something very similar to that to me once during a live pair coding session. I got to a point where I asked him a question about how javascript worked and he said "Hmm, I don't know. Let's see!" We spent the rest of the interview trying to figure out if javascript did indeed work that way. I didn't get the job.
He asked me a graph search question. However at the time I didn’t know what a graph was. I’m self taught.
But I did understand the ask, which was to find a path through a series of locations.
I also happened to have some experience with slime mold finding optimal paths and once even tried to simulate it using a genetic algorithm.
That was the best solution I could come up with, a genetic algorithm that would optimize a path through a series of locations based on a reward system.
However, I did not say the word "graph", and it's not an optimal solution.
I didn’t pass.
I view the situation as both “I wasn’t prepared” and “google missed an opportunity to hire someone who might approach things a bit differently”.
Therein lies your problem, and you can see it writ large on everything Google does. Because they're so dominant at so many things (search, video hosting / streaming, etc.), anything that doesn't fall under "optimal solution" and "immediately profitable" goes to the waste bin almost immediately.
There are entire websites dedicated to how fast Google kills a project that could have been really great with a year or two of love and care.
They've come to expect overnight success because it's been so dramatic for the past two decades. This blinds them to a lot of opportunities, and it comes, pretty much, from the top down and has filtered into nearly everyone.
> “google missed an opportunity to hire someone who might approach things a bit differently”.
That isn't what they want. See above. Optimal solutions, instant / near-instant profitability.
You didn't get rejected because you "did not say the word graph", you got rejected because your solution was very wrong.
Software engineering is not just about knowing syntax; CS degrees include a Data Structures & Algorithms class for a reason.
It's also not about thinking creatively all the time. When I did interviews at a FAANG, there were many interviewers whose questions ranged from understanding basic CS to inventing new algorithms in specific circumstances. Generally, the second set of questions rarely had optimal solutions that took longer than O(n), and the first set of questions had answers that come out of a CS textbook.
In other words, the first set of questions is about doing normal tasks in the normal way, and the second set of questions is about solving relatively simple problems in a way that scales up massively.
When you are doing something that has a well-known optimal solution, your colleagues will expect it to be done in the well-known way. They don't want to have to read through your "creative" code to determine that you are just doing a graph search in a different way than they are used to. Engineering at a place like Google isn't about doing cool things: it's about doing things that are kind of cool, in a way that a mediocre CS grad from a high-ranked school can understand and extend.
In my interview for my current job, I was asked to implement an algorithm that finds the edit distance between two strings. This is a classic algorithm, one that I had learned and implemented about a year before in my algorithms class, and loved for its utility, but I could not remember a specific little detail!
However, neither did my interviewers! So we worked together trying to re-derive the algorithm from the problem, the expected output, and the rough idea of how it worked. We did not succeed, but we did demonstrate an ability to work through things together, an ability to stand up and point out incorrect things, an ability to reason through a problem, etc. I got the job.
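For reference, a sketch of the classic dynamic-programming solution (function name and unit costs here are illustrative; the little detail that tends to slip is the diagonal case of the recurrence, which costs nothing when the characters match):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Levenshtein distance via the textbook DP table: d[i][j] is the edit
    // distance between the first i chars of a and the first j chars of b.
    size_t edit_distance(const std::string& a, const std::string& b) {
        std::vector<std::vector<size_t>> d(a.size() + 1,
                                           std::vector<size_t>(b.size() + 1));
        for (size_t i = 0; i <= a.size(); ++i) d[i][0] = i;  // all deletions
        for (size_t j = 0; j <= b.size(); ++j) d[0][j] = j;  // all insertions
        for (size_t i = 1; i <= a.size(); ++i)
            for (size_t j = 1; j <= b.size(); ++j)
                d[i][j] = std::min({
                    d[i - 1][j] + 1,     // delete a[i-1]
                    d[i][j - 1] + 1,     // insert b[j-1]
                    d[i - 1][j - 1] +    // substitute, free if chars match
                        (a[i - 1] == b[j - 1] ? 0u : 1u)});
        return d[a.size()][b.size()];
    }

    int main() {
        std::cout << edit_distance("kitten", "sitting") << "\n";  // prints 3
    }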
"What's the rest of your day?"
<shows schedule>
"Oh. X is from Google and he still does their style of interviewing"
Well, what do you expect from a person who majored in number theory but makes a living working on Django tooling at a company that doesn't really use Django to begin with? Delay and reschedule interviews until you get an interviewer whose LinkedIn profile does not clearly indicate a mismatch.
Really sounds like the guy wanted to talk about himself instead of actually interview you. I hope you were able to send some feedback to his superior, since they missed a hire because of his shit.
Whilst number theory may never come up in Django tooling, "something you don't know" always will; it is valid to try and determine how people respond to questions they don't know the answer to.
If the candidate says they don't know a topic, that is a valid response. It's not worthwhile to ask detailed questions about the topic. That's a sign of an inexperienced and/or unprepared interviewer.
I think this is an 'okayish' position to take. There are some people who absolutely can't handle being wrong, and I've personally seen them wreak havoc when they get out of their depth.
However, there has to be a limit to the nature of the questions that are being asked. If I'm spending my free time to go check some place out because they're telling me that they might be willing to give me a job, then I'm going to be kind of put out when they start asking me questions about what the 50th letter is in Shakespeare's Macbeth. The questions have to be somewhat on point. This isn't a medieval system of patronage where I have to be subject to every whim of my patron. We owe each other to not waste each other's time. And if they're going to 'break trump' then I don't see why I shouldn't as well.
I can hear the objection though, 'oh primes are relevant to development.' But are they really? Maybe for advanced data structures like hashes, but what are you doing with a web framework that necessitates you write your own hash function? Besides, if you know about prime numbers, but not weak primes, strong primes, strong pseudo primes, safe primes, etc do you really know about primes and their CS applications? No, you just picked up some random trivia in high school and you're put out that everyone isn't as excited about it as you are.
You would be surprised how common this is. Lots of startup companies also love to brag about how many xooglers they have recruited. I once interviewed at a startup and the CTO introduced himself as "xoogler". After the interview I checked his Linkedin profile and he had a summer internship at Google 7 years ago.
If you're using an interview that bears 'little relevance to an employee’s day-to-day work' and that requires dedicated prep to pass, then what you're doing is optimising a process for finding people who will tick boxes and jump through meaningless hoops.
That's fine - some jobs do require that - but it does mean that any 'best and brightest' rhetoric should be shelved. Everyone knows that tech hiring is broken, but still the trumpets sound about how it really means something to get through one of these processes. It is almost a counter-signal.
> If you're using an interview that bears 'little relevance to an employee’s day-to-day work' and that requires dedicated prep to pass, then what you're doing is optimising a process for finding people who will tick boxes and jump through meaningless hoops.
You mean all those jobs requiring college degrees? Or is that different somehow? The main reason is that difficult tests have positive signal even if they are partly irrelevant or tedious, and interview prep has nothing on 4 years spent full time; on top of that, you have to pay for college.
> The main reason is that difficult tests have positive signal
They provide a signal, but I don't think the evidence suggests that it's necessarily a positive one. "Can you play Czardas on the tuba" is a difficult task that would provide a strong hire/no hire signal, but that doesn't mean it would be a good signal to use when looking for developers.
The big tech companies themselves admit that a bunch of their employees are not sufficiently competent [1]; clearly, current hiring practices are suboptimal. Just because a test exists doesn't mean it's valid.
> You mean all those jobs requiring college degrees
There are many jobs where high performance doesn't require a college degree, and for those jobs, requiring a degree is again an exercise in box-ticking that should be replaced with a more meaningful measure. There are also jobs where significant education in the topic is important, and a degree provides better evidence of that than many other things, so it can be a useful signal.
We know that tech hiring is broken, but whenever it's criticised, the people invested in this system get outraged. "What do you want us to do, just hire everyone?" etc. No one is saying - or has ever said - that assessing applicant competence is a bad idea, just that assessing the ability to rote learn is not the best use of time when hiring developers.
The current system is not the only way, and for an industry that prides itself on being filled with problem solvers and hackers, it's bizarre to me how much people rest on the idea that the existing system is imperfect but present, and so should be left alone.
[1] https://www.reuters.com/technology/exclusive-meta-girds-fier...
- having to work together with assholes towards a common goal
- how sometimes you can cram for a deadline, but sometimes you can't
- how, often, how well you do is just politics and networking
Obviously fuck everything I learned about how pushdown automata worked. That's not relevant to my career now. All the above still is.
> interview prep has nothing on 4 years spent full time
That's just the problem with these interviews. It's kind of obvious that a fresh grad who prepares for 2 weeks for those whiteboard interviews will completely outclass some of the most productive known programmers and engineers in the world (take whatever example you want; I would name someone like John Carmack) who take the test fresh and unprepared. The modes of thinking and toolboxes required are just entirely different.
That should tell you enough about the quality of that style of interview if you are using it for anything other than weeding out people who can't code at all in a phone screen.
This happens in any high paying industry where candidates vastly outnumber their capacity to interview well. Graduates of top schools have to jump through tons of hoops to get a Goldman interview or work the (virtual) mailroom at a talent agency.
This is roughly true. Obviously with UTF-8 or UTF-16 you'll have O(n) access times for an arbitrary code point, but you can always convert to UTF-32 and that encoding is just an array of code points so you get O(1) random access back.
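To make that point concrete, a small sketch (function names and the examples are illustrative): getting at the i-th code point of UTF-8 means scanning leading bytes from the start, while UTF-32 is indexable directly:

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // O(n): skip continuation bytes (0b10xxxxxx) to find where the
    // target_cp-th code point starts in a UTF-8 string.
    size_t utf8_offset_of(const std::string& s, size_t target_cp) {
        size_t cp = 0;
        for (size_t i = 0; i < s.size(); ++i) {
            if ((static_cast<unsigned char>(s[i]) & 0xC0) != 0x80) {
                if (cp == target_cp) return i;
                ++cp;
            }
        }
        return std::string::npos;
    }

    // O(1): UTF-32 stores one code point per element, so it's a plain index.
    char32_t utf32_at(const std::vector<char32_t>& s, size_t i) { return s[i]; }

    int main() {
        std::string s = "h\xC3\xA9llo";  // "héllo": 'é' is two bytes in UTF-8
        std::cout << utf8_offset_of(s, 2) << "\n";  // code point 2 ('l') is at byte 3
    }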
UTF-32 !== UCS-2 and makes no guarantees about fixed code point size like UCS-2 did (and is considered deprecated because of that). There's an encoding issue with UTF-16 extending past the current "Astral Plane", but neither UTF-8 nor UTF-32 has that problem. We have no current idea at all what we might ever think to encode past the "Astral Plane", but that doesn't mean you should ever assume UTF-32 is UCS-2 and free of surrogate code points.
I.e. don't believe the anti-American propaganda on Unicode.org.
{lefthandraised}\_({eyes}{slantedmouthkindabelowtheeyes})_/{righthandraised}
The conflation of "amortized" with "expected" feels a bit off to me.
Amortized O(f) strongly suggests that the sum of n operations is very very close to O(n×f). I wouldn't say hashmap insertion is amortized O(1), because if you craft input that always incurs a hash collision, then n insertions is much worse than O(n). I would say that it's expected O(1), meaning that with high probability an insertion takes constant time.
Conversely, "amortized" is only a useful description when it is known that some operations will take much longer than others, yet the total time is bounded. (If you bring up amortized time, you're pretty much implying that the distribution of times is uneven, otherwise you wouldn't have mentioned it.)
For example, if you're doubling the length of an array on overflow, then I would say the amortized time of a push is O(1). It would seem weird to say that the expected time is O(1). If I wanted to be more complete, I'd say "normally a push is constant time, but when the array needs to be expanded then it's O(n). The amortized time is still constant, though."
Expected time refers to a single operation. Amortized time describes the mean time of a series of operations.
Fun fact: you can make the time of a push unconditionally O(1) at the cost of extra memory as well. This is useful if you really can't afford the unpredictable spike in performance. The trick is to start copying data into the next, larger buffer early, while you're still filling the current one. As an example in pseudocode C++:
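    // A sketch of that trick (the class name, int elements, and the
    // "migrate two elements per push" rate are illustrative choices):
    // while the twice-as-large buffer fills up, each push also copies two
    // elements across, so migration finishes before the next resize is due.
    #include <cstddef>
    #include <iostream>
    #include <memory>

    class SteadyVector {
        std::unique_ptr<int[]> cur_{new int[4]};  // note: not zeroed, O(1)
        std::unique_ptr<int[]> old_;   // previous buffer, still migrating
        size_t cap_ = 4;               // capacity of cur_
        size_t size_ = 0;              // logical element count
        size_t old_size_ = 0;          // elements in old_ at migration start
        size_t migrated_ = 0;          // how many of those were copied so far

    public:
        void push(int value) {
            if (size_ == cap_) {       // full: start an incremental migration
                old_ = std::move(cur_);
                old_size_ = size_;
                migrated_ = 0;
                cap_ *= 2;
                cur_.reset(new int[cap_]);  // uninitialized, no O(n) zeroing
            }
            cur_[size_++] = value;
            // Two copies per push: n/2 pushes migrate all n old elements,
            // and cur_ (capacity 2n) can't fill again before that happens.
            for (int step = 0; step < 2 && migrated_ < old_size_; ++step) {
                cur_[migrated_] = old_[migrated_];
                ++migrated_;
            }
            if (migrated_ == old_size_) old_.reset();  // migration done
        }

        int operator[](size_t i) const {
            // The extra branch: a not-yet-copied prefix still lives in old_.
            return (i >= migrated_ && i < old_size_) ? old_[i] : cur_[i];
        }

        size_t size() const { return size_; }
    };

    int main() {
        SteadyVector v;
        for (int i = 0; i < 100; ++i) v.push(i);
        std::cout << v[0] << " " << v[42] << " " << v[99] << "\n";  // 0 42 99
    }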
You can apply a similar trick to hash tables to get expected O(1) time inserts without amortization, by incrementally building the re-hashed table while the previous table is still filling up.
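A sketch of that one too, under assumed design choices (separate chaining, growth at load factor 1, and draining two old buckets per insert; none of these specifics come from the comment above):

    #include <functional>
    #include <iostream>
    #include <list>
    #include <vector>

    class IncrementalHashSet {
        std::vector<std::list<int>> cur_ = std::vector<std::list<int>>(8);
        std::vector<std::list<int>> old_;  // table still being drained
        size_t count_ = 0;                 // total stored elements
        size_t next_bucket_ = 0;           // next old_ bucket to move

        static size_t idx(int key, size_t buckets) {
            return std::hash<int>{}(key) % buckets;
        }

    public:
        bool contains(int key) const {
            for (int k : cur_[idx(key, cur_.size())])
                if (k == key) return true;
            if (!old_.empty()) {
                size_t b = idx(key, old_.size());
                if (b >= next_bucket_)     // this bucket hasn't moved yet
                    for (int k : old_[b])
                        if (k == key) return true;
            }
            return false;
        }

        void insert(int key) {
            if (contains(key)) return;
            if (old_.empty() && count_ >= cur_.size()) {  // grow at load 1.0
                old_ = std::move(cur_);
                cur_ = std::vector<std::list<int>>(old_.size() * 2);
                next_bucket_ = 0;
            }
            cur_[idx(key, cur_.size())].push_back(key);
            ++count_;
            // Drain two old buckets per insert: n/2 inserts finish the move,
            // well before the n inserts that would trigger the next growth.
            for (int step = 0; step < 2 && !old_.empty(); ++step) {
                for (int k : old_[next_bucket_])
                    cur_[idx(k, cur_.size())].push_back(k);
                if (++next_bucket_ == old_.size()) {
                    old_.clear();
                    next_bucket_ = 0;
                }
            }
        }
    };

    int main() {
        IncrementalHashSet s;
        for (int i = 0; i < 100; ++i) s.insert(i);
        std::cout << s.contains(42) << " " << s.contains(1000) << "\n";  // 1 0
    }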
I'm not sure how well it would work in practice, since you're depending on alloc() giving you uninitialized memory—if it zeroes it, then it's back to being O(n). But that's kind of an unfair quibble.
And of course, it adds a branch to the lookup. Still O(1), but you'd need to check which buffer to find a given index in. Really not a problem here since as you say, the whole point is to bound the latency, not minimize it.
> Expected time refers to a single operation. Amortized time describes the mean time of a series of operations.
Pardon my ignorance but I don't see the difference between these two sentences.
Expected time of a single operation = Sum of time taken for all possible input cases / Number of cases (assuming each input case is equally likely to occur). Is it not?
Isn't it then the same as amortized time?
They are not the same. "Expected" refers to the running time of a single operation when it uses randomness. "Amortized" talks about what happens when you call the same operation multiple times (it has nothing to do with randomness).
Hash maps provide O(1) expected lookups. This running time relies on randomness, and it is the average running time of one operation. If you are very unlucky (or if an attacker can predict your random number generator), then it is possible for every single lookup to run in O(n) time.
Dynamic arrays (vector in C++, ArrayList in Java) provide O(1) amortized push calls. Dynamic arrays sometimes have to copy every element in the array into a new allocation when you call push. However, by doubling the size every time this happens, the copies happen so rarely that if you push k times (starting with an empty array), then the total running time of all k calls is O(k), even though a small number of the calls are much more expensive than O(1).
So, unlike expected running times, an amortized running time of O(f(n)) is an absolute guarantee that calling it k times in a row never runs slower than O(k*f(n)).
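The doubling argument is easy to sanity-check by just counting the copies (a quick simulation; the starting capacity of 1 is arbitrary):

    #include <cstddef>
    #include <iostream>

    // Count how many element copies k pushes cause when capacity doubles on
    // overflow. The copies form the geometric series 1 + 2 + 4 + ... < 2k,
    // so the total work for k pushes stays O(k): amortized O(1) per push.
    int main() {
        const size_t k = 1000000;
        size_t copies = 0, capacity = 1, size = 0;
        for (size_t i = 0; i < k; ++i) {
            if (size == capacity) {
                copies += size;    // growing copies every existing element
                capacity *= 2;
            }
            ++size;
        }
        std::cout << "pushes: " << k << ", copies: " << copies << "\n";
        // prints "pushes: 1000000, copies: 1048575", comfortably below 2k
    }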
The difference is in the number of times you use some operation.
For example, you expect to do a lot of pushes into a vector, so you can talk about its amortized time: it holds on average, and from it and the expected number of pushes you can calculate the total running time somewhat precisely (with an unknown constant multiplier which is independent of the input data, on theoretical hardware at least).
On the other hand, an individual array is usually sorted only once. Here it is more appropriate to speak about the average or expected complexity of quicksort, which is O(N log N), but your particular run may turn out either better or worse, and this will noticeably affect the running time, by a factor dependent on the value of N.
You could speak about the amortized time of quicksort, though, if you expect to sort different arrays a lot of times and you know their distribution, or at least know that they are sufficiently randomly shuffled (or maybe sufficiently sorted) beforehand.
Ordinary hash maps are still amortized expected O(1) — they will hit an O(n) case, guaranteed, when they need to grow the array, and it’s expected that the amortized runtime is O(1).
Being merely “expected O(_)” would be appropriate for algorithms that lack amortization, such as quick sort.
Hmm I feel like the best way to "game" these interviews is just to learn solutions to the top 100 leetcode questions and then act as if you're figuring it out on the spot. That's what I started doing in preparation for a Facebook interview but I think that's not actually going to happen in the end anyway due to the hiring freeze and also the ridiculous H1B situation.
Recently gave an interview where they asked me to open my Leetcode profile, and checked if there are any prior submissions to the asked questions. ¯\_(ツ)_/¯
I sincerely hope you walked out of that interview without accommodating that request. Would you mind naming the company, if only so others can avoid it?
Treating a candidate as a suspect seems to be a new low, even for the tech industry.
An interview is a two way street. That person probably did you a favor though by letting you know what your potential coworkers and/or company culture was about.
> One common subtopic I would recommend having a basic familiarity with is amortized big-O, aka expected big-O, whereby you use some neat probability theory to say that the expected value of an operation is, for instance, O(1) even though sometimes it may be O(n) for individual calls.
Isn't he mixing two terms here?
If I recall correctly, amortisation shows up in deterministic algorithms where some steps might frontload work that subsequently makes others cheaper. For instance, you have some graph algorithm that looks at all the neighbours of some node in every iteration, which means the runtime theoretically depends on the degree of the node; but if you can guarantee that each edge will only be inspected once, your total time complexity only depends on the number of edges.
On the other hand expected time complexity is used when you run a probabilistic algorithm like quicksort, where individual sort calls actually have a variance in time complexity.
The first time I encountered a "riddle" type interview question, I'd already held 5+ jobs and I nominally had 8 years' experience in IT.
The job was for a Unix sysadmin job in academia. The interviewer described a room with two lightbulbs in it, and a room down the hall with two switches, and I was supposed to figure out which switch controlled which bulb without going back-and-forth so much.
I was stumped & didn't really know how to proceed, so I just gave up. They hired me anyway. Afterward they explained the answer to me. Start with both bulbs dark; turn on one switch, wait a few minutes, then turn on the other and touch the bulbs. The hot bulb corresponds to the first switch. (This only makes sense in the age of incandescent bulbs.)
I thought it was a nice and clever touch to see whether a candidate can think outside the box, and of course I was grateful to be hired even though I couldn't do so.
Oftentimes I'll report for an interview and it's more like a preliminary onboarding and tour of the premises. Sometimes a company has already made that hire decision before they call you in-person. This might raise a red flag for you, if a company is so desperate to put you at a desk that they don't even make you run the gauntlet, so think about it.
Obviously I don't get the question. But, two switches, two lightbulbs, can't you just turn one switch on, leave one switch off, and see which lightbulb is lit? You only interact with each room once. Unless there's some funny business going on?
Edit: oh, the bulbs need to be off at the end? Nah
I had this same question on my first IT job interview. Stumped me also... I think now I'd probably do some interview prep next time. I didn't get the job. :-)
Why is the heat thing even necessary? Turn on one switch, walk down, see which light is on, process of elimination for the other one. Am I missing something?
- 1 light bulb and 3 switches outside. Determine which switch powers the bulb while only entering the room once.
As an aside, this isn't really true. LEDs are way more efficient (more watts in visible light and not just heat), but not 100% so. They still get warm and need heat sinks and sometimes airflow, etc. or they can still overheat.
If you leave an LED bulb on for a while next to one that's been off, you can definitely feel the heat difference with your bare hands.
This is especially true at the higher end of lumen output, where you're optimizing for peak brightness instead of max efficiency. The power vs brightness curve isn't linear for a given emitter (and the human perception of brightness isn't linear either).
An LED that's twice as bright will be more than twice as hot. If you have a bright headlight it will typically be in a metal housing that also doubles as a heat sink, and it will probably have internal thermal regulation to turn itself down before overheating.
Nowadays everyone and their mother has a smartphone on them. Set your smartphone down and start recording. Move to the switches and flip them as needed. Check back the recording to provide your answer.
LEDs themselves don't really get warm, but LED bulbs that plug into a wall socket have a transformer in their base to step mains voltage down to something the LED can handle, and that can get quite warm.
I was already halfway through the article when I realized I'd misunderstood the title: it's not about using games to interview people (Factorio style), which still might have made sense to test self-taught engineers, but about how to game the interview process.
And honestly, it's not really about gaming the process, but succeeding in it. Gaming the process sounds to me like you are abusing some weakness in the system without doing the work required. That is not the case here imho.
For real, if you spend 100+ hours studying algorithms and system design you are not gaming anything. Gaming it would be finding out the questions beforehand and memorizing the solutions.