It seems like all "code exercise" sites like this have the same annoying problem, which I'd generally describe as a lack of attention to detail in the exercise descriptions.
Take their Fibonacci exercise as an example. In the first section they say, "The first line of the input will be an integer N (1 <= N <= 100)", which specifies how many test cases follow. Then in the next section they say, "The first line of the input will be an integer N (1 <= N <= 10000), specifying the number of test cases."
So besides the fact that they aren't even consistent about the expected range of N, it isn't clear why you would need to specify the number of test cases in the first place, versus just reading one from each line until EOF. That makes you think maybe they want you to put in some basic error checking, which, again, since they aren't consistent about N, turns into a trial-and-error exercise, if it even matters at all.
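To make the ambiguity concrete, here's a minimal sketch of the two defensible readings (Python 3 assumed, and the function names are mine, not theirs):

    import sys

    def fib(n):
        # Iterative Fibonacci: fib(1) == 1, fib(2) == 1, fib(3) == 2, ...
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def solve_trusting_count(stream):
        # Reading 1: believe that the first line really is the test-case count.
        n = int(stream.readline())
        return [fib(int(stream.readline())) for _ in range(n)]

    def solve_until_eof(stream):
        # Reading 2: ignore the count and consume every value until EOF.
        values = stream.read().split()
        return [fib(int(v)) for v in values[1:]]  # skip the count line

    for result in solve_until_eof(sys.stdin):
        print(result)

Both produce identical output on well-formed input; they only diverge on exactly the malformed cases the description never addresses.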
There is also no indication of the version of the interpreters or compilers being used to check the submissions. If I choose to write my solution in Python, is that Python 2 or 3?
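And it matters, because the same source can behave differently under the two. A trivial (hypothetical) example:

    # Same line, different interpreters:
    print(7 / 2)    # Python 2 prints 3 (floor division); Python 3 prints 3.5
    # n = input()   # Python 2 eval()s the typed line; Python 3 returns a string

For an exercise graded automatically on exact output, that difference alone can fail a correct solution.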
Combine all that with an implied scoring system based on speed or number of tries, and you have to wonder whether this is really measuring anything relevant, or whether it's just filtering for people whose default assumptions happen to match those of the person who wrote the lame exercise description.
I seriously had to do a double-take; I thought this was straight trolling, not a real company.
I doubt any serious developers would consider doing this; there are already ways to demonstrate your programming capabilities in the real world, either by contributing to an existing open source project or by maintaining your own projects on GitHub.
I agree: I have no idea how the scoring system works. Am I supposed to get it done in as little (real-world coding) time as possible? In as few keystrokes? Is it supposed to run in as few cycles (on their test machine) as possible? Should it be bulletproof, or just work?
A number of start-ups have either already failed with a very similar business model or failed to achieve exponential growth. There was a good "Lessons Learned" post by one of them recently here on Hacker News [1] that got comments from both hiring managers and hackers. Why the rehash, again? What is this start-up really doing differently?

[1] https://news.ycombinator.com/item?id=2346119
I personally wouldn't be motivated to use it either to find work or to hire. I think, when hiring, you really need to do a face-to-face interview. I've found resumes work well enough for pre-screening that it's not a pain point for me. And it's not like there are so many candidates out there looking for work that it's impractical to do face-to-face, or at least telephone/Skype, interviews with all the good candidates. I learned the lesson that face-to-face hiring is needed by hiring someone based on a co-worker's recommendation (even though he did badly in an interview, I looked past it based on the recommendation). It's so costly and painful to make a hiring mistake that there's no way I would deviate from the current model and try a new method or service.
> it's not like there are so many candidates out there looking for work that it's impractical to do face-to-face
That really depends on your target market. For the most popular technologies (.NET, Java, etc.) in the most popular areas (London, Glasgow, etc.), for low-to-medium positions you'll get tons of decent candidates.
I wish this company well, and perhaps they'll be successful, but I don't think they've nailed down some ultimate formula for ranking and hiring programmers. This system sounds very easy for low-quality programmers with lots of time on their hands to manipulate: you sign up with a throwaway account under some fake name, get the list of questions, take your time searching for the answers online, and then sign up with your real information and a carefully prepared way of typing out answers that gets you high scores. If this achieves any measure of traction, you can bet this will happen a lot.
This isn't an easy problem to solve, and it's something companies have been struggling with for a long time, so no disrespect intended. But based on my understanding of how this works, I don't think it's likely to yield great results.
Thanks for the critiques, and yes, it's not an easy problem.
There are countermeasures. For instance, who said we'll let you see the whole question bank? And if you type a close enough approximation, out come the plagiarism checkers.
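Even a crude textual check catches naive copying. A deliberately minimal sketch of the general idea, using Python's difflib (not our actual implementation, which does more than character diffs):

    import difflib

    def similarity(submission, known_solution):
        # Ratio of matching characters: 1.0 means byte-for-byte identical.
        return difflib.SequenceMatcher(None, submission, known_solution).ratio()

    known = (
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a\n"
    )
    # A crude identifier rename, the kind of thing a naive cheater tries:
    suspect = known.replace("a, b", "x, y").replace("fib", "fibonacci")
    print(round(similarity(suspect, known), 2))  # still well above 0.8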
But when somebody's gone to the trouble of putting a database of every problem online, I'm happy, because if that's happened, we must be successful enough to have plenty more resources to game their gaming.
Isn't TopCoder still the king of this kind of thing? I don't think they particularly market it as being geared toward hiring, but companies definitely hire from there (I know UBS likes to), and your score does reflect your ability, at least as far as being able to code competitively.
I'm one of the co-founders and I'm still saving that quote.
As for the answer to the implicit question...the best people are passed by reference when they're already recognized. We're aiming to help people show that their skills have value when they don't have a reference.
The number is just, well, a pointer to a broader portfolio - and we'll only be making that portfolio more expressive over time.
Depending on your industry and location, recognition doesn't always come easy. The fact that at a majority of employers the first screen is a non-technical HR employee only makes things worse. I think there is definitely a spot in the market for a company that can successfully break this barrier and improve the marketability and recognition of job seekers. The key is that a service like this isn't the be-all and end-all; instead, it should serve as a supplemental indicator of a candidate's ability.
I agree: referrals are the best way of finding good engineers. Just because someone could (or could not) answer questions correctly on a test doesn't necessarily mean he or she is a good (or bad) engineer. Everyone has good days and bad days.
There are also plenty of ways of circumventing this. Since a test taker is evaluated not just on his answers but also on the process of inputting them, that process itself could be faked.
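If the "process" signal is something like inter-key timing, that's not hard to script. A toy illustration (I have no idea what they actually record; the numbers here are invented):

    import random
    import sys
    import time

    def replay(prepared_answer):
        # Emit a canned answer with human-plausible, jittered keystroke delays.
        for ch in prepared_answer:
            delay = random.gauss(0.8, 0.2) if ch == "\n" else random.gauss(0.15, 0.05)
            time.sleep(max(0.0, delay))  # clamp: gauss can go negative
            sys.stdout.write(ch)
            sys.stdout.flush()

    replay("def fib(n):\n    ...\n")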
While I don't think the point of this idea is to evaluate the candidate's personality, I'm wondering if the founders have thought about this. Is this going to be left to the companies to figure out on their own, or are the founders going to assist? What I don't like is the idea of encouraging employers to filter candidates by coding skills first, then personality. As a project manager, I've dealt with my fair share of really intelligent but lazy, egotistical engineers, and thinking back, I wouldn't have hired them had I known about their personalities earlier. I would rather hire engineers with B+ coding skills and A+ personalities than engineers with A+ coding skills but B+ personalities.
Here's my problem with the idea that you'll always get the best candidates from referrals:
You're the hiring manager. I'm a professional programmer who works with you. Let's say I personally know and can vouch for the ability of 100 programmers. There are 7 billion people on the planet. What's more likely? That your best candidate will come from my 100-person address book, or that it will be one of the other 6,999,999,900 people in the world you're not looking at?
Now it is likely that someone from my address book will be, on average, a better candidate than a random selection from the rest of the world, but all that shows is that professional references will produce "above average" candidates.
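A toy simulation makes the point (all numbers assumed: referrals get a full standard deviation of extra quality, and we sample only a million of the "everyone else" pool):

    import random

    random.seed(0)

    # Assumption: skill is roughly normal, and referred candidates average a
    # full standard deviation above the population at large.
    referred = [random.gauss(1.0, 1.0) for _ in range(100)]
    outsiders = [random.gauss(0.0, 1.0) for _ in range(1000000)]

    print("best referral:", round(max(referred), 2))   # typically around 3.5
    print("best outsider:", round(max(outsiders), 2))  # typically around 4.7

Even with a generous referral bonus, the sheer size of the outside pool almost always contains the stronger candidate.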
EDIT: I suppose the difference is whether you want the "best way to find good people," which is what the parent said, or a "good way to find the best people," which I argue referrals might not be.
As a younger developer, I struggle to imagine our senior developers - who have been doing complicated things for 5-10+ years - ever stooping to a number to quantify their experience. I suppose senior developers are probably not the target audience, but I would never want a company to expect this, or a score to define my experience as a developer.
A technical screening can be handled in a ten-minute phone call for free - with the dignity of the applicant intact.
As an experienced developer, I struggle to imagine any of the talented younger developers I've worked with being well-represented by a number that quantifies their experience.
I take issue with any hiring system that attempts to narrow the field of view of candidates. This includes any single-number "scoring" system, as well as hiring cultures that focus on single methods (e.g. "puzzle questions") to the near exclusion of other hiring criteria. If your organization is doing any of the above, it's shooting itself in the collective foot.
The problem is not the talented younger developers. It's the not-so-talented ones. At least they keep us old guys in business to clean up the mess afterwards, but this is quite a problem in some fields. The talented younger developers are doing quite well by themselves. I get to meet them in small groups every now and then, and some of what I find blows me away.
"...Lack of coder personality" That's more that just a mere gripe.
Many a job has been won or lost based on the character of a prospect. In fact, I've lost count of how many times I've seen otherwise very technically skilled people lose out over basic interpersonal communication - above all else, a seeming lack of tact.
A developer standing alone can be evaluated by one set of metrics, but when added to a group of other developers, many of those metrics can be overridden by others and become meaningless. In simpler terms, teamwork.
I do not think developer grading is impossible, but there is so much more context involved. The "score" depends not just on the company, but on the time and place in both the developer's and the company's own life cycles. The perfect fit is always subjective -- these are organic life forms, not rigid machines.
When you reduce a person down to a scalar, you inevitably lose most of the signal in the different variables that go into that calculation. I am extremely skeptical that it's possible to come up with a model that makes that scalar useful to a hiring manager across a pool of candidates, even if you're some kind of data-science messiah. My $.02 is that applicant scoring this way is inherently broken from a signal-to-noise perspective. (In fact, I bet that if you just took GPA as the ranking metric, its performance would be superior, since it's more competitive and harder to game - despite GPA being widely acknowledged as a subpar hiring metric, at least in my experience.)
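To illustrate the information loss (weights and candidates entirely made up):

    # Any fixed weighting collapses very different skill profiles into
    # near-identical scalars.
    weights = {"algorithms": 0.4, "systems": 0.3, "communication": 0.3}

    alice = {"algorithms": 9, "systems": 2, "communication": 5}  # strong theorist
    bob   = {"algorithms": 3, "systems": 8, "communication": 6}  # strong systems hand

    def score(candidate):
        return sum(w * candidate[k] for k, w in weights.items())

    print(round(score(alice), 2), round(score(bob), 2))  # 5.7 vs 5.4

Two nearly identical numbers, two completely different hires - and the hiring manager staring at the scalar can't tell which is which.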
I like that. An "in your own words" monologue can go a long way toward helping recruiters understand the type of person they'll be dealing with.
That also means the recruiter(s) need to understand the questions they're asking. Technical merit is all well and good, but how well can a recruiter process the answers they're getting? This can't be a bullet point questionnaire with blind matching of answer "C" to question "1". Sometimes solutions aren't so black and white.
Some of the existing players in this space:
- https://www.hackerrank.com/ (aka https://www.interviewstreet.com, a YC company)
- https://codeeval.com/
- https://coderwall.com/
- https://www.employiq.com/
- https://www.mindsumo.com/ (college student only)
- http://www.codewars.com/ (in beta)
Edit: hyborg beat me to it :)
I get that we always complain about the developer interview process, but are these business models actually solving the problem?