For additional context, we are currently using problems that have test cases, and they don't really reflect the day-to-day of a data scientist. In fact, many interviewers pass a candidate iff they pass the test cases.
I'd love more open-ended problems.
I work in DS for a large corp and we're currently working on replacing our HackerRank coding interview with something that reflects the daily life of a dev a bit more realistically. This includes letting candidates use LLMs and watching them do exploratory data analysis, instead of just coding.
IMO there is a huge gap in the market for facilitating in-person interviews that no tool has really exploited yet.
I have a question for you: What's unique about HackerRank? Why choose HackerRank over any of the alternatives? Is it really just LeetCode with a better UI?
Thanks to this post I logged in for the first time in a long time, and solved a medium difficulty challenge. Here are my impressions:
It felt pretty easy. The UI looks nice, and the dark theme is a plus. It was a bit difficult to find problems I was interested in, though. On both LeetCode and Exercism I can compare my solution with other people's, but I couldn't find that in HackerRank. I guess that's what the discussion tab is for?
Good luck! :)
On the pure-play practice side, we are working on introducing a more real-world challenge setup plus a real AI tutor.
Unfortunately I don't know why we didn't choose HackerRank in the end. LeetCode was also an option, but it didn't really fit what we wanted to test for.
I think the biggest point that could be improved is question quality, particularly for curated question sets/prep kits, where my expectations are higher. It's not uncommon to run into an edge case that's tested for but not specified in the problem, or some deficiency with the question itself (I've seen "fix the bug" questions with "from scratch" code provided, or vice versa).
Another thing that could be really valuable is challenges in big codebases: add feature X to some open source project, or the like. Curated/teaching versions of this kind of exercise are hard to come by and would serve as a middle ground between the usual toy problems and software engineering in practice.
Would love to hear your thoughts and responses.