Well, they're (a) further up the supply chain than we are, and (b) have the resources to understand and influence their supply chain. You can be pedantic about the word "direct" if you like, but I don't think that's useful.
All of the products I can buy may or may not contain this unethical cobalt. I don't know which, and my personal buying choice doesn't affect anything.
What are you proposing, that everyone with a smartphone or a computer be sued? How will that work?
This problem should be tackled, but it is worth thinking about likely unintended consequences of whatever power structures you set up to tackle it. I hear the Belgians have some experience ending slavery in parts of Africa.
I think this quickly gets into the details though. How much safety is required and what does it cost? Are there alternative materials that cost less than ethical cobalt? What age restrictions should be put on the labour involved and what will those children do instead (both with their time and to earn money)? Where will the adult workers come from to replace those kids and what training do they need?
In contrast the end user has almost no information, so punishing them is both unfair and ineffective.
I will say though, the problem is one of "standardization" across an organization that's too big for everyone to fit in a room.
Suppose you give each team high autonomy to hire whoever they like using whatever “good” process they come up with. 90% of the time this results in good hires. But as you grow, that ten percent of underperforming people becomes large in absolute numbers, and is very painful to deal with.
It becomes a real problem when relatively lower performing people end up concentrated on a team, and then start being the hiring gatekeepers for that team, thus multiplying the number of lower performing hires.
Later you start having institutional problems when everyone starts to perceive that the engineers in Department A are generally better than the engineers in Department B. Engineers in Department A are more likely to leave if they perceive the company is getting worse at engineering - it becomes a self-fulfilling prophecy.
Then you get enormous pressure to come up with standardized testing - a.k.a. algorithms on the whiteboard, or some other academically inspired exercise - imposed by higher-level leadership that wants to address a genuine problem (skill disparity across the org) but does not know any better way to do it.
I think, as PG points out, there may be a real opportunity to innovate here, and probably a big financial opportunity if anyone can figure out how to productize and scale a solution.
I struggle to see an easy answer, though. In a utopian universe (for a hiring manager) I'd do something like pay candidates to come on site and work for a week, then make a hire/no-hire decision based on that. But I think that is far too onerous for candidates (and for a big company) to ever have legs.
I think you've got a lot of this right (disclaimer: we've built the product I think you're describing).
I don't think the most important problem is standardisation, though; it's observability/instrumentation, i.e. if you don't measure what's working, you can't improve things.
The very best tech companies measure quite a lot, and often look back at their hiring processes in the event of a mis-hire to figure out what went wrong and how they can avoid the same happening in future... but even then they only do that in exceptional cases because it's done fairly manually. That means they have low statistical significance and a stuttering cycle of learning.
I believe they should be constantly looking at what's working well, for every hire. So that's what we built.
Once your hiring pipeline is trivially visible, a lot of these questions go away. You can see what's working well and try new things in safety, you can optimise with your eyes wide open.
One thing we did straight away was to deprioritise CVs and replace them with written scenario-based questions relevant to the job. Managed properly, that takes your sift stage from a predictive power of around r=0.3 to a performance we typically find above r=0.6. Far fewer early false negatives makes your hiring funnel (a) less leaky, (b) more open to pools of talent previously ruled out by clumsy CV sifting, and (c) potentially shorter, as the improved sift accuracy allows companies to consider dropping their phone interview stage(s).
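To make "predictive power" concrete: the validity of a stage is just the correlation between that stage's scores and eventual on-the-job performance. A minimal sketch with made-up numbers (plain Pearson r, no correction for range restriction):

    import numpy as np
    from scipy import stats

    # Hypothetical data: one row per eventual hire.
    sift_scores = np.array([62, 71, 55, 80, 68, 74, 59, 77])          # scenario-question scores at sift
    performance = np.array([3.1, 3.8, 2.6, 4.2, 3.5, 3.9, 2.9, 4.0])  # manager ratings after 12 months

    # Predictive validity of the sift stage.
    r, p = stats.pearsonr(sift_scores, performance)
    print(f"sift validity: r={r:.2f} (p={p:.3f})")

In practice you'd also want to correct for range restriction, since you only ever observe performance for the candidates you actually hired.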
Our NPS rating for HR teams is currently running at 85, and MRR churn is under 1% so there's clearly some value to the approach.
(Gallup 2013)
In my last company ($100B, publicly traded, extremely data-driven), we interviewed candidates in panels of two (or more, but rarely) against clearly defined criteria, looking for signals in either direction.
During the interview, each interviewer looks for evidence to gather the signal (the stronger the better). The purpose of the interview process is for all the interviewers to gather signals, preferably across all criteria and preferably strong in either direction, though of course bound by the realities of limited time.
Once the interview is over, each interviewer independently jots down the signal strength and the supporting evidence on a scorecard and recommends an outcome.
Later, during calibration, the signals and evidence are presented to the interviewing peer group (recruiter, hiring manager, interviewers from other rounds). This pretty much disallows unconscious bias such as "I don't think Alice would be a good team lead (because she is a woman, and women are not good managers)" or "We should not hire Amit (because he is Indian, and Indians write poor code)".
Again, the examples are deliberately in-your-face, but unconscious bias is unconscious: it only gets checked when you have to defend your perspective to external parties with supporting evidence, which does not happen if there is only a single interviewer.
Think of it as a rubber duck for interviews and biases, keeping your own unconscious bias as an interviewer in check.
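If it helps to picture the mechanics, the scorecard step is roughly this shape (the names and fields here are my own illustration, not the actual tooling):

    from dataclasses import dataclass, field

    # Hypothetical scorecard: each interviewer fills one in independently,
    # before seeing anyone else's; the group then calibrates on the evidence.
    @dataclass
    class Scorecard:
        interviewer: str
        # criterion -> (signal strength: -2 strong no ... +2 strong yes, evidence)
        signals: dict = field(default_factory=dict)
        recommendation: str = "no-hire"

    card = Scorecard("interviewer_1")
    card.signals["system design"] = (2, "walked through sharding trade-offs unprompted")
    card.signals["communication"] = (-1, "could not summarise own design concisely")
    card.recommendation = "hire"

    # Calibration: the evidence, not gut feel, is what gets presented to the peer group.
    for criterion, (strength, evidence) in card.signals.items():
        print(f"{criterion}: {strength:+d} ({evidence})")

The key design choice is that the evidence column is mandatory: a score with no defensible evidence behind it is exactly where bias hides.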
You've explained that your interview process has a predetermined scoring system, which is a good start. I'm curious what the effect of this calibration stage is... did your company run predictivity and bias analysis on it?
Discussing candidates after an interview allows social dynamics within the group to distort the signal, so you reduce the value of taking independent data points. Not only will it not reduce bias in the way you seem to suggest, but you'll also lose some of your ability to reduce random noise, as the noise from more dominant interviewers will be amplified.
I don't have time to dig out citations, but a good starting point would be "What Works - Gender Equality By Design" by Iris Bohnet. She's one of the world's leading academics studying how biases are affected by different hiring techniques.
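The statistical half of this is easy to show, though. A toy simulation (my own illustration, not from Bohnet): averaging n independent ratings shrinks random noise by 1/sqrt(n), but if the panel anchors on a dominant voice after discussion, the shared noise never averages out:

    import numpy as np
    rng = np.random.default_rng(0)

    n_candidates, n_interviewers, noise_sd = 10_000, 4, 1.0

    # Independent scorecards: each interviewer's noise is uncorrelated.
    indep = rng.normal(0, noise_sd, (n_candidates, n_interviewers))

    # Post-discussion scores: everyone partly anchors on one dominant
    # interviewer, so part of the noise is shared across the panel.
    dominant = rng.normal(0, noise_sd, (n_candidates, 1))
    w = 0.7  # how strongly the panel anchors on the dominant voice
    correlated = w * dominant + (1 - w) * rng.normal(0, noise_sd, (n_candidates, n_interviewers))

    print("noise sd of panel average, independent:", indep.mean(axis=1).std())      # ~0.50
    print("noise sd of panel average, anchored:  ", correlated.mean(axis=1).std())  # ~0.72

In this toy model the anchored panel of four recovers only part of the benefit of actually having four raters.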
You just need to be careless once for it to be a problem. And considering how many sterile items you need to use every day - particularly in an emergency - it is not unlikely that one item goes back into the reuse cycle without being sterilised. The single-use rule exists to avoid the question 'was this item sterilised?': if its wrapper is open, assume it has been used. You can only apply that rule safely with single-use items.
How about putting the re-usable item in a wrapper so that the same rule can apply?