Good to see that they are taking a clearly communicated, carefully considered stance on a messy ethical issue. I don't have a strong opinion on this particular case, but it's refreshing that HackerOne is dealing with a question that has no clear best answer in a principled way.
1. there are evil people, but
2. those people frequently have more social power than nice people, and
3. the evil people will use their social power to paint nice people as evil (i.e. "bullying.")
If you're defining the laws for a community or society, or the Terms of Use for a piece of infrastructure for such a community/society to use—then it behooves you to consider that any "hammers" built into your system will mostly be used by those with power against those without it, regardless of which side is "correct."
So: If you let people speak freely, the powerful will shout down the powerless. But if you let people silence others, then the powerful will silence the powerless.
Morally, it really comes down to a choice of which kind of hammer hurts wronged innocent powerless people the least. (Which can often mean offering no hammer that can truly be used to "deal with" obviously-evil people.)
I once looked through the task manager of a corporate-issued laptop and, among the processes included in the disk image IT deploys, saw tasks belonging to very similar companies.
The corporation likely has a license for the software, as well as conditions for all their employees to expect monitoring.
A formalized bug bounty program would help the software producer ship more secure software.
Why exactly is HackerOne drawing a distinction with this software producer? I read the whole article and still miss what the controversy with this producer is.
Is all monitoring software now banned from HackerOne under the guise of a moral high ground HackerOne just created?
>Why exactly is HackerOne drawing a distinction with this software producer? I read the whole article and still miss what the controversy with this producer is.
That's helpful feedback on missing context from our post. Thanks.
This series by VICE articulates the sometimes subtle distinctions between legitimate monitoring software built for enterprises and parents vs this particular software (which they deem "stalkerware").
There are a lot of dubious companies on HackerOne. Why was taking a stance on this one perceived as having a more positive outcome than taking no stance at all?
Pretty much zero of the companies on HackerOne are part of any social responsibility index, shariah-compliant index, or trendy B-corporation index. And even for the non-zero rebuttal, entire dissertations could be written weighing the ethical considerations of doing any business with the vast majority of them.
So why take a stance at all?
The time it takes for the arbitrary nature of your ethical decisions to become apparent is simply longer than it will take for your runway to deplete.
> Why exactly is HackerOne drawing a distinction with this software producer?
The truth is: because an H1 rep went on Risky Business and did not deliver a very good performance.
Patrick, who is absolutely okay with H1 having Five Eyes clients like the US DoD, has a very serious problem with them also servicing an obscure spyware application provider. Because, I suppose, being murder-droned by a panopticon hegemony is much better than getting yelled at by an angry spouse?
The purpose of the DoD is not to spy on people; it is to protect people. That some actions by some programs and departments may cross the line legally during certain periods is not the same as an entity whose sole, or at least primary, goods or services are for, or are marketed as being for, an illegal action.
What's so refreshing about their post is that they admit to not having an unassailable "moral high ground". They highlight the strongest arguments, including those arguing against their decision.
They do this because they recognise that decisions often have competing trade-offs, nuances, ambiguity. There is, unfortunately, almost no recognition of this fact in public these days. You're expected to pick a side, defend it, and attack others, using whatever rhetorical tool is available.
Among those tools of destructive debate: reducing any ambiguity in your favour, i.e. "They are banning Z, and I don't know Z, so I'll argue that Z is like A and banning A would be wrong". Or, equally bad, the slippery slope: "Z may be bad, but you can't give me an algorithm that unambiguously distinguishes Z and Y, nor Y and X, or, by transitivity, Z and A. Therefore, you can't ban Z without also eventually banning A, and that would be bad".
There are plenty of corporate monitoring applications; the majority of them are obvious and the user knows what's happening (at least in my country [UK], any employee would need to be specifically told and would likely have to sign something before it could be used). Monitoring isn't the issue.
The problem is spying. Enabling people to more reliably spy on others is a problem.
This is going to earn me huge downvotes, but not all surveillance is equally illegal or equally unethical.
To me it seems that groups that run spy satellites and look out for nuclear missile launches are in a different ethical category than people who make software for perpetuating domestic abuse.
Clearly, I picked two extremes. That was just to show that not all surveillance is equally bad and that some can be better than others. I will leave the question of which other kinds of surveillance are just or unjust for another discussion.
Patrick Gray interviewed the CTO of HackerOne in the latest Risky Business security podcast [1] on this topic. Patrick is a friend of Alex, but that doesn't stop a hearty debate. Highly recommended podcast for those interested in infosec in general.
> Companies should defer judgement to the courts rather than make arbitrary moral judgements.
Uh, no. Please no. I do not want the courts to arbitrate morality. That's a far, far more dystopian world than one where corporations do (supposing I accept their false dilemma). Companies can, in theory, be created by any person, with any moral alignment. That is not the case with governments (minus authoritarian ones, which, in the context of defining morals, function more or less the same as a company).
Additionally, deferring to the courts also leads to the ever terrible "this is moral because it is legal" and "this is immoral because it is illegal."
There is not a correct authority on morality to which you can defer. You cannot offload such decisions and wash your hands. Any moral decision you make, including deferring to some other moral-decider, is entirely your responsibility.
Note: I'm doing the naughty thing of morals=ethics. I know this is pedantically not the case, but I'm 99% sure that is what the article means. And, in general, this is also what everyone means outside of targeted discussions.
> ...if someone is infected with spyware they're probably better off infected with secure spyware.
I think this is a great ethical issue within the security community. There are many arguments against working for a company you have ethical disagreements with, but that becomes much more grey when it comes to security. Sure, I might not agree with the government's mass surveillance, but wouldn't I rather help the NSA not leave piles of malware sitting around on C&C servers than let it be exposed to even more malicious actors?
Security could use a Hippocratic oath.
> FlexiSPY has not published a vulnerability disclosure policy or committed to no legal action against hackers. Both protective steps would be required should their program be hosted on HackerOne.
I'm surprised HackerOne doesn't have a policy surrounding this already. Are hackers who submit issues to HackerOne not protected?
> We will not take action against them based exclusively on moral judgements.
Hooray, kinda. I think this is a maxim that HackerOne could extend to not making moral judgements relevant at all, and to instead institute policies that reflect HackerOne's current morals. This increases transparency and allows HackerOne to say "We reject you because your company's goals/actions/whatever explicitly contradict our policy that everyone wear unicorn hats on Tuesdays".
> Their business conduct is not in line with our ambition to build a safe and sound internet where the sovereignty and safety of each participant is respected.
I think now would be a good time for HackerOne to write this stuff down. From a very brief look at their site, the only thing I can see relating to this is the tagline "Make the internet safer together." Sovereignty implications can be drawn from that, but having such policies explicitly stated and publicly available not only allows for transparency in decisions, but also works as an advertisement: "Oh hey, this company wants to protect my digital sovereignty, neat!"
Thanks tetrep. I agree with your statement "would be a good time for HackerOne to write this stuff down".
We just discussed it this morning internally. If you have suggestions on how to formulate such a policy, please email me at marten@hackerone.com.
Thinking out loud, HackerOne stands for and supports the security and integrity of every piece of software code, for transparency and openness, for the sovereignty of each human being connected online, and for fair and equitable principles for all online activity. And probably some other aspects that I didn't think of this exact second.
Don't get suckered into trying to write a 'clear set of guidelines' or a 'comprehensive community policy' or whatever they want to call it. 10 times out of 10, the people asking for such things are either looking to pin you on your own texts through language lawyering or are incapable of independent thought - not the sort of people you want to deal with anyway. The whole faux 'justice' (of this sort) rhetoric is just that - the upholding of an illusion of 'fairness', where that 'fairness' is a juvenile understanding of 'equal treatment no matter what', just like those who think that majority decisions are always right because they're 'democratic'.
The correct response is the one given when people tried this trick on the SCOTUS by asking it 'what is porn?'. There, and here, the correct answer is: "I can't define it, but I know it when I see it." This of course is a deeply unsatisfying answer to people who can't (or won't) think for themselves, and doubly so for the aspi types that inhabit the interwebs in disproportionate numbers.
On one hand, you have your right to run your business as you see fit and to respect your principles.
On the other hand you have discrimination of all kinds.
Think about the recent cases of a small baker with strong religious views refusing to create cakes for gay couples.
Think about CloudFlare protecting ISIS sites.
I'm pretty sure the Feds are happy to have traffic to ISIS sites be routed (unencrypted) through an American company's service...
That's a pretty amazing sentence. It illustrates just how messy this whole situation is.
FlexiSpy specifically marketed itself as a tool for spying on your spouse. Their front page used to include "read your partner's sms" https://web.archive.org/web/20060402200643/http://flexispy.c...
https://motherboard.vice.com/en_us/article/inside-stalkerwar...
A consistently applied policy would see ties with ALL surveillance entities severed.
That's a bit like saying the authors of Wordpress perpetuate fake news.
I've used similar products to monitor usage on teenagers' devices, and I can attest to their usefulness far beyond "perpetuating domestic abuse".
[1] https://risky.biz/
I think you misread that part. They require this, so FlexiSPY joining their program would mean the situation would improve.
If anyone has thoughts on this, we are all ears.
Marten