I’m not a fan of comparisons to the Hippocratic oath. The greatest risk to AI ethics is not the ethics of software engineers but the ethics of the software engineering process. By the time tasks are handed to engineers, most of the ethical decisions have been made by product managers, designers, and business stakeholders who are focused on their own goals. Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be. To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.
The Hippocratic oath sounds more altruistic than the alternatives, but good legislation, including business audits and incentives, will have far more impact than a software engineer swearing they won’t be evil.
But ultimately there's always a software engineer involved in the creation of software - and that's not true of any of the other roles you mentioned. Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.
> To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will.
Well, yes - if there were no tradeoffs there would be no point in having an oath to begin with. But there are software engineers today, including some on HN, who do things more harmful and unethical than medical malpractice, and they are personally culpable for the decision to do so - just as their replacements would be if they refused. I would also like to see laws criminalizing those individual engineers' conduct - maybe you're alluding to the same thing? - but an oath is a good start.
So management bears no blame for requiring illegal work be done, on pain of termination? Said another way, engineers now need to be technical and legal experts in the business domain?
(Remember employees in the US depend on the company for health insurance. Saying 'no' could cost a lot more than just one's position.)
Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops sometimes the actual deployment and maintenance itself.
> Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.
That's not true in any way. Lots of software is written by people who don't even have a degree; other software is written by people who have a computer science degree but not an engineering one, etc.
The other issue is that software is rarely unethical. The unethical bit often comes from the way it is used.
And I'd have to agree with OP. In an idealized world you could assume software engineers would all be ready to quit their job at any sign of unethical dealings, even, say, launching something to production with a known vulnerability, or without passing the most rigorous security review process. But in practice you're not going to achieve this result unless you put a framework in place that incentivizes software engineers to act ethically. If you allowed them to sue their employer (and made it likely that they would win) for being asked to build something unethical, for being pressured to continue after saying it was unethical, or for any retaliation against refusing on ethical grounds, then you'd maybe start to see results. Otherwise it won't happen, and you've only created a scapegoat that makes it even easier for companies to push unethical software, since they can now just blame the engineers they coerced into building it anyway.
The problem is, software can be so complex that it's possible to make it so no one programmer can understand the whole picture. The tasks can be divided so that individual programmers are given orders like "do X under condition Y", each of which seems harmless and lawful on its own, but combining them leads to malicious behaviour.
How about a compromise then? Software engineers are responsible for the negative consequences of the software. Any and all responsibility they take is then also shared with every person that is above them in the hierarchy.
Eg a developer does something and society finds this unethical and punishes them. The developer's boss, the boss's boss, the boss's boss's boss etc up to the CEO all get punished in the same way. Furthermore, to avoid companies trying to shield themselves from this by putting their developers into a different company, it will apply to software that you get from someone else too.
Suddenly this doesn't sound very appealing anymore, does it?
I think something like this should already kick in if you create tracking pixels, read canvas data to identify users, or generally work on fingerprinting. Especially if it is for a purpose as "benign" as advertising, an industry that is notoriously toxic and would have no problem selling every kind of data it gets its hands on. It is fine to generalize it that way in my opinion: these techniques directly conflict with the spirit of the law in most countries regarding privacy.
Aside from that, the quantification of attributes/properties of people can have negative implications for many people. Oversharing is a problem on the net, but at least here people just endanger themselves.
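For anyone unfamiliar with the canvas technique mentioned above, here is a minimal sketch in browser TypeScript of roughly how it works. The function name and drawing parameters are my own illustration, not taken from any real tracking library; the point is that identical drawing commands rasterize slightly differently across GPUs, fonts, and antialiasing settings, so a hash of the pixel output doubles as a quasi-stable device identifier that works without cookies:

```typescript
// Minimal canvas-fingerprinting sketch (illustrative names only).
// Draw fixed content, then hash the rendered pixels: the hash varies
// per device/browser stack, not per page load, so it can link visits.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "canvas-unavailable";

  // Identical inputs on every machine...
  ctx.textBaseline = "top";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 100, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint probe", 2, 15);

  // ...but the rendered bytes differ subtly per device; that is the
  // "canvas data" that identifies the user.
  const pixels = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", pixels);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Nothing here asks the user for consent or leaves any visible trace, which is exactly why it sits so badly with the spirit of privacy law.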
> But ultimately there's always a software engineer involved in the creation of software - and that's not true of any of the other roles you mentioned. Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.
Nah, it's not the same at all. The fundamental difference between creating a program and medicine is that creating a program only has to be done once, or at least only by a few.
Medicine, on the other hand, has to be redone with each new patient. If the Hippocratic Oath works to prevent 99.9% of doctors from doing a harmful procedure, then you've hit a home run. Sure, you will never completely stop some bad egg from removing a perfectly good limb because a patient suffering from xenomelia offered enough money. But who wouldn't call a thousandfold reduction a huge win?
We demonstrably have the 0.1% of programmers who are willing to break any oath. They make malware, and will willingly take out Sony as mercenaries because Kim Jong-un got pissed off at a movie. All that 0.1% has to do is write the program once. Thereafter you are not trying to discourage hordes of highly skilled professionals from doing it again; you are trying to stop a legion of dark net operators copying the thing and selling it to anyone. An oath is a waste of time under those circumstances.
1) New regulation forces some branches of software engineering to have some type of oath.
2) Now some software jobs can only be done by oath takers.
3) A well-paid and powerful new caste of software engineers is born.
4) They are highly paid and have a powerful lobby working for them.
5) The oath takers become very picky and only work on jobs with minimal risk. The ones that do screw up have an armada of lawyers, because of course they now have an association with deep pockets.
6) Innovation stalls for a while.
7) Big corps start outsourcing some of the oath-taking jobs. These engineers are not bound by the same regulation. Screw ups happen, people die at some point.
8) Maybe we should have the outsourced engineers also take an oath? Back to square 1
This is exactly what I found happened with medical doctors in Canada (I don't know about the US). I'm not saying doctors are not doing a good job, and I can't imagine the stress and pressure they operate under. But suing for malpractice in Canada can be challenging, to say the least. I have a personal account of a family member who was grossly mistreated, and all the doctor did was change hospitals; nothing more than a slap on the wrist.
https://diamondlaw.ca/blog/how-canadian-law-discourages-pati...
> But there are software engineers today, including some on HN, who do things more harmful and unethical than medical malpractice...
I'm having a hard time trying to find examples of this, outside the field of armament development.
And in those fields where a software failure may result in death, e.g. aircraft development, proof of a software engineer willingly causing it would likely result in jail time already.
The big question is: who has ultimate visibility on the consequences of a particular project? Very frequently software engineers are asked to work on projects where they only know one side of the picture. The executives in the company are the ones who know the ultimate context of what they're doing.
With a combination of local engineers, remote engineers in other countries, mechanical turk and some sleight of hand, I wonder if you could craft a nefarious project where nobody knows the whole picture.
Not sure you understand how software is made. A programmer doesn't decide what to write or when to write it, and they are lucky to be included in how it's made.
Programmers get specs and write programs to match those.
At no point is it the programmer's responsibility to talk about the moral compass of the project and where it fits into society.
An oath to do no harm? You first need to give programmers the power to decide the fate of projects on their own the way only a doctor can decide medicine or treatment.
> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be
I've quit jobs in the past because of ethical concerns about the way in which those above me have been acting. In one case this involved bribery of senior government officials to push through a project that put at risk the privacy of hundreds of thousands of people.
If you go along with shit like that, you're an accomplice and share partial responsibility. As professionals we have a responsibility to stand up for what is right. It's not good enough to fall back to the lazy excuse of "just doing my job".
And I mean, it is good of you to have a moral backbone; unfortunately there are many people behind you who will do the same job in software, and they can be located anywhere in the world.
> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be.
I agree to a very limited extent about the hierarchical nature of a typical corporation, but I also disagree. Software engineers at a certain level of their career and with relatively uncommon skills can pick and choose what companies they want to work at. In my opinion people of good moral character and conscience need to be prepared to refuse to accept a position at companies known to engage in activities against their principles. And further need to be prepared to resign if they are asked to do something clearly unethical.
From my particular specialization in network engineering, I would never accept a role at an ISP in an environment where I had to implement something like the GFW in China, or further walled-garden/censorship of the global Internet. It's directly contradictory to my principles. I sincerely hope that the best and the brightest of my colleagues would never choose to aid and abet internet-fuckery by autocratic regimes. If people from my field look at a project and could reasonably say "Vint Cerf would be really disappointed if he saw me implementing this...", I hope they will choose to walk away.
Being expected to behave ethically is part of being an Engineer, regardless of what your boss expects. Now that we increasingly see real-world negative consequences of the work of software engineers, I don't see why they should be any different.
I think the point is that we can't just rely on individual ethics to enact change. People have bills to pay and kids to feed, if it's all on the man at the bottom to say no then a lot of bad software is still going to be created.
The consequences of the decisions made by Microsoft Presidents and their ilk, since they are the ones who actually decide what software will look like.
But still, from the president's perspective at least, it would be good if a programmer could be scapegoated instead of the president when MS is found to be doing unethical things.
In some Engineering societies, maybe. In general, you may be liable if you construct a building you shouldn't have.
Software is fundamentally different because until you run it, it has no consequences, and even if you run it, it can be contained. I can write a worm and not release it on the world. In that regard, it is more like engineering _plans_. I can draw up plans for a building that is designed to collapse with X number of persons inside -- in fact I can imagine either of those assignments being given as an exercise in university.
No reason to make more laws: it should be immaterial whether I chop down the Christmas tree at the local town square or program a robot to do it.
Well there is one major assumption error there - that negative real world consequences are only linked to negative ethics.
That is so wrong it isn't even funny. If the car had been invented powered by a Mr. Fusion, the buggy whip makers going out of business would still be a negative real-world consequence.
There is the problem that everything can be subdivided into a bunch of menial tasks, so that no cog in the system is aware of its impact. It already happens a lot in software.
The idea of a Hippocratic Oath reminds me of Asimov's Three Laws of Robotics in "The Naked Sun" (SPOILERS ahead): the detective realises that the normally quoted First Law of Robotics ("A robot may not injure a human being or, through inaction, allow a human being to come to harm.") is actually just an approximation. He argues that the real Law is: "A robot may do nothing that, TO ITS KNOWLEDGE, will harm a human being; nor, through inaction, KNOWINGLY allow a human being to come to harm."
This is important because even though robots really try their best, different robots could perform sub-tasks that look very harmless by themselves, but combined kill a human being:
- A robot is instructed to pour this bottle of poison into a carafe of water and then leave the room
- Another robot is instructed to enter the room, take the carafe of water, and give it to a human to drink
The human is poisoned, but neither robot is directly responsible (in the First Law sense). The act of connecting the two dots is the evil deed.
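Translating the two-robot scenario into code makes the point concrete for software. Below is a toy TypeScript sketch (all names hypothetical) of the same failure mode: each function is harmless under review in isolation, and only whoever composes them sees the whole picture:

```typescript
// Each task looks harmless on its own; harm emerges from composition.
interface Carafe {
  contents: string[];
}

// Robot/engineer A's order: "pour this bottle into the carafe, then leave".
function pourIntoCarafe(carafe: Carafe, bottle: string): void {
  carafe.contents.push(bottle); // A never learns what the bottle holds
}

// Robot/engineer B's order: "serve the carafe to the human".
function serveToHuman(carafe: Carafe): string[] {
  return carafe.contents; // B never learns what was poured earlier
}

// Only the orchestrator who wires the two calls together knows the plan.
const carafe: Carafe = { contents: ["water"] };
pourIntoCarafe(carafe, "poison");
console.log(serveToHuman(carafe)); // ["water", "poison"]
```

Neither function would fail a code review on its own; the evil deed lives entirely in the orchestration layer, which is the detective's point.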
> There is the problem that everything can be subdivided into a bunch of menial tasks, so that no cog in the system is aware of its impact. It already happens a lot in software.
Precisely the issue with the original proposal. Would it have mattered whatsoever if the PhDs working on the Manhattan Project had taken a Hippocratic oath?
After reading the article, I feel like this MSFT executive may already know that swearing engineers to "do no harm" is fruitless, but it's still unfortunate that statements like his divert attention from more meaningful proposals.
> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be.
Isn't that what a Hippocratic oath would solve? They'd be accountable to the oath before their bosses, and that would give them reasonable grounds to refuse unethical work.
How do you guarantee that the boss will cooperate with the software engineers and tell them about all the unethical business practices that are currently happening?
Corporate strategists blaming software engineers for the consequences of corporate strategy is a fairly brazen kind of blame-shifting.
A system of ethics within the health system is necessary for customers to retain trust in the health industry. It's also strongly aligned with the selfish interests of the workers who must enact that system of ethics. These properties do not neatly translate to software engineering—mostly because the most difficult ethical dilemmas in technology are rarely obvious when looking at source code. The problems with Facebook (for example) are not always inherent in the code; many are only revealed after deployment at scale, when external groups begin exploiting the system.
> To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.
This is where a strict licensing requirement, like Canada's P. Eng, can empower the engineer. If you think what you're being asked to do would violate your professional ethics, not only can you decline to do it, but you have a system to ensure that you won't just get replaced by someone who will do it.
A licensing system is just a form of regulation, as alluded to in the comment you're replying to.
And in the end, if software engineers are to conduct themselves in moral and ethical ways, they must be empowered to do so without having to sacrifice their personal wellbeing or livelihood. Regulation, it seems, is the only way to achieve that end.
Indeed, it is the same with Chartered Accountants. Management can make whatever decisions they think they can make, but an accountant won't enact things that are unethical, at the risk of their charter and professional standing.
The Hippocratic oath is similarly local in scope. Individual doctors try hard not to cause harm to individual patients, but the medical establishment causes massive amounts of harm, by:
- developing treatments for chronic symptoms instead of curing diseases
- being unprepared for pandemics
- making health care unaffordable except through employer plans
- promoting wrong nutrition guidelines for decades after the evidence was in
and more.
To have good outcomes you need ethics at both individual and system-wide levels.
Agreed, the vast majority of software engineering disasters can be laid at the feet of bad management, not software engineers. Let's clean house in management first, just like Deming did when he straightened out Ford.
To me this argument sounds a bit like an "only following orders" defence. I'm sure smart engineers (i.e., all of them) can figure out what the application of their work will be.
The oath should be for everybody involved in the process.
After the 2008 mortgage crisis, the Netherlands required everybody working at banks to take the banker's oath, which is mostly about balancing the interests of the 4 main stakeholders of the bank: shareholders, customers, employees, and society. It's pretty broad, and it doesn't magically fix everything, but it does make everybody more aware of their responsibilities. Maybe software companies should require something similar, where everybody needs to be aware of their responsibilities towards, well, primarily user data, I guess. And that goes for not just software engineers themselves, but for everybody involved in the process.
I think people assume the Hippocratic oath actually makes a difference. It's culturally important to many doctors of course, but how often has it hindered the military or the CIA from hiring enough doctors to facilitate a new interrogation program? Many of the illegal human experiments that happened in the US had doctors working on them as well. It's a good guiding principle, but the idea that it would actually make an impact of any kind is debatable. It doesn't matter if 90% of coders follow the oath if 10% or even 5% are enough to handle the demand for oath breaking.
Agreed. Let's look at it from a simple empirical standpoint: where does the problem mostly arise, and who has the power to fix it?
Organizations that went through a true iterative process to reduce failure rates, like NASA, figured out that they needed to grant real authority to specific domain experts to blow the whistle without facing reprisal or suffering for it. Oaths fix nothing; you need organizational change, and if someone is going to drive that, it's the management in charge.
> To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.
If we were to introduce an oath we would have to take further inspiration from doctors, e.g. having a certification required to do the job, or, failing that, at least having an industry-wide union/guild protecting the position.
Kind of funny to hear this from a company known for harvesting tons and tons of telemetry from customers, with no true way to fully opt out, which is pretty damn unethical and probably feeds their AI.
not to mention that software is reusable: a technology might be invented with all the good intentions and ideals... and later be used for evil purposes.
I agree, however not all ethics can be legislated.
This sort of thing works in other professions like medicine because malpractice can cause doctors to lose their license. Same with civil engineers. This changes things because the choice is now quitting or possibly never being able to work in the field again.
Perhaps principal software engineers in charge of life-or-death software should be licensed for accountability, as an "engineer of record".
>To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.
Obviously, for a Hippocratic-type oath to work, you need the same kind of system in place for qualifying engineers that you have for doctors, not allowing anyone to work as an engineer who has failed the ethics board.
Here's a perspective from someone who became a software dev in my late 20s after other jobs. Relatively speaking, if you can write working code and show up on time, you have a lot of leverage in getting and keeping employment, more than in most other middle to upper-middle-class professions. This means org politics has less effect on you, and there's less incentive to get involved.
As a dev you can, but don't have to, think as much about the politics and operations of your org for all kinds of reasons. You are relatively harder to replace so internal politics tends to matter less, and if the org makes decisions you don't like, you can be confident that you can leave and find something else versus the long and often unfruitful process of trying to change an org from within.
Politics can be, and often is, messy. How often have you heard something like "I just want to build things" (it's how I feel, for sure)? If you can get paid well to do that, why get involved with a messy decision-making process?
> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be. To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will. That isn’t setting us up for success.
The Nazi officers who committed most of the atrocities used similar arguments: "I was just following orders!"
I expect better from a software engineer on Hacker News. You've single-handedly convinced most here, through your weak logic, that such an oath is necessary.
This assumes the vast majority of engineers are easily replaceable. Given that several tech companies prefer the chance of not hiring a good fit over the chance of needing to fire someone, I think the cost to a company if a significant subset of its engineers declined unethical work (especially engineers who at this point are high enough to have a good idea of the broad scope of the software) would make it difficult to do what you suggest.
A software engineer is more like a chemist working for the pharmaceutical industry than a doctor treating patients. And chemists typically don't have a Hippocratic Oath. Pharmacists sometimes have their own version, but it is mostly about giving good advice to patients and respecting them as human beings.
But it doesn't stop the pharmaceutical industry from being heavily regulated, and while their business practices are often criticized, the drugs that come out of it are generally safe and effective. Many countries also have regulations making important drugs (ex: vaccines) accessible to everyone.
Exactly. An oath means nothing: what you need is skin in the game.
Some health practitioners are literally bought by Big Pharma, by their hospital accountant, etc. How would an oath fix that? Same with engineers or any other discipline.
You need to make sure that everyone in the process has skin in the game. For me it's less about control (legislation) than about responsibility and accountability (assessments, eating your own dog food).
How about a Hippocratic Oath for business leaders? This is shifting the responsibility from management towards the engineers. It's not the engineers who pulled the trigger at Facebook - or Microsoft. They build the weapons. Management fires them.
This is a Hypocritic Oath. If somebody is acting unethically at MS, then it is management. Think of all the innovation that is not happening because MS is abusing its position. Twice they have killed a universal software platform to preserve theirs: Java and websites. Ironically, they are pushing websites now that the platform has shifted to mobile with Objective-C and Google's variation of Java.
> According to Brad Smith, just like it is the Pope’s job to bring religion closer to today’s technology, it is the software developer’s job to bring technology closer to the humanities.
The Pope is to religion as the President of the biggest software company is to software development. It is his responsibility, not theirs. Or does he see himself as that software developer? I guess it is more a Ballmer "developer" and he means software engineers.
He could start by handing out software licenses / EULAs that accept full responsibility for any damage the software causes, like any other sold product has to. Then, through business processes, management will take care of the ethical issues to minimize risk.
Building weapons is immoral? Tell that to the WW2 industrial complex that supported the war.
Not building weapons for the war effort is not always right. That is an intentional double negative because I think it's the most clear if you read it twice. Building weapons for the war effort is sometimes right would be the boolean negative of that statement.
> Microsoft executives have literally decided to build actual weapons.
Yep. Literally they did. Clearly all US weapons are evil in your opinion because you disagree with all US weapon usage I'm guessing? You have to combine the argument that they are literally making weapons with the fact that those weapons are being used in a way you don't agree with.
Keep in mind that most of these advanced weapons they are literally making are not designed against the current wars you most likely disagree with. They are built, to include AI, to keep pace with advanced threats from other countries. Allowing us to fall behind technologically, due to perceived moral black/white issues of current wars, could lead to a whole new world in 40 years as you make your arguments in a well protected environment. Not researching advanced topics will lead to an asymmetric fight... not in our favor... if the enemy so chooses.
Reference our usage of nuclear weapons. If you think that was evil, then you wouldn't want an evil country / group of people to gain such an asymmetric advantage. If you think it was necessary, then you want to have an asymmetric advantage when it is necessary against an evil group. Yes I recognize the inherent cyclical issue with the above statement. Either way, allowing all people to gain an asymmetric advantage while we just discard all research in hopes that others will follow is ignorant of history - war theory is a thing.
Crocodile tears, and a lot of cheap virtue signaling.
I have some friends who have worked at MSFT for a long time, about 20 years or so. There was a time when they used to talk about open source as if it was cancer (~2011). When MSFT started embracing the cancer, they didn't really up and leave. Now they are all talking about how great this open source thing is.
But even funnier was when they used to complain about Google's rampant user tracking. And then one day they added targeted ads into Windows 10. Did these people suddenly decide "enough is enough" and go and join the EFF? You already know the answer to that.
If you’re in management at an engineering dept/co and make decisions about what is going to be engineered and how that’s going to be deployed you are in engineering yourself and should obviously take the oath yourself.
Not saying I’m in favor of this oath, just that it seems silly to distinguish different roles in the engineering process.
I don't think the exercise of drawing the line between "engineering" and "not engineering" is a useful one here. The actual decisions and the pressure to perform for the job crosses disciplines up at the top of the management hierarchy.
The broader point is that in most companies engineering decisions don't come purely from the engineering department. They are often decisions made as part of bigger projects or efforts. For example, it's probably not up to engineers in most companies whether any of the tech giants sell to the military. If it is, it's up to people who were engineers at some point and might still exist up at the top of the "product" part of the company, but who for all intents and purposes stopped writing any code, or even managing anyone who writes code, a long, long time ago.
Doctors no longer take the Hippocratic Oath (because it is incompatible with a lot of difficult situations doctors are placed in).
But more to the point: Why are we trying to shift focus from the wrongs large multi-nationals do to individual software engineers? Plus what would the result be if this "oath" conflicts with a manager's instructions?
Maybe we should start with Microsoft, Google, Comcast, Oracle, and similar taking an oath to do no harm, before we push engineers under the bus for not fighting hard enough against what they're ordered to do.
This absolutely reeks of the corporate effort to undermine action on climate change, or environmental policy in general: advertising campaigns being run with a focus on individual responsibility to consume less, save water, etc. as a way to divert attention from systemic action, which would have an impact by controlling the biggest contributors.
While it is often referred to as "The Modern Hippocratic Oath", I would argue the Lasagna oath contains significant differences from the original Hippocratic oath, and it is worth treating them as separate things.
P.S. I remember because I was like "Mmmm, lasagna..." both times.
Your first sentence contradicts your second. You also pocket-quoted the original post removing key context:
> Doctors no longer take the Hippocratic Oath (because it is incompatible with a lot of difficult situations doctors are placed in).
The original Hippocratic Oath is no longer used (as both the original post and you yourself readily admit). Why it is no longer used is highly relevant to this discussion, because someone is calling for a Hippocratic Oath-like thing in a different area.
The fact doctors have moved to a less idealized Hippocratic Oath should be a historical lesson, not something we should seek to emulate.
> Why are we trying to shift focus from the wrongs large multi-nationals do to individual software engineers?
Because software engineers are the ones doing the actual work. By imposing an ethical standard on the people doing the real work, the multinational executives then either accept that limitation or knowingly accept risks from intentionally violating the spirit of that limitation.
> accept risks from intentionally violating the spirit of that limitation.
What risk? Since it's the engineers, not the executives that take any oath of professional conduct, there wouldn't be any risk for any executive. All this would get us is a legal framework for throwing engineers under the bus via an ethical commission if they do something silly like blowing the whistle on an unethical decision higher up in the hierarchy.
I'm very much for greater personal responsibility in the field of software engineering. Until not too long ago, I used to work in a field (medical equipment) where accepting the burden of potentially catastrophic mistakes came with the job. But I also know -- based on that same experience -- that personal accountability is meaningless without the organizational ability to act on it.
Unless this hypothetical "Hippocratic Oath for engineers" is backed by a "Hippocratic Oath for executives", a "Hippocratic Oath for product managers", and "Hippocratic Oath for engineering managers", (edit: or by a legal framework that requires companies to enable it) all it'll do is reduce the PR effort involved in cleaning up a mess like Volkswagen's emission test scandal to pretty much zero by providing an exceptional -- and very mythical-sounding! -- framework for scapegoating.
The people who actually decide to do these unethical things are the executives, not engineers. Executives are the ones who should be taking Hippocratic oaths.
I'm sure when someone gives you the choice between continued employment and following the ethical standard you'll choose to follow the ethical standard.
I mean, don't get me wrong, there are clear scenarios where I think many of us would choose to lose the job. For instance, I'll go unemployed vs directly causing someone to die. But those slightly more ambiguous scenarios are where we need to be enforcing it on a legislative level and the onus should be on ALL levels of the company (engineering and management).
This is not realistic. Any H-1B visa holder who doesn't do what they are told will be fired and must move their family back to their country of origin.
SWE would need to first unionize to protect workers from being fired/deported for pushing back before anything like this is even considered. Or tackle the problem where it begins: with the organization.
No, we are merely the technical hands of our management's will. AS IF WE HAVE A CHOICE WHAT WE DEVELOP! This is an asinine attempt to blame those not in control. The managers and the owners and the corporate board need to have this Ethics Oath, and it needs to be enforced by law.
This seems out of touch with the reality of these large firms. Management is pretty distributed, decision-making is distributed, and your manager is likely also an engineer, or used to be one. Many people work on tools that are general-purpose, in different levels of the stack. Someone working on keeping the data center running doesn't deal with what the applications actually do.
Also, people change positions fairly often and avoid work in areas they consider to be unethical. This means that the people in a position to actually make the call are people who don't find it unethical, because the people who would be concerned about it avoided the whole area. Like, if you don't want to work in ads, you'll probably find a job in some other division, or at least not directly on something you consider unethical. The people who came up with AMP (to pick something controversial on Hacker News) were true believers who sold it to management.
But people still care about the company's reputation as a whole, and as a result you get conflict between the people not actually working on the controversial thing and the people who are, but that mostly results in a lot of drama and cynicism.
The politics is complicated. I can't think of a generic oath that you couldn't rationalize your way out of.
Ironically, the "don't seduce people" bit that got axed is actually still enforced and is one of the most common reasons doctors get their licenses revoked (more specifically, sexual misconduct).
Depends how you define evil. Is advertising evil? Then Facebook and Google engineers are committing evil. Is working with repressive governments evil? Then Apple and Microsoft engineers are committing evil. That's a sizeable chunk of engineers right there.
> Why are we trying to shift focus from the wrongs large multi-nationals do to individual software engineers?
This "ethics" business doesn't make sense because the stated goals of the people pushing this idea aren't their actual goals. The stated goal of this "ethics" push is to reduce the harm done by software to society. As you point out, it won't work. The actual goal of the "ethics" push is to entrench a certain politically-contentious ideology in tech by branding this ideology as "ethics" and thereby making it immune to criticism.
The only legitimate binding code of ethics is the law. If a practice is harmful, we can all talk about it together and agree to enact a law against it. The "ethics" people are trying to use elevated moral rhetoric to bypass this democratic process, and we shouldn't let them get away with it.
Lol. I'm married to an MD. Doctors definitely still take the Hippocratic Oath. Perhaps some places no longer do it, but I am not aware of any of her peers who have not taken the Hippocratic Oath.
Because it's an end run. The idea being that if software developers had this as a whole, then corporations could not force them to do whatever was then decided as against the oath. But it amounts to nothing more than virtue signaling.
The issue with a software developer Hippocratic Oath is that you can damn well bet it will be subject to the whims of whoever is loudest on social media, or whatever political group wants to use it to damage the other party.
The Hippocratic Oath is protected by history and pretty much settled in its interpretation, but any such oath or rule today is not worth the page it is printed on.
It's hard to see this as anything but a way for executives to foist responsibility upon software engineers when things go wrong (and of course, claim credit and profit when they go right) as other commenters have pointed out.
That said, this might actually work! If a software engineer can suffer personal harm by working for a business with iffy ethics, then they are incentivized to play it safe by avoiding working for those types of businesses -- thus correcting the market by internalizing the externalities. I doubt anyone would work for Facebook in a world with a Hippocratic Oath for Software Engineers that has real teeth.
Put another way: pointing to decision makers instead of individual engineers is a simple rephrasing of the Nuremberg defense, "I was just following orders!" It is obvious that we should hold leaders accountable. The question here is whether we hold individual software engineers accountable too (they're not mutually exclusive) and the answer is probably yes.
Wouldn't offshoring lower liability? If you can blame the developer, why not offshore that, or better, outsource it and remove any responsibility from the company?
The ACM Code of Ethics and Professional Conduct has been around for a while and surely is a good start. But if you read through it, you'll quickly come to the conclusion that unless leadership buys into it your only real option is to quit your job if asked to do something you shouldn't.
That's the whole point of a professional code of ethics. If you want to protect people for behaving ethically, that's the government's job (cf. whistleblower laws et al.).
This sits a layer down in the defense-in-depth stack. And the idea is that if there's a recognized code, and consensus on what constitutes a violation, that employers will conform because if they don't they'll risk not just one "activist" employee leaving but most of them, out of a shared sense of communal ethics.
Would it work? No idea. My experience is that software people tend to be pretty squishy on matters of personal ethics.
The way I see it, the likely, big ethical issue with AI isn't some Terminator/Butlerian Jihad scenario, or even mass surveillance - it's that the wealth created by the technology will benefit the already wealthy much, much more than everyone else. This is generally true of most technology and even more so of the software industry (high scalability, "low" headcount, IP makes profit shifting and tax evasion trivial). But AI is unique in potentially enabling mass unemployment at a rate and scale never seen before in human history. This should be a joyous event, since humanity is finally free to enjoy the fruit of its labor without the labor part - but the way the world works today means most of us would not be allowed any bites of that fruit anymore.
When Bezos fires every single warehouse employee, what happens if the job they start retraining for also gets automated away before they can even start? And the next one, and the next one. If nobody is making a salary anymore, then it doesn't matter how much lower the prices are on Amazon (due to being produced in automated factories and shipped from automated warehouses) unless Jeff decides to reduce those prices all the way down to 'free'. At that point the assumptions underlying the world's economy would break down in a way that makes a corona shutdown look like a mild hiccup.
No software engineer is going to be able to do anything to help alleviate this. If you want to do something about this, you need to go into politics, not tech.
> There are no common ethics codes to determine how lethal autonomous weapons and systems that are developed for the military should be used once they end up in the hands of civilians.
It's interesting to me that this just presumes developing these autonomous weapons systems in the first place is ethical. I understand there is a difference of opinion on this ethical point, but it immediately frames the discussion pretty far away from the Hippocratic oath's requirement to abstain from causing harm.
"It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter."
> pretty far away from the Hippocratic oath's requirement to abstain from causing harm
So do abortion and euthanasia, and probably plenty of other practices as well. Both of those are without doubt harm-causing practices, with their related points of controversy primarily revolving around whether the harm that is caused is worthwhile in the context of the alternative being a potentially greater harm.
Putting aside the fact that the Hippocratic oath is not actually a relevant part of modern medicine (modern doctors are accountable to comprehensive, codified sets of ethics), the fact that there is no such thing as a set of common ethics by which people choose to live their lives kinda points out the futility of this idea.
One person could say developing weapons is bad because they cause harm; another could say it's good because they can be used to reduce harm that would otherwise have been caused. Who's right? Neither of them. That's just two people with different opinions. I would personally suggest that establishing moral authorities like this can often be harmful, because, lacking any objective truths, it's a topic people should generally be left to make up their own minds about.
Am I right or wrong? Who’s to say? I’m just a person with an opinion, and so is anybody who would want to agree or disagree with me.
I like the way you challenge the framing, and I agree, the right first question is "should we develop lethal autonomous weapons at all, and if so what kind is ok, what's the limit on that".
The way it's asked looks like an attempt to shift the Overton window until autonomous weapons of all kinds are treated as a mundane inevitability not worth worrying about, with just the niggling details subject to ethical questioning.
But big shifts like that are exactly the sort of thing serious ethical codes should be used to watch out for. Not the niggling details afterwards.
Well, for one, autonomous weapons are already here. Landmines, for example. They go back to the stone age, even, with snares; even a mere /rope/ is an autonomous weapon.
There is no human in the loop (no pun intended for snares). It decides when to strike using physics, and the answer is always "yes" if it is triggered.
What makes the new "autonomous" weapons different is that they attempt target differentiation. Mobility becomes useful when weapons systems can say "no" when presented with a target. Even the military-industrial complex, purveyor of unneeded bullshit that wantonly takes lives, would find it impossible to sell a drone that goes around shooting missiles at all targets after launch.
So, reading the comments: I am on the side of having business owners take some kind of oath, not software engineers. If a software engineer was programming IE around the time that MS was hit by regulators years ago, and this "software engineer" oath was in place, would the software engineers being asked to code a web browser be at fault for the wrongdoings of the company?
It just seems wrong for someone that has been asked to "write a web browser" to be at fault for anything.
What about someone asked to code up a voice prompt on something that answers the phone for a telemarketing company, and said company later uses the code to do illegal spam robocalls instead of what they told the developer they were doing with it?
What about a software developer who writes code to turn a sprinkler system on and off by phone, and is later convicted because that code was used as the trigger for a bomb that blew up a building?
What about a software developer who writes code that matches human faces for the purpose of automatically unlocking the door at his home, ends up open-sourcing it, and is then arrested because that software was deployed on a drone that murders specific people?
What about a software developer who is asked to install the above face-recognition system on a drone, is told it's designed to take pictures of people it knows at a birthday party, and the payload is later switched out to trigger a machine gun?
There are not two sides, because both management and engineering should face consequences for building harmful software. And I'm not sure it's appropriate to test these "what ifs" as if looking for logical flaws in such a system, because a well-functioning court and jury should easily be able to answer these questions, and I'm sure you had the "correct" answers in mind when you wrote them. Whether or not our courts and juries are currently well-functioning enough to support something like this is a different question, but we should strive for it.
In any case, we already have laws like this! And they're not controversial! If you commit war crimes by killing people with a shovel under the orders of your superiors, both you and your superiors are responsible, but the manufacturer of the shovel obviously isn't responsible for what you did unless they advertised the shovel's skull-bashing capabilities.
But what is harmful software? Today, while there are arguments about facebook being too addictive, it's really just competition between platforms, it's not everyone's opinion that the software engineers at facebook should be liable at this point for making an addictive algorithm. At some uncertain point in time, there's the possibility that facebook will be sued for the addictiveness and lose. Should that happen, I don't see why the software engineers should be partially to blame.
> a well-functioning court and jury should easily be able to answer these questions
I'm going to ignore the qualifier of 'well-functioning', which is obviously up for debate. The process of being charged with a crime, being put on trial, wondering at the consequences of the outcome, etc., is no joke. It is a tremendously time-consuming, expensive, and stressful process, and even if you are acquitted there is no undoing the damage that has been done. There's a reason doctors spend huge chunks of their income on malpractice insurance, and if we decide that engineers need the same protection in case they get sued, then the biggest beneficiary is going to be the insurance companies.
If insurance companies also had to sign oaths we might make some progress, but the nature of their game is to spread the risk, which is to say they take money from a lot of people and hope they never have to pay them back. There's only so much regulation can do about that.
It shouldn't just be thrown out as "well, if you make a decision in good faith then you are sure to win your court case". It is not a reasonable burden to put on someone who cannot anticipate all the possible outcomes of decisions they make.
> you're not going to achieve this result unless you put a framework in place that incentivizes software engineers to act ethically
Until then, management falls on the sword, thanks.
> As professionals we have a responsibility to stand up for what is right.
There is no point wasting energy and time around such people if they don't share the same values.
It's not complicated (it requires some networking) to find, in any org, the characters who will "do whatever it takes".
Then getting them kicked out, opposing them, sidelining them, subverting them, or avoiding them are all choices every Engineer has.
What percent of software engineers does this statement apply to?
Software is fundamentally different because until you run it, it has no consequences, and even if you run it, it can be contained. I can write a worm and not release it on the world. In that regard, it is more like engineering _plans_. I can draw up plans for a building that is designed to collapse with X number of persons inside; in fact, I can imagine either of those two assignments being given as an exercise at university.
No reason to make more laws: it should be immaterial whether I chop down the Christmas tree at the local town square or program a robot to do it.
That is so wrong it isn't even funny. If the car had been invented powered by a Mr. Fusion, the buggy whip makers going out of business would still be a negative real-world consequence.
The idea of a Hippocratic Oath reminds me of Asimov's Three Laws of Robotics in "The Naked Sun" (SPOILERS ahead): the detective realises that the normally quoted First Law of Robotics ("A robot may not injure a human being or, through inaction, allow a human being to come to harm.") is actually just an approximation. He argues that the real Law is "A robot may do nothing that, TO ITS KNOWLEDGE, will harm a human being; nor, through inaction, KNOWINGLY allow a human being to come to harm."
This is important because even though robots really try their best, different robots could perform sub-tasks that look very harmless by themselves, but combined kill a human being:
- A robot is instructed to pour this bottle of poison into a carafe of water and then leave the room
- Another robot is instructed to enter the room, take the carafe of water and give it to a human to drink
The human is poisoned, but neither robot is directly responsible (in the First Law sense). The act of connecting the two dots is the evil deed.
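To make that concrete, here is a minimal Python sketch (every name here is hypothetical, purely illustrative) of why per-robot harm checks don't compose: each robot evaluates the "to its knowledge" clause against its own world model, and both checks pass, even though the merged knowledge clearly predicts a death.

    def first_law_veto(action, world_model):
        # A robot vetoes an action only if ITS OWN knowledge says it harms a human.
        return world_model.get(action, "unknown") == "human_harmed"

    # Robot A knows about the poison, but its task ends before any human appears.
    model_a = {"pour poison into carafe": "carafe_poisoned"}

    # Robot B knows a human will drink, but believes the carafe holds pure water.
    model_b = {"serve carafe to human": "human_drinks_water"}

    assert not first_law_veto("pour poison into carafe", model_a)  # A proceeds
    assert not first_law_veto("serve carafe to human", model_b)    # B proceeds

    # Only a model that merges what both robots know predicts the harm:
    combined = dict(model_a)
    combined["serve carafe to human"] = "human_harmed"  # the carafe was poisoned
    assert first_law_veto("serve carafe to human", combined)  # too late to matter

The same failure mode applies to the oath debate: per-task ethics checks don't compose when no single worker sees the whole pipeline.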
Precisely the issue with the original proposal. Would it have mattered whatsoever if the PhDs developing the Manhattan Project had taken a Hippocratic oath?
I feel like this MSFT executive may already know that swearing engineers to "do no harm" is fruitless after reading the article, but it's still unfortunate that statements like his divert attention from more meaningful proposals.
Isn't that what a Hippocratic oath would solve? They'd be accountable to the oath before their bosses, and that would give them reasonable grounds to refuse unethical work.
A system of ethics within the health system is necessary for customers to retain trust in the health industry. It's also strongly aligned with the selfish interests of workers who must enact that system of ethics. These properties do not neatly translate to software engineering—mostly because the most difficult ethical dilemmas in technology are rarely obvious when looking at source code. The problems with Facebook (for example) are not always inherent in code; many are only revealed after being deployed at scale and external groups begin exploiting the system.
This is where a strict licensing requirement, like Canada's P. Eng, can empower the engineer. If you think what you're being asked to do would violate your professional ethics, not only can you decline to do it, but you have a system to ensure that you won't just get replaced by someone who will do it.
And in the end, if software engineers are to conduct themselves in moral and ethical ways, they must be empowered to do so without having to sacrifice their personal wellbeing or livelihood. Regulation, it seems, is the only way to achieve that end.
Software engineers are accountable to themselves before they are accountable to their bosses.
Yet the medical system, individual oaths and all, still gave us:
- developing treatments for chronic symptoms instead of curing diseases
- being unprepared for pandemics
- making health care unaffordable except through employer plans
- promoting wrong nutrition guidelines for decades after the evidence was in
and more.
To have good outcomes you need ethics at both individual and system-wide levels.
https://en.m.wikipedia.org/wiki/W._Edwards_Deming
After the 2008 mortgage crisis, the Netherlands required everybody working at banks to take the banker's oath, which is mostly about balancing the interests of the four main stakeholders of a bank: shareholders, customers, employees, and society. It's pretty broad, and it doesn't magically fix everything, but it does make everybody more aware of their responsibilities. Maybe software companies should require something similar, where everybody needs to be aware of their responsibilities towards, well, primarily user data, I guess. And that goes not just for software engineers themselves, but for everybody involved in the process.
It seems to me that if everyone has to promise to do no evil, the meaning of such oaths would become diluted.
Organizations that went through a true iterative process to reduce failure rates, like NASA, figured out that they needed to give specific domain experts real authority to blow the whistle without facing reprisal or suffering for it. Oaths fix nothing; you need organizational change, and if someone is going to make that change, it's the management in charge.
If we were to introduce an oath, we would have to take further inspiration from doctors, e.g. having a certification required to do the job, or, failing that, at least an industry-wide union/guild protecting the position.
This sort of thing works in other professions like medicine because malpractice can cause doctors to lose their license. Same with civil engineers. This changes things because the choice is now quitting or possibly never being able to work in the field again.
Perhaps principal software engineers in charge of life or death software should be licensed for accountability, “engineer on record”.
Obviously, for a Hippocratic-type oath to work, you need the same kind of system for qualifying engineers that you have for doctors, and you cannot allow anyone who failed the ethics board to work as an engineer.
They are responsible to their bosses as well as to the public, who are the end users of their designs/products.
This was not always the case.
How the hell did we let that happen?
As a dev you can, but don't have to, think as much about the politics and operations of your org for all kinds of reasons. You are relatively harder to replace so internal politics tends to matter less, and if the org makes decisions you don't like, you can be confident that you can leave and find something else versus the long and often unfruitful process of trying to change an org from within.
Politics can be, and often is, messy. How often have you heard something like "I just want to build things"? (It's how I feel, for sure.) If you can get paid well to do that, why get involved with a messy decision-making process?
The nazi officers who committed most of the atrocities used similar arguments. "I was just following orders!"
I expect better from a software engineer on Hacker News. You've single-handedly convinced most here - through your weak logic - that such an oath is necessary.
A software engineer is more like a chemist working for the pharmaceutical industry than a doctor treating patients. And chemists typically don't have a Hippocratic Oath. Pharmacists sometimes have their own version, but it is mostly about giving good advice to patients and respecting them as human beings.
But it doesn't stop the pharmaceutical industry from being heavily regulated, and while their business practices are often criticized, the drugs that come out of it are generally safe and effective. Many countries also have regulations making important drugs (ex: vaccines) accessible to everyone.
Some health practitioners are literally bought by Big Pharma, by their hospital accountant, etc. How would an oath fix that? Same with engineers or any other discipline.
You need to make sure that everyone in the process has skin in the game. For me it's less about control (legislation) than about responsibility and accountability (assessments, eating your own dog food).
This is a hypocritical ode. If somebody is acting unethically at MS, it is management. Think of all the innovation that is not happening because MS is abusing its position. Twice they have killed a universal software platform to preserve their own: Java and the web. Ironically, they are pushing the web now that the platform has shifted to mobile, with Objective-C and Google's variation of Java.
>According to Brad Smith, just like it is the Pope’s job to bring religion closer to today’s technology, it is the software developer’s job to bring technology closer to the humanities.
The Pope is to religion as the president of the biggest software company is to software development. It is his responsibility, not theirs. Or does he see himself as that software developer? I guess it is more a Ballmer-style "developer", and he means software engineers.
He could start by handing out software licenses / EULAs that take full responsibility for any damage the software causes, like any other sold product has to. Then, through business processes, management will take care of the ethical issues to minimize risk.
Microsoft executives seem more in need of lessons in ethics than their engineers. Just one example from last year:
>'We did not sign up to develop weapons' say Microsoft employees protesting $479 million HoloLens army contract
https://www.pcgamer.com/we-did-not-sign-up-to-develop-weapon...
>They build the weapons
Talking of weapons, while we speculate about what AI might be used for, Microsoft executives have literally decided to build actual weapons.
Not building weapons for the war effort is not always right. That is an intentional double negative, because I think it's clearest if you read it twice. "Building weapons for the war effort is sometimes right" is the logically equivalent form of that statement: not-(always not-X) reduces to sometimes-X.
>Microsoft executives have literally decided to build actual weapons.
Yep. Literally they did. Clearly all US weapons are evil in your opinion, because you disagree with all US weapon usage, I'm guessing? You have to combine the argument that they are literally making weapons with the fact that those weapons are being used in a way you don't agree with.
Keep in mind that most of these advanced weapons they are literally making are not designed against the current wars you most likely disagree with. They are built, to include AI, to keep pace with advanced threats from other countries. Allowing us to fall behind technologically, due to perceived moral black/white issues of current wars, could lead to a whole new world in 40 years as you make your arguments in a well protected environment. Not researching advanced topics will lead to an asymmetric fight... not in our favor... if the enemy so chooses.
Reference our usage of nuclear weapons. If you think that was evil, then you wouldn't want an evil country / group of people to gain such an asymmetric advantage. If you think it was necessary, then you want to have an asymmetric advantage when it is necessary against an evil group. Yes I recognize the inherent cyclical issue with the above statement. Either way, allowing all people to gain an asymmetric advantage while we just discard all research in hopes that others will follow is ignorant of history - war theory is a thing.
I have some friends who have worked at MSFT for a long time, about 20 years or so. There was a time when they used to talk about open source as if it was cancer (~2011). When MSFT started embracing the cancer, they didn't really up and leave. Now they are all talking about how great this open source thing is.
But even funnier was when they used to complain about Google's rampant user tracking. And then one day they added targeted ads into Windows 10. Did these people suddenly decide "enough is enough" and go and join the EFF? You already know the answer to that.
Not saying I’m in favor of this oath, just that it seems silly to distinguish different roles in the engineering process.
The broader point is that in most companies engineering decisions don't come purely from the engineering department. They are often decisions made as part of bigger projects or efforts. For example, it's probably not up to engineers in most companies whether any of the tech giants sell to the military. If it is, it's up to people who were engineers at some point and might still exist up at the top of the "product" part of the company, but who for all intents and purposes stopped writing any code or even managing anyone who writes code a long, long time ago.
But more to the point: Why are we trying to shift focus from the wrongs large multi-nationals do to individual software engineers? Plus what would the result be if this "oath" conflicts with a manager's instructions?
Maybe we should start with Microsoft, Google, Comcast, Oracle, and similar taking an oath to do no harm, before we push engineers under the bus for not fighting hard enough against what they're ordered to do.
You have incorrect information. An abridged or modernized version of the oath is still taken upon graduation at most American MD schools.
While it is often referred to as "The Modern Hippocratic Oath", I would argue the Lasagna oath contains significant differences from the original Hippocratic oath, and it is worth treating them as separate things.
P.S. I remember because I was like "Mmmm, Lasagna...." both times.
> Doctors no longer take the Hippocratic Oath (because it is incompatible with a lot of difficult situations doctors are placed in).
The original Hippocratic Oath is no longer used (as both the original post and you yourself readily admit). Why it is no longer used is highly relevant to this discussion, because someone is calling for a Hippocratic Oath-like thing in a different area.
The fact doctors have moved to a less idealized Hippocratic Oath should be a historical lesson, not something we should seek to emulate.
Because software engineers are the ones doing the actual work. By imposing an ethical standard on the people doing the real work the multinational executives then either accept that limitation or knowingly accept risks from intentionally violating the spirit of that limitation.
What risk? Since it's the engineers, not the executives that take any oath of professional conduct, there wouldn't be any risk for any executive. All this would get us is a legal framework for throwing engineers under the bus via an ethical commission if they do something silly like blowing the whistle on an unethical decision higher up in the hierarchy.
I'm very much for greater personal responsibility in the field of software engineering. Until not too long ago, I worked in a field (medical equipment) where accepting the burden of potentially catastrophic mistakes came with the job. But I also know -- based on that same experience -- that personal accountability is meaningless without organizational accountability.
Unless this hypothetical "Hippocratic Oath for engineers" is backed by a "Hippocratic Oath for executives", a "Hippocratic Oath for product managers", and "Hippocratic Oath for engineering managers", (edit: or by a legal framework that requires companies to enable it) all it'll do is reduce the PR effort involved in cleaning up a mess like Volkswagen's emission test scandal to pretty much zero by providing an exceptional -- and very mythical-sounding! -- framework for scapegoating.
I mean, don't get me wrong, there are clear scenarios where I think many of us would choose to lose the job. For instance, I'll go unemployed vs directly causing someone to die. But those slightly more ambiguous scenarios are where we need to be enforcing it on a legislative level and the onus should be on ALL levels of the company (engineering and management).
SWEs would need to unionize first, to protect workers from being fired/deported for pushing back, before anything like this is even considered. Or tackle the problem where it begins: with the organization.
Are you now going to ask for more software import/export laws?
Also, people change positions fairly often and avoid work in areas they consider to be unethical. This means that the people in a position to actually make the call are people who don't find it unethical, because the people who would be concerned about it avoided the whole area. Like, if you don't want to work in ads, you'll probably find a job in some other division, or at least not directly on something you consider unethical. The people who came up with AMP (to pick something controversial on Hacker News) were true believers who sold it to management.
But people still care about the company's reputation as a whole, and as a result you get conflict between the people not actually working on the controversial thing and the people who are, but that mostly results in a lot of drama and cynicism.
The politics is complicated. I can't think of a generic oath that you couldn't rationalize your way out of.
This "ethics" business doesn't make sense because the stated goals of the people pushing this idea aren't their actual goals. The stated goal of this "ethics" push is to reduce the harm done by software to society. As you point out, it won't work. The actual goal of the "ethics" push is to entrench a certain politically-contentious ideology in tech by branding this ideology as "ethics" and thereby making it immune to criticism.
The only legitimate binding code of ethics is the law. If a practice is harmful, we can all talk about it together and agree to enact a law against it. The "ethics" people are trying to use elevated moral rhetoric to bypass this democratic process, and we shouldn't let them get away with it.
Lol. I'm married to an MD. Doctors definitely still take the Hippocratic Oath. Perhaps some places no longer do it, but I am not aware of any of her peers who have not taken the Hippocratic Oath.
In reality, we have virtue signaling.
.... soap box
Because it's an end run: the idea being that if software developers as a whole took this oath, then corporations could not force them to do whatever was decided to be against the oath. But it amounts to nothing more than virtue signaling.
The issue with a software developer Hippocratic Oath is that you can damn well bet it will be subject to the whims of whoever is loudest on social media, or whatever political group wants to use it to damage the other party.
The Hippocratic Oath is protected by history and its interpretation is pretty much settled, but any such oath or rule written today is not worth the page it is printed on.
That said, this might actually work! If a software engineer can suffer personal harm by working for a business with iffy ethics, then they are incentivized to play it safe by avoiding working for those types of businesses -- thus correcting the market by internalizing the externalities. I doubt anyone would work for Facebook in a world with a Hippocratic Oath for Software Engineers that has real teeth.
Put another way: pointing to decision makers instead of individual engineers is a simple rephrasing of the Nuremberg defense, "I was just following orders!" It is obvious that we should hold leaders accountable. The question here is whether we hold individual software engineers accountable too (they're not mutually exclusive) and the answer is probably yes.
https://www.acm.org/code-of-ethics
This sits a layer down in the defense-in-depth stack. And the idea is that if there's a recognized code, and consensus on what constitutes a violation, that employers will conform because if they don't they'll risk not just one "activist" employee leaving but most of them, out of a shared sense of communal ethics.
Would it work? No idea. My experience is that software people tend to be pretty squishy on matters of personal ethics.
When Bezos fires every single warehouse employee, what happens if the job they start retraining for also gets automated away before they can even start? And the next one, and the next one. If nobody is making a salary anymore, then it doesn't matter how much lower the prices are on Amazon (due to being produced in automated factories and shipped from automated warehouses) unless Jeff decides to reduce those prices all the way down to 'free'. At that point the assumptions underlying the world's economy would break down in a way that makes a corona shutdown look like a mild hiccup.
No software engineer is going to be able to do anything to help alleviate this. If you want to do something about this, you need to go into politics, not tech.
It's interesting to me that this just presumes developing these autonomous weapons systems in the first place is ethical. I understand there is a difference of opinion on this ethical point, but it immediately frames the discussion pretty far away from the Hippocratic oath's requirement to abstain from causing harm.
So do abortion and euthanasia, and probably plenty of other practices as well. Both of those are without doubt harm-causing practices, with their related points of controversy primarily revolving around whether the harm that is caused is worthwhile when the alternative is a potentially greater harm.
Putting aside the fact that the Hippocratic oath is not actually a relevant part of modern medicine (modern doctors are accountable to comprehensive, codified sets of ethics), the fact that there is no such thing as a set of common ethics by which people choose to live their lives kinda points out the futility of this idea.
One person could say developing weapons is bad because they cause harm; another could say it’s good because they can be used to reduce harm that would otherwise have been caused. Who’s right? Neither of them. That’s just two people with different opinions. I would personally suggest that establishing moral authorities like this can often be harmful, because, lacking any objective truths, it’s a topic people should generally be left to make up their own minds about.
Am I right or wrong? Who’s to say? I’m just a person with an opinion, and so is anybody who would want to agree or disagree with me.
I think the main motivator in almost all of those things is money.
The reason for wars is money, they just get justified by "the greater good".
Same for all the involved technologies.
A less controversial example would be something like chemotherapy. In fact, a lot of treatments for terminal and chronic ailments are pretty harmful.
The way it's asked looks like an attempt to shift the Overton window until autonomous weapons of all kinds are treated as a mundane inevitability not worth worrying about, with just the niggling details subject to ethical questioning.
But big shifts like that are exactly the sort of thing serious ethical codes should be used to watch out for. Not the niggling details afterwards.
There is no human in the loop (no pun intended for snares). It decides when to strike using physics, and the answer is always "yes" if it is triggered.
What makes the new "autonomous" weapons different is that they attempt target differentiation. Mobility only becomes useful once a weapons system can say "no" when presented with a target: even the Military Industrial Complex, purveyor of unneeded bullshit that wantonly takes lives, would find it impossible to sell a drone that fires missiles at every target it encounters after launch.
It just seems wrong for someone that has been asked to "write a web browser" to be at fault for anything.
What about someone asked to code up a voice prompt on something that answers the phone for a telemarketing company, when the company later uses the code for illegal spam robocalls instead of what they told the developer they were doing with it?
What if a software developer who writes code to turn a sprinkler system on and off by phone is later convicted because that code was repurposed to trigger a bomb that blows up a building?
What if a software developer writes code that matches human faces to automatically unlock the door of his own home, ends up open-sourcing it, and is then arrested because that software was deployed on a drone that murders specific people?
What if a software developer is asked to install the above face-recognition system on a drone, is told it's designed to take pictures of people it knows at a birthday party, and it is later switched out to trigger a machine gun?
In any case, we already have laws like this! And they're not controversial! If you commit war crimes by killing people with a shovel under the orders of your superiors, both you and your superiors are responsible, but the manufacturer of the shovel obviously isn't responsible for what you did unless they advertised the shovel's skull-bashing capabilities.
I'm going to ignore the qualifier of 'well functioning', which is obviously up for debate. The process of being charged with a crime, being put on trial, wondering at the consequences of the outcome, etc., is no joke. It is a tremendously time-consuming, expensive, and stressful process, and even if you are acquitted there is no undoing the damage that has been done. There's a reason doctors spend huge chunks of their income on malpractice insurance, and if we decide that engineers need the same protection in case they get sued, then the biggest beneficiary is going to be the insurance companies.
If insurance companies also had to sign oaths we might make some progress, but the nature of their game is to spread the risk, which is to say they take money from a lot of people and hope they never have to pay them back. There's only so much regulation can do about that.
It shouldn't just be waved away with 'well, if you make a decision in good faith then you are sure to win your court case'. That is not a reasonable burden to put on someone who cannot anticipate all the possible outcomes of the decisions they make.