But don’t let that distract you; it was designed to kill people (2017)
I’m not sure his solutions are sound. I don’t think you should violate your own morals, but the real issue isn’t that your code can be used for wrongdoing, it’s that there are no consequences when it is.
I think weapons manufacturing is a better example than software, because it’s much clearer. Weapons are necessary to defend free society, and when they are misused, we hold the misusers to justice.
Or at least we used to. Because today America is actively bombing civilians in 7 different countries, and no one is ever going to be held accountable for the outright human rights violations. I mean, it’s a war crime to kill civilians. I don’t think you can blame the weapons manufacturers for this though, and I don’t think we should have to. Because it’s our common ethics that should prevent it.
I guess, though, that in a world without accountability, your individual morals are all that’s left, but it’s our ethics that need to change if we actually want change. Because if you personally refuse to write the software, someone else will. Like I said, I think you should absolutely refuse to write the software because you won’t be able to live with yourself, just don’t expect the violators to stop unless we stop them.
I don't understand this guy at all. Two things confuse me:
First, the purpose of a military is to be able to break things and harm people as effectively as possible. Did he think his job would involve making the military less effective?
Second, why is it bad to work on software that's used to kill people? Killing isn't necessarily bad. If this software helped kill Abu Bakr al-Baghdadi or Ayman al-Zawahiri, it would have a hugely positive impact upon the world.
To me, the post comes across as a bunch of meandering moral grandstanding. The core thesis is either utterly pedestrian ("Think about the consequences of what you’re building.") or totally fringe (that helping the US military is immoral).
I think a lot of people are missing the point of this post, especially because it offers an opportunity to nitpick specifics. But the point here isn't "don't write code for the military" or "don't write code for killing" or anything else; it's that we need to think about whether we're okay with the code we're writing. We need to think about the implications of the things we build.
We're never going to find a hard line we can all agree on, like "don't work on missiles, but airplanes are okay" or something similar. But we can all stop to think about where that line is for us personally, and whether what we're working on crosses it. What would we do if asked to violate that boundary we've created for ourselves?
When I was in grad school, we had to take an ethics in engineering course. We split into groups and discussed the ethics of building certain things, in my group's case building Predator drones. While I didn't agree with the person who said they were okay building them because we need weapons, I was far less concerned with that individual than the one who looked at me, straight-faced, and said "I don't get what airplanes have to do with morals".
I think a lot of people here are getting hung up on some imagined implication that any code that could ever be used for evil is wrong to make. Yes, this is a grey area and there's no clear line of what the correct ethical choice is in many situations. But the point is that it's barely discussed professionally at all. Other disciplines have quite a bit of exploration into their professional ethics, but software engineering seems to gloss over it. Yes, there is some precedent, but it seems disproportionately small considering that software engineering is becoming an increasingly ubiquitous role in our society.
It seems like it's not even in our vocabulary as engineers to comprehend the ethics of our work. There's no framework for analysis or disclosure. Sure, there aren't any easy answers to most situations, but have we even tried?
Killing people is at the extreme end of ethical concerns, but a lot more people are working on code that's designed to tighten corporations' control over users or otherwise take away their freedom and privacy and increase authoritarianism. Things like DRM, "security" ("because who doesn't want to be safe and secure?"), removing functionality that allows extensibility/interoperability, etc. I've heard it phrased thus: "Do we want to help them build better nooses to put around our necks?"
A _lot_ of folks here in the comments are missing the point and "getting distracted" by the word "kill."
Sure, it was probably a bit short-sighted of the speaker to not connect the dots and see that this software would be actively used to seek out and kill people. But to me the core message of this talk applies to the very large grey area that lies in between fully ethical software and stuff the DoD makes to kill people (yes killing _can_ be necessary I know, I know, don't get distracted!). And that grey area is mostly social media and ad-tech.
These are two domains which require software to be written that is actively harmful to people's privacy and mental health. We see Twitter being used to target and harass people to the point of suicide. Instagram has been precisely engineered to addict its users to a fake world that makes them feel like they are nothing and that everyone is happier than they are, and it depresses them.
The examples I just cited are prevalent criticisms and can start to sound like an echo chamber here (HN). But how about a real, recent example: Facebook's use of two-factor authentication phone numbers for advertising. Some engineer at Facebook was given the requirements to implement this super shady and deceitful functionality and chose to implement it anyway without pushing back. It takes advantage of folks who were simply trying to improve the security of their account, whose numbers are now being used to target them with advertisements.
Most of us will have long careers that don't involve writing software that will kill people, but a stunning majority of us will land somewhere in this grey area at one point or another, and you still have to think about what you're making at that point.
Yes, it's important to understand what you're doing.
However: a country without a military (or an allied country with a military) is very quickly not a country. If the US and Europe stopped having a (working) military, they would be immediately taken over by totalitarian regimes who would be delighted to trample over all the rights and privileges their citizens currently enjoy. South Korea, Taiwan, and many other countries/regions would instantly be destroyed by powerful and dangerous neighbors. Free countries are not free because everyone around the world is nice; they are free because people are willing to die to protect them.
A military must have weapons that can kill people. The real goal of such weapons in a western democracy is not to kill people - it's to be able to kill people so that no one will take over or threaten the country and its allies (at least not without consequences). It is entirely ethical to enable self-defense, and self-defense is the purpose of the Department of Defense (remember, its very name is "Defense"). The ACM code of ethics doesn't forbid this, because it focuses on unintentional harm, not intentional harm from a lawful order to protect a country. The author seems to think it's unethical to enable self-defense, and that's just nonsense. Weapons (and anything else) can be misused, but we need to hold the misusers responsible - not pretend that they aren't needed. It's a good thing that military personnel are willing to risk their lives to protect others, even those who don't appreciate their sacrifices to do so.
> a tool to use phones to find WiFi signals.
> Does it find phones ... This was never about finding better WiFi. We were always finding phones. Phones carried by people. Remember I said I was working for a Department of Defense contractor? The DoD is the military. I was building a tool for the military to find people based on where their phones were, and shoot them.
I got distracted by this utter failure to define the objectives and requirements of the project. If you want to find phones, don't start by using phones to find wifi routers.
But yeah, if you work for the DoD, you should probably be ok with the stuff you are working on being used to kill people. It's a big part of what they do.
Where do you draw the line? Almost any significant technology can be applied for military purposes. Conversely, the Internet itself came from a DOD research project; military technologies can be repurposed for peaceful uses.
I feel like this underestimates the complexity of the problem.
Both markets and open-source software run on abstraction. (The official open source definition doesn't even allow restrictions based on field of use.)
If people want to implement "know your customer" checks like the banks do, and make decisions based on their own political values, customers will need to share a lot more information, and there isn't going to be very much privacy. Buying services is going to require a lot of hoop-jumping.
And then consider the effect on product design. Unless countermeasures are built in, a copier can be used to counterfeit money. And that's an easy case.
Now we have a simple service for distributing text messages making front page headlines for its effect on society.
It was naive, but the assumption that customers are responsible for their own actions was a useful fiction. A society of mutual distrust makes everything difficult.
The author is naive beyond belief.
What kind of software did he think the DOD would have him writing? Fart apps?
Great article. I believe it's important to understand the purpose of everything you do.
This story crystallizes that into a very compelling advisory.
Often we do things without that understanding in mind, and that can lead to many problems, including miscommunication, errors, and possibly what this article alludes to.
I only skimmed it but this doesn't really make sense.
a) Obviously if you work for the military, killing people is going to be involved somewhere. That's what the military does!
b) Why would they put so much effort into locating wifi routers when they wanted to find phones? They have no need to hide that objective - it's a perfectly obvious thing for the military to want to do.
c) I didn't get that far but is he assuming that "find a phone" = "kill the owner"?
The military is an absolutely essential institution, but for the sake of the civilians and the soldiers, we shouldn't forget its function: Kill people and destroy their creations. I saw a recruiting commercial for the U.S. military showing an aircraft carrier and calling it a 'global force for good'. That misleads recruits: It's a global force of death and destruction, and that will be your job if you sign up. We don't like to think that, and we can't let that result in rejecting all use of the military - which is just as irresponsible - but we must face the reality of a very serious subject so we can think seriously about it.
When we imagine that the military is something else, especially something glorious, not only do we risk the worst evil of humanity, war, but we also harm the soldiers (and sailors): War is very damaging to them, and not just the dead and the physically wounded, but causing and experiencing death and destruction results in great psychological harm. Humans are not cut out to do it: IIRC the details, on D-Day in WWII, half the soldiers didn't fire their weapons when they should have. After every war, you can read that people 'were not the same' when they returned; many are damaged. Suicides are (or recently were) very high among U.S. soldiers, and the current wars are relatively low-risk for them. I've read interviews with elite special forces soldiers who talk about how hard it is, psychologically, to kill.
Another consequence is that we put soldiers in positions to fail: We send them to wars that they cannot win, usually because we ignore the essential requirement of a stable political outcome - Afghanistan and Iraq are only the two most recent examples. Many in the U.S. like to imagine an invincible force, a panacea for international problems, but just a brief glance at history shows otherwise: since WWII, there has been one clear victory (Gulf War), two endless stalemates (Korea, Afghanistan), one ongoing quagmire of mostly negative results (Iraq), and one loss (Vietnam). We also ask soldiers to do jobs they are not trained for, such as policing: Police are there to bring and maintain peace and public order; soldiers are trained to do the opposite, kill and destroy.
When we have a clear idea of a military's function, we can align outcomes with our values, minimize the use of soldiers' precious lives and health, and put them in a position to succeed. To ignore the reality of the military's function, of killing people and destroying their civilization, is immoral IMO.
LOL @ "North Virginia". Also, when he says R^2, is he talking about Pearson's correlation coefficient? That paragraph is confusing, and in the next one he admits being mystified by the idea of Gaussian distributions... I guess I should stop trying to make sense of the technical elements of this article.
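For anyone similarly confused: if the fit in the article is an ordinary least-squares straight-line regression (an assumption on my part — the post doesn't say), then the R^2 it reports is just the square of Pearson's r. A quick numpy sketch with made-up numbers, not the article's data:

```python
import numpy as np

# Toy numbers, purely illustrative -- not the article's data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

# Pearson's correlation coefficient r between x and y.
r = np.corrcoef(x, y)[0, 1]

# R^2 from an ordinary least-squares fit of y on x.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1.0 - residuals.var() / y.var()

print(r ** 2, r_squared)  # the two values agree (up to floating-point error)
```

So if he's reporting R^2 for a straight-line fit, reading it as the squared correlation coefficient is reasonable; for anything fancier the two diverge.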
I don't see this as a "don't help the military" post. This is a post about needing a profession of software engineering.
There are, clearly, ethical lines. Leaving aside where the line and the military cross, it is important to think about how we will build such a profession - and enforce membership (which is the whole point).
Also worth noting is a common medical ethics quiz: You are an ER doctor, and a college football player comes in after a road traffic accident: spine shattered, internal bleeding, conscious and not in pain, but he needs an operation to stem the bleeding.
He clearly and openly states that as he will never walk again, he does not wish to live.
Do you operate?
Most doctors, it seems, operate, and oddly that is a violation of most ethics board recommendations.
This is a wonderful article and interestingly one of the more poignant bits is a quote from someone in the film industry (to paraphrase):
"We meet with a lot of startups and the only question is 'Can this be done?' Nobody is stopping to ask questions regarding the ethics."
That being said, however, it is really tricky to come to any easy conclusions. We live in a complex world and it's not clear, even after exhaustive questioning, what damage could be done with the work that one is doing.
Fire was one of man's greatest discoveries - but if you stop and look at all the dangerous uses it could have been put to, it is quite possible we would not be sitting here today on HN...
The article is interesting, but not a new problem. Any technology can be used for good or bad (within a given moral framework). Yes, we should all keep it in mind, and ensure governments representing us regulate technologies, or do not use them for unethical purposes. The article doesn't even address this.
Unfortunately, the author's 6 or 7 long paragraphs building tension, repeating the same sentence for dramatic effect, to finally arrive at the pretend-shocking discovery that the DOD and its contractors produce lethal technologies, ruined it for me.
Bill Sourour's blog post discussed on HN:
I'd have worked on Greyball if it was pitched as a tool for disrupting justice, but would feel bad if it was for people who threw up in the car too many times.
I see these kinds of articles constantly. Are there really that many people out there who aren't aware of their own responsibility for their actions?
He seems to be saying don't build something if it's possible that some people could do bad stuff with it. That would mean we shouldn't build end to end encryption, because terrorists could use it to hide. And that we shouldn't build Tor, because child pornographers could use it.
The job said “Department of Defense” right on the tin. He was not misled in the slightest. He said it paid half again as much as other internships, but I see no mention that he returned any of his blood-soaked gains.
... I think the ethical responsibility here is with the trigger-pullers and their chain of command; not the phone-finders.
Leave aside the article... it's done its job: just read the comments in this thread. My oh my.
tl;dr man working for the military is shocked to find out that his code is helpful to the military. edgy writing about ethics ensues.
This was my experience in the DC area too: anyone with a Computer Science degree gets scooped up by the intelligence community, and you will get interviewed by spooks because your friends are interviewing with the NSA.
I did some contracting for a Department of Defense subcontractor too, and then got the hell out of that town.
Those people are twisted. Their ideology is twisted. And your parents are just excited that you got an interview at any job.
I never really get why the military has such a bad press among the programmer crowd.
I mean sure, killing people is bad in a society, but it's precisely because the world has so far been a succession of ruthless wars between groups of people that having a military to protect you is a good thing.
I know the "military kills unethically etc." and "we are a peaceful world etc." arguments, but maybe people are getting a bit too, and wrongly, "certain" that it will last. If anything, history has shown that it's a succession of cycles until the next wars. Better to be able to be among the strongest.
I've been in my country's military (Europe) and most of the people are not psychopaths (a few, sure); they are normal people who think their job is an important one, protecting peace at home.
So yeah, "just don't take a job that makes you uncomfortable" is the only takeaway I get from that article, but I don't share the military shaming. It would have been more convincing with an oil industry or "on-demand market" example.
“I came here with a simple dream...a dream of killing all humans.”
This is pretty dumb. Maybe I have a good imagination for malice, but if I thought through every possible bad use of some code, I could never write any code or produce nearly any item in the world. I suppose this is parallel to debating whether guns themselves are bad or not.