Google Workers Urge C.E.O. To Pull Out of Pentagon A.I. Project

  • From the movie "Flash of Genius" about Robert Kearns, the inventor of the intermittent windshield wiper:

    "I can't think of a job or a career where the understanding of ethics is more important than engineering," Dr. Kearns continues. "Who designed the artificial aortic heart valve? An engineer did that. Who designed the gas chambers at Auschwitz? An engineer did that, too. One man was responsible for helping save tens of thousands of lives. Another man helped kill millions."

    "Now, I don't know what any of you are going to end up doing in your lives," Dr. Kearns says, "but I can guarantee you that there will come a day when you have a decision to make. And it won't be as easy as deciding between a heart valve and a gas chamber."

    To me this rings incredibly true for Silicon Valley engineers these days.

  • I completely support the employees but it is criminal that this NYT article does not mention the extensive roots of Silicon Valley in Pentagon funding. SV as a whole is substantially a creation of military spending.

    Just recently in the AI space:

    - Siri was spun out of a Pentagon project -- look up SRI International and CALO. Its purpose: a "soldier’s servant".[1]

    - Autonomous driving is a direct evolution of Pentagon-funded efforts -- see DARPA Grand Challenge.

    And it's not just funding for early research; it's procurement like this too. Military procurement has supported the development of technologies when the commercial market couldn't.

    Again, I support the employees and hate the fact that in order to develop medical lasers we first have to figure out how to shoot down missiles with them. It's hugely inefficient, could spell our doom, and if you think about it, fundamentally undemocratic. (Gives elites more power to direct taxpayer dollars under the rubric of defense.) But this should be a basic part of any story on how Silicon Valley works.

    And to think that SV has a large population of supposedly "small government" Libertarian Capitalists... oh, the mental gymnastics in that.

    [1] https://en.wikipedia.org/wiki/CALO

  • Where does one draw a line these days among the personal, the moral, the legal, and the political?

    The military application in question is legal and is approved by a duly elected government that supports it politically. In earlier days, employees generally would see this as just doing their jobs in developing technology that their employers wanted developed and would not concern themselves about ultimate uses and applications. In other words, doing your job is personal and, as long as you do it honestly and work hard, you should not be faulted for doing it as requested by your employer. That was always the standard.

    What then is the new element from which this sort of employee-driven demand arises? Is it morality? In other words, if I help develop A.I. that can be used for all sorts of things, one of which happens to be military-related, is the effort "evil" if the employer for whom I develop it agrees contractually to provide it to the government for a wartime/military use that can kill people? Do I really make a difference for the good if I convince my employer not to do this, if all this means is that the company down the street gets the contract and the military gets the same results, albeit from a different vendor?

    If so, then I assume that you as an employee can make no practical difference in making the world better by insisting that your employer forego this particular contracting opportunity. If you succeed, your employer misses an opportunity, but the evil you see being released into the world still gets released. It just means that you do not personally contribute to the development effort by which it is made possible.

    Of course, it might theoretically be possible to persuade all persons working in the field of A.I. to ban further work that directly helps the military. But that would seem a practical impossibility. Many people in all countries believe that military technology of all kinds is proper, legal, and politically supportable for purposes of self-defense or for some other overriding purpose they deem proper. And certainly, there are bad people throughout the world who are eager to use any technology that comes their way for overtly evil purposes such as misuse of an atomic bomb. Unless and until human nature is fundamentally transformed, that will never change.

    So, what is the answer in a country such as the United States where people and companies have the freedom to develop A.I. for any lawful purpose and where some inevitably will do so for a military purpose of which you disapprove?

    You are then left only with a political solution: use political means to gain control of the government and the military and apply the force of law to ban the military use of which you disapprove.

    So this is either a personal act of futility by the Google employees, or it is a case in which they cannot separate the personal from the political and thereby insist that their employer sacrifice particular economic opportunities so that their personal actions do not support a political outcome of which they disapprove.

    Even then, does this mean that your employer should cease working on A.I. altogether? For, just as cash is fungible, so too is technology. Every improvement you make in A.I. might have an immediate use of x for your employer but, as humans collectively do this for all sorts of improvements, the results are there for the taking in the future for military applications of all kinds. In other words, you cannot put your improvement in a box or control it so as to limit its future uses (at least not in a free society).

    The computing technology of recent decades undoubtedly has bettered many aspects of life, but it has also greatly magnified the lethality and utility of military applications so as to make the world far less safe. And this was inevitable unless a supervening agency were to have used forcible and totalitarian means to suppress such technological development from inception. Since no such supervening agency existed or even can exist in a free society, does this mean that all engineers and technical developers have blood on their hands because, ultimately, things they have done were used for applications of which they disapproved? Of course it does not. Nor would people today working on A.I. be held morally or legally responsible for ultimate downstream uses made of their work of which they would not morally approve today.

    But this brings us back full circle. In the long run, you cannot stop such uses (or misuses) made from your technical development work. Nor can you be held responsible for them even though you contributed to them in some remote degree through your work efforts. Why then should it make a difference if your direct work efforts for a company like Google are applied to a military application of which you do not approve but which is legal, politically approved by the governing authorities, and will happen anyway regardless of whether Google is involved?

    The puritans of old tried to control the morality of others by shunning and shaming, and they did it to an extreme degree. They failed miserably because humanity is what it is and followed its own course without regard to external religious constraints.

    This sort of effort by Google employees is obviously different in that it is not religiously driven but does it amount to anything more than a shunning-and-shaming method for trying to impose one's sense of morality on others by signaling that this way lies righteousness and everywhere else lies evil?

    If this is what "don't be evil" now means, then Google will need lots of help going forward, because every cause under the sun can be used in the same way to shun and shame. We would then have management by a corporate board swayed to and fro by whatever organized protest arises at the moment.

    Whatever this is, and however it might be defensible in "sending a message" or whatever, it is a sure way to put a company at a competitive disadvantage while accomplishing nothing practically. It may further political goals but, if those are the goals, better just to try to advance them directly and not by attempting to shun and shame your employer (and your co-workers who may disagree with you) into submission. The personal need not be political. If it does become that way, a new form of puritanism will hold full sway to the detriment of all.

    This may be an uncomfortable fact, but people have surprisingly short memories: the military funded the majority of the early advances in systems, networking, and cryptography (and, especially in the latter area, invested heavily in fundamental, theoretical research). I'm not saying that I disagree with the employees' opinion, just that DoD/Pentagon involvement in artificial intelligence research shouldn't be viewed as a necessarily bad thing. Many other major powers have invested heavily in AI across all fronts (including military applications), and it would be stupid for the US not to have one of its largest strategic assets be part of the process.

    Do we really want to test the theory that we can repeat the 1930s, when the democracies fell far behind an autocratic regime in the arms race, and that the autocratic regimes will again not win? Democracies are in retreat around the world, and I only hope we wake up before it is too late.

    The military-industrial complex may have become far bigger and perhaps in some ways a burden, but the world is a dangerous place and becoming more so. In my view, there is a perfectly valid argument that working with the military is the right thing to do.

    At the same time, if individuals have pacifist leanings and do not want to work for the military, I would hope corporations respect that.

    I heard an interesting view on this. Engineers build technologies like TensorFlow and demos of object recognition, which have obvious applications in drone combat (just stuff your model into the missile targeting system). Yet when this tech is used for that purpose they're shocked, shocked -- and as long as they're not specifically building the missile targeting systems themselves, they feel like their hands are clean?

    I'm not even sure what the consequence of this argument is; pretty much anything you build can indirectly contribute to the military industrial complex, even something innocuous like dev infra. But I also don't think that weak "everything is equivalent" argument means you're suddenly absolved of responsibility. One thing I am pretty sure of is that it must feel awful to waste your short time on earth on building tools specifically for killing.
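    The dual-use point above can be made concrete. This is only an illustrative sketch (the function names and hard-coded detections are hypothetical, standing in for a real TensorFlow model, not any actual system): a generic object detector returns labeled boxes with confidence scores, and nothing in that output ties it to either a photo app or a targeting pipeline -- only the consuming code decides.

```python
# Hypothetical sketch: the same detector output serves either application.

def detect_objects(image):
    """Stand-in for a real object detector (e.g. a TensorFlow model).
    Returns (label, confidence, bounding_box) tuples."""
    # Hard-coded results for illustration only.
    return [
        ("truck", 0.97, (120, 80, 260, 190)),
        ("person", 0.88, (300, 150, 340, 260)),
    ]

def tag_photo_album(image):
    # Civilian use: keep every reasonably confident label for search/tagging.
    return [label for label, conf, _ in detect_objects(image) if conf > 0.5]

def cue_operator(image, classes_of_interest):
    # Military use: flag the *same* detections against a watch list.
    return [(label, box) for label, conf, box in detect_objects(image)
            if label in classes_of_interest and conf > 0.9]

print(tag_photo_album(None))           # ['truck', 'person']
print(cue_operator(None, {"truck"}))   # [('truck', (120, 80, 260, 190))]
```

    The only difference between the two uses is downstream plumbing, which is exactly why "I only built the detector" feels like a thin defense.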

  • As an engineer I have serious moral qualms about furthering the goals of any military. Contributing means being complicit.

    The fact that the military funds research doesn't change that.

    On one hand, I reluctantly appreciate Trump's election because it will force Silicon Valley to think many times before readily giving up user privacy to the US government. I felt like the attitude was very lax under Obama (despite the Snowden revelations).

    On the other hand, this bothers me a bit because it continues to allow people in the valley to maintain a (sorry to use this word) delusion that what they are doing is "moral". If Google stops working with the Pentagon after this petition, people in the valley will pat their backs and enjoy how they are making the world a better place. They will not have any incentive to rethink the sale of user data to advertisers, creating highly addictive mentally harmful products, etc.

    Overall, it's good that at least people in the valley are somewhat mindful of their actions and care about society (compared to my current industry, finance). I hope they can be successful at a deeper level.

  • Serious question for those who agree with this petition. Which country's military would you like to see be the most technologically advanced in 20 years?

  • > Google Workers Urge C.E.O. To Pull Out of Pentagon A.I. Project

    That moment when you realize your mega-corp employer's "culture" is merely another tool to get you to do their bidding.

    How do the shareholders feel? I'm sure they're betting that almost every employee will stay no matter how their technology is used.

  • I for one would rather have a drone strike program that actually actively avoids civilian casualties as much as possible. There are certainly gray areas here as far as the use of that program from a political standpoint (whether the strikes are warranted or whether it is part of regime changes). On the other hand, by not helping the military become more efficient we also risk losing existing lives (our own and civilian casualties) due to a lack of efficient analysis. We already use statistical analysis and many other methods (human and otherwise) to determine where to make military strikes, might as well improve on this to make fewer mistakes where possible (as gray as that may seem).

  • While I appreciate the engineers speaking out, it isn't really practical for a company the size of Google, with the resources it has, to not have programs that work with the military in one form or another.

    If, as an engineer, it is against your moral code to do any work that supports the military, your choices are limited to working in small companies where everyone is focused on the commercial products and services you are delivering. And even then, as some games companies found out, the CEO might do some collaborative work with the military for training or something.

    It should come as no surprise that Google teams up with the Federal government on things.

  • Business Ethics is the art of finding a company whose ethics are close enough to your own.

    Asking the business to reconsider before leaving is legitimate.

  • This has been by far one of the most shameful things Google has done.

    And their excuse that "they're only analyzing images" is a joke. Analyzing the images and "identifying objects" (their words) is probably 95-99% of the job in a drone strike. So if they're trying to make people think that their role is minimal in drone strikes, they're failing hard at that.

    I also think Eric Schmidt, who until last December was both Alphabet Chairman and working for the Pentagon, played a big role in this. Now he's still a "technical advisor," but if he continues working for the Pentagon I'd prefer he have no official affiliation with Alphabet.

    It seems Schmidt would like Google/Alphabet and Pentagon to have a much deeper relationship, if you can read between the lines in this post:

    http://www.defenseone.com/technology/2017/01/pentagon-needs-...

  • > In an interview in November, Mr. Schmidt acknowledged “a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly, if you will.” He said he served on the board in part “to at least allow for communications to occur” and suggested that the military would “use this technology to help keep the country safe.”

    I have a sneaking suspicion that Mr. Schmidt's views on what constitutes "killing people incorrectly" and what is required to "keep this country safe" both differ fairly significantly from those of the signatories of the letter.

    I am against war and violence, but even a cursory review of the last 10,000 years of human history makes it abundantly clear that the threat Iran, North Korea, or Russia poses to the Western world is real. I hate many of the things the US has done, but I'm glad the world is not controlled by even worse actors. The sole reason for US and UN dominance is military power. If we ignore this, the West will be overrun.

  • If you're an engineer working for Google, I think you can most likely afford to be an idealist, and in my opinion that's a much better option than just blindly following orders.

    Don't most of us want less war? Does it really drive you to enhance the capabilities of those committing acts of violence? How does one start to believe that by supporting the military you are somehow working towards a better world?

    If you're not going to be an idealist, then in all likelihood you're helping to work towards the vision of an idealist with more power than you.

  • The famous public letter from Norbert Wiener declaring he will not work for the military: http://lanl-the-back-story.blogspot.com/2013/08/a-scientist-...

    Maybe if they were more accurate they would kill fewer people?

    This is not the same kind of technology as better materials science for gun barrels, or hypergolic fuels for ICBM maneuvering thrusters. This is software that constitutes a decision-making process. Who dies is decided not by you or a commander, but by an automatic software process. And debugging that code probably means innocent people dying along the way.

    Just as with a nuclear arsenal, what keeps the peace is a balance of power between nations. If one nation has access to a military technology that allows it to destroy its enemies and steal their resources without fear of retaliation, it WILL use it.

    Our well-meaning Google friends might believe they are doing the right thing, but they are just doing what's fashionable instead of really thinking through the consequences of these actions. Making ourselves weaker in the AI race increases the probability of war instead of diminishing it.

    I support the Google workers. The hubris of the US military is unprecedented. Our military is already trying to pick fights mostly for no reason. The more the cost of war drops for them (fewer personnel killed), the more devastation they will create -- look at the Middle East since the Iraq invasion.

  • Relevant here: "Why don't I take military funding?" by Prof. Benjamin Kuipers.

    https://web.eecs.umich.edu/~kuipers/opinions/no-military-fun...

    From the introduction: """ Mostly it's a testimony that it's possible to have a successful career in computer science without taking military funding. My position has its roots in the Vietnam War, when I was a conscientious objector, did alternative service instead of submitting to the draft, and joined the Society of Friends (Quakers). During the 1980s and 90s, the position seemed to lose some of its urgency, so it became more of a testimony about career paths.

    Since September 11, 2001, all the urgency is back. The defense of our country is at stake, so this testimony becomes critical. In short, I believe that non-violent methods of conflict resolution provide the only methods for protecting our country against the deadly threats we face in the long run. Military action, with its inevitable consequences to civilian populations, creates and fuels deadly threats, and therefore increases the danger that our country faces. """

    I'm confused; do people working at Google not know the origin of their company? There would be no Google if not for Pentagon-funded military research.

    Look up "Highlands Forum", DLI, MDDS. Sergey Brin was literally funded by NSA and CIA under MDDS.

  • Thank you, Google employees who signed this letter! Let us hope the, ummm, yea, "do no evil" company listens to you (and to its very own motto).

  • So it is ethical to track hundreds of millions of people to manipulate them for profit. But unethical to defend power plants in the United States?

  • I have been encouraging my friends and family to discourage their kids from working for US Army, Pentagon and similar organisations.

    For a country like the USA, it is stupid to waste human capital fighting someone else's wars.

  • I think it's very important to consider the increase in military spending over the past decade and decrease in funding for universities.

    More and more military funding means more funding specifically for defense projects rather than for pure knowledge or the public good.

    What is the long-term effect of this? Perhaps research far more focused on destruction instead of the public good. More and better drones, but less general knowledge and fewer cures for diseases?

    It's a sad state of affairs, and workers standing up against military research within their companies is a good first step.

    ---

    One of the first sources I found on this is below, but I've specifically heard AI researchers describe this as vexing: many don't want to directly support military applications, but don't have many other opportunities for funding.

    https://www.bu.edu/research/articles/funding-for-scientific-...

  • My favorite:

    https://www.nytimes.com/2017/08/30/us/politics/eric-schmidt-...

    I wonder if Eric Schmidt left Google because of this.

    So it says it will improve drone strike accuracy. Isn't that a good thing? I mean, there are currently a lot of civilian casualties from drone strikes, but that doesn't stop anyone from using them anyway. Wouldn't at least improving accuracy make things better in some regard?

  • "As Google defends its contracts from internal dissent, its competitors have not been shy about publicizing their own work on defense projects. Amazon touts its image recognition work with the Department of Defense, and Microsoft has promoted the fact that its cloud technology won a contract to handle classified information for every branch of the military and defense agencies."

    Google should stop hiring activists and start hiring pragmatists.

  • Working with China and other despotic regimes is cool, though?

    Are the Google employees working on this project all American? Google's workforce is heavily multinational, but I would imagine a Pentagon project would require some kind of security clearance.

  • Better headline:

    "Google Workers Astonished: Found Out They Work In Capitalist System, Not Nerd Commune"

  • > several Google employees familiar with the letter would speak of it only on the condition of anonymity, saying they were concerned about retaliation

    It's too bad that the fear of retaliation exists for a letter like this. Is that actually the case? Or did the author misread their desire to remain anonymous? Perhaps they didn't want to become spokesmen for this issue outside the company, but are fine making their opinion known to coworkers and managers.

  • Oh boy. Being on the inside of Maven, I can't tell you how confused these Googlers are.

    Everyone needs to fear the military-industrial complex. It's bad for every business that is not in the business of war and killing. I'm sorry, but today is unlike anything we have ever witnessed. The military-industrial complex was one thing in 1950; today, it's a totally different beast, and our society can't take much more of the beast.

  • Google+Alphabet yearly revenues are in the same ballpark as South Africa's GDP.

    Google/Alphabet is already a country in its own right, and libertarians would already be sceptical of Google's role in society.

    This is just another step down a long road of Technocracy...

    Also, wasn't "Don't be evil" dropped as a motto? I thought I read that.

    Their entire career choice and their current sky-high yearly salaries, plus the stock price, are based on what was fundamentally a military need, i.e. the internet (or the TCP/IP protocol, to be exact).

    The right course of action, IMO, would be to leave Google if you are unsatisfied with working for the Pentagon.

    But Google HAS to work for the Pentagon, and especially on AI, because the knowledge that comes out of this isn't only going to be military; to a much greater and more relevant extent it will be for civilian use.

    Technology is always a double-edged sword, and we have to learn to deal with the powers it gives us. But not being at the forefront of technology when you can be is a loser's game, and Google would lose much of its value if it isn't at the forefront.

  • This is a tricky problem and there is no easy position to take. On one hand you don’t want anyone to have such lethal power. You could argue the atomic bomb is the warning we should learn from.

    On the other hand, this needs to be evaluated in the context of the "balance of power." Can Alibaba really refuse to help the Chinese military? If not, the Chinese are going to gain this power regardless of whether the US has it. Any advanced country could gain such power.

    These other entities may not follow the same ethical guidelines. Are we then proposing the US military should be less powerful than its rivals?

  • > There’s a strong libertarian ethos among tech folks...

    This does not ring true in my experience with "tech folks" in recent times. It may have been true at one time, but I think libertarianism in tech has fallen out of vogue.

    Also, I take issue with the article's suggestion that distrusting how government (or any powerful organization) makes use of technology puts someone into the libertarian political philosophy. A healthy distrust of how powerful groups use technology makes a lot of sense for anyone and is definitely not exclusive to libertarians.

    I want to remind those Google employees that they are able to work at Google and enjoy a very high living standard, and obviously freedom of speech, only because they are protected by the US Army.

  • >"...The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government..."

    If Google still continues to engage in the 'Business of war', would these 3,100 employees walk out?

    Would they, then, also refuse to pay taxes to the US federal government (which obviously spends them on the business of war)?

  • Seems like the two most likely consequences would be more inaccurate drone strikes and a higher risk of further expansion of the territory controlled by Communist Chinese hegemony. I don't think these Google employees want that so I'd like to see their thoughts on why they think the US military having worse AI would bring about a better universe.

  • Totally agree! We should stop trying to make a weapon more accurate and go back to fire bombing complete cities. Only way to be sure.

  • To me this illustrates a huge opportunity: big tech can't effectively engage with public good projects (e.g. typical government territory). What's needed is a tech gov superlayer that provides an umbrella for these services and can also guarantee their internal integrity. The lack of guarantee over this last point is one major problem here.

  • But wouldn’t drone strike accuracy improvement actually reduce civilian casualties?

    Given that these strikes will occur one way or the other, isn't this stance actually adding more misery to the world?

    (So do drone strikes in general. Much better to negotiate at a table with a mediator or use a sporting event to resolve disputes, but I don't currently run things here.)

    As someone who is hesitant to use (free) cloud services where you pay with your information, I really like that there are some thoughtful and aware people working at Google. If it turns out that they have the power to affect decisions like this in their company, I might choose to use more Google products.

  • Apple recently hired the Google A.I. Chief[1]. I wonder how much this is related.

    [1] https://www.nytimes.com/2018/04/03/business/apple-hires-goog...

  • I keep seeing the argument "More accurate weapons would kill fewer people". By that logic, it would be perfectly okay for Google to sell this technology to the enemy states so those guys could kill fewer Americans. This sounds ridiculous to me but it's likely I am missing something.

    My condolences to the leaders who organized this. They'll find in their performance review that, because a fly farted in their work area, they haven't achieved performance on par with their peers. At the next round of layoffs, they'll be released.

    So 3,100 out of 57,000 employees signed the letter. That makes Google only 95% evil.

    Google is a huge company; turning down big money from the US government for legal purposes is probably not a realistic option. (At what point would shareholders sue if it turned down hundreds of millions of dollars in revenue?)

    I wrote a short article related to this matter; I submitted it to HN at: https://news.ycombinator.com/item?id=16760768

  • And this is why tech workers need to unionise. This petition is a weak statement that probably will have no effect. Unless these workers could pose a real threat to Google by threatening a strike action, Google has no need to change anything.

    Furthermore, if the workers really stood by their moral convictions, they'd use their collective power to tackle the issue more directly than by just appealing to Google. Companies like Google are like the railroads of the 19th century. They comprise the major infrastructure of modern society. A union at Google could threaten to effectively halt all institutional operations of governments or other companies in order to influence their actions. Is Facebook manipulating voters? Okay, no more Gmail for Facebook till they stop. Is the government going to bomb Yemen? Alright, we're shutting down government accounts.

    As others in the thread have also said, I'm doubtful many of these signatories will quit their cozy jobs if Google doesn't back down. Without the group solidarity and pressure from a union, most individual workers just don't have any good incentive to put their money where their mouth is.

  • Anybody looking forward to the captchas asking us to identify drone targets?

    During World War II, fan-favorite companies like Mercedes-Benz were building tanks for the Nazis. If there is a war today, don't you think the government will seize everything and we will all be working for them? This is different in some ways because Google is likely being paid for this, but I don't think they can just refuse, nor do I think they should be able to, frankly. That's not how national security works.

    And even if employees at Google refuse, employees at one of the big Chinese firms won't be able to. If our government does not have this tech, then someone else will.

    We live in this fantasy, like people aren’t dying everyday from war in so many places around the world. We are not immune to that.

    Edit: to all the folks downvoting me, it's a good idea to get a different perspective sometimes.

    From what I can tell, there are ~90k Google employees[1]. So about 3% of employees signed this. I feel like I could find 3,000 people out of almost 100k who believe almost anything, especially if it is generically anti-military or anti-USG.

    I doubt this would happen, but I wish these online petitions would include counter-petitions, which people could sign if they disagree, so we could get an estimate of whether this is just a very vocal minority of Googlers or whether viewpoints are split.

    [1] https://www.statista.com/statistics/273744/number-of-full-ti...

    The fact remains, regardless of historical anecdotes or past advances, that in today's economic landscape one of the few steadily profitable and maintainable businesses is doing work for the Department of Defense. Regardless of who is president, this is an organization that regularly gets well over $200 billion in funding, and much of what it does has little oversight. Whether a business is large or small, landing a long-term DoD contract matters: I've seen businesses that needed capital but couldn't raise it pivot to a well-funded DoD project and then reap those profits back into their other businesses.

    It's a sad state really, but this is probably the main idea behind partnering with them in the first place.

  • Well, this is it, n'est-ce pas? Is Google really about "do no evil," or will capitalism melt the snowflakes? Either way, Google's fate is of its own making.

  • I wonder what Geoff Hinton's position on this is. He seems somewhat anti-military from some side comments in his Neural Networks course.

  • These employees are right to be worried. If Google becomes a defense contractor, I'll be going out of my way to stop using their products. I'm not a pacifist, but I'll always choose to work for and patronize peaceful enterprises; it's as simple as that. I also influence lots of non-technical friends and family in their IT purchases and habits, and I'll definitely be warning them off as well. So Google, be less greedy and make the world a better place for us all.

  • The future of border security could be drones flying along the Mexican border, taking out intruders with precisely aimed headshots.

  • Not helping them improve the precision of drone strikes reduces the precision of those strikes, which leads to more innocent lives lost as collateral damage. An argument could be made that using AI to improve the surgical-strike capability of drones is the lesser of two evils. Refusing seems misguided, no pun intended.

    Also, Google dropped the slogan "Don't be evil" in favor of "Do the right thing." Not that it matters because they barely followed it anyway.

  • If I worked for Google and found out I was working for the Pentagon, my line would be crossed and I would resign.

  • I also think that disengaging is the wrong approach. Google once disengaged from China because it disagreed with China over some freedom-of-expression laws, and management now thinks that was a bad decision. In a similar vein, if you are not part of this, somebody else will shape its outcome. It should rather be you, if you really care and could play a role in minimizing the negative impact of such tech.

  • AI is needed; the sooner we can create it to hold consciousness, the better. We are constantly changing as a species, and what we are 10,000 years from now will differ from us far more than we differ from the ancient Egyptians. The universe will go on without us, and if intelligence is rare and spaced far apart, AI is really the only chance we have to communicate with, or even find, other intelligences.

  • The beginning... we basically can't avoid this. If it isn't Google, it will be someone else.

  • I think this is a good idea because it would be great optics for Google to cancel this project.

  • Qualifying rahulmehta95's comment below a bit: the government and military ought to be accountable to the citizenry, not the other way around. Having said that, the DOD will just find other companies to continue this research, so it actually behooves Google to stay on and contribute to making the technology more accurate.

  • So this is about processing video feeds used for targeting.

    If you think about it, we (including Google employees) have known that Google assists in waging war in a different way ever since Snowden showed us those slides. Google processes data for the government that lets it know whom to target.

  • Engineer brains must be allocated to benefitting Humanity.

    Ease the lives of your fellow traveler.

  • Their tech is already used in all kinds of war activities indirectly.

  • Google can't pull out. That'd be like being ungrateful.

  • A.I. is commodity compute. You can't pull out.

  • The CEO wants to match the evil of Mark Z.

  • It boggles my mind, given the current state of affairs, how many people are actively organizing to eliminate or curtail our fundamental human rights.

    It seems to me that 1st and 2nd amendment rights are both under particularly intense bombardment these days.

  • Let's keep advanced research out of the hands of the gun nuts (aka most of the USA).

  • As the Google/YouTube/Alphabet crew make plans to become murderers in their catered, air-conditioned offices, amid the endless justifications in this thread, I have a big smile on my face as I think of the woman who went in yesterday and shot at these murdering scum. Google/YouTube/Alphabet workers are about to become murderers, and they just got some big blowback yesterday, paying them back in their own coin. Thinking of it puts a big smile on my face, especially reading all the justifications here for the need to work toward militancy.

  • Democracies need top-of-the-line militaries to exist. Plain and simple.

  • >"...that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes."

    These drone strikes have been happening for decades without interpretive AI, and they will probably continue with or without it. So let's make the strikes more precise and spare more civilians.

    >"But improved analysis of drone video could be used to pick out human targets for strikes, while also better identifying civilians to reduce the accidental killing of innocent people."

  • Well then, Google employees should start by moving out of the US, because they seem to benefit from and enjoy the safety and security provided by the Pentagon.

    Grow the fuck up: part of living in a democracy is tolerating the implementation of measures one disagrees with. Republicans might oppose birth control, but companies continue to support it in their health plans. Likewise, a state-of-the-art offensive military is democratically wanted in the US; state your disagreement and tolerate its implementation.