1,000 tech experts warn against AI arms race

  • Because the author was incapable of linking to Stephen Hawking's AMA, here it is: https://www.reddit.com/r/science/comments/3eret9/science_ama...

  • I think these people have read too much science fiction.

    >First we had the legs race. Then we had the arms race. Now we're going to have the brain race. And, if we're lucky, the final stage will be the human race.

    >John Brunner, The Shockwave Rider (1975), Bk. 1, Ch. "The Number You Have Reached"

    Great book, though.

  • I don't see how a ban could work. We already have technology that can identify targets, so the issue is whether a human is involved to approve each kill decision. But what constitutes human approval? Is it enough for a human to sit and watch target details flashing up on a screen, and intervene if they see an incorrect target?

    What exactly would be banned?

  • I would think it better that we have weapon systems that intelligently assess whether their target is (A) a combatant, (B) a threat, (C) has surrendered, and so on, before engaging and killing them.

    The alternative is indiscriminate death that we see in mass bombings, mine fields, artillery strikes, and drone strikes.

  • Their goal should be figuring out how to circumvent it or protect against it, because no government is truly going to give it up. As the technology progresses, more and more decisions will be removed from people, to the point where what we think the line is today will be ho-hum by then.

  • Naifs, warning against a far-fetched Californian fantasy.

    Meanwhile, few warn against widespread surveillance, the repercussions of drone use, etc.

  • Why is Stephen Hawking treated like a tech expert?

  • This should be read as "race for armed AI", not "an arms race toward the goal of AI". Very sneakily worded.

  • I fear the AI arms race may be inevitable. Even if all nations could agree to place limits on AI research, there will always be a huge incentive to develop something in secret.

  • Very few ML experts on that list, I'm sure. At the same time, I'm rather against autonomous weapon systems, yet not so naive as to think that modern armies will ignore the benefits of machine intelligence.

    If you think about it, the US has not even ratified the Comprehensive Nuclear-Test-Ban Treaty. I doubt it will ever consider an AI weapons ban.

  • This is not only overblown, it is misguided. All a ban does is make people continue in secret. In any event, research in that area will produce knowledge that is both useful for non-weapon purposes and weaponizable, the same as all knowledge that has ever existed.

  • I don't understand what kind of AI/robotics we are talking about here:

    - weak non-self-replicating;

    - weak self-replicating;

    - strong.

    Besides, maybe it's an egoistic point of view. Many species have perished while humanity established itself. Does it really matter if we go extinct, if the result is going to be a superior species?

  • Just makes governments want it more.

  • For a humorous take on the issue, see

    http://www.supportkillerrobots.org/

    Shameless plug, I wrote it on a lark.

  • Could we at least use non-lethal weapons on humans?