|A bomb disposal robot. / Photo by: Airman Mandy Mclaurin via Wikimedia Commons|
Over 24,000 scientists who specialize in artificial intelligence research have signed a pledge not to manufacture or develop robots designed to attack people, according to the American media company National Public Radio. Signatories include Google DeepMind co-founders Mustafa Suleyman, Shane Legg, and Demis Hassabis.
The Lethal Autonomous Weapons Pledge reads, “We the undersigned agree that the decision to take a human life should never be delegated to a machine.” It is also meant to discourage countries and military firms from producing killer robots powered by artificial intelligence.
The pledge, introduced by the Boston-based Future of Life Institute, also warns that selecting and engaging targets without human intervention would be destabilizing for every individual and every country, which is why thousands of AI specialists and researchers have agreed to forgo such work. It likewise calls on governments and technology companies around the world to adopt regulations, laws, and norms that would effectively outlaw the creation of killer robots.
Individual signatories include Elon Musk, CEO and CTO of the aerospace company SpaceX and CEO of Tesla Motors, as well as Jeffrey Dean, Google’s Senior Fellow and Head of Research and Machine Intelligence.
Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, told the Guardian that an international norm should establish that such deadly weapons are “not acceptable.” “A human must always be in the loop,” he added. Walsh also signed the pledge.
Among the organizations that agreed not to develop lethal autonomous weapons are the European Association for Artificial Intelligence, The Swedish AI Society, Silicon Valley Robotics, University College London, and the Brazilian Computer Society.