Google Employees Outraged At Secret AI Contract For Drone Program
Employees at Google are upset that the company is developing artificial intelligence software to enhance drones for the Department of Defense. Despite the company's pacifist motto, "Don't be evil," Google's CEO agreed to a contract that will improve machine learning capabilities, benefiting one of the military's more lethal programs.
The contract involves the incorporation of artificial intelligence in drones used for reconnaissance purposes, not those actually used for missions involving weapons — a fact touted by proponents of the company’s involvement.
Many Google employees, though, don't believe that distinction makes a difference when the intelligence gathered from surveillance missions is used to more accurately target people during combat.
Others contend that some of the company's competitors, like Microsoft and Amazon, are involved in the program and that Google must stay competitive by taking the contract. But that argument hasn't held its ground among detractors who see Google as ethically distinct from similar tech firms.
According to Google, the technology in question serves a number of applications as a tool for flagging images that are then reviewed by a person. The company says the technology is open-source and intended to save innocent lives rather than to cause more deaths.
The tech giant encourages its employees to be outspoken about the company’s policies, projects and philosophical goals, providing online forums for workers to discuss their thoughts and qualms.
Aside from moral dilemmas, this latest contention centers on concerns about the company's image. One side worries that discontinuing the project will discourage the military from looking to Google for future contracts, while the other side fears that public perception of Google will suffer from its involvement with the military-industrial complex.
The use of A.I. as a tool of warfare is becoming a controversial topic as the technology progresses. Amid rising anxiety about sentient robots destroying or enslaving humanity, people are understandably worried that A.I.'s use in warfare could be the first step in the rise of the machines. Not to mention that building technology with the aim of more accurately killing people is tough to defend from an ethical perspective.
On the other hand, it could be argued that developing weapons and technology to more accurately target enemies, while minimizing innocent casualties, is a net good.