Google will not seek to extend its contract next year with the Defense Department for artificial intelligence used to analyze drone video, ending a controversial alliance that had raised alarms over deepening ties between Silicon Valley and the military.
The tech giant will stop working on its piece of the military’s AI endeavor known as Project Maven when its 18-month contract expires in March, a person familiar with Google’s thinking told The Washington Post.
Diane Greene, the chief executive of Google’s cloud-computing business, announced the decision to employees at an internal meeting Friday; the news was first reported by Gizmodo.
Google, which declined to comment, has faced widespread public backlash and employee resignations for helping develop technological tools that could aid in warfighting. The person said Google will soon release new company principles related to the ethical uses of AI.
The move is a setback for the Pentagon's push to supercharge the military's capabilities with powerful AI that could help process battlefield data or pinpoint military targets. Audricia M. Harris, a Pentagon spokeswoman, said in a statement that it "would not be appropriate for us to comment on the relationship between a prime and sub-prime contract holder."
"We value all of our relationships with academic institutions and commercial companies involved with Project Maven," Harris said. "Partnering with the best universities and commercial companies in the world will help preserve the United States' critical lead in artificial intelligence."
Project Maven was launched in April 2017 to find ways the military could use AI to update its national security and defense capabilities “over increasingly capable adversaries and competitors,” a Defense Department memo stated. In a pilot effort, AI was deployed to analyze hours of footage from Predator drones and other unmanned aircraft, pinpointing buildings and vehicles and processing video that is currently tagged by human analysts.
But the request for private-sector help from companies such as Google, which develops some of the world’s most sophisticated image-recognition software and employs some of the top minds in AI, quickly sparked a firestorm over the potential that the technology could be used to help kill or could serve as a steppingstone toward AI-coordinated lethal warfare.
Thousands of Google employees wrote chief executive Sundar Pichai an open letter urging the company to cancel the contract, and many others signed a petition saying the company’s assistance in developing combat-zone technology directly countered its famous “Don’t be evil” motto.
Bob Work, the former deputy secretary of defense who launched Project Maven last year, called Google's decision not to renew the contract "troubling" and expressed concern that it could discourage others in Silicon Valley from working with the military on autonomous technologies that could assist in foreign conflicts and national defense.
The decision "seems motivated by an assumption that any use of artificial intelligence in support for the Pentagon is a bad thing. But what about using artificial intelligence to power robots that defuse bombs or IEDs? Or using AI to prevent cyberattacks on our electrical grid?" said Work, a senior fellow at the Center for a New American Security, a Washington think tank. "All of these would save the lives of our people or protect our networks or society. That would seem like things employees of Google might be proud to do."
"Not being able to tap into the immense talent at Google to help DoD employ AI in ethical and moral ways is very sad for our society and country," he added. "It will make it more difficult to compete with countries that have no moral or ethical governors on AI in the national security space."
Google had responded to earlier criticism by saying that the company’s involvement in Project Maven was limited to the “non-offensive” use of open-source, publicly available software “intended to save lives and save people from having to do highly tedious work.”
But Greene, who leads Google Cloud, told employees that the backlash against the project had been considerable and that the company had taken on the work at a time when it was more actively pursuing military contracts, according to Gizmodo.
Several Google AI employees had told The Post they believed they wielded a powerful influence over the company’s decision-making. The advanced technology’s top researchers and developers are in heavy demand, and many had organized resistance campaigns or threatened to leave.