Google’s AI Principles

July 18th, 2018

Last month, Google CEO Sundar Pichai announced Google's seven objectives for the development and application of Artificial Intelligence (AI) in a fast-changing world.

Google drives AI at a scale matched by probably no other company in the world, clearing the path for ever more sophisticated innovations in healthcare, medicine, autonomous technologies, disaster prevention, entertainment, security, and many other areas. Yet Google also acknowledges that not every artificial intelligence application is desirable. New technologies have always invited – and always will invite – abuse and malicious application.

Seven Principles

In its policy, Google clearly defines what kind of AI it wants to promote by laying down seven principles:

Its use of AI is to be socially beneficial, to avoid unfair bias, and to be built for safety.

Google also stresses that AI systems need to be accountable to people and must protect individuals' privacy. Google further pledges to uphold high standards of scientific excellence, aiming to remain at the forefront of research and to disseminate the knowledge gained. Lastly, the company commits to making AI available only for uses that accord with these principles.

Unlikely to do any harm except for game addiction: 11 players vs AI Game – Real Soccer World on Google Play

Google Employees Leading the Way

Google's AI policy does not come out of the blue. Nevertheless, it is a refreshing and ground-breaking initiative that unequivocally addresses some of the key questions AI technologies face today.

It is no secret that Google's own employees played an important role in the process. In an unusual uproar earlier this year, more than 4,000 Googlers expressed their objection to Google's participation in the Pentagon's Maven program, fearing that the AI technology (which analyzes footage from aerial drones) could also be used for the development of lethal weapons. To the surprise of many, Google took the employees' concerns to heart, and in May the company announced that it would not renew its contract with the Defense Department.

The AI policy also lays out what kinds of AI Google will not engage in; in particular, Google will not develop AI that causes overall harm or facilitates injuring people. The attentive reader will notice that the policy's wording still leaves room for interpretation: it remains vague, for instance, when it comes to espionage or cyber attacks. But a start has been made, and the company can henceforth be judged by how true it stays to its principles.

Everyone’s Responsibility

In the end, as with any new technology, the way AI is applied will depend not only on company intentions and policies, but just as much on the people involved – on their willingness to apply sound judgement and to speak out when necessary.

Google's employees are to be complimented for leading by example and questioning the company's engagement. It remains the responsibility of all of us to do the same should we find ourselves in a similar situation.

Read Google’s blog post on the topic. 

