Google AI Principles updates, six months in

Six months ago we announced Google’s AI Principles, which guide the ethical development and use of AI in our research and products. As a complement to the Principles, we also posted our Responsible AI Practices, a set of technical recommendations and results, updated quarterly, that we share with the wider AI ecosystem. Since then we’ve put in place additional initiatives and processes to ensure we live up to the Principles in practice.

First, we want to encourage teams throughout Google to consider whether and how our AI Principles affect their projects. To that end, we’ve established several efforts:

  • Trainings based on the “Ethics in Technology Practice” project developed at the Markkula Center for Applied Ethics at Santa Clara University, with additional materials tailored to the AI Principles. The content is designed to help technical and non-technical Googlers address the multifaceted ethical issues that arise in their work. So far, more than 100 Googlers from different countries have taken the course, and we plan to make it available to everyone across the company.
  • An AI Ethics Speaker Series featuring external experts from different countries, regions, and professional disciplines. So far, we’ve held eight sessions with 11 speakers, covering topics from bias in natural language processing (NLP) to the use of AI in criminal justice.
  • A technical module on fairness added to our free Machine Learning Crash Course, which is available in 11 languages and has been used to train more than 21,000 Google employees. The fairness module, currently available in English with more languages coming soon, explores how bias can crop up in training data and ways to identify and mitigate it; a minimal illustration of one such check follows this list.
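
To make the kind of check the fairness module covers concrete, here is a minimal sketch of one way bias can surface in training data: comparing subgroup representation and label rates. This is an illustrative example, not material from the course itself; the use of pandas and the toy column names and data are assumptions made for the sketch.

```python
# Illustrative sketch only: hypothetical columns and data, not course material.
import pandas as pd

# Toy training data: "group" is a sensitive attribute, "label" is the target.
train = pd.DataFrame({
    "group": ["A", "A", "A", "A", "A", "A", "B", "B"],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Representation bias: is one subgroup underrepresented in the data?
print(train["group"].value_counts(normalize=True))

# Label-rate skew: do positive labels concentrate in one subgroup?
print(train.groupby("group")["label"].mean())
```

Skewed base rates like the one above are one way bias can crop up in training data; surfacing them early informs mitigation choices such as rebalancing or collecting more representative data.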

Along with these efforts to engage Googlers, we’ve established a formal review structure to assess new projects, products, and deals. Thoughtful decisions require a careful and nuanced consideration of how the AI Principles (which are intentionally high-level to allow flexibility as technology and circumstances evolve) should apply, how to make tradeoffs when principles come into conflict, and how to mitigate risks for a given circumstance. The review structure consists of three core groups:

  • A responsible innovation team that handles day-to-day operations and initial assessments. This group includes user researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts on both a full- and part-time basis, bringing a diversity of perspectives and disciplines to each review.
  • A group of senior experts from a range of disciplines across Alphabet who provide technological, functional, and application expertise. 
  • A council of senior executives to handle the most complex and difficult issues, including decisions that affect multiple products and technologies.

We’ve conducted more than 100 reviews so far, assessing the scale, severity, and likelihood of best- and worst-case scenarios for each product and deal. Most of these cases, like the integration of guidelines for creating inclusive machine learning in our Cloud AutoML products, have aligned with the Principles. We’ve modified some efforts, like research in visual speech recognition, to clearly outline assistive benefits as well as model limitations that minimize the potential for misuse. And in a small number of product use cases, like a general-purpose facial recognition API, we’ve decided to hold off on offering functionality before working through important technology and policy questions.

The variety and scope of the cases considered so far are helping us build a framework for scaling this process across Google products and technologies. This framework will include the creation of an external advisory group, composed of experts from a variety of disciplines, to complement the internal governance and processes outlined above.

We’re committed to promoting thoughtful consideration of these important issues and appreciate the work of the many teams contributing to the review process, as we continue to refine our approach.
