Preface

Today, machine learning (ML) is the most commercially viable subdiscipline of artificial intelligence (AI). ML systems are used to make high-stakes decisions in employment, bail, parole, lending, and many other applications throughout the world's economies. In a corporate setting, ML systems are used in all parts of an organization: consumer-facing products, employee assessments, back-office automation, and more. The past decade has brought wider adoption of ML technologies, but it has also proven that ML presents risks to its operators and consumers. Like nearly all other technologies, ML can fail, whether by unintentional misuse or intentional abuse. As of today, the Partnership on AI Incident Database holds over 1,000 public reports of algorithmic discrimination, data privacy violations, training data security breaches, and other harmful failures.

Such risks must be mitigated before organizations, and the general public, can realize the true benefits of this exciting technology. That mitigation still requires action from people, and not just technicians. Addressing the full range of risks posed by complex ML technologies requires a diverse set of talents, experiences, and perspectives. This holistic approach to risk mitigation, incorporating technical practices, business processes, and cultural capabilities, is becoming known as responsible AI.

Who Should Read This Book

Non-technical oversight personnel, along with activists, journalists, and other conscientious people, need to feel empowered to audit, assess, and evaluate high-impact AI systems. Data scientists often need more exposure to cutting-edge technical approaches for responsible AI. Both groups need the critical literacy to appreciate the expertise the other has to offer, and to incorporate shared learnings into their own work. Responsible AI is the field guide for this new generation of auditors, assessors, leaders, and practitioners who seek AI systems that are better for organizations, consumers, and the public. In reading Responsible AI, auditors and attorneys can learn how to reframe their valuable knowledge and experience for better risk management of AI systems. Business leaders can use this book to understand the wide range of available approaches for building responsible AI culture, processes, and governance, and to get a better grasp of the limitations of today's AI systems. Data scientists can use Responsible AI to learn responsible AI methods and to apply their technical skills with an improved understanding of the real-world complexities implicated by AI system decisions.

What Readers Will Learn

Responsible AI defines the eponymous concept and explains why it's so important. It addresses how to build accountable and diverse organizational cultures around AI, the necessary organizational structures and the impactful roles that individuals can play, how existing roles at companies are evolving to incorporate responsible AI, and how responsible AI is being put into practice today. Integral to all of this is education around, and standardization of, the processes by which individuals can assess AI systems and appreciate the impact those systems have on business functions, consumers, and the public at large. To that end, Responsible AI examines effective privacy and security policies for AI, applicable legal and compliance standards, the role of traditional model risk management, and AI incident response planning. This book also aims to reinforce auditing and oversight knowledge by linking business and social outcomes to technical tools. Numerous technical approaches for engineering responsible AI systems are available today, covering all stages of the AI lifecycle. For the technical reader, Responsible AI explores porting standard software quality assurance processes to AI systems, experimental design for AI, and reproducibility, interpretability, fairness, security, and testing and debugging technologies.
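As a small preview of that technical material, consider the sketch below, which shows one way a basic fairness check might look in Python. It is a minimal illustration under stated assumptions, not code from the book: the data, the column names, and the use of the four-fifths (0.8) threshold from US employment practice are all assumptions made here for demonstration.

    import pandas as pd

    # Hypothetical outcomes: one row per applicant, with the model's
    # binary decision and a demographic group label. All values are
    # invented for illustration.
    scores = pd.DataFrame({
        "approved": [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],
        "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
    })

    # Approval rate within each group.
    rates = scores.groupby("group")["approved"].mean()

    # Adverse impact ratio: the protected group's approval rate divided
    # by the reference group's rate. Values below 0.8 (the "four-fifths
    # rule") are a common signal of potential disparate impact.
    air = rates["b"] / rates["a"]
    print(f"Adverse impact ratio: {air:.2f}")

Real-world bias testing, as covered in Part 3, goes far beyond a single ratio, but even this small check shows how a social concern can be translated into a concrete, auditable computation.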

Preliminary Book Outline

By the end of this book, the reader will understand cultural competencies, business processes, and technical practices for responsible AI. The book is divided into three parts, one for each major facet of responsible AI. Each part is split further into chapters that discuss specific subjects and cases. While the book is still being planned and written, Responsible AI will open with an introduction to the topic and then proceed to Part 1. A tentative outline for the book follows.

Part 1: The Human Touch - Cultural Competencies For Responsible Machine Learning

Part 1 focuses on the importance of organizational culture in the broader practice of responsible AI. Plans for the first chapter of Part 1 involve a call to stop going fast and breaking things, with a focus on well-known AI system failures and associated vocabulary and cases. Chapter 2 will analyze consumer protection laws, model risk management, and other guidelines, lessons, and cases important for fostering accountability in AI organizations and systems. Chapter 3 will examine teams, organizational structures, and the concept of an AI assessor. Chapter 4 will discuss the importance of meaningful human interactions with AI systems, and Chapter 5 is intended to detail important ways of working outside traditional organizational constraints, such as protests, data journalism, and white-hat hacking.

Part 2: Setting Up For Success - Organizational Process Concerns For Responsible Machine Learning

Part 2 is slated to cover responsible AI processes. It will begin with Chapter 6 and an exploration of how organizational policies and processes affect fairness in AI systems, and how startlingly often such policies are absent. Chapter 7 will outline common privacy and security policies for AI systems. Chapter 8 will consider existing and future laws and regulations that govern AI deployments in the United States. Chapter 9 will highlight the importance of model risk management for AI systems, but will also point out a few of its shortcomings. Finally, the blueprint for Chapter 10 is a discussion of how corporations have heeded past calls for social and environmental responsibility, in the context of future responsible AI adoption.

Part 3: The Scientific Method Versus The Kitchen Sink - Technical Approaches For Enhanced Human Trust And Understanding

The agenda for Part 3 covers the burgeoning technological ecosystem for responsible AI. Chapter 11 will address the important science of experimental design, and how it’s been largely ignored by contemporary data scientists. Chapter 12 will summarize the two leading technologies for increasing transparency in AI: interpretable ML models and post-hoc explainable AI (XAI). Chapter 13 is planned to be a deep dive into the world of bias testing and remediation for ML models, and should address both traditional and emergent approaches. Chapter 14 will cover security for ML algorithms and AI systems, and Chapter 15 will close Part 3 with a wide-ranging discussion of safety and performance testing for AI systems, sometimes also known as model debugging.
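To give a flavor of the post-hoc explanation techniques that Chapter 12 will survey, the sketch below applies permutation importance, one widely used post-hoc method, to a model trained on synthetic data. The dataset, the choice of a gradient boosting classifier, and the scikit-learn workflow are illustrative assumptions, not examples taken from the book.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for a real modeling problem.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance shuffles one feature at a time and measures
    # how much held-out accuracy drops; larger drops suggest the model
    # relies more heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f}")

Post-hoc explanations like this one complement, rather than replace, the inherently interpretable models that Chapter 12 also discusses.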

Bringing it All Together

After all that analysis and exposition, Responsible AI will end with a chapter entitled "Bringing It All Together." It will serve as a reminder that while building responsible AI organizations and technology is hard work, it's also quite within reach for individuals and organizations alike. Moreover, it's necessary. The AI genie is out of the bottle. Headlines revealing embarrassing and damaging AI incidents became much more common in 2020, and they won't stop until people remake AI into responsible AI.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic

Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width

Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

Constant width bold

Shows commands or other text that should be typed literally by the user.

Constant width italic

Shows text that should be replaced with user-supplied values or by values determined by context.

Tip

This element signifies a tip or suggestion.

Note

This element signifies a general note.

Warning

This element indicates a warning or caution.

Using Code Examples

Supplemental material (code examples, exercises, etc.) is available for download at https://github.com/oreillymedia/title_title.

If you have a technical question or a problem using the code examples, please send email to .

This book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not need to contact us for permission unless you’re reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing examples from O’Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product’s documentation does require permission.

We appreciate, but generally do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: “Book Title by Some Author (O’Reilly). Copyright 2012 Some Copyright Holder, 978-0-596-xxxx-x.”

If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at .

O’Reilly Online Learning

Note

For more than 40 years, O’Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed.

Our unique network of experts and innovators shares its knowledge and expertise through books, articles, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and 200+ other publishers. For more information, visit http://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

  • O’Reilly Media, Inc.
  • 1005 Gravenstein Highway North
  • Sebastopol, CA 95472
  • 800-998-9938 (in the United States or Canada)
  • 707-829-0515 (international or local)
  • 707-829-0104 (fax)

Email to comment or ask technical questions about this book.

For news and information about our books and courses, visit http://oreilly.com.

Find us on Facebook: http://facebook.com/oreilly

Follow us on Twitter: http://twitter.com/oreillymedia

Watch us on YouTube: http://www.youtube.com/oreillymedia

Acknowledgments
