The Alignment Problem
Machine Learning and Human Values

  • Written by: Brian Christian
  • Narrated by: Brian Christian
  • Length: 13 hrs and 33 mins
  • 4.8 out of 5 stars (34 ratings)


Publisher's Summary

A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them.

Today’s “machine-learning” systems, trained by data, are so effective that we’ve invited them to see and hear for us - and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.

Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole - and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands.

The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called “artificial intelligence.” They are steadily replacing both human judgment and explicitly programmed software.

In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first-responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Listeners encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they - and we - succeed or fail in solving the alignment problem will be a defining human story.

The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture - and finds a story by turns harrowing and hopeful. 

©2020 Brian Christian (P)2020 Brilliance Publishing, Inc., all rights reserved.

What listeners say about The Alignment Problem

Average Customer Ratings

Overall: 4.8 out of 5 stars
  • 5 stars: 28
  • 4 stars: 5
  • 3 stars: 1
  • 2 stars: 0
  • 1 star: 0

Performance: 4.9 out of 5 stars
  • 5 stars: 24
  • 4 stars: 4
  • 3 stars: 0
  • 2 stars: 0
  • 1 star: 0

Story: 4.9 out of 5 stars
  • 5 stars: 25
  • 4 stars: 3
  • 3 stars: 0
  • 2 stars: 0
  • 1 star: 0

Reviews

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Great insight into how machines learn

Very nice book. It offers details about the dynamics of machine learning not found in other popular books, which mainly cover the implications of AI and how it is used, but not how it is made. This book also covers important ethical questions that can only be understood through how machines learn. Recommended read. Thank you, Brian.

  • Overall: 4 out of 5 stars
  • Performance: 4 out of 5 stars
  • Story: 4 out of 5 stars

A Comprehensive Analysis of AI’s Impact on Society

The Alignment Problem provides an in-depth analysis of the history of AI and its impact on the world. It also offers excellent insight into the human brain, ethics, how we learn, and some of the prejudices in our society. Christian has done a great job of explaining complex concepts in a simple and easy-to-understand manner. The book is well-researched and provides a comprehensive overview of the subject matter. I would highly recommend this book to anyone interested in AI, ethics, and the future of technology.

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Mind-blowing

This book was incredible. I didn't understand a decent amount of it, haha, but what I did understand blew my mind. The author explains very complex concepts very well and has clearly done an insane amount of research. I definitely recommend it!

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Soul shattering

I decided to listen to this in the hope of understanding what AI really is. It left me feeling both exuberant and, at the same time, scared out of my wits at what we are creating…
This book has changed my thinking, my views, and my judgement. The uncertainty of our actions is yet to play out! Thank you for not just shining a light on the subject, but for allowing the reader to soul-search about the impact it will have on them!
