• Artificial Intelligence

  • Nov 16 2018
  • Duration: 42 min
  • Podcast

  • Summary

  • An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)

    Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.

    Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

    See omnystudio.com/listener for privacy information.

