• AI and the Alignment Challenge

  • Mar 11, 2024
  • Duration: 16 min
  • Podcast

  • Summary

  • We dive deep into the intricacies and ethical considerations of AI development, focusing on OpenAI's ChatGPT and GPT-4. Join us as we discuss how OpenAI approached the alignment problem, the impact of human-aligned reinforcement learning, and the role of human raters in shaping ChatGPT. We also revisit past AI mishaps, such as Microsoft's Tay, and explore their influence on current AI models. The episode covers OpenAI's efforts to address ethical concerns, the debate over universal human values in AI, and the differing perspectives of users, developers, and society at large on AI technology. Finally, we tackle the critical issue of employing workers from the Global South for AI alignment work, examining the ethical implications and the need for better support. Tune in to uncover the complexities and breakthroughs in the evolving world of AI!

    Dr. Joel Esposito is a Professor in the Robotics and Control Engineering Department at the Naval Academy, where he teaches courses in Robotics, Unmanned Vehicles, Artificial Intelligence, and Data Science. He is the recipient of the Naval Academy's Rauoff Award for Excellence in Engineering Education and the 2015 Class of 1951 Faculty Research Excellence Award. He received both a Master of Science and a Ph.D. from the University of Pennsylvania.

