• #237 Pedro Domingos on Bayesians and Analogical Learning in AI

  • Feb 9 2025
  • Duration: 57 min
  • Podcast

#237 Pedro Domingos on Bayesians and Analogical Learning in AI

  • Summary

  • This episode is sponsored by Thuma.

    Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details.

    To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

    In this episode of the Eye on AI podcast, Pedro Domingos, renowned AI researcher and author of The Master Algorithm, joins Craig Smith to explore the evolution of machine learning, the resurgence of Bayesian AI, and the future of artificial intelligence.

    Pedro unpacks the ongoing battle between Bayesian and Frequentist approaches, explaining why probability is one of the most misunderstood concepts in AI. He delves into Bayesian networks, their role in AI decision-making, and how they powered Google’s ad system before deep learning. We also discuss how Bayesian learning still outperforms humans in medical diagnosis, search and rescue, and predictive modeling, despite its computational challenges.
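
    For listeners who want the arithmetic behind that discussion, here is a toy Bayes' theorem calculation in Python (the numbers are invented for illustration and are not taken from the episode):

        # Bayes' theorem on a toy medical-diagnosis example (illustrative numbers only)
        prior = 0.01           # P(disease): 1% prevalence
        sensitivity = 0.90     # P(positive | disease)
        false_positive = 0.05  # P(positive | no disease)

        evidence = sensitivity * prior + false_positive * (1 - prior)  # P(positive)
        posterior = sensitivity * prior / evidence                     # P(disease | positive)
        print(round(posterior, 3))  # ~0.154

    Even with a 90%-sensitive test, a positive result leaves only about a 15% chance of disease, the kind of counterintuitive update that makes probability so easy to misread.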

    The conversation shifts to deep learning’s limitations, with Pedro revealing how neural networks might be just a disguised form of nearest-neighbor learning. He challenges conventional wisdom on AGI, AI regulation, and the scalability of deep learning, offering insights into why Bayesian reasoning and analogical learning might be the future of AI.

    We also dive into analogical learning—a field championed by Douglas Hofstadter—exploring its impact on pattern recognition, case-based reasoning, and support vector machines (SVMs). Pedro highlights how AI has cycled through different paradigms, from symbolic AI in the '80s to SVMs in the 2000s, and why the next big breakthrough may not come from neural networks at all.
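
    As a rough sketch of the nearest-neighbor idea behind analogical and case-based reasoning (minimal Python with invented data, not code discussed in the episode):

        # 1-nearest-neighbor: predict by retrieving the most similar stored case
        import math

        # Stored cases: (feature vector, label); the data here is purely illustrative
        cases = [
            ((1.0, 1.0), "A"),
            ((1.2, 0.8), "A"),
            ((5.0, 5.0), "B"),
            ((4.8, 5.2), "B"),
        ]

        def predict(x):
            """Return the label of the stored case closest to x (Euclidean distance)."""
            nearest = min(cases, key=lambda case: math.dist(x, case[0]))
            return nearest[1]

        print(predict((1.1, 0.9)))  # -> "A"
        print(predict((4.9, 5.1)))  # -> "B"

    The same predict-by-similarity-to-stored-examples principle underlies case-based reasoning and, with a kernel as the similarity measure, support vector machines.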

    From theoretical AI debates to real-world applications, this episode offers a deep dive into the science behind AI learning methods, their limitations, and what’s next for machine intelligence.

    Don’t forget to like, subscribe, and hit the notification bell for more expert discussions on AI, technology, and the future of innovation!

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI



    (00:00) Introduction

    (02:55) The Five Tribes of Machine Learning Explained

    (06:34) Bayesian vs. Frequentist: The Probability Debate

    (08:27) What is Bayes' Theorem & How AI Uses It

    (12:46) The Power & Limitations of Bayesian Networks

    (16:43) How Bayesian Inference Works in AI

    (18:56) The Rise & Fall of Bayesian Machine Learning

    (20:31) Bayesian AI in Medical Diagnosis & Search and Rescue

    (25:07) How Google Used Bayesian Networks for Ads

    (28:56) The Role of Uncertainty in AI Decision-Making

    (30:34) Why Bayesian Learning is Computationally Hard

    (34:18) Analogical Learning – The Overlooked AI Paradigm

    (38:09) Support Vector Machines vs. Neural Networks

    (41:29) How SVMs Once Dominated Machine Learning

    (45:30) The Future of AI – Bayesian, Neural, or Hybrid?

    (50:38) Where AI is Heading Next


