This is the full version of my special livestreamed event on Artificial General Intelligence (AGI), held on July 18 and 19, 2024. You can watch it on YouTube here: https://www.youtube.com/watch?v=W3dRQ7QZ_wc. Watch the edited Q&A version with @LondonFuturists David Wood on YouTube here: https://www.youtube.com/watch?v=yYyTIky2MLc&t=0s

In this special livestreamed event I outlined my arguments that while IA (Intelligent Assistance) and some forms of narrow AI may well be quite beneficial to humanity, the idea of building AGIs, i.e., 'generally intelligent digital entities' (as set forth by Sam Altman / #openai and others), represents an existential risk that IMHO should not be undertaken or self-governed by private enterprises, multinational corporations, or venture-capital-funded startups. I believe we need an AGI Non-Proliferation Agreement.

I outline the difference between IA/AI and AGI or ASI (artificial superintelligence), why it matters, and how we could go about addressing it. IA/AI: yes, but with clear rules, standards, and guardrails. AGI: no, unless we're all on the same page. Who will be Mission Control for humanity?