Responsible AI Report

Written by: Responsible AI Institute
  • Summary

  • Welcome to the RAI Report from the Responsible AI Institute. Each week we bring you the latest news and trends in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests spotlight emerging innovations through a practical lens, helping listeners implement and advance AI responsibly.

    Support the show
    Visit our website at responsible.ai.

    © 2025 Responsible AI Report
Episodes
  • AI Risk & Ethical Considerations with Amy Challen, Global Head of AI at Shell | EP 09
    Jan 30 2025

    In this episode of the Responsible AI Report, Patrick speaks with Amy Challen, the Global Head of AI at Shell. They discuss the current landscape of AI, including the ethical considerations in AI development, the importance of risk management, and the public discourse surrounding responsible AI. The conversation highlights the need for a balanced approach to AI innovation and the role of leadership in navigating these challenges.

    Takeaways

    • The development of AI must serve humanity's interests.
    • Ethical considerations are crucial in AI and AGI development.
    • Different countries address AI risks in varied ways.
    • Public discussions on AI often overlook everyday ethics.
    • AI risks should be assessed pragmatically and holistically.
    • Technological innovation requires public and private partnerships.
    • Responsible AI is a collective effort across industries.

    Learn more at:
    https://www.shell.com/what-we-do/digitalisation/artificial-intelligence.html

    Amy Challen is the Global Head of Artificial Intelligence at Shell, responsible for driving delivery and adoption of AI technologies, including natural language processing, computer vision, and deep reinforcement learning.

    She spent the first decade of her career in academia as a researcher in applied econometrics, before joining McKinsey & Company as a strategy consultant. As a consultant she solved real-world problems across diverse functions and industries, for some of the world’s largest organizations, delivering significant commercial value. She joined Shell in 2019.

    22 mins
  • The Significance of AI System Cards with Bryan McGowan and Christopher Jambor, Trusted AI Team at KPMG | EP 08
    Jan 16 2025

    In this episode, Patrick speaks with Bryan McGowan and Chris Jambor from KPMG about the importance of responsible AI practices. They discuss the limitations of AI models, the development and significance of AI system cards, and how these tools can help mitigate risks associated with AI technologies. The conversation emphasizes the need for a structured approach to AI governance and the role of transparency and accountability in building trust in AI systems.

    Takeaways

    • AI tools are evolving rapidly and need proper guardrails.
    • AI system cards provide a structured way to assess AI systems.
    • Transparency and explainability are crucial in AI governance.
    • System cards help improve AI literacy in the workplace.
    • A trust score helps users understand AI system performance.
    • AI governance must be scalable and adaptable to technology changes.
    • Robust testing and validation are key to responsible AI.

    Learn more at:
    https://kpmg.com/xx/en/what-we-do/services/kpmg-trusted-ai.html

    Bryan McGowan is a Principal in the KPMG Advisory practice and leader of US Trusted AI for Consulting. In this role, Bryan continues to expand his passion for leveraging technology to drive efficiency, enhance insights, and improve results. Trusted AI combines deep industry expertise across the firm’s Risk Services, Lighthouse, and Cyber businesses with modern technical skills to help business leaders harness the power of AI to accelerate value in a trusted manner—from strategy and design through to implementation and ongoing operations.

    Bryan also leads the Trusted AI go-to-market efforts for the Risk Services business and co-developed the firm’s Risk Intelligence product suite to help identify, manage, and quantify risks across the enterprise. His primary focus areas are business process improvement, control design and automation, and managing risks associated with emerging technologies. Bryan has over 20 years’ experience running large, complex projects across a variety of industries, including supporting clients on their automation and analytics journey for the better part of the last decade—designing and developing bots, RPA, initial AI/ML models, and more.

    Chris is a member of the KPMG AI & Digital Innovation Group's Trusted AI Team with a specialized focus on AI literacy and the responsible & ethical uses of AI. Before joining the Trusted AI team, Chris was an AI Strategy Consultant & Analytics Engineer working in industries such as technology, entertainment, healthcare, pharmaceuticals, marketing/advertising, higher education, and cybersecurity.

    16 mins
  • The Complexities of State and Federal AI Regulation with Soribel Feliz, Former Senior AI & Tech Policy Advisor at the US Senate | EP 07
    Jan 2 2025

    For this episode of the Responsible AI Report, Soribel Feliz discusses the complexities of AI regulation, emphasizing the need for a balanced approach that considers both innovation and the rights of creators. She highlights the challenges faced by startups in complying with regulations and the differing impacts of state versus federal policies. The discussion also touches on the evolving landscape of intellectual property rights in the context of AI development.

    Takeaways

    • Big tech has valid arguments against regulations.
    • Overly burdensome regulations could hinder innovation for startups.
    • Effective regulation must be tailored to different contexts.
    • State-level regulations can be more responsive than federal ones.
    • A patchwork of regulations can complicate compliance for startups.
    • Balancing AI development with creator rights is complex.
    • Policymakers need to collaborate with AI developers and creators.
    • Fair use and data licensing frameworks are evolving areas of policy.
    • Smaller creators often lack resources to defend their rights.
    • Ongoing dialogue is essential for effective AI governance.

    Learn more at:
    https://www.linkedin.com/in/soribel-f-b5242b14/
    https://www.linkedin.com/newsletters/responsible-ai-=-inclusive-ai-7046134543027724288/

    Soribel Feliz is a thought leader in Responsible AI and AI governance. She started her career as a U.S. diplomat with the Department of State. She also worked for Big Tech companies, Meta and Microsoft, and most recently, worked as a Senior AI and Tech Policy Advisor in the U.S. Senate.

    17 mins
