• Article 23. Algorithmic System Integrity: Testing
    Feb 21 2025

Spoken (by a human) version of this article.

    TL;DR (TL;DL?)

• Testing is a foundational step for algorithmic integrity.
    • Testing involves various stages, from developer self-checks to user acceptance testing (UAT). Where these happen depends on whether the system is built in-house or bought.
    • Testing needs to cover several integrity aspects, including accuracy, fairness, security, privacy, and performance.
    • AI systems need continuous testing; this differs from traditional testing because these newer systems can change without code changes.


    About this podcast

    A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

    Hosted by Yusuf Moolla.
    Produced by Risk Insights (riskinsights.com.au).

    6 min
  • Article 22. Algorithm Integrity: Third party assurance
    Feb 16 2025

Spoken (by a human) version of this article.

    One question that comes up often is “How do we obtain assurance about third party products or services?”

    Depending on the nature of the relationship, and what you need assurance for, this can vary widely.

    This article attempts to lay out the options, considerations, and key steps to take.

    TL;DR (TL;DL?)

• Third-party assurance for algorithm integrity varies based on the nature of the relationship and your specific needs; several options are available.
    • Key factors to consider include the importance and risk level of the service/product, regulatory expectations, complexity, transparency, and frequency of updates.
    • Standardised assurance frameworks for algorithm integrity are still emerging; adopt a risk-based approach, and consider sector-specific standards like CPS 230 (Australia).


    7 min
  • Guest 3. Shea Brown, Founder and CEO of BABL AI
    Jan 31 2025

    Navigating AI Audits with Dr. Shea Brown

Dr. Shea Brown is Founder and CEO of BABL AI.
    BABL specializes in auditing and certifying AI systems, consulting on responsible AI practices, and offering online education.

    Shea shares his journey from astrophysics to AI auditing, the core services provided by BABL AI including compliance audits, technical testing, and risk assessments, and the importance of governance in AI.

    He also addresses the challenges posed by generative AI, the need for continuous upskilling in AI literacy, and the role of organizations like the IAAA and For Humanity in building consensus and standards in AI auditing.

    Finally, Shea provides insights on third-party risks, in-house AI developments, and key skills needed for effective AI governance.

    Chapter Markers

    00:00 Introduction to Dr. Shea Brown and BABL AI

    00:36 The Journey from Astrophysics to AI Auditing

    02:22 Core Services and Compliance Audits at BABL

    03:57 Educational Initiatives and AI Literacy

    05:48 Collaborations and Professional Organizations

    08:57 Approach to AI Audits and Readiness

    17:29 Challenges with Generative AI in Audits

    29:21 Trends in AI Deployment and Risk Assessment

    34:53 Skills and Training for AI Governance

    40:15 Conclusion and Contact Information



    41 min
  • Article 21. AI Risk Training: Role-based tailoring
    Jan 31 2025

Spoken (by a human) version of this article.

• AI literacy is growing in importance (e.g., EU AI Act, IAIS).
    • AI literacy needs vary across roles.
    • Even "AI professionals" need AI Risk training.


    Links

• EU AI Act: The European Union Artificial Intelligence Act, which sets a specific expectation about "AI literacy".
    • IAIS: The International Association of Insurance Supervisors is developing a guidance paper on the supervision of AI.

    6 min
  • Guest 2. Patrick Sullivan: VP of Strategy and Innovation at A-LIGN
    Jan 21 2025

    Navigating AI Governance and Compliance

    Patrick Sullivan is Vice President of Strategy and Innovation at A-LIGN and an expert in cybersecurity and AI compliance with over 25 years of experience.

    Patrick shares his career journey, discusses his passion for educating executives and directors on effective governance, and explains the critical role of management systems like ISO 42001 in AI compliance.

    We discuss the complexities of AI governance, risk assessment, and the importance of clear organizational context.

    Patrick also highlights the challenges and benefits of AI assurance and offers insights into the changing landscape of AI standards and regulations.

    Chapter Markers

    00:00 Introduction

    00:23 Patrick's Career Journey

    02:31 Focus on AI Governance

    04:19 Importance of Education and Internal Training

    08:08 Involvement in Industry Associations

    14:13 AI Standards and Governance

    20:06 Challenges with preparing for AI Certification

    28:04 Future of AI Assurance

    32 min
  • Guest 1. Ryan Carrier: Executive Director of ForHumanity
    Jan 20 2025

    Mitigating AI Risks

    Ryan Carrier is founder and executive director of ForHumanity, a non-profit focused on mitigating the risks associated with AI, autonomous, and algorithmic systems.

    With 25 years of experience in financial services, Ryan discusses ForHumanity's mission to analyze and mitigate the downside risks of AI to benefit society.

    The conversation includes insights on the foundation of ForHumanity, the role of independent AI audits, educational programs offered by the ForHumanity AI Education and Training Center, AI governance, and the development of audit certification schemes.

    Ryan also highlights the importance of AI literacy, stakeholder management, and the future of AI governance and compliance.

    Chapter Markers

    00:00 Introduction to Ryan Carrier and ForHumanity

    00:57 Ryan's Background and Journey to AI

    02:10 Founding ForHumanity: Mission and Early Challenges

    05:15 Developing Independent Audits for AI

    08:02 ForHumanity's Role and Activities

    17:26 Education Programs and Certifications

    29:21 AI Literacy and Future of Independent Audits

    42:06 Getting Involved with ForHumanity

    45 min
  • Article 20. Algorithm Reviews: Public vs Private Reports
    Jan 15 2025

    Spoken (by a human) version of this article.

    • Public AI audit reports aren't universally required; they mainly apply to high-risk applications and/or specific jurisdictions.
    • The push for transparency primarily concerns independent audits, not internal reviews.
    • Prepare by implementing ethical AI practices and conducting regular reviews.

Note: High-risk AI systems in banking and insurance are subject to specific requirements.

    Links

    • AI and algorithm audit guidelines vary widely and are not universally applicable. We discussed this in a previous article, outlining how the appropriateness of audit guidance depends on your circumstances.
    • Audit vs Review: we explored this topic in depth in a previous article.

    8 min
  • Article 19. Algorithmic System Reviews: Substantive vs. Controls Testing
    Jan 13 2025

Spoken (by a human) version of this article.

    • Knowing the basics of substantive testing vs. controls testing can help you determine if the review will meet your needs.
    • Substantive testing directly identifies errors or unfairness, while controls testing evaluates governance effectiveness. The results/conclusions are different.
    • Understanding these differences can also help you anticipate the extent of your team's involvement during the review process.


    Links
    This article details a (largely) substantive testing method for accuracy reviews.

    6 min