Artificial Intelligence Act - EU AI Act

Author(s): Inception Point Ai
About this audio

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.



This show includes AI-generated content. Copyright 2025 Inception Point Ai
Politics Economics
Episodes
  • EU AI Act Teeters on Brink as High-Risk Rules Deadline Looms
    May 4 2026
    Imagine this: it's early May 2026, and I'm huddled in a Brussels café, laptop glowing amid the scent of fresh croissants, as the EU AI Act's ticking clock dominates every tech whisper. Just days ago, on April 28th, the second political trilogue between the European Parliament, the Council of the EU, and the European Commission collapsed after 12 grueling hours. No deal on the Digital Omnibus proposal, tabled by the Commission back on November 19th, 2025. The stakes? Postponing high-risk AI obligations from August 2nd, 2026—now a mere three months away—to December 2nd, 2027 for standalone systems, or even August 2028 for those embedded in regulated products like medical devices from Siemens Healthineers or connected cars from Volkswagen.

High-risk AI, listeners—that's the beast: systems in recruitment at companies like Unilever, performance evaluation in HR tools from Workday, or worker monitoring at Amazon warehouses. The Act, Regulation (EU) 2024/1689, entered into force on August 1st, 2024, tiering risks from unacceptable—like banned social scoring or real-time biometrics in public spaces—to these heavyweights demanding risk assessments, data governance, transparency, and EU database registration. Fines? Up to 7% of global turnover for violations, dwarfing GDPR slaps.

The snag? Exemptions for AI in already-regulated gear, like toys or industrial machinery. Parliament, backed by industry lobbies, wants them out; the Council drags its feet. POLITICO's Pieter Haeck called it a sticking point, with German Chancellor Friedrich Merz pushing cuts for industrial AI—branded a "corset" by his EPP group—while his Social Democrat partners balk. Next trilogue? May 13th. Miss the August deadline without adoption, and the original rules bite hard, per DLA Piper's analysis. Financial firms, think credit scoring at Deutsche Bank, scramble now, as Finextra warns.

Zoom out: the European AI Office, nestled in the Commission, oversees general-purpose models like Mistral's or Anthropic's—soon Mythos?—mandating red-teaming for systemic risks over 10^25 FLOPs, copyright summaries, and incident reports. Yet civil society, via Future of Life Institute newsletters, fumes: the Advisory Forum is still unborn, seven months post-call. Access Now slams gaps for migrants' rights. As the UK's AISI races ahead with voluntary cyber tests, the EU's enforceable lifecycle oversight shines—or stifles?
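That 10^25 FLOP line for systemic-risk general-purpose models can be made concrete with a back-of-the-envelope check. A minimal sketch, assuming the common "6 × parameters × training tokens" estimate of training compute (a community heuristic, not a formula from the Act itself; the model sizes below are hypothetical):

```python
# EU AI Act: general-purpose models trained with >= 10^25 FLOPs are
# presumed to pose "systemic risk" and face extra obligations.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training compute via the ~6 FLOPs per parameter per token
    heuristic (an estimate, not the Regulation's own method)."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs; systemic-risk presumption: "
      f"{presumed_systemic_risk(70e9, 15e12)}")
# → 6.30e+24 FLOPs; systemic-risk presumption: False
```

At roughly 6.3 × 10^24 FLOPs, that hypothetical run sits just under the threshold; scale parameters or tokens up by a factor of two and the presumption flips.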

    This Act isn't just rules; it's a philosophical fork. Does risk-based rigor foster trustworthy AI, or hobble Europe's edge against US hyperscalers? With guidelines brewing—high-risk clarifications by June, per Dastra—compliance is a tech chess game. Will Omnibus save the day, or ignite chaos? Ponder that as August looms.

Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production; for more, check out quietplease.ai.

Some great deals: https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

This content was created in partnership and with the help of artificial intelligence (AI).

    This episode includes AI-generated content.
    3 min
  • EU's August 2nd AI Deadline: Brussels Braces for High-Stakes Showdown on Worker Rights and Tech Rules
    May 2 2026
    Imagine this: it's early May 2026, and I'm huddled in a Brussels café, steam rising from my espresso as I scroll through the latest dispatches on the EU AI Act. The clock is ticking toward August 2nd, that do-or-die deadline for high-risk AI systems, and the air is thick with tension. Just days ago, on April 28th, the second political trilogue between the European Parliament, the Council of the EU, and the European Commission wrapped up in deadlock over the Digital Omnibus proposal. No agreement. The next one's slated for May 13th, but if they don't seal the deal before summer, those original rules kick in hard—no deferrals, no mercy.

Picture the stakes. High-risk AI, as defined in the Act's Annex III, covers tools reshaping our workplaces: recruitment bots sifting CVs in Berlin startups, performance evaluators at Siemens in Munich, or task allocators monitoring workers from Dublin to Warsaw. Providers must self-certify conformity, log every decision, ensure human oversight, and register everything in the EU's public database via the AI Act Service Desk. Deployers? You're on the hook for following instructions, retaining logs for six months, and notifying affected folks. Lawyers at Holland & Knight warn non-EU giants like U.S. firms: if your AI output touches EU soil—hiring Parisian candidates or scoring Milanese credit—appoint an authorized rep in Brussels, or face fines up to 3% of global turnover, per Article 99. That's €35 million or 7% for the worst offenses, plus market bans.
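Those Article 99 penalties follow a "fixed amount or share of worldwide turnover, whichever is higher" pattern. A minimal sketch, using the €35M/7% top tier from the episode and the €15M floor for the 3% tier from the Act itself (the full penalty schedule has more tiers than shown, and the turnover figure below is hypothetical):

```python
# EU AI Act, Article 99: each fine tier is a fixed euro cap OR a share of
# worldwide annual turnover, whichever is HIGHER. Two tiers only, as a sketch;
# the €15M figure for the 3% tier comes from the Act, not this episode.
TIER_CAPS = {
    "prohibited_practices": (35_000_000, 0.07),    # €35M or 7% of turnover
    "most_other_obligations": (15_000_000, 0.03),  # €15M or 3% of turnover
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a tier, given worldwide annual turnover."""
    fixed_cap, turnover_share = TIER_CAPS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A hypothetical firm with €2bn worldwide turnover:
print(max_fine("prohibited_practices", 2e9))    # ≈ €140M (7% of €2bn)
print(max_fine("most_other_obligations", 2e9))  # ≈ €60M (3% of €2bn)
```

The `max()` mirrors the "whichever is higher" wording for large firms; note that the Act treats SMEs and startups more leniently (the lower of the two amounts), which this sketch deliberately ignores.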

    The Omnibus, tabled by the European Commission on November 19th, 2025, begged for a reprieve: push high-risk employment obligations to December 2nd, 2027, and sector-specific ones to August 2028. German Chancellor Friedrich Merz champions easing industrial AI burdens to dodge "double regulation," echoed by Siemens spokespeople craving clarity. Italian MEP Brando Benifei, Parliament's lead negotiator, pushes back, fearing a fragmented framework. Venture capitalist Bill Gurley chimes in from afar, fretting AI could displace 59% of workers—curiosity and skill-building our only shields.

    Yet here's the techie twist provoking my neurons: this risk-tiered behemoth—unacceptable risks banned since February 2025, general-purpose models like GPT-4 under transparency mandates—aims for trustworthy AI, but delays expose the hype. The European AI Office, beefed up in the Simplification Package, now hunts infringements, drafts codes with devs, and eyes systemic risks. Will it foster innovation or stifle it? U.S. deployers tweaking SaaS platforms could flip from user to provider with one code tweak. As VDE notes, without harmonized standards, chaos looms.

    Listeners, in this AI arms race, the EU Act isn't just law—it's a philosophical gauntlet: balance godlike models with human rights, or watch jobs vanish into silicon. Prepare now; August 2nd waits for no trilogue.

Thank you for tuning in, and please subscribe for more. This has been a Quiet Please production; for more, check out quietplease.ai.

Some great deals: https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

This content was created in partnership and with the help of artificial intelligence (AI).

    This episode includes AI-generated content.
    4 min
  • Europe's AI Reckoning: Brussels Tightens the Screws as August Deadline Looms
    Apr 30 2026
Imagine this: it's just past dawn in Brussels, and I'm sipping black coffee in a corner café near the European Parliament, scrolling through the latest dispatches on my tablet. The date is April 30, 2026, and the EU AI Act—that groundbreaking Regulation (EU) 2024/1689, which kicked off in August 2024—is hitting warp speed. Prohibited practices like manipulative subliminal AI got banned back in February 2025, general-purpose AI models like those powering GPT-4 faced obligations last August, and now, high-risk systems loom large with their deadline just three months away on August 2.

Yesterday, April 29, Reuters dropped a bombshell: EU antitrust chief Teresa Ribera announced the Digital Markets Act is pivoting to rein in Big Tech's grip on cloud services and AI, targeting gatekeepers like Alphabet, Amazon, and Microsoft to make AI fairer and more contestable. They're even eyeing designating certain AI services as core platform services. But the real drama unfolded on April 28 in the second political trilogue between the European Parliament, the Council of the EU, and the European Commission. After 12 grueling hours, as The Next Web reports, they failed to agree on the Digital Omnibus proposal—that November 19, 2025, brainchild from the Commission aiming to defer high-risk compliance from August 2, 2026, to December 2, 2027, for standalone systems, and even later to August 2028 for those embedded in regulated products like medical devices or connected cars.

High-risk AI? Think recruitment tools from companies like LinkedIn, performance evaluators at Siemens, or worker monitoring systems in Amazon warehouses—all classified under Annex III, demanding continuous risk management, data governance, and transparency, not just one-off audits, per OpenLayer's April 2026 guide. The Parliament, backed by industry lobbies, wants exemptions for product-embedded AI already under sectoral rules, but the Council isn't budging. Talks resume May 13, per DLA Piper's analysis. If no deal by August, the original deadlines hit like a freight train, catching unprepared firms off-guard.

Yet, amid the chaos, silver linings emerge. AgFunderNews coins it a "Brussels moat": startups building auditable, compliant AI for high-stakes sectors like agrifood or health could dominate, turning red tape into competitive edge. The AI Office's upcoming guidelines on high-risk systems, expected May or June via Dastra's roadmap, plus codes of practice for deepfakes, promise clarity. And the Commission's EU Inc. push, unveiled last month, aims for a pan-EU company structure by year's end, easing scaling for AI founders fragmented by national laws—as Jeroen Ten Broecke of Philippe & Partners notes, slashing cross-border friction.

This Act's risk-tiered genius—unacceptable, high, limited, minimal—is rippling globally via the Brussels effect, inspiring U.S. bills like the CHATBOT Act from Senators Ted Cruz and Brian Schatz. But here's the provocation, listeners: will Europe's push for trustworthy, human-centric AI stifle innovation or forge a safer digital frontier? As an AI dev in Berlin, I'm racing to embed risk pipelines into my code, per that arXiv insider research from startups. The clock ticks—prepare or perish.

Thanks for tuning in, listeners—don't forget to subscribe for more. This has been a Quiet Please production; for more, check out quietplease.ai.

Some great deals: https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

This content was created in partnership and with the help of artificial intelligence (AI).

    This episode includes AI-generated content.
    4 min