• Mystery AI Hype Theater 3000, Episode 40: Elders Need Care, Not 'AI' Surveillance (feat. Clara Berridge), August 19 2024
    Sep 13 2024

    Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship" to "coaching," such as medication reminders and other nudges toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as the global population grows older on average, technology such as chatbots is often used to sidestep real solutions for providing meaningful care, while also playing on ageist and ableist tropes.

    Dr. Clara Berridge is an associate professor at the University of Washington’s School of Social Work. Her research focuses explicitly on the policy and ethical implications of digital technology in elder care, and considers things like privacy and surveillance, power, and decision-making about technology use.

    References:

    Care.Coach's 'Avatar' chat program*

    For Older People Who Are Lonely, Is the Solution a Robot Friend?

    Care Providers’ Perspectives on the Design of Assistive Persuasive Behaviors for Socially Assistive Robots

    Socio-Digital Vulnerability

    *Care.Coach's 'Fara' and 'Auger' products, also discussed in this episode, are no longer listed on their site.

    Fresh AI Hell:

    Apple Intelligence hidden prompts include the command "don't hallucinate"

    The US wants to use facial recognition to identify migrant children as they age

    Family poisoned after following fake mushroom book

    It is a beautiful evening in the neighborhood, and you are a horrible Waymo robotaxi

    Dynamic pricing + surveillance hell at the grocery store

    Chinese social media's newest trend: imitating AI-generated videos


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 1 min
  • Episode 39: Newsrooms Pivot to Bullshit (feat. Sam Cole), Aug 5 2024
    Aug 29 2024

    The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.

    References:

    The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom."
    Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unveils First AI-Generated Blog

    The Washington Post's First AI Strategy Editor Talks LLMs in the Newsroom

    Also: New Washington Post CTO comes from Uber

    The Washington Post debuts AI chatbot, will summarize climate articles.

    Media companies are making a huge mistake with AI

    When ChatGPT summarizes, it does nothing of the kind

    404 Media: 404 Media Now Has a Full Text RSS Feed

    404 Media: Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones)


    Fresh AI Hell:

    "AI" Alan Turning

    • Our Opinions Are Correct: The Turing Test is Bullshit (w/Alex Hanna and Emily M. Bender)

    Google advertises Gemini for writing synthetic fan letters

    Dutch judge uses ChatGPT's answers to factual questions in ruling

    Is GenAI coming to your home appliances?

    AcademicGPT (Galactica redux)

    "AI" generated images in medical science, again (now retracted)


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 2 mins
  • Episode 38: Deflating Zoom's 'Digital Twin,' July 29 2024
    Aug 14 2024

    Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.

    References:
    The CEO of Zoom wants AI clones in meetings

    All-knowing machines are a fantasy

    A reminder of some things chatbots are not good for

    Medical science shouldn't platform automating end-of-life care

    The grimy residue of the AI bubble

    On the phenomenon of bullshit jobs: a work rant

    Fresh AI Hell:
    LA schools' ed tech chatbot misusing student data

    AI "teaching assistants" at Morehouse

    "Diet-monitoring AI tracks your each and every spoonful"

    A teacher's perspective on dealing with students who "asked ChatGPT"

    Are Swiss researchers affiliated with the Israeli military-industrial complex? Swiss institution asks ChatGPT

    Using a chatbot to negotiate lower prices


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 2 mins
  • Episode 37: Chatbots Aren't Nurses (feat. Michelle Mahon), July 22 2024
    Aug 2 2024

    We regret to report that companies are still trying to make generative AI that can 'transform' healthcare -- but without investing in the wellbeing of healthcare workers or other aspects of actual patient care. Registered nurse and nursing care advocate Michelle Mahon joins Emily and Alex to explain why generative AI falls far, far short of the work nurses do.

    Michelle Mahon is the Director of Nursing Practice with National Nurses United, the largest union of registered nurses in the country. Michelle has over 25 years of experience as a registered nurse in various settings. In her role with NNU, Michelle works with nurses across the United States to protect the vital role that RNs play in health care as direct caregivers and patient advocates.

    References:

    NVIDIA's AI Bot Outperforms Nurses: Here's What It Means

    Hippocratic AI's roster of 'genAI healthcare agents'

    Related: Nuance's DAX Copilot

    Fresh AI Hell:

    "AI-powered health coach" will urge you to drink water with lemon

    50% of 2024 Q2 VC investments went to "AI"

    Thanks to AI, Google no longer claiming to be carbon-neutral

    Click work "jobs" soliciting photos of babies through teens

    Screening of film "written by AI" canceled after backlash

    Putting the AI in IPA


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr
  • Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24 2024
    Jul 19 2024

    When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might intentionally deceive us was bad science, and yet is still influencing the public conversation about AI.

    Ali Alkhatib is a computer scientist and former director of the University of San Francisco’s Center for Applied Data Ethics. His research focuses on human-computer interaction and why our technological problems are really social -- and why we should apply social science lenses to data work, algorithmic justice, and even the errors and reality distortions inherent in AI models.

    References:

    Google DeepMind paper-like object: Evaluating Frontier Models for Dangerous Capabilities

    Fresh AI Hell:

    Hacker tool extracts all the data collected by Windows' 'Recall' AI

    In NYC, ShotSpotter calls are 87 percent false alarms

    "AI" system to make callers sound less angry to call center workers

    Anthropic's Claude 3.5 Sonnet evaluated for "graduate level reasoning"

    OpenAI's Mira Murati says "AI" will have 'PhD-level' intelligence

    OpenAI's Mira Murati also says AI will take some creative jobs that maybe shouldn't have been there in the first place



    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 2 mins
  • Episode 35: AI Overviews and Google's AdTech Empire (feat. Safiya Noble), June 10 2024
    Jul 3 2024

    You've already heard about the rock-prescribing, glue pizza-suggesting hazards of Google's AI overviews. But the problems with the internet's most-used search engine go way back. UCLA scholar and "Algorithms of Oppression" author Safiya Noble joins Alex and Emily in a conversation about how Google has long been breaking our information ecosystem in the name of shareholders and ad sales.

    References:

    Blog post, May 14: Generative AI in Search: Let Google do the searching for you
    Blog post, May 30: AI Overviews: About last week

    Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Noble

    Fresh AI Hell:

    AI Catholic priest demoted after saying it's OK to baptize babies with Gatorade

    National Archives bans use of ChatGPT

    ChatGPT better than humans at "Moral Turing Test"

    Taco Bell as an "AI first" company

    AGI by 2027, in one hilarious graph



    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 2 mins
  • Episode 34: Senate Dot Roadmap Dot Final Dot No Really Dot Docx, June 3 2024
    Jun 20 2024

    The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year has birthed a "roadmap" for future legislation. Emily and Alex take a deep dive into this report and conclude that the time spent writing it could have instead been spent...making useful laws.

    References:

    Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States

    Tech Policy Press: US Senate AI Insight Forum Tracker

    Put the Public in the Driver's Seat: Shadow Report to the US Senate AI Policy Roadmap

    Emily's opening remarks at the "AI in the Workplace: New Crisis or Longstanding Challenge" virtual roundtable

    Fresh AI Hell:

    Homophobia in Spotify's chatbot

    Stack Overflow in bed with OpenAI, pushing back against resistance

    • See also: https://scholar.social/@dingemansemark/112411041956275543

    OpenAI making copyright claim against ChatGPT subreddit

    Introducing synthetic text for police reports

    ChatGPT-like "AI" assistant ... as a car feature?

    Scarlett Johansson vs. OpenAI


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 4 mins
  • Episode 33: Much Ado About 'AI' 'Deception', May 20 2024
    Jun 5 2024

    Will the LLMs somehow become so advanced that they learn to lie to us in order to achieve their own ends? It's the stuff of science fiction, and in science fiction these claims should remain. Emily and guest host Margaret Mitchell, machine learning researcher and chief ethics scientist at Hugging Face, break down why 'AI deception' is firmly a feature of human hype.

    Reference:

    Patterns: "AI deception: A survey of examples, risks, and potential solutions"

    Fresh AI Hell:

    Adobe's 'ethical' image generator is still pulling from copyrighted material

    Apple advertising hell: vivid depiction of tech crushing creativity, as if it were good

    "AI is more creative than 99% of people"

    AI-generated employee handbooks causing chaos

    Bumble CEO: Let AI 'concierge' do your dating for you.

    • Some critique


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 1 min