It's been a couple of weeks since the Chinese firm DeepSeek released its new R1 large language model and sheared an enormous amount of value off American AI companies. Now that the dust has settled, we don our AI-skeptic hats again and try to unpack what makes this model different, including how it was made so much more efficiently, what opening it up for free means for paid competitors, and whether we might not have to burn down quite so many forests going forward. (Hint: Don't get your hopes up.)
https://www.livescience.com/technology/artificial-intelligence/why-is-deekspeek-such-a-game-changer-scientists-explain-how-the-ai-models-work-and-why-they-were-so-cheap-to-build
https://hackaday.com/2025/02/03/more-details-on-why-deepseek-is-a-big-deal/
https://www.404media.co/openai-furious-deepseek-might-have-stolen-all-the-data-openai-stole-from-us/
https://www.vellum.ai/blog/the-training-of-deepseek-r1-and-ways-to-use-it
Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod