In this episode of Designing the Robot Revolution, David and Jacob explore whether the increasing size of large language models (LLMs) is delivering meaningful improvements or hitting a point of diminishing returns. They discuss the balance between scale and utility, questioning how much of these massive models' potential is actually being used and what a plateau in model capability would mean for ideation.
LINKS
OpenAI, Google and Anthropic are struggling to build more advanced AI