Episodes

  • eBay Chief AI Officer Nitzan Mekel-Bobrov
    Jan 30 2025

    We hear about Nitzan’s AI expertise, his motivation for joining eBay, and his approach to integrating AI into eBay's business model. Gain insights into the impacts of centralizing and federating AI, leveraging generative AI to create personalized content, and why patience is essential to AI development. We also unpack eBay's approach to LLM development, tailoring AI tools for eBay sellers, the pitfalls of generic marketing content, and the future of AI in retail. Join us to discover how AI is revolutionizing e-commerce and disrupting the retail sector with Nitzan Mekel-Bobrov!

    Key Points From This Episode:

    • Nitzan's career experience, his interest in sustainability, and his sneaker collection.
    • Why he decided to begin a career at eBay and his role at the company.
    • His approach to aligning the implementation of AI with eBay's overall strategy.
    • How he identifies the components of eBay's business model that will benefit from AI.
    • What makes eBay highly suitable for the implementation of AI tools.
    • Challenges of using generative AI models to create personalized content for users.
    • Why experimentation is vital to the AI development and implementation process.
    • Aspects of the user experience that Nitzan uses to train and develop eBay's LLMs.
    • The potential of knowledge graphs to uncover the complexity of user behavior.
    • Reasons that the unstructured nature of eBay's data is fundamental to its business model.
    • Incorporating a seller's style into AI tools to avoid creating generic marketing material.
    • Details about Nitzan’s team and their diverse array of expertise.
    • Final takeaways and how companies can ensure they survive the AI transition.

    Quotes:

    “It’s tricky to balance the short-term wins with the long-term transformation.” — Nitzan Mekel-Bobrov [0:06:50]

    “An experiment is only a failure if you haven’t learned anything yourself and – generated institutional knowledge from it.” — Nitzan Mekel-Bobrov [0:09:36]

    “What's nice about [eBay's] business model — is that our incentive is to enable each seller to maintain their own uniqueness.” — Nitzan Mekel-Bobrov [0:27:33]

    “The companies that will thrive in this AI transformation are the ones that can figure out how to marry parts of their current culture and what all of their talent brings with what the AI delivers.” — Nitzan Mekel-Bobrov [0:33:58]

    Links Mentioned in Today’s Episode:

    Nitzan Mekel-Bobrov on LinkedIn

    eBay

    How AI Happens

    Sama

    33 min
  • Unilever Head of Data Science Dr. Satyajit Wattamwar
    Jan 24 2025

    Satya unpacks how Unilever utilizes its database to inform its models and how to determine the right amount of data needed to solve complex problems. Dr. Wattamwar explains why contextual problem-solving is vital, the notion of time constraints in data science, the system point of view of modeling, and how Unilever incorporates AI into its models. Gain insights into how AI can increase operational efficiency, exciting trends in the AI space, how AI makes experimentation accessible, and more! Tune in to learn about the power of data science and AI with Dr. Satyajit Wattamwar.

    Key Points From This Episode:

    • Background on Dr. Wattamwar, his PhD research, and data science expertise.
    • Unpacking some of the commonalities between data science and physics.
    • Why the outcome of using significantly large data sets depends on the situation.
    • The minimum amount of data needed to make meaningful and quality models.
    • Examples of the common mistakes and pitfalls that data scientists make.
    • How Unilever works with partner organizations to integrate AI into its models.
    • Ways that Dr. Wattamwar uses AI-based tools to increase his productivity.
    • The difference between using AI for innovation versus operational efficiency.
    • Insight into the shifting data science landscape and advice for budding data scientists.

    Quotes:

    “Around – 30 or 40 years ago, people started realizing the importance of data-driven modeling because you can never capture physics perfectly in an equation.” — Dr. Satyajit Wattamwar [0:03:10]

    “Having large volumes of data which are less related with each other is a different thing than a large volume of data for one problem.” — Dr. Satyajit Wattamwar [0:09:12]

    “More data [does] not always lead to good quality models. Unless it is for the same use-case.” — Dr. Satyajit Wattamwar [0:11:56]

    “If somebody is looking [to] grow in their career ladder, then it's not about one's own interest.” — Dr. Satyajit Wattamwar [0:24:07]

    Links Mentioned in Today’s Episode:

    Dr. Satyajit Wattamwar on LinkedIn

    Unilever

    How AI Happens

    Sama

    25 min
  • Vanguard Principal of Center for Analytics & Insights Jing Wang
    Dec 30 2024

    Jing explains how Vanguard uses machine learning and reinforcement learning to deliver personalized "nudges," helping investors make smarter financial decisions. Jing dives into the importance of aligning AI efforts with Vanguard’s mission and discusses generative AI’s potential for boosting employee productivity while improving customer experiences. She also reveals how generative AI is poised to play a key role in transforming the company's future, all while maintaining strict data privacy standards.

    Key Points From This Episode:

    • Jing Wang’s time at Fermilab and the research behind her PhD in high-energy physics.
    • What she misses most about academia and what led to her current role at Vanguard.
    • How she aligns her team’s AI strategy with Vanguard’s business goals.
    • Ways they are utilizing AI for nudging investors to make better decisions.
    • Their process for delivering highly personalized recommendations for any given investor.
    • Steps that ensure they adhere to finance industry regulations with their AI tools.
    • The role of reinforcement learning and their ‘next best action’ models in personalization.
    • Their approach to determining the best use of their datasets while protecting privacy.
    • Vanguard’s plans for generative AI, from internal productivity to serving clients.
    • How Jing stays abreast of all the latest developments in physics.

    Quotes:

    “We make sure all our AI work is aligned with [Vanguard’s] four pillars to deliver business impact.” — Jing Wang [0:08:56]

    “We found those simple nudges have tremendous power in terms of guiding the investors to adopt the right things. And this year, we started to use a machine learning model to actually personalize those nudges.” — Jing Wang [0:19:39]

    “Ultimately, we see that generative AI could help us to build more differentiated products. – We want to have AI be able to train language models [to have] much more of a Vanguard mindset.” — Jing Wang [0:29:22]

    Links Mentioned in Today’s Episode:

    Jing Wang on LinkedIn

    Vanguard

    Fermilab
    How AI Happens

    Sama

    35 min
  • Sema4 CTO Ram Venkatesh
    Dec 23 2024

    Key Points From This Episode:

    • Ram Venkatesh describes his career journey to founding Sema4.ai.
    • The pain points he was trying to ease with Sema4.ai.
    • How our general approach to big data is becoming more streamlined, albeit rather slowly.
    • The ins and outs of Sema4.ai and how it serves its clients.
    • What Ram means by “agent” and “agent agency” when referring to machine learning copilots.
    • The difference between writing a program to execute versus an agent reasoning with it.
    • Understanding the contextual work training method for agents.
    • The relationship between an LLM and an agent and the risks of training LLMs on agent data.
    • Exploring the next generation of LLM training protocols in the hopes of improving efficiency.
    • The requirements of an LLM if you’re not training it and unpacking modality improvements.
    • Why agent input and feedback are major disruptions to SaaS and beyond.
    • Our guest shares his hopes for the future of AI.

    Quotes:

    “I’ve spent the last 30 years in data. So, if there’s a database out there, whether it’s relational or object or XML or JSON, I’ve done something unspeakable to it at some point.” — @ramvzz [0:01:46]

    “As people are getting more experienced with how they could apply GenAI to solve their problems, then they’re realizing that they do need to organize their data and that data is really important.” — @ramvzz [0:18:58]

    “Following the technology and where it can go, there’s a lot of fun to be had with that.” — @ramvzz [0:23:29]

    “Now that we can see how software development itself is evolving, I think that 12-year-old me would’ve built so many more cooler things than I did with all the tech that’s out here now.” — @ramvzz [0:29:14]

    Links Mentioned in Today’s Episode:

    Ram Venkatesh on LinkedIn

    Ram Venkatesh on X

    Sema4.ai

    Cloudera

    How AI Happens

    Sama

    30 min
  • Unpacking Meta's SAM-2 with Sama Experts Pascal & Yannick
    Dec 18 2024

    Pascal & Yannick delve into the kind of human involvement SAM-2 needs before discussing the use cases it enables. Hear all about the importance of setting realistic expectations for AI, what the cost of SAM-2 looks like, and why humans remain essential to working with LLMs.

    Key Points From This Episode:

    • Introducing Pascal Jauffret and Yannick Donnelly to the show.
    • Our guests explain what the SAM-2 model is.
    • A description of what getting information from video entails.
    • What made our guests interested in researching SAM-2.
    • A few things that stand out about this tool.
    • The level of human involvement that SAM-2 needs.
    • Some of the use cases they see SAM-2 enabling.
    • Whether manually annotating is easier than simply validating data.
    • The importance of setting realistic expectations of what AI can do.
    • When LLM models work best, according to our experts.
    • A discussion about the cost of the models at the moment.
    • Why humans are so important in coaching people to use models.
    • What we can expect from Sama in the near future.

    Quotes:

    “We’re kind of shifting towards more of a validation period than just annotating from scratch.” — Yannick Donnelly [0:22:01]

    “Models have their place but they need to be evaluated.” — Yannick Donnelly [0:25:16]

    “You’re never just using a model for the sake of using a model. You’re trying to solve something and you’re trying to improve a business metric.” — Pascal Jauffret [0:32:59]

    “We really shouldn’t underestimate the human aspect of using models.” — Pascal Jauffret [0:40:08]

    Links Mentioned in Today’s Episode:

    Pascal Jauffret on LinkedIn

    Yannick Donnelly on LinkedIn

    How AI Happens

    Sama

    50 min
  • Qualcomm Senior Director Siddhika Nevrekar
    Dec 16 2024

    Today we are joined by Siddhika Nevrekar, an experienced product leader passionate about solving complex problems in ML by bringing people and products together in an environment of trust. We unpack the state of free computing, the challenges of training AI models for edge, what Siddhika hopes to achieve in her role at Qualcomm, and her methods for solving common industry problems that developers face.

    Key Points From This Episode:

    • Siddhika Nevrekar walks us through her career pivot from cloud to edge computing.
    • Why she’s passionate about overcoming her fears and achieving the impossible.
    • Increasing compute on edge devices versus developing more efficient AI models.
    • Siddhika explains what makes Apple a truly unique company.
    • The original inspirations for edge computing and how the conversation has evolved.
    • Unpacking the current state of free computing and what may happen in the near future.
    • The challenges of training AI models for edge.
    • Exploring Siddhika’s role at Qualcomm and what she hopes to achieve.
    • Diving deeper into her process for achieving her goals.
    • Common industry challenges that developers are facing and her methods for solving them.

    Quotes:

    “Ultimately, we are constrained with the size of the device. It’s all physics. How much can you compress a small little chip to do what hundreds and thousands of chips can do which you can stack up in a cloud? Can you actually replicate that experience on the device?” — @siddhika_

    “By the time I left Apple, we had 1000-plus [AI] models running on devices and 10,000 applications that were powered by AI on the device, exclusively on the device. Which means the model is entirely on the device and is not going into the cloud. To me, that was the realization that now the moment has arrived where something magical is going to start happening with AI and ML.” — @siddhika_

    Links Mentioned in Today’s Episode:

    Siddhika Nevrekar on LinkedIn

    Siddhika Nevrekar on X

    Qualcomm AI Hub

    How AI Happens

    Sama

    33 min
  • Block Developer Advocate Rizel Scarlett
    Dec 3 2024

    Today we are joined by Rizel Scarlett, Developer Advocate at Block, who is here to explain how to bridge the gap between the technical and non-technical aspects of a business. We also learn about AI hallucinations and how Rizel and Block approach this particular pain point, the responsibilities borne by AI users, why it’s important to make AI tools accessible to all, and the ins and outs of G{Code} House – a learning community for Indigenous women and women of color in tech. To end, Rizel explains what needs to be done to break down barriers to entry for the G{Code} population in tech, and she describes the ideal relationship between a developer advocate and the technical arm of a business.

    Key Points From This Episode:

    • Rizel Scarlett describes the role and responsibilities of a developer advocate.
    • Her role in getting others to understand how GitHub Copilot should be used.
    • Exploring her ongoing projects and current duties at Block.
    • How the conversation around AI copilot tools has shifted in the last 18 months.
    • The importance of objection handling and why companies must pay more attention to it.
    • AI hallucinations and Rizel’s advice for approaching this particular pain point.
    • Why “I don’t know” should be encouraged as a response from AI companions, not shunned.
    • Taking a closer look at how Block addresses AI hallucinations.
    • The burdens of responsibility of users of AI, and the need to democratize access to AI tools.
    • Unpacking G{Code} House and Rizel’s working relationship with this learning community.
    • Understanding what prevents Indigenous and women of color from having careers in tech.
    • The ideal relationship between a developer advocate and the technical arm of a business.

    Quotes:

    “Every company is embedding AI into their product someway somehow, so it’s being more embraced.” — @blackgirlbytes [0:11:37]

    “I always respect someone that’s like, ‘I don’t know, but this is the closest I can get to it.’” — @blackgirlbytes [0:15:25]

    “With AI tools, when you’re more specific, the results are more refined.” — @blackgirlbytes [0:16:29]

    Links Mentioned in Today’s Episode:

    Rizel Scarlett

    Rizel Scarlett on LinkedIn

    Rizel Scarlett on Instagram

    Rizel Scarlett on X

    Block

    Goose

    GitHub

    GitHub Copilot

    G{Code} House

    How AI Happens

    Sama

    28 min
  • dbt Labs Co-Founder Drew Banin
    Nov 21 2024



    Key Points From This Episode:

    • Drew and his co-founders’ background working together at RJ Metrics.
    • The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
    • Initial adoption of dbt Labs and why it was so well-received from the very beginning.
    • The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
    • Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
    • Unpacking examples where LLMs struggle with specific questions, like math problems.
    • The importance of thoughtful prompt engineering and application design with LLMs.
    • What is needed to maximize the utility of LLMs in enterprise settings.
    • How understanding the specific use case can help you get better results from LLMs.
    • What developers can do to constrain the search space and provide better output.
    • Why Drew believes prompt engineering will become less important for the average user.
    • The exciting potential of vector embeddings and the ongoing evolution of LLMs.

    Quotes:

    “Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]

    “One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]

    “I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]

    “My belief is that prompt engineering will – become less important – over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]

    Links Mentioned in Today’s Episode:

    Understanding the Limitations of Mathematical Reasoning in Large Language Models

    Drew Banin on LinkedIn

    dbt Labs

    How AI Happens

    Sama

    28 min