Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management)

Author(s): Brian T. O’Neill from Designing for Analytics
  • Summary

  • Is the value of your enterprise analytics SAAS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn’t what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SAAS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions. Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work that you need to hear about and from whom I hope you can borrow strategies. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better. Hashtag: #ExperiencingData.
    JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed
    ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
    © 2019 Designing for Analytics, LLC
Episodes
  • 163 - It’s Not a Math Problem: How to Quantify the Value of Your Enterprise Data Products or Your Data Product Management Function
    Feb 18 2025
I keep hearing that data product, data strategy, and UX teams often struggle to quantify the value of their work. Whether it’s the team as a whole or a specific data product initiative, the underlying problem is the same: your contribution is indirect, so it’s harder to measure. Even worse, your stakeholders want to know whether your work is creating impact and value, but because you can’t easily put numbers on it, valuation spirals into a messy problem. The messy part of this valuation problem is what today’s episode is all about—not math! Value is largely subjective, not objective, and I think this is partly why analytical teams may struggle with this. To get better at estimating the value of your data products, you need to leverage other skills—and stop approaching this as a math problem. As a consulting product designer, estimating value when it’s indirect is something I’ve dealt with my entire career. It’s not a skill learned overnight, and it’s one you will need to keep developing over time—but the basic concepts are simple. I hope you’ll find some value in applying these along with your other frameworks and tools.

    Highlights / Skip to:
    - Value is subjective, not objective (5:01)
    - Measurable does not necessarily mean valuable (6:36)
    - Businesses are made up of humans. Most B2B stakeholders aren’t spending their own money when making business decisions—what does that mean for your work? (9:30)
    - Quantifying a data product’s value starts with understanding what is worth measuring in the eye of the beholder(s)—not with math calculations (13:44)
    - The more difficult it is to show the value of your product (or team) in numbers, the lower that value is to the stakeholder—initially (16:46)
    - By simply helping a stakeholder think through how value should be calculated on a data product, you’re likely already providing additional value (18:02)
    - Focus on expressing estimated value via a range versus a single number (19:36) (a worked sketch follows after the quotes below)
    - Measuring anything requires that we can observe the phenomenon first—but many stakeholders won’t be able to cite these phenomena without [your!] help (22:16)
    - When you are measuring quantitative aspects of value, remember that measurement is not the same as accuracy (precision)—and the precision game can become a trap (25:37)
    - How to measure anything—and why estimates often trump accuracy (31:19)
    - Why you may need to steer the conversation away from ROI calculations in the short term (35:00)

    Quotes from Today’s Episode
    - “Even when you can easily assign a dollar value to the data product you’re building, that does not necessarily reflect what your stakeholder actually feels about it—or your team’s contribution. So, why do they keep asking you to quantify the value of your work? By actually understanding what a stakeholder needs to observe to know progress has been made on their initiative or data product, you will be positioned to deliver results they actually care about. While most of the time you should be able to show some obvious economic value in the work you’re doing, you may be getting hounded about this because you’re not meeting the often unstated qualitative goals. If you can surface the qualitative goals of your stakeholder, then the perception of the value of your team and its work goes up, and you’ll spend less time trying to measure, in quantitative terms, an indirect contribution that only has a subjectively right answer.” (6:50)
    - “The more difficult it is for you to show the monetary value of your data product (or team), the lower that value likely is to the stakeholder. This does not mean the value of your work is ‘low.’ It means it’s perceived as low because it cannot be easily quantified in a way that is observable to the person whose judgment matters. By understanding the personal motivations and interests of your stakeholders, you can begin to collaboratively figure out what the correct success metrics should be—and how they’d be measured. By simply beginning to ask and uncover what they’re trying to measure, you can start to increase your contributions’ perceived value.” (17:01)
    - “Think about expressing ‘indirect value’ as a range, not a precise single value. It’s much easier to refine your estimate (if necessary) once a range has been defined, and you only need to get precise enough for your stakeholder to make a decision with the information. How much time should you spend refining your measurement of the value? Potentially little to none—if the ‘better math’ isn’t going to change anyone’s mind or decision. Spending more time measuring a data product’s value more accurately takes you away from doing actual product work—and if there isn’t much obvious value to the work, maybe the work—not the measurement of the work—needs to change.” (19:49)
    - “Smart leaders know that deriving a simple calculation of indirect contributions is complex—otherwise, the topic wouldn’t ...
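    The range-versus-single-number advice above lends itself to a small worked example. Below is a minimal sketch of a three-point (“low / likely / high”) estimate of a data product’s annual value; the inputs, dollar figures, and simulation approach are hypothetical placeholders for numbers you would elicit from stakeholders, not anything prescribed in the episode:

```python
import random

# Hypothetical, stakeholder-elicited inputs: hours saved per analyst per
# week by the data product, expressed as low / most likely / high.
hours_low, hours_likely, hours_high = 1.0, 3.0, 6.0
num_analysts = 40          # assumed size of the affected user group
loaded_hourly_rate = 85    # assumed fully loaded cost per analyst hour (USD)
weeks_per_year = 48

def annual_value(hours_per_week: float) -> float:
    """Convert weekly hours saved into an annual dollar figure."""
    return hours_per_week * num_analysts * loaded_hourly_rate * weeks_per_year

# Express the estimate as a range, not a point: sample from a triangular
# distribution over the low/likely/high inputs and report percentiles.
samples = sorted(
    annual_value(random.triangular(hours_low, hours_high, hours_likely))
    for _ in range(10_000)
)
p10, p50, p90 = samples[999], samples[4999], samples[8999]
print(f"Estimated annual value: ${p10:,.0f} to ${p90:,.0f} (median ${p50:,.0f})")
```

    The exact percentiles matter less than the framing: a defensible range is usually precise enough for a stakeholder to make a decision, and refining it further is only worth the effort if the better math would change that decision.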
    42 min
  • 162 - Beyond UI: Designing User Experiences for LLM and GenAI-Based Products
    Feb 4 2025
I’m doing things a bit differently for this episode of Experiencing Data. For the first time on the show, I’m hosting a panel discussion. I’m joined by Thomson Reuters’s Simon Landry, Sumo Logic’s Greg Nudelman, and Google’s Paz Perez to chat about how we design user experiences that improve people’s lives and create business impact when we expose LLM capabilities to our users. With the rise of AI, there are a lot of opportunities for innovation, but there are also many challenges—and frankly, my feeling is that a lot of these capabilities right now are making things worse for users, not better. We’re looking at a range of topics, such as the pros and cons of AI-first thinking, collaboration between UX designers and ML engineers, and the necessity of diversifying design teams when integrating AI and LLMs into B2B products.

    Highlights / Skip to:
    - Thoughts on the current state of LLM implementations and their impact on user experience (1:51)
    - The problems that can come with the “AI-first” design philosophy (7:58)
    - Should a company’s design resources go toward AI development? (17:20)
    - How designers can navigate “fuzzy experiences” (21:28)
    - Why you need to narrow and clearly define the problems you’re trying to solve when building LLM products (27:35)
    - Why diversity matters in your design and research teams when building with LLMs (31:56)
    - Where you can find more from Paz, Greg, and Simon (40:43)

    Quotes from Today’s Episode
    - “[AI] will connect the dots. It will argue pro, it will argue against, it will create evidence supporting and refuting, so it’s really up to us to kind of drive this. If we understand the capabilities, then it is an almost limitless field of possibility. And these things are taught, and it’s a fundamentally different approach to how we build user interfaces. They’re no longer completely deterministic. They’re also extremely personalized to the point where it’s ridiculous.” - Greg Nudelman (12:47)
    - “To put an LLM into a product means that there’s a non-zero chance your user is going to have a [negative] experience and no longer be your customer. That is a giant reputational risk, and there’s also a financial cost associated with running these models. I think we need to take more of a service design lens when it comes to [designing our products with AI] and ask what is the thing somebody wants to do... not on my website, but in their lives? What brings them to my [product]? How can I imagine a different world that leverages these capabilities to help them do their job? Because what [designers] are competing against is [a customer workflow] that probably worked well enough.” - Simon Landry (15:41)
    - “When we go general availability (GA) with a product, that traditionally means [designers] have done all the research, got everything perfect, and it’s all great, right? Today, GA is a starting gun. We don’t know [if the product is working] unless we [seek out user feedback]. A massive research method is needed. [We need qualitative research] like sitting down with the customer and watching them use the product to really understand what is happening[...] but you also need to collect data. What are they typing in? What are they getting back? Is somebody who’s typing in this type of question always having a short interaction? Let’s dig into it with rapid, iterative testing and evaluation, so that we can update our model and then move forward. Launching a product these days means the starting guns have been fired. Put the research to work to figure out the next step.” - Greg Nudelman (23:29)
    - “I think that having diversity on your design team (i.e., gender, level of experience, etc.) is critical. We’ve already seen some terrible outcomes. Multiple examples where an LLM is crafting horrendous emails, introductions, and so on. This is exactly why UXers need to get involved [with building LLMs]. This is why diversity in UX and on your tech team that deals with AI is so valuable. Number one piece of advice: get some researchers. Number two: make sure your team is diverse.” - Greg Nudelman (32:39)
    - “It’s extremely important to have UX talks with researchers, content designers, and data teams. It’s important to understand what a user is trying to do, the context [of their decisions], and the intention. [Designers] need to help [the data team] understand the types of data and prompts being used to train models. Those things are better when they’re written and thought of by [designers] who understand where the user is coming from. [Design teams working with data teams] are getting much better results than the [teams] that are working in a vacuum.” - Paz Perez (35:19)

    Links
    - Milly Barker’s LinkedIn post
    - Greg Nudelman’s Value Matrix article
    - Greg Nudelman’s website
    - Paz Perez on Medium
    - Paz Perez on LinkedIn
    - Simon Landry on LinkedIn
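    Greg’s “starting gun” point (23:29) implies instrumenting the product so the team can actually dig into what users type and what they get back. As one hedged illustration (the categories, fields, and thresholds here are hypothetical, not something the panel prescribed), a team might log each LLM session and flag query categories that consistently produce short, poorly rated interactions as candidates for rapid, iterative testing:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Session:
    category: str    # query type, e.g. tagged by keyword rules or a classifier
    num_turns: int   # back-and-forth exchanges before the user stopped
    thumbs_up: bool  # explicit feedback, where the user gave any

# Placeholder records standing in for real product telemetry.
log = [
    Session("billing question", 1, False),
    Session("billing question", 2, False),
    Session("log search help", 6, True),
    Session("log search help", 5, True),
]

# Aggregate per category: short sessions with negative feedback are a
# signal to sit down with those users and study what is going wrong.
stats = defaultdict(lambda: {"sessions": 0, "turns": 0, "positive": 0})
for s in log:
    agg = stats[s.category]
    agg["sessions"] += 1
    agg["turns"] += s.num_turns
    agg["positive"] += s.thumbs_up

for category, agg in stats.items():
    avg_turns = agg["turns"] / agg["sessions"]
    satisfaction = agg["positive"] / agg["sessions"]
    flag = "  <-- investigate" if avg_turns < 3 and satisfaction < 0.5 else ""
    print(f"{category}: {avg_turns:.1f} avg turns, {satisfaction:.0%} positive{flag}")
```

    The quantitative log only tells you where to look; per the panel, pairing it with qualitative research (sitting down and watching customers use the product) is what tells you why.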
    42 min
  • 161 - Designing and Selling Enterprise AI Products [Worth Paying For]
    Jan 21 2025
With GenAI and LLMs comes great potential to delight—and to damage—customer relationships, both during the sale and in the UI/UX. However, are B2B AI product teams actually producing real outcomes, on the business side and the UX side, such that customers find these products easy to buy, trustworthy, and indispensable? What is changing about customer problems as a result of LLM and GenAI technologies becoming more readily available to implement in B2B software? Anything? Is your current product or feature development being driven by the fact that you might now be able to solve it with AI? The “AI-first” team sounds like it’s cutting edge, but is that really determining what a customer will actually buy from you? Today I want to talk to you about the interplay of GenAI, customer trust (both user and buyer trust), and the role of UX in products using probabilistic technology. These thoughts are based on my own perceptions as a “user” of AI “solutions” (quotes intentional!), conversations with prospects and clients at my company (Designing for Analytics), as well as the bright minds I mentor over at the MIT Sandbox innovation fund. I also wrote an article about this subject if you’d rather read an abridged version of my thoughts.

    Highlights / Skip to:
    - AI and LLM-Powered Products Do Not Turn Customer Problems into “Now” and “Expensive” Problems (4:03)
    - Trust and Transparency in the Sale and the Product UX: Handling LLM Hallucinations (Confabulations) and Designing for Model Interpretability (9:44)
    - Selling AI Products to Customers Who Aren’t Users (13:28)
    - How LLM Hallucinations and Model Interpretability Impact User Trust of Your Product (16:10)
    - Probabilistic UIs and LLMs Don’t Negate the Need to Design for Outcomes (22:48)
    - How AI Changes (or Doesn’t) Our Benchmark Use Cases and UX Outcomes (28:41)
    - Closing Thoughts (32:36)

    Quotes from Today’s Episode
    - “Putting AI or GenAI into a product does not change the urgency or the depth of a particular customer problem; it just changes the solution space. Technology shifts in the last ten years have enabled founders to come up with all sorts of novel ways to leverage traditional machine learning, symbolic AI, and LLMs to create new products and disrupt established products, and it would be foolish to ignore these developments as a product leader. But all this technology does is change the possible solutions you can create. It does not change your customer’s situation, problem, or pain—in depth, severity, or frequency. In fact, it might actually cause some new problems. I feel like most teams spend a lot more time living in the solution space than they do in the problem space. Fall in love with the problem, and love that problem regardless of how the solution space may continue to change.” (4:51)
    - “Narrowly targeted, specialized AI products are going to beat solutions trying to solve problems for multiple buyers and customers. If you’re building a narrow, specific product for a narrow, specific audience, one of the things you have on your side is a solution focused on a specific domain, used by people who have specific domain experience. You may not need a trillion-parameter LLM to provide significant value to your customer. AI products that have a more specific focus and address a very narrow ICP are, I believe, more likely to succeed than those trying to serve too many use cases—especially when GenAI is being leveraged to deliver the value. I think this can be true even for platform products. Narrowing the audience you want to serve also narrows the scope of the product, which in turn should increase the value that you bring to that audience—in part because you will probably have fewer trust, usability, and utility problems resulting from trying to leverage a model for a wide range of use cases.” (17:18)
    - “Probabilistic UIs and LLMs are going to create big problems for product teams, particularly if they lack a set of guiding benchmark use cases. I talk a lot about benchmark use cases as a core design principle for data-rich enterprise products. Why? Because a lot of B2B and enterprise products fall into the game of ‘adding more stuff over time.’ ‘Add it so you can sell it.’ As products and software companies begin to mature, you start having product owners and PMs attached to specific technologies or parts of a product. Figuring out how to improve the customer’s experience over time against the most critical problems and needs they have is a harder game to play than simply adding more stuff—especially if you have no benchmark use cases to hold you accountable. It’s hard to make the product indispensable if it’s trying to do 100 things for 100 people.” (22:48)
    - “Product is a hard game, and design and UX is by far not the only aspect of product that we need to get right. A lot of designers don’t understand this, and they think if they just nail design and UX, then ...
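    The benchmark use cases mentioned above (22:48) can be made concrete in code. Here is a minimal sketch of one way a team might encode them and score every release against the same fixed set over time; the structure and the example benchmarks are hypothetical illustrations, not a framework defined in the episode:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkUseCase:
    """A critical user task the product is repeatedly measured against."""
    name: str
    success_criterion: str  # observable outcome, stated in the user's terms
    passed: bool            # result from the latest usability/eval round

# Hypothetical benchmarks; a real set would come from user research into
# the customer's most critical problems, not from the feature backlog.
benchmarks = [
    BenchmarkUseCase(
        name="Investigate a spike in failed logins",
        success_criterion="Analyst identifies the root cause in under 10 minutes",
        passed=True,
    ),
    BenchmarkUseCase(
        name="Draft a weekly exec summary from raw metrics",
        success_criterion="Draft needs only minor edits before sending",
        passed=False,
    ),
]

# Score each release against the same benchmarks, so "progress" means the
# critical use cases got better, not that more stuff was added.
passing = sum(b.passed for b in benchmarks)
print(f"{passing}/{len(benchmarks)} benchmark use cases passing")
for b in benchmarks:
    status = "PASS" if b.passed else "FAIL"
    print(f"  [{status}] {b.name}: {b.success_criterion}")
```

    Holding a probabilistic, LLM-based product to a fixed set of benchmarks like this is one way to keep “designing for outcomes” measurable even when individual model outputs vary.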
    34 min
