Episode 217:

Bridging Data Models with Business Intuition with Zenlytic’s Founders Ryan Janssen and Paul Blankley

November 27, 2024

This week on The Data Stack Show, Eric and John welcome Ryan Janssen and Paul Blankley from Zenlytic. The discussion explores Ryan and Paul's academic journeys, the evolution of language models, and their transition from consulting to founding Zenlytic. The group delves into the balance between data-driven insights and human intuition in decision-making, the challenges of making data accessible for non-technical users, and the significance of understanding both data and human intuition in driving business success. The episode also touches on the importance of context in data analysis, the future of AI in analytics, and so much more.

Notes:

Highlights from this week’s conversation include:

  • Ryan and Paul’s Background and Journey (1:05)
  • Excitement about AI and Data Intersection (2:50)
  • Evolution of Language Models (5:05)
  • Current Challenges in Model Training (6:51)
  • Founding Zenlytic (9:12)
  • Integrating Vibes into Decision-Making (12:58)
  • Precision vs. Context in Data (15:03)
  • Understanding Multimodal Inputs (17:47)
  • The Challenge of Watching User Behavior (19:26)
  • Empathy in Data Analysis (21:32)
  • AI in Analytics (23:18)
  • The Complexity of Data Models (25:33)
  • Self-Serve Analytics Definition (28:15)
  • Evolution of Self-Serve Analytics (32:09)
  • Distillation of Data for End Users (36:44)
  • Challenges in Data Interpretation (39:22)
  • Building a Semantic Model (44:18)
  • Using AI for Comprehensive Analysis (46:51)
  • Future of AI in Analytics (51:31)
  • Naming the AI Agent (52:53)
  • Final Thoughts and Takeaways (54:21)

 

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we’ll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.

Transcription:

Eric Dodds  00:06

Welcome to The Data Stack Show.

John Wessel  00:07

The Data Stack Show is a podcast where we talk about the technical, business, and human challenges involved in data

Eric Dodds  00:13

work. Join our casual conversations with innovators and data professionals to learn about new data technologies and how data teams are run at top companies. Welcome back to the show. We are here with Ryan Janssen and Paul Blankley from Zenlytic. Gentlemen, welcome

Paul Blankley  00:35

to the show. Thanks for having us, super excited to chat today.

Eric Dodds  00:39

All right. Well, give us just a brief background. You have different backgrounds, but they actually converged at one point. So Paul, why don't you start, and then tell us where your path crossed with Ryan's.

Paul Blankley  00:49

Yeah. So I'm a nerd. I was a math and CS undergrad, math and CS grad. And Ryan and I met actually doing a technical master's degree at Harvard, studying language models, and this was right around the year that Attention Is All You Need came out, and transformers were, like, sort of first becoming a thing. So we got to see a lot of the really early versions, back when they were just language models, before they became large language models. And after that, we started consulting. Did consulting for a few years, and then started Zenlytic during the pandemic, right? Ryan, you're the other half of the story? Yeah, well, my background is,

Ryan Janssen  01:23

I was a software engineer at the very start of my career in my native Canada. But then after that, I've spent 15 years now in sort of the last mile of data analytics. And, you know, first I was a VC, you know, slash Excel monkey, and then became a data scientist. So I worked in data science for a bit. And, you know, Paul and I, that's where we met. In fact, we started a data science consultancy together, and then we founded Zenlytic together. And all of those have been different, sort of, parts of the same problem, which is like, either I'm a non-technical end user, or I'm kind of a semi-technical analyst, or I'm a, you know, very technical data scientist, all trying to sort of solve problems with

John Wessel  02:01

data. So guys, before the show, we talked about data versus vibes, and, you know, about founders or CEOs running companies on, sometimes, a combination of both, and sometimes, you know, a little bit more slanted toward vibes. So I'm excited to dig in on that. What are you guys excited about? I'm

Paul Blankley  02:18

excited for that one, because I think that hits on a really important point that I'm excited to sort of expound on. And other than that, I'm excited to dig into just, you know, what is possible and what is not possible for language models. How, you know, how can we fit language models into how we as humans sort of think about and operate in the world, and talk a little bit more about how that, and how language models work, actually affects what we do at Zenlytic, where we are a very, you know, AI-native, like, AI-native-first sort of business intelligence product. Awesome.

John Wessel  02:48

What about you? Ryan,

Ryan Janssen  02:50

yeah, excited for all those. Really excited to chat about, you know, the intersection of AI and BI, or AI and data in general, which is like, how do we get AI agents to answer problems in data? And it's a really hard problem, frankly, because you've got this huge surface area of potential data types and configurations on one side, you've got this huge surface area of questions people want to ask on the other side, and there's a little pinch point in the middle. Such a fascinating field to work in. And, you know, LLMs, there's just new stuff every day. So, you know, lots of stuff to talk about

Eric Dodds  03:18

there, all right. Well, hopefully we can get to all of that. So let's dig in. Let's do it. We have so many questions that we want to get to, but I'd like to start actually with a little bit of history, where your paths crossed. So it was at Harvard, and you were studying language models in the context of machine learning. And Paul, in the intro, you said that you were studying language models before the additional L got added to the acronym. So can you just talk about what you were studying? What did you think about it? Did you perceive it as a tectonic shift? And Ryan, you were a VC prior to that experience. So, yeah, I would just love to hear from both of you about what you were studying and what that was like, and then how it informed, you know, founding Zenlytic,

Paul Blankley  04:11

yeah, totally, I'll dig into that a bit. One of the things to kind of remember from that time is that transformers in general were, like, this sort of huge shift. Because before transformers, you had these things called recurrent neural networks, which had these problems with memory and being able to generate anything. You tried to solve that with this other architecture called LSTMs. And all of these were sort of more complicated but, like, less effective versions of transformers. The innovation in transformers was just realizing that the attention mechanism was kind of all you needed to actually do a really good job of predicting sequences. And so BERT, which is kind of the initial transformer, if you will (there's a bunch of others in its class, but that was sort of the initial, like, groundbreaking one), was just dramatically better than anything else that people had seen before. And again, this is not like it's generating, you know, speech that sounds human. It's not going to pass the Turing test. But on all the things that it was evaluated on, it was pretty dramatically better than everything else. So we definitely did not know it was going to get here. But you could just see with transformers, it was, like, an unlocking of a new architecture. And whenever there's a new architecture that does unreasonably well at something compared to previous generations, you're on this trajectory where it's gonna just get better. And you can pretty reliably, like a Moore's Law of sorts, just kind of be like, hey, it's gonna get this much better every year. And that was true until ChatGPT kind of broke that, and that was, you know, pretty accidental,
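
For the curious, here is a minimal sketch of the scaled dot-product attention mechanism Paul is referring to, written in Python with toy dimensions. This is a generic illustration (no learned projection matrices), not any particular model's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of the rows of V, where the
    weights come from how strongly each query matches each key."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))  # 4 token embeddings, 8 dims (toy sizes)
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8): one context-mixed vector per token
```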

Ryan Janssen  05:44

yeah, yeah. When Paul and I actually were studying this, I don't know if there was a second L. I think they were just models at that point; I wouldn't even call them language models. We were early, early on. In those days, there were these tools where you could, like, prove that, like, you know, king minus man plus woman equals queen, these, like, very basic tools for performing, you know, algebra on individual words. And it wasn't really... I think, you know, BERT was like a huge step change. And then I think watching the GPT-2 to GPT-3 progression is when it became really apparent that there was plenty of room at the top for these models, right? And everyone kind of had a fundamental understanding that these things are predicting the next token, the next word. The early models were very word salad, right? They almost made sense. And then with GPT-3, you'd get a paragraph out, and there was alignment across paragraphs, and you could see it was becoming more and more coherent. And, you know, for me, the watershed moment was reading about that, like, GPT-3 level, when I was like, okay, these things are actually able to sort of demonstrate early, sort of, what looks like understanding to us. And, you know, that was kind of a turning point, because that's when we could start thinking about this in terms of scaling laws. Which is really cool: we actually had a really predictable trajectory for this stuff, because we had an understanding of what you could put into it, right? It's like, can we put in more compute? And it's like, yes, you 10x the compute and it gets, you know, 50% better, or whatever. It's like, can we get in more data? And it's like, yes, you 10x the data, it gets better. So we not only had a good idea of what the inputs were to improve model performance, and I'm saying we collectively, you know, like we as, like, a research community, we could also predict the trajectory from that, by seeing what would happen when you scaled up. And the question then is, like, where does that take us today? Which is kind of an interesting question, right? So we have a bunch of those scaling opportunities that have been kind of tapped out, right? So if you think about, you know, 10x-ing the data, we can't, because these LLMs are basically using all the data on, you know, the internet. Yeah, and if you want to 10x the compute, you 10x the cost of that. Like, you know, they're saying the next class of models is a billion dollars to train a model, and then you 10x that, it becomes 10 billion. There's not much 10x left in that dimension, basically. So I think that's why some people are saying, oh, things are rounding off now. And there's a couple things that could happen next. One thing that might happen is a new architecture, architectures that are more efficient and can learn faster, which is what transformers ultimately were. The other thing that could happen, actually, the one area where we still have room to scale 10x, is inference time. And inference time is, how long does the model run for? And now you might have heard of, like, you know, OpenAI's o1, for instance, which is a reasoning model that uses longer inference. And instead of answering in, you know, 100 milliseconds, it takes a few seconds to think and then gets back to you. Yep. You might have heard of Devin, the software engineering agent. And, you know, it's a performance step change.
And they got that by letting it run for, like, you know, they could run for 24 hours at a time on a programming problem. So there's still plenty of room at the top in terms of inference time. And that, I think, is why we're seeing the emergence of AI agents now, because a fundamental part of an agent is actually expandable inference. Like, I think that's the next step in the scaling law. After that, we're out of big axes. That might take time, or we might need to find some sort of architectural change, which is a hard problem, but that's, you know, the next big unlock, I think. Yep. Yeah, yep. So
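
The word-vector arithmetic Ryan mentions can be shown with a toy example. The vectors below are hand-picked for illustration; real systems such as word2vec learn them from large corpora:

```python
import numpy as np

# Hypothetical 3-dimensional word vectors, chosen so that "royalty" and
# "gender" directions are visible; these are not learned embeddings.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman lands nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(max(vectors, key=lambda w: cosine(vectors[w], target)))  # queen
```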

Eric Dodds  08:55

connect the dots between all of the things you just described. You were in the middle of this learning, you guys did some consulting, but then you founded a data company with AI at the center of it. And I'm interested in why. You know, when I say data company, let's just say, like, an analytics or business intelligence company with AI at the center. Why did you go there, when you could have gone so many different places, you know, in terms of building a company around AI?

Paul Blankley  09:32

I think a lot of it was that, you know, Ryan and I both really liked data. Like, it's just something that we feel comfortable with. And when we did consulting, we got to see and experience a lot of the problems that we solved firsthand. And I think that kind of firsthand exposure to a problem gives you insight into it in a way that, if you don't have firsthand exposure to the actual problem, you're not going to be able to empathize with the users of your product very much. So that was, like, really important to us, that we've got to actually be able to empathize with the users. And that led to the biggest problem that we saw in our consulting: this last mile. We would do so much work to, like, set up a Snowflake or a BigQuery, make sure, you know, everything's clean and, like, relatively easy to use. And that data would just, you know, lay untouched in Tableau or Power BI. Yeah, it was like, okay, well, this is a problem: you know, it's like 10% of any organization can actually use the data that they produce, and that's, like, this massive bottleneck on the rest of the org that just wants to know basic things about, you know, which campaign should I be investing more money in, and other things like that.

Eric Dodds  10:37

So, Ryan, did you want to add on to that?

Ryan Janssen  10:41

I totally agree. It's just that, you know, data is cool. I think we all think data is cool. But there's this kind of, you know, feeling in the data community that's like, do we add value? Benn Stancil talks about this in a blog post, where it's like, at the accounting convention, the accountants don't get together and say, does accounting add value, right? And as a community, we've always kind of been a little bit on the sidelines, and I think that the root of that problem is that it's hard for people to use data, and that, you know, LLMs are kind of an unlock for making it easier to access data. So those two things together might be what takes us off the sidelines and into, you know, really well-adopted, well-used tooling, and that gets me excited.

Eric Dodds  11:20

So let's dig into, John, a question that you brought up that you were excited to talk about, and it's this concept that vibes are stronger than data. I love that. It sounds like a T-shirt. It sounds like a meme. I'm sure it already is. But dig into that. What did you mean when you brought up that topic? Yeah,

John Wessel  11:45

so we were, yeah, talking before the show about how there are certain companies, a lot of times very founder-led companies, where the founder somehow just gets locked in on the product, maybe from talking to people, from vibes, and can really grow and scale companies to surprising sizes with this, like, vibe, gut-reaction type of, you know, ability. And I would even say in some of those situations, if you tried to move that company to, like, no, cancel the vibes, let's just make all the decisions on the data, you'd probably do some damage, at least, you know, especially at companies at a certain scale. And then sometimes the vibes do run out. Like, there are certain companies where it's vibe driven, and then you hit a point, and it's like, all right, the vibes are running out, for whatever reason, scale or whatever. And then it's like, okay, what do we need to do to, like, kind of weigh more heavily toward data? But, yeah, so, you know, both Ryan and Paul, you've done some consulting, obviously, and worked with a lot of companies on data. What are your thoughts on that?

Eric Dodds  12:52

How did Ryan model vibes into his VC spreadsheet?

12:56

Yeah, right.

12:57

Is there a weighted model for vibes?

13:01

For vibe strength? You know, the funny thing is, actually, that is a big part of VC. Like, totally. I don't know if they actually say the vibes are through the roof. I don't know if it's explicitly modeled, but it definitely plays a role. I

Eric Dodds  13:12

mean, it was tongue in cheek, but for sure, yeah, yeah. Well, yeah, it’s

13:16

interesting. Like, the thing we said before the show is, the data is strong, but vibes are stronger. That's right, you know, and I think that's probably true, right? And, like, the mental model that I use to think about the world is that human beings are, you know, feeling machines that happen to think. So we feel first and think later. And I think that everyone likes to pretend that we're all very rational and predictable and everything, but, you know, in reality, it's the feeling brain that's running the show. So, like, that's why vibes become important, and that's very difficult to... not only is it not data driven, but, like, it's very difficult to model data to capture that. So, like, I think the right approach needs both, really, you know? There's times when you have to really be thoughtful and, like, actually use data to understand something at a high level of precision. And there's times when the broad strokes are important and it's more driven by how people think, or gut feeling, or whatever. And, you know, I guess the hard part is knowing which to employ when. Yeah, well, one of the things that I would throw in, where I almost would, like, disagree with Ryan's last point about precision a little bit: the thing is, data, like, tries to sort of explain what's going on, right? It's like, you sell things in an e-commerce store, you have these transactions, and you can see, like, specific events that happen. And it's precise in the sense that you can very accurately calculate revenue, but it's imprecise in the sense that you lose a lot of, like, the feel of something, or the sort of surrounding context that you just can't capture in the data. And I think a good example of this, it's like, one of Benn Stancil's analogies, is, it's

Paul Blankley  14:48

like, you run a failing bar. Are you gonna go and, like, look at the data on all your bar tabs, or are you gonna watch videos of all of the people who went to your bar and weren't happy, to just hear from them? Like, you're gonna get so much extra information from the actual video, way more than you are from a transcript. I think another good example of this vibe thing is, like, Dylan Field, the CEO of Figma, a massive company. He'll get in there and, like, read customer support tickets that come in, because it's, like, it's just a really not-lossy, like, representation. Yeah, the sample size isn't big, but he comes in, he gets this, like, really fat pipe to, like, what are the customers actually asking about? And yeah, it's not a big sample size, but it's also not, like, cleaned up. It's not like, oh, we trimmed the outliers that don't come up that often. And in that sense, it is a lot more precise, because you just see everything. You see all the dimensions, like the video, people's facial expressions, and that's where you get this gut feel. So I think when people say, like, you know, use your gut versus data, your gut is kind of the distillation of every single data point you've ever seen in your life. Multimodal, too. Yeah, multimodal, exactly, because, like, data isn't just stuff that sits in a spreadsheet. Like, data is all the forms of, you know, comprehension that we take in. Yeah, it's, I

Ryan Janssen  16:01

do think there's actually a whole class of person who says, oh, I don't need data to make decisions, I just trust my gut. And, you know, you hear that a lot, and every time I hear that, I think, well, what do you think you're putting into the gut to, like, feed that gut, you know? Like, I think that is the right chain: get some good data, put it into your intuition, let your intuition figure it out, and then make a decision and act on that. John,

Eric Dodds  16:23

yeah, yeah. I mean, this is slightly tangential, but, you know, it's why you want to bring in someone at a certain stage of a company who has a lot of experience solving similar problems. Because they've put a lot of data into the gut, right? And so, you know, they probably have really good intuition, even if it's a different context. Data gut health, right?

John Wessel  16:46

Is that a supplement, you guys? So, yeah,

Eric Dodds  16:58

right, stop by the booth and get your, you know, data kombucha,

Ryan Janssen  17:03

Chris from the analytics Slack, yeah, that group. I have a question. I have a question for Paul about your lossy representation. Oh, this is my podcast. No, sorry, it is your podcast. So, the lossy representation: the things you mentioned are mostly textual, right? So it's like, the Figma CEO is reading these things individually. And is it possible that the representation has been lossy because we haven't been able to understand text? But now, very recently, we have pretty good tools for processing and structuring raw text. And, you know, if we increase the fidelity of that representation, does that mean that the CEO of Figma should actually be looking at structured data from those text representations? Yeah, no, I

Paul Blankley  17:47

I think that's absolutely right. And I think the better we get at understanding all these sort of multimodal forms of input, the higher fidelity the actual signal you get from those things. A good example, actually, is something we did as a company, right? We would watch videos of customers using the product when we first launched. We'd see where they'd get stuck. You know, you see their mouse move when they get a little frustrated, and they're, like, not sure exactly what to do next, or where to click. That's really high fidelity, but it's something that just doesn't scale: you can't watch all the sessions. So you've got to then do, you know, event tracking. You go and you track events, log events: like, how often are people logging in? How often are they viewing dashboards? How often are they doing this activity? When are they asking questions? And you have this representation that's a lot more lossy now. Like, you don't see the little frustrated thing right before they click on the thing, but it lets you view things at a higher scale. I think what Ryan's alluding to, which I think is absolutely right, is that the better and higher fidelity we can process these inputs, and effectively aggregate these inputs in a way that we weren't able to aggregate them before, the much higher fidelity signals you'll be able to get on what people are actually doing, like answering the actual question you're trying to answer. Right? It's like, if you had time to watch every single video of every single person using, like, our product, or using, you know, something else, you'd have a great intuition for, like, what's going on there. But no one has that time. And it's like, how do you aggregate that? Actually, an example of that is, y'all
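
As a rough sketch of the trade-off Paul describes, here is what the lossy-but-scalable side looks like in practice: raw sessions reduced to an event log and rolled up. The table and column names here are hypothetical:

```python
import pandas as pd

# A stand-in event log: each row is one tracked event from a session.
events = pd.DataFrame({
    "user_id":     ["a", "a", "b", "b", "b"],
    "event_name":  ["login", "dashboard_view", "login", "login", "question_asked"],
    "occurred_at": pd.to_datetime(["2024-11-04", "2024-11-04", "2024-11-04",
                                   "2024-11-11", "2024-11-11"]),
})

# Weekly counts per user per event type: scales to millions of sessions,
# but the frustrated mouse movement from the recording is gone.
weekly = (events
          .groupby(["user_id", "event_name",
                    pd.Grouper(key="occurred_at", freq="W")])
          .size()
          .rename("count")
          .reset_index())
print(weekly)
```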

Ryan Janssen  19:17

ever been doing research for something, you want to learn something, and the answer is in, like, a YouTube video, and you're like, I don't want to watch this whole YouTube video? Like, that's great, but, like, I just want a quick answer. I find myself increasingly, when that happens, searching in Perplexity for it. Perplexity has indexed that entire video, right? So, you know, you put the same query into Perplexity, even put the link to the video in, and Perplexity will get the answer out of the video without you having to go through and, like, find it inside 45 minutes of video, which I think is cool. So it's, like, compressing that much richer format into the answer that you need. Yep,

Eric Dodds  19:49

Let's talk a little bit about this mental model of the map not being the territory, which I think is a fascinating subject. And Paul, you had a couple of very elegant ways of describing this that I'm going to butcher, just like I butchered the vibes statement. But you talked about how data is a distorted distillation of reality, right? And we just covered a couple of these things, right? Like, what are you losing when you go from watching all of these videos of users to just looking at, essentially, a log of behaviors? Like, you lose something there. You know, one of the things that's interesting is even just the act of watching a user, and perceiving that they might be frustrated, develops a certain level of empathy that I think is almost impossible to get by looking at a log of behaviors. But can you speak to that in terms of how you've thought about Zenlytic? You're managing the loss of that in some sense, right? You're trying to create a controlled loss of reality. So, like, you're building a map that speaks to the territory but is not actually the territory, right? Because, you know, that's really difficult. How do you think about managing that process for users who are trying to use data in a useful way? Yeah, I

Paul Blankley  21:20

think "the map is not the territory" is a great way to think about this. Because, again, data is going to give you good insight on, like, high-level things that can sort of be aggregated, but you lose a lot of this intuition of, just, like, you see someone get frustrated, you see this problem. So how I think about balancing that is, like, at some point you just can't watch all the videos. You can't read all the tickets. Like, the volume just gets too high to be able to handle that. But you need both axes: you need, of course, the high level, like, how many times are people logging in? Like, that stuff is important. But you also need to dive back into the, just, like, raw feed, if you will. Like, you know, talk to the customer on a video call. This is why one of the perennial pieces of startup advice is, like, talk to your customers. The intuition behind that isn't just that they're going to tell you what to build, though easily they will have great feedback; it's that you'll get all these sort of non-verbal things, like, how does the product make them feel, all these other things that are really important that you just don't get if you're looking at, like, logs. So that's why that advice works, too, because it forces you to get back into the actual reality of, like, how are people experiencing this, and deal with all that feedback, and do something to make that experience better.

Eric Dodds  22:32

Can AI help close that gap? I guess, to ask a direct question about Zenlytic: is that part of the hypothesis, where you can actually draw some of the, like, territorial characteristics out with AI that are really difficult to get at, let's say, with a traditional BI tool? I think, I

Paul Blankley  22:54

I think it can definitely help. I don't think it fully replaces it, no matter how good it is. Because, remember, it's not just that you have this data at the end. It's part of, like, the training of your... you know, think about your brain as, like, a neural network, right? It's, like, the training of your own weights on how you think about something. So I don't think it can ever, like, fully replace that. I just don't know if that's actually fundamentally possible. But it definitely helps. And one of the big ways in which it helps is that you're able to ask things at a higher level than you could before. Whereas before, you would have to say, you know, I want the number of logins weekly for, like, this, you know, customer or whatever, now you can just sort of say, like, hey, how is this customer doing? Is there anything I should be, like, concerned about? And maybe it, it being, you know, an AI agent, chooses, like, logins, dashboard views, like, chat questions asked, you know, a lot of these other sort of interaction metrics that it has, and it can give you this more holistic view, and maybe think of things that you wouldn't have thought of. And a lot of that sort of

23:55

give me a hypothesis, and let me look at a bunch of different things, go and look at all of them, and then kind of summarize for me. That gives you the ability to cover a lot more territory a lot faster. And that's, I think, one of the big advantages that we can actually provide as a product. You know, Eric, one common manifestation that we see of that is when we give someone a sort of higher precision view of the data, which is our goal, right? It's like, you know, help everyone see the data more easily. Yep. And we often get to a situation where it works really well, but it works so well that it exposes a lot of underlying issues in the data, right? Like, people kind of see it for the first time, and they're like, oh, we've got all these data quality issues. Yeah. And there's a bit of, like, an existential crisis around that. You know, if those were floating around for years and you didn't know it, it's like, was that data valuable? Like, yeah, it's great to reveal that, but that's definitely sort of symptomatic of, like, this map versus territory thing. Yeah, I'm going to take the argument against Paul, where sometimes it's good that the map is not the territory. And, you know, we've been talking a lot about models, right? The map is a model. Mental models, LLM models, data models. And it's cool; like, our world is all models. And you can scale those models up or down. And, like, you know, there's a famous story about, like, a map of the UK, where, like, the simplest model is an oval or whatever, and it kind of gives the rough shape. And then you can make a higher precision one that shows the shoreline, and higher precision, higher precision. And if you wanted to make the model completely match reality, you could, but the model becomes reality. The map becomes the same size as the territory. So there's a trade-off between, like, you know, complexity and expediency here. Yeah, and sometimes it's okay to have a simple representation. And, like, I think a lot about the concept of tolerance in data. Like, you know, when you're a mechanical engineer and someone has asked you to build an aircraft blade or whatever, you give them a spec, but you don't just say 16-inch blade. You say, okay, it has to be 16 inches, plus or minus an eighth of an inch. And you give an acceptable range, outside of which it would be wrong, right? Yep, yep, right. And we don't have a version of that in data, really, right? And it's funny, because it is really important. Like, there's certain times, if you're running a high-precision experiment, where, you know, the difference between false positive and true negative is, you know, a tenth of a percent or something, you need very high-quality, high-precision data. If you're running an e-commerce store, and one SKU has double the returns of everything else, it doesn't matter if it's 1.5x or 2.5x or 3x; you know they're getting a lot more returns. So investing more time in, you know, adding precision to that model, making that map better, is not really worth it, because you're already getting, you know, the information you need to make a decision. Yeah, yeah.
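
Ryan's tolerance idea translates almost directly into code. Here is a small sketch, with hypothetical names, of a metric spec that carries an acceptable range instead of a single point value:

```python
from dataclasses import dataclass

@dataclass
class Spec:
    target: float
    tolerance: float  # acceptable deviation in either direction

    def within(self, measured: float) -> bool:
        return abs(measured - self.target) <= self.tolerance

# The aircraft blade: 16 inches, plus or minus an eighth of an inch.
blade = Spec(target=16.0, tolerance=1 / 8)
print(blade.within(16.05))  # True: inside the acceptable range
print(blade.within(16.30))  # False: out of tolerance

# The decision-making analogue: if one SKU's return rate is far outside
# the band of the others, 1.5x versus 3x doesn't change the decision.
```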

Eric Dodds  26:30

It makes me think: this year, my son, in his geography class, they started out with what they call blob mapping, right? And so it's fascinating. He actually can draw a pretty representative map of the entire world using circles and ovals. And it's like, you know, he has a good understanding of the layout of the world, you know, on a rectangular map. But it is literally just, you know, ovals, which is interesting. Also, one thing, Ryan, just to empathize with you: we have an identity resolution, basically an identity stitching, product at RudderStack, right? So it takes, you know, all these disparate tables in your warehouse and, you know, creates nodes and edges. And it's super powerful, super useful, but it's also a great way to discover big problems in your data, right? Because you just have thousands of things collapsing on one node, and it's actually inevitable, right? That's not a problem with, you know, that's not an identity stitching problem. It's

Paul Blankley  27:27

actually an underlying data problem, which

Eric Dodds  27:28

is fascinating. So, like, same exact thing, absolutely. Okay, I'm gonna do this next: I want to talk about the Zenlytic product we've been dancing around. I have a very good grasp of, I think, the shared worldview, and then even some of the differences, you know, which is fun to hear from both of you. But I'm actually going to start... I want to dig in on the topic of self-serve analytics, because this is a big Zenlytic thing. And I'm going to start, actually, by asking a question of you, John. So, leading data at a company where you had all sorts of different data consumers, right? You know, you ran marketing for a while, you oversaw all the data, and you had these different stakeholders, from the sales team to customer success to marketing to executives. So what is your definition of self-serve analytics? Man,

John Wessel  28:21

loaded question,

28:22

slightly loaded

John Wessel  28:23

so loaded. Like, say, 10 years ago, if you told me, hey, self-serve analytics is going to be, like, a controversial position, I would be like, no way. Like, everybody wants self-serve analytics, yeah. But it's really not like that. I mean, you guys are laughing, Ryan and Paul, but, like, there's been quite a backlash against it. So from my perspective, I thought of it probably in two or three categories, like, when we were doing, like, evaluation of what tech do we want to use, how do we want to enable people. Like, the first category, I'll call it, like, the full-feature category: like, if somebody asks for something, we can guarantee to build it, because we have every single pie chart, gauge, big number, like, you name it, like, we can build it, yeah

Eric Dodds  29:12

like, option value in terms of, say, the products or interfaces that you can deliver to your internal customers, yeah? Like,

John Wessel  29:21

because if you've been an analyst for any length of time, like, you will have that one customer. Like, let's just pick on sales; I always pick on marketing. So, like, a sales executive that's like, I gotta have a dashboard, and I need gauges, and the gauges need to look like this, and I need these colors here. Like, you can get people that are very precise about what they want, down to, like, the color and the type. So there's a whole class of tools that are built so that you can, like, fine-craft very detailed things like that. Yeah. And then there's another class of tools that are more, I would say, built toward optimizing the analyst workflow, yep. Which, I guess, spoiler alert, that's the route we went. It was like, okay, maybe it's a little counterintuitive, but we believed that our best way to do self-serve was actually to empower a couple of analysts to be able to move really quickly in the tool, to produce things that were useful for people. And then the funny part about that is, although, like, we intentionally went with a tool that was selling to and, like, enabling analysts, we ended up with a lot of, like, citizen analysts. Like, a sales manager is, like, I want to learn this tool, like, the analyst part, like, actually, like, writing a little bit of SQL and tweaking things. Like, they did it. And we had a customer service leader do the same thing, and one or two other people that were a manager or leader in a department. Well, they got very... I mean, very light SQL, nothing crazy, typically. One or two of them went pretty deep, but, like, very light, like, technical things, and using the analyst workflow. It was one of the most counterintuitive things, and Paul, this was before, like, AI was an option, but you would think if you gave somebody the most polished, like, hey, look, you just have to, like, click and drag, you know, experience, that that's what they would want. But it ended up being stronger to basically have one, maybe two, key people in a certain area and really, like, enable them to be fast and awesome. They're the go-to for all of sales, all of marketing, all of customer service. And keeping that, like, analyst-centered workflow is what worked best for us. Super

Eric Dodds  31:32

interesting. Okay, Paul and Ryan. Now the question is

Paul Blankley  31:39

that it's a great question. And first of all, I would say, like, self-serve analytics is, like, a spectrum. It's something where the goalposts have constantly been moving as products have evolved over the years. So if you go way back, people would look at Business Objects and say, like, that was self-serve. Like, you just made the cubes, and then you could mess around in the cube until you hit the limits. Yeah. And then people were like, oh, well, actually, that's not really self-serve. Like, Tableau is way more, like, interactive dashboards. And you can even just upload your own CSV, and you can, you know, really make the visual whatever you want. You want a gauge and you want to make it blue? Like, go for it. You know, Tableau is, like, really powerful on the visualization side. And then people were like, wait a minute, Tableau is still too hard for most people to use. It's something the analysts are using to make the, like, perfect dashboards. Yes. And then it's like, okay, well, Looker is a lot easier. You don't have to figure out the visualization thing. You just click on the data you want to see, it'll show you the data, it might even pick the visualization for you. And it's like, there you go. It's, like, a lot easier now. But I think where we are, kind of following that, is saying, actually, Looker is still too hard. Like, just look at how many people are actually using it. And the reason for that is that you've got to find the right Explore. You've got to know which data to use in the Explore. You've got to remember, do we trend revenue by, like, the processed date, the created date, the ship date? Like, I don't remember. Yeah, actually, I'm just going to ask John. And that's how that process goes, right? Yeah. And, like, I think sort of the thesis of Zenlytic is that actually the best interface for data is talking to the analyst. Like, you ask for what you want, and the analyst says, like, hey, yeah, I've answered that question a bunch; it looks like this, typically. Or, yeah, we don't actually track that. That's not in the warehouse. We gotta, you know, we gotta start tracking that, right? And it's like, that is sort of the future that we're building towards. Because we're really building a coworker. We're not trying to make someone faster using a BI product. We're trying to make something where the analysts can basically give the system the right context, so it can actually just go and do that same work that you were describing, John, and everyone can have an analyst to pair with them. Because the problem with having the analysts do the work, right, is you have a finite amount of them, even if they're sort of citizen analysts who are embedded in the teams, and the amount of questions that people have answered is really gated by the number of people that can answer them. So we're trying to build a system where the data people can come in and say, hey, this is how we do things. This is what's unique about our environment. This is why these things are calculated in this weird way. And this is what you need to know about it. And then Zoe is able to go and actually answer those things. Ryan, yeah, well, I

Ryan Janssen  34:10

agree with Paul. As a co-founder, of course, I agree with Paul. I

Eric Dodds  34:13

would say, Well, you’ve disagreed a couple times.

34:16

I wanted to throw it over to you, no, that’s not what we’re doing. It

34:22

all, not what we’re building. Oh, yeah,

John Wessel  34:24

group therapy for founders right here. That’s

34:26

what’s right.

John Wessel  34:27

That’s what we do. No, a

Ryan Janssen  34:28

couple of things to add. First thing, you know, Paul's spot on about, like, the goalposts having moved. I'd say it might even be more sinister than that, in that, you know, it's who benefits. And I think that a lot of people have benefited from keeping the definition of self-serve as murky as possible, you know? Because it's something that sells; it's also something that's very hard to do from a product perspective. And, you know, if you look at BI platforms 20 years ago, they had no hope of actually delivering a self-serve experience, but they still wanted to, you know, call themselves self-serve, and that's been the case ever since, right? Like, people have always kind of played fast and loose with that definition, because it benefits the people making the definitions. I think one thing that makes it really crisp for me, and I'll borrow Eric's, like, PM hat for a second here, is I think about the personas using, you know, the last mile of analytics. And I think, actually, one of the big misconceptions is that there's two of them. People always talk about sort of technical and non-technical. I think there's actually sort of three big buckets. You know, I call them the 1%, the 10%, and the 89%. The 1% are the SQL monkeys, right? Those are the people that are, you know, really technical, the people who are building your semantic layers and who are administering your data warehouses and writing your dbt transformations. That's the 1%. The 10% are the analysts, and this includes the sort of citizen analysts, yes, that you're talking about, John. Like, it's people who are Excel powerhouses, sort of stretching into, sort of, like, enthusiasts who will sort of dabble with a BI tool a little bit. But, like, you know, they don't spend a lot of time writing, you know, SQL or Python or some other, you know, more flexible scripting language, basically. And then the rest are the 89%, and those are the end users. That doesn't mean that they're not data driven or anything like that. It means that they're busy focusing on the vibes. You know, like, vibes are a big part of their jobs, too. So it's like, you know, when you're a marketing manager, it means that you're too busy being a marketing manager to have time to do analytics. And it's funny, actually, I was just talking about this on LinkedIn. Like, I'm the 89%. Like, even though I'm very good at Python, I'm very good at SQL, I'm a huge data nerd, I'm too busy doing, you know, CEO stuff to go and, you know, write a bunch of queries against our own data warehouse, for instance. So it's more a question of time and what you can focus on. But I think, you know, historically, when I think about what sort of, you know, BI has done, it's been the 1% making dashboards for the 10%, and then the 89% are kind of left out in the cold, you know. And I think that's what they call self-serve, really, you know. The 10%, yeah, they dabble a bit in exploration stuff, but not very much. Like, they'll kind of flirt with it a bit, but they don't really get into it full on. You know, like, they're not writing notebooks to do analysis or anything like that. And then the 89% usually kind of missed out on the data, and it's all vibes. It's all vibes, you know, at the top. So I think the opportunity here is that that can shift, you know.
And I think that, if we can multiply, you know, the more technical folks, so they can be available and they can multiply themselves, they can do what I like to call analytics at scale, right? And, like, move from sort of that, like, point-to-point defense, where it's, you know, one question, one answer, through to being able to build tools that the entire team can use to answer those questions. And then the analyst's job shifts over to analyzing the data from the team: it's understanding the sort of questions the team is asking, and, like, how they're using the data and what they need to have, and then you answer that in a scalable way, so that, you know, not just the person asking that question receives what they need, but, like, the entire company will get those metrics, or whatever they need, right? So I think that's what we're going to see happening over the next few years. It's

Eric Dodds  37:55

interesting to apply the mental model of "the map is not the territory," and the distortion that happens because of distillation, across the spectrum that you just talked about. So let's go back to, like, watching videos of users, okay? So you're watching videos of users. That, you know, in itself, is actually a distillation of reality, right? Because you're interpreting certain things, right? But let's just call that, you know, sort of raw data, at least as far as we can consume it. Then you go to event logs, right? Then those event logs need to be summarized in some way, right? And so you can call that a semantic layer; you can call that, you know, a model or whatever. So there's distillation happening there that's performed by the 1%. They're delivering some asset to the 10%, so there's distillation there. And then, of course, that sort of filters out to the 89%. And so when you add it all up, you know, you go from the, like, raw data to the logs, there's distillation; logs to the 1% building an asset; 1% to 10%; 10% to 89%. You know, that's, like, an insane amount of compression. And you sort of ask, like, why is it hard to use data at a company? And it's like, well, I mean, you know, it's a distance problem, right?

John Wessel  39:22

I think you just described the vibes. Because basically, it can start out as data in a log somewhere, and by the time it gets out to the 89%, it is just a vibe. It's like an echo of, like, whatever,

Ryan Janssen  39:34

the vibes processing machine. Yeah, I think that's true. It's interesting, actually, to think about what's probably the weakest part in that whole chain right now. So I think that chain is hard but achievable, for starters, you know. And part of that is actually just the right systems that, you know, allow drillable data. You know, Paul was talking about setting up cubes and stuff like that, and, you know, that's a hard block in that entire chain, right? You can't do higher resolution than the cube. Part of that is good lineage, you know. It's like, hey, where did this video come from? Or, you know, where did this data point come from? And, you know, on a human basis, it's interesting: actually, the hardest sort of people to put in that chain are probably the 10%, you know. Finding really great folks who can translate the, you know, deep technical stuff into the vibes-y, business-outcome stuff, being able to be a translator for that, is actually a really hard job. That's a bit of a unicorn job. And that's why finding folks like that is actually probably the hardest part; like, they're actually pretty rare. And that's also one of the reasons why, you know, we're always lamenting, like, are we adding value? Maybe it's because we don't have enough people like that. Yeah, yeah. Interesting.

Eric Dodds  40:40

Okay, so I want to talk more about the product experience, and I'm going to frame a question. I'm going to frame it like, I guess, an analytics-type question, right? And I have a hypothesis on why AI could be really helpful here. But I'll just frame this question, and then I'd love for you to explain, okay, how would I use Zenlytic? What would the product experience of Zenlytic be like? So I want to go back to something that you mentioned, Paul, that's related to event logging. And you mentioned a particular user behavior, like a login event. So, traditionally, login events, you know, you would associate that with, maybe, part of your definition for an active user. Or, you know, there are semantics there, right? It could be an active user; it could actually contribute to, you know, a churn score. I mean, there are a number of things there, right? But one thing that's really tricky about logins is it varies so much by product. And so I'll give you a specific example from RudderStack. It's really not a great indicator of, you know, whether or not things are going well, because a lot of times, if the data is flowing, you don't need to log into the product, right? And so, exactly, you can have this really crazy inverse relationship where, you know, that could be a sign that everyone's super happy, right? I mean, that makes our job harder in terms of, you know, understanding the user, and maybe there are other indicators. But that's a tricky problem, right? So you can have a product where event logs are, you know, less straightforward than, say, a consumer product where there's a daily login event that's an indicator of some sort of, you know, outcome, or stickiness, or loyalty. In the absence of that, how do you... and the reason I bring that example up is that it's highly contextual, right? Like, it's the nature of the product; it's the problem that the product solves for the user. There are different personas, and so even that metric could be, and probably is, very different depending on the type of user in the platform, which means that the semantic definition is, you know, different for different users. But context is something that AI is awesome at, right? So there's my really rough sort of problem: like, trying to understand, you know, maybe, like, the health of an account, where my event logs and the semantics related to them are actually pretty complicated and highly contextual. So there we go.

John Wessel  43:13

I think it might be fun to frame this, like, even more specifically, because it's Zoe, excuse me, Zenlytic's agent. Eric just, like, asked the question of, like, hey, how are my accounts doing? Or, like, how is my account, specifically this one, doing? It'd be really interesting to, like, learn from you guys: what types of things can the AI say to that? And then, like, what kind of context would it need to do a good job? And you can go as technical as you want.

43:41

And for the sake of argument, to set the table, let's just say we have really good event logging, which we do from RudderStack, so we have, like, a lot of event data. And then we also have, sort of, let's say, your traditional, you know, we're ETLing in Salesforce and, you know, customer success tickets and all of that, right? And so we have all those tables in the warehouse. Yeah, no, it's perfect. I think I'll start with a little bit about kind of how Zenlytic works and how we sort of think about the world. Yeah, the way we think about it is that the data team should be trying to build these building blocks that can be used to answer a ton of different questions, and they should try to add as much context as possible to those building blocks. So

Paul Blankley  44:18

what that looks like is, it's like, hey, this is how we calculate logins; this is how we calculate active users. Active users is always very complicated to actually define, and that's why the data team needs to define it. Like, we don't think you should be pushing that definition off to the business people, because you're going to get a ton of different definitions; nothing's going to agree; it's going to be a disaster. So that's why, sort of philosophically, we're like, the data team should be defining: what does it mean to be an active user? How do we calculate gross margin? Like, all these kinds of business definitions, yeah. And part of that is not just defining, like, the SQL of, like, how do I aggregate something up into active users. It's also, like, what does this mean? Like, how is it calculated? Like, why would it be used in a certain way? You know? So, in addition to, let's say, our logins metric, like, you know, how often are people logging in, we've also got, like, product usage. We've got, you know, some meta-level context on, if we're going with RudderStack as the example, what is RudderStack? What do they do? Like, what is the company? And you've got this context at these different layers, the most important ones being, like, okay, this is what product usage looks like. Like, this is the amount of, like, gigabytes of events that have been logged by whatever customer we're talking about here. Yeah. Then, when you go in and you ask Zoe, hey, like, you know, can you tell me about customer health for, like, XYZ customer, she's going to go in and she's going to search in this semantic model for, like, XYZ customer, and then any other terms that she thinks could be relevant. So she'd look for something like usage, health, in case we have a health score, login activity; she just searches for a bunch of these different terms, and then we probably have a ton of stuff come back. So it'd be like, okay, I see, like, you know, gigabytes of events used. I see logins. I see, you know, the number of events streamed. I see, you know, like, session duration on the site. I see, like, whatever other stuff we're tracking there. And then, since Zoe is able to, like, run more than one query, too, she could say, hey, let's look at logins and session duration over here; let's look at the quantity of usage, like, events and gigabytes streamed, over here. And then she might be able to say, okay, well, it looks like this customer, you know, has kind of not that many logins. Like, they got, like, two logins in the last week, but they also had, like, 80 gigabytes, you know, of actual information transferred. So they're, you know, pretty heavy users of the product, regardless of them logging in. And she's going to be able to actually go and reason it out and say, okay, let's look at this more holistic picture. Because she can do more than one thing, like, it's not helping you run one query; it's able to actually go pull a few different things. And then, you know, with the summary, it's saying, like, there's not a lot of logins, but there's a lot of usage. And then you're gonna be able to say, okay, well, are they healthy or not? Like, what else do I need to know to tell if they're healthy or not? Yeah,
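
To make Paul's description concrete, here is a rough sketch of what metric definitions with searchable, plain-language context might look like, plus a crude stand-in for the agent's search step. This is illustrative pseudodata in Python, not Zenlytic's actual semantic-layer format:

```python
# Hypothetical semantic-model entries: the data team defines each metric
# once, with context an AI agent can search and reason over.
semantic_model = [
    {
        "name": "weekly_logins",
        "sql": "COUNT(DISTINCT login_event_id)",  # hypothetical definition
        "description": ("Login events per customer per week. Caution: for "
                        "pipeline products, low logins can coexist with "
                        "heavy usage."),
    },
    {
        "name": "gb_events_streamed",
        "sql": "SUM(event_bytes) / 1e9",
        "description": "Gigabytes of event data streamed; a usage-health signal.",
    },
]

def search_metrics(query: str) -> list[str]:
    """Crude keyword search standing in for the agent's semantic search."""
    terms = query.lower().split()
    return [m["name"] for m in semantic_model
            if any(t in (m["name"] + " " + m["description"]).lower()
                   for t in terms)]

print(search_metrics("customer usage health"))
# ['weekly_logins', 'gb_events_streamed']: the agent can then query both
# and synthesize: few logins but heavy streaming volume, so likely healthy.
```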

John Wessel  47:05

Yeah, this is a perfect segue. Ryan, it was either you or Paul that posted, I think it was a week or two ago, about one of the use cases with one of your customers, and this unlock for them of, like, oh, I can run 10 scenarios at once. Whereas, you know, as a human, I gotta do it one at a time. I can say, hey, customer health, like, what's customer health, and keep it really broad, see, like, 10 or 12 different things, and be like, no, yeah, yes, yes, yes, and then, like, continue to drill in. Whereas, you know, as a human, you're just gonna go with whatever comes to mind on customer health. Oh, I need to look at logins. And then you go down that road, and then you're like, oh, logins aren't good, and you go to the next thing. So I feel like that was a cool thought, even for me. Because as an analyst, of course, I would treat it that same way, as, like, well, which one do I think is best? I'll look at that first, then I'll go to the next one. But you're not limited that way.

Paul Blankley  47:59

Yeah, especially time-wise, I think it matters, because if someone asks you this really broad question, I always get this sinking feeling in my stomach, like, where do I even start? There are so many things I could look at. Do I look at all of them? Do I look at some of them? And if only some of them, which ones? But then you ask a system like Zoe, and it’s like, I could look at all of these things. And you’re like, yeah, you go and do that. I’m going to go get a cup of coffee or something while you work.

Eric Dodds  48:22

Because I think that’s where the context is. That’s a much more articulate way to explain what I meant by context. Because even if we think about something like product health, it varies significantly based on how you slice the business, right? Okay, we have a free account that’s trying the product versus, you know, a large enterprise that’s paying us a lot of money, right? Well, product health is really different for them, right? Adoption happens at different rates. And this is, I think, where “the map is not the territory” causes huge problems. Like, we have a product health score, and it’s like, great, it’s actually a bunch of different product health scores, because you can’t distill all users or customers down into a single score.
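
One way to put Eric’s point in code: rather than one universal health score, compute the score per segment with segment-specific weights, so the same raw numbers can mean different things. The segments, weights, and inputs here are all invented for illustration.

```python
# Hypothetical per-segment scoring: "health" means something different
# for a free trial account than for a large enterprise.
SEGMENT_WEIGHTS = {
    "free_trial": {"logins": 0.7, "gb_streamed": 0.3},  # adoption-driven
    "enterprise": {"logins": 0.2, "gb_streamed": 0.8},  # usage-driven
}

def health_score(segment: str, metrics: dict[str, float]) -> float:
    """Weighted 0-to-1 score; the weights differ by segment."""
    weights = SEGMENT_WEIGHTS[segment]
    return sum(w * metrics.get(name, 0.0) for name, w in weights.items())

# Two logins last week but 80 GB streamed: weak for a trial account
# that should be exploring the app, healthy for a set-and-forget
# enterprise integration.
metrics = {"logins": 0.1, "gb_streamed": 0.9}
print(health_score("free_trial", metrics))  # 0.34
print(health_score("enterprise", metrics))  # 0.74
```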

Ryan Janssen  49:08

Yeah, it all comes back to map design, you know. So in that case, that’s the equivalent of the map showing the UK 400 miles north of where it really is. And if you sail to the UK, you’re going to miss it, basically, because the map is wrong. Sure. So the question becomes, what are the right primitives? What are the right Mad Libs that you can give, you know, whether it’s an AI analyst or a human analyst? When you set those primitives, data teams have a tremendous amount of power to shape how an organization thinks. And if you start putting raw metrics in there that don’t let you account for that context, then, yep, people will probably use them incorrectly, too. But if you set those properly, and, you know, I guess in that case, our goal is to bubble up all the most relevant information, there’s still always going to be a synthesis step at the top for a human being. It’s like, yeah, Zoe can summarize things and, you know, talk a little bit about it. But we fully expect that the humans are going to review all the data and make a decision based on that, you know. And our objective is really to make sure that they have a really fat pipe to the data they need to make a decision. Yep,

Eric Dodds  50:11

A couple specific questions, and John, you may have a couple too, because I know we’re close on time. But in terms of the product experience, can I bring my own primitives? So let’s say, as an analyst, I’ve generated some sort of definition of active users that’s represented as a model or a table or whatever. How does that work? Because there’s obviously a semantic layer here. Does Zenlytic provide that? Can I plug my own pieces into that? Just from a product experience at a company, let’s just use RudderStack, right? We have models that are running, we have reports or whatever. So if I’m onboarding into Zenlytic, what do I need to bring? What do I need to develop? How does this semantic layer work? Yeah, great

Paul Blankley  50:52

question. Our interface is always tables in the SQL warehouse. As long as you can define some SQL to aggregate up active users on some table in your warehouse, that’s all you need. We basically sit on top of those tables, and the kinds of things we expect you to define on top of those are any additional English context you need, and the aggregations, the measures: how do you calculate gross margin? How do you calculate active users? With those building blocks, we can kind of mix and match and combine those however Zoe needs to answer those kinds of questions. I love it.
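
A sketch of the “bring your own tables” idea Paul describes: the table already lives in your warehouse, you layer a measure (the SQL aggregation plus its English description) on top, and queries get composed from those building blocks. The table, column, and function names below are hypothetical.

```python
# Hypothetical measure layered on an existing warehouse table.
MEASURE = {
    "table": "analytics.user_events",   # your table, in your warehouse
    "name": "active_users",
    "sql": "COUNT(DISTINCT user_id)",   # the aggregation you define
    "description": "Users with at least one event in the period.",
}

def compile_query(measure: dict, group_by: str, date_filter: str) -> str:
    """Compose a warehouse query from the measure's building blocks."""
    return (
        f"SELECT {group_by}, {measure['sql']} AS {measure['name']}\n"
        f"FROM {measure['table']}\n"
        f"WHERE {date_filter}\n"
        f"GROUP BY {group_by}"
    )

print(compile_query(MEASURE, group_by="plan_tier",
                    date_filter="event_date >= CURRENT_DATE - 30"))
```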

Eric Dodds  51:26

I have one more question before we end, but did you have any other questions, John? I’ve gotta give you the last word, yeah,

John Wessel  51:31

I appreciate that. I’ve used the product. Okay, so I guess I actually do have one question. This is kind of future-looking: when do you think AI agents in general, and I think this is a general question, will be better at knowing what they don’t know, and be able to better integrate into, like, project management and things like that? Because I think that, to me, would be a really interesting component to this.

51:59

Yeah, definitely.

Paul Blankley  52:00

I think there are two components to that. One is that the underlying models, as they get smarter, will be less falsely confident. The other is that the fine-tuning you do on them actually shapes this kind of behavior. This is the right kind of behavior to shape with fine-tuning, whereas some behavior you want to come just from how you tell it to behave, if that’s in line with how it’s been trained so far. But there are other things where you want it to not be confident; you want it to have a little more humility. I would

52:32

say, if you want a really concrete answer: I think we’re not going to get all the way there, but we’re going to see a step change in models being able to understand what they don’t know when the general release of reasoning models comes out. No one knows for sure, and OpenAI is the furthest ahead with this, but the rumors are that it will be this year. Wow. I love it. All right.

Eric Dodds  52:54

One last question for you, Paul, which is maybe the hardest question, and that is, how did you come up with the name Zoe for your AI? I mean, I feel like that’s the hardest thing for any AI company, is to name their agent, right? And,

John Wessel  53:09

I mean, and then defend the name against all the others, right?

Paul Blankley  53:13

So I think we’ve got a good answer here, actually. So, like I said, right, I studied, you know, BERT, ELMo, Big Bird. All the initial transformer models were actually named after Sesame Street characters, believe it or not.

John Wessel  53:27

I did not make that connection.

Paul Blankley  53:28

Wow. So Zoe was, as far as we could tell, the only main Sesame Street character that didn’t already have a model named after it. Obviously, you know, we wanted something that’s sort of close to us name-wise, so Zoe was this sort of obvious choice for us, because we wanted to pay homage to the original transformer models and be consistent with the Z branding with Zenlytic. Zoe was always the obvious choice.

Eric Dodds  53:50

Wow, that is awesome. I did not put that together. And yeah, you got the Z, right? So it’s yours. Yeah, exactly. We

Paul Blankley  53:59

got the Z, which is not always a good thing, since you end up at the bottom of the list instead of the top. Yeah, that’s

Eric Dodds  54:04

true. That’s true. Facts, awesome. Well, Paul and Ryan, thank you so much for joining us. I learned so much. And yeah, it was fun talking about mental models and everything. And we’ll check out the product. It sounds awesome. I loved it. Thank

Paul Blankley  54:18

you guys so much for having us. Absolute blast.

Eric Dodds  54:21

The Data Stack Show is brought to you by RudderStack, the warehouse native customer data platform. RudderStack is purpose-built to help data teams turn customer data into competitive advantage. Learn more at rudderstack.com