This week on The Data Stack Show, Eric and Kostas chat with Nick Handel, co-founder and CEO of Transform. During the episode, Nick discusses the difference between a metrics layer and a metrics store, how to fuse semantics and SQL into a metric, and how to manage metrics amidst hypergrowth.
The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we’ll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Eric Dodds 0:05
Welcome to The Data Stack Show. Each week we explore the world of data by talking to the people shaping its future. You’ll learn about new data technology and trends and how data teams and processes are run at top companies. The Data Stack Show is brought to you by RudderStack, the CDP for developers. You can learn more at RudderStack.com.
Welcome to The Data Stack Show. Today we are going to talk with Nick from Transform. Kostas, you and I have actually chatted offline a whole lot about this company and the concept in general. They build a metrics store; metrics layer is also a really common term for what they do, and dbt has kind of used that terminology as well. And so my burning question for Nick is actually just defining the metrics layer. It's a concept that sounds great, but it's one of those terms we run across all the time on the show where, if I asked you to define it, it's probably kind of hard, just because the term is a little bit ambiguous: what does that mean for me day to day as someone who works with data? What are your thoughts?
Kostas Pardalis 1:19
100%. Yeah, I totally agree with you, and we definitely have to spend some time on that. But I really want to go through a bit of the history behind this technology, because we keep hearing about all these variations: the semantic layer, the metrics layer, headless BI, all these different names for pretty much the same thing. But obviously people were struggling with this problem in the past, right? That's also, for example, how Looker became successful: they provided a solution to it. So I'd love to hear from Nick what was done in the past, what has changed today, and why we need to keep innovating here. Hopefully that, together with what you have in mind, will help our listeners understand the value and how these technologies are used today.
Eric Dodds 2:19
All right, well, let’s go figure out what the metrics layer is.
Kostas Pardalis 2:21
Let’s do it.
Eric Dodds 2:24
Nick, welcome to The Data Stack Show. We are so excited to chat with you.
Nick Handel 2:28
Great to be here. Thanks for having me.
Eric Dodds 2:31
Of course. Okay, let’s start where we always do. Give us your background and what led you to starting Transform.
Nick Handel 2:39
Yeah, definitely. I started off studying math. I was always really into math, did math competitions, kind of was a nerd and was proud of that. So I studied math in college, moved over to kind of applied math, so first toward the sciences, and then I started working in economics and got more interested in the behavioral pieces out of school. So it was 2012; we were coming out of this double-dip recession, which I thought was really interesting, and studying economics and math, I just wanted to pay attention to that. So I ended up going to BlackRock as a research analyst. I was working on a team that basically did macroeconomic research and produced signals for various investment funds. I really enjoyed that, but I also found that the feedback loops were quite long; macroeconomics is kind of slow, and post all the global meltdown, things were kind of calm in 2013, 2014. So I wanted something that would give me faster feedback loops and started looking around. I didn't know exactly what I wanted to do, but figured it would probably be somewhere in tech. Luckily, in 2014, I landed on the data science team at Airbnb. I was on the growth team, worked a ton on product experimentation, lots of different analysis, machine learning; it was pretty broad at the time, that data science team wasn't super specialized. Eventually, having worked closely with the data tools and data infra teams, I decided to move over to the product team in 2017. I did about a year as a product manager building data tools for people like me, and then realized I wanted to go back to data, which is kind of a trend in my career, veering back and forth between product work and data work. So I went and joined a startup called Branch, a microlender, as their head of data. A microlender has tons of machine learning applications, for loans and fraud and all kinds of different things, so that ended up being really fun, with lots of interesting NLP applications. On the side, as head of data, I was also responsible for a lot of the reporting, figuring out how the loan book was doing. So I ended up going back to a lot of the analytical work I was doing earlier in my career and really enjoyed that. But I realized that some of the stuff I was working on at Airbnb was really valuable. I think we were a bit ahead of the curve, just because we had, one, data-scale problems, and two, a bunch of resources to go and try to build various tools. And I realized this metrics space was really interesting, and decided to start working on Transform at the end of 2019, and I've just been doing that since.
Eric Dodds 5:44
Very cool. We have so much to dig into, but tell us at a high level: what does Transform do?
Nick Handel 5:53
Transform is working on a piece of technology called a metrics store. And that is a very, I think, loaded term; I don't think people really know what that means, so hopefully we'll hash that out. But at a very, very high level, there are kind of two really big problems with how companies, and how people, data analysts and business users, interact with metrics today. One is that, especially on larger teams, there tends to be this challenge of defining metrics consistently. That means: is my SQL the same as yours? That means I tried to do this analysis, and it's slightly different than the way that you talk about this revenue thing, or new user signups, or whatever it is. So there's basically this trust problem. The second big piece of what the metrics store is aimed at solving is around productivity. It should be very, very easy to go and ask a question like, what's the search-to-checkout conversion rate? And now I want to know the search-to-checkout conversion rate by country. Those are questions that should not require knowing how to join three different tables, and knowing how to filter various things or aggregate various things. Those definitions should be baked, and people should be able to reliably consume them around an organization.
Eric Dodds 7:25
Totally. Okay, so I want to dig a little deeper. There are sort of two terms here. You said metrics store, and this paradigm is also described as a metrics layer. What's interesting about the terminology is that you think of a layer as sort of spanning a stack, spanning a lot of different things, and a store as sort of a distinct entity. Can you help parse that out for us? What's the difference? Are they different? Are they the same thing?
Nick Handel 8:03
Okay, so I'm gonna start with semantic layers, and I'm gonna move my way up to a metrics store. Imagine three circles encompassed within each other, like those diagrams that show AI, machine learning, and something like NLP in the middle; I'm forgetting exactly what goes where. In the middle, the most narrow definition in this space, is the semantic layer. And what is the semantic layer? It's basically a business representation of a company's data. Another word could be an ontology. I think that when people talk about semantic layers most recently, they probably think about LookML, what's within Looker. There are other tools in this space: AtScale talks about themselves as a semantic layer. Historically, BusinessObjects was probably the biggest, and it is amazing how many people still use it. So what does this thing do? Basically, if you think about an entity relationship diagram, there are a bunch of different datasets, a bunch of different objects. Within them, they have measures and dimensions: things that are aggregatable, and then the various group-bys and filter-bys, the dimensions. And they have relationships between them. So that right there is basically the semantics of what exists within a data warehouse, and it's the building blocks that we do a lot of our analysis on top of. Whatever data modeling techniques you use, there's still some expression of semantics there. So, my definition of a metrics layer: I think they can come with very weak concepts of semantics all the way to very strong concepts of semantics. A very weak concept of semantics in a metrics layer would be to just define some SQL. It's not very structured; you're not really creating these reusable abstractions, you're just writing a SQL query. On the other end, you have something where you are providing these measures and dimensions and relations in a very structured format, and then expecting the semantic layer to do a lot of the work for you, to define the SQL and build it up. And the way that I think about the metrics layer is basically just taking whatever that semantic construct is and extending it to have a concept of a metric, an object that is a metric. I think there are some interesting conversations around what makes a metric different from a measure. The way that I think about that is, measures are basically aggregatable expressions, and a metric could be as simple as an aggregated expression. So a count star from the transactions table is transactions, and that could be a metric. But then there are some metrics that are just a whole lot more complicated than that, like a conversion metric or something like that. It requires two events; it requires an understanding of the relationship between those two events, an understanding of the timestamps: what do they even mean within those two events? Why does it matter that those two timestamps are happening within seven days, or whatever this conversion window is?
That's kind of the difference between these metrics and measures: metrics are basically a more complicated construct. So a metrics layer basically is the semantics of what's in the warehouse, some concept of how to take those semantics and turn them into logic that can be executed to actually pull out a measurable object, a metric, and then some kind of query interface. And I think if you look across the board, that's what these various tools are: semantics, some kind of performance piece, and then query interfaces. A metrics store basically takes that and extends it with organizational governance beyond technical governance: how do you get a bunch of people to collaborate and agree on the definition of a metric? And then how do you involve the various people that need to be in those conversations and manage the lifecycle of the metric?
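To make that measure-versus-metric contrast concrete, here is a rough sketch of the hand-written SQL a metrics layer would abstract away; table and column names are illustrative, not from Transform:

```sql
-- A measure as a metric: a simple aggregatable expression over one table.
SELECT COUNT(*) AS transactions
FROM core.transactions;

-- A conversion metric: two events, the relationship between them,
-- and a seven-day conversion window, which is why it is a richer construct.
SELECT
  COUNT(DISTINCT c.user_id) * 1.0
    / COUNT(DISTINCT s.user_id) AS search_to_checkout_rate
FROM searches AS s
LEFT JOIN checkouts AS c
  ON  c.user_id = s.user_id
  AND c.event_ts BETWEEN s.event_ts
                     AND s.event_ts + INTERVAL '7 days';
```

The second query is exactly the kind of logic that should be baked into a shared definition rather than rewritten by hand each time.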
Kostas Pardalis 12:40
Okay, there are many interesting topics and questions that I want to ask about shortly. But before we go there, I want to ask you something that came to my mind as you were introducing yourself. You mentioned two important terms as part of your work experience: you mentioned metrics, which is what you've built the business around, and you also mentioned signals. You made it very clear that when you were at BlackRock, your work was to generate signals, not metrics. So what's the difference between signals and metrics?
Nick Handel 13:20
Yeah, so I kind of skipped over this part of my career, but signals, in my mind, are features for machine learning applications. That's what I was talking about. When I was a product manager at Airbnb, what I was working on was Airbnb's feature store, which is actually quite similar to a metrics store, but also quite different in a few interesting ways. There are just different requirements for these two tools; there's quite a long way between those two tools being the same. But yeah, I'm talking about machine learning applications when I say signals.
Kostas Pardalis 13:59
Okay, that's a really interesting distinction, and it's really interesting that you mentioned feature stores. Okay, cool. So let's go back to metrics. Can we start with a little bit of the product perspective: who is the user of a metrics layer today? Who is the primary user who engages with Transform as a platform?
Nick Handel 14:28
This is part of the interesting thing about what the metrics layer could become. There is this gap between the way that data people talk about data and the way that business people talk about data. When you get a data request as a data person, it's almost always in the format of: I would like these various metrics, I would like them filtered by these various dimensions, and I'd like to have these various dimensions to slice and dice and ask some questions. And then what you do with that is you go and translate that request, I would like these metrics at these granularities filtered in these ways, into SQL on top of tables in the warehouse. And so the big gap between the business person, who's trying to fetch some dataset they can use for their analysis, and the data person is this separation between the tables and all of the SQL logic that needs to be expressed to actually pull the data out, and the way that people talk about data. And so I think that the point of the metrics layer is to try to bridge that gap, because it is a common language; it's the way that we translate a business user's needs into something that can actually be expressed as SQL, as logic on top of a data warehouse. So ultimately, I think the end user of the metrics layer is the data analyst, the analytics engineer, the data engineer: they need to be able to define the logic on top of these tables as these objects, these concepts that business people can understand and consume in downstream applications. I think that the metrics store extends that a little bit; it adds this piece of organizational governance, where there are business owners of metrics, and they should be involved in that process. And so in some ways there is actually a second user of a metrics store, which is the person who is responsible for a metric or partakes in the conversation of how it should be defined.
Kostas Pardalis 16:35
Okay, so people using the Transform platform can collaborate, right? You have the analyst, the data engineer, and also the business user there, and they engage in the process of defining the metric. Can you take us through this experience? How have you seen it working inside an organization?
Nick Handel 17:04
Yeah. I think a good example is, I think you had a guest from Netlify on the show a few months back, and her explanation is probably better than mine, because it's a lived experience from an actual user. But just to reiterate it from our perspective: data analysts know the logic of how to express these things. Business users don't necessarily know the nuances of how to join various tables, how to filter in the correct ways, how to produce these metrics. And so the data analyst has this problem, and it's a good problem to have: business users are interested, they want to know the answers, and so they're constantly going and asking various questions. And that is time consuming for the data analyst; it takes a lot to resolve all those questions. There's a good book from Tomasz Tunguz and Frank Bien of Looker that basically coins the term "data breadlines," and the gist of that is, business users are getting in line waiting for their data. So what the semantic layer, and this metrics layer on top of it, enables is the business user having a metric concept that they can understand and request, allowing them to interact with the tool in this more self-serve way. And that really was the special innovation of LookML and Looker in general, on top of historical tools that have done similar things in the data space. And so what that enables is basically: I have questions about metrics, I would like to get a CSV, I'd like to get that data in Excel or Google Sheets, or I just want to see a line chart of the metric, and I want to know, is this thing going up? Can I slice it in a few different ways and figure out why it's going up? That's really valuable for the end business user. And it solves a problem for the data analyst of having to rewrite the same SQL over and over again, try to manage that SQL, and try to hash out the definitions in SQL that wasn't written by them but is presented to them as the right numbers or something like that.
Kostas Pardalis 19:29
Okay, so you have the business user; the business user comes and says, okay, I want, let's say, to measure MRR. We start from there, and through going back and forth between them and the analyst, and maybe also the data engineer, we end up adopting some SQL code that calculates the amount. What happens after that? Let's say we've done that: we have it in the store, right, we have the definition there, and we have an agreement between these three roles. My next question is going to be what happens when there are multiple different versions of what MRR is, but first: how is this information then reused inside the organization, and where is it consumed from there?
Nick Handel 20:23
I think there should be many options. There's this term "headless BI," right, that's kind of thrown around. I think people often conflate it with the metrics layer. I actually don't think that the metrics layer should be headless; I think it should be able to show you the metrics, it should be able to show you the value. And so in some ways, you could say that if the metrics layer is not headless BI, it kind of overlaps with BI in some small ways, but it's basically just the metric that you've defined, presented on a graph. It's not the freeform, let-me-write-some-SQL-and-produce-a-hundred-different-graph-types kind of experience that people expect from BI. So that's the first thing: I think this tool should be able to show metrics. And I think you should be able to build a really, really great experience for the business user to come and ask a question, and for the data analyst to know that if the question that gets asked gets copy-pasted as a PNG and thrown in a deck to the board, it's going to be the right number, it's going to be well received, and it's not going to lead to chaos down the road. But beyond that, I think the difference from what we have with Looker today is that this should be open. I think Looker has moved in this direction a little bit more recently, with the promised integration with Tableau and some integrations with Google Sheets, etc. But it shouldn't just be a limited set of integrations; it should be generic APIs that are open, that anyone can consume these datasets from. And I think the format those APIs should expose is metrics by dimensions. That's typically the way the end business user asks, and I'm not saying the business user should have to interact with an API, but I think these tools, ultimately, will be pressured to build better first-class experiences with these centralized metric definitions. Some tools actually already have done this. I think Tableau has been very open to this. Mode has done a good job: there's a Transform JDBC interface that allows Mode to directly connect to Transform, and allows you to ask for revenue by country by day. It sends that request to Transform, Transform compiles it into the correct SQL query, and then returns the result. So that's the experience that I think we're moving towards.
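For a concrete feel of that metrics-by-dimensions request shape, this is roughly what a query against the open-source MetricFlow command line looks like; the exact flags may differ by version, and the JDBC interface Nick describes accepts an equivalent request:

```
# Ask the metrics layer for revenue by country by day (flags approximate);
# MetricFlow resolves the joins, compiles dialect-specific SQL, and runs it.
mf query --metrics revenue --dimensions country,ds
```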
Kostas Pardalis 23:12
Okay, so let's go closer to the technical details. We have, let's say, the semantics, which are brought in by the business user, the domain expert in this case, and then we also have a SQL query, which is created by the analyst. So how do we represent these two sources? How do we fuse these two sorts of information into a metric? And I mean that on a technical level, right: is this a document of metadata together with a SQL query, or what does it look like?
Nick Handel 23:53
This is my favorite part of what I work on; this is the piece of the product that I refuse to stop thinking about, and that will probably be true for a long time. So the really interesting thing is, how do you create these reusable abstractions, these core components that allow you to define the logic for a dimension, for a relationship between two tables, for a measure, and be able to use those as building blocks that enable you to define a metric in a DRY, don't-repeat-yourself, non-duplicative way? And then, once you have those nice, clean abstractions, how do you allow a user to express them? I'm going to talk about MetricFlow, which is Transform's open-source metrics framework, because this is exactly the piece that MetricFlow is. And I would just preface that with: it's a component within the broader Transform set of products. So what does the spec within MetricFlow look like? Well, anyone can go look at it, and we have some examples of defining, say, your Salesforce data in it, and whatnot. But it's a YAML file that basically has some SQL expressed in it. So you can point it at a table: you basically say core.transactions is the name of the table, and I would like to define a measure that is the sum of one, or a count star, a count of how many transactions have occurred. And then I would like to define some dimensions, which are maybe the payment type, the store that the transaction occurred in, etc. So I've now captured the semantics of one table. We then take various identifiers that exist in different tables, and that allows us to do join resolution between two different datasets. So if I have a transactions table and a users table, I can go and define various dimensions that belong to a user, like maybe their signup method, various dimensions that might exist in that table. And then when an API request comes in to Transform, it says: okay, I have a transactions table, I have a users table; this is the user ID, which is a foreign key in the transactions table; this is the user ID, which is a primary key in the users table. So if I ask for how many transactions occurred by the user signup method, then we go and construct a query, which is just SQL, using the logic that's defined in these two YAML files, to say: I need to take the transactions table, I need to join the users table, I need to aggregate transactions to the level of the user signup method, and then I need to return that to the end user. And that whole process, the spec, the construction of the various joins and all of the logic, and then the rendering of that logic down into a specific SQL dialect, is MetricFlow.
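As a minimal sketch of what those two YAML files might look like, in the spirit of MetricFlow's spec (field names are approximate; the authoritative spec and examples live in the MetricFlow repository):

```yaml
# Sketch of a MetricFlow-style data source for the transactions table.
data_source:
  name: transactions
  sql_table: core.transactions
  identifiers:
    - name: user_id         # foreign key into the users data source
      type: foreign
  measures:
    - name: transactions    # "sum of one," i.e. a count of rows
      expr: "1"
      agg: sum
  dimensions:
    - name: payment_type
      type: categorical
    - name: store_id
      type: categorical
---
# Sketch of the users data source.
data_source:
  name: users
  sql_table: core.users
  identifiers:
    - name: user_id         # primary key
      type: primary
  dimensions:
    - name: signup_method
      type: categorical
```

A request for "transactions by user signup method" would then be resolved through the shared user_id identifier and compiled into SQL along these lines:

```sql
SELECT
  u.signup_method
, SUM(1) AS transactions
FROM core.transactions t
LEFT OUTER JOIN core.users u
  ON t.user_id = u.user_id
GROUP BY u.signup_method;
```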
Kostas Pardalis 27:15
Okay. So, in fact, the user does not really have to define SQL, right? This is something that's pretty much generated by the system, correct?
Nick Handel 27:28
Yeah. One thing that you can imagine is that the ideal input into a framework like this is nice, clean, normalized tables. That being said, we have done so much work to make it so that you can put in a raw event log, or a partially denormalized aggregated table; you can put in a very wide range of tables. But basically, the end user is just telling us what is in the data warehouse. And there might be a little bit of SQL. Like, maybe you want to take the number of transactions that a user has had and define them as a power user if they've had more than 10 transactions. You could go and write a CASE WHEN: when number of transactions is greater than 10, then power user, else regular user. And so you can express that SQL, and it will then be reused every single time that you request that dimension.
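That power-user dimension might be expressed along these lines in the YAML spec; again a sketch with illustrative names:

```yaml
dimensions:
  - name: user_type
    type: categorical
    expr: |
      CASE WHEN number_of_transactions > 10
           THEN 'power user'
           ELSE 'regular user'
      END
```

Defined once, the CASE expression is then rendered into every query that groups or filters by user_type.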
Kostas Pardalis 28:31
Okay, that's interesting. And okay, so what does it support right now? I mean, what kinds of database systems can users interact with through the framework?
Nick Handel 28:45
This is a big reason why we open-sourced MetricFlow. One, what I'm talking about is a huge engineering project; I think we've been thinking about this for most of the last eight years, and so it felt right to put that technology out in the world. But two, it's really ambitious: the idea that you can take a bunch of different data sources and then do multiple hops of join resolution, to determine what's on the left side and the right side of these joins, to be able to freely traverse the warehouse, is a huge project. So what can you do with it? Well, you can basically ask for any metric and dimension where there is a join path between them. And if you think about what you would use that for: well, you can, ahead of time, construct data marts in a programmatic way, on top of a well-defined set of abstractions, a set of logic, where you've defined your logic in a very DRY way and then programmatically generated all of the different data marts that you would want to build. So the dream here is that you can take what would typically be a very deep DAG, a DAG that has many layers of data transformation, and actually flatten it: I'm going to use my data transformation framework, something like dbt, or just an expression of SQL or Spark or whatever you want to use, to clean up my datasets. And then, from there, I'm basically just going to expose these nice clean objects to a metrics layer. Historically, Airbnb has sometimes referred to Minerva as a denormalization engine: you define that logic in one place, programmatically generate all of your data marts, and even dynamically generate data marts from downstream querying interfaces. That's a huge step from the world we exist in today, but it is actually the world that a few large companies are in right now. This is kind of what LinkedIn's unified metrics platform does, and this is absolutely what Minerva does at Airbnb.
Kostas Pardalis 31:12
All right, so we have a huge project here with amazing potential, and it's open source. So how do you hope people are going to engage with it? What kinds of contributions from the community out there are you looking for?
Nick Handel 31:30
Yeah, so I think there are lots of ways to engage. Frankly, what we're trying to solve is denormalization, which is a nontrivial task. When you think about what's involved, you need to be able to take in any kind of underlying data structure. So you need to be able to support SCD tables, and partially aggregated tables, and raw event logs, and a lot of work has been done to support those types of datasets. But there are always new modeling techniques, there are always nuances there. There are lots of features where we'll be opening more and more GitHub issues, but also, generally, we would love more issues. Beyond that, there's support for different data warehouses. Today, with MetricFlow, we started off with Snowflake, Redshift, and BigQuery. But next on the list are Databricks, Spark, Presto, Trino, Postgres; these are all the key ones that we really need to support soon. And then there's a long tail of really interesting data warehouses that you could support with MetricFlow. The other thing is new metric types. I think this is where MetricFlow really shines. Previous semantic layers have kind of struggled to define complicated metric types; there are various strange ways of defining complicated metric types in something like LookML, and it should be very, very easy to define a function within MetricFlow to support some new metric type. An example is a conversion metric: you need to be able to point it at two datasets, say these are the timestamps in these two datasets, this is the conversion window, and then, true or false, do I want multiple conversions to be able to happen within a window? Those are effectively the inputs into a function that could be a conversion metric, and that's how the framework is designed. That's actually not a metric type that we support today, but it's obvious that we should support it. So those are the kinds of contributions or features that I think would be really helpful. There are also optimizations: we're trying to render SQL that is both very legible and very performant, and so we need to optimize the queries as much as possible, to make them as efficient and readable as possible. We've done some work there, but there's more that can be done.
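Sketched as configuration, the inputs Nick lists might look something like this; it is purely hypothetical, since he notes conversion metrics were not yet supported, and every field name here is illustrative rather than part of the actual spec:

```yaml
# Hypothetical conversion metric config (not a real MetricFlow metric type
# at the time of this episode; field names are illustrative only).
metric:
  name: search_to_checkout_conversion
  type: conversion
  type_params:
    base_event: searches               # first data set
    base_time: search_ts               # ...and its timestamp
    conversion_event: checkouts        # second data set
    conversion_time: checkout_ts       # ...and its timestamp
    window: 7 days                     # the conversion window
    allow_multiple_conversions: false  # may a user convert more than once?
```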
Kostas Pardalis 34:17
Yeah, that's one of the things I wanted to ask you about: performance, right? A lot of work gets done, from DBAs to analysts to data engineers, with pretty much everyone trying to optimize the queries they have, in order to either reduce latencies or reduce the resources that are needed. But that's part of working with, let's say, the raw SQL queries that exist out there, and it's a pretty complicated job that many times requires deep knowledge of the database system. So how do you see this problem being solved? Because the feeling that I have is that there are two extremes. One is, let's let the user completely control the query and do whatever they want, which is what they're doing today, right? The other extreme is, let's make the full experience completely automated: we autogenerate the queries, and you don't have any way, as a user, to do something with them, but we will do the best we can to optimize. And my feeling is that usually the optimal solutions are somewhere in between. You can't really build a system that optimizes in the best possible way for every possible query and metric and use case out there, but at the same time you want more usability by automating as much as you can. So how do you see, from the product perspective, this being delivered to the end user?
Nick Handel 35:59
Well, this might be the optimist in me, but I think I might disagree with that, that you can't create something that's perfectly optimized. And I can't take credit for this, this is my co-founder Paul, but I think the design of MetricFlow is really built around the idea that we can build both very optimal and very legible SQL. And the way that we do that is, historically, all of the versions of these metrics frameworks that I've worked on in the past have been template-based: let me define some macro, and then let me wrap some other macro, where I'm consuming from some SQL that's within a FROM statement and doing some additional transformation on it. And there are two conflated problems there. One is, how do I express the logic? And the other is, how do I build the SQL? So the way that MetricFlow works is, you basically create a query plan, which takes in the semantics and the API request, so what's in the warehouse and what the end user wants. It then creates a dataflow plan, which is: read from this table, filter this thing, aggregate to this level, read from this other table, join these two things, aggregate to this other level, etc. And you can create complicated query plans from that, so you can really build any metric type; you can build quite complicated queries off of this, which, when I talked about trying to solve the problem of denormalization, is roughly what I mean. And from that, you can then optimize that query plan. So if you have overlapping pieces of "read from this table, filter this thing, aggregate this thing," well, I can pull from two different parts of the dataflow plan, this DAG of operations, put that into a CTE or something like that, and then read from it in the rest of those operations. So you start with this very flexible way of building a structure of logic that needs to be expressed to get to what the user is requesting, and then you go and optimize that. And then you take a DB-specific renderer that can render to the nuances of each of the different databases, and you render this optimized query plan into that database dialect. With that, you can create pretty darn legible and pretty performant SQL. I think it's similar in some ways to what Apache Calcite does, but written in Python and with a fairly different approach.
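To make the optimization step concrete: if two branches of the dataflow plan share a "read this table, filter this thing" node, the optimizer can hoist that shared work into a CTE before the dialect-specific render. A hedged sketch of the kind of SQL that falls out, with illustrative names:

```sql
-- Two measures both read core.transactions with the same filter,
-- so the shared node is emitted once as a CTE and reused below.
WITH filtered_transactions AS (
  SELECT user_id, amount
  FROM core.transactions
  WHERE ds >= '2022-01-01'
)
SELECT
  u.signup_method
, SUM(ft.amount) AS revenue
, COUNT(*)       AS transactions
FROM filtered_transactions ft
LEFT OUTER JOIN core.users u
  ON ft.user_id = u.user_id
GROUP BY u.signup_method;
```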
Kostas Pardalis 38:52
Okay, that's super interesting, actually. I'm spending a lot of time these months on Trino, talking about performance optimizations. And it's an interesting problem, it's a big problem, and there are always trade-offs, right? And actually, it's kind of interesting, because as a query engine we approach the problem from the other side: we're trying to figure out, for example, which workloads we should consider more important right now, and try to tune the whole system to deliver more performance there. So yeah, anyway, I'd love to talk more about the performance part, to be honest, but I think we can do that offline.
Nick Handel 39:46
Yeah, me too. We can find another time.
Kostas Pardalis 39:48
Yeah. All right. Do you see any kind of— I mean, let's say inside the company we want to define a very good metric for MRR, for example; we all agree upon it and we use it, right? First of all, how easy is it for this to happen inside an organization? Because humans are involved, and humans have different ways that they interpret information, and semantics come mainly from humans, right? So let's talk a little bit about that. Do you see cases where MRR might be defined multiple times inside the same organization? Do you see conflicting uses in the end? And where are the limits between the technology and the human factor?
Nick Handel 40:40
Yeah, I like this question a lot, especially based on what we've been chatting about for the last 10, 15 minutes. Because ultimately, all the things that we just chatted about, very cool, very interesting, very fun, don't really matter unless the organization can agree on a metric and actually connect it to the places where people want to do analysis, where people want to consume it. A business user really only cares about rendering legible or performant SQL if their metric comes back to them really, really fast and it's the correct metric, right? So this is the fun part, where the technology should just disappear under the hood. And I think there are a few different pieces around lifecycle management, but maybe just at a high level: what does the lifecycle of a metric look like? You have the definition of a metric, right? Here is the technical definition. Can we get everyone to agree, philosophically, on what this thing is? Just ignoring the SQL expression: is this what MRR means? Are we filtering the right things? Are we including the right things? Does everyone know what that is? In my mind, there is a long, long way to go there; I don't think we have solved that problem. And I think that, if anything, it's a little bit organization-specific. But the way that we're doing it right now, which is either nothing, maybe an email at best, or, well, maybe at best a Google Sheet or something like that with all of the human-readable definitions of some of these metrics, is not enough. So I think this is an open conversation, and more people should talk about how to get this done. At Airbnb, I saw how it was done there, and I think it worked at that organization. They've talked about this system that they've built out now called Midas, which is: how do we create a metric, find a business owner, find a technical owner, put a stamp on it, let everyone know that this is a Midas-approved metric? This is tier one, everyone should trust it, everyone has agreed on the definition, and here's the form that was filled out, with the human-readable description as well as the technical logic. But I don't think that's going to work at every company. I think that generalizing human process is way, way harder than generalizing technical solutions for metrics. We have some opinions here, and we've tried to express those in our product, around ownership, business and non-technical owners, and tiering, this approval process to mark a metric as stamped and accurate. But frankly, there's a lot more work that needs to be done there. Beyond that, you get to iteration: how are you tracking versions of metrics? Can you construct historical versions of metrics, or do they disappear once you evolve them, because the underlying tables are wiped away? Do you save a snapshot of that metric before you wipe away the table, and keep it in an archive somewhere? There's a lot of process there that I would say we haven't even scratched the surface of. Or archival: hey, this metric's not useful, people should not come to it in seven months and try to do some analysis on it, because nobody's looked at this table, it's not even supported anymore, this isn't the right definition.
So the one technical piece that I would say is: how do you agree on an MRR definition? Well, getting business people and technical people all in a room to talk about it is probably the best way to get that started. But at least technically, our framework doesn't allow you to define two metrics with the same name. Every metric has a name in the code, which is some snake case, like mrr_new_users or something like that, and you can't redefine that thing. So there are at least protections against defining something with the exact same name. I could have mrr_new_users and I could have mrr_first_time_purchasers or something like that, and ultimately those two things will show up with probably different tiers and different owners, and hopefully that will start a conversation where we can at least see that there are two different definitions.
Kostas Pardalis 45:13
Yep, 100%. All right, that's all from my side for now. We could go another hour, but I've completely monopolized the conversation, and I have to give the stage to Eric. So Eric, all yours.
Eric Dodds 45:28
No, this has been such a great conversation. So, question for you, Nick, following along on what we just said. I'm thinking about our listeners who are maybe in environments where metrics are changing really fast. Let's just imagine a world where you can easily get agreement on MRR. Even if everyone agrees on it, it sort of changes next quarter because the business model changes. Of course, that happens a lot with earlier-stage companies. And so, the world you described at Airbnb, where there's agreement on this and all this infrastructure around it, living in the world of Minerva and having all of that: there are so many companies where the business model is changing, they're acquiring companies, adding new product lines, etc. Is there another way that you think about that? Because if you think about the singular case of a single business unit with a single metric for MRR, great, that's fine. But if that's changing a lot, how do you think about managing it?
Nick Handel 46:53
I worked at Airbnb when it was a little bit smaller. In 2014, the product team was maybe 100 people, and I saw it evolve to probably, I don't know, 1,500-something people. So I saw a lot of growth there, a lot of evolution, lots of definitions of new metrics, lots of iteration. But there is a set of metrics that remains relatively stable, right? You're not going to completely change those unless you completely pivot your business, which truly happens at startups. And I worked at a startup afterwards, and I saw new products get launched, and we only had a few stable metrics. And now I work at a startup; we have our own metrics defined in Transform, and they evolve very quickly. So I do live this. But the thing I would say is that there are some metrics which are stable, and they are very important. An example for us is the number of queries that are issued against the metrics framework, from any source: from our Tableau integration, from our Mode integration, from Google Sheets, from within our UI, from within the Python API, etc. That's a metric that we track, and it's very unlikely to change no matter what we ship or whatever we're working on. And we consider that a tier-one metric, which basically means that it is stable, and everyone in the company should feel safe consuming it. At the same time, we launch new product features, and we want to understand how people are consuming those features. So we'll get some new set of events, and we'll define a tier-three metric on top of that, which basically says: hey, no idea if this is good, but we wanted to put it on a graph, we just wanted to look at it, right? Maybe we'll see a dip in weird ways, or we'll do some query in the future that tells us it's wrong in some way. But basically, that allows us to track the new things with the old things and keep them separate. I think that tiering is probably the best solution to this, versus just throwing everything in a big folder and reading through it every time you have a question.
Eric Dodds 49:11
That makes total sense. Okay, so that leads me to my next and last question. I actually lie a lot when I say that, so there may be one more, but we're close to the buzzer here. So, this has been a concept that has fascinated me for a long time. On some level, if you take, say, an e-commerce business, the set of core metrics that don't change is very, very similar across those businesses. The user flow can essentially be broken down into a handful of touchpoints that are essentially the same even if they have different names. Do you envision a future where you can essentially have a metrics store that maps to your business model almost out of the box? So when you adjust metrics in that context, you're really almost adjusting components of the semantic layer, the way that you name things; in reality, the semantics you need to define are mostly known quantities. You could argue the same for a B2B SaaS with a freemium model or whatever. These are known business models, known touchpoints, generally known metrics. Do you envision a world where you have, out of the box, this is your metrics store, and you don't really have to do a lot of definition?
Nick Handel 50:46
So I really do like that idea. Unfortunately, the answer, as it is most of the time with questions like this, is: it kind of depends. And what it depends on is how you want to measure your business, right? How you want to understand it. I think that every business is unique, and that leads to various nuances, which lead to different definitions of MRR, etc. But I do think that, at the core, there's a set of metrics where, if somebody says, hey, I'm an e-commerce business, then we should be able to say, okay, here are 20 metrics that you probably want to track, and you probably have some tables that look like this. So let's put them in and have this boilerplate template for e-commerce businesses. I think that would be very cool. And it's an interesting idea, especially with what I said about the metrics layer being able to visualize data at a very basic level; it should be able to show the value of the metric. You can imagine an e-commerce business just getting off the ground, exporting a bunch of data to the warehouse, and then basically having the 20 most common metrics that they should be tracking visualized for them. That's a great way to start off as a business, versus the situation we have now of dump it all in and munge it around; it's a lot of work. And even beyond that, if you're using something like Shopify, Salesforce, etc., they produce some nice, clean datasets. So I also like the idea of pre-defining those metrics on top of those datasets, and basically having all of the getting-started work done for you.
Eric Dodds 52:36
Yeah, super interesting. For a while I've had this, I wouldn't say a dream, but I imagine basically running a Terraform script, and it spits out not only a stack with all of the tools that you need for a business model, but also the metrics and the table structures and all that sort of stuff. I guess that's technically possible. But it's super interesting to think about. It's almost like a stack in a box with the metrics layer, and you're off to the races.
Nick Handel 53:10
So I wrote this blog post a few weeks ago where I talked about the metrics layer as infrastructure-as-code for your data warehouse, right? How do you go and define all of these metrics on top of the tables that are landing in the warehouse, and then programmatically generate all of the data marts? That, in my mind, is very similar to Terraform, Pulumi, etc., but for your data warehouse. And the cool thing there is, you make a change to a single metric, and it shows up in five different data marts. Boom, that change cascades into all the places where it's being consumed, which I think is the really powerful part about thinking about the metrics layer as the denormalization engine, to tie it back to the technical talk earlier.
Eric Dodds 53:55
Yeah, absolutely. All right. Well, we are at the buzzer, Nick, this has been so wonderful. We learned so much. I wish we had another hour, so we’re gonna have to get you back on the show.
Nick Handel 54:05
Cool. Can’t wait. Thanks so much for having me.
Eric Dodds 54:08
Kostas, I love talking to all these smart people that we have on the show. One of my big takeaways, and I'd love your thoughts on this, was the discussion around the legibility and performance of the SQL queries that Transform generates. He was very explicit about the trade-off there, and how it's a pretty hard problem to balance both legibility and performance. And I just appreciated that he stopped multiple times and reflected on the difficulty of thinking through that problem. Whenever anyone acknowledges upfront the difficulty of a problem and reflects on it, I know they're really thinking deeply about it. But that's actually more your area of expertise. So what did you think?
Kostas Pardalis 55:03
That I might get kidnapped.
Eric Dodds 55:07
Well, I didn’t necessarily mean that. I meant performing queries on the data infrastructure, not in your own.
Kostas Pardalis 55:16
Yeah. Well, I have to add something to what you said about Nick, though. I totally agree with you about getting to talk with all these smart people, and the team at Transform is a very talented and smart group of people. But what I really loved about the discussions with him is also how passionate he is about the problems he's going after. When you have this passion together with a group of smart people, I think something good will come out of it for sure. And as I said, there are maybe many trade-offs that these technologies are going to face, and deciding on them is what makes the problem Nick is going after really, really hard. Some of those trade-offs have to do with how people interact with each other, because there is a very strong human element there, and many different personas involved in using the product. And obviously, some trade-offs have to do with the technology itself, right? Balancing all of these and making the right trade-offs is what I think is going to decide who wins in this space. And it's one of the reasons I'm really looking into these technologies, because it's really fascinating to see how people can build experiences and products around such complex problems.
Eric Dodds 56:48
I agree. Well, thank you so much for joining us. Tell a friend. If you enjoy the show, ask them to subscribe. And we always like new listeners, and we will catch you on the next Data Stack Show.
We hope you enjoyed this episode of The Data Stack Show. Be sure to subscribe on your favorite podcast app to get notified about new episodes every week. We’d also love your feedback. You can email me, Eric Dodds, at eric@datastackshow.com. That’s E-R-I-C at datastackshow.com. The show is brought to you by RudderStack, the CDP for developers. Learn how to build a CDP on your data warehouse at RudderStack.com.
To keep up to date with our future episodes, subscribe to our podcast on Apple, Spotify, Google, or the player of your choice.
Get a monthly newsletter from The Data Stack Show team with a TL;DR of the previous month's shows, a sneak peek at upcoming episodes, and curated links from Eric, John, & show guests. Follow us on Substack.