Episode 69:

What is the Modern Data Stack?

January 5, 2022

This week on The Data Stack Show, Eric and Kostas hosted a panel of experts from across the business and data landscape, including Timothy Chen of Essence VC, Brandon Chen of Fivetran, Paul Boccaccio of Hinge, Jason Pohl of Databricks, and Amy Deora of dbt Labs. Together the group discusses what defines the modern data stack and how it impacts each of their businesses.


Notes:

Highlights from this week’s conversation include:

 

  • Panel introductions and backgrounds (2:55)
  • What the modern data stack means to each of our panelists (5:04)
  • Defining the fundamental components of a modern data stack (17:22)
  • How the modern stack drives insights and actions for businesses (28:03)
  • Getting to a uniform definition of the modern stack (33:45)
  • Managing the modernization of a large scale data stack (39:09)
  • How testing works in the dbt context (48:44)
  • The relationship between the data warehouse and the data lake (52:25)
  • What has us most excited about the future of modern data stacks (56:02)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we’ll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.

Transcription:

Automated Transcription – May contain errors

Eric Dodds 00:06
Welcome to the Data Stack Show. Each week we explore the world of data by talking to the people shaping its future. You’ll learn about new data technology and trends and how data teams and processes are run at top companies. The Data Stack Show is brought to you by RudderStack, the CDP for developers. You can learn more at rudderstack.com. Welcome back to the Data Stack Show. Today, we are recording an episode with a panel of guests. This episode is also being live streamed, so thanks to everyone who joined us on YouTube for the live stream. We’ll be doing another one of those, and we’ll let you know about that on upcoming shows. This panel is pretty incredible. Back when we started the show, I would have said you were crazy if you said that we were going to have a panel with people from dbt, Databricks, and Fivetran, someone who’s building data infrastructure at Hinge, and then a VC investor from Essence VC who invests only in data infrastructure products. So this is pretty incredible. And I’m really excited to hear about the different ways that each of these people talk about the modern data stack, because they each come from a different part of it: on the, you know, sort of SaaS side you have the tooling providers, but then you have someone who’s actually using some of these tools to implement stuff, and then you have someone who’s trying to think about how to invest in them. And so I think the variety of perspectives is going to be really, really helpful. So I’m pumped. Kostas, what are you going to ask everyone?

Kostas Pardalis 01:40
I think I will improvise, to be honest, as I usually do,

Eric Dodds 01:44
we always do.

Kostas Pardalis 01:47
Well, I think it’s an excellent opportunity to see whether the modern data stack is just a marketing term, as many people say, or something more than that. So yeah, I think we have the right panel here to figure this out. Hopefully, by the end of this discussion, it’s going to become much more clear why we needed this term and what the essence of that term is. So yeah, that’s my goal for today: trying to understand better what the modern data stack is.

Eric Dodds 02:18
Alright, well, let’s dig in and talk with all of these amazing thinkers about the modern data stack. Welcome to the Data Stack Show. This is probably our most exciting episode. Today, we’re also live streaming this episode, which is really exciting. We have some of the best minds in data here to talk about the modern data stack, and I just could not be more excited. So we have so much to get through; let’s just do quick intros, maybe 30 seconds or a minute, introducing yourself. And I’ll just call out the names since we have such a big crew here today. So Jason, do you want to kick us off?

Jason Pohl 02:57
Sure. Hi. My name is Jason Pohl. I’m a principal solutions architect here at Databricks. I was one of the first 10 solutions architects, and I’ve seen the company grow from just one instance type on one cloud to supporting all three clouds and more instance types than I can count. I lead the data management subject matter expert group for Databricks, so for anything that has to do with data engineering or data governance, I basically help enable the field, our customers, and partners on how to do it best, and serve as a conduit back to product management as well.

Eric Dodds 03:27
Awesome. Amy, how about you?

Amy Deora 03:29
Hi there. I’m Amy Deora. I head up partnerships for dbt Labs, so I lead the relationships with other products that are integrating with dbt, and with consulting partners that are bringing dbt into industries all over the world. Before that, I worked about 15 years in data analytics and data science consulting. Happy to be here today.

Eric Dodds 03:48
We’re happy to have you, Paul, you’re up next

Paul Boccaccio 03:51
I’m Paul Boccaccio. I work at Hinge on the core data platform team, where I’ve built out the monetization part of our pipeline. So this conversation is really interesting to me.

Eric Dodds 04:03
And we can’t wait to hear what you’ve built. All right, Brandon.

Brandon Chen 04:06
Good. Thank you, everyone, for tuning in. I am a manager of our technical product marketing team here at Fivetran. Prior to joining product marketing, I was our first West Coast sales engineer, so I’m really excited to see how Fivetran has grown over these past couple of years. And I should also note that when I’m talking about Fivetran, I’m also referring to HVR, one of the companies that we’ve recently merged with.

Timothy Chen 04:28
Great. Hi everybody, I’m Timothy Chen. I’m an investor here at Essence VC; we invest a lot in data infrastructure, so I’m especially excited about this as well. Before that, I was an engineer working on open source; you know, I contributed to Spark, Kafka, Drill, and other data-related projects. And yeah, I’m definitely seeing a lot of interesting stuff happening in this space.

Eric Dodds 04:50
Great. Well, like I said, we’re all in for a real treat here, and we’re not going to waste any time. So Kostas, why don’t you kick us off with the first question?

Kostas Pardalis 04:57
Yeah, let’s start with the most important one. All right, so I’d like to ask our panel here what the modern data stack is and how they understand it. Let’s start actually with Paul, because I’d like to hear the opinion from a stakeholder, right? Someone who benefits from it and is using it every day. So Paul, what is the modern data stack?

Paul Boccaccio 05:24
It is, of course, a loaded question, because it can mean all kinds of things. But to me, I think it’s a combination of volume, access, and trust. So: pulling heavy volumes of data through in a reliable way. And obviously trust, like, it has to be secure against intrusion and whatever, but it also has to have high data quality. You have to know that what is coming in at various stages of your process is what you get out the other side, because if you’re running these experiments, a very small margin can mean a lot to your data scientists and stakeholders and so on. In terms of access, I was thinking about this as we were talking earlier: you have sort of disparate patterns of access, in that you have to be able to explore the data, not just with machine learning models or what have you, but with actual human exploration, and you have to be able to examine your data at a point in time. So you have repeatability concerns. But you also have, thinking about what is modern, the need to be compliant with the law as well. So privacy, and GDPR: all the ways in which you have to now touch every piece of data across your entire stack at reliable intervals in order to ensure that people are being protected under law.

Kostas Pardalis 06:41
Okay, well, that’s, I think, a very interesting definition. And I’d like to ask next the VC of this panel, because he’s probably getting a lot of pitch decks out there where people are trying to define themselves as part of the modern data stack. So Tim, what do you see out there? How do people communicate the modern data stack to you, and what’s your opinion about it?

Timothy Chen 07:11
Yeah, the modern data stack is one of those buzzwords that you’re so intrigued about that you really have no idea what it means. It just sounds good, so you keep saying it over and over. I think it’s one of those things where it feels like it’s the center of something you just cannot fully grasp, because everybody has their own definition, and it can be used in so many different contexts. But back to your question about what I see: I think what the modern data stack most comprises, from what I can tell, is really the move to the cloud. Everybody is democratizing access to all this data that they need, across different functions. And therefore there’s a collection of tools, a collection of products, that everybody keeps mentioning over and over, and you can sort of change and mix and match them. But there are a few things that are always sticking there, which is kind of like your storage or warehouses, and some level of different things here. So, in some ways, it feels like people are trying to redefine the modern data stack every single day, because, hey, I think there needs to be a real-time streaming one, and maybe there needs to be a better way to get analysts better graphical reports now, right? You can stick a lot of things in here. But I think the core, from what I can tell when I’m actually talking to different customers and friends who are also investing in this space, is that the modern data stack is also an opportunity where people are really looking at all the things they’ve done in the past and asking, hey, do I really need to be doing data the same way as before? With all the new buzzwords like data mesh and all the stuff that we’re seeing right now, do we have a collection of tools and practices that actually help enable democratized access and make data infrastructure and data analytics work in a very different way?
But of course, “modern” has no goal, and also has no specific limitations or even requirements. I know that it’s a very fuzzy word, but I’ll just leave it at that, because we’re all trying to figure it out, right?

Eric Dodds 09:16
Yeah, I’m glad we’re talking about the modern data stack and not data mesh, though, because I think that will at least be a little bit

Timothy Chen 09:24
Oh, plus everything we could have ever heard of. We could have like a five-hour

Paul Boccaccio 09:27
discussion. So

Kostas Pardalis 09:28
Yeah, maybe we should make another panel just for data mesh, and see who is most confused about it there. But I’d like to ask Brandon next. What is interesting about Brandon, and why I’m very interested to hear his opinions, is because I think in many people’s minds the modern data stack is a term that is heavily associated with Fivetran, but also because, as Tim said, an important part of what the modern data stack is, is the democratization of access to data, and I think that’s a big part of the mission and the vision that Fivetran has. So Brandon, the stage is yours: tell us about the modern data

Brandon Chen 10:12
stack. And Fivetran even has a modern data stack conference, so we discussed this quite in detail across the conference, with all the attendees and all the different panels. And I truthfully expect that we’ll be talking about this for multiple years to come, because the definition will continue to evolve. At some point, we might not refer to it as the modern data stack but as some other type of data stack. But I think it was Notorious B.I.G. who said, more tech, more problems. And ultimately, I think that’s where the modern data stack comes from, right? Timothy was talking about this too: the rise of new cloud technologies and how they make it a lot easier to scale out what you’re trying to do with your data strategy. And with all these new technologies and capabilities, as different data teams across different companies continue to expand as well, it really just comes down to: what else can we try to solve for? So then I’m going to throw in another buzzwordy term, machine learning and AI; these are just generalizations of modern data problems that people continue to run into. So what your modern data stack is really depends, in my point of view, on what your company is trying to solve for. Like, if you’ve never utilized any sort of new technology before, imagine your company as a single one-person entity, and all you have is one laptop to work off of and no access to any cloud. Then for you, a modern data stack might simply be a database set up locally on your computer that happens to record transactions from this one little office, so you get what you want. And really, the definition will continue to change as new companies introduce new terms and new use cases. And I fully believe that we’ll try to keep up with all that change as a company as well.

Kostas Pardalis 11:52
Yep, makes a lot of sense. All right. So next, I’d like to hear what Jason has to say, as part of Databricks. What I find extremely interesting about Databricks and Spark is that we are talking about a tool that has existed for a long time and has evolved over all these years. So Jason, from your experience working in this space all this time, what is the modern data stack?

Jason Pohl 12:25
Yeah, I think for me, the modern data stack: I used to be a data warehouse architect years ago. So I would work with companies, and I would use the popular ETL, BI, and database tools of the day to build these data warehouses. And I think since then, we’ve had these digital-native businesses like Facebook, Airbnb, and Uber, and they’ve built their entire businesses off of different tech stacks than the ones I used to implement 15 to 20 years ago. And those tech stacks were open; they came from open source. Initially, there wasn’t a public cloud to go to, but now there is. What was really unique was that they were using these tech stacks, and it was really multiple stacks, to do data processing and historical analytics, but also to do artificial intelligence and apply machine learning to their data, to be able to optimize everything from lead flow or lead gen to ride routes for Uber. So there’s been, I think, an evolution where, as these digital natives started up, they used open source, created their own open source projects, and then also used those for applying machine learning. But now those same projects have been ported to the cloud. And I think the modern data stack is the culmination of all these open projects that have now been hosted either by the cloud providers themselves or by other cloud services like Databricks, where we host Apache Spark and MLflow and all these other open source projects that we’ve developed over the years. So I see it as a way for companies to pick and choose whichever parts of the stack they want, the best of breed, and combine them in a way that gives them the maximum velocity for whatever they’re trying to do.

Kostas Pardalis 14:02
Okay, that’s great. Amy, I left you to be the last one, because I have some very important reasons for that, and we will see them in a bit, because there is a follow-up question. But before we move to the follow-up question, tell us, from your personal perspective, because you’ve also been in this space for a long time, and also from the dbt perspective: what is the modern data stack?

Amy Deora 14:30
Yeah, I think of the modern data stack kind of in contrast, right, to what we had before: how data teams were working before we had this suite of tools. And probably the biggest change is what Jason said about having really focused, best-of-breed tools for each specific job that the data team does, right? So in the past, when we were looking at different solutions, folks would look at maybe the Informaticas of the world, these kind of all-in-one solutions that did a lot of different things, right? And that was thought to be easier and better. But now we have a data team that’s choosing the very best tool for the particular job they’re doing, whether that’s ingestion or transformation. We’re even seeing a lot of innovation in the business intelligence layer, right, going beyond traditional dashboards to tools that bring notebooks from data science into analytics in a new way, and new tools where we’re bringing data from the data warehouse back to Salesforce and back to these other applications in a way that we weren’t before. So: finding those best-in-breed tools and being able to have interoperability, so teams have the ability to choose the tool that works best for their particular use case, and also to change those tools, right? When folks see new data warehouses on the scene, new tools on the scene, because of the interoperability between all of those different tools in the modern data stack, folks can then choose what’s best for their use case. And a lot of innovation just happens from that choice the team now has in terms of the tools that they use.

Kostas Pardalis 16:10
Mm, yeah, makes a lot of sense. Anyone want

Brandon Chen 16:14
to add something? Yeah, I think both Jason’s and Amy’s responses are good examples of how their companies have also continued to push the idea of the modern data stack, right? Like, before Databricks, there was the concept of, let’s say, a data lake, and Hadoop really pushed that concept forward. And then Databricks came out with this idea: okay, on top of this data lake, how are we going to make things more structured, how are we going to add, for example, ACID transactions to your data lake? And now we have this concept of the data lakehouse, which AWS has also started to, what is it called, adopt. And then there’s dbt: for example, the lines between typical data analysts and typical data engineers are continuing to get more and more blurred with what dbt is putting on top of the traditional analyst workflow and traditional modeling, and now we have this term “analytics engineer.” These are all, in my point of view, great examples of how technology, and the terms that are being pushed out as this technology evolves, continue to also make the definition of the modern data stack evolve as well.

Kostas Pardalis 17:12
Mm-hmm, yeah, that’s a great point. I’ll go back to Amy, and I want to ask my next question. The reason that I ask this first of you, Amy, is because you’re a person that works with partnerships. I think everyone on this panel will agree that partnerships are something very important for everyone who is part of the data stack, which makes sense, right? Because each tool needs another tool in order to deliver value; that’s why we have a stack and not just one tool out there. So my question, Amy, for you, and for the rest of the panel later, is: what are, let’s say, the most fundamental and important parts of the stack? What defines the stack? What functionality, at minimum, do we need in order to say that we have implemented the modern data stack?

Amy Deora 18:02
Yeah, this is a question where the answer is definitely evolving, right? If you had asked folks maybe a year ago, they would say data ingestion, a data warehouse, transformation, and a BI tool, right? They would say those are the categories. Now we really have a lot of innovation around a lot of that. So people would say ingestion, then a data warehouse or a data lake, right, or a query engine; there are all kinds of pieces that can make up our data store. Then transformation. Then we might say that in the BI layer we’ll have more exploratory analytics tools, like notebooks; we’ll have traditional BI tools, like dashboards; and we’ll also have what’s sometimes called reverse ETL, or operational analytics, basically this idea of being able to take data from our warehouse and put it back into our systems. Then there are two other categories that are now part of what most people call the modern data stack, but they’re still in a bit of an exploratory stage. One of them is observability and testing, so data quality observability. There are a lot of companies in this space, and a lot of folks are figuring out exactly how this fits in the modern data stack. But to Paul’s point earlier, this is going to be important, right, understanding testing and observability. Some folks also put data privacy into that bucket as well. Then there’s a part that we’re really excited about at dbt, which is what some people call the metrics layer: this idea of a layer between your data transformation, which creates your data sets that are ready for analysis, and your traditional BI tools.
How do we make sure that the definitions of the metrics that we use to measure our business are consistent across all of the folks using that data, whether they’re using it for BI, using it for exploratory analytics, or pushing it into another system? So the metrics layer is a piece that’s still evolving, but a lot of companies and a lot of different folks are really thinking about it as a new, evolving part of what we call the modern stack.

Kostas Pardalis 20:12
Okay, that’s, I think, a very thorough definition of what it is. It’s great to hear about all these different layers that it has, let’s say. And I think that’s probably also contributing a lot to, let’s say, the problem that people have out there with defining it exactly and saying what this is and why we’re using it, because there are also many different variations, right? Not every company has exactly the same needs, or is at the same maturity level to utilize, like, AI in the middle; some companies just do their BI layer, right? That doesn’t mean that there is no stack there, or that the modern data stack doesn’t apply to them. What does Fivetran think about the most important components of the data stack? Of course, ingestion, right? Do we need that, or can we live without it? What do you think?

Brandon Chen 21:10
Yeah, I mean, in my point of view, of course, ingestion is always first and foremost, right? Getting the data to where it needs to be, so that you can actually do what you want to do with the data with the complementary tools. And that feeds back into Amy’s earlier point about interoperability: making sure that all of the tools that you pick are seamless and they all work together. And many times you start with, let’s say, the data storage layer, right, trying to figure out how to make the ad hoc queries you’re running a lot more efficient. So in many points of view, it is the data storage piece that is the most important to many companies, because they want to make sure that, hey, as their applications and the number of tools they’re using continue to evolve, whatever underlying data storage system they’re using is able to support all that new data that they’re going to be working off of. And that consideration is broken into a few pieces. Of course, price is always going to be a part of it; money is always going to play a part. But another part is query optimization: how efficiently can the queries that your team is used to running actually work on these different data storage pieces? In my point of view, it’s fine to start there; just always consider, as you build out the rest of the stack, how it’s going to function with that piece. And of course, part of it is going to be driven by what your company wants to do. What is driving the move to reevaluate certain data tools you’re using? In that case, it oftentimes starts with: what is the problem that you’re trying to solve for? Is it that your data storage doesn’t work? And just going really high level here:
Is it that your data integration is breaking all the time, and you need a fully managed service to accommodate changes, like API changes, across all the tools you use? It depends on what problems you’re having. Sorry, it’s a bit of a non-answer.

Kostas Pardalis 22:50
Ah, no, no, no, I think it’s a very good point. I have a question that’s mainly just for you, because it has to do with ingestion. I mean, Fivetran has been in the space for quite a while and has actually disrupted, let’s say, the space. I remember, if we were talking six or seven years ago, all the noise was about how we can get access to our data, right. And I think a big part of the mission that Fivetran has is to make it as easy as possible to get access to your data. Have you seen, after all these years, that this goal has been achieved, and that more and more companies are actually focused on implementing other parts of the modern data stack and consider the ingestion part solved? Or do you think there’s still a lot of work to be done there?

Brandon Chen 23:40
I think there’s always going to be more to do. When we think about data integration and data replication, sometimes we just think about it in terms of how to get data from point A to point B. But there’s a lot that goes into it beyond just getting data from point A to point B: how can we make it as efficient as possible? How can we make sure that we’re connecting not just to one endpoint through some API, but pulling all the fields? How can we make sure we’re structuring that data so that when it lands in the warehouse, whatever queries you want to run will actually function, because the data types have been cast correctly? And part of making data integration easy, and making data accessible, is making sure that everyone understands how to use the tool. So one of the core components of Fivetran is ease of use. It’s easy to use; you won’t see a lot of buttons in there. When I was a sales engineer, sometimes I actually dreaded demoing Fivetran, because people would ask all these questions about what’s going on behind the scenes. I can show you what’s going on in a workflow, but you won’t see anything in the UI except for some GIFs moving back and forth as a representation. And the reason I say this will always evolve is because if the goal is to make data integration as easy as possible, that means abstracting away a lot of those considerations, right? Things like reading API documentation should, in theory, be a thing of the past with these tools; maintenance as, let’s say, schemas change in the data source system should also be a thing of the past. And to make all those backend considerations work, a lot of it goes back to: will it evolve?
Yes, it will evolve, because people continue to do funky things to their source systems, and people continue to adopt best-of-breed tools unique to their departments and the various challenges that they’re trying to solve. So there will, in my opinion, always be work for Fivetran to do: further optimizing some of the backend considerations we’re making on behalf of our customers, supporting higher throughputs as the data volumes across all these sources continue to grow, and ultimately making it as out-of-the-box as possible, making sure that we hit all the edge cases that our thousands of customers continue to run into with their funky setups.
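As a concrete illustration of the "data types cast correctly" point Brandon makes, here is a minimal sketch of the kind of type coercion an ingestion tool applies to raw API records before loading them into a warehouse. The schema and record are hypothetical; this is not Fivetran's actual code:

```python
# Hypothetical sketch of casting raw (string-valued) source records to a
# declared destination schema before loading into a warehouse.
from datetime import date

# Declared destination schema: column name -> casting function
SCHEMA = {
    "order_id": int,
    "amount": float,
    "created": date.fromisoformat,
}

def cast_record(raw):
    """Cast each declared column to its destination type; a missing or
    NULL source field becomes None so the load doesn't break on it."""
    return {
        col: (cast(raw[col]) if raw.get(col) is not None else None)
        for col, cast in SCHEMA.items()
    }

row = cast_record({"order_id": "42", "amount": "19.99", "created": "2022-01-05"})
print(row["order_id"], row["amount"], row["created"].isoformat())
```

In a real pipeline this step also has to handle schema drift, which is why tools in this space watch for new or changed source fields rather than assuming the schema is fixed.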

Kostas Pardalis 25:40
Jason, you’re the last one of the vendors that we have on the panel. So what do you think, from the Databricks perspective, about how to define, let’s say, the fundamental pieces of the modern data stack?

Jason Pohl 25:52
Yeah, I mean, one thing that’s been changing a little bit since I started: when I first joined Databricks, it was pretty common as an architecture to have a data lake where you store all of your data and do a lot of the data transformations, and then you’d offload some of that data to a data warehouse, either Redshift, or Snowflake, or BigQuery, or something, and you would basically use that for serving up the analytical queries from business intelligence. And I think we’ve been marching towards this confluence for a while, but now you really can achieve both things on the data lake. So we have this data lakehouse concept that we’ve come up with. The idea behind it is that you can have the economics of a data lake and do all your ETL there, but you can also have your interactive BI queries run on top of that data lake using SQL. So essentially, you can write your data once and then run all your use cases on top of it, whether it’s streaming, or SQL for BI, or machine learning, or graph analysis; it doesn’t really matter. And so I think with the modern data stack, there are a number of different tools out there that do these different individual things, but you can combine all of them on top of the lakehouse, as long as they’re adhering to open standards in some way. This is kind of like realizing what Google realized 20-some years ago when they wrote the white paper for MapReduce: they realized that they had way too much data to copy it around to do their processing, and they were going to have to bring the processing to the data. That’s where the impetus of MapReduce started, and then it spawned off all these other open source projects from there.
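The "write your data once, run every use case on top of it" idea Jason describes can be sketched in miniature. Here, Python's built-in `sqlite3` stands in for the shared lakehouse table (this is a toy analogy, not Databricks' actual stack, and the `rides` data is invented): one copy of the data serves both a BI-style SQL aggregate and an ML-style feature extraction.

```python
# Toy illustration of the lakehouse idea: a single copy of the data
# serves multiple workloads. sqlite3 is a stand-in for the shared
# open-format table; the data is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rides (city TEXT, minutes REAL)")
conn.executemany(
    "INSERT INTO rides VALUES (?, ?)",
    [("nyc", 12.0), ("nyc", 18.0), ("sf", 25.0)],
)

# Workload 1: BI-style SQL aggregate over the single copy of the data
bi = conn.execute(
    "SELECT city, AVG(minutes) FROM rides GROUP BY city ORDER BY city"
).fetchall()

# Workload 2: ML-style feature pull over the very same rows
features = [r[0] for r in conn.execute("SELECT minutes FROM rides ORDER BY minutes")]

print(bi)        # [('nyc', 15.0), ('sf', 25.0)]
print(features)  # [12.0, 18.0, 25.0]
```

The point of the analogy is that neither workload required exporting the data into a second, specialized store; in a real lakehouse the same role is played by an open table format that SQL engines and ML frameworks can both read.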

Kostas Pardalis 27:35
That’s very interesting. And I’ll ask our investor next; I think this is going to be interesting, because some people might also find good ideas to go pitch investors with after that. So, from an investor’s point of view, what are the important parts that someone needs when defining and implementing this data stack?

Timothy Chen 27:59
From the customer point of view, or like what I'd actually want to buy?

Kostas Pardalis 28:02
From a customer's, because, okay, I guess investing is about investing in ideas that have a market. So I think it's —

Timothy Chen 28:12
You know, I think, just hearing the last few folks talk — wait, we're actually talking about different kinds of personas that are looking at the stack in different ways, right? When we talk about ingestion, or infrastructure, or data, those are definitely more of a data engineer, infrastructure sort of point of view. So that's one kind of customer, right? And then we look at the analytical users — data analytics, where dbt has a large brand around those kinds of users coming into the space. But what's been really interesting and exciting for us as investors is that data is truly getting pushed out to many different kinds of functions. Lots of people are increasing budgets to hopefully become a data-driven enterprise, right, or data-driven businesses: where can we actually leverage analytics? Where can we leverage AI? Where do we actually have to do processes manually — is there a way to do RPA? So data is driving insights, and insights are driving actions, which in turn become automations — can we continue to feed this loop into multiple places? And so there's also a segment of users who are just business analysts, product managers, anybody doing any sort of function like sales or marketing, right? This is where reverse ETL even came from. Why do we even need to have this in a modern data stack? Well, we don't just stick data into a dashboard — we have to push it out to the other tools that those people use. And so if you're looking at what is necessary to implement this, it really is a very tricky question, because everyone's needs are different, and everyone's maturity level in how they view data and handle complexity differs a lot.
And that's actually one of the difficulties in investing in this space, because, hey, one person will maybe propose: we need an all-in-one solution, the modern data stack in a box — install everything and you go. Don't worry about anything else, don't worry about any other vendors, right? Those are solutions, and there are definitely companies out there trying to reduce all the complexity into one single-click install. And there are pros and cons to that, right? How do you do best of breed if you have that? How do you hide all the complexity? Can you make the Fivetran-like experience just mask every single vendor out there? That's one kind of consideration. On the flip side, if you say, hey, we want to be the most modular, where you can truly choose whatever you need, there's a lot of engineering work and sort of maintenance work and integration glue work that needs to happen. And so there's a huge spectrum, and everyone's trying to figure it out. If you talk to any head of data, they're scratching their heads every day still, because they're fighting this fight: I'm hearing all these buzzwords about the modern data stack, and I think maybe I need something like this, but I'm not entirely sure. How do I mature my organization to understand what I need first? How long does it take to implement? What are the trade-offs? How should I even think about doing this in the first place? And it's not easy, because if you talk to data engineers, they have one set of concerns, and analysts have another — to run a truly data-driven business, you really need somebody who understands all those pieces. Most of these companies need someone who can actually say: here's what we're going to buy first, let's ignore all the noise, let's take this approach, start with this business unit, and spend this much money, right?
And for larger places, it might start from more foundational pieces, like the Databricks sort, and work your way up. In smaller companies, it might start from more defined, smaller pieces of solutions that you can kind of combine. You need both. But it's hard to have one product catch all of everyone's needs, and that's the complexity of the space. And we're still evolving, with all the new tools people keep proposing. So it's hard to answer the question, because we have so many discussions and there's no one single answer — all these pieces have to relate to each other and talk to each other, and there's no one canonical way to implement the modern data stack.

Kostas Pardalis 32:27
Yeah, I might have a question that's for you — and actually, it's a question that comes from the community. There are very specific semantics around the term "stack" when it comes to developers, right? For example, we have the Jamstack — that's like the latest thing. Before that, like a decade-plus ago, we had LAMP, where we had Linux bundled together with MySQL, and Apache, and PHP. So what a developer stack is, is something that's extremely well defined in the mind of a developer. And I'm asking you because you've been a developer, so you understand this very well. Do you think we will reach a point where we will have something similar for data, too, and the modern data stack can become this?

Timothy Chen 33:21
Yeah, I think it would be interesting, because if you look at everyone's definition of the modern data stack at the moment, there actually are a few pieces that are fairly consistent — that hasn't changed much. And so you kind of have to play the devil's advocate on each category a little bit, right? Like, okay, are there new categories we need? Because we're constantly inserting new categories. You know, LAMP is just four letters. At Mesosphere, when I worked there, there was the SMACK stack, right? That's only five letters. And the modern data stack has an infinite number of letters — there's no set number of characters anymore. It's like an unbuffered string, you can just do whatever you want. And that's why it's hard: we're going to increase the number of categories for sure. I don't think we're ever going to have just five. There's more business analyst stuff, there are AI things, and we haven't even talked about catalogs or discoverability — there's stuff we're going to add to the stack over time. And will we see consolidation in each one of those categories? I think we will. And there are also going to be cross-functional things: my tool that does catalog will also do quality, my quality tool also does catalog. Multiple vendor names are showing up in different categories of problems. So will we get there? Will all this shuffling end and just become one acronym? I don't think so. Our future will be really interesting. I do think there's definitely going to be consolidation of logos, but there's no way to have a crystal ball — I don't know exactly what that will look like. I do think, though, like I said, the number of categories will increase. So it's even debatable what categories even make up the modern data stack in the first place, right? And then even figuring out what the cross-section of these pieces really is.

Kostas Pardalis 35:24
100%, 100%. And I think if someone sits down and lists all the different categories that appear — even if you take data quality, for example, there are probably sub-categories inside that. So yeah, absolutely. And I think that's one of the reasons people should keep in mind why it's so hard to define: all these vendors are appearing right now because the market is ready for them, but we still need time to define exactly all the categories and all these things. Paul, I left you for last, but for a good reason: you, let's say, represent the most important stakeholder, which is the customer and the user, and you have implemented a very large-scale data stack. So what is the data stack for you, and what are the essential components of this modern data stack?

Paul Boccaccio 36:23
So I think it's related to two pieces. It's easy to talk about data in the abstract, but really thinking about the end purpose of all of this machination is important at each step. We're doing all of this work so that people can answer questions to drive the business to do particular things — tangible decisions have to be made as a result of all of these systems. So for me, it's mostly about trustworthiness, even over performance — although trust is tied up in performance: if you don't have anything to look at, it may be very trustworthy, but it's not very useful. Vendors like Materialize and others in this space are dealing with the abstraction question that Brandon was talking about earlier, and they're doing it in a way that is observable. So for me, I want to know what is happening at each stage. I want to have an eye into the opaque box and see what decisions it is making about my data. So correctness and trustworthiness seem to be the most essential pieces of the modern data stack to me. Because if you're serving up answers that are not provably correct, then you lose stakeholder trust, and you'll also lead your business to make the wrong decision, maybe at a crucial time. So you have to make sure that you're telling people the truth.

Kostas Pardalis 37:54
Yeah, 100%. I think that's a pretty hard problem to solve, and there's a lot of debate on what's the best way to implement quality. I remember having discussions where the question was: is quality something that can be implemented by one system, or should it be the responsibility of each part of the data stack? Like, is the pipeline responsible for its own quality, and then the storage for its own? I think it's both a product and an engineering problem — a difficult one to solve — but that's also what makes it so interesting and fascinating. Eric, the stage is yours.

Eric Dodds 38:33
Oh, well, perfect timing, because I was thinking, Paul, I have a two-part question for you, following along with thinking beyond the actual tooling, or individual componentry, into what the outcome of these things coming together is. Before we get there, though, I'd love to know, from your perspective — I loved your comment about thinking about this in the abstract. When you think about the modern data stack in the abstract, you sort of arrive on the scene with this amazing modern toolset, right? But in reality, different parts of the stack become modernized at different rates. Could you speak a little bit to what that dynamic is like, managing a large-scale, complex data stack?

Paul Boccaccio 39:23
Sure. I mean, I think it starts with your message bus — however you pass information from one group inside your organization to another. Way back in the day, you had maybe one database, and everybody would go ask the Database, capital D. And that doesn't really work once you scale past enough people to sit in a room, you know? So being able to communicate between different parts of your organization means — I mean, people love to fetishize Conway's Law or whatever, but you have these sub-organizations, or sub-organisms, that have to learn to trust each other as well. So if you're modernizing at a different speed, you have to prove that your piece is trustworthy, first of all, but also that it will work with the other organization's piece. Passing data back and forth — you know, RudderStack does that, that's your bread and butter. So it's about having inspectability into, an eye into, the systems that communicate internally. You're taking data in from the outside, and there's the idea of a single source of truth, or what have you, but you end up having multiple views on that data. I suppose it does come back to trust again. But that's kind of the main piece: you have your overall data strategy — this is how we're going to test, this is how we're going to manage lineage throughout the organization, this is how we're going to manage just knowing: what do we know? When do we know it? How do we know that we know what we know? This kind of —

Eric Dodds 41:01
Circular? Sure. I think it's so fun to talk about the specific tech, but it's a really helpful thought pattern to put trust at the center and say you modernize your stack according to the ability, or sort of the need, of various components to deliver trust. That's such a helpful thought pattern. So, second part of the question. Building trust, especially as you have a stack that's increasing in complexity, is non-trivial, right? And I know there's some tooling around that; my guess is that on the ground in a lot of companies, you have a team that's managing that across the toolset. So let's say trust is a central use case. Are there other use cases in the business? I mean, analytics is the primary use case, but let's abstract the modern data stack and say you have a great set of tools in every part. What are the other use cases that are hard to build? Are those still in the analytics space? Are those enabling teams who are delivering the actual experience to the end user? I'd just love to know — great, you have all the modern tools, but what are still the hard problems to solve, even if you have best in class?

Paul Boccaccio 42:15
Well, I think — not to give the answer "it depends," which is always both true and disappointing — but you do have to focus on that "it depends," focusing on flexibility as your method of delivery. I'm thinking of MLOps, for instance: everybody seems to be reinventing MLOps for themselves, and being like, our use case is super different from yours — we're doing statistics on numbers. And it's like, cool. So it's about talking to individual ML engineers, or talking to somebody who's building out an experimentation platform, or integrating an experimentation platform, something like that, and getting them enough information to answer the questions that they need to answer now — without building a behemoth of a thing where you're like, this solves all of human knowledge, and they're like, cool, we needed you to not do that and give us an answer tomorrow. So it's about talking with individual people. That's a lot of companies' value sets — at Hinge specifically, we have a lot of individual power to make what we think is necessary for this space. And so it's about having the flexibility within your team to make whatever "it depends" means, to deliver the answers that you need to answer. That's, yeah —

Eric Dodds 43:45
Yeah, that's great. I once had a coach who said, you know, there aren't really advanced maneuvers — there's just mastery of the basics, and then combining those to apply to a really specific situation and a really specific context on the field. So I'm glad that you said "it depends," because there's not a tidy answer to that — it really does depend. And I think, to your point, if you have the core there, and a team that has a toolset that allows them to be flexible to address those needs, that's a great answer. All right. Well, I think we're coming up close to time here. Kostas, why don't you go ahead — I think we have time for another question.

Kostas Pardalis 44:23
Yeah, actually, we have two or three questions that are coming from the community. These are questions that everyone will want to answer — I'll try to find the right person for each question, but I think it makes sense to try to get some quick answers to these. I'd like to start and ask Brandon about ELT. It's something that is heavily associated with Fivetran, and many people argue that ELT is nothing new — it's something that has existed, like, forever. Help us understand a little bit better: at the end of the day, what's the difference between ETL and ELT, and how new is this thing?

Brandon Chen 45:07
Yeah, and my thoughts on this are pretty similar, I'd say, to George, our CEO's, thoughts. Everyone's right: ELT is not new — hey, you can use other tools to do ELT too. You can script your own nightly dump-and-full-load refresh; effectively, that's what you're doing with ELT if you just take it at face value: I'm extracting and loading everything, and the transformation happens later in the warehouse. The difference between what Fivetran does and the process taken at face value is all that backend stuff — all the things of: hey, how can we make sure we're not just doing a full nightly pull and, you know, crashing all your systems? How can we make sure that we're using the most efficient way possible to actually read this information? And how can we make sure that as your data updates over time — where I'm categorizing updates as schema changes, updates to values in previous rows, or net-new rows — all those changes are pushed? And the evolution of the term ELT, in my point of view, will continue more and more — oh, sorry, the value of the efficiency can't be overstated, because it'll only continue to be more important, especially as we move towards more real-time analytics. And part of ELT, too, is making sure that we're doing everything that Paul talked about — and I think Paul has a very interesting point of view on the data tracking piece — making sure that you understand where your data is flowing, making sure that you're staying within compliance, making sure that your data stays secure and adheres to ever-changing rules and laws as they relate to data privacy, like GDPR, CCPA, any of those. I think ELT is nothing new, but the way that we're approaching interoperability with other toolsets really enables the best, most efficient use of your data to work off of.
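[Editor's note: the incremental sync Brandon contrasts with a full nightly dump can be sketched as follows — track a cursor so each run pulls only changed rows. The table shape, field names, and cursor column here are hypothetical illustrations, not Fivetran's actual mechanism.]

```python
# Source table rows with an updated_at timestamp (hypothetical data).
source = [
    {"id": 1, "email": "a@x.com", "updated_at": 100},
    {"id": 2, "email": "b@x.com", "updated_at": 150},
    {"id": 3, "email": "c@x.com", "updated_at": 250},
]

def incremental_sync(source_rows, destination, cursor):
    """Pull only rows changed since the last cursor; upsert by primary key."""
    changed = [r for r in source_rows if r["updated_at"] > cursor]
    for row in changed:
        destination[row["id"]] = row  # upsert: insert new or overwrite existing
    # Advance the cursor to the newest change we saw (or keep it if nothing changed).
    return max((r["updated_at"] for r in changed), default=cursor)

warehouse = {}
cursor = incremental_sync(source, warehouse, cursor=0)  # first run: all 3 rows
cursor = incremental_sync(source, warehouse, cursor)    # second run: nothing new to pull
```

Real connectors also handle schema changes and deletions, which a timestamp cursor alone cannot detect — that is part of the "backend stuff" Brandon is pointing at.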

Kostas Pardalis 46:54
Yeah, I definitely agree with you. What I think people get confused by a little bit is that many things — probably everything in technology — are not new; we keep reinventing things, and there is a good reason for that: we have different needs and different technologies that we have to work with. That's why the database was invented in the 70s, but we are still building databases, right, and creating new categories of databases. The same goes for ETL: the moment we had the database, we started moving data into it, and we are still doing it — we keep reinventing ETL and implementing it in different ways. So I don't think people should be mad about using terms from the past, or about processes we keep doing from the past; there are good reasons to iterate on these technologies and create a different version of them, right? At least that's how I see it. But I totally agree with you, and I think that was a great explanation of why ELT is something we use more often today. Next, I'd like to ask Amy something, which I think dbt is probably the best positioned to answer. Traditionally, SQL has been a very hard language to apply all the best practices that software engineering has come up with, right? It's one of the reasons that it was always a bit of, let's say, a second-class citizen among languages. One of the things that's really, really hard is testing: it's very hard to maintain a code base of SQL and do testing and unit tests and all that stuff, like in software engineering. How do you see this from the perspective of dbt? Because my feeling is that dbt has added a lot of value here and has changed things. So I'd love to hear your opinion on that.
And what do you think is missing, and what do you think has been solved because of dbt?

Amy Deora 49:03
Yeah, so for folks that aren't familiar with testing in the dbt context: a test is any SQL statement that should return zero rows if the test is passed. And we use tests in dbt to look at quality — both data quality and just unexpected things that are happening in your data pipeline. Those tests can then either provide a warning to an analyst, or someone, that something is unexpected in the data pipeline, or flag that something is broken, with the idea that you can find those things before the end user of your pipeline finds them. There are lots of ways that folks in the community — this is really a point of a lot of innovation — are using the dbt testing functionality to implement things that look like unit tests from software engineering. There was always this hard part of using SQL: it's hard to do traditional unit testing, right, where you have a synthetic data set that you can load in and check: okay, did this operation do exactly this at this step along the way? There are lots of folks using dbt tests to implement things that look a lot like unit testing. If you just Google "dbt unit tests," there are several folks who have posted ways and frameworks for doing this and testing data using the dbt test framework. So it's definitely an area where we're seeing a lot of innovation and a lot of great ideas, and we're providing those tools to the community so that folks can develop that set of best practices.
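[Editor's note: Amy's definition — a test is a SQL statement that passes when it returns zero rows — can be illustrated with Python's built-in sqlite3. This is a toy sketch of the idea, not dbt itself; the table and check names are made up.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 5.00)])

def run_test(connection, sql):
    """dbt-style test: the SQL selects *failing* rows, so zero rows means pass."""
    failures = connection.execute(sql).fetchall()
    return len(failures) == 0

# A not_null-style check: select rows that violate the expectation.
id_check = run_test(con, "SELECT * FROM orders WHERE id IS NULL")

# A range-style check: no negative amounts allowed.
amount_check = run_test(con, "SELECT * FROM orders WHERE amount < 0")

# Introduce bad data; the same test now surfaces the failing row.
con.execute("INSERT INTO orders VALUES (3, -1.0)")
amount_check_after_bad_row = run_test(con, "SELECT * FROM orders WHERE amount < 0")
```

The appeal of the convention is that any question you can phrase as "select the rows that shouldn't exist" becomes a test, with no separate assertion language needed.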

Kostas Pardalis 50:43
Okay, that's great. What I would add is that people keep thinking this is still something really hard to do with SQL — and, as we said, the same goes for applying the rest of the software engineering best practices — but this has changed, actually. There are still things happening, a lot of innovation still happening there, but I think that with the introduction of dbt, many of these problems are not that much of a problem anymore. It's more about figuring out what's the best way to do it, not whether we can do it at all, right? That's why I wanted to ask you about that, because from my perspective that was one of the huge contributions of dbt as a framework to the community. Cool. The next one is for Jason, and I have a bit of a provocative question for him — I'm sure, I cannot help myself, I have to ask it. As was mentioned many times during our conversation today — pretty much everyone mentioned it — storage is probably one of the most important parts of the stack we are talking about, right? Without storage, we have nothing. Do you see a future where the data warehouse is not at the center of the data stack, and another technology — like the lakehouse, for example, or the data lake, or, I don't know, something exotic I don't even know about — takes the position that the data warehouse has today, in terms of how, let's say, important it is for the data stack?

Jason Pohl 52:20
Yeah, I mean, I think the data warehouse — I don't know if it's ever really been at the center; I think it's a bit on the edge, to be honest. Because if you ask people who have a data warehouse, do you also have a data lake? Most of them will say yes. If you ask them, do you have more data in your data lake or your data warehouse? Most of them would say most of the data is in the data lake. And in fact, the data that's in the data warehouse — it's not that it only resides there; it's basically just a replicated copy of data that's already in the data lake. So I feel like the reason people were attracted to data lakes in the first place was that it's an easy, cheap way to store any type of data — not just the structured data that you might need, but unstructured data, like pictures and videos. I worked with one customer, an advertising agency, who was doing a campaign for a soft drink company, and they wanted to look at social media pictures to see which pictures had that soft drink in them, and what the demographics of the people in the pictures were, so that they could better market their product. I don't know if you could do that in a data warehouse — it'd be really hard. And I feel like those types of advanced use cases are just going to multiply, and the people and companies that can master that type of analysis are basically going to get an edge over their competition — which is why you see the top 1% of the Fortune 500 being the FAANG companies: these are the companies that have mastered how to do this. So I do feel like the data lake is not going to go away, because that is the place to do this granular, machine learning type of analysis. And I do feel like now you can do high-performance interactive queries on the data lake — Databricks recently broke the data warehousing benchmark, the TPC-DS benchmark.
So we're able to prove that you can have fast queries on a data lake and you don't need a data warehouse anymore. So if you wanted to consolidate to fewer things on that stack, I don't know why you would have to have a separate data warehouse anymore.

Kostas Pardalis 54:17
Yeah, that's great — exactly the answer that I was expecting, to be honest. Thank you so much. I'm personally very interested in this space, in what's happening today with all the innovation around data lakes — not just Databricks; you also see projects like Hudi and Iceberg out there. There's a lot of innovation happening, and it's very interesting to see what will happen in the future. So hopefully we'll have the opportunity to chat about that again in a couple of months. So, we are at the end here, and I'd like to close our panel today with one last question that I'd love for each of you to answer — with, I don't know, just one word if possible, right? And the question is: what next technology makes you very excited? What do you want to see in the market around the data stack, and what would you like to share with our audience in terms of excitement, things that are happening? Let's start with you, Jason.

Jason Pohl 55:31
I'm kind of interested in the governance layer for these stacks. In the a16z diagram, it's at the bottom somewhere, but I think it's at the bottom because you kind of have to have it — and all these different tools have different ways of governing either their data assets, or AI assets, or whatever. So I think unifying that is going to be something interesting. And then there's just the sheer number of open source data catalogs that have come onto the scene in the last few years — I think there's like half a dozen of them, and maybe a handful of commercial companies behind them. Seeing how that plays out is going to be interesting, because I'm a big fan of open source, but I feel like it's hard to have six open source projects doing the same thing — it usually whittles down to two. So it's going to be interesting to see how that whittles down.

Kostas Pardalis 56:18
Alright, that's interesting. Amy, what about you?

Amy Deora 56:22
Yeah, I think the idea of headless BI — the idea of keeping your metrics in one layer that can then feed all kinds of various BI tools, including the set of BI tools that are very specific to industry use cases. So I think that's going to be really interesting: everyone in the organization being able to interrogate data using tools that are really specific to their use case, but all from one source of truth.
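[Editor's note: the headless BI idea Amy describes — define each metric once, serve it to any downstream tool — can be sketched like this. The metric names and data shape are hypothetical; real metrics layers compile definitions to SQL rather than running Python over rows.]

```python
# One shared metric definition, instead of each BI tool re-implementing the math.
METRICS = {
    "revenue":     lambda rows: sum(r["amount"] for r in rows),
    "order_count": lambda rows: len(rows),
    "avg_order":   lambda rows: sum(r["amount"] for r in rows) / len(rows),
}

def query_metric(name, rows):
    """Any downstream tool asks the metrics layer by name; the logic lives in one place."""
    return METRICS[name](rows)

orders = [{"amount": 10.0}, {"amount": 30.0}]
revenue = query_metric("revenue", orders)   # a dashboard and a spreadsheet plugin
avg = query_metric("avg_order", orders)     # both get the same number
```

The point of the pattern is the single source of truth: change the definition of "revenue" once and every consuming tool agrees, instead of five dashboards drifting apart.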

Kostas Pardalis 56:48
That's very exciting. Brandon?

Brandon Chen 56:52
Yeah, very similar to Jason, maybe with a slightly different approach. A lot of these data catalog tools are very interesting to me, because one of the problems that I see with a lot of analytics teams is just the rate of onboarding — understanding what they're working with, understanding what the field definitions are. So any tool that could solve for that — whether it's an explicit catalog tool or some other data dictionary evolution — would be fantastic to have.

Kostas Pardalis 57:17
And last but not least, Paul,

Paul Boccaccio 57:21
I'm split, honestly. Because, personally, if I didn't have to think about value to the business, I would say bespoke ML — companies like Hugging Face or something like that, which are delivering easy ML that you can just deploy via SageMaker, via whatever tool, and get back fast answers to your questions. But that ties into my real answer, which is democratization. So, similar, I think, to Amy's answer: allowing each person to ask the questions that are most pressing to them without friction — giving them access to the most relevant parts of the data, without confusing them with irrelevant pieces. Okay.

Kostas Pardalis 58:13
Thank you. I'll give the microphone to Eric. I really enjoyed this panel today — I hope that you also had fun, and hopefully our audience did too, and we're all a little bit wiser after today's panel and understand a little bit better what the modern data stack is. Eric?

Eric Dodds 58:35
Yes — well, the live stream listeners got to experience that, but for the podcast listeners, that one was edited out. This has been amazing; I learned a ton. And I think one of the big takeaways is that it's an evolving question, and we're all working on some pretty hard stuff here — but some pretty exciting stuff that's enabling all sorts of interesting use cases, technologies, and job roles inside a company. So thank you for your time, thank you for helping us understand all of these things on a deeper level, and we'll catch you on the next show.

Brandon Chen 59:10
Thank you so much. Enjoy the rest of your day.

Eric Dodds 59:13
What a treat to hear from so many great minds in the data space. I think one of my big takeaways is that the data stack is changing. It has changed, right? If you think about five years ago, there are tools that didn't even exist, and now a tool like dbt is a key piece of many data stacks. It's changed drastically. And then hearing Timothy talk about how he views the modern data stack in the context of investment companies, it was so interesting that he said you can kind of apply that term to so many different pieces of the puzzle here, or to the whole. And so I think the dynamic nature of the stack, how it's changing, and its increasing complexity make it a pretty hard term to nail down. And I really loved how, when we asked questions of Paul from Hinge, who's doing this work every day, he's a brilliant guy, which was clear from his answers, but it was hard to answer these questions, I think, because of some of those reasons. So it was really helpful. And sure, it's a marketing term, but I also think all the factors of those dynamics and complexity make it hard to nail down. What do you think?

Kostas Pardalis 1:00:36
First of all, using marketing terms is not a bad thing. I mean, there is a reason that we have marketing out there, and it's not like there are just evil reasons behind it. Whenever we build something new, or we are, let's say, reinventing something, we need to also invent new terms, new vocabulary, so we can talk about it, right? So "the modern data stack" is just another attempt toward that, nothing else. But, you know, it's very strong evidence that something is happening in the space, and that's what we have to keep in mind: there are so many things happening. As Tim mentioned at some point, you see so many new categories appearing every day. Things that were monoliths in the past are not monoliths anymore; products from the past have broken down into a large number of other products. And that's a good thing. It's an indication that many very smart people are trying to figure out how to make things better, right? And that's why I think that even if we manage today to give a definition of what the modern data stack is, probably in a couple of months from now it's going to be at least slightly different, right? Yep. And that's fine. It's a testament, I would say, that we are living in very exciting times for anyone who is in this space and is working with data. So that's what I keep from our conversation today.

Eric Dodds 1:02:06
As always, a very thoughtful and concise reflection from Kostas, of course. We'll have another panel coming up in early 2022, which will be super exciting. So we'll let you know when that's going to come up, so you can register for it.

Kostas Pardalis 1:02:24
Yeah. And so that was our first one. So if anyone who was listening has any suggestions, or criticism, or whatever, please reach out.

Eric Dodds 1:02:34
Oh, yeah, you can go to datastackshow.com, and we have a contact form on the site now at datastackshow.com. So please, we'd love your feedback and ideas on a live stream that you'd want to see.

Kostas Pardalis 1:02:45
Yes, please do. We want to be friends.

Eric Dodds 1:02:49
Absolutely. All right. We'll catch you on the next show. We hope you enjoyed this episode of The Data Stack Show. Be sure to subscribe on your favorite podcast app to get notified about new episodes every week. We'd also love your feedback. You can email me, Eric Dodds, at eric@datastackshow.com. That's E-R-I-C at datastackshow.com. The show is brought to you by RudderStack, the CDP for developers. Learn how to build a CDP on your data warehouse at rudderstack.com.