Episode 26:

Democratizing the Insurance Market with Daniel Gremmell from Policygenius Inc.

February 24, 2021

On this week’s episode of The Data Stack Show, Eric and Kostas are joined by Daniel Gremmell, head of data at Policygenius Inc. Policygenius, an insurance marketplace, strives to make it easy for people to understand their options, compare quotes, and buy a policy all in one place with help from licensed experts.

Notes:

Highlights from this week’s episode include:

  • What brought Daniel to Policygenius and how his background in industrial engineering and statistics impacts what he does (1:49)
  • Policygenius consolidates carriers and pairs insurance customers with live experts to get the best prices and plans (6:29)
  • How data analysts and data scientists re-shape the customer experience of selecting insurance (10:36)
  • How roles and titles like “head of data” are changing the industry (24:32)
  • Organizing a company with structured embedding (27:28)
  • Policygenius’ data stack (31:31)

The Data Stack Show is a weekly podcast powered by RudderStack. Each week we’ll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.

Transcription:

Eric Dodds  00:06

Welcome to The Data Stack Show where we talk with data engineers, data teams, data scientists, and the teams and people consuming data products. I’m Eric Dodds.

Kostas Pardalis  00:14

And I’m Kostas Pardalis. Join us each week as we explore the world of data and meet the people shaping it.

Eric Dodds  00:26

We have Daniel from Policygenius on the show today. My burning question for Daniel is really his opinion on titles around data in leadership roles. So we’ve increasingly seen even C-suite level data titles, which is really interesting. It hasn’t been that way for a super long time. So I’m just interested in his opinion on that, because he has such a wide purview over the different data functions. Kostas, what interests you? What’s the one question that you want to get an answer to?

Kostas Pardalis  00:55

Yeah, I really want to learn more about how data and data analysts, scientists, data engineers, and this whole new organization interact with the product. I think it’s a very good case, because they are B2C in the marketplace, which means they have to deal with a lot of data. So I’m pretty sure that product and data analysts are working very closely together, and I have quite a few questions around that that I’m really looking forward to hearing Daniel answer.

Eric Dodds  01:24

All right. Let’s talk with Daniel and get our answers. All right, welcome back to The Data Stack Show. Really excited to have Daniel from Policygenius on the show. Daniel, thank you for joining us.

Daniel Gremmell  01:37

Appreciate it. Thanks for having me.

Eric Dodds  01:39

Absolutely. Well, we’re excited to chat. Before we get going, I would just love to hear a little bit about your background, what led you to Policygenius, and what you do there.

Daniel Gremmell  01:49

Absolutely. So I’m going to go in reverse order on that question. I’m the head of data here at Policygenius, and I’ve been at Policygenius for the past year. I came on to help us build out and expand our data capabilities. On the data side, when I came on board, I started with two folks, and we’ve since built out the team. At Policygenius the data team oversees data engineering, data analytics and analysis, and then data science and machine learning. So we have various roles, from data engineers and data analysts to data scientists. Prior to that, I was at Plated as VP of data science. Plated was a meal-kit company, so it was food and supply chain problems, which was super fascinating. And going even further back, I worked in data leadership roles across many different industries, including publishing, aerospace, automotive, and a little bit of health care. So it’s been interesting to work in a lot of different fields and get a lot of industry experience. My training is in statistics: I did a Master’s at Rochester Institute of Technology in statistics with a focus on machine learning. And before that I was an industrial engineer, so I have a background in industrial engineering as well.

Eric Dodds  02:54

Very cool. It’s fascinating to hear about the various backgrounds of guests on the show who work in the data space. Quick question on that: what types of things did you do in industrial engineering? Just out of curiosity.

Daniel Gremmell  03:07

Yeah, so just a little more context on that. Data science didn’t always exist. Before there was data science, there was statistics, and statistics is one of those things that’s embedded across many different professions and industries. Actually my first degree, my bachelor’s, is in operations management. So early on in my career, I worked in operations management at both the manufacturing and distribution level. I then moved into industrial engineering because I was fascinated with crafting systems to help people become more productive and efficient, as opposed to just directly managing folks. Naturally, when I moved into engineering, well, anybody who has ever gone to school for any type of engineering, mechanical, industrial, etc., knows there’s a lot of math. And that’s where I found my love for mathematics and statistics, and I eventually went back to school for my Master’s in statistics, and hence how I ended up in analytics, data science, etc. So yeah, that’s kind of the pathway and how I got there. I think all those things kind of lead into each other.

Kostas Pardalis  04:08

That’s really interesting, Daniel, what you said about engineering disciplines in general. That’s also my experience. I remember when I started studying electrical and computer engineering, and by the way, the reason I wanted to do that is because I only cared about computers, right, and I remember at some point realizing that almost half of the courses in the early semesters were mainly math and physics, and I was like, okay, when’s the fun stuff going to start? But that’s when I was still a teenager, right. After I finished and graduated, I really appreciated all the exposure to both disciplines, physics and mathematics. It’s one of the best things you end up going through when you follow an engineering discipline. So that’s a great point you made.

Daniel Gremmell  05:02

I mean, that’s super fascinating. And when you go through an engineering path, you’re absolutely right. Like, when you take your first course on differential equations and whatnot, you’re like, oh my god, this is just insane. Or calc three, calc four or something like that: oh man, what is all this integration about? But I think what attracted me to analytics and data science in general, and especially more on the applied side, is being able to take all that math and actually translate it into something you can see and touch, something that actually solves real problems. That’s where it becomes super fascinating. And luckily, when I did all my degrees, I was working full time, so I was able to take what I was learning in school and apply it directly on the job, which was always fascinating to be able to do.

Eric Dodds  05:45

Yeah, that’s super interesting. Not too long ago, I was thinking about my coursework in college. I have a business degree in marketing, but I took a lot of statistics classes, because I really enjoyed the idea of being able to answer questions with math and statistical significance. I was thinking about my favorite classes in college, I think someone asked me that question, and I said it’d probably be a tie between statistics and consumer behavior. Then I realized, oh, I guess I work in data, that makes a lot of sense. That’s kind of interesting. So the application of the math there is really interesting. Well, getting back to Policygenius, can you just give us a little overview of the business and the problem you solve?

Daniel Gremmell  06:29

Yeah, absolutely, 100%. And definitely feel free to go to our site and read more if you’re listening. We are an insurance marketplace, that is definitely the best way to describe us, and the way our business works is we make money as an insurance broker. If I can describe the process to you: we have multiple products in the insurance space, life insurance, home and auto insurance. The way the process works is folks begin their journey online. Usually they come to us to compare options and compare prices. These are people who are generally curious about the life insurance or the home and auto insurance industry, and they’re trying to get the best product they can. So they’ll come to our site to explore, they’ll come through our product funnel, and as they get through our product funnel, we’re giving them education and information and collecting info about them. Then, once they get to the end of that online funnel, we connect them with one of our live agents. We have a series of live agents who work directly for us; they’re licensed insurance brokers. So they can partner with you to get you the best coverage possible as far as life insurance goes, and home and auto insurance. They’ll have those deep conversations with you and help you. On the life insurance side, they’ll collect some of your health information, etc. And then basically they’ll compare rates, help you select a carrier, and then manage the process of getting your information over to that carrier for underwriting. So we do still have to go through carriers to get a policy actually put in force. We don’t put policies in force ourselves, meaning we don’t insure you; our carriers do. We help facilitate that process.

Daniel Gremmell  08:07

So what are the benefits of coming through us versus going directly to a carrier? If you go directly to a carrier, they’re going to give you a quote, and then when you decide to move forward on that quote, they’re going to put you through a process called underwriting. In underwriting, what actually happens is you apply, and if something comes back, your price could actually change, and that’s the only policy you have to choose from if you want it to go in force. What we’re able to do is, if we put you through to the insurance company and your application comes back and your price is adjusted, we’re able to compare that with other carriers before we actually choose to put your policy in force with that carrier. And we do that in conjunction with you. So that’s really the value we’re adding as a marketplace: we’re consolidating all the carriers you can choose from, we’re providing you the education and the support to help you make the decisions that are best for you, we’re not biased toward particular carriers, and we’re also looking out for you to make sure you get the best price possible, so that you’re not going at it alone or having to work through an individual carrier.

Daniel Gremmell  09:06

So in general, that’s the process and the business model. And while we have the life and home and auto side, we also look at ourselves as more than an insurance company; we look at ourselves as a financial services company. So we have products like wills and trusts that just rolled out last year on the mobile app. That allows you to just go on our app, go through that process, and get a will in place that covers you and your family in the event that something happens to you. That is normally a very complicated and expensive process that folks have to work on with lawyers and legal counsel. We’ve taken all that and productized it into an experience, and it’s legally binding; we’ve consulted with numerous lawyers and law firms around the country, state by state, to provide that service. So that’s just one pillar of the financial services strategy that we’ve implemented since insurance.

Eric Dodds  09:51

Got it. We have so many questions to ask you and so many things I’m interested in, but let’s start here. One pattern we’ve seen on the show is that a lot of times our guests work in a context where there’s a traditional process, and they’re using technology to help reshape that process, which it seems like is what Policygenius is doing with the whole customer experience of buying insurance. We’d love to know, from your perspective as head of data, how do you view the role of data in reshaping that experience? And how do you use data to create and inform the customer experience?

Daniel Gremmell  10:36

Yeah, that’s a really great question. Some of that comes back to how we structured the data team. We’re a product-focused company, which means we are continuously looking to improve our customer experience and translate that into gains that help the company grow and scale. So we have a series of roles, data engineers, data analysts, data scientists, and they all interact with and contribute to the product in different ways. Starting with data analysts: data analysts are embedded directly into our product teams to work closely with product managers and engineers, and they serve the product process and the customer experience in two ways. Number one, they participate in discovery by contributing research. If you’ve ever worked on a product team, and Kostas knows this from leading product at his company, you have designers who are helping with UI research, customer experience, and designing what the customer sees. A really good way to think about data analysts is that they’re kind of like designers, but on the quantitative side: all their research is quantitatively driven. They’re looking for trends and patterns in how our customers interact with our site and where we’re losing them in product funnels, and they’re taking those insights and questions and crafting research around them that helps influence the product experience we’re going to show customers as they come through, or even, on the back-end side, how agents work through their process when we get customers on the phone. So that’s the discovery piece; it’s heavily driven by quantitative research. The second way an analyst contributes is on the delivery side.
Whenever we find a feature that we want to develop, we develop that feature, and then most of the time, I would say 90% of the time, we’re going to A/B test that feature and look for impact, how it affects customers. We’re trying to learn. We have a strong test-and-learn culture and process in which we’ll try new things, interact with our consumers, and really understand if it’s working for them and benefiting them in the experience. So we do controlled A/B tests, and our analysts help lead that process by structuring the test, helping out with the sample size and power calculations, and helping ensure that we have the right scope and are measuring the right outcomes. So that’s one way we contribute to the product experience and what customers see: by leading that experiment process.
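As a rough illustration of the sample-size and power calculation mentioned here, this is a minimal sketch using the standard normal-approximation formula for comparing two conversion rates; the rates and thresholds in the example are made up, not Policygenius numbers:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_control, p_variant, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-sided test of two
    proportions, using the standard normal-approximation formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = p_variant - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# e.g. detecting a lift from a 10% to a 12% funnel conversion rate
# at alpha = 0.05 with 80% power:
n = sample_size_per_arm(0.10, 0.12)
```

With those made-up numbers the formula lands close to four thousand users per arm, which is part of why B2C traffic volumes matter so much for this kind of testing.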

Daniel Gremmell  12:51

The second way we contribute to the customer experience, and what folks end up seeing, is on the data science side. Admittedly, it’s a little bit newer; we just built the team last year, so we’re still working on use cases. But we use machine learning to help influence and drive the product experience and process. That includes use cases around personalization, propensity modeling, and routing, as well as anything we can do to augment agent efficiency and the customer experience in the physical process. So in a nutshell, those are some of the ways we contribute to the customer experience.

Kostas Pardalis  13:26

Oh, this is great, Daniel. I have a couple of questions, actually. I’d like to start with what you described around how the analysts work as part of the product team, and I think that’s an amazing metaphor you used there, about them being like designers. Can you help us understand a little better how, and at which points, the analyst works together with the product team in evaluating features? You talked about the A/B testing, but before you reach the A/B testing: say we create a new feature, and we need to decide what to measure so that afterwards, through the A/B testing, we can figure out what works best. How does this work in your organization? Who’s responsible for that, who decides what’s going to be tracked and why, and how is this communicated to the analysts? What’s the process there? Can you help us understand this a little better?

Daniel Gremmell  14:24

Yeah, 100%. I know this quite well, because when I got to Policygenius, one of the first things I worked on in collaboration with product was standardizing this process. In my experience, when it comes down to accountability, the entire product team is accountable for the results of an experiment. That’s the way we think about experiments from the get-go: the whole team is accountable for ensuring that we’re developing experiences that are beneficial to our customers and value-added.

Daniel Gremmell  14:53

Now, from there, through the experimentation process we have a couple of breakdowns as to who does what. Usually, PMs are going to initiate the experimentation process through hypothesis development, really defining what it is we’re trying to test. They’re the centralization point for that piece of the process, meaning they take our company strategy, they take the insights from analysts and designers as well as the rest of the team, and they define what we’re building, why we’re building it, and what hypothesis we’re trying to validate or invalidate. From there, there’s a strong collaboration point with our analysts. Our analysts will go through the mathematical side of this, as well as align with our PMs on the primary metric that we’re trying to drive impact to.

Daniel Gremmell  15:43

And again, this happens much more naturally than maybe I’m describing, just because these folks are embedded. They’re working together on a regular basis, sharing context on a regular basis. So it’s less like checking off a process box, and more that these natural synergies occur. There’s alignment on that primary metric, usually between the PM and the analyst, and our analysts will regularly come with recommendations for better indicators of success for that experiment, depending on what we’re driving.

Daniel Gremmell  16:12

And then from there, it’s a joint process: engineering gets the change finished, the PM manages the rollout, and the analysts monitor the experiment when it’s live. To wrap up the process, the analyst and PM keep an eye on the new feature as it launches to make sure we’re not seeing massive drops or anything that would be detrimental to the experience. And once we hit our sample size, our analysts do that final analysis and provide the final result as to whether or not the test was successful. One thing I forgot to mention earlier in the process: before we launch anything, PMs and analysts work together on the decision criteria.

Daniel Gremmell  16:54

What’s really important about any A/B testing process is this: based on the outcome of the experiment, what are you going to do, and how can you action what happens? Normally it’s pretty simple and pretty standard: if you see a significant increase, you roll out the experience; if you see no effect, you might iterate; and if we see a detrimental effect on the customer experience, then it’s kill the feature or potentially iterate. But we align on all of that upfront, so that we have a blueprint for when the test actually finishes. I think my answer was probably a little more nuanced and complicated than I expected, but the answer to your question is that the whole product team is accountable for the experience and the experiment itself, while there are definitely places of centralization within the process.
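The upfront decision criteria described here map naturally onto a small pre-agreed rule. This is a hypothetical sketch; the thresholds and action names are illustrative, not Policygenius’s actual blueprint:

```python
def decide(p_value: float, lift: float, alpha: float = 0.05) -> str:
    """Pre-agreed blueprint for acting on an A/B test result.

    `lift` is the observed difference (variant minus control) on the
    primary metric. Agreeing on a rule like this before launch avoids
    the ambiguous 'should we ship it anyway?' debate afterwards.
    """
    if p_value < alpha and lift > 0:
        return "roll out"           # significant increase
    if p_value < alpha and lift < 0:
        return "kill or iterate"    # significant, detrimental effect
    return "iterate"                # no detectable effect
```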

Kostas Pardalis  17:38

Oh, that was great. That’s exactly what I was looking for. To be honest, I’m coming more from the B2B space, so we’ll also discuss that, because I have some questions to ask about it based on your experience. I’m super interested in how this looks in a B2C environment where you actually have a lot of data to work with. But before we go there: do these experiments ever fail? I mean, can you reach a point where the output of the experiment is that you cannot decide between A and B, or where you have to go back and see what went wrong with the experiment itself, and not with the feature? Is that ever the case?

Daniel Gremmell  18:16

Yeah. I think the best way to answer that is: if you never have experiments that fail, that means you’re not testing aggressive enough changes, right? The whole point of a test-and-learn process is that you’re trying to learn and fail fast. There are oftentimes tests that are successful, times when tests are non-significant, and times when tests just fail. That happens at any company I’ve worked at and experimented with. The idea is you’re trying to push aggressive changes out there, you’re trying to overhaul what the customer sees and really find something that’s better. Rather than testing really small, incremental things, you get there faster by taking bigger jumps. So that’s the idea, and that’s what we try to do. We do see tests that are not significant, or tests that fail, and that’s where the decision-making criteria really come in. Because if you don’t align on that upfront, you’ll end up in this horse-trading, ambiguous world afterwards: okay, well, this wasn’t significant, should we roll it out or should we not? Usually there are discussions that happen beforehand, things you want to align on based on what the feature is. And honestly, with some features we’re testing for parity; sometimes we are testing for a non-significant effect. If we roll out something aggressive that’s beneficial to our process, and its customer impact is not significant, sometimes that’s a good outcome for us, and we will roll it out. So it really depends on the nature of the test and what we’re looking for.

Kostas Pardalis  19:37

This is great. The reason I asked is that, to be honest, I’ve had the same feeling for a long time. A/B testing, for example, is a process where at the end we’re going to say, okay, we’re going with A or B, right? But what I think most people forget is that data cannot always give answers. One of the responsibilities the analyst has inside the organization and team, in my opinion at least, is to point out when we can trust this data and when we cannot, and based on that, iterate and fix the problem, or try again later, or whatever it takes to solve it. So yeah, that was great.

Kostas Pardalis  20:20

So, a question about B2B now. B2C has access to a lot of data, right? If things go well, you will probably be interacting with thousands of users, so you have thousands to millions of data points, and that’s quite important when you do statistics. In B2B, you have the opposite: you don’t have access to the same amount of data. Based on your experience, which of the techniques you’re using right now at Policygenius work, and which do not work, in a B2B environment? What’s your advice on how someone in a B2B environment should apply statistics and analytics to drive the product?

Daniel Gremmell  21:02

Yeah, 100%, that’s a great question. Before I answer it, I’m going to jump back to my last answer for a second; you pointed out something I forgot to hit on. Talking about failed tests again for one brief sec: what also happens for us, if we have an experiment that doesn’t go as planned, is that our analysts do what we call secondary follow-up analysis. Normally, when you test something in an A/B test, you’re testing an aggregated metric. Well, if that metric doesn’t turn out like we expect, our analysts will dive in and do much deeper analysis and modeling around it to truly understand how different consumers were behaving when they experienced that feature. And we use those insights to help drive the product experience or iteration going forward. So that’s just a little more nuance on what we do for failed experiments.

Daniel Gremmell  21:44

But to answer your question about B2B: when you think about B2B and lower-traffic scenarios, you have to call back to your statistical training. In the B2C world, we’re able to use tests that we can get sufficient power on, and even then, with our traffic, tests sometimes need a little bit of time to run. In the B2B world, you have a different challenge: you just have smaller sample sizes. So the challenge becomes, okay, I can’t just use regular hypothesis tests. You have to adapt and use either tests that are more robust at low sample sizes, something like a Fisher’s Exact Test or something of that nature, or models that give you a higher degree of power, meaning you can build a little more rigor into small-sample assumptions and then use those to extrapolate out and implement changes. If you think about it, back in the day the t-test itself was invented at Guinness, by William Sealy Gosset, who published under the pseudonym “Student.” The reason it was invented is so they could test batches of beer: they didn’t have thousands of batches to test, they only had a couple of batches at a time. The same principles apply in B2B. You don’t always have thousands of reps, so you have to use tests that are applicable to the scenarios you’re in, tests you can get a high degree of confidence in, or that are at least directional enough for you to make decisions and advance your business strategy. One thing to remember from a business strategy perspective is that we’re not writing research papers here; we’re trying to grow businesses.
So being able to take some liberties with that, or at least have something that’s directional for you to react to, is much better than flying blind or going by your gut.
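For a sense of what a small-sample test like Fisher’s Exact Test looks like in practice, here is a minimal pure-Python sketch of the two-sided version for a 2x2 table of successes and failures in two small groups; the example counts are made up:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's Exact Test p-value for the 2x2 table
    [[a, b], [c, d]] (e.g. conversions/non-conversions in two arms).

    Sums the hypergeometric probability of every table with the same
    margins that is at least as extreme as the observed one.
    """
    r1, r2 = a + b, c + d                    # row totals
    c1 = a + c                               # first column total
    n = r1 + r2                              # grand total
    denom = comb(n, c1)

    def prob(x):                             # P(first cell = x) with fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)    # feasible values of the first cell
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# e.g. 3 of 4 successes in one small group vs 1 of 4 in another:
p = fisher_exact_two_sided(3, 1, 1, 3)   # ≈ 0.486, far from significant
```

Because it enumerates exact probabilities rather than relying on a large-sample approximation, the test stays valid at the handful-of-data-points scale that B2B experiments often operate at.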

Eric Dodds  23:30

Yeah, this is so interesting to learn from you. Daniel, I want to step back from the details of Policygenius a little bit and talk about, as funny as it sounds, your job title. Head of data is a leadership role that implies coverage of a lot of different areas of data. And this is a discussion we had earlier this week, actually, around the concept of leadership in data. To some extent it’s been around; you think about IT leadership and the history of technology and all of those sorts of things. But roles like yours, head of data, with a broad scope, we’re seeing more and more of. So I’m interested in your opinion, holding that title: number one, what does that look like for you? And then, because you work in the field, how do you think that is changing in the industry, and how do you think it will look moving forward? Do you think we’ll continue to see more and more of these roles, and what do they look like inside companies?

Daniel Gremmell  24:32

Yeah, that’s fantastic. Simply put, the role of head of data can sometimes be a little ambiguous, because you’re like, oh, what does that actually mean? Data is such a broad thing. But in terms of Policygenius, and actually most data roles I’ve held, the definition has spanned the three core areas I mentioned earlier in this conversation: data engineering, data analytics, and data science and machine learning. I make those distinctions for purposes of scoping roles and helping to find people who are specialized in those fields and functions. The way we’ve defined it at Policygenius is very similar to other places I’ve been, even if the role was called something different. When I was at Plated, my title was VP of data science, but even though the title was data science, I still oversaw data engineering, data analytics, and then the data science itself. So really, it’s less about the title and more about the scope and breadth of the role. And there’s real power and synergy in owning those three components, because really what you’re doing is providing a service back to the company, and the idea is to make the most use of that service as possible. By having those roles under the same function and leader, working together, we’re able to develop efficiencies internally, because they all rely on each other. Data analysts work heavily with engineers, and they often need that deep engineering support. Anybody I’ve ever talked to out in the industry and peer-benchmarked with will tell you one of their biggest struggles is getting the data structured in the fashion they need. So having your engineering team outside of data, with different priorities, isn’t always helpful.

Daniel Gremmell  26:18

So those internal efficiencies are really what we strive for by embedding these functions together. And going forward, it’s hard to tell. The way our team is structured is not the same across every company. Some companies have a more functional model, where data science reports into a marketing leader or a finance leader, data engineering maybe doesn’t exist or is classified under engineering, and data analytics might be BI or something of that nature. So there is some decentralization that happens, and I think it’s really a factor of the scope and size of your company and its global presence. Those are the factors and features to think about. So what is the trend going forward? I think only time will tell. I do see specialized roles coming out, like people who are focused purely on AI and data science. But again, I think it’s a factor of the type of business you’re running and the industry you’re in that defines what the scope of the role should be. I think there’s power in having data centralized under one leader; it just helps provide a more cohesive and consistent vision. But there could be arguments for other cases as well.

Eric Dodds  27:28

Sure, sure. Yeah. I mean, I was thinking about this just working on some ideas for blog posts, and it’s interesting to see different models, right? You have the disaggregated teams you mentioned, where data science reports up to, say, marketing, and maybe data engineering is separate. Sometimes you have them combined; I would say that’s probably less common, but becoming more common. But I was thinking about this concept and would be interested in your feedback on it. There’s often a model that’s almost like a shared service center, where data functions have internal customers. But one thing that I think is really interesting to think about, and it sounds like a dynamic you’ve created at Policygenius, is the data function really being a strategic partner, not just a function that has internal customers, right? Not just “we need this, or we need analytics, or we need a model,” but more saying, okay, we’re trying to solve a problem; how can data help us do that? And so data becomes a strategic partner, as opposed to just a business function that serves other functions.

Daniel Gremmell  28:38

Yeah, that’s 100% right. Buckle your seat belts, folks, hot take coming. Having a shared service center and having data folks basically be button pushers is not really an efficient use of their skills. Nor does it motivate them.

Daniel Gremmell  28:56

So I’m familiar with the model you talked about. I’ve even worked in organizations where that’s implemented. I don’t think it’s as effective, because if you think about it, you’re relying on your business stakeholders, whose responsibility is overseeing a department or function where they have specialty, to come up with and brainstorm a solution that they then pass off for somebody else to implement.

Daniel Gremmell  29:21

So what you end up with is basically a sub-optimal solution, in the sense that you didn’t have people working on problems with their strengths at play. By partnering with data folks and data science strategically, and allowing them to identify opportunities and contribute to solutions that provide paths forward, it’s going to cause you to think differently and approach problems in a way you probably would not have thought of before, because you’re not a data scientist or a data analyst. And it works the other way too: data scientists are not marketers; sometimes they are, but not always. So they don’t always think in terms of the business, you know, of the marketing perspective or the end-user perspective. Having these collaboration points ultimately creates a better outcome than simply passing something off for people to implement. And so, as I kind of mentioned in the beginning, you hit it right on the head. We go with a model here, and I’ve preached this model at conferences and places around the world, because it’s a great model and I’ve used it many times. It’s called structured embedding. What I mean by that is, yes, we have a centralized data team, but we don’t just take requests and farm out resources. We actually embed our resources into different product teams and business functions. The idea is that they’re on the ground, they gain and learn the business context, and then they’re able to contribute in ways that are more beneficial to the company than simply taking orders and executing work.

Eric Dodds  30:42

Very interesting. Structured embedding, that’s the first time I’ve heard that term. But yeah, I totally agree; that’s something we see. We have the privilege of talking to so many different companies, and I think that’s a trend we’ll see increasing significantly in coming years. Thanks for your perspective on that. Well, I know we’re closing in on the hour here, so why don’t we jump over to the technologies you use in your data stack. I’m really interested in your perspective on this, because you have a wide purview over all of the different data functions. So I would just love to have you run us through, at a high level, what you use to collect, process, and store data, and then the various ways you pipe it to all the different places and teams that need it.

Daniel Gremmell  31:31

Yeah, definitely. No trade secrets here; on every job post we have, we put the technology we work with. At a high level, we’re a GCP shop, so we work in Google Cloud; that’s our main cloud provider. And then we have a variety of tools that help us capture, move, process, and transform data, the events and whatnot that happen on the site. We have event streaming that happens with providers like Segment, and then on the back end we utilize Airflow to help us with our ETL between our databases. So it’s a pretty common stack that people see across data architecture for ETL and processing and moving data. We also use Airflow to do our ELT processes internally in our data warehouse. Everything gets piped and centralized within our data warehouse; the warehouse is the hub, the center of consolidation for everything that happens around the company. And then we connect that to various reporting, BI, and modeling tools to help us do more advanced and sophisticated things with that data, as well as make decisions. So yeah, in a nutshell, that’s really what the process is, super simple.
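To make the ELT pattern Daniel describes concrete: raw events land in the warehouse first, and transformation happens afterward, inside it. Below is a minimal sketch in plain Python, using a dict to stand in for the warehouse; the event fields and table names are made up for illustration, and Policygenius’s actual pipeline runs on Airflow and GCP, not this toy code.

```python
import json

# A dict standing in for the warehouse: one staging table, one modeled table.
warehouse = {"raw_events": [], "fct_signups": []}

def extract_and_load(event_payloads):
    """EL step: land raw JSON event payloads untouched in a staging table."""
    for payload in event_payloads:
        warehouse["raw_events"].append(json.loads(payload))

def transform():
    """T step: reshape raw staged rows into an analytics-ready table."""
    for event in warehouse["raw_events"]:
        if event.get("type") == "signup":
            warehouse["fct_signups"].append(
                {"user_id": event["user_id"], "ts": event["ts"]}
            )

# Two hypothetical site events arrive; only the signup reaches the fact table.
extract_and_load([
    '{"type": "signup", "user_id": 1, "ts": "2021-02-24"}',
    '{"type": "page_view", "user_id": 2, "ts": "2021-02-24"}',
])
transform()
print(len(warehouse["fct_signups"]))  # 1
```

In the real stack, each function above would be a task in an Airflow DAG and the dict would be the warehouse itself; the point is only the ordering: load raw first, transform in place second.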

Eric Dodds  32:43

Yeah. Quick question on the data side of things. Are there any unique challenges you face with the type or structure of data you deal with at Policygenius? There may not be; we just like to ask, because we find out interesting things. But is there some data format, you know, related to dealing with insurance information, applications, or other components like that, that presents a particular challenge or a unique requirement around moving or processing the data?

Daniel Gremmell  33:15

I would say, I don’t think there’s anything that jumps out at the top of my mind. I mean, underneath the hood, every now and then we have some JSON strings and whatnot to parse, so there are things like that we run into. But as far as the industry itself and the types of things that happen, our data is relatively structured. We don’t often run into scenarios where we have to work with a lot of unstructured data. There are small pockets of use cases here and there, but for the most part, everything collected on the site, etc., is pretty well structured. So nothing really unique to the insurance space, nor to us, about our data structures.
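The occasional JSON-string parsing Daniel mentions can be handled in SQL (BigQuery ships JSON functions) or downstream in code. A tiny sketch of the Python route, where the column and field names are hypothetical, not from Policygenius’s schema:

```python
import json

# A warehouse row where one column arrived as a JSON string, the common
# case Daniel alludes to. "answers" and its fields are invented examples.
row = {"application_id": 42, "answers": '{"smoker": false, "age": 34}'}

parsed = json.loads(row["answers"])  # turn the string column into a dict
flat = {"application_id": row["application_id"], **parsed}  # flatten for analytics
print(flat["age"])  # 34
```

The same flattening could be pushed into the warehouse layer instead, which keeps downstream tables structured, consistent with his point that most of their data already is.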

Kostas Pardalis  33:55

So Daniel, last question before we conclude our conversation, although I think we have many more questions to ask; maybe we should arrange another episode with you. What’s next? I mean, what fascinating projects do you have internally, and what are you really looking forward to, in terms of either technologies out there that you’re going to use or, I don’t know, even organizational changes? Can you share something that excites you about the future inside Policygenius, in terms of technologies and anything related to data?

Daniel Gremmell  34:28

Well, I love doing shows like this, because it also gives me an opportunity to recruit a few folks out there listening. We are continuing to grow and scale the team, so we’re looking for data engineers, data analysts, and data scientists. We grew aggressively last year, and we’re growing aggressively again this year. It’s really a testament to the value that we’ve brought to the company, the value people have seen in being able to use data to help drive strategy, drive product, and drive what the consumer sees. And so we want to do more and more of that, and get better and faster at it. That’s really what’s exciting: we’re about a year into having the data team structured and having ramped up many resources. Year one is kind of foundational; people are learning and trying to understand the business and the data, etc. Now we’re getting into the stage where we’ve had people here for a decent amount of time, and they’re starting to think a little more creatively, think a little more generatively, and take bigger roles on their embedded product teams to help drive product strategy forward and provide the insights that are needed to help advance our business.

Daniel Gremmell  35:36

So that’s what’s really exciting about what’s coming up. Again, if you’re out there, we’re hiring; go to our site, look at our roles, and please come through the process, because there are a lot of exciting things on the horizon. Analytics is definitely a place we’re always investing; the faster we can make decisions, do research, and help drive product strategy, the better off we’re going to be. And obviously, we’re trying to accelerate our machine learning efforts. It’s something we dabbled in last year, and definitely something we’re continuing to dabble in this year, and expand and accelerate. So yeah, those are some of the cool, exciting things we’re looking at from a high-level perspective.

Kostas Pardalis  36:13

That’s amazing, to be honest, Daniel, after our conversation, I would definitely consider applying for a job there and working with you.

Daniel Gremmell  36:24

Come on down! We’re especially looking for data engineers; they’re a little hard to find. If you know any of those, send them through.

Kostas Pardalis  36:31

Thank you so much, and I’m personally really looking forward to chatting again and learning more from you.

Daniel Gremmell  36:36

Appreciate it.

Eric Dodds  36:37

Thanks for being on the show, Daniel. And we’ll catch up with you again, maybe in another three or six months, and have you back on the show to tell us what new things you’re working on.

Daniel Gremmell  36:45

Cool. Thanks.

Eric Dodds  36:47

Well, that was a really interesting episode. I think my big takeaway was the concept of structured embedding. I had not heard that term before; I’m sure it’s been around, but it was really fascinating to hear about the strategic placement of people from the data function in various parts of the organization. I love it, and I think we’re going to see more and more of that, and hopefully hear more and more of that term. Kostas, what was your big takeaway?

Kostas Pardalis  37:15

Yeah, I agree with you. I think that just as in the past we were hearing the motto that every company is a technology company, in the future we will be saying that every company is a data company. And as this happens, I think we will see very interesting restructuring around how data works inside the organization and how this affects the structure of the organization. So that’s super interesting. I really enjoyed chatting with Daniel, mainly because he has a very unique and amazing overview of all the different functions related to data, because of his role as head of data, right? So he has a very good understanding of data engineering, data science, data analytics, and how all these different things around data work together to provide value to the company and, of course, to the customers. It was also super interesting for me that data analytics can work in B2B, where there’s the typical problem of, as we say, not having enough data, so why do A/B testing; but there are solutions there, from what we heard from Daniel. And to be honest, we have many, many more questions to ask him, so hopefully we will have the opportunity again in the near future.

Eric Dodds  38:27

I agree. All right. Well, subscribe on your favorite network to get shows weekly, and we will catch you next time.