{PODCAST} In-Ear Insights: The Data Quality Lifecycle

In this episode of In-Ear Insights, Katie and Chris discuss the evolution of the Trust Insights 6C Data Quality Framework into the Data Quality Lifecycle. Learn what the Data Quality Lifecycle is, why it matters, and how to start applying the concept to your own marketing data.

Watch the video here:

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:02

This is In-Ear Insights, the Trust Insights podcast.

In this week’s In-Ear Insights, we are talking about data quality. We’ve actually been talking about data quality, literally, since the inception of the company.

And we want to take some time today to actually revisit one of our data quality frameworks that is as old as the company itself, and that, I think, is one of the first things we published.

So Katie, when we talk about data quality, we normally talk about our 6C framework, right? The six C’s of data quality.

And you had said that in the past, when you try to remember it and share it with people, there are some things you stumble over. So do you want to share some of that, or should we talk through the framework first?

Katie Robbert 0:49

Well, let’s talk through the framework first, and why we put it together.

And so the 6C data quality framework, which is a little bit of a tongue twister, um, you know, the reason we put this together is because we wanted to have a way to explain to people not only why data quality was important, but how to think about it for your own data.

So almost kind of like a checklist.

And so, you know, like any sort of framework or any sort of visual, we start at the top, so it needs to be clean data.

And so when we’re describing clean data, we go into what that means.

And so it needs to be prepared well, free of errors.

And then we go into detail about what those errors mean.

And then it needs to be complete, no missing information.

So you can’t have, you know, the whole month of June missing, for example. Then comprehensive: it must cover the question being asked.

And so if you’re asking a question about Twitter followers, why are you looking at your financial data?

So it’s making sure there’s not a mismatch. Then chosen.

This is the one that I stumbled on.

But we’ll get back to this.

And so no irrelevant or confusing data.

And then credible: it must be collected in a valid way.

So there’s some sort of a methodology that says, This is how we collected the data.

And maybe it’s sort of the methodology statement from your software platform.

And then calculable: it must be workable and usable by business users.

So it must be presented in such a way, whether it be Excel or whatever, that you can actually do something with it.

So those are the six C’s.

Now, if we go sort of back into some of these, so there’s a couple of places where I get hung up on this.

And the reason why we’re talking about this today is because Chris and I are due for a refresh.

So we want to sort of think through, like, what are those places? So to me, clean and complete are kind of the same thing.

Or maybe it’s just the way in which it’s being described.

So clean data is prepared well and free of errors.

So then you move on to complete, no missing information.

Well, wouldn’t that mean that it’s clean and free of errors? And then the next ones are chosen and comprehensive. Comprehensive must cover the question being asked; chosen must cover the question being asked.

So to me, those are the two big places where I stumble, because I feel like one and two and three and four are almost identical.

But Chris, as the person who created this framework, help me understand your rationale behind this.

Christopher Penn 3:35

So you are exactly right, that one and two, and three and four are artificially split.

And the reason they’re artificially split is because of the techniques you need to do to fix the problem.

So on one side, you have dirty data that’s rife with errors.

You have to do stuff like anomaly detection to look for those errors to say, okay, clearly, this day in Google Analytics says we got 100,000 visitors.

And we know we didn’t because there was no commensurate increase in like results.

We know that’s an error, right? And you dig into those: oh, look, bot traffic, you know, all that stuff.

So there’s dirty data.

And there’s ways of cleaning dirty data.
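For readers who want to try this at home, here’s a minimal sketch of the kind of anomaly detection Chris describes, in Python, assuming you’ve exported daily Google Analytics sessions into a pandas DataFrame. The column names and the three-sigma threshold are illustrative assumptions, not a prescription:

```python
import pandas as pd

def flag_anomalies(df: pd.DataFrame, metric: str = "sessions",
                   window: int = 28, sigmas: float = 3.0) -> pd.DataFrame:
    """Flag days whose metric deviates wildly from a rolling baseline.

    Assumes df has a 'date' column and a daily numeric column
    (e.g., sessions exported from Google Analytics).
    """
    df = df.sort_values("date").copy()
    # Rolling baseline: what "normal" looked like over the trailing window.
    baseline = df[metric].rolling(window, min_periods=7).median()
    spread = df[metric].rolling(window, min_periods=7).std()
    # A day is anomalous if it sits more than `sigmas` deviations from
    # the baseline -- e.g., a sudden 100,000-visitor bot spike.
    df["anomaly"] = (df[metric] - baseline).abs() > sigmas * spread
    return df

# Usage: inspect flagged days before deciding they're errors;
# a spike that coincides with a real campaign isn't dirty data.
# ga = pd.read_csv("ga_daily_sessions.csv", parse_dates=["date"])
# print(flag_anomalies(ga).query("anomaly"))
```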

On the flip side, when you have missing data, like, Oh, you forgot your Google Analytics tracking code on your website, and you have three weeks of missing data.

Now, that’s not dirty in the sense that the data is corrupted; it is just not there.

And so you have to use techniques like imputation to try and either fix that or use certain types of selection to essentially say, Okay, if we’re doing period comparisons, okay, we got to knock out an equivalent period.

In your year-over-year comparisons, we’re like, we can’t use those three weeks in June of the previous year for comparison either, because things will be skewed.
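Here’s a companion sketch of those two techniques, again in Python with pandas: imputation for short gaps only, and knocking out the equivalent prior-year period. The interpolation method and the three-day cap are assumptions for illustration; longer gaps usually shouldn’t be invented:

```python
import pandas as pd

def impute_short_gaps(df: pd.DataFrame, metric: str = "sessions",
                      limit_days: int = 3) -> pd.DataFrame:
    """Interpolate only short gaps; long gaps stay missing on purpose,
    because inventing three weeks of data would be worse than the gap."""
    df = df.set_index("date").asfreq("D")  # expose missing days as NaN
    df[metric] = df[metric].interpolate(limit=limit_days)
    return df.reset_index()

def knock_out_equivalent_period(df: pd.DataFrame,
                                start: pd.Timestamp,
                                end: pd.Timestamp) -> pd.DataFrame:
    """Drop the same date range from the prior year so a year-over-year
    comparison isn't skewed by this year's gap."""
    prior = ((df["date"] >= start - pd.DateOffset(years=1)) &
             (df["date"] <= end - pd.DateOffset(years=1)))
    return df[~prior]

# Usage: if June 1-21 is missing this year, also remove June 1-21
# of last year before comparing the two periods.
# df = knock_out_equivalent_period(df, pd.Timestamp("2021-06-01"),
#                                  pd.Timestamp("2021-06-21"))
```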

So, conceptually, clean and complete do belong under the same umbrella, but I split them up because I need to know, okay, what do I need to do to fix the problem? Comprehensive and chosen are the same thing. Comprehensive, really, you know, again, you could bucket that all under chosen. For example, comprehensive is: what data is missing that you have to go and get, right?

And that’s a process problem and a people problem, like, Hey, we’re doing ROI.

And we have none of the R, because finance won’t give it to us, and we have none of the I, because we didn’t take into account soft dollar costs.

Chosen is a technology solution: when you have 510 dimensions and metrics in Google Analytics, well, great, you need to do something like regression analysis to figure out which five of those 510 actually are important.

So that’s feature selection.
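As a hedged sketch of that feature selection step, here’s one way to do it in Python with scikit-learn, assuming you’ve pulled your Google Analytics columns into a DataFrame alongside a numeric outcome column. Lasso regression is one reasonable stand-in for the regression analysis Chris mentions; the column names are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def top_features(df: pd.DataFrame, outcome: str, k: int = 5) -> pd.Series:
    """Rank candidate metrics by how much they explain the outcome.

    L1-regularized (lasso) regression shrinks the coefficients of
    unhelpful features toward zero, leaving the ones that matter.
    """
    X = df.drop(columns=[outcome]).select_dtypes("number")
    y = df[outcome]
    X_scaled = StandardScaler().fit_transform(X)  # comparable scales
    model = LassoCV(cv=5).fit(X_scaled, y)
    coefs = pd.Series(model.coef_, index=X.columns).abs()
    return coefs.sort_values(ascending=False).head(k)

# Usage: which five of your hundreds of GA columns matter most for,
# say, goal completions? (The column name is a stand-in.)
# print(top_features(ga_export, outcome="goal_completions"))
```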

And so again, conceptually, it buckets under one, but the techniques used to fix the problems are very separate.

That’s why they’re broken out.

Katie Robbert 5:56

Okay, that makes sense.

And so, you know, as you’re describing that, I’m almost sort of envisioning, you know, using our people, process, platform framework to reorganize this a little bit.

But that’s, you know, further on down the line.

So, you’re talking about this for people who have the skill set and access to things like machine learning.

I guess that’s the other place where I get caught up.

So I personally am not someone who uses machine learning; I don’t sit down and start programming things in R.

So if I’m trying to do data quality on an Excel spreadsheet, which is how the majority of marketers are doing it at the present moment, you know, I feel like I’m not using things like imputation.

And I’m not necessarily running regression analysis, which I know you can do in Excel.

But, you know, I guess that’s sort of the other place where I get caught up: is it too complicated for people who aren’t using machine learning? Or do we have to have a version for people who use machine learning and people who don’t?

Christopher Penn 7:11

And most of this stuff is actually not technically machine learning.

Most of this is basic data science.

I think it depends on, again, which prism we’re looking through.

So in a lot of cases, what you mentioned about people, process, platform, I think, is really important to this, because most of these problems are process problems.

And most of the solutions are platform solutions to process problems, where you can say, like, you know, data cleanliness is a process problem: the data you’re bringing in isn’t clean.

And there are some technological solutions you can use to solve it, but it’s better if you fix the process that’s broken.

But if you can’t, or if you’ve got legacy data that you have to use, you have to do the best you can. Same for data completeness, right? That’s a process problem, like, oh, you have no governance around your website.

So when you make changes to it, you drop your tracking codes and bad things happen. Well, okay, there are technological ways to mitigate that.

But it’s better to not have that problem in the first place.

So I think you could make the case for each one of these things.

There’s a people, process, platform aspect to each of the six things.

The only one I think that we could easily drop is calculable, because it’s kind of implied if you can’t use the data.

Like, I mean, if it’s such poor quality that there’s really no point, I almost think it’s implicit. I leave it on because, again, it’s one of those things that’s kind of restating the obvious: if you can’t use the data, it’s not quality data.

Katie Robbert 8:43

I think it’s fine to leave it on and restate the obvious.

So, for example, the, you know, stating-the-obvious example that I like to use is: my husband is a butcher.

And they actually have to include instructions on things like a Thanksgiving turkey, like step one, remove the plastic.

And so it’s obvious you should understand it.

But there is a reason why step one is ‘remove the plastic’: enough people have not removed the plastic before cooking the turkey that they have almost set their kitchen on fire.

Therefore, it doesn’t hurt to state the obvious because for those who know, they just skip over that step.

They’re like, Oh, Duh, of course, it has to be calculable.

My only issue with calculable is my ability to pronounce it especially in a public forum.

And so it usually takes me two to three times to say it, or I have to slow down and really think through every syllable: cal-cu-la-ble. And I think that’s part of my, like, ‘Mumbles’ Menino Boston accent.

Christopher Penn 9:49

Fair enough.

Fair enough.

I mean, again, we can pull up a tool like this for the writing folks in the crowd.

If you’ve never used relatedwords.org, it is a fantastic tool that looks at not just, you know, basic synonyms, but (and this is machine learning in a lot of ways) the words that appear near your target words.

So we can always drop, you know, calculable or something in there and see what other alliterations we can pull together.

But yeah, I don’t know that we need a machine learning versus non-machine-learning version, or data science versus non-data-science version; I think each of the aspects is important.

But we could definitely have a higher-level one where you’ve consolidated down to just, you know, cleanliness, comprehensiveness, credible, and calculable, and you’re down to four.

And the other question is, and I think this is a good one for folks listening: if you are listening and you have a point of view on it, what’s not on here that, to you, is important for data quality? One that I debated, and we left off because I think it is so situational, is cost, right? Your data always comes at a cost of some kind: time or money or resources.

So how do you balance quality of data with cost? You know, it’s the whole ‘perfect is the enemy of good.’

You can try to wait for perfect data, but then you lose the opportunity to actually get data you can act on.

Katie Robbert 11:18

I understand that it’s situational.

But it’s a relevant point for every single situation; it’s the outcome that’s going to be different based on, you know, what platforms you’re using, what people you have, or the processes that you have in order to, you know, hold the data.

So I would argue that you could swap cost in for one of these other ones we’ve talked about that are duplicative, and still have the six C’s.

Because I do think that, you know, when you think about data collection, you have to be thinking about, well, is it a time thing? Is it a platform thing, you know, the ability to extract the data?

So for example, we use a CRM that collects all of the data that we want.

But from a cost standpoint, we don’t have the license that allows us to extract the data that we’re collecting.

And so we would have to work around it.

So we would either need to pay more for a higher level, or we would need to use resource time to code against the API.

So I do think that cost is a valid C in that situation.

Christopher Penn 12:40

Yep.

And so the thing, the one glaring absence that I think belongs on here, and I’m fine with making it eight or nine or ten, I don’t care, because I’d rather have it be coherent and comprehensive than, you know, force everything to fit, is, I guess, let’s see, it’s the letter C: conclusive.

And that is, your data’s got to be something you can make a decision on.

Right? Part of good data quality, part of good data, is you’ve got to do something with it.

Because if you don’t, then it’s literally just a decoration.

It’s just a waste of time.

So is your data conclusive in the sense that you can take action on it?

Katie Robbert 13:20

I think that that makes sense, especially since these are called Instant Insights.

If you can’t pull insights from the thing that we’re teaching you about, then we haven’t done our job.

Because that’s, I guess, the ‘so what’ that’s missing: like, so what? I have great data, what do I do with it? Hmm.

And so it needs to be conclusive.

It needs to tell you something, and you can sort of draw that thread back to, you know: is it clean? Am I choosing the right data? Do I know the purpose? And so, as I’m thinking about how we’re going to reshape this, I do think that this could be a multi-part blog series, really digging into each of these C’s, each of these buckets, as to how they fit into the five P’s.

So what’s the purpose of the data that you’re collecting? And then going down that list: you know, do you have the people to set up the platform correctly? What is your process for keeping the data clean, and the governance? So, talking through all of those things. And it’s interesting, because, you know, you’re talking about how you don’t care if it’s six or nine or 12 C’s, but I think our job, Chris, yours and mine, is to explain it in such a way that it doesn’t feel overwhelming, because the more steps we add, the more overwhelming it can feel to people, in terms of, like, well, I can’t complete, you know, 12 C’s, but I could maybe do four. And so I think it’s good for us to break it down to that very granular level.

Sometimes when I use these frameworks in my presentations, I will sort of roll them up and bucket them, you know. So I think we have the, you know, project lifecycle or the machine learning lifecycle.

And it’s about nine steps or something like that, and I bucket it high level up to four, where it’s like, you know, you plan, you develop, you test, and then you deploy, but when you want to dig into it further, then you can sort of break it down into each of those very discrete steps.

So I think that we have a couple of different ways that we can approach it.

But I think the thing we’re coming back to is that all of the steps are valid; it’s just the way in which we want to explain them so that people feel like they can understand it.

Christopher Penn 15:47

Right.

And, you know, selfishly, internally for us, it maps to what solutions we know we can bring to somebody who’s got this problem. Like, if you come to me and you say, hey, I’ve got this data set, I don’t know if it’s clean or not, then you look at the framework and go, okay, well, what are my options? The people problem, you know, isn’t something that we can fix immediately.

But we certainly can help provide training and education. The process problem we can definitely fix: like, okay, let’s stop doing silly things.

And then the platform is: okay, well, with the data you’ve got so far, like, are the ingredients just not ready? Like, hey, you bought whole wheat berries instead of wheat flour; okay, we can grind that into wheat flour and still make bread. Or you bought sand; like, okay, you’re kind of stuck. No matter how good your technology is, you can’t turn sand into flour.

Katie Robbert 16:39

Well, and with that, you know, not only can we set you up for good data quality, but also, if you don’t have the resources on your team to do something with it, to, you know, calculate it and draw some insights from it…

That’s something that we can definitely help with as well.

And I think that, aside from a lack of governance over how data is collected, the other most common problem that we see is: we’re collecting all of this information.

We have all of this data.

There’s nobody on our team who can do something with it.

Christopher Penn 17:16

Mm hmm.

Yep, got great ingredients and no chef.

Katie Robbert 17:20

Mm hmm.

Where does something like… so, I guess maybe under calculable; I’m gonna be answering my own question.

But where does something like, you know, that scenario where I’m collecting data from eight different streams and I need to put it together in a dashboard…

Where does that fit into something like this?

Christopher Penn 17:42

That partly fits into calculable, partly fits into conclusive, right? Which is: you’ve got to be able to make a decision from the data.

So what do you need to do to the data in order to make a decision on it? Sometimes that might just be, you know, taking a bunch of data, slapping it into a dashboard, putting a few charts side by side: okay, you can make a decision based on this.

We had that very recently with an SEO client, where they’re like, oh, yeah, I just need to know which pages to work on.

Okay, here’s a list of the pages to work on.

Other times, you may need to actually do substantial ETL stuff in order to make it usable.

So ETL stands for extract, transform, and load. You have data in all these little places, right? You’ve got to pull the data from each of the places.

Think of it like, you know, you’re in the grocery store: you’ve got to get flour and sugar and yeast and milk and butter from all these different parts of the store, and get it all home.

And then you have to transform it, right? So you’ve got to put it in a mixer and mix it and let it rise and stuff.

And then you’re finally ready to actually bake with it.
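To make that concrete, here’s a toy sketch of an extract-transform-load flow in Python. The file names, columns, and transforms are hypothetical stand-ins for API pulls and platform exports; the shape of the pipeline is the point:

```python
import pandas as pd

# Extract: pull data from each of the "parts of the store."
# (CSV files here are stand-ins for API pulls or platform exports.)
crm = pd.read_csv("crm_contacts.csv", parse_dates=["created"])
web = pd.read_csv("web_sessions.csv", parse_dates=["date"])

# Transform: reshape each piece until they fit together.
web_daily = web.groupby("date", as_index=False)["sessions"].sum()
crm_daily = (crm.assign(date=crm["created"].dt.normalize())
                .groupby("date", as_index=False)
                .size()
                .rename(columns={"size": "new_contacts"}))
combined = web_daily.merge(crm_daily, on="date", how="left").fillna(0)

# Load: write one tidy table a dashboard tool can read directly.
combined.to_csv("dashboard_feed.csv", index=False)
```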

And so, from a what-we-do side of things, it would be: well, do you even know what ingredients you need? Like, you know, what data you need? What is the outcome, the purpose, right? And then what transformations do you have to create? So we have, you know, one customer that is having us extract data from their HubSpot instance and do a lot of transformations to make it appear in a Data Studio dashboard for them.

Part of that conclusiveness is to say, look, here’s what you need to do to this data in order to make decisions, right? Because in its raw form, you can’t; it’s like 2 million records of stuff.

And you know, that’s not helpful.

Katie Robbert 19:27

So then I think what’s also missing from this framework is the purpose, because you start with clean, but you’ve skipped over:

What’s the data meant to do in the first place? And I think if you’re going to end with conclusive, you need to start with whatever that C is for purpose: cause, or concept.

Sure, either one is fine; as long as I can pronounce it, we’re good.

But yeah, I think that that might be the other thing that’s missing is we always challenge our clients and ourselves to start with, what is the question you’re trying to answer? So I think the same should be true of this.

And any other framework that we’re putting together: what’s the goal? What’s the purpose? Why are we doing this in the first place? So therefore, you can always draw that line back to: okay, the data is clean, but does it meet the goal? The data is complete, but does it meet the goal? The data is comprehensive, but does it meet the goal?

And that’s the way that I like to teach some of these frameworks: once you set the goal, once you set the purpose, once you know the question you’re answering, everything you do has to be able to be traced back to that.

And if it’s not, you need to start over again.

And either you’re answering the wrong question, or you have the wrong ingredients.

So if you’re trying to make a chocolate cake, don’t give me coconut.

I don’t want coconut in a chocolate cake.

Christopher Penn 21:02

It’s interesting, because the way you say that now, it’s going to change from a data quality framework to a data quality lifecycle, because now you’re talking about the inherent process.

When we first put this together, it was just brain droppings, right? Oh, yeah.

Here are the things as you remember them.

But now, you have that from the very beginning: hey, clarity, okay. That purpose, you know: are you clear about what you’re going to use this data for? Okay, then you would go into something like credibility: was the data even collected properly? And if, you know, there’s a problem with the credibility of the data, the rest of it doesn’t matter. Then you get into stuff like, you know, chosen and comprehensive, clean, complete, and so on.

And then it becomes this lifecycle.

And that’s actually probably easier from a presentation and education perspective, because now it’s a story. Instead of six or eight or ten random concepts where you’re like, I don’t remember any of these, it’s, oh, from beginning to end, this is the story of how your data should help you get your job done.
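To illustrate the ordering Chris and Katie are converging on, here’s a hypothetical sketch of the lifecycle as a plain checklist in Python. The names and order weren’t final at the time of this episode, so treat this as an illustration, not the published framework:

```python
# A hypothetical ordering of the Data Quality Lifecycle as discussed
# in this episode -- not the final published version.
DATA_QUALITY_LIFECYCLE = [
    ("clarity",       "Are you clear on what the data is for?"),
    ("credible",      "Was the data collected in a valid way?"),
    ("chosen",        "Is it the right data, nothing irrelevant?"),
    ("comprehensive", "Does it fully cover the question asked?"),
    ("clean",         "Is it prepared well and free of errors?"),
    ("complete",      "Is anything missing, like weeks of data?"),
    ("calculable",    "Can business users actually work with it?"),
    ("conclusive",    "Can you make a decision from it?"),
]

def review(answers: dict) -> str:
    """Walk the lifecycle in order and stop at the first failing step,
    since a credibility problem makes the later checks moot."""
    for step, question in DATA_QUALITY_LIFECYCLE:
        if not answers.get(step, False):
            return f"Stop at '{step}': {question}"
    return "Data is decision-ready."

# print(review({"clarity": True, "credible": False}))
```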

Katie Robbert 22:09

It’s almost like you’re talking about starting with requirements gathering, which is what we always challenge people to do first, which is the part that they skip over.

And so I agree, I think that this moves from a framework to a lifecycle, because in that lifecycle, the very first thing you do is ask: what are we doing? Why are we doing it? And then you need to start to document: okay, what do we need? You know, what’s the governance? Who are the people? And that sort of flows through to: okay.

We know we need, you know, chosen data, we know we need comprehensive data.

I think that these, now looking at it, are out of order, because you need to answer some of those questions first, before you can even dig into:

Is it clean? Is it complete? Well, is it even the right data in the first place?

Christopher Penn 23:00

Yep.

Yeah, comprehensive, chosen, and credible are governance questions.

Mm hmm.

Katie Robbert 23:04

And so if we continue to follow that lifecycle structure, then I think we can reorganize this and other frameworks to follow it, because, as much as you can sort of swap things in and out, that lifecycle itself is the thing that stays static.

You always need to know what you’re doing and how you’re going to do it before you actually do the thing.

Christopher Penn 23:34

Yep.

That’s like, data storytelling.

Katie Robbert 23:40

Mm hmm.

Yeah, so we start with clarity, or cause, or… I’m going to leave the alliteration to you.

It’s not something that I’m very good at.

So I’ll leave that to you.

So we start with: what’s the point? We start with: what do we need? And that’s our requirements.

I don’t know if you want to get into, you know, the actual people or the technology piece of it.

I mean, I guess within each of the buckets, you can talk about that.

So that’s sort of like the subset of things that you need.

So you need the purpose, you need the requirements, then you need the data itself.

And then you need to, you know, collect the data, do the QA on it, and then you need to do something with it.

Christopher Penn 24:30

Mm hmm.

Exactly.

No, I think that makes a lot of sense.

And again, when you’re explaining it to somebody, it’s easier to then remember all the points, because it follows a logical progression.

Mm hmm.

It has a plot.

Katie Robbert 24:45

It does.

Um, great.

So this is going to be updated from the 6C data quality framework to the 24C data lifecycle.

But I think data lifecycle makes sense, because data quality is part of any project, whether it’s, you know, an actual data analysis project, or a design project, or a clinical trial. Data, in some way, shape, or form, is going to be present.

You know, whether it’s people’s opinions, numbers on a spreadsheet, or customer feedback, data quality is always going to be represented in any kind of a project.

So I think a lifecycle, in this instance, does make a lot of sense.

Christopher Penn 25:43

Exactly.

So, coming soon-ish.

Any final thoughts on this?

Katie Robbert 25:54

I think it was a great place to start.

And I think that’s, you know, true for anyone who’s listening: you’ve got to start somewhere; it doesn’t need to be perfect.

And so I think one of the things that Chris and I are trying to also communicate is, this is what we started with.

And it has been good enough for the past three and a half years.

And now we’re at a point where we’ve learned a lot about how this works, what works, what doesn’t, and how we can iterate on it.

So if you’re looking to start with data quality, or anything else in your organization, it’s okay for it not to be perfect, you can build on it.

Christopher Penn 26:31

Exactly. Yeah, this is three and a half, probably almost four years old now.

So it’s definitely time to upgrade, as we’ve upgraded and replaced a lot of things.

So thanks for tuning in.

If you’ve got questions or opinions about data quality or the data quality lifecycle you’d like to share, pop on over to our free Slack group. Go to trustinsights.ai/analyticsformarketers, where you and over 2,200 other marketers are sharing opinions and answering each other’s questions all day.

And wherever it is you watch or listen to the show, if there’s a channel you’d prefer to have it on, most of them are at trustinsights.ai/tipodcast, where you can see the videos and things like that, screen shares, etc.

Thanks for tuning in.

We’ll talk to you soon.

Take care.

Need help making your marketing platforms, processes, and people work smarter? Visit trustinsights.ai today and learn how we can help you deliver more impact.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
