
So What? Applied Data Science in Marketing

So What? Marketing Analytics and Insights Live

airs every Thursday at 1 pm EST.

You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!

In this week’s episode of So What? we focus on Applied Data Science in Marketing. We walk through the data science process and how to use it to validate your marketing data. Catch the replay here:

So What? Applied Data Science in Marketing

In this episode you’ll learn: 

  • How data science basics apply to marketing
  • How to look at various reports and interpret them
  • The most important functions in marketing data science

Upcoming Episodes:

  • YouTube SEO 2/17/2022


Have a question or topic you’d like to see us cover? Reach out here: https://www.trustinsights.ai/resources/so-what-the-marketing-analytics-and-insights-show/

AI-Generated Transcript:

Katie Robbert 0:24
Well howdy, howdy everyone. Happy Thursday. Welcome to So What?, the Marketing Analytics and Insights live show. I am joined by Chris and John. Today we are talking about applied data science in marketing, and this is a topic that people have a lot of questions about. And don't you worry, Chris, I have got a bundle of common questions about data science for you, especially as it applies to marketing. So those are the things we're going to cover: how data science applies to marketing, how to look at various reports and interpret them, and the most important functions in marketing data science. So Chris, where would you like to start?

Christopher Penn 1:00
Gosh, I don't know. I think one of the first places to start is that we need to contextualize data science, and actually explain it first. Data science is the application of scientific principles and techniques to data. So data science is just science applied to data. Just like you have chemistry as a science and biology as a science, data science is no different. It's not magic, right? It's not some arcane art; there's no sacrificing of things, other than maybe a mouse that you break when you're computing stuff. There's nothing mysterious about it. It is a lot of math and some coding, and that's what I think most people find somewhat off-putting about it: it is a fairly technical thing to do, even at the beginning. If you're not comfortable with, say, Microsoft Excel, you're going to have a bad time. So it is one of those functions you need to have some aptitude for, even if you don't necessarily get into the techniques, but it's good to know what it is and what it does. Again, the part that I think is most important for everyone to understand is that it's not magic. There's nothing secret about a lot of common data science techniques. What makes it different is that very often these techniques, which were developed in fields like finance, biology, and medicine, are not common in marketing, and so, as a result, it does seem like some crazy arcane art. So I think that's probably the first place to start. The second is, like I said, putting it in context. Data science is, at its core, a practice, which means it belongs to the same structural framework as everything else, which would be the 5P framework. You have the purpose: why are you doing the thing? The people: who's doing the thing? The process: how are you doing the thing? The platform: what tools are you using to do it? And then the performance: did you do the thing well? Data science is just a set of techniques that enables each of these stages.

Katie Robbert 3:18
So I have a couple of common beginner questions for you. Number one: we talk about things like marketing analytics, or a marketing analyst. Is a marketing analyst a data scientist? And the second part of that question is, if I've only ever used Excel, if I've never coded anything before, am I a data scientist?

Christopher Penn 3:46
We can answer both questions at the same time. You are doing data science if you are fundamentally performing the function of science, which is to test hypotheses, to prove or disprove something. So if you are pulling in a whole bunch of data and there's no hypothesis, you're just cranking out a report, here's what happened last month, then you're doing analysis; you are an analyst. If, on the other hand, you have a hypothesis, say, sending out more tweets on Tuesday results in higher conversions, and then you go and test that hypothesis by any mathematically valid means, you are now doing science, and that would make it data science. So it's not the tools, it's not the technology; it's the process that you're using. Again, we go back to our 5P framework: whether or not you're doing data science is all about the process.
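
As a concrete illustration of that distinction, here is a minimal sketch in R of testing the Tuesday-tweets hypothesis Chris mentions. The counts are made up for illustration; they are not from the episode.

```r
# Hypothetical example: do tweets sent on Tuesdays convert at a higher rate?
# The conversion and tweet counts below are invented for illustration only.
tuesday_conversions <- 48
tuesday_tweets      <- 400
other_conversions   <- 90
other_tweets        <- 1100

# Two-proportion test: H0 = same conversion rate, H1 = Tuesday is higher
result <- prop.test(
  x = c(tuesday_conversions, other_conversions),
  n = c(tuesday_tweets, other_tweets),
  alternative = "greater"
)
print(result)  # a small p-value would support the hypothesis
```

Running a report with no hypothesis is analysis; framing and testing a claim like this, by any mathematically valid means, is the science part.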

Katie Robbert 4:43
Okay. I think that's an interesting way to look at it, because I think there's a lot of confusion, or even just misunderstanding, about what data science really means: that it's not an approachable thing you can bring into your marketing practice, or your business in general, because you need to hire a data scientist specifically. Which is probably a good move, because it's someone who's going to hold the rest of the team accountable to doing these practices. However, if you want to practice data science, then you need to make sure that you are following basic, good analytic practices: what's the question I'm trying to answer? That could be your hypothesis. And then following your data requirements, what data do I need, am I using the right data, sort of following the 6 Cs of data quality, to make sure that you can answer the hypothesis either positively or negatively.

Christopher Penn 5:45
It's even simpler than that: are you following the scientific method? Go back to high school, or even grade school, when you first learned the scientific method. You have a hypothesis, you create a test, you validate the results, you prove or disprove the hypothesis. When you're doing hypothesis testing, you are doing science, and you don't even necessarily need technology to do any of that, as long as you're collecting data in a statistically sound way. For Black History Month, I was looking this past week at some of the early data science done by W.E.B. Du Bois, one of the founders of the NAACP. In 1900 he published this enormous register of 66 different charts and analyses about life for black Americans. In 1900: no Excel, no Tableau, no R, no Python, everything hand-drawn. It was part of his own hypothesis that the effects of slavery had an impact on black Americans, and he proved it very clearly in this 66-page publication, which you can find in the Library of Congress. You don't need technology; you just need the hypothesis and the scientific method. So even back in 1900, somebody like Mr. Du Bois was just knocking it out of the park.

Katie Robbert 7:10
So what would you say, John, have you applied data science techniques to everyday life? I feel like when you change out some plumbing, you might apply data science: the hypothesis is, this thing is leaking because the person who previously installed it wasn't paying attention. Let's prove it right or wrong.

John Wall 7:32
Yeah, well, the thing with that, though, is there's zero tolerance for failure in those tests, so I wouldn't call that true science; for that I'd call a real plumber. If I was doing science, we'd be experimenting on the natural gas in their house, very poorly. But it is interesting, that experimentation mindset, because that's always been a cornerstone of what marketing has been doing: you're always running two or three things to figure out what works and what's better. So, yeah, it's just kind of funny how a lot of the data science I've done is at about the Du Bois level. This is the stuff that was just lying around; we're not using any specialized math or calculus, it's more just, okay, let's try three things and see what kind of trouble we get into.

Christopher Penn 8:19
Exactly. Now, I will say the tools do help. It's like cooking, right? You can do everything with just a frying pan and a spatula; a lot of things will not turn out well, and there's a lot of extra work. If you're trying to make a steak, that's okay. If you're trying to make soup, it's a little tougher. You could do it, but having the right tools certainly does help. Having a blender does help if you're making margaritas; it's not essential, but it sure is nice. The big confusion point that I think a lot of people have with data science is that everybody spends so much time on the technology, on the tools, that they ignore the first three parts: why are you even doing it, who's doing it, and how are you going to do it? Without those parts, it gets very messy. Now, in terms of practices in data science, one of the most important prerequisite practices before you can start doing the science itself is a technique, really a system of techniques, called exploratory data analysis. This is all about, given a bunch of data, do you know whether the data is in fact usable? And how do you even know what the data is? In marketing, we've got a ton of this kind of data, from every conceivable tool, so I figured we'd spend a little bit of time taking a look at what exploratory data analysis looks like. We're going to use the programming language R. Again, you can do exploratory data analysis with Excel. I am not good at Excel, so I will say that someone like Oz du Soleil, who's a Microsoft MVP, could probably do everything I'm about to do in Excel; I just happen to do it in R because it's packaged up a lot faster for me personally. Let's start with something straightforward. This is good old-fashioned keyword data from the SEO tool Ahrefs, and this is our keyword list for Trust Insights, the search terms that we think are important. You can see there's a lot of data in here. So I think our first step should be to take a look at this data from an exploratory perspective. I'm going to run a very quick analysis on it, and we get a nice set of charts out of it, so we won't have to stare at the R interface very long. Many of today's modern data science tools can produce these kinds of profiles, reports that say: you gave me this bunch of data, here's what's in the box. And if you know how to interpret the report, you can get a sense of, can I even use this data? If I can, great, let's proceed. If I can't, maybe we need to find a different data source. So what we've got here from our keyword data is a quick look at the types of data in it: how many columns are discrete, meaning text or non-numbers, and how many are continuous, which is numbers. In this dataset, which is our SEO keywords, about 41% of the columns are not numbers, and 56% are numbers. We don't have any missing columns. When we start looking at observations, how many rows have all of their data, it's 77%. That's a yellow flag; we've got some missing data in here. And then we look at missing observations: 3.7%. So just with this one chart, Katie, you mentioned the 6 Cs of data quality, right? Clean, complete, correct, etc. We're already starting to see there could be some problems in this dataset.

So we want to keep exploring, keep going through it, but this would be a first step. Now, if this was like 20% complete, I'd say this dataset might not even be usable. Most tools will tell you how the data is structured, especially when you get to really complex things like inside your CRM; if you have HubSpot or Salesforce.com, you might have 30 or 40 different tables all together, and techniques like exploratory data analysis can help you start to work out, okay, there are a lot of things interconnected here. This one, again, is a real simple table, just SEO keyword data. Going on down: which data is missing? Now we're looking at the individual variables. We're good on search volume, country, and keyword; we've got some missing data on traffic, some on search features, last update, and keyword difficulty; and we've got a lot of missing data on the pay-per-click data. That immediately tells me, because this is SEO keyword data, that we may have a lot of terms that don't have bidding on them. There's no auction data. It could be good or bad; we don't know yet. But we definitely know that there are chunks of data missing.
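
For readers who want to reproduce this kind of profile, here is a minimal sketch in R. The episode doesn't name the specific profiling package Chris used, so this assumes the DataExplorer package and a placeholder file name; adjust column and file names to your own export.

```r
# Sketch: profile a keyword export before doing any analysis.
# "keywords.csv" is a placeholder for your Ahrefs (or other tool) export.
library(DataExplorer)

keywords <- read.csv("keywords.csv", stringsAsFactors = FALSE)

plot_intro(keywords)      # share of discrete vs. continuous columns, complete rows
plot_missing(keywords)    # which variables are missing data, and how much
plot_histogram(keywords)  # distributions of the numeric columns

# Or generate the whole profile as a single HTML report:
create_report(keywords)
```

The point isn't the specific package; it's that you run a "here's what's in the box" report before you trust the box.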

Katie Robbert 13:20
So I guess it really depends on the question you're asking, whether or not that's acceptable. Like, if we want to know what country, it looks like you have all of your data.

Christopher Penn 13:31
Exactly. Let’s rewind. Katie, why would we be looking at our keyword data to begin with? What’s the purpose?

Katie Robbert 13:38
The purpose of looking at our SEO keyword data... there could be a few purposes. The purpose could be: we want to know, are we ranking for the right keywords? Are there other keyword opportunities? What are our competitors ranking for? Or, if we're trying to target a certain part of the country, what keywords do they care about the most? So there could be a variety of reasons, but it all needs to go back to why we're pulling keyword data in the first place.

Christopher Penn 14:14
Exactly. So for this dataset, I think the purpose would be to look for keywords where we should be trying to find opportunities. That's a pretty straightforward purpose, right? Are there keywords that would be a good organic opportunity? If this was paid, if we asked what keywords we could bid on, then immediately looking at this chart I'd go, wow, we've got a lot of missing data here; this dataset might not fulfill that purpose if there's a lot of missing bid data. Moving on down, we have our distributions. These distributions essentially show you how the data breaks down for these different variables. So for Keyword Difficulty, just looking at this chart from zero to 100, if you draw a line down the middle, it leans more toward the right-hand side. That means that from a difficulty perspective, we've picked a whole bunch of keywords, and a lot of them are on the difficult side to rank for organically. When we look at global search volume, this one leans very heavily to the left side, so we've got a bunch of keywords that don't have a huge amount of search volume. Again, I call this a yellow flag: we picked a bunch of keywords that are tough to rank for but don't get a ton of volume. This may not...

John Wall 15:33
...be, yeah. Page one, I was crying when I saw page one. This chart at least shows it's not as bad as page one indicated. I saw that sea of red and was just like, oh my God, this is going to be ugly.

Christopher Penn 15:43
Exactly. The same with traffic potential and things like that; you can see that there is some skewing. So again, it's about being able to look at a distribution and say: does it lean left, does it lean right, and what do those leanings mean? The bar chart of frequencies here is not super helpful, but then we have these quantile-quantile plots, which are very much a statistical thing. What you're looking at here are the quantiles of your data plotted against the quantiles of a normal distribution, so you can see whether there's a skew in the distribution. For a normal distribution, this should look like a nice diagonal line with all the data clustering on it; that means it's essentially a bell curve. When you have weird, wacky-shaped things like this, it means your data is not normal, and that dictates which techniques you can use. So when we think about the process of data science, we start thinking about what techniques are on the table for us to use, and a diagnostic like this says: okay, you've got to take some things off the table, because the data will not permit you to get a valid analysis with certain techniques. In this case, if I want to do a correlation, say, for traffic potential, how much traffic could these keywords drive, there are three major correlation techniques that you learn in Stats 101: Pearson, Spearman, and Kendall's tau. This weird-shaped thing, instead of a lovely diagonal line, takes Pearson off the table. That's important to know, because a tool like Microsoft Excel out of the box uses Pearson correlation. So if I were trying to do this analysis in Excel and I didn't look at the data exploration first, I'd go, oh, I'll just run a correlation analysis, and look, here's this correlation, when the data would actually be saying: this won't work, you're going to come up with an invalid answer.
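
For anyone following along at home, here is a minimal sketch of that normality check in R. The column name is a placeholder for whichever Ahrefs field you're testing; the episode doesn't show the underlying code.

```r
# Sketch: check whether a column is close to normal before picking a
# correlation method. "Traffic.potential" is a placeholder column name.
traffic <- keywords$Traffic.potential

qqnorm(traffic)  # points should hug the diagonal line if roughly normal
qqline(traffic)

# A formal test (valid for up to 5,000 non-missing observations):
shapiro.test(traffic)  # a tiny p-value means not normal, so avoid Pearson
```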

Katie Robbert 17:45
Is that because of one of the first things that you're showing, what's missing from the data?

Christopher Penn 17:52
In this case, it's that it's a non-normal distribution; it's more like a power law distribution. When we go back up to these charts, your traffic potential, this is what I care about: how much traffic can I earn with these keywords? Look how heavily that skews to one side; there's a whole bunch of keywords here that have very low traffic potential. It's not a bell curve. CPC is more shaped like a bell curve, but this is not in any way a bell curve whatsoever. And that means I can't run Pearson correlation against traffic potential to get a sense of what other variables correlate with it, because I can't compare this bell curve with this non-bell curve; they're fundamentally incompatible. Knowing that, I would have to choose a different correlation technique.

Katie Robbert 18:45
You may be getting to this, but rather than you running the script that you ran against this SEO data, are there out-of-the-box or third-party services that can do this kind of exploratory data analysis to verify that your data is good or not? Or is it something that you, as the person doing the analysis, really need to do yourself, building your own set of scripts and code, or building it out in Excel?

Christopher Penn 19:20
There aren't any. So that actually is a really, really important question. There are not any out-of-the-box tools that do this with marketing data. And that's such an important thing, because a lot of marketers will fire up a tool like SEMrush or Ahrefs or whatever, pull the data, and start working on it immediately, not knowing that the data might be bad depending on the purpose, because nobody ever asks the question, can I even do this? And then, once you run your campaigns or whatever, you may find that the campaign didn't generate results, and nobody can figure out why. One of the underlying causes could be that you used the wrong techniques to draw conclusions and make decisions, and as a result, the decisions you made were based on an incorrect interpretation of the data.

Katie Robbert 20:11
Got it. Okay, so there's really no, I can't just import my CSV file into, you know, checkmydata.com or something (I think I just came up with a great idea, trademark) and get this kind of result that says your data is good or your data is bad. For this application, that doesn't really exist.

Christopher Penn 20:36
Exactly. And part of the reason is that it goes back to purpose. When we think back to our 5P cycle, just because something is a power law distribution or a normal distribution doesn't make it good or bad; it's a question of how we're going to try to apply it. It's just like a blender isn't good or bad; it's the context we use it in. Blender for a margarita? Great. Blender for steak? Not so much. The blender itself is not the issue, and the same thing is true of our data here. The distributions matter based on the techniques we're going to apply to them, and a third-party tool would not know what our stated intent was.

Katie Robbert 21:19
I see. Well, and, okay, that's super helpful, because I'm sure there are tools out there that can validate your data. But unless you're telling it why you're looking to validate the data, which, as far as I know, these are not sentient things, you need to put that in. And so that then goes into the whole programming piece, so you might as well just do it yourself.

Christopher Penn 21:41
Exactly. For now. We are seeing more and more tools come on the market that do more data prep and try to make it easier on people. For example, Tableau recently released the Tableau Prep tool, and there have been tools like Alteryx, IBM Watson Studio and Data Refinery, and others that can do bits and pieces of it. But again, it goes back to the purpose. A lot of these tools are general-purpose tools, kind of like a Swiss Army knife, and if you don't know what you're going to be doing with the tools, then the tools obviously can't conform to those specific use cases. So in a lot of cases, yeah, you may have buttons in those tools that can perform these individual analyses, but you still have to understand why you're asking these questions.

Katie Robbert 22:30
So John busts out the duct tape and, you know, starts putting tools together to...

John Wall 22:35
...fake things. This is actually horribly disappointing when you think about it, because we know that a third of marketing orgs aren't even using data at all. So of the two-thirds that are, at least a third of them are making completely erroneous assumptions: they have this small little bucket of data and are just presuming that it's going to be normally distributed, so that if they throw in money, they're going to get there. And unfortunately, for over a third of those, it's not going to be a fun lesson; it's going to be crashing the bus into the side of the building at full speed. It's not going to work. So, yeah, I guess, do your due diligence and look into the data before you promise quarterly results, because it could end very poorly.

Christopher Penn 23:24
Exactly. The next thing this particular tool does is a correlation analysis, and again, this is where that gotcha comes into play. Let's say I care about traffic potential: I want to know how much traffic I can get, and what variables correlate most with traffic. Looking across the table here quickly, global volume, which is search volume, correlates most strongly, followed by difficulty, which is interesting. But because we just talked about comparing a power law distribution to a normal distribution, we know that (scroll back up here) difficulty is a bell curve and traffic potential is a power law curve. So the conclusion in this correlation analysis is wrong, because out of the box this uses the Pearson correlation. I would have to go back into my code and make an adjustment to use Spearman, which is best for nonparametric distributions. So that's a case where, if you know the underlying data and you know what techniques can be applied to it, you know that out of the box this tool needs some adjustment in order to draw a correct conclusion about what actually would be the indicator for which keywords to choose.
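
A minimal sketch of that adjustment in R, with placeholder column names standing in for the Ahrefs fields discussed here:

```r
# Sketch: rerun the correlation with Spearman instead of the Pearson default.
# "Volume" and "Traffic.potential" are placeholder column names.
cor(keywords$Volume, keywords$Traffic.potential,
    method = "pearson",  use = "complete.obs")   # default; assumes normality

cor(keywords$Volume, keywords$Traffic.potential,
    method = "spearman", use = "complete.obs")   # rank-based; safer for skewed data

# Kendall's tau is the third option mentioned in the episode:
cor(keywords$Volume, keywords$Traffic.potential,
    method = "kendall",  use = "complete.obs")
```

The one-argument change is the whole point: the technique has to match the shape of the data, not the tool's default.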

Katie Robbert 24:40
So, understanding that, to watch this episode you have to have some basic level of understanding of statistics, which, in general, I do. But you just said Spearman versus Pearson, which are two techniques I understand, but then you said a nonparametric distribution...

Christopher Penn 25:00
Oh, sorry. A nonparametric distribution means it's not a bell curve.

Katie Robbert 25:08
Oh, okay. So a bell curve versus a non-bell curve. Got...

Christopher Penn 25:13
...it, exactly. So we know from our distribution charts that we've got a mix of about half and half, and this default, which is built into the tool, assumes everything's a bell curve by the technique it's using. And again, that goes right back to what we were talking about: why don't tools do this out of the box? Because the tool literally had no idea about the different distributions.

John Wall 25:37
And it's horrible, because you've got a 0.3, so it's not even that good, and then you're coming back to say, oh, by the way, actually, that's way overstated; it's not even that good.

Christopher Penn 25:48
Right? Well, we don't know, that's the thing. When you're comparing distributions with different correlation techniques, you don't know; sometimes they'll come out stronger, sometimes they'll come out weaker, sometimes they'll just give up. So the last thing that your average tool is going to do is what's called principal component analysis, where it tries to figure out which are the most important variables in a dataset, just to boil things down and make it easier. We see that component number one here explains about 30% of the variance, which in this case is the biggest chunk. And so the question is, what variables make up component one? It's Number, and then CPC or CPS, cost per something; I don't know what Ahrefs' definition is. Now, knowing the dataset, I know I've got a problem here. One of the things Ahrefs does, which I really don't like, is that in the data they put a row number. That row number is not helpful for anything, and our software out of the box didn't understand that it really shouldn't even be in there. As a result, it's made an analysis that's flawed. Again, it's not the fault of the tool; the tool had no idea what a row number is. It's on us as the marketers, with our data, to look at it and go, actually, let's take that out first, because that's stupid.
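
As a rough illustration of that cleanup-then-PCA step in R: the row-ID column name below is a placeholder, since every export labels it differently, and this is a sketch rather than the exact code used in the episode.

```r
# Sketch: drop the vendor's row-number column, keep numeric columns only,
# then run PCA. "Number" is a placeholder for whatever the row ID is called.
numeric_cols <- keywords[sapply(keywords, is.numeric)]
numeric_cols$Number <- NULL            # remove the meaningless row ID, if present

numeric_cols <- na.omit(numeric_cols)  # prcomp cannot handle missing values

pca <- prcomp(numeric_cols, scale. = TRUE)  # scale so no single column dominates
summary(pca)       # proportion of variance explained by each component
pca$rotation[, 1]  # which variables load most heavily on component one
```

If a row ID is left in, it will often dominate a component exactly as described above, which is a human cleanup problem, not a tool problem.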

Katie Robbert 27:20
You know, it's interesting, because I feel like you have to do data cleaning to do this validation, to figure out if you need to clean the data. This is the kind of thing where my mind starts to be like that monkey with the cymbals: okay, all right, I think I'm following along. But that one, to me, is a tough one, because you do have to structure the data and clean it first before you can validate it. And once you validate it, you can see how much extra cleaning you would need to do, or if you can even use it.

Christopher Penn 27:55
Exactly. And again, this is why, in marketing data science, that process part is so important. What you're describing is actually Agile methodology for marketing data science, which, if you're familiar with Agile, makes total sense: I have to iterate on my data cleaning until I get to clean data. It's not a one-and-done thing, which is yet another reason why there isn't an off-the-shelf tool that you can just plug your data into. No tool on the market is going to know, oh, for keyword data, make sure the vendor didn't include a row number. With Google Analytics, when you export data, you typically get a long CSV file with two different tables in the same file, and you have to know going in that you've got to delete a bunch of stuff to even make it work in another tool. Google Analytics also always adds a summary row at the end of its exports, which annoys the daylights out of me, because it throws everything off: oh, this day you had 541,000 visitors. No, that's the entire year; I don't want that in there. So, to your point, Katie, there's a bunch of process stuff that has to happen in order for this to work.
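
A hedged sketch of that Google Analytics cleanup in R: the number of header lines to skip and the position of the summary row vary by report, so treat the values below as placeholders rather than a universal recipe.

```r
# Sketch: clean a Google Analytics CSV export before using it elsewhere.
# The header offset and file name are placeholders; inspect your own export first.
ga <- read.csv("ga_export.csv", skip = 6, stringsAsFactors = FALSE)

ga <- ga[-nrow(ga), ]      # drop the summary/total row appended at the end
ga <- ga[ga[[1]] != "", ]  # drop blank spacer rows before any second table
```

A few lines of boring cleanup like this is exactly the "process stuff" that no off-the-shelf tool can anticipate for you.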

Katie Robbert 29:08
Well, and I think you bring up a good point, and Agile in and of itself could be a whole episode. Applying Agile methodology to what seems like a cumbersome process is hard to reflect on a slide, to say, this is Agile within this whole framework, because that would be unwieldy, with a bunch of circles. But that's exactly how you need to approach these things. If you're looking at it from a waterfall perspective (waterfall in project management is: step one complete, then you can move on to step two, complete that, then move on to step three), whereas Agile has a lot more things running concurrently. By applying an Agile methodology to the management of your exploratory data analysis, combined with data quality, that is where you're going to get to results faster, and that's where you will find out, okay, this is data that I can use. Now, it still takes that planning up front to say: what is the purpose, who are the people, the process is going to be Agile using EDA and data quality, the platform is going to be this software and this kind of data, and this is my performance, my measure of success. So you can't skip the planning process; that's where you determine how Agile is going to work, combining these few things. So that's just a little bit of a deviation into how Agile would work in this instance.

Christopher Penn 30:43
Exactly. So that's one example. Let's look at another example. In this case I'm going to pull out some data from a website scraping tool called Scrutiny; I scraped the entire Trust Insights website. Let's take a quick look at the data itself. What we see is whether a page was good or bad, the page itself, whether there's a redirect, the text of the link linking pages together, and what page a link appears on. We use SEO scraping tools like this to understand, for example, which pages on our website are accessible and which aren't. But again, going back to where we started: if we start with a basic exploratory data analysis to understand this dataset, to get a better sense of what this tool gave us, we can start to see what we can do with the data. I'm not going to spend as much time on this one. We can see that in this case we've got 100% discrete columns; there are no numbers at all in this dataset. Is that weird? There's not a single number; it's all just categorical data. Which means, immediately, if there's a numeric objective we care about, we know this dataset will not answer that question. So if we're trying to understand, for example, which pages should link to other pages on the Trust Insights website for, say, moving traffic around, there's no traffic data in this thing. We automatically know, just looking at this, that if we have numbers we care about, we need to supplement this data; we need to go into Google Analytics and export our pageviews, for example. But otherwise, looking at this, it's actually a fairly healthy-looking dataset. 89% of pages are missing redirects, and that's actually a good thing, because it means we're not bouncing visitors all over our website. One thing of concern is this 5% of pages here that don't have link text: we've put a link to another page on our website, but there's no text to click on. So what did we do wrong?

Katie Robbert 32:50
I don't... I don't know how that works. So...

Christopher Penn 32:52
I know what that's from, but I only know that because it's intentional; it's part of our newsletters. It's something that, if this wasn't our website, or we didn't know our data and how we republish things, would be an element of concern. We can look at the error codes: most of the pages on our website work fine, there are a bunch of redirects, a few that were not checked, and critically, no 404s, no missing pages. I'm very happy about that, and nothing weird, so life is good there. The correlation analysis and principal component analysis are not helpful here, because this is all categorical data; there are no numbers. So even in a very simple examination of website crawl data, we can look at the dataset and go, okay, here are some things that we would need, and we don't have the data in here because we didn't clearly define that in the purpose. We didn't say we want to crawl this data for the purpose of figuring out where to link pages, traffic-wise. Because we didn't define that, we just put data in, we got an analysis that really doesn't fulfill the need. So having that purpose section up front is still super important. I apologize if your brain is doing the Macarena.
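
A minimal sketch of that supplementing step in R, joining the crawl export to a Google Analytics pageviews export. The file and column names ("url", "Page", "Pageviews") are placeholders, since neither export's exact schema is shown in the episode.

```r
# Sketch: the crawl data is all categorical, so join in pageviews from a
# Google Analytics export to get a number worth optimizing for.
crawl <- read.csv("site_crawl.csv",   stringsAsFactors = FALSE)
pages <- read.csv("ga_pageviews.csv", stringsAsFactors = FALSE)

combined <- merge(crawl, pages, by.x = "url", by.y = "Page", all.x = TRUE)

# Pages the crawler found but GA never recorded a view for:
head(combined[is.na(combined$Pageviews), "url"])
```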

Katie Robbert 34:16
It definitely is. It's definitely vacillating between a margarita and the cymbal monkey, to the point where I'm like: words? What? Who? You know, it's interesting, because I look back at the statistics courses that I've taken, and I think I've shared this small anecdote with both of you: when I was in grad school and taking a statistics course, I didn't really even understand the basics of how to make a chart in Excel, let alone Pearson and Spearman and those kinds of things, which my professor wasn't necessarily teaching me. Those are techniques and methodologies that I learned on the job when I was working in the academic setting, and then, Chris, you have continued to educate me on what they are. And so I do feel like marketers, depending on their educational background and their work experience, could be at a disadvantage for general data analysis. I mean, John, you were mentioning that a third of marketers aren't even using data, and of those who are, a chunk are maybe doing it wrong and promising the wrong things. And it occurs to me that a lot of our clients come to us and say, well, I put the social media metrics together, but it didn't demonstrate the thing; what went wrong? I think, Chris, you've just spent 35 minutes demonstrating why the analysis goes wrong and why you don't get the results that you're hoping to get. It's not because you put together bad campaigns, or did the wrong kind of targeting, or your copy was incorrect; the data doesn't hold up to give you the answer that you're looking for.

Christopher Penn 36:09
Exactly. What we just did there is what you do in the drive-thru line at Wendy's, right? When you get the bag handed to you, you open the bag, you look in the bag, and go, okay, is that my order or not? Like, five cheeseburgers? I ordered chicken nuggets; hand it back to them. You don't have to do this exploratory process all the time; it's when you get a new dataset to work with, when you get fresh data to work with. You run it once; it's a quick health check. Again, fast food example: you don't have to keep opening the bag as you're eating your meal. Once you've figured out that it is, in fact, the meal you ordered, you just eat your meal. But if you open the bag and it's all hamburgers, you're like, what happened? Hand it back: just give me what I...

Katie Robbert 36:52
...which Wendy's do you go to?

Christopher Penn 36:55
And the same is true of exploratory data analysis techniques. You don't have to keep rerunning them all the time; this is a health check on the way in. So: when you kick off a new client, when you kick off a new project, when you start working with a new tool. This is also a great thing to ask people about. One of the things that has happened in recent years, much to the dismay of folks who've been in the space for a while, is that there's a whole bunch of folks who have taken the six-week crash course in becoming a data scientist, which is just like a six-week crash course in becoming a dentist. I think these techniques are a great way to assess where somebody is. So if you're hiring a data scientist, or you're hiring a data science agency or an analytics agency, these are questions you might want to ask: tell me about your exploratory data analysis process. Tell me how you evaluate a new tool. Tell me how you think about a dataset that you didn't generate; how do you analyze it? And likewise, when we start doing stuff, for example, I'm finishing up a paper on TikTok right now, the first thing I did when I got that TikTok dataset was stick it in and ask: what kind of health is the data in, and what does it have to say?

Katie Robbert 38:12
So John, I know one of the things that you do, specifically through the Traction model, is those A/B tests, experimenting with which channels are going to be most effective for you. What do you look for when helping a client? Do they have the data? Do they have the right things? Because I would imagine a lot of what we just talked about applies to those projects; you're doing that exploratory data analysis, whether it's formally or informally.

John Wall 38:43
Yeah, and lack of data is always the biggest problem, especially with smaller ventures; they just have some data, and they're taking a guess. But it's interesting, too. I think there's a bigger thing there in how, so much of what you were talking about, Katie, with statistics: as you're starting out, you kind of always go with the normal distribution, like that is the understanding of how things work. And the reality of marketing is that it's almost always a power curve; there are only three or four winners in a space. So you have this fundamental mismatch where all the tools out there are working in normal-distribution world, when the reality is the marketing team only needs to find one program that works, and if they find the one that works, it's going to crush everything. So yeah, just keep panning for gold; use the tools to try to direct you in the right direction. But it is still kind of an all-or-nothing thing. I'd love to say that you're going to test 10 things and four of them are going to work on average, but it usually works out that you're going to try 10, and you're either going to destroy it with one or go out of business.

Katie Robbert 39:52
The other thing that occurs to me, Chris, as you were going through all of this: one of the questions that we get a lot, probably in every single conversation about any kind of coding, data science, or machine learning, is, will the machines take my job? And I think you just very clearly demonstrated that the answer is no, because there was so much human intervention needed to tell the code, the machine, what output you needed, and you have to keep going back and revising it. That's part of that Agile methodology: you, as the human, need to determine the purpose, and then you, as the human, need to evaluate the dataset and ask, are there row numbers? Is there a summary row? Are there columns that are strings when they should be something else? Unless you, the person, tell the machine that's what's supposed to happen, it doesn't know; it's not going to guess at it. And so I think you've just made a really good point as to why, at least in this context, humans are very much still needed to run the machines.

John Wall 41:06
Like they always say, the machines aren't going to take your job; it's going to be a person who's better at using the machines than you are. Yeah, I would agree with...

Christopher Penn 41:13
...that. When you think about that question, the answer, logically, is: will appliances take the chef's job?

Katie Robbert 41:24
Someone still has to put the tequila and the limes in the blender and push the start button. Maybe it automatically senses that there's stuff in there and starts on its own, but someone still needs to get the tequila.

Christopher Penn 41:39
So that is going to be the answer for the foreseeable future: no, the machines are not going to take away the jobs. They will take away tasks. They will do individual tasks better, and you may overall need fewer people to get the same number of tasks done, but a machine is not going to take your entire job. There's just no way for a machine to be that multidisciplinary.

John Wall 42:02
All right, sentient margarita machine, there's something there that can be...

Katie Robbert 42:07
...the sentient margarita machine and checkmydata.com: those are the two action items that came out of this episode.

Christopher Penn 42:14
All right, I think that's enough for this week. Thanks for tuning in, everyone, and thanks for watching today. Be sure to subscribe to our show wherever you're watching it. For more resources and to learn more, check out the Trust Insights podcast at trustinsights.ai/tipodcast and our weekly email newsletter at trustinsights.ai/newsletter. Got questions about what you saw in today's episode? Join our free Analytics for Marketers Slack group at trustinsights.ai/analyticsformarketers. See you next time!

Transcribed by https://otter.ai



Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
