In this week’s In-Ear Insights, Katie and Chris discuss whether AI and machine learning will imperil the career of the data scientist. What is data science, and how much of it can be automated and handled by machines? Tune in to find out.
Subscribe To This Show!
If you're not already subscribed to In-Ear Insights, get set up now!
- In-Ear Insights on Apple Podcasts
- In-Ear Insights on Google Podcasts
- In-Ear Insights on all other podcasting software
Advertisement: Google Search Console for Marketers
Of the many tools in the Google Marketing Platform, none is more overlooked than Google Search Console. Marketers assume it’s just for SEO, but the information contained within benefits search, social media, public relations, advertising, and so much more. In our new Google Search Console for Marketers course, you’ll learn what Google Search Console is, why it matters to all marketers, and then dig deep into each of the features of the platform.
When you’re done, you’ll have working knowledge of the entire platform and what it can do – and you’ll be ready to start making the most of this valuable marketing tool.
Sponsor This Show! Are you struggling to reach the right audiences? Trust Insights offers sponsorships in our newsletters, podcasts, and media properties to help your brand be seen and heard by the right people. Our media properties reach almost 100,000 people every week, from the In Ear Insights podcast to the Almost Timely and In the Headlights newsletters. Reach out to us today to learn more.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Need help with your company’s data and analytics? Let us know!
- Join our free Slack group for marketers interested in analytics!
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher Penn 0:18
In-Ear Insights, we’re talking about data science and its ability to be automated.
One of the things that has been a topic of discussion for the last couple of years is, with things like automated machine learning (AutoML) and auto AI, are data scientists going to be out of jobs? As machines are able to do these increasingly complex, repetitive tasks, will the job of the data scientist go away? I have some thoughts on that.
But Katie, what's your perspective on the ability for machines to make this particular career go away?
Katie Robbert 0:53
I think it's like any career, any job, where there are going to be tasks that are repeatable and, you know, make sense for machine learning to take over.
The piece that I see not going away, that you still need data science thinking behind, is coming up with the hypothesis and drawing the conclusions.
And I feel like those are the two pieces that a machine can't replicate; the machine can do all the other stuff.
And maybe that's like, okay, great, you go do that stuff.
I'm going to do that deep thinking of really coming up with the hypothesis, drawing the conclusions, and then figuring out what we do with all this stuff.
So I feel like that's true of any job, not just data science. But with data science specifically, because it is so heavy in data processing, I can see where there's concern about AI taking that over.
Christopher Penn 1:51
I agree with you completely, in that a data scientist really is four careers for the price of one, right? There is scientific thinking, there is subject matter expertise in an industry of some kind.
There are data engineering skills, and then there are math and statistical skills.
And I guess, technically, some coding skills in there, too.
So maybe it's five jobs for the price of one.
The coding stuff, yes, there's a lot of it.
And there have been some really incredible advances recently in the ability for machines to write their own code, human-readable code; the GPT frameworks can do this pretty spectacularly.
We've seen the same with IBM Watson Studio, where, with the auto AI feature, you can give it a data set, and it will spit back the code it wrote, that you can then edit.
The same is true for the data engineering side. There's a lot of data engineering that, honestly, should be automated, because it's not a great use of anybody's time.
We were running monthly reports this morning for our clients.
And, you know, a good chunk of the reporting process, probably 80% of it, is now automated.
We have scripts going out and gathering data, processing it, and running machine learning simulations and modeling on it.
But you’re right.
At the end of the day, all of these things are a lot like appliances, right? You have fancier and fancier appliances that can cook and blend and make soup and all these things.
But you still need a chef, right? You still need somebody to take all the outputs from the different components and turn it into a meal, and to have it be coherent.
Nobody really wants, like, an ice cream falafel sandwich.
Not that I know of, anyway. And if the machines don't have supervision, that's possibly what you'd get.
Katie Robbert 3:37
Well, and I think, if you just think about it at a very basic level: unless you're living in, like, a Jetsons house, or something from Pee-wee's Playhouse in Pee-wee's Big Adventure, you don't have those mechanical arms grabbing the eggs, cracking the eggs, putting them in the pan, turning on the stove. You still need human intervention to put the stuff in the blender, to put the stuff in the oven. The oven may do the cooking, but you still need to put the chicken in the oven.
And so I feel like it's the same thing with this question, the age-old question, which is, you know, only a few years old: will AI take my job?
Sort of. It's going to take aspects of it, but then that frees you up to really focus on the human things that the AI can't do and will likely never be able to do.
And you know, we've talked in different speaking engagements about what AI can't do in terms of passing judgment, forming relationships, demonstrating empathy, because those are uniquely human characteristics.
And so when you think about it in terms of what parts of data science can be automated: that scientific thinking, there's some of it that can be replicated in terms of if-this-then-that rules.
But it's so nuanced that the machine can't think about things it isn't aware of.
And there are new discoveries all the time, new techniques, updated processes; the machine will struggle to do that without human intervention.
Christopher Penn 5:29
And it will especially struggle to understand anything once you get outside of its very narrow context. AI is still very narrow, niche-focused; it can do a task really, really, really well.
But once you start getting into the fuzzy gray areas where tasks overlap, the wheels kind of come off the bus.
So, a real simple example.
If you look at, say, anomaly detection, that's in the new Google Analytics 4.
Google Analytics 4 is perfectly capable of highlighting an anomaly, saying, hey, our forecasting shows that what's happening right now is different than what was forecast.
And that’s the extent of what it can do.
It can't say, "and I think you should do this," or, "hey, I'll bet you this is because of this." We still need to provide that context ourselves. For example, every Monday morning, without fail, we get anomaly detection alerts on our Google Analytics 4 account, saying, hey, your traffic is up 80% from the previous day.
And we know why that is: because the newsletter goes out on Sunday.
And so people come into the office on Monday morning, they read it, they click on things, and surprise, we see an increase in traffic.
But the software, the AI, does not understand that; it has no clue that that is the context.
Now, if that happened on a Tuesday, we'd be like, yeah, okay, we probably should figure out what's going on,
because we didn't expect that; we don't know what the context is.
So in that example, a data scientist would have a hypothesis, right? The extra traffic is coming from a newsletter, which is true on Mondays.
And on a Tuesday, we'd be like, we don't know what the hypothesis would be; we have to do some exploratory data analysis, which you can see in last week's live stream, to try and figure out what happened.
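The anomaly-alert pattern Chris describes can be sketched in a few lines. This is a hypothetical illustration, not GA4's actual algorithm: the machine flags a deviation from a naive forecast, and the explanation (the Sunday newsletter) has to come from human-supplied context the model does not possess.

```python
# Minimal sketch of forecast-based anomaly detection (not GA4's real method).

def naive_forecast(history):
    """Forecast the next period's sessions as the mean of recent history."""
    return sum(history) / len(history)

def detect_anomaly(history, actual, threshold=0.5):
    """Flag an anomaly if actual deviates from the forecast by more than 50%."""
    forecast = naive_forecast(history)
    deviation = (actual - forecast) / forecast
    return abs(deviation) > threshold, deviation

# Human-supplied context the machine doesn't have: the newsletter goes
# out on Sunday, so a Monday spike is expected, not mysterious.
KNOWN_CAUSES = {"Monday": "Sunday newsletter drives Monday traffic"}

history = [100, 95, 105, 98, 102]   # last five days of sessions
is_anomaly, dev = detect_anomaly(history, actual=180)
if is_anomaly:
    explanation = KNOWN_CAUSES.get("Monday", "unknown - needs exploratory analysis")
    print(f"Traffic up {dev:.0%} vs forecast: {explanation}")
```

The detection step is mechanical; the `KNOWN_CAUSES` lookup stands in for the judgment a human still has to provide.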
Katie Robbert 7:17
And so, you know, this is a question I've asked you before, but do you need to be a true data scientist? Or can you be a marketer with really good curiosity?
Christopher Penn 7:36
The fundamental underpinning of being a scientist of any kind is knowing and applying the scientific method.
So you can absolutely be a marketer, you can be a grade schooler, and be using the scientific method, applying a scientific mindset to things, and that makes you a marketing scientist, or a data scientist, whatever. You don't need a PhD, you don't need to be able to code. You do need to be able to say, I've noticed this thing, I observe this thing.
I want to know more about this thing. I hypothesize that if this thing is happening, then this is the reason.
And then you go to prove or disprove that.
And you follow a rigorous, repeatable process to do so. That's what makes you a scientist.
It's not credentials.
It's not technology.
It's the scientific mindset: that you want to prove something, and do so in a repeatable process that will stand up to scrutiny.
Katie Robbert 8:29
So you’re talking about the scientific method, which has been around for a very, very, very long time.
Why is that not something an AI can just replicate?
Christopher Penn 8:43
Because it requires a lot of contextual thinking.
A hypothesis really is an if-this-then-that statement, right? If traffic is up 40%, then the reason is probably email marketing.
So just in that very simple statement, a machine can measure that traffic increase.
But it's the explanation part, the why part, that it can't do, because machines don't understand anything.
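That split shows up clearly in code. In this hypothetical sketch, the machine can evaluate the "if" half of the hypothesis (traffic is up 40%), but the "then" half, the reason, is a string a human supplied; nothing in the program derives it.

```python
# Sketch: a machine can measure the condition of a hypothesis,
# but the explanation is human-supplied context, not computed.

def traffic_change(previous, current):
    """Percent change in traffic between two periods."""
    return (current - previous) / previous

previous_sessions = 1000
current_sessions = 1400

change = traffic_change(previous_sessions, current_sessions)
if change >= 0.40:
    # The machine measured the "if"; the "then" is our hypothesis.
    hypothesis = "the reason is probably email marketing"
    print(f"Traffic is up {change:.0%}; hypothesis: {hypothesis}")
```

Everything computable here is arithmetic; the "why" never leaves human hands.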
I was reading a paper this morning about one AI chatbot
that is very popular,
and why it seems to have a very flawed memory. Like, you'll tell it the name of your dog,
and then 45 seconds later, you'll talk to it about your dog, and it won't know the name of your dog.
And it turns out the reason why is that the architects did not build it with any kind of volatile memory.
It literally is like someone with a substantial head injury, just having no short-term memory.
These systems don't understand anything.
They can see patterns, and they can repeat those patterns.
But they don't understand what those patterns mean.
And our ability to reason is what makes us the indispensable part of these processes: our ability to have that long-term memory and link memories up.
So Katie, when you write a blog post about something, you remember what you wrote, and then you can reference it later on; you can build on it. Machines cannot do that, because they don't even understand what it is they're writing.
Katie Robbert 10:13
Well, and I think it's interesting that you started talking about this aspect of it, because I think there's a misunderstanding with predictive text.
A lot of the home assistants, like Siri, or Google, or Alexa, or whatever you've named it, a lot of that is just predictive text.
And so you're not really having a conversation; the machine hasn't suddenly become sentient and feeling and understanding and aware.
It's been programmed by a human to say, if the person says this to you, choose from these sets of responses, or if the person says this, go into your warehouse of information and pull out the answer to the thing.
And so that really is the limitation. Whereas we as humans, to your point, if I'm writing a blog post, I might be thinking back on my entire life of memories.
And I might be like, oh, this one anecdote is really, really useful to drive home the point that I'm trying to make. But unless I give an AI my whole body of memories from my whole life, it's not going to be able to replicate that.
I mean, I may not even be aware of what memories I have until any given moment, like, oh, that's right,
I forgot about that thing
until right now.
And so how can I program AI to know what I know, when half the time I don't even know what I know?
Christopher Penn 11:53
And it's even worse than that, in some ways, because everything that machines do when it comes to prediction is essentially looking for probability.
What's the probability that the next word in my sentence is going to be X? What's the probability that the next location I'm going to search for in Google Maps is X? They route inquiries that way and build these probabilities. That's fine
if you know what it is that you've been doing, if you have a hypothesis that is identical to, or very close to, previous hypotheses. If you have something net new, that's never been seen before,
again, all this stuff goes off the rails, because now you're dealing with the unexpected, and the machine doesn't know what to model.
So it's going to choose things that it thinks most closely resemble it, but are not, in fact, actually what's happening.
So if you picked up your coffee right now, and it tasted like ketchup, right, you would be like, Okay, what the heck just happened?
Katie Robbert 12:50
Not just like ketchup, for the record.
Christopher Penn 12:54
You would know, as a human: okay, something's gone wrong here, let me try and figure out what. A machine would immediately start saying, okay, well, if this is ketchup, what are the other things that are similar to ketchup, and give you a list of probabilities for how that might have happened.
But it could just be, oh, you know, you were not awake
when you were making coffee, you grabbed the wrong thing out of the refrigerator, you grabbed ketchup instead of creamer. The machines, because they don't understand anything, cannot make that hypothesis; it's just outside of their scope.
And there's a great danger in artificial intelligence and machine learning where, if we are relying so heavily on these existing trained models of things that have happened in the past, we constrain our own creativity, because we're relying on things that have already been seen,
when we are, in fact, after something that's not been seen before. When you think about content marketing, and blogging, and social media marketing and stuff, we are constantly trying to create new things that have never been seen before,
because that's unique, original content.
And so AI is actually kind of a hindrance there, because if we're using it as our generative method,
we're not creating something new.
So, going back to the original question of whether AI is going to put data science out of business: part of science is exploration of what's new, and you can't explore what's new by solely looking in the rearview mirror.
Katie Robbert 14:23
Right? You know, it makes me think about the legal profession, which is a really good example of this.
So when a court case comes up,
let's just say, for example, I decided to break into your house and steal all your stuff,
and then I get arrested and go to court, the legal system is likely going to reference a bunch of other cases similar to mine as precedent: here's a case that's very similar to what Katie did,
and here's what happened, and here was the outcome.
So we're going to use this as the basis for a suggestion. But a good majority of the time, they are unprecedented cases.
And so they can't rely on what they did historically to determine what they want to do in this present moment.
And I see AI as in a very similar situation, in the sense of: yeah, it can look at the historical things that have happened, as you said, Chris, but it's never going to be an exact one-to-one.
And so you can use that as a guideline, but you can't necessarily replicate it exactly,
because there are no two identical situations.
And then I think the notion of creativity is a big one, because a question that's coming up a lot, with all of these systems that can write content for you, is: am I, as a freelancer, as a writer, as a content editor, going to be out of a job? Well, yeah, if you're okay with really crappy content, if you're okay with content that has already been written, if you're looking to churn out just, like, a million words that have no real value, then sure, AI can do that for you.
But if you want, Chris, to your point, something unique and thoughtful that adds value and hasn't necessarily been done before,
because you, as an individual, bring your own unique perspective to something, then no, AI will not take that.
Christopher Penn 16:33
And the same is true in data science.
If you are someone who is only doing analysis, which is looking at what happened, and you're not doing any hypothesis creation, then (a) you're not a data scientist, and (b) you're doing analysis.
And that is a task that can be left to machines for a good chunk of the what-happened part, because you're not doing anything new.
You're literally just processing data that's already there.
Now, the insights part, where we're trying to explain why: that part is going to be very difficult to replace, because, again, machines don't have the extra context, the understanding of why that pattern showed up in the data; they only know that the pattern was there.
So what is more likely to happen is that the many junior roles that sit underneath a data scientist, like a data analyst or a data engineer, may shrink, as a lot of their tasks, like, okay, do some PCA on this dataset, do some histograms to understand the shape of the dataset, are automated.
Yes, those are tasks that machines are 100% already doing. On last week's live stream, we got a chance to watch software inside of Watson Studio summarize a dataset and show outliers and histograms, the shape of the dataset.
I didn't need a data analyst to do all that; the machine just did it for me.
And I, as the data scientist, look at it and go, okay, yes, this passes the sniff test for data we can use, or no, this doesn't.
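The automated data-summary step described here can be sketched with the standard library alone. This is a hypothetical toy, not Watson Studio's actual pipeline: the machine profiles a numeric column and surfaces outliers, and the data scientist applies the sniff test to the result.

```python
# Sketch of automated dataset profiling: center, spread, IQR-based outliers.
import statistics

def summarize(values):
    """Profile a numeric column: mean, median, and Tukey-fence outliers."""
    q = statistics.quantiles(values, n=4)       # quartile cut points
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr     # Tukey fences
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "outliers": [v for v in values if v < lo or v > hi],
    }

sessions = [100, 95, 105, 98, 102, 101, 97, 480]  # one suspicious day
summary = summarize(sessions)
print(summary["outliers"])  # the 480 spike surfaces automatically
```

The machine finds the 480 spike; deciding whether it is newsletter traffic, a bot, or bad tracking is still the human's call.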
So I think in the short term, the data scientist career is perfectly safe.
I have concerns in the long term,
because as we automate more and more of those tasks, one of the risks we run is that, by having fewer of those jobs, our pipeline of qualified people, who eventually go from analyst to senior analyst to data scientist, shrinks, because there are fewer people at the bottom doing all the grunt work.
And so there are fewer people who will bubble up to the top eventually, over the span of years or decades.
And I think that's a real risk, because one of the challenges we already have in machine learning is you have a bunch of people who are writing code and building models who don't have the formal academic background, or the years in the trenches of dealing with weird data situations.
And so they're building models that are, in some cases, very biased, in some cases very flawed.
And they don't have the experience to know: I just built something that's dangerous.
And, again, if we don't have people in the trenches manually doing the stuff, at least for a little while, then they never know what to look for.
They never know where to fact-check their models, to go,
wow, you know, this result here isn't right,
there's a skewed data set going in, let's not proceed. I think there's a real risk there.
Katie Robbert 19:42
You know, without going too far down a related but sort of off-topic road,
one of the things that I find really interesting when you talk about this is those different job levels.
So you talk about the lowest-paid person doing the grunt work.
This is where I don't necessarily agree with you,
because I think that that structure in a company is what leads to people being dissatisfied and leaving.
So there's obviously a level of, you need to learn the job.
So there are certain tasks that you need to master before you move up.
But I feel like, when you box people in like that, to say, well, you're the junior person, so you do all the grunt work,
it doesn't allow that person to share their perspective and their thoughts and their ideas.
And I feel like that's the piece that's missing.
And that's the piece that would allow people to stay in those jobs,
and to continue to develop that critical thinking, to continue to develop that exploratory mindset.
We as a society tend to box people in and say, okay, you're the junior person, you do the grunt work, I don't want to hear from you, because you're not experienced enough.
But from my perspective: you're a human, you have the human experience; right or wrong, your perspective is going to be beneficial for us to learn from.
And I feel like that's gonna be one of the key pieces to making sure that that pipeline does stay rich with diversity, with people who are interested, people who are growing that experience, not boxing them into, okay, you're the analyst, you can only ever look at the data. I want to hear your perspective. I think we need to teach people these jobs, and simultaneously treat them as if they were already thought leaders in the space, and just say: tell me what you think, what do you see, what do you know, what have you learned? It's that human experience, that perspective, that's going to keep people engaged in that particular role.
Christopher Penn 21:54
And there's definitely importance to that. I think where the challenge will be is that, in, say, a large organization, you might have 100 data analysts today; as you automate a lot of those tasks, you might be down to, you know, 15 of them.
And those 15 people, instead of doing that same amount of work, will kind of be overseeing the machines, kind of what we've always said in our various keynotes: you're not going to be the first violin anymore, now you're going to be the conductor of the orchestra.
So even as a junior person, you may be overseeing some of the tools and technology.
And in that process, you may not learn the importance of what you're doing.
So, a real simple example: if you were the junior-most person on the line,
and your job is just to slice carrots, and you bring in a food processor,
now a person just puts the carrots in the general direction of it, and you get the same sliced carrots.
If you don't understand why it is that you're doing that, and how it fits into the bigger picture, then you lose that context of, here's why the carrots have to be this thin.
And to your point, that fits well with the idea of a person not being boxed into a single-task role, right? Single-task roles are on the endangered list.
If your job is a single task, your job is in danger.
And you absolutely should be teaching people: yeah, we're slicing carrots like this because we need to make a shepherd's pie that's not four and a half feet tall.
Nobody wants to eat a piece of pie that large.
And I think there's a path towards having some of that training and learning.
But I do think that something is lost when you don't, at least for a little while, chop those carrots by hand to understand, you know, why do you chop them this way, on a bias and not straight on? There's a reason for that.
At a place of good employ, someone will explain that to you; at a place that's badly managed, you'll be told to shut up and just slice the carrots.
Katie Robbert 24:00
And I agree with you. That was the point I was making: yes, it's fine to teach people why they have to slice carrots a certain way,
but teach them; don't just say, do it,
and don't ask questions, and we don't want to hear from you,
you're the junior-most person. Which, quite honestly, when I was the quote-unquote junior-most person on the team, I hated being reminded of, because it shouldn't be time in seat, it should be experience.
And I feel like,
again, this is sort of going a little bit off topic,
but I think the point is, AI is going to take a lot of those repetitive things, but you still need to teach people and give the context as to why you want to teach AI to do things a certain way.
So if the eventual goal is to have the machine slicing the carrots, that's fine.
But understanding the history as to why we slice the carrots by hand, and, you know, the history of how sharp objects came about, how they started as blunt objects in farming and agriculture, all of that context is just going to make for a richer experience when it gets to AI, because then you'll have all of that information to give it. Then you, as the person, as the human, will be able to do more with that output and tell a greater story around the thing that you're presenting.
Christopher Penn 25:27
Now, here's a curveball for you, then: in that example, the company that leans heavily into AI is probably going to have a fair number of people like, yeah, I don't have to worry about telling you to shut up and do your job, because now the machine does it for us. They will probably be the more incurious types: yeah, I have a machine,
so I don't have to tell the junior person to shut up.
Now I don't have a junior person anymore; I just have the machine to do it.
And so that brings up an entirely different conversation about what a corporate culture looks like in the age of AI and machine learning.
But, kind of bringing it back to the original topic, machine learning and artificial intelligence will make the carrot-chopping process a lot easier.
But you still need a chef, you still need a recipe, you still need somebody who has the good judgment to say, you know what, this time of year, this particular carrot that we use is not in season, so we're going to switch it out with something different,
and knowing that that will work, and not saying, oh, we can totally replace that with apples, it'll be exactly the same.
It may very well not be.
And those aspects are unlikely to be replaced by machine learning, at least not until machines start developing that sentience that you talked about, which is still a bit of a ways off, although there are some announcements this past week, and coming up in the next couple of weeks, about quantum machine learning that have everyone kind of scratching their heads going, I didn't think machines could do that.
So stay tuned for those announcements; probably the week of May 10, we may cover them on the podcast or on our live stream.
Katie Robbert 27:09
I think, at least for me, and this is one person's opinion, just one out of however many billions of people, I don't feel like AI will be able to replace that storytelling aspect, that, let me give you the history that's been passed down from generation to generation.
Let's sit around a bonfire and tell stories. I think that's the experience: as technology gets more advanced and AI is doing more things,
we as humans, myself included, are craving that connection from person to person, to say, okay, let's tell stories, let's understand it deeper.
Let me hear your perspective on this thing.
And then the AI can just go crunch the numbers and, you know, come up with the predictions and do whatever.
I think that that human experience, what we have control over versus the machines, is only going to grow deeper and richer.
That's my one,
again, one person's perspective. I think that storytelling piece, here's why we did it, here's the history of the thing, here's what we plan to do with it, and here's sort of the point of all of it, that is still going to be human.
Christopher Penn 28:29
I think in forward-thinking organizations, that will be the case. For organizations that are a little more aggressive in their culture, still very much command and control, where you don't get to talk to the person above you because you're not authorized to, I don't think that will be the case, and time will tell which ones survive in the marketplace. Certainly the past and current generations of data scientists
and organizational cultures still lean very heavily toward command and control.
Katie Robbert 29:06
Which, you know, really depends on the kind of business that you're in, and whether or not that has a place. But if you're in a marketing company, or really anything to do with consumers, travel and leisure, goods, B2C, that kind of thing,
there's really no place for it.
Again, one person's opinion.
And I've mentioned this numerous times: I have never subscribed to the "you can only talk to people at your pay grade, at your level."
I find that whole thing to be bullshit. I find it to be putting up silos where silos don't need to exist. I find it a hindrance to getting work done.
And I find it, quite honestly, just really insulting.
It's a very archaic, leftover way of thinking about how a business should be structured.
And I'm not saying that it needs to be a free-for-all, anyone can do
everything; obviously, there has to be some structure in place.
But restricting who you can talk to based on your title is crap.
That's my two cents on that.
Christopher Penn 30:11
And yet, that's exactly how AI functions today, because it is so narrowly focused on specific tasks and cannot communicate. In some ways, we have replicated our human culture in the way the machines address their problems.
So, good luck on solving that.
Katie Robbert 30:29
Well, and then we see that the AI gets it wrong a lot of the time, and the whole bias and ethics question, that's, again, a whole other topic. We, the humans, are the ones programming the machines, and we are so flawed. We're so screwed.
Christopher Penn 30:49
So I guess to wrap up,
Katie Robbert 30:52
So your job is safe for now.
Christopher Penn 30:55
Yes, if you're a data scientist, your job is safe for now.
As for the tasks, look forward to more and more automation of the individual tasks, to make your life easier and, frankly, to get more done faster.
I would like to think that we're a pretty good example of being able to do a lot of stuff with a very, very small number of people, and punch very, very far above our weight when it comes to using data science and AI together to accomplish stuff for customers.
If you've got comments or questions, or things that you've experienced in using artificial intelligence to automate parts of your data analysis or data science, let us know. Join our free Slack group: go to trustinsights.ai/analyticsformarketers, where you and over 2,300 other folks are asking and answering each other's questions every single day.
And wherever it is you watch or listen to this episode of the podcast, if there's a place you'd rather have it, we probably have it; go to trustinsights.ai/tipodcast for all the other platform choices.
Thanks for tuning in.
We'll talk to you soon.
Need help with your marketing data and analytics?
You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, Data in the Headlights. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new 10-minute or less episodes every week.