In-Ear Insights: Can Generative AI Replace the CEO?

In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris delve into the intriguing topic of artificial intelligence in leadership roles, specifically discussing whether generative AI can replace the CEO role. You’ll learn about the balance between automation and human leadership, questioning what roles can be automated and the implications of AI in management. They cover the emotional and practical aspects of leadership that AI might struggle to replicate, highlighting the unique qualities you bring to an organization as a human leader. Tune in to hear their insights on the future intersection of AI and executive leadership, offering you a thought-provoking perspective on how technology might reshape corporate management.

Watch the video here:

In-Ear Insights: Can Generative AI Replace the CEO?

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.


Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00

In this week’s In-Ear Insights, we’re going to chat a little bit about Office Space this week. If you haven’t seen the movie, it’s a classic from the 90s, highly recommended, a great popcorn movie. Specifically, let’s channel the Bobs, the consultants who come in to evaluate all of the employees, now updated in the context of artificial intelligence.

One of the questions the Bobs ask every employee is, So, Peter, what exactly would you say you do here? And this topic is from a comment I made on Threads.

A little while ago, I said, when you look at the roles that are going to have partial or potentially full automation with AI, the most expensive employee at most corporations is the corner office, right? Particularly in publicly traded corporations, where, if you look at SEC filings, you’ll see the CEO makes like 880 times more than the lowest-level employee in the organization.

And so if a company is thinking about using AI to save time, save money, and make money,

one of the questions that I had is, is that role something that could be, should be, or should not be partially or fully automated? So now generative AI is sort of asking the question of the Bobs, which is, what exactly would you say you do here? So Katie, as a CEO, admittedly not a publicly traded CEO making 880 times what the lowest-paid employee makes,

what would you say you do here in the context of generative AI? Is the role of a CEO in different organizations something that can be partially or fully automated?

Katie Robbert 1:42

I mean, yeah, I think any role has aspects of it that could and should be automated.

You know, it’s interesting, because when I saw your comment on Threads, which is why we’re talking about this, I was half joking, you know, that you were attacking me.

But obviously, that wasn’t true. You know, I started to think about, sure, there are things that someone in the C-suite, or someone in a leadership role, does that could be automated.

But then you run into the other side, where you sort of flip it: too top-heavy, too many doers, and then nobody is steering the ship, nobody is in charge.

And that was the first place my head went: great, if you automate the person out of that C-suite position, who’s in charge? You know, and, you know, joking, half joking.

Like, Chris, if you automate me out of a job, who’s going to keep you in line? The bots aren’t going to do it.

And so that was sort of what I was starting to think about: yeah, there are always aspects of what you do that should be automated.

But there are certain things that a good leader, not all leaders, but a good leader, brings to the table that you should not be automating.

And that’s primarily setting the direction, growing the thing, inspiring and encouraging employees, keeping them on track. Like, they can set the goals, but then somebody has to make sure that all of the people who are working towards those goals are working in the same direction, because without somebody in charge of that, it’s going to go in a bunch of different directions.

And a lot of things will get done, but nothing that is cumulative toward one specific goal.

Christopher Penn 3:33

I think the other aspect is a human aspect as well, which is, to call back a few episodes of the podcast ago, no matter how good we make it, it is still not a sentient, self-aware machine; it does not have agency of its own.

And I foresee little to no circumstances where human beings would willingly be managed by non-humans.

And I just don’t see people accepting the authority of a machine over them.

Katie Robbert 4:04

Well, and it’s interesting, because a lot of humans struggle with accepting the authority of another human.

And so to think that, you know, you could sort of make the argument, like, well, it’s easier to accept the authority of a machine. I mean, think about, you know, if you want a very small example, take an ATM: you are taking direction from the machine when you walk up to it, because it walks you through a set of prompts.

Therefore, you’re not in charge; the machine is. And you can think of a lot of other examples where, unless you do exactly what the machine says, you know, like how you use a blender or a frying pan or any of your other cooking analogies, you’re not the one in charge.

It gives you a false sense of authority.

And so in some ways, we’ve already given ourselves over to machines. For example, again, a really simple example: you can’t use your phone unless you turn it on.

So you have to do what the machine is telling you to do, which is turn the phone on first; then you have to connect it to Wi-Fi, then you have to log into it. You can’t skip those steps.

And so the machine in some ways is already telling you what you have to do.

But if you take it to, like, a bigger conversation of, well, now the machine is going to tell me not only how to do my job, but what to do and when, you know, you could say that those pieces already exist.

So theoretically, we’re already being governed by what the machines do.

We as the human managers are just sort of like, okay, well, the machine says that this is what we need to do next.

So now let me convey that to the people.

And I think, as I’m talking this through, I feel like I’m talking myself out of needing human managers.

But I also feel like I know that that’s not the solution.

Christopher Penn 5:57

No, and I think the closest analogy that I can come up with here that displays the average human’s reticence about this sort of thing, when it comes to agency, is the choices that we’re allowed to make.

We have the technology now, today, to have fully self-driving cars. We may not have the regulatory environment that permits them to be on the road, but the technology exists, and it’s very good.

It’s so good that, even today, they are safer than human drivers. And yet the number of people who want a car that they no longer have control over is approximately zero.

And people just do not feel safe totally handing control over, just getting in this vehicle and letting it do its thing. It is possible today to fly certain passenger jets completely by machine; the pilot can be there, you know, in case something weird goes wrong.

But fundamentally, with today’s autopilots and fly-by-wire, the jet can be controlled, beginning to end, entirely by machine.

And yet the number of airlines that have pilotless flights is also zero.

Because even though the technology is great, there’s that human emotional sense of, I don’t want to be completely out of control.

Right, the pilot is there to make us feel better.

Right? If you go to the EU, there are a number of cities where public transit is entirely machine-run.

However, there is a human sitting up front to give the public a sense of security, like, oh, there’s someone driving. But they’re not.

They’re decorative.

They’re decorative humans.

When we talk about automating the CEO position, automating leadership positions in a company, I think that’s sort of going to be the dividing line as well: people will simply not feel comfortable handing over their sense of control to a machine, no matter how good the machine is.

Katie Robbert 7:59

Well, I think there are a couple of things there that we haven’t talked about.

So one is, I think they’re more than just decorative humans; technology is not perfect.

You know, before this, we were just talking about how I can’t get my TV to connect to the internet.

Like, that’s technology that is so commonplace that it should just work.

And it doesn’t work.

And technology is not perfect, it does not work 100% of the time, which is why you need humans.

Now, in terms of you know, this idea that humans need another human in place to feel safe.

What you’re not talking about is that we’re still human.

And so even if a machine, a piece of software, whatever you want to call it, is giving us our set of instructions, we still feel things about it.

And we still need to process those emotions and do something productive with them or understand them.

And that’s not what a machine is going to be really good at.

You can simulate it, you know, to death.

But at the end of the day, a human to human interaction is the only interaction that’s really going to help you with those emotions that you’re having about being led by a machine.

And so even if you automate 99% of a CEO’s job, that 1% is the emotion. That 1% is the, let me help you understand what’s going on with the machines, or let me make sure that the machines are moving my entire organization in the right direction.

And so yes, it’ll probably change like the value of the role.

Or maybe it won’t, because maybe that’s still a specialized skill, where not everyone is meant to be in charge.

Christopher Penn 9:56

That’s true.

There are a lot of other parts of the CEO job, particularly, again, at publicly traded companies and large organizations, which are communications roles where the CEO does not actually need to do the thing.

It’s tradition for the CEO to do that, but a machine could do that.

It’s funny, we’re doing this the day after I was giving a talk.

So I have some pre-loaded videos; let me show a very quick sound snippet here of a generated version of me.

Imagine a world where your marketing strategies are supercharged by the most cutting-edge technology available: generative AI.

Generative AI has the potential to save you incredible amounts of time and money. And you have the opportunity to be at the forefront. Get up to speed on using generative AI in your business in a thoughtful way with our new offering, Generative AI for Marketers, which comes in two flavors, workshops and a course.

Now, by the way, if you want to take that, go to TrustInsights.ai. But that is using a generative system: I gave it a prompt, and it already had a training video, and it did the thing.

And it did the thing.

When I think about watching investor earnings calls and things where the CEO is talking through, they’re just basically reading aloud the financials.

That’s something where a system like that could use the CEO’s avatar to deliver the same information.

But the CEO would not have to be there to basically read the financials like bedtime stories to a bunch of analysts. The CEO could be there to answer the questions when analysts have, you know, interactive questions. But for sure, for the 20 minutes of, and we’re proud to announce this quarter we did this thing, you don’t need a human for that; you can have a simulated human for that.

Katie Robbert 11:37

Which is fine.

But I don’t know, I feel like you’re kind of glossing over that someone still needs to be in charge, whether it’s of the humans or of the machines.

And so I would imagine, you know, and I mean, we’re already starting to see this with our own company: the role of the person in charge, the things that they are in charge of, is starting to shift.

And so sure, you can automate, you know, a CEO giving a presentation, but someone, a person, still needs to be there almost as checks and balances, like, is the machine doing the right thing?

So I personally, you know, and you can call this a safety thing, an emotional thing, whatever, would not be comfortable with you creating an avatar of me and giving it a script, because it goes back to my comment that technology is not perfect.

And when you least expect it, because it’s never at convenient times, technology will fail you.

And so perhaps there’s a glitch in the system, or perhaps, you know, the internet, you know, fails that day, or there’s a disruption to whatever.

And you’re banking on this technology giving this important information, when really it would have been faster if the person just sat there and did it themselves.

And then you wouldn’t have to be reliant on the tech and then you wouldn’t have to have a backup plan if the tech fails.

And then what if the tech goes sideways? And what if somebody hacks into it and interjects you know, commentary that you didn’t intend for the CEO to say, that’s all stuff you have to plan for when you introduce technology as a solution.

I’m not saying humans are easy, humans are equally as unpredictable.

But you have much faster recourse if I’m sitting in front of an audience.

And I start to say things that are not true about the company, or I start to say things that are, you know, opposite of what our values are, somebody can physically just come up and remove me from the stage and start to course-correct the conversation. Versus if you’re stuck with this piece of technology and somebody has hacked into it, you have no control over it; you have to wait for the whole process to finish and then figure out how to fix it.

Okay.

Christopher Penn 13:56

But I think there are going to be a lot of dependencies there.

Because, sure, as a point of context, you’re a good leader, you’re a good communicator, you’re a good manager; it is genuinely pleasant to work for you.

There are a whole bunch of CEOs who are not, who are the opposite of all those things.

And I feel like in those situations, even with technological glitches and risks, a machine is a preferable choice, right? As far as the CEO who comes in and goes off on, you know, crazy rants and rebrands his company as X and all these things, you have a bunch of leaders out there who are really bad leaders.

And as we saw in the BCG study, machines are good at helping great people get a bit better, you know, incremental improvements, but they’re great at helping bad performance become above-average performance.

And when we think about that applied to the role of the corner office, I think there’s tremendous opportunity there.

Assuming they’re willing, take a bad leader and at least get them to be a mediocre leader.

Katie Robbert 15:06

And I think that that’s fair.

And you know, not all, not all people who are in a leadership role should be in that role.

That is 100%.

True.

I mean, and that’s sort of, you know, to your point of, like, that dividing line: you have to start to separate out the good leaders from the not-so-great leaders, and then the not-so-great leaders from the really poorly performing leaders, the people who have, you know, let power go to their head and are just sort of dictating things that make no sense.

So yes, those people, I 100% agree, should have AI replace them.

Because to your point, it is better than dealing with a toxic culture.

The risk is who is programming the system.

So you have to make sure that you’re breaking the toxic cycle, and that you’re bringing in individuals and data and processes that don’t then perpetuate that toxic culture, but then you’re actually fixing it.

So there’s like, you know, that is a whole other conversation.

But if we keep it very surface-level of, can you automate a CEO’s job? The answer is yes.

100%.

Yes.

You know, I can sit here and argue with you that there are things that I do that a machine will never be able to do.

But that’s just not true.

I would say a machine could probably do most of what I do, if I allowed it to. Does that mean that we should? Not necessarily. It just means that we could.

So in those situations where you have a really bad leader? Yes, I think it’s highly appropriate.

Christopher Penn 16:47

Yeah, in which case the leadership would have to come from, like, a board of directors to mandate how the machine should be operating.

And then a support staff to administer the machine and make sure it’s doing what it’s supposed to be doing.

But I could definitely see, when we think about some of the fun movies that have been made over the years, like Bharat and stuff like that, you could definitely have a machine that would be a better leader than a bad leader.

As long as there were mechanisms for people to recognize that that machine has authority over them, and they have to comply with its decisions.

That could work.

I wonder, and I’d be curious as to your opinion on this: how far away are we before we start to see companies that try that as an experiment, that say, we’re going to have a machine-made executive officer in this company that will make decisions and report to the board of directors? You know, I can’t imagine there isn’t some startup in Silicon Valley going, huh, what if we let machines do this instead?

Katie Robbert 17:56

I would not be surprised if that’s not already happening. I think what is likely happening is that companies aren’t talking about it; they’re not sharing it, because they don’t want to call attention to it.

Because, to your point, humans need to recognize the authority.

I mean, we as humans already have trouble recognizing authority in our day-to-day life. You know, a simple example:

Like, maybe my husband comes home and says, Hey, I need you to do this thing.

There’s a 50-50 chance that I don’t feel like doing it.

So I’m gonna say no, you know, but like, it’s just a very simple example.

And then think about when you start to bring that into a workplace, and people are told, hey, you have to do this. Now, I respect and love the person that I’m married to.

But sometimes I just don’t feel like listening.

And vice versa.

He doesn’t feel like listening to me either.

And that’s just human nature.

Right? So when you start to amplify that in a workplace, and you have employees who don’t, you know, respect their managers, who don’t respect the C-suite, and you’re still asking them to do things.

And then you say, You know what? Humans aren’t working.

Now you have to listen to a machine. The person who is already bad about recognizing the authority of a human is 100% not going to say, oh, great.

So now a machine is going to tell me what to do? A machine is smarter than me? Like, I just don’t see that going well.

Christopher Penn 19:24

It probably wouldn’t, unless you just sit down and really focus in on those user stories in the 5P process to identify, like, here’s how your job is going to be better.

Right? So when you come in on Mondays and Tuesdays and Wednesdays and Thursdays and Fridays, someone is not going to pull you into the center of the room and scream at you for 45 minutes in front of everyone else. That’s going to change immediately.

Exactly.

Someone is not going to hold you to arbitrary metrics at the last minute that you have absolutely no visibility into until it’s time for your review.

I think, depending on how bad the leader is, a contrast of, here’s what the machine will be doing instead, might be palatable compared to what you’re used to, the toxic dude who’s just, you know, being a jerk.

Katie Robbert 20:14

How is this, so, you know, we’re talking about, like, bigger, theoretically higher-value tasks, but how is this different from companies that use AI for their customer service, for their chatbots? I mean, it really isn’t.

That’s right, because their goal is to not make you believe you’re dealing with a robot, but that you’re dealing with a human.

And so they try really hard.

So this is why I say, I would be surprised if there aren’t already companies that are testing, you know, the automated executive; they’re just not telling you.

There’s no physical robot sitting behind the desk, you know, pretending to type all day.

It’s just, you know, you’ve just never actually met the person.

Maybe they made an avatar of the person.

Yeah.

And so it could be synthetic human.

I mean, for all we know, Chris, like, I’m talking to your avatar right now.

I can’t be 100% certain.

Christopher Penn 21:06

This is actually something that came up recently at a talk I was giving, because it was a video talk like this.

And after I showed that clip of the AI-generated version, the person I was talking to was like, well, how do I know that I’m really talking to you?

So you don’t. You don’t know that.

Katie Robbert 21:26

I will say, though, the second I saw it, because I saw it earlier today, I knew immediately that wasn’t you.

But that’s because I’m a people person.

And my job is to read people.

And there was enough difference in the way that the machine spoke, the way that it actually formed your mouth when speaking, that I was like, that’s not Chris.

That’s 100% not Chris.

Yes.

But not everyone can tell.

Christopher Penn 21:54

Exactly.

And its expressions don’t necessarily match the tone that it’s speaking with.

Now, here’s an interesting version.

This is a translation.

So I want to see if this checks out for you, if this passes or doesn’t pass.

[A machine-translated clip of Chris speaking in another language plays.]

How did that one do? Oh, that’s not you.

That was me, but translated.

Katie Robbert 22:25

But that’s what I mean.

It’s like, it wasn’t you speaking the other language.

But again, this because I’ve been working with you for so long.

I know what it sounds like when you do other dialects or speak other languages.

And that was way too smooth.

But exactly. And I mean that in the way of, your first language, your primary language I should say, is English.

And that interpretation, that translation, came across as if your primary language was something else, and you’re not a native speaker of anything but English.

And so that was immediately, like, that’s too smooth.

Christopher Penn 23:05

Right? Exactly.

That was Ukrainian, actually. I did that and then handed it to some Ukrainian friends.

They were like, that’s eerie, because they also all know I don’t speak a word of Ukrainian, and that version sounds native.

That’s awesome. But I’m not.

But that’s an example where it was taking the English words and just rewriting them, and it captured the tone and it kept the facial expressions matching, whereas in the purely generated version, you can tell there are mismatches between the tone of voice and the facial expressions.

Katie Robbert 23:39

It was a little Max Headroom; there was just enough glitchiness that you could tell that it wasn’t truly human.

And then because I know you so well, the way in which it said certain words, I was like, that’s not right.

But it could pass for people who don’t know you that well.

Someone who just sort of casually knows you, or maybe saw you speak one time, would have a very hard time distinguishing between that version and the real Chris Penn.

Exactly.

Christopher Penn 24:08

So if we think now about, you know, the role of the CEO and whether a machine can automate that, I could definitely see cases where the CEO needs to make some kind of speech or announcement or whatever, and they just key the stuff into the software, it produces it, and then that video goes out to employees.

We used to work at an agency where the CEO had this deathly fear of public speaking and did not enjoy it.

And, you know, I think if he saw that technology today, he’d be like, I’m all over that.

That is how I’m doing all my, you know, from-the-corner-office videos from now on. I’m just going to give it to the machine and have it do it instead, and it would actually be better.

Katie Robbert 24:46

Well, I was gonna say, the other people involved who had to rein him in would probably be happy about that as well.

Christopher Penn 24:51

Exactly. But it would be better because, again, that was a task in his role of being CEO that he did not enjoy and was not good at.

Katie Robbert 25:04

Well, you know, and that’s where you sort of have to go back to really thinking case by case about who the leader is. I enjoy communicating; I enjoy, you know, the things that we’ve sort of talked through that you could theoretically automate.

So, you know, do I want to see what an automated version of me looks like? Absolutely.

You know, I want to sort of layer it and have it side by side with Katie GPT.

And sort of see what that looks like.

So of course, I want to experiment with it.

But I, at least right now, I can’t say for certain, but at least right now, I can’t see using it regularly.

You know, maybe we could try it out as an experiment to do like, you know, our end of year like Season’s Greetings or something.

That could be interesting.

That could be an interesting experiment.

But otherwise, I would still feel more comfortable, flaws and all, you know, letting my Boston accent come out randomly.

I would rather it be me delivering certain things.

Yep.

Christopher Penn 26:11

I could totally see that for holiday greetings, where you have, like, 80 messages to do.

You just do the avatar training, and you put in the 80 scripts and just generate, because it’s all gonna say the same thing. It’s gonna say, hi, this is Katie from Trust Insights, wishing you a happy holiday season and a prosperous 2024, over and over and over again, with that person’s name and their company.

And by the time you get to video 20, you’re like, shoot me. But at least it’ll get it right every time.

Katie Robbert 26:38

Well, here’s the thing: what level of QA do you have to build in? Or do you just assume that the machine is getting it right every time? Because this is part of the process that you really have to, you know, factor in: just because the machine can do it doesn’t mean that it’s getting done any faster.

Because let’s say, you know, we record 20 videos. I would want to go back and watch every single one to make sure that it didn’t mispronounce somebody’s name, or glitch, or do something.

So like, in my world, now, it’s just taking twice as long,

Christopher Penn 27:09

Right. And those are parts of the 5Ps, and doing scenario planning, where you absolutely have to budget time for that and say, what is the level of QA? And are there things that you know it’s going to get wrong? So for example, I know the underlying voice technology that powers those things, so as part of the process, we would take a list of the names, put them all in, and have the software read them all out, just the voice part, and go, okay, we know this name is going to be a problem.

Katie Robbert 27:41

And I think that’s part of the whole planning, definitely. I mean, we were talking about this when we were building the custom GPT model, when we were building Katie GPT: as this technology becomes more and more readily available, people are just going to start, like, pressing buttons and putting in videos of themselves.

And, you know, pretending that they’re native speakers of other languages, without any real planning.

So, you know, you happen to have friends in Ukraine who could fact check the translation.

But if you didn’t, you would be like, Okay, I don’t know if I got half of this, right.

So what I’m trying to say is, you know, happy holidays.

And what it’s really saying is, you’re a terrible fart face.

And you don’t want to be sending that off to people without double-checking it first, because those are two very different messages.

Christopher Penn 28:32

They are. I did a version in Polish, actually, and sent it to some Polish friends. They were like, yeah, you sound like an American speaking Polish; it’s understandable, but if the goal is to sound native, you don’t.

Katie Robbert 28:45

Well, and that’s, that goes back to technology is not perfect.

Technology can do a lot of things.

But, you know, I personally, and I could be in a very small group of thinkers here, think it’s better to be authentically human with mistakes than it is to try to make things perfect with technology.

Christopher Penn 29:10

And, you know, we were talking about this on LinkedIn the other day: one of the hallmarks of authentic content is that, yeah, it’s gonna have screw-ups. And in a world where you can’t necessarily trust what you see on first pass, one of the easiest ways to judge authenticity right now is to look for the screw-ups, to say, oh, I said the wrong word.

You know, I was coughing, or, you know, doing strange things, all the things that machines won’t do.

In the same way that if you look at the text that ChatGPT, for example, puts out, it generally doesn’t make grammatical mistakes.

It generally does not put an apostrophe in "your" where there shouldn’t be one. It generally writes coherently and well.

And it’s absent those human flaws, like misusing punctuation. It generally does not misuse punctuation, whereas somebody who is, you know, flinging out exclamation points and semicolons like Halloween candy, and putting periods in weird places, those flaws are actually part of that authentic human experience. It’s the difference between a natural diamond and a synthetic diamond. A synthetic diamond is perfect; you put it under a microscope, there are no flaws.

And all the angles are perfect.

A real diamond has flaws in it, because it was made by a natural process.

And it’s one of the reasons why synthetic diamonds cost much less than real diamonds: because they are literally flawless.

So part of the value that you bring, and part of the test for authenticity with these tools, is: are there subtle flaws in there that are uniquely yours?

Katie Robbert 30:45

And this is where, you know, I mean, even just to bring it down a few levels to people using generative AI to create blog posts, you start to run into the whole sameness issue.

Very vanilla, very, you know, middle of the pack. If you introduce all of this technology to replace C-suite team members, you’re going to have a bunch of companies that look identical, that have no competitive advantage over the others, because you don’t have that benefit of human insight, or, you know, spur-of-the-moment ideas, or, oh, I don’t know, I had this crazy dream last night, and maybe we can talk it through and let’s try this thing.

Like you lose that innovation.

Because all the machines are now doing everything as predicted, very straightforward.

So all the companies that are now being machine-led are all doing the exact same thing.

Christopher Penn 31:38

I can definitely see that. The counterargument I would make there is that that happens with humans, too, particularly with, you know, mediocre to poor leaders; they all just copycat.

And so, yeah, you get a whole bunch of companies. Go on Amazon and search for any product.

And you’ll see, like a legion of very strangely named copycat products that are all exactly the same, just like $2 cheaper.

Katie Robbert 31:58

Oh, and, you know, but to your point earlier, that may be a better alternative to a toxic leader, a toxic culture. And maybe, you know, if used correctly, it could be sort of that, like, intermediary as you’re trying to get the human side of things, you know, straightened out. At least you have some oversight, some leadership, while you figure out the next step. So your team members, your employees, still have direction, still have accountability, can still move, you know, and stay focused on one thing with the machine oversight, while you sort out the human innovation, ideas, all that sort of stuff.

So I can definitely see where there’s a benefit.

I don’t think we should replace humans 100% with the machines, because, as we talked about, you have those flaws, that uniqueness, that innovation, that spur of the moment, that unpredictability. A lot of that is what does make a great leader, if, you know, used properly.

Exactly.

Christopher Penn 33:02

I think the model that probably will work best for most companies, with, you know, the C-suite, with leaders, with managers, is what Microsoft calls the copilot model, where the software isn’t doing the thing.

But the software is essentially there as your personal assistant all the time.

So it’s saying like, hey, you’ve got a meeting coming up.

And here’s the five things that you’re going to be asked to talk about. Hey, this is what I saw in your email the other day; you need to respond to these three people, because you’ve designated these as the most important people that need a response as soon as possible. Hey, the last memo that you drafted could be misinterpreted as sexist; you should probably revise it.
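The copilot behaviors described here, surfacing upcoming meetings, nudging you about important unanswered email, flagging wording worth a second look, can be pictured as a simple rule-based loop. The sketch below is purely a hypothetical illustration, not how Microsoft Copilot actually works; every name, rule, and data structure in it is invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Email:
    sender: str
    subject: str
    body: str

@dataclass
class Meeting:
    title: str
    start: datetime

def copilot_suggestions(now, meetings, emails, vip_senders, risky_phrases):
    """Collect gentle nudges the way a copilot-style assistant might:
    meetings starting within the hour, VIP email awaiting a reply,
    and drafts containing phrasing that could be misread."""
    nudges = []
    # Calendar rule: anything starting in the next hour gets a heads-up.
    for m in meetings:
        if timedelta(0) <= m.start - now <= timedelta(hours=1):
            nudges.append(f"Meeting soon: {m.title} at {m.start:%H:%M}")
    for e in emails:
        # Priority rule: senders you designated as important get surfaced.
        if e.sender in vip_senders:
            nudges.append(f"Respond to {e.sender} re: {e.subject}")
        # Tone rule: a naive phrase scan, standing in for real analysis.
        for phrase in risky_phrases:
            if phrase in e.body.lower():
                nudges.append(f"Reread '{e.subject}': the phrase '{phrase}' could be misread")
    return nudges
```

Note that the function only returns suggestions; nothing in it enforces anything, which matches the take-it-or-leave-it character of the copilot model as described here.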

So having this copilot constantly, not watching over you, but watching out for you as a CEO, saying, you know what? That’s probably not the best idea.

It’s like Clippy, the revenge of Clippy.

It’s

Katie Robbert 33:50

interesting, though, because what you’re describing is what we all try to get out of: being micromanaged.

But the way that Microsoft is positioning it is no, no, no, we’re just here to support you.

But really, what you’re describing is now the machine is micromanaging.

Every single move that you make, whether you realize it or not. Because you’re like, well, okay, it told me my schedule, my meeting notes, that’s all great.

And then it just continues to do it.

And so you no longer become capable of doing these things on your own.

Christopher Penn 34:20

And this is the difference between human micromanagement and machine micromanagement, which goes back to our discussion about leadership. These things already micromanage us, right? You take away someone’s phone, and they’re like, I don’t know what to do anymore.

But a human micromanager generally has some kind of role power over you, in that they can make your life unpleasant, and generally has an assertiveness that the machines cannot, right? So Microsoft Outlook can say to you, hey, that looks like an inappropriate email.

But you are not obligated to follow its instructions.

You’re not required to follow its instructions. So in that respect, it is still micromanaging, but it doesn’t have the ability to enforce; it can just give out suggestions.

Whereas a human micromanager is like, no, move the mouse to the left, to the left, like,

Katie Robbert 35:13

Well, no, and that’s true. It’s a lot easier to ignore notifications, because you can also set the setting of, like, I don’t want to see these notifications anymore.

It’s like, okay. Whereas, you know, Chris, you can say to me, I want you to stop asking me what I’m doing every day, and I’ll be like, that’s cool, it’s gonna happen anyway. You can’t stop it.

So

Christopher Penn 35:33

I think we can arrive at the conclusion that, for sure, generative AI and AI systems in general will be copilots and assistants, and not micromanagers. Maybe micro-suggesters, that you can take or leave, but if you do take them, they will probably help improve the things that you’re less good at over time.

But it’s unlikely that we will see just wholesale replacement of entire executive positions.

Partly because people are not comfortable with that.

And also partly, if we’re being totally honest, the people who are in power want to stay in power, because power brings a lot of things, like wealth.

And those are people who are not going to want to give up that wealth and power to a machine.

But you know,

Katie Robbert 36:19

being in charge is not all it’s cracked up to be.

But you know, I can understand what you’re saying.

But again, that’s sort of what separates me from other, you know, leaders: it’s, you know, I’m not looking to be the almighty, all-powerful. I’m not looking to be the most wealthy. I’m looking to effect change in a positive way.

And once that stops happening, then I should no longer be in this position.

You know, but, you know, you’re absolutely right.

If you start to take away that authority from the humans, they’re going to have a really hard time letting it go; they probably won’t. But I think that’s where you can start to see that trickle down within the company and the offerings and the services, which companies are laggards and which companies aren’t.

And

Christopher Penn 37:04

those companies that are the most flexible, and the most agile, and the most adaptable, will win.

Because that’s essentially the essence of evolution.

Evolution doesn’t favor the strongest; it favors the most adaptable to any given situation. And as the environment that we do business in continues to get even more chaotic and even faster paced, that adaptability is going to be what separates the winners from the losers.

So

Katie Robbert 37:30

should I worry that behind my back, you’re trying to automate me out of my job, Chris?

Christopher Penn 37:35

No. If I’m going to do that, I will tell you to your face.

I’ll promote Katie GPT. I

Katie Robbert 37:41

appreciate that.

I, you know, I like the heads up. Just give me 30 days.

Exactly.

Christopher Penn 37:45

If anything, I’m gonna try to automate myself out of that part of my job, so I can just go off and, you know, talk about AI all the time.

If you’ve got plans or are thinking about the aspects of your job you want to automate, or you have thoughts about how leadership could be automated in your role in your company and you want to share your thinking, go to trustinsights.ai/analyticsformarketers, our free Slack group, where we discuss and look at AI pretty much all the time, with over 3,500 marketers who are asking and answering those questions every single day.

And wherever it is you watch or listen to the show.

If there’s a channel you’d rather have it on.

Instead, go to trustinsights.ai/tipodcast.

You can find us on most channels and while you’re there, wherever it is, you get your podcasts, you could leave us a rating and review.

It does help share the show.

Thanks for tuning in.

I will talk to you next time.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
