
In-Ear Insights: Generative AI Governance Strategies for Leaders

In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss generative AI governance and adoption in business and how leaders can put effective guardrails in place through understanding employee needs and setting clear expectations.


Watch the video here:

In-Ear Insights: Generative AI and Governance Strategies for Leaders

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00

In this week’s In-Ear Insights, we have a technology-adjacent problem within business. I had a chance to talk to a bunch of CEOs recently.

And they all had the same general feedback, which is, hey, our employees are using generative AI at work.

And we don’t know how to stop them from doing that, knowing that there are risks to these technologies.

And I remember listening to them and saying to myself, this is the smartphone all over again. It’s 2007, people are bringing their iPhones to work, and you have no control over it.

Right? This is 2008, and employees are using Twitter.

And, you know, back when Twitter was useful.

And Twitter is not something that your company controls.

And what do you do about it? Katie, when you think back on all the technological innovations that have happened, from these lovely supercomputers that we have in our pockets to generative AI, what do you say to your fellow CEOs when they’re like, we can’t control this? Like, how do we control this?

Katie Robbert 1:04

You know, it’s funny, it’s not even a technology problem. I take it back even farther than that, to just general human interaction at work that has nothing to do with work.

That’s like saying, hey, Chris, you and I are never allowed to talk about anything unless it moves the business forward.

And it’s just an unrealistic, dictatorial expectation to put on your workforce.

And that’s not a culture that’s going to survive very long.

And so it really comes down to the inevitable: it’s going to happen.

But can you put guardrails around it? You know, you brought this up to me earlier, and my first thought was, oh, this is social media and Facebook all over again: how do I keep people from using their personal Facebook profiles during work hours? The Office had a whole episode about time theft, and it’s like, are you sneezing? Are you breathing? Well, that’s not working.

So let me mark you down with a strike against you, because you’re not actually doing something to move the company forward.

And this is that all over again.

And so yes, there are some higher risks with generative AI, because now you’re in the territory of, well, it can do my work for me, it can write for me. Versus, you know, Facebook, where you’re just stalking all the people you dated in high school, which you’re not necessarily using to do your job for you, unless you work for a P.I. company or something like that.

So what I would say to my peers, to other managers who are concerned about the use of technology in the workplace that isn’t for work-related things, is that it’s going to happen. You need to acknowledge it, you can’t stop it, but you can put guardrails around it, and I think, Chris, that’s what we should really focus on.

Christopher Penn 3:05

What about the people who, to your point, are using it to do work, to do their job?

For example, one of the risks of generative AI, when you’re using it with third-party software like ChatGPT, is that you are potentially handing off some of your proprietary information to those services.

So for that person, yeah, we acknowledge people are not going to work at 100% utilization for eight hours a day.

That’s just humans.

In the context of, hey, I know there are risks with these tools.

I know that there are copyright risks.

There are, you know, intellectual property and confidentiality risks. What do you say to the executives there? Because again, to your point, it’s going to happen.

How do you put guardrails on that?

Katie Robbert 3:58

So this is where, you know, we often talk about governance, data governance and business governance, and just general process. This is where those terms come in.

And so know that you can’t stop generative AI from happening and evolving and being incorporated into everything. Even if your company isn’t using ChatGPT, there are a lot of tools that you are using that are incorporating generative AI, such as Microsoft and Bing, and the list goes on and on.

So you can’t stop it from that point.

So what you need to do is, have an understanding of your intellectual property, have an understanding of your copyright.

And if you’re dealing with protected health information, for example, if you’re dealing with PII, if you’re dealing with copyrighted information, it is then incumbent upon you to say, okay, we know people are going to use this. How do we, as leadership, provide a secure space for our employees to use it? Maybe it’s on an intranet, very internal, that doesn’t reach the external world.

Maybe it’s on a secured server, maybe this is the process, maybe we give them a checklist of things they can and can’t put into generative AI systems, like you can search for this on Google in incognito mode, but you can’t search for this on Google publicly, whatever those rules look like.

But that’s where you need to start: acknowledging, yes, this is a thing.

And now here’s what this means as a risk for the company.

Because if the employees don’t understand that there’s an actual risk, a data breach, a copyright breach, any sort of liability, then what do they care? All they hear is no, no, no, which makes them want to do it even more.

But your job is to help them understand why the answer is no.

Christopher Penn 6:08

So this is not a problem for Trust Insights.

But I could totally see this being a problem at our last company. You’d have an account coordinator or an account executive who’s like, you don’t pay me enough to care, I don’t care about this stuff, I’ve got to get my work done.

So that I don’t have to be at the office till 8 p.m., I’m going to use ChatGPT all damn day, and you can’t do anything about it unless you catch me.

How do you manage that?

Katie Robbert 6:35

And you know, it’s funny, because a lot of what I’ve been talking about at conferences, and will be talking about at the B2B events coming up, is that we can’t forget that at the end of this, behind the curtain, are people, and people still need to be managed.

And so that’s a culture problem.

You know, that’s not a risk of ChatGPT.

The risk of using the ChatGPT system is a side effect of a larger issue, which is that you have disgruntled, burnt-out junior staff that you’re making stay until eight o’clock to do repetitive, unfulfilling work.

That’s what needs to be managed, not them putting things into ChatGPT, which is going to be rare.

Yes, that’s a problem.

But it’s not really the thing that you need to solve for.

So the first thing you need to solve for is: why is your account coordinator staying until eight o’clock? What are the things that they are doing that are making them stay until eight o’clock? And are there better ways to do those things? And maybe ChatGPT is the answer.

So what would it look like to bring that person into creating a solution, so that they can take some ownership of it and not have to stay until eight o’clock?

Christopher Penn 7:58

That requires management to be aware of, and empathetic to, the problems that the staff are having, which in a lot of cases is not the case.

And there are technological solutions that will act as a band-aid, you know, running private servers and stuff, running ChatGPT as your own instance.

But it doesn’t fix the root problem.

Katie Robbert 8:29

Oh, it absolutely does not.

And so, I don’t want to go too far off topic.

But ChatGPT is just another technology that people are going to use as a band-aid to slap on top of their management problems, their people problems.

And so you have leadership sort of split down the middle, with half saying, oh my God, everybody’s going to leave because ChatGPT can do the work.

And then you have the other half saying, Good, let them leave.

ChatGPT can do the work.

So it really depends on what side of that conversation you fall on.

If you fall on the side of, oh my God, everybody’s going to leave, then you’re probably looking to fix some of the culture issues, some of the overall disgruntled, burnt-out, how-do-we-help problems. Versus the, great, let them leave side, where you’re not going to solve that problem.

It is going to perpetuate and keep being a problem.

If that’s your attitude, people are just replaceable, they’re just cogs in the machine, and you’re just sitting on top of all of it.

Christopher Penn 9:42

For someone who works in that environment, maybe a senior executive, maybe you’re not the one in charge, but you report to the one in charge, is your option then just to leave and find a better company to work for?

Katie Robbert 9:57

It is. That’s one of your options.

So that is always one of your options.

Personally, and I know this will come as a shock, I’m stubborn, and I’m a fighter.

And first, if I was not the person who had the authority to fix the thing, I would be making a lot of noise about the thing that needs to be fixed.

And Chris, you’ve seen this firsthand.

And so, and again, this is just me personally, I feel like it is my responsibility to, within reason, highlight: here are the problems, here are some solutions that I would recommend. Maybe they’re not the solution, but it’ll get people thinking, and I would want to at least explore that option.

And once I’ve exhausted that, then I’m like, okay, I’ve tried everything I can do to resolve this within my authority, within my responsibility, respectfully, legally, et cetera, et cetera.

There’s nothing else I can now do.

Now, I’m going to leave.

Because at some point, people like me, who are fixers, we have our limits too. You get burnt out trying to fix something that nobody else cares about.

And so that’s when you say, Okay, now it’s time to leave.

And so yes, the option of picking up your things and walking out the door is absolutely an option.

And that’s an okay option.

There’s nothing wrong with that option. It comes down to your own personal choice of how far you want to take it.

And when you’re dealing with people problems, people are complicated, and a lot of it is out of your control.

And you know this, Chris. I can’t just come to you and say, we’re going to do it this way because I said so. That’s not going to work.

And I also can’t just come to you and say, well, I’m the boss, and you can’t question me. I could do all of those things.

But I already know from, you know, history, that that’s not going to get us very far.

I also can’t just say, well, whatever you think is best, just do whatever you want. There has to be some in-between.

And that takes a lot of hard work.

And that is work that people are not willing to do; they are not willing to do the work to manage people properly.

They want to delegate it to the machines.

And that’s where you see systems like ChatGPT coming in.

And so, to the original question of how do I prevent people from using systems like ChatGPT, coming into my company, into my culture, to do the work? You can’t. You have to spend your time on people management, and that’s a place people don’t want to spend their time. They want the quick wins, the fast money, the instant results, the gratification, and people management is not that.

Christopher Penn 12:56

So I’m guessing I already know the answer.

And if you’re a consistent listener of the show, you know the answer as well.

But what does the roadmap to safe AI adoption look like for the worried CEO?

Katie Robbert 13:11

You definitely want to start with the five P’s.

The 5P framework is purpose, people, process, platform, and performance.

And you want to focus a lot on the first two Ps: purpose and people.

The problem with a lot of companies is that they see the potential with performance.

And then they work backwards. They’re like, okay, I want to increase my revenue by $5 million.

Let’s bring in ChatGPT.

Okay, those are two of the P’s.

But those are the two Ps you focus on last. You first need to say, what is the problem I’m trying to solve? What is the point? Why are we bringing in ChatGPT in the first place? And who on my team, internally, externally, customers, stakeholders, investors, all the way down to the people who make sure that the floors are clean every night, who needs to be involved with introducing a new technology? Who needs to be involved with helping us reach those goals? And then you can figure out, okay, what does the process look like for each of these individuals? What platforms? Some of them may be using ChatGPT.

Some of them may not be using ChatGPT.

Some of them may just be analyzing data, or communicating with customers, or answering the phones when someone calls and says, hey, why has your content suddenly gone downhill? Are you using ChatGPT to generate it? There are a lot of different things to factor in.

That’s how you want to introduce ChatGPT appropriately, professionally.

You also want to factor in things like hey, what kind of data are we going to allow into these systems? We know people are going to use these systems.

What can we say about the data that we have so far? For example, Chris, we know that you can import Google Analytics data into ChatGPT and have it do an analysis. You, as leadership, need to decide, is that okay? Because if your ChatGPT system is not centralized on your own private servers, you’re putting your Google Analytics data into the entire ChatGPT ecosystem, for anyone to access.

Now, someone’s not going to say, hey, give me Trust Insights’ GA4 data, and just get an exact replica, yet.

And those are the things that we need to be aware of.

So as you’re going through the 5P process, you need to be aware of what data we are going to allow into these systems, and what we won’t allow. That’s the work that needs to be done first.

Christopher Penn 15:57

Okay, so it sounds like we almost need to ask people, on a task basis, what tasks they have and what their user stories are.

Katie Robbert 16:12

Absolutely.

And so my user story is going to be very different, Chris, from your user story.

And so, to back up for those who aren’t familiar, a user story is a simple sentence composed of three parts.

As a persona, I want to, so that. The persona is the audience, the want to is the action, and the so that is the intent.

And so my user story could be: as the CEO, I want to introduce ChatGPT so that my processes are more efficient and I can save money.

Pretty straightforward.

Chris, what would your user story be?

Christopher Penn 16:52

Well, I’ll give you a real simple tactical one.

Because, and this is going to be in this week’s Trust Insights newsletter, we have a tendency to try and go for the whole enchilada instead of just a bean at a time.

And so this is an example of a bean: as a reporting analyst, because that’s the role I’m playing when we do monthly reporting,

I want to find and replace all references to last month in my reports,

so that I don’t have to open up each individual PowerPoint presentation, manually edit it, and waste an hour of time every month on a task that clearly should be automated, but I don’t know how to do.

And yet I know it exists.

I know that the programming language Python can do it.

I’m just going to ask ChatGPT: how do you do this? Can you write the code for me? So that’s my user story for a very, very simple but useful implementation.
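For readers who want to try that particular bean themselves, here is a minimal sketch of the kind of script ChatGPT might hand back for this user story, using the python-pptx library. The file name and the month labels are placeholder assumptions, not the actual Trust Insights report.

```python
# Minimal sketch: swap last month's label for this month's across a PowerPoint deck.
# Requires python-pptx (pip install python-pptx). The file name and month strings
# below are placeholders; adjust them to your own report.
from pptx import Presentation

OLD_LABEL = "September 2023"   # placeholder: last month's label
NEW_LABEL = "October 2023"     # placeholder: this month's label

prs = Presentation("monthly_report.pptx")

for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_text_frame:
            continue
        # Replace at the run level so fonts, colors, and sizes are preserved.
        for paragraph in shape.text_frame.paragraphs:
            for run in paragraph.runs:
                if OLD_LABEL in run.text:
                    run.text = run.text.replace(OLD_LABEL, NEW_LABEL)

prs.save("monthly_report_updated.pptx")
```

A sketch like this only touches text boxes; tables and charts would need their own loops, which is exactly the kind of follow-up question to take back to ChatGPT.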

Katie Robbert 17:52

And it’s interesting, because that never would have occurred to me as a user story as a way to use the system.

But by going through that exercise of a user story, I’m like, Oh, okay.

That’s how my analyst is going to want to use the system.

That’s how my comms person, that’s how my marketing person, because everyone’s going to want to use it a little bit differently. We may all have the same goal, which is to more efficiently deliver client reports.

But we all play a different role in creating those reports.

And so utilizing user stories to understand how people are going to want to use a system like ChatGPT is a really great communication tool.

So then you as leadership can say, okay, now I understand why people are wanting to use this, it’s not just to be lazy. And the time savings of having the system change the dates in your reports is going to be cumulative.

It’s going to add up. You’re going to be like, oh, if I can do this here, can I also do this here? Can I also do this there?

And then you’re going to find yourself with all of this extra time to actually do deep, valuable analysis, make insights, turn those insights into action plans, turn those action plans into strategies, turn those strategies into revenue. And so you’ll start to see the ripple effect of it that way as well.

Christopher Penn 19:21

Exactly, exactly.

So that’s kind of where I think, if CEOs had that level of insight, they’d say, okay, this is just an example of a task that clearly a machine should be doing.

They would understand, okay, there’s very little risk here, right?

This is going to be a piece of Python code that will run on someone’s laptop and do a very simple operation, and yet it will save an hour a month.

Now if you have a team of 10 analysts, that’s 10 hours a month, which is 120 hours a year.

If you are an agency, that’s 120 extra potential billable hours you could have for those 10 analysts.

So that, I think, would probably help those CEOs say, okay, I can see now the business case for doing this. It’s a low-risk endeavor with high return, right? Because it’s saving time, and time is our most valuable asset, the one we can’t buy any more of.

And so, okay, let’s maybe approve the usage of ChatGPT in this type of use case.
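To make that back-of-the-envelope math concrete, here is a quick illustrative calculation. The billable rate is an assumed figure for the sketch, not one quoted in the episode.

```python
# Illustrative savings estimate for automating the date-replacement task.
analysts = 10
hours_saved_per_analyst_per_month = 1    # one hour of manual editing avoided
hours_saved_per_year = analysts * hours_saved_per_analyst_per_month * 12    # 120 hours

assumed_billable_rate = 150              # dollars per hour; an assumption, not from the episode
recovered_billable_value = hours_saved_per_year * assumed_billable_rate

print(f"{hours_saved_per_year} hours per year, roughly ${recovered_billable_value:,} in billable time")
```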

Katie Robbert 20:32

And I think that’s really what it comes down to: you can either embrace it, or you can fight against it, but fighting against it, especially a system that is getting integrated into so many different things, is going to be really difficult.

And so what would it look like for your company to embrace it, and just have those open conversations with your team, with your company? Hey, we’re all excited about ChatGPT.

What does this look like for us? We know that there’s value to it, and we’re still learning.

So help us learn together, versus just throwing down the gauntlet and saying, no, you can’t do this, because that’s going to make people want to do it even more.

What is it about it that I can’t do? I know for a fact, Chris, if I said, Chris, you can’t do that, you can’t use ChatGPT, you’d be like, oh yeah, watch me.

You know, it’s like when I was five, and my dad said, you can’t climb that tree.

In my head, I took it as a challenge.

And I climbed the tree, and I absolutely fell out and broke my elbow.

What he should have said is, I don’t want you to climb that tree, because you may fall out and break a bone, and I would have been like, oh, okay. But he said, you can’t.

And I said, got it.

I’m gonna do it.

I’ll show you, I’ll prove you wrong.

And it really comes down to that: being clear with your communication.

Christopher Penn 22:00

Exactly.

If you have examples of the kinds of discussions you’re having with your team or your leadership, or you are in leadership and you’re struggling to communicate your expectations about AI, and you want to share your experiences, pop on by our free Slack group. Go to TrustInsights.ai/analyticsformarketers, where you have over 3,500 other marketers asking and answering each other’s questions every single day.

And wherever it is you watch or listen to the show.

If there’s a channel you’d rather have it on instead, go to TrustInsights.ai/tipodcast and find it on the service of your choice.

And while you’re there, please leave us a rating and a review; it does help share the show.

Thanks for tuning in.

I will talk to you next time.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
