In-Ear Insights: Quality, Quantity, Productivity, and Generative AI

In this week’s In-Ear Insights, Katie and Chris debate the importance of quality work vs. quantity work, measuring employee productivity, and how generative AI may impact the way you think about staffing and hiring.

Watch the video here:

In-Ear Insights: Quality, Quantity, Productivity, and Generative AI

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00

In this week’s In-Ear Insights, there’s a lot of talk right now about the impact that generative AI is having on tasks, on careers, and things like that.

And I wanted to take a step back today and talk about quality and quantity.

So Katie, as a people manager, a people expert, one of the things that we’ve always talked about in the past, when we had a much larger team, for example, was, yeah, sort of A, B, and C players.

But there’s nuance to that. There are people who are extremely skilled, right? And then there are people who are highly productive.

And so I guess the question I have for you is, how do you find that balance? Is a person who has B player skills but A player reliability better than someone who has A player skills but C player reliability? The reason I ask that is because it is very clear that generative AI is a B player for skills, right? It’s not a Pulitzer Prize winner; it’s not, you know, going to be winning all this stuff right now.

But it is an A-plus player on reliability: you go to it, you tell it the thing, and in seconds, boom, you’ve got your thing.

It may not be exactly what you want the first time around.

But it’s not like waiting for, you know, your ghostwriter to take six weeks to turn around a piece of content, even though they’re a brilliant writer, because they take forever to get stuff out.

So how do you think about managing people with that balance of reliability versus skill? And then how do you see generative AI impacting that?

Katie Robbert 1:25

So you asked a lot of questions in that intro, but you also answered a lot of your own questions in that intro; you may have not realized it. I will always choose reliability over ambition.

You know, and so when we talk about the A, B, and C type players: you have the A players, who are basically, you know, your people who go above and beyond, who are always looking for the next thing. So that’s sort of the good of the A players. The bad of the A players, and it’s not bad, it’s just the less desirable quality of an A player, is that they may be looking for the next thing because they’re bored and don’t want to do what’s in front of them.

So they’re looking for something else.

And it can come across as ambitious, it can come across as going above and beyond.

But really, they don’t want to do their own work; they want to do the work that, you know, the guy in the corner office is doing, but they’re not willing to put in the work to get to that corner office.

So that’s sort of the good and bad with A. With B, that’s basically your solid, you know, they get their things done.

They’re reliable. You know, they may ask for more, or, the flip side, they may not; they may say, here’s what I have on my plate, I’m going to do what’s on my plate, and nothing else.

And so that’s the B player. The C players are the ones who struggle to get the things that are on their plate done.

But they also maybe aren’t sort of looking ahead, as in, if I just do this, then I can also do this.

And so with the C players, you don’t tend to have a lot of C players on your team, because they can be, you know, using up resources in terms of, you know, bandwidth, or other people’s time, you know, money, that kind of thing.

A really good example of this: my husband, I’ve mentioned before, works on a customer service team at a grocery store.

And he has one team member who is basically holding up headcount. This one team member is always out on some kind of medical leave.

And, you know, always has a doctor’s note, but always, magically, around the holidays, has to be out for eight weeks. And they’re holding up headcount, which means that he can’t hire someone else, because they have to hold that spot.

That’s what I consider, you know, a C player: when they’re there, they might be good, but when they’re not there, it’s just a waste of everybody else’s time.

So I would say you don’t typically have a lot of, quote, C players on your teams; you want your team to be a good mix of A’s and B’s.

The A’s are the ones who are going to, like, take charge and be more extroverted; the B players tend to be more introverted.

So that’s sort of just to set the context.

So in your question of, do you want more A’s or B’s? Well, what about both?

Christopher Penn 4:13

Do you want A’s and B’s within quality and quantity? Because you could have an A in quality and a C in quantity.

Katie Robbert 4:21

And then you don’t want that person, because having someone who’s just ambitious but doesn’t get the work done is also not useful.

And so basically what you would want, if you were to sort of put it in, I don’t know, let’s call it a Venn diagram, for example.

You want the share of the Venn diagram to be mostly in the B, because the B is where you’re going to get the reliability, the consistency, the work done. You know, you want people who are going to say, “And what else?” Or you want people who are going to look at a problem and say, “Can I turn this into a repeatable process?” And that’s where you have that share of, you know, A qualities mixed with B qualities.

So, you know, there’s no exact science to this, because people are different, people are unpredictable.

And people change and evolve over time.

Whereas now you’re talking about generative AI. Generative AI evolves as you tell it to evolve; it changes as you give it more, you know, tasks to do, as you give it more training data.

And so yes, the systems themselves will evolve, because the manufacturers of these things will tell them that they’re evolving, they’ll add new features.

But really, it’s just like any other team member where you have to tell it what you need, you have to train it.

So you need to know going in, here’s what I need out of this relationship.

And here’s what I’m going to ask you to do. The difference here is that generative AI, at least for now, as far as I know, isn’t going to say, “And then what else? And then what else? How can we surprise and delight our clients? What else could we be doing that we’re not doing?” Generative AI is going to be a solid, emotionless B player; it’s going to say, you asked me to do this thing, I did this thing.

Christopher Penn 6:09

Right. What I think I’m hearing you say, though, is that you want a team that is solidly B players, with the understanding that you want A player quality where you can get it, but you must have A player reliability for quantity.

And for reliability, you want people to show up and do the thing to the best of their abilities, even if their skills are not top notch.

You don’t want people who are very skilled, but just can’t get things done.

They’re scattered, they’re all over the place, and things like that.

That, to me, sounds like your team is largely going to be generative AI with a few humans to pilot it.

But because of generative AI’s capabilities, it is an A player for quantity, and right now a B or B-plus player for quality.

That sounds like the sweet spot where you want your team to be.

Katie Robbert 7:08

What’s interesting is, I feel like you’re mixing up A’s and B’s depending on the context.

And so, if you want, so I’m talking about A players in terms of humans; you know, A players tend to be the above-and-beyond.

B players tend to be, like, the solid, “I’m going to get things done.”

So if you want to turn that into generative AI, I would still say generative AI is never going to be an A player, regardless of quantity, because it still needs you to tell it what to do.

Got it.

So take A off the table.

Generative AI is never going to be your A player; that’s reserved for humans who can critically think and problem solve and reach out to clients and have relationships. That’s something that generative AI can never do.

Generative AI is going to be your B, B-plus, like maybe A-minus player, but it is going to be your workforce.

Christopher Penn 8:06

So in terms of the composition of your team, as people are thinking about hiring, thinking about their staffing levels for 2024.

How do you see that sort of quantity play? You know, generative AI can make a lot of stuff with the direction of a human steering it. Yep.

How do you see that impacting people’s decisions about what their team composition is going to look like?

Katie Robbert 8:34

To borrow a quote from Chris Penn: it depends.

So think about it.

I mean, I would think about it the same way you think about your team composition right now. If you are someone who doesn’t want to delegate or tell other team members, here’s exactly what I need from you, or you just struggle to do that.

Because it’s just not one of your core competencies.

You’re going to struggle to integrate generative AI because generative AI is basically a team member who needs a lot of delegation and instruction.

And so when I think about team composition, you still need really strong humans who have those skills.

And then you also need the complement, which is the people who can actually do the thing.

So I would say, you know, it’s funny, because as I’m thinking it through, I don’t see teams being majority generative AI. I see it like this.

So if you think, like, let’s say our team had five people: I would say one or two of those seats might be generative AI, but then you still need someone to oversee it and tell it what to do.

You still need someone to check its work and make sure it’s, you know, the right thing and then you still need someone to maintain the system.

And so you still need a lot of human intervention to make sure that these B players are doing exactly what’s needed, and that is no different from the composition of a human team.

So it’s just a matter of, you know… so, you’re talking about quantity and quality.

You know, so sure, you could have a B player writer who can churn out five articles a week, one article a day. Generative AI can do that, or it can do 10x that.

But that doesn’t mean they’re any good.

And you still need someone to say this is exactly what I need from you.

This is exactly what it needs to be.

This is exactly who we are as a brand.

This is exactly how we need it edited.

So I would say I don’t see the majority of a team being comprised of generative AI without humans.

Christopher Penn 10:39

There’s no question the humans still have to be involved.

But when I think about generative AI and how things have evolved, for example, we’re in the midst of the 12 Days of Data over on the Trust Insights blog, which you can see at trustinsights.ai/blog.

Last year, it took me five and a half, six hours per post to do all the data processing and get these things done.

The data processing portion and the writing-the-code portion this year is taking 30 minutes.

So it’s literally a five or 6x improvement.

So from a seat perspective, in some ways, the majority of the workforce on that project is now generative AI that can code at 10x the speed that I code at. Right, so yes, you still need a human steering: this is what I want you to do.

But the work product is better, because I’m not a great coder.

And the work speed is much, much better.

So it’s like a 10x improvement on speed, and probably a 2x improvement on quality, and so forth.

If Trust Insights had a team of developers that was working on this code for us, I would basically be saying, I’m pretty sure I don’t need the five of you when I could have one of you steering 10 instances of generative AI.

Katie Robbert 12:03

I feel like I mean, I hear what you’re saying.

That’s a pretty generalized example.

And I think that unfortunately, a lot of companies are thinking that way.

It’s very short term thinking.

They’re like, okay, generative AI can write code.

So I don’t need these five bozos over here who have been doing it their entire life.

The challenge, though, is that, so, you know, let’s say you said, okay, I heard generative AI can write code. Hey, generative AI, write some code.

If you yourself don’t have that capability and skill set, or someone on your team doesn’t, you don’t know that the code coming out is 100 percent correct.

So the point being is that, you know, there needs to be some thought. It’s not… so let me step back.

Because my brain is now working faster than my mouth: there needs to be some consideration as to how you truly integrate AI in, because it’s not a one-to-one replacement.

It’s not either a human or generative AI.

And I feel like that’s the context we’re talking about.

So I want to sort of step back and say, it’s not a one-to-one replacement; humans are complex.

And what you’ve been saying for years is that the more diverse your skill set, the harder you will be to replace with a machine, with generative AI.

And we’re at that point now.

And so part of the job that I do is writing.

And so you know, we’ve built Katie GPT, she still needs a lot of work.

We’ve discovered this in our free Analytics for Marketers Slack group. I had her take over last week while I was on vacation, and, you know, I would say she was maybe a B-minus; she has some work to do.

But it’s a good lesson that, you know, custom GPT models right out of the box aren’t great; you still need some training. Even if they’re trained on your personal data, there’s still a lot of work to do.

And so I look at that, and I’m not like, wow, I’m going to be replaced by Katie GPT.

Like, you know, within six months, it’s not true.

There’s so much that I do that the machine will never be able to do.

Which, specifically, is: be me, think like me. Because the way that my thought processes work is not something that I could say, okay, let me write it down, and it’s going to be a repeatable process.

And that’s exactly how I come up with ideas and how I get inspired and how things work.

It’s never going to be able to do that.

And that’s something that you can’t replicate.

So therefore, it’s not a one to one.

And so if you take this example of developers: developers, yes, they write code, but they also have critical thinking.

They also have to step back and see the bigger scope of the problem.

And that’s something that generative AI, at least at this time, isn’t doing. It’s saying: here’s a defined task, you’re asking me to do this thing.

So it might, to your point, take some of the actual processing off the table.

But I don’t see it as a one-to-one replacement for the human.

And I think that’s where a lot of companies are getting this whole “how much of my team should be generative AI” equation wrong.

Christopher Penn 15:22

And I agree, it’s not one to one, because you absolutely still need humans there.

Right? But it’s probably one to 0.7, or maybe one to 0.5, where you can have one person supervising the work of three or four instances of the stuff that it does supplant.

So if you had a team of five developers, you probably, right, you no longer need five, because the arduous, time-consuming portion of the task, which is, you know, fingers on keyboard, writing the code, machines can do very capably.

In fact, like I was saying, better than I can do. When I look at the code that, like, a language model spits out, it’s better than my code. I can still inspect it; I can say, no, that’s completely wrong.

Like, that is not what I told you to do.

Fix that.

I was working on the 12 days of data thing for Spotify playlists last night.

And there were a couple of cases where the language model just got caught up in its own shoelaces, just fell flat on its face.

But I have the skill, have enough development skill, to say, that’s not what I asked for, do this.

But I don’t have to write that code because the machine does a better job of it.

So in this case of quality and quantity, that is A-minus quality.

It’s better than the B minus programmer sitting in this chair right now.

But it is still the B player in terms of, like, it has to be told what to do; it’s not going to take initiative, right?

But to your point, a lot of companies look at that and go: A-minus quality, and it’s an employee we have to tell what to do, but we don’t have to pay healthcare and salary.

Bring it on.

Katie Robbert 17:00

And those are things that you definitely have to factor in.

But again, it’s sort of that short-term thinking. You know, as humans, we evolve in our careers. A lot of us start at, like, okay, here’s the internship; the internship is meant to teach us, you know, the basics of the job. And then you sort of move into the, you know, junior role, the coordinator role. Then you move into, like, okay, now I can sort of stand on my own two feet; okay, maybe now I can take on, you know, some independent responsibility. It sort of evolves.

Generative AI is going to evolve only if we tell it to.

And I think that, again, sort of goes back to, I feel like you and I are circling around the same point, but just in different words: treat generative AI like you would any other team member, who needs direction, who needs training, who needs performance metrics, who needs KPIs. Because let’s say you bring on generative AI to write all of your blog content.

Well, guess what, that’s all it’s ever going to do. It’s not going to say, okay, you know what, I’m ready to move up in my career.

Give me more challenges, give me more things to do.

You, as the manager, have to proactively say, okay, team member, I think we’re ready to take you to the next step.

And they’ll say, great, how do I get there? You still have to give it all of the things.

So I guess I look at generative AI as, in some ways, like, yes, it can produce a lot, but in a lot of ways it’s a bit of a time suck on resources, because of how much instruction it needs.

And that’s why I say it’s never going to be an A player, because A players need very little instruction and oversight and training.

Whereas that is all generative AI needs.

Yes, it can produce a lot, but it still takes so much time from you as the human to teach it.

Christopher Penn 18:50

Okay.

I think I disagree with you there.

But we don’t have…

Katie Robbert 18:57

You know, and that is totally fair.

This is just my view on how to appropriately use a system like generative AI.

So for example, a lot of companies are bringing it on to replace content writers, but the content that’s coming out is pretty mediocre.

And I think it’s called like the sameness factor, because they’re not giving it a lot of insight.

Because generative AI is not a content writer who has 20 years of experience, and can draw from different references and experiences, and interview people, and say, hmm, I’m sort of stuck on this.

Let me see where else I can find inspiration.

Generative AI says, Oh, you want me to write a blog post about SEO in 2024.

Okay, boom, here it is.

And that’s just…

Christopher Penn 19:41

Bad prompting. That’s, more than anything, bad prompting. And shameless plug: if you’d like to learn better prompting, there’s a course here for that.

Katie Robbert 19:50

But you see what I’m saying?

Christopher Penn 19:52

Yeah, I see what you’re saying.

But to me, it’s no different than the human intern that you bring on.

You’re like, okay, intern, we’re going to have to explain everything to you, like, in minute detail.

The difference, at least to me, is that when I sit down and write out a prompt, I build all the requirements, which, by the way, requirements gathering is like literally my life now, because prompt writing, if it’s done well, is requirements.

It’s funny, just as a brief aside: requirements gathering is the perfect format for generative AI prompts, if you know how these machines carry code from one instance to the next, and know what to write.

It is requirements gathering.

This is an example here I was working on last night, where I was trying to put together this code for our 12 Days of Data.

And one of the things that makes this work so well is I have to keep a copy of the requirements in the code, so that every time I feed it back to ChatGPT, in this case I was using ChatGPT, it remembers what it was doing; it has to be reminded what it’s doing, but I took the time to write out the requirements.

Now every time I work with this thing, it’s getting reminded: these are the requirements; let me make sure that I’m doing what I’m told. When I work with the machine, if I invest the time upfront to do the requirements and build out really good prompts, every time I go back to that, I don’t have to do it again. It’s done.
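A minimal sketch of what keeping the requirements in the code might look like, assuming a simple, hypothetical data-processing task; the column names and the spec below are invented for illustration, not the actual 12 Days of Data code:

```python
# REQUIREMENTS (kept at the top of the file, so that every time this code is
# pasted back into a ChatGPT session, the model is re-grounded in the spec).
# Hypothetical spec for illustration:
#   1. Read CSV text with columns: track, artist, streams.
#   2. Total the streams per artist.
#   3. Return artists sorted by total streams, descending.

import csv
from collections import defaultdict
from io import StringIO

def top_artists(csv_text: str) -> list[tuple[str, int]]:
    """Aggregate streams per artist per the requirements block above."""
    totals: defaultdict[str, int] = defaultdict(int)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["artist"]] += int(row["streams"])
    # Requirement 3: highest total streams first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Because the spec travels with the code as comments, each paste back into the chat re-states the requirements without any extra prompting.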

Whereas a lot of times with humans, if I had a human coding intern, I’d just be like, okay, listen, this is how I want you to do it.

You’re not listening.

But…

Katie Robbert 21:35

I apologize for interrupting you.

But I don’t think you’re hearing what it is that you’re saying.

You’re saying the same thing about machines and humans; it’s just in a different format. You have to tell the machine every single time what it is you need done; you have to remind it every single time.

It’s just a matter of like, you can just copy and paste.

You don’t have to worry about pleasantries and formalities and feelings and all that sort of stuff.

So that I totally understand.

But you have to remind them every single time, because they won’t remember. With a human, you have to remind them every time, too. So you’re saying the same thing.

Basically, they’re both at the basic understanding level of: yeah, I know you told me this six times, but you have to tell me again, because I don’t remember from last time.

And so it’s just a matter of efficiency of how you can just copy and paste.

And here’s the thing, you still have to do work.

So you’re not convincing me that one is better than the other. It’s still a time suck.

Christopher Penn 22:37

The difference is, with a human, the fleshy, messy stuff in our heads is unreliable. With the machinery right now, it is irritating that it has such a short-term memory, right? If we go into the platform here, this model has a 12,000-word memory, this model has, like, a 6,000-word memory; hence I have to tell the machine repeatedly, hey, remember what you’re doing.

This model here has a 90,000 word memory.

So as the models continue to evolve, they get better and better memories; the need to remind them constantly is going to diminish very, very quickly.

That’s not true of the stuff in here.
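A rough sketch of why that reminding is needed: anything outside the model’s context window is forgotten, so a long-running session has to re-send or trim history. The word budgets below are the approximate figures mentioned in the episode, and the `trim_history` helper is hypothetical; real models budget in tokens, not words, and exact sizes vary by product.

```python
# Word budgets are the rough per-model "memory" figures from the episode;
# real systems count tokens, not words.
MODEL_WORD_BUDGET = {
    "small": 6_000,
    "medium": 12_000,
    "large": 90_000,
}

def trim_history(turns: list[str], model: str) -> list[str]:
    """Keep only the most recent conversation turns that fit the budget."""
    budget = MODEL_WORD_BUDGET[model]
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):       # walk newest to oldest
        words = len(turn.split())
        if used + words > budget:
            break                      # older turns fall out of "memory"
        kept.append(turn)
        used += words
    return list(reversed(kept))        # restore chronological order
```

With a small budget, only the newest turns survive, which is exactly why the requirements have to be re-pasted each time; with a large budget, nothing gets trimmed.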

Katie Robbert 23:25

Which is interesting. And we could debate this all day long, and I feel like we should start to wrap this up.

But as humans evolve in their roles, they need less reminding; your B player moves into an A player.

So your A players are good at both. Right, exactly.

And so that’s why you say you need that blend of A and B players.

And so, if I can sort of, you know, summarize: I feel like we’re both saying that as of today.

As of right now.

Generative AI is a B player with no real ambitions, because you have to constantly remind it what to do. It can do the thing, but you have to tell it.

And so it’s really no different from having an individual on your team who can do the work but needs constant reminding of what the work needs to be.

Now, I think we should revisit this conversation as generative AI evolves, because the conversation is going to change. But as of today, because we can’t predict the future,

I don’t see generative AI as any different from any other person on your team who needs a lot of hand-holding and oversight.

Yeah,

Christopher Penn 24:35

that’s something we say in our talks and workshops: treat it as though it’s the world’s smartest intern, right, but it’s still an intern.

And there’s a lot of hand-holding.

There are some things happening right now in the space, which we can talk about another time, that are dramatically going to change things in the next three to six months.

That will really, really change things up, but that’s for another time.

If you’d like to hear about those things at some point, make sure that you’re part of our Slack community: go to trustinsights.ai/analyticsformarketers, where over 3,000 other marketers are asking and answering each other’s questions every single day about analytics, data science, and AI.

And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on instead, go to trustinsights.ai/tipodcast. You can find us on every major platform.

And while you’re on the platform of your choice, please leave us a rating and review.

It does help to share the show.

Thanks for tuning in, and we’ll talk to you next time.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
