So What? Marketing Analytics and Insights Live
airs every Thursday at 1 pm EST.
You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!
In this week’s episode of So What? we focus on how to integrate AI into your company. We walk through the latest developments in AI technology, the use cases for integrating AI into your company, and best practices for evaluating whether your company is ready for it. Catch the replay here:
In this episode you’ll learn:
- What are the latest developments with AI technology
- The use cases for integrating AI into your company
- Best practices to evaluate the readiness of your company for AI
Have a question or topic you’d like to see us cover? Reach out here: https://www.trustinsights.ai/resources/so-what-the-marketing-analytics-and-insights-show/
Katie Robbert 0:14
Well, hey everyone, happy Thursday. Welcome to So What?, the marketing analytics and insights live show. I’m Katie, joined by Chris and John.
Katie Robbert 0:26
This week we are talking about integrating AI into your company. Everyone’s talking about artificial intelligence right now: ChatGPT, Bard. I mean, I’ve personally lost track of all of the different systems that have come out just in the past week, or the updates, and so, you know, we know that’s an important topic. But we want to really talk about you and your company. Because with all of this technology disruption comes a lot of people disruption, comes a lot of companies grasping onto shiny objects and trying to figure out, how does this work for me? Or if I start using this thing, what do I need to do? Am I even ready for it? So that’s what we want to tackle today: what are the latest developments with AI, what are the use cases for integrating AI into your company, because AI is not going to solve every problem, so we want to make sure that we cover that, and then best practices to evaluate readiness. There are ways to tell whether or not your company is ready to start to bring in a technology like artificial intelligence. So Chris, where do you want to start?
Christopher Penn 1:32
I think we have to start with the AI portion itself, which specifically means AI is going to find its way into your company in two different ways. One is purposeful, right? Purposeful, meaning that your company decides, hey, this stuff is important, we want to go and buy this stuff, let’s put this stuff in the company, right? A good example would be our friends over at GoCharlie. You are a content marketer, and you say, you know what, we need to get more of our stuff operationalized, or changed into different formats. So if I want to use a purpose-built piece of AI to do a thing, that’s one way that AI is going to find its way into your company. The second way that AI is going to find its way into your company is it’s going to be forced on you. Right? So you upgrade to the latest version of Microsoft Office, or you use Bing, or you use, you know, Google Docs, and suddenly, boom, there’s a whole bunch of really fancy new AI there, and you didn’t get a vote, right? It just kind of appeared. Same with Google Analytics. Google Analytics has AI built into it, specifically for its anomaly detection. If you were to go into GA, you would see pretty clearly, hey, these are some of the insights, these are things that you need to pay attention to. And GA will, you know, yell at you about things. And again, you don’t get a choice in this; it just kind of appears. So I think the important thing to start thinking about is, do you have a plan for each? Right? So what’s your plan for purposefully introducing AI? And what’s your plan, which I think is a bigger problem, for when AI finds its way in whether or not you want it?
Katie Robbert 3:16
Well, and I think that there’s a plan for bringing it in, but first you need to see if your organization is even ready to bring in AI. So sure, there’s the purposeful, in that example with GoCharlie: like, you know, our content marketing, we want to do more, but are you even set up with clear processes and data collection that would allow for bringing in AI? One of the big challenges that I see across the board is that there’s a misconception that artificial intelligence is going to solve a lot of problems that you already have existing in an organization. And what that really translates into is, you don’t know what the problems are, so you’re just gonna slap another piece of technology right on top of them. So let’s just take AI out of the equation for a second. John, you’ve worked at a lot of startups. Have you seen sort of this phenomenon of, let’s just keep bringing in tech, and it’ll solve the problems?
John Wall 4:10
Yeah. You know, in a dysfunctional culture, yeah, that happens all the time. You know, everybody’s looking for the next big thing. But for startups, I don’t know, I just keep coming back to the idea that these are tools for experts. You know, you kind of need to know what you have to do and what you want to do. And then if you find something that you can automate and cut cycles out of the workflow for you, that’s great. But yeah, this is not like, hey, we don’t know what this is or what it does, but let’s just start using it today. That’s just kids running with scissors.
Christopher Penn 4:47
It’s true, though, because I was having a chat with a friend this morning, actually, who’s a music student, and she was saying her professor was just ranting about ChatGPT and AI and people using the software to write, like, concert reviews, and just how inept these tools are. And I said, well, what’s the prompt that they were using? It was, like, a two-sentence prompt. That’s why you’re getting bad results: because the person who came up with this thing to rant about specifically doesn’t know how to use the tool. So I wrote a one-page prompt and sent it back with a review, and she showed it to the professor, who was like, oh, this is a better review than I wrote. Like, yep, if you know what you’re doing with the software. So that’s a big part of the five P’s when it comes to AI and readiness for AI: do people have the skills to even be able to use the tools? And again, I think, for purposeful AI, when you’re going to go out and buy an application, you can take the time to do that evaluation and say, okay, yeah, we’re ready, you know, is this department ready to use this tool. When it gets forced on you by big tech vendors, that’s when I think there’s gonna be a lot more potential for chaos.
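As an aside, the two-sentence-versus-one-page gap Chris describes can be sketched mechanically. This is a purely hypothetical illustration in Python, not his actual prompt: the section names and guidance are invented, and the point is only that a structured prompt encodes domain knowledge a bare request leaves out.

```python
# Hypothetical sketch: assembling a structured concert-review prompt from
# domain knowledge, instead of a bare "write a review of a concert" ask.
# All section names and guidance strings below are illustrative.

def build_review_prompt(artist: str, venue: str, details: dict) -> str:
    """Assemble a structured review prompt from per-topic guidance."""
    lines = [
        f"You are an experienced music critic reviewing {artist} at {venue}.",
        "Cover each of the following in the review:",
    ]
    for topic, guidance in details.items():
        lines.append(f"- {topic}: {guidance}")
    lines.append("Write about 500 words in a professional, balanced tone.")
    return "\n".join(lines)

prompt = build_review_prompt(
    "the university orchestra",
    "the campus concert hall",
    {
        "Repertoire": "name the pieces performed and their composers",
        "Performance": "comment on intonation, dynamics, and ensemble balance",
        "Audience": "describe the audience reaction",
    },
)
print(prompt)
```

The structure matters more than the wording: each bullet is a place where someone with music domain expertise has told the model what a good review actually contains.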
Katie Robbert 6:03
So if we take the example of the five P’s as a way of understanding readiness for artificial intelligence, you know, you’ve already started to cover a couple of them, Chris. And so, you know, the people is going to be the biggest stumbling block in terms of introducing artificial intelligence, because, big surprise, people run the show. You have to program the system, you have to judge the output, you have to do something with the information. And as these interfaces get more user friendly, it doesn’t change the fact that it’s still artificial intelligence, and just because I, or someone else, Chris in this example, can write a prompt, like, give me a music review, that doesn’t mean that it’s any good. You then took the time to write a one-page prompt, because you’ve been studying prompt engineering, and actually wrote something useful. Those are two different levels of skill trying to accomplish the same thing. And so the first thing you need to do is a skills gap assessment: do I have anyone in house, or can I bring in someone, who actually knows how this thing runs? So you need to do that skills gap assessment first and foremost. It’s not good enough for you to say, hey, marketing analyst, here’s a piece of AI software, good luck, or, hey, you know, content marketer, here’s GoCharlie, good luck. There’s a lot of skill that goes into it. And Chris and I covered this somewhere, sometime, I don’t remember now, oh, probably on Monday when we recorded the podcast, where you actually have to start thinking about, do you have software development skills within your team, not just analyst skills, not just writing skills, not just management skills? Because this new school of artificial intelligence requires some level of understanding of software development. So that’s number one to determine the readiness of your organization in terms of the people. Chris and John, what else would you consider?
Christopher Penn 8:12
The other thing you need, in addition to technical skill, is domain expertise, right? You need to be able to know whether the software, the systems, are spitting out what you intend, whether you know what something means. A real simple example, again going back to our friend Google Analytics: Google Analytics does not have any of the fancy large language model stuff. It’s got some basic anomaly detection things that will show up when there are alerts. And the challenge with that, even for just the basics, is, do you know what this means? So what does this thing mean? So yes, AI detected an anomaly in our web analytics. So what, right? What should we do about this? This is a really good example: sessions from direct traffic spiked. This is a literal “so what,” because you can’t do anything about it, because direct traffic has no attribution. If you don’t know what that means, you might think, is this bad? Is this good? What do we do about this? Going back to the example of the prompt that I wrote: you need to have some domain knowledge about music, not just prompt engineering, but the music itself, to be able to write the prompts necessary to generate a good review. So I had to dig around and rummage around in my head to figure out, well, what are the things that you would want to have in a good music concert review? It’s not just “write a review of a concert”; that’s not going to get you anything very specific. So you need that, and then again, Katie, going back to what we were talking about on the podcast this week, you also then have to go back and fact-check the output to say, did this in fact, A, do the job that we told it to do, and B, is it factually accurate? Because it might not be.
Katie Robbert 10:11
Well, and you know, the really simple example of that that we gave was we asked ChatGPT what it knew about me, Katie Robbert. And it was about 90% of the way there, until it said that I was a professor at Rutgers University, which I’m not, and I’ve never been to, and at this time don’t have any affiliation with. And so I happen, you guys are lucky, I happen to be a subject matter expert in what Katie Robbert does and does not do, so I could easily fact-check it. But someone who isn’t me, who doesn’t know me that well, might just look at that and go, oh, okay, that makes sense. That’s a really simple, low-risk example. You know, but Chris, to your point about the prompt engineering, people need to understand that these systems are not search engines, and a lot of the prompts that people are writing are the questions they would ask of a search engine, not prompts to generate a piece of useful content, a music review, some data points. And that then goes back into that skill set of, do you feel equipped to use these systems the way that they’re intended? You know, I can say that I’m a novice, and I would need more training before I felt comfortable sort of leading the charge for the company on these different systems. So yeah, I feel like it’s a huge thing that needs to be evaluated first, regardless of whether it’s purposeful or forced upon you.
Christopher Penn 11:37
Exactly. And that goes squarely in the process section. So what are the processes that you need to have? If you think about the machine as sort of this black box, what are the things that have to happen before stuff goes in the black box, and what has to happen afterwards? So fact checking, for example: something has to happen after the black box, and you take that process out and say, okay, who are the people that need to fact-check what the machine has done? Because, again, someone who’s a prompt engineer may not have the domain expertise to look at that and say, this is correct. I did a bake-off last night, you can go see the video on the Trust Insights YouTube channel, where I asked, what are some effective ways to prevent COVID? And I put it into Google Bard, ChatGPT with the GPT-4 model, and then Microsoft Bing. And the funny thing was, Bing kind of hosed it. You know, it said wear a mask and social distancing and stuff like that. It did not get ventilation, one of the answers I was looking for, and it did not get vaccination either. Google said vaccinate, wear a mask, but nothing on ventilation. GPT-4 got all three. So even though it’s not a search engine, it was more correct than the actual search engines. And so again, that fact checking, that domain expertise to say, okay, these are the things that should be in there that in these cases were not, is essential. You need to have that domain expertise. So part of integrating AI into your company is, who’s doing the fact checking?
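The kind of rubric Chris applied in his bake-off can be expressed as a small check. This is a hedged sketch, not the actual test he ran: the model names and answer strings below are made up, and a real fact-check still needs a human with domain expertise to decide what the expected topics are in the first place.

```python
# Illustrative sketch of a bake-off fact-check rubric: for each model's
# answer, check which of the expected talking points actually appear.
# Model answers below are invented, not real outputs.

EXPECTED_TOPICS = ["mask", "ventilation", "vaccin"]  # substrings to look for

answers = {
    "Bing": "Wear a mask and practice social distancing.",
    "Bard": "Get vaccinated and wear a mask.",
    "GPT-4": "Get vaccinated, wear a mask, and improve ventilation.",
}

def coverage(text: str, topics=EXPECTED_TOPICS) -> set:
    """Return the set of expected topics mentioned in the answer."""
    lowered = text.lower()
    return {t for t in topics if t in lowered}

for model, text in answers.items():
    hit = coverage(text)
    print(f"{model}: {len(hit)}/{len(EXPECTED_TOPICS)} topics -> {sorted(hit)}")
```

The mechanical part is trivial; the valuable part is the `EXPECTED_TOPICS` list, which only someone with the domain knowledge can write.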
Katie Robbert 13:08
Well, and so now you’re starting to talk about the other gap that you need to assess, which is your processes. So first you need to evaluate whether or not you have the people who have the skills, but then, okay, let’s say you have people who have the skills to run these systems, what is the actual process so that you know that these systems are effective? We talked about how to A/B test some of these things a few episodes back, but it’s more than that. So if we look at the GoCharlie example, and that’s, you know, purely writing content: what is your content creation process, and where does this AI tool fit into all of that, so that you can determine, this is saving us time, or this is taking us more time, or this is just sort of equaling out, or we can scale, or whatever the situation is. And so the process is also something that needs to be evaluated. Do we even have a repeatable process where we can start to bring in artificial intelligence? Or if we don’t, what does that process look like? Because, again, artificial intelligence is this sort of shiny object for a lot of companies right now; they’re just signing up for it and saying, okay, what can it do? And that’s a big time waster. We actually have a comment from our friend Brian Piper, sorry, John, this covers your face. He says: great points, you also need to be able to track the impact that adopting new tools has on your productivity and conversions. That’s exactly it. Because bringing in new tools, there’s a learning curve, there’s probably, you know, setup and integration, there’s maintenance of this thing. All of those pieces get factored into the overall cost, not just the $5 a month that you might spend on any given system.
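To make that cost point concrete, here is a minimal back-of-the-envelope sketch. All the numbers are invented for illustration; the only claim is that the sticker price is one line item among several.

```python
# Illustrative sketch (not a real pricing model): the "$5 a month" sticker
# price is only part of the true first-year cost of adopting a new tool.

def first_year_cost(monthly_fee: float,
                    setup_hours: float,
                    training_hours: float,
                    monthly_maintenance_hours: float,
                    hourly_rate: float) -> float:
    """Rough first-year total cost of ownership for a new tool."""
    subscription = monthly_fee * 12
    one_time = (setup_hours + training_hours) * hourly_rate
    upkeep = monthly_maintenance_hours * 12 * hourly_rate
    return subscription + one_time + upkeep

# A $5/month tool with 10 hours of setup, 20 hours of training, and
# 2 hours/month of maintenance at a $50/hour loaded labor rate:
total = first_year_cost(5, 10, 20, 2, 50)
print(total)  # 2760.0
```

Under these made-up assumptions, the $60 of subscription fees is dwarfed by the $2,700 of people time, which is exactly the learning-curve and maintenance cost Katie and Brian are pointing at.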
Christopher Penn 14:58
Nope, doesn’t pass muster. So that takes us, and I think it’s a good segue, into performance. Right? So how do you measure? There’s obviously the cost side, the time to retrain, things like that. How do you measure the performance, particularly for stuff outside of marketing? Let’s say, you know, your janitorial service is putting in timesheets through Excel, and now Microsoft Copilot is there and they have the opportunity to use it, or John’s using ChatSpot as part of the new HubSpot CRM, and John can now update deals just by talking to a large language model. How would you measure that performance? What are the things that you would be looking for?
Katie Robbert 15:47
Well, it goes back to, and we probably skipped over, the most important P, which is purpose. Because we were talking about purposeful versus forced AI, but not the purpose of the AI. So the first question you need to ask is, what problem is this solving? Why are we doing this in the first place? What’s the “so what” of this? And so, you know, to use John as the example, maybe his purpose for using ChatSpot through HubSpot is to, you know, convert deals faster, or to be able to do even more outreach because he’s not bogged down in the admin side of things. And so you state that purpose first by using a user story: as a business development partner, I want to use ChatSpot through HubSpot so that I can close my deals faster. And then the “so that,” the outcome, becomes the thing that you start to measure the performance on. And you would say, okay, am I closing my deals faster now that I’m using ChatSpot? If the answer is no, well, that’s your performance metric; you know that this is not helping you.
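Katie’s user story pattern implies a measurable outcome: “so that I can close my deals faster” becomes a before-and-after comparison of days to close. A minimal sketch, with entirely hypothetical deal durations:

```python
# Hedged sketch: turning the "so that" clause of a user story into a metric.
# The deal durations below are made up for illustration.

from statistics import median

days_to_close_before = [45, 60, 38, 52, 47]   # before adopting the tool
days_to_close_after = [30, 41, 35, 28, 39]    # after adopting the tool

def improved(before: list, after: list) -> bool:
    """True if the median deal now closes faster than it used to."""
    return median(after) < median(before)

print(median(days_to_close_before))  # 47
print(median(days_to_close_after))   # 35
print(improved(days_to_close_before, days_to_close_after))  # True
```

Median rather than mean keeps one outlier mega-deal from masking the trend, though with real data you would also want enough deals on each side for the comparison to mean anything.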
Christopher Penn 17:00
Yeah, I would argue, you know, there’s sort of the baseline: does the software do what it’s supposed to do? Does it do what it says on the box? The folks over at HubSpot, Dharmesh Shah, who has been building this, have been very clear: this thing is still highly experimental, right? It will break more often than it won’t. So, you know, that’s totally understandable. But then, part two, for, like, a salesperson or a sales administrator: the bane of CRMs is getting your salespeople to use the CRM, to keep it up to date, right? So you can look at something like record completion or number of updates in the system. If, when you roll out this tool, you suddenly see that your sales force has gone from 40% of salespeople updating the system each week to, like, 80%, because they can now just tell it, hey, this deal is moving forward, instead of having to click around in the interface and click all the buttons, that would be a win, because you have a very clear metric. Like, yeah, we care about efficiency here. But you’ve got to have those measures of efficiency, and even know to look for them, in order to measure.
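Chris’s 40%-to-80% example is itself a simple metric: the share of reps who touched the CRM at all this week. A hedged sketch with made-up data, where the rep names and update counts are purely illustrative:

```python
# Illustrative adoption metric: what fraction of the sales team updated
# at least one CRM record this week, before vs. after the tool rollout.
# All names and counts below are invented.

def weekly_update_rate(updates_by_rep: dict) -> float:
    """Share of reps with at least one CRM update this week."""
    active = sum(1 for count in updates_by_rep.values() if count > 0)
    return active / len(updates_by_rep)

before = {"alice": 3, "bob": 0, "carol": 0, "dave": 0, "erin": 2}
after = {"alice": 5, "bob": 2, "carol": 1, "dave": 0, "erin": 4}

print(f"before rollout: {weekly_update_rate(before):.0%}")  # before rollout: 40%
print(f"after rollout: {weekly_update_rate(after):.0%}")    # after rollout: 80%
```

The metric is crude on purpose: it measures whether the tool lowered the barrier to using the CRM at all, which is the behavior change Chris says would count as a win.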
Katie Robbert 18:03
So John, as the designated HubSpot user on our team, what would be your use case for using ChatSpot in HubSpot?
John Wall 18:15
Yeah, for a lot of this stuff, it’s party tricks, you know? I mean, updating records wouldn’t be the thing. But where it could really be fantastic is in report generation. You know, like that prompt that Chris had just tried there: if you can say, show me everybody that’s in this time zone, or show me everybody that’s in the Baltimore area because we have an event coming up there, whatever, being able to slice and dice data without having to go through a report interface is fantastic. And then there’s other value too, if it’s able to do something like, okay, take all the contacts from a specific geographic area and set them up with a task over the next week, to follow up, or to kick off a series of emails, something like that. It’s kind of cool if you’re able to do that. But yeah, from everything that I’ve seen, it is real alpha software. Like, I would even be afraid to use it, because I wouldn’t want it to start sending out, you know, gifts to everybody from the New York area or something like that. It just could totally go off the track. So I would let it ride for a little while, at least until it gets to beta.
Christopher Penn 19:16
You’ve got to go in and change it, because I accidentally moved you to an opportunity.
Katie Robbert 19:21
See, this is what happens when you don’t have domain expertise or process.
John Wall 19:26
When you’re playing with alpha toys.
Katie Robbert 19:29
But again, this is sort of the low-risk example of, okay, now we’re just kind of playing with the system to see what it does. But some companies don’t have the same luxury that we do, where we can just back out of something easily, because all three of us have the same level of access in the systems. You know, I can imagine in enterprise-sized companies, with all the different layers of permissions: if Chris changed John’s status on an opportunity, and maybe even the dollar value of it, accidentally, but Chris isn’t an admin and can’t roll that back, he then has to go through the chain of command to let someone know, hey, I need to roll this back. But by then, you know, enough time has gone by, or the board got an alert, like, hey, we suddenly have a $50,000 opportunity, that’s amazing, let’s start making plans around this money. Like, it moves really quickly, and there’s a lot that can go wrong if there’s not a proper process in place for managing these systems.
John Wall 20:30
Sandboxes are your friends, remember that. Use a sandbox.
Katie Robbert 20:34
It’s true, though. That’s why those things exist: the sandbox, the staging environments. Do all of your testing outside of a live environment. Like, don’t do things on a live website, don’t do things with your live data. That’s just a bad idea.
Christopher Penn 20:48
So how would you tackle it when AI is forced on you? Right? So you come into work on July 1, and suddenly everyone has Microsoft Copilot at their desk, and you didn’t even know this was going to happen, and now everyone’s got the thing. You have no training on it, but okay, there’s a chat box here, and people will just talk to it like it’s Clippy. And then Clippy’s like, oh, I will delete all your files, as you request, because somebody typed in, F this, I don’t want to work here anymore. And Clippy’s like, sure, I’ll take care of that for you.
Katie Robbert 21:26
The first thing is to just stop and take a breath. Because you have to understand, when this kind of software is forced on you, there are going to be some sunk costs, there are going to be some failure points, there are going to be some risks. Because, to your point, in that example, if nobody saw this coming, and suddenly, oops, we have this big piece of software that we have to learn how to use, and 90% of the company uses the Microsoft suite, and the majority of the work that we do is powered by the Microsoft suite, then you have to pause and take a breath. You have to stop for a minute. You have to designate someone, doesn’t matter who; someone has to very quickly figure out what this thing does. And if you can’t designate an expert, you pick up the phone and say, who’s my Microsoft expert who actually knows what this thing does? Hey, Chris, you’ve been talking about this for the past six months on your blog, do you think maybe you could do an emergency session, come and talk to my team about what the heck it is? Because, to your point, if people just start playing with it, with live data, with live files, they could accidentally delete really important things without knowing how to retrieve them. So I would say my advice, when it’s forced upon you, is to just stop for a minute, take a breath, look around, and see: does anyone know what this system is supposed to do? No? Okay, call in an expert and have them explain it.
Christopher Penn 22:58
Katie is embracing the Vanilla Ice school of crisis management.
Katie Robbert 23:02
Yes, you have to stop, collaborate, and listen. Ice is back with a brand new invention.
John Wall 23:11
You have to be careful, because this is, like, prime ground to have a New Coke scenario. You know, somebody will come out with something and everybody will get so angry with it that they’ll start tagging it Clippy Jr. and, you know, they’ll switch over to Google Docs. So, you know, let’s see who’s got their CX outfit on and does it right.
Christopher Penn 23:34
Yep. So in specific cases where it’s forced on you, the other thing is, how do you manage it? So one thing that tends to happen is that we tend to replicate what we know, right? What we’re seeing with, for example, generative AI models is people creating prompts and asking things of these machines that are familiar territory for them, like making a picture of a dog wearing a tutu on a skateboard. And the actual capabilities of these tools are far beyond that. But because people are not aware of what those capabilities even are, they don’t know to ask for it. We were looking at some examples this morning; I was showing one in the video saying, okay, make me a list of the Fortune 10 companies in order by domain name, because that’s actually a very hard thing to find online. It’s so funny, there’s a Fortune 500 service out there that provides a data file of the Fortune 500; you have to pay money just to get a list of the Fortune 500 by domain name. Well, now you can just ask GPT-4 for it and get exactly the same thing and pay no money for it. Well, you have to pay the 20 bucks a month. But if you didn’t know that that was possible, that is still closed off to you. So for companies that want to be agile, companies that want to move ahead quickly, how do we help them open their minds to what’s possible? What does that process look like for introducing innovative thinking among the people who are talking to these machines?
Katie Robbert 25:15
I think if you do that skills gap assessment, and you find that you don’t have that skill set in house, that, you know, person who is on the cutting edge of technology, the person who’s staying involved in what’s happening, that’s okay. There are people out there who can do that. So I would recommend to my team, if I had a team of people and we suddenly had to start using this new technology and people said, how do I even know what it’s supposed to do: the first thing you do is figure out who the experts in the industry are that you can trust, so that when they say, this is the thing that it’s doing, you can follow along. So you don’t have to necessarily hire a Chris Penn, but you could probably follow his YouTube channel, you can probably sign up for the newsletter and figure out, okay, Chris publishes once a week, I can at least stay only about a week behind, and I’m being serious, on the new technology. And then I at least have a fighting chance to understand what the thing does. And then you can say, okay, we’ve at least figured out what the team is capable of, and we can at least figure out what the process for using this is. Now we’re ready to move on to the next phase, and that’s where you bring in an expert to consult, to say: these are the capabilities; help me understand your problem statements, your use cases, and I will help you figure out what kind of AI technology solves that problem.
Christopher Penn 26:48
Okay. It strikes me, though, that there are probably not enough actual experts to go around.
Katie Robbert 26:56
Not at this time, probably not. I mean, for as not-new as artificial intelligence is, the version of artificial intelligence that we’ve seen over the past few months is new, in terms of its capabilities, in terms of its utilities, in terms of the quote-unquote ease of use. And so, you know, Chris, you’re doing your best to be an expert in these things, but you actually still have a full-time job at Trust Insights being the chief data scientist, so please don’t forget that. So you’re trying to fill two roles at the same time, because keeping up with everything that’s happening with artificial intelligence, minute by minute, literally minute by minute, is a full-time job. And then you also have your regular full-time job, and then you also have a personal life, and you want to make sure you’re keeping all of those things balanced. And so you’re right, there are not enough experts who are respected, authoritative experts. Anybody can say, I’ve written a ChatGPT prompt. Anybody can say, here’s my free ebook of the top 50 most effective prompts. I’ve never heard of half these people; they just came out of the woodwork, just like the people who could suddenly magically migrate you to Google Analytics 4 came out of the woodwork, and guess what, all those people faded right back in. And so I think it’s gonna be hard, because there’s going to be high demand for experts, but so few experts who are actually, truly experts, who can speak a non-technical language to people who aren’t technical people. I feel like this episode is really making me rant.
Christopher Penn 28:42
How do you know the difference? If you don’t have subject matter expertise, how do you know who’s got the goods and who’s full of it?
Katie Robbert 28:51
I mean, that’s a great question. I’m fortunate that I work with you and, you know, a small group of others who I trust implicitly, who have never bullshitted me. But I guess I would turn the question around on you, since you actually assess...
Christopher Penn 29:06
John. All right, let’s ask John. John, how do you know when someone’s got the goods versus someone’s, you know, just bullshitting? Like, they were a crypto bro last week, and now this week they’re an AI expert?
John Wall 29:17
Well, there’s two ways to do it. One is just probing questions, you know, you just keep poking until you break through to the other side. The other one is, you have to find another expert who can give you a trap question to ask, and then you can throw something out there and see if it hits. But yeah, that’s a whole skill set in itself; that’s a classic game theory, poker thing, being able to try and figure out who knows what they know and don’t know. And yeah, if you have the knowledge, then you’re in the power seat, you can go there. But if you don’t know yourself, then yeah, you may find yourself with a pork barrel.
Katie Robbert 29:53
You started to say something, though, that I think is really important to hit on. Look at the person’s resume. If they were a, you know, crypto bro last week, and an NFT person the week before that, and a Web3 expert the week before that, and now they’re an AI expert, that to me raises a red flag, because given how quickly the technology is changing, unless you are the one single person developing all of it, which is impossible, there is no way that you are an expert in all of these things all at once, so quickly. So I would say, definitely use your best judgment, and do your research on the person. What have they spoken about before? Chris Penn, you know, sitting right next to me, has been talking about artificial intelligence for the past, what, seven, eight years now? Ten years? And you’ve never wavered from: I’m an expert in artificial intelligence, I’m an expert in machine learning. You haven’t jumped on the Web3 or NFT or crypto or Clubhouse or whatever-the-thing-is bandwagon. You’ve said, cool, all of those distractions can happen around me, I’m going to continue to be an expert in my lane, so you can trust me when I say I know what I’m talking about when it comes to these things. And you’re someone who has demonstrated, through your content, through our work, that you are willing to dig into the systems. And so if someone’s calling themselves an expert, but they don’t have any portfolio to show that they’ve actually used these things, that’s also another red flag.
Christopher Penn 31:32
The thing I keep coming back to, though, is that one of the challenges and this, this now branches into an entirely different episode, which we’ll tackle maybe some other time, is that that inherently creates biases towards the established folks. And there may be new folks who really do know that their stuff, but you don’t know that they know their stuff because you don’t have the domain knowledge. One of the things that I looked at when the pandemic was first starting, is looking at your who’s already working in in places that are generally restrictive, like you know, Yale University’s virology lab, Dr. Akiko Iwasaki, right, not the kind of person who’s going to suddenly be a pandemic expert overnight, because she’s already a pandemic expert has been for 30 years. And then looking at what she starts talking about, right? Then looking at it cadre, in my case, about 53 people who you’re looking at their, like you said, looking at their credentials, looking at their backgrounds, and then looking at what they talked about what they generally agree on right now, everyone has kind of their own take. But like Eric Topol, for example, you Osaki Eric, ding, all these folks, were talking about that same general thing like, hey, the S one region of the spike protein is is the problem area, this is the thing that’s causing all the problems with this with with this virus. And then from there, once you’ve built a vocabulary of these are the topics that the the that the, this collection of people who seem credible, all seem to have in common, then you can look at the what the fringe folks are saying, okay, these people, they’re not saying the same thing. There’s no there’s not enough evidence to back up what they’re saying. Right? There’s, you know, the, in the in the AI world, these be the folks who are making you outlandish claims about AI or just doing silly stuff like use your, your your book of ChatGPT Beer prompts. And not looking at it. 
Here's where knowing what the core technology does helps: you know, folks talking about "Attention Is All You Need" and the attention window in how a transformer works. This is not unique to one person; this is the generally agreed-upon, industry-standard explanation of how a transformer model works. So folks who know their stuff will have that somewhere in their content, and you can look at it and go, okay, this person at least knows the basics of how a transformer-based model works. If you don't have the domain expertise, at least getting the vocabulary is a good starting point, so that you can immediately spot, okay, this person is saying stuff that's wildly off base from what other folks who seem credible are talking about. And that list of trusted folks will drift from time to time, ideally as more people come into the space. But I find that technique tends to help identify the people who make, you know, the integration of AI easier, and it makes your BS detector better.
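As a concrete illustration of the vocabulary Chris mentions: the core operation behind "Attention Is All You Need" is scaled dot-product attention. Here is a minimal sketch in plain Python; the vectors and numbers are toy values chosen purely for illustration, not anything from a real model.

```python
import math

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for a single query vector:
    # softmax(q . k / sqrt(d)) yields one weight per key, and the output
    # is the weight-blended average of the value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query lines up with the first key far more strongly,
# so the output leans heavily toward the first value vector.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([5.0, 0.0], keys, values)
print(out)
```

This is the "attention window" idea in miniature: every token's query is compared against every key in the context, which is why context length matters so much in these models.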
Katie Robbert 34:26
Well, and you know, again, I think you're right, it's a different topic. But back to your point about how that creates a bias against the new people who may be experts but don't have the longevity in the industry to make a name for themselves: I'm someone who wants you to show me, don't just tell me. So if you are in a position to try to find that expert, I'd recommend looking for people who can show their expertise, as in they've demonstrated they can use the tool, they have their case studies, they have their sample data, they have their examples, versus someone who's just navel-gazing and writing about the thing and, you know, doing fluffy prompts that don't really amount to anything. That's not demonstrating to me that you're an expert; it's demonstrating to me that you can do the same thing that I can do, which is: hey, ChatGPT, write me a limerick about beer. Anybody can do that. Just because I figured out how to sign into the system and get it to give me something back does not make me an expert. And 98% of the people that I've seen who are writing about these things are those navel-gazers; there are very few people who are demonstrating the ability. So to get back to the original question: let's say that this technology is forced upon you, because the systems that you've been using have made a major update, and you're suddenly using Microsoft Copilot, but you didn't know it was coming. You have to start looking for those experts really quickly. And there's definitely that short checklist, as Chris mentioned, of ways to find those people, especially if nobody on your team, nobody in your company, has either the time, the capabilities, or the skill set to do this investigation of, what the heck does this thing even do?
Christopher Penn 36:15
That gets expensive, though. How do you start growing your own experts?
Katie Robbert 36:20
So I would say, in this example, I think anyone who works at a company who doesn't think artificial intelligence is going to affect their job is sorely mistaken. We don't know exactly what that's going to look like for every single company, but you can make a safe assumption that in some way, shape, or form, artificial intelligence is going to have some kind of an impact on your company in the next six to 12 months, if not sooner. So I would say the first thing you need to do is figure out where to get that information, where to get those really good updates on what's happening. And I would say, Chris, you know, our content, our newsletter, your newsletter, are really good. Our friends over at the Marketing AI Institute have really kept their finger on the pulse of what's going on with artificial intelligence; it's literally what they built the whole business around. And so now, before the technology is forced upon your team, is the time to start finding those experts, finding those resources that at least tell you what the thing is in a way that you can understand it.
Christopher Penn 37:33
I think the additional thing that, again, most companies don't do, and it costs you nothing, costs you zero dollars (it does cost you time), is what we used to do with our team at the agency we used to work at, which was essentially lunch and learns, where we required people who didn't have expertise to earn a little bit of expertise that they could then present to the rest of the team. I could easily see a situation where, you know, if we're talking about generative AI, you have sort of a prompt lunch and learn: like, this week Katie is going to show the prompts that she worked on and the results she got, and next week John's gonna show a prompt that he worked on. And you kind of just create an impromptu knowledge management system where people start sharing things that work better. Over time, as people learn from each other, you know, none of you are experts in this scenario, but as you find stuff purely through experimentation, you will start to increase the capabilities of the entire team. And we can talk more another time about that process, about how you coordinate those kinds of things.
Katie Robbert 38:38
Yeah, it doesn't have to be a long, laborious thing with your team. And it doesn't have to be something that people lose sleep over, especially if public speaking isn't really their forte. You know, it could be a very simple, you know, 15-minute, hey, here's what I learned this weekend. It doesn't even have to be in person; it can be over email, where you just start to give people on your team the opportunity to share what they learned about a specific topic. So in this context, we're talking about artificial intelligence. I might say to you and John: hey, Chris, on Monday I want you to share with the rest of us just one article about what's going on with artificial intelligence. John, on Tuesday I want you to find two different products and write a quick summary of what they are. And on Wednesday, I will summarize what's going on with, you know, the big dogs and the funding, and I will also share resources with you. And so you can sort of divide and conquer that way amongst your team and say, just one article a day, share with the rest of us what you found is happening with this topic. You know, for our team at the time, it was what's going on with digital marketing, analytics, predictive forecasting, attribution analysis, the things that our team was really involved with with the clients. But Chris, you were really the expert. And so to give our team a very low-risk, low-cost way to become experts, we said, hey, you, marketing analyst, I want you to spend 15 minutes at our team meeting next week showing our team how to set up a tag in Tag Manager. So then the task becomes: does that person know how to do it? If not, they have, you know, some time to figure it out. There are no rules that say you can't ask other people how it's done, but the goal is that the person who is assigned the topic is the one who then presents what they found.
Christopher Penn 40:39
And I would say, if you are not in a situation where you have access to that kind of team, that doesn't mean you don't have access to those people, right? So you can go to something like, you know, our Slack community, Analytics for Marketers, find a couple of study buddies, if you will, folks who would be interested in doing something just like that, and join forces with them and say, okay, you know, who wants to do a lunch and learn on generative AI this week?
Katie Robbert 41:10
Absolutely. And I think communities are where you're going to find a lot of those like-minded folks, and where you're going to stay up to date on a lot of what's happening. You know, Chris, you're partial to Discord because you can find the major companies that are creating these technologies there; that's where they put out a lot of their updates. So if you're someone who enjoys the Discord experience for communities, I would say definitely start to join some of those, if that's a topic that you want to know more about for your team or for yourself. You can join our Slack group, Analytics for Marketers, at TrustInsights.ai/analyticsformarketers. We're also sharing a lot of the information that we're finding, and we're also trying to put out content like this podcast and this livestream, where we actually walk through the scenarios, the real-life scenarios, that you're going to encounter as artificial intelligence starts to affect your day to day.
Christopher Penn 42:05
Exactly. So to wrap up, the five steps you need to take to integrate AI into your company. One, you need to have a purpose. Sometimes it is purposeful: you're going to buy an application to do a thing. Sometimes it's not: it has been forced upon you, and you get to enjoy that, too. Two, you have to audit your folks, your people: what skills do they have? Who should be in charge of various things? Three, you've got to look at your processes: pretend AI is a black box. What do you have to have in place before things go into the box, and what do you have to have in place after things come out of the box, in order to make AI safe and effective? Four is platform. There are a gazillion platforms to choose from, but platform really should be fourth on the list; you've got to get your purpose, people, and process in place first. And fifth is performance: look at how, from a performance perspective, you're going to measure that you're doing the thing, and that the investments have been worth it.
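The five steps Chris outlines (purpose, people, process, platform, performance) can even be kept as a lightweight checklist. Here is a minimal sketch in Python; the question wording and the simple yes/no scoring are illustrative assumptions, not an official Trust Insights tool.

```python
# An illustrative readiness checklist for the five P's described above.
# The questions and scoring are assumptions for demonstration only.
CHECKLIST = {
    "purpose":     "Do we know what problem AI is supposed to solve for us?",
    "people":      "Have we audited skills and named owners for each piece?",
    "process":     "Do we know what goes into and comes out of the black box?",
    "platform":    "Did we pick a platform only after the first three P's?",
    "performance": "Do we know how we'll measure that the investment paid off?",
}

def readiness(answers):
    """Given {p: True/False} answers, return (score, list of missing P's)."""
    missing = [p for p in CHECKLIST if not answers.get(p, False)]
    return len(CHECKLIST) - len(missing), missing

# Example: a company that has purpose and people sorted, nothing else yet.
score, missing = readiness({"purpose": True, "people": True, "process": False,
                            "platform": False, "performance": False})
print(score, missing)
```

Even a sketch this simple makes the ordering explicit: platform questions stay blocked until purpose, people, and process have answers.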
Katie Robbert 43:02
John, final thoughts? Jerry Springer thoughts?
John Wall 43:06
You know, I was actually talking with a friend in academia yesterday who had asked about it, because he said ChatGPT kept insisting he had a degree from Harvard. And I was like, oh yeah, Katie was provost there. And it was interesting. I said, yeah, I see a lot of problems with these hallucinations, and especially with people using it as search. And he, of course, was like, no, no, even though people are like, don't use this for search, really the model is about plausibility; it's coming up with plausible things. And I was like, oh, that's a nice way to spin it. That sounds a lot less pejorative than hallucinations. But that is the thing: this is a way to make things more plausible and to get where you want to go. So yeah, get ready to improvise, because we don't know what's coming at us. And, you know, get ready to try some new stuff, and hopefully you don't get in too much trouble.
Katie Robbert 43:59
I just want to acknowledge one last comment. Our friend Brian said communities are also a great place to ask about potential experts and to get those qualifying questions answered, and that's exactly right. You know, that's tying all the different pieces together that we were talking about in terms of finding experts. If you've joined a community like Analytics for Marketers, that's a great place to ask the question: hey, my company wants to start using Microsoft Copilot. Are there any experts that you trust, anyone that you would recommend, to help us with this integration into our company? So that's a really great point.
Christopher Penn 44:32
And a shout out to Brian's book, Epic Content Marketing. Go grab a copy of it; it's excellent.
Katie Robbert 44:40
I would say my final thought is, you know, yes, there's the sort of gloom and doom, artificial-intelligence-is-coming, blah blah. Don't panic. Just take a breath. You will have time to figure out how this works in your company. If it's forced upon you, still stop, take a breath. Stop, collaborate, and listen is really the game here; I need to create a meme, but anyway, that's a different day. Make a plan, and start making those backup plans for if that day comes. Start finding those experts, start figuring out how you can get your team trained up, start signing up for those newsletters (the Trust Insights newsletter is a great one to start with), and start following podcasts and blogs. A lot of people are talking about this, and there's a lot of information out there, so you need to start distilling it down into your specific use case.
Christopher Penn 45:30
All right, on that note, we will see you all next week. Thanks for watching today. Be sure to subscribe to our show wherever you're watching it. For more resources and to learn more, check out the Trust Insights podcast at TrustInsights.ai/tipodcast and our weekly email newsletter at TrustInsights.ai/newsletter. Got questions about what you saw in today's episode? Join our free Analytics for Marketers Slack group at TrustInsights.ai/analyticsformarketers. See you next time!
Need help with your marketing data and analytics?
You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new 10-minute or less episodes every week.