In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris provide a guide to generative AI for the C-Suite and discuss how CEOs should approach understanding and using large language models and AI, starting with identifying business problems first before considering technology solutions.
Subscribe To This Show!
If you're not already subscribed to In-Ear Insights, get set up now!
- In-Ear Insights on Apple Podcasts
- In-Ear Insights on Google Podcasts
- In-Ear Insights on all other podcasting software
Advertisement: Google Analytics 4 for Marketers
Attention marketers! Are you ready to unlock the full potential of Google Analytics 4? With only a few short months left until GA4 becomes the sole Google Analytics option, now is the time to get ahead of the game.
TrustInsights.ai's Google Analytics 4 course is here to guide you through the measurement strategy and tactical implementation of GA4 in just 5.5 hours. With 17 comprehensive modules, you'll gain the knowledge and skills necessary to effectively set up and configure GA4 to work for your unique business needs.
But that's not all. Our newly updated course, released in January 2023, covers major configuration differences in Google Analytics 4 to ensure you're up-to-date and fully equipped for success. Plus, our course is fully accessible with captions, audio, and text downloads, so you can learn at your own pace and in your preferred method.
The clock is ticking, and with GA4 set to replace all previous versions of Google Analytics, your year-over-year data only starts accruing on the day you set GA4 up. Don't miss out on valuable insights that will help your business thrive. Register for TrustInsights.ai's Google Analytics 4 course now and take control of your data.
Sponsor This Show! Are you struggling to reach the right audiences? Trust Insights offers sponsorships in our newsletters, podcasts, and media properties to help your brand be seen and heard by the right people. Our media properties reach almost 100,000 people every week, from the In-Ear Insights podcast to the Almost Timely and In the Headlights newsletters. Reach out to us today to learn more.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Need help with your company’s data and analytics? Let us know!
- Join our free Slack group for marketers interested in analytics!
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher Penn 0:00
In this episode of In-Ear Insights, we are talking about large language models and AI, but specific to the corner office, the big chair, the CEO, and that, of course, Katie, is you.
When you look at the landscape of all the things that we talk about in AI, and you think of it with your CEO hat on, what are the things that you as a CEO need to know about in order to be effective, in order to understand the opportunities and the risks? What's on your mind when you think about all the stuff that I foam at the mouth about regularly?
Katie Robbert 0:43
You know, the first question I always ask is, tell me when I need to pay attention. And, you know, that's a loaded question, because it depends on what the business is about and what the CEO cares about.
So for us, my questions more specifically would be: will generative AI, will large language models make our business better? What does that mean? Will we have a competitive advantage if we integrate large language models into our business? What does that mean for my bottom line? Is there going to be a lot of cost upfront to get them set up, but then in the longer term, am I going to save much more money than I invested into this thing? What does that mean for me needing to skill up myself? What does that mean for me needing to skill up my team? What kinds of resources do I need to consider? Are there legal implications for using this kind of technology? That's a question I have about any kind of tech that we bring on: what are the legal implications? What are the security considerations? We personally at Trust Insights don't deal with HIPAA data, protected health information, or personally identifiable information; thankfully, that's not the nature of our business.
But that doesn't mean that I don't need to be aware of what those things mean.
So to sum up, the first question I would ask is: what information do I need as a CEO, right now, today? What do I need to be paying attention to? Do I need to be paying attention to all the little startups who have their own version of a skin on generative AI? Or do I need to be thinking bigger than that: what does it mean to bring a large language model into my business, period?
Christopher Penn 2:41
I think it's the latter.
When you think about large language models, they're almost like kitchen appliances, right? A brand new kitchen appliance.
You don't know what it is; it just magically appeared one day. Someone left it on the couch for you.
Your logical questions are: what is this thing? What does it do? What's it capable of? How dangerous is it? Is there a manual? All those things. It's very difficult to get that baseline understanding, particularly from, say, the mainstream news and social media sites, because everyone and their cousin has got their own perspective on it.
And there's not really a baseline of, here are the starting points, the things that you need to know. Like, hey, a large language model is fundamentally a word prediction machine that is used for tasks that involve language.
So even something as simple as being able to understand: okay, this is a task that involves language, so an LLM is a good choice for it. This is a task that does not involve language in any way, shape, or form, and a large language model is going to fail spectacularly at it. You can't use the technology for it, just like you can't use a blender to make steak.
I mean, theoretically, you could do something with it, but it's going to be horrible.
It's not going to be the outcome you care about.
And so that, to me, would be where any CEO would want to start: okay, explain this to me in terms of the opportunities and risks. What does it do? What's in it for me, and what can go wrong?
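The "word prediction machine" idea Chris mentions can be made concrete with a toy sketch. Real LLMs predict subword tokens using transformer networks trained on enormous corpora; this crude word-counting version (with a made-up corpus and function name) only illustrates the core intuition of "predict what comes next":

```python
from collections import Counter, defaultdict

# Toy training corpus: a few repeated phrases.
corpus = (
    "the customer asked about the refund policy and "
    "the customer asked about the shipping policy"
).split()

# Count which word follows which word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))    # "customer" follows "the" most often here
print(predict_next("asked"))  # "about"
```

Scaled up by many orders of magnitude, with far more context than one preceding word, that prediction step is what makes an LLM good at language tasks and useless at non-language ones.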
Katie Robbert 4:11
It's funny that you bring up the blender, because I was going to use the blender as an example as well.
So a few months back, my husband brought home this really nice Vitamix blender, and I looked at it like, okay, but what does it do? And he starts explaining to me that you can actually cook soup in it, it does this, it cleans itself, it has all these settings.
And I, as someone who isn't comfortable in the kitchen, immediately started to get overwhelmed.
And I had to reframe the question to say: okay, but what do I specifically need to know about this type of machinery, given the type of cooking that I'm going to do when you're not around to supervise me, to not chop off my fingertips because I stupidly reached into the blender trying to get everything out of it, forgetting that there are blades inside of it?
And so he showed me the two or three functions on it that I need to care about as the person who's going to be doing the work.
And that's very much the same way that I would approach these conversations with another CEO. When the CEO says, what do I need to know, you need to understand that particular person and that particular company, so that you can streamline the conversation and get rid of a lot of the distractions.
And so, you know, when I think about your large language model discussions, Chris, there's a lot of information in there.
And thankfully, I've seen the talks, and I talk with you enough that I understand all the pieces, but not all the pieces are relevant to me.
Christopher Penn 5:50
No, that's totally fair.
And the way I typically like to suggest people think about stuff like this is: one, you do need to understand the technology to some degree, right? It's the same with the blender; you do need to understand what the basic function of a blender is.
Because it's not a frying pan.
And mistaking it for a frying pan, as you mentioned, could have catastrophic results on things like your fingertips.
From there, it's asking people, where is the need? So one of the things that we talk about in our keynote addresses, the workshops, and the trainings we do, is having people start looking at their organizations.
There are sort of two fundamental vectors. There's stuff that you can do inside your company (we'll call this the internal square): things like your operations, finance, and HR, stuff that is happening within the walls of your company. And then there's all the stuff external: partners, vendors, customers, the general public.
So the first thing you would want to do is look at your company and say, where's the need right now? Is it internal or external? And then the second dimension is optimization versus innovation. Are you looking to do things like save time, or maybe optimize headcount? Then you're in the optimization quadrant. Or you're looking to streamline customer service interactions, to do what's called call deflection, where you divert call volume away from your expensive call centers to machinery. Those would be examples of optimizations, internal and external. Or are you looking at innovation? Are you looking at net new things, new capabilities? Like, competitively, there's no difference between you and the next three competitors other than the logo, right? You all do exactly the same thing. Is there an opportunity to take this new thing and offer something new externally, or internally introduce a new product line or a new line of business based on the capabilities the tool gives you? That's how I would typically start a very high-level discussion with a CEO.
Because beyond the basics, you don't really need to know tokenization and embeddings and transformers.
That's not helpful.
It's: look at the business and say, well, where are your needs right now?
Katie Robbert 8:18
And I think that this is a smart way to approach it.
Because even as you're mentioning, you know, tokenization, and I already forget what the other word was.
I can see, and I've been part of, a lot of conversations where someone will get stuck on those things: oh, so that's something I need to know.
And they'll try to get so far into the weeds with that particular functionality that it's distracting from the overall goal of what it is you need to know.
This is something that I'll be talking about in some of my upcoming sessions, including MarketingProfs B2B: when a technologist is talking to a non-technical person, and vice versa, there needs to be some way to refocus the conversation so that you don't get stuck in the weeds of the technical functionalities.
Chris, you've explained tokens to me a bunch of times, and for the life of me, they don't stick in my head. I have a general idea.
I could probably describe it to someone, but if I described it back to you, you'd say, okay, you're about 40% right.
And 20% of that is how you spell the word token.
And even that I might get incorrect.
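For anyone in Katie's position, a token is just a chunk of text, often a word piece, that the model reads and predicts. This throwaway sketch fakes the idea by chopping words into four-character chunks; real tokenizers (such as byte pair encoding) learn their vocabulary from data, so the chunks below are purely illustrative:

```python
def toy_tokenize(text: str, max_len: int = 4) -> list[str]:
    """Crude stand-in for a tokenizer: split on spaces, then chop
    long words into max_len-character pieces."""
    tokens = []
    for word in text.lower().split():
        for start in range(0, len(word), max_len):
            tokens.append(word[start:start + max_len])
    return tokens

tokens = toy_tokenize("Tokenization splits text")
print(tokens)       # ['toke', 'niza', 'tion', 'spli', 'ts', 'text']
print(len(tokens))  # 6 tokens for 3 words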
And so I like the idea of focusing the conversation with someone in the C-suite, specifically maybe the CEO. When they say to you, what do I need to know, you kind of push back a little bit with more questions than answers, because what they need to know is going to differ case by case.
Christopher Penn 9:51
I mean, can you imagine going to the doctor's office and the doctor saying, you need this, before you've even said anything? Like, what do you mean, I need gallbladder surgery?
Katie Robbert 9:58
I don't want to die! All I said was my foot hurts.
Christopher Penn 10:03
So it’s very similar.
So if you think about it, for example, just using Trust Insights and our business as an example: as a CEO, in these four quadrants, where do you think our need is right now?
Katie Robbert 10:18
I would say our need is not internal; our need is external.
And our need is optimization.
The services that we provide are foundational. You know, we're not executing on campaigns, we're not drafting email copy and writing content for our clients.
We're not ghostwriting.
And so I would argue that those types of tasks, those more public-facing things that you can actually see, that are tangible, are more of the innovation.
Or you could also say that those are optimization.
But I would say that our processes, the things that we do, and a lot of what we do is data analysis, straddle that line of innovation and optimization: it's innovation in terms of the techniques, but it's optimization in terms of getting to the answer faster so that you can take action on it.
Christopher Penn 11:26
I think that makes sense.
That’s the kind of exercise that I think is really valuable.
Because once you do this exercise, once you sit down and say, okay, here's where our needs are, we know this is the biggest area, then you can drill into that. You can say, okay, let's say your customer service is just terrible.
You know, your NPS scores are in the toilet, nobody likes you, you have two stars on Yelp, whatever the measure of success is.
And you identify that that is an external problem.
And it's not an optimization problem; it's an innovation problem.
People just don't like your products and services.
So you need a new product or service.
At that point, once you unpack that, you can bring out the 5P framework and say, okay, we know from this exercise what the purpose is.
And now we can determine: is a large language model, or the AI technology of choice, a good fit for that? If people just hate your company's products and services, that might or might not be a problem you can solve with a language model.
If people hate your customer service because they don't like working with your customer service reps, now you're into the territory of, well, maybe a language model can help, because maybe your customer service team is so overburdened that they can't deliver good service.
But if you could do call deflection, and maybe chop 40% of your call volume off, now your team has more breathing room to deliver better service.
I remember talking to one person who worked at a major bank, and they were saying, yeah, we have to hard limit reps' time on the phone to five minutes or less.
They can't go over five, just because we have so much volume. Well, there are some problems you can't solve in five minutes, right? And so your CSAT scores go in the toilet, because customers are just pissed off.
If you could deflect 20 to 40% of your volume to a language model that can answer the easy questions, like, what's the interest rate on my credit card? Or, hey, I'm going to be late with a payment, what do I do? And it can deliver good answers.
Now you can say, okay, reps, you have 10 minutes, or maybe 15 minutes, to solve a customer's issue.
And just by virtue of doing that, your CSAT scores are going to go up, because people feel like they're getting better service.
So that's an example where the innovation of a language model as an external service will help with the optimization internally.
So these quadrants are also interconnected.
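The call deflection pattern Chris describes can be sketched in a few lines. This toy router uses keyword matching with made-up intents and canned answers; a real deployment would put a language model behind it and escalate anything it can't answer confidently:

```python
# Hypothetical easy intents and canned answers, for illustration only.
EASY_INTENTS = {
    "interest rate": "Your card's current rate is shown under Account > Rates.",
    "late payment": "You can request a one-time extension under Payments.",
}

def route_call(question: str) -> str:
    """Send easy, known questions to the bot; everything else to a human."""
    q = question.lower()
    for phrase, answer in EASY_INTENTS.items():
        if phrase in q:
            return f"BOT: {answer}"
    return "HUMAN: transferring you to the next available representative"

print(route_call("What's the interest rate on my credit card?"))
print(route_call("I think someone stole my identity"))
```

The business value lives entirely in the second branch: every question the bot absorbs is breathing room for the humans handling the hard cases.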
Katie Robbert 13:57
Well, and so what I keep hearing you say, or at least the theme I keep hearing, is: we can't even get to what the C-suite needs to understand about a large language model, because there's so much more work to do first in order to provide valuable information.
So first and foremost, the C-suite, the CEO, the CMO, the CFO, they almost need to do a SWOT analysis first: where are the threats in the business? Where are the opportunities in the business? You know, maybe the C-suite isn't even aware that the NPS scores are in the toilet; that would be the place to start.
So first, you need to surface all of this information about what's going on with your business.
Then you need to categorize it: okay, we're doing these five things well, let's just leave those be for now, they're not a high priority. Or, these are going to be really low-hanging fruit, small wins that are going to do a lot of big things for us if we introduce large language models into them.
And then we can refocus on all of the other things that aren't going well.
And so it sounds like what we're collectively saying is that answering the question of what do I need to know about large language models is the wrong place to start. It's part of the conversation.
But first and foremost, we need to understand what's going on in the business, both internally and externally: where there are optimization and innovation opportunities, where there are threats from our competitors, where our customers aren't happy, where things are going really well, what processes need improvement, but also what processes are just swimming along and working well. Then we can tailor responses to say, now, given all of that other information, this is what you need to know about a large language model.
Christopher Penn 15:48
So we had a consultation with the president of one of our client companies recently, and one of the key takeaways from that conversation was that they knew internally what was broken; they had done that groundwork.
And they know, for example, that one of the big, big, big things that they have an issue with is hiring: we cannot hire people fast enough who are qualified to do this specific role.
And that's an example where you can then say, okay, now that we know what the problem is that's got your hair on fire, let's pick apart that problem.
Look at the people, the processes, the platform, and ask: is there an opportunity to introduce the capabilities of a language model so that it will alleviate some of the burden? In the case of hiring, for example, one of the bottlenecks is getting a bunch of unqualified candidates.
So how do you optimize the hiring process? Are there opportunities for language models? Yes, there are, with a gigantic asterisk. If you would like to know what the asterisk is, you can go to the Trust Insights YouTube channel and watch our most recent livestream on gender bias in language models, because that is the big asterisk with language models.
But the ability to take that known problem, pick it apart, and say, here's where this technology can make a difference.
That is where the magic happens.
Katie Robbert 17:18
I agree with that.
So there's a lot of work to be done before you even get into that conversation about a large language model.
But let's say, Chris, I came to you and said, what do we need to know about a large language model? And you pushed back and said, well, we need to do all of these exercises first. I can 100% of the time imagine the CEO pushback.
They'd be like, yeah, that's great.
But I still need to know what this thing is. I need to understand the pieces of it so that I can start to think about how to frame these conversations.
Acknowledging that the challenge there is, once you start to give someone this information, they're going to start to frame all of these conversations around this solution, whether or not they realize it, because they think that this is already the solution.
So that's going to be one of the larger hurdles: giving someone the information in such a way that they don't already think, well, this is the solution.
This is what we're going to do.
And now we need to retrofit all of our problems into this solution.
Christopher Penn 18:30
The way that you can mitigate that, to some degree, is, again, not focusing on the mechanics of the technology, but understanding the use cases, the implementations that exist in any given discipline. In the keynote talk, we typically walk through, for any given industry or any given function, how language models' capabilities fall into six broad buckets: generation, extraction, summarization, rewriting, classification, and question answering.
So those are the big six buckets.
And so when you talk to somebody, you say, okay, you've got this problem, a candidate sourcing problem.
When you decompose that problem, you ask: do the processes that occur within that problem fit in any of these six buckets? If they don't, then the answer to the person's question is, it's not going to help here.
Right? If the process itself doesn't have one of these six functions as a core part of it, a language model is not going to help; it's just going to make things worse.
So when somebody says, well, what do I need to know? This is the bare-bones minimum of what anyone would need to know about a language model.
These are the six general things you can do.
There are obviously use cases for every single one of them that are specific to the job that's being asked for.
For example, if you're talking about finance and the CFO's office, one of the easiest, biggest things you can do is say, okay, here are the new tax regulations for this year; summarize what's changed so that we can adapt our processes quickly.
That's an example of really low-hanging fruit, very, very easy for a machine to do.
Super valuable for the entire CFO's office, because they can go, okay, now we know we have to meet these new regulatory requirements. But it's contingent upon the CFO knowing: hey, tax laws changed, what do we need to know? You have to be able to ask that question. But this is where I would start when you say, what do I need to know: this is what you need to know.
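The six buckets lend themselves to a simple checklist. The example prompts below are invented for illustration, and the word-matching screen is deliberately crude; the real test is whether the process, once decomposed, contains one of these six functions:

```python
# The six use-case buckets from the episode, each with a
# hypothetical example prompt.
LLM_BUCKETS = {
    "generation":         "Draft a job description for a data engineer.",
    "extraction":         "Pull every company name out of these meeting notes.",
    "summarization":      "Summarize what changed in this year's tax regulations.",
    "rewriting":          "Rewrite this policy memo at an eighth-grade reading level.",
    "classification":     "Label each ticket as billing, technical, or other.",
    "question answering": "What does clause 4.2 of this contract require of us?",
}

def maybe_llm_fit(task_description: str) -> bool:
    """Crude screen: does the task description mention one of the six
    buckets? A real evaluation would decompose the process itself."""
    text = task_description.lower()
    return any(bucket.split()[0] in text for bucket in LLM_BUCKETS)

print(maybe_llm_fit("summarization of new tax rules for the CFO"))  # True
print(maybe_llm_fit("reconcile double-entry bookkeeping ledgers"))  # False
```

Note that the second example fails the screen for exactly the reason discussed later in the episode: bookkeeping is not a language task.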
Katie Robbert 20:36
And I think that that makes sense.
Because you're not getting into the technical pieces of it. You're not talking about tokens, you're not talking about fine-tuning, you're not talking about things that are relevant but could be distracting to that specific audience.
You know, I would imagine if you're talking to the CFO and you start talking about tokens, they may start, because they are bottom-line driven, trying to add up in their head: well, if I only have this many tokens, and this many people, and this many times I can run the thing, what is that going to start to cost me over time, given what we want to do with the thing? It can be very distracting.
It's all important information that has a time and a place.
But when you're introducing these topics, introducing these technologies, to someone who's unfamiliar with them, you have to focus the conversation.
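That back-of-the-envelope token math a CFO would run is easy to make concrete. The price per 1,000 tokens below is a placeholder, not any vendor's actual rate, so treat this purely as a shape-of-the-calculation sketch:

```python
def monthly_token_cost(tokens_per_run: int, runs_per_day: int,
                       price_per_1k_tokens: float, days: int = 30) -> float:
    """Back-of-envelope API spend estimate. The price argument is a
    made-up placeholder; check your vendor's current rate card."""
    total_tokens = tokens_per_run * runs_per_day * days
    return total_tokens / 1000 * price_per_1k_tokens

# e.g. 2,000 tokens per report, 50 reports a day, at a
# hypothetical $0.002 per 1K tokens:
cost = monthly_token_cost(2000, 50, 0.002)
print(f"${cost:.2f} per month")  # $6.00 per month
```

Which is also why the token discussion belongs later in the conversation: the arithmetic is trivial once the use case and volume are known, and meaningless before that.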
Christopher Penn 21:32
This framework is essentially, more or less, the cookbook, right? You don't need to know that a blender uses a series of electromagnets to power a brushless motor at a certain speed.
What you do need to know is: don't use it for steak.
Use it for soup, use it for smoothies.
Definitely use it for margaritas. Don't use it for steak.
And so having this sort of conceptual cookbook gets you to that for language models: do use it for this. You'll notice that nowhere on here does it say analytics, right? Nowhere does it say double-entry bookkeeping, because those are not language tasks, and therefore they don't fit in this framework of what this tool can do.
And so even just understanding that distinction will help reframe the conversation for a CEO, a CFO, a CTO. Say, hey, Microsoft is going to be rolling this stuff out into Microsoft Office, and then it's going to be in Google Workspace.
What are we supposed to do with it? Well, what language-based things are you trying to optimize in your organization? And what things do you have problems with that are not language? Language models will help with the first.
Language models will not help with the second.
Katie Robbert 22:50
So what if the CEO comes to you and says, well, how are we going to make more money using generative AI? Or, can we make more money? Or, you know, can I let go of my whole sales team, because generative AI can do all of this for me? What if the CEO says to you, you know, I heard that Bob down the street, my biggest rival of a CEO, who always beats me in tennis, is using generative AI. I want to use it too.
Christopher Penn 23:24
I mean, to answer those questions: well, I can't fix Bob's tennis.
Katie Robbert 23:30
You need to get a better coach, I guess.
Christopher Penn 23:34
But we go back to, well, what's most broken? Because fundamentally, machines do things better, faster, and cheaper. They typically do a better job with most tasks than humans do for the equivalent task.
They do stuff a lot faster than humans do.
And they typically do it much cheaper.
So when you look at your two-by-two matrix of the issues that you have in your organization, one of the ranking factors for deciding how you prioritize should be: is this costing me a whole bunch of money, or am I not making enough money from this? For example, I used to work at a company that had a terrible, terrible sales team; they closed less than 1% of the opportunities they were given.
You could have replaced most of the sales team with a dog, and the dog probably would have closed more deals, because it would just look cute and bark.
Katie Robbert 24:27
I buy it.
I don’t know what you’re selling, but I would have bought it.
Christopher Penn 24:33
And so a big question there is, okay, great.
We know the sales team is the problem.
It's an optimization problem.
It's internal, but it's costing us a lot of money, because it bleeds over into the external: we can't sell to people.
Why? Because the salespeople themselves were not skilled salespeople.
They were just randos picked up off the street.
It felt that way, anyway.
Is that a language problem? For a good chunk of the sales process, the answer is yes.
Right? They were doing the old grab-them-by-the-tie hard sell.
Which stopped working 20 years ago.
More than that, yeah.
And can you use a language model to solve that problem? Yes: by changing up what the salespeople say, how they say it, and who they say it to.
And now you're into the territory of, okay, this thing can make me more money.
So it's, again, decomposing the problem to understand what the problem is, understanding the people and the processes that are in place, and then seeing how the technology can improve some of the processes. In this example, the answer is yes.
To a degree. You still should probably fire most of the salespeople, because they're terrible, right? You could replace 80% of them with machines, and no one would know the difference, because they're so bad at their jobs.
To the other questions of, can I replace XYZ person? Well, not really a person, per se; you can replace tasks, for sure.
And so a question that everyone should be thinking about is: of the tasks that I do every day, how many are language-based? And is there an opportunity to use a tool to help improve the way I use language to accomplish that task? If the answer is yes, then there's an opportunity there.
And again, for the CEO, you're thinking about that organization-wide. In a given department, like HR, how much of your work is language-based? How good is it right now? And then, what are the opportunities to use language tools to improve the skill with which you use language?
Katie Robbert 26:40
Do you think, and this is a little bit off topic, but I think it's still relevant because I can see where the conversation is going to come up at the C-suite level: do you think that a company that is powered by AI has more of a competitive advantage than a company that isn't? So for example, you know, we do a lot of data analysis, we do a lot of trying to understand what's in your tech stack. Just in your opinion: if we put on our website, we are backed by artificial intelligence, our processes are AI-driven, and someone who does the exact same thing doesn't, do you think we have a competitive advantage?
Christopher Penn 27:31
In this particular example, it's a maybe. AI is like a blender, right? We've got to keep coming back to this: if you don't know how to cook, a blender is not going to help.
If you know how to cook, but all of your line chefs are still using hand whisks and knives, will a blender make those already skilled cooks better and faster?
Yes, and you will have a substantial competitive advantage over a competitor who is still using hand whisks and knives, because your team can do stuff much, much faster.
And you can maybe use fewer people, or get those people retasked to doing other things in the kitchen, because you won't have 10 people all chopping up apples.
And it's the exact same thing with AI.
One of the things that we say often, because people ask us this question all the time: is this thing going to take my job? The answer is, AI won't do your job.
But a worker who is skilled with AI will take the jobs, plural, of people who are not. So a company that is skilled with AI will have inherent operational advantages over a company that is not deploying AI, for any process that involves language.
Katie Robbert 28:59
And I think that that's a really good distinction.
Because it is a question that comes up a lot.
And then if you think about it from the perspective of the CEO, there may be the question of, well, how much of my workforce can I replace with artificial intelligence? How much money can I save, to look good to my investors, by bringing in AI and letting go of 60% of my team? The answer is: that's a terrible strategy.
It's more about optimizing the tasks, and then rethinking the roles and responsibilities.
Christopher Penn 29:33
In our case, we don't have enough people to replace with machines.
But what we are seeing, and this has been true for the entirety of the existence of our company, is that because we are skilled and skillful with the uses of machinery, we don't have to hire nearly as many people to do the same amount of work.
I mean, even just monthly reporting: the amount of reporting that we crank out for our clients should take a team of, say, five to eight people.
We do it with one person over a span of about two days.
And the report quality is as high as or higher than what that team of five to eight people would produce.
Because no one's manually copying and pasting; no one's doing that. It is all automated. Some of it is machine learning based, some of it is just straight-up code automation.
And so we're not letting go of people, but we're not really hiring either, not to the levels that you would expect for a company of our size and our revenue.
Katie Robbert 30:35
So if we go back to the original question of what does the CEO, what does the C-suite, need to know about large language models: there's a lot of work that needs to be done before you can answer that question.
But if you get asked that question, the best way to approach it is to talk through the use cases of a large language model, rather than trying to describe what composes a large language model, where the data comes from, how the tokens work, how you fine-tune it, where the biases exist. Start with the use cases, and then that should lead into the conversation of, what is it that you, the CEO, are trying to achieve by bringing artificial intelligence into this organization? What is your purpose?
Christopher Penn 31:21
Exactly: what is the purpose? And then, where are the greatest needs? It's the same question we've been asking people for decades, like, what keeps you awake at night? And when you start to decompose those problems, then you can start to see where there are opportunities to use any new technology: large language models, diffusers, transformers, whatever, take the technobabble of your choice.
If you decompose those problems into their component pieces and break them down into the five Ps, you can say, okay, here's where this technology is a good fit.
And more importantly, here's where this technology is not a good fit, and it's not going to make things better.
Because one of the problems with shiny object syndrome, which a lot of people have, or what I call airline magazine syndrome, is that you think a tool can be used for everything. And it's good to try; it's good to do that exercise.
Okay, I'll try this.
But you will find out very quickly that yes, there are plenty of situations where it is just not a good fit.
You know, just like the blender: there are plenty of foods you should just not blend. You know, spaghetti and meatballs?
Don't cook it in a blender.
Katie Robbert 32:36
Now, I hadn't even considered that one.
Now you've got me thinking.
Christopher Penn 32:42
I mean, you can do the sauce in the blender.
Katie Robbert 32:44
I, for the most part, stay away from kitchen appliances.
You know, I can bake, but for the most part, I just don't touch anything with sharp edges. In general, it's safer for everyone.
Christopher Penn 32:57
On that note, if you have comments or questions or things that you want to talk about when it comes to the intelligent use of AI, pop by our free Slack group. Go to TrustInsights.ai/analyticsformarketers, where you and 3,300 other professionals are asking and answering each other's questions every single day.
And wherever it is that you watch or listen to the show, if there's a channel you'd rather have it on instead, go to TrustInsights.ai/tipodcast.
Thanks for tuning in.
I will talk to you next time.
Need help with your marketing data and analytics?
You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new 10-minute or less episodes every week.