In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to identify true AI expertise amidst a crowded market.
You will discover the crucial difference between someone who knows what to do when things go right and a real expert who navigates challenges when things go wrong. You’ll learn key questions to ask and red flags to watch for when evaluating potential partners for your AI initiatives. You will understand how to assess value beyond just cost, ensuring your investment leads to sustainable results. You’ll gain confidence in making informed decisions, protecting your projects from inexperienced advice and costly mistakes. Tune in to empower your strategic choices and find the right expertise for your business.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Need help with your company’s data and analytics? Let us know!
- Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, our friend and colleague Tom Webster has a great saying that I have said many times—there are things in life that should be reassuringly expensive, such as sushi and surgery. Nobody wants a discount surgeon. Because in many cases, you do get what you pay for.
We’ve talked on past episodes of the podcast about what is, or who is, an AI expert, as well as many other kinds of experts. One of the many conclusions we reached was an expert is someone who knows what to do when things go wrong. Anybody can create good results in a great environment when things are easy, when money isn’t tight, but you really can tell the experts when the rubber hits the road. And was it that, “When the going gets tough, the tough get going”?
Christopher S. Penn – 00:57
That’s the old song lyric. So, Katie, you had some thoughts about things that are reassuringly expensive and what it is that we are not paying for, and how this plays out, particularly in the realm of AI and instruction and things like that?
Katie Robbert – 01:14
To your point about surgery being reassuringly expensive, it reminds me—I’m sure we have a lot of Simpsons fans. You’re either going to get Dr. Hibbert or Dr. Nick Riviera. I think in the AI world right now, unfortunately—I’m not trying to throw shade against people who are out there trying to make a living, because with any new tech, you have to start somewhere. But there are a lot of Nick Rivieras out there right now who don’t have the long tenure with technology such as AI that other people, such as yourself, Chris, do. That’s not to say you’re the only person who’s an AI expert.
Katie Robbert – 02:01
There’s a handful out there, but it is a handful of experts who’ve been working in this space long enough to say, “I’ve seen enough to know what’s going to go wrong.” I think that’s one of the big distinctions right now—have you been in the space long enough to see more than just, to your point, really good results in a really good environment? I am not an AI expert per se in the technology, but I would call myself an expert in the process around the technology and knowing what’s going to go wrong. Because I have seen it. I have been awake at 3 AM trying to do software deployments. I have that experience. So I look at AI as any other technology.
Katie Robbert – 02:56
I pull from my couple of decades of experience: here’s what’s going to go wrong. Can I tell you how to build a great prompt? Sort of. That’s not my expertise.
That’s where I look to someone like you, Chris, who’s not only tested it and had it blow up, but then says, “What can I learn from this? How can I make it better?” I think when we’re talking about people we want to invest in, companies that we want to partner with, that’s what I look for. But I feel like there are a lot of experts out there. People have it in their LinkedIn bio: AI consulting or a chief AI officer, or whatever the titles are these days. I don’t even know anymore.
Katie Robbert – 03:42
I look at that, I’m like, “How did you get to that role so fast when AI—generative AI, the consumer version—is still really in its infancy?” Chris, as someone who’s been working in the AI space for a long time, what are your thoughts on all of the experts, and how do you start to suss out who’s real and who’s not?
Christopher S. Penn – 04:12
I think you’ve hit on a couple really important things. First, I call them the three levels of product market fit. These are also just generally three levels that everyone should know when it comes to working with AI tools. If you’re unfamiliar, in product market fit there are basically three things: done by you, done with you, done for you.
Done by you is the lowest level. It is the cheap stuff. It is the easy stuff.
In product market fit, this is things like books. You can get a lovely book for only $29.99.
You have to do everything. You have to read it, you have to think through it, you have to apply the lessons from it, you have to draw conclusions from it. Nothing is done for you. The book is there, but it’s cheap and accessible.
Christopher S. Penn – 04:59
The second level up is done with you. In product market fit terms, this is things like courses where a good amount of the thinking is done. You have to do the rest of the thinking, and you still have to apply it. The highest level is done for you.
This is where a company like Trust Insights says you want help, you know what you don’t know, and you don’t know how to do this or don’t know how to do it well. We will come in and do it for you, and you will get a great result. But that is the most expensive level investment because someone else is doing it. It’s like the difference between a recipe, a meal kit, and dining out.
Christopher S. Penn – 05:40
When we talk about AI experts and we talk about reassuringly expensive, those three levels still apply. So, people who are ChatGPT prompt experts are still at that very lowest level. Nothing wrong with that. But the number of things that can go wrong at that level is very small.
Oh, you mistyped the prompt. You’re not going to break anything. Then you get to that middle level—GPTs and Gems, and starting to look at workflows such as N8N. Guess what? At that point there are more things that could go wrong because you’re starting to connect other systems, you’re starting to customize, you’re starting to build a little bit of infrastructure.
At the highest level, you have things such as enterprise AI deployments, custom builds, AI agents. And at that level a whole bunch of things can go really, really wrong.
Christopher S. Penn – 06:32
You can—oh look, my generative AI model just deleted my production database—which happened not too long ago. It was on Threads. Somebody’s Claude said, “This database is unnecessary. I’m going to drop it,” and dropped the production database. And, to your point, Katie, the expertise comes in at each of those levels: how much can go wrong, and do you know what to do when it does?
That’s where the confusion is. You take a lot of people at that lowest level of ChatGPT prompt expert, put them in an N8N workflow or an enterprise compute environment, and they’re lost—they don’t know what to do. And if they don’t have enough self-awareness to say, “I’m in over my head,” they try to fake it.
Christopher S. Penn – 07:23
There are a lot of dudes—it’s almost always dudes—who have Dunning-Kruger syndrome. Things are going to break, and you’re going to be in that 95% of AI pilot projects that never make it out of pilot, not because of the timeframe, but because you’ve got people running the show who don’t know what they’re doing, and they’re faking it as fast as they can.
Katie Robbert – 07:44
There’s an old saying or an old sort of attitude: “Anybody can teach anything as long as you’re one chapter ahead of the rest of the class.” I feel like that’s the playbook a lot of people are operating with these days. To me, again, that’s an N of 1, but perhaps other people feel the same.
That’s incredibly dangerous because, to your point, if you are only one chapter ahead and Claude says, “Let me go ahead and delete this entire database,” you then have to go to the index, start looking up database deletion, and maybe find six different chapters on it. Now I have to read six chapters of this book just to find out why Claude deleted my database.
Katie Robbert – 08:37
My client’s not going to be real happy about that. Now I need to scramble and make up something and say, “That was supposed to happen,” or “You didn’t need that,” or “It’s fine, we can restore it.” The point from my perspective is—
Katie Robbert – 08:55
You have to determine the level of risk that you’re willing to accept and the level of investment that you can handle, knowing that, theoretically, a bigger investment means lower risk. That said, huge caveat. There are going to be people out there who are going to charge you a lot of money and do a whole lot of nothing. Huge caveat.
Christopher S. Penn – 09:23
But, binders of shelfware.
Katie Robbert – 09:28
Been there, done that.
Christopher S. Penn – 09:29
I’ve got a digital transformation binder for you.
Katie Robbert – 09:32
That’s the thing. I feel like this is something we’ve talked about before. One of the reasons we started Trust Insights was to fight against that feeling that consultants black box everything and leave you with no real action, or they haven’t done anything. They’ve theorized you to death and, to your point, given you binders. I feel like I’ve lost the plot a little bit. But it’s a buyer beware market. There is no shortage of experts—quote-unquote experts—who are willing to upcharge and take your money or lowball you to get their foot in the door and try to get access to everything.
Katie Robbert – 10:26
So, how does the end user—the person on the side of, “I’m the one who needs to make the investment”—start to figure out whether they’re dealing with a real expert or not? I think that’s really the big question. Because when we go back to where we started—”you get what you pay for” and things should be reassuringly expensive—how am I reassured? How do I know that this sushi I’m buying, that is quote-unquote reassuringly expensive, isn’t just a fish stick wrapped in a piece of lettuce? How do I, the end user, who knows the least, know that I’m actually getting what I’m paying for?
Christopher S. Penn – 11:10
That is the question of the day. Now let me ask you, how do you know that in other disciplines? For example, your husband is a butcher. What have you learned from him about how to choose a good cut of meat? How do you choose a steak?
Katie Robbert – 11:30
Typically I don’t, because that is—no. But I ask him a lot of questions. Why this one? Why have you chosen that one?
What makes this one better? What can you do with that one? This one looks like it’s a lower cost. Why would you choose a lower cost piece of protein?
He very patiently answers all of my questions, but he answers them with authority and expertise that comes from his experience. I know from tasting the food that he cooks that he can take a less expensive cut of meat and make it taste more expensive because of how it’s prepared, how it’s cooked, specifically how it’s cut.
Katie Robbert – 12:16
That’s one of the big not-so-secrets: a lot of the end result is how a piece of protein is cut. Once you cook it, you have to cut it a certain way—with the grain, against the grain. These are things that I’m not even going to try to explain because, again, not my expertise, but things that he knows really well. When I say, “What kind of cut is that?” he can actually diagram out for me where a certain cut comes from. I could go ahead and check his answers, see if he’s BSing me—that’s fine—but I trust him, and I know that he knows what he’s talking about.
Katie Robbert – 12:58
But one of the ways that I’ve been able to determine expertise is just asking a lot of questions. Even if I don’t know the answer, it’s the same as an interview. You have to ask a lot of questions: “What about this?” And, “Have you thought about this?” And sometimes when he chooses a cut of meat, it’s the wrong one. We have a mediocre meal, but we learn from it.
Christopher S. Penn – 13:24
I think you’ve boiled that down really nicely into a couple different things. Number 1, you as the buyer have to have at least the vocabulary to be able to say, “What is the marbling on this steak? There’s absolutely no fat on it whatsoever, so it’s going to be like eating my shoe.”
Number 2, you have results. More than anything, the expertise is there because you get good results. You get edible meals most of the time.
Katie Robbert – 13:57
I would say, 99% of the time, it’s an amazing meal—to his credit.
Christopher S. Penn – 14:03
You may not necessarily know the process, but you know the outcome. If we think about the 5Ps: purpose, people, process, platform, performance. The purpose is to have a good meal. The performance is you had a good meal.
Even if the middle—people, process, platform—is a black box, your purpose was known, and the performance was good. That middle part—the people, process, platform—you may not know what happens in there, but you can say this black box delivers a good result 99% of the time. That’s how you judge any level of expertise.
But especially in generative AI, you say, “Here’s what I want to do.” And then you say, “Who’s done this?” And, “Can you show me the results?”
Christopher S. Penn – 14:50
You don’t have to explain them because a lot of people will do, understandably, proprietary black box, super secret, whatever—which is fine—but you can still see from the end result that, yes, this person got the thing. A real concrete example: there are a whole lot of people right now saying a whole lot of nonsense about optimizing content for AI. Most of the methodology, when they bother to explain it at all, is so bad that you think, “You’re an idiot.” Everyone’s saying, “You must do this on Reddit.”
This is the ultimate thing that you have to do. And then they never explain any of it. This happened at Inbound. There’s a whole conversation on LinkedIn about a certain speaker at Inbound.
Christopher S. Penn – 15:34
We’re not going to give them any air time here.
Katie Robbert – 15:35
I saw that this morning.
Christopher S. Penn – 15:36
I know exactly what that dude is doing, and I know how to do it. It’s mostly black hat. It’s mostly in violation of the terms of service. So good luck to your brand if you do those things. There are legitimate ways to do that, and we can talk about that maybe on a livestream sometime.
But I say, “What are the results? Show me what results you got. Are they sustainable results? Are they ethical results that you did well?” When we look at the space around AI optimization and ask, “Who can show results?” the answer is not many people. Not only because there are a lot of big questions—such as, you don’t get insights into what ChatGPT actually says in conversations; you just won’t.
Christopher S. Penn – 16:25
But also, it’s so new and changing so fast that someone who got results three months ago might not be relevant anymore because the technology has changed. Google’s AI Mode just rolled out a brand-new deep search capability a week ago. Totally new. It’s basically deep research for AI Mode.
Now everyone’s saying all those strategies are totally different again. That’s what I would say: take the 5Ps. Assume you’re only going to get limited visibility into people, process, and platform, but you know your purpose and the performance you’re looking for. When it comes to finding those experts or evaluating someone who’s an expert, say, “Show me your results.”
Katie Robbert – 17:08
I’d actually take that a step further. Definitely show me your results. But someone who is an expert in what they do should also be able to explain the process and the platform and the people.
If we go back to the example of my husband being a butcher—and he’s going to love that I’m giving him so much airtime—he is even more private than I am. I can accompany him to pick out the protein. I can benefit from the end result—the performance. If I ask him what his process was, he can explain it to me, he can teach it to me in a way that I can understand it, and I could theoretically replicate it.
Katie Robbert – 17:52
If I say, “What platforms? What cooking tools did you use?” he can explain to me why for certain cooks I would use a cast iron pan versus a stainless steel pan, versus doing it on the grill or the smoker. What oil to use? If I ask, “Why is this oil not a good one to use?” It doesn’t have a high smoke point, so you’re really just going to set things on fire. Bad idea.
When I think about experts—in addition to getting good results, because that feels like a given; you can’t call yourself an expert if you can’t get good results—you should also be able to explain how you got from start to finish. Not everyone is a really good teacher.
Katie Robbert – 18:40
It’s just true. Not everyone is. The people, in my opinion, who are better experts than others are able to explain things in a way that you can at least grasp what they’re talking about. You may not be able to replicate it or do it yourself.
Chris, you’ve explained things such as N8N to me a million times. The problem is now on my side of the keyboard, because you’ve been able to explain it to me, and I logically understand the process. I know the platforms that I would need to use. That is a vote of confidence to say Chris knows what he’s talking about, because he’s been able to get me to the point where I could probably do it myself. Now I need to take the initiative to do more learning.
Katie Robbert – 19:23
The other thing that I feel really makes a good expert is that they’re someone who is continually learning. They’re not saying, “I know this thing, and that’s it. That’s all I’m ever going to know.” So again, to pick on my poor husband, but also to praise him.
Katie Robbert – 19:42
Whenever he’s about to approach cooking something, the first thing he does is check his resources. He makes sure he knows, “Is this the best technique for this particular cut of meat? Is this the seasoning I want to use?
How does this pair with other things?” He’s constantly challenging himself to learn more, but also making sure that he’s brushing up on the basics that should be muscle memory at this point. Those things, to me, make the best well-rounded expert in any one industry or discipline. Someone who can tell you, “Here’s the problem I’m trying to solve.”
We’re hungry, we’re going to get a steak. Here’s the process, here’s how we’re going to cook it.
Katie Robbert – 20:27
The people are me and him and whoever he’s getting the steak from. Process is how he’s going to cook it. In this instance, for this cut, he would cook it this way at this temperature. Here are the tools he’s going to use, and here’s the outcome.
He can show it to me in a way that I can understand. He can show his references to say, “This is what I consulted,” and, “These are the YouTube videos I watched,” and, “These are the experts that I follow that I learned from.” Not everyone is going to be that well-rounded. But they should. They should hit most of those marks if they want to call themselves a true expert.
Christopher S. Penn – 21:04
Exactly. Implicit in what you just said is that they also know what their limits are. They also know where they don’t have expertise, because there are a lot of people (for some strange reason, mostly dudes) who don’t acknowledge, “This is where my knowledge stops,” or, “I don’t know how to do that.” That’s always been my thing: “I don’t know how to do it.”
We’ll figure it out, and we can figure it out together. But I don’t know the answer to that question. Someone asked me a question about C# or .NET. Nope.
No, I have no experience in those languages. I’m not going to bullshit you and say, “AI could certainly do it.” No idea.
Christopher S. Penn – 21:47
I know the models can code in those languages, though not as well as they can in Python, because there’s more Python available on the web and in the model training data. I also completely agree. The ability to explain things, even if you don’t explain everything in people, process, and platform, at least gives people something they can start with, and that is helpful.
That’s part of the reason we do the “So What” livestream every week: so that you can see the actual people, the process, and the platforms that we’re doing things with, and so you can be reassured, when we come to you with a sales proposal, that we’ve done something like that before. That’s probably a show for another time: why we structured the livestream the way it is.
Christopher S. Penn – 22:31
But a big part of it is so that we can say, “Yes, here’s how we do this,” because a good number of our projects don’t have a clear “You’ll save 82% in 6 months on this.” A lot of the time we’re fixing things that you left screwed up for 10 years.
Like cleaning out your shed, you don’t see the immediate ROI of cleaning out your shed. You do the first time you go to mow the lawn, and it doesn’t take you 3 hours to get the mower out of the shed so that you can mow the lawn. But until that happens, you don’t see the benefit of cleaning out the shed.
Christopher S. Penn – 23:05
When we come in and clean out your shed, metaphorically speaking, you may not see the benefit right away, but you will the first time you go to try and find something. The “So What” livestream shows, “Here’s how we’re going to clean out your shed.”
Katie Robbert – 23:18
I think that is one of the misunderstandings of what makes an expert. I get it. People are trying to understand, “What’s in it for me? What’s the ROI? What am I going to save if I invest in you?” A lot of people are very quick to say, “I’m going to triple your production rates by doing this,” or, “I’m going to save you X number of dollars.” You don’t know that.
A real expert would be able to say, “We don’t have a number for you, but here’s the value that we do provide.” And it’s hard because right now everybody’s looking for the ROI of something. It doesn’t matter what it is. I want the ROI of getting 8 hours of sleep versus 7 hours of sleep.
Katie Robbert – 24:11
I want the ROI of putting cream in my coffee or having it black. Everybody’s obsessed with the ROI because we’re all very budget-conscious. We are thinking truly about, “If I invest in this, what am I getting?” It’s even more challenging for experts like you and me, Chris, to be continually demonstrating our value.
But it’s even more important now that we’re making sure we’re clear: “This is the value that you get. Here’s our expertise.” I think maybe one of the big takeaways we can take from this is really sharing those stories. I don’t know if case studies would be the right word, but writing up and sharing some of those stories around things that went wrong.
Katie Robbert – 25:08
Not only does it humanize us, but it also shows, “We’ve been there.” This is why we feel confident approaching this problem, because we’ve done it. And here’s how. Here’s what we learned from it—the lessons learned.
I feel that’s a huge value because people want to know, “Have you seen this before?” We get asked that question, “Have you ever worked with the system before?” “Have you seen this before?”
“What goes wrong with this?” I feel confident that we can answer that. I think the big takeaway is that you and I need to make sure we’re putting that information out there.
Christopher S. Penn – 25:43
And the other thing that’s an add-on to that is the definition of case studies. There’s a marketing case study, like, “Here’s the thing.” But then there’s the more HBR Harvard Business School-style case study, which is storytelling—telling good stories. I think that, to your point, is exactly where we would want to go to say, “This does not have an immediate ROI, but here’s how it benefits you, and here’s how we’ve seen this play out with that one company that refused to let the marketing team see their data because sales was doing so poorly.”
Christopher S. Penn – 26:19
Stories like that, though, are great to share, and maybe we have a “Marketing and AI Bedtime Stories” collection of the battle stories from the last 10 years of doing this stuff. If you’ve got some bedtime stories that you want to share about marketing and AI, pop on by our free Slack group—go to Trust Insights AI Analytics for Marketers—where you and over 4,200 other marketers are asking and answering each other’s questions every single day. And wherever you watch or listen to the show, if there’s a channel you’d rather have it on instead, go to Trust Insights AI TI Podcast, and you can find us in all the places fine podcasts are served. Thanks for tuning in, and we’ll talk to you on the next one.
Speaker 3 – 27:05
Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools such as TensorFlow and PyTorch and optimizing content strategies.
Speaker 3 – 27:58
Trust Insights also offers expert guidance on social media analytics, marketing technology and Martech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies such as ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMO or data scientists to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the In-Ear Insights podcast, the Inbox Insights newsletter, the “So What” Livestream webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights is adept at leveraging cutting-edge generative AI techniques such as large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations.
Speaker 3 – 29:04
Data storytelling. This commitment to clarity and accessibility extends to Trust Insights’ educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information.
Need help with your marketing AI and analytics?
You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
Nice episode—clear and practical. Quick question: when evaluating an AI partner, which single metric or proof point do you think reliably separates true experts from inexperienced vendors? Curious about a concise, actionable indicator to look for.
The first step to anything is to have the conversation. You can also run the company information through an AI deep research project on Gemini and see what it comes up with.