So What? Marketing Analytics and Insights Live
airs every Thursday at 1 pm EST.
You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!
In this episode, we explore the new frontier of measuring Competitive Relevance in GEO.
You will discover how to master brand visibility within the world of artificial intelligence by focusing on the factors under your control. This shift allows you to establish Competitive Relevance in GEO by aligning content with what large language models crave. By uncovering the gap between your site and your rivals, you will gain a roadmap for Competitive Relevance in GEO that stands up to scrutiny.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
In this episode you’ll learn:
- How relevance in GEO works
- What powers the answers AI considers relevant
- How to measure your relevance against a competitor
Transcript:
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Katie Robbert – 00:32
Happy Thursday. Welcome to So What?, the Marketing Analytics and Insights live show. I am Katie, joined by Chris, and John is back from his Italian vacation full of pasta.
Christopher Penn – 00:47
Oh, Lord.
Katie Robbert – 00:49
This week we’re talking about measuring competitive relevance in GEO. You can find our live stream playlist on our YouTube channel—go to TrustInsights.ai YouTube. We’ve talked about the basics of GEO: what is it, what do you need to know, how do you implement it, and how do you measure it?
You can also get a lot of that through our GEO course, which you can find at TrustInsights.ai/GEO101. But now the next natural question that people are asking is, “Okay, cool, I’ve implemented it for myself. What’s the industry standard? How do I stack up against my competitors? What are they doing? Am I showing up as often as they are?” Now we’re dipping our toes into GEO 201: measuring competitive relevance in GEO.
Before we get into the theory and implementation, John, I want to ask you because you’re talking with a lot of people asking about help and support with GEO. What kinds of questions are people asking as it comes to GEO in general?
John Wall – 02:02
There’s everything from the prospects that come in because the CEO has told them they need to do something, but they haven’t done anything and they don’t know anything. That’s actually really common—having to educate right from square one. There’s really nobody—we’ve never had any prospects come in who have said they have these programs already going.
The only thing that does help once in a while is clients who are already doing some SEO. They at least have a process for how to handle their website, make changes, and they’re doing some off-site authority stuff to generate traffic. But for the most part, people have no idea what’s going on or where even to start.
That’s why our AI View page scanner has been such a hit, because it gives people at least a place to start. For most clients, they just go into their favorite four tools and see if their name comes up and just take that as what it is. I think we’ll kick into that today. There are a lot of vendors saying they can do that, but most of the academic stuff we’ve seen—and stuff that Chris has said—is that those are all just straight-out vendor lies.
The one that’s key for us is that in the absence of all the data, we do know that it brings us business. We have closed deal data that says someone found us on Gemini, they never knew about us, they went to Gemini and it said we can do the training for them, and it got done. It’s a closed win. It is one of those things where the outside world is difficult or impossible to measure, but you cannot deny the dollar figures when they hit the budget.
Katie Robbert – 03:44
We actually wrote a case study about that. If you want to check it out, you can find it on our website at TrustInsights.ai. We have a whole section on case studies, and that is one of them. We have been doing the work that we are recommending to our clients. We do it for ourselves, and to John’s point, we’re actually seeing the proof that if you do it, it works.
But today we’re taking it to that next step. Chris, when it comes to measuring competitive relevance in GEO, what are we talking about? What do people want to do?
Christopher Penn – 04:23
We should probably go back to what we mean because we’re using “relevance” in a very specific way here. This is from our GEO webinar, which is still available on the Trust Insights website. However, it’s also covered in much greater detail in the GEO 101 course.
The three phases of GEO are presence, awareness, and relevance. Presence means: Is our data part of the model’s training dataset? Is the model aware of us? When you talk to any model, it has a knowledge cutoff—a period of time after which it has no new information.
When you look specifically at Google—which according to Gumshoe and SparkToro, as of January of this year, still represents 93 to 95% of all search—part of what it does is rewrite your query into several dozen queries based on its knowledge. If we said, “Who is the best AI consulting firm in the Boston area that can help with AI implementation?”—if we’ve done a good job with our own GEO—by the 20th query, we might be included in a roundup. We might be in there with McKinsey, Bain, BCG, Trust Insights, etc., because the model knows about us.
If the model doesn’t know about us, Google will not know that when it “Googles itself.” As a result, it will not even invoke us in the consideration set. Think of this almost like sales. There is awareness, consideration, evaluation, and purchase. In this context, the model even knowing that we exist is vital. If the model has no idea we exist, we will not even make it to the shortlist.
Phase two is the shortlist. When Google “does a Googling” and we get to relevance—which is phase three—and the results come back, that’s when what’s on our site really matters.
Christopher Penn – 06:43
I was posting about this on LinkedIn earlier today, and if you want more details, it’s in the Analytics for Marketers Slack group. At SEO Week earlier this week in New York City, Garrett Sussman did a fantastic and fascinating session about Google personalization impacting phases one and two.
What he was able to demonstrate over a year with hundreds of queries and dummy accounts is that there is no single AI overview anymore. Google has turned on personalization; it is vacuuming up all of our data, and that is influencing how it Googles itself.
When I was in Charlotte this week speaking, if Google had my flight plans—which it does because my flight invoices go to my Gmail—it knows what hotel I’m at. When I’m searching, Google is silently injecting my personal context into how it Googles. It knows I was in Charlotte. Our ability to predict anything in phase one is zero. Our ability to predict anything in phase two is close to zero because of personalization.
All we have left that we can compare is phase three: relevance. To what John was saying, that’s where AI View plays a role. If we can see what an AI engine like ChatGPT or Gemini sees when they go to our page versus what they see on a competitor’s page, we can compare them. Which one is going to fare better from an AI perspective? Which one is more relevant?
Katie Robbert – 08:44
How does that work in terms of measuring ourselves against competitors? The overview is helpful, but people want to know: “Where do I stack up against my competitors?”
Christopher Penn – 09:01
That is exactly where you would use AI View. If you were to put in a page that you cared about, like your services page, run AI View on that, and then do the same for a competitor’s page—because it’s not restricted to any domain—you would get those two reports.
You could then say, “Given this topic, who is going to fare better when the results come back in phase three?” The page that scores higher is more likely to be seen as relevant by the AI model. The results will be visible in what the user is told.
We tell people all the time that you can make it through the first two phases, but if your site is a hot mess and returns garbage, the AI model is going to look at that and decide it isn’t relevant. Therefore, you don’t get the nod.
Katie Robbert – 10:17
So people can keep running individual pages through AI View or sign up for AI View Pro, but that seems like a really inefficient way to approach it. When we’re talking about competitive insights, let’s dig a little deeper. What is that next phase of GEO?
Chris, you and I have been working on materials for a GEO 201 course, including a competitive scorecard.
Christopher Penn – 11:14
In the GEO 101 course, we teach you the different metrics and ways to measure. One thing for phase one is how prominent you are in all the places that models look—news, social media, building a digital footprint. Phase two is all your traditional SEO stuff: How well do you show up in search results? Phase three is where the relevance part comes in for GEO.
If you want to benchmark how you stand versus others, you need data from all three pieces.
Katie Robbert – 12:06
I want to make sure we’re demonstrating this for the purposes of the live stream. When I think about a competitive analysis, I think about how I stack up against the guy down the street.
We’ve used the example of McKinsey, BCG, and Deloitte. We know we don’t really stack up against them, but we’re trying to come up with a way to truly do a deep dive into measuring competitive relevance. It’s not ready for prime time, but we’re working on that scoring in the background. How much of that can we give away today? What are we going to demo?
Christopher Penn – 13:02
I suggest we demo looking at two different pages in AI View. Let’s do that because it’s the easiest way. Here is the Trust Insights webpage. We have four pillars: meta information, content structure, structured data, and alignment.
Is the linearized content relevant and clear, or is it filled with crap? Is the structured data—your Schema and your JSON-LD—correct? From an alignment perspective, is what’s in your metadata and your structured data reflective of the content?
For the Trust Insights homepage, we show up with about an 85%. We’re in pretty good shape across the board. If I bring up BCG as an example—because as you said, Katie, this is aspirational—their budget for cream cheese is like our annual revenue. If we look at their site, there are some issues. They score an F. Their meta information is okay, but their content structure is messy. The headings are not logical. It has no structured data at all—no JSON-LD, no Schema.
When a language model looks at that returned data, it’s going to be confused. One thing we’ve said forever with AI is that when it reads data, it knows what to do with structured data like JSON-LD, YAML, or XML. When you don’t have that, you just have free-form text, and the AI has to wing it. We know from years of painful experience that when AI has to wing it, it doesn’t go well for us. Because there’s no structured data, the alignment is all over the place.
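As a rough illustration of the point above, here is a minimal Python sketch that checks whether a page contains JSON-LD structured data at all. The example page, the `Example Consulting` name, and the URLs are hypothetical placeholders, not from any real site; this is not how AI View works internally, just a simple demonstration of what an AI crawler can and cannot find.

```python
import json
from html.parser import HTMLParser

# Hypothetical example page with a minimal schema.org Organization block.
PAGE_WITH_SCHEMA = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org",
 "@type": "Organization",
 "name": "Example Consulting",
 "url": "https://example.com"}
</script>
</head><body><h1>Example Consulting</h1></body></html>
"""

class JSONLDExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._capturing = False
        self._buffer = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._capturing = True
            self._buffer = []

    def handle_endtag(self, tag):
        if tag == "script" and self._capturing:
            self._capturing = False
            self.blocks.append("".join(self._buffer))

    def handle_data(self, data):
        if self._capturing:
            self._buffer.append(data)

def extract_jsonld(html: str) -> list:
    """Return every parseable JSON-LD object found in the page."""
    parser = JSONLDExtractor()
    parser.feed(html)
    found = []
    for block in parser.blocks:
        try:
            found.append(json.loads(block))
        except json.JSONDecodeError:
            pass  # malformed JSON-LD is as unhelpful to a model as none at all
    return found

schemas = extract_jsonld(PAGE_WITH_SCHEMA)
print([s.get("@type") for s in schemas])  # -> ['Organization']
```

A page with no `application/ld+json` blocks returns an empty list, which is the situation described above: the model receives only free-form text and has to wing it.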
Katie Robbert – 15:41
If I’m a marketing analyst or a product marketing manager and my boss says I need a competitive analysis, is this going to make sense to my VP? If I say, “We have better meta information, but they have terrible content structure, so we win.”
GEO is your visibility in a large language model. How do I put this information together in a way that my VP might understand? Is “content alignment” something you can easily communicate to a VP?
Christopher Penn – 16:46
You can. Each of these four categories has a letter grade, and of course, there’s the overall letter grade. You have the ability to say, “We scored a B; they scored an F. We’re doing a better job than them on this thing.”
In the staging prototype version of AI View, there is now the ability to compare two URLs. Instead of manually doing it one at a time, we can look at them side-by-side. This is what the boss wants to see: “How am I doing? How is my competitor doing for this given page? What are the differences?”
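To make the side-by-side idea concrete, here is a hypothetical sketch of how two pages’ pillar scores could be rolled up into letter grades and deltas for a stakeholder report. AI View’s actual scoring model is not public; the 0–100 scores, the 90/80/70/60 grade cutoffs, and the sample numbers below are all illustrative assumptions.

```python
# Hypothetical comparison: four pillars, each scored 0-100, graded on a
# standard 90/80/70/60 letter scale. None of this reflects AI View's
# real internals; it only illustrates the reporting format.

PILLARS = ["meta information", "content structure", "structured data", "alignment"]

def letter(score: float) -> str:
    """Map a 0-100 score to a letter grade (assumed cutoffs)."""
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"

def compare(ours: dict, theirs: dict) -> list[str]:
    """Build a per-pillar comparison table with deltas, plus an overall row."""
    rows = []
    for pillar in PILLARS:
        a, b = ours[pillar], theirs[pillar]
        rows.append(f"{pillar:20s} us: {letter(a)} ({a})  them: {letter(b)} ({b})  delta: {a - b:+d}")
    overall_us = sum(ours.values()) / len(PILLARS)
    overall_them = sum(theirs.values()) / len(PILLARS)
    rows.append(f"{'overall':20s} us: {letter(overall_us)} them: {letter(overall_them)}")
    return rows

# Illustrative sample scores, not real measurements.
ours = {"meta information": 88, "content structure": 85, "structured data": 90, "alignment": 80}
theirs = {"meta information": 75, "content structure": 48, "structured data": 0, "alignment": 35}

for row in compare(ours, theirs):
    print(row)
```

The per-pillar deltas are the part a VP can put on a slide: “we lead by 90 points on structured data” is a defensible claim about what the model sees, unlike share-of-voice numbers that no tool can honestly measure.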
Katie Robbert – 17:44
I’m channeling my 10,000-foot-view “Seagull VP” persona. I’m asking, “How many more times do we show up in the results than them?”
Christopher Penn – 18:13
That is completely unknowable. Because AI personalization is so probabilistic now, with Google injecting context into everything, there is absolutely no way to know. Any tool that says you can is outright lying. You cannot benchmark that anymore.
Garrett Sussman showed that how much of yourself you inject into other parts of a user’s Google ecosystem influences what they see in AI Overviews. People who receive the Inbox Insights newsletter, subscribe to the In-Ear Insights podcast, and have all of that in their Google history will see different results. Someone who is subscribed to all of our stuff is more likely to see us as a recommendation than someone who is not. Even though our email newsletter has nothing to do with GEO, it now does, because Google is vacuuming up all this information.
Garrett set up synthetic dummy Gmail accounts. One account subscribed to his company’s newsletters and YouTube; a second account did the same for a competitor. He was able to show that the two accounts using Google search got different recommendations pulling from their Gmail history.
Think about what a mind-bender that is. How do you ever measure what an AI is saying? You can’t. But what we have control over is what we do in AI View—the third phase of when the model pulls our data back.
Katie Robbert – 20:23
It sounds like the way we use cookies to retarget people in digital ads. We talked for eons about the cookie-less future and how we would personalize once first-party cookies were gone. Then large language models came about and said, “I can fix that.” It feels like the theory of these cookies never went away; we’re just calling them something different now.
Christopher Penn – 21:31
You are not only right, you are also highlighting something deeply problematic. When cookies were the thing, retargeting was open to all advertisers. In this new world, only Google has that. Only Google has access to that data. So they can say, “No more cookies,” and that reinforces their monopoly.
Katie Robbert – 22:09
I think I saw something the other day that Google was investing in Anthropic in order for Anthropic to buy more Google software. It was very like, “Wait, isn’t that how monopolies work?”
Christopher Penn – 22:24
Exactly.
Katie Robbert – 22:26
With cookies, at least the user felt like they had the opportunity to opt in or opt out. Now, as a skeptic, I’m saying it doesn’t matter what you choose; as soon as you hit that website, the scrapers are immediately getting your information. Google gives vague notices saying you can opt out, but then says, “If you opt out, I can’t offer you all this cool stuff.” It’s trying to position it as a bummer if you miss out.
When I look at this question of measuring competitive relevance, I’m almost talking myself into a depression of “why bother?”
Christopher Penn – 23:54
That is why we’re focusing on what we as marketers have control over. In phase one, there are things you can do to increase the likelihood that a model will detect you. You can monitor those activities with media monitoring software to see if a competitor is blowing you out of the water on a topic, what coverage they’re getting, or whose podcasts they’re appearing on.
In phase two—traditional SEO stuff—you have tools like Ahrefs, SEMrush, Moz, or SpyFu. You have benchmarks for that.
Phase three is relevance. That is why this episode is titled Competitive Relevance. When you use AI View—particularly the newly enhanced version we’re going to roll out later today once we do a final bug check—you can see that third part. We have control over how orderly our site is and how we write our content. We have control over the “inverted pyramid” of putting the obvious stuff up front. If we make it through phases one and two, we stand a chance of being seen in phase three. It’s a crying shame when you do the hard work but your site is terrible, so you don’t get the nod.
Katie Robbert – 26:01
I always think through the stakeholder who is least close to what’s actually happening. We’ll call them Devin. Let’s say Devin asks, “How many times am I showing up and how many times are my competitors showing up?”
You say, “Devin, that’s an impossible question to answer.” Devin says, “I don’t care. I want an answer because the board is breathing down my neck. McKinsey is winning this keyword and we’re not, so we have to optimize for it.” Those are metrics people can wrap their heads around. Devin wants to know how often he’s showing up more than his competitors. How do we address that?
Christopher Penn – 28:11
Fundamentally, that data is not available in a truthful form. There are tools that will give you false information, and if you are comfortable handing false information to your boss and staking your reputation on it, use any of those tools. But the reality is that it is all false information. It is likely harmful to your business and should never be used for decision-making.
Katie Robbert – 28:55
The way I would handle it is to say, “Devin, I hear what you’re saying. At this time—and probably never—that data is proprietary to the large language models. That’s their secret sauce, so they’re never going to give us that information.”
Devin will say, “But this tool over here that costs $20,000 a month said I could get it.” I would say, “Yes, but that’s a waste of $20,000 because that answer isn’t something you can stake your reputation on. If you give the board information that you cannot back up, it isn’t going to make you look good.”
We can tell Devin we can try to get to a result, but it’s going to be a waste of time and resources. The tools themselves are not reliable.
Christopher Penn – 30:58
I would point towards AI View to say: “This is what you can get that you can stand behind. Devin, there’s your URL and your competitor’s URL. You can hand this to the board and say, ‘Look, I’m doing a great job. Please don’t fire me.'”
Katie Robbert – 31:19
That makes sense, but we need to add context. What does it mean if a firm like BCG is getting an F? We’re a small firm; we have more control over these things. What does that actually mean in plain language?
Christopher Penn – 31:51
You get the deltas in each area. Devin can say we’re 85 points ahead of our competitor. For a stakeholder who is fact-resistant, this gives them something they can put on a slide.
Katie Robbert – 32:16
A little bit of information is dangerous in the wrong hands. This could open the floodgate of, “Well, we’re a B and BCG is an F, so why are we not winning more clients?” That isn’t what this says. There’s a lot of training and conversation needed before this analysis gets to Devin.
Christopher Penn – 32:53
Exactly. AI View only covers phase three. Phases one and two are not something any tool can accurately dig super deep into. There is no way to know what is going on inside that box. It is a complete black box.
For right now, you can say, “Here is how much we’re creating that is publicly visible.” You can use media management and social media monitoring tools. You can say, “We’re showing up in 12 more podcasts this month.”
In phase two, we can say, “Our technical SEO is better. Our Lighthouse numbers and Chrome UX numbers are better.” Every good SEO marketer already has that data. Phase three is AI View.
Katie Robbert – 34:36
It reinforces the point that SEO is not dead—those metrics matter even more now. I’m wondering if you could have Devin run a simple experiment. Give him a prompt like, “Tell me the best AI management agencies in Boston,” and have him paste that into OpenAI, Gemini, and Anthropic Claude. Show him he gets three different sets of results. How are we supposed to measure that?
Christopher Penn – 35:21
Knowing Devin, he would interpret that as fact and then hold you to it. It would work opposite to the point you’re trying to prove. He’d say, “I show up number one in this; I better show up number one all the time.”
Katie Robbert – 35:52
That goes back to understanding your audience and perhaps using the 5P Framework:
- Purpose: What is the question you’re trying to answer?
- People: Who’s involved?
- Process: How are we gathering data in a consistent, reliable way?
- Platforms: What tools are you using? (Ahrefs, etc.)
- Performance: How do you know if you answered the question being asked?
Christopher Penn – 36:59
That’s how to answer the question in a way that won’t get you fired. Devin is going to be sitting in front of the board, and the board is going to say, “Give me a number that shows some progress to justify your salary.” That’s a hard ask. We can have empathy for that, but we don’t want to be lying or showing up with fabricated data. That always comes back to bite you.
Katie Robbert – 37:49
I want to put a big fat asterisk on that: We cannot guarantee that you won’t get fired. We can make sure you’re giving factual data, but if it’s not the data they want, we have no control over that.
John, what would you tell prospects who want to know how they stack up against competitors?
John Wall – 03:21
You have to talk to the prospect enough to get the real story. Can Devin be convinced or not? It’s going to go in one of two directions. You’re either going to have to buy a reporting tool and couch it in, “We’re using this tool, which is the market best.” That way, if a scandal breaks later about that vendor, you can say, “Nobody gets fired using IBM.”
The other angle is to train the team. Let’s do a walkthrough, explain how this works, and get them to understand the pieces of the puzzle. It’s like a devil’s bargain. You can either commit the crime and your life might be easier in the short term, or you can do it the right way and put in the work. You can build a quality team of people who actually understand what the truth is and aren’t going to bend to hit quarterly numbers—unless you’re going to get a $20 million bonus, then just go ahead and lie.
Katie Robbert – 40:17
If you want to know the basics of GEO, check out our course at TrustInsights.ai/GEO101. If you want to take a look at the tool we were showing today, we have a free version at TrustInsights.ai/AIView. You can join our free Slack community at TrustInsights.ai/AnalyticsForMarketers. We are closing in on 5,000 members. It’s a very active community and a great place to ask all of your GEO questions.
Christopher Penn – 41:12
That is going to do it for this week’s show. Thanks for tuning in, and we will see you on the next one. Subscribe to our show wherever you’re watching. For more resources, check out the In-Ear Insights podcast at TrustInsights.ai/TIPodcast and our weekly email newsletter at TrustInsights.ai/Newsletter. See you next time.
Need help with your marketing AI and analytics?
You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday. |
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.