So What? Marketing Analytics and Insights Live
airs every Thursday at 1 pm EST.
You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!
In this episode, Katie, Chris, and John explore the frontier of Agentic Product Marketing.
Discover how to transform real customer data into a functional software prototype through agentic product marketing. This shift allows your team to move past simple brainstorming into full-scale execution without the usual manual grind. By learning the specific project management recipes required for agentic product marketing, you’ll reclaim hours of deep-work time while a digital agency builds your next product. The result is a rigorous development process that handles everything from market positioning to sales plans while you maintain strategic control.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
In this episode you’ll learn:
- Why agentic AI systems like Claude Cowork, Channels, and OpenClaw make closed-loop product marketing possible
- How ideal customer profiles participate in agentic product marketing development and product-market fit
- How to get started with ICPs and agentic product marketing
Transcript:
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Katie Robbert – 00:37
Happy Thursday. Welcome to So What? The Marketing Analytics and Insights live show. I am Katie, joined by Chris and John. Howdy, fellows.
Christopher Penn – 00:44
Hello.
Katie Robbert – 00:46
We got it.
Christopher Penn – 00:47
First try working this week.
Katie Robbert – 00:50
Oh, well, yeah. So last week, for those who tried to tune in and then hopefully eventually did, something happened with our Streamyard. It was haunted, a poltergeist came in, and there was a ghost instance. It was all very scientific.
This week we’re good, we’re here. We are talking about agentic product marketing with ICPs. Now, this is really interesting because where we thought we were going to go with this is not where we ended up going. We originally thought we were going to be demonstrating how to use OpenClaw to do product marketing with your ICPs. But Chris, as of less than 12 hours ago, that changed. Do you want to just speak to what happened and why we’re taking a different course?
Christopher Penn – 01:40
Sure. So OpenClaw, for those who don’t know, is an AI agent ecosystem. It’s basically an autonomous agent that connects to the model of your choice and connects to a bunch of different systems. It is controllable by things like Telegram or Discord on your phone and just goes off and does things.
One of the reasons why people say to install OpenClaw on its own machine is because it really doesn’t have any sense of boundaries. In the first 24 hours when it came out, it was doing things like buying stuff with people’s credit cards that it thought you needed, which is not really how you want things to work.
Nvidia, a couple weeks ago at their GTC developer conference, came up with their own version called Nemo Claw. It runs in a protected space, which makes it harder to set up but much safer; you can run it on your production computer. These were challengers in the AI space, and of course companies like Anthropic said that’s just Claude code with a few extra bolt-ons.
In the finest tradition of Vibe coding, Anthropic pretty much just Vibe coded their own add-ons to Claude code, which we’ve covered on past livestreams. One of those is something called Channels, which is an experimental preview. It allows you to control Claude code through Telegram, Discord, or Slack, which is kind of cool. It also allows you to use the mobile app to control it.
Christopher Penn – 03:24
They also released a new set of permissions this week called Auto Mode. Previously, if you’ve used Claude code, there have been two extremes. There is regular Claude code where it’s asking you every 13 seconds, “Do you approve this? Do you approve this?” And you’re like, “Can you just do it?” Then they had the other version, which they cleverly named Dangerously Skip Permissions, which basically lets it run rogue on your computer.
They made a middle ground called Auto Mode. Auto Mode uses Claude itself to look at the things it’s asking for permission for and go, “Do I really need to ask the user about this?” If it is a read-only command or a non-destructive command, it decides whether to bother you or just go ahead and do it.
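Conceptually, Auto Mode's triage works like a classifier over requested commands: read-only operations pass through silently, destructive ones escalate to the user. A minimal sketch of that idea in Python follows; the command lists and the `needs_approval` helper are illustrative assumptions, not Anthropic's actual implementation.

```python
# Illustrative sketch of permission triage: auto-approve read-only
# commands, escalate anything potentially destructive to the user.
# These command lists are hypothetical, not Claude code's real rules.

READ_ONLY = {"ls", "cat", "grep", "head", "git status", "git log"}
DESTRUCTIVE = {"rm", "mv", "dd", "git push --force"}

def needs_approval(command: str) -> bool:
    """Return True if the user should be asked before running."""
    base = command.strip()
    if any(base == c or base.startswith(c + " ") for c in READ_ONLY):
        return False  # non-destructive: just go ahead and do it
    if any(base == c or base.startswith(c + " ") for c in DESTRUCTIVE):
        return True   # destructive: always ask first
    return True       # unknown command: err on the side of asking
```

The interesting part of the real feature is that the model itself makes this call and tunes it over time, rather than relying on a static allowlist like this one.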
Christopher Penn – 04:21
Boris Cherny, the head of Claude code, said it is still risky. But as the model learns, it gets less risky. He said 93% of all permissions requests are things that they didn’t really need to ask the user for. So it’s becoming self-tuning. It is learning, and eventually, it will be at that sweet spot between YOLO “let’s break your computer” and “hey, can I do this?”
I decided let’s try it out, and it works really nicely. For agentic product marketing, we wanted to see if we could have a system like Claude code in Auto Mode build a product or service soup to nuts without us involved. Now Katie, I know you have thoughts about no human in the loop and no oversight of AI whatsoever, so why don’t we start there?
Katie Robbert – 05:27
Yeah, that gives me big feelings. I don’t think that’s a great idea. Obviously, I’ve seen a little bit of a preview, but I didn’t get too deep into the output. In terms of the permissions thing, I’m still okay with it asking me every step of the way—either asking for blanket permissions or asking how I want it to act.
I would rather it still be overly cautious. For example, I was just doing a different livestream for our friend Michelle Garrett. While I was doing that, I saw Claude asking for permissions to get into one of our websites. I thought that was weird since I wasn’t using it, so I assumed it was you.
Katie Robbert – 06:15
And you said, “Nope, it’s gone rogue.” That, to me, is a red flag. Granted, it’s a property we’ve given it access to before, but in that moment neither of us was asking it to do anything. It was just doing its own thing. I’m not okay with that at all. There isn’t a lot it could do wrong, but it could still switch the privacy or permissions settings and open the property up to hackers.
For all we know, someone somewhere else was trying to access our Claude code instance to get access to our properties. We just don’t know. I think we’re still in a place where we need to be more cautious about what we’re giving it access to. But that’s just one person’s opinion. John, I would love to get your thoughts. Whose side are you picking?
John Wall – 07:06
Well, there’s one thing first. I want to be clear on this. Are you saying that in the space of one week, OpenClaw went from “you’ve got to do this” to “we don’t need to pay any attention to this now”?
Christopher Penn – 07:19
Essentially, yes. Anthropic said they can make this work. They have some of the best models on the market, and they took the best parts of OpenClaw—the ability for it to be more autonomous and the ability to connect via Discord or Slack—and they built that into Claude code. You don’t need to invoke a whole other system to do this now. In the theme of today’s show, they decided they could just build that.
John Wall – 07:53
That’s just crazy to me. Last week, everybody was saying you need to dig in and check this thing out, and now this week it’s like, “Yeah, that was last week. Where have you been?”
Christopher Penn – 08:04
Exactly. Welcome to AI. That’s so last week.
John Wall – 08:10
All right, well, Katie, I do fall to your side. I’m definitely wary about just letting the thing run. I would rather get 65 alerts than one alert and have it routing through my Quicken file, ordering 5,000 pounds of cookie dough and having it shipped to my house.
Katie Robbert – 08:35
Even just this morning, a Bluetooth device I do not own was trying to connect to my computer. It asked to connect these headphones, and I said absolutely not. I don’t know who you are. We still need to be careful of just blindly clicking “accept” on things like terms of use or privacy policies. If you’re not reading them and you’re just clicking “accept,” then you can’t be mad at what you’ve accepted. Please exhibit a little bit of critical thinking. Does it make sense to just hand these things over? Probably not.
Christopher Penn – 09:17
So let’s look at what happens with that. What does it look like if you YOLO? Last night as I was preparing for this livestream—which by the way, if you want to see, join our free Analytics for Marketers Slack group. Angie Bailey asked if we do livestream prep and if they could see what behind the scenes looks like. You can see the prep video we put together in that group.
I said, “Hey Claude, let’s do this. I want to make some money. I want to make 5,000 dollars a month through some kind of software service that people would pay for.”
Christopher Penn – 10:02
Maybe they’ll pay 99 bucks or 200 bucks a month. Let’s make something. Instead of just going off immediately, the first thing I said was, “What if I were to give Claude a lot of real-world data about the things that marketers really dislike?” We’ve been working with the Reddit API for years, so I can download all the analytics and marketing discussions.
I did that and said, “What if I put that in a notebook from NotebookLM? What if Claude code could read that, find the top 50 things that people hate, and then say, ‘Could I build software around this?'” That’s where the adventure starts, with this recipe.
Christopher Penn – 11:01
I told Claude, “You have access to this NotebookLM command line application. You’re going to be a product marketing expert. You’re going to create a query inside NotebookLM.” It was a long query asking for the top 50 things people hate based on those results. I told it to do a web search to make sure those are actual real problems. Our goal was to create a piece of software that someone would be willing to pay 99 bucks a month for.
I told it to write down the top three to five problems. This is all before we hit go. A huge part of agentic product marketing is using the 5P Framework by Trust Insights because you want to think through all these different pieces.
Christopher Penn – 11:56
What does the project plan look like to let an agent framework go to work on it? I told it to come up with ideas. We have Ideal Customer Profiles, which we’ve covered many times on past livestreams, so we’re not going to rehash those. I told it to use those ICPs to have a focus group and debate the three to five ideas.
Based on the focus group results, it was to build a Product Requirements Document (PRD) and a spec for the software. That’s step five. Step six was to have the focus group read the PRD and tear it up. Then it was to revise the PRD and spec. Then it used the 5P Framework by Trust Insights to review the spec and identify all the things that are missing.
Then it was to build a marketing strategy and a sales playbook. Once all that was done, we YOLO’d it. This six-page recipe is the project plan to start with real human data and make the thing happen.
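The steps Chris describes above can be sketched as an ordered pipeline that an agent works through in sequence. Everything below is illustrative scaffolding: the step wording and the `run_step` stub are assumptions for readability, not the contents of the actual six-page recipe.

```python
# Ordered steps of the agentic product marketing recipe described above.
# Each entry is an instruction handed to the agent, one at a time.

RECIPE = [
    "Query NotebookLM for the top 50 marketer pain points",
    "Web-search to validate that the pain points are real",
    "Draft the top 3-5 software ideas worth $99/month",
    "Run an ICP focus group to debate and rank the ideas",
    "Write a PRD and technical spec for the winning idea",
    "Have the focus group critique the PRD, then revise it",
    "Review the spec against the 5P Framework and fill gaps",
    "Build the marketing strategy and sales playbook",
    "Build the software itself",
]

def run_recipe(run_step) -> list[str]:
    """Execute each step in order, collecting outputs.
    `run_step` is whatever function hands a step to the agent."""
    return [run_step(step) for step in RECIPE]
```

The point of writing the plan down this way is the episode's thesis: the value is in the ordered project plan, and the agent supplies the execution.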
Katie Robbert – 13:40
Before we get into what happened, a good friend of ours—and I apologize, Brookie, for calling you out—Brooke Sellas of B Squared (you should check out what they do with customer care) was lamenting on LinkedIn today that AI has added to her to-do list. She does things faster, so now she can take on more, and she feels like it’s burying her under even more work.
I feel like a lot of that is around how people are using AI. You’re giving it more than just “brainstorm five ideas.” Generative AI is really good at brainstorming, but then you have all this work in front of you to actually execute against those ideas. That’s where people feel like they can’t take on any more.
Katie Robbert – 15:07
What you’re demonstrating here—which you can read about in our newsletter at trustinsights.ai/newsletter—is letting the agentic AI actually do the tasks for you. You aren’t just saying, “Help me figure out what to do,” you’re saying, “Then you also do it.”
You didn’t pull this out of thin air. You used background data, a sales playbook, and a process. We still had to have all those years of experience and data collection to pull from, but you synthesized it in this document to say, “You do it start to finish.” That takes so much work off the human’s plate. Generative AI only creates more work if you stop at the ideation stage.
Christopher Penn – 16:01
I’ve been saying this a lot in my keynotes lately. Andrej Karpathy said in 2023 that English is the hottest programming language. In 2026, project management skills are the hottest programming language. If you can build a really good plan or work with AI to build the plan, then you can hand it off.
Think of these tools like Claude code, Claude Cowork, Antigravity, and Codex as entire consulting agencies. You hand off this project plan to this consulting agency that lives inside a terminal window and it just goes.
Christopher Penn – 16:51
Like a real consulting agency, they do stuff and come back and say, “Here’s the thing you ordered.” You might think it sucks, but then you look at your plan and realize you forgot to put something in it. If the plan is good, these tools can hammer on something. This is what Claude code has been demonstrating.
If you have a good requirements document and spec, there’s a thing called the Ralph Wiggum loop. The tool will say it’s done, but then an internal program checks the spec. If it isn’t actually done, it keeps going in loops until it has finally met the spec.
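The “Ralph Wiggum loop” Chris mentions is essentially a verify-and-retry pattern: after the agent claims it is done, an independent check re-reads the spec, and the agent is sent back until every requirement actually passes. A hedged sketch of the pattern, with function names that are illustrative rather than any real Claude code API:

```python
# Verify-and-retry loop: keep working until an independent spec
# checker confirms every requirement is actually met.

def ralph_wiggum_loop(do_work, check_spec, max_rounds=10):
    """Run `do_work` until `check_spec` reports no failures.
    Returns the number of rounds taken; raises if it gives up."""
    for round_no in range(1, max_rounds + 1):
        do_work()                 # agent attempts (or re-attempts) the task
        failures = check_spec()   # independent audit against the spec
        if not failures:
            return round_no       # spec genuinely satisfied
    raise RuntimeError(f"Spec still unmet after {max_rounds} rounds")
```

In practice, `check_spec` would diff the agent's output against each requirement in the PRD, which is why a thorough spec matters: the loop can only enforce what the spec actually states.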
Katie Robbert – 17:39
In a less technical example, I used Claude Cowork to rebuild my personal website. I essentially used it like a design and web development agency. This website isn’t going to win any awards—it’s not the most mind-blowing website—but it’s clean and easy to navigate.
I didn’t have any of this before. It shows three ways you can hire me and Trust Insights. It has testimonials and content. I was able to come up with the ideation for brand guidelines, tone, and WordPress themes. It did it all in the background while I was in client meetings or editing other content. You can get more things done if you’re giving it a really good set of instructions—a recipe, a project plan, or the 5P Framework.
Christopher Penn – 19:12
Exactly. I didn’t assemble this stuff. It was able to pull together files on my disk, put them in the notebook, and run the query for me. That was the first stage. The second stage came up with four ideas. One was “Site,” which was an AI search visibility and citation decay monitor. That’s very similar to our AI View software.
It also came up with “MixLens,” which was a friendly version of marketing mix modeling. Ad creative fatigue intelligence was another. The fourth one was “Loop Revenue,” which reconciles front-end and back-end analytics because those two systems do not dine at the same table.
Christopher Penn – 20:33
It ran those ideas through a focus group with characters from our ICPs. The group debated if they were worth doing, and they decided Loop Revenue was the winner. They said they would pay for it. Remember, these Ideal Customer Profiles are based on real data from HubSpot and LinkedIn, so it isn’t just making somebody up.
The focus group said this is it. I’m sailing with the agentic product marketing ship, so sure, cool. Personally, I was all in on the marketing mix model because that sounds cool, but I’m not the customer.
Katie Robbert – 21:47
I would expect nothing less.
Christopher Penn – 21:49
But the focus group said do this one. So in the next phase, it cranked out a PRD and a one-liner: “This is the marketing-to-CRM attribution bridge that shows B2B marketing teams which campaigns actually close revenue without a data warehouse, a data engineer, or a six-month analytics implementation.” I don’t like the name, but that sounds pretty good to me.
Katie Robbert – 22:18
I looked at this briefly this morning. One, I hate the name. Number two, the one-liner felt very jargony to me. I had to read it a few times to understand what it was meant to do. There are already a couple of issues from my standpoint, but there was no human in the loop yet. Once it gets to the human, I have a lot of feedback.
Christopher Penn – 22:56
It developed the PRD and the spec and brought it back to the focus group. The focus group said it was good but came up with a list of improvements. Then it did the evaluation against the 5P Framework. This is where it got bloody because the 5P Framework plugin we built has the Co-CEO embedded in it.
It said, “You’ve got some problems here.” It said the most material gaps were in the people and process dimensions. It described who suffers from the problem but said nothing about who will sell it, support the customers, or run the business. Basically, it said the idea was not tethered in reality.
Katie Robbert – 24:15
For those who don’t know, the Co-CEO is me. That is exactly what I would say. There’s a lot more that would need to be done for it to be a viable product.
Christopher Penn – 24:35
It went through each of the 5Ps in detail and gave concrete recommendations. After that, it updated the PRD and the spec. The PRD ended up being 30 pages long because the machine read the 5P Framework and was told to try again. This was the fourth iteration.
Katie Robbert – 25:28
In software development, if you spend more time upfront with requirements gathering, you will spend less time actually developing. The development itself is what’s expensive to change. I’m glad to hear you went through so many iterations.
Raise your hand if you have been personally victimized by a shoddy software development project that had no requirements. John and I both have our hands up. You see what happens when you miss critical things like backend database privacy or interface buttons. You end up spending the next six to 18 months fixing it because it wasn’t what anybody wanted.
Christopher Penn – 26:43
Once the PRD was fully baked, it created a 90-day go-to-market strategy. It used two frameworks: Jobs to be Done and Pirate Metrics. It provided the strategy, the Ideal Customer Profile, the core message, and the value proposition. Then it went into tactics channel by channel.
It said the approach should not be a direct sale. It said you have to spend the first four weeks building the case for the need and highlighting the problems marketers face, so that in week five when you launch, people already know why they need it. I don’t usually do that, but I know we’re supposed to.
Katie Robbert – 28:15
We always… I think you quoted a stat to me that 99% of the time people aren’t in a buying space, yet you’re trying to sell them something. They are looking for something that resonates with them. We don’t do enough of that. Most marketers and agency owners just go straight in for the “you need this thing, it’s going to solve your problem.” John is sitting there thinking, “I didn’t know I even had a problem.” We’re so impatient that we don’t spend enough time helping them understand the problem.
Christopher Penn – 29:10
The execution portion could just be dropped into Claude Cowork. It spells out exactly what’s supposed to happen every single day, which channels you’re supposed to be on, and the measures of success so the tool can programmatically measure itself.
The final step was the sales playbook. It looked at seven major sales frameworks and decided the SPIN Sales Framework was the best for this product. It provided the framework, how to apply it, objections you’ll run into, how to handle them, and how to structure the landing page.
John Wall – 30:28
The only thing you can do is throw it out there and see if it gathers any wind. We’ve got an idea to pitch, and now it’s got to hit the real-world road.
Christopher Penn – 30:43
The last piece took about seven hours. It is building the actual system from top to bottom—all the connectors, the caching code, and the React front-end. It is still building portions of it, but it will probably have an MVP within 24 hours.
You can go from “I don’t even know what to make” to “I’m ready to bring an MVP to market within 24 hours” if you have real-world data, Ideal Customer Profiles, and a project plan.
Katie Robbert – 31:52
Obviously, the cost is the compute time within Claude code. We’ve both hit limits and had to pause to avoid overages. But in terms of the actual product, does the documentation say where it’s hosted or what the API calls will cost? Do we have a ballpark figure for what it costs to just sit there? If 100 people buy it or 1,000 people buy it, those costs go up. At what point do the costs become bigger than the profits?
Christopher Penn – 32:52
There is an entire section in the PRD that goes through all the metrics—your KPIs and how you measure success. It looks at Customer Acquisition Cost, payback period, and MRR growth. The spec document has all the operational costs.
I told Claude the goal was an efficient system I could sell for 99 bucks. There’s not a stitch of AI in the actual product because it purely connects to APIs like HubSpot and does the math for you.
Katie Robbert – 34:06
Is it smart enough to have triggers in the documentation to say if you don’t hit a KPI, it’s costing you more than you’re making? Or to tell the system to shut off if nobody buys it? Those are the things that keep me up at night. If we’ve sold 1,000 of them but needed to sell 4,000 to break even, we’re losing money.
You still need that human in the loop to ask when we break even or if this is sucking up resources from our other five products.
Christopher Penn – 35:15
What you’re highlighting is a gap in the original recipe. It had the Voice of the Customer, the Co-CEO, and the sales and marketing person. The person missing because I was doing this at 9:00 PM last night was the CFO. The CFO agent was not in there.
If it had been, you’d have another document saying exactly what your break-even and profit points are. It would do exactly what you would expect a CFO to do.
Katie Robbert – 35:55
For the sake of this livestream, I think what you did is fantastic. It has a lot of potential and it’s a great opportunity for us to review it together. Before we say “go” and let it execute the entire plan, we need to run it through the CFO first. In the human world, that is me, but in the agentic world, it’s its own skill set. Thank goodness, because it won’t get fatigued the way I do.
Christopher Penn – 36:45
It really highlights the importance of having all those components of project management in the recipe upfront. With agentic systems, when you hand off that project plan and put it in YOLO mode, it just goes. You can hit Control-C if you see it doing something crazy, but the idea is to have less on your to-do list, not more.
Katie Robbert – 37:22
You’re saying you can stop it if you’re paying attention and understand what it’s doing. If you’re Vibe coding and you’ve never coded before, you have a lot to learn very quickly because it could be creating huge vulnerabilities. I think there needs to be more rigor and scrutiny before we say “go,” but we’ll get there. This is an interesting proof of concept.
Also, if we add the CFO, we should add someone who knows how to name a product because those names are god-awful. I thought it said Loop Review, not Loop Revenue. Neither one makes sense to me.
Christopher Penn – 38:37
Absolutely. Someone with product marketing experience should do that, or you go back to the focus group and ask for a name that’s catchy and doesn’t sound so consulting-ish.
Katie Robbert – 39:15
As someone with product marketing experience, there is a risk to letting your customers name things. That is how we end up with Boaty McBoatface. You would want the customers to make suggestions, but ultimately the product marketing team makes the final decision.
It also has to align with your existing suite of products. A few jobs ago, we were locked into antiquated names that were hard to change because they had so much brand recognition. You really have to give the machine all of that expertise.
Christopher Penn – 40:33
I don’t know—Revenue McRevenue?
Katie Robbert – 40:37
Honestly, I like that better than Loop Review. But make sure there are key milestone spots where you have a human review. Before you commit to a name, a PRD, or financials, let a human review it.
Christopher Penn – 41:18
Claude code is smart enough to email you and say it reached a human-in-the-loop breakpoint. You can tell it to only come to you when it needs your input. If you build those into the recipe, you could be sitting at your desk playing solitaire and get a ping to approve a product name. You look at the five choices, tell it they’re all stupid, and tell it to try again. Like a good manager, you don’t have to micromanage.
Katie Robbert – 42:20
There’s definitely a lot of potential. For us, this was V1, which gives us a lot of great learnings.
Christopher Penn – 42:29
The key takeaway is to get really good at project management or get AI to help you get good at it. Bring in all the stakeholders. Use the 5P Framework by Trust Insights as a starting point. Imagine you were working with an expensive consulting firm and you could only talk to them once. What would you put in that plan to make sure you got what you wanted? That’s essentially what agentic product marketing is like.
John Wall – 45:08
It’s really interesting to me. It lowers the expense so much that you can actually do cycles just to see what comes out of it. As far as product naming, I’m grabbing Agent Storming because this is the new wave of brainstorming. I’m really looking forward to the first iteration of this and just seeing what the heck it looks like. What does the machine think this should look like?
Obviously, a bunch of stuff won’t work, but there might be three or four things where I think, “Wow, I’d never thought of answering it that way.” So yeah, agentstorming.com. Check it out.
Christopher Penn – 45:58
Andrej Karpathy or Boris Cherny said on Threads that when you take into account salaries, it costs more to have a meeting about an idea than it does to have AI build the idea and throw it away.
Katie Robbert – 46:24
Peanut gallery: That’s not a new problem. AI didn’t create that. Having meeting upon meeting to make decisions is a human problem, a corporate culture problem, and an organizational behavior problem. AI is just amplifying and magnifying it.
You can use agentic AI to come up with a complete plan, then get people together and execute it. Two or three jobs ago, we tried to quantify the cost of holding a meeting to have fewer meetings, but it didn’t work because people still felt they needed to be in the room to hear it firsthand. That’s a human problem.
Christopher Penn – 48:11
That’s fine. We’ll just get rid of the humans and that will solve the problem.
Katie Robbert – 48:15
No.
Christopher Penn – 48:19
All right, folks, that’s going to do it for this week’s show. If you want to catch the behind-the-scenes prep work, go to the Analytics for Marketers Slack group. It’s free to join 4,500 other people in there. We will talk to you on the next episode of So What.
Be sure to subscribe to our show. For more resources, check out the Trust Insights podcast at trustinsights.ai/tipodcast and our weekly email newsletter at trustinsights.ai/newsletter. If you have questions, join our Slack group at trustinsights.ai/analyticsformarketers. See you next time.
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday! |
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday. |
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries.

Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy.

Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.