So What? Marketing Analytics and Insights Live
airs every Thursday at 1 pm EST.
You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!
In this episode of So What? The Trust Insights weekly livestream, you’ll learn how to effectively leverage Google Opal for your marketing analytics and insights. You’ll discover how to create a newsletter on current AI topics using Google Opal, even without extensive technical knowledge. You’ll also learn the power of the 5P framework for structuring your prompts and getting the most relevant information from the tool. Finally, you’ll gain insights into refining your processes and providing your own data to optimize Google Opal’s performance.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
In this episode you’ll learn:
- The basics of Google Opal
- How it compares to n8n
- What you can do with Google Opal
Transcript:
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Katie Robbert – 00:33
Well, hey, everyone. Happy Thursday. Welcome to “So What? The Marketing Analytics and Insights Live Show.” I am Katie, joined by Chris and John. Howdy.
Christopher Penn – 00:40
Hello.
Katie Robbert – 00:41
And we got the high five. We are all in one place for a hot second.
Christopher Penn – 00:46
Just this week.
Katie Robbert – 00:47
Just this week. Well, you know how it goes. This week, since we are all here, we are talking about Google’s Opal. You may be familiar with tools like N8N, Zapier, or Make, and Opal is akin to these tools. It is an automation tool. The difference between Opal and some of the other tools is that at this current moment, it is restricted to just things within the Google Workspace. So your Google Docs, your Google Drive, all that sort of stuff.
Whereas other tools like n8n, which we went over on a previous episode (you can catch it on the Trust Insights YouTube channel under the "So What?" playlist), connect to services beyond Google, Opal does not at this time integrate with tools outside of the Google Workspace. I really like this tool. I find there's a very low barrier to entry.
Katie Robbert – 01:41
If you have basic prompting, you can at least get started, but we’re going to go over all of those things. So, Chris, where would you like to start?
Christopher Penn – 01:48
We should probably start with what the heck is this thing? The place you would go to find this, and you do need a Google account for it, is opal.google. That is the new address, the new domain. Once you sign up and say yes to everything, this is still technically part of Google Labs, hence the experiment button up top. That does mean that at any point it could just vanish if Google decides, “You know what, we don’t think this is working out,” and it just goes poof. So there’s that big caveat because it’s Google Labs.
That also means your data privacy is not guaranteed because they’re looking at the data to see how people are using this tool to make product improvements and things. So please don’t use this with confidential data. That’s a big deal.
Katie Robbert – 02:41
I mean, that’s sort of a good best practice in general.
Christopher Penn – 02:44
It’s a good best practice in general, but this is a free tool. What we always say is, “If you’re not paying, you are the product.” So what have we got in Opal? Once you start a new app, this is an agent builder. It is what it is. At the bottom of the window, there’s a prompt box where you can describe what you want, and Opal will try to figure out what it is you’re trying to do and try to build some kind of app.
Up top are four different node types, four different controls. The first is user input, where, when you run the app, it will ask the user for something. You could say, "Type in the address of your website or this or that," or anything the user could provide.
Christopher Penn – 03:37
The second class of nodes are called “generate nodes.” Generate nodes use AI. The tools that are available are Google’s Gemini 2.5 Flash, which is great for things like summarization, and 2.5 Pro, which is the smartest model that takes longer to think things through. There is an agent version of Flash called “Plan and Execute” where it will do its own multi-step assessment to say, “How should I try to approach this task?” You can tell it, “Here’s how to approach this task,” and it will try to carry it out.
There is the Deep Research tool, where you could say, "Do some research on the Marketing over Coffee podcast," and it would spin up Google's deep research tools to go out on the web, gather all the data, and return it.
Christopher Penn – 04:27
Then there are, confusingly, two different image generation capabilities, Imagen 4 and Nano Banana. Generally speaking, you should be using Nano Banana because it's the newer of the two. There is an audio generator that will take text and create an audio file. There is Google's Veo model, which will generate a video from a prompt, and then there's Lyria 2, which will generate instrumental music from text and is generally terrible.
So that's the second class of nodes: generate nodes. After that, there's a third class called "outputs." This is, "What do you want this thing to do? Where do you want it to send stuff? Do you want stuff to be displayed exactly the way it comes out of the model? Do you want it to try and make a web page from your results?"
Christopher Penn – 05:16
Do you want to make a Google Doc, a Google slideshow which is purely text on slides, or a Google Sheet? So that’s the third class in it. So input, do stuff, output, and the fourth one is integrations. This allows you to upload a file, allows you to connect to your Google Drive, allows you to connect to a single YouTube video. You can provide a chunk of text or you can provide an image or drawing, and these can act as inputs to the different processes.
Those are the four node classes. It’s a good idea to know them and to know what their names are and what they do because very often when you’re using the prompt box below, Opal will say, “I’m not sure how to do that. Maybe try this instead.”
Christopher Penn – 06:09
And it comes up with some nonsensical suggestion or tells you, “You just can’t do that.” If you know what the node types are, you can say, “No, you have a Plan and Execute node. Use it to tell it what to do.”
Katie Robbert – 06:22
I feel like there’s a logical place to start with all of this.
Christopher Penn – 06:26
There sure is. What would that be, Katie?
Katie Robbert – 06:31
Got it. I win. It's the 5P framework. The reason the 5P framework is so fantastic for something like Google Opal is that you can give it a prompt, versus trying to use something like n8n, which can feel overwhelming because you're trying to connect everything yourself and there are so many different options. Opal, a lot like Google Gemini because they're both built by Google, allows you to add in the prompt and tell it what you need it to do. If you structure your prompt with the five Ps, I can almost guarantee you're going to get a lot farther, a lot faster.
The reason is that you want to be very specific, starting with purpose. What is this thing for? What is it meant to do? Then people: who is going to use it? Who is going to get the output?
Katie Robbert – 07:25
Probably the most important part of this whole thing is the process. If you have a very clearly defined process, if you have an SOP, if you have documentation, that's a good thing to add to the assets. The assets are basically just your attachments, and you can upload your document or add it from Drive. If you built a bunch of walkthrough videos and put them on YouTube, that's a great place to start. Make sure that you are clear about the process. If there are steps that you're not quite sure about, that's okay. But if you have the high level, "Step one, it should do this. Step two, it should do this. Step three, it should do this," you're going to get a lot farther with the platform. We already know that we are restricted to things within the Google Workspace.
Katie Robbert – 08:12
And so you’re using your Google documents or YouTube videos or slides or sheets, and then the performance is your output. What is it meant to do? Chris showed the output. So the performance is, I can then go to a web page or I then have a slide deck that tells me what I need to know. So I highly, highly recommend using the 5P framework to structure your prompt in Google Opal.
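For readers who want to try this, here is a rough sketch of a 5P-structured prompt you might paste into Opal's prompt box, written as a Python string purely for illustration. The wording is an example of the structure Katie describes, not the exact prompt Chris uses in the next segment.

```python
# Illustrative only: one way to draft a 5P-structured prompt for Opal.
# The specific wording below is an example, not the prompt used on the show.

five_p_prompt = """
Purpose: Build a weekly newsletter covering the most important generative AI
news from the last seven days.

People: Non-technical business readers (C-suite, mid-level managers, marketing
directors) matching the attached ideal customer profile.

Process:
1. Use the Deep Research node to find news about generative AI, large language
   models, and ChatGPT from the last seven days, in English.
2. Use a Plan and Execute node to rank the stories by relevance to the ideal
   customer profile, in descending order of importance.
3. Use an output node to generate a web page with auto layout.

Platform: Google Opal, with the ideal customer profile uploaded as a file asset.

Performance: A web page a reader can skim to know what happened in AI this
week and why each story matters to them.
"""

print(five_p_prompt)
```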
Christopher Penn – 08:43
Okay, so let’s build something in Opal, let’s create something, and because the 5Ps are so useful, let’s just use the 5Ps as the actual prompt. So I’ll make this a little bit bigger here so we can all see what’s going on. Let’s start with something simple. Let’s start with, “We’re going to use Opal to construct a newsletter.” So we’ll say, the purpose is to build a newsletter about current topics in AI for people to know the most important stories in AI this week.
The newsletter audience: business users who want to know what's happening in AI this week. The newsletter should be non-technical to serve their needs. The process: use the Generate Deep Research node to find news stories about generative AI, large language models, ChatGPT, etc., in the last seven days in English. And then the platform is obviously Opal.
Christopher Penn – 09:59
We don't have to spend a whole lot of time there, but we want to make sure that we're using the Deep Research node for this one. Then we want it to use the Generate Plan and Execute node to decide which of the news stories are most relevant to our people and order the news stories in descending order by importance. Then we say, "Use the output node to generate a web page with auto layout for the results." This is reasonably clear in terms of what we want this to do. Any suggestions or tweaks you would make on this, Katie?
Katie Robbert – 10:44
Well, I don’t want to jump ahead, but when it comes to people, I feel like it’s a great opportunity to bust out your ideal customer profile that you built along with us from previous episodes of the livestream, which you can get on our “So What?” playlist. I’ve done something very similar to this in Opal, and what I found was the clearer I could be about the end user, the better the results were. Because, as you know, Chris, and as you know, John, there’s no shortage of articles coming out about AI every single day. A lot of them are developer-focused, a lot of them are overly technical. So I personally don’t feel like it’s enough to say, “It’s a non-technical person,” or, “It should be non-technical.”
Katie Robbert – 11:31
I feel like the more you can specify, even if you don't have an ICP, the better. If you can sort of list job roles, like this is for the C-suite, this is for mid-level managers, this is for directors who are overseeing marketing teams, that kind of thing, that really helps narrow down the amount of non-relevant data you're going to get back.
Christopher Penn – 12:01
Okay, so let’s use that. Let’s go into the canvas here and get rid of everything, clean things out. I’m going to start by uploading a file, and the file I’m going to upload is the Trust Insights ideal customer profile. So that’s where we’re going to be. We’re going to put that in here as the starting point so that our prompt knows that it’s there. So let’s take this and see if Opal can, in fact, correctly parse this to understand what it is that we want to do. If it can’t, we can always build these steps individually.
Christopher Penn – 12:35
One of the great things about using the 5Ps is that when you hit something like this lovely error, you can say, "Well, if I think through the process, then even if the tool can't for some reason understand what it is I'm trying to do, I can follow the five Ps to their logical conclusion and do this myself." So yeah, it's just not happy with this.
Let’s take those 5Ps. We’re going to have our generate Deep Research node. We’ll have our generate, plan and execute node, and we will have our output node. That will be the web page with auto layout. We’re going to tie this, we’re going to drag these little things together, chain them together like so. The only thing that is missing now is there are no prompts.
Christopher Penn – 13:26
When you use the automatic box at the bottom, it will generate the prompts for you, which is very nice. In the individual nodes, we don't have that; we have to do this manually. So if we go back to our five Ps, we say the purpose is to build a newsletter about this, the newsletter is this, there's the ideal customer profile, and we're going to use this to find the last seven days. Now this node has effectively the five Ps as its prompt. Then in our Plan and Execute, we're going to say, "Decide which of the news stories," just like so. Then we just make sure that's connected to this output. Now, if we did this right, we have our ideal customer profile, we have these things.
Christopher Penn – 14:35
Then when I hit start, it should be able to immediately get started with these different components and try to create the newsletter. This, because it’s using Deep Research, is probably going to take anywhere from 5 to 15 minutes for it to generate its outcome. But you can see generally the thought process for putting this together.
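For readers who think in code, here is a loose Python sketch of the logic this node chain implements. The function names are hypothetical placeholders standing in for Opal's visual nodes, not a real Opal API; in Opal, these steps are wired together graphically rather than written as code.

```python
# Hypothetical sketch of the newsletter workflow described above. These
# functions are placeholders for Opal's visual nodes, not a real Opal API.

def deep_research(query: str, icp: str) -> list[str]:
    """Stands in for the Generate > Deep Research node: recursively search
    the web and gather recent AI news stories."""
    return ["story 1 ...", "story 2 ...", "story 3 ..."]  # placeholder data

def rank_stories(stories: list[str], icp: str) -> list[str]:
    """Stands in for the Plan and Execute node: screen the stories and order
    them by relevance to the ideal customer profile, most important first."""
    return stories  # placeholder: a real node would filter and reorder

def render_web_page(stories: list[str]) -> str:
    """Stands in for the output node: lay the ranked stories out as a web
    page with auto layout."""
    return "<html>" + "".join(f"<p>{s}</p>" for s in stories) + "</html>"

def build_newsletter(icp: str) -> str:
    stories = deep_research(
        "generative AI, LLM, and ChatGPT news from the last 7 days, in English",
        icp,
    )
    ranked = rank_stories(stories, icp)
    return render_web_page(ranked)

if __name__ == "__main__":
    print(build_newsletter(icp="(contents of the ideal customer profile)"))
```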
Katie Robbert – 14:58
Do you have to use Deep Research?
Christopher Penn – 15:03
You don't have to. You can have it do a regular web search instead, because it does have a web search tool. However, it's not going to be quite as good as Deep Research.
Katie Robbert – 15:16
How so?
Christopher Penn – 15:18
Deep Research is an agent system of its own that when you give it a search thing, like news stories about AI, it recursively goes and checks its results. It says, “I’ve got some news stories, I’m going to look at them. Nope, that’s not it. We’ll keep trying.” Like when you watch regular Deep Research in Gemini, it will say, “I found these articles. Nope, they weren’t right. I found some more articles. Nope, I’m going to go back and keep doing research until I satisfy the conditions of the original prompt.” Whereas if you don’t use Deep Research, it will then just say, “Hey, I did a web search, here’s a bunch of articles that came back, done, ready to go.”
Christopher Penn – 15:52
And I was like, “No, you need to think about this. Actually read the search results you got and decide how relevant they are.” The non-Deep Research web search just doesn’t do that.
Katie Robbert – 16:05
It’s almost like managing people.
Christopher Penn – 16:08
It is. In this case, knowing the individual nodes and what they do is really helpful for being able to pull off something like this.
Katie Robbert – 16:19
So, John, when you look at tools like Opal, do you immediately start thinking of your marketing over coffee workload, or the work that you do with Trust Insights, which is business development? Do you immediately start thinking, “Oh, I can build an automation for X”? Do those kinds of use cases immediately become apparent to you, or do you just like to play around with tools first to see what they do?
John Wall – 16:44
No, actually, this is the process right here. I wait until something is baked enough that Chris will do a webinar about it. If we get to the webinar and it doesn't suck, then I will start playing with it. Yes, I have a whole list of, "Here's a whole bunch of stuff that could be automated and done." But there's just been so much failure. It's going to come up: are we going to get a bunch of graphics of people with six fingers and stuff, or is this actually finally going to do a deck of slides for us that you love? It's funny that this is the process, because there's just so much going on, there are so many tools all the time. I can't even keep up anymore. I'm just like, whatever.
John Wall – 17:22
And so this is the filter. Even at the point we're at now, this is still like VC smoke and mirrors. Everything up to this point looks awesome: look, it's a workflow, and it's connected, and we plug in the prompts, and it all looks awesome right now. We'll see where we end up in about 15 minutes here. But yeah, so far I'm intrigued. This looks very cool. This was the promise of n8n that everybody was kind of like, "Yeah, it's just not working yet."
Christopher Penn – 17:51
So it has actually finished its work. On the right-hand side, what you’ll see are four different things. There are obviously themes you can make for these things. There is the preview, which is the final result of what it’s spit out. You can download the preview if you want as the output or you can share it with somebody else. But most important is the console. The console tells you what happened at each step in the process. We can see here, for example, the first Gemini call. We can see the individual parts and what it did. You can see it even wrote its own prompts to try and do this deep research. If I start expanding this out, we can see what those individual components are.
Christopher Penn – 18:33
So it did a web search there, given this query, and dug into what it tried to do at each stage throughout the process. This, by the way, is super interesting. If you are the kind of person who wants to know what's going on under the hood, you can see in great detail what it tried and what Gemini, what Google, decided the steps of the process were going to be. So: start the research, do the research, here's the file data that contains the ICP, etc., and then the model response. It said, "Here's a summary of generative AI news in the last seven days: regulatory landscape and ethical considerations, industry adoption and investment, emerging trends and developments, conferences (the Artificial Intelligence Marseille 2025 forum is taking place), public sector applications," and, of course, you can see all the news sources.
Christopher Penn – 19:30
You can then go through and see the ranking: how it decided what to rank, its responses, and ultimately how it came up with its answer. Then it creates the web page output and makes a nice product at the end. So this console is the audit trail to understand, "Did you do what I wanted you to do?" This means that if it gives you a work product you don't like, you can go through the audit trail and see where it went off the rails. Maybe it did okay with the research, maybe it found the right things, but then maybe our prompt in the ranking section was so poorly written that it was like, "I don't know what to do here," and as a result, it did not do what it was supposed to do.
Christopher Penn – 20:17
So you could say, “Okay, well this is the node that broke,” because the console shows it got good research and then just mangled it.
Katie Robbert – 20:25
That’s assuming you understand everything you’re looking at in the console. Could someone who isn’t you, Chris, basically take a copy of everything that’s in the console, give it to Gemini for example, and say, “I didn’t get what I wanted. Where did it break?”
Christopher Penn – 20:43
You absolutely could do that. Yes, you could do that. Or actually, let's test this strategy, because I'm not sure that this works: "Take a look at the Rank News node. I want it to carefully arrange and screen the news articles based on the Trust Insights ideal customer profile. Can you improve that node's prompt?" We'll try this here and see if this works. You can also... wow. No, that is quite the error message. Let's see if we can do this here. Say, "I want you to fix this," because there is a prompt button. Well, that's literally what I typed. That doesn't help anything. I think we broke it.
Katie Robbert – 21:38
I think we broke it.
Christopher Penn – 21:39
Yeah. Oh, there. “Ensure a thorough and precise screening process.” Yeah, okay, that’s better. It’s saved. I’m going to refresh because that is just lovely there. That is a better prompt now. If we were in the console and we look at the steps, create the newsletter. There are no steps here because it hasn’t run. It erased the last run. But this time through, if we were to run it again, we could say, “Okay, this is a better prompt.” Same for this one here. We could hit the magic button here and literally say, “Improve this prompt to have it be more specific to our ideal customer profile.”
Katie Robbert – 22:23
I would also add to that, one of the things that people find helpful is, “Why is it relevant to you?” So what’s the “so what?” What’s the takeaway? Why should you care about this thing?
Christopher Penn – 22:36
Yep. Well, that's something we'd want to do in this section here. Say, as you rank the news stories, ensure that there is a clear "so what" for our ideal customer profile. Maybe that should be its own node, something that can take the newsletter stories and write that piece separately. So let's add that in. We're going to add in a new generation node. Let's glue that there, delete that one, and glue this here. Take out this and say, "Take the news stories from Rank News and then analyze them against our ideal customer profile. After that, write a short paragraph for each news story explaining why that story is relevant to our ideal customer profile." So we're going to add that in. "Unable to handle. I understand you want. Your request is incomplete." I'll just do it myself then.
Christopher Penn – 24:02
All right, so now we’ve got an extra node in here that will then process this and hopefully come up with an even better answer. Let’s see if I can bring the ICP back in here. Yeah, there we are. Let’s make sure the ICP is available here as well so that it’s available in all the steps. Now let’s hit start and see what happens. So what we’ve done is we’ve added another step to the process to say, “Write the ‘so what?’ Don’t just throw the stories at me.”
John Wall – 24:34
Right.
Christopher Penn – 24:34
So it explains why this is relevant.
Katie Robbert – 24:36
I think that’s a key piece. I mean, not just for prompting, but for this kind of an exercise. We know a lot of people want to create their own version of an AI newsletter. The big problem I see is I don’t know which things I should pay attention to. There’s so much information. Even our friends over at SmarterX, who do a fantastic job of the roundup of all the AI news every week, it’s still a lot of information.
Christopher Penn – 25:14
It’s still a two-hour podcast.
Katie Robbert – 25:15
It’s still a two-hour podcast, and I don’t know where I need to be paying attention. When people look to us to really guide them, our job is to make sure we’re being very specific and clear about what they need to look at because there’s just too much information. Almost every use case I’ve seen for Opal so far—and admittedly I’ve only seen like four—but every use case I’ve seen so far has been “build a newsletter” or “gather some articles.” What are some other use cases for a system like Opal, especially when you’re constrained to the Google Workspace?
Christopher Penn – 25:56
That’s a really important question because it requires you to think about the inputs and the outputs. What things can you put into this? Knowing that you have nodes that do summarization, that can do extraction, rewriting, classification, planning, synthesis, and obviously generation, what would you do with those capabilities? As an example, if I had in my Google Drive, say I had a document about Trust Insights and had a document about our main competitor, we’ll say Accenture, right?
Christopher Penn – 26:32
I mean, even though our annual revenues are like Accenture's cream cheese budget. If I had those two things there, and I had a Plan and Execute node that said, "Here's a SWOT analysis. Perform a SWOT analysis, then build a strategy, then analyze the strategies, then do some role play and scenarios." All the things that you would do in a sequence, in a chat, in a regular Gemini prompt window, you would instead make individual nodes. If you had all those nodes collected and then you had a little Google Docs thing at the end, instead of you having to copy and paste those prompts over and over again, or even have them in a multi-step gem where someone's just asking, "What about this? And what about this?"
Christopher Penn – 27:16
And now what about this? You would have nodes that would do that if they were preloaded with all the knowledge available so that you just hit go. Maybe you upload a new piece of data, like, this month’s analytics from the company, and it would build you a new strategy or a new tactical plan or a work plan. So it is content, but it’s not content for public distribution. It’s content that you would use like, “Here’s the SWOT analysis and quarterly competitive analysis for this quarter.” In literally 15 minutes I’m going to be doing a webinar with our friends over at SMPS. One of the things that I talk about in that webinar is how do you do scenario planning with AI?
Christopher Penn – 27:54
Well, suppose I had three nodes here: worst-case scenario, steady-go scenario, best-case scenario. I had all the documents on the left-hand side, and I said, "Okay, do some scenario analysis." It passes them all through, each node fires and does its own thing, and it comes up with, "Here's the best-case scenario, here's the worst-case scenario," etc. Then it glues all the analysis together. I don't have to prompt that anymore, and I don't have to run it through a gem anymore, because now it's all baked into this little application. So here in our output, we see key topics for data-driven marketing leaders: executive training and preparedness, LLMs as an AI chief of staff (that's actually interesting), enhanced AI capabilities, personalization.
Christopher Penn – 28:43
Now you'll note there are some things here that could use improvement, like, "What is the story? Could you put the URLs of the stories here so we could see them?" Those are all things that you would iterate on with this. But that's essentially this stage here. This one just makes the thing; I might say, "You need to have these features in this output node," to condition what you want it to look like.
Katie Robbert – 29:06
It's once again almost like doing the 5Ps ahead of time is going to tell you all of those things, because to your point, the output makes a web page. "Why would I look at a web page I can't interact with?" Do your user stories: as a persona, I want to, so that. "As the CEO, I want to see the top 10 articles so that I can read them and stay informed. In order to read them, I need to be able to click on them." So there are those little nuance things, to your point, Chris, about how you can iterate with it, but we always say, the more work you do upfront in terms of requirements gathering, the less development you'll be doing on this side trying to perfect it and get it right.
Katie Robbert – 29:51
John, be honest. How many times a day do you use the 5P framework?
John Wall – 29:57
Every day. I mean, it’s all about people for me, really. That’s always the missing link for everybody because you run down the list, you’re like, “Oh yeah, you’ve got your platform straight, and you’ve got your process.” Have you talked to anybody about this? And it’s always, “Oh well, no, we’re just building this stuff.” So yeah, I live by the five Ps. Getting that last P in there is what closes the business.
Christopher Penn – 30:21
I just added an audio summary node and gave it the same instructions. Now there are limits. I believe Opal gives you either two or three video generations a day and I think five audio generations a day. You’d want to test out and make sure the rest of the workflow works first before you add in those very expensive things. I imagine if the product ever does come out of beta and successfully emerges that it will either tie to your Gemini subscription or tie to your Google Cloud subscription, and you’ll get a bill for when you use audio and video because those are very compute-intensive products. But for now, they give you some free samples.
Katie Robbert – 31:02
And I think that, again, sort of goes back to, “Why are you using this in the first place, and is this the right tool?” To your point about it’s about the people. A lot of times we see people jumping right to the platform like, “Oh, I found this great new thing called Opal. Let’s figure out where it fits in,” versus, “Hey, we have all of these different processes that we want to automate. What is the right tool for it?” So Opal might connect to all of your workspace, but N8N might cost less in compute to create a video or whatever the situation is. That’s a really good reminder of why you want to do that analysis upfront before picking the tool.
Katie Robbert – 31:51
Trying to retrofit a tool into everything else is really costly and 10 times out of 10 doesn’t work.
Christopher Penn – 32:00
Yeah, exactly. So let’s give this a listen, see how this sounds. We’ll play just a couple seconds of the Opal output, generate an audio summary.
Katie Robbert – 32:06
Of the top three stories that appeal to the ICP.
Christopher Penn – 32:10
Explain what they are, what’s the top.
Katie Robbert – 32:12
Three, and what the ICP should do with the news.
Christopher Penn – 32:15
Executive training and preparedness. While executive training and preparedness is a—
Katie Robbert – 32:20
General leadership topic, it is relevant to the data-driven marketing leader’s commitment to continuous learning and the strategic imperative to foster data literacy at all levels.
Christopher Penn – 32:29
Okay, so it’s basically taking the news. It sounds like we would need another node between the “Analyze News” and the “Create Audio Summary” that would basically do the transformation into a speaking script for the audio because it sounds like the audio node literally just reads aloud whatever data it’s getting.
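As a rough illustration of what Chris describes, the prompt for that intermediate node might look something like the example below. The wording is ours, not from the episode, and the node name is hypothetical.

```python
# Illustrative prompt for a hypothetical "Write Speaking Script" node placed
# between Analyze News and Create Audio Summary. Example wording only.

speaking_script_prompt = """
Take the ranked and analyzed news stories from the previous step and rewrite
them as a conversational speaking script for a two-to-three-minute audio
summary. Cover the top three stories, explain in plain language why each one
matters to the ideal customer profile, and end with one recommended action.
Do not read headings, labels, or URLs aloud; write sentences a narrator would
actually say.
"""

print(speaking_script_prompt)
```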
Katie Robbert – 32:48
Yeah, I think the other thing that's interesting, and something we would continue to iterate on before anything goes public, is the "why is it relevant to the ICP?" piece. It feels more like an internal report than something that you would publish externally and say, "This is why it matters to you." It's really more of an internal, "Hey guys, this is why I chose this, and this is why it's relevant to your ICP. So, is it a go or no go? Great, thanks." That's the part of the process that's like, "No, just make those decisions. Just go ahead and do the thing. I don't need to triple-check it, because that's actually creating more work for me to have to go through and read everything."
Katie Robbert – 33:25
So it’s just a slight tweak, but those are the things that, as you’re prompting, you want to be aware of.
Christopher Penn – 33:33
Exactly. As with so much of the rest of generative AI, when you look at the system, what's really powering it is the data that you provide. We always say, the more data you bring to the party, the better AI is going to perform. In this case, we're bringing in our entire ideal customer profile. I could plug in our entire sales playbook if I wanted to. I could bring in from Google Drive something like CRM records, where we could have any number of rows of data and say, based on a sales playbook, "Do this level of analysis on a row-by-row basis." So all those things are possible if we know what each node does and we have the data. Using Opal without data, to me, is a mistake.
Christopher Penn – 34:29
You’re basically relying on Gemini to have the information, and that’s not something I would suggest.
John Wall – 34:38
Just trust the internet. It’s all true.
Christopher Penn – 34:41
Yeah, yeah. No. So, any other questions, Katie, on how we're going to start using this thing? I would say in the hierarchy of tools and technologies, there's prompting, there's gems, there's Opal, there's n8n, and then there's code. That would be sort of the hierarchy. Opal sits in the middle of it: it's harder to use than a gem, and it's easier to use than n8n. Obviously, it's limited to the Google ecosystem, so if you're using ChatGPT, this is not available to you. They do have their own agent builder, and it's terrible.
Katie Robbert – 35:22
Okay. No, I mean, I think it makes sense. Again, I will just die on this hill: start with the 5P framework. Get your requirements right. You can get your own copy at TrustInsights.ai/5pframework. Use it, love it, learn it, live it.
Christopher Penn – 35:42
The thing I would add to that: if you want to use it, learn how to use it really well. We have a course for that. We literally have a course on how to use all these frameworks together to do some really cool stuff. A lot of what we've talked about today is covered in the course.
Katie Robbert – 36:02
Yeah, that’s factual. There are no lies, no hallucinations. John, what do you think? So now that Chris has demonstrated the usefulness of the tool, how are you going to start using it? What kind of things are you thinking about?
John Wall – 36:22
Well, the big thing is to start putting together some of the workflows and see what kind of results we get out of them. I mean, it does cross that line. There has yet to be a good agent builder; there literally has not been one that's predictable and delivers quality results every time. This is the closest I've seen to that. The way Chris laid it out, as far as code, n8n, and then this: in some ways, I feel there are two axes going on. I feel like this could invalidate n8n. This could just destroy n8n if it does a better job of connecting and delivering results.
John Wall – 36:58
So, yeah, there’s a lot going on in trying to get these products to mature, but it is great to see because we’ve all seen this point where writing great prompts and getting stuff just doesn’t scale. You’ve got to have an agent to get to the next level. Otherwise, it’s just going to be the rest of your life sitting there turning the crank, which is fine. If you get better results than you’ve ever received before, that’s great. But until you get a way to automate it and make more happen with it, you’re not going to get that next level of productivity.
Christopher Penn – 37:30
Yep. Bring data. Bring as much of your data as you can. The tool is not great at getting it. I tried having it retrieve the contents of our YouTube channel, and one out of five times it worked. When I extracted that data myself and just dropped it in as an upload, it was perfect. It was great. One of the things I want to try doing with this is having it write a book from our YouTube channel transcripts, both from "So What?" and from In-Ear Insights. But it couldn't do that. It was like, "That's too much data," because I asked it for...
Katie Robbert – 38:07
Just.
Christopher Penn – 38:08
Just pull all of 2025. It's not that much. It's only 100-some-odd episodes. And it's like, "No, I can't do it." Like, okay, I can see I'm going to have to do this myself. So bring your own data. All right, well, that's going to do it for this week, folks. If you're going to be at the MarketingProfs B2B Forum next week, Katie and I will both be there. Come find us and say hello. We'll talk to you all on the next one. Thanks for watching today. Be sure to subscribe to our show wherever you're watching it. For more resources and to learn more, check out the Trust Insights podcast at TrustInsights.ai/tipodcast and our weekly email newsletter at TrustInsights.ai/newsletter.
Christopher Penn – 38:54
Got questions about what you saw in today’s episode? Join our free Analytics for Marketers Slack Group at trustinsights.ai/analyticsformarketers. See you next time.
Need help with your marketing AI and analytics?

You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday! |
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday. |
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.