
In-Ear Insights: Practical Use Cases of ChatGPT

In this week’s In-Ear Insights, Katie and Chris talk through a half dozen practical use cases of ChatGPT, why they work, and what you can do to make generative AI tools more useful and powerful in your own work.


Watch the video here:


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00
In this week’s In-Ear Insights, we are talking about use cases for generative AI and large language models as well as what’s new.

You literally can’t go a week without the entire universe of generative AI changing.

So Katie, what kinds of things do you have in mind? You wanted to know more about given how fast things change.

Again, full disclosure, by the time we listen to this, this might be outdated.

Katie Robbert 0:27
You know, I think that, correct me if I’m wrong, Chris, but I feel like generative AI, first of all, I didn’t know that was the proper terminology for all of this, but it makes sense.

I understand what generative means in this context.

I guess what I’m trying to understand is this is all really cool and fascinating technology, but in my day-to-day as a marketer, what are the things that I need to know? So, we have two systems, if we want to just take as an example, there’s ChatGPT and now there’s GPT-4.

And again, I’m probably going to mix up the terminology, and it’s just hard to keep track of.

So the real question is, what does a marketer need to know? What’s a use case for a marketer like me who isn’t as technical? I can understand technology, but I can’t keep up with it as quickly as you can.

So, what do I need to know? What are the use cases of GPT-3 or GPT-4? I’m already losing it.

Christopher Penn 1:31
Okay, let’s tackle the terminology first.

So, ChatGPT is a web-based interface that allows a human to talk to a large language model.

For paying customers, when you first sign in, there’s a little dropdown that says, “Here are the models that are available.” There’s GPT-3.5 Turbo, which is the current one that most people will use.

This model is very fast, good at what it does, and though not the newest, still fairly new.

There’s the old version 3.5, which is available in the free version.

It’s slow but still does a decent job.

And then there’s the brand new GPT-4.

This is the latest and greatest model.

It can do much more complex reasoning and logic, does a better job of summarization, understands better what you’re asking of it, and has 40% fewer hallucinations, which is just a polite word for lying.

It is also slow.

Here’s what makes these models different.

They’re all trained on different amounts of text, with different numbers of parameters.

A parameter is like taking an average sentence and looking at the word statistics and frequencies in that sentence.

If I say, “I pledge allegiance,” the next logical word based on a whole bunch of texts is probably the word “flag.” So there’ll be a probability score attached to that word every time one of these models gets made.

When people say that it has 700 billion parameters or 6 billion parameters, all that means is that there are that many more probability scores.

GPT-4 is about 10 times the size of previous models, which means it can guess the next word, phrase, sentence, idea, or concept better because it has more previous examples, in terms of probability, to guess from.

That’s the big difference.
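Chris's "I pledge allegiance" example can be sketched in a few lines of code. This toy model counts which word follows a given phrase in a tiny made-up corpus and turns the counts into probability scores. It is only an illustration of the next-word-prediction idea; real large language models learn billions of neural network weights rather than building a lookup table like this.

```python
# Toy next-word model: count what word follows a given context in a
# tiny corpus, then turn the counts into probabilities. Real LLMs
# learn billions of parameters instead of a lookup table, but the
# "predict the next word" framing is the same.
from collections import Counter

corpus = [
    "i pledge allegiance to the flag",
    "i pledge allegiance to the republic",
    "i pledge allegiance to the flag",
]

def next_word_probs(context):
    """Probability of each word that follows `context` in the corpus."""
    counts = Counter()
    ctx = context.split()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - len(ctx)):
            if words[i:i + len(ctx)] == ctx:
                counts[words[i + len(ctx)]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

probs = next_word_probs("allegiance to the")
# "flag" follows this context twice in the corpus, "republic" once,
# so "flag" gets the higher probability score.
print(probs)
```

More training text and more parameters simply mean more (and finer-grained) probability scores like these to draw on, which is the difference Chris describes between the model versions.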

You have ChatGPT, the web app that we talk to, which is one of the two ways to talk to the system, the other being an API.

And then there are three models within ChatGPT that you can choose from depending on the use case.

Today, GPT-4 has a cap that says, “Hey, you can only use 25 messages every three hours,” because the system is struggling to keep up.

For most people on the paying plan, they’ll use the 3.5 Turbo model because there are no usage caps on it.

It will do 95% of what the version four model can do much faster and with no annoying messages.
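Since the API comes up as the second way to talk to these models, here is a rough sketch of what a chat request body looked like at the time of this episode, per OpenAI's documentation. The system and user messages, the temperature value, and the truncation are illustrative placeholders, and the network call itself is left out so the sketch stays self-contained.

```python
import json

# Sketch of an OpenAI chat completion request body. The model name and
# field names follow OpenAI's published API reference at the time; the
# message contents and temperature here are made-up examples.
payload = {
    "model": "gpt-3.5-turbo",  # or "gpt-4" for paying customers with access
    "messages": [
        {"role": "system", "content": "You are a helpful marketing assistant."},
        {"role": "user", "content": "Summarize this transcript into a YouTube caption."},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)

# This body would be POSTed to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <your API key>" header; the actual HTTP
# call is omitted so this sketch has no network dependency.
print(body[:60])
```

This is what "additional software is needed" means in practice: some program has to build and send a request like this, rather than typing into the ChatGPT web page.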

Katie Robbert 4:31
So if you’re four and a half minutes into this podcast with us and, like me, fighting really hard to not let the cymbal-crashing monkeys in your brain take over this conversation, what I’ve heard so far is that ChatGPT is the interface, and there are currently three models that live inside: the legacy, the default 3.5, and the brand spanking new version four.

I think what it comes down to in terms of which model you would use within the ChatGPT interface is what your use case is, depending on what it is you need to do.

The legacy system or the current 3.5 may be good enough, especially if you’re doing something like rewriting content or generating a first draft, especially if it’s on something as general as “What is SEO?” or “What is digital marketing?” The older versions are good enough to get you started.

You still have to have that domain expertise, regardless of what you’re doing with ChatGPT, but the older systems are good enough for that.

Chris, I know you’ve looked at OpenAI’s published use cases and how people are using these systems.

We know that drafting new content or summarizing or rewriting content are some of the most common use cases.

But what can GPT-4 do that 3.5 can’t do?

Christopher Penn 6:09
GPT-4 has better reasoning skills.

In the example given in the demo, it was given a bunch of tax code, like tax laws, to assess what the standard deduction for a person would be under a specific scenario.

And because, again, it has many more parameters, it means it has many more abilities to guess.

It guesses correctly much more of the time.

Another example given in the white paper that was released is that it passed the bar exam, the medical licensing exam, the SATs, and the LSATs with flying colors.

Because it has the ability to do more reasoning and logical deduction on the text given, it has reached a point where it can do some really interesting stuff.

There are four big things that these models can do:

Generate new stuff.
Summarize, like condensing something long into something short.
Rewrite, like taking something old and making something new.
Transform, like turning one type of input data into another.

Again, because the fourth edition of the model is bigger, heavier, and more capable, it can do more complex versions of these four big tasks, whereas older models handle only simpler versions reasonably well.

I’ll gladly show you an example of a transformation.

This is kind of a fun one, and I’ll use the 3.5 version because it’s good enough for this.

I’m going to give this thing a bunch of lyrics, just lyrics, and I’m going to have it generate guitar tabs.

So essentially, it’s taking the lyrics and spitting out guitar chords and tabs.

If you’re a guitar player, you can take this output and start playing guitar with it.

I am not a guitar player, so this means nothing to me.

I actually had to have a friend who plays guitar look at this and play it.

I’m like, “Wow, that really sounds good!” It’s not what the original song was; I just gave it lyrics, and it essentially made up notes to go with it.

But it’s transforming one type of text into another.

That’s the major use case people don’t think of.

They think of very simple transformations like “rewrite the song lyrics to be about this kind of thing,” but they don’t think about changing different types of media.

Version four of the model can also accept images, but as you mentioned at the beginning of the show, that capability is not available in the ChatGPT interface; it’s only available through the API.

This means additional software is needed to use it.

You’ll be able to pass it an image, and it’ll be able to take what’s in that image and transform it back into text in some way.

Some obvious use cases are “caption this photo,” but less obvious use cases are “here’s some sheet music” or “here’s a picture of the spike protein of the influenza virus, devise a protein sequence that could potentially bind to the antigen receptors on this.” Because there’s so much more information contained in images than there is in text, you know there’s the old cliche “a picture’s worth 10,000 words.” That’s literally true.

It would take you 10,000 words just to describe the shape of a spike protein, whereas you could just give it a picture and say, “Okay, now work with this.”

We don’t know how it’s going to perform because it’s not open to the public yet.

We don’t know what it will be capable of, but those are some of the ideas behind it, especially what’s different about the bigger model.

Katie Robbert 9:59
It’s interesting because, you know, we started off talking about the use cases for marketers, but what I’m hearing is that you still have to be a subject matter expert.

So you gave an example of one of the underutilized use cases of the GPT system in general, which is transformation.

But you yourself can’t read guitar tabs.

Now, I happen to be able to read guitar tabs; that’s a little fun fact.

For someone like me, this is actually straightforward, but you didn’t know that about me.

Therefore, you had to go somewhere else to find an expert who could help you understand this, which then takes up more resource time.

If you don’t have someone in your network, you might have to bring in an expert, and that may cost you money.

Thinking about the use cases for marketers, first and foremost, you need to be able to use these systems within your area of expertise.

So if we think about the common use cases, you said the four basic use cases are generate, summarize, rewrite, and transform.

If we think about generate or rewrite, if you have a piece of content or want to generate a piece of content around SEO, as a marketer, you first have to have a good working understanding of how SEO works.

If I said to ChatGPT, “Write me a 500-word blog post on the four major components of SEO,” I would first have to know what those four major components are to understand if the system is giving me something that is even close to being correct.

Christopher Penn 11:44
Exactly right.

Exactly right.

You have to know whether or not what it’s spitting out is useful and accurate.

By the way, how does this sound? These guitar tabs?

Katie Robbert 11:56
Oh, you want me? I thought you were gonna play something?

Christopher Penn 11:58
No, no, I’m just looking at what this means.

Katie Robbert 12:03
It will take me more than a hot second to figure out how this sounds.

I can’t read them that quickly.

But I can read them.

Gotcha.

Christopher Penn 12:11
Yeah, that’s exactly why we have a bit of a gallery of just a few use cases.

This is actually from a talk that we’ve just started doing.

So the obvious one is generation.

You can ask it to write a blog post or write a blog post outline about predictive analytics, of course, it does that very capably.

It’s a straightforward use case.

I would be surprised if people did not know that this was one of its basic capabilities.

These are true for both 3.5 and 4.

You say, “You’re an expert social media manager; summarize this text and turn it into a YouTube caption,” right? That’s what I actually use almost every week for Trust Insights.

I’ll take the transcript of the podcast, like this podcast, and say, “Okay, I just want you to write me a summary and a YouTube caption,” and I have it.

I don’t have to think about it.

But here’s a fun one.

This is a form of summarization.

I’m going to take this text and ask it to write a Big Five personality analysis from this text, provide numerical scores on a scale of zero to 100 for each of the Big Five personality traits.

So it’s rating my writing on this personality scoring system, which I think is really cool.

I tried to have it do Myers Briggs, but it said there’s not enough information to draw those conclusions.

What it should have said is that Myers Briggs has been thoroughly discredited as a tool for actual analysis.

But these Big Five personality traits, this is a cool application of summarization to say, “Yeah, this is who this author is.”

Katie Robbert 13:54
Interesting.

I’m trying to figure out if any of this is actually true.

That will take me more than a second.

But I think that’s really interesting.

You know, this is where you start to get into, in terms of a use case, these systems like ChatGPT and others that have come out, they’re not search engines, but they are a library of information that you can pull from to summarize.

There’s a big difference between the two.

And I think that if your use case is, “Tell me what the Big Five personality traits are,” you’re not necessarily using the system correctly because you’re still using it like a search engine.

Instead, you should be using it to say, “Here are the Big Five personality traits.

Let me give you the information, and then let me give you content to summarize and say, what are those things?”

Christopher Penn 14:50
Exactly.

And in terms of what you would do with this information, there is some belief in some management circles that these traits can help you figure out which people are compatible to work with each other or not.

I’m firmly in the camp of “you make do with the resources you have,” but that is a capability of the system.

Other use cases include one of my favorites, writing code.

So for this one, I gave it some plain language instructions like, “Here’s the code I want you to write, go ahead and write the code for me.” It gets it right about 95% to 98% of the time, but there are usually one or two things that break.

And that’s where subject matter expertise comes in.

You look at it and say, “Oops, you forgot the semicolon here.

Okay, now we’re off to the races.” This saves me so much time.

This past week, we were doing some work for a client and building a forecasting model for their conversion tracking.

This particular use case saved me easily eight hours.

I told it what I wanted to do, and it wrote me the first draft of the code.

In just 15 minutes, I had the first draft of that code written.

Katie Robbert 16:06
I think that’s actually a really interesting use case.

To your point, if you don’t know what these systems are capable of, and you don’t know to ask for these things to be done, it can be a challenge.

But for someone like me who knows enough about coding to be dangerous, I can look at it and say, “I understand what that’s supposed to do.” However, I wouldn’t understand the nuance to say, “Oh, the reason it’s probably not going to work is because there’s a comma instead of a semicolon or there’s not a line break or whatever the command is.” I don’t know enough about it to know that that would be the fix.

So for me, that’s still not a great use case because I’m not a subject matter expert in code development.

Yep.

Christopher Penn 16:51
This is one example of integration.

So with Microsoft Visual Studio code, I can right-click on my code and say, “Find bugs, optimize my code, explain my code, what does this code even do, and add comments to my code?” I think this is really important because it’s something I don’t do at all.

For building documentation, you can see just how valuable this would be to automatically document my code.

It doesn’t change, rewrite, or alter functionality, but it can save you a lot of heartache, right?

Katie Robbert 17:29
One of my jobs when I was a product manager was to validate that my development team was commenting their code and checking things in and out correctly.

But a lot of times, that wasn’t happening because there was no log or record of it.

So in this particular use case, you have Microsoft Visual Studio connected to ChatGPT, using the GPT-3.5 model.

Instead of writing in the ChatGPT interface, you’re writing in Microsoft Visual Studio, powered by the GPT model.

You can ask it to comment on your code, read this thing and comment on it, summarize this thing, or tell me what this thing does.

That’s because you’re using the commands from Microsoft Visual Studio.

Christopher Penn 18:24
That’s right, exactly.

And this is where this technology is heading.

When people ask about the future of this technology, it’s not just about having a fancier web interface.

It’s about integration into other products.

We’re already seeing this with things like Microsoft Bing, which has an integration with GPT-4.

And we see this in the announcements for Microsoft Copilot, which is going to be integrated into every part of Microsoft Office, kind of like a Clippy on steroids, or a Clippy that’s actually useful.

You know, it’s frustrating when it looks like you’re trying to write a funeral note, and Clippy pops up and says, “Do you need help with that?”

Katie Robbert 19:02
That’s a very specific example.

Christopher Penn 19:04
Yeah, it is.

This is where stuff is going.

As you will see, this is all over the place.

We saw it from a marketing perspective.

HubSpot’s Dharmesh Shah created ChatSpot, which is a GPT-3 integration into the HubSpot CRM.

Integration will be the watchword for these models.

Every software company that has complex software will have to do some kind of integration into a large language model if they want to stay current.

I mean, think about this.

Imagine Adobe Photoshop, a complex piece of software, but the things you do in it are highly repetitive, such as changing color balance.

This is perfect for a large language model’s interpretation.

Can you imagine opening a photo in Photoshop and saying, “Colorize it, make it a little bit warmer, remove the extra person in the photo”? You wouldn’t have to click all the buttons, and it’s not that a brand new button shows up in the interface.

The large language model should be able to capably do that.

So that is something to be on the lookout for.

That was a use case.

Another favorite use case is having it create social media content.

This is done with review data.

So I took data from Google reviews for my teacher’s martial arts school and had the model generate Instagram post captions based on the five-star reviews.

It’s summarization in a different format, applied only to certain types of content.

Katie Robbert 20:45
Yeah, I feel that summarization is underutilized and not well understood in terms of its capabilities.

When I’ve heard the use cases for summarization, it’s usually about summarizing meeting notes and action items, which can be helpful, but it also means that someone or something had to take the notes in the first place.

So, is that really the best use of time or the system? What if the summary is incorrect? On the other hand, I know of another example of a restaurant that was struggling to get quality social media posts from the agency they were working with.

We showed them how they could generate samples using summarization.

This way, they could give the agency an idea of what they would like to see because the people we were working with were not social media experts.

They knew what they liked and didn’t like, but generating content wasn’t their forte.

Using the summarization use case was a real time saver because they could take the stack of recommendations from the GPT model, based on the five-star reviews, their website, their menu, and all the other things the system quickly learned.

Then, they could give the agency a clear idea of what they wanted to put on social media.

For example, they hadn’t mentioned things like weekly specials, outdoor dining, dog-friendly policies, or gluten-free and vegan options.

However, these were things that other customers had mentioned in their reviews.

Summarizing this information into a different format was almost a combination of summarization and transformation.

Christopher Penn 22:32
To a degree, yes.

The reason why this example is important is because it removes the bias of the marketer.

It uses the words, phrases, and colloquialisms of the customers.

There is no stronger voice of the customer than using their own words.

These tools, whether they transform or summarize, can do it in ways that a human marketer cannot because a human marketer’s bias may lead them to add something that is not present in the data.

With these tools, you can only include what’s in the data.

Katie Robbert 23:09
You know, for me personally, the reason why I like this is because I am a self-admitted terrible marketer.

My inclination is always to start with some kind of quippy marketing content like, “Hey, check out our brand new thing” or “Look at this cool cat.” I’m terrible at writing short, concise marketing content.

This tool takes a lot of the struggle out of the first draft because then I can use a direct caption like “Discover the best martial arts school in Boston.” It says everything I was trying to convey without me being overly cheesy.

Christopher Penn 23:52
Exactly. Another example you mentioned earlier is an admin assistant taking audio transcripts and producing meeting notes and action items.

It’s a super straightforward summarization task but it’s incredibly valuable because you don’t need a VA to listen to the meeting again, even with flaws in transcription.

Open AI models do an excellent job of inferring what was probably said.

For example, I recently used GPT-3.5 to clean up the transcript for the Trust Insights blog post on Thursday the 23rd.

The cleaned up version was much more refined than the original transcript, which had some weird things in it.

While I haven’t finished writing the code to do it programmatically, the process was still pretty good.

Here’s a fun one: I received an NDA from someone about Warrior Nun stuff.

It wasn’t related to Trust Insights because otherwise, you would kill me.

When I looked at the NDA, I thought it was terrible and refused to sign it.

It turned out that the person had copied and pasted it from Reddit.

Katie Robbert 25:16
Like, this guy.

Yes.

Okay.

So I said, listen, read it.

Christopher Penn 25:19
I asked it to act as a legal expert and rewrite this NDA properly, which it did.

I then asked several follow-up questions, such as which clauses are commonly found in other NDAs but missing in this one.

After going through it, it produced a legally sound NDA.

I then had one of our team’s human lawyers fact-check it and confirm that it was safe to sign.

The original NDA was crazy, but this is a rewrite of existing terrible content that carries a high risk.

Signing a bad NDA can put you at substantial risk.

Katie Robbert 26:01
Well, that goes back to when I feel like, you know, there are certain things that we always become broken records about.

The five Ps are a broken record, but there’s a reason for that.

With all of this generative AI, the use cases are cool, but you have to know what you’re looking at.

You have to be a subject matter expert or know someone who is to fact-check the information coming out of it.

This is going to be the new disclaimer for everything we talk about when it comes to artificial intelligence.

So, the question of what are the use cases for marketers? There’s a lot.

I think there’s a lot that is not being used the way you’re demonstrating, Chris.

I think the most common use case is “Write me a first draft of a blog.” Period.

Christopher Penn 26:51
That’s kind of like taking your Ferrari to the grocery store. Yes, it will do that, but you’re underusing it.

Katie Robbert 26:57
Yeah. And we’ve covered this question before, but you asked me, so now I’m going to ask you: how does someone start to understand what these other use cases are if they don’t know to ask the question?

Christopher Penn 27:17
Is it text? That is the broadest question I can ask.

Is what you’re talking about something that exists in text? If it is, you can use these tools on it.

Is it code, music, genomic sequences? If it is text that’s publicly available on the internet, it was probably consumed by the tools used to create these models, which means you can then use those examples to get to the heart of what you wanted to do.

If it’s not text, then no, you probably can’t do it, or can it be represented as text? For example, you can’t put musical notes as a score into the GPT models, but you can put the lettering, the notation in guitar tabs, and chords, and it will spit that right back out.

Think about all the things that you have used text for on a day-to-day basis.

All those are candidates to be used with a large language model: emails, chats.

So those are the obvious ones, but then things like slides, spreadsheets, you name it, if it is text, it can be manipulated.

You can even have the tool spit out spreadsheet-compatible data that you can copy and paste.

We use this for sentiment analysis.

We wrote some code for the Trust Insights newsletters, fed it 50 posts about International Women’s Day, and said, “Give me a sentiment score between minus 10 and positive 10.” It’s just text.

So, if it’s text, it is fair game.
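The sentiment-scoring example boils down to prompting the model for one score per post, then parsing its text reply into spreadsheet-compatible rows. Here is a sketch of the parsing half; the reply format, the sample post IDs, and the scores are all invented for illustration, not taken from the actual Trust Insights code.

```python
import csv
import io
import re

# Hypothetical model reply, assuming the prompt asked for one
# "post_id,score" line per post, with scores from -10 to +10.
model_reply = """\
post_1,7
post_2,-3
post_3,0
"""

rows = []
for line in model_reply.strip().splitlines():
    match = re.match(r"^(\S+),(-?\d+)$", line.strip())
    if not match:
        continue  # skip any extra chatter the model wrapped around the data
    post_id, score = match.group(1), int(match.group(2))
    if -10 <= score <= 10:  # enforce the scale we asked for
        rows.append((post_id, score))

# Write spreadsheet-compatible CSV you can copy and paste into a sheet.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["post_id", "sentiment_score"])
writer.writerows(rows)
print(buf.getvalue())
```

The validation step matters because, as discussed throughout the episode, the model’s output still needs checking: skipping malformed lines and rejecting out-of-range scores is a small guard against it drifting from the format you asked for.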

Here are a couple more examples of these use cases: build a mission statement.

So, the prompt is, “You act as an executive coach and a management consultant,” followed by a whole bunch of background information, and then it constructs a nice mission and vision statement.

One of the challenges that we’ve always had when we’re talking to each other is that we used to have these executive strategic retreats where we spent two days navel-gazing about our mission, and now let’s just feed it some text and have it do the first draft without our biases, our personal emotional biases in there.

Another one: “You’ll act as an SEO expert. Here’s the background information about Trust Insights; make an SEO keyword list.”

Again, this is a type of summarization, but it’s a summarization that is semantic in nature because not all these words were in the original text.

And finally, this is one for marketers to work with their legal teams.

I said, “Make me a website privacy policy.” Here are the tools we use: Tag Manager, Twitter, etc.

Make me a website privacy policy.

I have done this: make me a GDPR compliant privacy statement, make me a CPRA compliant privacy statement.

This is super valuable, again, first draft.

You still must use a human lawyer to fact-check it, but I handed this off to the volunteer organization, and the person who read it asked, “How did you do this? This would save me so much time at work to get a first draft like this.” I said, “Well, let me show you the joys of AI.”

These are the abridged examples of business-relevant use cases.

There are so many more, including non-business use cases or non-marketing business use cases, like guitar tabs, lyrics, limericks, and fan fiction.

Katie Robbert 31:27
One of the things that I can think of is academic research, medical research summarizing.

If you’ve never experienced the joy of reading an academic paper, and I say that somewhat sarcastically, they are long and dense.

And sometimes, if you’re not an expert, hard to follow, and the abstract that comes along with it doesn’t always tell you exactly what the paper is about, the topic, or the point of it.

Using a system like the GPT models to summarize these really dense pieces of text to get to the meat of it is a really good use of the system.

You’re not asking the system to come up with something new.

You’re taking existing, peer-reviewed, scientifically-backed information and saying, “Just summarize it for me, tell me what the gist of it is, give me the CliffsNotes of it.”

Christopher Penn 32:24
Exactly.

This is an example that works as long as the material is prior to September 2021, which is this particular model’s training cutoff date; it’s a straightforward identification task.

In this case, I asked about peer-reviewed academic papers looking at dopamine systems in our neurology interacting with social media.

And so, there’s actual academic research that we can go Google and pull up.

Being able to navigate through a lot of scientific literature, and making sure that they exist, have been properly peer-reviewed, and appear in a credible publication like Cell or Nature, is a great way to get the information.

Again, it’s not a search engine, but it functions better than a search engine for complex queries where you need some reasoning that a search engine can’t provide.

I think that’s an essential aspect of the use cases.

These tools can do reasoning that a search engine can’t do.

Katie Robbert 33:33
I can see, for a marketer who’s, rather than using the system to write a first draft, I can actually see how this system could be really beneficial in doing the research in order for you to write the first draft.

This is an interesting take on how dopamine systems in our neurology interact with social media.

That would be a fascinating article for a company, especially a social media company, to publish.

But if you’re talking about this, you want to have some research behind it.

This is an excellent way to start the research process so that you can then cite the latest research that backs up the case you’re making.

In a more straightforward example, Chris, what would it look like if you were to say, “What are the most authoritative blogs about SEO in 2023?” Is that the type of question that would be appropriate for a system like this? Do you think it could work?

Christopher Penn 34:37
It would work, but you’d want to use something that has more timeliness, because the OpenAI model’s knowledge stops in September 2021. So...

Katie Robbert 34:45
So, for those of you not watching this, Chris is going to Bing, which, similar to what we were talking about with the Microsoft Visual Studio interface that sits on top of the GPT-3 model, has made it so that their search engine interface can sit on top of a GPT model as well.

So in the use case that I’m giving, if you’re a marketer who wants to write about the best resources for SEO blogs, you could go to Bing powered by GPT and say, “What are the top, most authoritative blogs about SEO in 2023?”

What I’m looking at on the screen is a quick list of blogs that I’m familiar with.

As somebody who has some subject matter expertise, I’d say, “Yep, I trust the Moz blog.

I trust Search Engine Watch.

I trust Search Engine Land.

I trust the Neil Patel blog.

I trust Yoast.” Those are all credible resources.

This particular search has helped me very quickly.

Now, I’ve done my research.

Christopher Penn 35:56
Exactly.

It’s interesting as a point of comparison, and this is something that marketers should be aware of.

This is what Google Bard spit out, right? Webmaster Central, Search Engine Journal.

So it’s a very similar list.

Here’s the big difference: none of this is clickable at all.

In Bing, we have citations that you can hover over, and boom, you’re ready to go.

Feedspot lets you look at the different sources that it’s pulling from.

If you’re a marketer who does a lot of SEO, this is a problem.

I’m asking Google questions, and it’s giving me answers.

But it’s not sending traffic anywhere.

You would think the logical thing to do would be to provide clickable links so I could go check out each of these blogs, right?

Katie Robbert 36:51
Especially if you’re using Google.

The automatic thing is, okay, Google is the biggest search engine, I must be able to click on these things like I can, using Google.

Christopher Penn 37:01
Exactly.

When you click on the Google it button, all it does is give you a query that you then have to go and actually click to Google it.

I’m like,

Katie Robbert 37:08
Yeah, and that’s a terrible experience. I’m just going around in circles.

Christopher Penn 37:12
Exactly.

So, when we talk about ChatGPT or large language models in general and the risks, one of the risks to marketers is that these things consume a lot of unbranded search traffic and don’t give us any in return.

They consume our content, but they don’t send us traffic.

So, this has been a very long episode of the Trust Insights podcast, but we wanted to go through these use cases.

And again, one of the things that I think is a good parting statement is that you should be creating your own binder, your own collection of prompts that work well for you.

Because you’re going to discover more over time, and you’re going to trade them like Pokemon cards with other marketers.

But you want to have a record of them so that you can keep getting better and better at these systems.

Katie Robbert 38:10
Yeah, because the things that work for you are not necessarily going to work for other people and vice versa.

So, there are ways, Chris, that you have worked out writing prompts for the systems that I might look at and say, “That is exactly how I would say it.

I might tweak it a little bit, but it’s a good starting place.”

I already know that I’ll probably keep asking you until we create our own binder of things.

“Hey, what was that prompt? Hey, what was that thing? Do you remember that thing?” And that’s not a good use of your time or my time.

As you get into a larger team or organization that’s using this more consistently, getting those things into one central place so that you can get consistent results is going to be super important.

Christopher Penn 38:57
It’s going to be super important, and I strongly encourage people to read the documentation for any of these tools that provide it.

You’d be surprised at how forthcoming they are.

If you have use cases or prompts that worked well for you, pop on by our free Slack group, Analytics for Marketers, at trustinsights.ai/analyticsformarketers, where over 3,000 other marketers are asking and answering each other’s questions about prompts and everything else every day.

And wherever you watch or listen to the show, if you’d rather have it on a different channel instead, go to trustinsights.ai/ti-podcast.

You can find us on most major channels.

Thanks for tuning in.

We will talk to you next time.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
