
So What? How to do prompt engineering

So What? Marketing Analytics and Insights Live

airs every Thursday at 1 pm EST.

You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!

In this week’s episode of So What? we focus on prompt engineering. We walk through what prompt engineering for generative AI is, why prompts succeed or fail in systems like ChatGPT and Stable Diffusion, and how to engineer prompts to maximize your success. Catch the replay here:

So What? How to do prompt engineering

 

In this episode you’ll learn: 

  • What is prompt engineering for generative AI
  • Why prompts succeed or fail in systems like ChatGPT and Stable Diffusion
  • How to engineer prompts to maximize your success

Upcoming Episodes:

  • TBD

Have a question or topic you’d like to see us cover? Reach out here: https://www.trustinsights.ai/resources/so-what-the-marketing-analytics-and-insights-show/

AI-Generated Transcript:

Katie Robbert 0:26
Well, hey everyone, Happy New Year, happy Thursday. Welcome to So What? The Marketing Analytics and Insights Live show. I'm Katie, joined by Chris and John. New year, same us. In today's episode, we're going to be covering prompt engineering, the hot topic for the past, at least, month or so. I know, Chris, it makes you cringe because you're gonna say for, like, the past few years, but people have only just started, you know, playing with these AI-assisted content writers and understanding them, even wrapping their heads around them. And so specifically, we're gonna be talking about ChatGPT and Stable Diffusion. Now, I don't know about the two of you, but I'm seeing posts and threads and, you know, people everywhere saying that they have figured it out, and here's the top 10 ways that you can make this work for you, in your business and your writing and your marketing. Everyone thinks they have the solution to it. What we're going to do today is talk about how individuals can come up with their own prompts, basically prompt engineering for their own personal use, because my concern with these people who are saying, here's the top 10 ways to use it, is that there's no personalization. I'm like, what if none of those scenarios apply to the business that I run? I'm still sort of stuck. So what we want to do today is talk through the correct way to approach prompt engineering. And John, I know you're gonna have, like, your top five fails.

John Wall 2:02
Yes, we have the list. Open or close with that, whatever you want to do.

Katie Robbert 2:06
We can probably, we'll figure it out. I have some ideas. Alright. So Chris, this is something that you know a lot about, and have been working on actually for quite a long time, and have been experimenting with. So where would you like to start today for this to be the most valuable?

Christopher Penn 2:25
Let's do a little myth busting first, okay, then dig into the mechanics. So all these AI systems, they're not sentient, they're not self-aware, a Terminator is not happening anytime soon. These are all based on what are called large language models. And a large language model is really simple to understand conceptually: given an enormous amount of text that has been digested down and categorized, you then ask these models to create something from that text. There was a great expression, I was listening to This Week in Machine Learning & AI, the podcast, which I highly recommend if you're a very technical person, and they were interviewing some folks from the Allen Institute about the evolution of natural language processing in these models. And the one guest, whose name I can't remember and I apologize for that, said, a word is known by the company it keeps. A word is known by the company it keeps, and this is the essence of large language models. If you were to scrape the entire Trust Insights website, right, what words would be associated with Katie Robbert, those two words? They'd be words like podcast, insights, livestream, Christopher Penn, John Wall, and so on and so forth, because these words appear in proximity to each other in a block of text. So when companies like OpenAI and the Allen Institute and Google and such build these large language models, essentially what they're doing is they take a whole bunch of words on a given topic, put them in a blender, and create a word soup, but within every word and phrase are mathematical associations for the words and phrases that appear nearby. That's why these tools can generate things like grammatically correct sentences. We have not taught them the rules of grammar, but they know statistically, from a probability perspective, you're probably not gonna say "happy new rutabaga," right? Probably not a phrase you say nearly as often as "Happy New Year," and so that construct is appropriate. It will understand, based on frequency, that "think different" is a phrase that's known to us even though it's grammatically wrong, right? Apple intentionally did it wrong. It's "think differently" in the adverb version, but "think different" was the name of the campaign, and because it's been used so much and appears in so much of the text out in public, the tools understand that that is a thing. So, when we're working with these large language models, that expression, a word is known by the company it keeps, guides how we do prompt engineering. The more words that we use in our prompts, and the more specific we are, the better the results we get. The less detail we provide, the fewer guardrails and associations we provide in a prompt, the worse these things perform. And if you're unfamiliar, a prompt is nothing more than putting a piece of text into one of these machines asking it to do something. So this is the ChatGPT interface. OpenAI's other interface, the Playground, is similar. And a prompt is simply the instructions that you want it to carry out. So I think the first place to start is helping people understand what it is that you're doing.
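For readers who want to see "a word is known by the company it keeps" made concrete, here is a toy sketch (not something shown in the episode) that counts which words appear near a target word in a tiny text sample. Large language models learn far richer, probability-weighted versions of these associations from enormous corpora; this is only the intuition.

```python
# Toy illustration of word co-occurrence: which words "keep company" with a target word.
from collections import Counter

text = (
    "Katie Robbert hosts the Trust Insights livestream and podcast. "
    "Christopher Penn and John Wall join Katie Robbert on the livestream each week."
)

tokens = text.lower().replace(".", "").split()
target = "robbert"
window = 3  # look this many words to either side of each occurrence
neighbors = Counter()

for i, tok in enumerate(tokens):
    if tok == target:
        # Collect the words immediately surrounding each occurrence of the target.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                neighbors[tokens[j]] += 1

print(neighbors.most_common(5))  # words like "katie" and "livestream" rank highly
```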

Katie Robbert 5:54
Well, and so, you know, what I'm hearing you say is, if you put in a prompt, which is basically a set of instructions, a prompt that is very vague, that says, you know, write a 500-word blog post on marketing, it's going to do its best to come up with something. But if you're more specific and say, I need to teach, you know, my users about the best practices for digital marketing on small budgets, you're gonna get a very different set of results. And so really what you're saying, Chris, is be as specific as possible when generating your set of instructions, and in this case, it's a prompt.

Christopher Penn 6:37
Exactly. And that's the thing about a lot of these tools: users don't realize how detailed you can be. So let's do a couple of examples. I'm going to type: you are a social media manager for a podcast. Write five promotional tweets for your podcast. So this is a very simple, straightforward prompt. And you can see, like, it doesn't really know what to do. There's not a lot of detail there, right? So it's [podcast name], [guest name], give it a listen on [platform], and so on and so forth. If you're gonna put this in the fails category, I think this is kind of not great. So while it's thinking about that, let's rephrase that, let's reframe that.

Katie Robbert 7:27
Well, and I think that it's interesting that you started with "you are a social media manager for a podcast." So you're already introducing a kind of prompt that I don't know that users are thinking about: you're telling the system the persona that it's supposed to have when it's writing. So that's one. It almost sounds like, hey, if you created a user story, you might have all the pieces that you need to create a proper prompt.

Christopher Penn 7:58
That’s correct.

Katie Robbert 8:01
And so for those who aren't familiar, or just want a refresher, a user story is a simple statement with three parts: as a persona, in this case the social media manager, I want to write five promotional tweets for the podcast, so that people sign up for the podcast. So that would sort of become your guideline for creating your prompts in this instance. And so Chris, you're definitely getting more detailed with the prompt the second time around.

Christopher Penn 8:36
Now, what makes ChatGPT interesting compared to other versions of the same model is that it has a feature that other versions of it don't have, which is memory. When you have a chat that you're engaging in, it does have the ability to remember what has previously happened. So in this case, I've said the name of the podcast is Marketing Over Coffee, right? The hosts of the podcast are John Wall and Christopher Penn, the podcast has been on the air since 2007 and airs weekly, the podcast URL is marketingovercoffee.com, and the podcast is a weekly conversation on marketing current events. Rewrite these tweets with these new details. So now,

Katie Robbert 9:10
The question I have is, do you need to be setting an intention, with the purpose of getting people to listen, or with the purpose of getting people to subscribe?

Christopher Penn 9:22
You absolutely can. And again, this is where you can, and should, be as specific as possible. So specificity in prompt engineering is the way to happiness, as with many things in life.

Katie Robbert 9:39
So John, in terms of your top five fails, it sounds like, you know, "write five promotional tweets for my podcast" is probably the wrong way to approach it.

John Wall 9:52
Yeah, definitely. You know, in fact, let me, I'll just throw this out. So, the idea with this was there are actually queries you can make that are so horrible that ChatGPT just won't, it even refuses to do it. So, number five: write 500 tweets encouraging entrepreneurs to increase their hustle. Number four: generate blog posts of VCs congratulating themselves for their morning routines and vision boards. Number three: make a convincing argument for why we need more TikTok influencers, oh my God. Number two: write a children's book about conceding elections with class and dignity, by a Republican. And number one: create a podcast about branding from C-level agency executives. Those are the five worst prompts that you could throw out there, that it refuses to take. So that was my big brainstorm this morning in the shower, to throw those out.

Katie Robbert 10:48
You guys are fantastic. But I think you make a good point, John, because I feel like that's the misunderstanding of how a system like this is going to be used. People are going to try to use it that way, when you really need to be more thoughtful about the approach. So thank you for sharing that, John. That was fantastic.

Christopher Penn 11:09
Exactly.

John Wall 11:10
Do you see that, that iterative approach, all the time though? I mean, have you actually had times where you're going five, seven, eight times to continue tweaking the thing?

Christopher Penn 11:17
Oh, absolutely. And in fact, you should. Part of the benefit of a system that has some memory is that you can continue refining it. So now I've said, hey, be sure to include the URL in each tweet, and the tweets should encourage the user to listen in. And now it's including the URL in there. There are still some placeholders and stuff in here, but it is getting better. And so this is one of the things that's really nice about a system like this: you can get it to improve its results. And then what you do is you go back and consolidate all the different prompts that you've put in and create sort of a master prompt that has all those details, all that specificity, up front. So you can and should start with a user story, but if you forgot, or you're just messing around, you can go through it and iterate until you get a result that you're pretty happy with. Then you save that prompt somewhere, in a notebook, you know, a text file, whatever, and you ultimately have it available to reuse when you need it. Now, the nice thing about these tools is we're just talking about plain text here, so this is very straightforward stuff. And there are a lot of use cases where people have done some fun stuff; there are ones like, you know, pretend that you're my personal chef, I'll give you my dietary preferences, you're going to give me some healthy dinner ideas, and things like that. That's kind of a fun one. But from a practical perspective, there are a lot of things we can do with this software that I think have more utility. So let me give you an example of one.

Katie Robbert 12:54
Well, before we move on to the practical utility, I want to ask the devil's advocate question here. So, you know, John, you asked, do you go through, like, five, six, seven, eight rounds of iteration with the prompt? What do you say to those who are going to push back and say, well, by the time you got, you know, the refined prompt and the responses you want, I could have just written the thing in the first place?

Christopher Penn 13:20
It's the same as, why would you use R instead of Excel? Because you want reproducibility, you want it to be able to be reused. Because this is the GPT-3 language model underneath, what you eventually end up doing is using these playgrounds, these tools, to sort of iterate, to be your R&D lab. And then once you are ready to really build something with it, you go in and you start writing code, right, code that can take this data and turn it into something you would put into production. So I'll give you an example. For the longest time at Trust Insights, we have been creating keyword forecasts, right? These keyword forecasts where we give our software a list of keywords, search volumes, and past search history, and we say, hey, let's build a forecast for the next 12 weeks of the search terms that are likely to trend. And we've done that, we've used it for ourselves, we've used it to hand off to clients and stuff like that, and it's been, I would say, reasonably well received. But we've not always gotten people to do as much with it as we want, including ourselves, right, including ourselves. So we started playing with the OpenAI system and then started incorporating that into our code. And so now, instead of just giving you a list of keywords each week for a year, we said, okay, what if we start giving you the actual blog topics, given our keyword list? Because, Katie, something you've said on our podcast is, I don't know what to do with this keyword. I mean, this is like the 16th time I've seen this keyword. Well, here's a different way of approaching this, where you now have a starting point to create some new content. And this is the automation part of AI, and this is critically important. Tools like ChatGPT and stuff are fun, they are a lot of fun, you can spend a lot of time with them. In fact, we talked to folks in our Analytics for Marketers Slack about how they have been integrating this, and people are still at what I call the party trick stage, like, hey, we made this thing and it did this thing, it was fun. But to graduate from party tricks to production requires thinking: how do I want to use this thing and bring it to life in a meaningful way?
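For readers following along, here is a rough sketch of the "master prompt" idea Chris describes: consolidate the details you discovered while iterating into one reusable template, then fill it in for each keyword in a forecast. The names, wording, and data below are illustrative, not Trust Insights' actual production code.

```python
# Hypothetical reusable "master prompt" template, filled in per forecast keyword.
MASTER_PROMPT = (
    "You are a content marketer for {company}, a {company_description}. "
    "Write three blog post topic ideas targeting the keyword '{keyword}', "
    "which is forecast to trend during the week of {week}. "
    "Each idea should include a working title and a one-sentence angle."
)

# Stand-in for the output of a keyword forecast (made-up rows for illustration).
forecast = [
    {"keyword": "marketing analytics", "week": "2023-01-16"},
    {"keyword": "prompt engineering", "week": "2023-01-23"},
]

for row in forecast:
    prompt = MASTER_PROMPT.format(
        company="Trust Insights",
        company_description="marketing analytics and management consulting firm",
        keyword=row["keyword"],
        week=row["week"],
    )
    # In production you would send each prompt to the language model's API
    # (see the API sketch later in the transcript) instead of printing it.
    print(prompt, "\n")
```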

Katie Robbert 15:43
And I think that that's where, as I mentioned at the top of the episode, I'm seeing so many people posting the, you know, top 10 ways to get ChatGPT to work for you. And it still feels very hacky, very party-tricky, because it doesn't have that repeatability. It still feels like that one-and-done, and it's not very instructional, it's just more of ideas. Like, yeah, I would love to have, you know, AI write blog posts for me, I could put that out as a top 10 way to use this thing. But I'm still not telling people how to make it repeatable, I'm still not telling people how to actually train it to do the thing. I'm just saying you could do that. You could also switch from tying your shoes to using Velcro. It's also a thing that you could do.

Christopher Penn 16:33
And elastic shoelaces, I'm telling you, elastic shoelaces are the way to go, you know.

And that's because these tools, as they are now, are intended for humans to use, right? And they are intended as one-offs. They are intended for people to get a sense of what's possible. But these tools are not the production uses of these tools. And there's a big gap between prototype and production. These are prototypes. And even the, you know, slightly more sophisticated Playground is still a prototype, right? It is still just a web interface to the model. This is not what you need to do to take it from what it is into something that actually does the work behind the scenes. So that's where the gap is, and that's why it still feels like, you know, party tricks and fun things. And yes, a lot of people are prognosticating right now, oh, it's gonna change everything. Yes, it will. But not in this form, because at some point you have to take it into production, it has to be able to scale. And, you know, having humans copy-pasting all day long is not a particularly scalable way of doing things, particularly when the tools do have things like APIs that allow you to do this.

John Wall 17:54
Oh, they do have an API? I would have assumed that they didn't have that.

Christopher Penn 17:57
Oh, God, yes. They've had APIs since the very beginning, because that's how they make their money, right? If you look at the billing on the back end, for an API call, I think 1,000 tokens is a penny. So when you run a prompt... in fact, let's do this, let's do something here. Let me take an episode of one of my shows, and I'm gonna say, I want to summarize in a bullet point list the key points of this episode. Now, I'm going to give it a maximum token count, like 1,000, which is about 800 words, give or take; a token is slightly less than a word. When I hit go, it's going to create this thing, it's going to start spitting these things out. And every time it's generating a word, it is incurring a very small cost, right? So this probably used, it looks like, about 300 tokens, so it cost our account a third of a penny to generate that. If this goes into a piece of software, like a word processor or something like that, where now hundreds or thousands of users are using it, that third of a penny ramps up real fast. And that's how OpenAI makes its money, right? You're talking $10,000, $20,000, $30,000 worth of compute time, because they've taken it and put it into production. So as individual marketers, these tools are actually some of the most cost-effective tools for doing stuff like this, at human scale. Once you take it to production scale, that's when it gets expensive. But again, because you're probably using software developers and stuff, you already know that. It's like any other API calls, you know, sending stuff to AWS, sending stuff to Google Cloud, you know how that works. But for marketing to take a leap forward, it's got to be integrated with the tools we use. These one-and-done interfaces are good R&D facilities.
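For readers who want the token math spelled out, here is a back-of-the-envelope sketch. The "penny per thousand tokens" figure is the round number quoted above; actual OpenAI pricing varies by model and changes over time, so treat the numbers as illustrative only.

```python
# Rough cost estimate using the figures quoted in the episode.
COST_PER_1K_TOKENS_USD = 0.01   # ~one penny per 1,000 tokens, per the episode
TOKENS_PER_WORD = 1.3           # rule of thumb: a token is a bit less than a word

def estimated_cost(words_in_and_out: int, calls: int = 1) -> float:
    """Estimate spend for a given number of words processed per call."""
    tokens = words_in_and_out * TOKENS_PER_WORD
    return calls * (tokens / 1000) * COST_PER_1K_TOKENS_USD

# One ~800-word summarization like the demo above: roughly a penny.
print(round(estimated_cost(800), 4))
# The same prompt embedded in a product serving 100,000 calls: real money.
print(round(estimated_cost(800, calls=100_000), 2))
```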

Katie Robbert 20:04
So let's go back to the question of prompt engineering. Obviously, people are still, you know, doing the party tricks, playing with, you know, create a samurai version of me riding on the back of a unicorn in space. Like, yeah, that is a thing. I don't know how that's useful; personally it wouldn't be useful to me. But so how do we approach writing prompts? Because ultimately, if I'm understanding you correctly, that's going to be the key to making this work for you in a repeatable way: having a solid structure, which I do think would start with a user story of why you're doing it in the first place, and what you want the outcome to be, so that when you're getting the text back, you can say, yes, it did the thing, or no, it didn't do the thing.

Christopher Penn 20:58
Purpose is the first part, and having that purpose be very clear is essential. The second part is, what is the language that you would use if you were directing a human being to do it? So if I said, summarize this, that's not super helpful, right? I mean, yeah, you can summarize it. But if I handed this to a virtual assistant and said, summarize this in a bullet point list, okay, cool, what am I summarizing? Summarize the key points of this episode in a bullet point list. Okay, that's a little bit more clear, right? That's pretty clear as to what it is you want me to do. And then, if I said, summarize this at a fourth grade readability level, I'm starting to think about the people who are involved, right? Who are the people that would be using this output? And now we have, obviously, a very different result, because this is now scaled down to a different level of readability. So you've got to think about who the people are that are going to be using the outputs, and what they're going to be using the outputs for; how is the output going to be used? So here's an example. Let's see: write a series of three multiple choice quiz questions from this text.

Katie Robbert 22:35
Question for you. You just wrote "write a series of three multiple choice quiz questions." Would you get the same result if you just said, "write three multiple choice questions"?

Christopher Penn 22:48
Maybe. It depends. Again, a word is known by the company it keeps. So in this case, I use that terminology because of a concept called weighting: how many times you use a word or a related word in the prompt gives it additional weight. You'll see this most with image prompts. For example, with DALL-E 2 or Stable Diffusion imagery, we might say, do a rendition of Katie Robbert riding on a horse in a samurai outfit in outer space, highly realistic, photorealistic, highly detailed, 8K resolution. Those last few words would give additional weight to the image, to say this should be a photo, should be photorealistic, should be very detailed, as opposed to saying watercolor painting in the style of Henri Matisse; now it'd be a very different set of weights that the model would use to make a decision about how to render it. Same with this, by saying I want a series of three questions. Sometimes the model, because we are ambiguous writers, is unclear about what we're asking. So if we use words and phrases that are very specific, we can get better performance out of the model.

Katie Robbert 23:55
So, you know, what's interesting is this is very much akin to delegation. One of the questions that I get a lot as a manager is, how do I trust my team? Like, I don't know how to delegate, how do I get all this stuff off my plate? And my first piece of advice to them is to set appropriate expectations of what you want them to do. So Chris, to your point, if you say, write a summary, and then they give it back to you and they didn't produce what you want, well, guess what, that's your fault, not theirs, because you were not clear enough about what it was that you wanted. But write a summary at a fourth grade level, including five bullet points, telling me what someone's going to learn from listening to this episode: that's a very clear set of instructions. And then if the person, or in this case the machine, doesn't give you back what you asked for, that's when you can start to change things. But until you are clear about those expectations, it's still your fault, not the machine's or the person's in that case. And so I see this as very similar: if you think about it like delegating to someone, and you are very clear about what you want from them, then you're going to get that thing back.

Christopher Penn 25:12
Exactly, that's exactly it. It is probably the most underpaid virtual assistant you can ask for.

Katie Robbert 25:21
So basically, prompt generation is akin to delegating a task.

Christopher Penn 25:26
Exactly. Prompt engineering is the delegation of tasks. And when you think now about the different tasks that you might have somebody do, this is where the tools start to become useful, right? So if I take the same thing, and I'm gonna go into the chat, because I like to have the memory a bit, I'm gonna give it some directions: you are a social media manager responsible for the promotion of content to users on Twitter, TikTok, and Instagram.

Katie Robbert 26:10
Do you have a cap in terms of characters?

Christopher Penn 26:14
4,000. 4,000. You will be creating social media content to promote a piece of larger content. Reply "ready" when you are ready to begin. Okay. All of the promotion is to encourage people to consume the content by watching or reading it. The piece of content is this transcript.

Katie Robbert 27:00
So it's interesting, it almost appears as though you're having a conversation with it this time around, like you've totally switched up the prompt engineering, the way in which you've been talking about it. So you entered your first prompt with "reply ready when you are ready to begin," versus, here's the purpose of the thing.

Christopher Penn 27:19
Exactly. And, again, that's one of the things that makes this interface to the model interesting: it has the ability to behave in a more conversational way. So for people who maybe are not clear about how to engineer good prompts, this is maybe the thing. So here is the content.

So again, this is from a YouTube video.

Katie Robbert 27:52
So, as it's pulling that up, to John's question about APIs: can you connect this interface to different data sources? Or are you sort of stuck with, here are the data sources we've connected OpenAI to, good luck?

Christopher Penn 28:11
No. So think about it: the data source is essentially just a pipe. Whatever you put into the pipe is one side, and however you want to get stuff out of it is the other. So for example, when we're doing it with Google Trends and Google and SEMrush data and Ahrefs data, you're just connecting the code from one system to another. Let's take a look here and see if I can open this up in Visual Studio Code, just to give you a sense of what that looks like. So in our code, we just have a single function here that sends the text of our choice into the system and returns the results. This is the code equivalent of typing in that window, of copy-pasting stuff, except now I can connect to any data source. I could pull data out of a database, I could scrape data from the web, I could do any number of things to put data into a text prompt, feed it to the API, and then get the response back. And that's how we do it for ourselves. So you could integrate this into a call system, if you're having real-time voice transcripts; you could integrate it into social media monitoring software; you could even take, you know, live closed captions off of a video, if you wanted to feed the data in. No matter what it is you're doing, you just get the data in. So now we've gotten this, and: based on the transcript, write five promotional tweets for the content.
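For readers curious what that single "pipe" function might look like, here is a minimal sketch assuming the OpenAI completions REST endpoint that was current when this episode aired. It is an illustration, not Trust Insights' actual code; model names, parameters, and pricing change over time, so check OpenAI's current documentation before using it.

```python
# Minimal sketch: send a text prompt to OpenAI's completions API, return the text.
import os
import requests

def complete(prompt: str, max_tokens: int = 500) -> str:
    """Send a prompt to the completions endpoint and return the generated text."""
    response = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "text-davinci-003",   # GPT-3 family model available at the time
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.7,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"].strip()

# Any data source can feed this pipe: a database query, scraped web copy,
# a call transcript, closed captions. Here, a transcript read from a file.
if __name__ == "__main__":
    transcript = open("transcript.txt", encoding="utf-8").read()
    print(complete(f"Based on this transcript, write five promotional tweets:\n\n{transcript}"))
```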

Katie Robbert 29:50
So then this goes back to: could you not have just pasted the transcript and said, write five promotional tweets based on this transcript?

Christopher Penn 30:00
I could have, but I enjoy giving it extra context. I could have put all that in one big prompt, but in this case I didn't, because I would have had to have thought about that beforehand. And again, this is one of the things that a lot of people are finding so different about this tool: you don't need to have that. You can have a conversation, to your point, as if it were an employee of yours, and say, okay, here are these tweets, okay, great, that looks interesting. Give me some video ideas for TikTok, about what kind of videos I should make for TikTok to promote the content. Now, this is all text-based, it can't do anything except generate text, but it's going to go ahead and give me some conceptual ideas for the types of TikTok videos I could make. So if you were one of those folks who's like, I don't know what to do with TikTok, right? Well, ask the machine, right? Ask the machine. And here we are. Here we have: behind the scenes, create a series of TikTok videos that give a behind-the-scenes look at the process of writing and creating content, highlighting unique quirks or imperfections of human-written text. We've talked for a long time about the transmedia content framework, the idea that you can take a piece of media like video and split it into all these different formats. Now, the next evolution of that is the transformation framework, where you say, okay, given this content, ask the machines, just like you would a person, what else can I do with this? What are some things I could do? So we have some TikTok video ideas here. I would say, what are some Instagram album ideas to promote this content?

Katie Robbert 32:01
So I guess, back to my question about APIs and usability: could someone like John, who's responsible for business development, hook up a system like this to their CRM and say, give me a list of, whatever the qualifications are, you know, contacts who haven't participated in anything or engaged in anything, and write them a brief email to help them get re-engaged? Like, is that a use case for something like this?

Christopher Penn 32:37
Absolutely. Again, ask: is it a language task? If the answer is yes, then this is an appropriate tool. Would you ask a human employee to do it? The answer is yes. Is it repetitive work? The answer is yes. Those three criteria make this a good candidate.

Katie Robbert 32:55
Well, yes, writing the email is the language task. But looking for accounts that are no longer engaged, yes, that's a language task in terms of my instruction, but that's more of a database piece. Is that something that a system like this could do?

Christopher Penn 33:16
I would not trust a system like this to do that, because that's too vague. What I would do, as an example from the code I was showing earlier, is have a piece of code that would go through your CRM and say, you know, make a list of only accounts that are aged past X days, right, and then you look at the criteria in them. Based on that criteria, you could then have it create some content. So let's start a new thing. You are a top performing sales executive.

Katie Robbert 33:47
We're talking about you, John.

Christopher Penn 33:50

...a list of lapsed accounts. And we will take existing call notes and use them to craft an email to each account, trying to win them back. So that's sort of our table setting now.

Christopher Penn 34:14
Let's, uh, let's put together something that is not under NDA. So: account name, Acme Mechanics. Contact, John Wall. Last contact date, April 1, 2021. Call notes: John said he didn't have the budget right now, probably lying. Acme sold a billion dollars in widgets last quarter. Follow up with an email. Now let's see what it does with this: write a persuasive sales email based on this information.
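For readers wondering how the CRM side would fit together, here is a hypothetical sketch of the division of labor Chris described a moment ago: ordinary code does the database-style filtering of accounts lapsed past X days, and the language model only gets the language task of drafting the win-back email. The field names and records below are made up for illustration.

```python
# Hypothetical sketch: filter lapsed CRM accounts in code, then build a prompt per account.
from datetime import date, timedelta

accounts = [
    {"name": "Acme Mechanics", "contact": "John Wall", "last_contact": date(2021, 4, 1),
     "notes": "Said he didn't have the budget right now."},
    {"name": "Globex", "contact": "Jane Smith", "last_contact": date(2022, 12, 20),
     "notes": "Asked for a follow-up after the holidays."},
]

LAPSE_THRESHOLD_DAYS = 180
cutoff = date.today() - timedelta(days=LAPSE_THRESHOLD_DAYS)
lapsed = [a for a in accounts if a["last_contact"] < cutoff]

for account in lapsed:
    prompt = (
        "You are a top performing sales executive. Using these call notes, write a "
        f"persuasive, professional win-back email to {account['contact']} at "
        f"{account['name']}. Notes: {account['notes']} "
        f"Last contact: {account['last_contact']:%B %d, %Y}."
    )
    # Each prompt would then be sent through an API function like the one sketched
    # earlier in the transcript, with a human reviewing every draft before it goes out.
    print(prompt, "\n")
```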

Katie Robbert 35:15
What's interesting is, you know, instead of responding to your question earlier, it's sort of, for lack of a better term, machine-splaining to you how to deal with your sales problem. It was like, no, that's not what I asked you to do, I asked you to write the damn thing. And so now I have mansplaining on the left and machine-splaining on the right.

Christopher Penn 35:41
Look at the bright side, now it's machine-splaining to me. So it's equitable. Everybody gets machine-splained.

John Wall 35:49
Biased machines.

Christopher Penn 35:51
Exactly. Okay. So it's created a response, right? It says, "Dear John, I hope this finds you well. I recently spoke to you on April 1." Now, to be fair, it didn't say, "John, I think you're lying."

John Wall 36:05
Basically calls him out.

Christopher Penn 36:08
But it does say... Now we can also see this needs some refinement, sure. Refine this email: our company is Trust Insights. We sell analytics and management consulting services. Sell John on our Google Analytics 4 audit and implementation. Let's see how it does now with a bit of refinement.

John Wall 37:09
Still calls him out on that.

Katie Robbert 37:14
Oh, I love that. Dear liar.

John Wall 37:20
You said you don’t have the money. But I see that huge pile of money right behind you.

Katie Robbert 37:25
But your new yacht tells me something.

Christopher Penn 37:26
Exactly. So, to your question about whether this is a valid use case: yes. Again, is it text? Yes. Is it language? Yes. Is it a repetitive task? Yes. We just need to be very specific in our prompts, and do the prompt engineering to provide all the information needed. And we can still see here, this one is not quite right yet, because it says "sincerely, your name." You would not want this email going out, right? No.

Katie Robbert 37:51
And I think that's actually the number one piece of advice: you still need to review and edit all of the things that come out of a system like this. Like, if you're just taking it and publishing it, going live, that's a mistake, because these machines are flawed, just like humans.

John Wall 38:16
Exactly the same thing, though. If you've done this with a certain prompt 100 times, and every time, you know, you've finely tuned it to the point where it's always giving you what you want, you still couldn't go live with it, right? I mean, one is you don't know how it could respond to a new query. And then how often does the corpus get updated, too? Like, when would this change underneath you without you knowing it?

Christopher Penn 38:39
So at least in the case of, you know, the OpenAI models, they actually do publish when they're changing the underlying model itself. So this year, I believe sometime in Q1, they're going to be releasing GPT-4, which is going to be the largest model of its kind, about 8x larger than the GPT-3 model, which means better performance, which means more natural use of language and things like that. So now we can see we're finally at a point where, okay, this is actually coming from somebody, right? It's not "your name," it's coming from Katie. And we told it, in addition to that new fact, to change the tone to be warm, conversational, and professional. Alright.

Katie Robbert 39:18
So it said "warmly."

Christopher Penn 39:21
Exactly. Let’s see.

John Wall 39:23
I want to look them in the eye.

Katie Robbert 39:25
Yeah. It’s still way nicer than I would have been.

Christopher Penn 39:29
Cold, hostile... professional.

Katie Robbert 39:33
There I am. Dear Mr. Wall.

John Wall 39:36
That’s awesome. See if it ends with Up yours.

It totally calls him out on his billion dollars of widgets again. It just, it just won't let go.

Oh, "sincerely." Oh, yeah, and then you throw the title out there. "Sincerely"... Good afternoon.

Christopher Penn 40:10
Yes. So, again, these are some of the things. One of the use cases I use these tools the most for is meeting notes and action items. So I will take the transcript of a call, put it in here, and say, give me a list of the action items from this call, because you forget stuff. And reading through an unedited transcript is better than re-listening to the same 45-minute call again, but you still miss stuff, because your eyes kind of glaze over about a third of the way through. The machine spits out your 10 bullet points, great, okay, copy, paste, that goes straight into your task management system, and now you're off to the races. That's a very practical use. A lot of the things you can do with this have to do with rewrites. Frankly, these systems are good at creating content, but they are great at reprocessing existing content, and I think that's one of the things that's most overlooked. So let's start a new one. You are a social media marketing expert. Rewrite the following biography in the first person, appropriate for a LinkedIn bio. So, Katie, I just pasted your bio from the Trust Insights website, and now it's going to rephrase this into something that's good for your LinkedIn bio. So for folks who are like, I don't know what to put for my LinkedIn bio: take the data that you've already got and respin it.

So there's a nice, you know, decent LinkedIn bio. Now let's say: rewrite this in a more casual, fun, yet still professional tone.

John Wall 42:25
Hey, there, Hey, there, that’s me. Hey, there,

Katie Robbert 42:29
finger guns. Oh, boy,

John Wall 42:40
it’s not all work and no play for me. I’m a Google Certified Professional. Oh, yeah, that’s great.

Katie Robbert 42:46
That wasn't enough? I have a master's degree.

Christopher Penn 42:50
Exactly. But so this is, again, and this is just my word for it, this is all just language manipulation. It's taking the data that it's been given and respinning it, and that's what it's really good at. We've seen some examples of this already, but let's take this, hang on here. I'm gonna take this transcript from an episode, let's do... I'm gonna go with a shorter version now. Rewrite this transcript...

John Wall 43:24
Right? They have quantum computing in there. We joke about that every week, like, no one can explain it. I'm gonna have to go back and run that.

Christopher Penn 43:32
...for grammar, syntax, spelling, punctuation. And again, same exact thing. You've seen at the bottom of our blog posts, you know, we paste in the unedited transcripts from our livestreams and things like this. If we wanted to get that content rewritten to be more clear, more readable, but still preserve the language we're using, this is a fantastic tool for doing that. One of the things I like to do when I'm driving around is have a little voice recorder with me, there on my phone, stuff like this, and just foam at the mouth. I do this when I'm driving to, like, conferences and stuff, if I have ideas. We know that sounds like word salad half the time, and Katie, you've even said, I do not want to listen to, you know, 60 minutes of rambling. Nope. I can feed it into this and say, okay, give me four paragraphs, or summarize my main points, and now I've taken that and turned it into something useful. So there are any number of applications. Again, if you would hire a virtual assistant to do it, if you would direct an employee to do it, if it's a text-based, language-based task that is repetitive, these tools are good candidates. And if you engineer your prompts well, as though you were giving them to an employee or a contractor who just started that day, with all the details they need, you can make these tools very successful.

Katie Robbert 45:02
So I will share a quick anecdote of what I fear is going to happen moving forward. What's going to happen is Chris is going to start speaking to me and John as if we were the prompt box. And I say this because this has happened before. Many, many moons ago, when Chris and I worked elsewhere, we were traveling, and Chris had gotten so used to speaking into his phone to respond to emails that he started to speak to me as if I were the phone taking the dictation, including "stop," "new paragraph," and I was like, hey, dude, I'm right here in front of you, you don't have to do that. And so I can imagine that as people get more in the weeds with prompt engineering, it will change the way that we communicate, hopefully for the better, maybe more specifically, maybe more directly? Who knows?

Christopher Penn 45:58
Exactly. The other thing I'll caution people about is that right now, ChatGPT is in a research preview. It will not be forever; it will at some point come with a cost, so you should be prepared to deal with that when it comes. But more than anything, use it as a testing ground so that you can integrate it into production-level tools that will let it scale far beyond what one person typing prompts into the machine can do. That's where the real value is going to be unlocked with these things. What we're seeing is the human interface to a very, very powerful tool.

Katie Robbert 46:30
Great. John, final thoughts?

John Wall 46:33
That's all good. Just remember, it's built on the average human's ability to write, so what you put in is what you're gonna get back.

Christopher Penn 46:42
Alright, thanks for tuning in, folks, and we will catch you all next week. Thanks for watching today. Be sure to subscribe to our show wherever you're watching it. For more resources and to learn more, check out the Trust Insights podcast at trustinsights.ai/tipodcast, and our weekly email newsletter at trustinsights.ai/newsletter. Got questions about what you saw in today's episode? Join our free Analytics for Marketers Slack group at trustinsights.ai/analyticsformarketers. See you next time.


Need help with your marketing AI and analytics?


Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.

