Katie and Chris answer your marketing, data, and AI questions every Monday.
This week, Sarah asked, “If two people ask ChatGPT for help with a blog on the same topic, will we both get the same content? Or will it be different for each of us?”
Tune in every Monday to get your question answered!
Subscribe to our weekly newsletter to catch past episodes at trustinsights.ai/newsletter
Katie Robbert 0:00
Welcome back to another episode of Mailbag Monday, where Chris and I tackle all of the questions that come in through our contact forms, emails, and Slack groups. So Chris, what’s in the mailbag today?
Christopher Penn 0:11
This is a fun one from Sarah, who, unsurprisingly, is asking about ChatGPT: if two people ask ChatGPT for help with a blog on the same topic, will we both get the same content? Or will it be different for each of us?
Katie Robbert 0:26
Well, there’s a lot of “it depends” in that question. If you’re writing two different prompts, or the same prompt with slightly different wording, you’re going to get two different results. If Chris and I each had a ChatGPT session open but used identical prompts, we’re likely to get roughly the same result back. Now, that said, Chris, one of the questions I have for you is: how often are the resources, the data sets, the background information being updated at OpenAI? If I ran a prompt today versus tomorrow, how big would the difference be?
Christopher Penn 1:09
It depends. So the two models that underlie ChatGPT, the GPT-3.5 model and the GPT-4 model, have two different systems for updates. One is a data update, and that data update is very infrequent, about once a year, when it gets new information to work with. The second thing it gets is what’s called RLHF, reinforcement learning from human feedback. Every time we use ChatGPT and give a response a thumbs up or thumbs down, it takes that information and refines and retunes parts of its model, and those updates are released about once a month. So in terms of its ability to create content better, it improves about once a month. Those are the two updates. What conditions a model to return information, like you said, Katie, is all about the prompt. The longer and more detailed a prompt is, the more likely the output is going to be substantially similar. If two people were to copy and paste a short prompt like “write a blog post about content marketing,” you’re going to get wildly different answers every time, because a prompt that short is going to create very generic, kind of random content. If you write a substantially longer prompt, maybe a page long, you’re going to get very substantially similar information. So here’s an example: this is Google Analytics data, and I’m having it write some recommendations. There’s so much information in the prompt that it’s going to come up with similar recommendations. But in the previous episode of Mailbag Monday, we ran this prompt, and it came up with recommendations that sound the same but are not identical. The last time, it said organic social was great: hey, this channel is really working well. This time, it’s a lot more of a warning: hey, this is over 50% of your total visits, you might want to deal with that, that seems like a problem, you’re over-dependent on one channel.
So even though we’ve copied and pasted the exact same prompt, which is now about seven paragraphs of prompt text, we got a response that is substantially similar, factually identical, but uses very different language.
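Why does the exact same prompt come back worded differently each time? Language models pick each next word probabilistically, and a nonzero sampling "temperature" spreads that probability across several plausible words. Here's a toy sketch of that idea — the vocabulary and scores are invented for illustration and are not ChatGPT's actual internals:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token from a softmax over scores, scaled by temperature."""
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # stable softmax numerator
    return rng.choices(list(logits.keys()), weights=weights, k=1)[0]

# Invented next-word scores for a prompt like "Organic social is ..."
logits = {"great": 2.0, "strong": 1.8, "working": 1.5, "risky": 0.3}

rng = random.Random(42)

# At temperature 1.0, word choice may vary from run to run.
runs = [sample_token(logits, temperature=1.0, rng=rng) for _ in range(5)]
print(runs)

# At temperature near zero, sampling collapses to the single most
# likely word, so every run returns the same thing.
greedy = [sample_token(logits, temperature=0.01, rng=rng) for _ in range(5)]
print(greedy)
```

This is why a long, detailed prompt narrows the outputs toward each other: it constrains which words are plausible at each step, even though the sampling itself stays random.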
Katie Robbert 3:30
So to the question of whether two people asking for help will get similar results: the answer is likely yes. The more you run the same prompt over and over again, the more, to Chris’s point, you’re going to fine-tune the model. It’s going to say, okay, you didn’t really like what I gave you the first time, let me see if I can do a better job this time. And specificity is key. If you say, “I need help writing a blog about content marketing,” it’s going to give back whatever is most closely available; it’s just going to start grabbing stuff off the shelf and be like, well, this says content marketing, and this says content marketing, so here you go. Versus: “I need you to be an expert in content marketing. I need you to write in a warm and friendly tone. I need this to be about five paragraphs. This is the audience,” and so on and so forth. Very different results.
Christopher Penn 4:25
Exactly. So I literally just cloned my session and copied and pasted the exact same thing, the exact same prompt. And again, it’s factually identical, topically the same, but the words are different. Now, underlying this question is the question of duplicate content and things like that, which gets into SEO territory. Here’s the thing: AI content can’t be copyrighted. So even if you and somebody else put up the same piece of content from your own AI, neither of you can claim it, and neither of you can claim the other is infringing, because neither of you can hold a copyright on it. The United States Copyright Office, at least, released a statement saying machine-generated works are not eligible for copyright. So if that is a concern you have, you don’t have to worry about it, because you can’t copyright it. The law is still unclear about AI-assisted work, though. If it spit out an outline and I write the text, the text itself is mine but the outline is not. If I edit this post from ChatGPT, the portions that I have edited are mine and are eligible for copyright, but the machine-generated portions are not. So you can imagine this starts to get very, very messy, and the law is still unclear. From an SEO perspective, how is Google going to interpret it? Will it count as exact duplicate content? Well, we don’t know, because Google has introduced its own large language models into its search. They did that back in 2019 with BERT, and now, with the PaLM set of models, they have even greater language capacity. It is conceptually possible that the PaLM model, which is the model underlying their Bard system, can look at two posts and say these are essentially the same thing in slightly different words, and so we’ll have to pick which one result is more credible. Do we know that to be true for a fact? No, but it’s highly likely. You can even ask ChatGPT: here’s two pieces of content, do they have the same author? And it will identify by style: yes, this is probably the same author, or no, it’s probably not.
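You can see the “same facts, different wording” pattern with a plain similarity score between two outputs. Here’s a minimal sketch using Python’s standard-library difflib — this is just an illustration of near-duplicate detection, not what Google or ChatGPT actually uses, and both text samples are invented:

```python
from difflib import SequenceMatcher

# Two invented outputs from the same prompt: same facts, different wording.
run_a = ("Organic social drives over 50% of total visits. "
         "Consider diversifying so you are not dependent on one channel.")
run_b = ("More than half of all visits come from organic social. "
         "You may want to diversify to avoid depending on a single channel.")

# ratio() returns 1.0 for identical text and approaches 0 for unrelated text;
# rewordings of the same facts land somewhere in between.
ratio = SequenceMatcher(None, run_a.lower(), run_b.lower()).ratio()
print(f"similarity: {ratio:.2f}")
```

A character-level ratio like this only catches surface overlap; a large language model comparing the two posts can judge semantic overlap, which is why it can plausibly flag two differently worded AI posts as essentially the same content.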
Katie Robbert 6:40
So to the question of, if two people put in the same prompt, will they get identical results? The answer is likely no, they won’t. And the more you work with the model, the more it’s going to be refined to your specifications. And specificity is key in the prompt that you’re giving it.
Christopher Penn 7:01
Exactly. So if you’ve got a question you want to ask for Mailbag Monday, pop on over to our free Slack group. Go to trustinsights.ai/analyticsformarketers, where over 3,000 other marketers are asking and answering each other’s questions every single day. And if you want to catch up on past episodes of Mailbag Monday, you can get them in our newsletter. Go to trustinsights.ai/newsletter and subscribe there; it comes out every Wednesday. Thanks for tuning in. We’ll talk to you next time.
Transcribed by https://otter.ai
Need help with your marketing data and analytics?
You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new 10-minute or less episodes every week.