
So What? What’s next in generative AI?

So What? Marketing Analytics and Insights Live

airs every Thursday at 1 pm EST.

You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode.

In this week’s episode of So What? we focus on what’s next in generative AI. We walk through governance and processes around training AI models, what results you can expect, how your model usage changes and what’s likely coming in generative AI models in the next few months. Catch the replay here:

So What? What’s next in generative AI?

 

In this episode you’ll learn: 

  • Governance and processes around training AI models
  • What results you can expect and how your model usage changes
  • What’s likely coming in generative AI models in the next few months

Upcoming Episodes:

  • TBD

Have a question or topic you’d like to see us cover? Reach out here: https://www.trustinsights.ai/resources/so-what-the-marketing-analytics-and-insights-show/

AI-Generated Transcript:

Katie Robbert 0:30
Well, hey everyone, Happy Thursday. Welcome to So What?, the marketing analytics and insights live show. I'm Katie, joined by Chris and John. And without even planning it, we're all wearing our Trust Insights shirts today. So we are... what did you call this, John? The Borg? I don't know what the line on the T-shirt is.

John Wall 0:49
I can't remember the tagline. It's "Resistance is futile." There we go. You will be assimilated into the Trust Insights matrix.

Katie Robbert 0:59
Well, that aligns nicely with the fact that we are talking about artificial intelligence today. Specifically, what's next in generative AI. And so we're going to talk through governance and processes around training AI models, results you can expect, and what's likely coming for generative AI models in the next few months. We've been talking about generative AI all week. So if you want to catch a more technical overview of what generative AI is, you can catch that episode at TrustInsights.ai/tipodcast, where Chris and I break down what it is and what it isn't from a technical standpoint. And then in our newsletter this week, at TrustInsights.ai/newsletter, I give a non-technical overview of what generative AI is, and Chris, you go into more of the technical again in the data diary section. Exactly right. On today's show, we want to talk about not only generative AI, but what's next. Because artificial intelligence is a technology that does not stand still; it is evolving even as we speak. So by the time this episode comes out, it'll probably be out of date anyway. So Chris, where would you like to start with generative AI?

Christopher Penn 2:15
I think it makes sense to start by talking about how you should be using it depending on your needs, right? Because there's a series of pathways that go from "hey, let's try this thing out" to "this is going to be a core part of our business," and the use cases and the technologies that go with each stage are different. And right now, when you look around the landscape, when you listen to people in Slack groups and Discord servers and discussion forums and things like that, you don't see people thinking strategically about the use of generative AI. Everyone's still kind of in party trick mode, like, oh, I can make it do this. Like, yeah, that's cool, but you're not thinking with any level of strategy. So the very base level, the entry level that almost everybody knows and is familiar with, is good old-fashioned ChatGPT, right? Here's the web-based prompt: you go in, you type your stuff, have a good time, copy, paste. People get this, people understand this. And for, I think, most entry-level use cases, this is the tool that people will use. And for non-sensitive, non-restricted, non-private data, this is a great tool to use. So I'd say this is sort of the first stepping stone.

Katie Robbert 3:42
So if you want to write someone a limerick for their birthday, then this is a good tool to use.

Christopher Penn 3:49
Exactly right. Now, the challenge is with the ChatGPT interface and the models that are in here: GPT-3.5 Turbo and, if you're a paying customer, GPT-4. These are big models, but they're very general models. These are models that have a lot of everything in them, and so they're not as specific as some people want them to be. And the process of using this tool still requires a human being, right? And this is kind of the big challenge with what I see a lot of people doing with ChatGPT. They're just coming up with cool stuff to do with it, but it's still a person copying and pasting.

Katie Robbert 4:31
And so, just to step back for a second and give a quick definition of generative artificial intelligence. Chris, please do correct me if I'm wrong, but you gave me a really good way to remember it, and it's the acronym FOG. There are three kinds of artificial intelligence: find, organize, and generate. So you have regression, which is find, and so you actually have to give the model something to look at. Organize, which is classification: it makes sense of everything you've given it and puts it into its own classification system that it understands. And then generate, which is generative AI, which is what we're talking about today, and that's where it creates the thing. And so today we're focusing specifically on generative AI, but generative relies on regression and classification in order to operate. And so when we're looking at a system like ChatGPT, all of those things are happening in the background when you're talking about the large language model. And so with the large language model, if you say, "write me a limerick," the first thing it has to do is go into its library of sources and say, what the heck is a limerick? Do I know what a limerick is? And then it finds all of its different references to limericks. And if you say, "write me a limerick about birthdays," then it starts to pick up limericks and birthdays and organize the information. And then it generates the limerick that you've asked for. Is that all roughly correct?

Christopher Penn 5:59
Roughly correct, yeah. It doesn't actually have any sources. The underlying model is just a series of probabilities. It's a library of probabilities; it's all just math, a big pile of numbers. Those numbers have associations. And one of the things we were talking about earlier, too, is kind of a fun example: you can ask these language models to explain things, like "explain this to a fifth grader" or, my favorite, "explain it to your dog." I tried saying "explain marketing attribution to your dog," and it did a credible job with it. But there are no actual words in the GPT model; it is just numeric probabilities that then get spit out through the decoding process into words. So this is the baseline, right? Now, you can do some classification tasks with these models. For example, you can have it do sentiment scoring: give it a piece of text and say, score this, give us a sentiment score. Now, whether or not you agree with it, how good you think it is, is up for debate, although I will say it's substantially better than a lot of older solutions. But you can even do classification with this. But again, we're still stuck with you, the human being, copying and pasting or typing prompts here, and that does not scale well. So the next step in our journey for the use of generative AI is talking to these models programmatically. So this is a piece of code that we wrote. This is actually in the Trust Insights newsletter from a few issues back, where we're using the GPT-3.5 Turbo model, which is the one that ChatGPT uses on the back end. And there's a prompt built right in; the prompt says, "you will act as a linguistics expert," blah, blah, blah, "give us a sentiment score from minus 10 to plus 10." And then we feed it text: we feed it URLs from one of our databases that has the text of the different articles. And then it returns a table of sentiment scores, right?
So now, instead of you or me, Katie, copying and pasting article after article into ChatGPT, we programmatically have it do this. It's still using the same base model, and there's a detailed prompt, but the output is now being run programmatically instead of one at a time by a human.
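For readers who want to try the programmatic approach Chris describes, here is a minimal sketch in Python using only the standard library. The endpoint and message format follow the standard (2023-era) OpenAI chat completions API; the system prompt is a paraphrase of the one described on screen, not Trust Insights' actual code.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

SYSTEM_PROMPT = (
    "You will act as a linguistics expert. Score the sentiment of the text "
    "you are given on a scale from -10 (very negative) to +10 (very "
    "positive). Reply with only the integer score."
)

def build_request(text: str) -> dict:
    """Build the chat-completions payload for one article's text."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    }

def parse_score(reply: str) -> int:
    """The model is asked to reply with a bare integer score."""
    return int(reply.strip())

def score_sentiment(text: str, api_key: str) -> int:
    """POST one scoring request; every call is billed per token."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return parse_score(body["choices"][0]["message"]["content"])
```

Looping `score_sentiment` over a database of article URLs and texts is what turns the one-at-a-time copy-paste workflow into the table of scores Chris describes.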

John Wall 8:26
I know, I'm ready to spin up R and start writing my own code here.

Katie Robbert 8:31
But I think that's, you know, Chris, as you're describing it: ChatGPT was sort of step one, the accessible version, but there are limitations to that version in terms of what it can do. And now you're talking about step two, taking it beyond this chat interface.

Christopher Penn 8:52
That's right. We're leaving the web interface behind. We're still using the GPT model, the underlying software; we're just now using it programmatically instead of a human being doing it. So if you have processes in your company, and this is where governance and process management really are important, if you have processes where you identify, yeah, we do the exact same thing week after week, we write the same exact press release, or we rewrite the same financials. Imagine if you were a PR firm and you had a client that by law had to release a press release stating the results of its financials, as required by Regulation FD from the SEC. You would take the raw data, feed it in, put it in a table, and then give it a press release prompt. And now, for your 50 or 100 or 200 clients, you just have this crank out those releases for you. It's templated; it's exactly the same thing, more or less, month over month. So just have it do it.

Katie Robbert 9:47
And I think that, as you start to talk about the use cases, those are the use cases, from a strategic standpoint, that companies are not quite at yet. They may be starting to think about it that way, but there's a long way to go in terms of building it in this way.

Christopher Penn 10:05
Exactly.

John Wall 10:07
How does it run as far as API access? If you have the free account, can you hit the API, or do you have to be on the paid version to start making calls?

Christopher Penn 10:14
Every account that uses the API has to have a credit card in the system, because you are billed for every usage of the API. The billing rate is two tenths of a penny per 1,000 tokens. So if this thing spits out a 1,000-word press release, just hypothetically, we would have to pay OpenAI two cents.

John Wall 10:38
Oh, so since you're paying for every call, it's just completely open access? They don't care how much you burn?

Katie Robbert 10:45
How long do you think it will stay that way, in terms of pricing?

Christopher Penn 10:49
Oh, the prices keep getting cheaper. The previous model, GPT-3, was two cents per 1,000 tokens. The GPT-3.5 model is two tenths of a cent per 1,000 tokens. Now, the newest model, GPT-4, is back to two cents per 1,000 tokens, and there's differential pricing depending on how much data you're processing, because it can process up to 30,000 tokens. After a certain point, it goes to, I want to say, like six cents for the 25,000-token tier or something like that. It's still ridiculously cheap.
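To put the per-token rates Chris quotes in concrete terms, here is a small cost calculator. The rates are the ones mentioned in this conversation (including the fine-tuned rate quoted a bit later in the episode); they change frequently, so treat them as a snapshot, not current pricing.

```python
# Prices quoted in the episode, in dollars per 1,000 tokens.
RATE_PER_1K = {
    "gpt-3": 0.02,               # two cents
    "gpt-3.5-turbo": 0.002,      # two tenths of a cent
    "gpt-4": 0.02,               # base tier; large-context tiers cost more
    "fine-tuned-davinci": 0.12,  # the 6x premium for your own tuned model
}

def cost(model: str, tokens: int) -> float:
    """Dollar cost of processing `tokens` tokens on `model`."""
    return RATE_PER_1K[model] * tokens / 1000
```

At these rates, generating a hundred drafts programmatically really does cost "a couple of bucks," which is the strategic point Katie raises below.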

Katie Robbert 11:24
And is that because more and more people are using it so they can keep the prices lower?

Christopher Penn 11:29
Yeah, because effectively, once the model is built, you just deploy it, right? They don't update these models very often. The 3.5 Turbo model updates about once a month, and there have not been any declarations about how often GPT-4 will be updated.

Katie Robbert 11:46
And so, you know, back to what you were saying about thinking about it strategically and integrating it. Those are the kinds of things that may not be well known, in terms of: well, what is the cost savings if we use AI versus a person? Well, we have to re-edit everything the AI generates; however, it's going to generate 100 things for you to edit at a cost of maybe, you know, a couple of bucks.

Christopher Penn 12:11
Exactly, exactly. So this is step two in the journey. Step one is a human being copy-pasting into ChatGPT. Step two is talking to the GPT model with your programming language. We're still using the vanilla, out-of-the-box GPT-3.5 Turbo model, which is very, very good, but it's still very generic. So that brings us to sort of the third evolution in our journey toward embracing these large language models, and that is that OpenAI in particular, and many of these other companies (Hugging Face, EleutherAI, CarperAI, etc.), allow you to take a model that they have running and fine-tune that model for specific purposes. So these are the instructions for how you fine-tune one of the GPT-3 models; Ada, Babbage, Curie, and Davinci are the four models that are available. And you would basically create training data. For example: here's the prompt, and here's the completion. You would provide this data, OpenAI will process it, and then it gives you a special version of its API for you to talk to specifically. So imagine, Katie, I took all 50, 60, 70 newsletter cold opens that you've ever written for the Trust Insights newsletter, and the titles of those newsletters. I would write the prompt as "write a newsletter about the uses of generative AI," and then in the completion I would copy and paste what you wrote for that issue into this file format. I just have to prepare this data. I would then upload it to OpenAI and say, retune the Davinci model on this specifically. And what this is going to do is essentially learn from your writing style, learn from these prompts, and say, okay, now I've got a model that's going to heavily favor the way that Katie Robbert writes. And so we will create a GPT-3.5 Katie, and it has its own special model in the OpenAI ecosystem, where now it's talking much more like you. It's going to make your Vanilla Ice jokes.

Oh, it knows too much about me. Exactly. And it will emphasize much less all the other different writing styles, because we're telling it, we want you to give extra weight to the types of words and language that Katie specifically uses. This is where, if you think about, again, a marketing agency: imagine a marketing agency is creating content for a customer, and maybe the customer, like our old friends at Citrix Systems, has a huge blog, like 3,000 blog posts. You would take all 3,000 blog posts, or maybe the top 10%, you know, the ones that drive the highest traffic, feed them in, and say, okay, now we're going to train a Citrix-specific version of GPT-3. And the benefit of that is that when you have an employee who goes and says, "write a blog post about some on-premise device," the prompt can be a lot shorter, because the model has now been tuned to that specific set of tasks. It's the Citrix content engine now. So you don't have to write four paragraphs of "you should sound like this and do this," because it's now built into the model as part of fine-tuning.
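The prompt-and-completion training file Chris describes can be sketched like this. The example pairs are placeholders, not actual Trust Insights newsletter data, and the command in the final comment follows OpenAI's documented fine-tuning flow of the time.

```python
import json

def to_jsonl(pairs) -> str:
    """Render (prompt, completion) pairs as JSONL, one object per line."""
    return "\n".join(
        json.dumps({"prompt": prompt, "completion": completion})
        for prompt, completion in pairs
    )

def write_training_file(pairs, path: str) -> None:
    """Write the training file OpenAI's fine-tuning endpoint expects."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(to_jsonl(pairs) + "\n")

# Placeholder pairs standing in for newsletter titles and cold opens.
pairs = [
    ("Write a newsletter about the uses of generative AI",
     "This week, let's talk about what generative AI can actually do..."),
    ("Write a newsletter about marketing governance",
     "Process is the unsung hero of every good marketing team..."),
]
# write_training_file(pairs, "training.jsonl")
# Then, roughly: openai api fine_tunes.create -t training.jsonl -m davinci
```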

Katie Robbert 15:45
That’s incredibly exciting, and also a little scary, you know, and I don’t want to go down the rabbit hole of like the pros and cons, because obviously, there’s a lot of risks with doing something like that in terms of misinformation. But, you know, I’m thinking about, you know, those of us who work really hard on our, you know, creative briefs and standards. And, you know, here’s what the tone of the company should be. Now, you don’t have to worry that if you bring on a new writer or a new team member, you have to, you know, double check that they’re getting the tone, right, like you’re building it into the first draft to say, this is the tone. And I mean, that to me is like a huge time saver.

Christopher Penn 16:32
Exactly. Right.

John Wall 16:34
So does it have to create a separate instance of GPT-3 just for your trained model? Do you have to take the whole thing? And that JSON file that's there, does that just run on your machine every time, or is there a container that has to be set up over on their side?

Christopher Penn 16:49
This is all hosted and run by OpenAI, the whole thing.

John Wall 16:52
So your training data is over there in an instance. Does this come with your account, or do you have to buy another instance? Is this a different product to make this happen?

Christopher Penn 17:03
No, it's the same thing, but the pricing is higher. So instead of two cents per 1,000 tokens, for your own trained models it's 12 cents per 1,000 tokens, right? So it's a 6x cost increase.

John Wall 17:14
Yeah, but they do all the hosting, you don’t have to do anything like that’s, that’s huge value still.

Christopher Penn 17:19
Exactly, exactly. So this is sort of step three in the evolution of your generative AI journey. Step one is basic proof of concept: can we even use this? You're just sitting there in ChatGPT, typing, right? Step two is: okay, we found some good use cases, we want to scale using them, so now we're using a programming language to work programmatically. Step three is: okay, we want this to be specific to our industry, our customers, etc., so let's fine-tune the model, and we'll run that. Step four in the process is when you say, hey, we've got stuff that we want to do, but we work with sensitive data: healthcare data, finance data, military data, things like that. We've got information we absolutely cannot, under any circumstances, send to a third party; it's just not allowed. Or maybe you're just rightfully a little paranoid about giving away the crown jewels. Or you say, hey, you know what, we see so much value in large language models that we know we're going to beat the heck out of this thing, and our OpenAI bill is going to be a gazillion dollars because we're making calls to the thing left, right, and center. The fourth stage in the journey is: okay, let's tune and run our own model on our own hardware, in house. And so this is an example; you can see on screen this is a Google Colab notebook, a virtual environment. You could do this on your own machinery, like a good gaming laptop. You would download a free open-source model from a provider like EleutherAI, the GPT-J model, a 6-billion-parameter model. And then on your own machine, with your own data, you would fine-tune that model for your very specific purposes, and you would run it on your own hardware at your company. This is for when you realize that a large language model is part of the secret sauce of your company, and you absolutely, positively cannot let anyone else have access to it.
Very recently, two weeks ago, Bloomberg, the financial services company, announced BloombergGPT. What they did was take 41 years of proprietary data from the terminal, every stock trade and transaction, the inquiries, the analyst columns, and they fine-tuned a version of a model like this to run internally. So now, when you use the Bloomberg terminal, you can pop up a little window and say, show me 10 stocks that have had 5% CAGR over the last five years plus a minimum dividend of 9%, or $9 per share, and it will come up with that analysis for you. Or show me the Bollinger bands for this stock over this period of time. Can it write limericks? Not very well anymore, right? It's certainly not going to be quoting Vanilla Ice to you, but it is going to do exactly what they want it to do within that context. And so for the most advanced companies, or the companies that have substantial risks if the data goes outside their borders, this is how they're going to use large language models. And this is sort of the pinnacle of the journey: you're basically your own AI company now.
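A quick back-of-the-envelope check on whether a model in this class fits on your own hardware. The 6-billion-parameter figure is GPT-J's, as mentioned above; bytes per parameter depends on the numeric precision you load the model at, so the result is a rough floor, not an exact requirement.

```python
def model_memory_gb(parameters: float, bytes_per_param: int) -> float:
    """Rough weight-storage footprint; ignores activations and overhead."""
    return parameters * bytes_per_param / 1e9

# GPT-J at 6 billion parameters, loaded in 16-bit floats (2 bytes each):
# roughly 12 GB just for the weights, which is why a good gaming laptop
# or a cloud notebook like Google Colab is the realistic floor for
# running it in house.
gptj_fp16_gb = model_memory_gb(6e9, 2)
```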

Katie Robbert 20:30
I'm overwhelmed. I'm just gonna... I'm overwhelmed. You know, I think I fall into the category of the majority of end users who are really just scratching the surface. I understand how a system like ChatGPT works, and again, it's only recently that I understand it. And there are still, as I was describing my understanding of how these things work and you're saying I was roughly correct, there are still pieces that I don't understand. So as you're describing these more sophisticated versions of generative AI, it's overwhelming for a non-data scientist. And so, I mean, this might be a big loaded question, but how do we keep up? What is it that we need to understand in terms of what's happening with generative AI and what's next? And that might be a big question.

Christopher Penn 21:28
It starts with the first P of the five Ps, which is: what is the purpose? This has been true of AI for, you know, decades. Generally speaking, if you're using AI, generative or otherwise, just to make things a little bit more efficient, you're probably going to find a vendor, like an OpenAI or a GoCharlie or any of these companies, and just use their software and let them deal with the headaches of all the advanced stuff, because all you want to do is very specific tasks. And that's great; you don't have to keep up at that point. Just use the tools as they're intended to be used. You know, when Microsoft Office rolls out its GPT integration stuff, you can say, hey, Microsoft Word, make a two-page summary document of this slide deck, and it will do its thing, and you'll enjoy the usage of it. If generative AI is going to be part of the secret sauce of the company, like it's going to be how your company delivers value, then yeah, you need to dig into this stuff and understand the technology. So, real simple example: we are writing code right now. It's actually operational; I just haven't told you yet.

Katie Robbert 22:45
You know, how much I love surprises, Chris.

Christopher Penn 22:46
I know, I love giving away surprises on the air in public. It takes sets of Google Analytics data, passes it to the GPT models, and says, write an analysis of this data, right? So now, instead of us having to sit there and go, okay, what do we see in this client's data, the GPT model will do the first draft. Then we look at the data, and we inspect the analysis and go, okay, that was a pretty good analysis, but you missed this part here, and so on and so forth. That's something that we're going to protect, right? And eventually we may evolve into running our own fine-tuned language model specifically for that purpose, because I see it being part of the secret sauce of what Trust Insights does, and I don't necessarily want to let somebody else have access to that. So in terms of how do you keep up: it depends on how much you need to keep up.

Katie Robbert 23:40
Makes sense. Well, before we get back to that question, I will let you know that even though I haven't seen what this thing can do yet, I already have notes for you. I have some thoughts. We'll come back to that offline. But I mean, that's really, you know, I think that's not something that non-data scientists or non-technical folks were aware generative AI could be doing. Because we think of it as "write me a blog post" or "write me some social copy." Or, you know, I actually saw, and this was something that to me was a bit mind-blowing, that it can create a spreadsheet if you give it the right parameters. And I was like, well, of course it can, but I hadn't even thought that that was a use case for it. And so now, if you're saying that one of the use cases is actual data analysis, again, I'm still overwhelmed. You haven't made me less overwhelmed. You haven't made me whelmed.

Christopher Penn 24:42
Okay, yeah. I mean, the software can do all sorts of text, right? It is a language model that deals with text and text-like objects. So let's see if we can make this work here.

So, what in the world was that? That was text, right? Music can be written out as text. So what I did was I fed the GPT-4 model a set of lyrics, gave it the song genre, and said, write the music that accompanies these lyrics. And that's what it generated. Now, it generated that as text; I then had to hand it to a guitarist to actually play it. But that's an example of the kinds of things these tools can do. If it is text, or something like text, meaning it's formatted like text, the tools can work on it. What's formatted like text? DNA sequences, right? RNA sequences. You feed it a genome and say, identify anomalies, or develop a candidate vaccine for this novel virus that we've never seen before. There are so many different applications, and one of the most important things that you can do, even as a beginning user of these tools, is to set up governance and track what you're doing. So this is an example: we homeschool our kids, and I wrote a prompt to grade papers, so that I can take the reports my kids write and grade them. It's text; it's just processing data. Another one that I particularly enjoy is a grocery list: here's the list of meals that I'm planning to cook this week; what is the probable list of ingredients? Again, this is all word association stuff, and it extracts it out and makes me a nice grocery list that I can then take to the store and go, wow, I forgot that I needed half of these things. Writing cease-and-desist notices. Summarizing; remember, we talked on a past podcast episode about the four major tasks: summarization, generation, extraction, and rewriting. Those are the big things these models are really good at doing. Writing code. Developing answers for market research. Doing personality assessments. There's no shortage of text-like things it can do.
So my thing that I tell you to do that's the most important: save your prompts, right? Save them and tune them. And I would encourage you, if you have not already: on the Trust Insights website there is a totally free, no-strings-attached download on how to write an effective prompt. It gives you the structure of a really good prompt. Go and grab this PDF, pin it up on your office wall; it will help you write better prompts. And then, once you've got that, you need the governance to store these things, so that (a) you can find them when you're trying to remember what you did, and (b), depending on your company, you may want to share them. You know, one of my favorites is, I get junk email all the time, and so I have a prompt from my "assistant" Grace Parker Thompson, aka GPT, to write go-away letters.
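One lightweight way to act on the "save your prompts" advice is a small prompt registry kept as JSON, so prompts can be found again, dated, and shared with a team. The structure and the example prompts here are illustrative, not a Trust Insights standard.

```python
import json
from datetime import date

def save_prompt(library: dict, name: str, prompt: str, task: str) -> dict:
    """Record a prompt along with its task type and the date it was saved."""
    library[name] = {
        "prompt": prompt,
        "task": task,
        "saved": str(date.today()),
    }
    return library

library: dict = {}
save_prompt(
    library, "grocery-list",
    "Here is the list of meals I'm planning to cook this week. "
    "What is the probable list of ingredients?",
    "extraction",
)
save_prompt(
    library, "go-away-letter",
    "You are Grace Parker Thompson, an executive assistant. Write a "
    "polite letter declining this unsolicited sales pitch.",
    "generation",
)
# Shareable with the team as a file:
# json.dump(library, open("prompts.json", "w"), indent=2)
```

Tagging each prompt with one of the four major tasks (summarization, generation, extraction, rewriting) makes the library searchable as it grows.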

John Wall 28:25
That's great. And because you've got that, every one is different, so people can't, you know, filter them out or whatever. You've got a unique rejection letter for everyone.

Christopher Penn 28:36
Exactly, exactly. And you know, it's very, very straightforward stuff. Yeah, the models are capable of doing anything that is text or text-like. So you can put in markdown tables of data; when we export data out of Google Analytics, it comes in as markdown, and it can read it. If you're familiar with the ABC notation of music as text, you can put entire music scores in ABC notation in and have it process them. If you can represent it as text, it can go in, and it can come back out.

John Wall 29:04
How about HTML, too? That would be another thing. You could do iframes and have all kinds of HTML,

Christopher Penn 29:09
CSS, JavaScript. I was trying to help solve a problem for a friend yesterday. There was this page on a website, it was contest voting, and someone was hacking the site, and we were trying to figure out where. I figured it out: I fed the backend JavaScript, which I don't read particularly well, into ChatGPT and said, find the vulnerability in this code. And it turns out there was a lot of proofing and checking up front, but on the validation email, there was no validation whatsoever. So that was how the hackers were spamming the system. A couple of weeks ago, Twitter released its open-source recommendation algorithm, right? And this thing is gigantic; there are thousands of files and stuff in here. It's also written in three languages: Scala, Java, and Thrift. I don't write any of those languages, so what do I do? I find the keywords in the code that I care about, like, you know, "retweet" or something like that, find that code, and paste it into ChatGPT and say, explain this code to me. What does this code do? What does it mean? And it did a very good job, to the point where I can make recommendations to people now on what to do on Twitter, because GPT-4 explained it.

Katie Robbert 30:26
All of this, and okay, I keep saying it because I need to process everything you're going over. I mean, it all reeks of: you need process and governance, you need planning. Even if you do nothing else, create a user story. So, you know, let's take the very simple example of you creating a grocery list, starting with a user story: as a busy father who also works full time, I want to use this generative AI system to help me plan a shopping list, so that I get in and out of the store really, really fast, because I don't have time. And so you're like, okay, I know what I need to do; what do I need to give the generative AI prompt so that I can get that grocery list? And then that becomes the thing that you use over and over again, and you probably just swap out your meal planning for the week. Which sounds really straightforward, because it's just you and your family. But to your point, Chris, as you're bringing it into larger organizations and building it into the strategy, you really want to make sure that you have those user stories really tight and concise and clear, so that you can then build that repeated code to generate hundreds of first drafts to farm out to all of your content writers and editors. Because I remember we used to work with a client who would post like a dozen blog posts a day; I don't remember the exact number, but the amount of content was enormous, and wrangling all of the writers and the topics and the editing was a big undertaking. And so being able to automate some of that using these systems... to me, it's too bad that didn't exist then, but I hope they're using it now.

Christopher Penn 32:27
And here's what I love about user stories: the first two parts of the user story are the prompt, right? "As a" becomes "you are": you are a GPT model, I want you to do this thing. And the "so that" is our part as the humans, to make sure it did the thing. If you look at this prompt, for example, this is one where I needed to help someone write some wedding vows this week: "You will act as a secular justice of the peace officiating weddings. You specialize in writing wedding ceremonies," blah, blah, blah. "Your first task is to write some wedding vows for these two women," and so on and so forth. And "as a, I want to, so that," right? We're basically taking that user story and kind of adapting it, and that is what we're telling the GPT model to do. And if you have written out the user stories and you're super clear about them, guess what? It will generate good results. If you're like, oh, just write a limerick about something, well, you're going to get generic results, right? You're going to get poorly tuned results, as opposed to being very clear in your user stories and very clear in your prompt writing to get very clear results.
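The user-story-to-prompt mapping Chris describes can be sketched as a one-line template. The exact wording of the template is an assumption; the example mirrors the wedding-vows prompt from the episode.

```python
def user_story_to_prompt(persona: str, want: str) -> str:
    """'As a <persona>, I want to <want>' becomes a role-plus-task prompt.

    The 'so that' clause stays with the humans: it's the acceptance
    criterion you check the output against.
    """
    return f"You will act as {persona}. Your task is to {want}."

# The wedding-vows example from the episode, recast as a user story:
prompt = user_story_to_prompt(
    "a secular justice of the peace who specializes in writing wedding ceremonies",
    "write wedding vows for these two women",
)
```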

Katie Robbert 33:39
John, hanging in there?

John Wall 33:41
Limericks for everyone.

Christopher Penn 33:43
Everyone. Now, let's talk in the last few minutes about what's next. What's next?

Katie Robbert 33:48
Well, I’m scared.

Christopher Penn 33:51
Okay, what's next, and it exists already but not in wide circulation, is what the GPT-4 API calls multimodal capability. Multimodal means it can take in more than one form of input. Right now, today, you have to put everything in as text. The new API permits you to load image data into the GPT-4 model and get text back. So if I were to take a screenshot right now, I could feed in this screenshot, and it would be able to interpret: there are three people talking, Katie Robbert, John Wall, and Christopher Penn. It might describe our office backgrounds, depending on our prompt. These are all things that are embedded in images. I could take some sheet music, right? I'll take an image of some sheet music, load that, and say, interpret this music score, transcribe it, and then rescore it from C major to D minor. I could take, and this is where, for Trust Insights, we are very, very interested in doing this, a screenshot of a report out of your Salesforce instance and then marry that with a screenshot of your Google Analytics instance to say, tell me how well the marketing strategy is working based on the data in these two charts. These are all multimodal capabilities. It's kind of the reverse of image generation, right? We used to type "a dog on a skateboard" and it makes a picture. Now it's, okay, here's the picture, be detailed in explaining it.
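For the curious, here is roughly what an image-plus-text request could look like. Important caveat: this multimodal capability wasn't publicly available at recording time, so the model name and message shape below are assumptions patterned on OpenAI's chat-completions request format, not a confirmed interface; no network call is made, we just build and inspect the payload.

```python
# Sketch of a multimodal (image plus text) request payload in the
# chat-completions style. The model identifier and field layout are
# assumptions, not a confirmed API; we only construct the dict here.

payload = {
    "model": "gpt-4-vision",  # hypothetical model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe who appears in this "
                            "livestream screenshot.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/screenshot.png"},
                },
            ],
        }
    ],
}

# "Multimodal" in practice: more than one kind of input in one message.
kinds = [part["type"] for part in payload["messages"][0]["content"]]
print(kinds)
```

The key idea is that the text prompt and the image travel together in a single message, which is exactly the "more than one form of input" Chris describes.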

Katie Robbert 35:23
I would imagine, you know, one of the projects that we're working on for our own website is making sure that all of the images have alt text. I would imagine that using something like that to help generate the alt text in a more descriptive way, in a more useful way, and then getting that into your website, is going to be a huge time saver, but also a huge benefit to the people who really rely on that information.

Christopher Penn 35:54
Yep. Another multimodal capability, if you think about this: if you were, say, H&R Block, the company. Boy, wouldn't it save you time to scan a picture of a customer's tax return and feed it to the GPT-4 model and say, what tax implications did I miss? What deductions did I not do right? And it would go through and essentially audit that and say, yeah, you missed these things. Can you imagine how much time that would save your accounting firm? To be able to say, okay, here's your Schedule K-1, and here are seven anomalies, please have a human accountant go check it out.

Katie Robbert 36:28
It's interesting, because the point that keeps coming up in the back of my mind is: this is all really amazing stuff, if you can convince people that AI is not going to take their job, that AI is going to change the way that they do their job. That's my first thought as you're describing your accounting firm, your content writers, whoever it is. They're probably like, I don't even want to bring AI in, because I'm going to lose my job. And so I feel like there's a whole other conversation and learning curve that needs to happen with all of this. For us, we're like, yeah, bring it on, we can do so much more. But for a lot of companies, there's going to be a lot of insecurity around introducing this, even though it's only going to make what they do better.

Christopher Penn 37:15
That is 100% true. And the thing that these companies and people need to understand is this. It's in the keynote that we do: a person with AI is going to be more valuable than a person without AI. Right? If you are skilled at the use of AI, you will be more valuable than the employee without it. The company that embraces AI is going to be more valuable and more nimble and more effective and more profitable than the company without it. So yes, we still need people, 100%. There's no chart here with a robot and no person. But if your company is like, no, no, we're just going to wait and see if this whole AI thing is a fad? Yeah, those are the companies that said, oh yeah, that internet thing is totally a fad, it's going nowhere. This is a major sea change for the way that people do business, and it is going to impact everybody. And one of the big strategic things people need to be thinking about is how you train employees to do this. Because if you don't, (a) competitors will beat you, and (b) there will be so few junior employees, because junior employee tasks are the easiest to automate, that in 10 years' time you will not have enough of a pool of qualified managers and executives to run your company, because everyone older will have retired. And you're like, well, we got rid of everybody for cost savings, but now there are like five people left in the company, and Bob over there keeps stealing stuff.

Katie Robbert 38:56
If you want help with that training, you can reach us at TrustInsights.ai/contact, and you will reach John Wall, who is not, as far as I know, a robot. He is a real person, but he is the lead statistician of Trust Insights, as he established last week.

Christopher Penn 39:12
He is. I would say there are four ways that an agency like Trust Insights can help you. Number one is the basics, right? If you're at level one and you're trying to figure out what to do with ChatGPT, we can obviously help you with that. Number two, if you want automation of your prompts, again, we can write that code. Number three, if you want to start fine-tuning a model specific to your business, we know how to do that. And number four, as you saw, I was playing around over the weekend building a GPT-J-6B fine-tuned model to run on my own laptop, just because I can. I'm training it to write fanfiction; I want to see if I can do that well. And anyway, I don't want to train it on customer data.

Katie Robbert 39:55
I'm fully with you. I always learn so much on these live streams, just in terms of the capabilities and the way in which you're thinking about it. That, to me, is the fascinating part. We do have a question. Yep, go ahead. The question is: where would you draw the line on how to use it? I think there would need to be governance within companies so they aren't sharing proprietary information such as data and process, correct? Yeah, it really depends on what kind of service you want to offer to your clients and your customers. For us, we want to keep a lot of it proprietary so that we can share the output of it. But there are going to be companies, I mean, you've probably seen the "here are the 100 most useful prompts to use for ChatGPT" lists. There are people who are willing to give it away for free, or for a small cost. So it really comes down to what your specific organization, company, or consultancy wants to be doing with the customers. Chris, John, what would you add to that?

Christopher Penn 41:09
What Shane's talking about is at level four, right, which is what we talked about: building a custom model that you run yourself, that you own, so that you know that proprietary data, that sensitive protected information, is not leaving your control. This is going to be especially important for any industry where you've got sensitive personal information (SPI), but also regulatory requirements. So if you're trying to operate, say, in Italy, which just banned ChatGPT because they say it violates GDPR: if you are building and running your own model that you can certify as based on certain training data that does not run afoul of GDPR, then you can use that model, even though you can't use the GPT-3 family from OpenAI, because you can demonstrate in a court of law, here's what went into the model and how we are compliant, so that we can continue to use it. So yeah, Shane, it's not that you draw a line; it's which of the four stages we talked about you're going to be at. And if you're talking about sensitive information, you go straight to stage four, which is running it on your own stuff.

Katie Robbert 42:20
John, thoughts?

John Wall 42:22
This is definitely something you just need to keep an eye on for the opportunities, because this is an area of technology that's emerging, and everybody's going to be using old ways of thinking to figure out what to do with it. You know, "create selfies," like, well, yeah, that's great, that makes for a neat toy. But there are going to be a lot of things that come out of this that will change the businesses people are working in. Especially when you think about image analysis, there's so much that's going to be available here, as far as security applications, or classifying weather data and imagery, all kinds of stuff. There are just a million opportunities here. So play around as much as you can, because you might stumble upon something that will completely change your vertical.

Christopher Penn 43:03
I will say this: there is going to be a lot of money to be made on customizing models. Because as companies adopt this more broadly and get past the "write a limerick" stage, they'll go, hmm, maybe we shouldn't be giving all of our data to a tech company we don't actually know all that well. And maybe we do want to do very specific things really, really well, because a fine-tuned small model that runs really well beats the pants off the big models at those specific things. So for progressive business leaders who are willing to invest the time and the money to hire the right talent, or are willing to spend the money to bring in the right partners, that's where the money is in the next couple of years.

Katie Robbert 43:50
I'm looking forward to us individually having those models, you know, the John model, the Chris model, the Katie model. And I mean this in all seriousness, to help with some of the writing. I write very differently than either of you, and so having just one model try to mimic all three of us isn't going to work. So I'm very much looking forward to that, because we all have very distinct voices when we write and when we talk about things.

Christopher Penn 44:20
We do. After this meeting, remind me to show you the neural transfer prompt for ChatGPT; it's kind of fun. On that note, thanks for tuning in today, and we will see you all next week. Thanks for watching today. Be sure to subscribe to our show wherever you're watching it. For more resources, and to learn more, check out the Trust Insights podcast at trustinsights.ai/tipodcast, and our weekly email newsletter at trustinsights.ai/newsletter. Got questions about what you saw in today's episode? Join our free Analytics for Marketers Slack group at trustinsights.ai/analyticsformarketers. See you next time!

Transcribed by https://otter.ai


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
