In-Ear Insights: Generative AI in 2024

In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the rapid advancements in generative AI over the past year, including significant developments across major AI models, and what they mean for 2024. They explore how these technologies impact businesses and the importance of understanding their capabilities and limitations. The conversation also delves into the role of data quality and organization in leveraging AI effectively, and the potential risks and ethical considerations in AI implementation. Tune in to gain insights into navigating the evolving landscape of AI in 2024.

Watch the video here:

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00

In this week’s In-Ear Insights, Happy New Year, it is 2024.

We are almost a quarter of the way through the century, which is mind boggling when you think about it.

And the topic du jour, unsurprisingly, remains generative AI and the absolutely insane year that 2023 was. Just a quick recap.

We saw OpenAI go from GPT-3.5 and ChatGPT to GPT-4, to GPT-4V, and to GPT-4 Turbo: three major releases within a year, including the ability for it to see as well as read.

We saw Google go from Bard, to Bard 2, to Bard with Gemini, although the most recent benchmarks suggest that Gemini is not any better than OpenAI's free ChatGPT with GPT-3.5.

We saw Anthropic go from Claude, to Claude 2, to Claude 2.1 before the end of the year, which has the largest context window, 200,000 tokens, although it is not great.

And we saw the open source community developing some models that are absolutely mind-blowing, going from Meta's first Llama release in early 2023 to Llama 2, and then Mistral, the French company, going from nothing to Mistral 7B to Mixtral, the mixture-of-experts model with industry-leading performance for models that you can run on your laptop.

When you think about it, you don't need a server room full of tools to have generative AI working for you.

On the image side, we saw Stability AI go from Stable Diffusion to Stable Diffusion XL to SDXL Turbo, all within a year's time, which now allows for real-time animation, which is mind-bending, with video clips up to 10 seconds long.

And on the regulatory side, we saw the first actual legislation regulating the use of generative AI come from the EU: the EU's AI Act, which was passed at the very, very tail end of 2023.

The big provision there for marketers to pay attention to, of course, is disclosure and transparency.

So Katie, given this astonishing year that we just had, what are you looking at? And what are you looking forward to for the year ahead?

Katie Robbert 2:16

Well, before I get into that, I just want to point out that, regardless of all of these advances in technology, the same issue exists: developers do not give a hoot about version numbers.

So you have GPT-3.5, GPT-4, GPT-4V, GPT-4 ABC, and then 4.2, and then six, and then nine.

And as someone who used to manage software engineering teams, this drove me nuts, because we would outline a plan for what the version numbers would be.

And then they would do something completely different and be like, No, well, we did all those versions, you just didn’t see them, they’re not public facing.

So we went through 1.0 and 1.2 and 1.2.3, whatever.

And now we’re on six, I’m like, Well, wait a second.

So I just want to sort of point out that, as I’m looking forward to 2024, my focus is going to be on the fact that, you know, despite the new technology, the same issues exist.

And so I guess that’s sort of a Debbie Downer way of saying that, you know, I’m not overly focused on the new tech itself.

So, you know, Chris, you have generative AI covered; I don't need to hyper-focus on what's going on. If I need to know, thankfully, I'm in a position where I can just ask you, as my colleague and my co-founder and my teammate. My focus needs to be on the fact that it's going to surface the same issues that we've always been having with people and process.

But there’s this misunderstanding.

So my focus, and prediction, for 2024 is that there is a misunderstanding of what problems generative AI will be solving for any given team or company or individual.

You know, I think I saw something that you posted about, you know, training algorithms, like if you only ever interact with dog posts on Instagram, then you’re gonna get more dog posts.

Or if you only ever, you know, cook one bread recipe, you’re gonna get really good at that bread recipe.

I’m sort of paraphrasing now.

And then I think your point was that if you continue to train generative AI on specific tasks, your generative AI is going to get really, really good at performing those tasks for you, which is totally true.

But then if you look behind you and your whole kitchen is on fire, then you haven’t really solved the problem.

That’s

Christopher Penn 4:54

fair.

That’s fair.

Yeah, that particular post was more in relation to people looking at the year ahead and what they expected to see. I was reminding people that you, as a human being, have a lot of agency as to what the machines serve you and what content you choose to see.

I saw someone on Threads the other day saying, there's a lot of this content I don't want to see on here, and I was like, then you're not using Threads properly, because I only see the stuff that I want to see. I've gotten it so well trained: I want to see these 10 people and their posts as much as possible.

I want to see these topics as much as possible.

And for me, it's a wonderful place to be because I see just the things I want.

But

Katie Robbert 5:38

I think that's proving my point, though: there's this misunderstanding that this new technology is suddenly going to solve this problem.

And you the human, don’t have to do anything.

Christopher Penn 5:49

Oh, absolutely.

That's 100% correct.

Yeah.

If you just sit back and let the machine attempt to infer based on your behavior, and you're not conscious and thoughtful about how you interact with it, it's going to be a crapshoot.

So this is interesting.

This is a problem called sparsity.

And sparsity refers to not having enough data to make a decision.

If you were to use a brand new social network, and people saw this in the very early days of Threads last year, there really isn't a history of what you like and what you don't like.

So the machine has what's called a sparsity problem; it's like, I'm just going to throw a bunch of things at you and see what sticks. Do you engage with any of this at all? Do you dwell, and for how long, when you're thumbing through your feed? Do you stop on this post or that post? It tries to guess as quickly as possible what to serve you based on limited signals: a sparse data problem.

And then over time, as it has more data, it can sort of fill in the blanks better.

But a big part of what you are served by a machine is conditioned, is primed, by those early interactions.

So if you were angrily commenting on a certain politician on Threads in the first seven days, there's going to be a long-term echo, sort of a shadow, of those interactions, because the model built essentially its best guess about you from those early days.

And it takes a lot of intentional work to get past that, like, okay, I'm going to go heart and love every single pit bull post.

And hopefully, over time, I see less of that and more of the pit bulls that we want.

So to your point, that is a people and process thing: the people have to understand the process of how the machine works, and build their own processes to condition the machine, in a sparse data problem.
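To make that cold-start dynamic concrete, here is a minimal, hypothetical sketch; every number and threshold in it is invented for illustration, not any platform's actual algorithm:

```python
import numpy as np

# Toy sketch of the sparsity (cold start) problem: a new user's history
# is almost all zeros, so the system explores heavily at first.
rng = np.random.default_rng(42)
n_items = 500

user_history = np.zeros(n_items)
user_history[rng.choice(n_items, size=3, replace=False)] = 1.0  # only 3 signals

# Learned scores: with so little data, those 3 early interactions dominate.
item_scores = rng.random(n_items) * 0.1 + user_history

def recommend(history: np.ndarray, scores: np.ndarray, n: int = 10) -> np.ndarray:
    """Explore while data is sparse; exploit once signals accumulate."""
    p_explore = 0.5 if history.sum() < 20 else 0.05
    if rng.random() < p_explore:
        # "Throw a bunch of things and see what sticks."
        return rng.choice(len(scores), size=n, replace=False)
    # Otherwise serve what the early signals suggest: the long-term "echo."
    return np.argsort(scores)[::-1][:n]

print(recommend(user_history, item_scores))
```

Because `item_scores` is built almost entirely from those first few interactions, early behavior keeps shaping what gets served long afterward, which is the shadow effect described above.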

If you are thinking about rolling out generative AI at your company, one of the first things you have to think about from a process perspective is: how do we address the sparsity problem early on? You can do it with things like rich, thoughtfully curated training datasets, but a lot of companies that don't know that just rush headlong. Chevrolet of Watsonville, for example, just put up basically an empty OpenAI ChatGPT endpoint.

And suddenly, people were using the full capabilities of the paid version of ChatGPT on Chevrolet of Watsonville's chatbot.

Because they just didn't think about how this thing worked.

So it's, like, writing Python code for them.

It's writing contractual, legally binding agreements on behalf of the dealer.
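A hypothetical sketch of the kind of guardrails that were missing: constrain the system prompt and reject off-topic requests before they reach the model at all. `call_llm` is a stand-in for whatever chat API you deploy, and the topic list is invented for illustration:

```python
# Invented guardrail sketch; not Chevrolet's or OpenAI's actual setup.
ALLOWED_TOPICS = ("vehicle", "inventory", "test drive", "service", "financing")

SYSTEM_PROMPT = (
    "You are a dealership assistant. Only answer questions about our "
    "vehicles and services. You cannot make offers, quote binding prices, "
    "or agree to any terms. Politely decline anything else."
)

def call_llm(system: str, user: str) -> str:
    """Stand-in for a real chat completion call."""
    return "..."

def answer(user_message: str) -> str:
    # Cheap pre-filter: don't send clearly off-topic requests (like
    # "write me Python code") to the paid model endpoint at all.
    if not any(topic in user_message.lower() for topic in ALLOWED_TOPICS):
        return "I can only help with questions about our vehicles and services."
    return call_llm(SYSTEM_PROMPT, user_message)
```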

Well,

Katie Robbert 8:37

And I guess, when I think about what's coming in 2024, that's where my head goes: there's going to be more opportunity for companies like Trust Insights to really educate and guide the conversation and the implementations of custom GPTs and generative AI, in whatever form people are looking at, to make sure that they're doing that upfront work.

And, you know, I mean, this is something that, you know, I am definitely a broken record, at this point, like, you have to do that upfront work, you have to do your requirements gathering, you have to know why you’re doing this.

And that example that you just gave Chris is 100% why I harp on this point.

And so I see more of that. I see more human error.

I see more security breaches; I see more, to use the technical term, oopsies going public.

And so I think that humans are going to continue to get more and more careless and lazy, because they think the tech is going to solve that problem and automatically do it. So, looking ahead at the year, I think that where my focus is going to be is really trying to anticipate those things as much as I can with our clients and with our prospects, and say, let us help you get ahead of a big security breach like the one that local Chevrolet dealership just had, where people are just using your stuff for free, and you don't even know it.

Christopher Penn 10:23

It’s true.

And the Chevrolet of Watsonville example really highlights sort of that fourth quadrant in the Rumsfeld matrix, right? The known knowns, the known unknowns, the unknown knowns, and the unknown unknowns.

And the unknown unknowns is where you don't know what you don't know. If you've never worked in generative AI or any form of AI, you don't know about the sparsity problem.

And because you don't know that it exists, you don't know how to deal with it.

And that can become problematic.

So there are things that you know you don't know. You know that you don't know the inner mathematics of language models; that's totally fine.

And it's a question of whether or not you need to know that; you can make that evaluation.

And there are some things that you don't know that you know, because you've forgotten, or you have the institutional knowledge, or you may have made a new hire, for example, and that person knows some things, but you don't know that they know those things.

Across all the staff and teams we've ever worked with, Katie, we've had people where it was like, I didn't know you knew that; that's pretty cool.

Like, we had this one marketing analyst who was a certified pyrotechnician, who could deploy fireworks safely.

I'm like, okay, that's not a skill that's going to come in super handy at the agency, but at the same time, I didn't know that we collectively had that knowledge within our organization.

But the "you don't know what you don't know" is the big problem for a lot of companies in generative AI.

Because the field is so new, because it’s evolving so quickly.

And because the risks are still unclear, because a lot of people still don’t understand how these tools work.

There's a big pile of "you don't know what you don't know," and as demonstrated by some of these more glaring examples, that can bite you.

Katie Robbert 12:05

Well, and I think that, again, sort of, it’s an opportunity for companies like Trust Insights to do that education.

Because, Chris, I mean, I know you well enough to know that you are striving to stay on top of all of the information that’s coming out about generative AI as quickly as it’s coming out to, you know, do your best to be able to understand it.

And so I think our responsibility then becomes, you know, helping distill down that information to what people do need to know.

So do I need to understand the math behind how these things work? Probably not.

But do I need to understand, sort of, the bullet points, the high level of the risks, and, you know, if I do this, then this is going to happen, sort of all of those if-then statements with the prompts and with the software itself? And that's where I think 2024 is going to be a big year of education, through courses, through consulting, through content, through our livestreams and podcasts.

So I know you're asking the question of what's coming in 2024 for generative AI, but naturally, my thought is, what does that mean for Trust Insights as a whole? And that is education.

And I think if I take it to that bigger picture of generative AI, it still is education.

And there's going to be a lot of trial and error, there's going to be a lot of people experimenting, there's going to be a lot of missteps and mistakes and things that need to be fixed and cleaned up.

All for the sake of learning it.

And I think that if you are someone who is interested in knowing more about generative AI, you just need to start experimenting with it.

While having that learning of what could go wrong. And I think the "what could go wrong" is where a lot of these conversations need to focus first, especially in regulated industries, especially in companies that deal with protected health information and personally identifiable information, especially in companies that deal with things like GDPR and other customer data.

And I'd say that's where every company should be starting: what are the risks? What is our most sensitive information that we need to be protecting?

Christopher Penn 14:35

I think it even goes beyond just sensitive information. Absolutely, you want to make sure that it's protected in your use of AI and that you're not using it in ways you shouldn't be.

We actually, just before the holiday break, added an entire module to our AI course on the EU's AI Act.

But the one provision I think is most cogent for the average marketer who's not doing the deployment of generative AI is the one on transparency and disclosure. So, to take a quick step back, because you mentioned GDPR: GDPR was the data privacy regulation the EU rolled out in 2018 that quickly became the gold standard planet-wide for how we should keep data safe and for doing business within the EU or with EU citizens.

You had to be GDPR compliant, even if you didn’t mean to be doing business in the EU, right.

So the Trust Insights website, for example, still has to meet basic compliance measures, not just because of GDPR, but also because states like California adopted their own versions of GDPR, first with CCPA and then CPRA, which took effect last year. So the EU sort of led the way and everyone else copied off of that, because why not? If you meet the most stringent standard, then meeting lesser standards is pretty easy.

The EU AI Act promises to do pretty much the same thing.

So the EU planted a flag in the sand and said, this is what we think companies should and should not do with AI.

And in the absence of other leadership, because there is an absence of leadership in the world around AI, the EU's version is probably going to become the gold standard again.

And so whether or not you are actively doing business in the EU, if EU citizens are using your services and you're using generative AI, you have to abide by it.

So for example, with our course: our course is available globally, and we have seen registrations from people within the EU; therefore, we are governed by the EU AI Act, whether we want to be or not.

The big one that affects everyone is disclosure: you have to disclose when you are using generative AI. We've had a page on the Trust Insights website, which I link to every week in my own personal newsletter, about why it's important to disclose the use of AI from a US copyright perspective.

But I actually have to go and amend that page to say it's also now required by law within the EU: if you use AI for content generation for anything that the public interacts with, you must disclose it. It's no longer optional.

It's no longer just a good idea.

It’s now required.

Katie Robbert 17:02

I don’t want to go too far off course.

And I think maybe this is a discussion for another podcast: why are we disclosing the use of AI when we aren't disclosing the use of, say, a copywriter or an editor, a human copywriter or human editor? Those are just some of the things that individuals and companies should be thinking about: what do I need to know as I'm going on this journey with generative AI, as I'm bringing it into my team, as we're using it for content? Those are the questions they should be asking.

And so I think a big part of 2024 is going to be that curiosity: what questions should I even be asking? And if you're not asking questions, then you're definitely approaching it the wrong way; there's no way that any one person is going to know all of the answers.

And, Chris, you know this: the information is changing so quickly that the questions that you're asking and answering today, you'll have to ask and answer again tomorrow, and the day after, and the day after that.

And so I would say, if you haven't set your intention yet for 2024, a good place to start is: be curious.

Exactly.

Christopher Penn 18:22

From a technological perspective, there are a bunch of big things this year, but a couple of the ones I think are important. One, today's models that claim to be multimodal kind of are, but kind of aren't; their performance definitely indicates that they're kind of a mixed bag at this point, and

Katie Robbert 18:41

Step back and define multimodal models.

Christopher Penn 18:44

So a multimodal model is a model which you chat with.

But you can give it an image and say, hey, describe what you see in this image; that'd be an example.

Or you could upload an audio file and say, what do you hear? Or we could upload a video and say, what do you see? And on the flip side, there are also cases where you can chat with a model and say, hey, make me a picture of a chicken wearing a wintertime hat and a red scarf.

And the model should come up with something that looks like this, whether it actually does or not, we’ll find out.

However, the models that we have these days appear to be ensembles; they don't appear to be truly natively multimodal.

And the best way you can see this is if you use ChatGPT and the DALL-E extension within it.

You'll give it some instructions, like maybe a passenger car driving down the highway with four people singing, and it puts five people in the car. Like, no, no, four people, not five.

And it really just can't comply, because it can't see what it's creating.

It can only create prompts that it passes on to the image engine, which then gives back results, and it becomes very clear that it has no idea what it's creating, because it's not a true multimodal model.

It’s an ensemble.
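Here is a minimal sketch of that ensemble pattern; both functions are hypothetical stand-ins for illustration, not a real vendor API:

```python
def chat_model(instruction: str) -> str:
    """Stand-in for the language model: turns a request into an image prompt."""
    return f"A photo of {instruction}, highly detailed"

def image_model(prompt: str) -> bytes:
    """Stand-in for the image engine: returns rendered pixels."""
    return b"...PNG bytes..."

user_request = "a passenger car driving down the highway with four people singing"
prompt = chat_model(user_request)  # text in, text out
image = image_model(prompt)        # text in, pixels out

# The chat model never receives `image` back, so it cannot check that the
# picture actually has four people and not five. A truly native multimodal
# model would process text and pixels inside one shared model.
```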

Katie Robbert 19:57

I just want to stop you there for one second.

I apologize, but I don't know why the thought of "it can't see what it's creating" just blew my mind a bit.

You know, I think we take that for granted, you know, and sort of this, again, this goes into the education part of what generative AI can and can’t do.

Because, you know, you said there are some models that can see things and some that can hear things.

But when you say, create a chicken with a red scarf and a hat and all these things, that doesn't mean the model itself then sits back and goes, yeah, that is what I understand red to be, or that is what I understand a chicken to be; let me just tweak that a little bit and make it a little bit more chickeny.

You know, and it was just, I think that the limitations of any of these things will forever exist, but it’s really just understanding what that actually means.

And so I just wanted to highlight that statement that you just made that it can’t see what it’s creating the same way that you or I, as humans can see and understand.

I think that’s so important for users of these tools to really wrap their heads around.

It’s a difficult concept to wrap your head around because it feels like it can see and do so many things.

But it’s really just algorithms.

Christopher Penn 21:19

Yeah, it’s all math.

At the end of the day, these things are just prediction engines.

And in a multimodal environment, blending one form of media with another is actually computationally very, very difficult.

And again, this is something I spent a lot of time over the break reading about, the scientific papers on what's going on under the hood and why these multimodal models are struggling so much.

And it turns out it's because there are not clear relationships between what you see and the language you use to describe it.

Language itself is inherently ambiguous, right? If I did not describe this and I just held it up, A, your first reaction is

Katie Robbert 21:59

what? Why do you have a plastic chicken? Exactly.

And B,

Christopher Penn 22:02

there are so many different ways you could describe this that are equally correct but ambiguous, right?

And when visual models are trained, they’re typically trained on an image plus a caption.

And so for something like this, this is a product. So a model trained on it probably has the product caption from a store's catalog describing it, which is not an accurate way of describing the whole thing; it's a very short snippet of text saying you'll buy a thing for $24.99 or whatever.

And so this is probably a topic, like you said, for another podcast entirely, about the mechanics of these different models.

We’re trying to bring together two things that don’t really go together.

It's one of the reasons why generative AI has so far done a really, really poor job of constructing music: music and language are not the same thing.

And music actually predates language in our brains.

And because it's so different, these models can't blend them together well.

So a big part of 2024 looking ahead is we’re gonna start understanding more of the complexities of how these models can and can’t work together.

The big technological thing that I'm keeping my eye on is mixture of experts.

So at the end of last year, the French company Mistral released their model Mixtral, which is a mixture of experts. The best way I can describe this:

Instead of a kitchen where there's one head chef who's really good, like a Gordon Ramsay, just doing everything (very expensive, very talented, but one chef), you have sort of a head chef and a bunch of sous chefs, all of whom are not necessarily great, kind of okay, B-player chefs, but there's eight of them in the kitchen.

And under the direction of the head chef, they each have a specialization, like one guy can chop, the other guy uses the blender, and so on.

Somebody's really good at the blender but can't do anything else; do not let them near the dishwasher.

And in this mixture of experts model, you have eight chefs working in the kitchen instead of one; now, obviously, it has to be a bigger kitchen.

But you can get a lot more done with eight chefs in the kitchen than you can with one, because of the nature of multitasking.

This architecture has actually been around theoretically since 1991, but it came into production last year.

And that model has topped the charts of so many different benchmarks.

It beats Google’s new model, which you’re like, how did that happen? Right.

It comes close on several tasks to OpenAI's paid model, and exceeds OpenAI's free model.

We're going to see this architecture become sort of the standard for a lot of the open source models, and for companies that are looking to deploy a highly capable model within their walls, where they absolutely, positively cannot let data go outside those walls for any reason, like protected health information.

This will be the architecture that these companies will use.

So that’s another big one to keep an eye on this year.
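A toy sketch of the routing idea, using invented sizes and random weights purely for illustration (real models like Mixtral do this per token inside a transformer):

```python
import numpy as np

# The "head chef" (router) scores eight simple "sous chef" experts and
# sends each input to only the top two, so compute stays low even though
# the total parameter count is large.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ router                # head chef scores each expert
    top = np.argsort(scores)[-top_k:]  # pick the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()           # softmax over just the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (16,)
```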

Katie Robbert 24:58

What I find really interesting: you just described how you can have one head chef who does it all, or a head chef with a team of people that they delegate to.

And what I'm really excited about, and this is sort of more of my own personal thing to watch, is to really see you wrap your head around how these architectures are all borrowed from organizational behavior.

Because what you just described are the team structures that we used to have, and why it was so much more efficient to have a team of 10 with a combination of specialists and generalists versus just you doing everything.

And, you know, I’m wholeheartedly not saying this to pick on you.

But one of the things that you’ve always openly said is that, you know, you sort of struggle with the whole human team management interaction.

And what I'm seeing is that individuals who are more technically minded, technically focused like yourself, who focus less on the human side of things, are now starting to see and understand where those parallels are, with, like, a matrix organization, for example, because that's basically what you just described.

And I’m like, Oh, well, yeah, that’s been around forever.

It's been around since long before 1991.

But in a technical architecture, I can understand where you’re like, oh, this makes so much sense.

So now it almost opens the door to different conversations where you may have been struggling with, like, how do I relate to my data scientist? Well, now you have terms that you can put it in that they will be more able to understand. If you put it in terms of, you know, a Mixtral model, if I'm getting that correct, then you can say, imagine that our team is structured like a Mixtral model.

And they’re like, oh, my gosh, I get it now.

Because you're sort of meeting them where they are.

And so for people managers like myself, I feel like it opens up a whole new vocabulary, a whole new language that, yes, I need to learn.

But it gives me even more tools for how I can relate to my more technical resources.

Christopher Penn 27:29

It’s funny, because I went the opposite way and said, This teaches me as a technical person, how much more quickly I can replace

Katie Robbert 27:35

the humans.

Yeah, that’s not going to happen.

I’m sorry, sir.

Christopher Penn 27:39

We took away different messages from that.

Katie Robbert 27:41

Well, and you know, if you and I were exactly the same, this would not work.

Christopher Penn 27:48

The other thing to keep in mind: this is going to be the year, I think, of AI mistakes. In 2023, people were still just trying to wrap their heads around it, thinking, oh, what is this thing? And there's a lot of people who are in that boat.

And if you are in that boat, there's a course here you can take. This year, I think there'll be a lot more deployment of AI, cautious and incautious, and we'll see more people doing things well and doing things really badly.

In the USA, I think the 2024 presidential election cycle will really highlight the use of AI in all the different ways that it can be used for good and ill that we haven't anticipated yet.

But I also see some interesting things happening.

There is a worldwide race for AI supremacy, if you will.

And we're seeing this especially in the open source models: which nations are capable of releasing models with best-in-class performance.

China has what they claim is a mixture-of-experts multimodal model, which would not surprise me; that's a good architecture for that.

They've not released the model yet, but I suspect they will. France was actually instrumental in negotiating parts of the EU AI Act in certain ways, because they wanted to give advantage to their own native companies, Mistral being the leader in the marketplace right now.

So I think this will also be the year of countries racing ahead to try and enable AI as best they can, so that they can attract talent and attract investment. Japan has made some changes to its laws to attract talent; South Korea just opened up a digital nomad visa program where you can stay for up to two years as a digital nomad.

They're specifically looking for people, no surprise, in the field of AI.

So there is a worldwide race now for talent that this year is probably going to be very, very hot.

So if you're thinking about what your own career looks like, one of the areas to think about is how your career will be affected by generative AI. And can you be an early adopter? Whatever your industry is, can you be an early adopter and seize the advantage?

Katie Robbert 30:01

I think that goes back to this year being all about education.

So it's educating others, educating yourself, finding your own process for staying up to date on things, figuring out what it is that you absolutely need to know and focusing on those things, and working with companies like Trust Insights to fill in the blanks on everything else.

So I think it'll be really interesting, especially since, when we had this conversation at this time last year, it was a very different conversation.

We were focused on the rollout of Google Analytics 4. Yes, OpenAI had launched ChatGPT around October or November of 2022, so it was still newer in the conversation, and it was starting to pick up.

But I don’t think we had really fully anticipated how much it was going to dominate the conversation.

And really change pretty much everything, the way that we were approaching it, and what we had to think about in terms of, you know, services, and courses and content and education, and really just focus.

And so I'm taking the conversation that we're having now a bit with a grain of salt, because things change so quickly, because there are so many things that we can't anticipate.

And so, for people who are interested in implementing artificial intelligence, here's the good news: the foundational structure of how you implement pretty much any new technology is not going to change.

And that's something that you and I, Chris, have really honed in on, and I would say we're experts in that sort of foundational work: okay, you want to do this? Here's what you need to do. Oh, you want to do this over here instead? Great, you still need to take the same steps.

So I feel very confident in that, regardless of the context of what’s happened in the industry, we’ve really focused on those foundational pieces.

And I think that for people who are nervous going into this year because things are going to change so quickly: go back to your roots, go back to the foundations, and make sure that those pieces are rock solid, because then you'll be prepared for whatever comes. Regardless of what it is, there will still be a learning curve and an education to whatever the new context is.

But the foundational pieces shouldn’t have to change.

Christopher Penn 32:28

Yeah, that is generally true.

And what's interesting, too, is people have forgotten something we talked about at the end of last year: two things really define success in generative AI.

One is the quality and quantity of your ideas, right.

And that's a people thing; that's not a machine thing.

Because machines are only as creative as what you bring to them.

And the second thing that sets apart success or failure with generative AI is the quality of your data, because the real value, the real advantage that generative AI brings, comes from your data. The models can do great stuff with generic data; you can go to ChatGPT and say, let's write a blog post about this or that.

But the real value unlocks come from your data, or the data that you're working with.

Just this morning, I was doing some reporting for a client. I took a de-identified dataset from the client, put it into ChatGPT, and said, okay, write a summary of this data for the executive summary. And it did a very capable job.

Now, the prompt for this is like three and a half pages long, because there are very specialized components to the prompt to make it work really well.

That comes from our data; that comes from our knowledge base.

And so if you gave that same dataset to somebody else, they would get a different, probably less good, result, because they don't have that specific extra that makes the machine work better.
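A hypothetical sketch of that kind of structured prompt; every string here is an invented placeholder, not the actual Trust Insights prompt:

```python
# Reusable, specialized components (the "extra") wrapped around client data.
ROLE = "You are a marketing analytics expert writing for executives."
STYLE_GUIDE = "Write in plain language; lead with the business impact."    # from your knowledge base
DEFINITIONS = "Here is how we define 'engaged user', 'conversion', etc."   # from your knowledge base

def build_summary_prompt(deidentified_data: str) -> str:
    """Assemble the reusable pieces around the client's de-identified data."""
    return "\n\n".join([
        ROLE,
        STYLE_GUIDE,
        DEFINITIONS,
        "Write an executive summary of the following data:",
        deidentified_data,
    ])

print(build_summary_prompt("channel,sessions,conversions\norganic,1200,48"))
```

The specialized components are what travel with you from project to project; only the data slot changes each time.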

Again, there's a whole technical term for this, called latent space optimization.

And the easiest way to think about it is: if you have a kitchen, you probably also have a pantry, and in that pantry are all your supplies, all your stuff.

The quality and quantity of what’s in that pantry dictates what the kitchen can produce because there’s only so much room in the kitchen.

And so if your pantry is a disorganized hot mess, guess what? Your kitchen is going to produce less optimally.

If on the other hand, your pantry is well organized, everything is fresh and clean and well labeled and all that stuff.

Your kitchen is going to have a much easier time operating.

And that's what will set apart the people and companies who are skilled with generative AI from the people who are not, because the table-stakes minimum has been met: hey, write me a blog post about that.

That’s done.

Now it's about who has the advantage, whose pantry is in the best order, and that, again, is people and process, not platform.

And

Katie Robbert 34:59

I feel like we’ve sort of now come around full circle of, you know, new tech doesn’t solve old problems.

If you have poor data quality, if your data is unorganized, if you don't really know what you want to do with the data, generative AI isn't going to fix those problems for you.

You know, if you’re standing up Google Analytics, and you have poor data quality and sort of things are a mess, Google Analytics isn’t going to fix that for you.

If you’re really curious about how you can improve your SEO, but your website metrics are a mess.

SEO tools aren’t going to fix that for you.

And so, to wrap this up, what I'm looking forward to the most in 2024 is helping people get out of their own way by helping them resolve the people and process, so that regardless of the platform, they feel ready and confident.

Yep.

Christopher Penn 35:56

And I’ll just be over here trying to take over the world.

So

Katie Robbert 36:01

you got to get through me first, sir.

Good luck.

Christopher Penn 36:05

Fair enough.

If you have some thoughts about how you will be looking at 2024 and what's on your docket, or you have questions that you want to discuss, like, hey, maybe we're not sure how to tackle this in 2024, pop by our free Slack group. Go to trustinsights.ai/analyticsformarketers, where you and over 3,000 other marketers are asking and answering those questions every single day.

And wherever it is you watch or listen to the show, if there's a channel you'd rather have it on instead, go to trustinsights.ai/tipodcast, where you can find our show on most major channels.

While you're on the channel of your choice, if you could leave us a rating and a review, that'd be great.

It helps to share the show.

Thanks for tuning in and we will talk to you next time.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.

