
In-Ear Insights: Data Privacy and Generative AI

In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the complexities of data privacy in the era of generative AI. They offer practical tips on safeguarding personal and company data, emphasizing the importance of understanding terms of service and the implications of sharing information online. You’ll gain insights into how different AI models handle data and the significance of opting for secure, private AI solutions for sensitive information. Tune in to navigate the challenges and opportunities presented by generative AI in a data-driven world.

Watch the video here:

In-Ear Insights: Data Privacy and Generative AI

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.


Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00

In this week’s In-Ear Insights, let’s talk about data privacy and generative AI.

It is no secret that for many of the tools that are out there, those tools are looking for more data to train on to get better feedback from users to say this was a good response.

This was a bad response, sometimes not with dubious consent, but with consent that is deeply buried or implicit.

So for example, about a year ago now, X, the service formerly known as Twitter, changed its Terms of Service and said, hey, we’re going to use your data to train our AI on, and the only way to opt out is to cancel your account.

There are many, many other services out there, the golden rule that we have said for 20 years now is, if you are not paying, you are the product.

And that has never been more true in the age of generative AI.

So Katie, last week, you did a seminar with MarketingProfs.

And one of the questions folks had was, what should I be thinking about when it comes to data privacy and generative AI? So do you want to recap how things went with that, in terms of your perspective on it?

Katie Robbert 1:06

Yeah, absolutely. And it’s a really good question.

I’m glad that people are asking it.

Because if you’re not asking about your data privacy and security, then to your point, Chris, you are the product.

So the question was around the GPT models, so ChatGPT, Anthropic’s Claude, and so on and so forth, and they were asking, you know, what do I need to know about privacy and data security? How do I protect my information?

And so it’s a layered response, because it really depends on your specific use case.

But at the very least, you should have a good grasp of what personally identifiable information covers, and what kind of confidential information you don’t want to share about your company, because those are two different things.

So PII is a lot of what you would probably get out of your CRM, your customer relationship management system.

And so that’s a lot of first name, last name, address, phone number, any sort of biometrics or demographics, that type of information. You don’t want to put that information into a publicly available custom model that you’re basically paying a subscription for; that kind of information is not appropriate to go in there.

Unless you have a privately held secure custom GPT model on your own servers.

The reason for that is that then you can be assured that the data is not going anywhere but your system. And then confidential information, like your company’s financials, competitors, that kind of stuff, stuff that you wouldn’t want shared publicly, is also data that you probably shouldn’t be putting into publicly available generative AI models, because they then use that data to train for other people, when they have questions about that kind of stuff.

And it might be like, you know, if I say, well, who are my competitors? It would be like, you’re similar to Chris Penn, and Chris Penn’s competitors are these. And you’re like, oh, wow, those are Chris Penn’s competitors, great, I’m gonna take all of them and him down.

And so it puts you at a slight disadvantage if you’re accidentally sharing too much information about your company.
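To make Katie’s PII guidance concrete, here is a minimal sketch of a scrubbing step you could run before any text reaches a public generative AI tool. The regex patterns and placeholder tokens are our own illustrative assumptions; real PII detection (names, addresses, biometrics) needs a dedicated library.

```python
import re

# Illustrative patterns only: two regexes will not catch names, addresses,
# or biometrics. Treat this as a sketch of the idea, not a complete filter.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub("Reach Jane at jane.doe@example.com or 617-555-0123."))
# Reach Jane at [EMAIL] or [PHONE].
```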

Christopher Penn 3:26

Yeah, I mean, the golden rule is read the Terms of Service. They’re boring.

But it’s important. In fact, one of the things you can and should do anytime you’re signing up for any piece of software (I think no one enjoys reading the Terms of Service, but you don’t have to read it yourself) is select all, copy, go to ChatGPT, hit paste, and say, read these terms of service.

What permissions am I giving this company to my data? And it will say, hey, you know, in this line and this line, it looks like you’re giving the company permission to use your data for anything it wants.

Like if you go into Meta’s Terms of Service: anytime you post on Facebook, Instagram, WhatsApp, Threads, etc., by the terms of service, Meta can use that data for whatever they want.

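Chris’s tip can also be scripted. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions. Note the irony Katie raises next: pasting a document into a hosted model is itself a data-sharing decision.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_terms(terms_text: str) -> str:
    """Ask a model what data permissions a terms-of-service document grants."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any capable chat model works
        messages=[{
            "role": "user",
            "content": (
                "Read these terms of service. What permissions am I giving "
                "this company to my data? Quote the relevant clauses.\n\n"
                + terms_text
            ),
        }],
    )
    return response.choices[0].message.content

print(review_terms(open("terms_of_service.txt").read()))
```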

Katie Robbert 4:09

This is a little bit of a digression, but it kills me when I see people on Facebook.

I think at least once a year this circulates, and people will say, by putting this here on my profile, I do not give Facebook permission to use my photos or data.

And I’m like, but you’re on the platform, you’ve already agreed to it.

That’s like Michael Scott saying I declare bankruptcy.

Like it doesn’t just magically happen.

There’s a whole process that you have to go through and you’ve already agreed to the thing.

And so I agree with you.

Now, let me ask you this.

Is it ironic that you’re suggesting that people use ChatGPT to read the terms of service to know what data ChatGPT is using of theirs?

Christopher Penn 5:00

I don’t know if it’s ironic, per se. But I think if it’s the choice between just not reading the terms of service and having a machine read it for you, definitely have the machine read it for you.

Because it varies so much.

So just with ChatGPT: the free version trains on your data by default, but there is a setting to turn it off. The paid version trains on it too, unless you turn that setting off. The Teams version does not train on your data.

By default, the Enterprise version does not train on your data, and the API does not train on your data.

So you have to go and read through these things.

If you are a company that is a Microsoft shop and using Azure AI as your AI, per the normal Terms of Service, it does not train on your data.

Right? So you could run the enterprise version of ChatGPT inside Microsoft Azure, and then your data is protected. With Claude, the free version trains on your data. The paid version? I don’t know, I’ve never paid for it or read its terms of service.

But even at, like, the lowest-level paid plans, a lot of companies will train on your data. You were saying that there’s a setting even in Adobe Photoshop for it, because Adobe has its own image generation model called Firefly. What did you find in those settings?


Katie Robbert 6:09

So I saw someone post about this the other day, and I’ve only just started using Adobe Photoshop again.

And they have a generative AI toolbar within the system where you can use it to assist with editing photos, or creating, or whatever it is.

But it never occurred to me that this was yet another example, where they were using your data.

So someone posted, hey, everybody who’s using Adobe Photoshop, check your data and privacy settings.

They’re using your information to train their dataset, and I was like, geez, here we go again.

So just this morning, I went into my Adobe account, under data and privacy settings.

And there it is.

So it’s a setting called Content analysis.

So I can actually share my screen.

I’m not as swift with this as Chris is… oh, here we go.

So it’s the data and privacy settings.

It’s this box here, it’s content analysis.

For those listening: Adobe may analyze your content using techniques such as machine learning to develop and improve our products and services.

If you prefer that Adobe not analyze your files to develop and improve our products and services, you can opt out of content analysis at any time. The setting does not apply in certain limited circumstances.

And you can click to learn more, so I just turned mine off.

But then I would recommend going to the content analysis FAQ to find out more about it, like what is it doing? And then more specifically, what are the circumstances where this does not apply, where it’s going to be using your information?

So Adobe Photoshop Improvement Program.

This allows you to submit images to help improve machine learning based features.

That makes sense to me, because you’re saying, hey, I’m giving you my stuff. Then there’s Adobe Stock content submitted by contributors to Adobe Stock, certain features that allow you to submit content as feedback, and certain beta, pre-release, and early access products or features.

So Chris, I’m with you, I would definitely be careful, because Adobe Photoshop isn’t one of those systems where you automatically think they must be looking at my information. You’re like, I’m just editing photos, what does machine learning care about that? But clearly it does, especially now that they have it built in. You really need to make sure that you’re careful about what information you’re sharing, because there may be, you know, assets and images that are confidential that can’t be shared, or you just want to hold on to your proprietary artwork.

Christopher Penn 8:45

Exactly.

And I think it’s important to explain to folks, there are two ways this information will be used, two very broad techniques.

One is called fine tuning.

One is called retrieval augmented generation.

Fine tuning is what a lot of language models use your data for.

So when you use ChatGPT, you interact with it, it returns a response, and you continue to interact with it.

What they’re doing is they’re going to take that data and retrain the model. They’ll say, hey, this was your first response.

Here’s what the user said in return; that clearly was not sufficient, because they kept having this conversation.

And so that input-output pair, that set of responses, is used to retune the model.

The model is basically told, hey, when you are told this, respond like this, because it’s a better way to respond.

Right? So that’s called reinforcement learning from human feedback.

And it’s part of the fine tuning process.

That does, to some degree, add new data to the model, because obviously, if it responds to, hey, who are the competitors of Trust Insights, and, you know, Katie responds, well, no, those aren’t our competitors.

These are our actual competitors.

That information can become part of the model, but fine tuning is not primarily used to add data to a model; it’s primarily used to get a model to behave more the way the user wanted.
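To make that concrete, here is roughly what one such feedback record looks like as a data structure. This is a hedged illustration of the general shape; no vendor’s actual schema is implied.

```python
# One illustrative RLHF-style preference record: a prompt, two candidate
# responses, and which one the human feedback favored. Field names are
# invented for this sketch.
feedback_record = {
    "prompt": "Who are the competitors of Trust Insights?",
    "response_a": "Exxon Mobil and Verizon.",           # hallucinated answer
    "response_b": "Firms like McKinsey, Bain, or BCG.", # user's correction
    "preferred": "response_b",
}
# Many records like this are used to retune the model so that, given
# similar prompts, it behaves more like the preferred responses.
```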

The second thing is called retrieval augmented generation.

This is where, if you use Custom GPTs, which is really just ChatGPT, and you’ve uploaded your own documents, what you are doing is you’re saying to the model, hey, you don’t know this, apparently, because you can be wrong.

So here’s a bunch of new data.

It’s like adding new books to the library.

And that in turn makes the model able to answer some questions better because it has more information.

Most AI companies will not use that data for retrieval augmented generation, because it’s questionable, you know, how good the quality is.

And also there could be very serious legal issues with using that data, depending on what it is. Most model companies are going to use the fine tuning stuff, the interactions, to make the models respond better, but to your point, it is still teaching the model.

In some cases, this is more correct.

So if a model hallucinates a whole bunch of competitors for Trust Insights and says, hey, we compete with Exxon Mobil and Verizon, and we say, no, by telling it we compete with McKinsey or Bain or BCG, we are implicitly saying this is the correct answer.

And therefore you are giving away some of that information.

So it’s important to have that understanding of how that information is probably being used behind the scenes.
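To illustrate the “new books in the library” idea, here is a minimal retrieval augmented generation sketch. A production system would use embeddings and a vector database; the keyword-overlap scoring and document contents here are illustrative stand-ins.

```python
import re

documents = [
    "Trust Insights competitors include McKinsey, Bain, and BCG.",
    "Trust Insights was founded by Katie Robbert and Christopher Penn.",
]

def words(text: str) -> set[str]:
    """Lowercase word set, used as a crude relevance signal."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question; keep the top k."""
    return sorted(docs, key=lambda d: len(words(question) & words(d)),
                  reverse=True)[:k]

def build_prompt(question: str) -> str:
    # The retrieved text rides along with the question as extra context,
    # so the model answers from your documents instead of guessing.
    context = "\n".join(retrieve(question, documents))
    return f"Use this context to answer:\n{context}\n\nQuestion: {question}"

print(build_prompt("Who are the competitors of Trust Insights?"))
```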

Katie Robbert 11:12

Let me ask you this question.

Let’s take AI out of the context of the conversation and just talk about, you know, end user agreements and data privacy for a second.

How is this any different from Google having the market share of search engines, and anytime I search for something, using that information to fill SEO tools and keyword tools? You know, a couple years ago, everybody was buzzing about the cookieless future, and, you know, can cookies follow you around? Obviously the tech is different.

But how is this any different from any of those conversations?

Christopher Penn 11:51

It’s absolutely no different.

It is absolutely no different.

It’s no different than the conversations people have about privacy and data on social media and what social media companies can use for it.

This is one of the reasons why a lot of people are making a lot of noise right now, you know, saying, hey, your models are trained on all this stuff that you don’t have rights to.

If you think even for two seconds about it, the big tech companies already have the data. OpenAI had to build it from, like, Common Crawl and stuff like that.

But Google has Gmail, Google Search, YouTube, Android, Chrome, and so on and so forth.

Meta has Facebook, Instagram, WhatsApp, Threads, etc.

So the big tech companies already have all that data.

And to your point about these end user license agreements, people who are, you know, artists are saying, oh, but you’re using my art without permission.

Yeah, if you uploaded it to Facebook, Instagram, you gave them the license to use it.

You can’t take it back.

Once you give it to them, you can’t take it back.

So is it in the model? Yes.

Did they scrape it? If they scraped it off your website, yeah.

Then they may have used it without permission.

But if you put it on any of their services, they have your permission.


Katie Robbert 13:05

But then you think about, to your example, if you put it on your website: who’s hosting the website, what servers is the website on, you know, what tools are you using to build the website? And what are their end user license agreements as well?

And so we’re talking about it in the context of OpenAI and generative AI and all of these, you know, seemingly new systems that are suddenly using your data.

But guess what, they’ve been using your data for years, they don’t suddenly need to throw in a generative AI toolbar in order to get that information.

Anytime you, you know, Google, what is this lump on my big toe, Google is collecting that information.

It has location information, it has demographic information, because you by default, have set up some kind of Google profile.

Now, whether the information is correct or not is a whole other, you know, conversation, but it has some information about you. To your point, Chris, if you’re using Gmail, if you think Google’s not reading your email, you are sorely mistaken.

And you know, not to get all like Ron Swanson.

I don’t trust tech or the government or people.

But tech is reading all your stuff.

You have no privacy if you’re using tech.

Now, I don’t have smart devices in my home in the sense of I don’t have like an Alexa or Siri or one of those.

But I have a cell phone and conversations that I’ve had somewhere in the vicinity of my cell phone have been picked up and I’m getting ads for things that I should not necessarily be getting ads for like ridiculous things.

And, you know, if I’m here thinking, well, I don’t have a smart home.

I’m safe.

I’m wrong.

You know, it’s going to get the information one way or another.

Christopher Penn 15:06

And with a lot of the ad tech stuff, I mean, a lot of people do believe that the devices are listening to them.

Certainly if you have a smart device like Alexa, yes, it’s in the terms of service that it’s listening to you.

Katie Robbert 15:17

That’s one way to do it, exactly. What’s the other way to do it?

Christopher Penn 15:22

With the whole, is my phone listening to me? The data that’s been collected and the studies that have been done show that no, it’s not actually listening to you.

But what is happening is that where you browse on the internet, in your browser history, is so rich and so complete, and so thorough over a long period of time, that the models that are used to predict what ads to show you are highly effective.

And because there’s so much of it, anyone who’s done any martech using lookalike audiences knows this: your lookalike audiences can be incredibly powerful.

So even if Katie, you have never typed in a search bar, you know, chicken hats for my dog, right? But you’ve had that conversation.

But your profile is so thorough, and your browsing history is so thorough, that when someone else types in chicken hats for my dog, it’s like, hey, these two profiles are a 99% match, let’s show Katie the chicken hat ads for the dog, because Katie’s friend just searched for chicken hats for a dog, right? That’s how that works.

People are, A, unaware of how much data they are sharing, to your point, and, B, how similar in many ways we all are, such that if you have a good cohort of people who are very similar, what one person does, the other people in that cohort are probably going to do, right? If you and all of your friends are very, very similar.

And one of you searches for Taylor Swift tickets for the next concert, guess what, everyone else in that cohort probably can be shown the same ad.

And like 90% of you will be like, yeah, I want that.
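A toy version of the lookalike matching Chris describes, sketched with cosine similarity over made-up behavior vectors. The features and numbers are invented for illustration; real ad platforms operate on vastly richer profiles.

```python
import math

# Each profile is a vector of topic affinities (e.g., pets, sports, crafts)
# distilled from browsing history. All values here are invented.
profiles = {
    "katie":  [0.9, 0.1, 0.8],
    "friend": [0.9, 0.2, 0.7],
    "other":  [0.1, 0.9, 0.1],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# The friend searched for chicken hats for a dog; whoever matches their
# profile most closely can be shown the same ad.
searcher = profiles["friend"]
best = max((name for name in profiles if name != "friend"),
           key=lambda name: cosine(profiles[name], searcher))
print(best)  # katie
```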

Katie Robbert 16:56

Well, you know, I think it’s good, and we should probably do a different episode to really sort of explain these things.

But if you’re watching a streaming service, like Netflix or Hulu, and if you’re like me, as you’re watching the show, you’re like, wait, do I know that person from that? Sounds familiar. Haven’t I heard this storyline before? You start searching for that information, and all of a sudden, you know, your search engine, likely Google, already knows that that’s what you’re gonna search for. You put in one letter, and the whole thing comes up, and you’re like, oh my God, it knows what I’m watching, or it’s in my brain, how did it know? And it’s like, no, it’s not.

It’s not a mind reader.

It’s a really good predictive engine.

Because Hey, dummy, you do this every single time you sit down to watch a show.

And so the first thing you always do is, you know, you Google the synopsis of the show.

And then you’re like, okay, and then 10 minutes later you forget, and you come back and go, who is so-and-so? And, you know, the search engine starts to predict that you’re going to ask these questions, because you do it every time.

It’s not a mind reader.

It’s not sitting there staring at you going, wonder what show she’s going to watch next.

All right, let me pull up all the actors and all of their backstories, because she’s definitely going to want this information.

It’s like, no, because you, human, are a creature of habit.

You do this every time.

Christopher Penn 18:21

Exactly.

We are as a species actually very, very predictable.

Right? Yeah.

So all these behavioral patterns, right? This is all classical AI.

So this is not even generative AI.

To me, this is, you know, one thing you talk about in your talks is find, organize, generate, the three classes of AI. This is regression analysis, the find category: hey, I see this pattern here.

Let’s try using that pattern to show people stuff.

And every streaming service now has ads, right? You know, Netflix just rolled out their ad tier. What do you think they base that on? It’s not random.

Katie Robbert 18:54

No. Well, and you have to figure that, you know, with the streaming services, they have to have had some kind of machine learning built in, in order to do their recommendations.

And so, you know, they’ll ask you, like, did you like this, so we can recommend more, or did you not like this, and we’ll recommend less.

Even if you don’t participate in that, I like this, I don’t like this, it’s paying attention to the DNA, basically the markers, of each individual show or movie or whatever it is, and saying, let me serve up more of that, until something changes.

I’ve always been fascinated by that kind of technology, you know, Spotify, Pandora, all of the music streaming services, because I personally want to know, like, what are those categories? How are they DNA-marking each of these songs and artists and shows and movies and pieces of content? And then how does it all match up? Because it’s really just a very sophisticated recommendation engine.
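Katie’s “DNA markers” map naturally to content-based filtering: tag every title, then recommend whatever shares the most tags with what was just watched. A minimal sketch with invented tags:

```python
# Invented catalog: each title carries a set of descriptive markers.
catalog = {
    "Show A": {"comedy", "workplace", "mockumentary"},
    "Show B": {"comedy", "workplace", "drama"},
    "Show C": {"horror", "anthology"},
}

def recommend(just_watched: str, k: int = 1) -> list[str]:
    """Rank other titles by how many markers they share with the last watch."""
    seen = catalog[just_watched]
    others = [t for t in catalog if t != just_watched]
    return sorted(others, key=lambda t: len(catalog[t] & seen), reverse=True)[:k]

print(recommend("Show A"))  # ['Show B']
```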

Christopher Penn 19:59

Exactly.

And if you look at the data fields that are available: we have a lot of clients who have mobile apps, and they use a tool called Firebase. Firebase, by the way, is made by Google, huge surprise.

And when you look at the events that are available in Firebase for you to do analysis on as a user, it’s pretty astonishing.

Like, here’s exactly where someone tapped on screen, what actions they took, the swipe up, swipe down, which direction on the screen they swipe, how long a screen or part of a screen was visible, things like that.

When you think of all these behaviors, you can build an incredible behavioral profile. Take someone who uses the Netflix app: how long do they watch that show? Do they skip around? Do they move to favorite parts, do they skip over parts, how often do they hit the forward and back buttons on this show? All those things become data points, and again, you build profiles, and you build a recommendation engine. And to your point, anyone who’s got a recommendation engine already, it is trivial to put advertising into it, because advertising at that point is just another set of fields on the model that you already have.

So when we’re talking about generative AI and how generative AI uses your data, it’s not that different. All that’s different now is that that data is being fed into a model for generation, whereas previously it was being fed into a model for finding.
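A sketch of how raw app events roll up into the kind of behavioral profile Chris describes. The records below are loosely modeled on the sorts of fields an analytics tool like Firebase exposes; the exact schema is our own assumption.

```python
from collections import Counter

# Invented event log: which user did what, on which screen, for how long.
events = [
    {"user": "u1", "event": "screen_view", "screen": "show_page", "seconds": 42},
    {"user": "u1", "event": "swipe",       "screen": "show_page", "seconds": 1},
    {"user": "u1", "event": "screen_view", "screen": "search",    "seconds": 9},
]

def behavior_profile(user: str) -> dict:
    """Roll raw taps and swipes up into a simple per-user profile."""
    mine = [e for e in events if e["user"] == user]
    time_by_screen: Counter = Counter()
    for e in mine:
        time_by_screen[e["screen"]] += e["seconds"]
    return {
        "event_counts": dict(Counter(e["event"] for e in mine)),
        "time_by_screen": dict(time_by_screen),
    }

print(behavior_profile("u1"))
# {'event_counts': {'screen_view': 2, 'swipe': 1},
#  'time_by_screen': {'show_page': 43, 'search': 9}}
```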

Katie Robbert 21:15

So if we boil it down, you know, it’s sort of astonishing that people are so up in arms about these generative AI tools using their information and using their data.

When really, they already have it, you’ve already given it to them in one way, shape, or form.

And to think that these big tech companies aren’t sharing that information amongst each other behind closed doors is also ridiculous.

We like to think that, like Google and Microsoft are pitted against each other, and they probably are in the market, but behind closed doors, they’re like, hey, what do you got on Chris Penn, I got this, I’ll trade you this.

And you know, they’re just deepening their datasets and furthering the development of their products.

Christopher Penn 21:57

And if you think about it, like, you know, even art. You made a painting, right? Like this lovely painting here behind me. You make a painting, and someone goes to your art show, takes a picture of the painting, writes a little caption, and posts it on their Instagram profile. Guess what, that is permitted use.

Because that data has been provided by a user who has consented to upload it. Now, can Meta use your painting, even though someone else uploaded it? That has to be resolved in a court of law, if it has not been. But the reality is, if you don’t want any AI training on your data, you have to opt out of tech. Like you said, you’d have to opt out of using technology.


Katie Robbert 22:41

And not just a little bit of it, all of it.

And so what I don’t want to see happen as a result of this podcast episode is that everybody starts, you know, putting on their tinfoil hats and spinning conspiracy theories.

And, you know, blaming the government; like, it’s not that deep.

It’s really just the same information that we’ve been using as marketers for years to really understand personas and lookalike audiences.

But, you know, the rules haven’t changed.

Don’t give your personally identifiable information to these systems. Just don’t do that.

Build a private, secure system on your own, you know, infrastructure and servers, if you need to be doing that. Like, that’s not new.

That’s not new information.

And so, when it comes to data privacy and security with generative AI, it’s the same set of rules that have always applied to technology.

Read the Terms of Use, make sure you understand the End User License Agreement.

And make sure you’re turning off any settings; go through your settings and set things up before you start using the thing.

If you’re experimenting with the free versions of ChatGPT or Anthropic’s Claude, know what you’re getting into. They still have Terms of Use and terms of service agreements that you have to agree to; make sure you understand those, even if you’re like, I just want to test it out.

I’m not really going to do much with it, you still have to be aware of what you’re getting into.

Christopher Penn 24:16

Exactly.

The rule of thumb that I recommend to people is, for any information you’re working with in generative AI, ask yourself whether you and your boss would be comfortable posting that information on Facebook or LinkedIn, or posting it publicly.

If the answer is no, probably don’t put it in a generative AI system.

If you and your boss would be like, yeah, let’s put this on Facebook, then it’s fine.

So you know, for example, Katie, if you are working on the cold open for the Trust Insights newsletter, would you feel comfortable posting on LinkedIn? Yes, probably.

Right.

Would you feel comfortable putting our financials on LinkedIn? Probably not.

Right? That’s a real simple rule of thumb: should I do this? And to your point, one of the things that people should do, and this is part of the five P’s, is risk analysis, like what is the risk if this information leaks, right? So that is a process thing.

And that’s also a people thing.

So it’s purpose, people, process, platform, performance.

Part of the process is risk analysis, risk assessment: what could go wrong if this information made its way outside the walls of our company, right? What could go wrong if someone sees our financials? What could go wrong if someone sees our CRM data? Could we be sued? Could we lose business to competitors? If you’re not doing that analysis as part of using generative AI, you put yourself at risk.
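Chris’s “would you post it publicly?” test can even be written down as a pre-flight gate. The categories and routing below are illustrative policy choices for a sketch, not a standard.

```python
# Hypothetical data classifications; every organization would define its own.
PUBLIC_SAFE = {"newsletter_draft", "blog_post", "published_research"}
RESTRICTED = {"financials", "crm_export", "contracts", "pii"}

def allowed_in_public_ai(data_type: str) -> bool:
    """Would you and your boss be comfortable posting this on LinkedIn?"""
    if data_type in RESTRICTED:
        return False  # route to a private, self-hosted model instead
    return data_type in PUBLIC_SAFE  # unknown types default to no

print(allowed_in_public_ai("newsletter_draft"))  # True
print(allowed_in_public_ai("financials"))        # False
```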

Katie Robbert 25:35

I mean, that’s true of any tech, but especially now, when generative AI is so easy to use, the barrier to adoption is so low, and it’s built into so many different systems, that it can feel overwhelming.

Well, I didn’t realize I was using it.

I didn’t know it was there.

Use that same set of guidelines to say, is this information that I would want publicly shared or not? It can be as simple as that. And if the answer is no, then don’t put that information into any tech system.

If you have a system specifically built to house that information, like a CRM or an accounting software, make sure you are aware of the flip side of that: how secure it is, and that it’s not sharing that information.

Christopher Penn 26:28

Exactly.

And if you want some more information about risk analysis and risk mitigation, we actually have it in chapter 12, which is module six of the Generative AI course; go to TrustInsights.ai/aicourse.

And you can see a walkthrough of how to do this, and also what your options are for fully private generative AI.

If you’ve got some things you want to share about how you are or are not using generative AI with regard to privacy issues, go over to our free Slack group at TrustInsights.ai/analyticsformarketers, where you and over 3,000 other marketers are asking and answering each other’s questions every single day.

And wherever it is that you watch or listen to the show.

If there’s a channel you’d rather have it on instead, go to TrustInsights.ai/tipodcast; you can find us on almost every major platform.

Now while you’re on the platform of your choice, please leave us a rating and a review.

It does help to share the show.

Thanks for tuning in, and we’ll talk to you next time.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
