
{PODCAST} In-Ear Insights: Should AI Adopt a Clinical Trials Process?

In this week’s In-Ear Insights, Katie and Chris discuss the current state of AI deployment. Companies are rushing ahead to put models and algorithms into action with little to no due diligence, and the consequences can be disastrous. Should AI adopt a practice similar to clinical trials, where a model must prove that it causes no harm first? Listen in for how to think about AI and machine learning from a clinical perspective and ways to proactively apply best practices from the pharmaceutical industry to your AI.


Watch the video here:


Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:02
In this week’s In-Ear Insights, we are talking about AI and machine learning and rushing ahead without some due diligence and caution.

So, Katie, one of the things that has been discussed about AI is that there’s obviously a lot of problems with things like bias and unintended consequences and not being able to know what your models are actually doing.

I sat through a briefing last week with IBM on interpretability and explainability.

And we even had to level set at the beginning of that to say what those terms meant.

And it got me thinking, why isn’t there a process like clinical trials for AI? It seems like there’s gotta be, if you’re going to have something that’s a life or death thing, and in a lot of cases AI is making life or death decisions, with self-driving cars and medical decisions, and I think it should have the same level of rigor.

So for those who are unfamiliar, would you walk through what the steps of a clinical trial are?

Katie Robbert 2:19
So the clinical trial very much follows the scientific method, which is something that we talk about a lot.

So you have to have a hypothesis that you’re testing. Now, before I start to dig in and people start writing in to correct me, I want to be clear: I worked on Small Business Innovation Research clinical trials.

And so while Yes, there were pools of people, we weren’t testing drugs on them.

We were testing software that would help with their journey in their substance abuse and pain management.

So I just want to sort of, you know, level set there that I was not involved in clinical trials where we were actually testing life or death drugs.

We were testing, you know, interventions, which is really the term for it.

So a clinical trial is, basically you are testing out this idea or this thing.

And so, you know, in the software side of it, which would probably mirror more what you’re talking about with AI, you have to come up with a hypothesis, you have to sort of, you know, do your due diligence of research of people who have done something similar to this before, understanding the results that they got to learn from it.

I think that first step, that initial research, is the thing that’s probably most often skipped over, because if you’re building a new widget or whatever using AI, you’re not necessarily going back and saying, has anyone else done this before?

So a lot of times people are reinventing the wheel.

And so once you have your hypothesis, you then have to come up with this whole plan of how you’re going to get from hypothesis to result in a clinical trial.

It often involves recruiting people to test out this thing that you’re building.

And so you have to, you know, randomize, you have to have your control and your experimental groups, you have to have, you know, blinded studies, you have to have your A/B testing, you have to make sure you’re tracking everything very carefully along the way, you need to make sure that people understand if there are any risks involved.

And then you have to collect a certain amount of data in order for it to be a significant sampling of that particular type of population, whatever that population is.

And then you, you know, continue the process until you get some sort of an outcome to say, okay, this is working, or this isn’t, and then you continue to repeat that until you get a workable thing that can essentially go out into the market.
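As a rough illustration of the study design described here, randomized assignment into control and experimental groups followed by a significance test on the outcome, here is a minimal sketch. The participant list, the outcome function, and the choice of a two-sample t-test are assumptions made for illustration; a real trial also involves blinding, sample-size planning, and risk disclosure.

```python
# Minimal sketch of a randomized, controlled comparison: randomly assign
# participants to control or experimental groups, measure an outcome for each,
# and test whether the difference between groups is statistically significant.
# Group names, outcomes, and the t-test choice are illustrative assumptions.
import random
from scipy import stats

def run_mock_trial(participants, measure_outcome, seed=42):
    """Randomly split participants, measure outcomes, and compare the two groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    control, experimental = shuffled[:midpoint], shuffled[midpoint:]

    control_outcomes = [measure_outcome(p, treated=False) for p in control]
    experimental_outcomes = [measure_outcome(p, treated=True) for p in experimental]

    # Two-sample t-test: is the difference between groups statistically significant?
    t_stat, p_value = stats.ttest_ind(experimental_outcomes, control_outcomes)
    return {"t_stat": t_stat, "p_value": p_value, "significant": p_value < 0.05}
```

The statistics are only one piece; the documentation, sample size, and risk disclosure Katie mentions matter just as much in an actual trial.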

And so the clinical trials that I worked on would last upwards of three or four years.

And that’s just testing software; that’s not passing out drugs and waiting, you know, years to see if there are side effects that come into play with a person’s individual, unique DNA.

So does that paint the picture? Does that answer your question, Chris?

Christopher Penn 5:15
Yeah.

And, again, you have worked in pharma and medical, I have not; the closest I’ve gotten was pre-med in college.

But my understanding from having read up a bunch for the pandemic was that there are basically four phases to a trial, right?

There’s phase one, which is, does this thing cause harm? Like, if it does, we should probably not use it. Two, does it work at all?

Three, does it work better than the existing standard of care? And then four, does it have any long-term effects? And phase four occurs after a drug has been released to the market.

That’s my general understanding of the framework.

I know there are variations across the board.

And I think when it comes to AI, that feels like a good framework to use, because what happens a lot of the time, like we were talking two weeks ago about OpenAI’s new GPT-3 model, is people are all rushing and saying we’ve got to use this thing.

Let’s get it in the field.

Let’s be generating SEO content, generating your social media content.

Let’s be writing new books with it.

And so we’re kind of rushing to deployment and we have done zero of the phases.

When we look at something like a self-driving car, for example, that phase one, does it cause harm, seems like a step we should not skip.

Unknown Speaker 6:32
You’re right.

Katie Robbert 6:34
No, we, yeah, no, we shouldn’t be skipping that.

But yet, so there’s, I guess, another factor: if you’re doing a clinical trial, you’ve likely been given some sort of a grant.

And so there’s accountability for how you’re spending that money.

Whereas when you’re developing AI, it’s likely just money generated from the company itself.

And so you can spend it however you want.

So in a clinical trial, at least the ones I know, one of the banes of my existence was submitting all the financials back to the government to say, this is how we spent the money.

And this is how we responsibly are doing this clinical trial.

And if for a second, they thought that you weren’t being responsible with your testing, where humans were involved, they would pull the plug, they would pull the money.

The same is not true.

If you’re developing AI in a private institution, like if you and I suddenly decide to start developing something, then who’s going to tell us we can’t do it? Or who’s going to tell us we’re doing it wrong?

Christopher Penn 7:44
Ah, that’s a bit problematic, because, you know, we’ve talked in the past about the example of this company that created this predictive algorithm for geo targeting your ideal customers, and they essentially just reinvented redlining, which is illegal.

How do we fix this? Can this be fixed? Or are we just gonna have a world full of rogue AIs out there that are just doing stupid stuff until the, you know, parent company gets sued for doing something incorrect? I mean, how do we bring this idea of not, you know, bureaucratic red tape and slowing everything down just to slow things down, but to actually get people to stop and think, hey, does this thing cause harm? Does it discriminate against a gender? Does it discriminate against a sexual orientation? I don’t think anybody’s doing that right now.

Katie Robbert 8:30
They’re not and so what you’re talking about is it’s not bureaucracy, it’s responsible accountability.

And so, you know, I don’t know how exactly this factors in because it’s, you know, my Monday morning, you know, stream of consciousness but you know, if you think about like, the top tech companies in the world, right now, they’re sort of under scrutiny because everyone’s worried about that monopoly and stuff.

If you have only four companies, Amazon, Google, Facebook and Microsoft,

Christopher Penn 9:09
Microsoft, IBM,

Katie Robbert 9:10
and IBM, you know, if they are the only ones calling the shots, then you have five really large companies with their own internal issues and processes, and who’s holding them accountable?

And so I feel like that’s part of why, you know, they want to start to break apart that monopoly where there’s a little bit more diversity, there’s a little bit more transparency and oversight.

But if the company that has the means to create AI, over and over and over again, is just sort of doing it in this black box, and we can’t tell what’s gonna happen, nothing’s gonna change.

And so there needs to be, you know, I hate to say it because I hate committees, but there needs to be some kind of a committee, you know, that really says, is this a good idea? Do we need this thing? Just some sort of accountability, of checking in, like, hey, we’re building a self-driving car.

Okay, then committee goes, alright, how are you testing that? Oh, we’re not.

Okay, that’s a problem.

So let’s take a step back for a second.

Companies don’t want to do that, because it’s going to cost them more money and take more time to do, and consumers want things right now; they want it instantaneously.

They want a self driving car already.

Whereas it’s not ready.

Because if you still have people physically driving a car, the self driving car has to constantly correct for me getting distracted and suddenly braking.

Whereas if every single car was a self driving car, then they could all operate on the same grid system.

So it’s like, I feel like I’m rambling a bit.

But I don’t know how to solve that problem.

While we have these tech monopolies, of the same five companies only being the ones to have the resources to develop any sort of sophisticated AI, right?

Christopher Penn 10:58
With the example of autonomous vehicles, you have the National Transportation Safety Board, right? That exists as an agency to inspect for things.

There is no version of that for, say, a social media algorithm.

And yet, you could make the very plausible case that if you were spreading massive disinformation about a medical condition, or the safety and effectiveness of a drug for a pandemic, that algorithm could legitimately end up killing just as many people as a car that just kind of goes and does its own thing.

So from a practical perspective, then, how do companies who want to behave ethically, and I realize it’s a big if, but how do companies adopt this? Do they do the clinical trial process and build that process internally, say, okay, first, we’re going to test this for known harms?

Does it harm our business? Does it harm the customer? Does it produce unnecessary risk from a regulatory perspective? Like, for example, does it discriminate? If so, is there a bias that is illegal? How do we think about building those processes so that we can avoid that?
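One concrete way to start the "does it cause harm" phase is a simple disparate impact check. The sketch below compares a model’s positive-decision rates across groups and flags any group falling below the four-fifths (80%) threshold often cited in US fair-employment guidance; the group labels, data, and threshold are illustrative assumptions, not a legal standard for any particular use case.

```python
# Minimal sketch of a disparate impact check: compute positive-outcome rates per
# group and flag any group whose rate falls below 80% of the best-treated group
# (the "four-fifths rule"). Group names and records are hypothetical.
from collections import defaultdict

def disparate_impact_check(records, threshold=0.8):
    """records: iterable of (group_label, model_decision) where decision is True/False."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        if decision:
            positives[group] += 1

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Flag any group whose selection rate is under 80% of the best-treated group's rate.
    flagged = {g: r for g, r in rates.items() if best > 0 and r / best < threshold}
    return rates, flagged

rates, flagged = disparate_impact_check([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates, flagged)  # group_b's rate is well under 80% of group_a's, so it is flagged
```

A check like this only surfaces a disparity; deciding whether that disparity is justified or illegal still requires human and legal review.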

It’s kind of like the Motion Picture Association; they ended up creating movie ratings of their own because they did not want the government to be regulating their content.

So, you know, rated R, rated NC-17, whatever, they voluntarily came up with this.

How do companies do the same thing so that the government doesn’t get up in our business? And then you have a bunch of, you know, octogenarian geriatric people trying to legislate against, you know, deep neural networks.

Katie Robbert 12:26
You know, it starts with transparency.

And so the reason why people feel like they need to intervene, to step in, is because they don’t know what’s happening.

It’s like, well, I don’t know what you’re doing.

So let me just, you know, get in there and see for myself, but if you offer up and say, This is what we’re doing, we’re actually developing this kind of thing.

They’re like, Oh, that’s all you’re doing.

All right, I don’t care.

Let me go move on to something else.

And so it’s getting ahead of the speculation.

It’s getting ahead of people saying I need to know more, and I need to have an opinion about that thing.

And so, you know, you brought up the example of Facebook, which is, you know, we could certainly talk about that for hours.

You know, Facebook is self governing.

They’re the only ones who are making the decision whether or not their algorithm is serving up the right content or the wrong content.

And if you boil that down to a very simple example, in best practices, developers should never QA their own code.

Because they’re too close to it.

You need someone outside who’s more objective to say, well, I don’t understand what this does, or did you mean for it to do it this way, and start to poke holes in it, like a proper QA engineer. The tester, someone who didn’t develop the thing, knows what the thing is supposed to do, and then can objectively say it’s not doing the thing it’s supposed to do. Whereas the person who developed it is blinded to it and says, yeah, it’s working exactly as I want it to.

You know, it’s very much the same thing with developing AI.

You should have people who were not involved in building it, doing the testing of it, to make sure and then that group in and of itself should be comprised of, you know, different age groups, different backgrounds, different genders, you know all of those things in order to make sure that you don’t accidentally create something that’s incredibly discriminatory and racist.

Christopher Penn 14:14
I think there could be a cottage industry in AI auditing, honestly, if you’re listening to this, just saying, hey, did you test for these things? Did you do these things? Did you, you know, follow the checklist of best practices? Your example of Facebook, I think, is really important in terms of transparency, because there’s an interesting dichotomy between Facebook and LinkedIn, the two networks that have talked a lot about algorithms.

Facebook doesn’t talk about its algorithm.

We can see the results of it, but they do share a tremendous amount of data that you can dig into.

You can use services like CrowdTangle, the company they bought, to export massive amounts of data.

On the flip side, LinkedIn has its engineers out on podcasts and YouTube shows and stuff, talking all the time about their algorithm, but they don’t share their data.

They know their data is locked down tighter than Fort Knox.

And so maybe that’s an avenue we can explore too, like, is there a minimum amount of data that these companies should make public, simply as a public good, to say, yes, here’s something you can audit, it’s been anonymized and de-identified, but someone else can audit it independently, and at the same time have somebody saying, yes, here’s how we’re doing.

You know, here’s how the newsfeed broadly works.

Not that you have to give away the secret sauce.

But when we look at the two algorithms, Facebook optimizes solely for engagement, like they want eyeballs on stuff all the time.

And we see the result of this in the fact that it tends to promote more extreme content all the time.

LinkedIn, their algorithm has seven different optimization points, one of which is complaints, and another of which is engagement, because their objective is not eyeballs alone.

It’s one of them.

But because 40% of their revenue comes from their enterprise sales software, they need people to stay on the network and feel like it’s a safe place.

And so maybe that’s the level where a national AI Safety Board could come in and say, like, yeah, Facebook, you need to add some more safety features, and LinkedIn, you need to show us the data to back up what you’re saying is true.
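To make the contrast concrete, here is a purely hypothetical sketch of a single-objective feed score versus a multi-objective one that also penalizes predicted complaints. Neither function reflects Facebook’s or LinkedIn’s actual algorithm; the signals and weights are invented to illustrate the idea.

```python
# Hypothetical illustration only: a ranker that scores purely on predicted engagement
# versus one that also downweights content likely to generate complaints.
def engagement_only_score(post):
    return post["predicted_engagement"]

def multi_objective_score(post, engagement_weight=1.0, complaint_weight=5.0):
    # Penalize content that is likely to generate complaints, even if it engages.
    return (engagement_weight * post["predicted_engagement"]
            - complaint_weight * post["predicted_complaint_rate"])

post = {"predicted_engagement": 0.9, "predicted_complaint_rate": 0.2}
print(engagement_only_score(post))   # 0.9: extreme-but-engaging content ranks high
print(multi_objective_score(post))   # -0.1: the same post is suppressed once complaints count
```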

Katie Robbert 16:16
So it’s interesting, we started this conversation with why isn’t AI more like a clinical trial? So the big thing with a clinical trial is it’s all documented and published, and anybody can, for any kind of drug out there on the market that’s been run through a proper clinical trial, go and research exactly how it was tested from start to finish.

You cannot do that with AI and the example you’re giving about LinkedIn and Facebook is exactly that.

It all comes down to transparency.

And so no, you don’t have to go into the nitty gritty of exactly how the algorithm works.

But the less you’re giving away, the more people assume you have something to hide, and the more suspicious they become.

They’re like, something’s not right here, why can’t you just tell me what it is that you’re doing? And that is a big part of it.

And so, you know, Chris, one of our core values is we’re transparent.

We have nothing to hide; we’re not going to give you, you know, a step-by-step breakdown of how we’ve built certain kinds of code.

But if you want to know how we did it, we’re going to tell you; you’ll probably be bored to tears and fall asleep.

But the point is, we have nothing to hide.

And I think that that is the big difference between developing AI and a clinical trial.

People are so protective about their technology right now, because it’s such a competitive market.

Everybody wants to be first.

Everybody wants to get rich.

Everybody wants to be that big household name, that they are sacrificing actual, like, usability.

And is this even a good thing to be doing? Because it’s all about the mighty dollar right now.

Christopher Penn 17:53
It’s interesting you mention that, because I think there’s another angle to this, which is phase three of a clinical trial, which is, is what you come up with better than the existing standard of care? And in the world of AI, I think there’s a very real problem where, if you implemented that, you would realize that a lot of what’s being promoted out there is not better than just your standard of care.

So, real simple example.

When we do predictive analytics and we talk about time series forecasting, you know, there are some companies cranking out these incredible, you know, super deep feedforward and recurrent neural networks and stuff like that.

And in academic testing, it has been shown that many of these neural networks don’t produce much more than a 1% increase in accuracy over the old-fashioned ARIMA model.

And so in a clinical trial setting, your brand new, fancy, super expensive, super cool sounding neural network is not better than the standard of care; it’s computationally 10 times more expensive and doesn’t deliver better results.

So you would actually fail a clinical trial at that point, because you wouldn’t be able to show that you improved on the standard.

Okay.
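A sketch of what that "better than the standard of care" test might look like for forecasting: fit a plain ARIMA baseline, fit the candidate model on the same training data, and only accept the candidate if it beats the baseline on held-out observations by a meaningful margin. The ARIMA order, holdout length, and 5% improvement threshold are assumptions chosen for illustration, and the candidate is a stand-in for any forecaster with the same interface.

```python
# Minimal sketch of a "beats the standard of care" check for time series forecasting.
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_absolute_error

def beats_standard_of_care(series, candidate_forecast, holdout=12, min_improvement=0.05):
    """candidate_forecast: callable taking (training series, horizon) and returning predictions."""
    train, test = series[:-holdout], series[-holdout:]

    # Standard of care: a plain ARIMA baseline (order is an illustrative assumption).
    baseline = ARIMA(train, order=(1, 1, 1)).fit()
    baseline_error = mean_absolute_error(test, baseline.forecast(steps=holdout))

    candidate_error = mean_absolute_error(test, candidate_forecast(train, holdout))

    # Require the candidate to cut error by at least min_improvement (e.g. 5%);
    # otherwise the extra complexity is not worth it, the "phase three" failure.
    return candidate_error < baseline_error * (1 - min_improvement)
```

In practice you would also weigh computational cost, since a model that is only marginally more accurate but far more expensive to run fails the same spirit of the test.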

If that was applied to AI, and we look at, you know, Scott Brinker’s martech 8000 landscape, I think of all the companies that say they’re doing AI, we’d be able to knock off 70 or 80% and say, nope, not any better than the existing,

Unknown Speaker 19:14
which

Christopher Penn 19:16
I don’t know, how do you feel about that? Do those companies know it? Are they just trying to take advantage of people who are like, oh, it’s AI, it must be good?

Versus, oh, this will get me the business results I want.

Katie Robbert 19:31
You know, not knowing the inner workings of all of those companies.

I would say it’s probably about 50/50.

I would say if you lost a good chunk of those companies, you could still get the same results.

Chris, it was you last week who pointed out that, you know, the first rule of Google’s machine learning handbook is, if you don’t need to use machine learning, don’t use it.

It doesn’t necessarily make your thing better.

And so, you know, we’ve talked about this with, you know, RPA.

And so if automating something doesn’t actually save you anything, if it actually makes things more difficult, then don’t do it; just pull the numbers manually, it’ll be fine.

You know, you’ll get the exact same result.

If your AI is constantly breaking, and you’re constantly fixing it, then maybe it’s not the best solution.

And so I think, you know, when we’re thinking about these companies who are slapping the AI sticker, you know, on their website, it is a shiny object, and people have this misunderstanding that AI saves time, saves money, you know, makes things faster, shinier, better, smarter.

That’s not necessarily true.

Sometimes it causes more harm, because, you know, we know that AI is only as good as the data that it’s fed.

And if it’s fed bad data, if it’s fed biased data, if it’s fed racist data, then it’s going to produce that thing.

And so, you know, there should be some sort of, you know, committee or board or something.

And I don’t just mean, you know, five people in the world to look over all of AI. Like, it has to be within your own organization.

It has to be within your own industry, and sort of expand beyond that, so there are people who are saying, hey, is this a good idea? Yep.

Christopher Penn 21:24
So the takeaways, if you are thinking about implementing AI, or you’re evaluating vendors that say they use AI, give some thought to implementing that clinical trial framework.

Step one, does it cause harm? Outline the risks, outline what could go wrong; that’s number one in basic planning: what are the things that could possibly go wrong? Phase two, does the vendor’s tool, or does the AI, work at all? Is it just smoke and mirrors, or is there a there there?

And then step three, is it better than what you’re doing currently? With that framework, you’re probably going to make fewer mistakes; you’re probably not going to rush into very expensive mistakes.

And if you’re dealing with vendors that can’t credibly answer detailed questions about all three, that’s probably a good sign not to work with that vendor.
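One way to operationalize that three-phase framework internally is as a simple gate: record an answer and supporting evidence for each phase, and treat any missing or failed phase as a reason not to proceed. The structure below is an illustrative sketch, not a formal standard; the phase names follow the episode, and the evidence strings are hypothetical.

```python
# Minimal sketch of the three-question framework as an internal go/no-go checklist.
PHASES = ("does_it_cause_harm", "does_it_work_at_all", "is_it_better_than_current")

def evaluate_ai_project(answers):
    """answers: dict mapping each phase to {"passed": bool, "evidence": str}."""
    failures = []
    for phase in PHASES:
        result = answers.get(phase)
        if result is None or not result.get("passed") or not result.get("evidence"):
            failures.append(phase)  # a missing answer or missing evidence counts as a failure
    return {"proceed": not failures, "failed_phases": failures}

decision = evaluate_ai_project({
    "does_it_cause_harm": {"passed": True, "evidence": "bias audit on holdout data"},
    "does_it_work_at_all": {"passed": True, "evidence": "beats a naive baseline"},
    # "is_it_better_than_current" left unanswered, so the project does not proceed
})
print(decision)  # {'proceed': False, 'failed_phases': ['is_it_better_than_current']}
```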

If you’ve got follow-up questions about this or anything else regarding analytics, data science, and AI, drop by our free Slack group; go to TrustInsights.ai slash analytics for marketers. Over 1,200 professionals are in there chatting about this stuff all day, every day.

It’s totally free to join.

We look forward to seeing you there.

If you have questions about this episode, stop by the blog and the associated entries for this episode over at TrustInsights.ai. Talk to you soon, take care.

Unknown Speaker 22:37
Want help solving your company’s data analytics and digital marketing problems? Visit TrustInsights.ai today and let us know how we can help you.


