{PODCAST} In-Ear Insights: Mitigating Biases in Market Research

In this episode of In-Ear Insights, Katie and Chris talk about mitigating biases when conducting market research. Whether you’re conducting surveys, 1:1 interviews, or focus groups, what you bring into your research can dramatically affect – and even invalidate – the outcome. Learn how to mitigate these problems and keep your research as free of tainted influences as possible. Tune in to find out how!

Watch the video here:

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:02

This is In-Ear Insights, the Trust Insights podcast.

In this week’s In-Ear Insights, Katie, I have to ask you, I want to do a quick survey here.

How much do you like working with me: a lot, super a lot, or the most amazing thing ever?

Katie Robbert 0:22

Option D,

Christopher Penn 0:26

There’s a little problem here with this survey, I hope it’s apparent, right?

Katie Robbert 0:31

It’s because you, the person with the agenda, are asking me the question, and therefore I don’t feel comfortable giving you honest feedback, because it’s about you.

Christopher Penn 0:46

And it’s a loaded question, too. At no point did I give you an out, right?

Katie Robbert 0:50

Right.

And so there’s two things wrong with it.

One is the questionnaire is biased, and two is the interviewer’s bias.

Christopher Penn 0:59

So if I want to get honest feedback about the company, or a brand, or a client’s products or services, or maybe something more sensitive like a healthcare matter, how do we get honest answers out of people that we can then take action on? Because customer feedback, voice of the customer, all of those things, we give a lot of lip service in marketing to saying the most important thing you can do is listen to the customer.

But that approach can be fraught with risks if you’re not actually listening to the customer.

Katie Robbert 1:34

So it depends, it depends on what type of feedback you are going after.

And so it also depends on whether or not you are gathering that information internally from your teams, or externally from your customers.

And so for those of you who haven’t heard the other 100 podcast episodes where I’ve mentioned that I used to work in clinical trials: I used to work in clinical trials. One of the projects that I worked on, which I don’t think ever saw the light of day, because it was actually a really challenging concept, not to execute, but to actually get enough people to participate to make it a statistically significant number of responses.

It was this model called computer adaptive testing, which is relevant to what we do today.

And essentially, the idea behind it was I worked in the substance abuse sector of mental health.

And we developed a computerized intake survey.

So we adapted a standard intake survey to be administered by a computer. Sounds straightforward, right? Well, when you’re dealing with people’s lives, and their livelihoods, and their honesty, and their ability to recover from severe drug abuse, you need to determine whether or not they can be more honest with a person or with a computer.

And the clinical trial, which was done numerous times in numerous different ways, showed that people were more honest in front of a computer than they were in front of a human.

Great.

So that’s part one.

The second part was that we wanted to then create a computer adaptive testing version of this intake survey.

So when someone comes into the clinic, they would say, these are all the drugs that I’ve taken in the past 30 days, this is my readiness to change.

What we wanted to do was create a version where as the person answered the question, the question would either get easier, or harder depending on their response.

So, if you want to think about it, it’s almost kind of like a recommendation engine algorithm, but slightly different.
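For readers who want to see the adaptive idea in code, here is a minimal sketch in Python. It is not the instrument from the trial, just an illustration of the general technique: each answer nudges the difficulty of the next question up or down. The question text, difficulty scores, and selection rule are all invented for the example.

```python
# Minimal sketch of computer adaptive testing (illustrative only; not the
# clinical instrument discussed in the episode). Each question carries a
# made-up "difficulty" score, and each response moves the next question
# easier or harder, similar in spirit to a recommendation engine.

QUESTIONS = [
    {"text": "In the past 30 days, did you use any substances?", "difficulty": 1},
    {"text": "How many days in the past 30 did you use?", "difficulty": 2},
    {"text": "How ready do you feel to change your use? (1-5)", "difficulty": 3},
    {"text": "What situations make it hardest to avoid using?", "difficulty": 4},
]

def next_question(current_difficulty: int, last_answer_was_detailed: bool) -> dict:
    """Pick the next question: go deeper after a detailed answer, ease off otherwise."""
    target = current_difficulty + 1 if last_answer_was_detailed else max(1, current_difficulty - 1)
    # Choose the question whose difficulty is closest to the target level.
    return min(QUESTIONS, key=lambda q: abs(q["difficulty"] - target))

if __name__ == "__main__":
    q = QUESTIONS[0]
    for answer in ["yes", "", "every day"]:  # stand-in responses for the demo
        print(q["text"])
        detailed = len(answer.strip()) > 0
        q = next_question(q["difficulty"], detailed)
```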

In order to do that, there needed to be extensive interviews with these people, the people who were in substance abuse recovery, and that was up to four to eight hours at a time.

And so we, the people running the survey, the people running the clinical trial, weren’t allowed to be the ones to interview these people.

The reason is because we have an agenda, even though we are trying to be scientifically neutral and just test a hypothesis, we still have an agenda.

So we actually needed to bring on outside consultants to help do the interviews.

And it’s not just, okay, here’s your list of questions, go ask them.

You have to train these people in a specific way to say, and not say, certain words that are leading, much like making sure that you are developing an unbiased customer feedback survey and you’re giving people outs. But then there also had to be tandem pairs of interviewers.

The reason for that is because there’s something called interviewer fatigue: after two hours or so, you as the interviewer don’t have the cognitive ability to keep everything straight, and so you have to switch off.

So there were a lot of things going into that. And what we’re talking about, Chris, is probably something more like, hey, you know, you bought a T-shirt from me, were you satisfied? There’s obviously a bit of a difference, but the principles remain the same.

If you are the person who sold me the T-shirt, and I didn’t like it, and you get in my face and say, hey, did you like that thing I sold you? Do you want to buy another one? I’m going to feel a little uncomfortable giving you honest feedback.

Whereas if you sent me an email survey, after the fact, and said, Were you satisfied with this purchase? Were you not satisfied with this purchase? Then I feel a little bit more comfortable answering that question, honestly, because you’re not directly in my face.

And I don’t feel like you’re going to come after me and be like, why didn’t you like the thing?

Christopher Penn 5:46

Even though the feedback is still going to the same person,

Katie Robbert 5:50

the feedback is still going to the same person.

And so in the clinical trial, it was clear that the person who was in recovery was answering questions that would be going to their clinician; however, they felt like they could be more honest with a computer, even knowing that their clinician would then look at everything and give them feedback.

It’s more of that sort of in the moment thing.

So they’ve already answered the questions.

Now they can have a conversation about it, versus someone staring at you saying, you know, did you take drugs in the past 30 days?

Christopher Penn 6:25

because of the lack of feedback, there’s just no feedback loop.

Katie Robbert 6:29

It’s not even a feedback loop.

Because the survey itself wasn’t giving them feedback in real time.

They were literally just answering questions.

What you removed was that person staring you down and asking questions in an inconsistent way.

And so by being given the survey however many times they had to take it, they could give the same response over and over, because they knew the questions were not going to change.

Christopher Penn 6:54

Gotcha.

It’s interesting, because it reminds me, you actually have a separate story on a very different tangent about the ACLU objecting to the use of Boston Dynamics robots, mechanical drones going into low-income areas and, like, taking people’s temperatures and doing COVID testing, saying it was dehumanizing.

And I was thinking to myself, yes, but the flip side of that is that you’re getting a consistent process, a consistent procedure. The same machine is administering the same test, and it’s not being judgmental of the person, like, well, you’re homeless, I’m gonna treat you differently than this person. It’s just gonna sit there, aim an infrared scanner, and say your temperature is this.

So when it comes to our ability, then, to gather this data effectively, do we want to have that layer of abstraction where the machine is simply doing the work for us and we’re not participating at all?

Katie Robbert 7:47

I think partially, so somewhat.

So I think that, you know, let’s say, for example, you work in a large organization, and so you have multiple teams across the board who are maybe connected, maybe disconnected, maybe siloed.

Um, and you’re just trying to figure out, like, hey, what’s going on? Well, if I, as the CEO, say, hey, I want to sit down and talk to you.

Even if I’m the nicest person on the face of the planet, there’s still that bit of fear from the person who I’m talking with of, well, they have the power to fire me.

So I better not say anything wrong.

And so whether or not that’s said aloud is irrelevant; it still exists.

And you can’t get rid of that.

And so by creating some sort of a feedback survey of, hey, how happy are you with the way the organization is structured? Or how happy are you, you know, with the way that I’m doing my job? Or do you feel like you’re getting the right amount of feedback, or the right strategies, or whatever the question is. If you remove the interviewer, if you remove that role, because we were talking a couple of weeks ago about role power and relationship power, if you remove that power from the conversation altogether, you’re more likely to get honest feedback.

The other option is to bring in a completely neutral third party who has no emotional investment in the response one way or the other.

And then you can have a conversation with somebody who is just going to pass along the information.

So whether it’s a machine doing it or a neutral third party, either is always a better option than someone who is actually connected to the thing.

Christopher Penn 9:30

How do you still avoid passing along that bias to the third party, though? I mean, I’ll give an example.

We were working with a client a couple years ago now.

They had commissioned a survey, and we were asked to come in and help design and process it, and they had made a mess of it.

They just made a total mess of the thing.

They were asking, you know, dozens of unrelated questions that had no relationship to each other.

There’s no central thesis to it.

And so their own biases, even though they engaged the third party, their own biases were still so strong that the results were not reliable.

How do you, either as the third party or as the stakeholder engaging a third party, keep yourself from contaminating the process?

Katie Robbert 10:13

I mean, you can ask the question, but that doesn’t mean that the person who’s cutting the check at the end of the day is going to listen.

And ultimately, that’s on them.

And so if you go into that situation, and you see that, that’s the kind of data that’s about to be collected, the best you can do is give them as much information as possible.

This is the quality of data that you’re going to get; this is how reliable your data is going to be, or not be.

This is the methodology that you need to use.

But ultimately, that’s, that’s out of your hands.

And so when we used to run surveys for our teams, when we worked at the agency, this was something we would run up against all the time, and not, you know, not because the teams were trying to deceive the consumers who were responding to the survey, but it was sort of that idea of things rolling downhill, where the client had the headline in mind of what they wanted to get.

And they said, run a survey, so that we can say people said this thing.

So then the teams would be like, Okay, my client has told me, I need to get this information so that we can run this headline.

And then they would come to us, the people who were administering the survey, saying, I need you to develop a set of questions around this headline.

Well, we know that that’s not how it works, you actually don’t start with the outcome.

It’s that solution-in-search-of-a-problem thing.

And so you don’t start with the outcome and say, build a survey around it, because it’s already wrong.

And so there was always a lot of pushback.

And it again, it wasn’t that the teams were trying to do the wrong thing.

But their hands were somewhat tied as well, because the client had already said, This is what I want.

Now, some of it is stubbornness.

Some of it is lack of understanding of how unbiased surveys actually work.

And some of it was just not caring, because at the end of the day, they wanted what they wanted.

And so we would do our best, we would push back, and then sometimes the teams just wouldn’t come to us at all, because they didn’t want to get into the argument with us about what was scientifically correct and what was just, just get me the headline.

Christopher Penn 12:27

In cases like that, when somebody has a very clear agenda, is it just easier, from an outcome perspective and an intellectual honesty perspective, to say, hey, you know, just make it up? Because the data is going to be no good either way?

Katie Robbert 12:47

I would, no, I would never say just make it up.

No, come on, Chris.

Really? No.

Christopher Penn 12:54

I was like, I feel exactly the same way, in the sense that it is so intellectually dishonest, right?

You know, tell us how important our brand is to you on a scale of, oh, awesome, to very awesome, right? I remember there was a politician, you know, not too long ago, that sent out a survey saying, tell me what kind of job this politician is doing: great, amazing, you know, best president ever.

And I’m like, you’re kind of missing a couple of choices on the Likert scale there, buddy.

And when you have data that is that corrupted, I mean, it really is no better than just making it up.

Katie Robbert 13:32

Well, okay, so I personally could not do that.

Just for the record.

Like, I personally do not do that, and for the record, we do not do that as a company.

Christopher Penn 13:41

As in, in that role, like, you know, without naming a particular person at our own agency, who was a manager of a PR team at the time, they honestly would have been better off just making it up, because they spent so much time and so much money and so much effort to not use the data they collected, because it didn’t say what the client wanted, that they literally could have just asked a roomful of stuffed animals.

Katie Robbert 14:04

So our role in that scenario is to make sure that they have as much information as possible about the consequences of publishing that kind of data.

And the likelihood of anything actually ever happening was very low because of the field that we worked in.

And so we’re talking about electronics brands that like, you know, beep and make noise and make people happy.

You know, we’re talking about food that people can consume that, again, make people happy.

And so we’re not necessarily talking about, you know, life or death things.

We’re not talking about vaccine data, for example, we’re not talking about medication and medical intervention.

And so the likelihood of someone calling them out and saying, your methodology is not sound and your data is incorrect, was very, very low, so the risk was low.

That doesn’t mean that it’s not still our job to inform them of, hey, in the event someone questions your methodology, this is what’s going to happen.

And so maybe in that scenario, they have to retract the statement, print an apology, and, you know, it looks bad for the brand and their reputation.

As long as they’re okay with those things, then okay, move forward, because at least they’re aware. It’s like signing a waiver before you go BASE jumping.

They’ve told you you could die, you could crack your skull open; as long as you’re signing your life away, go ahead, they’ve covered their part.

And that’s what we were meant to do as well; we were meant to be that, like, you know, consent form of, do you agree to the following risks if you publish this data as is? If they say yes, okay, go on your way.

And almost, I would say, 10 times out of 10, that’s our role, because building surveys is a completely different specialty; collecting that kind of information is a different skill set.

And it was something that we tried to educate the agency on.

And it just, it wasn’t the right environment to try to do that education, because that’s not what the agency was ever meant to do.

Christopher Penn 16:10

Yep.

When you think about those situations, then, where you have so much role power, you have a paying client that says, I’m cutting you a check to do this thing.

It almost sounds like, then, you have to have leadership that is willing to tell a client something they don’t like: sorry, we’re going to return your check, because we’re not willing to do that.

And that seems to be a relatively rare thing.

Katie Robbert 16:40

That’s absolutely right.

And so it’s, you have two options, you can either say no, and return the check.

Or you can go down that road that I’ve described, of making sure that the client signs off on all of the risks of publishing the data as is.

Now in a perfect world, the client would come to you and say I want to do this thing.

And you would say, here’s a better, more, you know, unbiased way to collect it, and they would be like, oh, great, we appreciate your really smart ideas, and you bring us all this good information, here’s even more money, keep doing even better work for us.

That’s like the ideal situation that pretty much never happens.

And so the role that you’re likely in is just making sure that you’re getting it in writing: here are all the risks of doing it the way you want to do it.

And then just moving forward with the thing. You can say no at any point, but then you also have to be okay with, can we move forward without that money? Now, we’ve done that; we’ve said no, and we’re okay with living without that money.

Because I mean, at the end of the day, you and I are scientific at our core.

And it is very difficult for us to get out of that.

And it’s not a stubbornness thing.

It’s not an unwillingness to compromise thing.

But there are certain things, like mathematical equations, where there’s only one way that the numbers add up, and that’s it; this is not the room for creativity.

Christopher Penn 18:11

It’s true.

I remember, we used to run surveys, one of the things we would tell clients is run the survey as though no one’s ever going to see the results.

Except you, right, assume that no one will ever care.

And, you know, in the public relations and journalism community, make it valuable enough for your own marketing that you get some usable data out of it, even if it never sees the light of day.

Like, if you ask people a question like, what diet foods do you consume, that market research should be enough for you to make some decisions with, even if no journalist ever picks up the story. Because what has happened in the public relations industry in particular, and it’s, you know, a little challenging for folks now, is that so many people ran so many bad surveys that journalists are like, oh, we don’t run survey stories anymore, because we know you’re lying.

Katie Robbert 19:07

Yeah, no.

And that’s, that’s really solid advice.

So I guess we’re sort of coming to the so-what part of the conversation, which is, what do people do about this? And so, you know, one of the things that I noticed, and I haven’t used Google Surveys in a while, but when I was programming the consumer surveys into Google Surveys, which is a very, like, low-pricing tool.

I don’t remember the pricing off the top of my head, but they would help you build in those questions.

So let’s say you said Do you really like working with me? Do you super like working with me? Are you always excited to work with me? And those are your three response options.

Google surveys would be like, cool, you’re missing a couple of things.

How about none of the above? And how about a no?

And they would add those in and you couldn’t move forward until you would add those in.

So the AI in Google Surveys was at least trying to help you get to a somewhat unbiased space.

So if you have no other tools, if you don’t have a data scientist or someone who specializes in survey creation on your team, then at least a tool like a Google survey can help you get some of the way there.
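If you want to apply a similar sanity check to your own question drafts, here is a minimal sketch in Python. It is not Google Surveys’ actual logic, just an illustration of the kind of guardrail being described: it flags answer sets with no opt-out and question wording that leans emotional. The word lists and rules are invented for the example.

```python
# Rough sketch of a survey-question sanity check (illustrative only; not how
# Google Surveys actually works). Flags missing opt-outs and loaded wording.

OPT_OUT_ANSWERS = {"none of the above", "not applicable", "prefer not to say", "no"}
LOADED_WORDS = {"amazing", "awesome", "love", "happy", "sad", "best", "worst"}

def check_question(question: str, options: list[str]) -> list[str]:
    """Return a list of warnings for a single multiple-choice question."""
    warnings = []
    normalized = {o.strip().lower() for o in options}
    if not normalized & OPT_OUT_ANSWERS:
        warnings.append("No opt-out option (e.g. 'none of the above').")
    if any(word in question.lower() for word in LOADED_WORDS):
        warnings.append("Question wording may be emotionally loaded.")
    if len(options) < 2:
        warnings.append("Fewer than two response options.")
    return warnings

if __name__ == "__main__":
    q = "How much do you love working with me?"
    opts = ["A lot", "Super a lot", "The most amazing thing ever"]
    for w in check_question(q, opts):
        print("WARNING:", w)
```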

Chris, how did you learn about writing unbiased survey questions and answers? Like, what was your, you know, aha moment of, oh, this is how we should do it, and what are sort of the rules that you try to apply?

Christopher Penn 20:34

I actually read the research guides by AAPOR, the American Association for Public Opinion Research; they publish a code of ethics and, you know, guides for how to write questions properly, how to do things. CASRO is another organization, and Pew Research, the Pew Research Center, has an enormous library of how to do research properly.

And, you know, they’re one of the gold standards in terms of public opinion research and the ability to do it well.

But it always comes down to this: if you run a piece of research, if you run some market research, and at least some of the answers don’t make you uncomfortable, you’re probably not doing it right.

If you’re asking questions for which you cannot get answers that you don’t want, then that’s wrong, right? So if I ask people about their opinion about TrustInsights.ai, I leave it as an open-ended question.

Okay, what is your opinion? Or, what is your likelihood to buy from TrustInsights.ai in the next 90 days? And one of the answers has got to be never. Like, okay, that doesn’t make me feel great, right? It’s like, I really wish it would be different.

But that’s a sign that you’re headed in the right direction: having answers that you’re not going to agree with, that you wish were different.

And that might run counter to your agenda. It’s a really good way of surfacing: do I have an agenda in running this piece of research?

If I’m asking a question and my reaction to an answer is, oh, I don’t like that answer, then that might be an indicator that, yeah, you do have an agenda, right? As an example, if you ran a survey of your Twitter followers asking, how likely are you to buy something from me? Or how attractive do you think I am? Or how do you like my room background? And, you know, you get scathing answers back and it makes you feel terrible, that’s a sign that you had an agenda you were trying to accomplish. If you get answers back that you don’t feel anything about one way or the other, then you probably don’t have a strong agenda at play.

And your things are probably, you know, not as risky to be asking, because you’re like, okay, people don’t like the thing, cool.

Katie Robbert 22:43

So, to take the polar opposite, Chris Penn: let’s say, you know, you’re asking these questions.

What if you don’t plan on making any decisions with it? Okay, so in the example of, you have an agenda: what if you don’t plan on doing anything about the negative feedback? Why ask the question in the first place?

Christopher Penn 23:05

That’s exactly a good indicator that you shouldn’t bother asking, right? It’s something we say about all analytics: if you’re not going to make any decisions, don’t bother collecting the data, because it’s not going to change anything. There’s that famous Seth Godin quote: if you’re not going to change what you eat or how often you exercise, don’t get on the scale, because it’s not going to do anything for you except make you feel bad and not motivate you to make changes.

We always say data without decisions are distractions; if you’re not going to make a decision, don’t waste your time. It kind of goes back to what we were saying about the surveys. Not that you should make things up, but if you’re going to collect data that is so bad that it is essentially lying, don’t bother.

Just don’t do it. Save yourself the time; like you were saying, find something else to do for the company or for the client, whatever that is, that is a better use of your time.

Because maybe collecting data without making a decision is a waste of time.

Katie Robbert 24:02

It is, it absolutely is, but it happens a lot.

When I’m thinking about asking questions or creating a survey, you know, it’s hard to craft, like, the perfect question unless this is exactly what you’re trained in.

And so I’m trained in it, but it’s not my, you know, core skill set, and I haven’t done it in a long time.

So when I’m trying to put together a survey that is, at least for the most part, unbiased, when I think about the questions, I try my best to remove emotion from the question.

So how happy are you? How sad are you? Because even if your intention is to get negative feedback, you’re already leading someone down the line of Oh, this is a question about being happy.

This is a question about being sad.

And so try to remove that information from the question altogether. And so, you know, it’s a little bit different, but saying, are you satisfied, yes or no, or, on a scale of one to five, how satisfied are you, from not at all to extremely, and making sure that you are including both sides of the equation.

So making sure people have the option to say no.

And that people have the option to say yes, and that people have the option to say, I neither agree nor disagree with this.

And the middle, the middle ground, is kind of crappy data.

But it’s better than forcing people to say I’m always happy, or I’m always sad, because some people might feel neutral about it.

And that’s a really important data point as well.
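To make that concrete, here is a minimal sketch in Python of what a balanced, emotion-neutral satisfaction item might look like as data: a neutral question stem, a one-to-five scale from not at all to extremely, a true midpoint, and an explicit opt-out. The field names and labels are invented for the example, not taken from any particular survey tool.

```python
# A balanced, emotion-neutral satisfaction item (illustrative sketch only).
# Five-point scale with a labeled midpoint and an explicit opt-out, so
# respondents are never forced toward a positive or negative answer.

from dataclasses import dataclass, field

@dataclass
class LikertQuestion:
    text: str
    scale: dict[int, str] = field(default_factory=lambda: {
        1: "Not at all satisfied",
        2: "Slightly satisfied",
        3: "Neither satisfied nor dissatisfied",  # neutral midpoint
        4: "Very satisfied",
        5: "Extremely satisfied",
    })
    opt_out: str = "Prefer not to answer"

    def has_midpoint(self) -> bool:
        """A crude balance check: an odd number of points means a midpoint exists."""
        return len(self.scale) % 2 == 1

question = LikertQuestion(text="How satisfied were you with your purchase?")
print(question.text)
for value, label in question.scale.items():
    print(f"  {value}. {label}")
print(f"  -  {question.opt_out}")
print("Has a neutral midpoint:", question.has_midpoint())
```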

The other side of this is, do you have the skill set or, you know, the mechanism or resources to actually analyze the data? Collecting it is fine; analyzing it is a whole other thing. And that’s probably a whole different episode.

Christopher Penn 25:56

Exactly.

It’s a whole different episode.

I think, to your point, too, if you feel like you have those, you know, potentially emotional reactions, then the best thing you can do is bring in an unbiased third party. It can be another manager in a different department, right? It can be somebody else in your organization, a colleague at work, or a colleague at a non-competing company.

It can be an actual third party agency, like Trust Insights, but bring in somebody who, frankly, doesn’t care what the outcome is, they have no skin in the game.

And they can look at something and say, yeah, this might be a little bit leading, or, no, that’s probably not the best way to ask that question. That will help a lot in reducing the impact of someone’s agenda on the work that you’re doing.

Katie Robbert 26:43

Absolutely.

And the other couple of tips that I would give, if you’re trying to collect customer feedback and you’re hoping to get really honest answers, is don’t do it face to face.

And so if you have an email list, you know, send out the questions that way, if you want to send out a survey, or if you have some kind of an AI chatbot.

You know, people will talk to a chatbot all day long, because the chatbot is just, you know, something that’s pre-programmed.

And so, use your chatbot to get feedback, hey, you did a thing.

Can I get some feedback from you? And then let the chatbot get the information. To your question earlier, Chris, the person responding has the awareness that the information is still going to come to you.

But then it’s not that in-the-moment thing of, you just said something negative.

How dare you say something negative to my face; you’re fired.

I don’t want you as a customer anymore.

You’re out of this company, go, you know, clean off your desk, that kind of thing.

It removes that fear and intimidation from the conversation.
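For anyone who wants to try that approach, here is a minimal sketch in Python of a pre-programmed feedback flow in the spirit Katie describes: fixed, neutral prompts, no human in the loop while the respondent answers, and responses stored for a person to review later. The prompts, storage format, and function names are all invented for the example; a real bot would sit behind whatever chat platform you already use.

```python
# Minimal sketch of a pre-programmed feedback chatbot (illustrative only).
# The bot asks fixed, neutral questions and stores answers for later review,
# so no human is present while the respondent answers.

import json
from datetime import datetime, timezone

PROMPTS = [
    "You recently completed a project with us. Were you satisfied? (yes / no / neutral)",
    "What, if anything, should we do differently next time?",
]

def collect_feedback() -> dict:
    """Run through the fixed prompts and return the responses with a timestamp."""
    responses = {}
    for prompt in PROMPTS:
        responses[prompt] = input(prompt + "\n> ").strip()
    return {"submitted_at": datetime.now(timezone.utc).isoformat(), "responses": responses}

if __name__ == "__main__":
    record = collect_feedback()
    # Append to a simple log a human can review later, after the conversation is over.
    with open("feedback_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```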

Christopher Penn 27:42

Exactly.

See machines for the win.

Again.

Katie Robbert 27:47

Definitely not listening to a thing I said again.

Christopher Penn 27:52

If you’ve got questions or comments about anything we’ve talked about in today’s episode, come on over to our free Slack group at TrustInsights.ai slash analytics for marketers.

We have over 1900 folks talking about all sorts of questions and answers about research data.

I had a question last week where someone asked me about statistical analysis of newsletters, and someone else asked about what to do about a page that has a high bounce rate.

Lots of discussions; share your thoughts, ask your questions, and get your answers. And wherever it is that you are watching or listening from today, if there’s a place you’d rather get it from, go over to TrustInsights.ai slash TI podcast, where we publish on most major channels where you can consume podcasts.

Thanks for tuning in.

We’ll talk to you next time.

Need help making your marketing platforms, processes, and people work smarter?

Visit TrustInsights.ai today and learn how we can help you deliver more impact.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
