Mystery Shopper Research: Definition and Value (Part 2)

Daniel Ord talks on Mystery Shopper Research

Welcome to Part 2 in our Mystery Shopper Research Series, where I continue our discussion of this intriguing and often misunderstood form of Customer research.

In Part 1 of this series I introduced –

  • Mystery Shopper Research as another window into your Customer Experience
  • Some common misconceptions about Mystery Shopper Research
  • The simple answer as to why Mystery Shopper Research programs are often so poorly run
  • Why Mystery Shopper is best understood as a qualitative form of research

Here is a link to Part 1 in case you haven’t read it yet –

https://www.omnitouchinternational.com/mystery-shopper-research-a-window-into-customer-experience-part-1/

In this Part 2 post I cover –

  • Our definition of Mystery Shopper Research
  • The value that Mystery Shopper Research provides
  • Where Mystery Shopper results fit into classic CX data architecture
  • Why it’s important not to confuse Mystery Shopper Research with Customer Satisfaction surveys – and how they work together
  • What comes next in Part 3 of this series

What is Mystery Shopper Research?

Here is our definition of Mystery Shopper Research –

Mystery Shopper Research is a qualitative research method that employs trained Mystery Shoppers to conduct preplanned scenarios across selected journeys and touchpoints.  They capture both objective scores for predefined attributes and subjective feedback to illustrate what Customers actually go through.

In Part 1 of this series we shared how Mystery Shopper Research is best used as a qualitative research method – one where the depth and richness of learnings are the research focus, not statistical sampling practices.

As we progress through these posts I will break down each component of our definition, explain what’s involved, share the practices that help and give real world examples.
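To make the ‘objective scores for predefined attributes and subjective feedback’ part of the definition concrete, here is a minimal sketch of what one Mystery Shopper observation could look like as a data record. The field names, attributes and scenario are hypothetical illustrations, not part of any standard Mystery Shopper format –

```python
from dataclasses import dataclass

@dataclass
class MysteryShopObservation:
    """One preplanned scenario carried out by a trained Mystery Shopper.

    Hypothetical structure for illustration only – these field names and
    attributes are not a prescribed Mystery Shopper format.
    """
    journey: str                       # e.g. "Open a new account"
    touchpoint: str                    # e.g. "Live chat"
    scenario: str                      # the preplanned situation acted out
    attribute_scores: dict[str, bool]  # objective scores for predefined attributes
    subjective_feedback: str           # what the shopper observed and felt

example = MysteryShopObservation(
    journey="Open a new account",
    touchpoint="Live chat",
    scenario="First-time Customer asks when their first bill will arrive",
    attribute_scores={
        "greeted_by_name": True,
        "explained_first_bill_date": False,
        "offered_follow_up": True,
    },
    subjective_feedback=(
        "The Advisor was friendly, but skipped the billing explanation, "
        "which left me unsure about what would happen next."
    ),
)
```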

What value does Mystery Shopper Research provide?

To understand the value of Mystery Shopper Research, it helps to know where Mystery Shopper findings fit into a classic Customer Experience measurement framework.

As a refresher for CX folks, the Customer Experience measurement framework answers two big questions:

  1. Which Customer experiences are we going to measure?
  2. How are we going to measure them?

We will look at which experiences you can measure using Mystery Shopper Research once we introduce our 7-Step Design Process for a Mystery Shopper Program.

That will come in the next post in this series.

But before we get into how to measure using Mystery Shopper Research, we should understand how it fits into our overall CX data architecture.

It’s time to revisit our CX know-how

When it comes to how to measure Customer experiences, CX folks have learned to assemble different types of metrics to tell the Customer story.

Let’s take a look at these different types of metrics and how to assemble them.

The CX metrics ‘Layer Cake’

A CX metrics architecture can look like a Layer Cake.

In classic CX metric architecture there are three layers of metrics that, when combined, allow you to see and contextualize –

  1. What your Customer actually went through
  2. Their perception of what they went through
  3. What they do as a result of their perception

Examples can help here.

Here’s our first example –

  1. The building supplies sent to a Construction Manager’s job site were delivered late
  2. The late delivery cost the Construction Manager money, time and added frustration
  3. The Construction Manager soon began the search for alternate suppliers

Here’s another one –

  1. The Customer did not receive an email reply from the company they wrote
  2. The Customer felt ignored and disrespected
  3. The Customer posted a nasty comment about the company on social media

And one more –

  1. The Customer was upgraded from economy to business class by the airline
  2. The Customer felt happy and appreciated
  3. The Customer chose to fly with that airline again – even without expectation of an upgrade

The three types of metrics in your Layer Cake

Here are the three types of metrics in our Layer Cake with their official names and descriptions –

1  What your Customer actually went through – Descriptive metrics 

Descriptive metrics come directly from inside your organization’s internal systems.

Examples include how long the Customer waited for us to answer their chat (3 minutes and 26 seconds) or whether we delivered their order on time (we were two hours and 11 minutes late with the delivery).

Organizations have a variety of operational systems that can tell them what happened to or with a Customer.

Descriptive metrics do not involve asking the Customer how they felt or what they would score.  They are purely from internal operational systems.

2  The Customer’s perception of what they went through – Perception metrics

Perception metrics come directly from the Customer through the survey process.

When we ask Customers to rate us – perhaps giving us a 4 out of 5 on a 5-point scale or an 8 on the Net Promoter Score scale – we’re gathering Perception metrics.

It’s important to remember that all Customers have perceptions – whether they respond to our survey or not.  Or whether we survey them or not.

3  What the Customer does as a result of their perception – Outcome metrics

Outcome metrics reflect what Customers do (now or later) as a result of their perception.  They reflect behaviors.

Outcome metrics tie to our organization’s business results and help us see how perceptions impact those results.

Your CEO cares a lot about Outcome metrics

Of course your CEO cares about what really happens to Customers. And the perceptions they have of what happened to them.

But they likely care most about what Customers do (or don’t do) based on their perceptions – the Outcome metrics.

That’s because Outcome metrics tie to business results.

Outcome metrics include repeat business (or not), positive (or negative) word of mouth, Customer willingness to try new products and services (or not), Customers extending their ‘life’ with the organization (or not), impacts on cost (good or bad), and resilience.

Where the term resilience describes how Customers are more willing to forgive an organization they trust when that organization makes a mistake.

Put another way – Outcome metrics reflect the things a CEO cares about.

Because those Customer behaviors impact business results.

You can use the Layer Cake architecture to tell the story of what actually happened to Customers (descriptive), how that impacted what they thought and felt (perception) and how they took those thoughts and feelings forward into what they decided to do next (outcome).
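
If it helps to see the Layer Cake as data, here is a minimal sketch of how the three layers might sit side by side for a single Customer interaction. The field names and values are hypothetical illustrations of the idea, not a real schema –

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DescriptiveMetrics:
    """What actually happened – taken from internal operational systems."""
    chat_wait_seconds: int        # e.g. 206 seconds = 3 minutes and 26 seconds
    delivered_on_time: bool

@dataclass
class PerceptionMetrics:
    """How the Customer rated what they went through – gathered via surveys."""
    satisfaction_1_to_5: Optional[int]  # None if the Customer was never surveyed
    nps_0_to_10: Optional[int]          # or chose not to respond

@dataclass
class OutcomeMetrics:
    """What the Customer did as a result – behaviors tied to business results."""
    repeat_purchase_within_90_days: bool
    posted_negative_review: bool

@dataclass
class CustomerInteraction:
    """One Customer interaction told through all three layers of the Layer Cake."""
    customer_id: str
    descriptive: DescriptiveMetrics
    perception: PerceptionMetrics
    outcome: OutcomeMetrics

# A hypothetical record in the spirit of the late-delivery example above
interaction = CustomerInteraction(
    customer_id="CM-1042",
    descriptive=DescriptiveMetrics(chat_wait_seconds=206, delivered_on_time=False),
    perception=PerceptionMetrics(satisfaction_1_to_5=2, nps_0_to_10=3),
    outcome=OutcomeMetrics(repeat_purchase_within_90_days=False, posted_negative_review=False),
)
```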

Mystery Shopper results fit in nicely with our Layer Cake architecture

Take a moment and answer this question…

Across the three metrics categories – Descriptive metrics, Perception metrics and Outcome metrics – where do you think Mystery Shopper results ‘fit’ best?

If you said Descriptive metrics – what really happened to the Customer – you’d be right.

That’s what Mystery Shopper Research helps us illuminate.

We’re not asking real Customers about their perception of what they went through here. And this isn’t a survey program.

Mystery Shoppers are not real Customers.  That’s really important to understand.

As we will expand on further in this series, the best Mystery Shoppers are both good actors (when we need them to be) and good observers.

They help the organization study some aspect of a Customer journey or touchpoint to understand what ‘real Customers’ actually go through.

Let’s look at some examples.

Mystery Shopper adds to the depth of understanding what real Customers actually go through

I like to use the word ‘actually’ when I talk about the kinds of learnings we gain from Mystery Shopper Research.

  • Did the online application process actually work?
  • Did the email tone actually bring the company brand to life?
  • Did the Sales Advisor actually explain when the Customer’s first bill would arrive?
  • Were the theme park staff actually in character at the rollercoaster?
  • Were the Guest’s pillows actually changed after they complained to the hotel staff?
  • Were the right steps actually followed to confirm that this was a vulnerable Customer?
  • Did the Chatbot actually answer the question that was asked?

I think it’s a good mental exercise to add the word ‘actually’ when you’re talking about what Customers go through.

Because it replaces the assumptions that the organization – especially senior and functional leadership – holds about what Customers go through with what was actually observed.

And no matter how sophisticated the organization is, there aren’t systems that measure everything that a Customer goes through on a particular journey or with a particular touchpoint.

Mystery Shopper results are not the same thing as Customer Satisfaction survey results

Mystery Shoppers are not real Customers.  They are standing in for real Customers.

Our Mystery Shoppers employ their acting skills – where needed – and observational training to help the organization understand what real Customers actually go through.

And they receive payment for doing this.  Which should help underscore how they’re not real Customers. 

Just remember that our Mystery Shoppers help us gather findings that fall into the Descriptive metrics category of our CX metrics architecture.

When you want to know what real Customers think, then please ask real Customers.

We know that when we ask Customers to rate their perceptions we are gathering findings that fall into the Perception metrics category of our CX architecture.

Mystery Shopper Research and Customer Satisfaction Research can work together

These two research methodologies are not at odds.  They can readily work together.

Here are examples of what I mean –

  • Low Customer Satisfaction scores received on a Customer transaction or journey survey might highlight a need to study that touchpoint or journey more deeply
  • Comments received from real Customers can point to broken processes or untrained Employees that we need to study more deeply
  • Subjective comments received from Mystery Shoppers can give us ideas to improve a Customer’s touchpoint or journey in a meaningful way

We will look at many different real world ways that organizations use Mystery Shopper Research in upcoming posts in this series.

In closing and what comes next

For Part 2 in this series I covered –

  • Our definition of Mystery Shopper Research – which we will break down and explore as we progress in this series
  • The value that Mystery Shopper Research provides – particularly in illuminating what Customers actually go through
  • How Mystery Shopper results fit into classic CX data architecture – namely in the Descriptive metrics category of what actually happened
  • Why it’s important not to confuse Mystery Shopper Research with Customer Satisfaction surveys – and how these two approaches work together

In Part 3 of this series we will carry on with our exploration into Mystery Shopper Research and –

  • Introduce our 7-Step Design Process for a Mystery Shopper Program
  • Work through Step #1 in our design process – “What do I want to learn?”
  • Share common research objectives for Mystery Shopper along with many real world examples.

Thank you for reading and see you again for Part 3!

Daniel Ord

[email protected] / www.omnitouchinternational.com
