
Author: OmniTouch International

Some suggestions for industry Awards entrants

by OmniTouch International

In this short article I share some suggestions for industry Awards entrants.

I like judging industry Awards

This year I’m scheduled to judge awards entries in Dubai, London, Amsterdam and Wiesbaden (Germany).

I think the benefits for an industry practitioner of judging Awards are immense.

Because even if you work with a number of Clients in different industries and geographies – and even have some of your Clients enter and win Awards – your ‘exposure lens’ can still be narrower than it needs to be.

Judging Awards allows you to see what’s happening out there amongst organizations you may never work for or in geographies that you may not serve.

And it’s not always the ‘big names’ that put forward the best initiatives.

There’s a lot of gold out there in smaller and ‘local’ organizations too.

 

A few suggestions for industry Awards entrants

I tend to judge categories involving Customer Experience, Contact Centres, Digital Experience & Employee Experience.

So my suggestions here are drawn from those disciplines.  But I imagine they can be extrapolated to other disciplines as well.

And with suggestions in general, it’s not just what to do – it’s also what not to do.

 

Is it a Group Award category or an Individual Award category?

Recently I judged a Face to Face presentation for a ‘Group Award’.  Unfortunately the Presenter used the word “I” a lot.

I did this, I did that…because of me.

I looked down at the Judges’ timetable – yup, this was a group award category.  So why so much ‘I’?

My suggestion is this.

If you’re involved in a Group or Team Award, the word ‘we’ goes a long way.

On the other hand, if you’ve entered an Individual Award of some kind then it’s appropriate to talk about you.  What you did, what you accomplished, how what you’ve done has made your organization a better place.

 

It’s great that you’ve won other Awards but…

In another judging experience, the Entrants began their Awards presentation by telling the Judges how many other awards they’d won.

It just felt awkward to have the presentation start off that way.  I sensed entitlement – as in – we’ve won so many other awards that surely we’re entitled to this one too.

I’d suggest this.

If your other Awards or achievements are specifically relevant to the Award you’ve entered then it’s worth mentioning at the right time and in the right context.

For example, in an individual Award the Entrant might say, “I was inspired when I won the Team Manager of the Year back in 20XX and that motivated me to enter this year’s Manager of the Year Award.”  

In this example, the Entrant’s sharing of their earlier Award was relevant to their current entry.

 

Superlative deeds matter more than superlative words

One thing nearly any Judge will tell you is that Entrants sometimes go overboard with ‘superlative words’.

Our unparalleled, dynamic, dream Team of inspired, culturally motivated self starters with entrepreneurial mindsets.

By the way, that’s not that far off the mark.

When everything is wonderful, fabulous, motivated, value-driven and so on, none of it feels real.  And overly puffed up language can actually take away from the great accomplishment being put forward.

You usually see the use of superlative words in written entries.  But I’ve also experienced it in face to face presentations where it comes off as a bit pompous or at the very least, unnatural.

What should be superlative is what got accomplished – the deeds.

So focus on the deeds.  And choose your descriptive words wisely.

 

Rehearse (rehearse, rehearse) your Face to Face presentation

Judges can always tell if you haven’t rehearsed your Face to Face presentation.  We’re not expecting a performance of Shakespeare’s Hamlet.

But folks who have rehearsed know that they have to communicate their key points and narrative within X time frame.

I’m a huge fan of Awards like the Awards International events where the timeframe is clearly set.  20 minutes for presentation (uninterrupted) and 10 minutes for Judges Q&A.

Look at TED Talks – there’s an art & science involved in sharing your compelling story in 20 minutes.  Elevator pitch, getting to the point, grabbing attention.  Sometimes less is indeed more.

So presentations that have been rehearsed tend to stick to the stipulated timeframe and rarely run over time.

Unrehearsed presentations, on the other hand, tend to be cut off before the material is done – and during Q&A there are usually awkward attempts to share slides or material from the content that didn’t get covered in time.

Don’t wing a Face to Face presentation.

 

Get creative

One of the best Face to Face presentations I witnessed was modelled as a talk show panel.

The Team presenting the entry had put together a fun and engaging narrative where the head of the initiative was a guest on a talk show and the host and other guests got to ask questions about their initiative.

It was fun and funny.  But most importantly they got their message across and you could sense the camaraderie amongst the Entrants.

There’s not a single model of presentation that ‘wins’ over others.

So don’t be afraid to engage the Judges – as long as you have your talking points and narrative well thought out try something different!

 

Follow the directions

Now and then you get the Entrant who doesn’t follow the directions for the structure of their presentation and what needs to be conveyed.

You can be a world-class speaking guru, but if your presentation doesn’t allow the Judges to readily score you across the requested categories or competencies you’re unlikely to make it to the Winner’s Circle.

I think that you learn as much through the process of completing your Awards entry – and preparing your presentation – as you do by delivering it and even winning.

So take the process seriously – you’ll benefit in the long run.

And – some Entrants tell me they turn around and use their Awards entry presentations inside their own organizations as well.  How smart is that!

 

In closing

I hope these few suggestions have been helpful and I look forward to judging your entry soon!

Daniel


Whatever happened to First Contact Resolution?

by OmniTouch International

In this short article I consider this question – whatever happened to First Contact Resolution?

Last week I was judging Contact Centres

Last week I chaired a panel of Judges for a number of Contact Centre Awards entries.

One of the Judges on our panel asked several of the entrants –

“So how do you measure your First Contact Resolution rate?” or

“Based on the initiative you’ve shared, what were the changes to your First Contact Resolution rate?”

So that got me to thinking – is First Contact Resolution – or ‘FCR’ – still relevant in today’s Contact Centre?

 

First Contact Resolution is a multivitamin KPI

When I teach Operations I suggest Participants look at First Contact Resolution as a multivitamin KPI.

That’s because it does a few things for you.

FCR helps you to:

  • Improve Customer Satisfaction (through reduction of Customer effort)
  • Reduce cost (through reduction in unnecessary repeat contact volume)
  • Improve future Service Level (through reduction in unnecessary repeat contact volume)

No wonder FCR is referred to with such reverence in the Contact Centre industry.

 

But it’s always been hard to measure

I’ve seen First Contact Resolution formulas out there that would put Einstein’s formulas to shame.

They’re complex and require a lot of internal communication to understand and apply.

So, it’s worth considering why that’s so.

Everyone gets the general idea around FCR.  Assist the Customer to the degree that they won’t need to contact you again.  It sounds easy.

But the practical application is more complex, in part because there’s no industry standard for how to measure FCR.

Push-button KPIs

Many Contact Centre KPIs are push-button KPIs.  Push the button and you get your result.

Push the button and get your Service Level.

Push the button and you get your AHT.

Push the button and get the Occupancy rate.

You get the general idea.

But there’s no button to press for FCR.  It falls into the category best called ‘assembly-required KPIs’.

Think of some other assembly-required KPIs for a moment.

Employee Engagement, Customer Satisfaction and Turnover Analysis are all good examples.  To get at the data for these KPIs you can’t just push a button.

Getting at assembly-required KPIs requires you to design & implement a solid methodology for data collection & analysis.

Common data sources for First Contact Resolution

When it comes to FCR data collection, the most common sources are to:

  • Allow Agents to rate their own performance (not really recommended for obvious reasons)
  • Ask Quality Assurance folks to weigh in on FCR when they do their evaluations (this can be powerful and more on this soon)
  • Survey Customers and ask them if their need was met (but aren’t Customers getting tired of getting surveyed and is this the right question to ask?)
  • Run scans across the CRM system to see if a single Customer record shows multiple contacts for the same ‘reason’ within X time frame (based on business assumptions)
  • Use operational data (when the nature of the interaction is very transactional such as tracking shipments)
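The CRM-scan source from the list above can be sketched in a few lines of Python.  Everything here is illustrative – the log fields, the sample data and the 7-day ‘same issue’ window are assumptions a real Centre would set from its own business rules:

```python
from datetime import datetime, timedelta

# Hypothetical contact log entries: (customer_id, reason, timestamp).
contacts = [
    ("C001", "room rates", datetime(2019, 3, 1, 10, 0)),
    ("C001", "room rates", datetime(2019, 3, 3, 15, 30)),  # repeat within the window
    ("C002", "booking change", datetime(2019, 3, 2, 9, 0)),
]

WINDOW = timedelta(days=7)  # business assumption for what counts as the "same issue"

def fcr_rate(log):
    """Share of contacts NOT followed by a repeat contact from the
    same customer, for the same reason, within the window."""
    log = sorted(log, key=lambda c: c[2])
    repeats = 0
    for i, (cust, reason, ts) in enumerate(log):
        for later_cust, later_reason, later_ts in log[i + 1:]:
            if (later_cust, later_reason) == (cust, reason) and later_ts - ts <= WINDOW:
                repeats += 1
                break
    return 1 - repeats / len(log)

print(f"FCR estimate: {fcr_rate(contacts):.0%}")  # 2 of 3 contacts resolved first time
```

In practice the scan would run over a CRM export, and both the window and the ‘reason’ matching logic vary by Centre.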

And because there are pros & cons to each data source, you choose multiple data sources, assign a weightage to each one and assemble the results together to get an outcome.  The purpose of blending different sources together is to alleviate the inherent advantages & disadvantages of each individual source.

I think of it like making a stew.

You have to select a variety of ingredients, throw them into a pot in the appropriate ratios, stir well and season to taste.

It’s a robust but complex process.
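The ‘stew’ itself is just a weighted average.  The source names, FCR readings and weights below are invented purely to show the mechanics:

```python
# Illustrative only - each Centre chooses its own sources and weights
# based on the pros & cons of each data source.
sources = {
    "qa_evaluations":  {"fcr": 0.82, "weight": 0.4},
    "customer_survey": {"fcr": 0.75, "weight": 0.4},
    "crm_repeat_scan": {"fcr": 0.68, "weight": 0.2},
}

# The weights must sum to 1 for the blend to be a true average.
assert abs(sum(s["weight"] for s in sources.values()) - 1.0) < 1e-9

blended_fcr = sum(s["fcr"] * s["weight"] for s in sources.values())
print(f"Blended FCR: {blended_fcr:.1%}")  # prints "Blended FCR: 76.4%"
```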

 

So how can we address some of this complexity?

It helps to remember that FCR is ultimately a measure of quality.

Sure – FCR helps reduce unnecessary repeat contacts – and that’s cool.

But at its heart Centres pursue FCR to help Agents create great conversations with Customers.

Conversations that address spoken and unspoken needs – not just deliver transactional answer-based service.

So with that direction in mind, how can we improve our FCR delivery while mitigating the complexity inherent in assembly-required KPIs?

 

Define what First Contact Resolution looks like for each of your Top 10 enquiry types

Every inbound Centre has a Top 10.   The Top 10 ‘reasons’ a Customer contacts you.

While your Top 10 changes over time, these enquiries easily represent 60% – 80% of your monthly contact volume (excluding one-off events of course).
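As a quick illustration (the reason labels and volumes are invented), a month of tagged contacts can be tallied to surface the Top reasons and their share of volume:

```python
from collections import Counter

# Hypothetical month of tagged contact reasons.
reasons = (["room rates"] * 320 + ["booking change"] * 210 +
           ["cancellation"] * 150 + ["loyalty points"] * 90 +
           ["complaint"] * 60 + ["other"] * 170)

counts = Counter(reasons)
total = sum(counts.values())
for reason, n in counts.most_common(5):
    print(f"{reason:15} {n:4}  {n / total:.0%} of monthly volume")
```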

So rather than looking for a magical or ‘industry standard’ FCR rate, take your FCR magnifying glass down to the enquiry type level.

For example, if your Enquiry Type #1 = Questions on room rates you’d sit down with a small group of folks and consider what FCR can and would look like.

What has to be conveyed, whether explicitly asked for or not, in that conversation.

But be careful.

Except for highly transactional enquiries you can’t rely exclusively on your internal determination of what FCR would look like.  You’re going to have to consider FCR from the Customer perspective as well.

And here I always suggest you do some qualitative research.

Bring in some real Customers.  Buy them lunch.

Ask them about their needs, expectations & wants (both expressed and unexpressed) when they ask about room rates.

I don’t see how we can talk about Customer-centricity without actually talking to real Customers face to face.

There seems to be a tremendous amount of fear, skepticism or just plain lack of know-how around qualitative research.  That’s an article for another day.

Remember that if you pursue this Top 10 approach – your monthly FCR will fluctuate over time – in part due to changes in the enquiry mix.

For example, if in Month 2 – as compared to Month 1 – you got more volume for an enquiry type where FCR is ‘easy’ to achieve – that will weight up your overall FCR rate in Month 2.  You can’t simply assume this as an improvement in Agent performance – which is what folks tend to believe when they see FCR rates inch upwards.

So the key here is to be able to articulate why overall FCR rates change from month to month – was it a change in enquiry mix, a one-off event that weighted results up or down or did Agent Quality improve or decrease.  These are all potential factors.
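The mix effect is easy to demonstrate with a toy calculation – per-type FCR rates are held constant, only the volume mix changes, and the overall rate still moves (all figures invented):

```python
# FCR rate per enquiry type, held constant across both months.
fcr_by_type = {"room rates": 0.90, "billing dispute": 0.55}

month1 = {"room rates": 400, "billing dispute": 600}
month2 = {"room rates": 700, "billing dispute": 300}  # more "easy" contacts

def overall_fcr(volumes):
    """Volume-weighted overall FCR across enquiry types."""
    resolved = sum(fcr_by_type[t] * v for t, v in volumes.items())
    return resolved / sum(volumes.values())

print(f"Month 1: {overall_fcr(month1):.1%}")  # 69.0%
print(f"Month 2: {overall_fcr(month2):.1%}")  # 79.5% - yet Agent quality never changed
```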

By the way – it’s good to know that if your FCR rate is consistently high (let’s say high 80s and 90s) that could be a sign of a poor self service strategy.  Why are Agents getting such simple enquiries, which naturally lend themselves to a higher FCR rate?

That’s why I always smile (and grimace) inside when I hear a Centre say that their FCR rate is in the 90-percent range.  That’s almost always bad from a self service strategy perspective.

As Centres shift the simpler enquiries to self service you see FCR rates naturally decline overall.

 

Accept that not every enquiry type might ‘qualify’ for First Contact Resolution

By the way – it may turn out that some of your Top 10 can’t achieve FCR for some reason.  That happens.

But in these cases I ask myself what has to be conveyed or gathered in that conversation to make the ensuing process as effective as possible – even when the overarching goal of FCR can’t be achieved from the Customer’s perspective.

Earlier this year a Contact Centre Manager from a travel company told me that FCR is a mindset and that mindset training would be enough to raise their FCR rate.

But I disagreed.

Yes – having a vision for FCR and putting it front and centre in your Agent’s performance basket matters.  But it’s not enough.

You’re going to have to get a bit more granular – and the Top 10 approach is a practical way to do that.

 

Ask yourself – does my current metrics system align to First Contact Resolution?

Contact Centres are important touchpoints within an organization.  But sometimes that very (self) importance leads to decisions which are good for the Centre but not necessarily good for the Customer.

Let me explain what I mean from a metrics perspective first.

If your Centre focuses heavily on Average Handling Time (AHT) as an Agent efficiency metric or on # of calls produced by the Agents you’re not really considering the Customer journey – you’re looking at what’s good for you.  Short call = lower cost (goes the reasoning).

That’s a touchpoint perspective.

FCR by its nature implies that we take the time needed to get the job done.  To provide the Customer with what they should know – whether explicitly asked for or not.

I’ve written extensively on Average Handling Time but for purposes of this article – if due to your Centre’s metrics perspective your Agent is more focused on quantity or time taken, it’s quality that takes the hit – and that includes a hit to FCR.

Don’t get me wrong – cost efficiency is great.  But every financial model I’ve worked on shows that reduction in future unnecessary contacts saves more $$ overall than trying to shave 30 seconds off current calls.

Why are you still talking about Average Handling Time?

 

Customers think in Journeys – not in Touchpoints

McKinsey writes that Customers think in journeys – not in touchpoints.

There’s a beginning, a middle and an end to a journey.  Some journeys go from start to finish and never touch the Contact Centre.

For other journeys the Contact Centre is a key participant – and important to the Customer’s overall perception.

In Service Design you learn that the various touchpoints need to work in harmony together – to avoid dissonance or distress.  So it makes sense to evaluate the harmony across the journey – not just look at what happens ‘inside’ the Centre.

 

Have your Agents been trained on Customer journeys?

I don’t mean journey mapping – that’s not needed at the Agent or Team Leader level.

I’m talking about sharing the motivations and experiences that led to the Customer contacting the Centre.  What was their mood, what was their ‘job to be done’ – what was the role of the Centre in helping the Customer achieve their goals?

On the other side of the interaction – where will the Customer go next in their journey?  Is there some way we can help them accomplish that better?  What can the Centre bring to the table to deliver a standout role in the Customer journey.

When I do Frontline training I often ask – “Do you know what your music on hold is?” or “Have you experienced your own IVR?  Your own Delay Announcements?”.

Because the Contact Centre Customer Experience doesn’t begin when you start talking (or typing).  It begins earlier upstream.  When the Customer begins to think and feel that they have to contact you.

Nine times out of 10 the Agents hadn’t spent time studying the Contact Centre journey – much less the Customer journey.

I think this represents a real opportunity for training and discussion at the Agent & Team Leader level.

 

Should you pursue First Contact Resolution?

My personal belief system around First Contact Resolution is this.

It doesn’t make sense to implement an elective process where the costs and effort of the process aren’t outweighed by the benefits delivered by the process.

If you can prove out that your complex but solid methodology to get at metric-oriented FCR is yielding dividends – then by all means go for it.  Just keep Quality as your North Star for putting together your FCR program – it should always be aligned to what the Customer would say.

So I’m never surprised or judgemental when I meet Centres that don’t specifically measure FCR.  That puts me into the minority I think.

Lately I’ve seen some Centres take a less metric driven approach to FCR that I admire.  It’s also been quite effective for them.

They build the concept of FCR into their Service Vision & Principles.

If you haven’t heard of a Service Vision or Service Principles, they’re essentially a set of statements that answer the question – “What kind of service do we deliver around here?”

For example, if one of their Service Principles is ‘to be helpful’ – they consider all the ways they can be helpful to Customers (and each other) across their various interactions.  The successful behaviours that enable ‘being helpful’ become codified across the Centre.  Culturally ingrained.

And the use of the Top 10 enquiry type approach works wonderfully here.

Measurement-wise – the use and impact of these helpful behaviours are picked up in the normal Contact Centre monitoring processes through Quality Assurance, Team Leaders, Mystery Shopper providers and the like.

3 Suggestions for Contact Centre Leaders to transform into Customer Experience Leaders in 2019

In closing

I think FCR still has relevance in today’s Contact Centre.  That’s simply because it has to do with making Customers lives better through letting them know all that they need to know to achieve their goals.

And I think there are alternative ways to achieve the multivitamin benefits inherent in FCR.

If you can prove that your robust FCR measurement system yields results then well done – and keep it up.

But if robust measurement systems are a bit out of reach for your Centre, driving FCR-style behaviour through your Culture & Quality program is a viable alternative as well.  Service Visions & Service Principles are relevant for every Centre.

Thanks for reading!

Daniel

 

 

10 CCXP Exam Practice Questions for Customer Research Know How

by OmniTouch International

In this short post we share 10 CCXP exam practice questions for Customer Research Know How.

CCXP = Certified Customer Experience Professional.  We’re proud to be a CXPA Recognized Training Provider and we help people earn their CCXP credential as well as grow in Customer Experience.

A quick look at the official CCXP Exam

The CXPA (Customer Experience Professionals Association) has identified six (6) Customer Experience competency areas for certification and each area is covered in the official CCXP Exam.

The (6) Customer Experience competency areas are:

  1. Customer-Centric Culture
  2. Voice of the Customer, Customer Insight, and Understanding
  3. Organizational Adoption and Accountability
  4. Customer Experience Strategy
  5. Experience Design, Improvement, and Innovation
  6. Metrics, Measurement, and ROI

There are currently 100 questions in the official CCXP Exam.

While there is not a designated competency specifically for Customer Research Know How, it’s still an important topic.

To succeed in the Metrics, Measurement & ROI competency as well as the Voice of Customer, Insight & Understanding competency, you’ll need some essential knowledge around what I call Customer Research Know How.

 

Our aim with sharing these practice questions

Our aim is to help and inspire folks who want to gain their CCXP credential or simply improve their understanding of Customer Experience as a business discipline.

That’s why we have developed a current bank of more than 250 practice questions with more underway.  We use these practice questions in our Customer experience training workshops as well as publish selected questions from time to time.

10 CCXP Exam Practice Questions

10 CCXP Exam Practice Questions for Customer Experience Strategy

 

If you want to learn more about the official CCXP credential and the CCXP exam process please visit cxpa.org.

 

10 CCXP Exam Practice Questions for Customer Research Know How

In this section, we share 10 CCXP Exam practice questions for Customer Research Know How.  These 10 questions are written in the same multiple choice format found on the official CCXP Exam.

Read through each question and choose the answer that you think is correct – that’s either a, b, c or d.

Remember that the official exam is no books, no notes. So answer as best you can from your current knowledge & experience.  Don’t look up any answers!

Here goes – and good luck!

Quiz – Customer Research Know-How

1.  Complete this phrase, “Correlation does not equal _____________.”

a.  Causation

b.  Regression analysis

c.  The outcome of a Scatter Diagram

d.  None of the answers is correct

 

2.  Based on the image shown, which of these numbers is the mode?

 

a.  5

b.  4

c.  2

d.  The sum of all the numbers shown

 

3.  What is the median for this data set?

102, 56, 34, 99, 89, 101, 10.

a.  10

b.  102

c.  None of the answers is correct

d.  89

   

4.  You earned $129, $139, $155 and $176 over the last 4 weeks. What is your average pay?

a.  $153.00

b.  $149.75

c.  $176.00

d.  $99.50

   

5.  Which of the following is the best definition of ‘Regression Analysis’?

a.  Regression Analysis refers to the most common number in a data set.

b.  Regression Analysis is a technique to assess the relationship between a dependent variable and one or more independent variables.

c.  The connection between two events or states such that one produces or brings about the other, where one is the cause and the other its effect.

d.  Regression Analysis is most often related to the average of a data set.

 

6.  Which answer is the correct interpretation of the diagram shown below?

 

a.  There is a positive correlation between the mark in the general knowledge test and the IQ level

b.  There is not a positive correlation between the mark in the general knowledge test and the IQ level

c.  There is a negative correlation between the mark in the general knowledge test and the IQ level

d.  There is no correlation between the mark in the general knowledge test and the IQ level

 

7.  Which answer is the most appropriate interpretation of the diagram shown below?


 

a.  There is a positive correlation between Weight and the Kilometers Run per Week

b.  There is no correlation between Weight and the Kilometers Run per Week

c.  There is a negative correlation between Weight and the Kilometers Run per Week

d.  There is not a negative correlation between Weight and the Kilometers Run per Week

 

8.  Which of the following is the best definition of a Confidence Interval?

a.  How close the survey results of the sample are to the actual results yielded if you were to survey the entire population

b.  The overall confidence you have that your figures are correct

c.  The percentage that indicates how sure you can be that the total population falls within the Confidence Interval

d.  None of the answers shown

 

9.  Which of the following is the best definition of a Confidence Level?

a.  How close the survey results of the sample are to the actual results yielded if you were to survey the entire population

b.  The degree of certainty you have that your Confidence Interval is correct.

c.  The percentage that indicates how sure you can be that the total population falls within the Confidence Interval

d.  None of the answers shown

 

10.  Why do most organizations survey a random sample of Customers instead of all Customers?

a.  A random sample provides more reliable information

b.  Surveying a sample of customers is less expensive

c.  The confidence interval is higher when using a random sample

d.  None of the above

End of Quiz

Would you like to know how you did?

If you’d like to know if your answers are correct we’re happy to help.

We’ve intentionally gone ‘low-tech’ here.  There’s no need to register anywhere, set up an account or pay to access the practice questions.

Once you’ve answered all (10) questions just drop an email to me at [email protected]

Let me know the question # and the answer that you chose (either a, b, c or d).

You can use the following format in your email to me:

  1. a
  2. d
  3. c
  4. c (and so on for all 10 Practice Questions)

And because we have a few Quizzes up here on our Blog page can you tell me ‘which’ Quiz you’ve taken?  This is the Quiz for Customer Research Know How.  I always do my best to answer quickly and let you know which ones you got right and which need correction.

Of course taking 10 CCXP exam practice questions won’t fully reflect the experience and effort that have gone into your Customer experience work and goals to date.

But in all these many years of running high level certification programs, we find that the more practice questions you take – and learn from – the better prepared you will be.

Thank you for reading!

Daniel

Daniel Ord / [email protected]

 

 

3 Suggestions for Contact Centre Leaders to transform into Customer Experience Leaders in 2019

by OmniTouch International

In this short article I share 3 suggestions for Contact Centre Leaders to transform into Customer Experience Leaders in 2019.

First things first

I sometimes hear Contact Centre leaders say that their senior or functional management doesn’t support their Centre.

If you work at a cult status company like Zappos you’re clearly fortunate.  Your high level of Customer Experience (CX) ambition is aligned to and reinforces that of your company.

It’s a virtuous cycle.

But what if you’re the Centre Manager in a company where your purpose isn’t seen as mission-critical?  Where management doesn’t meaningfully embrace Customer centricity?

That’s a different scenario.

Sure – you can’t control the level of CX ambition in your company.  But go ahead and pursue your personal CX ambitions – even if they don’t align to the current CX ambitions of your company.

John Maxwell writes “Leadership is influence – nothing more, nothing less.”  Don’t settle for becoming an outcome of your culture. Consider yourself a driver of your culture.

I think that’s putting first things first.

Suggestion #1 – Get involved with the Customer Experience (CX) Vision

Not every company decides to pursue a CX strategy.  At the end of the day it’s a business decision.

And don’t let the false use of lingo in companies fool you.  Rebranding everything as ‘Customer Experience’ when it used to be called ‘Customer Service’ doesn’t make it so.

They’re different things.

Window dressing doesn’t equate to strategy.

A Customer Experience strategy – a big topic – addresses:

  1. What kind of experience you intend to deliver to Customers
  2. The objectives, goals & metrics you set to measure success
  3. The outside-in perspective of the Customer to ensure your aim is true
  4. The ways you plan to engage everyone within the organization to deliver
  5. The long- and short-term actions you take to achieve your objectives

I’ll cover CX Strategy more in a future article.

10 CCXP Exam Practice Questions for Customer Experience Strategy

 

But for our purposes today let’s look at Point #1 – what kind of experience you intend to deliver.

Because this is where your CX Vision lives.  It describes the intended experience in vivid and compelling terms so that everyone knows what that experience should look like and feel like.  In Service Design it might be called your Value Promise.

If your company has a defined Customer Experience (CX) Vision in place, life is good.  You’re in a great position to align your quality program & performance standards to that vision.

No more excuses to use weak standards like ‘Use the Customer’s Name 3x’.

From Contact Centre Management to Customer Experience Management – do you have what it takes?

What if your company doesn’t have a CX strategy in place?

If your company doesn’t have a CX strategy in place, then it isn’t likely to have a CX Vision in place either.

But hey – don’t let that stop you.

Sometimes Contact Centre Leaders need to shape their own destiny.  You can and should put together a strong Service Vision.

By the way, I tend to be very particular with terminology here.  I don’t call this a Customer Experience Vision.

The reason is simple.

A CX Vision by definition and application incorporates the entire organization and its ecosystem.  If your scope of authority extends only across the Contact Centre or Customer Service function, it’s better to be precise and call it a Service Vision instead.

Because it’s not organizational in scope.

But, over time and with your influence, a great Service Vision can readily evolve into an organizational CX Vision.

So think big when you craft it!

And the Service Vision often does double-duty for how we treat each other.  It doesn’t just have to be for Customers.  It can be for Employees too.

Sometimes I use the analogy of ice cream.  What ‘flavour’ of service do we deliver around here?

Coming up with your Service Vision

To come up with your Service Vision it helps to look  at what your company says about itself.

This is where I begin when I’m designing a Mystery Shopper research or Quality Assurance program.

Read your company website.  The company vision, mission and values can often be found there.  What’s your purpose?  Who are your intended Customers?  What role do you play in their lives?

Articulate how your company describes itself.

Next, look at your company’s brand attributes & values.

What kinds of promises does your company make to current and prospective Customers when they use your products & services?  What do your ads say?  What kind of images are used?  What kind of lingo appears in marketing communications?

Articulate the brand promises your company makes.

Now you can put these findings in front of the people who work in your Centre.  What do they think?  Does it ring true?

Your goal is to develop and codify a Service Vision (a statement), which is often supported by a focused set of 3 – 6 Service principles.

And by going through this process you’ll be better equipped – when the time comes – to help other departments and functions work through their CX Vision.

That’s influence!

Just imagine

When anyone asks your Contact Centre Agent what kind of service they deliver around here – they can tell them.  And specifically how they apply the vision & principles to their daily interactions.

Easy to talk about – but it’s the doing that sets you apart from others.

In closing, the CX Vision, the Service Vision and CX Strategy are big topics.  They’re worth taking the time and effort to read, study and discuss at a much deeper level than is presented in this short article.

But I’ve found over the years, the best CX & Service strategies begin with a solid vision.

 

Suggestion #2 – Please don’t call a horse an apple

It’s wearying to see how many Contact Centres have rebranded themselves as Customer Experience Centres and how many Contact Centre job titles have been changed to incorporate ‘Customer Experience’ into the title.

But you can point at a horse and call it an apple all day and that won’t make it so.

This type of rebranding exercise pollutes everyone’s understanding of what CX really is.  Because CX – by definition & application – must incorporate the organization as a whole.

Sure – your Contact Centre has some impact on the overall Customer Experience for those Customers who choose to use your resources. 

But their overall perception of your company is influenced by so many (other) factors and is fluid over time.

McKinsey writes that Customers think in terms of their journeys, not in touchpoints. That can be hard for Contact Centre leadership – in charge of a large and labour-intensive touchpoint – to take on board.

Especially when for years we’ve all been taught that the Contact Centre is the most important touchpoint in the company.

It’s helpful for Contact Centre people to understand that they’re a subset of a subset in the world of CX.

First comes CX which covers the entire organizational ecosystem.

Then within that ecosystem you have the Customer Service function – most easily viewed as the human to human interactions Customers have with you.

And within the Customer Service function you have the Contact Centre.

If I were training my Agents today I’d spend time sharing key Customer journeys.

Why did the Customer contact us?  Where did they come from? Where are they likely to go next?  What’s our role and opportunity in this experience?

When Contact Centre people stick their flagpole into the ground and claim they are Customer Experience, they do a big disservice to every other employee and stakeholder in the organization.

Ultimately, the smart use of Customer research allows you to evaluate the importance of the Contact Centre touchpoint to the Customer across key personas and journeys.

We talk about research next.

 

Suggestion #3 – Build your Customer Research Know-How

You’d hope that the Contact Centre leaders would be experts in Customer Research know-how.

That they’d jump at every opportunity to understand the needs, expectations and wants of their Customers.

That they’d bang on the doors of their Service Quality department and ask to be a part of the research programs undertaken.

That they’d be open to learning the (sometimes) harsh truth about what Customers have to say.

But one potential barrier I’ve seen often is this one.

When senior management has unrealistic expectations around quantitative outcomes, Contact Centre leaders may not be so keen to let poor results & findings see the light of day.

I met one Contact Centre leader who was so terrified of an upcoming management meeting on their Contact Centre survey results they called in sick for the presentation.

Fear is a terrible way to motivate change and when Customer research is seen as ‘scary’ that inhibits the desire to learn more about research.

Another potential barrier I see is this one.

Research is a fascinating but complex topic.  It involves a lot of what I call ‘First Principles’.

First Principles are the essential knowledge you need to understand the topic with some level of mastery.

In Customer Research that includes essential knowledge around topics like –

  • The role of qualitative research
  • The use of structured vs. unstructured data
  • Descriptive, predictive and outcome metrics
  • Forms of ethnographic research
  • Relationship vs. transaction survey practices
  • The role of statistical viability
  • Basic research terminology – mode, median, average
  • More research terminology – correlation, regression, causality
  • Service & experience design research
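A few of those basic terms can be made concrete in a couple of lines of Python.  This is a minimal sketch – the wait times and satisfaction scores below are made-up numbers for illustration only:

```python
import statistics

wait_times = [12, 18, 18, 25, 40, 18, 22, 31, 18, 26]  # seconds (illustrative)
csat       = [9,  8,  8,  7,  4,  8,  7,  6,  8,  6]   # 1-10 scores (illustrative)

mean_wait   = statistics.mean(wait_times)    # the 'average'
median_wait = statistics.median(wait_times)  # middle value - robust to outliers
mode_wait   = statistics.mode(wait_times)    # most frequent value

def pearson(xs, ys):
    """Pearson correlation - does a longer wait track with a lower score?"""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(wait_times, csat)  # negative here: longer waits, lower scores
```

Notice the mean (22.8 seconds) sits above the median (20 seconds) because one long wait drags it up.  And a negative r here shows correlation – not causality.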

To learn and understand these concepts takes time and effort. But the payoff is tremendous.

In an era where more information and data is produced than at any other time in human history, dusting off those old statistics books and re-mastering quantitative & qualitative research matters.

Experience design is based on qualitative research methodologies in particular.

Get your Customer Research know-how up to speed.  It helps you make sound decisions about how to understand Customers better.

In closing

Of course I could have had 13 suggestions – or 5 suggestions or 11 and so on.

But after some thought to my own personal experience, what I’ve learned working with Clients and the amount of time and effort required, I hope that these suggestions resonate with you and are helpful.

Here’s to all your CX ambitions for 2019 and thank you for reading!

How to learn more about Customer Experience and prepare for certification

Daniel

10 CCXP Exam Practice Questions for Customer Experience Strategy

by OmniTouch International OmniTouch International No Comments

In this short post we share 10 CCXP exam practice questions for the Customer Experience Strategy component of the overall CCXP exam.

CCXP = Certified Customer Experience Professional.  We’re proud to be a CXPA Recognized Training Provider and help people earn their CCXP credential as well as grow in Customer Experience.

A quick look at the official CCXP Exam

The CXPA (Customer Experience Professionals Association) has identified six (6) Customer Experience competency areas for certification and each area is covered in the official CCXP Exam.

The (6) Customer Experience competency areas are:

  1. Customer-Centric Culture
  2. Voice of the Customer, Customer Insight, and Understanding
  3. Organizational Adoption and Accountability
  4. Customer Experience Strategy
  5. Experience Design, Improvement, and Innovation
  6. Metrics, Measurement, and ROI

There are currently 100 questions in the official CCXP Exam.

To learn more about the CCXP credential and the CCXP exam process please visit cxpa.org.

Our aim with sharing these practice questions

Our aim is to help and inspire folks who want to gain their CCXP credential or simply improve their understanding of Customer Experience as a business discipline.

That’s why we have developed a current bank of more than 50 practice questions with more underway.  We use these practice questions in our Customer Experience training workshops as well as publish selected questions from time to time.

10 CCXP Exam Practice Questions for Customer Experience Strategy

In this section, we share 10 CCXP Exam practice questions related specifically to the Customer Experience Strategy competency.

These 10 questions are designed to address specific know-how expected for the Customer Experience Strategy competency and are written in the same multiple choice format found on the official CCXP Exam.

Read through each question and choose the answer that you think is correct – that’s either a, b, c or d.

Remember that the official exam is no books, no notes. So answer as best you can from your current knowledge & experience.  Don’t look up any answers!

Here goes – and good luck!

 

#1. If you want your Frontline Staff to ‘go the extra mile’ correctly, you should:

a. Give them as much leeway as possible to do what they think is right

b. Ask them to use the Customer experience strategy as a guide

c. Ask them to talk to other Service Staff to see what they do

d. Advise them not to go the extra mile because it tends to be costly

 

#2. When developing your Customer experience strategy, it is best to:

a. Consider the needs of your Customers

b. Look at what kind of Organization you are

c. Adopt practices from other leading Organizations

d. Consider both the needs of your Customers & what kind of Organization you are

 

#3. Which of the following least describes an Annual Operating Plan?

a. Describes the tactics that will be used

b. Involves budgeting

c. Involves resource allocation

d. Outlines the plans and strategies for the next few years

 

#4. The following are effective examples of communicating a Customer experience strategy except:

a. Scheduling a once-a-year Town Hall for Employees to discuss business results

b. Developing a small handbook to be given to each Employee to carry with them

c. Creating a physical space that immerses Employees in the desired experience

d. Using video creatively to share the intended experience with Employees

 

#5. Choose the word that best applies to this statement.  “The best Customer experiences are not __________.”

a. Consistent

b. Intentional

c. Accidental

d. Relevant

 

#6. A shared Customer experience vision enables you to:

a. Align strategic initiatives across the organization

b. Increase prices for your products & services

c. Pay your Employees a little bit less than market value

d. Do away with core values

 

#7. A shared Customer experience vision is applicable for:

a. Employees

b. Employees and Partners

c. Senior management

d. All organizational stakeholders

 

#8. You talked to your Marketing Department and they shared that the brand value that resonates most with Customers is that of being ‘small-town’ or ‘heartland’ in character.  Which of the following behaviors might be implemented in your Contact Centre as a result of this brand value?

a. Be professional

b. Understand how Customers use the mobile application

c. Be as efficient as possible

d. Feel free to chat with Customers

 

#9. Which of the following best exemplifies a shared Customer experience vision:

a. We will aim to deliver a differentiated Customer experience – each Customer, each time, everywhere we are

b. We aim to deliver the highest possible shareholder returns for shareholders

c. At ABC company, your satisfaction is our ultimate reward

d. Dedication to the highest quality of service with a sense of warmth, friendliness, individual pride

 

#10. Which of the following answers best addresses the statement, “It helps a lot if the Team developing the Customer experience strategy is ___________”:

a. Cross-functional

b. Certified in Customer experience

c. Has at least 5 years of experience in Customer experience

d. Defers to the CEO for the final decision


End of Quiz

Would you like to know how you did?

If you’d like to know if your answers are correct we’re happy to help.

We’ve intentionally gone ‘low-tech’ here.  There’s no need to register anywhere, set up an account or pay to access the practice questions.

Once you’ve answered all (10) questions just drop an email to me (Daniel Ord) at [email protected].  Please be sure to tell me which Quiz you took.  This one is for Customer Experience Strategy.

Let me know the question # and the answer that you chose (either a,b,c or d).

You can use the following format in your email to me:

  1. a
  2. d
  3. c
  4. c (and so on for all 10 Practice Questions)

I always do my best to answer quickly!

Of course taking 10 CCXP practice questions won’t fully reflect the experience and effort that have gone into your Customer experience work and goals to date.

But in all these many years of running high level certification programs, we find that the more practice questions you take – and learn from – the better prepared you will be.

Thank you for reading!

Daniel

Daniel Ord / [email protected]

Implementing appropriate Contact Centre Wait Time Metrics

by OmniTouch International OmniTouch International No Comments

In this short article I talk about implementing appropriate Contact Centre Wait Time Metrics and strategies.

It’s not about the know-how – it’s about implementing that know-how

I’ve been teaching advanced Contact Centre operations for nearly 20 years.  And there’s a lot of knowledge to pick up.

So I thought it might be helpful to share some implementation tips to help Contact Centres take their know-how and bring it to life.  Because at the end of the day it’s not the know-how – it’s the implementation.

In this short article I’ll address Contact Centre Wait Time metrics and strategies.

In future articles I’ll address other aspects of operational implementation including KPI selection, forecasting and quality.  I can envision a mini-series here.

One other note – this article is written for those who’ve been through a rigorous operations course.  So I’m speaking directly to those equipped with the know-how.  If you’ve not been through a rigorous operations course I hope you find this article helpful to some degree.

And if you find yourself saying you don’t agree, or you don’t understand – that’s usually because of a lack of know-how, not a lack of intelligence, passion or desire.

In this industry we simply don’t know what we don’t know if we don’t make a concerted effort to fill the gaps in our know-how.

Now on we go.

Wait Time Metrics

For Customer contacts that you’ll handle in 60 minutes or less you’ll use Service Level.

For Customer contacts that you’ll handle after 60 minutes you’ll use Response Time.

Because of the confusion around Service Level and related Wait Time metrics let’s hold off on Response Time measurements for a future article.

As operations experts know – the way you define, measure and plan for performance between Service Level & Response Time contacts is completely different and it doesn’t do either justice to ‘mix them together’.

 

Service Level

1. Ok first things first – set your Service Level

You’ve got to set a Service Level for your Centre.  No Service Level?  Then it’s time to set one.  Because you can’t plan and staff to a moving or non-existent target.

If you’re using an ACD you’ve got to be using Service Level – it’s that simple.

Remember there is not an industry standard for Service Level.

What works for Organization A is not going to work for Organization B.

Even within a single organization you may find up to a dozen different Service Level objectives depending on the nature of that Customer queue and the types of Contacts handled.

Because Service Level is a major driver of your labor budget, it is typically reviewed annually – at the normal annual budget cycle.  Avoid changing your Service Level frequently – that will make life difficult for everyone.

Annual reviews – with the right mix of senior Participants – work well.

 

2. Ok – now decide what interval you’re going to use for measurement

Service Level performance is always measured on an interval basis.

Typically that’s at 30-minute intervals, though if your contacts are long a 60-minute interval may make more sense.

If you have a very small Centre or Queue – with just a few Agents – an hourly or even shift basis may be enough for you.

But if you have a larger Centre or Queue you should be measuring to the 30-minute interval. Very large Centres, and those pursuing a significant cost-efficiency strategy, measure down to the 15-minute interval.

If you’re only reporting a daily or weekly average – for example to give to the bosses – you’re going to have to add an additional set of internal measurements & reporting around intervals.

Because the Customer Wait Time experience, the Agent Occupancy experience and your best bet for cost efficiency live within interval management – not with daily or weekly (or heaven forbid) monthly averages.

Some Centres use a green/amber/red system to indicate interval performance across a day.

For example, if you run a 24-hour operation and measure down to the half-hourly interval, you have 48 intervals to ‘get right’.

So define what a green interval looks like.  For example Service Level performance of 90% or above = green.  And carry on that logic for amber and red intervals.

Imagine how easy it is to look at a color coded representation of your Service Level performance that day by interval.  How many greens?  Ambers?  Reds?  Where do patterns emerge?  Because you can only fix what you can find.
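A traffic-light classification like this is only a few lines of code.  The sketch below is illustrative – the 90% and 80% thresholds are assumptions, so set your own:

```python
def rag_status(service_level, green=0.90, amber=0.80):
    """Classify one interval's Service Level as green/amber/red.
    Thresholds here are illustrative - define what works for your Centre."""
    if service_level >= green:
        return "green"
    if service_level >= amber:
        return "amber"
    return "red"

# Four sample half-hour intervals from the 48 in a 24-hour day:
intervals = [0.95, 0.88, 0.72, 0.91]
colours = [rag_status(sl) for sl in intervals]
# → ['green', 'amber', 'red', 'green']
```

Run that across all 48 intervals and the day’s patterns jump out at a glance.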

And use cool graphical representations of Service Level performance to share with everybody in the Centre. It takes everyone in the boat, rowing in the same direction, to achieve Service Level.

It shouldn’t be a secret – it should be displayed everywhere.

 

3.  Choose a Service Level calculation formula

A lot of folks don’t realize that there are at least 4 different formulas out there to calculate your actual Service Level performance.

And that this formula has been input into your ACD.

So the question is always this.  Do you know what your formula is?  Are you happy with it?  Is it consistent across the organization to allow for some level of apples to apples comparison?

It’s an important decision.
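As an illustration, here are three common variants.  Exact naming and the treatment of abandoned calls differ by ACD vendor, so treat this as a sketch rather than a standard:

```python
def sl_excluding_abandons(answered_in_t, answered_total):
    # Variant 1: abandoned calls are ignored entirely
    return answered_in_t / answered_total

def sl_abandons_count_against(answered_in_t, answered_total, abandoned):
    # Variant 2: every abandoned call counts as a miss
    return answered_in_t / (answered_total + abandoned)

def sl_short_abandons_excluded(answered_in_t, answered_total,
                               abandoned, abandoned_before_t):
    # Variant 3: only abandons that waited past the threshold count against you
    return answered_in_t / (answered_total + abandoned - abandoned_before_t)

# The same half hour produces three different 'actual' Service Levels:
sl1 = sl_excluding_abandons(160, 200)                 # 0.80
sl2 = sl_abandons_count_against(160, 200, 20)         # ≈ 0.727
sl3 = sl_short_abandons_excluded(160, 200, 20, 12)    # ≈ 0.769
```

Same interval, same calls – three different results.  That’s why knowing which formula sits inside your ACD matters.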

Average Speed of Answer

Are you using Average Speed of Answer?  If so, why?

If you have an ACD, Service Level is the best and key metric to measure the Customer Wait Time experience.

Leading Centres don’t use ASA because it is an outcome of Service Level performance.  If Service Level goes down where does ASA go?  It goes up!

Learn to chase drivers – not outcomes.  Fix a driver and you automatically fix an outcome.

Where does ASA come into play?

Well, it’s an input to the Erlang B calculation that works out how many trunk lines you need for your Centre.  But that’s automatic.

And admittedly it’s easier to graph using ASA vs. Service Level.  Simply run Erlang C for your desired Service Level, identify the outcome ASA figure (let’s say 12.7 seconds) and use that for graphing.  It’s ‘equivalent’.
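If you’d like to see the Service Level/ASA relationship for yourself, a small Erlang C sketch makes it concrete.  This is a minimal illustration only – real workforce management tools layer on shrinkage, concurrency and more:

```python
import math

def erlang_c(agents, erlangs):
    """Probability a contact has to wait at all (Erlang C)."""
    # Build Erlang B via its stable recursion, then convert to Erlang C.
    b = 1.0
    for k in range(1, agents + 1):
        b = (erlangs * b) / (k + erlangs * b)
    return (agents * b) / (agents - erlangs * (1 - b))

def interval_stats(agents, volume, aht, interval=1800, threshold=20):
    """Service Level and ASA for one half-hour interval (times in seconds)."""
    erlangs = volume * aht / interval               # offered workload
    pw = erlang_c(agents, erlangs)
    sl = 1 - pw * math.exp(-(agents - erlangs) * threshold / aht)
    asa = pw * aht / (agents - erlangs)             # ASA falls out of the same maths
    return sl, asa

# 200 calls at 180s AHT in a half hour = 20 erlangs of workload
sl_25, asa_25 = interval_stats(25, 200, 180)
sl_28, asa_28 = interval_stats(28, 200, 180)
```

Add agents and Service Level rises while ASA falls – one driver, two outcomes.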

But if you have an ACD and you’re using ASA as a target or important metric stop, pause and ask yourself why.

It’s very ‘Jurassic Park’ and was in use in the days before ACDs were commonly installed.

 

Longest Wait Time

One Customer is going to experience a ‘Longest Wait Time’ even when you’re achieving your interval based Service Level.

Firstly – everyone should know what that is.  Because even if you’re achieving your 80/20 or 90/10 or 50/40 someone is going to wait a long time.

There are often widely different wait time experiences for Customers within a single half hour interval.  It’s good to be aware of that.  Particularly if you survey Customers on their wait time experience.

Lastly – and perhaps most importantly – whoever is reading your Readerboard and making decisions based on that data MUST know the Longest Wait Time for that interval.

The Longest Wait Time is an important piece of data for interpreting your Readerboard and making smart Service Level recovery decisions.

Fortunately Erlang C helps us calculate that piece of data for each and every interval.

And just like ASA, Longest Wait Time is an outcome of Service Level.  If your Service Level goes down your Longest Wait Time goes up.

But as I mentioned before – don’t chase outcomes, find and fix the driver(s) – in this case Service Level.

 

Abandonment Rate

There’s a lot of confusion around this metric.

Clearly, Centres that generate revenue – such as food orders or hotel reservations – care a lot about their Abandonment Rate.  But they address that ‘care’ by setting very high Service Level objectives.  That means that Customers get answered so quickly they hardly have time to abandon – though of course an Abandonment Rate still exists.

For non-revenue generating Centres – and that’s most of them – Abandonment Rate is an outcome.  An outcome of Service Level.

If Service Level goes down, it is likely (though not assured) that Abandonment Rate will go up.  Or when Service Level goes up, it is likely (though not assured) that Abandonment Rate will go down.

Smart Centres see Abandonment Rate as an outcome.

So rather than targeting it they examine it.  When do most people abandon?  What intervals experience higher or lower Abandonment Rate?  When do we play our delay announcements?  Should we move our announcements around?

Because Abandonment Rate is a human behaviour – not a mathematical behaviour.  It lies in the hands of the Customer.

We’re in control of (drum roll please) Service Level.  When we’re achieving Service Level by interval we’re accomplishing our mission. We’re delivering a consistent Customer Wait Time, Agent Occupancy and Organizational promise experience.

Trying to chase abandoned calls is like trying to catch a greased pig at the county fair.
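Examining rather than targeting can start very simply.  Here’s a minimal sketch with made-up half-hour data:

```python
# (interval start, calls offered, calls abandoned) - illustrative numbers only
rows = [
    ("09:00", 120, 4), ("09:30", 150, 9),
    ("10:00", 180, 22), ("10:30", 140, 6),
]

# Abandonment Rate per interval - examine it, don't target it
rates = {start: abandoned / offered for start, offered, abandoned in rows}
worst = max(rates, key=rates.get)
# worst → "10:00" (22/180 ≈ 12.2%) - now ask why THAT interval hurts
```

Once you can see which intervals hurt, questions about delay announcements and staffing for those intervals become much easier to answer.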

Getting a handle on Abandonment Rate in the Contact Centre

 

Service Level (again)

I like to say that Service Level has a Driving License and the other metrics are all passengers in the car.

So if Service Level turns left – they all turn left.  If Service Level turns right – they all turn right.

You get the idea.

And if it helps remember that there are only 3 drivers of Service Level for any interval.  They are:

  • Contact Volume
  • Average Handling Time
  • Agent Capacity

If you’re struggling to achieve Service Level for any interval or set of intervals your root cause lies within one or more of these 3 variables.
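A root-cause check against these 3 drivers can be sketched in a few lines.  The field names and figures below are hypothetical:

```python
def service_level_drivers(actual, forecast):
    """Compare the 3 Service Level drivers for one interval and
    return the ones that moved against you."""
    suspects = []
    if actual["volume"] > forecast["volume"]:
        suspects.append("Contact Volume higher than forecast")
    if actual["aht"] > forecast["aht"]:
        suspects.append("Average Handling Time longer than planned")
    if actual["agents"] < forecast["agents"]:
        suspects.append("Fewer Agents available than scheduled")
    return suspects

missed = service_level_drivers(
    actual={"volume": 240, "aht": 205, "agents": 18},
    forecast={"volume": 220, "aht": 190, "agents": 20},
)
# In this (hypothetical) interval all three drivers moved the wrong way
```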

More articles on Operations soon and thank you for reading!

Daniel

A few things I’ve learned about great Telemarketing Operations

by OmniTouch International OmniTouch International No Comments

I’ve learned a few things about great Telemarketing operations and share some of these in this short article.

Just like the inbound Contact Centre, great Telemarketing operations involve mastering the ecosystem – not just flogging Telemarketers to ‘do better’.

Here are some of the things I’ve learned and that I teach in my Telemarketing work.

1.  The language of Telemarketing

Like any industry, there is a unique language for Telemarketing.

Before your Team can do the data analytics and conduct performance management and coaching, everyone has to have the same understanding of the terms involved.

I always begin every Telemarketing Operations session with an exercise to allow folks to ‘guess’ at defining terms.

Here are some of the terms I ask them to consider:

  • Abandon Rate
  • Attempted Calls
  • Call Status Code
  • Contact
  • Contact Rate
  • Cost per Contact
  • Cost per Minute
  • Penetration Rate
  • Effective Calls
  • Ineffective Calls
  • Conversion Rate
  • Productivity
  • Quality
  • Qualification
  • Minimum Standard
  • The Telemarketing Script

Remember that the people in the room work in Telemarketing operations.  This is their daily bread and butter.

So you’d expect the exercise to turn out well.

But that’s rarely the case.

Of course by the time we end our session, they’ve mastered the terminology and know how to use it.

That’s the planned and desired outcome.

Lesson:  If everyone isn’t on board with the same language and understanding of terms you’ll get the Tower of Babel effect – and that’s not good.

Test:  Take a few of these terms and try it out with your Telemarketing management team.  How close or far off were they from the right answer?  And even more importantly – how close or far off were they in relation to each other?

 

2.  Hire right

Put bluntly, ask yourself – how many of my Telemarketers can actually sell?

And how many of my Coaches can actually coach to a sales conversation?

Probably less than you’d like.

And yet, perhaps because of inertia or because of positive relationships between Team Members, Telemarketing operations carry folks who can’t sell or who can’t coach to sales.

Putting aside the need to clean house, the more important learning is – how can I reverse engineer what I’ve learned from my high performers and factor that back into my hiring criteria?

Because while it may take time, the effort to clear out those that can’t perform and hire in folks who can will pay off in better results.

Lesson:  If the wrong people are on the Team, you’re going to struggle to get where you want to go. 

Test:  Look at the spread of performance amongst individuals.  Is there a wide spread of performance?  That’s rarely a good sign.  We cover spread analysis in the next section. 

 

3.  Measure right

The focus here should be on sales.  And in particular, a metric like Sales per Hour which allows for individual and Team cross-comparisons, regardless of how many hours were actually worked.

Conversion rate is an important part of this – obviously.  But focusing only on the conversion rate results in poor performance management practices.

And enough about Quality already.

Yes – Quality matters.  But Quality is table stakes in telemarketing.

If your folks aren’t delivering your defined level of minimum standard quality – even after training & coaching – exclude them from your incentive system.  Send the message clearly.

The same for ‘Attendance’ which is a hygiene factor.

We always advise our Clients to consider the role of Productivity as well as the role of Conversion when measuring and comparing the performance of individual Telemarketers.

How many Contacts (per hour) did the Telemarketer achieve?  How does that look in relation to others?

Be careful though.

The goal in Telemarketing isn’t productivity – it’s most often Sales per Hour.  But better Productivity ‘supports’ that outcome.
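Here’s why the ‘per hour’ normalization matters – a tiny sketch with made-up names and numbers:

```python
def sales_per_hour(sales, hours_worked):
    """Normalize sales by hours so people on different schedules compare fairly."""
    return sales / hours_worked

# (sales, hours worked) - illustrative figures; hours differ per person
team = {"Ana": (42, 38.5), "Ben": (40, 30.0), "Cho": (25, 20.0)}
sph = {name: round(sales_per_hour(s, h), 2) for name, (s, h) in team.items()}
# → {'Ana': 1.09, 'Ben': 1.33, 'Cho': 1.25}
```

Ana has the most raw sales, yet the lowest Sales per Hour – which is exactly the distortion a raw sales count hides.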

 

Just like the inbound Contact Centre – if the ‘spread’ of performance across your Team is all over the place, you’ve got a management problem – not a Telemarketer problem.

It’s the role of great Team Leaders & Managers to train, coach and manage individuals either up or out.  We’ve intentionally shown a diagram here that would be a source of alarm for a Telemarketing Director.

And use your management time wisely.

Does it make sense to spend hours helping a high performer increase their performance a few percentage points?

Or is it better to consider how to get your bottom third to perform at the level of your middle third?

You need to be able to consider where to spend your time so that the ‘most’ results will be achieved.
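One simple way to frame that decision is a tercile split by Sales per Hour.  An illustrative sketch (the names and figures are invented):

```python
def terciles(sph_by_person):
    """Split Telemarketers into bottom/middle/top thirds by Sales per Hour."""
    ranked = sorted(sph_by_person, key=sph_by_person.get)
    n = len(ranked)
    return ranked[: n // 3], ranked[n // 3 : 2 * n // 3], ranked[2 * n // 3 :]

sph = {"A": 0.6, "B": 1.4, "C": 0.9, "D": 1.1, "E": 0.7, "F": 1.6}
bottom, middle, top = terciles(sph)
# bottom → ['A', 'E'] - coaching them up toward the middle third usually
# moves total results more than polishing the top third further
```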

Lesson:  If you focus on the wrong measurements, you’ll struggle with getting better outcomes.  Sales per Hour, supported by Productivity along with minimum standards for Quality & Attendance put everyone on the right path. 

Test:  Go look at what your Telemarketing Team is measuring.  Does it work?  Is it yielding the right outcomes?  Does everybody know what we measure, how we define it and why we measure it this way?

 

4.  Train right

Most ‘good’ outbound training programs cover what needs to be covered.  But I think the real question is this.

Based on reverse engineering from the high performers on the Team – AND based on our regular and constant call monitoring – have we selected and defined the most important behaviours to be covered in training?

Because your Telemarketers deserve more than just to be trained.  They deserve training that has been proven to help them succeed in the job.

And that only comes through extensive data analytics that tie together the right behaviours drawn from high performers and extensive call monitoring.

As a closing thought for training, the use of a relevant script or call-flow pattern helps a lot.

Too many Centres hand their Telemarketers a set of FAQs or a set of brochures.  But Customers don’t talk like brochures or FAQs, so getting a conversational flow going back and forth is important.

Lesson:  Everyone on the Team should be able to name and describe the top 5, maximum 10 behaviours that high performers exhibit that correlate to better sales outcomes.  

Test:  Individually ask each Team Manager and Team Leader what they believe (or know) are the top 5 behaviours that the high performers are demonstrating that correlate to better sales outcomes.  Can they name them?  Is there alignment across the management population in what they chose to answer?

 

5.  Manage right

This category is a biggie.  So let’s jump right in with coaching.

Coaching

If you’re not going to provide valuable, ongoing coaching then there’s little point in training.  Understanding and behavioural change happen under the guidance of a great sales Coach.

Training gets everybody on the same page – but it doesn’t fix a poor hire and it doesn’t provide lasting explanation and reinforcement of what it takes to improve sales performance.

And yet in Centre after Centre you’ll usually find little to no ongoing coaching.  Or – which is perhaps worse – coaching that centers around Quality – not Sales.

And for a Telemarketer that can be confusing.

Performance Management

Individual & Team results for telemarketing performance should be measured and managed hourly.  That’s one of the biggest lessons I learned in running telemarketing campaigns.

What’s the spread of productivity across my Team right now?  What’s the spread of conversion across my Team right now?

By the time you look at weekly or (heaven forbid) monthly reports – the game is over.

All the improvements, suggestions, nudges and motivational opportunities have been missed.

I had a discussion with a senior Telemarketing Manager earlier this year.  She said, ‘Dan, our Team Leaders can barely do a daily basis analysis much less an hourly one.’

It took all my diplomacy at that moment to not say anything.  But what I thought was this – ok – as long as you’re fine with less than optimized performance.

I’m not a big fan of folks that put up barriers.  Yup there are always barriers.  But sometimes this is really resistance to change masked as a barrier.

If you knew something would work to give you better telemarketing results – wouldn’t you try?

Incentives & Rewards

This is a massive topic on its own.  So I’ll hardly do it justice here.

But high performers are, by and large, driven by money.  By creating a lifestyle for themselves and their families.

The right incentive system design for the right performance yields significant ROI.

Program Analytics

When you set up a powerful Call Status Classification Model, you’re able to analyze the various outcomes of your telemarketing campaign by day, by week, by campaign to date up to the ‘End of Campaign’.

What percentage of Nos are we getting?  What are the top 3 Nos we’re getting?  How many callbacks are happening?  Are there a lot of wrong numbers in the list?  For successful conversions, which of the product offerings are popular and which are less popular?  What are the top 3 reasons we can’t reach the decision maker and what can we do to overcome those?

When you learn to design great call status code classifications, you gain tremendous insight into the overall campaign – some of which allows you to make adjustments and tailor the campaign on the fly.

For example, you may find that one of your Objection Handling scenarios isn’t working well.  So you can dive in, listen to calls and craft a modified response.

Obviously waiting until the end of the week or month will be too late.
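With even a handful of status codes, the day’s ‘Nos’ can be tallied in a few lines.  The codes below are hypothetical examples of a classification model, not a standard:

```python
from collections import Counter

# Illustrative call status codes logged during one campaign day
statuses = ["SALE", "NO_BUDGET", "NO_NEED", "WRONG_NUMBER", "CALLBACK",
            "NO_NEED", "NO_BUDGET", "NO_NEED", "SALE", "NO_TRUST"]

counts = Counter(statuses)                               # every outcome, tallied
no_reasons = Counter(s for s in statuses if s.startswith("NO_"))
top_nos = no_reasons.most_common(3)
# → [('NO_NEED', 3), ('NO_BUDGET', 2), ('NO_TRUST', 1)]
```

Run a tally like this hourly and daily, and the objection-handling scenario that isn’t working shows up while there’s still time to fix it.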

Lessons: 

Regular, valuable and relevant (Sales) coaching helps translate training into action.  It helps people succeed.  It yields results.  

Great performance management in telemarketing is conducted hourly, daily and campaign to date.  Waiting for the end of a week or month is too late. 

Great Telemarketers work for the money.  Money matters.  So the incentive & rewards systems matter too. 

Program analytics help you understand what is working, what is not working and where to focus improvement efforts in the ongoing ‘life’ of your telemarketing campaign. 

In closing

Of course any telemarketing operation is going to be impacted by laws, regulatory authorities, the calibre of the CRM and systems in use.  That’s a different set of topics.

But when it comes to understanding the totality of the telemarketing ecosystem, I hope that some of the points made in this short article are helpful to you.

Daniel

When good people follow bad Contact Centre process – a story

by OmniTouch International

In this short article I look at an example of how otherwise ‘good’ people follow bad Contact Centre process.

Sitting around our workshop table, one of the Participants – a former Contact Centre Agent from a Philippines-based BPO – shared.

“Dan – it starts like this.

QA walks over to our station and while we’re talking to a Customer they give us the time out sign.  That’s their signal telling us to wrap the call up quickly so they can conduct our side by side coaching session.

That time out sign approach is a little off-putting but you have no choice but to get used to it.  

After they settle in and connect their headset to our phone, they pull out a scorecard.  

And as I log back into the system and receive my next call, they quietly mark their paper while I’m talking.  

When the call is done, I log back out and they talk me through each box they ticked.

Mostly I just hope that my score is a ‘pass’ because if it isn’t, they can just go on and on about my mistakes. 

So of course while they’re sitting next to me I do everything in my power to achieve a pass.

I never knew that there were ‘right’ and ‘wrong’ ways to do side by side monitoring.  Your course is the first time I heard this.

I only have my own experience to go by.  And it wasn’t a good one.”

There’s a lot that’s wrong in that story

At this point in our workshop, when the story gets shared, we’re talking about the power of the side by side method for monitoring & coaching.

The relationship building, the power of personal connection – the time to build trust.  The opportunity to make time for the people who work for you.

But in the many years I’ve taught this method – admittedly one of my favorites – I find that very few practice it (“we have no time!”), and those who do often practice it in a way that damages the relationship rather than strengthening it.

There are a few things wrong in this story – and practices like these are more common than you’d think.

  • Using hand signals to summon people is rude – these should be reserved for animals – not human beings
  • Using a scorecard at a side by side session makes no sense – talk about frightening
  • Everyone’s faking it here – especially the Agent who is put in a no-win situation
  • The entire point of helping someone do ‘better’ has been lost
  • The focus on what went ‘wrong’

But the QA person in this story isn’t the villain

It’s easy to say – oh – the QA person you’re describing is the singular villain in this story.

But you’d be wrong in most cases.  Because what happens is this.

Good people readily conform to and carry out bad processes.

To ‘fit in’, to ‘get the job done’, to ‘show they’ve got the stuff’ for advancement & promotion.

And to be fair – it may be the only way they know, because that’s all they’ve ever experienced or been taught.  I see this a lot in the Contact Centre industry.

Even former Agents – who disliked everything we’ve just talked about – will readily jump in the saddle and carry on a legacy process that’s broken.

The villain in this story is the bad Contact Centre process.  In this case around side by side monitoring & coaching.

It’s not so great from a values perspective either

As a side observation to this story there’s an impact on ‘culture’ here too.

It’s likely that this Philippines-based BPO has the word ‘respect’ in their core values – and if not ‘respect’, then something similar and equally lofty-sounding.

We ‘respect’ each other, we ‘respect’ our Customers, etc.  The posters are everywhere.  And here are pictures of all of us at our annual Team building showing our respect for each other.

But culture is nurtured through the actual behaviours of people at work.

Especially those in leadership and professional roles.

Summoning people with hand gestures and scoring them when they’re trying to serve a Customer aren’t really brilliant examples of respect.

So if you’re after building a ‘culture’ (and today who isn’t?) it’s a worthwhile effort to filter your processes – and the way they’re executed – through the lens of your values.

Why are you still talking about Average Handling Time?

Is the Contact Centre industry really that mature?

I once received a comment from a reader who said – “Dan, why do you constantly write about the Contact Centre industry?  It’s a very mature industry already.”

And that comment made me think.  Sure – it’s a mature industry.

But do we always run it in a mature way?

Thanks for reading!

Daniel

How a group of Lifeguards brought Customer Experience to life at the Waterpark

by OmniTouch International

In this short article we share how a group of Lifeguards brought Customer Experience to life in a Singapore-based waterpark.

It was Marcus’ birthday.

So we decided to visit the Adventure Cove Waterpark for the day to celebrate and enjoy the rides.

Towels, swimming shorts and sunscreen were all packed.  And the morning sky was clear as we boarded the subway for Sentosa Island.

Some years back we had been engaged by Universal Studios to conduct extensive Customer Experience Mystery Shopper research which had included the Adventure Cove Waterpark.

And on this visit – which was purely personal – we noticed some positive changes.

In Customer Experience you learn that every single Employee job description should contain specific Customer Experience functions or activities pertinent to that job role.

These are usually referred to as Customer Experience standards and will (of course) differ by job function.

And on this visit to the Waterpark we saw a terrific example in action.

Here is what happened.

Adventure River

In Adventure Cove there’s a very relaxing ‘ride’ where you recline in a big plastic inner tube and float around a long lazy river that meanders around the entire Waterpark for maybe 45 minutes or so.

And there are Lifeguards everywhere – stationed perhaps every 50 – 100 meters along the entire river – which is great from a safety standpoint.

But years before, when we did our original Customer Experience Mystery Shopper research, the Lifeguards were silent and inert – doing their job yes – but not part of the show.

But yesterday the Lifeguards were different – in a good way

As we floated happily by on our inner tubes, so many of the Lifeguards smiled.  A couple asked how we were.

And one young standout lady, after asking us if we had eaten yet, gave us lunch recommendations (yes you do move that slowly).

When a family with children was in the vicinity, a few of the Lifeguards would pull out water pistols and open up a mock battle with the kids squealing and the parents laughing along.

What a difference it all made.

The Lifeguards were doing their job – yes.  Safety first.  But they had also become part of the experience.

And that didn’t happen by accident.  Someone – with great clarity – put the engagement with Guests into the Lifeguard job description.

Well done!

In closing

Take a moment and consider the opportunity. Have you put Customer experience standards into every Job Description in your organization?

Because good things happen when Customer Experience is everyone’s job.

Thank you for reading!

Daniel Ord

Shiny Toy Syndrome – thoughts on Technology in the Customer Experience

by OmniTouch International

In this short article I share why I think shiny toy syndrome is an ineffective approach to improving the Customer Experience, along with some thoughts on how to consider the role of technology in the Customer Experience.

Last year I spoke at a government event

Last year I was invited – along with other Speakers – to talk about technology change in the Contact Centre industry in particular.

On the sidelines I had a chance to catch up with the other Speakers.

Each one told me the same thing.

In the brief received before the event, they were instructed to incorporate the topic of drones into their talk.  Whether or not drones were relevant to their profession, it was a mandated requirement for participating in the event.

And it was painful to watch as each Speaker grafted on a shiny toy slide or two at the end of their presentations.

It didn’t ring true and you could see the look of apology on each Speaker’s face as they tried to sound like futurist geniuses.

Shiny toy syndrome can blind you

Being anti-shiny toy syndrome doesn’t mean you’re anti-technology.

I can already hear some of the villagers gathering their pitchforks and torches to come burn down your Luddite castle.

It means that you’ve been around the block enough times to know that technology without purpose can be a mess.  Or that technology should be in the ‘service’ of something bigger.

The key is to put the technology in context.

In Customer Experience programs I like to talk through the following steps when considering the role of technology in the Organization.

1.  Your Customer Experience Strategy

I always begin with our Customer Experience Strategy.

Who are we?  What do we promise to our Customers either explicitly or implicitly?

Because ultimately our Customer Experience strategy will be the filter through which we make decisions on the kind of experience we are going to offer.  And that includes the role of technology.

It also unifies our thinking across disparate job roles and locations so that everyone knows what kind of experience we deliver around here.

Our North Star – a superb first step.

2.  What do Customers expect?

While we would have considered this in Point #1 – the Customer Experience Strategy – it’s worth blowing this out into a domain of its own.

Sometimes referred to as Customer Understanding, Customer Insight or Voice of Customer, understanding what Customers expect from us serves as a great guide to what we offer to them.

Obviously quantitative and qualitative research have a big role to play here.

And the good news is that important learnings around Customer Expectations have already been codified and are well understood.

For example, let’s look at Customer expectations related to a frictionless Customer Experience.

Don Peppers, in his terrific book “Customer Experience: What, How and Why Now”, lays out the four attributes of a frictionless Customer Experience.

These are:

  • Reliability
  • Value
  • Relevance
  • Trustability

So if your Customer Experience strategy is heavily weighted towards delivering a frictionless experience, you’d work through these four attributes to establish how to bring them to life in your Organization.

And you’d look at the technology that helps you achieve these things – particularly at scale.

I read a wonderful case study about an insurance company in the U.S.

They allow Customers to make claims by shooting a short video on their mobile phone and uploading that directly to the claims department for approval.  Apparently approvals are issued within minutes.  Wow.

3. The role of Customer Journeys

If you’re rolling your eyes now I’m with you.

Some folks want to make Journey Mapping as complex as possible so that it seems beyond the grasp of mere mortals.

I’ve found that the difference between a successful journey mapping program and one that is not so successful is the calibre of the people included and their willingness to take action based on what they learned.

Because a pretty map is just that – pretty.  If that’s your goal just buy a painting.

There are a couple of things that are wonderful about Journey Maps.

Firstly, they cross functional boundaries – so they require cross-functional collaboration to serve a higher purpose.

Secondly, McKinsey noted years ago that Customers don’t think in touchpoints – they think in journeys.

When you mix that with the ‘Peak-End’ rule (Daniel Kahneman and colleagues) on how Customers remember their experiences, you’ve got a compelling case for working through your most important Customer journeys.

And as you do your mapping, ask yourself.

Where would technology enable this journey?  Not just save cost (which is too often the catchphrase) but how can it create a new ‘opportunity’ for Customers that they never had before?

When you apply this mental discipline to the role of technology in your Customers’ lives you’ll find many more relevant opportunities than just saying “let’s buy a chatbot!”.

Read this marvelous case study shared by Bill Gates on how chatbots improved enrolment into higher education:

https://www.vox.com/2018/8/3/17639142/poor-kids-college-dont-enroll?linkId=55392932   (you will need to copy and paste this link)

4.  The role of Imagination

You can’t talk about technology today without talking about the role of imagination.

Think of something, prototype it, test it, try again till you find something that works.

Not everything is going to come out of the mouth of a Customer, nor can everything be copied from somebody else.  Or, heaven forbid, lifted from ‘best practices’.

I think imagination at work is a trait that is highly under-rated.  And in many command & control style Organizations it’s actually squashed.

You shouldn’t win an industry award because you implemented a chatbot

I’m not a fan of shiny toy syndrome. I’m a big fan of technology.

And I’m a bigger fan of learning about the context of how technology created a better Customer Experience.

No – you shouldn’t win an industry award because you implemented a chatbot.

But perhaps you’re an Awards candidate if you can explain the context of what led you to the use of your chatbot and how that chatbot supports your Customer Experience strategy overall.

Thanks for reading!

Daniel