Sometimes customer experience measurement programmes are great, but occasionally they lack something…
I guess, as the MD of an Insight agency that specialises in customer experience measurement, I take more of an interest than most in how and when companies gather feedback. Generally the programmes make sense and, on average, my experience is positive. However, there are times when I really do question the rationale being applied and whether the process, and the insight being derived, have really been thought through.
Some people may argue that any feedback is good – “what harm can it do?” And I suppose there’s a degree of truth in that. However, I don’t subscribe to that view, and I have three bugbears that frustrate me with certain customer experience programmes:
- Are questions simply measurement for measurement’s sake? What action can you take with the information gathered?
- Timing and appropriateness – Are you asking questions at an appropriate time, that are relevant, and do they make sense? Could they annoy the customer and do more harm than good?
- What value is derived? How will the feedback impact customer experience? How is it going to improve your profitability?
Each of these is a subject in its own right so, to avoid making this blog too long, I will discuss each one separately in a series of three posts. The first one being:
# Part 1 – Customer experience – Measurement for measurement’s sake
I’m sure we have all experienced this at some point. We’re in a shop or restaurant; we’ve used an online service; taken a train journey; or even used a toilet. At some point we are asked about our satisfaction, or whether we’d recommend the service we’ve just received. Invariably there’s some preamble about how our opinion matters and a request to please take the time to respond. And occasionally we are even shown some evidence of how well they say they are performing. It sounds reasonable, doesn’t it? So why do I have a bugbear about it at times?
Well, I guess it’s a question of whether the feedback is useful or simply measurement for measurement’s sake. Let me give you the most recent example, which I came across on a business trip to Estonia. I was visiting my business partner at his office and we went to lunch in the office restaurant. It’s a large self-service restaurant comprising three separate food outlets, each serving a different style of food with its own menu. Regardless of which outlet you chose, everyone ate in a common dining area of shared tables. Once finished, you returned your tray, glass, dishes and cutlery to a shared conveyor belt for cleaning. At the exit to this cleaning station was a sign encouraging feedback and a nice ‘smiley face machine’ to register your vote.
On the face of it, a great idea: simple encouragement for customers to provide easy, hassle-free feedback. In addition, it gave a nice public reinforcement of how positive everyone was about the service.
Ok so that all seems good, so where’s my issue?
Well, it actually took me a day or so of thinking, and several visits to the restaurant, before I realised where my issue lay. Let me explain…
## Customer experience measurement should be actionable
Taking this example, on the face of it the restaurant was performing very well. 93% of customers (who fed back) rated the food as tasty, and you can see they felt justifiably proud. But I started to think about my own experiences. On some days the food was excellent, on other days it was average, and on other days poor. Yes, I was able to vote accordingly and that affected the overall score. However, I didn’t eat in the same section each time, and I only chose one of the menu options available. And that’s where I started to question how actionable the feedback was.
To do this, I imagined I was the Manager of the overall restaurant and asked myself how I could use this information to make the food or service better. The reality was that I couldn’t. Yes, at a very high level I knew how satisfied my customers were overall on a daily basis, and yes, I could track a trend over time. However, as the Manager I wouldn’t have a clue which of the three outlets was scoring high or low on a particular day, and I wouldn’t know which meal was going down well and which wasn’t. So how would I know what action to take to put it right?
This made me feel that this was perhaps just measurement for measurement’s sake, albeit well-meaning. Consequently I started to think further about my customer experience as a whole: the food, the environment, the time spent queuing, and so on. As already explained, on some days the food was good and on other days not so good. But on some days we had to queue for ages, while on others it was quick; and occasionally the queuing system was a total mess. Thus my overall customer experience was quite mixed, and the question I was asked, “Was your lunch tasty?”, really limited my ability to give valuable feedback.
Whether the food is tasty or not is clearly important to those that eat there, and the Manager, but that does not equate to a positive customer experience. Certainly if the food was consistently poor the restaurant takings would likely show a downward trend. However the all-important loyalty of the customer depends on the food for sure, but it also depends on the friendliness of the staff, the speed of service, the comfort of dining, and the cleanliness of the premises.
Over lunch one day I asked my business partner whether he’d voted, and how. His response was that if his food had been good but his overall experience poor, he would score lower. So I began to wonder about the reliability of the measure: were the scores accurate? At first glance the results suggested my doubts were unfounded, as the score was 93% positive, so perhaps I was just being critical?
But then I took a closer look at the detail, starting with the results poster, which showed that 93% were ‘Positive’. Yes, the wording only says ‘Positive’, which is accurate. However, the green smiley face used on the poster is the one that represents ‘Very Positive’ on the machine, not ‘Positive’ as well. I found that a little misleading, but more importantly only 68% actually selected ‘Very Positive’. What is really clear, then, is that 346 people (32%) felt there were definitely things to improve. But what?
As the Manager of the restaurant, I have an amalgamated score for the last month. I can see a monthly trend, but there isn’t really any insight that is easily actionable.
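To put those percentages in perspective, here is a quick back-of-the-envelope calculation. It’s only a sketch: it assumes the quoted figures (93%, 68%, 346 people) are exact, whereas in reality the percentages are almost certainly rounded, so the derived counts are approximate.

```python
# Back-of-the-envelope check on the restaurant's poster figures.
very_positive_share = 0.68   # 68% pressed the green 'Very Positive' face
positive_share = 0.93        # 93% reported as 'Positive' overall
below_very_positive = 346    # voters (32%) who chose something below 'Very Positive'

# If 346 people make up the remaining 32%, the implied monthly total is:
total_voters = round(below_very_positive / (1 - very_positive_share))
print(total_voters)  # roughly 1,081 voters in the month

# From that total we can approximate the other groups:
print(round(total_voters * very_positive_share))   # ~735 chose 'Very Positive'
print(round(total_voters * (1 - positive_share)))  # ~76 voted negatively
```

In other words, out of roughly a thousand diners a month, several hundred signalled that something fell short, and the single smiley score gives the Manager no way of knowing what.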
## So how do we ensure customer experience measurement is working, and not just a meaningless score?
Obviously, for some of you this example may seem a little ‘simple’ and not representative of many customer experience programmes. And it may sound like I’m picking on this Estonian canteen, belittling its attempts to improve its service and communicate its achievements to its customers. I’m absolutely not. They’ve made it very easy for their customers to give feedback, and they clearly feel customer experience is important. Perhaps they are also close enough to their customers to apply common sense and insider knowledge to the results. However, I’ve used this example to illustrate a point, and that brings me back to my first bugbear: measurement of customer experience needs to be actionable, not just a score.
Naturally customer journeys are often more complex than the example I’ve used. Frequently they involve multiple touchpoints with the brand, products, services, and/or their agents. As a result businesses often invest significant resource and capital to understand their customers’ experience. However, this doesn’t necessarily mean the programme is working effectively, or that it delivers the deeper insight that the business requires.
For that reason, and based upon our experience, we have put together the following guide, ‘The Fundamentals of Customer Experience Management’. In it we take a simple and practical look at the key elements of an effective customer experience programme and how you can maximise the return on investment. You can download a copy by clicking on the image below.