

Customer Satisfaction

Nigel Hill, Greg Roche and Rachel Allen

Cogent

THE CUSTOMER EXPERIENCE THROUGH THE CUSTOMER’S EYES


Published by Cogent Publishing in 2007

Cogent Publishing Ltd
26 York Street
London
W1U 6PZ

Tel: 0870 240 7885
Web: www.cogentpublishing.co.uk
Email: [email protected]

Registered in England no 3980246

Copyright © Nigel Hill, Greg Roche and Rachel Allen, 2007

All rights reserved. This book must not be circulated in any form of binding or cover other than that in which it is published and without a similar condition being imposed on the subsequent purchaser. No part of this publication may be reproduced, stored on a retrieval system or transmitted in any form, or by any other means, electronic, mechanical, photocopying, recording or otherwise, without either prior permission in writing from the publisher or a licence permitting restricted copying. In the United Kingdom licences are issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London, W1P 9HE. The right of Nigel Hill, Greg Roche and Rachel Allen to be identified as authors of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

A British Library Cataloguing in Publication record is available for this publication.

ISBN 978-0-9554161-1-8

Printed and bound in Great Britain by The Charlesworth Group, Wakefield, West Yorkshire.

Available from all good bookshops. In case of difficulty contact Cogent Publishing on (+44) 870 240 7885.



About the authors

Nigel Hill
Nigel is the founder of The Leadership Factor, a company that specialises in helping organisations to measure, monitor and improve their customers' experience. With offices in the USA, Australia, Russia, Spain, Portugal and France as well as the UK, The Leadership Factor provides research services, advice and training worldwide. Nigel has written three previous books and many articles about customers and speaks at conferences and events around the world. He has helped organisations such as Manchester United FC, Chelsea FC, the BBC, ASDA, and Land Securities amongst many others.

Greg Roche
Client Director at The Leadership Factor. Greg is one of the UK's leading experts in helping organisations to use data from customer satisfaction surveys to improve their customer experience. He has worked with many different organisations across all sectors of the economy including Royal Bank of Scotland, Visa, Tarmac, Irish Life, Allied Irish Bank, Churchill, Privilege, Jurys Doyle Hotels, Sainsbury's Convenience and The Bank of New York.

Rachel Allen
Rachel is Client Manager at The Leadership Factor. She is an expert on customer satisfaction research and complaint handling. Rachel has written many articles and speaks widely on these subjects at conferences, seminars and other events. She works with many different organisations on surveys and complaint handling including Direct Line, Tesco, Royal Borough of Kensington and Chelsea, HBOS, Forensic Science Service and Royal Bank of Scotland International.

If you would like to contact any of the authors go to www.customersatisfactionbook.com and follow the contact instructions.



Acknowledgements

Many people have helped in the preparation of this book. Particular thanks to Robert Crawford, Director of the Institute of Customer Service, for writing the Preface and for being a continual source of honest advice, stimulating views and professional support. Thanks also to the many clients and contacts from companies and organisations across all sectors of the economy who have helped to develop our ideas and understanding whilst grappling with their real work of improving customer satisfaction. Amongst these, very special thanks to those who reviewed this book, including Tim Oakes from the RBS, Mark Adams from Virgin Mobile, Scott Davidson from Tesco Personal Finance and Quintin Hunte from Fiat. All made many useful suggestions for amendments or additions. Needless to say, any opinions, omissions or mistakes in the book are the responsibility of the authors.

There is much more to publishing a book than writing the words. Ask Rob Ward, who not only did the typesetting and produced the diagrams but also had to amend it all, many times, as the authors had second, third, fourth thoughts and more. Thanks also to Ruth Colleton, who cross-checked every single reference on the internet and, along with Janet Hill, corrected the proofs. Thanks to Rob Ward and Rob Egan for the cover design and to Charlotte and Lucy at Cogent Publishing for organising the never-ending list of tasks that turn a manuscript into a printed book that you can buy in shops or on the internet!


Contents

Acknowledgements
Introduction

CHAPTER ONE        DISPELLING THE MYTHS
CHAPTER TWO        THE BENEFITS OF CUSTOMER SATISFACTION
CHAPTER THREE      METHODOLOGY ESSENTIALS
CHAPTER FOUR       ASKING THE RIGHT QUESTIONS
CHAPTER FIVE       EXPLORATORY RESEARCH
CHAPTER SIX        SAMPLING
CHAPTER SEVEN      COLLECTING THE DATA
CHAPTER EIGHT      KEEPING THE SCORE
CHAPTER NINE       THE QUESTIONNAIRE
CHAPTER TEN        BASIC ANALYSIS
CHAPTER ELEVEN     MONITORING PERFORMANCE OVER TIME
CHAPTER TWELVE     ACTIONABLE OUTCOMES
CHAPTER THIRTEEN   COMPARISONS WITH COMPETITORS
CHAPTER FOURTEEN   ADVANCED ANALYSIS: UNDERSTANDING THE CAUSES AND CONSEQUENCES OF CUSTOMER SATISFACTION
CHAPTER FIFTEEN    USING SURVEYS TO DRIVE IMPROVEMENT
CHAPTER SIXTEEN    INVOLVING EMPLOYEES
CHAPTER SEVENTEEN  INVOLVING CUSTOMERS
CHAPTER EIGHTEEN   CONCLUSIONS

Glossary
Index


Introduction

This book is about building successful businesses through doing best what matters most to customers. In one volume we explain why this is so important, how it is achieved and how to measure and monitor the organisation's success in doing so. Our ambition is to inspire you to take action, make your customers more satisfied and loyal and your company more successful.

The book is organised in a clear report-style format, familiar to most managers and designed to make it easy to read and navigate. All chapters are fully referenced for those wanting more detailed information.

If you are still hungry for more knowledge, have unanswered questions or want to debate an issue, the book's website, www.customersatisfactionbook.com, is the place for you. You can use it to email the authors, find relevant customer satisfaction links, check out the blog or simply to keep up with the latest events and ideas in the customer satisfaction world.

We look forward to hearing from you.

Nigel Hill
Greg Roche
Rachel Allen

August 2007


CHAPTER ONE

Dispelling the Myths

This book is based on the premise that organisations succeed by doing best what matters most to customers. Human beings seek pleasurable experiences and avoid painful ones, so tend to return to companies that meet or exceed their requirements whilst shunning organisations that fail to meet them. These self-evident truths are most easily described by the phrase 'customer satisfaction and loyalty'. Customers whose needs are met or exceeded by an organisation form favourable attitudes about it. Since people's attitudes drive their future behaviours, highly satisfied customers usually display loyal behaviours such as staying with the company longer, buying more and recommending it – all of which are highly profitable to the company concerned. This book is about how organisations can accurately monitor customers' attitudes (satisfaction) in order to make decisions that will drive favourable customer behaviours (loyalty), thus making them more profitable – a concept that is simple as well as sensible. In recent years, however, there have been many attempts to complicate this process, leading to confusion, doubt and many myths about organisations' relationships with their customers; an unfortunate state of affairs that we intend to address in this first chapter.

At a glance
In this chapter we will examine the 6 main myths about measuring customer satisfaction:

a) Customer satisfaction is old hat. It's all about wowing the customer.
b) Only loyalty matters.
c) Improving customer satisfaction and loyalty is difficult.
d) Surveys don't work.
e) Consulting customers isn't the only way of monitoring customer satisfaction.
f) Surveys reduce customer satisfaction and loyalty.

1.1 Customer satisfaction is a limited concept

This book is about how organisations succeed by putting customers at the top of their agenda. From the 1980s in America and by the 1990s in most other countries, customer satisfaction was rarely challenged as a key organisational goal. In more recent years, however, a growing industry has developed around modifications or enhancements to the concept of customer satisfaction, spawning a multitude of words and phrases to describe it. The list is endless, but amongst the most common are customer loyalty, the customer relationship, the customer experience, customer focus, customer delight, wowing the customer, the loyalty effect, customer retention, the advocacy ladder, emotional attachment, service quality, service recovery, zero defections and customer win-back – and the list goes on. Needless to say, people get very passionate about defending their own little set of words, but they're all just semantics. They're just different words that describe the same phenomenon – the attitudes or feelings that customers form based on their experiences with an organisation. Satisfaction is a convenient generic word to summarise all these attitudes and feelings.

We're in favour of anything that makes things better for customers. We think it's fantastic if organisations can delight their customers and even better if they can make customers feel some kind of emotional attachment to them. However, those feelings are no more than descriptors for the type of attitudes customers hold at the highest levels of satisfaction, just as disgust could describe extreme dissatisfaction and indifference the mid-range of the satisfaction spectrum.

KEY POINT
The word "satisfaction" is the most appropriate label for the range of attitudes and feelings that customers hold about their experiences with an organisation.

1.2 Only loyalty matters

Whatever you call these customer attitudes, they are massively important to all organisations since they determine customers' future behaviours. Collectively known as loyalty, it is the behaviours rather than the attitudes that really interest companies. The best concise description of what loyalty is and why it's so important is provided by Harvard Business School. They call it the 3Rs [1].

FIGURE 1.1 The 3Rs of customer loyalty: retention, related sales, referrals

The 3Rs are customer behaviours – staying longer, choosing to use more of the products or services supplied by an organisation and recommending it to others. For example, Starbucks discovered that a 'highly satisfied' customer spent an average of £4.42 per visit and made an average of 7.2 visits per month. By contrast, an 'unsatisfied' customer spent £3.88 and visited 3.9 times per month [2]. Over one year, that's £381 compared with £181. See Chapter 14 for details on how Starbucks related these satisfaction attitudes and loyalty behaviours to the customer experience. There is conclusive evidence that loyalty behaviours such as these contribute hugely to corporate profitability. This is because a customer's value to a business typically increases over time (known as customer lifetime value). One-off, transient customers are typically a cost, whereas loyal, long-standing customers become highly profitable. The evidence for the profitability of loyal customers is fully explained and referenced in Chapter 2 of this book.
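As a quick check of the arithmetic behind these figures, the sketch below simply multiplies spend per visit by visits per month and by 12 months. The per-visit spend and visit frequency are the Starbucks numbers quoted above; the annualisation and rounding are our own working.

```python
# Annual value of a 'highly satisfied' versus an 'unsatisfied' Starbucks customer,
# using the per-visit spend and visit-frequency figures quoted in the text.

def annual_spend(spend_per_visit: float, visits_per_month: float) -> float:
    """Approximate annual spend: spend per visit x visits per month x 12 months."""
    return spend_per_visit * visits_per_month * 12

highly_satisfied = annual_spend(4.42, 7.2)   # ~381.9, quoted as roughly £381
unsatisfied = annual_spend(3.88, 3.9)        # ~181.6, quoted as roughly £181

print(f"Highly satisfied: £{highly_satisfied:.0f} per year")
print(f"Unsatisfied:      £{unsatisfied:.0f} per year")
```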

Since these customer behaviours have such an obvious direct link with organisations' financial performance, some commentators have been prompted to question the value of customer satisfaction, using phrases like 'the satisfaction trap' [3]. Some argue that since loyalty has a financial value, companies should focus all their efforts and resources on building customer loyalty [4,5]. Following the same logic, the fact that satisfaction per se has no financial value would suggest that monitoring it is a pointless waste of resources, customer loyalty being the 'true measure' [6]. The fact that several companies including Xerox [7], GM [8] and Forum [9] reported that satisfied customers do defect seemed to further devalue the whole concept of customer satisfaction, especially when Frederick Reichheld claimed in Harvard Business Review that 65% to 85% of customers that switched supplier were satisfied with their previous one [10]. This has prompted other authors to make claims such as "one thing is certain: current customer satisfaction measurement systems cannot be used as a reliable predictor of repeat purchase" [6] or "it is impossible to accurately forecast customer retention rates from levels of customer satisfaction" [11].

In reality, most customer experts now recognise such views as superficial, simply displaying a very poor understanding of how the relationship between organisations and their customers actually works. As Johnson and Gustafsson [12] of the University of Michigan point out, "to argue that quality or satisfaction or loyalty is what matters misses the point. These factors form a chain of cause and effect, building on each other so they cannot be treated separately. They represent a system that must be measured and managed as a whole if you want to maximize results."

There are 4 key reasons why effectively monitoring customer satisfaction provides essential management information for organisations to optimise the benefits of their relationship with customers:

1.2.1 Attitudes precede behaviours

Whether we call them satisfaction, delight, emotional attachment or the latest conference buzzword, the attitudes customers hold about an organisation determine their future behaviour towards it. Measuring customer satisfaction is therefore the main lead indicator of future customer behaviours, which, in turn, will determine company profitability.


CSM (customer satisfaction measurement) is totally focused on the first oval in Figure 1.2 – measuring customers' attitudes about how satisfied they feel with the organisation. As lead indicators, customers' attitudes provide by far the most useful data for managing organisational performance. Of course, customers' behaviours, especially their loyalty behaviours, are extremely important to companies, but they have already happened. By the time a customer has defected or chosen an alternative supplier for a related product or service, the opportunities have been missed. That is not to say that customer behaviours should not be monitored. Information such as customer defection rates, average spend and complaints are all extremely useful measures of organisational performance (and will be covered in Section 1.5), but they reflect what has already happened in the past and do not tell you how to improve on that. Providing information on how to improve in the future is the main purpose of customer satisfaction measurement.

KEY POINT
Customer satisfaction is a lead indicator that predicts future customer behaviours.

1.2.2 How satisfaction affects loyalty

Understanding the difference between customers' attitudes and behaviours, and how the relationship between them works, is crucial for managers involved in any aspect of customer management. Whilst it is broadly true to say that satisfied customers will be more loyal than dissatisfied ones, so customer satisfaction must be important, that is almost as simplistic as concluding that customer satisfaction can't be important because some satisfied customers defect. In the real world, there are different levels of customer satisfaction and these can affect companies in widely differing ways.

In the 21st century, virtually all organisations perform sufficiently well to deliver a reasonable level of customer satisfaction; at least in markets where customers have choice and can switch suppliers with relative ease. Few perform badly enough to dissatisfy a significant proportion of their customer base. That may be progress compared with two or three decades ago, but customers' expectations have also risen since then. In most markets suppliers need to do much more than not dissatisfy customers if they want to maximise the benefits of customer satisfaction. As Harvard point out, the zone of indifference just isn't good enough [1].

FIGURE 1.2 Attitudes and behaviours: customer attitudes lead to customer behaviour, which leads to organisational outcomes

Why would customers in the zone of indifference stay with a supplier other than through inertia? Why would they buy an additional product or service or recommend the business? They wouldn't. These days most customers think they can do better than 'OK', 'average' or 'good enough'. To keep customers, suppliers have to deliver such great results that rational people will conclude that it would be difficult to do better elsewhere.

KEY POINT
Satisfaction is the main driver of loyalty, but 'mere satisfaction' is not enough. Customers have to be highly satisfied.

According to Jones and Sasser, most organisations don't understand the extent to which 'very satisfied' is more valuable than 'satisfied' [7]. Some managers with a poor understanding of the satisfaction-loyalty relationship have expressed surprise when they have discovered that satisfied customers are not always loyal – using it as evidence that investing in good customer service is pointless. Perhaps if they had monitored the percentage of their customers that were in the 'zone of indifference' they would have been less surprised. Building on the Harvard work of Heskett, Schlesinger, Sasser, Jones [1,7] and others, Keiningham and Vavra [13] coined the phrase 'mere satisfaction' to emphasise the extent to which merely satisfying customers isn't enough for today's demanding consumers. To realise the full benefits of customer satisfaction, managers must understand the difference between making more customers satisfied and making customers more satisfied. This remains a widespread problem, as evidenced by the frequent use of verbal rating scales and simple single-question headline measures of overall satisfaction (see Chapters 8 and 11). In reality, there is no universally applicable curve that reflects the relationship between customer satisfaction and loyalty. Figure 1.3 merely illustrates the concept. In Chapter 14 we will explain how a company can identify its own satisfaction-loyalty curve in order to make the best decisions about how to manage customers for optimum loyalty.

FIGURE 1.3 The satisfaction-loyalty relationship: loyalty (20% to 100%) plotted against satisfaction (1 to 10), rising through the zone of defection ('saboteurs'), the zone of indifference and the zone of affection ('apostles')

1.2.3 Satisfaction is the main driver of loyalty

So whilst it is true that satisfaction is not an end in itself and that 'merely satisfied' customers do defect, it is also true that customer satisfaction is the main driver of the real goal of customer loyalty. In their excellent article "Why Satisfied Customers Defect" [7], Harvard's Jones and Sasser point out the obvious answer. Satisfied customers defect because they're simply not satisfied enough. Now that we fully understand the non-linear nature of the relationship between customer satisfaction and loyalty, it is clear that to ensure loyalty, most companies will have to make their customers highly satisfied, not 'merely satisfied'.

Many studies in the 1990s concluded that customer satisfaction was a primary determinant of loyalty, including those by Rust and Zahorik [14], Rust, Zahorik and Keiningham [15] and Zeithaml, Berry and Parasuraman [16]. White and Schneider [17] found that customers with better perceptions of service quality were more likely to remain customers and to tell other people about their experiences. In The Value-Profit Chain, Harvard's Heskett et al state that the lifetime value of the most satisfied customers is 138 times greater than that of the least satisfied [18].

However, the idea that customer satisfaction affects companies' financial performance only through customer loyalty under-values the importance of customer satisfaction. Johnson and Gustafsson point out that customer satisfaction has direct effects on profit, including lower costs, since dissatisfied customers are much more likely to consume organisational resources through handling complaints, resolving problems and asking for help. Based on the vast database of the American Customer Satisfaction Index [19], the University of Michigan's Fornell et al challenge the view that customer satisfaction is less important than loyalty, since it is satisfaction measures rather than loyalty data that enable organisations to take action to improve their relationship with customers. "The risk is that companies begin to focus too much on managing loyalty per se rather than building profitable loyalty through customer satisfaction." [20] It is actionability that we now turn to.

1.2.4 Taking action

To maintain the high levels of customer satisfaction needed to keep customers loyal, companies must continuously improve the service they deliver. Moreover, they must focus their improvement efforts in the right areas. To make customers highly satisfied, organisations have to do best what matters most to customers. It's no use being good at things that aren't important to customers [21].


As we will explain in this book, the whole essence of CSM (customer satisfaction measurement) is about identifying the extent to which an organisation is doing best what matters most to customers (exceeding, meeting or failing to meet their requirements) and pinpointing the best opportunities for improving that performance. A good customer satisfaction survey is therefore based on customers' most important requirements so that it can provide specific, actionable information on where the organisation is falling short in customers' eyes and where it would achieve the best returns from investing in actions or changes to improve customer satisfaction. Chapters 12-15 explain how to produce actionable outcomes from a CSM survey.

Some organisations monitor measures that are simply not actionable. In his Harvard Business Review article 'The One Number You Need to Grow' [22], Reichheld maintained that since his tests showed propensity to recommend to be the single question that had the strongest statistical relationship to future company performance, there was no point asking any other questions in customer surveys. This led to his concept of the 'net promoter' score (achieved by subtracting the percentage of respondents who would not be willing to recommend from those who would be willing) as the only survey measure that organisations need to monitor. We would agree that, of the range of loyalty questions that can be asked, recommendation is usually the closest proxy for loyalty for most (but not all) organisations. However, apart from the fact that a single-item question is much less reliable and more volatile than a composite index (see Chapter 11), what actual use is a net promoter score for decision making? Customer research is not just about knowing a score or a trend, it's about understanding, so that managers can make the right decisions. If the headline measure (whatever it is) goes down or fails to meet the target, managers have to know what to action or change to improve it. Providing that information is the fundamental purpose of CSM.
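To make the calculation concrete, here is a minimal sketch of a net-promoter-style score as the text describes it: the percentage of respondents willing to recommend minus the percentage not willing. The sample responses are invented for illustration, and real net promoter scoring typically classifies answers on a 0-10 scale into promoters, passives and detractors, which this simplified version does not attempt.

```python
# Simplified net-promoter-style calculation, following the description above:
# % willing to recommend minus % not willing to recommend.

from typing import Sequence

def net_promoter(responses: Sequence[str]) -> float:
    """responses: 'yes' (would recommend), 'no' (would not), or 'unsure'."""
    total = len(responses)
    pct_yes = 100 * sum(r == "yes" for r in responses) / total
    pct_no = 100 * sum(r == "no" for r in responses) / total
    return pct_yes - pct_no

# Invented example: 60% yes, 25% no, 15% unsure gives a score of 35.
sample = ["yes"] * 60 + ["no"] * 25 + ["unsure"] * 15
print(net_promoter(sample))  # 35.0
```

Whatever the exact formula, the point made in the text stands: the resulting single number says nothing about what to change in order to move it.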

KEY POINT
The main purpose of measuring customer satisfaction is to make decisions on how to improve it. Actionable information on how to make customers more satisfied is therefore a crucial outcome.

1.3 Improving satisfaction and loyalty is difficult!

Improving customer satisfaction is not difficult. It's not very difficult. It's extremely difficult.

In reality, few managers would claim that it's easy, but organisations' behaviour demonstrates that they don't fully appreciate the difficulty or importance of the task. Yes, they want to improve customer satisfaction, but they also want to minimise costs.


Few get this balance right. Responsibility for customer satisfaction is often vested in just one of the organisation's departments, often called Customer Service. In some businesses its head isn't even a main board member and, due to many organisations' predominant focus on controlling or reducing costs, 'quick wins' to improve customer satisfaction become highly attractive, if not the only option, for the 'head of customer service'. So desperate are many managers to make a difference at no, or virtually no, cost that they become real suckers for the latest quick-fix hype that they've heard at a conference or read in a book.

1.3.1 My daughter's ruined the policy document

One of the authors recently attended a conference where the keynote speaker waxed lyrical about the imperative of touching customers' emotions and related the following anecdote to illustrate how organisations could attain this great prize at very modest cost.

A customer of a UK insurance company, he said, telephoned the call centre asking for a replacement policy document as her daughter had scribbled all over the original. It transpires that this company gave each of its call centre operatives a £25 budget to use any way they wished to improve customer satisfaction. The operative involved used some of this budget to enclose a pack of crayons and a colouring pad for the child with the replacement policy document. A nice touch. The customer was no doubt very pleased. It may or may not have influenced the customer's loyalty behaviour at renewal time. But even if it did, how much difference is this kind of approach going to make to the ability of a large insurance company with millions of customers to achieve the financial benefits of maximising customer satisfaction and loyalty? According to Barwise and Meehan [21], not much. They maintain that: "Branding and emotional values are great if you are already providing an excellent functional product or service. Outside the box strategy is terrific – when it works. But because even some of the best organizations are performing badly on the basics, we recommend that they start inside the box, ensuring that they reliably meet customers' reasonable expectations on the product or service itself. Once the basics are securely in place, the organization has a solid platform for great emotional branding and for more radical innovation."

It's not the £25 budget or the crayons and colouring pad that are the problem. It's the fact that many organisations place considerable emphasis and hope on strategies of this ilk, which, at best, can make only a very marginal difference to the satisfaction and loyalty of the total customer base if the organisation is not consistently meeting customers' basic requirements. To quote Barwise and Meehan again, organisations "must focus on what matters most to customers, usually the generic category benefits that all competing brands provide, more or less, and not unique brand differentiators... Everything hinges on giving customers what matters most to them, even if that proposition seems less exciting than focusing on novelty, uniqueness or the latest management or technology fad." They illustrate their view with the contrasting fortunes of two of the big players in the UK mobile telephony market.

KEY POINT
Customer satisfaction is not improved by low-cost gimmicks and quick fixes. It takes real investment in the basic essentials of meeting customers' most important requirements.

1.3.2 Doing best what matters most to customers

Having been awarded identical and simultaneous licences, and with access to exactly the same technology, but following completely different strategies, One2One and Orange became the 3rd and 4th companies to enter the UK mobile phone market, in September 1993 and April 1994 respectively. One2One pursued differentiation and a strong customer acquisition strategy, offering free off-peak local calls. This appealed to consumers, differentiating One2One from the business-focused strategies of the incumbents, Vodafone and Cellnet, and enabled it to acquire twice as many customers as Orange in its first six months of operation.

Orange focused on getting the basics right. It was well known in the industry that customers were dissatisfied with the frustrations of mobile telephony: frequent call terminations, inability to get through due to lack of capacity and coverage, the perceived unfairness of the operators, onerous contracts, and extortionate pricing strategies such as full-minute billing. Orange simply addressed these drivers of dissatisfaction, offering per-second and itemised billing and investing in network reliability.

Meanwhile, One2One had attracted large numbers of price-sensitive customers who clogged its limited network capacity with their free off-peak calling and became frustrated with its poor service. By the end of 1996 there was telling evidence of who was doing best what mattered most to customers. A Consumers' Association survey [23] found that whilst 14% of Orange customers reported that they could not always connect with the network, nearly four times as many One2One customers couldn't always connect; a figure that was double the industry average. The survey also showed Orange's customers to be far more loyal than those of the three other suppliers. Moreover, at £442 Orange had already achieved the industry's top per-customer revenue figure. One2One was over £100 behind at £341. Orange was also demonstrating that satisfied customers will pay more. By this time it was around 5% more expensive than Vodafone and Cellnet and its prices were a massive 30% higher than those of One2One.

Conventional strategy would have dictated that a late entrant into a commodity market needed a USP, a 'silver bullet' [21], like One2One's free off-peak calls, to stand any chance of success. Instead, by focusing on getting the basics right, Orange acquired customers at a slower rate, but kept them longer and made more profit out of each one, and in doing so delivered three times the shareholder value achieved by One2One. In August 1999 Deutsche Telekom bought One2One for £6.9 billion. Two months later Mannesmann acquired Orange for £20 billion.

1.4 Surveys don't work

Over the years we have met quite a few managers at conferences and similar events who have lost faith in their customer satisfaction surveys. Many of them work for large organisations that have been monitoring customer satisfaction data for many years but claim that whatever they do, they don't seem to be able to improve customer satisfaction; their headline measure typically fluctuates within a fairly narrow range but shows no upward trend. Why is this happening? Is the real problem that they can't improve customer satisfaction, or that their customer satisfaction surveys simply don't show it? There is plenty of evidence that it's the latter. In "The One Number You Need to Grow" [22], Reichheld has this to say about customer satisfaction surveys: "Most customer satisfaction surveys aren't very useful. They tend to be long and complicated, yielding low response rates and ambiguous implications that are difficult for operating managers to act on." Based on research conducted at the University of Texas [24], Griffin [6] makes very similar statements, saying that customer satisfaction measures suffer from a number of problems that tend to inflate the score, such as positively biased questions and flaws in self-completion surveys. This is rather like reporting to shareholders that the company is struggling to make a profit but it's because the accounts produced by the finance department aren't very accurate!

Professor Myers [25] from the Drucker School, Claremont Graduate University, has expressed serious concern about the methodologies used by many organisations to measure customer satisfaction, "from overly sophisticated experiments by academics to overly simplistic surveys conducted by many market research firms." Many organisations even fail to ask the right questions in their customer satisfaction surveys, making it extremely unlikely that they will produce information that will help them to improve satisfaction and loyalty. We will address this problem in Chapters 4 and 5. Failing to understand the difference between customer satisfaction and other forms of market research, some organisations use scales that are not sufficiently sensitive to detect the relatively small changes in customer satisfaction that typically occur. In Chapter 8 we explain how to develop a CSM process that will make it possible to 'move the needle'.

KEY POINT
Many organisations monitor flawed measures that don't reflect how satisfied or dissatisfied customers feel and are of no value for improving customer satisfaction.


When we question the people who tell us their organisation can't improve its customer satisfaction scores, we almost invariably discover serious problems in their CSM methodology. As we pointed out in the previous section, improving customer satisfaction and loyalty is difficult enough without attempting to achieve it with the handicap of misleading information generated by flawed surveys.

1.5 Customer surveys are not the only way of monitoring customer satisfaction

Surely there are many other ways of monitoring how successfully an organisation is meeting its customers' requirements that are easier and less costly than conducting customer satisfaction surveys, and often can be done with information the organisation already possesses. Analysing complaints is a good example. Other possibilities include analysing customer defections, feedback from employees or simply monitoring whether sales are increasing. Internal metrics such as speed of solution, percentage of deliveries on time or speed of answering the telephone can provide accurate information on service quality at little cost. Mystery shopping can also generate detailed information on the customer experience.

1.5.1 Incomplete measures

Customers' feelings about their total experience with an organisation form the attitudes that drive their future behaviours. Consequently, companies cannot manage this process without a complete understanding of these feelings and attitudes. Consulting customers is the only way of producing this level of understanding.

All alternative measures are incomplete. Internal metrics can provide accurate and useful information on the hard factors but not the soft ones, such as how friendly and helpful the staff are. The way an organisation handles problems is an important part of the customer experience, but again only part of it, so analysing complaints doesn't come close to an understanding of customer satisfaction. Nor do exit interviews with lapsed customers, who may give views on their entire customer experience, but form only a small part of the customer base, and have levels of satisfaction that are not representative of customers generally.

1.5.2 Lagging measures

Gathering feedback from lost customers highlights another disadvantage. It's too late. Whilst a thorough exit interview process may recover a few defecting customers, the unsatisfactory aspects of their customer experience that led to their behaviour happened in the past. Organisations need much earlier feedback on areas of customer dissatisfaction in order to address the problems before they drive customers away. Equally, rising or falling sales are very good indicators of customers' loyalty behaviours, but not of the attitudes that caused those behaviours. A good customer satisfaction measurement process provides current information on whether the organisation is succeeding or failing to make customers more satisfied with their experience. If the latter, it provides a lead indicator of problems that lie ahead for the business in time to address them.

1.5.3 Performance measures

Even on the hard issues, internal metrics provide only half the picture. As Tom Peters pointed out over 20 years ago – perception is reality [26]. Even if customers do form mistaken perceptions about completely factual aspects of a supplier's performance, these are the attitudes on which they are basing their loyalty and supplier selection decisions. If companies want to manage their future stream of revenues from customers, they need to be inside the customers' heads, understanding how they see their customer experience and how it is leading them to form attitudes about the organisation that will drive their future behaviours. Feedback from staff, as well as being incomplete and often biased, can only provide information on how the supplier believes it has performed with customers. Since many customers don't voice complaints or compliments, employees can never fully understand how customers feel.

1.5.4 Mystery shopping

Some organisations view mystery shoppers as customer substitutes. True, they have to go through a typical customer journey. If they're mystery shopping a hotel, they will stay overnight, eat dinner and breakfast and use any other facilities such as a health club. But are they the same as real customers? Of course they're not. Professional mystery shoppers are exactly that. They are trained to observe and record many detailed aspects of the service delivery process and consequently provide highly detailed information that is very useful to operational managers. Examples might include whether the hotel receptionist was wearing a name badge, addressed the customer by name and provided clear directions to the room. They can record waiting times at check-in and check-out as well as in the restaurant. They can also make judgements on levels of cleanliness or staff friendliness and helpfulness. Technology even permits surreptitious video recording of staff, though companies need to think carefully about the implications of this for organisational culture and values [27]. So mystery shopping provides many practical benefits for operational managers for use in staff training, evaluation and recognition, but can't provide understanding of how customers feel about the customer experience and the attitudes they are forming about the company.

Since mystery shoppers' profession is to make observations on companies' customer service performance, they cease to be normal customers, becoming highly aware and often much more critical than typical customers [25]. Whilst this is good for their role, it doesn't provide an accurate reflection of how normal customers feel [28]. Morrison et al reported other inconsistencies with mystery shopping, such as males and older people producing less accurate reports than females or younger ones [29].

KEY POINT
Mystery shoppers are not the same as real customers. Reliable information about customers' attitudes and their likely future behaviour will be generated only from consulting the customers themselves.

Smile school

In their book "Loyalty Myths", Keiningham et al use the experience of Safeway in America to illustrate the dangers of mystery shopping [27]. They explain how Safeway based its strategy in the 1990s on delivering superior customer service and invested in an extensive mystery shopping programme to monitor employees' performance in delivering it. Employees were expected to do things like thank customers by name, offer to carry their groceries to the car, smile and make eye contact: all very desirable customer service behaviours which should lead to customer satisfaction. And they did. Throughout the 1990s Safeway's customer satisfaction levels and financial returns were very high. However, in stark contrast to the teachings of the Service-Profit Chain [1], customer satisfaction and employee satisfaction were moving in opposite directions. This was because employees who failed to achieve a target mystery shopping score were sent for remedial training (called Smile School by the employees), and could be dismissed if their performance failed to improve. Moreover, female employees' concern that the smiling and eye contact could send the wrong signals to some male shoppers was confirmed by an increase in the number of sexual harassment incidents committed by customers. This led to a number of charges filed against Safeway by the employees' union and some individual female employees. In the end, the Service-Profit Chain wasn't wrong. Poor employee morale adversely affected customer satisfaction and Safeway's financial performance. According to the ACSI [19], Safeway's customer satisfaction levels rose substantially from 70% to a high of 78% by 2000 as a result of its focus on customer service. However, as problems with employees intensified, the customer satisfaction gains were virtually all lost, Safeway's score falling back to 71% by 2003.

In the European Union there are restrictions on the use of mystery shopping that prevent it being used for disciplinary purposes against individual employees. It is increasingly recognised by good employers that mystery shopping is best used for factual rather than judgemental aspects of service and to provide positive feedback and recognition to employees. Good companies also understand that it provides operational information rather than a reliable measure of how satisfied or dissatisfied customers feel.


1.6 Surveys reduce customer satisfaction and loyalty

It has been claimed that consulting customers to find out how satisfied they are with their customer experience and to gather feedback on improvements they would like to see actually offends customers and reduces their satisfaction and loyalty [30]. The argument is that since many people have busy lives, a survey is seen as such an inconvenient and unwelcome intrusion that it has a negative effect on respondents' attitudes and behaviours.

In fact, academic tests prove the opposite to be true. Paul Dholakia from Houston's Rice University and Vicki Morwitz at New York University's Stern School of Business were interested in the many research studies that had shown that surveys had a tendency to increase customers' loyalty [31] and their propensity to buy a company's product [32,33,34], but felt that the studies were too restricted, focusing on short-term attitude change or one-off behaviour like a single purchase [35,36,37]. They determined to understand whether surveys had a more permanent effect on customers' attitudes and behaviour. To do so, they undertook a field experiment with over 2,000 customers of an American financial services company. One randomly selected group of 945 customers took part in a 10-minute customer satisfaction survey by telephone. The remaining 1,064 customers were not surveyed and acted as the control group. A year later the subsequent behaviour of all the customers in the sampling frame was reviewed, demonstrating unequivocally that customer satisfaction surveys make customers more loyal [38,39]. According to Dholakia and Morwitz's conclusions:

The customers who took part in the customer satisfaction survey were much more loyal. They were:

More than three times as likely to have opened new accounts.
Less than half as likely to have defected.
More profitable than the control group.
Even 12 months later, people who had taken part in a ten-minute customer satisfaction interview were still opening new accounts at a faster rate and defecting less than customers in the control group.
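A comparison of this kind is straightforward to reproduce once the behaviour of both groups has been tracked. The sketch below uses the study's group sizes but entirely invented counts of new accounts and defections, purely to show how "three times as likely" and "half as likely" ratios would be derived; it is not Dholakia and Morwitz's data.

```python
# Illustrative comparison of a surveyed group against a control group one year
# after a satisfaction survey. The event counts are invented for illustration.

def rate(events: int, group_size: int) -> float:
    return events / group_size

surveyed_size, control_size = 945, 1064      # group sizes quoted in the text
surveyed_new, control_new = 170, 60          # invented: new accounts opened
surveyed_lost, control_lost = 30, 75         # invented: customers who defected

new_ratio = rate(surveyed_new, surveyed_size) / rate(control_new, control_size)
defect_ratio = rate(surveyed_lost, surveyed_size) / rate(control_lost, control_size)

print(f"New-account rate ratio (surveyed vs control): {new_ratio:.1f}x")
print(f"Defection rate ratio (surveyed vs control):   {defect_ratio:.2f}x")
```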

Customers like to be consulted. The authors conclude that customers value the opportunity to provide feedback, positive or negative, on the organisation's ability to meet their requirements.

Surveys can also heighten respondents' awareness of a company's products, services or other benefits, thus also influencing their future behaviour.

KEY POINT
Conducting customer satisfaction surveys has a very positive effect on the organisation's reputation in the eyes of participants.


Conclusions

1. Customer satisfaction is simply a convenient phrase to describe the attitudes and feelings that customers hold about an organisation.

2. It is an irrelevance to consider the relative merits of satisfaction and loyalty. They are different links in a chain of cause and effect – satisfaction attitudes driving loyalty behaviours. Both must therefore be monitored and managed to achieve organisational success.

3. Since attitudes precede behaviours, customer satisfaction is a lead indicator of future organisational performance. Loyalty behaviours are extremely important but are lagging measures.

4. It is true that satisfied customers often defect in some markets. That's because they're not satisfied enough.

5. To reap the full benefits of customer loyalty, companies need to make customers highly satisfied. The zone of indifference, or 'mere satisfaction', is not good enough. This highlights the importance of understanding the difference between making more customers satisfied and making customers more satisfied.

6. Even though the relationship between satisfaction and loyalty is not linear, it is widely recognised that satisfaction is the main driver of loyalty.

7. Since customers' loyalty behaviours are driven by their attitudes (primarily satisfaction levels), loyalty must be managed through satisfaction rather than directly, emphasising the importance of producing actionable outcomes from customer satisfaction surveys.

8. Many organisations have failed to use the information generated by customer surveys to improve satisfaction. This is not because customer satisfaction surveys don't work but because many are based on flawed methodologies.

9. Even with accurate and actionable information from surveys, it is extremely difficult to improve customer satisfaction. Many organisations attempt to achieve it on the cheap, forcing the managers responsible to opt for faddish quick wins rather than the long game of getting the basics right and doing best what matters most to customers.

10. Some also attempt to monitor it using misleading substitute measures such as internal performance metrics, complaints analysis or mystery shopping.

11. In the light of conclusions 8, 9 and 10 together, it's not surprising that most companies do not achieve sufficiently high levels of customer satisfaction and loyalty to derive the full financial benefits.

12. Organisations that conduct professional customer satisfaction surveys can expect their CSM process to have a positive impact on customers' views of the company.


References

1. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
2. McGovern, Court, Quelch and Crawford (2004) "Bringing Customers into the Boardroom", Harvard Business Review, November
3. Reichheld, Markey and Hopton (2000) "The Loyalty Effect – the relationship between loyalty and profits", European Business Journal 12(3)
4. Bhote, Keki R (1996) "Beyond Customer Satisfaction to Customer Loyalty: The Key to Greater Profitability", American Marketing Association
5. Gitomer, Jeffrey (1998) "Customer Satisfaction is Worthless, Customer Loyalty is Priceless", Bard Press
6. Griffin, Jill (2002) "Customer Loyalty: How to Earn it, How to Keep it", Jossey-Bass, San Francisco
7. Jones and Sasser (1995) "Why Satisfied Customers Defect", Harvard Business Review 73 (November-December)
8. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
9. Stum and Thiry (1991) "Building Customer Loyalty", Training and Development Journal (April)
10. Reichheld, Frederick (1993) "Loyalty-Based Management", Harvard Business Review 71 (March-April)
11. Stewart, Mark (1996) "Keep the Right Customers", McGraw-Hill, London
12. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
13. Keiningham and Vavra (2001) "The Customer Delight Principle", McGraw-Hill, Chicago
14. Rust and Zahorik (1993) "Customer satisfaction, customer retention and market share", Journal of Retailing 69(2)
15. Rust, Zahorik and Keiningham (1994) "Return on Quality (ROQ): Making service quality financially accountable", Marketing Science Institute, Cambridge, Massachusetts
16. Zeithaml, Berry and Parasuraman (1996) "The behavioral consequences of service quality", Journal of Marketing 60
17. White and Schneider (2000) "Climbing the Commitment Ladder: The role of expectations disconfirmation on customers' behavioral intentions", Journal of Service Research 2(3)
18. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York
19. The American Customer Satisfaction Index, www.theacsi.org
20. Fornell, Claes et al (2005) "The American Customer Satisfaction Index at Ten Years: Implications for the Economy", Stephen M Ross School of Business, University of Michigan
21. Barwise and Meehan (2004) "Simply Better: Winning and keeping customers by delivering what matters most", Harvard Business School Press, Boston
22. Reichheld, Frederick (2003) "The One Number you Need to Grow", Harvard Business Review 81 (December)
23. Which? Online (1996) "Mobile Phone", Consumers' Association (December)
24. Peterson and Wilson (1992) "Measuring Customer Satisfaction: Fact and Artifact", Journal of the Academy of Marketing Science (Winter)
25. Myers, James H (1999) "Measuring Customer Satisfaction: Hot buttons and other measurement issues", American Marketing Association, Chicago, Illinois
26. Peters and Austin (1986) "A Passion for Excellence", William Collins, Glasgow
27. Keiningham, Vavra, Aksoy and Wallard (2005) "Loyalty Myths", John Wiley and Sons, Hoboken, New Jersey
28. Szwarc, Paul (2005) "Researching Customer Satisfaction and Loyalty", Kogan Page, London
29. Morrison, Colman and Preston (1997) "Mystery customer research: cognitive processes affecting accuracy", Journal of the Market Research Society 46(4)
30. Snaith, Tim (2006) "Why customer research is undermining customer loyalty", Customer Management 14(6)
31. Reinartz and Kumar (2000) "On the Profitability of Long-Life Customers in a Non-contractual Setting: An Empirical Investigation and Implications for Marketing", Journal of Marketing 64
32. Morwitz, Johnson and Schmittlein (1993) "Does Measuring Intent Change Behavior?", Journal of Consumer Research 20 (June)
33. Fitzsimons and Morwitz (1996) "The Effect of Measuring Intent on Brand-Level Purchase Behavior", Journal of Consumer Research 23 (June)
34. Fitzsimons and Williams (2000) "Asking Questions Can Change Choice Behavior: Does it do so Automatically or Effortfully?", Journal of Experimental Psychology: Applied 6(3)
35. Spangenberg and Greenwald (1999) "Social Influence by Requesting Self-Prophecy", Journal of Consumer Psychology 39 (August)
36. Morwitz and Fitzsimons (2000) "The Mere-Measurement Effect: Why Does Measuring Purchase Intentions Change Actual Purchase Behavior?", Working Paper, New York University, New York
37. Fitzsimons and Shiv (2001) "Nonconscious and Contaminative Effects of Hypothetical Questions on Subsequent Decision Making", Journal of Consumer Research 28 (September)
38. Dholakia and Morwitz (2002) "How Surveys Influence Customers", Harvard Business Review 80(5)
39. Dholakia and Morwitz (2002) "The scope and persistence of mere-measurement effects: Evidence from a field study of customer satisfaction measurement", Journal of Consumer Research 29(2)


CHAPTER TWO

The benefits of customer satisfaction

Customer satisfaction isn't a new concept. Just the opposite – it's at least 200 years old. As long ago as the 18th century, Adam Smith clarified the fundamental premise on which free markets operate [1]. He maintained that since human beings continually strive to maximise their utility (get the greatest benefit for the least cost), they migrate gradually but inexorably to the suppliers that come closest to delivering it. In other words, they search out and stay with companies that do best what matters most to customers. Customer satisfaction is the phrase commonly used to encapsulate this phenomenon. It means that suppliers make more profit as customers become better off.

230 years later, this win-win equation still fuels most markets worldwide. It's based on the almost irresistible forces of people getting what they want. People running companies want maximum profits. Their customers want maximum 'utility' – the greatest possible gratification at the least cost. Unsurprisingly, the more gratifying the customer experience is, the more likely they are to repeat it, and vice-versa. This is demonstrated at macro level by 12 years of ACSI (American Customer Satisfaction Index) data showing that in the USA, changes in customer satisfaction have accounted for more of the variation in future spending growth than have any other factors, including income or consumer confidence [2]. In other words, if American consumers are more satisfied generally by the things the American economy is delivering to them (and by the way in which they are delivered), their rate of spending increases. If their satisfaction goes down, so does their spending and the country's economic growth.

At a glance
This chapter explains why customer satisfaction matters and will cover:

a) How customer satisfaction translates into profits through Customer Lifetime Value.
b) The close relationship between customer satisfaction and employee satisfaction.
c) How customer satisfaction affects returns to shareholders.
d) The macro-economic implications of customer satisfaction.
e) The arguments for customer satisfaction in the public and not-for-profit sectors.


2.1 Benefits for companies

It is now widely accepted that whilst the ultimate goal of a private sector company may be to deliver profits to shareholders, it will be achieved through delivering results to customers [3]. This is based on the fundamental psychological principle that people will want more of the experiences that give them pleasure whilst avoiding the unpleasing or dissonant experiences [4]. It explains why it is more profitable to keep existing customers than to win new ones – five times more profitable on average, according to figures released by the American Department of Consumer Affairs as long ago as 1986. This section outlines some of the commonly recognised reasons why customer satisfaction matters for private sector companies. Much of the data quoted is from America, simply because it has much more published data than other countries on the financial outcomes of customer satisfaction. Since the relationships described are economic rather than attitudinal or cultural, they are applicable to all developed economies.

KEY POINT
The profitability of customers increases the longer you keep them.

2.1.1 Customer Lifetime ValueCustomer retention is more profitable than customer acquisition because the value ofcustomers typically increases over time5,6,7,8. Shown in Figure 2.1, this is due to thefollowing factors:Acquisition – the cost of acquiring customers occurs almost exclusively in their firstyear with the company (i.e. before and as they become customers).Base profit – is constant, but often will not begin to offset acquisition costs until thesecond year or later.Revenue growth – as customers stay, and provided they are satisfied, they tend tobuy more of a company’s products/services as their awareness of the productportfolio grows.Cost savings – long term customers cost less to service, since they are more familiarwith the organisation’s procedures and more likely to get what they expect.Referrals – highly satisfied customers will recommend companies to their friends.Referral customers eliminate most of the cost of acquisition, and they also tend to bebetter customers because they are like existing customers.Price premium – long-term customers who are very satisfied will also be prepared topay a price premium since they trust the supplier to provide a product/service that isgood value for them.
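
The way these components stack up can be illustrated with a short sketch. All of the figures in it are invented for illustration – they are not data from this book – but the shape is the one shown in Figure 2.1: a loss in year 0 followed by annual profits that grow the longer the customer is retained.

```python
# Illustrative sketch of customer lifetime value build-up.
# All figures are invented assumptions, not data from the text.
ACQUISITION_COST = 60          # incurred once, in year 0
BASE_PROFIT = 25               # constant annual profit from the core purchase
REVENUE_GROWTH_PER_YEAR = 6    # extra profit as the customer buys more lines
COST_SAVING_PER_YEAR = 3       # cheaper to serve as familiarity grows
REFERRAL_VALUE_PER_YEAR = 5    # profit from customers they recommend
PRICE_PREMIUM_PER_YEAR = 4     # willingness to pay a little more over time

def annual_profit(year: int) -> float:
    """Profit contributed by one retained customer in a given year."""
    profit = BASE_PROFIT
    if year == 0:
        profit -= ACQUISITION_COST             # year 0 is usually a net loss
    else:
        profit += year * REVENUE_GROWTH_PER_YEAR
        profit += year * COST_SAVING_PER_YEAR
        profit += REFERRAL_VALUE_PER_YEAR
        profit += PRICE_PREMIUM_PER_YEAR
    return profit

cumulative = 0.0
for year in range(8):                          # Year 0 .. Year 7, as in Figure 2.1
    cumulative += annual_profit(year)
    print(f"Year {year}: annual {annual_profit(year):>6.1f}, cumulative {cumulative:>7.1f}")
```

With these invented numbers the cumulative figure turns positive during year 1 and accelerates thereafter, which is the arithmetic behind the claim that retention is more profitable than acquisition.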

Harvard summarised these behaviours as the 3Rs (retention, related sales and referrals) and, based on 30 years of research, concluded that ‘loyal’ customer behaviours explain differences in companies’ financial performance more than any other factor. Harvard and others have also pointed out that customer satisfaction is the main driver of customer loyalty3,9,10.


2.1.2 Links with employee satisfaction
The link between customer satisfaction and employee satisfaction has been recognised by the work of Harvard and others – Harvard labelling it “the customer-employee satisfaction mirror”. They have demonstrated not only that employee satisfaction typically produces higher levels of customer satisfaction (since more satisfied employees are more highly motivated to give good service), but also that higher customer satisfaction produces higher employee satisfaction, since employees prefer working for companies that have high levels of customer satisfaction and low levels of problems and complaints. More satisfied employees stay longer, keeping valuable expertise and customer relationships within the organisation. Conversely, high staff turnover has a negative effect on customer satisfaction3,11.

This was fully reflected in the Safeway example quoted in Chapter 1. At the same time Safeway’s rival Kroger also had problems with employee satisfaction, resulting in a similar fall in its customer satisfaction index to Safeway’s level of 71%16.

2.1.3 Sales and profit
Some companies have built fully validated models that precisely quantify the relationship between employee satisfaction, customer satisfaction and financial performance. These include the Canadian Imperial Bank of Commerce (CIBC), who built a service-profit chain model demonstrating that each 2% increase in customer loyalty would generate an additional 2% in net profit. They also quantified the causal links in the chain back from customer loyalty to customer satisfaction and to employee satisfaction. For example, they found that to produce an additional 2% gain in customer loyalty an improvement of 5% in employee satisfaction was required12. An example from retailing is Sears Roebuck who, using a similar profit chain modelling approach to that adopted by CIBC, demonstrated that a 5% gain in employee satisfaction drives a 1% gain in customer satisfaction which, in turn, leads to an additional 0.5% increase in profit13.

FIGURE 2.1 Customer value increases over time5
[Chart: the annual profit from a retained customer over Years 0 to 7, built up from acquisition cost (a Year 0 loss), base profit, revenue growth, cost savings, referrals and price premium.]

Aggregate data from the American Customer Satisfaction Index have also demonstrated a very strong link between customers’ satisfaction with individual companies and their propensity to spend more with them in future. In fact every 1% increase in customer satisfaction is associated with a 7% increase in operational cash flows, and the time lag is as short as three months, although this does vary by sector14.
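
Taken at face value, the ratios quoted above can be chained together as simple arithmetic. The sketch below is only that – it assumes the quoted relationships are linear and applies them to a hypothetical employee satisfaction gain – but it shows how a service-profit chain model strings the links together.

```python
# Rough arithmetic sketch of the profit-chain ratios quoted in the text
# (Sears: 5% employee sat -> 1% customer sat -> 0.5% profit; ACSI: 1%
# customer sat -> 7% operating cash flow). The starting gain is hypothetical
# and linearity is assumed only for illustration.

def customer_sat_gain(employee_sat_gain_pct: float) -> float:
    """Sears ratio: a 5% employee satisfaction gain ~ a 1% customer satisfaction gain."""
    return employee_sat_gain_pct * (1.0 / 5.0)

def profit_gain(customer_sat_gain_pct: float) -> float:
    """Sears ratio: a 1% customer satisfaction gain ~ a 0.5% profit gain."""
    return customer_sat_gain_pct * 0.5

def cash_flow_gain(customer_sat_gain_pct: float) -> float:
    """ACSI ratio: a 1% customer satisfaction gain ~ a 7% gain in operating cash flow."""
    return customer_sat_gain_pct * 7.0

emp_gain = 5.0                             # hypothetical 5% employee satisfaction gain
cs_gain = customer_sat_gain(emp_gain)      # implied 1% customer satisfaction gain
print(f"Customer satisfaction gain: {cs_gain:.1f}%")
print(f"Implied profit gain (Sears ratio): {profit_gain(cs_gain):.2f}%")
print(f"Implied cash-flow gain (ACSI ratio): {cash_flow_gain(cs_gain):.1f}%")
```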

2.1.4 Shareholder value
The University of Michigan has reported that the top 50% of companies in the ACSI generated an average of $42 billion of shareholder wealth (Market Value Added), as against $23 billion for the bottom 50%15.

Based on the ACSI database, a 1% increase in customer satisfaction drives a 3.8% increase in stock market value. Between 1997 and 2003 (a period that saw huge rises and falls in stocks) share portfolios based on the ACSI out-performed the Dow by 90%, the S&P by 208% and the NASDAQ by 344%. Almost echoing the words of Adam Smith, Professor Fornell says the reason for this is simply that “…our economic system works. It was designed with the idea that sellers should compete for buyers’ satisfaction. Satisfied customers reward companies with, among other things, their repeat business, which has a huge effect on cumulative profits”2.

KEY POINT
Companies with higher customer satisfaction produce better returns for shareholders.

This is consistent with an earlier study based on winners of the Malcolm Baldrige quality awards (the largest single component of which is customer satisfaction). When challenged in a lecture that many Baldrige winners had not been financially successful, quality guru Joseph Juran responded that he was sure that a share portfolio based on Baldrige Award winners would out-perform a general tracker fund. When Business Week decided to test this theory, Juran was proved correct, with the Baldrige-based fund achieving an 89% return against 33% overall for the Standard & Poor’s 500 index17.


Eleven years of ACSI data have also produced many company-specific examples, both positive and negative, of the link between customer satisfaction and shareholder value2. Take two contrasting examples in the computer industry. Since its inclusion in the ACSI in 1997, Dell improved customer satisfaction and revenues, at first by a significant margin. As a growing proportion of PC purchases are for replacement, the customer satisfaction – loyalty link is increasingly important in this sector. This proved unfortunate for Gateway, a direct competitor of Dell, whose large falls in customer satisfaction (despite extensive price cutting) were matched by its poor financial performance. However, in more recent years Dell’s customer satisfaction has also fallen, especially in 2005 when it fell substantially by 5% to 74%. It was now Dell’s turn to see aggressive cost cutting matched by poorer service and a large fall in customer satisfaction; the company eventually admitted its customer service problems. By 2006, Dell’s share price reached a five year low, against a backdrop of substantial increases in stock prices generally over the same period.

One of the biggest declines in customer satisfaction occurred in the telecoms sector – a 26% fall for Qwest Communications between 1995 and 2002. Perhaps Qwest’s shareholders should have been monitoring customer satisfaction. The share price didn’t react until 2000, but since then the company has lost 90% of its market value (and the shareholders most of their investment). In 1994 Hyundai had the lowest customer satisfaction of any car manufacturer, down at 68%, and with a very poor reputation for quality and reliability. Ten years on it had gradually raised customer satisfaction to 81% (and subsequently to 84% by 2006), largely through improvements in product and service quality. The customer satisfaction gains have been fully reflected in Hyundai’s higher sales and stock price.

FIGURE 2.2 Customer satisfaction and shareholder value in the USA
[Chart: Market Value Added (MVA, $ billions) for the top 50% and bottom 50% of ACSI firms, 1994 to 2002, with the top 50% consistently well above the bottom 50%.]


A well known case study from the Harvard profit chain literature3,18 is MBNA. Over a couple of decades, MBNA climbed from the 38th largest to the largest issuer of credit cards in the USA. The rise started in the early 1980s when the company identified that it was barely keeping its customers long enough for them to become profitable5. MBNA’s President, Charles Cawley, responded by basing the company’s future strategy on maximising customer lifetime value through delivering superb customer satisfaction. For 20 years MBNA has measured customer satisfaction daily and contributes cash to a bonus fund every day that its customer satisfaction index is above target. The accumulated bonus is paid to all staff every quarter and typically enables employees to boost their earnings by 20%. Customer satisfaction-related pay is covered in Chapter 15.

2.2 Benefits for the economy
Since there is no comparable information source in the UK, the evidence outlined in this section is drawn from the conclusions of the University of Michigan based on American Customer Satisfaction Index data2. As they point out, “At the macro level, customer satisfaction and household spending are at the hub of a free market. In one way or another, everything else – employment, prices, profits, interest rates, production and economic growth itself – revolve around consumption.” If customers reduce their spending the economy moves into recession. If they increase it, albeit by a very small percentage, the positive effects on economic growth will be significant. As we have seen, customers reward companies that satisfy them and punish those that don’t. This fact is fundamental to the way free markets operate – driving them to deliver as much customer satisfaction as they can in the most efficient way possible. This phenomenon has been strengthened by the growing power of customers, based on their higher levels of education and confidence plus dramatically increased sources of information. This has resulted in the production-led economies of the past turning into today’s customer-driven markets. There is also growing evidence that today’s affluent customer in developed economies has become more interested in quality of life (doing things) than material wealth (owning things)19.

KEY POINT
Customers today are placing more emphasis on experiences than on possessions.

2.2.1 The value of experiences
All the way back to Maslow, studies have shown that once the basic needs of food and shelter are met, extra material wealth does not necessarily lead to greater happiness20,21,22,23,24. Summarising this mountain of research in 1999, Frank25 concluded that “increases in our stocks of material goods produce virtually no measurable gains in our psychological or physical well being. Bigger houses and faster cars, it seems, don’t make us any happier.” Clearly, quality of life is becoming more important than quantity of possessions. Van Boven and Gilovich’s 2003 study19 demonstrated that experiential purchases (doing) brought people more long term satisfaction and happiness than material ones (having). They concluded that experiences are more central to a person’s identity than possessions, they tend to be more favourably viewed as time passes and they have greater social value (in other words they are more interesting to talk about). Although most of the preceding research was conducted in America, similar trends have been identified in the UK and Europe. Future Foundation report that whilst materialistic accumulation remains important to European consumers, they are increasingly “seeking satisfaction from the growing ‘experience economy’.”26 This involves a greater emphasis on hedonism, self-development, holidays and ethical consumption. In a separate study of 1,000 UK adults, almost 50% (and over 50% of baby boomers) chose personal fulfilment as their main priority in life, more than double the number that selected it 20 years ago27. According to Future Foundation: “Our affluent society prioritises personal fulfilment and this culture fuels increasing and more diverse leisure participation.”

Pine and Gilmore have labelled this phenomenon ‘the experience economy’, suggesting that developed countries have evolved not just from manufacturing to service economies but on a stage further28,29. In experience economies, suppliers should focus on providing customers not just with a product or service, but with a satisfying and memorable experience. Driven forward by more literature30,31, there is growing awareness amongst organisations of the importance of the customer experience. This should be seen as the total customer experience, in other words, the sum of all functional and emotional benefits perceived by customers as a result of their experience with a supplier32. Suppliers should therefore consider all the cues that influence the total experience that the customer perceives33 and aim to orchestrate them into a planned and consistent message34.

2.2.2 The role of customer satisfaction
One could therefore say that whilst GDP is a measure of the amount, or quantity, of economic activity (having), customer satisfaction is a measure of its quality (experiencing). If it is true that people seek to repeat high quality, pleasurable experiences but avoid those of low quality, we would expect to see a relationship between these two indicators. Analysts at the University of Michigan have identified “a significant relationship between ACSI changes and subsequent GDP changes, a relationship that operates via consumer spending”2. Whilst it is obvious that the level of consumer spending is based on the amount of money that people have to spend, it is crucial to understand that it is also affected by their willingness to spend it35. Whilst some spending is down to necessity (e.g. the food and shelter necessary for survival), most spending in developed economies is beyond that level and is driven by the anticipated amount of satisfaction that the spending will produce. To quote the University of Michigan again, “The importance of this can hardly be overstated. Since its inception, the data show that ACSI has accounted for more of the variation in future spending growth than any other factor, be it economic (income, wealth) or psychological (consumer confidence).”2

KEY POINT
As a measure of the quality as opposed to the quantity of GDP, customer satisfaction is a key lead indicator of consumers’ willingness to spend and relates strongly to economic growth.

2.3 Benefits in the public and not-for-profit sectors
Most of this discussion has focused on the bottom-line arguments for customer satisfaction, which are taken by many to be more or less self-evident. But profitability, per se, is not a prime consideration in the public or not-for-profit sectors. What then is the argument for satisfying customers in these sectors?

2.3.1 Financial arguments
Although not motivated by profit, organisations in these sectors must be very aware of the cost implications of dissatisfied customers. Dissatisfied customers complain more, soaking up valuable resources in dealing with their complaints5.

It has also been shown that customer satisfaction and employee satisfaction are related (the “mirror effect”)3,11. Organisations with satisfied customers are more likely to have satisfied and engaged employees, which in turn leads to lower turnover and absenteeism, thus lowering the cost of employment.

2.3.2 Reputation
Organisations with more satisfied customers tend to have a better public image and reputation. Such reputation benefits often lag somewhat behind actual performance, so can sometimes seem unfair, but in time they tend to gravitate towards an accurate depiction of an organisation’s ability to satisfy customers. Ultimately the aim for many organisations in these sectors is to establish trust with the public in general. A good reputation built on a solid basis of high levels of customer satisfaction is key to establishing that trust.

2.3.3 Culture
Similar benefits accrue internally for organisations that are good at satisfying their customers. As well as having more satisfied employees, organisations with satisfied customers tend to have better morale, and employees are more likely to feel pride in their place of work. It becomes easier both to recruit and retain good staff under these circumstances.


2.3.4 For the public benefit
Finally, and perhaps most compellingly for the public sector, customer satisfaction is the ultimate arbiter of the success of public organisations. Such organisations exist to serve the public, rather than shareholders or owners, and as such their success should be judged by their ability to deliver what the public wants. This has been the policy of successive governments in the UK for many years now, although they have failed, so far, to implement an accurate and consistent CSM system to monitor their success.

2.4 Conclusions
It’s now over 20 years since the American Department of Consumer Affairs informed the world that keeping existing customers is far more profitable than winning new ones. This is because the profitability of customers grows over time – as long as their requirements are met or exceeded. In summary, customer satisfaction pays because:
1. Satisfied customers gradually buy a wider range of a supplier’s products or services.
2. They become less price sensitive.
3. They cost less to service.
4. They recommend the supplier more, and evidence shows that referred customers tend to be much more loyal than those acquired through sales and marketing activities.
5. Every customer that a company keeps rather than loses, and every customer that it gains through recommendation, reduces the need for the heavy investment required to win new customers.
6. Some companies, like the Canadian Imperial Bank of Commerce, have calculated the precise financial value of each 1% gain in customer satisfaction. This type of ‘profit chain modelling’ requires extensive information and extremely complex statistical modelling but has considerable value for investment decisions, especially if the profit chain is traced back to employee satisfaction.
7. Without realising it, many companies are still falling into the trap of failing to keep their customers long enough to reap anything like the full reward of the 3Rs. To rectify this problem companies need to follow MBNA’s example and develop an accurate understanding of their current customer lifetime value before developing and implementing a strategy to increase it.
8. At the macro level, organisations like Harvard and Michigan Business Schools (supported by huge databases such as the ACSI) have published copious evidence that companies with highly satisfied customers are far more successful financially than those providing poor service.
9. There is now compelling evidence that people in developed economies are increasingly driven by experiences or quality of life rather than material possessions. At the national level customer satisfaction is a measure of the quality (as opposed to the quantity) of GDP.
10. Since people seek to repeat pleasurable experiences but avoid unpleasant ones, it is not surprising that the University of Michigan has identified a pivotal role for customer satisfaction in determining future customer spending and hence economic growth.

References
1. Smith, Adam (1776) "The Wealth of Nations"
2. Fornell, Claes et al (2005) "The American Customer Satisfaction Index at Ten Years: Implications for the Economy, Stock Returns and Management", Stephen M Ross School of Business, University of Michigan
3. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York
4. Festinger, Leon (1957) "A Theory of Cognitive Dissonance", Stanford University Press, Stanford
5. Reichheld, Frederick (2001) "The Loyalty Effect", 2nd edition, Harvard Business School Press, Boston
6. Rust and Zahorik (1993) "Customer satisfaction, customer retention and market share", Journal of Retailing 69(2)
7. White and Schneider (2000) "Climbing the Commitment Ladder: The role of expectations disconfirmation on customers' behavioral intentions", Journal of Service Research 2(3)
8. Reichheld and Sasser (1990) "Zero Defections: Quality Comes to Services", Harvard Business Review 68, (September-October)
9. Sasser and Jones (1995) "Why Satisfied Customers Defect", Harvard Business Review 73, (November-December)
10. Rust, Zahorik and Keiningham (1996) "Making Service Quality Financially Accountable", in "Readings in Service Marketing", Harper Collins
11. Heskett and Schlesinger (1997) "Out in Front, Building High Capability Service Organisations", Harvard Business School Press, Boston
12. Tofani, Joanne (2000) "The People Connection: Changing Stakeholder Behavior to Improve Performance at CIBC", conference paper, ASQ Customer Satisfaction and Loyalty Conference, San Antonio, Texas
13. Rucci, Kern and Quinn (1998) "The Employee-Customer Profit Chain at Sears", Harvard Business Review 76, (January-February)
14. Gruca and Rego (2003) "Customer Satisfaction, Cash Flow and Shareholder Value", Marketing Science Institute
15. Fornell, Claes (2001) "The Science of Satisfaction", Harvard Business Review 79, (March-April)
16. The American Customer Satisfaction Index, www.theacsi.org
17. (1993) "Betting to Win on the Baldie Winners", Business Week, October 18th
18. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
19. Van Boven and Gilovich (2003) "To do or to have? That is the question", Journal of Personality and Social Psychology 85
20. Maslow, A H (1943) "A theory of human motivation", Psychological Review 50
21. Richins and Dawson (1992) "A consumer values orientation for materialism and its measurement: Scale development and validation", Journal of Consumer Research 19
22. Kasser and Ryan (1993) "A dark side of the American dream: Correlates of financial success as a central life aspiration", Journal of Personality and Social Psychology 65
23. Kasser and Ryan (1996) "Further examining the American dream: Differential correlates of intrinsic and extrinsic goals", Personality and Social Psychology Bulletin 22
24. Kasser, T (2002) "The High Price of Materialism", MIT Press, Boston
25. Frank, R H (1999) "Luxury fever: Why money fails to satisfy in an era of excess", Free Press, New York
26. Quorin, M (2006) "Personal Aspirations in Europe", Future Foundation, London
27. Quorin, M (2006) "A Life of Leisure", Future Foundation, London
28. Pine and Gilmore (1998) "Welcome to the Experience Economy", Harvard Business Review 76, (July-August)
29. Pine and Gilmore (2002) "The Experience Economy: Work is Theatre and Every Business a Stage", Harvard Business School Press, Boston
30. Lasalle and Britton (1999) "Priceless: Turning Ordinary Products into Extraordinary Experiences", Harvard Business School Press, Boston
31. Diller, Shedroff and Rhea (2006) "Making Meaning: How Successful Businesses Deliver Meaningful Customer Experiences", New Riders Publishing
32. Shaw and Ivens (2002) "Building Great Customer Experiences", Palgrave Macmillan, Basingstoke
33. Berry, Carbone and Haeckel (2002) "Managing the Total Customer Experience", MIT Sloan Management Review Vol 43 No 2 (Spring)
34. Zaltman, Gerald (2003) "How Customers Think", Harvard Business School Press, Boston
35. Katona, George (1979) "Toward a macropsychology", American Psychologist 34(2)


CHAPTER THREE

Methodology essentials

Chapter 2 outlined the plentiful evidence that high levels of customer satisfaction pay. As much of that information is more than ten years old, one would have expected to see huge progress in companies’ ability to satisfy customers over the last decade, especially since most organisations claim that customer satisfaction is an important goal. That progress hasn’t happened. This chapter considers the reasons for this failure and suggests some solutions.

At a glance
In this chapter we will:

a) Present the evidence that customer satisfaction is not improving.

b) Provide a definition of customer satisfaction plus an explanation of the concept and how it affects organisational performance.

c) Review the reasons why many organisations fail to take effective action to improve customer satisfaction.

d) Explain the necessity for measures.

e) Highlight the fundamental essentials of an accurate CSM methodology.

3.1 Customer satisfaction isn’t improving
Based on over ten years of data from the American Customer Satisfaction Index, Figure 3.1 shows that customer satisfaction in the USA remains below its 1994 starting point1. Whilst there is no comparable trend data from the UK, the satisfaction benchmarking database of specialist customer research company The Leadership Factor, based on several hundred customer satisfaction surveys per annum, leads to a similar conclusion2 and is shown in Figure 3.2.

KEY POINT
Despite its importance, many organisations are failing to improve customer satisfaction.


So in the face of all this overwhelming evidence about the benefits of customer satisfaction, and despite the lip service paid to it, why is it that companies have been so unsuccessful at improving it? There are three reasons:

1) People don’t understand customer satisfaction. More specifically, they don’t understand the implications of different levels of customer satisfaction and the level they need to achieve to benefit their own organisation.

2) They don’t have an accurate measure of customer satisfaction, so they lack the most fundamental tool for making sure the organisation is achieving the required level of satisfaction. Since the science of measuring satisfaction is now at least two decades old, there is little excuse for this.

FIGURE 3.2 Customer satisfaction trends in the UK
[Chart: UK customer satisfaction index, 1997 to 2007, plotted on a scale of 75% to 81%.]

FIGURE 3.1 Customer satisfaction trends in the USA
[Chart: ACSI 1994 to 2006, plotted on a scale of 70 to 75.]


3) Even if they do have accurate and actionable measures, they don’t take the necessary action – often linked to the first point, but not always.

We’ll consider the first and third reasons initially before moving on to outline the essential aspects of a CSM methodology that will provide an accurate measure of how satisfied or dissatisfied customers feel as well as reliable information on how to improve it.

3.2 Understanding customer satisfaction

3.2.1 A definition of customer satisfaction
The most straightforward definition of customer satisfaction was provided by American marketing guru Philip Kotler: “If the product matches expectations, the consumer is satisfied; if it exceeds them, the consumer is highly satisfied; if it falls short, the consumer is dissatisfied.”3 Crucial in this definition is the view that satisfaction is a relative concept encompassing the customer’s expectations as well as the performance of the product4. Whilst early definitions were product focused, it has since been recognised that customer satisfaction applies equally to services as well as to any individual element of a customer’s product or service experience. Hence, Oliver has defined customer satisfaction as “a judgement that a product or service feature, or the product or service itself, provided (or is providing) a pleasurable level of consumption-related fulfilment, including levels of under- or over-fulfilment.”5
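
Kotler’s definition boils down to comparing perceived performance with prior expectation. The sketch below expresses that comparison as code; the 10-point scale and the exact-match threshold are arbitrary illustrative choices, not part of the definition.

```python
# Minimal sketch of Kotler's rule: satisfaction is performance judged against
# expectation. The 10-point scale and exact-match threshold are illustrative
# assumptions only.

def satisfaction_judgement(expectation: float, perceived_performance: float) -> str:
    if perceived_performance > expectation:
        return "highly satisfied"     # performance exceeds expectations
    if perceived_performance == expectation:
        return "satisfied"            # performance matches expectations
    return "dissatisfied"             # performance falls short

print(satisfaction_judgement(expectation=7, perceived_performance=9))  # highly satisfied
print(satisfaction_judgement(expectation=7, perceived_performance=7))  # satisfied
print(satisfaction_judgement(expectation=7, perceived_performance=5))  # dissatisfied
```

The point of the sketch is simply that the same performance can produce different satisfaction judgements depending on what the customer expected.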

Although customers make satisfaction judgements about products and services, customer satisfaction should not be confused with service quality. Firstly, customer satisfaction is broader in scope than service quality, which is “only one component of a customer’s level of satisfaction.”6 Secondly, a product or service must be experienced to make a satisfaction judgement, but that is not an essential pre-requisite for developing an attitude about quality.5 For example, it is possible for people to form opinions about the quality of a car or the service quality delivered by staff in a hotel based on advertising, reputation or word-of-mouth, whereas it is not possible to be satisfied or dissatisfied with them without driving the car or staying in the hotel. Thirdly, although most quality management academics and practitioners would subscribe to the ‘user-based approach’ rather than the ‘technical approach’7, judgements of satisfaction are typically much more subjective and emotional than quality judgements. It was this principle that prompted Tom Peters to coin his famous “perception is reality” phrase. He emphasised that whilst customers’ judgements may be “idiosyncratic, human, emotional, end-of-the-day, irrational, erratic”8, they are the attitudes on which customers everywhere base their future behaviours. As Peters says, the possibility that customers’ judgements are unfair is scant consolation once they have taken their business elsewhere.


DEFINITION
Customer satisfaction, or dissatisfaction, is the feeling a customer has about the extent to which their experiences with an organisation have met their needs.

3.2.2 Attitudes and behaviours
So customer satisfaction is a relative concept. It’s the customers’ subjective judgement or feeling, the attitudes they hold, about the extent to which their requirements have been met by a supplier. However, satisfaction is rarely an end in itself because, whilst it is pleasing that customers hold favourable attitudes, that’s of little value if they’re not behaving like loyal customers. As we saw in Chapter 2, it is customers’ behaviour that enables companies to achieve their objectives, particularly desirable behaviours such as buying more often, spending more or recommending the organisation to others. The reason why the measurement of customer satisfaction is so important is that attitudes drive behaviours, so customer satisfaction is a key lead indicator of future customer behaviours and, therefore, future company performance. However, to maximise the benefit of this powerful management tool, it is vital to separate the attitudinal and behavioural aspects of customer satisfaction, as illustrated in Figures 3.3 and 3.4.

KEY POINT
Satisfaction is an attitude, loyalty is a behaviour.

CSM is totally focused on the first oval in the diagrams – measuring customers’ attitudes about how satisfied they feel with the organisation. As lead indicators, these attitudes provide by far the most useful data for managing organisational performance. Obviously, customers’ behaviours, especially their loyalty behaviours, are extremely important to organisations, but they are lagging indicators. By the time a customer has defected or chosen an alternative supplier for a related product or service, the opportunities have been missed. That is not to say that customer behaviours should not be monitored. Information such as customer defection rates, average spend and complaints are all extremely useful but they should not be confused with measures of customer satisfaction.

FIGURE 3.3 Attitudes and behaviours
[Diagram: customer attitudes → customer behaviour → organisational outcomes]

FIGURE 3.4 How satisfaction translates to profit
[Diagram: customer satisfaction → customer loyalty → company profit]


KEY POINT
Satisfaction is a lead indicator. Loyalty, sales and other measures of organisational performance are lagging ones.

3.2.3 How satisfaction affects loyalty
Another widely misunderstood aspect of customer satisfaction is how it translates into loyalty and profit. To say that satisfied customers will be more loyal than dissatisfied ones, whilst broadly true, is far too simplistic. As we said in Chapter 1, there are different levels of satisfaction and these can affect companies in widely differing ways.

Whilst it is obvious that dissatisfied customers will rarely be loyal whereas highly satisfied ones will be, what about those in the mid ranges of satisfaction? As Harvard9 point out in Figure 3.5, the zone of indifference isn’t good enough for most companies. To maximise customer lifetime value, suppliers have to make customers so satisfied that there is no point even thinking about switching10. Jones and Sasser11 point out that most companies don’t understand the extent to which high levels of satisfaction rather than ‘mere satisfaction’12 are necessary.
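
The zones in Figure 3.5 can be caricatured as a step-like, non-linear mapping from a satisfaction score to expected loyalty. The cut-points and loyalty figures below are invented purely to show the shape; the only point taken from the text is that the relationship is not a straight line and that the payoff is concentrated at the top end.

```python
# Caricature of the non-linear satisfaction -> loyalty relationship in
# Figure 3.5. Cut-points and loyalty values are invented assumptions; the
# text's claim is only that the curve is flat in the middle ("zone of
# indifference") and steep at the top ("zone of affection").

def expected_loyalty(satisfaction_score: float) -> tuple[str, float]:
    """Map a 1-10 satisfaction score to an illustrative zone and loyalty rate."""
    if satisfaction_score <= 4:
        return "zone of defection", 0.10
    if satisfaction_score <= 7:
        return "zone of indifference", 0.45
    if satisfaction_score < 9.5:
        return "zone of affection", 0.85
    return "apostle", 0.98

for score in (3, 6, 8, 10):
    zone, loyalty = expected_loyalty(score)
    print(f"score {score:>2}: {zone:<22} expected loyalty {loyalty:.0%}")
```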

KEY POINT
Satisfaction does not affect organisational performance in a linear manner. Highly satisfied customers are much more valuable than ‘merely satisfied’ ones.

In our experience it would be unfair to imply that there are many companies or managers that still don’t believe in the importance of customer satisfaction. But it would be reasonable to conclude that most organisations still don’t fully understand the extent to which they need to invest in achieving very high levels of customer satisfaction rather than settling for the ‘zone of mere satisfaction’. The same point applies to customer loyalty. Very loyal customers are far more profitable than quite loyal ones.

FIGURE 3.5 The relationship between satisfaction and loyalty
[Chart: loyalty (20% to 100%) plotted against satisfaction (1 to 10), moving from the ‘zone of defection’ (saboteurs) through the ‘zone of indifference’ to the ‘zone of affection’ (apostles).]

3.3 Companies don’t act
As we said earlier, Adam Smith knew that there was no long term future for sellers that maximise profits at the expense of their customers’ gratification. So why is it that some companies still don’t seem to realise that? It’s that word ‘competition’ – the double edged sword of customer satisfaction. Too little of it, and suppliers don’t need to satisfy their imprisoned customers, as witnessed by the disingenuous indifference to the customer experience of many public sector organisations and private sector monopolies. Too much of it, and companies’ continuing inability to master the utility equation’s cost-benefit trade-off results in too many short term decisions to reduce costs at the expense of customer satisfaction. Of course, in the long run customers are almost always the beneficiaries of very competitive markets.

KEY POINT
Many companies don’t understand that cost reduction is a false economy if it reduces customer satisfaction.

In our many years of helping organisations to measure and improve customer satisfaction, the authors have noticed that the biggest single difference between companies at the top of The Leadership Factor’s Satisfaction Benchmark League Table2 (with the highest levels of customer satisfaction) and those at the bottom is the latter’s failure to take appropriate action to address the issues that would make the biggest difference to improving customer satisfaction. Often they take virtually no action and, if they do, it is often focused on the wrong things; typically things that are cheap or easy to address rather than confronting the real issues that are upsetting customers.

By contrast, companies at the top of the League take focused action to address the areas where they are least meeting customers’ requirements – whatever they are. They don’t take easy options, they do whatever it takes to “do best what matters most to customers” because they understand that if they do that, rational customers will stay and spend more with them in the future. Chapters 12 to 15 of this book explain how to identify the precise areas where making improvements would lead to the greatest gain in customer satisfaction.

3.4 The science of CSM

3.4.1 Why measure satisfaction?
Phrases such as ‘you can’t manage what you don’t measure’ reflect the widely held view that without measures organisations lack the focus to make improvements even in areas regarded as very important. For example, introducing measures of quality, such as statistical process control, was crucial to manufacturers in western economies improving their quality levels during the 1980s and 1990s.

Some go further and suggest that organisations are defined by what they measure. They maintain that “what a business measures shapes employee thinking, communicates company values and channels organisational learning.”13 Conversely, employees don’t take seriously things that are not measured, largely because it’s impossible to base performance management and rewards on them – a fact that was discovered by Enterprise Rent-A-Car in the 1990s14.

Founded in 1957 with seven hire cars, Enterprise Rent-A-Car had grown to 50,000 vehicles 30 years later. There was, however, growing anecdotal evidence of customer dissatisfaction, which was contrary to the very customer focused ethos that the company’s founder had built from its inception. To counter this problem Enterprise began to measure customer satisfaction in 1989, but by 1994 satisfaction levels had shown no improvement. There were two reasons for this. First, the measures were not credible, mainly because sample sizes were relatively small, so the results only gave a national and regional overview and did not reach down to local operating units. Branch managers could assume that the problem of customer dissatisfaction was caused by other branches and not their own. Secondly, it didn’t matter anyway, because branch managers’ reward, recognition and promotional opportunities were based on sales growth and profitability, and no link was established between these important business measures and customer satisfaction. To improve the situation Enterprise Rent-A-Car addressed both of these problems. First, they made the customer satisfaction measures credible and managers accountable by massively increasing the sample size to 100 randomly sampled customers per branch per quarter and changed from self-completion questionnaires to telephone interviews. This meant over 2 million interviews per annum, conducted by an external agency. Secondly, they made sure the results were taken seriously by demonstrating the link between customer satisfaction, loyalty and profit and by making customer satisfaction a fundamental part of branch and regional managers’ performance appraisal. The result was a steady improvement in customer satisfaction over the following decade and the rise of the company to a clear market leadership position.

KEY POINT
The right measures are essential to effectively manage employees’ behaviour and organisational performance.

3.4.2 The accuracy of customer satisfaction measures
If organisations make decisions and take action on flawed information, it would not be surprising if their efforts resulted in little gain. This is a major reason for many organisations’ failure to improve customer satisfaction and is far more widespread than people realise. Many customer satisfaction measures are virtually useless, which is highly regrettable since the fundamental methodology for measuring customer satisfaction has been established for over two decades.

Some people may still question the extent to which intangible feelings can be accurately measured, but this is a very outmoded view. Whilst the feelings may be subjective, modern research methods can produce objective measures of them – measures that can be accurately expressed in numbers and reliably tracked over time. Their level of reliability can be accurately stated and they can be used to develop powerful statistical models to help us understand both the causes and consequences of customer satisfaction, as we explain in Chapter 14.

The origins of the science of customer satisfaction measurement can be traced back to the mid-1980s and the work of Parasuraman, Zeithaml and Berry. Their SERVQUAL approach15,16,17, developed and refined in the second half of the decade, established a number of key satisfaction measurement principles such as:

Measuring subjective perceptions as the basis of user-defined quality.
Using exploratory research to identify the criteria used by customers to make service quality judgements prior to a main survey to gather statistically reliable data.
The multi-dimensionality of customers’ judgements.
The relative importance of the dimensions and the fact that the most important will have the greatest effect on customers’ overall feelings about an organisation.
The use of a weighted index to reliably represent customers’ overall judgements.
The use of gap analysis to identify areas for improvement.
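
Two of these principles – the weighted index and gap analysis – lend themselves to a short sketch. The requirements, importance weights and satisfaction scores below are hypothetical; the calculation simply weights each satisfaction score by the requirement’s share of total importance and then ranks the largest importance-minus-satisfaction gaps.

```python
# Sketch of a weighted satisfaction index and simple gap analysis, using
# hypothetical requirements and 10-point importance/satisfaction scores.

requirements = {
    # requirement: (mean importance score, mean satisfaction score), both out of 10
    "On-time delivery":  (9.2, 7.1),
    "Product quality":   (8.9, 8.3),
    "Ease of contact":   (7.8, 6.4),
    "Value for money":   (7.4, 7.0),
}

total_importance = sum(imp for imp, _ in requirements.values())

# Weighted index: each satisfaction score contributes in proportion to the
# requirement's share of total importance, then expressed as a percentage.
index = sum(sat * (imp / total_importance) for imp, sat in requirements.values()) / 10 * 100
print(f"Weighted satisfaction index: {index:.1f}%")

# Gap analysis: biggest shortfalls of satisfaction against importance first.
gaps = sorted(requirements.items(), key=lambda kv: kv[1][0] - kv[1][1], reverse=True)
for name, (imp, sat) in gaps:
    print(f"{name:<18} importance {imp:.1f}, satisfaction {sat:.1f}, gap {imp - sat:+.1f}")
```

With these invented figures the index works out at about 72%, and on-time delivery shows the largest gap – the kind of priority area that later chapters describe identifying and acting on.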

However, aspects of the SERVQUAL model have been heavily criticised in more recent times, especially its authors’ assertion that customers’ judgement of any organisation’s service quality could be reliably measured across five standard dimensions – reliability, assurance, tangibles, empathy and responsiveness (sometimes labelled the RATER scale). There has also been much debate concerning the value of the service quality measures compared with the much wider measure of customer satisfaction. We will address both issues in the next two sections.

3.4.3 Standard versus customer defined requirements
Many academic researchers have tested the five SERVQUAL dimensions in their own surveys and have drawn different conclusions concerning both the number and nature of the dimensions. Some have concluded that there should be fewer dimensions18,19,20 whilst others have advocated more than five21,22,23. Sector specific dimensions were proposed24,25 and even the SERVQUAL originators in a later study found the dimensions changing26.

In their book “Improving Customer Satisfaction, Loyalty and Profit”27, Michael Johnson and Anders Gustafsson of the University of Michigan Business School took these findings one step further when they introduced the concept of ‘the lens of the customer’, which they contrasted with ‘the lens of the organization’. Suppliers and their customers often do not see things in the same way. Suppliers typically think in terms of the products/services they supply, the people they employ to provide them and the processes that employees use to deliver the product or service. Customers look at things from their own perspective, basing their evaluation of suppliers on whether they have received the results, outcomes or benefits that they were seeking.

Since customers’ satisfaction judgements are based on the extent to which their requirements have been met, a measure of satisfaction will be generated only by a survey based on the same criteria used by the customers to make their satisfaction judgements. This means that to ask the right questions, customers’ requirements have to be identified before the survey is undertaken and the questionnaire based on ‘the lens of the customer’.

KEY POINT
An accurate measure of how satisfied or dissatisfied customers feel can be generated only if the survey is based on the lens of the customer.

Requirements are identified by qualitative research, a process in which focus groups or depth interviews are used to allow customers to talk about their relationship with a supplier and define what the most important aspects of that relationship are. This process is explained in Chapter 5.

FIGURE 3.6 The lens of the customer
[Diagram: the lens of the organization (products, people, processes) contrasted with the lens of the customer (results, outcomes, benefits).]


3.4.4 Satisfaction or service quality
If any measure of customers’ attitudes is to be a reliable lead indicator of their future behaviour, it is fundamental to its accuracy that the survey instrument is based on the correct requirements. As already explained, much of the early debate around the SERVQUAL methodology focused on the extent to which the five RATER dimensions were the correct ones, with several academic studies suggesting alternative or more appropriate ones.

There have been studies that have demonstrated the organisational value of improving service quality in terms of increasing market share28, margins29, recommendation26 and profitability30,31,32. However, most commentators prefer the much broader concept of customer satisfaction rather than the more restrictive measures of service quality or the prescriptive SERVQUAL framework33,34,35,36,37,38. Clearly, customers normally judge organisations on a wider range of factors than service quality alone – product quality and price being two obvious examples.

Before moving on it is necessary to cover two more reasons why organisations’ measures of customer satisfaction may not be providing suitable information for management decision making. The first is caused by insufficient knowledge of research; the second, paradoxically, can result from too much.

3.4.5 Unscientific surveys
Some organisations give responsibility for customer satisfaction measurement to people who do a relevant job (e.g. Customer Service Manager or Quality Manager) but who have no experience or expertise in research techniques. Research is a scientific process. It’s not enough to approach the task with good intentions and common sense. Without sufficient training they will make the most basic errors that will render the output totally unsuitable for monitoring the organisation’s success in satisfying its customers. Common problems include asking the wrong questions based on the lens of the organisation (see Chapter 4), introducing bias (Chapters 6 to 9 explain three common sources of bias) and attempting to monitor a measure whose margin of error will be far greater than the amount that customer satisfaction could be expected to rise or fall over a twelve month period. The issue of statistical reliability is explained in Chapter 6 on sampling and in Chapter 11 for calculating an accurate customer satisfaction index. As well as being a complete waste of resources, amateurish customer satisfaction surveys are a key reason why many organisations fail to attain the benefits of improving customer satisfaction.
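
The point about margins of error can be illustrated with a back-of-envelope calculation. The standard deviation and sample sizes below are assumptions chosen only to show the order of magnitude; the proper treatment of statistical reliability belongs to Chapters 6 and 11.

```python
# Back-of-envelope margin of error (95% confidence) for a mean satisfaction
# score on a 10-point scale. The standard deviation of 1.8 and the sample
# sizes are illustrative assumptions.
import math

def margin_of_error(sample_size: int, std_dev: float = 1.8, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample mean, in scale points."""
    return z * std_dev / math.sqrt(sample_size)

for n in (30, 100, 500):
    moe_points = margin_of_error(n)
    # Expressed as percentage points on an index where 1 scale point = 10%.
    print(f"n={n:>3}: +/- {moe_points:.2f} scale points (~{moe_points * 10:.1f} index points)")
```

With a sample of 30 the margin of error swamps any plausible year-on-year movement in satisfaction; even at 100 it is still several index points, which is why sample design matters so much.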

3.4.6 Misguided professional research
There are many people in agencies and in research departments in companies who are very experienced market researchers. They would not make the basic errors outlined above because they are well versed in sample size and confidence intervals, they understand the biasing effect of low response rates and unbalanced scales or questions and they know that a composite index is more reliable than a single question. However, whilst a valid customer satisfaction survey will always be founded on sound research principles, an effective one will also be based on a deep understanding of customer satisfaction. Improving customer satisfaction is very difficult. Maintaining a sustained improvement in customer satisfaction over a few years is exceptionally challenging and will not happen unless the organisation has a customer satisfaction measurement methodology that is totally suited to the task – and many perfectly valid research techniques are not suited to providing data for monitoring and improving customer satisfaction. Rating scales illustrate the point.

The market research industry engages in perennial debates about the advantages and disadvantages of different rating scales, such as verbal versus numerical or 5-point versus 10-point scales. A 5-point verbal rating scale is a totally valid research technique that is completely suitable for many forms of market research. For monitoring and improving customer satisfaction, however, it is vastly inferior to a 10-point numerical scale. The reasons (explained in Chapter 8) are customer satisfaction rather than market research reasons. Generating reliable customer satisfaction measures that lead most effectively to customer satisfaction improvement requires extensive customer satisfaction knowledge as well as adequate research expertise, and few people have both.

Conclusions
1. Customer satisfaction is a relative concept based on the extent to which an organisation has met its customers’ requirements.
2. Customer satisfaction is an attitude based on customers’ subjective perceptions of an organisation’s performance.
3. Loyalty is a behaviour that is driven primarily by customers’ satisfaction attitudes.
4. Many managers don’t understand the extent to which they have to make customers very satisfied, rather than ‘merely satisfied’, to achieve the full organisational benefits of customer satisfaction.
5. By over-emphasising the importance of cost control, many companies make decisions that adversely affect customer satisfaction and, in the long run, customer loyalty and business performance.
6. Measures are essential to effectively manage employees’ behaviours and organisational performance.
7. Many organisations fail to improve customer satisfaction because their measures are based on flawed methodologies.

The main essentials of an accurate CSM process are:
Using the lens of the customer as the basis for CSM surveys.
Measuring the relative importance of customers’ requirements.
Basing the headline measure that is monitored over time on a composite index that is weighted according to the relative importance of its components.
Identifying the areas where the organisation is failing to meet its customers’ requirements as the basis for actions to improve customer satisfaction.
Customer satisfaction measurement needs to be conducted by specialists, not by amateurs who will make basic research errors or by market researchers who don’t understand enough about the specific demands of a reliable CSM process.

References
1. The American Customer Satisfaction Index, www.theacsi.org
2. The Leadership Factor's customer satisfaction benchmarking database: www.leadershipfactor.com/surveys/
3. Kotler, Philip (1986) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey
4. Swan and Combs (1976) "Product Performance and Customer Satisfaction: A New Concept", Journal of Marketing, (April)
5. Oliver, Richard L (1997) "Satisfaction: A behavioural perspective on the consumer", McGraw-Hill, New York
6. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
7. Helsdingen and de Vries (1999) "Services marketing and management: An international perspective", John Wiley and Sons, Chichester, New Jersey
8. Peters and Austin (1986) "A Passion for Excellence", William Collins, Glasgow
9. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
10. Rust, Zahorik and Keiningham (1994) "Return on Quality: Measuring the financial impact of your company's quest for quality", McGraw-Hill, New York
11. Jones and Sasser (1995) "Why Satisfied Customers Defect", Harvard Business Review 73, (November-December)
12. Keiningham and Vavra (2001) "The Customer Delight Principle: Exceeding customers' expectations for bottom-line success", McGraw-Hill, New York
13. Reichheld, Markey and Hopton (2000) "The Loyalty Effect – the relationship between loyalty and profits", European Business Journal 12(3)
14. Taylor, Andy (2003) "Top box: Rediscovering customer satisfaction", Business Horizons, (September-October)
15. Parasuraman, Berry and Zeithaml (1985) "A conceptual model of service quality and its implications for future research", Journal of Marketing 49(4)
16. Parasuraman, Berry and Zeithaml (1988) "SERVQUAL: a multiple-item scale for measuring perceptions of service quality", Journal of Retailing 64(1)
17. Zeithaml, Berry and Parasuraman (1990) "Delivering Quality Service", Free Press, New York
18. Babakus and Boller (1992) "An empirical assessment of the SERVQUAL scale", Journal of Business Research 24
19. Cronin and Taylor (1992) "Measuring service quality: An examination and extension", Journal of Marketing 56
20. White and Schneider (2000) "Climbing the Commitment Ladder: The role of expectations disconfirmation on customers' behavioral intentions", Journal of Service Research 2(3)
21. Gronroos, C (1990) "Service management and marketing: Managing the moments of truth in service competition", Lexington Books
22. Carman, J M (1990) "Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions", Journal of Retailing 66(1)
23. Gummesson, E (1992) "Quality dimensions: What to measure in service organizations", in Swartz, Bowen and Brown (Eds) "Advances in services marketing and management", JAI Press, Greenwich CT
24. Stevens, Knutson and Patton (1995) "DINESERV: A tool for measuring service quality in restaurants", Cornell Hotel and Restaurant Administration Quarterly 36(2) pages 56-60
25. Dabholkar, Thorpe and Rentz (1996) "A measure of service quality for retail stores: Scale development and validation", Journal of the Academy of Marketing Science 24(1)
26. Parasuraman, Berry and Zeithaml (1991) "Refinement and reassessment of the SERVQUAL scale", Journal of Retailing 79
27. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", John Wiley and Sons, San Francisco, California
28. Buzzell and Gale (1987) "The PIMS Principles: Linking Strategy to Performance", Free Press, New York
29. Gummesson, E (1993) "Quality Management in Service Organisations", ISQA, Stockholm
30. Narver and Slater (1990) "The effect of market orientation on business profitability", Journal of Marketing 54
31. Schneider, B (1991) "Service quality and profits: Can you have your cake and eat it too?", Human Resources Planning 14(2)
32. Deshpande, Farley and Webster (1993) "Corporate culture, customer orientation and innovativeness in Japanese firms: A quadrad analysis", Journal of Marketing 57
33. Gilmore and Carson (1992) "Research in service quality: Have the horizons become too narrow?", Marketing Intelligence and Planning 10(7)
34. Lam, S S K (1995) "Assessing the validity of SERVQUAL: an empirical analysis in Hong Kong", Asia Pacific Journal of Quality Management 4(4)
35. Buttle, F (1996) "SERVQUAL, review, critique, research agenda", Journal of Marketing 60
36. Genestre and Herbig (1996) "Service expectations and perceptions revisited: adding product quality to SERVQUAL", Journal of Marketing Theory and Practice 4(4)
37. Robinson, S (1999) "Measuring service quality: Current thinking and future requirements", Marketing Intelligence and Planning 17(1)
38. Newman, K (2001) "Interrogating SERVQUAL: A critical assessment of service quality measurement in a high street retail bank", International Journal of Bank Marketing 19(3)
39. Myers, James H (1999) "Measuring Customer Satisfaction: Hot buttons and other measurement issues", American Marketing Association, Chicago, Illinois


CHAPTER FOUR

Asking the right questions

To say that you need to ask the right questions when undertaking a customer satisfaction survey may appear to be a statement of the obvious. Unfortunately, when it comes to customer satisfaction measurement, it’s the biggest single mistake organisations make. Many simply don’t ask the right questions, even though they often devote considerable time and effort to deciding what the questions should be. That’s because they approach the task from the inside out, looking at it through the ‘lens of the organisation’, rather than, as they should, from the outside in, seeing it through the ‘lens of the customer’. Since customers’ requirements and their relative importance form such a fundamental part of an effective CSM process, this chapter is devoted to providing a full examination of the subject.

At a glance
In this chapter we will:

a) Illustrate why customer satisfaction surveys have to be based on the lens of the customer.

b) Explain why it is so crucial to develop an accurate understanding of the relative importance of customers’ requirements.

c) Examine the differences between stated and derived measures of importance.

d) Compare different techniques for producing statistically derived measures of importance.

e) Draw conclusions on the best way to understand customers’ requirements and their relative importance.

4.1 The lens of the customer

Many organisations assume that designing a questionnaire for a customer survey is easy. They might arrange a meeting attended by a few managers who, between them, suggest a list of appropriate topics for the questionnaire. There are two problems with this approach. Firstly, the questionnaire almost always ends up far too long, because managers tend to keep thinking of more topics on which customer feedback would be useful or interesting. The second, and more serious, problem is that the questionnaire invariably covers issues of importance to the company’s managers rather than those of importance to customers. This is fine if the objective is simply to understand customers’ perceptions of how the organisation is performing in the specified areas, but it will not provide a measure of customer satisfaction. This fundamental misunderstanding of how to arrive at the right questions is perfectly illustrated by the following example.

Forget the free coffee. Just make the trains arrive on time.
In 1999 Which? Magazine published an article entitled “Off the rails”1. It illustrated how CSM surveys can be hijacked by organisations (or managers within them). According to Which?, the train operating companies’ “surveys are close to useless” because the questions avoid customers’ main requirements.

If an organisation really wants to know how satisfied its customers feel, the questions asked in the survey have to cover the same criteria that customers use to judge the organisation. Companies are tempted to include questions on areas where they’ve invested heavily or made improvements, but if these are of marginal importance to customers they will make little impact on how satisfied customers really feel. Which? conducted a survey to identify rail passengers’ main requirements and the top 10 are shown in Figure 4.1.

Not a single train operator included all of the customers’ top ten priorities on its questionnaire, questions about the punctuality of the trains being particularly conspicuous by their absence. The worst culprit at the time was GNER, whose survey covered only one item from the top ten criteria on which customers were judging it. They did ask about the on-train catering, and about staff appearance. Both came close to the bottom of customers’ priorities in the Which? survey.

FIGURE 4.1 Passengers’ main requirements

Punctuality of trains
Availability of seats
Train frequency
Information on delayed and cancelled trains
Cleanliness of trains
New rolling stock
Safety and security on trains
Cancellations
Announcements on trains
Journey time


Since customers’ satisfaction feelings are based on the extent to which their requirements have been met, a measure that truly reflects how satisfied or dissatisfied customers feel will be generated only by a survey based on the same criteria used by the customers to make their satisfaction judgements. This means that to ask the right questions, customers’ requirements have to be identified before the survey is undertaken and the questionnaire based on what’s important to customers rather than what’s important to the organisation.

In Chapter 3 we introduced the concept of the ‘lens of the customer’, first articulated by Michael Johnson and Anders Gustafsson from the University of Michigan2. It is based on the fact that suppliers and their customers often do not see things in the same way. Suppliers typically think in terms of the products they supply, the people they employ to provide them and the processes that employees use to deliver the product or service. Customers look at things from their own perspective, basing their evaluation of suppliers on whether they have received the results, outcomes or benefits that they were seeking.

KEY POINT
To produce accurate measures of customer satisfaction, surveys have to be based on what’s important to customers.

4.2 Understanding what’s important to customers

Basing a customer satisfaction survey on the lens of the customer produces an accurate measure of how satisfied or dissatisfied customers feel because it employs the same criteria that the customers use to make that judgement. They are the customers’ most important requirements – the things that matter most to them. To achieve this we have to introduce an added layer of complexity, because understanding what is important to customers is not as simple as it may appear. In fact, market researchers have debated this topic more than almost any other aspect of CSM methodology, especially the relative merits of stated or direct measures of importance versus derived or indirect methods. To a large extent the debate has been justifiably fuelled by the fact that different methods of measuring importance have been shown to produce results that can differ very widely3,4,5. Yet, as Myers6 points out, getting this right “is arguably the single most important component of a customer satisfaction survey” for three reasons:

1) It ensures that the survey does not include anything that is not important to customers and does not influence their satisfaction judgement.

2) It provides the basis for identifying PFIs (priorities for improvement) – areas where the organisation should focus its resources for maximum gain in customer satisfaction (see Chapter 12).

3) It enables the calculation of an accurate headline measure of customer satisfaction for tracking purposes – a composite customer satisfaction index that is weighted according to what’s most important to customers. Since customers base their judgement of suppliers more heavily on the factors that are most important to them, a weighted index provides the only accurate measure for monitoring the organisation’s success in improving customer satisfaction (see Chapter 11). A simple sketch of this weighting idea follows below.
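As an illustration only (not the authors’ published formula), the sketch below combines mean importance and satisfaction scores, both assumed to be on the 10-point scales described in this chapter, into a single weighted index. The requirement names and figures are invented for the example.

import_note = None  # placeholder to keep the sketch self-contained

# Hypothetical mean stated importance scores (out of 10).
importance = {
    "Punctuality of trains": 9.5,
    "Availability of seats": 8.9,
    "Cleanliness of trains": 8.1,
}
# Hypothetical mean satisfaction scores (out of 10).
satisfaction = {
    "Punctuality of trains": 6.2,
    "Availability of seats": 7.4,
    "Cleanliness of trains": 8.0,
}

# Convert the importance scores into weights that sum to 1.
total_importance = sum(importance.values())
weights = {req: score / total_importance for req, score in importance.items()}

# Each requirement's satisfaction contributes in proportion to its weight;
# multiplying by 10 expresses the index as a percentage.
index = sum(weights[req] * satisfaction[req] for req in satisfaction) * 10
print(f"Weighted customer satisfaction index: {index:.1f}%")

Because the weights sum to 1, a requirement that customers rate as twice as important contributes twice as much to the headline figure.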

4.3 Stated importance

The simplest way to understand what’s important to customers is to ask them. One could ask them simply to say what’s important to them, starting with a blank sheet of paper and with no prompts provided. Called ‘elicitation’3,5, it is the ideal starting point for understanding the lens of the customer: it is a straightforward thing for customers to do and it is easy to analyse by simply counting the number of times each requirement was mentioned. However, in a totally unprompted process customers will articulate only a small number of requirements, which means that an extremely large number of interviews have to be conducted for elicitation to stand any chance of uncovering the full extent of customers’ requirements. Even then, elicitation does no more than produce a list of requirements; it does not provide a measure of relative importance. Therefore, it would be more usual to provide prompts or other stimulus material to build a much more comprehensive list of customer requirements and then to understand their relative importance by asking customers to score the importance of each item on the list, preferably on a 10-point scale, where 10 out of 10 means ‘extremely important’ and 1 out of 10 means ‘of no importance at all’.

KEY POINT
Stated importance provides a measure of the relative importance of customers’ requirements that will be easily understood by all managers and employees.

Called stated (or direct) importance, the average scores generated by this exercise will provide a very clear and reliable view of the relative importance of customers’ priorities, as seen by the customers themselves. It is a very clear and simple process for customers to follow when they are interviewed and for colleagues to understand when the results are presented. However, stated importance has been criticised on two counts:

4.3.1 High stated importance scores

Firstly, customers have a tendency to give fairly high importance scores, although this simply reflects reality – many things are very important to customers. Even though the range of average importance scores will be at the upper end of the scale, there will be a range. Some commentators maintain that stated importance scores always fall in a very narrow range (average scores above 8 out of 10). If this happens, it suggests that the questions have not been properly administered. If the correct procedures are followed (see Chapter 5.1), a very wide range of average importance scores will be generated by qualitative exploratory research, often from a high of almost 10 to a low of less than 3 on a 10-point scale, although less suitable scales, such as verbal scales or 5-point scales, will provide a much narrower range of scores. Using a 10-point scale, even the main survey, which only includes customers’ most important requirements, will typically produce stated importance scores from around 7 to almost 10. Since the purpose of this exercise is to understand the relative importance of customers’ priorities, the average scores will clearly highlight which of the requirements (all of which are important) customers see as the real top priorities. Moreover, if any requirements record importance scores below 6 out of 10 on the main survey or on a quantitative exploratory survey (see Chapter 5), it provides conclusive evidence that they are not very important to customers and should not form part of any measure of customer satisfaction.

KEY POINT
Stated importance is sometimes said to produce blanket high scores, providing little discriminatory power for understanding relative importance. However, if the correct scale is used, this criticism is greatly exaggerated.

4.3.2 Givens

The second criticism of stated importance is that customers tend to emphasise certain things when scoring the requirements, typically givens such as safety, price, cleanliness etc. Consider, for example, your own judgement as an airline passenger. If you were surveyed and asked to score out of 10 the importance of safety, you would almost certainly give it the top score. However, if you recall the basis on which you chose that airline, it’s very unlikely that safety was high on your list of selection criteria. It’s a ‘given’. Under normal circumstances, safety is not a factor that differentiates between airlines. Therefore, in order to fully understand the criteria used by customers to select or evaluate suppliers, it is helpful to also use the second way of measuring what’s important to customers, known as impact.

4.4 Impact

4.4.1 Determinance

The concept of givens and differentiators goes back to the 1960s7,8,9, and is sometimes referred to as ‘determinance’. It is now well established that some things that are very important to customers won’t always make a big impact on how they judge an organisation because they are givens. Sometimes misleadingly called ‘derived importance’, impact essentially highlights the things that are ‘top of mind’ for customers – the factors that ‘determine’, or make a big impact on, how they select and evaluate suppliers.

Imagine that we approached, at random, a customer of an organisation and asked them for a quick view: “overall, is XYZ a good company to do business with?” Unless it operates in a very restricted market, any organisation will be exposed to this kind of ‘word of mouth’ all the time. In that situation, are its customers saying good or bad things about it, and what is making them say those things? The answer is that their testimonial, good or bad, will have been based on the aspects of dealing with the organisation that have made the biggest impact on them. The measure we are about to describe will highlight those factors.

4.4.2 Measuring impact

We are trying to identify the aspects of an organisation’s performance that are most closely associated with customers’ overall judgement or opinion of it. Conveniently, there is a statistical technique called correlation that does just this. To utilise this technique, a customer satisfaction questionnaire must contain a simple overall satisfaction question such as: “Taking everything into account, how satisfied or dissatisfied are you overall with……… XYZ?”

The overall satisfaction question must be scored on exactly the same scale used for all the other satisfaction questions – preferably a 10-point numerical scale. The data from the overall satisfaction question is then correlated against the customers’ satisfaction scores for each of the other requirements. This can be easily executed in any statistical package. In Microsoft Excel, for example, it is ‘CORREL’ on the drop-down formula menu. The output of a correlation is a ‘correlation coefficient’, which is always a number between 0 and 1 (or 0 and -1 for a negative correlation). To utilise the impact measure, the only statistical knowledge required is a basic understanding of what the correlation coefficient means. This is illustrated by the examples in Figures 4.2 and 4.3.
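The same calculation can be run outside a spreadsheet. The sketch below uses pandas (an assumption; any statistics package offers the same Pearson calculation) to correlate each requirement’s satisfaction scores with the overall satisfaction scores; the respondent data and column names are purely illustrative.

import pandas as pd

# Hypothetical survey data: one row per respondent, 10-point satisfaction scores.
df = pd.DataFrame({
    "overall":           [9, 2, 8, 7, 10, 5, 6, 9, 3, 8],
    "staff_helpfulness": [9, 3, 8, 6, 10, 5, 6, 9, 2, 8],
    "staff_appearance":  [2, 9, 5, 8,  6, 7, 4, 3, 9, 5],
})

# Pearson correlation of each requirement against overall satisfaction --
# the same calculation as Excel's CORREL function.
impact = df.drop(columns="overall").corrwith(df["overall"])
print(impact.sort_values(ascending=False))

In this invented data, helpfulness tracks the overall score closely and appearance does not, which is exactly the pattern the two scatter plots below are describing.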

FIGURE 4.2 Low correlation

[Scatter plot of overall satisfaction (y axis, 1-10) against satisfaction with staff appearance (x axis, 1-10) for 20 customers, with Customer X and Customer Y highlighted]


In the hypothetical example shown in Figure 4.2, 20 customers have scored their overall satisfaction with the supplier and their satisfaction with staff appearance, which seems to make little impact on customers’ overall satisfaction, achieving a very low correlation coefficient of 0.1. Close examination of the scatter plot clearly shows why this is the case. Customer Y does not like the supermarket (scoring 2 out of 10 for overall satisfaction), but has no problem with staff appearance (giving it a very high score of 9 out of 10). By contrast, customer X rates the supermarket as a whole very highly, giving it an overall satisfaction score of 9 out of 10 despite having a very poor opinion of staff appearance, scoring it 2 out of 10. From that picture (and given an adequate sample size for statistical reliability), we can draw the conclusion, or ‘derive’, that staff appearance makes very little impact on customers’ overall judgement of the supplier. Statistically, the correlation coefficient of 0.1 tells us that the two variables have virtually no relationship with each other.

On the other hand, Figure 4.3 shows that staff helpfulness achieves an extremely high correlation coefficient of 0.9, which means that it has a very strong relationship with overall satisfaction. The scatter plot of the 20 imaginary customers that have taken part in the survey shows that each one gives a very similar score for their satisfaction with staff helpfulness and their overall satisfaction with the supplier. There are no customers who think the supplier is very good even though the staff are unhelpful, or vice versa. We can therefore conclude that staff helpfulness makes a very high impact on customers’ overall judgement of that supplier.

KEY POINT
High impact scores reflect factors at the top of customers’ minds when they think of an organisation.

FIGURE 4.3 High correlation

[Scatter plot of overall satisfaction (y axis, 1-10) against satisfaction with staff helpfulness (x axis, 1-10) for 20 customers]


It is very unusual to produce such a wide range of correlation coefficients in a real customer satisfaction survey. A more typical range is shown in Figure 4.4, which compares the importance and impact scores generated by a survey of restaurant customers. It demonstrates how some requirements, e.g. ambience, décor and price in this example, were not scored particularly highly for importance by customers but were making a bigger impact on their overall judgement of the restaurant. Conversely, there can be requirements that are scored highly for stated importance, cleanliness of the toilets being a good example in this case, that actually make little difference to customers’ overall judgement of the supplier – a classic given. In Chapter 5 we will explain how to use stated importance and impact in an exploratory survey to make absolutely certain that the main survey asks the right questions.

4.5 Bivariate and multivariate techniques

4.5.1 Correlation

Whilst the use of statistical techniques to derive what is important to customers (‘indirect’ methods) is widely advocated in the CSM literature2,6,10,11,12, there is less agreement on the best technique to use. In section 4.4 we explained the use of correlation to derive importance, and the impact data shown in Figure 4.4 were calculated using a Pearson’s correlation coefficient. However, other statistical techniques are used by some CSM practitioners for calculating derived importance; most commonly multiple regression.

FIGURE 4.4 Importance and impact scores

Customer requirement             Importance   Impact
Cleanliness of the tableware        9.16        0.49
Cleanliness of the toilets          8.91        0.30
Cleanliness of the restaurant       8.88        0.48
Quality of food                     8.84        0.63
Professionalism of staff            8.40        0.59
Friendliness of staff               8.32        0.54
Welcome on arrival                  8.16        0.40
Ambience                            8.13        0.53
Air quality                         8.02        0.40
Availability of food                7.40        0.37
Seating                             7.29        0.37
Choice of food                      7.27        0.42
Décor                               6.86        0.45
Layout of the restaurant            6.84        0.38
Price of food                       6.82        0.48


A bivariate correlation, such as Pearson’s Product Moment Correlation, involves correlating each requirement separately against overall satisfaction. This provides a very accurate measure of the extent to which each individual attribute co-varies with overall satisfaction, as illustrated in Figures 4.2 and 4.3. A high correlation coefficient, as recorded by ‘quality of the food’ in Figure 4.4, indicates that the satisfaction scores given for ‘quality of the food’ and the scores given for the overall satisfaction question contain a large amount of shared information. This is illustrated in Figure 4.5. The actual amount of shared information can be quantified by squaring the correlation coefficient, which is expressed as r². So in the case of ‘quality of the food’, the coefficients would be:

r = 0.63
r² = 0.40

In other words, 40% of the information in ‘quality of food’ is shared with the information in ‘overall satisfaction’.

4.5.2 Collinearity

A characteristic of CSM data is that surveys are based on a number of customer requirements, many of which are quite similar to each other. Consequently, there is shared information amongst the requirements as well as between each requirement and overall satisfaction. This phenomenon is known as collinearity and is demonstrated in Figure 4.6, the correlation matrix for the restaurant survey. This shows that requirements which are different aspects of the same topic, e.g. the three cleanliness attributes, correlate quite strongly with each other. So ‘cleanliness of the restaurant’ correlates highly with both ‘cleanliness of the tableware’ and ‘cleanliness of the toilets’, and there is quite a lot of correlation across many of the requirements.

KEY POINT
Collinearity means that information is shared across attributes and is a characteristic of CSM data.
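The collinearity can be seen directly by producing the full correlation matrix, as in Figure 4.6. The sketch below is illustrative only, with invented respondent scores; the point is that the off-diagonal correlations among the requirements themselves are far from zero.

import pandas as pd

# Invented respondent-level scores (10-point scale) for a handful of requirements.
df = pd.DataFrame({
    "overall":     [9, 2, 8, 7, 10, 5, 6, 9, 3, 8],
    "quality":     [9, 3, 8, 6, 10, 5, 7, 9, 2, 8],
    "choice":      [8, 4, 7, 6,  9, 5, 7, 8, 3, 7],
    "cleanliness": [7, 5, 8, 7,  8, 6, 6, 7, 5, 8],
})

# Full Pearson correlation matrix, as in Figure 4.6.
matrix = df.corr(method="pearson").round(2)
print(matrix)

# Flag pairs of requirements (ignoring 'overall') that share a lot of
# information -- the collinearity discussed above.
requirements = [c for c in df.columns if c != "overall"]
for i, a in enumerate(requirements):
    for b in requirements[i + 1:]:
        if matrix.loc[a, b] > 0.5:
            print(f"collinear pair: {a} / {b} (r = {matrix.loc[a, b]})")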

FIGURE 4.5 Shared information

[Diagram of two overlapping ellipses, ‘quality of the food’ and ‘overall satisfaction’, divided into regions a, b and c, with the overlap representing the shared information]


Since it only compares one attribute at a time with overall satisfaction, a bivariate correlation completely ignores all this collinearity, as illustrated in Figure 4.7. As we know, ‘quality of food’ correlates with ‘overall satisfaction’ (ellipse A), as does ‘choice of food’ (ellipse B), and they also share information with each other (ellipse C). Since a bivariate correlation is oblivious to everything outside the information contained in the two variables it is comparing, it double counts the shaded area (D), including it in the coefficient for ‘choice of food’ as well as ‘quality of food’.

KEY POINT
A bivariate correlation ignores the collinearity amongst the customer requirements.

Since the collinearity is multiplied across many requirements, as shown in Figure 4.5, there is a lot of double counting going on. For this reason some CSM commentators claim that correlation is an inappropriate technique for customer satisfaction data13, arguing that it is necessary to use a multivariate technique such as multiple regression.

FIGURE 4.6 Correlation matrix

[Table of pairwise Pearson correlation coefficients between overall satisfaction and each of the 15 restaurant requirements (quality, price, availability and choice of food; cleanliness of the tableware, toilets and restaurant; air quality; professionalism, welcome on arrival and friendliness of staff; ambience, layout of the restaurant, seating and décor)]


4.5.3 Multiple regression

Multiple regression simultaneously looks at the information that all the requirements share with overall satisfaction. This removes the collinearity problem by eliminating all double counting, but often produces a very different outcome, as shown in Figure 4.8.

All the multiple regression scores (known as beta coefficients) are lower than their corresponding correlation coefficients, simply because the double counting has been eliminated.

FIGURE 4.8 Coefficients from correlation and multiple regression

Customer requirement             Correlation   Multiple regression
Quality of food                      0.63             0.35
Price of food                        0.48             0.07
Availability of food                 0.37            -0.01
Choice of food                       0.42             0.07
Cleanliness of the tableware         0.49             0.02
Cleanliness of the toilets           0.30             0.01
Cleanliness of the restaurant        0.48             0.01
Air quality                          0.40             0.07
Professionalism of staff             0.59             0.20
Welcome on arrival                   0.40            -0.04
Friendliness of staff                0.54             0.17
Ambience                             0.53             0.16
Layout of the restaurant             0.38            -0.01
Seating                              0.37             0.02
Décor                                0.45             0.10

FIGURE 4.7 Correlation and collinearity

[Diagram of three overlapping ellipses, ‘quality of food’, ‘choice of food’ and overall satisfaction, with regions A, B, C and D marking the overlaps referred to in the text]


Closer inspection, however, reveals some major differences in the relative importance implied by the two columns of data. The beta coefficients suggest that none of the cleanliness requirements makes any difference to overall satisfaction, and the same applies to many other attributes such as ‘availability of food’, ‘welcome on arrival’, ‘layout of the restaurant’ and ‘seating’. Moreover, three requirements show negative coefficients, albeit only tiny ones, suggesting that, to take one of the examples, the less pleasant the welcome on arrival, the more the customers like it – a clearly nonsensical conclusion to draw. The reason why this happens is illustrated in Figure 4.9.

Correlations represent the individual contribution made by each requirement to overall satisfaction. As we saw in Figure 4.7, ‘quality of food’ and ‘choice of food’ co-vary, but since the Pearson’s correlation relates them individually to overall satisfaction, it obviously double counts any shared information, which therefore appears in the coefficients for both requirements. Multiple regression does no double counting since it identifies the incremental contribution to overall satisfaction of each requirement when combined with all the remaining requirements. Unlike correlation, it cannot allocate area B in Figure 4.9 to both quality and choice of food, so it reflects only the incremental contribution made by each requirement in the beta coefficients. As shown in Figure 4.8, the incremental contribution made by ‘quality of food’ to explaining overall satisfaction is far greater than that made by ‘choice of food’. Consequently, the relative impact of some requirements is over-emphasised whilst that of many others is heavily under-stated. In the real world, of course, customers at the restaurant do not consider the incremental amount of satisfaction they derive from each of the 15 requirements whilst holding all the others constant!

KEY POINT
Multiple regression eliminates the collinearity, but in doing so under-states the impact of many of the requirements.
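The contrast between the two techniques can be reproduced on synthetic data. In the sketch below (illustrative only, not the book’s calculation), two deliberately collinear attributes both correlate strongly with overall satisfaction, yet the standardised regression allocates most of the shared information to only one of them, mirroring the pattern in Figure 4.8.

import numpy as np

# Invented data: two collinear attributes and an overall satisfaction score,
# generated at random rather than taken from a survey.
rng = np.random.default_rng(0)
quality = rng.integers(1, 11, size=200).astype(float)
choice = quality + rng.normal(0, 1.5, size=200)        # co-varies with quality
overall = 0.7 * quality + 0.1 * choice + rng.normal(0, 1.0, size=200)

X = np.column_stack([quality, choice])

# Bivariate correlations: each attribute related to overall satisfaction on its own.
correlations = [np.corrcoef(X[:, i], overall)[0, 1] for i in range(X.shape[1])]

# Beta coefficients: ordinary least squares on z-scored variables, so the shared
# information is allocated rather than double counted.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (overall - overall.mean()) / overall.std()
betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)

print("correlations:", np.round(correlations, 2))   # both high
print("betas:       ", np.round(betas, 2))          # most weight on one attribute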

FIGURE 4.9 Multiple regression and collinearity

[Diagram of three overlapping ellipses, ‘quality of food’, ‘choice of food’ and overall satisfaction, with regions A, B and C marking the overlaps referred to in the text]


Some market researchers have been very attracted to multiple regression because of its focus on a small number of so-called ‘key drivers’. However, the process it uses to eliminate collinearity often distorts the results – cleanliness and choice of food are important to people dining out and do influence their judgement of the restaurant. The measures provided by correlation therefore more accurately reflect the relative impact made by the attributes in customers’ minds. For this reason we do not recommend the use of multiple regression to derive measures of impact or importance in CSM, a view supported by other commentators such as Myers6 and Szwarc14. As Myers points out6, “Multiple regression coefficients can be distorted if collinearity among attributes is high (as it often is). The problem here is that multiple regression coefficients can be very misleading because one attribute will get a high coefficient while a very similar one will get a much lower coefficient.”

KEY POINT
By allocating shared information to only one requirement, key driver analysis produced by multiple regression often produces misleading conclusions.

4.5.4 The complete picture

Gustafsson and Johnson found that stated importance, compared to statistically derived measures, correlated relatively more strongly with loyalty than satisfaction, supporting the view that stated importance provides a more stable measure and vital information on the longer-term drivers of customers’ behaviours15. On the other hand, correlation offers a good reflection of the issues that are currently ‘top of mind’ with customers and should therefore be seen as a measure of impact rather than importance14. They’re simply measures of different things, so the best understanding of customers’ requirements and their relative importance is produced by using both measures. In the next chapter we will explain how this will help to ensure that the main survey questionnaire asks the right questions, and in Chapter 10 how it should be used in the analysis of the main survey.

4.6 Conclusions

1. If a CSM process is to provide an accurate measure of how satisfied or dissatisfied customers feel, it must be based on the ‘lens of the customer’, using the same criteria that the customers use to judge the organisation.

2. Stated importance is a clear and simple measure of what customers say is important. Whilst it is heavily criticised by some for emphasising givens, it is not only the most accurate measure of what is important to customers, it is the only one.

3. All statistically derived measures reflect impact rather than importance, and of these, correlation provides the best indication of the extent to which the different attributes are currently influencing customers’ judgement of the supplier.


4. Impact measures or “key drivers” are changeable, reflecting current problems rather than actual importance. Stated importance provides a more stable measure.

5. For a full understanding of ‘the lens of the customer’, organisations should therefore use both stated importance and impact for CSM.

References

1. Which? Magazine (1999) "Off the rails", January, pages 8-11

2. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", John Wiley and Sons, San Francisco, California

3. Jaccard, Brinberg and Ackerman (1986) "Assessing Attribute Importance: A comparison of six methods", Journal of Consumer Research 12 (March)

4. Heeler, Okechuku and Reid (1979) "Attribute Importance: Contrasting measurements", Journal of Marketing Research 8 (August)

5. Griffin and Hauser (1993) "The Voice of the Customer", Marketing Science 12 (Winter)

6. Myers, James H (1999) "Measuring Customer Satisfaction: Hot buttons and other measurement issues", American Marketing Association, Chicago, Illinois

7. Foote, Nelson (1961) "Consumer Behavior: Household Decision Making Volume 4", New York University Press, New York

8. Myers and Alpert (1968) "Determinant Buying Attributes: Meaning and Measurement", Journal of Marketing 32 (October)

9. Alpert, Mark (1971) "Identification of Determinant Attributes – A Comparison of Methods", Journal of Marketing Research 8 (May)

10. Cronin and Taylor (1992) "Measuring service quality: An examination and extension", Journal of Marketing 56

11. Parasuraman, Berry and Zeithaml (1988) "SERVQUAL: a multiple-item scale for measuring perceptions of service quality", Journal of Retailing 64(1)

12. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California

13. Allen and Rao (2000) "Analysis of Customer Satisfaction Data", ASQ Quality Press, Milwaukee

14. Gustafsson and Johnson (2004) "Determining Attribute Importance in a Service Satisfaction Model", Journal of Service Research 7(2)


CHAPTER FIVE

Exploratory research

As we said in the last chapter, an accurate measure of customer satisfaction will be produced only if the survey is based on ‘the lens of the customer’. To do this, it is essential to talk to customers before the start of the survey to find out what’s important to them. This is called exploratory research. The questionnaire for the main survey will then be based on the things that are most important to customers.

At a glance
In this chapter we will:

a) Introduce the concept of qualitative research

b) Explain how to conduct depth interviews for CSM

c) Explain how to conduct focus groups for CSM

d) Outline the advantages of an exploratory survey

e) Consider how often to repeat exploratory research

5.1 Qualitative research

Beginning the exercise with ‘exploratory research’ is widely advocated. Most commonly, exploratory research will be qualitative1,2,3. Qualitative research involves getting a lot of information from a small number of customers. Lots of information is needed because at this stage an in-depth understanding of what’s important to customers is essential for including the right questions on the main survey questionnaire. To achieve this it is important to get respondents talking in detail about their experiences, their attitudes and, especially in consumer markets, their emotions and feelings4,5. So, from qualitative research a large amount of in-depth information is gathered from each customer, producing a lot of understanding. However, since it involves only small sample sizes it’s not quantitative, so it’s not possible to draw statistical inferences from it. This is of no concern since the only purpose of exploratory research is to understand customers’ requirements sufficiently well to ask the right questions in the main survey. It is the main survey, which is undertaken with a larger sample and is quantitative, that establishes statistically reliable measures of importance as well as satisfaction. However, it can also be useful to include a quantitative element at the exploratory stage before the questionnaire for the main survey or ongoing tracking is finalised1,6,7, and this option will also be examined later in the chapter. Initially, however, we will examine the two main qualitative exploratory research techniques: depth interviews and focus groups.

5.2 Depth interviews

Advocated by Johnson and Gustafsson8, depth interviews are usually face to face and one to one. The duration of a depth interview can range from 30 to 90 minutes depending on the complexity of the customer-supplier relationship. Depth interviews are more commonly used in business to business markets, where the customers are other organisations, so we will describe the depth interview process mainly in that context.

Due to the qualitative nature of exploratory research and to the relatively low variance in customers’ requirements, sample sizes do not need to be large. Around 12 depth interviews are typically adequate for CSM exploratory research in a B2B market. As illustrated in Figure 5.1, Griffin and Hauser9 identified that 12 depth interviews will identify at least 90% of customers’ requirements. A very small customer base might need fewer. A large and complex customer base would need more interviews to ensure a good mix of different types of customers, such as:

High value and lower value customers
Customers from different business sectors
Customers from different channels such as manufacturers and distributors
Different geographical locations
A range of people from the DMU (decision making unit).

FIGURE 5.1 Sample size in exploratory research

[Chart of the percentage of customers’ needs identified (y axis, 0-100%) against the number of focus groups or face to face interviews conducted (x axis, 0-30), with separate curves for face to face interviews and focus groups]


5.2.1 Who is the customer?

It has been established for many years that in all but the smallest organisations, supplier selection decisions are not normally made by a single individual but by a ‘DMU’, sometimes called a ‘buying center’10,11,12. As far as customer satisfaction is concerned, there will be several individuals who are affected in some way by a supplier’s product or service, and they will communicate with each other, formally or informally, to determine the customer’s level of satisfaction as well as various loyalty decisions. The members of the DMU are typically identified in terms of the roles they play, since job titles vary enormously across organisations. The five traditional DMU roles are13:

Buyers
Typically in the purchasing department, buyers usually place the orders and often have the most contact with suppliers.

Users
The main recipients and users of the product or service, e.g. production managers in a manufacturing setting.

Deciders
The ultimate decision maker could be anybody from the CEO to another member of the DMU to a whole committee.

Influencers
Although ‘deciders’ are often seen as making the decision, they may be heavily influenced by other colleagues, especially those with technical knowledge such as engineers.

Gatekeepers
Gatekeepers control the flow of information, often shielding DMU members from suppliers.

Exacerbated by gatekeepers, suppliers’ knowledge of DMU members may be incomplete14,15, and this can be a problem for the depth interview and main survey sampling process. The fact is, there is no standard DMU composition, and DMU size can vary widely from 2 or 3 individuals to 15-20, though 5 to 6 is the most common size16. The relative influence of DMU members is also highly variable17.

The exploratory research, and later the main survey, must reach the full spectrum of individual roles in the DMU if it is going to be accurate. For qualitative exploratory research, it is valid to select the participants (organisations and individuals) using ‘judgmental sampling’18 (using good judgment to ensure that the small sample is representative of the business). For quantitative surveys sampling will need to be much more sophisticated, and this is covered in Chapter 6.

The fundamental purpose of exploratory research is therefore to generate a list of customers’ most important requirements to use as the basis for the main survey questionnaire. To achieve this, a depth interview should be conducted according to a carefully defined process, which is outlined in the next four sub-sections of this chapter.

5.2.2 Indirect questioning


A depth interview is not a conventional interview where the interviewer asks a sequence of short questions, each followed by short answers. To achieve the objectives listed above, it is essential to get the customer talking as much as possible, so it is most effective to think in terms of asking indirect questions, such as the following ‘decision process’ question:

“I’d like you to imagine that you didn’t have a supplier of (product/service) and you had to start with a blank piece of paper and find one. I wonder if you could talk me through what would happen in this organisation from the first suggestion that you might need a supplier of (product/service) right through to the time when a supplier has been appointed and evaluated. As you talk me through the process, perhaps you could also point out the different people in your company who might get involved at different times. What sort of things would each of those people be looking for from this supplier and what roles would they each play in the decision making process?”

A more sophisticated approach than the elicitation method described in Chapter 4.3, this is not a question that will be answered briefly. Indeed, in some organisations it will stimulate a complicated explanation that may continue for 15 or 20 minutes. While the respondent is talking through this process, the interviewer should jot down everything that seems to be important to anybody in the DMU – a list of customer requirements. Once the customer has outlined everything he can think of, it is perfectly valid to prompt with any factors that may be important to customers but have not been mentioned. The key thing is not to lead the respondent19.

KEY POINT
Ask indirect questions to maximise the amount of information provided by customers.

An alternative indirect question that is simple but effective is to ask customers to describe their ideal supplier. This could involve the customer talking through an imaginary guided tour around this perfect supplier’s organisation. Another approach is to ask customers to imagine that they are about to leave their job. They must therefore brief someone about everything they would need to know to manage the company’s supply of (the product or service in question), including meeting the requirements of all their internal customers. This is a very effective way of understanding the composition of the DMU plus the criteria that its members will be using to judge the supplier.

5.2.3 Customer requirements

After some prompting the depth interview will have generated a very comprehensive list of things that are important to the customer, and it might be a very long list; often sixty or more customer requirements in a B2B market, which is far too many for the main survey questionnaire. From this long list it is therefore necessary to identify the things that are most important to customers. One way is to simply ask them, by going down the list and asking the customer to rate each item for importance. The trouble with this approach is that virtually everything ends up being important to customers. It is more productive to use a ‘forced trade-off’ approach to provide a much more accurate indication of the relative importance of a list of requirements. There are proprietary trade-off techniques such as Conjoint Analysis, but the problem with many of these is that they rely on making a large number of trade-offs, which get more than a little tedious if 50 or 70 customer requirements have been suggested! For CSM exploratory research it is therefore necessary to use a technique such as the ‘top priority trade off’, which provides the required degree of accuracy but can be administered in the 20 to 30 minutes that will now remain of a one hour depth interview.

5.2.4 Top priority trade off

Customers must initially select their top priority – the one thing they would choose if they could select only one item from the entire list of requirements. Then the customer gives their top priority a score out of 10 for its importance to them, where 10 is extremely important and 1 is of no importance at all. They will almost invariably score 10 since it is their top priority. (A ten point numerical rating scale is only one of many scales used in market and social research. A full assessment of rating scales is provided in Chapter 8.) Having established a scale, customers can then be asked to score everything else for importance compared with their top priority. Using this trade-off approach will provide a more accurate reflection of the relative importance of each item and will generate a far wider range of scores than simply going down the list without establishing the ‘top priority’ benchmark.

Once the depth interviews are completed it is a simple task to total the scores given for each requirement. Those achieving the highest scores are the most important customer requirements, and these are the items that should be included on the questionnaire for the main survey. For a reasonable length questionnaire, around 15-20 customer requirements for the main survey would be normal, but this topic will be covered in more detail in Chapter 9.
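Totalling the trade-off scores is simple enough to do in a spreadsheet; the sketch below shows the same step in code, with invented interview data, purely to make the ranking logic explicit.

from collections import defaultdict

# Invented scores from three depth interviews: requirement -> score out of 10,
# each benchmarked against the respondent's top priority.
interview_scores = [
    {"Delivery reliability": 10, "Technical support": 8, "Price": 7},
    {"Delivery reliability": 9, "Technical support": 10, "Price": 6},
    {"Delivery reliability": 10, "Price": 8, "Ease of ordering": 5},
]

# Total the scores given for each requirement across all interviews.
totals = defaultdict(int)
for scores in interview_scores:
    for requirement, score in scores.items():
        totals[requirement] += score

# The highest-scoring requirements (typically the top 15-20 in a real study)
# go forward to the main survey questionnaire.
for requirement, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{requirement}: {total}")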

5.3 Focus groups

Some organisations prefer using focus groups for qualitative research since the group dynamics often stimulate more customer requirements than would be generated by one-to-one depth interviews. Focus groups are also very good for clarifying ‘lens of the customer’ thought processes and terminology, ensuring not only that the main survey questionnaire will be relevant to customers but also that it will minimise misinterpretation, since it will use the words used by customers to describe their requirements20.

It is normal for CSM exploratory research to run four to eight focus groups with around eight21,22 customers in each, although more groups may be held for a complex customer base requiring segmentation. As shown earlier in Figure 5.1, this will normally identify at least 90% of customers’ requirements. Where there are segments of customers who may hold very different views it is helpful to segment the groups. For example, the views of younger people and older people towards health care or pensions are likely to differ considerably. If so, it is not productive to mix them in the same focus group but better to run separate groups for younger and older customers. When running focus groups the following elements should be considered.

5.3.1 Recruitment

Focus group participants can be invited personally at the point of sale, through street interviews or by telephone. It is important to provide written confirmation of all the details, such as time, location and anything respondents need to bring with them, e.g. spectacles. As well as reminding people the day before, usually by telephone, it is also normal to offer them an incentive23 to provide an extra reason to turn out of the house on a cold winter night instead of settling down to watch TV. In the UK, incentives can vary from £20 to £100 or more for very wealthy individuals; £50 is average. Higher rates are needed in London than in the provinces, and the more affluent the customer, the larger the incentive needs to be. Another critical factor is the strength of the relationship between the customer and the supplier. The weaker it is, the more difficult it can be to generate any interest in or commitment to the focus group on the part of the customers. We were once simultaneously recruiting and running two sets of focus groups for two clients in financial services. One was in telephone banking, and even with very high incentives recruitment was difficult and the attendance rate poor. The customers had no relationship with, and little commitment to, the supplier. The second client was a traditional building society. Many customers had held mortgages or savings accounts over a long period, personally visited their local branch and were loyal customers. Although the topics for discussion were virtually identical for both groups, the building society customers were far easier to recruit and the attendance rate was almost 100%.

5.3.2 Venues

Focus groups can be run in hotels, at the supplier’s office, or even on site (e.g. bars, restaurants, sports venues) – anywhere with room for customers to sit round a table in reasonably comfortable and quiet surroundings. It is a good idea to consider what kind of venue will make the participants feel most relaxed. It needs to be somewhere they are familiar with and somewhere they see as convenient. The local pub’s function room, for example, will often work better for attendance rates than the smart hotel further away. In many major cities there are specialist ‘viewing facilities’ for hosting focus groups. Though expensive, they enable easy videoing and the ability to view the proceedings live. Information on finding these venues is provided in Appendix 2.

5.3.3 Running focus groups

Focus groups are run by a facilitator (sometimes called a moderator) who will explain the purpose of the group, ask the questions, introduce any activities and generally manage the group. The facilitator clearly needs to be an excellent communicator and an extremely good listener. They must also be strong enough to keep order, insist that only one participant speaks at a time, prevent any verbose people from dominating the group and adhere to a time schedule. As well as needing a high level of expertise, the facilitator must also be objective, which is why it is always preferable to use a third party facilitator for focus groups, especially for CSM22.

The group will often start with a few refreshments, giving an opportunity for participants to chat informally to break the ice. Once the discussion starts it is very important to involve everybody right at the beginning, so a focus group normally starts with a few easy questions that everyone in the group can answer21,23. Examples include simple behavioural questions such as the length of time they have been a customer, the products or services they buy and the frequency of purchase. Also suitable are simple customer experience questions, such as examples of great or poor service they have received in the sector concerned.

5.3.4 Identifying customers’ requirements

Once everyone has said something, a CSM focus group is effectively divided into two parts. The first part involves achieving the over-riding objective of CSM exploratory research – identifying the things that are important to customers when selecting or evaluating suppliers. This can be done by simple questioning and discussion, but it is more effective to use ‘projective techniques’24 to stimulate discussion, encourage some lateral thinking and, often, generate ideas that would not have resulted from normal question and answer sessions. There are many projective techniques that are used in qualitative market research, but the following are particularly appropriate to CSM exploratory research.

(a) Theme boards

One projective technique is grandly known as thematic apperception, which simply means using themes or images to uncover people’s perceptions25. Examples of the technique in action include asking people to draw pictures or cut pictures out of magazines which symbolise or remind them of the relevant area of customer activity. Less time consuming is to use theme boards as the stimulus material. There would typically be one board showing images which are positive or congruent with the brand concerned and one showing negative or incongruent images relating to the product/service in question. Theme boards will stimulate considerable discussion about customer experiences that have made an impact on participants and, consequently, things that are important to them.

(b) Creative comparisons

A creative comparison is an analogy, comparing an organisation or product which may have few distinctive features with something else that has far more recognisable characteristics. A common example would be a question such as: “If ABC Ltd were an animal, what kind of animal would it be?” Answers may range from sharks to elephants, but having elicited all the animals from the participants, the facilitator will ask why they thought of those particular examples. The reasons given will start to uncover customers’ perceptions of the company in question. In addition to animals, creative comparisons can be made with people, such as stars from the media or sporting worlds, all with the objective of highlighting things that are important to customers26.

(c) The Friendly Martian

The Friendly Martian is an excellent projective technique for getting respondents to talk through the decision process (the way they make judgements between one supplier and another) in order to get some clues about which things are important to them as customers. In CSM focus groups for a restaurant, for example, the Friendly Martian technique would be introduced as follows:

“Let’s imagine that a Friendly Martian (an ET-type character), came down from outer space. He’s never been to the earth before, and you had to explain to this Friendly Martian how to arrange a meal out in a restaurant with some friends. What kind of things should he look out for, what does he need to know, what kind of things should he avoid? You’ve got to help this little guy to have a really good night out and make sure he doesn’t end up making any mistakes. What advice would you give him?”

Since the little Martian doesn’t know anything, participants will go into much more detail and mention all kinds of things that they would have taken for granted if a direct question had been asked.

5.3.5 Prioritising customers’ requirements

Having used some of the techniques outlined above to identify a long list of things that are of some importance to customers, the remainder of the focus group will be much more structured, following broadly the same steps that we outlined for the depth interview. First, list on a flip chart all the customer requirements that have been mentioned in the discussions during the first half of the focus group and see if anybody can think of any more to add. Next, ask all participants to nominate their top priority and score it out of ten to establish a clear benchmark in their minds. This should be done individually, so it is best to give out pencils and answer sheets enabling everybody to write down their individual views.

Having established everybody’s top priority, each participant can read down the list, again individually, and give every customer requirement a score out of ten to denote its relative importance. Having completed all the groups, you can again add up the scores given by all the participants and, typically, the top 15-20 requirements will be used for the questionnaire for the main survey.

5.4 Quantitative exploratory surveys

As we have said, the qualitative phase will identify many factors of importance to customers – too many for a reasonable length questionnaire for the main survey. To be absolutely certain that the questionnaire focuses on the right issues (i.e. the factors most responsible for making customers satisfied or dissatisfied), it can be very helpful to add a quantitative element to the exploratory research. Usually this would take the form of a telephone survey, though different methods of conducting quantitative surveys will be reviewed in detail in Chapter 7. To make the exploratory research quantitative, a statistically reliable sample size is necessary. This topic will be covered in detail in Chapter 6, but the minimum sample size for a CSM quantitative exploratory survey would be 200 interviews.

As we said in Chapter 4, understanding what is important to customers is not as simple as it may appear, so a reliable sample size provides the opportunity to use correlation techniques as well as stated importance to produce a fully rounded view of how customers judge the organisation. Figure 5.2 shows the stated importance scores out of 10 and the impact coefficients for a bank. The first column lists the attribute number, showing the order in which the attributes were listed on the exploratory survey questionnaire. If the questionnaire for the main survey is based on the 15 most ‘important’ requirements, it would not include ‘friendliness of staff’ or ‘appearance of staff’, both of which record high ‘impact’ scores. If it is based on all requirements scoring over 8 for importance, ‘friendliness of staff’ would squeeze in, but ‘appearance of staff’ would remain excluded. A more rounded view would be provided by the total importance matrix shown in Figure 5.3.

FIGURE 5.2 Importance and impact scores for personal banking

No.   Customer requirement                 Importance   Impact
10    Confidentiality                         9.62       0.31
 4    Quality of advice                       9.45       0.66
14    Efficiency of staff                     9.39       0.48
 2    Reliability of transactions             9.21       0.31
15    Expertise of staff                      9.01       0.46
 7    Treating you as a valued customer       8.99       0.61
 6    Ability to resolve problems             8.92       0.69
 3    Empowerment of staff                    8.86       0.59
 1    Speed of service in branch              8.73       0.42
18    Reputation                              8.64       0.65
 8    Flexibility of bank                     8.49       0.66
 5    Level of personal service               8.44       0.45
20    Cleanliness of the branch               8.36       0.27
16    Interest rates on borrowings            8.36       0.53
19    Accuracy of fees and charges            8.28       0.36
22    Speed of response to applications       8.15       0.30
17    Interest rates on savings               8.07       0.43
11    Friendliness of staff                   8.01       0.49
24    The telephone service                   7.88       0.34
12    Ease of access to branch                7.67       0.11
 9    Opening hours                           7.44       0.21
25    ATM service                             7.42       0.12
13    Appearance of staff                     7.23       0.51
21    Layout of the branch                    7.04       0.29
23    Décor of the branch                     6.86       0.41


The total importance matrix is constructed from the stated importance scores (y axis) and the impact coefficients (x axis). The axis scales are simply based on the range of scores for each set of data. The attribute numbers from the first column of Figure 5.2 are shown on the chart as there is not sufficient space for the requirement names. The requirements that should be carried forward to the main survey and subsequent tracking surveys are those closest to the top right hand corner. Based on the gaps between the diagonal clusters of attributes, the decision of what to include on the main survey questionnaire is relatively easy for the bank, since there is a clear gap between the requirements above, or extremely close to, the middle diagonal and the rest. This would result in a main survey questionnaire containing 20 customer requirements, which is the maximum length advisable (see Chapter 9), but does not exclude any attribute that has a strong case for inclusion based on either importance or impact scores.

KEY POINT
Although necessary only for organisations with a mature CSM process, using importance and impact scores from a statistically reliable sample will provide additional certainty that the main survey is asking the right questions.

5.5 Repeating the exploratory research
It is not necessary to do exploratory research every time a customer satisfaction survey is undertaken. It is essential to do it before the first survey, or for any organisation that currently has a survey based on 'lens of the organisation' questions.

FIGURE 5.3 Total importance matrix
[Scatter chart plotting the 25 attribute numbers by impact (x axis, low to high) against stated importance (y axis, low to high). The chart is divided into four zones – Zone 1: Critical, Zone 2: Very Important, Zone 3: Important and Zone 4: Marginal – with the attributes closest to the top right hand corner (high importance, high impact) being the strongest candidates for the main survey.]


Of course, taking the concept of the lens of the customer to its logical conclusion, one can't assume that the factors determining customer satisfaction tomorrow will be the same as those responsible for it today. For example, environmental or ethical criteria may play a much bigger part in customers' judgement of organisations in the future than they do today – or they might not. The point is, we just don't know. For an accurate measure of customer satisfaction, the survey must always be based on the same criteria that customers use to make their satisfaction judgement. To this end, exploratory research for CSM would normally be repeated every three years to accommodate any newly emerging requirements that are important to customers, or to confirm that the survey is still asking the right questions. It is not necessary to always ask the same questions for tracking comparability, since the headline measure of customer satisfaction (see Chapter 11) is a measure of the extent to which the organisation is meeting customers' requirements. If their requirements change, it is in fact essential that the questionnaire also changes to maintain the comparability of the measure. Consider the implication of not repeating the exploratory research for ten years. Whilst one could argue that asking exactly the same questions gave comparability, the customer satisfaction index would be a measure of the extent to which the company is meeting the requirements customers had ten years ago!

Conclusions
1. To ensure that CSM surveys are based on 'the lens of the customer', customers' requirements must be accurately identified at the outset.
2. The only way to do this is to conduct qualitative exploratory research with customers.
3. Depth interviews are typically used for exploratory research in business markets whilst focus groups are more common in consumer markets.
4. For a greater degree of statistical confidence that the main survey is asking the right questions, a quantitative exploratory survey can also be conducted, with the final decision based on a combination of importance and impact scores.
5. The 15 to 20 requirements of most importance to customers will usually form the basis of the questionnaire for the main survey.
6. Exploratory research should be repeated at least every three years to ensure that the survey remains focused on customers' most important requirements.



CHAPTER SIX

Sampling

Asking the right questions is the most fundamental factor determining the accuracy of a customer satisfaction measure, but it is also essential to ask them of the right people. This is a matter of accurate sampling, and to highlight the problems caused by unrepresentative samples, let's briefly consider the issue of 'voodoo polls'. With the widespread adoption of instant communication methods such as mobile phones, texting and email, the media are awash with 'voodoo polls'. They are a particular favourite of radio, where listeners are encouraged to make their views known on a topical issue of the day by sending a text message or an email. The radio programme will later announce the result of their 'survey': "76% of the British public think the death penalty should be restored" or "58% of listeners think the Prime Minister should resign". It is vital to understand the difference between a controlled survey with a representative sample of the targeted population and any kind of voluntary forum such as a phone-in or its electronic equivalent. Voluntary exercises, where anyone motivated to do so phones in, emails or sends a text, suffer notoriously from unrepresentative samples dominated by people holding extreme views. These exercises have been labelled 'voodoo polls' and their results often bear little relation to what most people think, so are completely unreliable. Although the necessity of basing the results of a CSM survey on a representative sample of customers is widely acknowledged, the technical aspects of doing so are little understood and often neglected. Many CSM surveys are no better than voodoo polls.

At a glance
This chapter will explain the theory and practicalities of sampling. In particular we will:

a) Explore the statistical basis for sampling theory.

b) Demonstrate how to generate a sample that is unbiased as well as representative.

c) Explain how large a sample needs to be for CSM.

d) Examine the special requirements of sampling in business-to-business markets.

6.1 Statistical inference


Most scientific principles were developed by drawing conclusions or inferences from 'observations', typically generated by experiments. If the observations were representative of a much larger number of similar occurrences (like Newton's apple and gravity), an important scientific fact was discovered. For this process to be of any use, scientists need to be confident that their sample of observations really does apply to the total population of such phenomena, events or behaviours1. First of all, this means that the sample of observations must be representative, i.e. without bias. You couldn't say that water freezes at 0°C unless your experiments covered the full range of temperatures at which it might freeze. The way to ensure that samples are representative is to select them randomly. For the water freezing experiment you would randomly select your sample of observations from a much larger number of tests that were conducted. If you wanted to be confident that 76% of the British public really do want the death penalty back, you would have to ensure that your sample was randomly generated, so that it was totally unbiased, rather than rely on people who felt sufficiently opinionated to phone a radio programme. Secondly, since there is always some error associated with the process of experimentation or measurement, it is important to be able to quantify the amount of error, or 'margin of error', that might apply to the results.

6.2 Measurement error
As long ago as the 17th century scientists studying astronomy and geodesy were trying to understand why measures of the same thing, such as size, shape or distance, often differed, albeit only slightly. Was it inaccurate instruments or errors by the scientists? The difference between individual observations and the real measure became known as observation or measurement error2. The types of error being considered in the 17th century would now be called systematic error.

6.2.1 Systematic error
Imagine we wanted to establish the average height of adult males in the UK. A very reliable way to do this would be to take a tape measure around the country, physically measure every male over 18 years old and calculate the mean height. Whilst accurate, this would be a very time consuming and costly exercise. It is therefore common practice to base the results on measuring a sample, but to fully understand the outcome, it is necessary to be aware of the types of measurement error that can occur. The first is 'systematic error', which is relatively easy to identify and eliminate. If our sample contained a lot of young men but not many old ones (there was a systematic age bias), we would almost certainly conclude that British males are taller than they really are. This problem can obviously be eliminated by checking that the sample contains the correct proportions of each age group. Another possible measurement error would be a poor tape measure that was systematically over or under recording the real height of the males measured. With either of those errors present, it wouldn't matter how many times we did the survey or what the sample size was; we would always get the answer wrong in a systematic way.


6.2.2 Random error
By the 17th century, scientists began to realise that errors could happen by chance, and this second type of measurement error became known as random error3. This is harder to explain in an intellectual way because, fundamentally, it means bad luck. When we were measuring the men, we could have had a scientifically calibrated measuring device and a sample that was representative according to every demographic variable yet invented. However, if we were unlucky, our randomly sampled young, middle aged and older men could have been slightly shorter than other men of their generation, giving us an average height that was below the real mean for all UK adult males. As we will see in Chapter 11, provided the sample is random and representative, this margin of error or 'confidence interval' can be quantified4.

6.3 Reliable samples
Organisations that want to know the accuracy of their customer satisfaction index, or any other customer satisfaction measures they are monitoring, will have to ensure that their sample is random, representative and large enough. This section explains how these essential elements of a reliable sample are achieved.

6.3.1 Random samples
The technical term for a random sample is a 'probability sample'. Its key characteristic, as far as research is concerned, is that it is totally without bias, because in a probability sample everybody in the population concerned stands an equal chance (probability) of ending up in the sample5. An obvious example of a random sample is a lottery. Each ball or number remaining in the lottery pool stands an equal chance of being the next ball drawn. Clearly, no element of bias can affect the selection of numbers in a lottery. Since absence of bias will be a critical factor in the credibility of a customer satisfaction measure, a probability sample is clearly essential.

KEY POINT
Only a random sample will ensure an unbiased result.

To draw a random sample a clearly defined sampling frame is needed. Broadly speaking it is an organisation's population of customers, but its precise definition requires careful thought6. Organisations that measure customer satisfaction annually would typically include in the sampling frame all customers who have dealt with the business in the last twelve months. However, that would not be very sensible for a call centre measuring the satisfaction of customers that had contacted it to make an enquiry, renew a policy or change a direct debit. For this type of limited customer experience it would be normal to use a much shorter time frame, such as customers contacting the call centre in the last month.


6.3.2 Generating an unbiased and representative sample
To randomly generate a representative sample for a CSM survey, first sort the database into relevant customer segments, for example age and gender. Using the example shown in Figure 6.1, the database is sorted so that all the under 25 year old males are first on the list, followed by all the under 25 females, and so on.

If there were 1,000 customers on the list, the starting point is to generate a random number between 1 and 1,000 and begin sampling from that point. So, if the random number came out as 346, the 346th customer on the list would be the first to be sampled. For a sample of 100, which is 1 in 10 of the population concerned, every 10th customer is sampled. So in this example the process would sample the 356th customer on the database, the 366th, the 376th, and every 10th name thereafter, until arriving all the way back round to the 336th customer on the list. This would produce a random sample of 100 customers. Before the random starting point was generated, every customer on the list stood an equal chance of being included in the sample, so the sample will be completely without bias, and it will also be representative since it will inevitably have sampled 1 in 10 of the customers in each segment. This is known as a systematic random sample and is the best way of producing a sample that is both random and representative7. Systematic random sampling would be problematical only if there was a structure to the database that would cause a particular type of customer to be completely missed by the exercise. For example, if every alternate name on the list was male and every other name female, any sampling interval that was an even number would select only males or only females. Of course, in the real world this is a very unlikely eventuality for CSM, especially if the database is sorted into blocks of segments. In the example described above, the sampling automatically selects the same proportion of both genders and all other segments of interest.
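A minimal sketch of this systematic random sampling procedure is shown below. The customer records, segment fields and sample size are hypothetical; the key steps are sorting the list into segment blocks, picking a random start and stepping through the list at a fixed interval, wrapping around at the end.

```python
# Illustrative sketch of systematic random sampling (hypothetical data).
import random

def systematic_random_sample(customers, sample_size, segment_key):
    """Sort by segment, pick a random start, then take every k-th record."""
    ordered = sorted(customers, key=segment_key)          # blocks of segments
    interval = len(ordered) // sample_size                # e.g. 1000 // 100 = 10
    start = random.randrange(len(ordered))                # random starting point
    # Step through the list from the start, wrapping around to the beginning.
    return [ordered[(start + i * interval) % len(ordered)] for i in range(sample_size)]

# Hypothetical database of 1,000 customers with age band and gender segments.
customers = [{"id": i,
              "age_band": random.choice(["<25", "25-34", "35-44", ">44"]),
              "gender": random.choice(["F", "M"])} for i in range(1000)]

sample = systematic_random_sample(
    customers, sample_size=100,
    segment_key=lambda c: (c["age_band"], c["gender"]))
print(len(sample), "customers sampled, e.g.", sample[0])
```

Because every position on the sorted list is equally likely to be the starting point, each customer has the same 1-in-10 chance of selection, and the fixed interval guarantees proportional coverage of every segment block.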

FIGURE 6.1 Database sorted into segments
[Customer database sorted into blocks by age band (under 25s, 25-34s, 35-44s, over 44s) and, within each age band, by gender (M/F).]


6.4 Sampling in business-to-business markets
In a B2B market, sampling will be a two-step process. First, a randomly selected and representative sample of organisational customers must be generated. Of course, only individuals can be surveyed, not organisations, so the second step involves sampling the individual contacts. They must also be representative and randomly selected.

6.4.1 Sampling the organisations
For many companies in B2B markets that have a strong Pareto Effect in their customer base, systematic random sampling will not produce a satisfactory outcome. If a large proportion of a company's business comes from a small number of high value customers and a much smaller percentage from a very large number of relatively low value customers, any random sampling process will inevitably capture many small customers and few big ones, as shown in Figure 6.2. This would clearly not be representative, so to achieve a sample that is representative as well as unbiased in most B2B markets, stratified random sampling has to be used8.

KEY POINT
To achieve a representative sample, companies with a strong Pareto Effect in their customer base need to use stratified random sampling.

Producing a stratified random sample involves dividing the customers into value segments first and then sampling randomly within each segment. As illustrated in Figure 6.3, the sample will be representative according to the value contributed to the business by each segment of customers. In the example shown, the company derives 70% of its turnover from high value customers. The fundamental principle of sampling in a B2B market is that if a value segment accounts for 70% of turnover (or profit, or however you decide to define value), it should also make up 70% of the sample.

FIGURE 6.2 Random sampling and the Pareto Effect
[Chart of % of revenue (y axis) against % of customers (x axis) for a customer base with a strong Pareto Effect: a small proportion of customers accounts for most of the revenue. Callouts note that random sampling would generate too many small and not enough large customers, and that systematic random sampling has the same effect.]


If the company has decided to survey a sample of 200 customers, 140 respondents (70% of the sample) would be required from the high value customers. There are 35 high value customers, so that necessitates a sampling fraction of 4:1, meaning 4 contacts from each customer in the high value segment. In business markets it is common practice to survey several individuals from the largest customers. Since there will often be quite a large number of people in the DMU of a large customer (see Chapter 5), having enough contacts to survey is rarely a problem.

In the example, the medium value customers account for 20% of turnover so they must make up 20% of the sample. That means the company needs 40 respondents from its medium value customers. Since there are 120 customers in that value segment, the sampling fraction would be 1:3, necessitating a random sample of 1 in every 3 medium value customers, which could easily be produced using the same systematic random sampling procedure described earlier. First generate a random number between 1 and 120. If the random number came out as 71, the 71st medium value customer on the list would be sampled, followed by the 74th, the 77th and so on, until the sampling process came back round to the 68th medium value customer on the list.

Finally, 10% of the company's business comes from low value customers so they must make up 10% of the sample, requiring 20 respondents in this example. There are 300 low value customers, which would mean a sampling fraction of 1:15, again produced using systematic random sampling from a random starting point within the low value customer segment. By the end of the process the company would have produced a stratified random sample of customers that was representative of its business and, due to its random selection, would also be without bias.
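A minimal sketch of the allocation arithmetic is shown below, using the hypothetical segment figures above (70/20/10% of turnover; 35, 120 and 300 customers; a total sample of 200). The actual draw within each segment could then be made with the systematic random sampling approach sketched earlier.

```python
# Illustrative sketch: stratified random sampling by customer value segment.
# Figures are the hypothetical ones from the example (70/20/10% of turnover).
segments = {
    # name: (number of customers, share of turnover)
    "high":   (35,  0.70),
    "medium": (120, 0.20),
    "low":    (300, 0.10),
}
total_sample = 200

for name, (n_customers, turnover_share) in segments.items():
    respondents = round(total_sample * turnover_share)        # 140 / 40 / 20
    if respondents >= n_customers:
        # More respondents needed than there are customers: census the segment
        # and survey several contacts per customer (ceiling division).
        contacts_per_customer = -(-respondents // n_customers)
        print(f"{name}: census of {n_customers} customers, "
              f"{contacts_per_customer} contacts per customer")
    else:
        interval = n_customers // respondents                  # e.g. 120 // 40 = 3
        print(f"{name}: sample 1 in {interval} customers "
              f"({respondents} respondents)")
```

Run against the example figures, this reproduces the 4:1, 1:3 and 1:15 sampling fractions shown in Figure 6.3.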

6.4.2 Sampling the contacts
The procedure described above has produced a random and representative sample of B2B customers, but the individual respondents who will take part in the survey must also be selected. Organisations often choose the individuals on the basis of convenience – the people with whom they have most contact, whose names are readily to hand. If the individuals are selected on this basis, an element of systematic error is introduced.

FIGURE 6.3 A stratified random sample

Value segment   % of turnover   % of sample   No of customers   Sampling fraction
High            70%             70%           35                4:1
Medium          20%             20%           120               1:3
Low             10%             10%           300               1:15


It would mean that however carefully a stratified random sample of companies had been drawn, at the 11th hour it has degenerated into a convenience sample of individuals that somebody knows – little better than a voodoo poll. To avoid that major injection of bias the individuals must also be randomly sampled. Compiling a list of individuals who are affected by the product or service at each customer in the sample, and then selecting the individuals randomly from that list, is the way to do this.

KEY POINT
For a reliable result, individual contacts in a B2B market must also be randomly sampled.

As illustrated in Figure 6.4, the process works as follows. First list the DMU roles in a random order. In our hypothetical example, the DMU roles are Sales (S), Quality (Q), Purchasing (P) and Senior Management (M). It is important to be clear that these are roles, not job titles, as titles vary considerably across organisations. For the high value customers a census of contacts as well as a census of companies will be required. To sample the medium value customers a random number of 71 was generated, so the 71st medium value customer on the list would be sampled, with a contact from Sales to be surveyed. Taking every third medium value customer, the 74th on the list would need someone with responsibility for Quality, the 77th a Purchasing contact and the 80th someone in a Senior Management role. As shown in Figure 6.4, the same procedure is then followed for the low value segment.
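As a minimal sketch (hypothetical role list and segment figures), the rotation of DMU roles across a systematically sampled segment could be implemented as below; the medium value example reproduces the 71st, 74th, 77th... sequence described above, and the role order is assumed to have been randomised in advance.

```python
# Illustrative sketch: rotating DMU roles across a systematically sampled segment.
roles = ["Sales", "Quality", "Purchasing", "Senior Management"]  # order randomised beforehand

def assign_roles(segment_size, interval, start):
    """Yield (customer position, DMU role) pairs for one value segment."""
    n_respondents = segment_size // interval
    for i in range(n_respondents):
        position = (start - 1 + i * interval) % segment_size + 1   # 1-based, wraps around
        yield position, roles[i % len(roles)]                      # rotate the roles

# Medium value segment: 120 customers, 1 in 3 sampled, random start at 71.
for position, role in assign_roles(segment_size=120, interval=3, start=71):
    print(f"Customer {position}: survey a contact in {role}")
```

Run as shown, it lists the 71st, 74th, 77th... customers, wrapping round to the 68th, with the four roles allocated in turn.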

In business-to-business markets, following the stratified random sampling approach described above is essential for an accurate and credible CSM result9. It provides a random and representative sample of organisations and individuals, so it will give an accurate result whose statistical reliability can be justified. At least as important in B2B markets, colleagues not versed in the technical aspects of CSM will see it as 'reliable' and credible because it accurately reflects the realities of the business, covering all the key accounts and only a sample of smaller ones.

FIGURE 6.4 Randomly sampling the individuals
[Table listing the customers in the large, medium and small value segments against the DMU roles S, P, Q and M. Every contact at every large customer is surveyed (a census); in the medium and small segments the systematically sampled customers (e.g. the 71st, 74th, 77th medium value customers) are each allocated the next DMU role in rotation.]


6.5 Sample size
The process so far described has generated a random and a representative sample, both of which are essential for an accurate measure. However, as we said at the beginning of this chapter, organisations need to know how much measurement error there might be in their result, and the size of the sample is instrumental in determining this margin of error. We will therefore consider how many customers it is necessary to survey to achieve a statistically reliable result.

Some companies, typically in business-to-business markets, have a very small number of accounts. Other companies have millions of customers. In a business market the size of the population for a CSM survey is the number of individual contacts rather than the number of organisations on the database. Even so, companies in B2C markets often have many more customers than their B2B counterparts. This will help to illustrate a very commonly misunderstood rule of sampling. Statistically, the accuracy of a sample is based on the absolute size of the sample. A bigger sample will always be more accurate than a smaller one10, regardless of how many customers the organisation has. Asking what proportion of customers should be surveyed is not a relevant question. Imagine the answer to the question was 10%. A B2B company with 1,000 customer contacts would have to survey a sample of 100, which seems OK. However, a company in a B2C market with 2,000,000 customers would have to survey 200,000 – clearly an excessive number. Alternatively, if the answer were 0.1%, the B2C company would have to survey 2,000 customers but the B2B supplier would be surveying only 1 person! Clearly, the answer cannot possibly be any specific percentage of the customer base.

KEY POINT
The reliability of a sample is based on the absolute size of the sample rather than the proportion of the customer base surveyed.

Back to our quest to discover the average height of adult males. Whilst it would be a gross exaggeration to say that no two men are the same height, the range from the shortest to the tallest does vary widely – from around 3 feet to over 7 feet. If we decided to base our average height of UK adult males on a sample of ten, our result would have a high margin of error. Even if our sample was random and representative and had absolutely no systematic error, there would be a high risk of a small sample of ten males being affected by random error. One of our randomly selected males could be 7 feet 9 inches tall. Unlikely, but it is within the known range and could happen if we were unlucky. If it did, it would have a strongly disproportionate effect on the result. As the sample size increases, two things happen to greatly improve its reliability. Firstly, the impact made by an exceptionally tall or short male on the mean height reduces as the sample size grows. By the time the sample size reaches 200, an exceptionally tall male could not distort the mean height by more than about 0.1 inch.


Secondly, the probability of ending up with many unusually tall or short males decreases as the sample size gets bigger, for the simple reason that there aren't many of them. Most men are not very tall or very short; they're somewhere in the middle. Close to 5 feet 10 inches, in fact. That's why it is the average, because that's more or less the height most men are. So as the sample size increases, the probability of getting more men of normal height increases, simply because there are a lot more of them out there.

Provided the sample is random and representative, the accuracy of a survey result will be determined by two things: firstly the sample size, and secondly the extent to which the customers, people or units in the population differ. If all adult males were exactly the same height, the number of men you would need to measure to know the average height, even with a population of 20 million, would be precisely one. If all the customers held identical views, you would have to interview only one customer to understand the attitudes of the entire customer base. Conversely, if the customers' views differ widely, you would have to interview a lot of them to be confident of your answers. Equally, the more variety in the height of adult males, the larger the sample needed to produce a reliable measure of average height.

The normal measure of variation in numerical data is the standard deviation, which is explained in Chapter 10. Standard deviations are unique to every set of data, although there are norms for different types of survey. Since our company conducts several hundred customer satisfaction surveys each year, we know that for CSM, standard deviations are relatively low compared with many other types of survey. In fact the average standard deviation for a customer satisfaction survey is around 11 on a 100 point scale.

FIGURE 6.5 Normal distribution curve
[Bell-shaped curve with normal data clustered in the middle and extreme data in the two tails.]


Figure 6.6 shows how the reliability of a sample increases as it gets bigger. At first, with very small sample sizes, reliability increases very steeply, but as the sample grows there are diminishing returns on reliability from further increases in sample size. To understand how the level of reliability is calculated, see the explanation of margin of error in Chapter 11. Figure 6.6 is specific to customer satisfaction surveys. It shows that the curve starts to flatten at around 50 respondents, and by the time the sample size has reached 200, the gains in reliability from increasing the number of respondents in the sample are very small. Consequently, a sample size of 200 is widely considered to be the minimum sample size overall for adequate reliability for CSM. Companies with a small customer base should simply carry out a census survey. Since response rates will vary considerably across different industries and between methods of data collection (see next chapter), companies with up to 600 customers will often find it most efficient to conduct a census, especially for self-completion surveys.
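To make the shape of that curve concrete, the sketch below uses the standard margin of error formula for a mean at 95% confidence, assuming the typical CSM standard deviation of around 11 points on a 100 point scale quoted above. The figures are indicative only; the precise calculation is explained in Chapter 11.

```python
# Indicative sketch: margin of error (95% confidence) for a mean satisfaction
# score, assuming a typical CSM standard deviation of ~11 on a 100-point scale.
from math import sqrt

STD_DEV = 11          # typical standard deviation for CSM (100-point scale)
Z_95 = 1.96           # z value for 95% confidence

def margin_of_error(sample_size, std_dev=STD_DEV, z=Z_95):
    return z * std_dev / sqrt(sample_size)

for n in (10, 50, 100, 200, 400):
    print(f"n = {n:>3}: +/- {margin_of_error(n):.1f} points")
# Illustrates the flattening curve: roughly +/-6.8 at n=10, +/-3.0 at n=50,
# +/-1.5 at n=200 and only +/-1.1 at n=400.
```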


Two additional factors must be taken into account when considering sample reliability: firstly, the extent to which the result must be broken down into different sub-groups, and secondly, the response rate.

6.5.1 Drilling down
So, we have established that a sample of 200 provides good reliability for an overall measure of customer satisfaction, whether the customer population is 500 or 500,000. However, organisations that want to drill down into the results to compare the satisfaction levels of different segments may need a larger sample.

FIGURE 6.6 Sample size and reliability
[Curve of reliability (y axis) against sample size (x axis, 0 to 200): reliability rises steeply for very small samples, starts to flatten at around 50 and is almost flat by 200.]


For example, a sample of 200 broken down into 10 regions would result in a small and unreliable sample of 20 customers for each region. Therefore, it is generally accepted that the minimum overall sample size is 200 and the minimum per segment is 50 – the point at which the curve starts to flatten.

KEY POINT
For a reliable measure of customer satisfaction, the minimum sample is 200 respondents overall and at least 50 per segment.

For some companies, therefore, the total sample size may be determined by how many segments they want to drill down into. For example, organisations wanting to divide their results into 6 segments would need a sample of at least 300 to ensure 50 in every segment. This can have a major impact for companies with multiple branches or outlets. On the basis of 50 per segment, a retailer with 100 stores would need a minimum sample of 5,000 if customer satisfaction is to be measured at store level. However, our view is that if comparisons are to be made between stores and management decisions taken on the basis of the results, at least 100 customers per store should be surveyed, and preferably 200. For a retailer with 100 stores, this would result in a total sample size of 20,000 customers for a very reliable result at store level.

6.5.2 Sample size and response rates
One final point on sampling. The recommended sample size of two hundred for adequate reliability is based on responses, not the number of customers sampled and invited to participate. For example, if the response rate to a postal survey was 50%, 400 customers would have to be sampled and mailed questionnaires. However, if the response rate is very low, it is not statistically sound to compensate by simply sending out more questionnaires until 200 responses are achieved. Low response rates are extremely detrimental to the reliability of customer satisfaction measures and will be explored in more detail in the next chapter.
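A small worked sketch of the two rules above is shown below (hypothetical inputs): the required number of responses is driven by the number of segments to be reported on, and the mail-out is then grossed up by the expected response rate. As the text stresses, grossing up the mail-out cannot compensate for a very low response rate, because it does nothing about non-response bias.

```python
# Illustrative sketch: required responses and mail-out size (hypothetical inputs).
from math import ceil

MIN_OVERALL = 200        # minimum responses overall
MIN_PER_SEGMENT = 50     # minimum responses per segment to drill down reliably

def required_responses(n_segments):
    return max(MIN_OVERALL, n_segments * MIN_PER_SEGMENT)

def mailout_size(responses_needed, expected_response_rate):
    return ceil(responses_needed / expected_response_rate)

responses = required_responses(n_segments=6)               # 6 * 50 = 300
print("Responses needed:", responses)
print("Questionnaires to mail at a 50% response rate:",
      mailout_size(responses, expected_response_rate=0.5)) # 600
```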

KEY POINT
For a reliable result it is essential to achieve a good response rate as well as 200 responses.

Conclusions
1. For an unbiased result a probability (random) sample is necessary.
2. For CSM, the sampling frame would typically be all current customers, but may need a time frame qualification for brief, one-off customer experiences.
3. In B2C markets systematic random sampling will generate a sample that is both unbiased and representative.
4. In B2B markets a sample that accurately represents the huge variation in customer values will be achieved only through stratified random sampling.


5. Sampling should be based on a sampling frame comprising relevant individuals. In B2B this will often involve a number of individual contacts (occasionally a large number) from high value customers.
6. Based on typical standard deviations for customer satisfaction surveys, 200 responses is the minimum sample size for a reliable measure at the overall level, whatever the size of the customer base.
7. Organisations with fewer than 200 customers or contacts should conduct a census survey.
8. If the results are to be broken down into segments, the minimum sample size per segment should be at least 50 responses. In such cases the total sample size will be the number of segments multiplied by 50.
9. As well as enough responses it is also essential to achieve an adequate response rate, and this will be covered in the next chapter.

References
1. Norman and Streiner (1999) "PDQ Statistics", BC Decker Inc, Hamilton, Ontario
2. Bennett, Deborah (1999) "Randomness", Harvard University Press, Cambridge, Massachusetts
3. Pearson and Kendall (1970) "Studies in the history of statistics and probability", Charles Griffin and Co, London
4. Kotler, Philip (1984) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey
5. Hays, Samuel (1970) "An Outline of Statistics", Longman, London
6. Kish, Leslie (1965) "Survey Sampling", John Wiley and Sons, New York
7. McGivern, Yvonne (2003) "The Practice of Market and Social Research", Prentice Hall / Financial Times, London
8. Crimp, Margaret (1985) "The Marketing Research Process", Prentice-Hall, London
9. McIntosh and Davies (1996) "The sampling of non-domestic populations", Journal of the Market Research Society 38
10. Kotler, Philip (1984) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey


CHAPTER SEVEN

Collecting the data

Exploratory research and sampling can be seen as essential pre-requisites of conducting a customer satisfaction survey. A questionnaire will also be needed, but this can be designed only when it is known how the survey will be administered. Market research textbooks call this the method of data collection.

At a glance
In this chapter we will:

a) Describe the different methods of data collection

b) Review the advantages and disadvantages of each method

c) Explore ways to maximise response rates

d) Explain how to introduce the survey to customers

e) Discuss respondent confidentiality

f) Consider how often customers should be surveyed.

Fundamentally there are only two methods of collecting data for a customer survey1. Customers can be interviewed or they can be asked to complete a questionnaire by themselves. Of course, there are different ways of interviewing people and more than one type of self-completion questionnaire, so we will start by clarifying the options within each method of data collection.

7.1 Self-completion surveys
For customer satisfaction measurement there are two basic choices of self-completion survey – electronic or paper. When considering the different types of paper or electronic surveys, the choice is between different ways of getting the questionnaire out to and back from the customers.

7.1.1 Electronic surveys
We should initially distinguish web surveys from e-mail surveys. An email survey involves sending questionnaires to customers by e-mail, either in the form of a file attachment or in the body of the email itself. The customer completes the questionnaire off-line and returns it to the sender.


A web survey involves logging onto a web site and completing a questionnaire on-line. When the respondent clicks a button to submit the questionnaire on completion, the information is automatically added to the database of responses. A web survey is normally conducted over the internet but can be set up on an intranet for internal customers. It is usually preferable to an e-mail survey since it avoids the software problems that can be experienced with file attachments, looks more attractive and professional, tends to facilitate more questionnaire design options (such as routing), and eliminates data entry costs. However, a disadvantage of web surveys in B2B markets is that in some organisations there will be employees who are authorised to use e-mail but not the internet, so the target group may not be fully accessible.

People who do respond to electronic surveys will often do so more quickly than for other types of self-completion survey, but there is no evidence that response rates are higher. Indeed, whilst a few years ago the term 'junk mail' typically referred to unsolicited postal mail, it is increasingly email that suffers from this problem, especially in B2B markets. It is not in the least unusual for managers, especially in larger organisations, to receive well over 100 emails each day. Since instant deletion of everything unnecessary is most people's survival strategy in this situation, the odds are stacked heavily against a survey email.

Web surveys, typically in the form of site exit surveys, are useful for e-commerce businesses, especially for measuring perceptions of the website itself. However, even for e-commerce businesses this type of exit survey should not be confused with a full measure of customer satisfaction, since it would precede order fulfilment and ignore any requirement for after sales service. For a worthwhile measure of customer satisfaction, e-businesses should invite a random sample of customers to complete a web survey that covers the total customer experience. Even then, a web survey with a low response rate will suffer from non-response bias in the same way as a postal survey with a poor response rate.

B2C businesses whose customers come to them, e.g. hotels, restaurants and retailers, can set up an electronic survey on their own premises on a laptop or a specialist touch screen computer. Apart from the initial investment in the capital equipment, this would have all the advantages of a low cost paperless survey, but it has to be very simple to allow for people who are not computer literate. As with any survey conducted at the point of sale, it may not capture the entire customer experience and it will reflect customers' 'heat of the moment' attitudes rather than their more objective long term judgements, which will be a better lead indicator of their future loyalty behaviour.

A similar method is IVR (interactive voice response), which involves customers using the telephone keypad to respond to an automated survey. If IVR is used, customers are typically transferred to the survey at the end of a call centre interaction, so it suffers from the same 'heat of the moment' disadvantages as the touch screen.


Also, unless there is an automated process that randomises the sample transferred to the survey, call centre staff may not put through dissatisfied customers. IVR surveys suffer from customers' general dislike of automated telephone response systems and do have to be very short to minimise early terminations. IVR is therefore not normally suitable for measuring customer satisfaction.

Email surveys are easier to set up than web surveys, since a good web survey process needs to:

- Be set up and hosted on a website
- Have a system for informing and inviting the target group to visit the site to participate (normally an email invitation with a link)
- Be able to issue potential respondents with a unique password
- Have appropriate security in place to ensure that only targeted respondents with verifiable passwords can complete the survey
- Ensure that respondents can only complete the survey once, by checking against passwords
- Be able to remind non-responders if necessary
- Be simple but attractive in design and quick to load
- Allow for tick-box, scoring and text responses
- Have various checks in place to ensure that only valid responses are allowed and to check for completeness of response
- Be thoroughly tested for technical operation
- Transfer responses to a data repository
- Transform responses into an input file format compatible with the statistical analysis programme
- Be capable of transferring the data to the analysis system at regular intervals or on request.

Despite the greater setup costs of web surveys compared with email surveys, they do exploit the benefits of the electronic medium much more extensively, so the advantages and disadvantages of electronic surveys will be reviewed in the context of web surveys.

KEY POINT
Taking everything into account, web surveys are more efficient than email surveys.

(a) Advantages of web surveys
1. The major advantage is the lower cost. With no printing, postage or data capture costs, a large survey will be considerably cheaper. For smaller surveys the savings would be lower, since the fixed cost elements will account for a larger proportion of the total.

2. One benefit of the instant data capture is that the results of the survey can be examined whilst it is in progress.


3. With appropriate software, routing (missing out questions that are not relevant) and probing of low satisfaction scores are possible.

4. Sophisticated web interviews are increasingly feasible and can be undertaken provided the respondent has equipment of the right standard. This might involve playing music or videos, showing pictures, using speech in questions and capturing speech in responses. These will increase in popularity as the technology becomes more widely adopted.

(b) Disadvantages of web surveys
1. Web surveys must be easy and thoroughly tested, since respondents will quickly give up if they experience problems completing the questionnaire.
2. Most internet users do not stay very long at any site, so questionnaires have to be short, resulting in the collection of less information than from most other methods of data collection.

3. Since only a few questions can be displayed in one screen view, the questionnaire will often feel longer as respondents move from one 'page' to the next, with the full extent of the questionnaire being difficult to visualise. By contrast, the length of a postal survey can be assessed almost instantly.

4. Some respondents will be worried about their responses being identified with their name or e-mail address, thus rendering the survey non-confidential. This is a particularly pressing concern when employees or internal customers are surveyed by means of an intranet questionnaire. The concern can be reduced by using an independent third party as the invitation e-mail address and the host of the web questionnaire. Interestingly, the most suspicious respondents are IT specialists.

5. Low response rates will afflict many types of self-completion survey, but for all the reasons listed above they are a particular problem for web surveys. Surveys with a low response rate suffer from the problem of 'non-response bias'2. This means that the sample of respondents that completed the questionnaire is not representative of the full population of customers. Results can be adjusted at the analysis stage to eliminate some forms of bias, such as demographic imbalances. Attitudinal bias, however, cannot be corrected, since it is not possible to know what attitudes the non-responders hold. It is attitudinal bias that is normally the biggest problem for customer satisfaction surveys with low response rates, since customers that have encountered a problem or hold negative attitudes about the supplier are typically much more motivated than the silent majority to respond. Since the response rate needed to eliminate non-response bias is at least 30% and the average response rate for self-completion customer satisfaction surveys is 20%, the scale of the problem is considerable. Web surveys tend to suffer from even lower response rates than postal surveys. In extreme cases this will result in the inverse of a normal distribution, with many respondents scoring very low or very high and few in the mid-range of the scale (a small numerical sketch of this effect follows this list).

6. Regardless of response rates, it is extremely difficult for many B2C businesses to generate a representative response from web or e-mail surveys, since many of their customers are not active internet users. Household access to the internet in the UK was 61% at the end of 2006, only 4% higher than 2 years previously, with the rate of growth slowing3. This suggests that it will be some time before electronic surveys will be capable of providing representative samples for most B2C businesses.

7. Off-the-shelf software, whilst plentiful and cheap, can be unsuitable for customer satisfaction surveys. Common problems include inadequate password protection systems, an inability to probe low satisfaction scores and the failure to offer a 'not applicable' option. Some web surveys refuse to allow the respondent to proceed to the next question if a score has not been given, forcing customers with no views or no experience of that attribute to invent a score. This quickly leads most people to disengage from the process. Some web survey software purports to do the analysis and reporting of the results as well as the data collection, but the analysis modules are often very simplistic and restrictive, making it impossible to achieve the essential outcomes customer satisfaction measurement must provide (see Chapters 11 to 15).
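As a purely hypothetical illustration of the attitudinal non-response bias described in point 5 above, the sketch below shows how a low response rate skewed towards dissatisfied customers drags the observed score below the true figure; all the shares, scores and response rates are invented for the example.

```python
# Hypothetical sketch of attitudinal non-response bias: dissatisfied customers
# are more motivated to respond, so a low overall response rate skews the result.
population = {
    # customer group: (share of customer base, mean satisfaction /100, response rate)
    "satisfied":    (0.80, 82, 0.15),
    "dissatisfied": (0.20, 45, 0.45),
}

true_mean = sum(share * score for share, score, _ in population.values())
responding_weight = sum(share * rr for share, _, rr in population.values())
observed_mean = sum(share * rr * score
                    for share, score, rr in population.values()) / responding_weight

print(f"True mean satisfaction:  {true_mean:.1f}")       # 74.6
print(f"Observed (biased) mean:  {observed_mean:.1f}")   # about 66
print(f"Overall response rate:   {responding_weight:.0%}")
```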

KEY POINT
It is not possible for many B2C organisations to achieve a representative sample from an electronic survey.

7.1.2 Paper-based surveys
Self-completion surveys have traditionally been conducted on paper, usually through the post, although fax is possible, as are other distribution media such as a customer newsletter. An ideal way of undertaking a self-completion survey is to personally distribute questionnaires to customers and then collect them once completed – an approach that is often feasible with internal customers or with external customers who visit the company's premises. Distribution and collection is far preferable to simply making questionnaires available for customers who choose to take the opportunity to fill them in – an approach often associated with hotels. Response rates for this latter type of survey are extremely low, often below 1%, resulting in enormous non-response bias. This type of survey can fulfil a valid role as an additional complaints channel but should never be taken seriously as a measure of customer satisfaction. Distribution and collection is hugely preferable, since a very good response rate can typically be achieved from this approach.

Most paper-based surveys are postal and would involve a questionnaire, an introductory letter and a postage paid reply envelope mailed to a representative and randomly selected sample of customers. A deadline for responses should always be clearly marked on the letter and on the questionnaire, and this would normally be two weeks after customers have received the questionnaire. In addition, it is good practice to allow at least another week for late responses before cutting off the survey and analysing the results.


(a) Advantages of postal surveys
1. Although slightly more costly than electronic surveys, postal surveys are usually much cheaper than interviewing customers.
2. From a practitioner's point of view, postal surveys are very easy to conduct. Web survey software does not have to be purchased or learned, and interviewers do not have to be recruited, briefed or monitored.

3. If professionally designed and printed, paper questionnaires can be made visually attractive for customers.

4. There is no risk of interviewer bias.
5. Many customers will see a postal questionnaire as the least intrusive form of survey.
6. A postal survey returned to a third party will also be seen by respondents as the most confidential and anonymous survey method. (See Section 7.5 for a review of the advantages and disadvantages of respondent anonymity.)

(b) Disadvantages of postal surveys
1. Response is slow. Even with a clearly marked deadline, some questionnaires will come back much later.
2. As we said for web surveys, a low response rate results in 'non-response bias', which seriously distorts the result. The lower the response rate, the bigger the problem. It is vital to understand that the sample size and the response rate are two completely different things. It is essential to have a sample of at least 200 responses and a response rate of at least 30% – not one or the other.
3. Compared with telephone interviews, neither routing nor probing of low satisfaction scores works well on paper-based questionnaires. These drawbacks can be alleviated by providing a clearly marked 'not applicable' option for questions that are not relevant to a respondent and by including one or more spaces for comments, with encouragement to customers to explain any low satisfaction scores that they gave. On average, comments are written by one in three respondents to postal surveys.
4. Due to the inability to probe, little explanatory detail will usually be generated1. When trying to understand the reasons for any customer dissatisfaction, this is a considerable disadvantage.

KEY POINT
For a reliable result the sample should be at least 200 responses and the response rate at least 30%.

7.1.3 Maximising response rates
Whilst the average response rate for customer satisfaction surveys by post is around 20%, this masks an extremely wide variation, from below 5% to over 60%. Typically, the more important the topic is to the customer, the higher the base response rate will be4. For example, a satisfaction survey for a membership organisation is likely to generate a higher response rate than a survey by a utility company.


In business markets, customers are more likely to complete a survey for a major supplier than for a peripheral one. Since it is vital to avoid non-response bias, it is worthwhile making as much effort as possible to maximise the response rate. This section outlines the main ways of doing so. We will start with things that are essential before examining the additional measures that can be taken, in order of their effectiveness.

(a) Basic foundations of a good response rate
There are two things that won't increase the response rate to a customer satisfaction survey but can significantly reduce it if absent. The first is an accurate, up to date database including contact names, addresses, telephone numbers, email addresses and job titles as appropriate. The accuracy of databases can erode by 30 per cent annually as personnel change in business and as consumers move house or change telephone numbers and email addresses. The second essential is a postage paid reply envelope5. Expect a significantly reduced response rate if it is omitted. Some people are tempted to try fax-back questionnaires in business markets on the grounds that it might be easier for respondents to fax their responses back. Experience shows that this assumption is generally mistaken. Many people in large offices do not have easy access to a fax machine, and their use is declining as email grows. Therefore, by all means include a fax-back option and a prominent return fax number, but include a reply paid envelope as well. International reply paid envelopes are also available and should be included for overseas customers.

(b) Effective techniques for boosting response rates

Introductory letter

The introductory letter is the single most effective technique for boosting response rates. Research by Powers and Alderman6 found that covering letters had a significant impact on response rates. Since it is so important, Section 7.4 is devoted to explaining how to introduce the survey to customers. In our experience a good covering letter highlighting benefits to respondents and promising feedback will boost response rates by around 30 per cent on average. To clarify the figures, this is a 30% increase over the base response rate. Therefore, an average response rate of 20% achieved using none of the techniques detailed in this chapter could be boosted by 30%, thus lifting it to a 27% response rate. If the introductory letter is mailed on its own, two or three days before the questionnaire, it will typically achieve an additional 15% uplift7, boosting the response rate in this example to around 31%.

Reminders
A follow-up strategy is also widely endorsed by the research studies. The word strategy is important because more than one reminder will continue to generate additional responses, albeit with diminishing returns. A multiple follow-up strategy has been widely reported to have a positive effect on response rates8,9,10.


It is advisable to send a duplicate questionnaire with the follow-up, plus a letter repeating the reasons for taking part in the survey. A reminder boosts response rates by 25% on average, lifting our hypothetical example to approximately 39%. Subsequent reminders will also stimulate more responses, albeit at a declining rate. In practice it would be very unusual to issue more than two reminders, but a second follow-up will typically improve the response rate by a further 12%, increasing the total in our example to around 43%.

The questionnaire
Questionnaire design, more than length, is a significant factor. If respondents' initial impression is that the questionnaire will be difficult to complete, the response rate will be depressed. Apart from very long questionnaires, length is a less significant factor11, so it is better to have clear instructions and a spacious layout spreading to four sides of A4 than a cluttered two page questionnaire. More specifically, it makes no difference to response rates whether people are asked to tick or cross boxes or to circle numbers or words, nor whether space is included for additional comments. Since people are more likely to respond when they are interested in the subject matter2, any design techniques used to stimulate customers' interest will be worthwhile. Manchester United and Chelsea, for example, display background images of star players on their fan satisfaction survey questionnaires, as well as sending introductory letters from well known names such as Sir Alex Ferguson and Peter Kenyon. A professionally designed questionnaire that is appealing, easy to read and spacious can improve response rates by up to 20%, resulting in our fictitious example now climbing to a response rate of around 50%. Of course, a cluttered, difficult to read or otherwise amateurish questionnaire will significantly reduce response rates.
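The cumulative example above compounds each uplift on the running response rate. The sketch below reproduces that arithmetic using the uplift percentages quoted in this section and the 20% base rate cited earlier; it is indicative only, since the text rounds its running figures at each stage.

```python
# Indicative sketch: compounding response-rate uplifts on a 20% base rate.
base_rate = 0.20                       # average postal CSM response rate
uplifts = [
    ("Good covering letter", 0.30),
    ("Letter mailed 2-3 days ahead", 0.15),
    ("First reminder", 0.25),
    ("Second reminder", 0.12),
    ("Well designed questionnaire", 0.20),
]

rate = base_rate
for technique, uplift in uplifts:
    rate *= 1 + uplift                 # each uplift applies to the running figure
    print(f"{technique:<32} -> {rate:.1%}")
# Ends at roughly 50%, in line with the cumulative example in the text.
```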

Anonymity
It is conventional wisdom that response rates and accuracy will be higher where respondents are confident of anonymity and confidentiality. Practitioner evidence strongly supports this view for employee satisfaction surveys and most types of customer satisfaction survey, especially in business markets, where the respondent envisages an ongoing personal relationship with the supplier. In mass markets, where personal relationships are normally absent, there is no conclusive evidence that anonymity increases response5. The best approach is to promise anonymity at the beginning of the questionnaire (or interview) and, at the end, when respondents know what they have said, give them the option of remaining anonymous or being attributable. See Section 7.5 for a full discussion of anonymity.

KEY POINT
A good introductory letter is the best way to maximise response rates for a customer satisfaction survey.


(c) Marginal techniques for boosting response rates
Compared with the suggestions already made, other response boosting techniques will generally be marginal in their effectiveness, but there are some that can make small differences.

Money
Money is one of them. Not incentives generally but money specifically. And money now! Research in the USA has shown that a quite modest monetary reward, such as $1 attached to the introductory letter, will have a significant effect in business as well as domestic markets12. It can also work very well in less developed countries where dollars have real street value. However, it is important that the money goes with the questionnaire. Future promises of payment to respondents are less effective, and incentives such as prize draws much less effective. This is confirmed by UK research from Brennan13, Dommeyer14 and James and Bolstein15. Some people cheekily suggest that researchers can reduce costs by enclosing money only with the first reminder, a tactic that might be feasible if you have a sample of individuals who are unlikely to communicate with each other!

Colour
The use of colour is a contentious issue. Some people advocate the use of coloured envelopes or the printing of relevant messages on envelopes. However, it is generally accepted that around 10 per cent of mailshots are discarded without being opened, so if the colour or design of the envelope gives the impression of a mailshot it is likely to depress the response rate. Since customers would normally open letters from existing suppliers, such as their bank, utility supplier, local authority or any organisation they deal with, we recommend including the organisation's name and logo on the envelope. Apart from that it should be a plain white window envelope (not a sticky address label), personally addressed to the customer5.

Use of colour on the questionnaire should also be considered. It is generally accepted that the use of more than one colour for printing the questionnaire will enhance clarity of layout and ease of completion and will therefore boost response rates. This is part of the earlier point on good questionnaire design. Some people think that printing the questionnaire on coloured paper may also help because it is more conspicuous to people who put it aside, intending to complete it in a spare moment. However, there is no conclusive evidence on paper colour, and since text on coloured backgrounds will be more difficult for some people to read, we recommend white paper with two or preferably four colour print.

(d) Ineffective techniques for boosting response rates
There are some frequently used response boosting techniques that are rarely cost-effective: there is no conclusive evidence that they consistently improve response rates, they may reduce the quality of response and they are usually costly. They concern various types of incentive including:

• Prize draws
• Free gifts
• Coupons
• Donations to charity

When considering incentives it is important to distinguish between direct mail and customer satisfaction measurement. There is widespread evidence in the direct marketing industry about the effectiveness of appropriate incentives in boosting the response to mailshots. Most direct mailshots involve sending out huge volumes of letters, with only a very small percentage of those mailed expected to purchase the product. Because of those volumes, the cost of a prize draw can be amortised across a large mailing, and if it boosts the response rate from 1% to 1.3% for those placing an order or taking out a subscription it will have been successful. By the same token, an attractive free gift may be a cost-effective price to pay for a new subscriber who, once hooked, may renew for many years.

Customer satisfaction surveys are different. Since the response rate without an incentive is much higher than for the vast majority of direct mail, the uplift in response has to be much greater for customer satisfaction surveys to make an incentive worthwhile. Secondly, the value of each response is not commensurate with a purchase from most mailshots. An attractive free gift costing £20 is hardly going to be cost-effective for each returned questionnaire. Thirdly, and most importantly, you should consider the impact that typical incentives will make on your customers. They're not appropriate. They give the impression that you're trying to sell them something. They devalue the very serious purpose of a customer satisfaction survey and obscure the real benefits for customers that are inherent in the process. For these reasons, incentives for customer satisfaction surveys will often be detrimental to the quality of the response without even boosting the response rate by a worthwhile amount.

KEY POINT
Incentives are generally not a cost-effective technique for boosting response rates in customer satisfaction surveys.

Research carried out in the USA, including a study by Paolillo and Lorenzi16, suggests that the chance of future monetary reward, e.g. a prize draw, makes no difference unless it is very large. Also in the States, Furse and Stewart17 reported no effect from the promise of a donation to charity. Research in the UK by Kalafatis and Madden18 suggests that the inclusion of discount coupons can even depress response rates, probably because they give the impression that the survey is sales driven. Customers are increasingly suspicious of incentives and prizes, since there has been much adverse publicity for scams involving unsolicited phone calls to people who have, supposedly, won a valuable prize. They are then fraudulently asked to pay an amount of money to secure the prize. Sometimes, credit card or bank account details are stolen as well as the money.

Based on all the academic research and tests referenced in this chapter, plus the experience of ourselves and other practitioners, Figure 7.1 indicates the average effect on response rates of the measures discussed. It assumes a reasonable questionnaire mailed to a correctly addressed person, including a postage paid reply envelope, and suggests the likely increase in your base response rate. So a 25 per cent improvement on the average 20 per cent response rate would result in a five percentage point uplift, to a 25 per cent response rate.
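Because each technique in Figure 7.1 is expressed as a relative uplift on whatever response rate has already been achieved, the effects compound rather than simply add. The sketch below is a minimal illustration in Python, assuming multiplicative compounding on the running total as in the worked example above; the uplift figures are the chapter's averages, and the rounded results may differ by a point or two from the figures quoted in the text.

```python
# Illustrative only: compound the average uplifts from Figure 7.1 onto a
# hypothetical 20% base response rate for a mailed questionnaire.

def apply_uplifts(base_rate, uplifts):
    """Apply each relative uplift multiplicatively and print the running total."""
    rate = base_rate
    for name, uplift in uplifts:
        rate *= 1 + uplift
        print(f"{name:<35} +{uplift:.0%}  ->  {rate:.0%}")
    return rate

uplifts = [
    ("Introductory letter", 0.30),
    ("Advance notice letter", 0.15),
    ("First reminder", 0.25),
    ("Second reminder", 0.12),
    ("Respondent friendly questionnaire", 0.20),
]

apply_uplifts(0.20, uplifts)  # ends at roughly 50%, as in the worked example
```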

7.2 Interviewing
Customers can be interviewed face-to-face or by telephone. Although telephone interviews are much more common for customer satisfaction measurement, we will initially consider the face-to-face options.

7.2.1 Face-to-face interviews
There are many options for conducting personal interviews.

• Exit interviews are conducted as people complete their customer experience. Customers can be surveyed as they leave a shop, finish their meal in a restaurant or check out of a hotel.
• Customers can also be interviewed during their customer experience. This can be very cost-effective if customers have time on their hands, such as waiting at an airport or travelling on a train.
• Doorstep interviews are convenient for consumers and, with prior arrangement for long interviews, can be conducted inside the home.
• Business customers can be interviewed at work at a time convenient to them.
• Street interviews are efficient if a large part of the population falls within the target group.

Whilst the most common method of personal interviewing involves writing the respondents' answers onto paper questionnaires, there are alternatives. If speed is vital, computer assisted personal interviews (CAPI) can be used.

FIGURE 7.1 Boosting response rates (average uplift to the base response rate)

Introductory letter: 30%
First reminder: 25%
Respondent friendly questionnaire: +/-20%
Advance notice letter: 15%
Second reminder letter: 12%
Incentive: <10%


Interviewers are provided with palm top computers from which responses can be downloaded daily. With enough interviewers, a large survey can be conducted within days. Long interviews (typically in the home or the office) can be recorded so that the detail of customer comments can be captured more efficiently. However, although appropriate for depth interviews at the exploratory stage, recording is not common at the quantitative stage of customer satisfaction surveys. For CSM main surveys interviews will usually be short (typically 10 minutes) and most questions will be closed. As with all methods of data collection, face-to-face interviews have advantages and disadvantages.

(a) Advantages of face-to-face interviews
Personal interviews have a number of important advantages:

1. It is easier to build rapport with the respondent in the face-to-face situation.
2. It is much easier to achieve total respondent understanding. Not only can complex questions be explained but also, with face-to-face interviews, it is usually possible to see if the respondent is having a problem with the question.

3. Visual prompts such as cards and diagrams can be used, for example to demonstrate the range of responses on a rating scale.

4. Personal interviews can be very cost-effective with a captive audience, such as passengers on a train, spectators at a sporting event or shoppers in a busy store, because where there are plenty of people in one place it is often possible to conduct large numbers of interviews in a short space of time.

5. In some situations, such as visiting people at home or at their place of work, it is feasible to conduct quite long interviews, up to half an hour, allowing plenty of time to explore issues in some depth and gather a considerable amount of qualitative information.

(b) Disadvantages of face-to-face interviews
There are disadvantages to personal interviews, mainly relating to cost.

1. Personal interviews will almost always be the most costly data collection option.
2. Customers are often scattered over a wide geographical area, so more time can be spent travelling than interviewing. It is not unusual in business-to-business markets to average fewer than two personal interviews per day, and often to average below one per day if the survey is international. As well as the time involved, the travel itself is likely to be costly.

3. Since many people do not like to give offence (by giving low satisfaction scores, for example), there may be a tendency to be less frank in the face-to-face situation. This unintended interviewer bias19 will be exacerbated if the interviewer is employed by the organisation conducting the survey. Since the interviewer makes much more impact on the respondents in the face-to-face situation than in telephone interviews, even genuinely unintended behaviour such as body language and tone of voice may influence respondents20,21. A typical example in customer satisfaction interviews is showing empathy with customers detailing a very distressing experience with a supplier. Whilst it is a natural human instinct to be sympathetic, this may encourage the respondent to dwell on their dissatisfaction and may negatively bias subsequent responses.

4. With interviews taking place remotely, away from direct supervision, and because of the problems outlined above, face-to-face interviewers have to be particularly well trained and quality control procedures extensive and consistently followed22. This adds significantly to the cost of face-to-face interviewing.

5. The challenges outlined in points 3 and 4 above can be further magnified in business markets, where the respondents will usually be senior people. They will soon become irritated, and often alienated from the process, if they feel that the interviewer does not fully understand the topics under discussion, and in a face-to-face situation this soon becomes obvious. Therefore, to achieve the advantages of longer interviews with in-depth comments, it is essential to use high calibre 'executive' interviewers who can hold a conversation at the same level as the people they are interviewing, and this is very costly.

6. Obtaining a representative sample can also be very difficult1, as some types of customer, e.g. older people and wealthy people, tend to be reluctant to welcome an interviewer into their home. There are also other types of customer, e.g. those living in deprived, potentially dangerous neighbourhoods, where interviewers are increasingly reluctant to go.

KEY POINT
Telephone interviews are much more common than face-to-face for customer satisfaction surveys.

7.2.2 Telephone interviews
A second interview option involves contacting customers by telephone, typically at work in business markets and at home in consumer markets. Responses can be recorded on paper questionnaires or, using computer assisted telephone interviews (CATI), data can be captured straight onto the computer. CATI systems have significant capital cost implications and also higher set-up costs for each survey compared with paper-based telephone interviewing. Consequently, whilst CATI will not be cost-effective for small scale surveys, it will be significantly less costly for large samples and frequent surveys23. For CSM main surveys telephone interviews are much more common than face-to-face since they have a number of advantages.

(a) Advantages of telephone interviews
1. Telephone interviews are almost always the quickest controllable way of gathering main survey data.
2. They are relatively low cost and normally much less costly than face-to-face interviews.


3. The two-way communication means that the interviewer can still explain things and minimise the risk of misunderstanding.

4. It is possible to gather reasonable amounts of qualitative information in order to understand the reasons underlying the scores. For example, interviewers can be given an instruction to probe any satisfaction scores below a certain level to ensure that the survey identifies the reasons behind any areas of customer dissatisfaction.

5. Distance is not a problem, even in worldwide markets.
6. It is by far the best data collection method for achieving a random and representative sample.
7. Telephone interviews reduce interviewer bias as perceived anonymity is greater1.
8. The ability to monitor interviewers also makes it the method of data collection with the tightest quality control. CATI further improves quality control by eliminating the possibility of many errors such as incorrect routing or the recording of inadmissible answers2. Tests show that the impact of interviewer bias is neither increased nor reduced by CATI compared with paper-based telephone interviews24.
9. Call-backs can be managed to maximise response rates.
10. Provided CATI is used, headline results can be provided continuously as the survey progresses.

(b) Disadvantages of telephone interviews
1. Interviews cannot be as long as those achievable face-to-face. Ten minutes is enough for a telephone interview, especially when interviewing consumers at home in the evenings. Up to 15 minutes can be acceptable for business interviews during the day. For the vast majority of CSM main surveys this is an adequate length of time and is comparable with the time one can expect customers to devote to filling in a self-completion questionnaire.

2. Questions have to be straightforward. As we will see when we look at rating scales in Chapter 8, there are certain types of question that cannot be used on the telephone.

3. One of the biggest frustrations with telephone surveys is that people are not sitting on the other end of the telephone waiting to be interviewed! It is usually necessary to make multiple call-backs to get a reliable sample25, as shown by the statistics in Figure 7.2. However, although multiple call-backs add to the cost of a telephone survey, they are very feasible and form a significant reason why telephone interviews are more likely to provide a random and representative sample (and hence a reliable result) than any other method of data collection.

4. Although not as acute as in face-to-face interviews, there is still potential for the interviewers to bias the responses20,21. Telephone surveys require highly trained interviewers22. For all interviews they need to be sufficiently authoritative to persuade respondents to participate and sufficiently relaxed and friendly to build rapport, without deviating from the question wording. As with personal interviews, telephone interviews in business markets need 'executive' interviewers of high calibre who can communicate at the same level as the respondent.

5. Whilst less costly than face-to-face interviews, telephone interviews are more costly than self-completion methods.

(c) Call-backs
In household markets the hit rate tends to be better than the figures shown in Figure 7.2, but in business markets it can easily be worse. For that reason it would be good practice to make at least three call-backs for domestic interviews and at least five in business markets to ensure good sampling reliability2.

7.3 Choosing the most appropriate type of survey

7.3.1 Interview or self-completion
Most organisations will reject the personal interview option because of cost and practicality, and web surveys because of the impossibility of achieving a reliable sample. Therefore, the choice for most organisations will be between a telephone and a postal survey. The postal option will almost certainly be cheaper, so the first question is whether a reliable sample can be achieved using a self-completion questionnaire. If the questionnaire can be personally handed to customers and collected back from them, a good response rate will be easily achievable. Passengers on an aeroplane or customers in a restaurant would be good examples. Sometimes, as with shoppers in a store, personal collection and distribution is feasible but a suitable location for filling in the questionnaire would have to be provided. However, although this method is good for response rates, it will often not provide an accurate or useful measure of customer satisfaction because it cannot cover the full extent of the customer experience.

FIGURE 7.2 Call-backs in telephone surveys: average number of attempts required to make contact

1 attempt reaches 25% of the sample
5 attempts reach 85% of the sample
8 attempts reach 95% of the sample
12 attempts reach 100% of the sample


What if customers have a problem with the subsequent delivery reliability of the goods purchased in store, or aeroplane passengers experience a problem on landing, retrieving their baggage or leaving the airport?

For most organisations customers are more remote, so mail remains the only practical distribution option for a self-completion questionnaire. The probability of achieving an acceptable response rate will therefore have to be estimated. The key factor here is whether customers will perceive the organisation as an important supplier or the product as one that interests them. If they do, it will be feasible to achieve a good response rate. If they do not (and most suppliers over-estimate their importance to the customer), even following all the advice in this chapter will probably not be sufficient to lift the response rate to a reliable level. Examples of organisations in this position include utilities and many financial and public services. In these cases, a telephone survey is the only sensible option.

KEY POINT
Unless very large samples are necessary, telephone interviews are usually the best data collection option for a reliable customer satisfaction measurement process.

Even when an adequate response rate can be achieved by mail, the telephone response rate will be higher and much more detailed customer comments will be gathered. In particular, reasons for low satisfaction can be probed, and this depth of understanding will be very helpful when determining action plans after the survey. Telephone surveys are therefore often the preferred option in both business and consumer markets. A very large sample size is typically the main reason for selecting the postal option, since large samples will significantly increase the cost differential between postal and telephone surveys. A supermarket, for example, may want a reliable sample for each of several hundred stores, and this would be extremely costly if customers were interviewed by telephone.

7.3.2 Mixed methods
If feasible it is strongly recommended to use one method of data collection for CSM, since this avoids the unnecessary introduction of variables that may skew the results. It is very unusual for a survey method that is suitable for most customers to be impractical for some important customer groups. Usually when mixed methods are considered it is for the wrong reasons, e.g. cutting costs. For example, it may be feasible to conduct a low cost web survey for some customers (e.g. those for whom the organisation has email addresses), with a different method of data collection used for the rest. However, this will make it very difficult to draw reliable and actionable conclusions about how to improve customer satisfaction, and impossible to unequivocally monitor the organisation's success in improving customer satisfaction over time as the mix of customers for whom the organisation has email addresses changes.

A more valid reason for adopting mixed survey methods may occur in business markets due to the very large differences in the value of large and small accounts. Just as the organisation invests more in servicing its key accounts, it might also choose to invest more in surveying them. Personal interviews might therefore be used for key accounts, since a longer interview will be possible and this will enable the company to gain a greater depth of understanding of key accounts' perceptions. It will also have relationship benefits, since it will demonstrate to the key accounts that they are considered to be very important, an impression that may not be conveyed by a telephone interview or a postal survey. The data collection methods could be mixed even further, with telephone interviews used for medium value accounts and a postal survey for small ones.

Provided the same questions are asked in the same way, the responses will have some limited comparability, but it is important to ensure that this happens. Any additional discussion or additional questions used in the personal interviews with key accounts must come after the core questions from the telephone and postal surveys, to ensure that additional discussions (which are absent from the telephone and postal surveys) cannot influence the respondents' answers to the core questions. Although the responses to the questions will be somewhat comparable, the reliability of the results across the three methods may not be. It is likely that lower response rates will reduce the reliability of the results from the low value customers compared with the other two segments, but many companies may consider this a price worth paying. Assuming that the alternative would be a telephone survey across the board, the net effect of this three-tiered approach is to shift investment in the survey from the low value accounts to the key accounts, a decision that is likely to be compatible with the company's normal approach to resource allocation across the customer value segments. However, even if the same method of data collection (typically telephone interviews) were applied to all customers, the stratified random sampling approach described in the previous chapter would ensure that the survey focused far more on large accounts than small ones. We would therefore not normally recommend a mixed approach even in business markets, particularly since, if it were adopted, it would be vital for future tracking that exactly the same mixed methods of data collection are used across customer groups, in the same proportions, for future updates, and this would add a great burden of time, resources and costs to the CSM process in the long run.

KEY POINT
It is rarely beneficial to use mixed methods of data collection for customer satisfaction surveys because comparability will be compromised.

7.3.3 Consulting customers about survey decisions
Sometimes organisations believe that customers would prefer to be consulted about whether or how they want to be surveyed, and that following this approach will increase response rates. In its simplest form this could involve writing to customers before the survey with a freefone number for them to call if they would prefer not to take part. A more costly approach would be to also ask those happy to participate how they would prefer the survey to be administered. To do this, a response mechanism, such as a reply-paid card, would have to be included, or customers could be telephoned. A major UK consumer finance company recently tried this approach through a pre-survey telephone call. Most of the customers sampled agreed to take part, with approximately 80% selecting email as their preferred method. This resulted in a mainly email survey, with only 20% receiving a postal questionnaire. Despite the use of the costly pre-notification by telephone, only 20% of the customers emailed responded. By contrast, the postal survey achieved an exceptionally high 70% response, buoyed by the very high effectiveness of the telephone pre-notification. Had all customers been surveyed by post, the overall response rate would have been much higher and the company would have avoided the problem of data comparability.

KEY POINT
The views customers express about how they would like to be surveyed do not relate to subsequent response rates.

This mistaken approach originates in some organisations' belief that their customers are somehow different to other human beings. Whilst they may have specific requirements of the organisation's product or service (hence the need for exploratory research), they are not different from other people in most aspects of their daily lives. People are customers of many organisations, so conclusions about how customers generally respond to surveys should guide data collection decisions. The key conclusions are that customers appreciate being consulted in a professional manner about their satisfaction, so very few take advantage of an opt-out option. However, the views they express about how they would like to be surveyed do not provide a reliable guide to subsequent response rates, so organisations should use the single most suitable method: telephone for the best response, postal where good response rates are achievable, or electronic for a customer base of heavy internet users.

7.4 Introducing the survey
As suggested earlier, the way the survey is introduced to customers will make the biggest single difference to the way they perceive the exercise, improving both the response rate and the quality of response. It is crucial therefore that all customers in the sample receive prior notification of the survey in the form of an introductory letter, whatever the method of data collection26. As we saw earlier in the chapter, the notification should ideally be prior to, rather than simultaneous with, the survey. If the introductory letter is included with a postal questionnaire or read out when customers are telephoned for an interview, it will be much less costly, but also less effective.

To fully appreciate this, think about people's typical decision making process when invited to take part in a survey. It is usually instantaneous and based on whether the individual is busy and it is a convenient time. Often people will decline to take part purely on grounds of inconvenience, with very little thought about the nature of the survey. People are far more likely to respond if they are interested in the aims and outcomes of the research and see it as useful. They are far less likely to take part if they perceive the survey as a general information gathering exercise of no benefit to themselves, and especially if they associate it with selling. This is why an introductory letter is more effective as a stand alone mailing before the questionnaire is sent or the customer is telephoned. In business or consumer markets people will open a personalised letter and read it, especially if it is from an organisation they deal with. This enables the supplier to make customers think about the purpose and benefits of the survey at a time when they are not being asked to take part. As we pointed out in Chapter 1, most people think it is very positive when an organisation asks for feedback on its performance. Consequently, if they receive the introductory letter beforehand, with more time to think about the purpose of the survey, they are much more likely to take part when the questionnaire arrives or they are contacted by telephone, and the survey's PR value will be maximised.

KEY POINT
For maximum effectiveness the introductory letter should be sent on its own, a few days before mailing the questionnaire or starting interviews.

As we will explain in more detail in Chapter 17, carrying out a customer survey also provides an opportunity to enhance the organisation's image by demonstrating its customer focus, and the introductory letter will play an important role here. Conversely, carrying out a customer survey in an amateurish or thoughtless way could damage its reputation. There are three main aspects of introducing the survey: whom to tell, how to tell them and what to tell them.

7.4.1 Who?
As a minimum, everyone sampled to take part in the survey must be contacted, but some organisations inform all customers since it demonstrates to the entire customer base that the business is committed to customer satisfaction and is prepared to invest to achieve it. This can be a significant factor in making customers see the organisation as customer-focused. Where companies have a very large customer base, this communication could become costly, depending on the media used. If budgets are not sufficient, the survey can be introduced to those sampled to participate, and communication to the entire customer base provided in the form of feedback after the survey.

7.4.2 How?
This will clearly depend on the size of the customer base. For companies in business markets with few customers it may be productive to explain the process personally to each one through well briefed customer contact staff. For most organisations a personalised introductory letter is the most cost-effective option. With a very large customer base a special mailing would be costly, although it is worth considering its long-term effectiveness in building customer loyalty compared with a similar spend on advertising. If cost does rule out a special mailing, it is often possible to use existing communication channels to inform customers of the CSM programme. This may include printing a special leaflet to be enclosed with an existing mailing or creating space in an existing customer publication such as a newsletter. If feasible, communications at the point of sale or service can be very effective in pre-conditioning customers who may later be contacted to take part in the survey. These could include posters in hospitals, on stations or in retail stores outlining the benefits to customers that have resulted from the organisation's CSM process. They could include informative leaflets in hotels, restaurants or any other premises visited by customers. They could also include a letter handed to customers when a service has been completed or a product purchased, which also gives the member of staff the opportunity to encourage customers to take part in the survey.

7.4.3 What?
There are three things that customers should be told when the survey is introduced:

(a) Why it is being done.
(b) How it will be done.
(c) The feedback that will be provided afterwards.

(a) The purpose of the survey
An example of an introductory letter is shown in Figure 7.3. The starting point is to explain that the purpose of the survey is to identify whether customers' requirements are being fully met, so that action can be taken to improve customer satisfaction where necessary. It is worth emphasising the high priority of customer satisfaction for the organisation, its commitment to addressing any problems perceived by customers and the importance of feedback from customers to highlight the areas concerned.

(b) The survey details
Customers clearly need to know what form the survey will take, so tell them whether it will be a telephone interview, a postal questionnaire or any other type of survey. If the introductory letter accompanies a postal questionnaire the method of survey will be obvious, but it should still explain the instructions for completing and returning the questionnaire. For all methods of data collection, the introductory letter should emphasise that the time commitment will not be burdensome: ten minutes to undertake a telephone interview or complete a questionnaire. Assuming the organisation is adhering to the good practice of not asking any individual to take part in a survey more than once a year, the letter can emphasise that customers are only being asked for a maximum of ten minutes per annum to provide feedback on their satisfaction. Last but not least, for interviews, the second paragraph should stress that an appointment will be made to interview customers at a time convenient to them.

(c) Feedback
Research evidence suggests that promising feedback is the single most effective element of the introductory letter for increasing response rates6. The introductory letter must therefore inform customers that they will receive feedback on the results and on the key issues that have been identified by the survey. It should also promise that the organisation will share with customers the actions that it plans to take to address any issues. This helps enormously in convincing customers that taking part in the survey will be a worthwhile exercise.

7.5 Confidentiality
There has been much debate about whether respondents should be anonymous or named in customer satisfaction surveys. Before suggesting an approach, we will review both sides of the confidentiality debate.

FIGURE 7.3 Introductory letter

Dear...

As part of our ongoing commitment to customer service at XYZ, we are about to conduct our annual survey to measure customer satisfaction. I would therefore like to enlist your help in identifying those areas where we fully meet your needs and those where you would like to see improvements. We attach the utmost importance to this exercise since it is your feedback that will enable us to continually improve our service in order to make all our customers as satisfied as possible.

I believe that this process needs to be carried out independently and have therefore appointed ABC Ltd, an agency that specialises in this work, to carry out the exercise on our behalf. They will contact you in the near future to arrange a convenient time for a telephone interview lasting approximately 10 minutes. Since we undertake not to ask customers to participate in a survey more than once a year, we are asking you for no more than 10 minutes per annum to provide this feedback. ABC will treat your responses in total confidence and we will receive only an overall summary of the results of the interviews. Of course, if there are any particular points that you would like to draw to our attention you can ask for them to be recorded and attributed to you personally if you wish.

After the survey we will provide you with a summary of the results and let you know what action we plan to take as a result of the findings. I regard this as a very important step in our aim of continually improving the level of service we provide to our customers and I would like to thank you in advance for helping us with your feedback.

Yours sincerely

XXXXXX
Chief Executive Officer
XYZ Ltd.


7.5.1 Confidentiality – the advantages
1. In the market research industry, respondent anonymity has traditionally been the norm, on the assumption that confidentiality is more likely to elicit an impartial response from respondents1,4,27. This is based on evidence showing that respondents' answers can differ significantly when they are interviewed by people whom they know. If customers are likely to have a continuing relationship with an employee, such as an account manager, they may not want to offend the employee or may want to protect a future negotiating stance. For example, if a salesperson personally conducts a customer satisfaction interview with his or her customers, are they likely to give honest answers to questions about the performance of the salesperson? Even if the salesperson does not personally conduct the interview (or if a self-completion questionnaire is used), respondents' answers may still be influenced if they know that their responses will be attributed and the salesperson will see them.

2. Problems caused by lack of confidentiality are often exacerbated in post-event surveys that are completed on the spot by the customer and collected in by the supplier. A typical example would be a service provided in the home, such as an electrical or plumbing installation. Although good for response rates, customers will often be deterred from honesty if the employee who has provided the service watches them fill it in, especially since many of the questions will refer directly to the individual concerned. The system is also open to abuse by unscrupulous employees who may try to influence the respondent or may contrive to 'lose' questionnaires recording low scores. As with non-response bias, a survey suffering from employee-induced bias could be very misleading and a very dangerous basis for decision making.

3. Confidentiality is also supported by considering the distinctive role of research compared with other customer service initiatives. Research should focus on the 'big picture' rather than on individual customers. It is normally undertaken with a sample of customers rather than a census, but is intended to accurately represent the entire customer base. Its value for management decision making is in highlighting trends, problems with processes and widely held customer irritations, which can be addressed by introducing improvements that benefit all customers and improve the company's position in the marketplace. The purpose of research is not to highlight specific problems with individual customers, for two reasons. Firstly, even a very serious problem (between one customer and an individual sales person, for example) may not be representative of a wider problem that needs corrective action. Secondly, research is not the best way to identify specific problems with individual customers. Organisations should have effective customer service functions and complaints systems for this purpose. Relying on a periodic survey of a sample of customers to identify problems with individual customers suggests very poor management of customer service.


4. Whatever the frequency of customer satisfaction surveys, when they take place repeatedly customers learn about the process and draw conclusions that affect their future behaviour. If they can be open and honest without any personal repercussions, they are more likely to be honest in future surveys. On the other hand, if customers learn that the organisation, their sales person or any other employee is using survey responses to manage specific relationships with customers, they may gradually learn to 'use' the survey process. At first they may simply avoid making negative comments that may harm personal relationships with one or more employees of the supplier, but over time they may learn to become more manipulative, seeking to use the survey process to influence pricing, service levels or other aspects of the cost-benefit equation.

7.5.2 Confidentiality – the disadvantages
1. The obvious disadvantage of respondent confidentiality is that it gives the organisation no opportunity to respond to and fix any serious problems causing dissatisfaction, and perhaps imminent defection, of individual customers. Some organisations see the customer satisfaction survey as an additional service recovery opportunity, using a 'hot alert' system to immediately highlight any serious problems with individual customers so that they can be resolved. Companies using this approach maintain that the opportunity to prevent possible customer defections outweighs the advantages of respondent confidentiality. Even one potential defection is worth averting.

2. In business markets, particularly those with very close customer-supplier relationships, the case against confidentiality may be even stronger. In these circumstances, suppliers may feel that customers would expect them to know what they had said in the survey and to respond with proposals to address their specific problems28.

3. A further disadvantage of anonymity is that it makes it impossible to add responses to the customer database and use modelling techniques to project responses onto other, similar customer types. For organisations with a very large customer base this can be very helpful in classifying customers and identifying segments for tailored CRM initiatives.

7.5.3 The right approach to respondent confidentiality
Our view is that the most important principle underpinning data collection is that the views gathered should accurately represent the views held by respondents. If there is a chance that lack of respondent confidentiality could compromise the reliability of the data collected, the price paid for knowing 'who said what' is far too high. However, a compromise approach is possible. Respondents can be promised confidentiality but, at the end of the interview, with full knowledge of what they were asked and how they replied, can be asked whether they wish to remain anonymous or whether they would be happy to have their views linked with their name. A space can be included on a self-completion questionnaire to provide name and personal details for any respondents wishing to do so. Some customers, especially those in business markets who have developed a partnership approach with suppliers, will prefer their comments to be attributed to them. Equally, for those who prefer anonymity, the confidentiality of the interview and the survey process will be protected. In consumer markets customers will often provide their details if they require a response from the supplier, e.g. to resolve a problem. With this approach, a hot alert system can still be used, but only with respondents who have consented to being named. Of course, if an organisation decides that its survey will not be anonymous, the Data Protection Act, as well as ethics, dictates that this must be made totally clear to respondents before they take part, a disclosure that will typically have an adverse effect on response rates as well as the accuracy of the data.
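To make the compromise concrete, the sketch below is an illustrative Python example only; the ten-point scale, alert threshold and field names are assumptions made for the illustration rather than anything prescribed by the process described here. It escalates a response for service recovery only when the respondent has agreed to be named and has reported low satisfaction, so responses from customers who chose to remain anonymous never reach the alert list and the confidentiality promise is preserved.

```python
# Hypothetical sketch of a consent-gated 'hot alert': a response is escalated
# for service recovery only if the respondent agreed to be named AND scored low.
# The threshold, scale and field names are illustrative assumptions.

from dataclasses import dataclass

ALERT_THRESHOLD = 4  # e.g. scores of 4 or below on a 10-point satisfaction scale

@dataclass
class SurveyResponse:
    customer_name: str
    overall_satisfaction: int   # 1 (very dissatisfied) to 10 (very satisfied)
    consented_to_be_named: bool

def hot_alerts(responses):
    """Return only the responses that may be passed on for follow-up."""
    return [
        r for r in responses
        if r.consented_to_be_named and r.overall_satisfaction <= ALERT_THRESHOLD
    ]

responses = [
    SurveyResponse("A Ltd", 3, consented_to_be_named=True),    # alerted
    SurveyResponse("B Ltd", 2, consented_to_be_named=False),   # stays anonymous
    SurveyResponse("C Ltd", 9, consented_to_be_named=True),    # satisfied, no alert
]

for r in hot_alerts(responses):
    print(f"Hot alert: contact {r.customer_name} (score {r.overall_satisfaction})")
```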

KEY POINT
Customers should be promised confidentiality at the start of the interview or questionnaire, but at the end can be asked if they are happy for their views to be attributed.

7.5.4 Legal issues
The main legislation affecting surveys in the UK is the 1998 Data Protection Act. Its main purpose as far as surveys are concerned is to ensure confidentiality in the collection and use of personal data. For anonymous research surveys this is not an issue, but where respondents' responses are to be linked with their name, or other identifiable personal data, they must be told beforehand or asked to give their permission at the end of the interview or questionnaire. If attributable, data should be stored for no longer than necessary. The Act does not specify the length of time, but one year would be reasonable since it is not unusual for data to be re-analysed to produce additional information at a later date.

A further relevant distinction is the purpose of the research. Surveys conducted solely for research purposes should conform to the requirements outlined above. Surveys collecting data for other purposes, such as sales and marketing activities, must specify the exact use(s) to which that data will be put before asking the customer to agree to their responses being recorded, stored on a database and used for those purposes. If the respondent agrees, the data can subsequently be used only for the specific purposes that the customer has approved. So if the customer had agreed to the data being used for targeting mailshots, for example, it would not be permissible to use the information for tele-sales. The Market Research Society Code of Conduct (which is not legally binding but guides good practice) gives detailed guidance about the implications of the Data Protection Act for researchers.

For full details of the Data Protection Act or the Market Research Society Code of Conduct, see the web addresses in Appendix 2.

7.6 When to survey
The remaining decisions to be taken about the survey concern timing and frequency. Of these, frequency should be considered first.

7.6.1 Continuous tracking or periodic surveys
Customer satisfaction surveys can be periodic or continuous. As well as surveys where the data are collected continuously and reported monthly or quarterly, continuous tracking also refers to frequent surveys, e.g. every month or every quarter, even if the data are not collected continuously over that time. For periodic surveys the data collection happens at a point in time, providing a snapshot picture of customers' level of satisfaction. Improvement initiatives can then be undertaken, with the next periodic survey providing evidence about the organisation's success in improving customer satisfaction. Surveys are categorised as periodic if the survey intervals are at least six months apart.

(a) Continuous tracking
Surveys are more likely to be continuous when the customer-supplier relationship revolves around a specific event or transaction, such as taking out a mortgage, buying a computer, calling a helpline or going on vacation. In these circumstances it is very important that customers are surveyed very soon after the event, before their memory fades. Surveying customers weeks after an uneventful stay in a hotel or a brief conversation with a telephone help desk is completely pointless. Unless the event was very important, e.g. the purchase of a new car or a piece of capital equipment, customers should be surveyed no more than four weeks after the event. Results from continuous tracking surveys are usually rolled up and reported monthly or quarterly. The big advantage of frequent reporting is that managers do not have to wait too long for evidence that their customer service initiatives are working, and it helps to keep the spotlight on customer satisfaction within the organisation.

Event driven surveys tend to be tactical in nature and operational in focus. They often feed quickly into action but are more likely to be volatile. The closer to the event, the more volatile they will be. If customers are surveyed at the point of service delivery, as they check out of a hotel for example, their responses will be very heavily influenced by their recent experience27. A disappointing breakfast a few minutes earlier may result in poor satisfaction ratings across the board. Conversely, a very pleasant experience could have a positive 'halo effect'. Consequently, this type of 'post transaction' survey may not give an accurate measure of customers' underlying satisfaction nor be a reliable guide to future purchasing behaviour. The irate customer who was made to wait too long to check out of the hotel may vow, in the heat of the moment, not to return, but some weeks later, when next booking a hotel, the incident will have waned in importance and a more measured purchase decision will be made. Therefore, whether data collection is continuous or periodic, surveying customers at least a week or two later, away from the point of sale or service delivery, will provide a much more accurate measure of underlying customer satisfaction and a better prediction of future loyalty behaviour29.

KEY POINT
Surveying customers away from the point of sale provides a more reliable measure of underlying customer satisfaction and loyalty.

(b) Periodic surveys
Periodic surveys are more suited to ongoing customer-supplier relationships and are often more strategic in focus. Questions will cover the total product and will focus on customers' most important requirements. Periodic surveys are normally conducted annually or bi-annually. Before making a decision on frequency it is useful to consider the issues highlighted by the Satisfaction Improvement Loop. As we all know, the purpose of measuring customer satisfaction is not to conduct a survey but to continually improve the company's ability to satisfy and retain its customers. Figure 7.4 illustrates the sequence of events that must occur before the next customer survey can be expected to show any improvement in customer satisfaction. It is therefore necessary to consider how long the Satisfaction Improvement Loop will take to unfold in the organisation. Unless the organisation is very slow at making decisions or taking action, the loop should not take as long as one year, and monthly or quarterly surveys will be more appropriate for organisations that are capable of swift decision making and implementation of actions to improve customer satisfaction.

FIGURE 7.4 The Satisfaction Improvement Loop

Survey → Survey results → Internal feedback → Decisions on actions → Implementation of actions → Service improvement → Customers notice improvements → Customer attitude change → Customer satisfaction → (next) Survey


Of course, the big disadvantage of less frequent reporting of customer satisfaction is the longer delay before the success of satisfaction improvement initiatives can be evaluated. It also makes it much more difficult for organisations to keep employees focused on the importance of customer satisfaction. Periodic surveys are therefore most suitable for business-to-business companies, which often have a small customer base, and for organisations where the customer satisfaction improvement loop will be lengthy.

KEY POINT
Frequent reporting of customer satisfaction minimises the delay between taking action and seeing improvement, but periodic surveys may be more appropriate for B2B companies with a small customer base, or for organisations that are slow to implement change.

7.6.2 Timing
The main point about timing, especially for annual surveys, is that it should be consistent. Don't survey in the summer one year and in the winter the following year. Any number of factors affecting the customer-supplier relationship could change across the seasons. Companies will be aware of significant seasonal events in their industry, e.g. annual price rises, and these potentially distorting factors should be avoided.

Conclusions
1. To conduct a customer satisfaction survey, most organisations will choose between a telephone survey and self-completion questionnaires, typically in the form of a postal survey or possibly a web survey. Of the two methods, self-completion surveys are cheaper but telephone surveys will usually provide more detail and greater reliability due to higher response rates.

2. Electronic surveys will often be inappropriate for customer satisfaction measurement in consumer markets due to unrepresentative samples.

3. In theory, methods of data collection can be mixed, provided core questions occur at the beginning of the questionnaire and are asked consistently across all methods used. In practice, however, one data collection method for the whole survey is preferable since it eliminates unnecessary variables.

4. Low response rates will render the results of a customer satisfaction survey meaningless, so telephone interviews are normally undertaken unless a good response rate (at least 30%) can be achieved.

5. In consumer markets, huge samples sometimes dictate the use of self-completion questionnaires, but in business markets, where sample sizes are usually relatively small, telephone interviews are typical.

6. To improve response rates, a good introductory letter is crucial and reminders are very effective. Affordable incentives typically don't work.

7. To guarantee honest and objective answers, respondent confidentiality must be offered. If respondents are happy to be named, a hot alert system will enable the organisation to address any specific instances of high dissatisfaction.

8. Continuous tracking with monthly or quarterly reporting provides quick feedback on service improvement initiatives and helps to keep the spotlight on customer satisfaction. Periodic surveys tend to be more appropriate for companies in B2B markets.

References
1. McGivern, Yvonne (2003) "The Practice of Market and Social Research", Prentice Hall / Financial Times, London
2. Dillon, Madden and Firtle (1994) "Marketing Research in a Marketing Environment", Richard D Irwin Inc, Burr Ridge, Illinois
3. Ofcom (2006) "The Consumer Experience: Telecoms, Internet and Digital Broadcasting", HMSO, London
4. Crimp, Margaret (1985) "The Marketing Research Process", Prentice-Hall, London
5. Yu and Cooper (1983) "Quantitative Review of Research Design Effects on Response Rates to Questionnaires", Journal of Marketing Research (February)
6. Powers and Alderman (1982) "Feedback as an incentive for responding to a mail questionnaire", Research in Higher Education 17
7. Schlegelmilch and Diamantopoulos (1991) "Prenotification and mail survey response rates: a quantitative integration of the literature", Journal of the Market Research Society 33 (3)
8. Dillman, D A (1978) "Mail and telephone surveys: the Total Design Method", John Wiley and Sons, New York
9. Peterson, Albaum and Kerin (1989) "A note on alternative contact strategies in mail surveys", Journal of the Market Research Society 31 (3)
10. Sutton and Zeits (1992) "Multiple prior notification, personalisation and reminder surveys: do they have an effect on response rates?", Marketing Research: A Magazine of Management and Applications 4 (4)
11. Kanuk and Berenson (1975) "Mail Surveys and Response Rates: a Literature Review", Journal of Marketing Research (November)
12. Yammarino, Skinner and Childers (1991) "Understanding Mail Survey Response Behavior: a Meta-Analysis", Public Opinion Quarterly
13. Brennan, M (1992) "The effect of a monetary incentive on mail survey response rates: new data", Journal of the Market Research Society 34 (2)
14. Dommeyer, C J (1988) "How form of the monetary incentive affects mail survey response", Journal of the Market Research Society 30 (3)
15. James and Bolstein (1992) "Large monetary incentives and their effects on mail survey response rates", Public Opinion Quarterly 56 (4)
16. Paolillo and Lorenzi (1984) "Monetary incentives and mail questionnaire response rates", Journal of Advertising 13
17. Furse and Stewart (1982) "Monetary incentives versus promised contribution to charity: new evidence on mail survey response", Journal of Marketing Research 19
18. Kalafatis and Madden (1995) "The effect of discount coupons and gifts on mail survey response rates among high involvement respondents", Journal of the Market Research Society 37 (2)
19. Kotler, Philip (1986) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey
20. Freeman and Butler (1976) "Some Sources of Interviewer Variance in Surveys", Public Opinion Quarterly (Spring)
21. Bailar, Bailey and Stevens (1977) "Measures of Interviewer Bias and Variance", Journal of Marketing Research (August)
22. Tull and Richards (1980) "What Can Be Done About Interviewer Bias?", in "Research in Marketing", ed Sheth, J, JAI Press, Greenwich, Connecticut
23. Havice and Banks (1991) "Live and Automated Telephone Surveys: a Comparison of Human Interviewers and Automated Technique", Journal of Marketing Research (February)
24. Groves and Mathiowetz (1984) "Computer Assisted Telephone Interviewing: Effects on Interviewers and Respondents", Public Opinion Quarterly
25. Kish, Leslie (1965) "Survey Sampling", John Wiley and Sons, New York
26. Walker and Burdick (1977) "Advance Correspondence and Error in Mail Surveys", Journal of Marketing Research (August)
27. Szwarc, Paul (2005) "Researching Customer Satisfaction and Loyalty", Kogan Page, London
28. Vavra, Terry (1997) "Improving your Measurement of Customer Satisfaction", American Society for Quality, Milwaukee
29. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California

Chapter seven 5/7/07 09:55 Page 109

Page 117: Customer Satisfaction

CHAPTER EIGHT

Keeping the score

If you want to measure anything you have to have a measuring device. Unfortunately, no measuring device for customer satisfaction has ever achieved the universal adoption of the Celsius scale, the speedometer or the 12-inch ruler. That's partly because there isn't a definitive answer to the question "which is the most suitable rating scale for market research?" However, much of the reason for the proliferation of rating scales used for CSM is due to practitioners' ignorance about the characteristics of different scales and their advantages and disadvantages for customer satisfaction measurement. Since the choice of rating scale is one of the most contentious areas in customer satisfaction research, we will devote a complete chapter to the issue before we examine the final design of the main survey questionnaire.

At a glance
In this chapter we will:

a) Consider whether a precise measure or general customer feedback is more appropriate for monitoring customer satisfaction.

b) Explain the difference between parametric and non-parametric statistics and their implications for CSM.

c) Explore the relative merits of verbal and numerical scales.

d) Outline the arguments for and against a mid-point.

e) Discuss the suitability of expectation scales.

f) Consider how many points should be included on a scale for measuring satisfaction.

g) Recommend the most suitable scale for measuring and monitoring customer satisfaction and loyalty.

8.1 Why do you need a score?
Before we start it's worth considering why a customer satisfaction survey should result in a measure or score. Organisations could easily gather feedback from customers without worrying about a lot of time-consuming survey methodology and complicated statistical analysis. They could simply listen to customers, take note of the things they don't like and fix the problems. Whilst the simplicity and relatively low cost of this approach is very appealing, it suffers from two fundamental problems:

1. Taking action
Since organisations can't address everything simultaneously they need to prioritise the allocation of resources. Without measures it would be impossible to make reliable decisions about the best areas to focus resources to improve customer satisfaction.

2. Judging success
If customer satisfaction is a key indicator of business performance, trying to improve it without a yardstick for judging success would be like trying to improve profits without producing financial accounts.

Therefore, the choice of rating scale for customer satisfaction measurement should be based on its suitability for achieving these two objectives rather than its validity for many other kinds of market research.

8.2 Parametric and non-parametric statistics
Statisticians refer to two types of data – parametric and non-parametric. In parametric statistics the data is a measured quantity, such as a volume of liquid, the speed of a vehicle, the temperature or, in research, numerical scores generated by interval or ratio scales1. With parametric data you can draw bell curves (see Chapter 6), and a normal distribution is defined by two parameters, the mean and the standard deviation. Parametric, normally distributed data permit researchers to draw inferences about the extent to which a result from a sample applies to the whole population and can be analysed using multivariate statistics such as analysis of variance and covariance, regression analysis and factor analysis.

In non-parametric statistics, the data is not a measurable quantity of something but a count or a ranking2, such as how many people have fair hair or black hair, how many times it was sunny or rainy or how many customers were satisfied or dissatisfied. Most types of non-parametric data are literally just counts of how many times something occurred, such as the number of days in the year that there was zero rainfall. Non-parametric statistics are analysed by counting up the number of incidences, such as how many customers ticked the 'satisfied' or 'very satisfied' boxes, and this is known as a frequency distribution. From non-parametric statistics you can draw conclusions about how many days it didn't rain, or how many customers are satisfied, but not about the average rainfall or about the average level of satisfaction that your organisation is delivering. The statistical techniques used to analyse numbers can't be employed with non-parametric statistics, which have to be analysed using counts and frequency distributions and tests such as chi-square.
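To make the distinction concrete, here is a minimal sketch in Python (the scores and category labels are invented for illustration): numerical ratings from a 10-point scale are summarised with a mean and standard deviation, while verbal-scale responses can only be summarised with a frequency distribution.

from statistics import mean, stdev
from collections import Counter

# Parametric: numerical scores from a 10-point scale (hypothetical data)
numerical_scores = [8, 9, 7, 10, 8, 6, 9, 8, 7, 9]
print("Mean satisfaction:", round(mean(numerical_scores), 2))
print("Standard deviation:", round(stdev(numerical_scores), 2))

# Non-parametric: responses from a 5-point verbal scale (hypothetical data)
verbal_responses = ["Very satisfied", "Satisfied", "Satisfied", "Neither",
                    "Satisfied", "Very satisfied", "Dissatisfied", "Satisfied"]
frequency = Counter(verbal_responses)
for category, count in frequency.items():
    print(f"{category}: {count} ({count / len(verbal_responses):.0%})")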


KEY POINT
Data are either parametric or non-parametric and each must be analysed with appropriate statistical techniques. Data generated by verbal scales are non-parametric so have limited analysis possibilities such as counts and frequency distributions and cannot be treated as though they were numbers. Numerical scales produce parametric data that can be analysed using the non-parametric techniques plus a wide range of statistical techniques suitable for numbers.

8.3 Interval versus ordinal scales
It is not unusual in satisfaction research to see simple verbal scales, where each point on the scale is given a verbal description (e.g. 'strongly agree', 'agree' or 'very satisfied', 'satisfied' etc). These are illustrated in Figures 8.1 and 8.2. The problem is that such scales have only ordinal properties. They give an order from good to bad or satisfied to dissatisfied without quantifying it. In other words, we know that 'strongly agree' is better than 'agree' but we don't know by how much. Nor do we know if the distance between 'strongly agree' and 'agree' is the same as the distance between 'agree' and 'neither agree nor disagree'. This is why verbal scales have to be analysed using a frequency distribution, which simply involves counting how many respondents ticked each box.

FIGURE 8.1 Verbal scale
"Below are some features of visiting _______ Dental Practice. Please place an 'X' in the box which most accurately reflects how satisfied or dissatisfied you are with each item or put an 'X' in the N/A box if it is not relevant to you."
Response options: Very satisfied / Quite satisfied / Neither satisfied nor dissatisfied / Quite dissatisfied / Very dissatisfied / N/A
Items rated: 1. Helpfulness of reception staff; 2. Location of the surgery; 3. Cost of the dental treatment

FIGURE 8.2 Likert scale
"Below are some features of eating out at _______. Please place an 'X' in the box which most accurately reflects how much you agree or disagree with the statement or in the N/A box if it is not relevant to you."
Response options: Agree strongly / Agree / Neither agree nor disagree / Disagree / Disagree strongly / N/A
Items rated: 1. The restaurant was clean; 2. The service was quick; 3. The food was high quality

This means that data from verbal scales can be manipulated only with 'non-parametric statistics', based on the counts of responses in different categories. According to Allen and Rao, "The use of ordinal scales in customer satisfaction measurement should be discouraged. It is meaningless to calculate any of the fundamental distributional metrics so familiar to customer satisfaction researchers. The average and standard deviation, for example, are highly suspect. Similarly, most multivariate statistical methods make assumptions that preclude the use of data measured on an ordinal scale."3 Without mean scores for importance and satisfaction it is not possible to calculate a weighted customer satisfaction index (the most accurate type of headline measure for monitoring success) nor 'satisfaction gaps' – a huge handicap for the actionability of customer satisfaction surveys. (For details see Chapters 11 and 12 respectively.)
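As a rough illustration of what mean scores make possible – the figures below are invented and this is only a sketch of the general idea, not the exact method, which is described in Chapters 11 and 12 – mean importance scores can be used to weight mean satisfaction scores into a single headline index, and a 'satisfaction gap' can be calculated for each requirement:

# Hypothetical mean scores (out of 10) for three customer requirements
requirements = {
    # requirement: (mean importance, mean satisfaction)
    "Helpfulness of staff": (9.2, 8.1),
    "Ease of parking": (7.5, 6.4),
    "Queue times at checkout": (8.8, 7.2),
}

total_importance = sum(imp for imp, sat in requirements.values())

# Weight each mean satisfaction score by the relative importance of its requirement
weighted_index = sum(sat * imp / total_importance
                     for imp, sat in requirements.values()) / 10 * 100

for name, (imp, sat) in requirements.items():
    print(f"{name}: satisfaction gap = {imp - sat:.1f}")
print(f"Weighted satisfaction index: {weighted_index:.1f}%")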

The Likert scale poses some additional problems for satisfaction research. Developed by Rensis Likert in the 1930s2, it has proved to be very useful for exploring people's social, political and psychological attitudes. Likert scales work best with bold statements, like those shown in Figure 8.2, rather than neutral ones4, but this introduces an element of bias. To minimise the so-called 'acquiescence bias' (people's tendency to agree with a series of statements), the list of statements should be equally divided between favourable and unfavourable statements5, so that respondents with a particular attitude would find themselves sometimes agreeing and sometimes disagreeing with the statements. In practice, this tends to be a problem for CSM because organisations are very reluctant to use strong negative statements (e.g. "the restaurant was filthy", "the service was very slow" … agree / disagree). Consequently, satisfaction surveys using Likert scales tend to suffer from a very high degree of acquiescence bias.

KEY POINT
Likert scales tend to suffer from acquiescence bias when used for satisfaction surveys unless around half of the statements are negatively biased, which tends to be politically unacceptable.

Shown in Figures 8.3 and 8.4, interval scales use numbers to distinguish the points on the scale. They are suitable for most statistical techniques because they do permit valid inferences concerning the distance between the scale points. For example, we know that the distance between points 1 and 2 is the same as that between points 3 and 4, 4 and 5 etc. Consequently, data from interval scales are assumed to follow a normal distribution (see Chapter 6), so they can be analysed using 'parametric statistics'. This permits the use of means and standard deviations, the calculation of indices and the application of advanced multivariate statistical techniques to establish the relationships between variables in the data set – an essential pre-requisite for understanding things like the drivers of satisfaction and loyalty. (See Chapters 10 and 14 for further explanation of these analytical points.) For a scale to have interval properties it is important that only the end points are labelled3; the labels (e.g. Very satisfied … Very dissatisfied) simply serve as anchors to denote which end of the scale is good / bad, agree / disagree etc.

8.4 The meaning of words
Some people subjectively prefer verbal scales because they feel that they understand the meaning of each itemised point on the scale, and at the individual level it is true that each person will assign a meaning that they understand to each point on the scale. The same people would often say that, by contrast, a numerical score doesn't appear to have a specific meaning – does one person's score of 7/10 refer to the same level of performance as another person's score of 7?

It is true that the word 'satisfied' has more verbal meaning than a score of 8/10. However, whilst individuals will ascribe a meaning to 'satisfied' that they are personally happy with, the problem is that it often doesn't have exactly the same meaning to everyone.

FIGURE 8.3 5-point numerical scale
"Below are some features of shopping at XYZ. Using the scale below where 5 means "completely satisfied" and 1 means "completely dissatisfied" please circle the number that most accurately reflects how satisfied or dissatisfied you are with XYZ or circle N/A if it is not relevant to you."
Response options for each item: N/A  1  2  3  4  5 (1 = completely dissatisfied, 5 = completely satisfied)
Items rated: 1. Cleanliness of store; 2. Layout of store; 3. Helpfulness of staff

FIGURE 8.4 10-point numerical scale
"Below are some features of shopping at XYZ. Using the scale below where 10 means "completely satisfied" and 1 means "completely dissatisfied" please circle the number that most accurately reflects how satisfied or dissatisfied you are with XYZ or circle N/A if it is not relevant to you."
Response options for each item: N/A  1  2  3  4  5  6  7  8  9  10 (1 = completely dissatisfied, 10 = completely satisfied)
Items rated: 1. Ease of parking; 2. Choice of merchandise; 3. Queue times at checkout

It is certainly undeniable that numbers, such as a score of 8/10, don't have a meaning that people can readily interpret into words, but the fact that numbers do not have a meaning is their big strength for measuring. It's why numbers were invented. Imagine if the early traders had to use a verbal scale such as 'fair', 'very fair' or 'satisfied', 'very satisfied' to judge the amount of wheat that should be traded for a horse. Numbers make it possible for people to understand measures because they know not only that 4 kilos is heavier than 3 kilos, but how much heavier.

The same logic applies to satisfaction measurement. If a random sample of customers gives a set of numerical scores for satisfaction this year, and next year another random sample gives a slightly higher set of numbers, we know they are more satisfied, and by how much. Moreover, provided the sample is large enough we can be sure within a very narrow margin of error that the higher level of satisfaction applies to the whole customer base. We may not be able to give a verbal meaning to a satisfaction index of 83.4% or a score of 7.65 for 'ease of doing business', but they will be truly accurate and comparable measures of the organisation's success in delivering satisfaction from one year to the next. Moreover, since people interpret words in a variety of ways, it would be pointless to attempt to apply a verbal description to the scores achieved. (In practice, benchmarking is the way to achieve this, as explained in Chapter 12.)
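As a rough sketch of the margin of error point (the sample size, mean and standard deviation below are invented; sampling is covered in Chapter 6), an approximate 95% confidence interval for a mean satisfaction score is the mean plus or minus 1.96 times the standard deviation divided by the square root of the sample size:

import math

# Hypothetical survey result on a 10-point scale
n = 400            # number of respondents
mean_score = 8.12  # mean satisfaction this year
std_dev = 1.4      # standard deviation of the individual scores

margin = 1.96 * std_dev / math.sqrt(n)  # approximate 95% margin of error
print(f"Mean satisfaction: {mean_score:.2f} +/- {margin:.2f}")
# With 400 respondents the margin is about 0.14, so a rise to 8.40 next
# year would comfortably exceed sampling error.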

KEY POINT
Numbers provide the most objective and unambiguous basis for monitoring changes in customer satisfaction.

There are two particular types of CSM survey where numerical scales are much better than verbal scales for ease of completion. Firstly, numerical scales work much better for interviewing since respondents simply have to focus on giving a score, out of 10 for example, rather than struggling to remember the different points on the verbal scale, and anybody who has ever tried to interview customers using a verbal scale will know exactly how difficult it is, especially on the telephone. Secondly, for international surveys the problem of consistent interpretation of verbal scales is hugely exacerbated by language and cultural differences. Any company conducting international research would be extremely unwise to consider anything other than a numerical rating scale.

KEY POINT
Numerical scales are much more suitable than verbal scales for telephone interviews and for international surveys.

8.5 The mid-point
Amateur researchers tend to worry about the mid-point on a rating scale, often believing that the mere existence of a mid-point encourages everyone to use it as an easy option.


Evidence from CSM research totally contradicts this popular myth. As suggested in Chapter 5, the main difficulty when measuring importance is people's tendency to score the higher points on the scale. As far as satisfaction is concerned, it is well established that customer satisfaction data is almost always skewed towards the positive end of the scale (see Section 8.6). However, the evidence from thousands of customer satisfaction surveys conducted by The Leadership Factor is that respondents score it how they see it, with relatively few going for the middle of the scale unless the organisation is a very mediocre performer.

In theory one should include a mid-point on the scale, since it is poor research to force respondents to express an opinion they don't hold. They may genuinely be neither satisfied nor dissatisfied. However, we would have few concerns whether a scale had a mid-point or not. If you feel happier with 4 or 6 points rather than 5 or 7, we don't believe it will make much difference, though bear in mind that since people don't target the mid-point, there won't be any detriment to including it either. Interestingly, a 10-point scale doesn't have a mid-point, although this is academic since tests show that above 7 points, respondents typically don't focus on where the mid-point is.

8.6 Aggregating data from verbal scales
Since it is not statistically acceptable to convert the points on a verbal scale into numbers and generate a mean score from those numbers, the normal method of analysing verbal scales is a frequency distribution (see Chapter 10). This leads organisations to report verbal scales on the basis of "percentage satisfied" (i.e. those ticking the boxes above the mid-point). As shown in Figure 8.5, this often masks changes in customer satisfaction caused by the mix of scores within the 'satisfied' and 'dissatisfied' categories. In fact, if results are reported in this way there is little point having more than 3 points on the scale – satisfied, dissatisfied and a mid-point. By contrast, the mean score from a numerical scale will use data from all points on the scale so will reflect changes from any part of the spectrum of customer opinion.

FIGURE 8.5 Aggregating data from verbal scales
Percentage of respondents giving each response (Very dissatisfied / Dissatisfied / Neither satisfied nor dissatisfied / Satisfied / Very satisfied) and the resulting "percentage satisfied":

Cleanliness of the restaurant: 10% / 15% / 25% / 25% / 25% – percentage satisfied: 50%
Cleanliness of the restaurant: 25% / 15% / 10% / 25% / 25% – percentage satisfied: 50%
Quality of the food: 15% / 15% / 20% / 10% / 40% – percentage satisfied: 50%
Quality of the food: 15% / 15% / 20% / 40% / 10% – percentage satisfied: 50%
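The masking effect shown in Figure 8.5 is easy to reproduce. In this minimal sketch, based on the 'Quality of the food' distributions in the figure, the headline 'percentage satisfied' stays at 50% even though a large group of customers moves from 'satisfied' to 'very satisfied':

# Two distributions of verbal-scale responses for "Quality of the food"
# (percentages of respondents in each category)
year1 = {"very dissatisfied": 15, "dissatisfied": 15, "neither": 20,
         "satisfied": 40, "very satisfied": 10}
year2 = {"very dissatisfied": 15, "dissatisfied": 15, "neither": 20,
         "satisfied": 10, "very satisfied": 40}

def percentage_satisfied(distribution):
    # "Percentage satisfied" counts only the top two boxes
    return distribution["satisfied"] + distribution["very satisfied"]

print(percentage_satisfied(year1), percentage_satisfied(year2))  # 50 50
# The headline figure is identical even though 30% of customers have moved
# from 'satisfied' to 'very satisfied'; a mean score from a numerical scale
# would register the change.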


KEY POINT
Satisfaction measures from verbal scales do not use all the data so provide a poor basis for monitoring changes in customer satisfaction.

8.7 Expectation scales
Some organisations use expectation scales in an attempt to measure the extent to which customers' requirements have been met (see Figures 8.6 and 8.7). Whilst these scales have some intuitive appeal, they suffer from three serious drawbacks for CSM. The first is that, like any verbal-type scale, they have only ordinal properties so suffer from all the analytical limitations outlined above. A much bigger drawback, however, is their unsuitability as a benchmark for judging the organisation's success. As pointed out by Grapentine6, if the measure changes in future is it because the company's performance has improved or deteriorated, or is it down to changes in customers' expectations? In the same article, Grapentine also highlights the third problem with expectation scales for measuring customer satisfaction. For many 'givens', such as 'cleanliness of the restaurant', 'accuracy of billing' or 'safety of the aeroplane', customers never score above the mid-point. Whatever the level of investment or effort required to achieve them, clean restaurants, bills without mistakes and aeroplanes that don't crash will never do more than meet the customer's expectations.

KEY POINT
Expectation scales are not suitable for measuring and monitoring customer satisfaction.

FIGURE 8.6 A 5-point expectations scale
"Please comment on how the service you received compared with your expectations by ticking one box on each line. Please tick the N/A box if it is not relevant to you."
Response options: Much better / Better / As expected / Worse / Much worse / N/A
Items rated: Helpfulness of staff; Friendliness of staff; Cleanliness of the restaurant

FIGURE 8.7 A 3-point expectations scale
"Please comment on how the service you received compared with your expectations by ticking one box on each line. Please tick the N/A box if it is not relevant to you."
Response options: Exceeded my expectations / Met my expectations / Did not meet my expectations / N/A
Items rated: Cleanliness of the toilets; Waiting time for your table; Waiting time for your meal


8.8 Number of points
It is not practical to have many points on a verbal scale. 5-point verbal scales, like those shown in Figures 8.1 and 8.2, are the norm. This is a considerable disadvantage since the differences between satisfaction survey results from one period to the next will often be very small.

As we have already mentioned in Section 8.5, one of the characteristics of CSM data is that it tends to be skewed towards the high end of the scale. This merely reflects the fact that companies generally perform well enough to make most customers broadly satisfied rather than dissatisfied. (It is interesting to note that scores from situations where high levels of dissatisfaction do exist, typically when customers have no choice, do exhibit a much more normal distribution.) What most companies are mainly measuring therefore is degrees of satisfaction, and since they are tracking small changes in that zone, it becomes very important to have sufficient discrimination at the satisfied end of the scale and, for analytical purposes, a good distribution of scores – and this is the big problem with five point scales.

The problem is exacerbated by a tendency amongst some people to avoid the extremes of the scale. Even if we're mainly measuring degrees of satisfaction, this isn't a major problem on a 10-point scale because there are still 4 options (6, 7, 8 and 9) for the respondent who is reluctant to score the top box. With at least four choices it is therefore quite feasible for customers to use the 10-point scale to acknowledge relatively small changes in a supplier's performance. By contrast, it's a big problem on a 5-point scale because for anyone reluctant to use the extremes of a scale there's only one place for the satisfied customer to go – and because so many people go there it has become known as the 'courtesy 4'! This often results in a narrow distribution of data with insufficient discrimination to monitor fine changes in a supplier's performance, so the slow, small improvements in satisfaction that one normally sees in the real world will often be undetected by CSM surveys using 5-point scales.

Consequently, whilst one can debate the rights and wrongs of different scales from a pure research point of view, the disadvantages of an insufficiently discriminating scale from a practical business management perspective are obvious. It will often lead to disillusionment amongst staff with the customer satisfaction process on the grounds that "whatever we do it makes no difference, so it's pointless trying to improve customer satisfaction". It is therefore essential from a business perspective to have a CSM methodology that is discriminating enough to detect any changes in customer satisfaction, however small. As well as being more suitable for tracking small changes over time, scales with more points discriminate better between top and poor performers so tend to have greater utility for management decision making in situations where a company has multiple stores, branches or business units.


As illustrated by the charts in Figure 8.8, whilst both 5 and 10 point scales exhibit a skewed distribution, data from the 10 point scale are more normally distributed and show more variance.

It should also be noted that variance can be further improved on numerical scales by increasing the bi-polarity of the anchored end points. A 10 point scale with end points labelled 'dissatisfied' and 'satisfied' would generate a less normal distribution than end points labelled 'very dissatisfied' and 'very satisfied'. Even better would be end points labelled 'completely dissatisfied' and 'completely satisfied'.

KEY POINT
To maximise variance, the end-points of 10-point scales should be labelled 'completely satisfied' and 'completely dissatisfied'.

From a technical research point of view there is a compelling argument for the 10-point scale because it is easier to establish 'covariance' between two variables with greater dispersion (i.e. variance around their means). Covariance is critical to the development of robust multivariate dependence models such as identifying the drivers of customer loyalty, or establishing the relationship between employee satisfaction and customer satisfaction. In fact, many sophisticated statistical modelling packages assume that data are only ordinal if scales have fewer than 6 points.
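A minimal sketch of the covariance point, using invented paired scores for the same ten customers (it illustrates the arithmetic only, not the modelling approach described in later chapters):

from statistics import mean

# Hypothetical paired 10-point scores for the same ten customers
satisfaction = [9, 7, 8, 10, 6, 9, 8, 7, 9, 5]
loyalty      = [9, 6, 8, 10, 5, 9, 7, 7, 8, 4]

def covariance(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

cov = covariance(satisfaction, loyalty)
corr = cov / (covariance(satisfaction, satisfaction) ** 0.5
              * covariance(loyalty, loyalty) ** 0.5)
print(f"Covariance: {cov:.2f}  Correlation: {corr:.2f}")
# Greater dispersion in the scores makes relationships like this one easier
# to detect reliably.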

FIGURE 8.8 Distribution of data across scales
(Two histograms showing the percentage of respondents giving each score, one for a 5 point scale and one for a 10 point scale.)

In the light of the above arguments, it would be valid to ask the question, 'why stop at 10 points?' From a data point of view it would be better to have even more points. Federal Express uses a 100 point scale to track 'micro-movements' in customer satisfaction in its frequent measures. 20 point scales have also been used. However, questions must be easy for respondents to understand in order to have a high level of confidence in the validity of the answers. People find it most easy to respond to 5 point verbal scales and 10 point numerical scales. This may be because giving (or receiving) a score out of 10 tends to be familiar to most people – whether it be from tests at school or from the reviews of footballers in newspapers. Numerical scales with fewer or more than 10 points are more difficult for people, as are verbal scales with more than 5 points.

Following a test at Cornell University in 1994, Wittink and Bayer concluded that the 10-point end-point anchored numerical scale was most suitable for customer satisfaction measurement. Their reasons included respondent perspective issues such as simplicity and understandability as well as reliability issues such as repeatability (the extent to which the same scores are given by respondents in successive tests). Most importantly, they concluded that it was the best scale for detecting changes over time and for improving customer satisfaction7.

KEY POINT
10-point numerical scales are most suitable for measuring and monitoring customer satisfaction.

Michael Johnson from the University of Michigan Business School and Anders Gustafsson from the Service Research Centre at Karlstad University (the originators of the American Customer Satisfaction Index and Swedish Customer Satisfaction Barometer) are in no doubt that a 10-point numerical scale should be used for customer satisfaction measurement8.

8.9 The danger of over-stating customer satisfaction
Due to the two problems outlined above (narrower distribution of scores and aggregation of data), verbal scales invariably generate higher customer satisfaction scores than numerical scales, tempting organisations to adopt a dangerous level of complacency regarding their success in satisfying customers.

According to Allen and Rao3, a company that routinely scores 90% for overall customer satisfaction on a 5-point scale will typically score 85% on a 7-point scale and only 75% on a 10-point scale. This is corroborated by our own experience when we have to convert an aggregated customer satisfaction index from a verbal scale into the weighted customer satisfaction index described in Chapter 11 and generated by a 10-point numerical scale. This is done by asking some questions twice to the same sample in the same survey, using first one scale then the other. Usually they would be at different ends of the questionnaire, separated by intervening questions, and by interview rather than self-completion questionnaire to minimise the risk that the answers provided for the first scale will influence the scores given for the second scale.


An alternative, but more costly approach is to ask identical questions in separate, simultaneous surveys with random and representative samples of at least 200 customers, using the 5-point scale with one group and the 10-point scale with the other. Self-completion questionnaires would be acceptable for this latter approach. Figure 8.9 illustrates the results from five such tests showing how the headline measure of customer satisfaction from the two different scales compared:

FIGURE 8.9 Satisfaction levels across scales

             % "satisfied" (5 point verbal scale)    Satisfaction Index (10 point numerical scale)
Example 1    78.6%                                   65.3%
Example 2    84.5%                                   67.4%
Example 3    87.6%                                   70.9%
Example 4    90.3%                                   74.4%
Example 5    92.3%                                   75.8%

This can lead to a dangerous level of complacency. In Example 5, the 92.3% produced by the 5 point verbal scale suggests that the company is doing very well at satisfying its customers. In fact, plugging its customer satisfaction index of 75.8% into The Leadership Factor's benchmarking database demonstrated that it was in the bottom half of the league table in its ability to satisfy customers! It is therefore hardly surprising that companies misleading themselves with unrealistically high levels of customer satisfaction from verbal scales complain that their 'satisfied' customers are often defecting. They then begin to question the point of customer satisfaction. What they should be questioning is their misleading CSM process. Their customers are actually well below the levels of satisfaction that would guarantee loyalty.

KEY POINT
5-point verbal scales dangerously over-estimate customer satisfaction.

The risk of over-estimating customer satisfaction by monitoring the top two boxes on 5-point scales was demonstrated by AT&T9, who were regularly getting headline satisfaction measures of over 90% and bonusing staff on it, but in 1997 had doubts when some businesses began making major losses despite these apparently high levels of customer satisfaction. On investigation, they discovered that repeat purchase rates were substantially different for customers rating "excellent" compared with those rating "good". They also found that overall satisfaction scores of 95%+ were correlated with scores in the low 80s on their measure of "worth what paid for".

A huge amount of evidence collected over a 30 year period by Harvard Business School also supports this view. They have found very strong correlations between customer satisfaction and loyalty, but only at high levels of satisfaction10. Merely being satisfied isn't enough in today's competitive markets and a tougher measure based on a 10-point numerical scale is necessary to highlight this.

8.10 Top performers and poor performers
For poor performers with low levels of satisfaction, the choice of rating scale matters little. They will get a fairly normal distribution with 5, 7 or 10 point scales and have little need for advanced analysis of the data since the problem areas that need addressing will be obvious. By contrast, choice of scale becomes much more critical for top performing companies for several reasons:

1. Companies with high levels of satisfaction need a very tough measure if they are to identify further opportunity for improvement.

2. Companies in this situation need to employ much more sophisticated statistical techniques that drill down into the data to uncover drivers of satisfaction or differences in satisfaction between groups of customers that may not previously have been considered.

3. In situations where there are multiple business units (e.g. branches, regions, stores, sites etc) it is very important to be able to discriminate between the better and poorer performing units.

For all the above reasons the greater variance yielded by 10 point numerical scales and the ability to employ advanced multivariate statistical techniques with good levels of predictive and explanatory power are extremely beneficial to high performing companies. Of course, poor performing organisations aiming to improve would be well advised to use a scale that will also be suitable when they achieve their objective.

FIGURE 8.10 Satisfaction-Loyalty relationship
(Chart plotting loyalty, from 20% to 100%, against satisfaction scores from 1 to 10, showing the zone of defection, the zone of indifference and the zone of affection, running from 'saboteur' to 'apostle'.)

When General Motors, who were one of the pioneers of customer satisfaction research, began to suspect that their CSM process (based on a balanced 5-point verbal scale) was not providing a sound basis for decision making, they analysed 10 years of back data involving over 100,000 customer responses9. It demonstrated that the relationship between customer satisfaction and loyalty was not linear but displayed the characteristics shown in Figure 8.10, with loyalty declining very rapidly if satisfaction scored anything lower than 'very satisfied'. This led them to draw two conclusions. First, that only a 'top box' score represented an adequate level of business performance and second that having only one point on the scale that covered the whole range of performance from adequate upwards was clearly useless. Their solution to the latter problem was to invent a positively biased scale, to provide the basis for moving customers from satisfied to delighted:

Delighted: 'I received everything I expected and more'
Totally satisfied: 'Everything lived up to my expectations'
Very satisfied: 'Almost everything lived up to my expectations'
Satisfied: 'Most things lived up to my expectations'
Not satisfied: 'My expectations were not met'

Although the four levels of satisfaction are much more useful than the two offered by a balanced verbal scale, this is matched by the 10-point scale, which also has all the other analytical advantages of numerical over verbal scales. The 'top box' concept, however, is interesting, especially for high performing companies seeking to move 'from good to great'. On a 10-point numerical scale it is generally considered that 9 is the loyalty threshold, so monitoring 'top box' scores (i.e. 9s and 10s) as well as an overall customer satisfaction index can be very useful.
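A minimal sketch of monitoring 'top box' responses alongside a simple mean-based measure (the scores are invented, and the simple index here is just the mean expressed as a percentage, not the weighted index described in Chapter 11):

# Hypothetical overall satisfaction scores on a 10-point scale
scores = [10, 9, 8, 9, 7, 10, 6, 9, 8, 10, 9, 7, 8, 9, 10, 5, 9, 8, 9, 10]

top_box = sum(1 for s in scores if s >= 9) / len(scores)
simple_index = sum(scores) / len(scores) / 10 * 100  # mean as a percentage

print(f"Top box (9s and 10s): {top_box:.0%}")            # 60%
print(f"Simple satisfaction index: {simple_index:.1f}%")  # 85.0%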

8.11 Up-to-date best practice
The questions on the ACSI are rated on a 10-point numerical scale. According to the University of Michigan this is for two main reasons: first, because 10 points are needed to provide the required level of discrimination at the satisfied end of the scale (and a ten point verbal scale is not workable, especially for telephone interviews) and secondly because of the analytical benefits afforded by numerical scales11,12.

Conclusions
1. 5-point verbal and 10-point numerical scales are both easy for customers to complete.

2. Verbal and numerical scales differ massively in terms of the types of statistical techniques that are permissible, verbal scales possessing very limited analytical power.

3. Scales with more points generate more variance so are better for tracking satisfaction over time, for discriminating between high and low performers and for using sophisticated statistical techniques.

4. Scales with fewer points produce higher satisfaction results, leading to complacency within the organisation and misunderstanding about why apparently 'satisfied' customers are defecting.


A 2007 article by Coelho and Esteves13 tested the difference between 5 and 10 point scales specifically for customer satisfaction surveys. They concluded that the 10 point scale should be used because it was much better for analysis since the 5 point scale produces a higher concentration of responses in the mid-range. They also disproved the popular myth that 5 point scales are easier for respondents, showing no difference in non-response between the scales and no difference on ease of completion across age groups or education levels.

5. For organisations that want to give themselves the best chance of improving customer satisfaction as well as being able to reliably judge their success in achieving that goal, the 10-point numerical scale is the only suitable option for customer satisfaction measurement.

6. For that reason, it is the scale used by the University of Michigan for the American Customer Satisfaction Index.

References:
1. Norman and Streiner (1999) "PDQ Statistics", BC Decker Inc, Hamilton, Ontario
2. Likert, Rensis (1970) "A Technique for the Measurement of Attitudes", in Summers, G F (ed) "Attitude Measurement", Rand McNally, Chicago
3. Allen and Rao (2000) "Analysis of Customer Satisfaction Data", ASQ Quality Press, Milwaukee
4. Oppenheim, A N (1992) "Questionnaire Design, Interviewing and Attitude Measurement", Pinter Publishers, London
5. Dillon, Madden and Firtle (1994) "Marketing Research in a Marketing Environment", Richard D Irwin Inc, Burr Ridge, Illinois
6. Grapentine, T (1994) "Problematic scales", Marketing Research 6, 8-13, (Fall)
7. Wittink and Bayer (1994) "Statistical analysis of customer satisfaction data: results from a natural experiment with measurement scales", Working paper 94-04, Cornell University Johnson Graduate School of Management
8. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
9. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
10. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
11. Ryan, Buzas and Ramaswamy (1995) "Making Customer Satisfaction Measurement a Power Tool", Marketing Research 7, 11-16, (Summer)
12. Fornell, Claes (1995) "The Quality of Economic Output: Empirical Generalizations About Its Distribution and Association to Market Share", Marketing Science, 14, (Summer)
13. Coelho and Esteves (2007) "The choice between a five-point and a ten-point scale in the framework of customer satisfaction measurement", International Journal of Market Research 49, 3


CHAPTER NINE

The questionnaire

By the time the questionnaire design stage is reached, much of its content will already be determined – chiefly the list of customer requirements identified by the exploratory research. Two other factors that will heavily influence the final design of the questionnaire will also have been decided by now – firstly, what type of survey to administer and secondly the use of a 10-point scale. Of course, the survey instrument used to collect the data must generate reliable information and this can be compromised by many elements of the final questionnaire design process.

At a glance
In this chapter we will:

a) Review the importance of asking the right questions.

b) Consider how the questionnaire should be presented to customers.

c) Describe the types of question that can be used in surveys.

d) Specify the sections to be included in the questionnaire.

e) Explain how to score requirements for satisfaction and importance and how to probe low satisfaction for more insight.

f) Describe and evaluate questions for measuring loyalty.

g) Discuss the length of the questionnaire.

h) Review question wording, especially the avoidance of common mistakes.

i) Explain how to close the questionnaire.

j) Consider the need for piloting.

9.1 The right questions
As we know, customer satisfaction is a measure of the extent to which the organisation has met customers' requirements and this has two implications for questionnaire design.

9.1.1 Meeting customers’ requirementsThe first implication was covered in Chapter 4, where we established that in order tomeasure whether customers’ requirements are being met, the questions asked must

Chapter nine 5/7/07 09:56 Page 125

Page 133: Customer Satisfaction

focus on customers’ main priorities. These are established through exploratoryresearch using the ‘lens of the customer’1. If the ‘lens of the organisation’ is used, andmanagement asks questions on the topics they wish to cover, the survey will notprovide a measure of whether customers’ requirements have been met. Comparedwith most types of market research therefore, this makes questionnaire design arelatively straight forward exercise for CSM. It is not necessary or even desirable toconsult other staff in the organisation to find out what they would like to see on thequestionnaire since its core contents will already have been determined by theexploratory research.

The second implication is that the questionnaire must cover both sides of the equation in our definition of customer satisfaction – importance and satisfaction2,3,4 – otherwise the relative importance of customers' requirements would never be reliably understood. Measures of importance and satisfaction are both necessary to achieve the two main outcomes described in Chapters 11 and 12.

The core content of the questionnaire will therefore comprise the list of the customers' main requirements rated for both importance and satisfaction. There is no scope for debate over this if the results of the survey are to provide a measure that accurately reflects how satisfied or dissatisfied customers feel.

9.1.2 Customers' perceptions
The information collected and monitored by a CSM process will be customers' perception of the extent to which the organisation has met their requirements. It will not necessarily be an accurate reflection of the organisation's real performance. As we said earlier in this book, customers' perceptions are not always fair or accurate, but they are the information on which customers base their future behaviours, such as buying again and recommending. It is therefore an accurate understanding of customers' perceptions that is the most useful measure for the organisation to monitor. This means that the questionnaire should focus on eliciting customers' genuine opinions and should definitely not attempt to lead them, by providing information about the organisation's actual performance for example.

9.2 Introducing the questionnaire
An important objective of questionnaire design is to ensure that respondents relate to the questionnaire. According to McGivern5, the questionnaire should be seen as "a sort of conversation, one in which the respondent is a willing, interested and able participant". This point applies particularly to interviews but should also be a guiding principle in the design of user-friendly self-completion questionnaires. As with any interaction with another person, the start of the process is especially important, hence the emphasis placed on the introductory letter in Chapter 7. This will have been sent before customers are approached for an interview and, ideally, before a self-completion questionnaire is received. The beginning of the questionnaire or interview script will therefore have to repeat the main points of the introductory letter and should cover the following points:

1. A reference to the introductory letter to remind the customer of its contents and to emphasise the authenticity of the research.

2. If necessary, any qualification to make sure that the respondent concerned will be able to answer the questions.

3. The fact that the survey is confidential and that the respondent's anonymity will be protected, introducing the name of any third party agency at this point if appropriate. It can also be helpful to specify any relevant code of conduct such as that of the Market Research Society in the UK. It is very important to convey the message that this is not selling and that the exercise will be beneficial as well as relevant to customers.

4. To further emphasise the credibility of the exercise, it is especially useful to be able to mention at this point if feedback will be provided to respondents after the survey.

5. How long it will take. If an interview, respondents should be asked if they would like to make an appointment to do the interview at a more convenient time.

6. To maximise participation in interviews, it is crucial to strike the right balance between friendliness and professionalism. Clearly, an interviewer should not appear unfriendly, but a professional, even authoritative tone will help to maximise participation rates.

9.3 Types of question
Before considering specific questions it is helpful to consider the basic types of question that can be asked in surveys.

9.3.1 Open questions
Open or free-response questions allow respondents to say or write anything they wish in answer to the question. Since they tend to minimise the risk of the researcher leading the respondent they should elicit an answer that is the closest possible to the customer's real feelings. This strength is also their biggest weakness, as they often generate a huge volume of information that can be difficult to analyse or use. According to Oppenheim6, "free response questions are often easy to ask, difficult to answer and still more difficult to analyse."

The most common use of open questions is probing low scores on satisfaction and loyalty questions to understand customers' negative feelings about the organisation. It is also possible to probe top box scores to understand what leads to positive feelings in customers. Since probing demands more time from the customer and is extremely time consuming at the analysis and implementation stages if comments are to be utilised effectively, it is wise to be selective with probing. If forced to choose, most organisations would think it more useful to reduce dissatisfaction (and hence defections) than to boost customer delight, so it is normal practice to probe low satisfaction scores rather than high ones. An exception would be the small percentage of organisations with exceptionally high levels of customer satisfaction whose objective will be to move satisfied customers scoring 8s and 9s into highly satisfied ones scoring 10s. These companies do need to understand exactly what has produced the highest levels of satisfaction in some customers. Most organisations, however, will have significant levels of dissatisfaction, and reducing this should be their first priority.

KEY POINT
Understanding reasons behind dissatisfaction is the most productive use of the limited opportunity for open questions in a CSM main survey.

There will also be other opportunities for open questioning in CSM main surveys, particularly in B2B, where the survey is more likely to be administered by interview rather than self-completion, and where customers will often have some very knowledgeable views which they will be quite interested in sharing if given sufficient encouragement. A good example of this use of an open question is:
"Imagine you were Chief Executive of XYZ. What is the most important change you would make to the company (a) in the short term, and (b) in the long term?"
When interviewing senior managers in a B2B market, a question like this can stimulate some very insightful responses, which could be extremely valuable to the company concerned. If necessary the interviewer would probe further, asking the respondent to explain their reasons or to provide more detail.

9.3.2 Closed questions
Closed questions are quick, low cost, easy for respondents and interviewers and facilitate clear, unambiguous comparisons7. Since the main purpose of customer satisfaction surveys is to generate and monitor measures, most questions will be closed. This means that respondents have a limited number of prescribed options for their answer. For CSM the response mechanism for most questions will be the rating scale used to measure satisfaction and importance. However, many other types of closed question are possible. Closed questions can be attitudinal or factual and can have any number of response options in a wide variety of formats. In addition to the rating scales examined in Chapter 8, other types of closed question relevant to CSM include any of the following.

(a) Dichotomous question
A dichotomous question will have only two possible answers, usually 'yes' or 'no'.
"Have you flown business class within the last three months?"


They are often used in CSM for qualifying respondents so that one or more subsequent questions will be asked only to customers with valid experience. Dichotomous questions can also be very useful for understanding the antecedents of satisfaction by forcing customers into two mutually exclusive tracks of the customer journey, for example:
"On your last visit to the supermarket were you greeted by a member of staff as you entered the store?"

(b) Multiple choice question
Multiple choice questions are commonly used in CSM to place customers into categories. They could be demographic categories such as:

"Which of the following age groups are you in?
Under 25 / 25-44 / 45-64 / 65 or over
Tick one box only."

As well as covering the range it is essential that the categories do not overlap. Sometimes, as in the question above, a person cannot be in more than one of the categories so should be instructed to give no more than one answer. For other questions, however, respondents could legitimately fall into more than one category so should be allowed to give more than one answer, for example:
"Which of the following methods do you use when you need information about XYZ's products or services? Tick any options that apply.
XYZ web site / Customer Service Department by email / Customer Service Department by phone / Your sales representative by email / Your sales representative by phone / Your sales representative in person / Other method / Have not needed information"
Where there are very many possible answers, but only the most common are listed, it becomes necessary to add an option such as "other" or "none of the above" and, where sensible, a "don't know" option.

9.3.3 Open question – closed response
Sometimes it can be very helpful to try to secure the main advantages of both open and closed questioning by adopting the open question – closed response format. Only possible if customers are interviewed, the question is asked as an open question, allowing the customer to give any response. However, there is a response scale that is clearly visible to the interviewer, so if the customer gives a response that fits the scale, the interviewer ticks the appropriate box or boxes. If not, they can probe, typically using the scale or part of it to make certain which response category is correct. Alternatively, but less commonly, the interviewer can be instructed to write in full any customer comments that do not fit the scale. The open question – closed response approach can be very useful when a list of multiple choice options is very long, making it time consuming for the interviewer to read and very tedious for the respondents. A good example would be an ethnic origin question where the list of options, especially the kind of list favoured by the public sector, can be extremely long. Asking the customer an open question will usually generate a response that fits exactly into one of the categories. If it doesn't, the customer can be asked to give more detail or, as a final resort, several or all of the possible responses can be read out.

KEY POINT
In interviews, use of the open question – closed response technique will often make the questionnaire less tedious for customers and quicker to administer.

Another good use of the technique is to identify top-of-mind feelings or perceptions, such as awareness of products or services, where customers might be asked:

“What products (and/or services as relevant) does XYZ provide?”

For some organisations the interviewer might have a long list of products with two response options for each one. First, a box for unprompted awareness to cover all the responses generated by the open question. The interviewer would then read down the remaining products that had not been mentioned, ticking the prompted awareness option for any that the respondent was aware of but had not previously mentioned. This information would be very useful to XYZ as it is only products with high unprompted awareness that customers are likely to make enquiries about.

9.4 The structure of the questionnaire
Before wording the questions it is necessary to plan the overall layout of the questionnaire. The sections required for a CSM questionnaire and the order in which they should appear are summarised below.

9.4.1 Introduction
Whether a self-completion questionnaire or a questionnaire for an interview, it must begin with an introduction. As explained in 9.2, the first objective of the introduction is to make sure customers do take part in the interview or complete the questionnaire. However, since CSM is about measuring, there will also need to be some technical instructions about the rating scale, which should appear straight after the introduction and just before satisfaction and importance are scored. Although giving scores on a 10-point scale is very easy for people, the scale should be explained, particularly labelling the end points so there is no misunderstanding about which is the high scoring end and which is the low one. The introductory wording for satisfaction and importance can therefore be clear but short and simple, such as:
"I would now like you to score a list of factors for how satisfied or dissatisfied you are with XYZ's performance on each one, using a scale of 1 to 10, where 1 means completely dissatisfied and 10 means completely satisfied."
And for importance:


"I would now like you to score the same list of factors for how important or unimportant they are to you, again using a 1 to 10 scale, where 1 means of no importance at all and 10 means extremely important."

9.4.2 Scoring satisfaction and importance
The customer requirements must be covered in two separate sections for satisfaction and importance. It is tempting, but incorrect, to cover both importance and satisfaction for each requirement before moving onto the next item. Adopting this approach results in an artificial correlation between the importance and satisfaction scores for each requirement. Separate importance and satisfaction sections should therefore be used, but in what order? Although it is conventional to ask the importance section before satisfaction, our tests at The Leadership Factor show that it is better to start with satisfaction scores since this makes respondents familiar with all the issues before they are asked to score importance. When the importance section follows satisfaction a wider range of importance scores is given and this provides greater discriminatory power at the analysis stage. Scores given for satisfaction vary little whether they are asked before or after importance.

Once the list of customer requirements has been scored for satisfaction, any low satisfaction scores can be probed. This is completely controllable with interviews, where the interviewer would return to the low scoring questions and ask the respondent to explain why they gave each of the scores. On web surveys, a mandatory pop-up comments box can appear after each low satisfaction score. This achieves the desired effect of obtaining a comment for each low score but may distort the data as some respondents will learn to avoid giving low scores to side-step the tedium of the comments boxes. On paper questionnaires an open comments box invites customers to make comments, "particularly about any items you have scored low for satisfaction", but typically only around one third or one quarter of respondents will write comments, and even then not for all of their low scoring requirements. After scoring and probing satisfaction, list all the requirements again and rate them for importance.

Although some textbooks would state that the requirements should be listed in a random order and, strictly speaking, not in the same order on every questionnaire, on the grounds that earlier questions might influence respondents’ thinking on later ones, practical convention is to list the requirements in a logical order. This is supported by McGivern5, who states that illogical sequencing and strange non-sequiturs will damage rapport with respondents and will confuse them, leading to reduced commitment on their part and sometimes failure to complete the questionnaire or interview. The order will be the same for both the satisfaction and importance sections. In deciding the order in which the questions should be listed, there are two basic choices. One option is the sequence of events that customers typically go through when dealing with the company, and that works very well for one-off events like taking out a mortgage or making an insurance claim. However, for many organisations that have ongoing relationships with customers involving a variety of contacts for different things at different times, using the sequence of events as a basis for question order will not work. In that situation it would be normal to use topic groupings, with all the questions on quality grouped together, all the questions on delivery together, etc.

9.4.3 Additional questions

Asking a small number of ‘lens of the organisation’ questions is perfectly valid provided they come after the ‘lens of the customer’ requirements. This ensures that the satisfaction and importance scores that will be monitored over time are not influenced by any other factors asked earlier. If this rule is followed there is no restriction on the subject matter of the additional questions, which can cover anything else organisations would like to know. Nor does the type of question matter. They can be open or closed, and if the latter, can employ any kind of rating scale, since they will be analysed completely separately from the satisfaction measurement part of the questionnaire. The number of additional questions that can be accommodated will be dictated by how much time remains after allowing for the satisfaction, importance, and classification questions. The time available is addressed in sections 9.5 and 9.6. For managing and improving customer satisfaction and loyalty, two types of additional question are most appropriate, loyalty questions and satisfaction improvement questions, and these are covered in the next two sub-sections.

9.4.4 Loyalty questions

For companies wanting to link customers’ attitudes with their behaviour the additional questions will need to cover loyalty. This will make it possible to calculate the drivers of loyalty (see Chapter 10). As additional questions they don’t have to follow any specific format, but it is advisable to retain the 10-point scale, for the reasons explained in Chapter 8 and to facilitate any modelling work to establish the links between satisfaction and loyalty (see Chapter 14). It is also useful to ask several loyalty questions, mainly for the benefit of having a loyalty index6 (see Chapter 11), but also because it can be helpful to cover different dimensions of loyalty.

As Johnson et al. from the University of Michigan point out1, satisfaction measurement methodology applies universally across businesses but loyalty measures do not, because the desired behavioural outcomes of satisfaction differ considerably across sectors and sometimes, due to company strategy, across different businesses within the same sector. This illustrates the fallacy of universal measures such as a net promoter score based on a standard loyalty question. Typical dimensions of loyalty include retention, commitment, recommendation, related sales, trust, value and preference. It would be unusual to include loyalty questions covering every dimension; it is better to select the three or four most relevant to an organisation. We now outline suggested forms of wording for loyalty questions across all seven dimensions.

KEY POINT
There is no standard loyalty question that is equally applicable for all organisations. The best approach is to ask several loyalty questions covering the dimensions of loyalty that are most relevant to the organisation concerned.

(a) Retention

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, do you expect to be a customer of XYZ in 12 months’ time?” (Adjust the time scale if appropriate.)

This question is most relevant to companies in markets where there is a significant level of switching, but where, before and after the switch, single sourcing would be normal. It is therefore appropriate for insurance companies, mortgage providers and most utility companies, especially those such as mobile telephony where one-year contracts are common. It is clearly not suitable for markets where customers can’t switch, e.g. water utilities in the UK, nor for those where switching is theoretically possible but unusual in practice. Many suppliers of computer systems or software are in this position as the cost and hassle involved in switching are prohibitively high. It is also unsuitable for promiscuous markets, such as most retail markets, where customers use many of the suppliers. Even if a customer is less loyal and has reduced her spend with a retailer, she will probably still be a customer in 3, 6 or 12 months’ time, albeit a less valuable one.

(b) Commitment

“If you could turn the clock back and start over again, would you choose XYZ as your (bank, internet service provider, waste disposal service, fleet management supplier etc)? Please answer on a scale of 1 to 10, where 1 means definitely not and 10 means definitely.”

This question is particularly useful in markets where switching is possible but not very common. As well as the computing sector mentioned above, it is relevant for banking, any kind of subscription service such as satellite TV or a heating and plumbing call-out service, and contractual arrangements in B2B markets, such as facilities management, security, cleaning, outsourced payroll contracts, etc. It is a very good loyalty indicator in difficult-to-switch markets because it highlights customers who wish they could switch even though they probably won’t. This can serve as a very useful warning because in captive markets customers will endure quite high levels of dissatisfaction but will usually reach a ‘cliff edge’ where the pain of remaining a customer outweighs the cost and hassle of switching. It can also be a better indicator of loyalty than the retention question in many markets for two reasons. First, it is a question about the present rather than the future. The customer may not yet have given much thought to contract renewal but will definitely know whether they have no regrets about the current contract and would re-sign today if necessary. Second, the retention question is more threatening because it’s a direct question about whether the customer is going to spend more money with the supplier in the future. If the survey is not anonymous and not conducted by an independent third party it will be seen by many as sales-motivated. The commitment question is non-threatening and much more likely to elicit an answer that accurately captures the customer’s loyalty feelings.

A good use of a commitment question is illustrated by the Consumers’ Associationsurvey into the UK mobile phone market referred to in Chapter 18. Most contractshad stiff penalties for early termination so customers were asked if they would opt fora different network if they did not have to pay a penalty. On that measure, only 8%of Orange’s customers were uncommitted compared with the industry average of27% and 32% for One2One – a very telling lead indicator for both companies.

(c) Recommendation

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, would you recommend XYZ to friends and family?”

Recommendation is relevant to most organisations, so it will almost always be one of the questions that makes up a loyalty index. It is easy for customers to answer and is a good indicator of customers’ loyalty feelings. Where possible, however, it is very useful to gather information about real loyalty behaviours, either instead of or as well as the attitudinal questions shown above. It is easy to supplement the recommendation question with a behavioural one such as:

“Have you recommended XYZ to anyone in the last 3 months (time scale as appropriate)?If yes, to how many people?”

Response options will vary by business, since the incidence of recommending behaviour is much greater in some sectors than others, but the purpose of the options would be to assess frequency of recommendation as well as to distinguish between recommenders and non-recommenders. If both types of recommendation question are asked it can also be insightful to see whether some of the customers who are willing to recommend in theory actually do so in practice much more than others. If this phenomenon does apply, and provided the variance can be linked to customer segments, there will be opportunities to target the types of customer who recommend most in order to reward loyalty behaviours, and to encourage or incentivise segments that are willing to recommend but in practice tend not to.

(d) Related sales

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, will you consider XYZ for (your next financial product / other related household services / servicing and spares… as appropriate)?”

The wording of the related sales question requires much more tailoring across different types of business. The wording shown is appropriate for companies such as banks with a wide range of related products. For many companies related sales are the biggest single element of customer lifetime value, making this question a particularly important component of their loyalty index.

(e) Trust

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, do you trust XYZ to look after your best interests as a customer?”

This is another good question for drawing out customers’ deep-seated loyalty feelings about an organisation and adds a new dimension that may not be captured by any of the previous questions. A supplier might be performing very well on all the practical aspects of meeting customers’ requirements, so customers intend to stay with it, would sign up again if turning the clock back, and may even have bought other products and recommended the supplier to others. But is the organisation genuinely committed to its customers or, when the chips are down, more interested in delivering results to shareholders? Is it always managed ethically, or will it follow the most profitable route? The memory of any earlier scandals will linger long after their perpetrators have departed. Many people have a general feeling that companies, especially large ones, are much more interested in profits than anything else, including customers. Consequently, they often feel taken for granted and that their loyalty is not rewarded or valued. More than any of the other loyalty questions, the trust question will draw out any such feelings.

(f) Value

“On a scale of 1 to 10, where 1 means very poor and 10 means excellent, how would you rate the value for money provided by XYZ?”

In any situation where customers pay for the product or service, its cost or price will almost inevitably have been one of the satisfaction questions, as it is bound to be an important ‘lens of the customer’ requirement. Where price is a particularly prominent feature of a market, the questionnaire will often benefit from more than one satisfaction question covering aspects such as its competitiveness and fairness, the way price negotiations are handled, the stability of prices, special offers and promotions, the simplicity or otherwise of tariff systems and so on. For satisfaction measurement it is strongly recommended to have one or more specific, actionable questions on price, not a very vague question such as value for money, which is clearly a double question. If there is dissatisfaction, should the company reduce the price or increase the value? It will also be double counting, since the key value elements of the value for money equation will inevitably have been covered amongst the other requirements measured.

However, using value as a dimension of loyalty is completely different. Its purpose is not to be actionable but to provide accurate insight into customers’ loyalty feelings, and the feeling that an organisation does or does not provide good value will often form a significant element of customers’ future loyalty behaviour. In fact, questions designed to draw out customers’ deep-seated feelings often work better if they are very general, so using ‘value for money’ or ‘good value’ will be exactly the right kind of wording for a loyalty question. Value is another loyalty question that is widely applicable across markets. Even in the public sector, where customers do not pay directly for services such as health and education but do pay indirectly through taxes, the concept of providing ‘good value’ is widely understood.

(g) Preference

The preference question can also cover attitudes or behaviours as appropriate. In sectors where customers don’t use competing suppliers, it is appropriate to use a very general preference question such as:

“On a scale of 1 to 10, where 1 means amongst the worst and 10 means amongst thebest, how does XYZ compare with other organisations that you use?”

This question can be focused where relevant, e.g. “compared with other Governmentdepartments.” However, preference questions are of most value to companies incompetitive markets. They can be attitudinal, such as:

“On a scale of 1 to 10, where 1 means the worst and 10 means the best, how does XYZcompare with other supermarkets that you use?”

Preference questions can also elicit specific ‘share of wallet’ information, such as:

“In a normal week, what percentage of your grocery shopping is done at XYZ?”

This type of ‘share of wallet’ question is particularly useful in markets where there isdual or multiple sourcing and often high levels of switching.

In Chapter 11 we will explain how to compile data from the loyalty questions into anindex and in Chapter 13 we will explore in more detail the predictive value ofdifferent loyalty questions. Now we need to turn our attention to the other line ofadditional questioning that is particularly useful in CSM.

9.4.5 Satisfaction improvement questions

On a conventional satisfaction measurement questionnaire the data will pinpoint the areas to address to improve customer satisfaction (see Chapter 11) and, if customers are interviewed, the probing of low satisfaction will provide considerable insight into the reasons for dissatisfaction and the changes that customers would like to see in those areas. However, to maximise the chances of improving customer satisfaction, it can be very useful to have additional, precise information that relates to the company’s internal organisation and processes. These are totally ‘lens of the organisation’ questions whose purpose is to enable the company to continually fine-tune its satisfaction improvement programme by monitoring the effect of specific changes on customer satisfaction. The possibilities for such questions are almost endless, so in this section we will simply illustrate the concept with two examples.

A common area of poor performance for customer satisfaction is handling problems and complaints. Clearly, any additional questions in this area would be asked only of customers with recent experience of a problem or complaint. The focus for additional questions will be provided partly by customers’ comments about their dissatisfaction in that area and partly by management’s view of specific actions the organisation could take to improve matters. One possibility could be to reduce the time it takes to resolve a problem, in which case a suitable question would simply be:

“How long did it take to resolve the problem?”

Response options would include several quite detailed time frames relevant to theorganisation concerned. As well as being able to verify that ‘time to resolution’ doesaffect customer satisfaction it will be possible to identify the tipping point beyondwhich satisfaction deteriorates. Tracking surveys will show when the organisation’sinternal metrics on ‘time to resolution’ improvements have changed customers’perceptions and, as we will demonstrate in Chapter 14, the impact of the actions inimproving customer satisfaction and loyalty can also be quantified.

A second example could be the way in which the problem was handled – by email,letter, telephone or personally. Dichotomous questions are often very helpful forsatisfaction improvement initiatives because they are totally black and white. A policyis being implemented in the eyes of customers or it isn’t. In this example it would alsobe very useful to relate the questioning to one or more very specific points along theproblem handling journey. Typical questions would be:

“Did anyone from XYZ call you to understand the details of your complaint?

Did you receive an acknowledgement in writing that your complaint had been logged?

Did you receive a follow-up call after the resolution of your complaint?”

As well as honing such questions by incorporating the time taken for the action, their actionability is often improved by relating them to the specific individuals or teams involved. Rules about double questions apply equally here, of course, so any additional digging must be done through separate questions such as:

“How did you bring your problem to XYZ’s attention?”

Response options would cover channels and individuals or teams. A question of thistype often works well as a ‘closed question – open response’ where more than oneresponse option is permissible. It will often demonstrate that customers withproblems end up more satisfied if they have used a certain channel and/or a specificteam or individual. Chapter 15 will provide more details on how to use this type ofinformation to improve customer satisfaction.

KEY POINT
Dichotomous questions can be very helpful in providing actionable information to guide satisfaction improvement initiatives.

9.4.6 Classification questions

Classification questions should come at the end. Some people may be offended by what they see as impertinent questions about age, gender, ethnic origin, occupation or income, so it is always better to leave classification questions until after they have answered the other questions9. If the classification questions are placed at the beginning of the questionnaire, respondents may abandon it or abort the interview. The one exception here would be quota or qualification questions, where respondents’ suitability has to be verified before their inclusion in the survey.

It is good practice to be as consistent as possible with classification questions to aidcomparability over time and with other organisations. Some companies will have aninternal segmentation that is fundamental to their marketing strategy and thereforedictates the categories for their classification questions. If not, it could be sensible toadopt the standard versions for demographic questions used by the Office forNational Statistics (ONS) and available on their website10.

KEY POINT
The correct sequence of questions for CSM is:

1. Satisfaction scores
2. Importance scores
3. Additional questions
4. Classification questions

9.5 Questionnaire length

9.5.1 Length of time

Whether for a self-completion questionnaire or one that will be administered by interview, 10 minutes is a reasonable time to ask of customers. This duration should be clearly and honestly stated in the introductory letter. Moreover, organisations that follow the conventional good practice of surveying any individual no more than once a year will be able to say that they are asking for no more than ten minutes per annum of the customer’s time to provide feedback on how well their requirements are being met – clearly a reasonable request. In markets where customers are more interested in the subject they will often take more time to make comments. This is common in B2B customer satisfaction surveys and will often increase the average interview length to 15 minutes. However, in any market, customers who are short of time and choose not to make extensive comments should be able to complete the survey within 10 minutes.

9.5.2 Number of questions: interviews

Within that 10-minute window up to fifty questions can be accommodated on a CSM questionnaire. This may seem a surprisingly high number, but it is due to the repetitive nature of scoring the customer requirements for satisfaction and importance. Asking fifty unrelated survey questions using different question types and scales would take at least 20 minutes. For CSM, however, the bulk of the questionnaire involves scoring the customer requirements for satisfaction, then importance, all on the same scale. Customers soon get used to the scale, especially a 10-point numerical scale, so the first two sections of the questionnaire will be completed very quickly.

It is normal to include up to 20 requirements that are scored for satisfaction and importance, giving 40 questions in total. These will be customers’ 20 most important requirements as identified by the exploratory research. If interviewed, customers will be probed on any low satisfaction scores they give. The number of questions to be probed will depend on how satisfied customers are and the threshold level set. An organisation achieving a reasonable level of satisfaction can expect to probe around four of the 20 attributes on average if the threshold for probing is all satisfaction scores below six out of ten. This would mean that 44 of the 50 questions have been used, leaving six to split between additional questions and classification data. The resulting distribution of the 50 questions across a typical interview is shown in Figure 9.1.

FIGURE 9.1 Composition of an interview questionnaire. Sections, each with a maximum number of questions: introduction; satisfaction scores; probing low satisfaction scores; importance scores; additional questions; classification questions.

This number and distribution of questions is not fixed and will be influenced by the customer experience. This can be very brief with some organisations, such as calling a helpdesk with a straightforward technical query or booking a ticket through an agency. In these situations, there may not be as many as 20 important customer requirements to include on the questionnaire. This would enable the supplier to ask more additional questions or to have a shorter questionnaire. Organisations with a very complex customer-supplier relationship may face the opposite problem, with more than 20 important customer requirements that seem to merit inclusion. In this situation it is advisable to resist the temptation to make the main survey questionnaire longer. Instead it may be possible to reduce the number of additional and/or classification questions to create space for a longer list of customer requirements. Alternatively it may be worth considering a larger sample for the exploratory research in order to be more precise about the exact make-up of customers’ 20 most important requirements.

The only other variable on interview length will be the amount of probing. As customer satisfaction reduces, more probing will be needed. Rather than reduce the number of customer requirements to accommodate the extra probing, it is preferable to lower the probing threshold, for example to scores below four, when overall satisfaction is low. This will still generate enough qualitative information to fully understand the reasons behind customers’ dissatisfaction.

9.5.3 Number of questions: self-completion

Following the 10-minute rule, 50 is also the guide for the maximum number of questions on a self-completion questionnaire. Due to the inability to probe on postal questionnaires, the distribution of questions across the sections will differ slightly from interviews. Instead of probing low satisfaction scores, it is normal to include a comments box on a paper questionnaire. This can be inserted at the end or straight after the satisfaction section. Since the most useful qualitative information for organisations is anything that helps them to better understand the reasons for any customer dissatisfaction, the following wording above the comments box is most useful: “Please include any additional comments in the box below. It would be very helpful if you could comment on any areas that you scored low for satisfaction.” Since a comments box is considered equivalent to one question, a self-completion questionnaire can accommodate slightly more additional or classification questions if required, resulting in the kind of composition shown in Figure 9.2.

FIGURE 9.2 Composition of a self-completion questionnaire. Sections, each with a maximum number of questions: introduction; satisfaction scores; comments box; importance scores; additional questions; classification questions.

With paper questionnaires, fifty questions can be squeezed onto a double-sided A4 sheet or can be spaced out to cover four sides. Although shorter questionnaires are desirable per se, the four-sided questionnaire is likely to achieve a higher response rate and a better quality of response because it will look more attractive and will be easier to navigate, understand and fill in9. Some respondents may never start questionnaires that have small type or look cluttered, as they will be seen as difficult to complete.

KEY POINT
Provided most of the questions involve scoring customers’ requirements for satisfaction and importance on a numerical scale, a maximum of 50 questions can be answered within the recommended 10 minutes.

9.6 Design guidelines

This advice applies only to self-completion questionnaires, which need to look professional and aesthetically appealing. We have already suggested that questions should be spaced out, with an attractive layout, even if it makes the questionnaire run into more pages. Use of colour is also worthwhile. Even a two-colour questionnaire can appear much more attractive because semi-tones can be used very effectively for clarification and differentiation. By all means include the organisation’s logo and, if applicable, that of an agency to highlight the survey’s independence. Where appropriate, some organisations can also include background images or photographs to add appeal or to emphasise any subject areas that will be of interest to customers. Companies in this position include leisure clubs and venues, holiday companies, membership organisations, charities and other special interest groups.

It is also important to consider the design requirements of customers with poorreading eyesight, which will include most older customers. In this respect, the RNIB(Royal National Institute for the Blind) recommend 12-point type, with a sans-seriffont such as Arial, no reversed-out text (e.g. white text on a dark background) and avery dark colour used for the text, preferably black.

Since we know that almost all the questions will be closed, especially on self-completion questionnaires, and most of those will be scaled, it is necessary toconsider precisely how customers will give their response. Using the 10-point scale asan example, customers could be asked to write a number to indicate their score, theycould circle their score on a row of numbers from 1 to 10, or they could be presentedwith a row of boxes and asked to put a tick or a cross in the appropriate one.

Although the first takes up the least space and may present an uncluttered appearance, it is not recommended, since handwritten numbers greatly increase the risk of error, with huge variations in people’s handwriting styles leading to confusion between 1s and 7s, or 3s and 8s. Whilst ‘ticking the box’ is frequently referred to, almost as though it is the generic option, it is actually the least precise of the three remaining options, with ticks of varying sizes often covering large parts of the paper outside the box concerned. Circling numbers is easy for respondents and minimises errors, but it is much less suitable for closed questions with verbal options, such as classification questions, where very large and messy ovals will often be required. Since it would be bad practice to mix response options (e.g. circling numbers but ticking boxes for verbal categories), the only error-proof method that is applicable to all types of closed question is placing a cross in the appropriate box. This method is also by far the best option if questionnaires are scanned.

For electronic questionnaires the handwriting problem is obviously eliminated, but people generally prefer using the mouse to the keyboard, so typing a number would not be recommended. Circling numbers is not feasible, so the options are clicking on a box, which inserts an ‘x’, or clicking on a number or other response option in a drop-down menu. Either option is acceptable, although the former method is slightly quicker.

9.7 Questionnaire wording

There are many potential pitfalls to avoid in wording CSM questionnaires, since any one of them could reduce the reliability of the customer satisfaction measure. Figure 9.3 summarises the main ones.

9.7.1 Knowledgeable answers

The first thing to consider is whether respondents will possess the knowledge to provide accurate answers to the questions on the questionnaire. Not having it won’t stop them! People will often express opinions based on scant knowledge of the facts.

FIGURE 9.3 Questionnaire wording

Wording checklist:
1. Does the respondent have the knowledge?
   - Qualify respondents before including them in the survey
   - Offer a not-applicable option
2. Will the respondent understand the question?
   - Ambiguity of common words
   - Unfamiliar or jargon words
   - Double questions
3. Will the questions bias the response?
   - Balanced question
   - Balanced rating scale

For example, customers might score a supermarket on ‘quality of products’, ‘level ofservice’ or ‘value for money’, even though it is months or even years since they shoppedthere. That would not be a problem if the supermarket wanted to understand thegeneral public’s perception of its quality, service or prices, but it would be verymisleading if it was trying to understand the real experiences of its customers.

A related problem is that respondents may not have experience of an organisation’s performance on all the requirements covered. In a B2B market, a chief executive, for example, may not have any real knowledge of a supplier’s on-time delivery performance. To avoid gathering misleading scores from ill-informed members of the DMU, a ‘not applicable’ option should be provided for each satisfaction question. It is not necessary to provide a ‘not applicable’ option for importance scores, since respondents will have a view on the relative importance of each requirement, including those with which they are not personally involved.

KEY POINT
Always include a ‘not applicable’ option when scoring the requirements for satisfaction.

9.7.2 Ambiguous questions

The second thing to consider is whether the respondents will understand the questions or, more accurately, whether they will all assign to the questions the same meaning as the author of the questionnaire. For example, many of the words we use routinely in everyday speech are problematical when used in questionnaires because they are simply not sufficiently precise. A pertinent example is shown in Figure 9.4.

What exactly does the word ‘regularly’ mean? Questionnaire wording has to beextremely precise, to the point of being pedantic. If anything is open to interpretationthe results will often be unclear when the survey is analysed. Figure 9.5 shows how thequestion about the newspapers would have to be phrased.

FIGURE 9.4 Ambiguous question

Which of the following newspapers do you read regularly?
Please tick the box next to any newspapers that you read regularly:
Express
Guardian
Mail
Mirror
Sun
Times

9.7.3 Jargon

Another reason why respondents misunderstand questions is the use of unfamiliar words. Everybody knows it is not advisable to use jargon, but most people still underestimate the extent to which words they use all the time at work with colleagues can be jargon words to customers. Of course, that is another very good reason for carrying out the exploratory research, so that the customers’ terminology can be used on the questionnaire. As well as obviously technical names, even words such as facility and amenity are liable to ambiguity and misinterpretation. The Plain English Society (see Appendix 2) provides good advice on wording that is clear and understandable for most people.

9.7.4 Double questions

Double questions are a common reason for misunderstanding. A typical example is:

“Was the customer service advisor friendly and helpful?”

If the customer thought the advisor was very friendly but not helpful, the question would be unanswerable. Nor is it actionable. If friendliness and helpfulness are both important to customers, it is necessary to ask two questions.

9.7.5 Biased questions

One of the biggest problems in the wording of questionnaires is the danger that the questionnaire itself will bias the response through unbalanced questions or rating scales11. Typical questions on a customer satisfaction survey might be:

“How satisfied are you with the layout of the store?”

FIGURE 9.5 Precise question

How often do you read each of the following newspapers?
Please tick one box for each newspaper.
Response options for each of the Express, Guardian, Mail, Mirror, Sun and Times: every day; more than once a week; weekly; monthly; every 3 months; less than once every 3 months; never.

“How satisfied are you with the speed of response for on-site technical support?”

Each of those questions has introduced an element of bias which is likely to skew theresults, and the problem arises in the first part of the question:

“How satisfied are you with……?”

The question itself is suggesting that customers are satisfied. It is just a matter of howsatisfied. To eliminate that bias and be certain that the survey is providing a measurethat accurately reflects how satisfied or dissatisfied customers feel, those questionsshould be worded as follows:

“How satisfied or dissatisfied are you with the layout of the store?”

“How satisfied or dissatisfied are you with the speed of response for on-site technical support?”

9.7.6 Biased rating scales

The other part of the question that might bias the response is the rating scale. Biased rating scales are commonly found on many customer satisfaction questionnaires, as shown in Figure 9.6.

FIGURE 9.6 A positively biased rating scale

Please comment on the quality of service you received by ticking one box on each line (Excellent / Good / Average / Poor):
Helpfulness of staff
Friendliness of staff
Cleanliness of the restaurant
Cleanliness of the toilets
Waiting time for your table

For an accurate measure, customers must be given as many chances to be dissatisfied as to be satisfied. The scale shown is not balanced and is likely to bias the result towards satisfaction. Most positively biased rating scales on customer satisfaction questionnaires are probably there because the questionnaire designers are oblivious of the problem. However, some companies who are very experienced in CSM deliberately use positively biased questionnaires on the grounds that only ‘top box’ satisfaction matters, so it is only degrees of satisfaction that are worth measuring. There are two problems with this philosophy. Firstly, even if most customers are somewhere in the very satisfied zone, it is still essential information to understand just how dissatisfied the least satisfied customers are and the extent to which individual attributes are causing the problem. In many ways it is more valuable to the organisation to identify in detail the problem areas that it can fix than to have detailed information on how satisfied its most satisfied customers are. The second argument against using positively biased rating scales is that it is not necessary. With a sufficient number of points on the scale one can accommodate degrees of satisfaction and dissatisfaction in equal proportions. As we saw in the previous chapter, a 10-point scale allows five options for degrees of satisfaction whilst still offering the same number of choices for customers who are less than satisfied.

KEY POINT
Balanced questions and rating scales will offer customers equal opportunities to be dissatisfied or to be satisfied, so will not bias the outcome.

9.7.7 Requirement wording

If the wording of the questionnaire is not to influence customers’ answers, the list of customer requirements that are scored for satisfaction and importance must be neutrally worded. Examples of wording that break this rule would be:

“How satisfied or dissatisfied were you with……….

Quick service at the checkout

An efficient check-in procedure

A warm atmosphere in the restaurant.”

Loaded statements like those above are more likely to depress rather than increase satisfaction scores, because they are effectively asking the customer to rate the supplier against high standards. They will also seriously inflate importance scores, since they lead customers to focus on the adjectives. Of course it is important that the service is speedy rather than slow and that check-in is efficient as opposed to inefficient. For an accurate measure of satisfaction, it is essential that the wording of the attributes does not put any thoughts into respondents’ heads other than labelling, in a neutral fashion, each customer requirement to be scored. The requirements listed above should therefore be worded:

“How satisfied or dissatisfied were you with……….

The speed of service at the checkout

The check-in procedure

The atmosphere in the restaurant.”

As we pointed out in Chapter 8, this is also a problem when Likert (agree – disagree) scales are used. Due to organisations’ reluctance to use the scales in the right way, with as many strongly negative as strongly positive statements, satisfaction surveys comprising a list of 20 positive statements suffer from a high degree of acquiescence bias.

KEY POINT
The list of customer requirements should be worded in a neutral manner.

9.8 Closing the questionnaire

Whether for interviews or self-completion, there are several things to consider between the final question and the end of the questionnaire. After the last question it is courteous to thank respondents for their time and help, then give them a final opportunity to make any other points they wish about anything. On a self-completion questionnaire this purpose will be served by the comments box, which is normally placed at the end of the questionnaire. We have already covered the importance of anonymity, but said that respondents can be given the choice to forgo their anonymity and be attributed; if it is offered, this option should always be given at the end of the questionnaire. For self-completion questionnaires it is useful to remind customers prominently about the return date, even though it should already have been specified in the introductory letter and at the beginning of the questionnaire. As well as reminding them that there is a reply-paid envelope to return it in, it is also a good idea to state the return address in case the reply envelope has been mislaid. Finally, it is good practice for agencies to offer the telephone number of the Market Research Society and/or the commissioning company for respondents to use if they wish to check the authenticity of the agency or to make any complaint about the interview or questionnaire12.

9.9 Piloting

It is normal practice to pilot questionnaires and most textbooks will make reference to this. However, as we have previously stated, CSM is not like most market research, and many of the distinctive aspects of CSM reduce the need for piloting. Firstly, unlike most research, CSM is preceded by an extensive exploratory research phase whose sole purpose is to ensure that the questionnaire asks the right questions. The core of the questionnaire will be the 15 to 20 customer requirements that are scored for importance and satisfaction. These are determined by the customers during the exploratory research phase. The wording of the requirements will also be pre-determined, and will be based on words used by customers during the exploratory research rather than any terminology used by the organisation. There is not much room for additional questions and, of those, there is a limited number of tried and tested options for any loyalty questions, and the wording of classification questions is often standard. For subsequent tracking studies it will be important not to tamper with the wording of the original questionnaire, to ensure comparability. Even the more peripheral aspects of the survey, such as the introductory letter and the introduction and close of the questionnaire, have been tried and tested so many times by experienced CSM practitioners that there isn’t really anything left to pilot. Unless the piloting was even more extensive than the exploratory research, how could a small-scale pilot be justification for changing anything determined by the exploratory research? Other aspects of the methodology, such as scoring importance and satisfaction on a 10-point scale, are so fundamental and essential that no purpose would be served by piloting. Consequently, it is not necessary to pilot a CSM questionnaire that adheres strictly to the methodology explained in this book.

Questionnaire piloting would be necessary if the methodology has not beenfollowed. If exploratory research has not been conducted, some of the questions maybe irrelevant to customers. Others may be misleading or even incomprehensible. Thequestionnaire might be too long. If exploratory research has been done and themethodology has been scrupulously followed, the only remaining unknown is thetype of people that make up the sample. This is most applicable to telephoneinterviews, since some people, e.g. senior management, can be very difficult to reach.In this situation it is not so much the questionnaire being piloted as the difficulty ofachieving the required number of interviews. A similar example is the organisationhaving a poor database. Again, it is not the questionnaire but the feasibility ofachieving a sufficiently large and representative sample that needs to be tested. Themain reason for questionnaire piloting will be if many of the customers in the samplemay struggle to fully understand it, typically through old age, language difficulties orlow educational attainment. This would apply to any questionnaire, but especially ifa self-completion survey is envisaged. Where there are major understandingdifficulties, self-completion will often be impossible, with only very carefully guidedface-to-face interviews standing any chance of a reliable response.

Conclusions

1. Most of the content, wording and sequencing of a CSM questionnaire are pre-determined by the exploratory research and by non-negotiable aspects of the CSM methodology, such as scoring customers’ most important requirements for satisfaction and importance on a 10-point scale.

2. Satisfaction should be scored first, then importance, with any additionalquestions asked next and classification questions last.

3. Since scoring 15 to 20 customer requirements for both importance and satisfaction on a 10-point scale is very quick, a CSM questionnaire can accommodate up to 50 questions and still be administered, by interview or self-completion, in around 10 minutes.

4. The most common use of additional questions is to measure loyalty and/or to ask specific ‘lens of the organisation’ questions that will help to hone satisfaction improvement initiatives.

5. When wording questionnaires it is important to avoid injecting bias, to avoid anyquestions with vague or double meanings and to ensure that satisfaction scores aregiven only by respondents with recent experience of the supplier’s performance.

References

1. Johnson and Gustafsson (2000) “Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System”, Jossey-Bass, San Francisco, California
2. Parasuraman, Berry and Zeithaml (1985) “A conceptual model of service quality and its implications for future research”, Journal of Marketing 49(4)
3. Parasuraman, Berry and Zeithaml (1988) “SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality”, Journal of Retailing 64(1)
4. Zeithaml, Berry and Parasuraman (1990) “Delivering Quality Service”, Free Press, New York
5. McGivern, Yvonne (2003) “The Practice of Market and Social Research”, Prentice Hall / Financial Times, London
6. Oppenheim, A N (1992) “Questionnaire Design, Interviewing and Attitude Measurement”, Pinter Publishers, London
7. Converse and Presser (1988) “Survey Questions”, Sage, London
8. Which? Online (1996) “Mobile Phone”, Consumers’ Association, cited in Barwise and Meehan (2004) “Simply Better: Winning and keeping customers by delivering what matters most”, Harvard Business School Press, Boston
9. Sudman and Bradburn (1983) “Asking Questions”, Jossey-Bass, San Francisco
10. Office for National Statistics website, www.statistics.gov.uk
11. Dillon, Madden and Firtle (1994) “Marketing Research in a Marketing Environment”, Richard D Irwin Inc, Burr Ridge, Illinois
12. Market Research Society website, www.mrs.org.uk

CHAPTER TEN

Basic analysis

Having designed a questionnaire and undertaken a survey, the data collected will have to be analysed. This chapter will focus on analysing the core information collected by a customer satisfaction survey – measures of importance and satisfaction – before moving on in Chapter 11 to use those measures to calculate a trackable customer satisfaction index. In Chapter 12 we will explain how to extract actionable outcomes from the large volume of data that will often be generated by a customer satisfaction survey.

At a glance

In this chapter we will:

a) Describe different types of average.

b) Examine different ways of understanding what’s important to customers.

c) Show how to use importance and impact measures to distinguish givens fromdifferentiators.

d) Explain how the standard deviation is used to measure the variance of viewsexpressed by customers.

e) Consider how to identify and understand the dissatisfaction drivers.

f) Explain how to identify loyalty differentiators.

g) Describe the analysis of verbal scales.

h) Review analytical software for research data.

10.1 Averages

There are three measures of the average of a set of numbers – the mean, the median and the mode.

10.1.1 The mean

Usually, when people use the generic term ‘average’, they are referring to the mean. This is the sum of the values divided by the number of values. For example, take the values 6, 14, 1, 3, 11, 4, 5, 9. The total is 53 and there are 8 values, so 53/8 = 6.625, which is the mean average of the 8 scores.

10.1.2 The mode

The mode is the most commonly occurring value in a string of values. Due to central tendency (the fact that in the real world most values are close to the average rather than in the extremes of the distribution), the mode will often be a good approximation of the average. For example, if we checked the shoe sizes of a random sample of adult males we might produce the following data: 10, 9, 6, 9, 8, 12, 9, 7, 8, 11. The mode is 9, which is probably a good indication of the most common shoe size amongst adult males, and it is close to the mean shoe size, which in this example would be 8.9.

However, there are two problems with the mode. For some types of data the mode may not be at all reflective of what most people would see as the average. If we checked the rainfall data at a holiday resort we might see the following (in millimetres): 0, 0, 5, 0, 24, 13, 3, 0, 0, 0, 0, 0, 4, 7. The mode is clearly 0, but most people would see the mean of 4mm as a better reflection of the ‘average’ rainfall at that time of year. Of course, it might also be useful to know that there were 8 days without rain and only 6 days when it did rain, but that is simply a count of the values and nothing to do with the average. The other big problem with the mode for CSM is that if the raw data is in whole numbers, the mode will always be a whole number, making it far too insensitive to reflect the gradual changes in customer satisfaction that typically occur.

10.1.3 The median

The median is the middle value in a string of numbers. Sorted in descending order, the median of the following string of 11 values is 7: 10, 9, 8, 8, 7, 7, 6, 6, 4, 3, 3. If there is an even number of values, as in the following example, the median is the mid-point between the two middle values, 6.5 in this case: 9, 8, 8, 7, 7, 6, 6, 4, 3, 3.

The median can be a very useful measure of the average in situations where the range of the data is very wide and the sample small. The example below shows, in ascending order, the value of 7 houses in a very small village: £220,000, £235,000, £260,000, £265,000, £272,000, £310,000, £895,000. The mean is £351,000, but it has been heavily influenced by the one very high and untypical value. In this example the median of £265,000 would be a better reflection of the average house value in the village. This problem will not occur with CSM data on a 10-point scale with a reasonable sample size. Like the mode, the median also suffers from a lack of sensitivity as far as CSM data is concerned. Hence the use of the mean for calculating average importance and satisfaction scores.

KEY POINT
The mean average is used for CSM data.
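For readers who want to check the arithmetic, the three averages can be reproduced with a few lines of code. The snippet below is a minimal sketch in Python, using only the worked examples quoted above; it is purely illustrative and not part of the CSM methodology itself.

    # A minimal sketch (plain Python, standard library only) reproducing the
    # worked examples above. The numbers are taken straight from the text.
    from statistics import mean, median, mode

    scores = [6, 14, 1, 3, 11, 4, 5, 9]                # mean example
    shoe_sizes = [10, 9, 6, 9, 8, 12, 9, 7, 8, 11]     # mode example
    rainfall_mm = [0, 0, 5, 0, 24, 13, 3, 0, 0, 0, 0, 0, 4, 7]
    house_prices = [220000, 235000, 260000, 265000, 272000, 310000, 895000]

    print(mean(scores))          # 6.625
    print(mode(shoe_sizes))      # 9
    print(mean(shoe_sizes))      # 8.9
    print(mode(rainfall_mm))     # 0  (misleading as an 'average')
    print(mean(rainfall_mm))     # 4
    print(median(house_prices))  # 265000
    print(mean(house_prices))    # 351000 (pulled up by the one untypical value)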

10.2 Understanding what’s important to customers

In Chapter 4 we examined the difference between stated and derived measures of importance and concluded that so-called ‘derived importance’ measures are actually measures of impact rather than measures of what’s important to customers. Therefore, any debates about whether stated or derived importance is the better measure are irrelevant. Neither is better or worse, since they are measures of different things – importance and impact. Organisations that want a full understanding of how customers judge them will use both measures.

10.2.1 Importance

A CSM questionnaire asks customers to score the importance of a list of customer requirements on a 10-point scale, where 1 means ‘of no importance at all’ and 10 means ‘extremely important’. Based on a sample size of at least 200 respondents, the mean importance scores generated by this exercise will provide a very clear and reliable view of the relative importance of customers’ priorities, as seen by the customers themselves.

In this chapter we will use some fictitious data from a retailer to illustrate the outcomesof a customer satisfaction survey. For simplicity, the charts show only eight customerrequirements. As stated earlier in this book, a typical survey would measure the top 15to 20 customer requirements (as determined by exploratory research with customers).In the retail example shown, all eight requirements are important but choice ofproducts, staff expertise and prices are the customers’ top priorities. These aresignificantly more important than store layout, staff helpfulness and staff appearance.
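As a simple illustration of how a chart like Figure 10.1 is built, the sketch below computes the mean importance score for each requirement from respondent-level data and lists the requirements in descending order. The requirement names echo the retail example, but the individual scores and the tiny sample of five respondents are invented purely for illustration; a real survey would use at least 200 respondents and 15 to 20 requirements.

    # Minimal sketch: mean importance score per requirement, sorted for charting.
    # The five respondents and their scores are invented for illustration only.
    from statistics import mean

    importance_scores = {
        "Choice of products": [10, 9, 10, 9, 10],
        "Speed of service":   [8, 9, 8, 9, 9],
        "Staff appearance":   [7, 6, 8, 7, 7],
    }

    means = {req: mean(vals) for req, vals in importance_scores.items()}
    for req, avg in sorted(means.items(), key=lambda item: item[1], reverse=True):
        print(f"{req:20s} {avg:.2f}")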

FIGURE 10.1 Importance – bar chart of mean importance scores (axis from 6.5 to 10) for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.

10.2.2 Impact

As we know from Chapter 4, so-called ‘derived importance’ is actually a measure of impact, essentially highlighting things that are ‘top of mind’ for customers. Technically it is a measure of the extent to which a particular factor is currently influencing customers’ judgement of an organisation. We also saw in Chapter 4 that, due to the high degree of collinearity in customer satisfaction data, a bivariate correlation provides a better reflection of relative impact than multiple regression. The correlation coefficient will be a value between 0 and 1, and a typical range for CSM data is shown in Figure 10.2. It shows how some requirements, staff helpfulness in this example, can make a big impact on customers’ overall judgement of a supplier even though customers don’t score them particularly highly for importance. Conversely, there can be requirements that are almost always scored highly for stated importance, price being a typical example, that sometimes make little difference to customers’ overall judgement of the supplier.
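A minimal sketch of such a bivariate (Pearson) correlation is shown below. It assumes, as is a common convention for derived measures, that each requirement’s satisfaction scores are correlated against an overall satisfaction score; the six respondents and their scores are invented purely to illustrate one high-impact and one low-impact requirement.

    # Minimal sketch of a Pearson correlation used as an 'impact' measure.
    # All figures are invented for illustration; a real survey would use the
    # full respondent-level data set.
    from math import sqrt

    def pearson(x, y):
        """Pearson correlation coefficient between two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    overall = [9, 7, 8, 5, 10, 6]                  # overall satisfaction per respondent
    requirement_scores = {
        "Staff helpfulness": [9, 6, 8, 4, 10, 5],  # tracks overall closely -> high impact
        "Price level":       [6, 6, 8, 8, 8, 6],   # varies independently   -> low impact
    }

    for requirement, scores in requirement_scores.items():
        print(f"{requirement:18s} impact = {pearson(scores, overall):.2f}")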

FIGURE 10.2 Impact scores – bar chart of correlation coefficients (axis from 0 to 1) for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.

10.3 Using importance and impact measures

To gain a full understanding of what’s important to customers, importance and impact should be combined in the type of matrix shown in Figure 10.3. The importance scores are plotted on the y axis and the impact scores on the x axis, with the range of scores determining the axis scale in both cases. The key area is the top right-hand box, containing requirements with the highest scores for importance and impact. These are the Satisfaction Drivers – the requirements that will have the most influence on customer satisfaction. In the example shown, the retailer should obviously focus very strongly on expertise of staff and speed of service. Since in most CSM surveys the top 15-20 customer requirements will be covered, there will often be a greater number of Satisfaction Drivers than the two shown in the example here.

KEY POINT
Requirements that score highly for importance and impact are Satisfaction Drivers and will make a big difference to customer satisfaction.

The top left-hand box contains the Givens – requirements that customers say are very important but which make relatively little impact on their judgement of the supplier. Provided performance is maintained at an acceptable level, Givens would not normally be areas for investment. However, it is absolutely essential to maintain an acceptable level of performance, since customers will punish suppliers very heavily if their expectations on Givens are not met – empty shelves in the supermarket or dirty tableware in a restaurant, for example.

KEY POINT
High importance and low impact imply Givens – requirements that will not make much impact on customers provided an adequate level of performance is maintained.

The bottom right hand box shows the Hidden Opportunities. These are requirementsthat customers don’t rate as highly important, yet strongly influence customers’judgement of the supplier. It’s not uncommon to find staff helpfulness in this cell

[FIGURE 10.3 Satisfaction drivers: the eight retailer requirements plotted on a matrix of stated importance (y axis, low to high) against derived importance/impact (x axis, low to high), with quadrants labelled HYGIENE FACTORS (top left), SATISFACTION DRIVERS (top right), MARGINALS (bottom left) and HIDDEN OPPORTUNITIES (bottom right). Expertise of staff and speed of service appear in the Satisfaction Drivers quadrant.]


It’s not uncommon to find staff helpfulness in this cell, since a particularly good or a poor experience with a member of staff will be remembered by customers for a long time and will probably stimulate considerable word of mouth – positive or negative. Hidden Opportunities will often provide a good return on investment for suppliers since organisations can never give customers too many good experiences in areas that have high impact.

The requirements in the bottom left cell score relatively low for both importance and impact. However, it would be misleading to see these factors as unimportant requirements that can be more or less ignored since, provided the exploratory research was conducted properly, all the requirements measured by the survey will be important to customers – it’s just a matter of degree. It’s best to view these requirements as second division givens. They don’t usually need much investment, but expected levels of reasonable performance must be maintained.
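As an illustration of how the matrix can be populated from the two sets of scores, the sketch below assigns each requirement to a quadrant. The cut-off values and the data are hypothetical assumptions; in practice the mid-points of the observed ranges of importance and impact scores would normally be used, as described above.

    def quadrant(stated_importance, impact, importance_cutoff=8.5, impact_cutoff=0.45):
        """Return the quadrant of the importance/impact matrix for one requirement."""
        if stated_importance >= importance_cutoff and impact >= impact_cutoff:
            return "Satisfaction Driver"
        if stated_importance >= importance_cutoff:
            return "Given"
        if impact >= impact_cutoff:
            return "Hidden Opportunity"
        return "Second division given"

    # Hypothetical stated importance (out of 10) and impact (correlation) scores
    requirements = {
        "Price level":       (9.4, 0.30),
        "Speed of service":  (9.0, 0.60),
        "Staff helpfulness": (7.8, 0.75),
        "Staff appearance":  (6.5, 0.20),
    }
    for name, (importance_score, impact_score) in requirements.items():
        print(name, "->", quadrant(importance_score, impact_score))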

10.4 Understanding customer satisfaction

As well as scoring the list of requirements for importance, customers also score the same list for satisfaction. Figure 10.4 shows the average satisfaction scores for the retailer. They are still listed in order of importance to the customer, a practice that should be consistently followed on all charts.

Average satisfaction scores above nine on a ten point scale show an extremely high level of customer satisfaction. Scores of eight equate to ‘satisfied’ customers, seven to ‘quite satisfied’ and six (which is only just above the mid point of 5.5) to ‘borderline’ or ‘much room for improvement’.

[FIGURE 10.4 Satisfaction: a bar chart of average satisfaction scores (scale 6.5 to 10) for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.]


Increasingly, successful companies are striving for ‘top box’ scores (9s and 10s on a 10-point scale), as work by Harvard and others has demonstrated that much stronger levels of loyalty are found amongst highly satisfied customers than amongst merely satisfied ones.

Average satisfaction scores of five or lower are below the mid point and suggest a considerable number of dissatisfied customers. It would be good practice in telephone surveys to probe any scores below 6 out of 10 to find out why the low score was given. This will enable the research to explain any poor satisfaction scores, such as speed of service in Figure 10.4. Self-completion questionnaires can ask for comments on low satisfaction scores, but not all customers will give them, and they will typically not provide as much insight as comments generated by probing low scores in interviews.

10.5 The range of views

10.5.1 Range and variance

Technically, the range is the difference between the highest and the lowest value in a set of data. Thus, if the highest satisfaction score is 10 and the lowest is 1, the range is 9. However, for CSM that isn’t very useful because the fact that the range is wide doesn’t tell us anything about the extent of consensus or variance in the views expressed by customers. The histograms shown in Figures 10.5 to 10.7 illustrate the point. (A histogram shows how many people have scored each point on the scale.) In all three cases, 20 people have taken part in a survey, scoring their level of satisfaction on a 10 point scale. In all three cases, the average score comes out at 5.5, but each paints a completely different picture of the supplier’s success in satisfying its customers. In Figure 10.5, there is a strong consensus of opinion with all 20 respondents giving very close scores of either 5 or 6 out of 10. In other words, the service is neither good nor bad, it is mediocre, and all customers think the same way.

[FIGURE 10.5 Histogram 1: number of respondents by satisfaction score (1-10); all 20 respondents score either 5 or 6.]


In Figure 10.6 the 20 people surveyed are divided into two equal groups that hold diametrically opposed views. Half think the service is excellent – as good as it could be. The other ten customers have a very low opinion of the service – rating it as poor as it could be. This paints a very different picture to the one shown in Histogram 1, but the mean satisfaction score is still 5.5.

Finally, Histogram 3 shows a different picture again, with customers’ views equally spread across the full spectrum of opinion. Once more the average satisfaction score is 5.5.

With an average score potentially disguising such widely differing realities, it is clearly necessary to understand what lies behind the average importance and satisfaction scores.

[FIGURE 10.7 Histogram 3: number of respondents by satisfaction score (1-10); the 20 respondents are spread evenly across the whole scale.]

[FIGURE 10.6 Histogram 2: number of respondents by satisfaction score (1-10); ten respondents score 1 and ten score 10.]


One way would be to have a histogram for each importance and satisfaction score, but with 20 customer requirements on a typical CSM survey that would necessitate 40 histograms. Moreover, since the real world does not produce such extreme differences in the spread of scores as those shown in Figures 10.5 to 10.7, it would be very difficult to distinguish the differences between the histograms. Much more useful would be a measure that clearly identifies the extent to which the customers surveyed agree with each other or hold widely differing views. That measure is called the standard deviation, which effectively shows the average distance of all the scores from the overall average.

10.5.2 The standard deviation

In order to describe a variable in the sample properly we need to use a measure of central tendency (usually the mean) and a measure of the amount of variance in the scores. The mean score is a measure of the centre of the sample. In order to know whether the mean is a good representation of the underlying sample scores we need to know whether most people fall relatively close to the mean, or whether they are widely distributed. The most useful measure of variance with interval data is the standard deviation.

The standard deviation is a measure of the average distance between each score and the mean score. The obvious way to work this out would be to calculate the straightforward average distance between each score and the mean, by adding up the distances and dividing by the number of cases. Unfortunately this would always equal zero, as the distances above and below the mean would cancel each other out. Known as the average deviation, its formula is shown below, where X is each score, Mean is the sample mean and n is the number of scores in the sample.

average deviation = (Sum(X-Mean)) / n

One solution to this problem is to square the distance between each score and the mean before dividing by the number of cases. Squaring the negative distances will remove the negative sign and produce the average squared distance between each score and the mean, known as the variance. As you can see, the only difference between the two formulae is the squaring of the distance between each score and the overall mean.

variance = Sum((X - Mean)²) / n

The problem with the variance as a measure of dispersion is that the numbers are magnified due to the squaring, so the result is difficult to relate back to the original scale and consequently hard to interpret. The easy solution is to calculate the square root of the variance. This is the standard deviation and its formula is shown below.

sample standard deviation = √( Sum((X - Mean)²) / (n - 1) )

In the final formula, the sample size has suddenly become n - 1.


So for a sample of 200, 199 would be used in the formula. Subtracting 1 from the sample size forces the standard deviation to be larger, and this simply reflects the good scientific principle that if there is a risk that the estimate of the population (as opposed to the sample) standard deviation might be wrong, one should err on the side of caution. In practice, with even quite small samples such as 50, this procedure makes virtually no difference to the standard deviations typically recorded for CSM data on a 10-point scale. Using the ‘STDEV’ formula, it is easy to calculate in Excel or other analysis software.
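For anyone working outside Excel, the same calculation is easily reproduced. The sketch below follows the formula above and gives the same answer as Excel’s STDEV function (and Python’s statistics.stdev); the scores are invented for illustration.

    from statistics import stdev

    def sample_standard_deviation(scores):
        """Square root of the sum of squared distances from the mean, divided by n - 1."""
        n = len(scores)
        mean = sum(scores) / n
        variance = sum((x - mean) ** 2 for x in scores) / (n - 1)
        return variance ** 0.5

    scores = [9, 8, 10, 7, 9, 3, 8, 2, 9, 10]            # illustrative satisfaction scores
    print(round(sample_standard_deviation(scores), 2))    # hand-rolled calculation
    print(round(stdev(scores), 2))                        # built-in equivalent, same value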

The standard deviations produced by CSM data typically fall in the kind of range shown in Figure 10.8. On a 10 point scale, a standard deviation of around 1 indicates that there is a strong consensus of opinion. For example, customers are very satisfied with choice of products and most customers feel that way. On the other hand, a standard deviation above 2 demonstrates a wide disparity of views, and we can see this with staff helpfulness. The way to use the standard deviation is to ignore it if it is below 2 but to investigate matters further in cases where it exceeds 2.

Based on the standard deviation for staff helpfulness, quite a lot of respondents must have scored 8s, 9s and 10s for satisfaction whereas others must have scored it very low. Even if only 10 to 15% of customers were very dissatisfied with something, an organisation needs to understand the problem in order to address it. There are two ways of doing this. Firstly, if respondents have been probed for any areas of dissatisfaction the comments will indicate which aspects of staff helpfulness customers don’t like. Perhaps for the retailer the comments will suggest that the main cause of customer dissatisfaction is not the response of staff when asked to help but the difficulty of finding one to ask. A second option is to examine the classification data of all the very dissatisfied respondents (those scoring 1 to 3) to see if there are any patterns. Perhaps for the retailer it might show that they are primarily older customers, over 70 years of age. Another piece can be added to the jigsaw by studying the importance scores too. In this example there might be a high standard deviation for importance, with elderly customers placing much more importance on staff helpfulness than most customers.

FIGURE 10.8 Standard deviations

Customer requirement     Satisfaction score   Standard deviation
Choice of products       9.2                  0.91
Expertise of staff       7.9                  1.53
Price                    8.8                  1.39
Speed of service         7.4                  0.92
Quality of products      7.7                  1.34
Layout of store          8.6                  1.49
Staff helpfulness        7.5                  2.73
Staff appearance         8.5                  1.06


Since satisfaction is a relative concept (based on the extent to which the supplier meets the customer’s requirements), the drilling down would indicate that a consistent level of staff helpfulness, which met the requirements of most customers, was not sufficient to meet the needs of the elderly, who expected much more help. This is a very useful finding since it would enable the retailer to develop some focused actions on staff helpfulness targeted on elderly customers. This would be much better than basing decisions on the average score and taking the inappropriate step of exhorting all staff to be more helpful across the board when they are already doing all that can be reasonably expected to help customers.

KEY POINT
Use standard deviations to identify pockets of dissatisfied customers and comments to plan targeted actions.

10.6 Dissatisfaction drivers

In addition to monitoring levels of customer satisfaction, it is very useful to understand whether any aspects of the service are strongly irritating customers, or a segment of customers. These are the dissatisfaction drivers. They can be identified by examining the percentage of customers that have given very low scores for each requirement. The threshold for low scores should be determined by the organisation’s level of success in satisfying customers. A company with good levels of customer satisfaction (an index of 80% or higher) will benefit from highlighting any areas of dissatisfaction, so the threshold would be scores in the bottom half of the scale – 1 to 5 on a 10-point scale. For a less successful company with an index of 70% or below, it would be more appropriate to focus on areas of severe dissatisfaction, identified by scores of 1 to 3 on a 10-point scale. Dissatisfaction drivers can be highlighted by a chart like the one shown in Figure 10.9.
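A chart like Figure 10.9 is simply a percentage count per requirement. The sketch below shows one way the underlying figures could be calculated, assuming the severe-dissatisfaction threshold of 1 to 3; the data structure and scores are invented, and None stands for a ‘not applicable’ answer.

    def dissatisfaction_drivers(responses, threshold=3):
        """Percentage of valid scores at or below the threshold, for each requirement."""
        drivers = {}
        for requirement, scores in responses.items():
            valid = [s for s in scores if s is not None]          # ignore 'not applicable'
            low = sum(1 for s in valid if s <= threshold)
            drivers[requirement] = round(100 * low / len(valid), 1) if valid else 0.0
        return drivers

    responses = {
        "Staff helpfulness": [9, 2, 8, 3, 10, 1, 7, None, 9, 2],
        "Price level":       [6, 7, 5, 8, 6, 7, 5, 6, 7, 6],
    }
    print(dissatisfaction_drivers(responses))   # e.g. staff helpfulness: 44.4% scoring 1 to 3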

[FIGURE 10.9 Dissatisfaction drivers: a bar chart showing the percentage of customers (0% to 60%) giving very low scores for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.]


If customers are interviewed, all satisfaction scores below the threshold would be probed. This will ensure that as well as identifying what customers are dissatisfied with, the organisation will also understand why they are dissatisfied. The insight gained from customer comments will be extremely useful when deciding how to improve customer satisfaction. Figure 10.10 illustrates this point for the retailer. If the company was basing decisions on the scores alone, it may be tempted to encourage its staff to be more friendly, polite and/or helpful with customers and perhaps to provide training in customer contact skills or product knowledge. The insight gained from the comments demonstrates that these actions would not be cost-effective in improving customer satisfaction with staff helpfulness, since it is staff availability that is clearly the problem rather than their response when a customer does eventually find someone to ask.

10.7 Loyalty differentiators

As well as knowing what makes customers satisfied or dissatisfied, many companies will also want to understand what makes them loyal or disloyal. To do this it is necessary to ask one or more loyalty questions, possibly combining them into a loyalty index (see Chapter 11 for details). The loyalty data should then be used to divide respondents into three loyalty groups:

· Loyal – scoring 8-10 on the loyalty question(s)
· Ambivalent – scoring 4-7 on the loyalty question(s)
· Not loyal – scoring 1-3 on the loyalty question(s)

To highlight the differences between the most loyal and least loyal customers it is more productive to discard the middle group and contrast the satisfaction scores given by the loyal respondents and the disloyal ones. The resultant chart, Figure 10.11, shows that some requirements, such as ‘staff appearance’ and ‘store layout’, make virtually no difference to customer loyalty – the least loyal customers scoring them almost as highly as the most loyal.

[FIGURE 10.10 Problems with staff helpfulness: a bar chart (0% to 30%) of the reasons given in comments – couldn’t find anyone to help, staff in too much hurry, not interested in solving my problem, didn’t have knowledge to help, offhand/rude, staff over-worked.]


By contrast, ‘quality and choice of product’ as well as ‘staff helpfulness’ highlight why the retailer’s most loyal customers like the company so much more than its least loyal ones, making them the main loyalty differentiators.
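The comparison behind Figure 10.11 only needs respondents to be grouped on their loyalty scores and the group means compared. A minimal sketch follows, using the cut-offs listed above; the respondent records are invented.

    def loyalty_group(score):
        """Assign a respondent to a loyalty group using the section 10.7 cut-offs."""
        if score >= 8:
            return "Loyal"
        if score >= 4:
            return "Ambivalent"
        return "Not loyal"

    def differentiator_gap(respondents, requirement):
        """Mean satisfaction with one requirement for the most and least loyal groups."""
        means = {}
        for group in ("Loyal", "Not loyal"):              # the middle group is discarded
            scores = [r[requirement] for r in respondents
                      if loyalty_group(r["loyalty"]) == group and r.get(requirement) is not None]
            means[group] = round(sum(scores) / len(scores), 1) if scores else None
        return means

    respondents = [
        {"loyalty": 9,  "Quality of products": 9, "Staff appearance": 8},
        {"loyalty": 10, "Quality of products": 9, "Staff appearance": 9},
        {"loyalty": 2,  "Quality of products": 5, "Staff appearance": 8},
        {"loyalty": 3,  "Quality of products": 6, "Staff appearance": 9},
        {"loyalty": 6,  "Quality of products": 7, "Staff appearance": 7},   # ambivalent, ignored
    ]
    print(differentiator_gap(respondents, "Quality of products"))  # big gap: a loyalty differentiator
    print(differentiator_gap(respondents, "Staff appearance"))     # little or no gap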

10.8 Analysing verbal scales

As explained in the previous chapter, verbal scales are not recommended for customer satisfaction research, but if they are used, the results have to be analysed using a frequency distribution – in other words, a count of how many people said what. Using a retail example with different questions, a frequency distribution is shown in Figure 10.12. The numbers are usually percentages, so in the example shown, 14% are completely satisfied with the location and 78.9% are quite satisfied. It is a totally accurate summary of the results, but it does not make a very strong impression.

FIGURE 10.12 Frequency distribution

                          Completely   Quite       Quite          Completely
                          satisfied    satisfied   dissatisfied   dissatisfied
Location                  14.0%        78.9%       7.0%           0.0%
Range of merchandise      9.1%         56.4%       29.1%          5.5%
Price level               17.3%        55.8%       26.9%          0.0%
Quality of merchandise    16.4%        63.6%       20.0%          0.0%
Checkout time             4.2%         70.8%       22.9%          2.1%
Staff helpfulness         5.6%         64.8%       25.9%          3.7%
Parking                   33.3%        54.4%       10.5%          1.8%
Staff appearance          28.3%        64.2%       7.5%           0.0%

[FIGURE 10.11 Loyalty differentiators: average satisfaction scores (scale 5 to 10) of the most loyal and least loyal customers for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.]


It is possible to chart a frequency distribution showing varying levels of satisfaction or importance by attribute. This is shown in Figure 10.13, and is certainly easier to assimilate than the table of numbers.

However, the real problem with the analysis produced from verbal scales is the absence of a single average score for each attribute. For example, it is not possible to make a direct comparison between the importance score for location and the satisfaction score for location, making it impossible to carry out a gap analysis to determine priorities for improvement (see Chapter 12). Nor can a weighted customer satisfaction index be calculated. Whilst it may be very tempting to change the categorical data produced by verbal scales into numbers for analysis purposes, this is statistically invalid. As we explained in Chapter 8, verbal scales produce non-parametric data which lack interval properties, so they cannot be analysed like numbers or changed into them.

KEY POINT
Compared with numerical data, output from verbal scales is far less useful for CSM, but it is statistically invalid to change verbal categories into numbers to improve ease of analysis and clarity of reporting.

10.9 Clarity of reporting

If the results of a CSM survey are not clear and easy to understand, they will not be assimilated into the organisation’s thinking and will not lead to effective action to improve customer satisfaction.

[FIGURE 10.13 Charting verbal scales: a stacked bar chart (0% to 100%) showing the percentage of customers who are completely dissatisfied, quite dissatisfied, quite satisfied and completely satisfied with each of the eight attributes in Figure 10.12.]


This provides an additional reason for using a numerical scale, the simple average scores giving a much clearer outcome than the plethora of information presented by a frequency distribution. Consistency is also very helpful, hence the consistent listing of the requirements in order of importance to the customer on all charts and tables. Use of colour will also help the clarity and consistency of the message, particularly if based on a simple and universally understood colour coding such as traffic lights. Even a frequency distribution will provide a much clearer picture if red and amber, representing danger, are used for low satisfaction scores and shades of green for higher ones.

Whenever data is presented it is helpful to explain in simple terms how it was produced, otherwise it may not be believed and its apparent lack of transparency could be used by detractors to cast doubt on the credibility of the CSM process and outcomes. Statistically derived measures of impact are often poorly understood, so their calculation and meaning must be clearly explained using the kind of illustrations presented in Chapter 4 (Figures 4.2 and 4.3). Sometimes a concept such as the standard deviation can be avoided altogether in reporting CSM results to colleagues by presenting the information in a less technical manner. Instead of stating that the satisfaction score for a requirement such as ‘staff helpfulness’ has a high standard deviation, it can be more useful to use the dissatisfaction drivers and say ‘x% of customers are very dissatisfied with helpfulness of staff’. Certainly, the basis of any kind of composite measure, such as a customer satisfaction index, will have to be explained if it is to have any credibility, and this will be the topic of the next chapter.

10.10 Software

All the statistical procedures mentioned in this book can be conducted using Microsoft Excel. Since most people have Excel and are competent in its use, it is unlikely that the financial cost or training cost of adopting specialist software will be justifiable unless large quantities of research data are being handled. If specialist software is required there are two broad types available. The first will typically provide a general entry level solution for many research tasks including questionnaire design, data entry and web surveys as well as data analysis. Most are easy to use but fairly limited in the level of statistical analysis provided. An example of this type of software is SNAP¹. For those requiring a much more sophisticated level of statistical analysis, a specialist statistical package such as SPSS will be necessary². This type of software would be much more difficult to learn and would be worthwhile only for those with a high level of statistical knowledge. Since specialist software can be very difficult for the layman to evaluate, help is available, if required, from independent market research software consultants Meaning Ltd³.

Conclusions

1. Importance scores are based on what customers say is important and provide the only measure of the relative importance of customers’ requirements.


2. Statistically derived measures of impact are different and reflect the extent to which a requirement is influencing customers’ judgement of an organisation.

3. For a full understanding of customer satisfaction, organisations should monitor importance and impact and combine the two sets of measures to distinguish between Givens and Satisfaction Drivers.

4. To have satisfied customers organisations must perform sufficiently well on the Givens.

5. To achieve very high levels of customer satisfaction, strong performance on the Satisfaction Drivers will also be essential.

6. Satisfaction scores of 8 equate to satisfied customers but ‘top box’ scores (9s and 10s) are required to generate loyalty.

7. Always probe low satisfaction scores to fully understand the reasons for any customer dissatisfaction.

8. Use standard deviations to identify pockets of dissatisfied customers and comments to plan targeted remedial action.

9. Highlight loyalty differentiators by contrasting the satisfaction scores given by the most loyal customers with those given by the least loyal.

10. Output from verbal scales is analytically far less useful than from numerical scales but it is not statistically valid to assign numerical values to categorical or ordinal data.

References

1. www.snapsurveys.com
2. www.spss.co.uk
3. www.meaning.uk.com


CHAPTER ELEVEN

Monitoring performance over time

Most organisations require a headline measure that reflects their performance in satisfying customers. Quite rightly so, since it serves some very useful purposes. It enables senior management to have a top line figure that demonstrates how the organisation is performing. It is essential for companies using a balanced scorecard, since customer satisfaction is usually one of its main components. It can be used for setting targets and for judging the organisation’s success in achieving them. It is vital for benchmarking, whether internally across business units, stores, regions etc. or against external organisations.

Although headline measures are very useful and widely used, there’s still much confusion and misunderstanding over how they should be produced. The three most commonly used techniques are a simple overall satisfaction question, a composite index based on a number of components of satisfaction, or a weighted index based on the relative importance of its component elements. There are also other important outcomes of customer management such as loyalty. Shouldn’t that be monitored as well as, or instead of, customer satisfaction?

At a glance
In this chapter we will examine:

a) The problems inherent in a percentage satisfied measure

b) The benefits of an index

c) Weighted and unweighted indices

d) How to calculate a weighted customer satisfaction index

e) The reliability of indices

f) Constructing a loyalty index

g) Monitoring loyalty behaviour

11.1 Overall satisfaction

The simplest way to get a measure of overall satisfaction is to ask the question:


“Taking everything into account how satisfied or dissatisfied are you overall with XYZ?”

The rating scale attached to this question could be verbal or numerical. If numerical, the headline measure would normally be an average score; if verbal, it would typically be a percentage satisfied measure based on aggregating the respondents ticking boxes in the top half of the scale (the top two boxes on a typical 5-point verbal scale). For all the reasons explained in Chapter 8, an overall satisfaction question with a verbal rating scale is by far the least useful headline measure. To re-cap, the key reasons are:

a) Since most organisations now have customers who are broadly satisfied rather than dissatisfied overall, this measure encompasses most customers, resulting in a very high score that often leads to a dangerous level of complacency within the organisation.

b) With only two scale points covering the entire satisfied zone (where most customers are), the 5-point scale is not sufficiently sensitive to detect the small changes in customer satisfaction that typically occur.

c) Moreover, the percentage satisfied measure fails to reflect most of the changes in satisfaction that the scale does detect, because its aggregation of the data doesn’t show movement between the two scale points in the satisfied zone, nor between the three scale points below that level.

d) The financial benefits of customer satisfaction (continued loyalty and high customer lifetime value) occur mainly at high levels of satisfaction. Therefore, using a percentage satisfied measure for target setting and tracking means monitoring a measure that is not tough enough to produce any worthwhile benefits. The percentage satisfied measure fatally perpetuates the confusion between ‘making more customers satisfied’ and ‘making customers more satisfied’.

11.2 The benefits of an index

11.2.1 Random measurement error

Even if it is based on a 10-point scale, the single question measure is statistically by far the worst option due to a phenomenon that is variously labelled random, observation or measurement error. It was Galileo as long ago as 1632 who first propounded that measurement errors are symmetrical¹ (i.e. equally prone to under- or over-estimation). This enabled eighteenth century scientists such as Thomas Simpson to demonstrate the advantage of using the mean compared with a single observation in astronomy² – the instances of over- and under-estimation effectively cancelling each other out. As we explained in Chapter 6, measurement errors are now classified as ‘systematic’ and ‘random’, and it is the random measurement error that is minimised by using an index.


This is illustrated in Figure 11.1, where the mid-point is the true satisfaction score that would have been obtained if there were no such thing as random measurement error. Regardless of whether the score for each item was good or bad, the chart demonstrates that some of the requirements will have scored rather higher than they should have done, whilst others, due to random, inexplicable factors, will have scored somewhat lower. The net effect is that the over- and under-scoring is more or less cancelled out when a composite index is used.

As Oppenheim points out³, it has been demonstrated many times that attitude questions are more prone to random error than factual ones. This is because attitude measurement is a combination of a person’s stable underlying attitude (e.g. they think the organisation is pretty good or quite poor) plus a collection of momentary, unstable determinants such as questionnaire wording, context, recent experiences, mood of the moment etc. As shown in Figure 11.1, Oppenheim points out that these distorting effects will apply more to some questions than others and will distort the answers in different ways, but the underlying attitude will be common across the questions and will be what remains in an index once the random distortions have largely cancelled each other out.
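The effect is easy to demonstrate with a small simulation. The sketch below invents a stable underlying attitude for each respondent, adds independent random ‘momentary’ error to a single overall question and to each of 15 survey items, and then repeats the survey many times: the composite index wobbles far less from survey to survey than the single question. All the numbers (sample size, number of items, size of the random error) are illustrative assumptions, and the toy model ignores the fact that real scores are capped at 10.

    import random
    from statistics import mean, stdev

    random.seed(1)

    def one_survey(n_respondents=200, n_items=15, noise_sd=1.5):
        """Return the headline score from a single question and from a 15-item index."""
        single_scores, index_scores = [], []
        for _ in range(n_respondents):
            attitude = random.gauss(8.0, 1.0)                      # stable underlying attitude
            single_scores.append(attitude + random.gauss(0, noise_sd))
            items = [attitude + random.gauss(0, noise_sd) for _ in range(n_items)]
            index_scores.append(mean(items))                       # random errors largely cancel out
        return mean(single_scores), mean(index_scores)

    results = [one_survey() for _ in range(300)]                   # repeat the survey 300 times
    singles, indices = zip(*results)
    print("survey-to-survey sd, single question:", round(stdev(singles), 3))
    print("survey-to-survey sd, composite index:", round(stdev(indices), 3))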

11.2.2 Moving the needle

As well as the fact that it is measuring attitudes, the random error problem is accentuated in CSM because customer satisfaction changes only slowly, especially in an upwards direction.

[FIGURE 11.1 Random measurement error: a chart showing, for 15 attributes, how far each scored above or below the true satisfaction score (roughly -0.5 to +0.5) due to random error.]


Since any survey based on sampling will have a confidence interval (margin of error) based primarily on the size of the sample, the random measurement error of the single overall satisfaction question will compound the confidence interval to produce a headline measure that is far too volatile to be useful for monitoring customer satisfaction. The more frequently companies track customer satisfaction, the more serious this business problem becomes, since staff will eventually decide that there is no relationship between movements in the measure and the real performance delivered by the organisation. They consequently draw the conclusion that trying to measure and improve customer satisfaction is pointless, whereas, in reality, the problem is the instability of the measure they are tracking.

Myers⁴ calls this problem “moving the needle”, and agrees with us that it has posed a significant problem for many American corporations. He advocates maximising the sensitivity of the scale by increasing the number of points and by strengthening the high and low end-point anchor statements, as well as minimising confidence intervals by using the largest affordable sample sizes. The unsuitability of a single overall satisfaction question as the trackable measure has also been widely supported elsewhere in the CSM literature⁵,⁶,⁷.

KEY POINT
The headline measure of customer satisfaction must be a composite index rather than a single item overall satisfaction question, since the latter is much more prone to random measurement error.

11.2.3 The customer’s judgement

An important characteristic of a headline measure is that it should reflect as closely as possible the way customers make their satisfaction judgements. The use of a composite index conforms with current understanding about how customers make satisfaction judgements – based on multiple aspects of the customer experience rather than one overall impression⁸,⁹,¹⁰,¹¹. As we have said earlier in this book, this phenomenon has been labelled ‘the lens of the customer’, and exploratory research is used to capture these key aspects of the customer experience, which then form the basis of the CSM survey. This leads us to another fundamental question of methodology. Should all the customer requirements be treated equally by the index or should they be weighted so that some contribute more to the index than others?

11.3 Weighting the index

11.3.1 Weighted or non-weighted

The most straightforward customer satisfaction index would be a non-weighted average of all the satisfaction scores. The appeal of this approach is its simplicity, making it easy to calculate and easy for staff to understand. The former benefit is minimal since calculating a weighted index presents no problem. Even the most computing-intensive method of a customer by customer index is well within the capabilities of a standard spreadsheet.


When it comes to staff, however, transparency, simplicity and ease of communication are very helpful. Apart from the organisational resources consumed in explaining a complicated calculation, employees tend to be suspicious of ‘black box’ figures invented by management. American bank card issuer MBNA has a very effective customer satisfaction-based bonus scheme based on a non-weighted index across a number of customer requirements¹². A sample of customers is interviewed every day and the previous day’s score is posted for all employees to see as they arrive at work. Every day that the index is above target the company contributes money to a bonus fund, which is paid out quarterly. The index is clearly understood and universally accepted by staff, who stand a renewed chance of earning bonus every day when they arrive at work.

Weighting the index, however, is widely advocated in the CSM literature on the grounds that the relative importance of customers’ requirements will differ across sectors and from one individual to another, and that people place more emphasis on the things that are most important to them when making satisfaction judgements⁹,¹³,¹⁴. The original SERVQUAL questionnaire was revised in 1991 to incorporate the scoring of the five dimensions for importance so that they could be weighted during the analysis¹⁵.

If the ‘lens of the customer’ principle is accepted, it is impossible to argue against weighting the index, since the most important methodological requirement of a headline measure is that it should reflect the customers’ judgements as accurately as possible. Capturing the customers’ true underlying attitudes is particularly important if the index is to be used for modelling the extent to which customer satisfaction affects various desirable outcomes such as customer loyalty or the company’s financial performance¹⁶. Moreover, an unweighted index has, in reality, assigned relative weightings to its components – equal ones. By implication, an unweighted index is saying that customers place equal emphasis on all of its component parts when judging the organisation. All things considered, the arguments point strongly towards a weighted index. If so, the remaining question is how it should be weighted.

11.3.2 Weighting options

There are three methods of weighting a customer satisfaction index. The weighting factors can be based on management judgements, statistically derived measures of impact, or relative importance.

(a) Judgemental weighting factors

There are several reasons why companies may choose to adopt judgemental weighting factors. First, management may believe that they know what’s important to customers and can therefore base the weighting factors on their own judgements.


This view clearly contradicts the fundamental premise of this book, that if satisfaction is based on meeting or exceeding the customer’s requirements, any measure of satisfaction can be accurately produced only from the customer’s perspective.

A second and more valid reason for using judgemental weighting factors would be to align with an organisational strategy that emphasises certain values, such as friendliness, integrity, environmental concern etc. Using this method, the most important organisational values would be weighted more heavily in the index. This type of approach would be justifiably adopted for many aspects of management, such as incorporating a ‘living the values’ component into employees’ appraisals, but is moving into a different type of customer research. It is rarely possible in research to ‘kill two birds with one stone’. An image or perception survey to understand the extent to which the organisation is associated with its values in the outside world may be a very useful exercise, but it is not the same as measuring customer satisfaction.

Myers⁴ points out that organisations adopting judgemental weighting factors often regret their decision at a later date as unproductive debates about the weighting factors consume management time and thoughts. This is avoided by adopting empirically justifiable weighting factors such as those explained in (b) and (c).

(b) Statistically derived weighting factors

The CSM literature is divided between the use of weighting factors based on statistically derived measures of impact and the relative importance of the requirements to the customers. Some argue that statistically derived importance rather than stated importance measures should be used¹⁶. However, as we saw in Chapter 4, so-called derived importance measures are not really measures of how important requirements are to the customer but rather indicators of the amount of impact made by each requirement on an outcome variable such as overall satisfaction or loyalty¹⁶,¹⁷. Sometimes statistically derived measures, produced by a variety of statistical techniques, are called ‘relative importance’ but are actually measures of relative impact. The fact that different statistical techniques are advocated for producing impact measures argues against their use in a headline index of customer satisfaction, since it would lead to a further debate about which particular statistical technique should be used. Moreover, practitioner experience shows that statistically derived impact coefficients are much less stable than stated importance scores, a big disadvantage for an index that must be trackable over time. In reality, any mathematical derivation of ‘relative importance’ is something quite different from asking the customers to score factors for importance¹⁸. It is therefore better to use both stated and derived importance measures for a fully rounded analysis of customer satisfaction data and for developing action plans, but to use stated importance measures to produce the weighting factors for a trackable customer satisfaction index.

(c) Relative importance weighting factors


As we have seen earlier in this book, importance scores are generated by asking customers. In addition to scoring importance on a 10-point scale as explained earlier, there are more complex methods of generating stated importance scores such as paired comparisons and points share. Whilst these methods have some appeal, based mainly on the fact that their forced trade-off approach tends to generate a wider range of importance scores, they are considered less appropriate to CSM than to other forms of market research due to the large number of variables typically involved in customer satisfaction research⁴.

This means that the stated importance scores generated by the main survey should be used for the weighting factors⁶,¹⁹, since these most accurately reflect the actual importance of the requirements to the customer²⁰ and will aid tracking by maximising stability and comparability. Note that the importance scores from the main survey rather than the exploratory research should be used, since the larger sample size gives them greater reliability. The only exception would be if a quantitative exploratory survey (as explained in Chapter 5) has been conducted, in which case the statistical reliability would be perfectly adequate.

KEY POINT
The customer satisfaction index should be weighted according to the relative importance of customers’ requirements.

11.4 Calculating a customer satisfaction index

The most accurate customer satisfaction index will be produced by calculating an individual index for each respondent prior to averaging all the individual indices. Whilst average importance scores from across the whole sample and average satisfaction scores can be used, the resultant index will be less accurate for two reasons. First, the relative importance of the requirements will vary between individual customers, so using respondents’ own importance scores to calculate their weighting factors will be more accurate. The second reason concerns ‘not-applicables’, which will have an increasingly distorting effect on the index as their volume grows. We will use a question on complaint handling as an illustration. A company with high levels of customer satisfaction will typically have to handle complaints from only a small percentage of its customers. Consequently, if the question appears in a CSM survey, most respondents will score it ‘not applicable’ for satisfaction. Since complaint handling is an area where organisations are notoriously poor at meeting customers’ requirements, the minority of respondents that do score it will probably generate quite a poor average satisfaction score. If average scores are used to calculate the index this low complaint handling score will be unfairly applied to all the respondents. The individual satisfaction indices for the majority of respondents, who had not scored complaint handling, would be higher since their indices would contain no data for complaint handling.


This is also intuitively sound, since we are measuring customers’ feelings of satisfaction or dissatisfaction with the customer experience. Clearly, it would be wrong to include in the measure parts of the customer journey that they have not experienced.

11.4.1 Calculating the weighting factors

To demonstrate the calculation of a customer satisfaction index, we will use the hypothetical supermarket example with only eight requirements. The first column in Figure 11.2 shows the importance scores given by one respondent. To calculate the weighting factors, simply total all the importance scores. In this example they add up to 60. Then express each one as a percentage of the total. Using ‘staff appearance’ as an example, 3 divided by 60, multiplied by 100, produces a weighting factor of 5%. Taking ‘speed of service’, 10 divided by 60, multiplied by 100, equals 16.66%, so due to its much greater relative importance for this customer, ‘speed of service’ will affect her index more than three times as heavily as ‘staff appearance’.
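Expressed as a couple of lines of code rather than a spreadsheet, the same calculation for this respondent looks like the sketch below; the tiny rounding differences against Figure 11.2 arise only because the figure truncates to two decimal places.

    importance = {"Choice of products": 7, "Expertise of staff": 9, "Price": 8,
                  "Speed of service": 10, "Quality of products": 8, "Layout of store": 6,
                  "Staff helpfulness": 9, "Staff appearance": 3}

    total = sum(importance.values())                                  # 60 for this respondent
    weighting_factors = {req: score / total for req, score in importance.items()}
    print(round(weighting_factors["Staff appearance"] * 100, 2))      # 5.0
    print(round(weighting_factors["Speed of service"] * 100, 2))      # 16.67 (shown as 16.66 in Figure 11.2)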

11.4.2 Calculating the Satisfaction Index

The second step is to multiply each satisfaction score by its corresponding weighting factor. The first column of data in Figure 11.3 shows the satisfaction scores for our one respondent and the second column of data shows her weighting factors that were calculated in Figure 11.2. Taking ‘staff appearance’ as the example, the satisfaction score of 9 multiplied by the weighting factor of 5% produces a weighted score of 0.45. The overall weighted average is determined by adding up all the weighted scores. In this example they add up to 7.41, so the weighted average satisfaction score for our one respondent is 7.41 out of 10. It is normal to express the index as a score out of 100, so in this example the respondent’s Satisfaction Index is 74.1%.

FIGURE 11.2 Calculating the weighting factors (for one respondent)

Customer requirement    Importance score   Weighting factor
Choice of products      7                  11.66%
Expertise of staff      9                  15.00%
Price                   8                  13.33%
Speed of service        10                 16.66%
Quality of products     8                  13.33%
Layout of store         6                  10.00%
Staff helpfulness       9                  15.00%
Staff appearance        3                  5.00%
TOTAL                   60


Note that this second step is based solely on the satisfaction scores for the list of customer requirements generated by the exploratory research. Scores for overall satisfaction, loyalty questions or any other additional questions should not be included.

That procedure would now be repeated for all the other respondents and all the individual indices averaged to produce the overall customer satisfaction index for the organisation. On first reading this may seem to be a daunting task for a large sample. However, even basic computing skills would enable the formulae generated for the first respondent to be quickly transferred across the rest.
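In code, the whole procedure is a short loop. The sketch below calculates an individual weighted index for one respondent, omitting any requirement scored ‘not applicable’ so that it carries no weight, and then averages the individual indices across the sample. The result differs fractionally from Figure 11.3 only because the figure rounds each weighted score to two decimal places.

    def respondent_index(importance, satisfaction):
        """Weighted Satisfaction Index (out of 100) for one respondent.
        Requirements scored 'not applicable' for satisfaction are simply left out of the
        satisfaction dict, so they contribute nothing to that person's index."""
        answered = [req for req in importance if req in satisfaction]
        total_importance = sum(importance[req] for req in answered)
        weighted_average = sum(satisfaction[req] * importance[req] / total_importance
                               for req in answered)
        return weighted_average * 10        # convert a score out of 10 to a score out of 100

    def overall_index(sample):
        """Average of the individual indices across all (importance, satisfaction) pairs."""
        indices = [respondent_index(imp, sat) for imp, sat in sample]
        return sum(indices) / len(indices)

    # The respondent from Figures 11.2 and 11.3
    importance   = {"Choice": 7, "Expertise": 9, "Price": 8, "Speed": 10,
                    "Quality": 8, "Layout": 6, "Helpfulness": 9, "Appearance": 3}
    satisfaction = {"Choice": 8, "Expertise": 10, "Price": 7, "Speed": 9,
                    "Quality": 6, "Layout": 7, "Helpfulness": 4, "Appearance": 9}
    print(round(respondent_index(importance, satisfaction), 1))   # 74.2, vs 74.1 in Figure 11.3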

11.5 Updating the Satisfaction Index

It is important that the Satisfaction Index is updateable. It has to provide a comparable measure of satisfaction that is trackable in the years ahead even if the questions on the questionnaire have to change as customers’ requirements change. Basically, the Satisfaction Index answers this question:

“How successful is the organisation at satisfying its customers according to the 20 things that are most important to them?” (Assuming 20 customer requirements on the questionnaire.)

If the questionnaire has to change in the future because customers’ priorities have changed, the Satisfaction Index remains a measure of exactly the same thing.

“How successful is the organisation at satisfying its customers according to the 20 things that are most important to them?”

That comparability also applies to organisations with different customer groups who need to be asked different questions in the same year.

FIGURE 11.3 Calculating the Satisfaction Index (for one respondent)

Customer requirement    Satisfaction score   Weighting factor   Weighted score
Choice of products      8                    11.66%             0.93
Expertise of staff      10                   15.00%             1.50
Price                   7                    13.33%             0.93
Speed of service        9                    16.66%             1.50
Quality of products     6                    13.33%             0.80
Layout of store         7                    10.00%             0.70
Staff helpfulness       4                    15.00%             0.60
Staff appearance        9                    5.00%              0.45
Weighted average                                                7.41
Satisfaction Index                                              74.1%


Provided the exploratory research has been correctly undertaken, the Satisfaction Indices from two or more surveys asking different questions are directly comparable. They’re both a measure of the extent to which each organisation met its customers’ requirements.

11.6 The reliability of an index

Survey results are often accompanied by a measure of reliability. An opinion poll, a CSM survey or an estimate of male height could be reliable to +/- 1%. This is its margin of error. If you measured a random and representative sample of adult males in the UK and recorded an average height of 5 feet 10 inches, with a margin of error of +/- 1%, the true mean height of UK adult males could be anywhere between 5 feet 9.3 inches and 5 feet 10.7 inches. Provided the sample is random and representative, the margin of error in its result will be caused by random error. Even if the sample was completely representative of all the demographic groups, it may have included an unusually small set of young males, smaller than average older men, Scottish men who were less tall than the average Scot etc. It would have been unlucky, but at random, with no explanation, it could happen. To understand the reliability of a sample for a CSM survey, the following factors must be considered:

a) Sample size
b) Confidence interval
c) Confidence level
d) Sub-groups
e) Standard deviation

11.6.1 Sample size

The reliability of a sample is based on its absolute size and not its proportion of the total population, for two reasons. First, the bigger the sample, the less impact extreme data will make on the overall result. A 7 foot 6 inches tall man could skew the average height of a sample of 10 by fully 2 inches, but only by 0.2 inches on a sample of 100 (well within our +/- 1% margin of error) and only by 0.02 inches on a sample of 1,000. Secondly, the mean is what it is because most people are average or close to it, so the larger the random sample the greater the likelihood that most will be close to the average and few will be in the extremes. For these reasons we said that a sample of 200 should be seen as the minimum for a reliable result at the overall level. Bigger samples will be more reliable, but there will be increasingly diminishing returns in reliability as the sample size grows beyond 200.

11.6.2 Confidence interval

The confidence interval is the margin of error. If a Satisfaction Index is 78.6% with a confidence interval of +/- 1%, the lowest it could be if every customer were surveyed is 77.6% and the highest is 79.6%.


The confidence interval is basically the precision of the result. The only controllable method for improving the precision of a Satisfaction Index, lowering its confidence interval, is to increase the sample size. The term ‘confidence interval’ is somewhat unfortunate since it is often confused with a different concept known as the ‘confidence level’.

11.6.3 Confidence level

The confidence level sets a reliability level for the confidence interval or margin of error. The normal confidence level used in business research is 95%, but this is discretionary. It is possible to calculate a margin of error at any confidence level. If an index is 78.6% with a confidence interval of +/- 1% at the 95% confidence level, it means that if the survey were repeated many times the index would be between 77.6% and 79.6% at least 95% of the time, or 19 out of every 20 surveys. It is possible, like medical researchers, to set a more demanding level of reliability by using a 99% confidence level, which would increase the margin of error for any given sample size. Alternatively, by choosing to operate at the 90% confidence level it is possible to make an index look more accurate, at least to the uninitiated. However, it is strongly recommended that for CSM, organisations should work with the normal 95% confidence level.

11.6.4 Sub-groups

A sample of 200 for a typical CSM survey will, on average, produce a confidence interval of +/- 1.5% at the 95% confidence level, but by the time it is divided into sub-groups, the precision at segment level will be much lower. Most organisations would accept a less accurate result at segment level, the general consensus being that sub-groups should contain no fewer than 50 respondents, giving a confidence interval of approximately +/- 5%. As the sample gets smaller, confidence intervals will become wider. Figures 11.4 and 11.5 show real data for a chain of ten restaurants with an overall sample of 500 and 50 customers per restaurant. The confidence interval at the overall level is +/- 0.8%. At the sub-group level it varies from +/- 2.0% in Glasgow to +/- 2.8% in Hampstead. The level of precision required is a policy decision for the company, but often the size of the overall sample will be determined by the number of sub-groups and the margin of error that is acceptable at sub-group level. If the restaurant chain decided that the +/- 2.8% achieved in Hampstead was not sufficiently accurate it could opt for samples of 100 for greater reliability at restaurant level, which, on this survey, with lower than normal standard deviations, would have achieved confidence intervals below +/- 2%. At the overall level, however, the bigger sample of 1,000 would show relatively little gain in accuracy over the original sample of 500.


11.6.5 Standard deviation

As we said in Chapter 6, if all British adult males were identical in height, you would have to measure only one of them to know, with absolute certainty, the average height of men in the UK, even if there were 20 million of them. The bigger the variation in men’s height, the more you would have to measure to be reasonably certain of your answer. The chief measure of variance for numerical data is the standard deviation, whose formula we explained in Chapter 10. As we said in Chapter 6, a higher standard deviation needs a bigger sample to achieve a given margin of error, other things being equal.

[FIGURE 11.5 Sub-group confidence intervals illustrated: a chart plotting each restaurant's index (scale 65% to 90%) with its confidence interval shown around the index.]

FIGURE 11.4 Confidence intervals at sub-group level

Individual restaurants    Index    Confidence interval   Sample size
Overall                   82.5%    +/- 0.8%              500
Glasgow                   86.1%    +/- 2.0%              50
York                      84.4%    +/- 2.7%              50
Hampstead                 83.8%    +/- 2.8%              50
Wimbledon                 83.5%    +/- 2.2%              50
Manchester                83.2%    +/- 2.6%              50
Leeds                     81.7%    +/- 2.6%              50
Oxford                    81.4%    +/- 2.2%              50
Cheltenham                80.9%    +/- 2.3%              50
Cambridge                 80.8%    +/- 2.4%              50
Lincoln                   78.8%    +/- 2.7%              50


Whilst typical CSM standard deviations are lower than in most other forms of market research, they can vary considerably from one survey to another, simply reflecting the fact that people agree more about some things than others. The survey shown in Figure 11.4 had particularly low standard deviations, resulting in a high level of accuracy (+/- 0.8%) for the sample of 500. By contrast, the data shown in Figure 11.6 has a wider margin of error, with a confidence interval of +/- 1.3% at the overall level even though the sample contained 100 more customers. It can also be seen that the data for the service teams is less reliable than the indices for the individual restaurants shown in Figure 11.4, even though most of the teams have larger sample sizes than the restaurants.

11.6.6 Calculating the margin of error

Calculating the margin of error, or confidence interval, is based on the variables explained in this chapter (sample size, confidence level and standard deviation) and its Excel formula is CONFIDENCE. This is simply illustrated with the following example.

Imagine a sample of 200 customers produced a mean score of 8.26 for ‘expertise of the financial advisor’ with a standard deviation of 1.2. The margin of error is 8.26 +/- CONFIDENCE. To perform the calculation Excel asks for the sample size (200), the standard deviation (1.2) and the ‘alpha’. Alpha is the significance level used to compute the confidence level: the confidence level is 100 x (1 - alpha)%, so a confidence level of 95% gives an alpha of 0.05. The complete formula is CONFIDENCE(alpha, standard_deviation, sample_size), or

CONFIDENCE(0.05, 1.2, 200).

FIGURE 11.6 Sub-group confidence intervals: example 2

Service teams    Index    Confidence interval   Sample size
Overall          73.6%    ±1.3%                 600
Area 1           74.0%    ±1.9%                 300
Area 2           73.3%    ±1.8%                 300
Team 9           76.8%    ±4.2%                 60
Team 3           76.8%    ±4.4%                 57
Team 1           75.9%    ±3.7%                 66
Team 7           75.7%    ±5.0%                 57
Team 6           74.1%    ±4.6%                 47
Team 2           73.8%    ±3.3%                 89
Team 10          73.1%    ±4.3%                 50
Team 5           72.1%    ±4.5%                 45
Team 8           69.8%    ±4.7%                 52
Team 4           68.7%    ±4.2%                 77


In our ‘expertise of the financial advisor’ example, the formula produces an answer of +/- 0.166, which we can round to 0.17. The confidence interval is therefore:

8.26 +/- 0.17 = 8.09 to 8.43.

This means that 95% of the time, customer satisfaction with expertise of the financial advisor across the entire customer base would not be lower than 8.09 or higher than 8.43. However, there is a 5% (1 in 20) risk that the score for a census of customers could be outside that range.

At the 95% confidence level, the same calculation could be made for a customer satisfaction index of 77.43% with a standard deviation of 12, giving a confidence interval of +/- 1.66. The range for the index would therefore be 75.77% to 79.09%.
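Outside Excel, the same margin of error can be reproduced from the normal distribution. The sketch below matches Excel’s CONFIDENCE function for the two examples above; note that the sample size behind the second example is not stated in the text, so 200 is assumed here.

    from math import sqrt
    from statistics import NormalDist

    def margin_of_error(standard_deviation, sample_size, confidence_level=0.95):
        """Half-width of the confidence interval (Excel: CONFIDENCE(1 - level, sd, n))."""
        z = NormalDist().inv_cdf(1 - (1 - confidence_level) / 2)   # 1.96 at the 95% level
        return z * standard_deviation / sqrt(sample_size)

    print(round(margin_of_error(1.2, 200), 2))   # 0.17 for the 'expertise of the financial advisor' example
    print(round(margin_of_error(12, 200), 2))    # about 1.66 for the index example (sample size assumed)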

11.6.7 Typical confidence intervals for CSM

Based on a vast amount of CSM data, The Leadership Factor has calculated that the average standard deviation for a customer satisfaction index is 11. Using the formula explained above produces the range of confidence intervals shown in Figure 11.7 for various sample sizes.

11.7 Constructing a loyalty index

In Chapter 9 we examined the kind of loyalty questions that can be asked in a customer survey and we said that companies typically use the three or four that are most relevant to them and combine the data into a loyalty index. As we explained in Chapter 1, some organisations have recommendation as the only loyalty question and use the information to produce a ‘net promoter score’²¹. Of course, the reasons for monitoring a loyalty index rather than a single loyalty question are the same as those explained for having a satisfaction index.

The main difference between the satisfaction index and the loyalty index is that the latter will usually not be weighted.

FIGURE 11.7 Confidence intervals for CSM data

Sample size   Precision guide
100           +/- 2.16%
200           +/- 1.52%
500           +/- 0.92%
1000          +/- 0.68%
5000          +/- 0.30%


organisation's performance). Loyalty is different. First of all, it is a behaviour rather than an attitude. The loyalty questions are 'lens of the organisation' questions designed to reflect as closely as possible the kind of loyalty behaviours that the organisation would like to see. A loyalty index would therefore normally be calculated as the simple mean score of the loyalty questions.

If a loyalty index is weighted, judgemental weighting factors would typically be used. Customer-generated weighting factors are not relevant since one cannot say that recommendation is more important to customers than related sales or commitment. Some aspects of loyalty may be more important than others to the organisation, in which case judgemental weighting factors could be considered. For example, an insurance company may take the view that retention (as measured by intention to renew) is the most important aspect of loyalty, followed by value for money, related sales and recommendation. If so, it might weight the four aspects of loyalty 40%, 30%, 20% and 10% respectively. Or perhaps it might decide on 50%, 25%, 15% and 10%. Ideally there would be some facts, such as a detailed customer lifetime value calculation, to provide an empirically justifiable basis for the weighting factors; otherwise, management consensus will have to be used.

Moving on a further step in terms of technical difficulty, statistically derived weighting factors could be used if appropriate data exists. As well as truly accurate customer lifetime value data, the company would have to be able to relate the loyalty behaviours to a specific business outcome such as sales or profit. With adequate data (which rarely exists), advanced statistical techniques such as partial least squares or structural equation modelling could be used to calculate the relationship between the financial outcome and the various components of loyalty. If available, they would provide the best basis for weighting a loyalty index. Once the weighting factors are adopted, the calculation of the weighted loyalty index would proceed according to the method outlined in Figure 11.3.
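To make the mechanics concrete, the fragment below sketches how a judgementally weighted loyalty index might be computed: each respondent's loyalty scores are weighted, converted to a percentage of the maximum score and the respondent-level indices are then averaged. The weights are the illustrative 40/30/20/10 split from the insurance example; the respondent scores and the percentage conversion are assumptions for illustration rather than a reproduction of Figure 11.3.

    # Judgemental weights for the four loyalty questions (illustrative insurance example)
    weights = {"retention": 0.40, "value_for_money": 0.30, "related_sales": 0.20, "recommendation": 0.10}

    # Hypothetical respondent scores out of 10 for each loyalty question
    respondents = [
        {"retention": 9, "value_for_money": 7, "related_sales": 8, "recommendation": 6},
        {"retention": 6, "value_for_money": 8, "related_sales": 5, "recommendation": 7},
    ]

    def loyalty_index(scores, weights):
        # Weighted mean of the loyalty scores, expressed as a percentage of the maximum (10)
        weighted_mean = sum(scores[question] * weight for question, weight in weights.items())
        return weighted_mean / 10 * 100

    # The organisation's loyalty index is the average of the respondent-level indices
    indices = [loyalty_index(r, weights) for r in respondents]
    print(round(sum(indices) / len(indices), 1))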

11.8 Monitoring loyalty behaviour
There is a very strong case for using the customer satisfaction index as the main attitudinal measure and lead indicator of the organisation's performance in delivering results to customers, with real customer behaviours used to provide the loyalty measures – albeit lagging ones. Of the loyalty questions detailed in Chapter 9, the most useful areas to monitor are Harvard's 3Rs of loyalty – retention, related sales and referrals.

11.8.1 Retention
The best measure of retention for most organisations is the percentage of customers that the company had one year ago who remain live customers today. The converse of that percentage, the defection or decay rate, could equally be used. In markets where multiple sourcing is common, such as food retailing or many B2B markets for raw materials, this is a weak measure of loyalty. Customers could have used the


company within the last year whilst buying far more from competitors. The measure is also unsuited to markets where the product or service is typically bought less than once a year. These problems can be alleviated by reducing the time period in promiscuous markets and extending it in those with a long purchasing cycle. Rather like satisfaction, retention is an essential pre-requisite of loyalty rather than an end in itself, but it is a crucial measure because it is the first step in the process. If retention rates are too low, companies never achieve the real financial benefits of customer loyalty.

11.8.2 Related sales
One of loyalty's main financial benefits is that loyal customers generate more revenue due to their greater usage of the company's products and/or services. A very simple measure of this aspect of loyalty is the number of a company's products or services bought by the customer. A bank, for example, may offer its customers a range of insurance products, loans, mortgages, life assurance and credit cards as well as the core banking product. A car dealership could monitor its customers' purchase of additional vehicles, their use of servicing, their purchase of accessories and their adoption of other services such as insurance or 'experiences' (e.g. track driving or off-road driving days). In both cases, monitoring the behaviour of the family unit will be a better indicator of loyalty than that of the individual customer – an obvious measure that is incredibly under-utilised by many organisations. For companies with a huge range of products, like supermarkets, monitoring category usage would be more appropriate, whilst for single product organisations the amount of usage is the only feasible measure, like the per-customer spend figure reported for Orange in Chapter 1.

11.8.3 Referrals
Referrals are extremely valuable because, as well as reducing customer acquisition costs, new customers acquired through recommendation are more profitable than those that come through advertising or other marketing programmes22. Accurate monitoring of recommendation is rarely achieved by organisations because it takes real effort. Every new customer must be thoroughly interrogated about how and why they became a customer and, if through referral, which current customer had recommended them. Both the recommender and the referred customer must be flagged on the database. Suitable measures of recommendation are the percentage of new customers acquired through referrals each year and the percentage of existing customers that recommended a new customer. It is even better if the number of times a customer has recommended is also recorded. Clearly, this information is a much more accurate measure of customer loyalty than a 'net promoter' score21 generated by a 'likelihood to recommend' question.
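As a simple sketch of how the 3Rs reduce to a handful of percentages once the underlying behaviour is recorded, the fragment below calculates retention, related sales and referral measures from hypothetical customer records; the data structures and figures are invented for illustration.

    # Retention: percentage of last year's customers who remain live today
    customers_last_year = {"c1", "c2", "c3", "c4", "c5"}
    customers_still_live = {"c1", "c2", "c4"}
    retention_rate = len(customers_last_year & customers_still_live) / len(customers_last_year) * 100

    # Related sales: average number of the company's products held per customer (or family unit)
    products_held = {"c1": 3, "c2": 1, "c4": 2}
    average_products = sum(products_held.values()) / len(products_held)

    # Referrals: share of new customers acquired through recommendation, and share of
    # existing customers flagged on the database as having recommended someone
    new_customers = [{"id": "c6", "via_referral": True}, {"id": "c7", "via_referral": False}]
    referral_share = sum(c["via_referral"] for c in new_customers) / len(new_customers) * 100
    recommenders = {"c1"}
    recommender_share = len(recommenders) / len(customers_still_live) * 100

    print(retention_rate, average_products, referral_share, recommender_share)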

11.8.4 Customer lifetime value
By far the best behavioural measure of loyalty is customer lifetime value, particularly in view of Harvard's assertion that the most loyal customers are up to 138 times more profitable than the least loyal22. Whilst this is a complex subject, there are some


fundamental principles that underpin an accurate calculation of customer lifetime value. First, customers must be divided into cohorts, usually based on the year that they first became a customer. Behavioural data is then monitored and compared across customer cohorts, typically demonstrating that a Year 5 or 6 customer, for example, is far more valuable than a first or second year customer. A crude, but nevertheless useful, measure of customer lifetime value would simply be average per-customer spend in each cohort, although this would seriously under-estimate the true value of customer loyalty. Adding a recommendation value would be a significant step in addressing this deficiency. A simple way of valuing referrals is to base it on the average cost of acquiring a new customer through sales and marketing activities. This does need to be the full cost, including the salaries and overheads of all sales and marketing departments as well as spend on sales and marketing activities. This total cost is then simply divided by the number of new customers acquired in the financial year, excluding referrals. Although accurate monitoring of referrals is highly desirable, where data is incomplete, it is still worth including a recommendation value by allocating customers of unknown origin in the correct proportions to referral and marketing channels. There are many ways of improving the sophistication of a customer lifetime value measure, such as basing the figures on profit rather than sales, including a 'cost of servicing' figure (typically higher for new customers), or adding to the referral value an amount that reflects the known future premium of referred customers compared to customers won through sales and marketing. An accurate measure of customer lifetime value will correlate far more strongly with the company's financial performance than any other measure of loyalty.
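The fragment below sketches the crude cohort calculation just described: average spend per customer in each cohort, plus a referral value based on the full sales and marketing cost divided by the number of new customers acquired without a referral. All of the figures and field names are hypothetical.

    # Hypothetical cohort data, grouped by the year customers were first acquired
    cohorts = {
        2005: {"customers": 400, "total_spend": 120_000, "referrals_made": 60},
        2006: {"customers": 500, "total_spend": 110_000, "referrals_made": 45},
    }

    # Value of a referral: full sales and marketing cost (salaries, overheads and spend)
    # divided by the number of new customers acquired in the year, excluding referrals
    sales_and_marketing_cost = 250_000
    new_customers_excluding_referrals = 900
    referral_value = sales_and_marketing_cost / new_customers_excluding_referrals

    for year, cohort in cohorts.items():
        average_spend = cohort["total_spend"] / cohort["customers"]
        referral_value_per_customer = cohort["referrals_made"] * referral_value / cohort["customers"]
        # Crude customer lifetime value for the cohort: spend plus recommendation value
        print(year, round(average_spend + referral_value_per_customer, 2))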

Where good customer lifetime value measures exist, companies can derive added benefit from their CSM processes by linking satisfaction with customer lifetime value. They can understand, for example, what's most important to the most valuable customers, or what causes customers to defect in the cohort with the lowest retention rate.

Conclusions
1. The least useful headline measure of customer satisfaction is provided by a single overall satisfaction question, especially if it uses a 5-point verbal scale, since it will be far too insensitive to detect the relatively small movements in customer satisfaction that typically occur.

2. The major benefit of a composite index over a single question is its much greater reliability and stability23, since the random measurement error is largely cancelled out across its component questions, resulting in a much more accurate measure.

3. If the index is to reflect as closely as possible the customer's satisfaction judgement, it should be weighted according to the importance of its component requirements.


4. Weighting factors should be empirically justifiable rather than judgemental. Of the empirical options, stated importance is better than statistically derived impact measures because it more closely reflects the relative importance of the requirements to customers.

5. For the greatest accuracy a weighted index should be calculated for each individual respondent. All the individual indices are then averaged to produce a customer satisfaction index for the organisation.

6. Provided the survey is based on the 'lens of the customer', the customer satisfaction index is comparable over time even if the questions change (as customers' priorities evolve) and is comparable across organisations since it is a measure of the extent to which an organisation is meeting the requirements of its customers.

7. The reliability of an index is a combination of its precision or accuracy (the confidence interval) and its repeatability (the confidence level). 95% is the normal confidence level.

8. The confidence interval, or margin of error, will be affected by the standard deviation but will be determined mainly by the sample size.

9. For customer satisfaction surveys a sample of 200 will typically have a confidence interval of around +/- 1.5%. A sample of 500 is necessary to be reasonably certain of a confidence interval below +/- 1%.

10. A lower level of precision is usually acceptable for segment results. A sub-group sample of 50 will typically have a confidence interval for CSM of +/- 4% to 5%, with samples of 100 achieving confidence intervals of around +/- 2% to 2.5%.

11. A headline measure of loyalty can be produced from survey questions and should also be an index rather than a single question, but is not usually weighted.

12. The best measures of loyalty are based on real customer behaviour, with customer lifetime value being the most useful, but few organisations have the data capability to produce a worthwhile measure of customer lifetime value.

References
1. Galilei, Galileo (1633) "Dialogue concerning the two chief world systems – Ptolemaic and Copernican", trans Drake, Stillman (1953), Third day discussion, University of California Press, Berkeley
2. Pearson and Kendall (1970) "Studies in the history of statistics and probability", Charles Griffin and Co, London
3. Oppenheim, A N (1992) "Questionnaire Design, Interviewing and Attitude Measurement", Pinter Publishers, London
4. Myers, James H (1999) "Measuring Customer Satisfaction: Hot buttons and other measurement issues", American Marketing Association, Chicago, Illinois
5. Helsdingen and de Vries (1999) "Services marketing and management: An international perspective", John Wiley and Sons, Chichester, New Jersey
6. Oliver, Richard L (1997) "Satisfaction: A behavioural perspective on the consumer", McGraw-Hill, New York
7. Teas, R K (1993) "Expectations, performance evaluation and consumers' perceptions of quality", Journal of Marketing 57
8. White and Schneider (2000) "Climbing the Commitment Ladder: The role of expectations disconfirmation on customers' behavioral intentions", Journal of Service Research 2(3)
9. Parasuraman, Berry and Zeithaml (1985) "A conceptual model of service quality and its implications for future research", Journal of Marketing 49(4)
10. Parasuraman, Berry and Zeithaml (1988) "SERVQUAL: a multiple-item scale for measuring perceptions of service quality", Journal of Retailing 64(1)
11. Gummesson, E (1992) "Quality dimensions: What to measure in service organizations", in Swartz, Bowen and Brown (Eds) "Advances in services marketing and management", JAI Press, Greenwich CT
12. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
13. Zeithaml, Berry and Parasuraman (1990) "Delivering Quality Service", Free Press, New York
14. Cronin and Taylor (1992) "Measuring service quality: An examination and extension", Journal of Marketing 56
15. Parasuraman, Berry and Zeithaml (1991) "Refinement and reassessment of the SERVQUAL scale", Journal of Retailing 79
16. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
17. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
18. Allen and Rao (2000) "Analysis of Customer Satisfaction Data", ASQ Quality Press, Milwaukee
19. Cronin and Taylor (1994) "SERVPERF versus SERVQUAL: Reconciling performance-based and performance-minus-expectations measurement of service quality", Journal of Marketing 58
20. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
21. Reichheld, Frederick (2003) "The One Number you Need to Grow", Harvard Business Review 81 (December)
22. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York


CHAPTER TWELVE

Actionable outcomes

Since the purpose of measuring customer satisfaction is to improve it, the first priority of the CSM process is to produce actionable outcomes that will drive improvement in customer satisfaction. This may sound obvious, but it is easy to become obsessed with the survey process because it is vital that a sound methodology is followed to ensure accurate results. For that reason, the validity and credibility of the methodology has been the overwhelming preoccupation of this book. So far. But now it is time to redress the balance, because although a reliable survey is essential, it is not an end in itself but the means to the much more important objective of improving customer satisfaction. This chapter will therefore begin to focus on techniques and advice that will maximise the likelihood of a CSM survey leading to an increase in customer satisfaction and loyalty.

At a glance
This chapter will:

a) Review the conclusions of the academic literature for turning CSM survey data into outcomes.

b) Explain gap theory - the traditional way of setting PFIs (priorities for improvement).

c) Detail other factors that can be used to determine PFIs.

d) Consider ways of benchmarking CSM data.

e) Outline techniques to maximise clarity of reporting.

12.1 Using CSM data to make decisions
As early as the 1980s both academics and experienced practitioners abandoned the idea that outcomes would be based simply on the lowest satisfaction or performance scores1. This was based on the realisation that customers' satisfaction judgements were based not on an objective evaluation of the organisation's performance but on a subjective opinion of the extent to which it had met their requirements2. This resulted in 'gap theory', which quite simply based satisfaction improvement outcomes on the size of the gap between importance and satisfaction scores, a large gap


indicating that the organisation had fallen well short of meeting customers' 'requirements'1,3,4,5,6,7. There has been much debate and confusion over whether customers' requirements refer to expectations or needs. Parasuraman et al considerably added to the confusion when asserting that the meaning of 'expectations' was different when applied to customer satisfaction (predictions by the customer of what is likely to happen during a service encounter) and service quality (the desires or wants of the customer, or what the supplier should deliver). In practice, most academics and practitioners would see the former as a definition of expectations and the latter as a definition of requirements or needs. Most also conclude that expectations are difficult if not impossible to measure8,9. In particular, measuring customers' expectations after the service encounter (which surveys inevitably do) is flawed because the expectations are usually modified by the experience8,10.

KEY POINT
It is the relative importance of customers' requirements that should be measured for CSM. Customer expectations do not provide a suitable basis for measurement.

Consequently, requirements are considered more suitable than expectations as measurable antecedents of the customer experience, and it is the relative importance of the requirements that is of interest. A CSM survey therefore remains faithful to our original definition of customer satisfaction by measuring the extent to which the supplier meets its customers' requirements11. This accords with the intuitively sound notion that there is little commercial value in being good at something that doesn't matter to customers. On the contrary, customer satisfaction is best achieved by 'doing best what matters most to customers' and failure to do this will be reflected in satisfaction gaps12. Whether presented as gaps between importance and satisfaction scores (Figures 12.1 and 12.2) or in the form of a two by two importance-satisfaction matrix13, as shown in Figure 12.3, the principle is the same – the supplier's priorities for improvement will be the factors where it is least meeting its customers' requirements12,14.

KEY POINT
To improve customer satisfaction, organisations should focus resources on areas where they are least meeting customers' requirements.

However, in more recent years it has been increasingly recognised that there are more powerful analytical models based on inter-dependence techniques that offer a more sophisticated approach to the analysis of customer satisfaction data15,16,17,18. In order to improve further, companies already achieving good levels of customer satisfaction would be well advised to utilise such approaches to maximise their understanding of the drivers of customer satisfaction, since it becomes more difficult to improve customer satisfaction as levels increase. We will examine more sophisticated


approaches later in this book, but since many organisations have poor levels of customer satisfaction because they do not meet customers' requirements, the basic 'satisfaction gaps' approach remains perfectly adequate for identifying appropriate areas for improvement. This is therefore where we will start our examination of producing actionable outcomes.

12.2 Satisfaction gaps
If we return to our retailer's data, we can see an illustration of gap analysis in Figure 12.1. Where the satisfaction score for a requirement is lower than the importance score there is a satisfaction gap, indicating that the organisation is not meeting customers' requirements. Gap analysis is not rocket science: it simply indicates that if the satisfaction bar is shorter than the importance one, the company may have a problem! But that is the main strength of the chart. It is clear, simple and obvious. Anybody in the organisation can look at it, understand it and draw the right conclusions.

There are some areas, such as 'choice of products' and 'price level', where the company concerned is more or less meeting customers' requirements. There are some, such as 'staff appearance' and 'store layout', where customers' requirements are being exceeded. Most importantly, there are some attributes where the company is falling short, and these are the ones it needs to focus on if it wants to improve customer satisfaction. These are the PFIs, the priorities for improvement.

FIGURE 12.1 Meeting customers' requirements
[Bar chart comparing importance and satisfaction scores (on a scale of 6.5 to 10) for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.]


Figure 12.2 simply shows the size of the satisfaction gap, calculated by subtracting the satisfaction score from the importance score. Where customers' requirements are being exceeded, as on 'staff appearance', the gaps chart shows a negative satisfaction gap. The bigger the gap, the bigger the problem, and you can see from Figures 12.1 and 12.2 that the biggest PFI, the area with the greatest potential for improving customer satisfaction, is not the attribute with the lowest satisfaction score ('speed of service') but the one with the largest gap – 'expertise of staff'.
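The gap calculation itself is trivial to automate. The sketch below uses illustrative scores (not the exact figures behind Figures 12.1 and 12.2) to show how the gaps are computed and ranked so that the largest gaps surface as candidate PFIs.

    # Illustrative importance and satisfaction scores for four of the retailer's requirements
    scores = {
        "Expertise of staff":  {"importance": 9.4, "satisfaction": 8.1},
        "Speed of service":    {"importance": 9.0, "satisfaction": 7.9},
        "Quality of products": {"importance": 9.2, "satisfaction": 8.7},
        "Staff appearance":    {"importance": 7.2, "satisfaction": 8.3},
    }

    # Satisfaction gap = importance minus satisfaction; a negative gap means requirements are exceeded
    gaps = {requirement: s["importance"] - s["satisfaction"] for requirement, s in scores.items()}

    # Rank by gap size: the requirements with the largest gaps are the candidate PFIs
    for requirement, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
        print(f"{requirement}: {gap:+.1f}")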

KEY POINT
The Satisfaction Gaps indicate the extent to which the organisation is meeting or failing to meet its customers' requirements.

Early forms of importance-performance analysis were typically in the form of a 2 x 2 importance-satisfaction matrix19, shown in Figure 12.3. Requirements with high importance and low satisfaction will be found towards the top left-hand corner of the matrix, so the PFIs are found in the top left-hand cell.

There are three reasons why the gaps approach illustrated in Figures 12.1 and 12.2 is preferable to the matrix.

1. The charts in Figures 12.1 and 12.2 are much clearer and simpler and will consequently be much better for communicating the results across the organisation. Figure 12.1 is a very effective way of encouraging staff to think about where the organisation is meeting, exceeding or failing to meet

FIGURE 12.2 Satisfaction gaps
[Bar chart of satisfaction gaps (from -1.5 to +1.5) for staff appearance, layout of store, staff helpfulness, choice of products, price level, quality of products, speed of service and expertise of staff.]


customers' requirements. It also helps them to understand that the areas in most need of attention are those where the organisation is least meeting its customers' requirements.

2. If satisfaction and importance have been scored for 20 customer requirements, the PFI cell could be quite crowded, reducing the actionability of the results. Although the theory clearly dictates focus on the requirements closest to the top left-hand corner of the matrix, there would be temptation in many organisations to have initiatives and action plans for all the requirements in the PFI cell. If this resulted in the company trying to address too many PFIs, the effectiveness of customer satisfaction improvement strategies would be severely weakened. By contrast, Figure 12.2 provides a totally unambiguous picture of the gap sizes in order of magnitude, helping to reduce unnecessary debate about what the PFIs should be.

3. A 2 x 2 matrix is a very useful vehicle for displaying information that is difficult to compare, such as the relationship between people's height and weight or years of full time education and subsequent salary. The scales on the x and y axes of the matrix can be based on completely different measures of varying magnitude. That's why the 2 x 2 matrix is so useful for comparing importance and impact, as shown in Figures 5.2 and 10.3. Since the measures for importance and satisfaction are directly comparable, on the same scale, there is no need for the greater complexity of the 2 x 2 matrix.

FIGURE 12.3 Importance - satisfaction matrix
[2 x 2 matrix plotting importance against satisfaction (both on scales of 7 to 9.5) for the eight customer requirements, with the quadrants labelled IMPROVE PERFORMANCE, SOME IMPROVEMENT, MAINTAIN PERFORMANCE and OVER PERFORMANCE.]


12.3 Determining the PFIs
The most effective way to improve customer satisfaction is to focus on one or a very small number of PFIs20. Making small improvements across many of the customer requirements achieves little gain as they often go unnoticed by customers, and even if they are noticed, it usually takes a lot of evidence to shift customers' attitudes. To improve customer satisfaction therefore, big, noticeable improvements are necessary, and since most organisations have limited resources, this is feasible only if efforts are focused on just one, or a very small number of PFIs. To focus the organisation's resources to this extent, a clearly understood and widely accepted method of determining the PFIs is essential. For organisations near the beginning of their CSM process it will often be sufficient to focus solely on the satisfaction gaps, nominating the two or three requirements with the biggest gaps as the PFIs. This has the advantage of being clearly understood and intuitively sound – the organisation is focusing on the areas where it is least meeting its customers' requirements. When customer satisfaction is first measured, most organisations will find that they have some quite large satisfaction gaps, and these should be addressed first. Focusing solely on the satisfaction gaps also has the great merit of minimising unproductive debate about what to address and maximising time and effort devoted to improving customer satisfaction.

KEY POINT
For organisations with poor levels of customer satisfaction or at the beginning of their CSM journey, Satisfaction Gaps usually provide a perfectly adequate basis for selecting PFIs.

As the organisation closes its most obvious satisfaction gaps, it will need to take more factors into consideration when determining its PFIs. Our hypothetical retailer, for example, could base its PFIs on a combination of the following five factors, most of which we outlined in the chapter on basic analysis:

1. Satisfaction gap
The most important factor will remain the size of the gap. Normally a greater gain in customer satisfaction will be achieved by closing a large gap rather than a small gap. On a 10-point scale any satisfaction gap above 1 point is a concern and gaps in excess of 2 are serious. The gap sizes above 1 shown in Figure 12.2 suggest that 'expertise of staff' and 'speed of service' should be PFIs.

2. Satisfaction drivers
Taking impact as well as importance into account, the satisfaction drivers were shown in Figure 10.3. They play a prominent role in customers' judgements of the company and they also point to 'expertise of staff' and 'speed of service' as the PFIs.

3. Dissatisfaction drivers
These are the factors that most irritate customers. Regardless of the average


satisfaction scores achieved, these are the areas where the most customers are giving very low scores and, in the case of our retailer, they are 'expertise of staff', 'speed of service' and 'helpfulness of staff' (see Figure 10.9).

4. Loyalty differentiators
If a company can improve its performance on the loyalty differentiators, it will strengthen the loyalty of its most loyal customers and reduce dissatisfaction and defection amongst its least loyal (see Figure 10.11).

5. Business impact
Some PFIs will be more difficult, more time-consuming and more costly to address than others. Therefore, the decision to invest in customer satisfaction improvement will often be a trade-off between the cost of making the improvements and the potential gain from doing so. To clarify this business impact decision it is helpful to plot the potential satisfaction gain (shown on the x axis and based on the size of the satisfaction gap) against the cost and difficulty of making the necessary improvements. A business impact matrix for the retailer is shown in Figure 12.4. Based on categorising customer requirements into three broad bands according to the cost and difficulty of making improvements, the Business Impact Matrix illustrates where the most cost-effective gains can be made. As shown in the chart, some requirements, particularly those in the cells in the bottom right-hand corner, such as 'speed of service' and 'staff helpfulness', should bring high returns due to their large satisfaction gaps and low cost. However, requirements in the top left-hand corner, such as 'layout of the store', would bring little benefit, due to low or non-existent satisfaction gaps and high relative cost. Whilst we are not advocating avoidance of the difficult issues, it is highly beneficial if there are one or more 'quick wins' that can be addressed relatively easily

FIGURE 12.4 Business impact matrix
[Matrix plotting the benefit of improving each of the eight customer requirements (low to high, based on the size of the satisfaction gap) against the cost and difficulty of making the improvement (low, medium or high).]


since it is very helpful if both customers and employees can see prompt action being taken as a direct result of the survey.

KEY POINT
As organisations' CSM processes mature, more factors will be used to determine PFIs. An outcomes table is very useful for summarising the PFI selection process.

If the PFIs are derived from several sources of information, it is very helpful to summarise everything in one easy-to-assimilate visual format such as the outcomes table shown in Figure 12.5. This enables everyone in the organisation to quickly understand the reasons behind the selection of the PFIs, which minimises unproductive debate and moves the company as swiftly as possible into the implementation phase.

12.4 Benchmarking satisfaction
Organisations are increasingly interested in benchmarking their performance across all aspects of business management, hence the growing popularity of balanced scorecard21,22 approaches to management information and the desire of many organisations to have their balanced scorecard measures externally audited by bodies such as EFQM23 and Malcolm Baldrige24. Some areas of business performance lend themselves much more readily than others to comparison against other organisations. Whilst many tangible metrics such as sales per employee, debtor days,

FIGURE 12.5 Outcomes table
[Table listing the eight customer requirements against the columns satisfaction gap, satisfaction drivers, dissatisfaction drivers, loyalty differentiators, business impact and total.]


and staff turnover can be easily benchmarked across companies and sectors, customer satisfaction measures can be much harder to compare. The main difficulties arise from use of different methodologies and from asking different questions.

12.4.1 Different methodologies
If different methodologies are used, benchmarking is impossible. There is no way of comparing a measure of customer satisfaction generated by one company using a 10-point numerical scale with one produced by another organisation using a 5-point verbal scale. Anyone wishing to change from one scale to the other whilst maintaining some tracking comparability can do so only by duplicating several questions on the same questionnaire with the same sample, comparing the outcomes as a percentage of maximum and calculating a conversion factor accordingly.
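The conversion-factor idea can be sketched as follows, assuming the simplest possible scoring convention (each mean expressed as a fraction of its scale maximum); the duplicated-question means are invented and the approach is only one plausible reading of the method described above.

    # Mean scores for the same questions asked of the same sample on both scales (hypothetical data)
    duplicated_questions = [
        {"ten_point_mean": 8.2, "five_point_mean": 4.0},
        {"ten_point_mean": 7.6, "five_point_mean": 3.8},
        {"ten_point_mean": 8.9, "five_point_mean": 4.3},
    ]

    # Express each mean as a percentage of its scale maximum and average the ratios
    ratios = [
        (q["ten_point_mean"] / 10) / (q["five_point_mean"] / 5)
        for q in duplicated_questions
    ]
    conversion_factor = sum(ratios) / len(ratios)

    # Translate a result from the 5-point scale onto the 10-point tracking series
    five_point_result = 4.1
    print(round(conversion_factor * (five_point_result / 5) * 100, 1))   # percentage of maximum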

KEY POINT
Only customer satisfaction measures based on compatible methodologies can be benchmarked.

12.4.2 Different questions
If one organisation has asked exactly the same questions as another using the same methodology, the two can obviously compare the answers to the questions. However, unless all aspects of the first company's operations are identical to those of the second company, this approach is very unlikely to compare accurate measures of satisfaction, since we know that asking the right questions, based on the lens of the customer, is a fundamental element of a measure that truly reflects how satisfied or dissatisfied customers feel. Put simply, unless you use the same criteria that the customers use to judge the organisation, a survey will never arrive at the same satisfaction judgement as the customers. Consequently, it is almost inevitable that different organisations, even in the same sector, will be asking at least some different questions.

12.4.3 How to compare
Quite simply, and logically, organisations should make comparisons in the same way that customers do. At the overall level customers make judgements based on the extent to which suppliers have met their requirements – whatever those requirements are. As we know from Chapter 11, the most reliable measure of overall customer satisfaction is a composite index with the individual components weighted according to their importance to customers. Organisations can therefore compare this measure of their success in meeting customers' requirements with the customer satisfaction index of any other organisation across all sectors.

12.4.4 Comparisons across sectors
In fact, it is essential to compare across sectors since this is precisely what customers do. Customers typically base their expectations on the best service they have encountered


across the wide range of different suppliers they use from all sectors. Moreover, many successful organisations pursue best practice benchmarking outside their own sector as they see this as a much better way of making a paradigm shift than if they look solely at their own industry, where companies are broadly doing the same things. For example, Southwest Airlines achieved the fastest airport turnaround time in its industry by benchmarking itself against Formula 1 teams in the pits. Sewell Cadillac benchmarks its cleanliness against hospitals and has adapted several medical technologies to help its mechanics achieve better results when diagnosing and fixing car faults.

KEY POINT
Customer satisfaction should be benchmarked across sectors. A weighted customer satisfaction index from a survey based on the lens of the customer provides a perfect basis for cross-industry benchmarking since it is a measure of the extent to which the organisation has met its customers' requirements.

12.4.5 Benchmarking databases
The American Customer Satisfaction Index25 is by far the biggest customer satisfaction benchmarking database since it claims to cover 60% of the US economy. At the time of writing, there is no UK equivalent. Closest is The Leadership Factor's Satisfaction Benchmark database26, compiled using data from around 500 customer satisfaction surveys per annum across all sectors. However, the Institute of Customer Service launched a UK Customer Satisfaction Index27 in 2007 which, over time, should offer a benchmarking resource similar to that provided by the American Customer Satisfaction Index.

12.4.6 Incorporating benchmarking into survey outcomes
It is always very useful to incorporate benchmarking into customer satisfaction survey outcomes at two levels. First, it is very helpful to know how good the organisation's customer satisfaction index is and this can be achieved only by seeing it from the customers' perspective – how the company's service compares with other organisations generally. Going back to our retailer, their customer satisfaction index, based on the eight questions we have importance and satisfaction scores for, would be 82.2%. Figure 12.6, based on The Leadership Factor's Satisfaction Benchmark database26, shows that the retailer is delivering a good level of customer satisfaction, but, compared with other organisations, not a very good one. It demonstrates to the retailer and its employees that there is plenty of opportunity to improve.

As well as the overall index, it can be even more useful to benchmark the organisation's performance on the individual requirements measured. Figure 12.7 shows that the retailer is considerably worse than other companies at satisfying its customers on 'quality of products' and that its relative performance is poor on 'staff helpfulness'.


FIGURE 12.7 Benchmarking the requirements
[Bar chart showing how much better or worse (from -2 to +2) the retailer scores than the benchmark average on quality of products, staff helpfulness, expertise of staff, speed of service, staff appearance, layout of store, choice of products and price.]

FIGURE 12.6 Benchmarking the satisfaction index
[Chart positioning ABC Ltd's customer satisfaction index of 82.2% between the bottom quartile and top quartile of the benchmark database.]


By contrast, although 'price' wasn't the retailer's highest satisfaction score, it is very high compared with customer satisfaction on price generally. Price is a good example of an individual requirement that benefits tremendously from benchmarking. Since customers will always be reluctant to express delight with prices, it is very common for companies to record low satisfaction scores for price. Based solely on the survey data, price will often have a large satisfaction gap and will appear to be a PFI. However, when benchmarked, an apparently low satisfaction score for price is often shown to be close to the average achieved by other companies, and therefore not a cause for concern. Looking at the benchmarking data in Figure 12.7, our retailer should spot an opportunity to increase its prices to fund improvements on 'speed of service' and 'expertise of staff'.

The value of benchmarking can be seen if a column is added to our retailer's outcomes table to incorporate the new information it has provided. Shown in Figure 12.8, the revised outcomes table demonstrates that 'staff helpfulness' is an equal concern with 'speed of service' and 'expertise of staff' and that 'quality of products' is a bigger problem than indicated by the earlier data.

12.5 Clarity of reporting
The outcomes table is a good example of clarity of reporting. Most people in an organisation do not have the time or the inclination to wade through large volumes of survey output. They need very concise information, preferably in a visual form that is

FIGURE 12.8 Revised outcomes table
[The outcomes table from Figure 12.5 with an additional benchmarking column alongside satisfaction gap, satisfaction drivers, dissatisfaction drivers, loyalty differentiators, business impact and total.]


easy to understand and leads to authoritative recommendations. Too much information or a lack of definite conclusions leads to unproductive debate and delays the development of satisfaction improvement action plans. Although someone in the organisation needs sufficient understanding of the research to verify its accuracy, it is not productive to cascade details such as segment splits, standard deviations, confidence intervals etc. It is much more useful to put effort into developing reporting media that enable relevant employees to receive just as much information as they need to motivate and help them to improve customer satisfaction. Our retailer, for example, might produce an action map like the one shown in Figure 12.9. It is based on the fact that whilst the survey data will highlight company-wide PFIs, it will not be possible for all staff across the organisation to contribute equally to addressing them. The action map therefore looks at the extent to which different parts of the organisation can make a difference to the PFIs and allocates PFI responsibilities by department, team, region and store as appropriate. It also provides a clear visual guide for management summarising who is responsible for what, thus helping them to monitor the implementation of satisfaction improvement action plans.

KEY POINT
To facilitate action to improve customer satisfaction, reporting of survey data should be as clear and simple as possible.

If the retailer is large, it could have several hundred stores. Action maps would therefore be cascaded from national to regional to area level. It would also be useful

FIGURE 12.9 Action map
[Grid mapping each customer requirement against central functions (customer service, personnel, operations, facilities, management, marketing) and stores (Leeds, Swindon, Leicester, Canterbury, Oxford), with each cell marked as a major PFI, a minor PFI or all clear.]


to consider alternative media such as the web for reporting the results. Interactive web reporting enables the results database to be stored on a secure website with authorised staff able to interrogate it according to virtually any criteria they wish. The store manager in Oxford, for example, may want to look up the satisfaction scores achieved by stores in locations with a similar demographic profile such as Cambridge or Bath. Thinking about the PFIs for his own store, he might want to interrogate the database to discover which stores achieved the highest satisfaction scores for 'expertise of staff' and 'speed of service' so he can learn from their success.

KEY POINT
Internal benchmarking is a very effective tool for improving customer satisfaction.

Any organisation that has multiple stores, branches, sites, business units etc. will find internal benchmarking extremely effective in driving customer satisfaction improvement strategies. For such companies, reporting should therefore be focused as much as possible on making comparisons across the different units. Benchmarking charts at overall and attribute level like the ones shown in Figures 12.6 and 12.7 should be adapted to make internal comparisons. This approach may not be universally popular, especially with managers of poorer performing units, but it provides a powerful incentive to improve since nobody wants to be bottom of the internal league table! This approach was used very successfully by Enterprise Rent-A-Car (see section 3.4.1) to improve customer satisfaction.

Conclusions
1. The most effective way to improve customer satisfaction is to focus on a very small number of PFIs (priorities for improvement) rather than diluting actions too thinly across too many customer requirements.

2. The starting point for prioritising improvements is gap analysis, which is based on the difference between the satisfaction and importance scores, a satisfaction score more than one point below its corresponding importance score demonstrating that the organisation is not meeting customers' requirements.

3. For clarity of reporting, use a straight comparison of importance and satisfaction scores illustrated in a simple bar chart rather than the less obvious satisfaction-importance matrix.

4. For organisations at the start of their CSM journey gap analysis will be sufficient for highlighting PFIs, but once the quick wins have been successfully addressed it will be helpful to use additional factors to determine PFIs, including satisfaction drivers, dissatisfaction drivers, loyalty differentiators, business impact and benchmarking.


5. Since a weighted customer satisfaction index is a measure of the extent to which an organisation is meeting its customers' requirements, it can be benchmarked against any organisation, whatever questions have been asked on its CSM survey – provided the questions are based on the lens of the customer.

6. Organisations should benchmark their index and the individual customer requirements against other organisations from outside as well as inside their own sector.

7. Companies with multiple business units should use internal benchmarking as a powerful driver of satisfaction improvement.

8. The outcomes of a CSM survey should be reported very widely around the organisation, but the information reported should be concise, clear and simple with authoritative conclusions.

References
1. Parasuraman, Berry and Zeithaml (1988) "SERVQUAL: a multiple-item scale for measuring perceptions of service quality", Journal of Retailing 64(1)
2. Peters and Austin (1986) "A Passion for Excellence", William Collins, Glasgow
3. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
4. Helsdingen and de Vries (1999) "Services marketing and management: An international perspective", John Wiley and Sons, Chichester, New Jersey
5. Parasuraman, Berry and Zeithaml (1994) "Reassessment of expectations as a comparison standard in measuring service quality: Implications for further research", Journal of Marketing 58
6. Oliver, Richard L (1997) "Satisfaction: A behavioural perspective on the consumer", McGraw-Hill, New York
7. Churchill and Surprenant (1982) "An investigation into the determinants of customer satisfaction", Journal of Marketing Research 19
8. Carman, J M (1990) "Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions", Journal of Retailing 66(1)
9. Teas, R K (1993) "Expectations, performance evaluation and consumers' perceptions of quality", Journal of Marketing 57
10. Clow and Vorhies (1993) "Building a competitive advantage for service firms", Journal of Services Marketing 7(1)
11. Ennew, Reed and Binks (1993) "Importance-performance analysis and the measurement of service quality", European Journal of Marketing 27(2)
12. Hill, Brierley and MacDougall (2003) "How to Measure Customer Satisfaction", Gower, Aldershot
13. Hemmasi, Strong and Taylor (1994) "Measuring service quality for planning and analysis in service firms", Journal of Applied Business Research 10(4)
14. Joseph M, McClure and Joseph B (1999) "Service quality in the banking sector: The impact of technology on service delivery", The International Journal of Bank Marketing 17(4)
15. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
16. Allen and Rao (2000) "Analysis of Customer Satisfaction Data", ASQ Quality Press, Milwaukee
17. Ryan, Buzas and Ramaswamy (1995) "Making Customer Satisfaction Measurement a Power Tool", Marketing Research 7, 11-16 (Summer)
18. Fornell, Claes (2001) "The Science of Satisfaction", Harvard Business Review 71 (March-April)
19. Martilla, J A and James, J C (1977) "Importance-Performance Analysis", Journal of Marketing 41 (January)
20. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
21. Kaplan and Norton (1996) "The Balanced Scorecard", Harvard Business School Press, Boston
22. The Balanced Scorecard Institute: www.balancedscorecard.org
23. www.efqm.org
24. www.baldrige.nist.gov
25. The American Customer Satisfaction Index: www.theacsi.org
26. The Leadership Factor's customer satisfaction benchmarking database: www.leadershipfactor.com/surveys/
27. See the Institute of Customer Service and UKCSI: www.instituteofcustomerservice.com and www.ukcsi.com


CHAPTER THIRTEEN

Comparisons with competitors

As we said in the last chapter, customers tend to benchmark organisations very widely, comparing them with their experiences across many different sectors. Often customers' recent experience is limited to one organisation per sector. They simply don't currently deal with more than one local council, one mortgage lender, one mobile phone provider or one doctor. At other times, however, customers are much more active in making comparisons: when they bought their house and needed a new mortgage, for example, or when their annual phone contract or insurance policy was due for renewal. In some markets customers may habitually use one supermarket, for example, or drive one make of car for three or four years before replacing it, but are nevertheless frequently making comparisons between competing suppliers even though they are not switching. In other markets customer promiscuity is much more prevalent. In most industrial supply markets, for example, dual or multiple sourcing is widespread. In the leisure sector customers often frequent more than one restaurant, tourist destination or theatre. It is therefore very useful for some companies to understand how customers make comparisons and choices between competing suppliers. This chapter explains how to do it.

At a glance
In this chapter we will:

a) Outline a very simple survey approach.

b) Consider the methodological implications of competitor surveys.

c) Explain how to conduct a market standing survey.

d) Consider the added dimension of relative perceived value.

e) Explore switching behaviour and its drivers.

13.1 Simple comparison
A very easy method of making a simple comparison with competitors can be utilised by any organisation that conducts a customer satisfaction survey. In its simplest form


it requires the addition of only one question to the survey, such as:
"Compared with other banks / supermarkets / office equipment suppliers / etc that you are familiar with, how would you rate XYZ?"

Customers can be given a simple range of options, including one for those with no experience of other suppliers, resulting in the type of output shown in Figure 13.1.

This question can also be utilised by organisations such as local councils, membership bodies, charities or housing associations whose customers don't deal with competitors but can often make comparisons against other organisations that they perceive to be broadly similar. In these circumstances, the question would typically be worded:
"In your opinion, how does XYZ compare with other similar organisations?"

Whether against direct competitors in a promiscuous market or against broadly similar organisations in a less competitive one, understanding of how customers make comparisons will be enhanced by adding a second question:
"When making that comparison, which other organisations did you compare XYZ against?"

If it is added to a customer satisfaction survey, the big disadvantage of this question in competitive markets is its biased sample. All the respondents have chosen to be customers of the company conducting the survey but they have not all chosen to deal with all its competitors. It is therefore a reasonable assumption that the sample will be more favourably disposed towards XYZ than a randomly selected sample of all buyers in the market.

FIGURE 13.1 Simple comparison

The best: 11%
Better than most: 52%
About the same as most: 24%
Worse than most: 4%
The worst: 1%
Not familiar with others: 8%


KEY POINT
A simple comparison question provides a good overview of how an organisation is seen by its customers relative to other similar organisations, but will be less useful in highly competitive markets.

13.2 Methodology implications of competitor surveys

13.2.1 Sampling
To overcome the limitations of the simple comparison question described above, the customers taking part in the survey must be a random and representative sample of all buyers in the market, not just the company's own customers. This adds a considerable layer of difficulty to the survey process since most companies do not possess a comprehensive database of all the buyers in the market. In addition to their own customers, most companies do have a database of potential customers built from enquiries, quotations and sometimes bought or compiled lists of customers. In some industrial markets with relatively few customers it is quite feasible to compile a very accurate database of all buyers of a particular product or service, but in mass markets the task is much more difficult. For universal purchases such as groceries in B2C markets and stationery in B2B markets, it is easy to source lists of all consumers or all businesses. However, for products that are not universal but where there are many suppliers and very large numbers of customers, e.g. personal pensions or business travel, building a truly comprehensive database of all customers in the market will be very difficult and often impossible. It is therefore acceptable to base the sample on a readily available list such as a trade directory or a bought list which, whilst almost certainly not covering the full universe of customers, will not be biased towards any of the competing suppliers.

An alternative is for competitors to organise a syndicated survey through their own trade association, which can act as an 'honest broker'. The competing suppliers each provide a random sample of their own customers to the trade association, which then commissions an agency to undertake the survey. When the survey is completed, the association typically provides all participating members with the same results, usually showing scores for all the competitors. Alternatively, each participant can be given their own scores compared with the market average. The disadvantage of syndicated surveys is that all companies taking part receive the same information, so it provides no competitive advantage.

KEY POINT
Competitor comparison surveys must be based on a random and representative sample of all the customers in the market for the product or service.


13.2.2 Data collection
If a neutral body such as a trade association conducts the survey, a self-completion methodology is usually feasible, the association's reputation boosting response rates and its perceived neutrality allaying respondents' unease at scoring competing suppliers. However, if a company undertakes its own competitor comparison survey, a self-completion methodology would not usually achieve an adequate response rate. Moreover, some customers feel uneasy about giving one company scores for its competitors, so if the information is provided it may not be reliable. If a company undertakes its own competitor survey it is therefore necessary to interview customers and to use an agency to conduct the interviews. Since it is important not to lead or bias respondents, the agency would not divulge the name of the commissioning supplier at the beginning of the interviews, but would normally be prepared to disclose it at the end. Even with this approach, some customers will be reluctant to participate, so response rates will be lower than for customer satisfaction surveys, reducing the reliability of the data. However, if a company wants this information without sharing it with competitors, this type of compromise has to be made.

13.2.3 The questionnaire
The questionnaire is very similar to the customer satisfaction questionnaire described in Chapter 9, with two main differences. At the beginning of the interview, the customer's reference set of competitors must be established, by asking a simple awareness question such as:
"Can you tell me which supermarkets / household insurance companies / PC suppliers / hydraulic seal manufacturers etc you are familiar with?"
This question would usually be unprompted, with the interviewer having pre-coded options for all the leading competitors plus an 'other' option for any smaller suppliers. In some markets, such as cars, most customers will be aware of too many brands to score in a reasonable length interview, so in these cases the interviewer would restrict the options to the main competitors in a segment. For large executive cars, for example, the options might be limited to Audi, BMW, Lexus and Mercedes.

Having established respondents' reference set, the interviewer asks them to score all the competitors that they are familiar with. Unlike customer satisfaction surveys, it is not essential that respondents have experienced a supplier's product or service, particularly in markets where competing suppliers have a strong image and customers have quite detailed perceptions of companies they have not used recently, or perhaps ever. Provided respondents are asked to score performance rather than satisfaction, this is quite feasible. It would also be made perfectly clear to respondents that they are scoring perceived rather than actual performance. Suitable wording would be:
"I'd like to ask about your opinion of how the companies you mentioned perform on a number of different factors. I would like you to give each one a score out of ten, where a score of 1 out of 10 means that you believe the company performs very poorly on that factor. A score of 10 means that you believe they perform very well."

KEY POINT
In competitor surveys, respondents score perceived performance rather than satisfaction.

The interviewer should score each supplier the respondent mentioned on each requirement before moving on to the next factor. As with customer satisfaction surveys, the requirements must also be scored for importance, and this should be done after the performance scores have been collected. Of course, as with customer satisfaction surveys, the 15 to 20 customer requirements scored in the main survey would be based on the lens of the customer and identified through exploratory research.

13.3 Market standing
A study based on the methodology outlined above is known as a market standing survey and should cover all the factors that influence customers' choice and evaluation of suppliers in the market. Shown in Figure 13.2, the results enable a company to see how it compares against its competitors on all the most important supplier selection criteria used by customers.

Provided the customer requirements have also been scored for importance, a weighted index can be calculated for each supplier, using the formula explained in Chapter 11. Shown in Figure 13.3 for the three suppliers in this example, the outcome provides an accurate reflection of their relative market standing as perceived by customers1.
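
For readers who want to reproduce the calculation, a minimal sketch is shown below. It assumes, as an illustration rather than a restatement of the Chapter 11 formula, that each requirement's weight is its share of total importance and that the index is the weighted mean score expressed as a percentage of the 10-point maximum; the scores themselves are invented.

# Illustrative sketch only: an importance-weighted satisfaction index for one supplier.
# Assumptions: each requirement's weight is its share of total importance, and the
# index is the weighted mean score expressed as a percentage of the 10-point maximum.
importance = {"Fruit & vegetables": 9.2, "Stock availability": 9.0, "Bakery": 8.1,
              "Cleanliness": 8.9, "Queue times": 8.6, "Price": 9.4,
              "Fresh meat": 7.8, "Cafe": 6.5}          # invented importance scores
performance = {"Fruit & vegetables": 8.8, "Stock availability": 7.9, "Bakery": 8.3,
               "Cleanliness": 9.1, "Queue times": 8.0, "Price": 8.2,
               "Fresh meat": 8.5, "Cafe": 8.9}         # invented performance scores

def weighted_index(importance, performance):
    total = sum(importance.values())
    weights = {req: imp / total for req, imp in importance.items()}
    return sum(weights[req] * performance[req] for req in performance) / 10 * 100

print(f"Market standing index: {weighted_index(importance, performance):.1f}%")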

FIGURE 13.2 Competitor comparisons (bar chart: performance scores out of 10 for XYZ Ltd, Competitor 1 and Competitor 2 on fruit & vegetables, stock availability, bakery, cleanliness, queue times, price, fresh meat and café)


Since customers' attitudes precede their behaviours, Figure 13.3 will typically provide a very reliable guide to future customer behaviour in the market and its consequent impact on market share, so it provides a sound basis for decisions about how to improve. However, the analysis will have to be slightly different from the steps outlined in Chapter 12. The next two sub-sections explain.

13.3.1 Satisfaction gaps
It is always essential to 'do best what matters most to customers', so comparing importance and satisfaction scores remains the starting point. The analysis is initially the same as that described in Chapter 12: Figure 13.4 shows the importance scores given by customers and compares them with XYZ's satisfaction scores that we have already seen in Figure 13.2.

FIGURE 13.4 Doing best what matters most (bar chart comparing customers' importance scores with XYZ's performance scores on the eight requirements)

FIGURE 13.3 Market standing (weighted indices: XYZ Ltd 85.8%, Competitor 1 84.9%, Competitor 2 77.5%)


Figure 13.5 simply shows the size of the satisfaction gap for each of the eight requirements. As previously, requirements where satisfaction is higher than importance, indicating that customers' requirements are being exceeded, are shown with a negative gap.

13.3.2 Competitor gaps
To make the most impact on improving the satisfaction of its own customers, XYZ should focus on addressing a small number of PFIs (priorities for improvement) based on its biggest satisfaction gaps. However, in a highly competitive market there is also another dimension to consider: XYZ's relative performance compared with its main competitors. Figure 13.6 shows the competitor gaps between XYZ and Competitor 1.
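
Both kinds of gap are simple to compute once the scores are available. The sketch below shows one way of doing so; the scores are invented, and the sign conventions (importance minus XYZ's score for satisfaction gaps, Competitor 1's score minus XYZ's score for competitor gaps) are assumptions consistent with the charts rather than prescribed formulae.

# Illustrative sketch only: satisfaction gaps and competitor gaps for XYZ.
# Assumed conventions: satisfaction gap = importance - XYZ score (positive = shortfall);
# competitor gap = Competitor 1 score - XYZ score (positive = XYZ under-performs).
importance  = {"Queue times": 9.0, "Price": 9.4, "Stock availability": 9.2}
xyz         = {"Queue times": 7.8, "Price": 8.2, "Stock availability": 7.6}
competitor1 = {"Queue times": 9.1, "Price": 8.6, "Stock availability": 7.9}

satisfaction_gaps = {r: round(importance[r] - xyz[r], 2) for r in importance}
competitor_gaps   = {r: round(competitor1[r] - xyz[r], 2) for r in importance}

# Rank requirements by each type of gap to suggest candidate PFIs.
for label, gaps in (("Satisfaction gaps", satisfaction_gaps), ("Competitor gaps", competitor_gaps)):
    print(label, sorted(gaps.items(), key=lambda item: item[1], reverse=True))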

FIGURE 13.5 Satisfaction gaps for XYZ (bar chart of the satisfaction gap for each of the eight requirements, with stock availability showing the largest gap)

FIGURE 13.6 Competitor gaps XYZ versus Competitor 1 (bar chart of the gap between Competitor 1's score and XYZ's score for each of the eight requirements, with queue times showing the largest gap)


There are significant differences between Figures 13.5 and 13.6. Stock availability is the obvious PFI for XYZ if based on the satisfaction gaps, but queue times are a much bigger area of under-performance against Competitor 1. Since in the real world there would probably be at least 20 important customer requirements covered on the survey, and all companies have finite resources, XYZ may have to make choices between increasing the satisfaction of its own customers or closing the gaps with Competitor 1. Putting the two sets of data together into a competitor matrix would be very useful for making this decision.

Requirements closest to the top left hand corner represent XYZ's main areas of weakness, in terms of failing to satisfy its own customers and under-performing its main competitor. Whilst 'stock availability' would emerge as XYZ's main PFI based on measuring the satisfaction of its own customers, the data from across the market suggest that improving 'queue times' and 'price' could also make a big difference to XYZ's market position against Competitor 1. Before drawing conclusions about exactly where XYZ might decide to focus its resources, it is useful to consider an alternative method of making comparisons against competitors.

FIGURE 13.7 Competitor matrix (scatter plot of the eight requirements, with competitor gaps on the horizontal axis and satisfaction gaps on the vertical axis)

13.4 Relative perceived value
The disadvantage of Figure 13.7 is that it can show only two of the competing suppliers in the marketplace. This is overcome by an alternative technique known as relative perceived value, developed in the USA by Bradley Gale2. The rationale for the technique is that customers buy the products and services that provide the best value, in other words the benefits delivered relative to the cost of obtaining the product or service. Benefits include all benefits and costs include all costs. This simply means all the things that are important to customers, and is totally consistent with the methodology covered earlier in this book. A survey for relative perceived value would therefore cover customers' top 15 to 20 requirements, scored for importance and satisfaction. A significant difference at the analysis stage is that the requirements are split into two groups: benefits and costs. As well as the obvious question on price, costs could include some other factors. Some may be immediately identifiable as costs, such as delivery charges, but others can be indirect costs such as travel. The cost and time involved in travelling to a more distant store, for example, are real additional costs to the customer. If two stores offer equal benefits, customers will make the rational choice and frequent the closer one. Most of the requirements measured will usually be benefits, but the cost dimension may contain three or four factors. Indices are now calculated for costs and for benefits, using the methodology explained in Chapter 11. Since the relative importance of the components of the cost index and the benefits index will differ, both indices are weighted. Each competitor therefore ends up with a cost index and a benefits index, and these can be plotted on the type of matrix shown in Figure 13.8.
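
As an illustration of this split, the sketch below reuses the weighted-index idea from earlier in the chapter; the labelling of each requirement as a cost or a benefit, the scores and the weighting method are assumptions made for the example.

# Illustrative sketch only: separate cost and benefits indices for one supplier.
# Assumptions: each requirement is labelled 'cost' or 'benefit', and each index is the
# importance-weighted mean score within its group, expressed as a percentage of 10.
requirements = {                      # name: (importance, score, group) - invented data
    "Price":                 (9.4, 8.2, "cost"),
    "Travel time to store":  (7.0, 8.8, "cost"),
    "Fruit & vegetables":    (9.2, 8.8, "benefit"),
    "Queue times":           (8.6, 8.0, "benefit"),
    "Cleanliness":           (8.9, 9.1, "benefit"),
}

def group_index(requirements, group):
    rows = [(imp, score) for imp, score, g in requirements.values() if g == group]
    total_importance = sum(imp for imp, _ in rows)
    return sum(imp / total_importance * score for imp, score in rows) / 10 * 100

print("Cost index:    ", round(group_index(requirements, "cost"), 1))
print("Benefits index:", round(group_index(requirements, "benefit"), 1))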

If customers are highly satisfied with the benefits and the costs, the company would be close to the top right hand corner, like Competitor 1. XYZ offers a significantly better combination of benefits and costs than Competitor 2, but a poorer combination than Competitor 1. Movements in market share are based on a combination of two variables. First is the extent to which a company meets its own customers' requirements and therefore doesn't lose many customers. Second is the extent to which it is seen to over- or under-perform the competition in the eyes of all buyers in the market, a sign of its attractiveness to potential customers and its ability to win new customers. Figure 13.8 shows those two dimensions, and the diagonals that divide the chart into zones provide a good indication of the relative competitiveness of the three suppliers.

FIGURE 13.8 Competitive positioning (matrix plotting each supplier's cost index against its benefits index, divided by diagonals into four zones, with Competitor 1 closest to the top right hand corner)


KEY POINT
Relative perceived value offers a visual overview of the relative performance of all competitors in a market in the eyes of customers.

Bradley Gale advocates one further step in the analysis process2. Instead of expressing the cost and benefit indices as a percentage of the maximum, Gale suggests presenting them as a ratio of the market average. Taking the price dimension in Figure 13.8, the market average is 81.3. If 81.3 is given a value of 1 and the scores for the three competitors are expressed as a ratio of it, Competitor 1's score would be 1.09 (9% better than the market average), XYZ's would be 0.95 and Competitor 2's 0.96. Similarly, the market average for customer satisfaction with the benefits would be 82.9, giving Competitor 1 a relative score of 1.02 (2% better than the market average), XYZ a score of 1.05 and Competitor 2 a score of 0.93 (7% worse than the market average). The outcome is shown in Figure 13.9. Gale also adds a diagonal line to indicate 'fair value', which is a reasonable trade-off between benefits (or quality) and cost. A company with high prices, like XYZ, could offer fair value in the eyes of customers provided it delivers very high quality or a strong combined benefits package. Equally, a company offering lower quality and fewer benefits, like Competitor 1, can provide fair value or better in the eyes of customers if it has very attractive prices. Full service versus low cost airlines would be good examples. According to Gale, companies offering better than fair value, like Competitor 1, are in the 'superior value' zone and can expect to gain market share, whilst those in the 'inferior value' zone, like Competitor 2, will lose market share.
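
The arithmetic can be checked directly. In the sketch below the three suppliers' indices are back-calculated from the ratios quoted above, purely to illustrate the calculation; treating the market average as the simple mean of the three indices is an assumption.

# Illustrative sketch only: expressing each supplier's index as a ratio of the market
# average. The index values are back-calculated from the ratios quoted in the text, and
# the market average is assumed to be the simple mean of the three indices.
cost_index     = {"XYZ Ltd": 77.2, "Competitor 1": 88.6, "Competitor 2": 78.0}
benefits_index = {"XYZ Ltd": 87.0, "Competitor 1": 84.6, "Competitor 2": 77.1}

def ratios(indices):
    market_average = sum(indices.values()) / len(indices)
    return {supplier: round(score / market_average, 2) for supplier, score in indices.items()}

print("Market perceived price ratios:  ", ratios(cost_index))      # approx. 0.95, 1.09, 0.96
print("Market perceived quality ratios:", ratios(benefits_index))  # approx. 1.05, 1.02, 0.93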

KEY POINT
Companies offering 'superior value' will gain market share.

FIGURE 13.9 Relative perceived value (matrix plotting the market perceived price ratio against the market perceived quality ratio for XYZ, Competitor 1 and Competitor 2, with a 'fair value' diagonal separating the 'superior value' zone, where suppliers gain market share, from the 'inferior value' zone, where they lose it)


There are two ways of generating data for the cost axis. The first is based on customers' perception of relative costs, collected by surveys. An alternative method is to use the real prices charged by the competing suppliers. This method is shown in Figure 13.10, with the benefits / quality score on the x axis still generated by a customer survey. If real prices are used, the y axis scale is effectively reversed, a low score (low prices) now being good for customers and a high score offering lower value, so 'superior value' is now shown towards the bottom right hand corner of the chart. Figure 13.10 also shows a more typical situation with a fairly large number of competing suppliers offering a range of cost-quality mixes, but most lying in the fair value zone. Suppliers 5 and 8 offer superior value so can expect to gain market share, whilst the market shares of Suppliers 1 and 2 will be under pressure since they offer inferior value.

However, a chart like Figure 13.10 could be misleading for a company whose target market is just one segment of the market. Offering 'inferior value' in the eyes of non-target customers would not be a strategic problem in this scenario, but a sample of the supplier's own customers would still be biased. For an accurate picture of how well it is competing in its chosen market, the sample should be a random and representative sample of anyone that fits the relevant customer profile in the target market.

FIGURE 13.10 Customer value map (price ratio plotted against market perceived quality ratio for eight suppliers, most of them lying in the fair value zone between the inferior and superior value areas)

13.5 Market standing or relative perceived value?
It is clear from the previous two sections that the methodology chosen to perform a competitor analysis can affect the outcome. Based on market standing (Figure 13.3) XYZ is the leading supplier in the market and could expect to gain market share over both of its main competitors. By contrast, Competitor 1 leads the market on relative perceived value whether the figures are expressed as indices (Figure 13.8) or ratios (Figure 13.9). The difference between the two approaches is the impact of price on the outcome. In the market standing example price is only one of eight components of each competitor's index, and in a real-world survey may be only one of 15 or 20 requirements scored. Even though the index is weighted for importance, this probably under-estimates the role of price in customers' supplier selection decisions in very price sensitive markets. Conversely, in the relative perceived value approach, price or cost inevitably makes as much impact on the outcome as all the other customer requirements combined, which will exaggerate the importance of price in many quality or benefits-driven markets. Companies should therefore base their choice of methodology on the price sensitivity of their target market, as explained in the next two sub-sections.

13.5.1 Low price sensitivity
In markets where customers' choices and loyalty are driven mainly by quality, service, innovation, image or other non-price benefits, market standing offers by far the most useful approach. Provided price emerges from the exploratory research as one of the 15 or 20 most important customer requirements (which it almost always does), it is included on the questionnaire and forms one of the components of the index. Markets such as executive cars, state-of-the-art technological products, designer clothing, private banking, first and business class air travel, luxury hotels, Michelin star restaurants, cruises and a host of personal services or leisure experiences for the affluent will always be benefits rather than cost-driven. Whilst price has to be broadly in line with the market, it will play a relatively small role in customer satisfaction and loyalty. The benefits, on the other hand, will be crucial to customers' supplier selection decisions, so in this type of market it is vital that competitor comparison surveys fully explore the benefits, since companies' ability to continually offer enhancements to quality and service will be key to their continuing success. It is also essential that the overall market standing outcome accurately reflects the relatively low importance of price compared with the collectively critical influence of the range of benefits.

KEY POINT
For markets that are not too price sensitive, market standing will provide a better picture of competitive positioning than relative perceived value.

13.5.2 High price sensitivity
Some products and services offer minimal differentiation so compete primarily on price. Often described as commodity markets, typical examples include utilities such as gas and electricity, no-frills airlines and many B2B markets such as raw materials or basic services such as cleaning and security. In these markets price could be as important in the supplier selection decision as all other benefits combined, making relative perceived value the ideal methodology for depicting competitors' relative performance. Indeed, in some completely undifferentiated markets price could conceivably account for more than 50% of customer behaviour, so even the customer value map might under-emphasise its impact. In this type of market it is advisable to conduct exploratory research (see Chapter 5) with a larger than normal sample (e.g. 50 depth interviews or 10 focus groups) and to use a 'points share' to establish the relative importance of customers' requirements.

In a points share, sometimes called the constant sum method3, customers are given a fixed number of points (typically 100) to allocate across their supplier selection criteria, reflecting their relative importance. There is no maximum or minimum number of points that must be allocated to each factor. If price is overwhelmingly important, any customer would be free to allocate all their points to price and no points to any of the other factors. Rather than simply giving customers a list of factors to score, the exercise is more likely to reflect their real-life supplier selection decisions if a purchasing scenario is presented. For example, in the market for low cost flights, the following introduction might be provided:
"Imagine you are planning a long weekend break with your partner to a European destination. Three airlines offer flights between the UK and your destination. Please think about the criteria you would use to choose between the three airlines. You have 100 points to share across the factors listed. Please allocate the points according to the relative importance to you of each factor when you are choosing between the available flights. You can allocate any number of points, from 0 to 100, to each factor as long as the total number of points you allocate does not exceed 100."

Although, in theory, customers could allocate an equal number of points to each requirement, this never happens in practice since some requirements are invariably more important than others. Consequently, the points share forces customers to make choices, since they can't increase the number of points allocated to one requirement without reducing those allocated to another. It is particularly useful in price sensitive markets since it will fully reflect the extent to which price is more important than the other requirements. However, the exercise is easy for customers to complete only if there are few criteria; if there are too many criteria to score, participants tend to focus more on the maths than on the relative importance of the requirements.

Customer requirement                                  Points allocated
Facilities at airport e.g. shopping, catering
Reputation of airline
Travel time from home to UK airport
Travel time from overseas airport to destination
Price of ticket
Availability of seat reservations
Option to purchase in-flight food
Provision of free in-flight food
Flight time
Availability of internet booking
Availability of telephone booking
Availability of booking through a travel agent
Air miles awarded
Safety record of airline
Type of plane used
Free luggage allowance up to 25kg
Total points allocated (maximum 100 points)
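
Once the completed forms are collected, the allocations can be aggregated into relative importance weights and used to see how large a share of the points price receives. The sketch below shows the idea; the respondents' allocations and the simple averaging are illustrative assumptions.

# Illustrative sketch only: aggregating constant sum (points share) allocations into
# relative importance weights. Three invented respondents; averaging is an assumption.
allocations = [
    {"Price of ticket": 40, "Flight time": 20, "Free luggage allowance up to 25kg": 15,
     "Safety record of airline": 15, "Availability of internet booking": 10},
    {"Price of ticket": 55, "Flight time": 15, "Safety record of airline": 20,
     "Availability of internet booking": 10},
    {"Price of ticket": 30, "Flight time": 30, "Free luggage allowance up to 25kg": 20,
     "Safety record of airline": 20},
]
for points in allocations:
    assert sum(points.values()) <= 100, "a respondent has allocated more than 100 points"

factors = sorted({factor for points in allocations for factor in points})
average_points = {f: sum(p.get(f, 0) for p in allocations) / len(allocations) for f in factors}

print(average_points)
print(f"Price share of all points allocated: {average_points['Price of ticket']:.0f}%")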


In fact, seven is regarded as the maximum number of factors even for the much simpler task of ranking the items in order of importance4. Since a points share is more difficult, such a low limit to the number of requirements that can be scored would make it totally inapplicable for customer satisfaction research. However, if appropriate steps are taken, this does not have to be a barrier. Firstly, a points share for CSM should not be conducted as a short quantitative interview, and certainly not as a self-completion questionnaire, but more like the exploratory research techniques described in Chapter 5. Either a depth interview or focus group setting is suitable. It is essential to allow sufficient time to explain the exercise to participants, to enable them to ask questions and to complete the points share at their own pace. Providing the correct tools for the job is also essential, so a calculator is vital. Even better would be a laptop with the requirements pre-entered in a spreadsheet and the 'total points allocated' cell programmed to add the values above, so that customers can experiment with their points allocations and immediately see if they are exceeding the maximum. This overcomes the major problem of customers focusing on the maths more than the relative importance of the requirements. It also enables the number of points to be increased, say to 1000, which can be useful where there is a large number of requirements, quite a few of which may be important. However, this dictates that the list of requirements to be scored by the points share must be already known, so a preliminary exploratory phase, using conventional CSM depth interview or focus group techniques (see Chapter 5), would have to be conducted to establish customers' most important requirements. The points share exercise would then be conducted only with the 15 to 20 requirements to be used on the main survey questionnaire.

If the points share data show that price does dominate customers' supplier selection decisions, using relative perceived value for the main survey analysis would be appropriate. However, if price is awarded considerably less than 50% of the points allocated by customers, even if it is the most important requirement, relative perceived value would over-estimate its importance, so market standing would be the most suitable main survey analysis method.

KEY POINT
A points share can be used to determine the most appropriate main survey analysis technique. Only if price is as important, or almost as important, as all the other customer requirements combined is relative perceived value suitable.

The points share has been criticised for being an ipsative scale5. This means that it has no anchors or reference points, so whilst it establishes the relative importance of the factors scored, it provides no indication of the absolute level of importance of the requirements. This does make it essential to conduct conventional exploratory research before the points share to ensure that the requirements scored are those of most importance to customers.


13.6 Switching
The main characteristic of very competitive markets is the prevalence of switching. Customers see changing from one supplier to another as relatively easy, so often feel it is worth switching for even a small increase in 'value'. They may even switch just to find out whether an alternative supplier offers better value, since it is easy to switch back if it doesn't. In very competitive markets this promiscuity reaches its height when customers switch simply for a different customer experience, e.g. visiting a new restaurant 'for a change'. Hofmeyr6 calls this 'ambivalence' and points out that in some markets customers are loyal to more than one supplier. They will sometimes visit a different restaurant even though they are completely satisfied with their favourite restaurant. In such markets, therefore, companies need a much deeper understanding of customers' loyalty attitudes and behaviour.

KEY POINT
In highly competitive markets companies need a detailed understanding of the customers most likely to switch suppliers.

A competitor analysis must identify the customers most and least likely to switch7. This should include the company's own customers and competitors' customers, since the company must understand how to defend its own vulnerable customers as well as how to target and attract its competitors' most vulnerable customers. Hill and Alexander1 suggest dividing one's own and the competitors' customers into loyalty segments as shown in Figure 13.11. A loyalty index (see Chapter 11) would typically be used for this purpose. The components of the index need very careful consideration in promiscuous markets. Of the loyalty questions described in Chapter 9, the commitment, trust and preference questions will be particularly important. Indeed, companies will often benefit from asking several preference questions covering share of wallet and accessibility as well as attraction of competing suppliers.
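
As a rough illustration of how such an index might be assembled, the sketch below combines commitment, trust and preference scores into a loyalty index and allocates each respondent to one of the four segments. The component weights and the segment thresholds are invented for the example; they are not values recommended by the book.

# Illustrative sketch only: rolling loyalty questions up into an index and a segment.
# The question weights and the segment thresholds are invented for the example.
WEIGHTS = {"commitment": 0.4, "trust": 0.3, "preference": 0.3}

def loyalty_index(scores):
    # scores are out of 10; the index is expressed as a percentage of the maximum
    return sum(scores[q] * w for q, w in WEIGHTS.items()) / 10 * 100

def loyalty_segment(index):
    if index >= 80:
        return "Faithful"
    if index >= 60:
        return "Vulnerable"
    if index >= 40:
        return "Flirtatious"
    return "Available"

respondent = {"commitment": 9, "trust": 8, "preference": 7}
index = loyalty_index(respondent)
print(index, loyalty_segment(index))    # 81.0 Faithful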

FIGURE 13.11 Loyalty segments

Faithful
  Our customers: Strongly loyal, rate our performance highly, little interest in competitors
  Competitor's customers: Strongly loyal, rate competitor highly, little interest in us
Vulnerable
  Our customers: Apparently loyal customers but high level of inertia or some interest in competitors
  Competitor's customers: Repeat buyers with competitors but little positive loyalty and some interest in us
Flirtatious
  Our customers: Little positive loyalty, actively interested in alternatives
  Competitor's customers: Little loyalty to competitors, may be receptive to our advances
Available
  Our customers: Customers showing a strong preference for alternative suppliers
  Competitor's customers: Competitors' customers who already rate us superior to their existing supplier


Based on the information in Figure 13.11, companies can develop detailed loyalty strategies to protect their own customers and to attract their competitors' most vulnerable customers.

Of course, few companies have the resources to successfully implement acquisition and retention strategies across all segments. Figure 13.12 illustrates the situation for a supplier with one competitor, but in a very promiscuous market there will be several competitors, each with their own strengths, weaknesses and customer profiles. The starting point for strategic decisions on retention and acquisition strategies is therefore to understand the distribution of the customer base across the four loyalty segments. Figure 13.13, for example, depicts a company with a very secure customer base, which should take steps to reward and protect the loyalty of its many faithful customers, whilst implementing strong measures to attract any of the competitors' available and flirtatious customers, provided they have a suitable needs profile.

By contrast, the supplier shown in Figure 13.14 has a customer base that is typical of a company devoting too much resource to winning new customers at the expense of satisfying and retaining its existing ones. This company needs to seriously re-think its strategic priorities.

FIGURE 13.13 Secure customer base (bar chart of the percentage of customers in each of the four loyalty segments, dominated by faithful customers)

FIGURE 13.12 Loyalty strategies

Faithful
  Our customers: Reward loyalty, stimulate referrals, strong focus on service recovery factors
  Competitor's customers: Don't target
Vulnerable
  Our customers: Strong focus on PFIs, communications campaigns and loyalty schemes to build positive loyalty
  Competitor's customers: May be worth targeting if competitors are failing to meet their need in areas where you perform strongly
Flirtatious
  Our customers: Objective assessment of costs and benefits of retaining this group. Strong focus on closing any perception gaps
  Competitor's customers: Should be easy prey but make sure they're not habitual switchers
Available
  Our customers: Cut losses. Chances of retention very low
  Competitor's customers: Go for the jugular, especially where you believe your strengths match their priorities


A relevant example is the MBNA reference from Chapter 2, where the company was not keeping its customers long enough for them to become sufficiently profitable. MBNA's 'zero defections' strategy, based on delivering exceptionally high levels of service to targeted customers, moved the company from the 38th largest to the largest bank card provider in the USA over two decades8,9.

KEY POINT
To maximise market share, companies must efficiently focus resources on the most winnable potential customers.

To optimise strategic decisions of the type outlined in Figure 13.12, a company must develop two additional areas of insight. Firstly, it must segment customers and build detailed profiles of the predominant types of customer in its own and its key competitors' loyalty segments. Secondly, it must understand what is making customers faithful, vulnerable, flirtatious or available and what the company can do to maximise its appeal to targeted customer segments.

13.6.1 Segmentation
To effectively target retention or acquisition strategies, companies must understand how customers differ across the loyalty segments. This will depend on recording sufficient classification data covering all the likely segmentation variables, including demographic, geographic, behavioural and lifestyle / psychographic details for the customers surveyed. Demographic information includes age, gender, family life cycle, income, occupation, education and ethnic origin. In some markets, such as pensions or health care, customers' attitudes and behaviours are heavily influenced by demographic factors. In others, such as groceries and cars, a more complex level of attitudinal and psychographic profiling is often necessary to fully understand the differences between loyalty segments. These may include core values, such as the importance placed on individual liberty, health and fitness and family values, or deeply held beliefs, such as commitment to the environment, fair trade food or specific political or charitable causes. Sometimes, the best way to profile customers is to start with their tangible behaviour, such as when they buy, how they buy (channel), how often they buy and how much they buy, then search for demographic, psychographic or geographic differences within the behavioural segments.

FIGURE 13.14 Disloyal customer base (bar chart of the percentage of customers in each of the four loyalty segments)


This can be appropriate for many leisure markets. Yet another profiling variable that often uncovers significant differences between customers is needs segmentation, based on the relative importance of customers' requirements, price versus quality-driven segments being an obvious example. One of the earliest academic authorities on customer segmentation was Yoram Wind10, who suggested some less commonly used segmentation variables which, in his view, often provided more insight than standard classification data such as demographics. Wind's preferred segmentation criteria included:

- Needs segmentation (called benefits segmentation by Wind)
- Product preference
- Product use patterns
- Switching behaviour
- Risk aversion (attracted by innovation and change versus preference for familiar things)
- Deal-proneness
- Media use (in other words, the media they use will indicate the type of person they are)

- Store loyalty / shopping behaviour.

The last two are particularly interesting since they illustrate the idea that a company can often draw insightful conclusions about its own customers' loyalty by asking them questions about their behaviour in other walks of life. Media usage is an obvious example. Some people are promiscuous users of media, hopping across many TV, radio and internet channels, whilst others may get their information and entertainment from one newspaper, one or two radio stations and a small range of TV channels. Rather than asking its customers direct questions about their behaviour in its own market (e.g. likelihood of renewing their policy), an insurance company might ask about their media usage and shopping behaviour. Customers that use a very small range of media and are highly loyal to one supermarket for their grocery shopping are displaying a more favourable loyalty personality than those who often shop at three or four different supermarkets and have very diverse media habits. Whatever they say about their intentions to renew their policy, customers demonstrating strong loyalty behaviours in other markets are more likely to be loyal insurance customers.

KEY POINT
The ability to accurately target customers will considerably improve the effectiveness of customer acquisition and customer retention strategies.

13.6.2 Profiling customers
Given sufficient classification data, there are several analytical techniques that can be used to profile customer segments. They can be split into two fundamental types: 'a priori' and 'post-hoc'11, sometimes called 'verification' and 'discovery'.


'A priori' techniques involve the researcher comparing the data across pre-defined segments. 'Post-hoc' techniques start with the survey data and discover where the biggest differences in the data can be found. They then define the segments 'after the fact', based on groups of customers whose scores differed the most. In this section we will explain three analytical techniques that are very suitable for customer profiling.

(a) Cross tabulations
The obvious starting point is to split each loyalty segment into all the 'a priori' sub-groups available from the classification data. Using confidence intervals (see Chapter 11 and Figure 11.5), statistically significant differences between the segment splits can be identified. An example based on age is shown in Figure 13.15.

Simply looking at the information suggests that over 55s are more satisfied than younger customers. The cells with differences that are statistically significant are highlighted. Producing cross tabs for all segments of interest will identify differences across sub-groups, but is very time consuming and not always conclusive. A technique that produces a more definitive result would therefore be far more useful for decision making.
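
The underlying significance check can be reproduced with a few lines of code; the sketch below computes each age band's mean score and an approximate 95% confidence interval and flags bands whose intervals do not overlap. The raw scores and the non-overlap rule are illustrative simplifications of the Chapter 11 approach.

# Illustrative sketch only: mean scores by age band with approximate 95% confidence
# intervals; non-overlapping intervals are flagged as significant (a simplification).
import math

def mean_and_ci(scores, z=1.96):
    n = len(scores)
    mean = sum(scores) / n
    variance = sum((s - mean) ** 2 for s in scores) / (n - 1)
    half_width = z * math.sqrt(variance / n)
    return mean, mean - half_width, mean + half_width

by_age = {                      # invented raw scores for 'handling of complaints'
    "Under 35s": [3, 2, 4, 3, 2, 3, 3, 2],
    "35-55s":    [3, 4, 3, 2, 4, 3, 3, 3],
    "Over 55s":  [6, 5, 7, 6, 5, 6, 6, 6],
}
intervals = {band: mean_and_ci(scores) for band, scores in by_age.items()}
bands = list(intervals)
for i, a in enumerate(bands):
    for b in bands[i + 1:]:
        _, lo_a, hi_a = intervals[a]
        _, lo_b, hi_b = intervals[b]
        if hi_a < lo_b or hi_b < lo_a:
            print(f"Significant difference between {a} and {b}")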

(b) Decision tree analysis
A very clear, unambiguous technique that identifies the biggest differences between segments is decision tree analysis, sometimes called discriminant analysis. There are several computer programs based on the AID (automatic interaction detection) algorithm that sequentially divide a sample into a series of sub-groups, with each split chosen because it accounts for the largest part of the remaining unexplained variation. The easiest way to understand this process is to work through the decision tree shown in Figure 13.16.

FIGURE 13.15 Segment splits by age

                                        Under 35s   35-55s   Over 55s
Ease of contacting the call centre         7.70      7.63      8.46
Helpfulness of call centre staff           7.66      7.92      8.03
Keeping promises and commitments           6.51      6.20      6.88
Treating you as an individual              7.32      7.03      8.31
Handling of complaints                     2.84      3.16      5.92
Expertise of call centre staff             4.59      5.47      5.98
Speed of service                           7.32      7.47      8.62
Helpfulness of branch staff                7.94      8.02      8.98
Expertise of branch staff                  6.82      7.40      7.45
Information provided by XYZ                7.60      7.75      7.95
Overall value for money                    7.17      7.59      7.46
Convenience of opening hours               7.39      6.95      8.42


The process starts with the entire sample, indicated by the 100% above the first box, which is numbered 1 in its top right hand corner. The 81.3 refers to the customer satisfaction index for the sample in question. This could be the entire sample or, more usefully, a sub-set of it, such as the 'flirtatious' segment or a competitor's 'available' segment. To keep matters simple we will assume it is the entire sample. The data examined do not have to be overall satisfaction. They could be a loyalty index, a single question such as recommendation or an individual PFI such as 'quality of advice'. The process then looks for the single dichotomous variable that accounts for the biggest difference in satisfaction variation across the sample and finds that it is age. It can split any variable into only two groups at each stage, and in this example the two age segments that account for the biggest variation in overall satisfaction are over and under-55s, which now become boxes 2 and 3. The over 55s are 46% of the sample and have a customer satisfaction index of 92.4, whilst the under 55s, who account for 54% of the sample, are much less satisfied at 74.8. The computer will then look for the factor that explains the most variation in the satisfaction of the over 55s and the under 55s: for the over 55s it is whether they are working or retired, and for the under 55s it is whether or not they have children living at home. If the biggest difference within either group had been a further sub-division of age (e.g. dividing the under 55s into under and over 25s), decision tree analysis would have produced this outcome. The percentages shown above each box refer to that cell's percentage of the total sample, the figures for 'still working' and 'retired' totalling the 46%, which is the proportion of the total sample accounted for by the over 55s.

FIGURE 13.16 Decision tree analysis (tree diagram: the whole sample, satisfaction index 81.3, splits into over 55s (46%, 92.4) and under 55s (54%, 74.8); the under 55s divide by children at home (without children 10%, 79.5; with children 44%, 69.3); the over 55s divide into still working (12%, 88.6) and retired (34%, 95.1); the still working group splits further by social grade, ABC1 / C2DE (5%, 84.9 and 7%, 90.4), and the retired group splits by income, with those on up to £20,000 p.a. (25%, 96.4) dividing again into London/South East (6%, 94.0) and outside London/South East (19%, 97.2), and those on over £20,000 p.a. at 9%, 92.7)


This makes it easy to profile the most satisfied or loyal customers. The company concerned would be well advised to target retired over 55s on modest incomes outside London. As well as having very high levels of satisfaction with the benefits delivered by the company, they also account for a sizeable 19% of customers in the target market. Of course, this last statement holds true only if the survey sample is representative of the market rather than the company's own customers.
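
The same kind of tree can be grown directly from survey data with standard statistical software; the sketch below uses scikit-learn's regression tree as a stand-in for an AID-style program. The package choice, the invented data frame and the parameter settings are all illustrative assumptions; a real analysis would enforce minimum segment sizes and test each split for statistical significance.

# Illustrative sketch only: growing a regression tree on classification data to explain
# a satisfaction index, in the spirit of AID. scikit-learn and pandas are assumed to be
# available, and the tiny data frame is invented; a real study would use the full sample
# and impose minimum segment sizes.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

survey = pd.DataFrame({
    "age_band": ["Under 55", "Under 55", "Over 55", "Over 55", "Over 55", "Under 55"],
    "children": ["With", "With", "Without", "Without", "Without", "Without"],
    "working":  ["Working", "Working", "Retired", "Retired", "Working", "Working"],
    "csi":      [69, 71, 96, 97, 88, 80],      # satisfaction index per respondent
})
X = pd.get_dummies(survey[["age_band", "children", "working"]])
tree = DecisionTreeRegressor(max_depth=3).fit(X, survey["csi"])
print(export_text(tree, feature_names=list(X.columns)))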

KEY POINT
Decision tree analysis helps a company to target its customer acquisition strategies on the type of customers that are most likely to be highly satisfied and loyal.

(c) Latent class regression
Latent class regression is a 'post-hoc' technique that builds different models along lines that may not have been suggested by existing customer segmentation data, but by the way respondents form opinions. Most analytical techniques produce an 'average' picture across respondents. In some cases such a view can be misleading if this average obscures fundamental differences in the way customers form opinions.

By identifying 'causally homogeneous' subgroups, latent class regression eliminates this problem. The example in Figure 13.17 shows the success of the technique in uncovering 'price-driven' and 'quality-driven' customers. As shown by the R² values, this improves the predictive accuracy of the model. The overall model explained only 32% of the variance in customers' perceptions of value, but once latent class regression had identified clusters of price-driven and quality-driven customers, 69% and 76% respectively of each segment's value judgement was explained.
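
A simple two-segment mixture of linear regressions, fitted by the EM algorithm, captures the essence of the approach; the sketch below is an illustrative stand-in (real studies would normally use specialist latent class software), and the data are invented so that half the sample is price-driven and half quality-driven.

# Illustrative sketch only: a two-segment mixture of linear regressions fitted by EM,
# as a simple stand-in for latent class regression. Real studies would normally use
# specialist latent class software; the data below are invented.
import numpy as np

def mixture_of_regressions(X, y, k=2, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])               # add an intercept column
    resp = rng.dirichlet(np.ones(k), size=n)            # random initial responsibilities
    for _ in range(iters):
        betas, sigmas, shares = [], [], []
        for j in range(k):                               # M-step: weighted least squares
            w = np.sqrt(resp[:, j])
            beta, *_ = np.linalg.lstsq(Xb * w[:, None], y * w, rcond=None)
            resid = y - Xb @ beta
            sigma = np.sqrt(np.sum(resp[:, j] * resid ** 2) / np.sum(resp[:, j]))
            betas.append(beta)
            sigmas.append(max(sigma, 1e-3))
            shares.append(resp[:, j].mean())
        dens = np.column_stack([                         # E-step: weighted normal densities
            shares[j] * np.exp(-0.5 * ((y - Xb @ betas[j]) / sigmas[j]) ** 2) / sigmas[j]
            for j in range(k)
        ])
        resp = dens / dens.sum(axis=1, keepdims=True)
    return betas, resp

rng = np.random.default_rng(1)
price, quality = rng.uniform(1, 10, 400), rng.uniform(1, 10, 400)
segment = rng.integers(0, 2, 400)                        # half price-driven, half quality-driven
value = np.where(segment == 0, 0.8 * price + 0.2 * quality, 0.2 * price + 0.8 * quality)
value = value + rng.normal(0, 0.3, 400)

betas, resp = mixture_of_regressions(np.column_stack([price, quality]), value)
print(np.round(betas, 2))    # one coefficient vector (intercept, price, quality) per segment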

FIGURE 13.17 Latent needs segmentation (path diagrams: for the overall sample, price and quality predict value with weights of 0.30 and 0.40 and R² = 0.32; for the price-driven and quality-driven segments the weights are 0.76/0.21 and 0.24/0.82 respectively, with R² = 0.69/0.76)


Although latent class regression is a very sophisticated technique that will often uncover clusters of customers that would never be identified by 'a priori' segmentation techniques, its big disadvantage is that it does not identify who the customers are in the population. It is left to the researcher to study the data and to make judgements about the types of customers that make up the price- and quality-driven segments. This element of uncertainty tends to reduce its utility for decision making compared with decision tree analysis.

13.6.3 Drawing conclusions
As we have stated many times in this book, the purpose of surveys is to take action to improve the business. Loyalty segmentation will improve the effectiveness of action by focusing it on the loyalty segments where the company can make the biggest difference. To accurately draw these conclusions, companies should apply the analytical techniques illustrated in Figures 13.2 to 13.10 not to the entire sample, but to each of the loyalty segments in turn. Obviously, only the scores given by the customers in the relevant segment would be used for the analysis. This does result in a need for large samples, since at least 200 customers are needed in each segment for adequate reliability and there could be many segments. As well as four loyalty segments for the company there will be four for each competitor, and in some markets there could be four, five or even more competitors. Most companies should start with defending their own 'at risk' customers, especially those in the vulnerable segment. For this, the starting point would be Figures 13.4 and 13.5, which would show where the company is least meeting its own vulnerable customers' requirements. Provided the questionnaire asked about the attraction and accessibility of alternative suppliers, the company will know which competitor its vulnerable customers would be most likely to switch to. The information displayed in Figures 13.6 and 13.7 would pinpoint how best to counter this competitive threat. Of course, depending on the percentage of customers in each loyalty segment and the prevalence of switching in the market, it may be more sensible to focus retention strategies on the flirtatious segment, or even the available one, although it is often not cost-effective to achieve sufficient attitude change in the available segment.

Customer acquisition programmes are usually best targeted on competitors' available customers, followed by the flirtatious segments, but which ones? If there are five competitors there are ten segments of flirtatious and available customers. In this situation half the task is to identify the customers who are most dissatisfied with their current supplier. The second half is to pinpoint which of those are most likely to be attracted to the benefits offered by your own company.

Using Figure 13.18, it is possible to match the importance scores of a competitor's available or flirtatious customers with the performance scores they gave for your own organisation.

FIGURE 13.18 Win and keep the right customers (bar chart comparing the importance scores given by a competitor's available or flirtatious customers with the performance scores they gave your own organisation on the eight requirements)


As shown in the chart, if the café, fresh meat and prices were very important to these customers, the company would not be well placed to win and keep them. However, the chart shows a very good match between the needs of these customers and the company's strengths, identifying winnable customers whose requirements the company can meet or exceed. There is no point attracting a competitor's disgruntled customers who are going to be just as dissatisfied with the benefits provided by your own company.

KEY POINT
There is no point winning customers you are unlikely to keep.

Clearly, this work is detailed and time consuming, but will pay handsomely if it prevents a company from losing good customers it could have kept, or if it saves it from incurring the high cost of winning customers whose long term loyalty it is unlikely to attain.

Conclusions
1. A simple comparison question provides a good overview of how an organisation is seen by its customers relative to other similar organisations, but will be less useful in highly competitive markets.
2. Competitor comparison surveys must be based on a random and representative sample of all the customers in the market for the product or service.


3. Telephone interviews, conducted by an independent agency, will normally be used for competitor comparison surveys, but response rates will be lower than for customer satisfaction surveys.
4. In competitor surveys, respondents score perceived performance rather than satisfaction.
5. By comparing the satisfaction gaps of its own customers with the areas where it most under-performs key competitors, a company can make decisions about how to improve its market position.
6. Relative perceived value is based on the assumption that customers choose the supplier that provides the best value, in other words the benefits delivered relative to the cost of obtaining the product or service.
7. Since it covers only two variables, quality and cost, relative perceived value can provide a visual overview of the relative performance of all competitors in a market, as perceived by customers.
8. According to the relative perceived value concept, companies offering 'superior value' will gain market share.
9. For markets that are not too price sensitive, market standing will provide a better picture of competitive positioning than relative perceived value.
10. A points share can be used to determine the most appropriate main survey analysis technique. Only if price is as important, or almost as important, as all the other customer requirements combined is relative perceived value suitable.
11. In highly competitive markets companies need a detailed understanding of the customers most likely to switch suppliers.
12. To maximise market share, companies must efficiently focus resources on the most winnable potential customers.
13. The ability to accurately target customers considerably improves the effectiveness of customer acquisition and customer retention strategies.
14. The clearest technique for identifying the biggest differences between segments in customer satisfaction and loyalty research is decision tree analysis.
15. Decision tree analysis helps a company to target its customer acquisition strategies on the type of customers that are most likely to be highly satisfied and loyal.

16. There is no point winning customers you are unlikely to keep.

References
1. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
2. Gale, Bradley T (1994) "Managing Customer Value", Free Press, New York
3. Aaker, Kumar and Day (1998) "Marketing Research", John Wiley and Sons, New York
4. Kervin, John B (1992) "Methods for Business Research", Harper Collins, New York


5. Myers, James H (1999) "Measuring Customer Satisfaction: Hot buttons and other measurement issues", American Marketing Association, Chicago, Illinois
6. Hofmeyr, Jan (2001) "Linking loyalty measures to profits", American Society for Quality, The American Customer Satisfaction and Loyalty Conference, Chicago
7. Rice and Hofmeyr (2001) "Commitment-Led Marketing", John Wiley and Sons, New York
8. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York
9. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
10. Wind, Yoram (1978) "Issues and Advances in Segmentation Research", Journal of Marketing Research, (August)
11. Green, P E (1977) "A New Approach to Market Segmentation", Business Horizons, (February)
12. McGivern, Yvonne (2003) "The Practice of Market and Social Research", Prentice Hall / Financial Times, London
13. Dillon, Madden and Firtle (1994) "Marketing Research in a Marketing Environment", Richard D Irwin Inc, Burr Ridge, Illinois


CHAPTER FOURTEEN

Advanced analysis: Understanding the causes and consequences of customer satisfaction

The methodology covered so far in this book will enable organisations to achieve the two primary objectives of a CSM process. Firstly, and most importantly, it must provide a truly accurate reflection of how satisfied or dissatisfied customers feel about their customer experience. Secondly, it must deliver clear conclusions and actionable outcomes that enable the organisation to make improvements. For most companies, this straightforward approach will provide all they need from a CSM system. However, for a relatively small percentage of organisations that have already attained unusually high levels of customer satisfaction, more complex analytical techniques will be necessary, and these will be covered in the next two chapters.

At a glance
In this chapter we will:

a) Examine how asymmetry affects customer satisfaction data.

b) Introduce the concept of attractive quality.

c) Explore how customers' requirements can be classified into categories such as satisfaction maintainers and enhancers.

d) Explain how to identify maintainers and enhancers.

e) Consider how to highlight the best improvement opportunities where relationships are broadly linear.

f) Discuss the concept of delighting the customer.

g) Review the consequences of customer satisfaction, especially loyalty.

h) Explain how organisations can fully understand the relationship between customer satisfaction and loyalty.


14.1 Asymmetry
An important characteristic of customer satisfaction data is that it is often not linear in its relationships. Its asymmetric nature may affect conclusions about the antecedents and consequences of customer satisfaction1; in other words, the way organisational performance affects customer satisfaction and the way customer satisfaction affects outcomes such as loyalty. We will use the relationship between satisfaction and loyalty to illustrate the point.

Figure 14.1 shows a linear relationship between satisfaction and loyalty. Every 1% increase in customer satisfaction would result in a 1% gain in loyalty. This would be very convenient for planning purposes, but the real world is rarely so symmetrical. The relationship between the two variables is very unlikely to be a straight line, but much more likely to be curved, like the example shown in Figure 14.2.
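
The practical consequence is easy to demonstrate with a small calculation: on the invented convex curve below, a one-point satisfaction gain near the top of the scale delivers far more loyalty than the same gain in the middle. The shape of the curve and its parameters are assumptions chosen purely for illustration.

# Illustrative sketch only: a convex satisfaction-to-loyalty curve. The exponential
# form and its parameters are invented purely to show why the slope matters at the top.
import math

def loyalty(satisfaction, a=0.35):
    # loyalty (%) rises slowly at first, then steeply towards the top of the 1-10 scale
    return 100 * (math.exp(a * satisfaction) - math.exp(a)) / (math.exp(a * 10) - math.exp(a))

for s in (5, 6, 8, 9):
    print(f"satisfaction {s}: loyalty {loyalty(s):.0f}%, "
          f"gain from one more point: {loyalty(s + 1) - loyalty(s):.0f} points")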

FIGURE 14.1 A linear relationship (loyalty, from 20% to 100%, rising as a straight line with satisfaction on a 1 to 10 scale)

FIGURE 14.2 Non-linear relationship (loyalty rising as a curve with satisfaction, with most of the loyalty gain at the top of the satisfaction scale)


In Figure 14.2, the relationship between satisfaction and loyalty depends on where the company is on the curve. In the example shown, strong loyalty is achieved only at the highest levels of satisfaction. More about this later.

14.1.1 Attractive quality
The origin of theories about the asymmetric nature of customer satisfaction data was the work of the Japanese quality expert, Dr. Noriaki Kano2,3,4, who focused on the antecedents of customer satisfaction: the relationship between customers' needs and the organisation's ability to satisfy them. As long ago as 1979 Kano introduced the idea of 'attractive quality' to Konica. In the 1970s Konica had realised that to remain competitive its new product development programme had to radically differentiate the company from what was available at the time from competitors. However, Konica's sales department was reporting that customers were asking for only minor modifications to the existing models. Kano advised Konica to look beyond customers' stated needs by developing a deeper understanding of the customer's world and uncovering their latent needs. Konica staff examined consumers' photos at commercial processing labs and found many failures, such as under- and over-exposure or blurred images. Customers couldn't have been happy with many of their photos, but blamed their own inability to correctly operate the settings on the camera. Addressing these hidden customer needs created new features such as auto focus and automatic exposure setting.

Kano's advice to 'understand the customer's world' has been widely adopted by CSM experts. It was the origin of Michigan University's lens of the customer concept5. It is also the basis for using the type of projective techniques for CSM exploratory research described in Chapter 5, in order to delve beyond customers' 'top of mind' requirements and uncover their less conscious needs.

14.1.2 The Kano Model
Kano had basically discovered the difference between importance and impact covered in Chapters 4 and 10. His perspective was QFD (Quality Function Deployment), which was concerned with assimilating the voice of the customer into the product design process. The Kano model6, based on his concept of attractive quality, was developed to help designers to visualise product characteristics through the eyes of the customers and to stimulate debate within the design team. In particular, Kano pointed out that there are different types of customer need, as well as the fact that customers' requirements are not all equally important. Kano's analysis was therefore originally conceived as a tool to help the design team classify and prioritise customer needs. Shown in Figure 14.3, Kano identified three types of customer need, which he described in terms of customers' reactions to product characteristics.
1. The 'must be' factors.

These are the absolute basics without which it wouldn't be possible to sell the product in the first place. They are often referred to as 'the licence to operate'.


For example, a car that always starts and is watertight in wet weather. Failure to reach adequate quality / performance standards on 'must be' factors will result in very high levels of customer dissatisfaction and defection.

2. The 'more is better' factors.
Kano also called these 'spoken' or 'performance' characteristics, where each small improvement in quality or performance will make customers marginally more satisfied and vice-versa, such as the fuel consumption of a car. In his model they have a linear relationship with customer satisfaction, and are the product attributes most suited to a kaizen, or continuous improvement, approach.

3. The 'surprise and delight' factors.
These are the real 'wow' factors (attractive quality) that differentiate a product from its competitors, such as a car that automatically slows down in cruise control if the vehicle in front is too close. As latent needs, their absence doesn't result in dissatisfaction since they are not expected, but when provided they will often surprise and always delight the customer.

14.2 Applying asymmetry to customer satisfaction research
Kano's work was aimed primarily at manufacturers and was very product-focused, but more recent researchers have found that his fundamental principle of asymmetry remains valid for customer satisfaction data7,8,9, especially regarding the consequences of satisfaction. Investigating Xerox's concern that some of its satisfied customers were defecting, Jones and Sasser10 found that 'totally satisfied' customers were six times more likely to repurchase than 'merely satisfied' customers.

FIGURE 14.3 The Kano model (curves plotting satisfaction, from low to high, against quality or performance, from low to high, for the 'must be', 'more is better' and 'surprise and delight' factors)


14.2.1 Using asymmetry to categorise customer requirements
Regarding the antecedents of satisfaction, several theories have evolved for today's largely service-based economies. Oliver11 remained very close to Kano's three original categories but introduced the idea that some attributes can make customers satisfied but not dissatisfied, and vice-versa, using the chemical concept of valence to label his 'directional' theories. Hence, 'must be' requirements such as 'cleanliness of toilets' are taken for granted when good, but very conspicuous when poor. Oliver called them 'monovalent dissatisfiers' since they can generate dissatisfaction but not satisfaction. By contrast, he called 'surprise and delight' requirements 'monovalent satisfiers' since, he claimed, they can cause satisfaction or delight but not dissatisfaction. His 'bivalent satisfiers' are the 'more is better' factors because they can result in both satisfaction and dissatisfaction depending on their level of performance.

Anderson and Mittal12 highlighted the importance of recognising the non-linearity of customer satisfaction data, particularly when looking at consequences such as loyalty and profitability. Keiningham and Vavra13 built on this for the antecedents as well as the consequences of customer satisfaction, dividing customers' requirements into 'satisfaction-maintaining attributes' and 'delight-creating attributes'. Satisfaction-maintaining attributes are expected by customers, display diminishing returns beyond a 'parity-performance' level and are incapable of delighting. Delight-creating attributes will surprise and delight customers, displaying accelerating returns to customer satisfaction beyond a minimal performance level.

14.2.2 Delighting the customer

The concept of customer delight has been of great interest to some organisations in recent years. Kano's original ideas were developed in the 1980s by much academic research into customers' emotional responses to consumption14,15. A common theme, consistent with Kano and highlighted by Plutchik16, is that delight is created by a combination of surprise and joy (extreme pleasure) in the customer experience. Based on theme park customers, Oliver, Rust and Varki17 concluded that delight was based on a 'surprising consumption experience', which could be the provision of a benefit that was totally unexpected or alternatively a surprisingly high level of performance on a benefit that was expected. In either case, it is the element of surprise that 'arouses' customers and makes an enduring impact on their attitudes and future behaviour.

KEY POINT
Surprise is a crucial element in the ability to delight a customer.

14.2.3 Enhancers and maintainers

Many CSM practitioners and authors have used asymmetry to divide customers' requirements into two broad categories such as 'satisfaction maintainers' and 'satisfaction enhancers' or 'satisfaction maintaining' and 'delight creating' attributes13. Maintainers, such as 'cleanliness of the toilets', 'on-time delivery' or 'reliability of the car', will behave more like the bivalent 'more is better' factors at the low and mid points of the performance range. Unacceptable performance will cause extreme dissatisfaction, but improvements in performance will increase customer satisfaction. The key characteristic of maintainers is that they will reach a point where additional improvement in performance will not deliver a corresponding increase in satisfaction. Once the toilets are consistently very clean, the cost of continually polishing the ceramics and stainless steel until everything gleams will not produce a return on investment. As shown in Figure 14.4, customer satisfaction climbs strongly as poor performance moves to good, but then levels off. This type of curve would therefore be classified as a 'satisfaction maintainer', since if customer satisfaction is at the required level, performance should simply be maintained rather than making efforts to improve it.

KEY POINT
Satisfaction maintainers are requirements where organisations must perform adequately to satisfy customers, but where performance above the expected level rarely translates into satisfaction gains.

Customer requirements where improvements in performance will continue to increase satisfaction for much longer are known as 'satisfaction enhancers'. These are a combination of the delighters and the bivalent 'more is better' factors since they are capable of making customers highly satisfied either by surprising them with a benefit they didn't expect or by delivering exceptional levels of service on expected customer requirements.

KEY POINT
Satisfaction enhancers are requirements where exceptional performance is much more likely to be translated into very high levels of customer satisfaction.

FIGURE 14.4 Satisfaction maintainer
[Chart: customer satisfaction (y axis) against 'cleanliness of toilets' (x axis); satisfaction rises steeply then levels off]


Scott Cook, founder of American software developer Intuit, realised that building a "customer evangelist culture"19 would be fundamental to the long term success of the company. Unlike most software companies, Intuit had technical staff take turns in answering customer calls, read customers' suggestion cards and even take part in the "Follow Me Home" initiative, for which Intuit staff accompanied a new purchaser of its Quicken personal finance software to their home, where they watched the customer unpack, install and start to use Quicken. This enabled them to understand the user-friendliness of the product from the customer's perspective and, in particular, whether it was meeting the company's objective that customers should be able to use it within 30 minutes of opening the box. However, the real ongoing satisfaction enhancer for Intuit in the long run was technical support, where the department's goal was not just to efficiently answer customers' queries, but to "create apostles". In fact, their goal was to treat customers so well that their experience prompted them to tell five friends about Quicken. Unlike most competitors, Intuit recruited highly intelligent representatives for its call centre, paid them well and trained them to answer customers' queries on wide ranging personal finance issues, not just the technical aspects of the software. Even though Quicken retailed for only around $20 at the time, buyers were entitled to this high quality customer support for free and for life. Whilst many companies would have regarded this policy as financial suicide, Cook realised that the customer lifetime value benefits from retention, related sales and referrals would far exceed the cost of the service20.

This type of customer requirement is therefore known as a satisfaction enhancer. Unlike Kano's 'surprise and delight' factors, poor performance on enhancers will result in dissatisfaction since good service, attitude and other soft skills are expected. However, unlike maintainers, continual striving for excellence in these areas (being great not just good) will continue to have a positive impact on customer satisfaction before eventually hitting diminishing returns. This pattern of events would result in the type of S-shaped curve shown in Figure 14.5.

FIGURE 14.5 Satisfaction enhancer
[Chart: customer satisfaction (y axis) against 'friendliness of serving staff' (x axis); an S-shaped curve]


14.3 Identifying enhancers and maintainers

As we know from Chapters 4 and 10, the strength of the relationship between two variables, such as 'cleanliness of the toilets' and customer satisfaction, can be identified through statistical techniques such as correlation or multiple regression. However, these methods will not identify the extent of any non-linearity in the relationship, so other approaches are needed to pursue concepts such as enhancers and maintainers. This section explains three methods for identifying asymmetry in customer satisfaction data and understanding its implications.

KEY POINT
Statistical techniques such as correlation and multiple regression demonstrate the strength of a relationship between two variables but not the linearity of the relationship.

14.3.1 Intuitive judgements

At a simple level, enhancers and maintainers can be estimated through experienced judgement. Givens such as 'cleanliness of the toilets' and 'on-time delivery' tend to be maintainers, whilst 'friendliness of staff' and 'rewarding loyalty' are more likely to be enhancers. A decade before Kano wrote about 'attractive quality', Theodore Levitt21 had already introduced the idea of differentiating the product, stating that: "The new competition is not between what companies produce in their factories but between what they add to their factory output in the form of packaging, services, advertising, customer advice, financing, delivery arrangements, warehousing and other things that people value." Levitt's 'total product' concept22 is illustrated in Figure 14.6.

FIGURE 14.6 The total product
[Diagram: concentric levels showing the generic product, expected product, augmented product and potential product]


If the generic product is a hotel room, the expected product is equivalent to maintainers such as clean sheets and bathroom, and the augmented product covers enhancers such as helpful staff and speedy room service. Levitt's potential product is similar to the 'surprise and delight' factors and, like Kano, this is where he felt manufacturers needed to focus their competitive strategies.

At this level it is a simple intuitive task to identify maintainers and enhancers by dividing customers' requirements into expected and augmented benefits. However, since gut-feel analysis is clearly a poor basis for management decision making, a more scientific method is required.

14.3.2 Internal metrics

An accurate, but time-consuming method is to track the relationship between performance and customer satisfaction on specific requirements, as illustrated in Figure 14.7, which shows internal metrics for average daily response time over 24 months on the left hand y axis and monthly customer satisfaction scores for response time on the right hand axis.

The S-shaped customer satisfaction curve shows that, for the organisation concerned, improving response time from an unacceptable level of five days or longer generates strong gains in customer satisfaction down to an average response time of two days. Beyond this point, the return from further improvements in response time rapidly diminishes. Any company presented with figures like this would have to conclude that the cost of lowering response times below two days would be better invested in other customer benefits.

FIGURE 14.7 Internal and external measures
[Chart: months 1 to 24 on the x axis, average response time in days (left y axis) and customer satisfaction score (right y axis); one line for response time, one for satisfaction]


Based on the research of Finkelman23, Myers24 calls these points 'breakpoints', and illustrates the concept with a fast food restaurant which found that waiting up to five minutes made no difference to customer satisfaction but that longer waiting time progressively reduced satisfaction. Five minutes was therefore the speed of service breakpoint that must not be exceeded.

KEY POINT
Where organisations possess accurate records of service levels, they can be compared over time with customer satisfaction data to identify maintainers and enhancers.
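Where such paired records exist, the breakpoint can be approximated by ordering the months from worst to best performance on the internal metric and measuring how much satisfaction is gained per unit of improvement along the range. A minimal sketch, using hypothetical monthly figures loosely modelled on the response time example above (the numbers and the 0.5-points-per-day threshold are illustrative, not taken from any real survey):

```python
import numpy as np

# Hypothetical monthly figures: average response time in days and the
# customer satisfaction score (out of 10) for 'response time'.
response_time = np.array([7.5, 6.8, 6.1, 5.4, 4.9, 4.2, 3.6, 3.1,
                          2.6, 2.2, 1.9, 1.6, 1.4, 1.2, 1.1, 1.0])
satisfaction  = np.array([3.1, 3.5, 4.0, 4.7, 5.4, 6.2, 7.0, 7.7,
                          8.2, 8.5, 8.6, 8.6, 8.7, 8.7, 8.7, 8.7])

# Order the months from worst to best performance (longest response time first).
order = np.argsort(-response_time)
x, y = response_time[order], satisfaction[order]

# Satisfaction gained per day of improvement between successive points.
# x decreases along the range, so -diff(x) is the number of days saved.
gain_per_day = np.diff(y) / -np.diff(x)

# Flag the first point beyond which further improvement adds little,
# using an illustrative threshold of 0.5 satisfaction points per day saved.
# (Real monthly data would usually need smoothing before doing this.)
flat = np.where(gain_per_day < 0.5)[0]
if flat.size:
    print(f"Returns diminish rapidly below about {x[flat[0] + 1]:.1f} days")
```

On figures shaped like these, the output would point to roughly two days, echoing the conclusion drawn from Figure 14.7.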

14.3.3 Survey data

Whilst hard data of the type shown in Figure 14.7 is ideal for making and justifying customer management decisions, such objective internal metrics will often not be available for many aspects of the customer experience. For the softer skills such as staff behaviours, truly objective internal metrics will never be available, so survey data will have to be used for identifying enhancers and maintainers. This is best done by producing the type of chart shown in Figures 14.8 to 14.10. In all three charts, the left hand y axis shows customer satisfaction with the organisation overall (based on an overall satisfaction question), the x axis shows customer satisfaction with the requirement concerned and the line plots the relationship between those two sets of scores. The right hand y axis and the grey bars show the number of respondents that gave each score for the requirement. Although the outcome variable shown in these three charts is customer satisfaction, it should be whatever outcome is considered most appropriate. Figures 14.11 and 14.12, for example, will show customer loyalty as the outcome variable.
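The data behind charts like Figures 14.8 to 14.10 is straightforward to assemble: group respondents by their score on the requirement, then take the mean of the outcome variable and the respondent count at each score. A minimal sketch, assuming a hypothetical survey file with per-respondent columns 'quality' (satisfaction with the requirement, 1-10) and 'overall' (overall satisfaction, 1-10); the file and column names are illustrative:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical survey file with one row per respondent.
df = pd.read_csv("survey.csv")          # columns: 'quality', 'overall'

# Mean overall satisfaction and respondent count at each attribute score.
summary = (df.groupby("quality")["overall"]
             .agg(mean_overall="mean", respondents="count"))

fig, ax1 = plt.subplots()
ax2 = ax1.twinx()

# Grey bars: number of respondents giving each score for the requirement.
ax2.bar(summary.index, summary["respondents"], color="lightgrey")
# Line: relationship between attribute satisfaction and overall satisfaction.
ax1.plot(summary.index, summary["mean_overall"], marker="o")
ax1.set_xlabel("Satisfaction with quality")
ax1.set_ylabel("Customer satisfaction overall")
ax2.set_ylabel("Number of respondents")
ax1.set_zorder(ax2.get_zorder() + 1)    # draw the line above the bars
ax1.patch.set_visible(False)
plt.show()
```

The same grouping works with loyalty, recommendation or any other outcome variable in place of overall satisfaction.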

FIGURE 14.8 Linear example
[Chart: satisfaction with quality (1-10) on the x axis, customer satisfaction overall on the left y axis, number of respondents on the right y axis; attribute shown is 'Quality of product']


Broadly speaking, the relationships depicted in the three charts are linear (Figure 14.8), a satisfaction enhancer (Figure 14.9) and a maintainer (Figure 14.10). However, the main point is that when using real world data, customer requirements simply do not cluster neatly into the type of 'satisfaction maintaining' and 'delight creating' attributes discussed so far in this chapter.

FIGURE 14.10 Maintainer example
[Chart: satisfaction with clarity of billing (1-10) on the x axis, customer satisfaction overall on the left y axis, number of respondents on the right y axis; attribute shown is 'Clarity of billing']

FIGURE 14.9 Enhancer example
[Chart: satisfaction with friendliness (1-10) on the x axis, customer satisfaction overall on the left y axis, number of respondents on the right y axis; attribute shown is 'Friendliness of the customer service advisor']


KEY POINT
In the real world, data for the antecedents of customer satisfaction tend to be broadly linear, similar to Kano's 'more is better' factors. Satisfaction maintainers and enhancers are appealing theoretical concepts that rarely exist in the real world.

Based on an analysis of hundreds of satisfaction surveys conducted by The Leadership Factor and hundreds of thousands of customer responses, the less classifiable pattern of data shown in Figure 14.11 or the broadly linear examples shown in Figures 14.8 and 14.12 would be much more typical than the maintainer and enhancer examples in Figures 14.9 and 14.10. There are several reasons for this. Firstly, as we have stated earlier in this book, customer satisfaction data tends to be positively skewed. The relatively normal distribution shown in Figure 14.10 (although still somewhat positively skewed) is not typical because it reflects an organisation with quite low customer satisfaction. In today's competitive markets, the data distributions in Figures 14.8, 14.9, 14.11 and 14.12 are much more representative of the levels of customer satisfaction typically achieved. As well as making the relationship curve more volatile at lower levels of satisfaction, where there are relatively few respondents even with large samples, this positive skew reduces the likelihood of seeing the classic satisfaction maintainer effect because few of the companies concerned are performing badly enough. Much more relevant, however, is the fact that the real world simply does not often conform to theoretical constructs based on asymmetry, especially regarding the relationship between customer satisfaction and its antecedents. In this respect our conclusions are consistent with those drawn by Michigan University's Johnson and Gustafsson, who state: "We have found that concern over non-linearities when analyzing quality and satisfaction data is often unwarranted, especially when it comes to attributes and benefits. Although non-linear relationships certainly exist, they tend to be observed more over time … or across market segments. For any given market segment at one point in time, a linear relationship is usually all that is called for."5 See Section 14.3.5 for further examination of this point.

FIGURE 14.11 Unclassifiable data pattern
[Chart: customer satisfaction with décor (1-10) on the x axis, customer satisfaction overall on the left y axis, number of respondents on the right y axis; attribute shown is 'Décor of the restaurant']



Rather than attempting to force customer requirements into categories such as enhancers and maintainers, it will usually be more informative to draw specific conclusions for each attribute based on the gradient of the curve and the distribution of satisfaction scores, as explained in the next two sections.

14.3.4 Return on investment

Many companies will see a more or less linear relationship between customers' requirements and overall satisfaction or loyalty, but some of the lines will be steeper than others. Figure 14.12 uses data from the same company as Figure 14.11. Based on the steepness of the curve, the restaurant is likely to see a much better return on investment from efforts to improve the welcome on arrival than from re-decorating. The outcome variable in Figures 14.11 and 14.12 is loyalty (based on propensity to return and to recommend), rather than satisfaction, but it is clear that the décor makes relatively little difference to most customers' loyalty – those least liking the décor score an average of 6.2 across the two loyalty questions, whereas customers who most like the décor score only two points higher for loyalty on average. By contrast, there is a range of five points across loyalty scores given by customers that were most satisfied and least satisfied with the welcome on arrival.
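A quick way to put numbers on this comparison is to calculate, for each requirement, how far the outcome variable moves across the requirement's satisfaction range. A minimal sketch, assuming a hypothetical pandas DataFrame with one row per respondent and illustrative column names 'loyalty', 'decor' and 'welcome' (none of these names come from the survey itself):

```python
import pandas as pd

df = pd.read_csv("survey.csv")   # hypothetical file: 'loyalty', 'decor', 'welcome'

def outcome_range(df, attribute, outcome="loyalty", min_n=30):
    """Spread of mean outcome between the lowest- and highest-scoring
    respondent groups on an attribute, ignoring sparsely populated scores."""
    means = df.groupby(attribute)[outcome].agg(["mean", "count"])
    means = means[means["count"] >= min_n]          # avoid volatile tails
    return means["mean"].max() - means["mean"].min()

# Rank attributes by how much difference they appear to make to loyalty.
for attr in ["decor", "welcome"]:
    print(attr, round(outcome_range(df, attr), 1))
```

On data like that behind Figures 14.11 and 14.12, 'welcome' would show a spread of around five points against roughly two for 'decor', pointing investment towards the welcome on arrival.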

FIGURE 14.12 Good return on investment
[Chart: customer satisfaction with the welcome (1-10) on the x axis, customer loyalty on the left y axis, number of respondents on the right y axis; attribute shown is 'The welcome on arrival']


KEY POINT
When drawing conclusions on how to improve customer satisfaction, organisations should focus on the steepness of the curve.

14.3.5 Customer segments

As stated by Johnson and Gustafsson5, asymmetry at attribute level is more likely to be seen over time (as in Figure 14.7) or across segments. The latter effect may be present when the slope of the relationship varies at different points along the curve, as shown in Figure 14.13. Clearly, this financial services company has very well presented employees in the main, but it could obviously benefit from understanding and addressing the concerns of the 10% of customers that score below 7 for satisfaction with staff appearance. It may be that they are more likely to visit specific branches, where management is less focused on staff appearance. It may be that a certain customer segment, such as older, more affluent customers (who may be very important to the company), is more critical of staff appearance. The key point is that by focusing satisfaction improvement efforts on the steepest part of the curve, a superior return on investment will be achieved.

14.4 Customer delight: myth or reality?

As we have seen earlier in this chapter, some customer requirements are said to be 'delight creating'13 or 'surprise and delight factors'3,4.

FIGURE 14.13 Segment differences
[Chart: satisfaction with staff appearance (1-10) on the x axis, recommendation on the left y axis, number of respondents on the right y axis; attribute shown is 'Appearance of employees']


Many commentators have claimed that in today's competitive markets customer delight rather than 'mere satisfaction' is necessary25,26,27. Recent research, however, has cast doubt on the feasibility of delighting the customer. As reported earlier in this chapter, Oliver, Rust and Varki17 supported the positive impact of delight on customers' future attitudes and behaviour, and found evidence of delight in a study of theme park customers. In the same article, however, the authors found less evidence of delight amongst classical music concert-goers and questioned whether delighting the customer was feasible for less exciting products and services. To investigate this hypothesis further, Finn surveyed website users and found little evidence of surprise or delight and a much stronger relationship between satisfaction and intention to revisit than between delight and intention28.

It may be that for most organisations in the 21st century, the concept of customer delight is fundamentally flawed. Kano4 and Levitt22 recognised that today's delighters are tomorrow's givens. In the real world, it is virtually impossible and certainly not cost-effective to continually surprise customers by providing something they didn't expect. Kano's pure theory is more applicable to the product development process, where setting an objective to introduce a completely new feature that customers had not demanded (or even a totally new product concept such as the original Sony Walkman) may be feasible due to the long timescale and high level of investment involved. However, most organisations today are in service industries where surprising customers is not a practical goal. Organisations must also distinguish between the feasibility of surprising one individual customer (like the colouring book example in Chapter 1) and achieving that effect with enough customers to make any significant difference to the company's financial performance.

KEY POINT
Delighting an individual customer may be feasible but achieving surprise and delight with enough customers to significantly affect the financial performance of the business is not practical, especially in service industries.

In fact, due to the service intensive nature of many companies' operations, keeping performance on satisfaction maintainers at acceptable levels will often present a considerable challenge without getting distracted by schemes to wow individual customers. Although delivering a continual stream of unexpected benefits is not a realistic strategy, generating very high levels of customer satisfaction by consistently meeting or exceeding customers' conscious requirements is feasible, although far from easy.

However, even the widely accepted goal of exceeding customers' requirements where possible has been challenged. Schneider and White29 comment on the widespread assumption in service quality literature going back to the SERVQUAL model that meeting customers' expectations is good but exceeding them is even better30. They question the prevailing view in the service quality field that 'more is always better'31, suggesting that some service requirements can be 'ideal point attributes'29, where performance beyond the 'ideal' level will be detrimental to customer satisfaction. This should not be confused with satisfaction maintainers, where performance beyond the adequate or expected level is pointless since it will deliver little if any additional benefit in customers' eyes, but would not reduce customer satisfaction. By contrast, exceeding customers' expectations on ideal point attributes would actually have a negative impact on the total customer experience. To illustrate this concept Schneider and White refer to earlier research in the convenience store market32 where excessive friendliness and personal attention was found to conflict with the more important requirements of efficiency and speed of service. In the busy convenience store environment, more smiles were not better beyond the 'ideal point'.

When we consider the 'ideal point' concept in the context of everything we have said about customer satisfaction so far in this book, it holds few surprises. Customers base their satisfaction judgements on their feelings about the total customer experience. If two or more requirements are somewhat contradictory, suppliers have to make choices about their performance and the basis of their decision should always be the requirements' relative importance to the customer. As we have said many times, to succeed at customer satisfaction, organisations have to 'do best what matters most to customers'.

Since many organisations fail to meet customers' requirements even on the basics, it is achieving consistently high levels of customer satisfaction, rather than exceeding expectations or surprising and delighting the customer, that will achieve the greatest return for most companies. This argument is succinctly summarised by Barwise and Meehan33 in their book "Simply Better: Winning and keeping customers by delivering what matters most":

"We believe that your first priority should be to improve performance on the things managers often dismiss as 'table stakes', 'hygiene factors' or 'order qualifiers' (as opposed to 'order winners') … companies assume that they need to offer something unique to attract business. Secondly, they assume that years of competition have turned the underlying product or service into a commodity. In reality, what customers care most about is that companies reliably deliver the generic category benefits, but, far too often, that does not happen. Therefore, most businesses have a big opportunity to beat the competition, not by doing anything radical and certainly not by obsessing about trivial unique features or benefits, but instead by getting closer to their customers, understanding what matters most to them, and providing it simply better than the competition."33

KEY POINT
The vast majority of organisations will most effectively improve their business performance by focusing on elimination of negative customer experiences rather than aiming to exceed customers' expectations.


Starbucks discovered this fact when its customer satisfaction levels fell, despite 'delighting' customers with a succession of new and highly innovative coffee products34. The company had always placed considerable emphasis on new product development and it conducted extensive research into customers' tastes and attitudes towards new products. Customers did like the new beverages and demand for them was strong, but Starbucks' focus on innovation had resulted in the company taking its eye off a much less exciting but fundamental element of the customer value proposition – speed of service. The new drinks were often complicated and labour intensive to prepare, increasing the time staff took to serve customers. Falling customer satisfaction was very worrying since the company knew there was a strong relationship between satisfaction and sales. It therefore extended its customer satisfaction research, including measures of importance as well as satisfaction. This showed that fast, convenient service was far more important to customers than new, innovative drinks. Starbucks therefore spent $40 million increasing staff levels as well as improving processes for taking orders and preparing drinks. This increased the percentage of customers being served in under three minutes from 54% to 85%, resulting in a big increase in customer satisfaction.

14.5 The consequences of customer satisfaction

Although they question the role of asymmetric data in causing customer satisfaction or dissatisfaction, Johnson and Gustafsson are much more convinced about the asymmetric relationship between satisfaction and loyalty5.

FIGURE 14.14 Harvard's asymmetric satisfaction-loyalty relationship
[Chart: satisfaction (1 to 10) on the x axis, loyalty (20% to 100%) on the y axis; the curve runs from 'saboteur' through the zones of defection and indifference to the zone of affection and 'apostle']


To establish the links between customer satisfaction and its consequences, the survey questionnaire must contain questions about those outcomes – typically one or more relevant loyalty questions selected from those described in Chapter 9. As with the antecedents of customer satisfaction, the overall strength of a relationship can be established using statistical techniques, but this will not account for the effects of non-linearity.

We have already seen in Chapter 2 the classic non-linear curve produced by Harvard Business School20 to illustrate the asymmetric relationship between customer satisfaction and loyalty, repeated here as Figure 14.14. There is wide agreement that this relationship is often non-linear5,8,9,12,13. There is less agreement, however, on the precise nature of this asymmetric relationship. Whilst agreeing with Harvard's principle that customer loyalty is achieved only at the highest levels of satisfaction, Keiningham and Vavra13 illustrate the relationship differently, as shown in Figure 14.15.

In practice, as we pointed out for the antecedents of customer satisfaction in Section 14.3.3, there is no such thing as a standard curve that will accurately reflect the relationship between customer satisfaction and loyalty for all companies. This is confirmed by Jones and Sasser10, who illustrate how the relationship typically differs across five markets (Figure 14.16). Each organisation must therefore identify its own curve, since the nature of its satisfaction – loyalty relationship will have profound implications for its entire customer management strategy.

KEY POINT
There is no standard curve or formula that accurately depicts the relationship between customer satisfaction and loyalty.

FIGURE 14.15 Zones of pain, mere satisfaction and delight
[Chart: customer satisfaction on the x axis (satisfiers then delighters), customer loyalty on the y axis; zones of pain, mere satisfaction and delight]


Figures 14.17 and 14.18 illustrate the relationship for two different organisations. In both cases, the customer satisfaction index is plotted on the x axis and the loyalty index on the left hand y axis. The right hand axis shows the number of survey respondents at each level of satisfaction. The small caption states the overall customer satisfaction index for each company.

For Company 1, the slope of the curve is very steep for all levels of satisfaction down to 55%, after which it levels off as there is little loyalty left to lose below this point. The steep part of the curve covers most of the satisfaction range and, more importantly, most of the customers – 88% of the respondents in this survey.

FIGURE 14.17 Satisfaction-loyalty relationship 1
[Chart: customer satisfaction index (25% to 95%) on the x axis, loyalty (10% to 100%) on the left y axis, number of respondents on the right y axis; the zone of opportunity is marked and the overall customer satisfaction index is 74.8%]

FIGURE 14.16 No standard satisfaction - loyalty relationship
[Chart: satisfaction (low to high) on the x axis, loyalty (low to high) on the y axis; separate curves for hospitals, automobiles, personal computers, airlines and local telephone]


The steep gradient of the curve shows that with 88% of its customers, this organisation faces both an opportunity and a threat. If it could increase satisfaction levels across that range it would gain a large increase in customer loyalty. Conversely, if satisfaction falls it risks losing many customers. The overall index and the shape of the histogram tell us that the satisfaction level of many customers (56% in fact) falls on the steepest part of the curve. To achieve the maximum gain in customer loyalty, all organisations should focus on their 'zone of opportunity', which is a combination of where the curve is steepest and where there are most customers. For Company 1, the zone of opportunity is clearly between 55% and 75% satisfaction. Company 1 should therefore base its PFIs on addressing the satisfaction gaps of customers in that zone. At these poor levels of satisfaction, it will typically be improving its performance on the basics, or satisfaction maintainers, that will be necessary.
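The zone of opportunity can be located directly from the survey file by combining the local steepness of the satisfaction – loyalty curve with the number of respondents at each level of satisfaction. A minimal sketch, assuming a hypothetical DataFrame with per-respondent satisfaction and loyalty indices in per cent ('csi' and 'loyalty' are illustrative column names, and the 5-point bands are an arbitrary choice):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("survey.csv")            # hypothetical: 'csi', 'loyalty' (both %)

# Bin respondents into 5-point bands of the customer satisfaction index.
bins = np.arange(25, 101, 5)
df["band"] = pd.cut(df["csi"], bins)

summary = df.groupby("band", observed=True).agg(
    respondents=("csi", "size"),
    mean_loyalty=("loyalty", "mean"),
)

# Local steepness: change in mean loyalty from one band to the next.
summary["slope"] = summary["mean_loyalty"].diff()

# Weight steepness by how many customers sit in each band;
# the highest-scoring bands form the zone of opportunity.
summary["opportunity"] = summary["slope"] * summary["respondents"]
print(summary.sort_values("opportunity", ascending=False).head())
```

On Company 1's data this calculation would highlight the bands between 55% and 75% satisfaction, where the curve is steep and most respondents sit.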

The second company is in a very different position. The steep part of the curve is from a satisfaction index of 70% downwards, and only 23% of its customers are in this zone. Above that level, not only is loyalty fairly constant, but it is at a very high level of around 90% and above. This may be because there are high switching barriers, so it may not be genuine loyalty, but at this point in time it forms an accurate illustration of the satisfaction – loyalty relationship for this company. In this situation there is clearly little short term benefit in attempting to delight the customer. Moving those scoring 75% to 85%, or 85% to 95%, would produce little or no benefit in terms of loyalty. By contrast, working on the issues responsible for the low satisfaction of those in the 55% to 70% satisfaction zone could significantly reduce customer decay. Even though it covers only a minority of its customers, the steepness of the curve dictates that the zone of opportunity for Company 2 is customers with an index between 55% and 70%.

FIGURE 14.18 Satisfaction-loyalty relationship 2
[Chart: customer satisfaction index (55% to 100%) on the x axis, loyalty (0% to 100%) on the left y axis, number of respondents on the right y axis; the zone of opportunity is marked and the overall customer satisfaction index is 78.7%]


KEY POINT
To achieve the maximum gain in customer loyalty, companies should focus on their 'zone of opportunity', which is a combination of where the curve is steepest and where there are most customers.

It is also possible to use the charts to compare the two companies' need to invest in satisfaction improvement. Company 1 clearly has more to gain from investing in customer satisfaction and more to lose from not doing so. The steepness of the curve suggests that it is operating in a market with few switching barriers, and most of its customers are in the zone of opportunity where there is a strong relationship with loyalty. Even quite modest gains in customer satisfaction should show a significant economic return since, at the steepest point in the curve, each 1% improvement in satisfaction leads to a loyalty increase of almost 2%, and we know from Chapter 2 that even small gains in loyalty can be very profitable. By contrast, Company 2 is in a much safer position. Its overall index is some way above the steep part of the curve and most customers are on the flat part of the curve where changes in customer satisfaction are not associated with higher or lower customer loyalty. We can call this the 'zone of stability'. However, Company 2's curve does display the 'cliff edge' phenomenon, where falling satisfaction reaches a point, just above 70% in this example, where loyalty is suddenly and strongly affected. Company 2 would therefore be advised to monitor the situation to ensure that its overall index, and the bulk of its customers, remain above the cliff edge danger point. In the short term it could work on addressing the concerns of the customers just below the cliff edge to reduce the customer decay that is occurring. Compared with Company 1, Company 2 has a much smaller percentage of its customers in the zone of opportunity and would therefore expect a lower financial return on its investment.

KEY POINT
Companies in the 'zone of opportunity' have a much stronger business case for investing in satisfaction improvement than those in the 'zone of stability'.

We can see therefore that improving customer satisfaction will be more profitable for some companies than others, but that almost all organisations will maximise their returns by focusing their improvement efforts and investment where they will produce the greatest return. Improving the effectiveness of actions taken to address PFIs is the focus of the next chapter.

Conclusions

1. If customer satisfaction relationships were linear, a given change in one variable would always result in the same degree of change in its corresponding outcome variable – for example, a 1% increase in customer satisfaction producing a 1% increase in loyalty whatever the level of satisfaction.


2. In the real world the relationship is often much less symmetrical – the impact that changes in customer satisfaction make on loyalty varies at different levels of satisfaction.

3. As well as the consequences of satisfaction (such as loyalty), the relationship between customer satisfaction and its antecedents can also be asymmetric, and this may affect decisions on PFIs.

4. For example, some customer requirements have been labelled 'satisfaction maintainers'. These are typically essential requirements, so poor performance by the supplier makes a very large negative impact on customer satisfaction but performance above a good level makes little difference.

5. By contrast, 'satisfaction enhancers', although often not amongst customers' most important requirements, can make a very strong positive difference to customer satisfaction at very high levels of performance. This forms the basis of concepts such as attractive quality, delighters and wow factors.

6. Enhancers and maintainers can be identified by tracking internal metrics against customer satisfaction but this would have to be done over a lengthy period. More practical therefore is to use survey data, plotting each requirement against overall satisfaction or loyalty.

7. In the real world, customers' requirements rarely conform obediently with consultants' favourite theories and often do show a fairly linear relationship with overall satisfaction. The gradient of the curve will therefore indicate the requirements most capable of improving customer satisfaction or loyalty.

8. Since surprise is an integral element of delight, it is not feasible for most organisations in today's service intensive markets to pursue a strategy of delighting the customer. Performing consistently well on customers' expected requirements, especially their most important ones, should be their objective.

9. There is widespread agreement that the relationship between satisfaction and its consequences, such as loyalty, is asymmetric, but there is no single universally applicable satisfaction – loyalty curve.

10. To achieve the best return on investment from improving customer satisfaction, companies should focus on the steepest part of their satisfaction – loyalty relationship curve.

References

1. Anderson and Sullivan (1993) "The Antecedents and Consequences of Customer Satisfaction for Firms", Marketing Science 12, (Spring)
2. Kano, Nobuhiku, Fumio and Shin-ichi (1984) "Attractive quality and must be quality", Quality Vol 14 No 2
3. Kano, Seraku, Takahashi and Tsuji (1996) "Attractive quality and must-be quality", in Hromi John D ed, "The best on quality", ASQC Quality Press, Volume 7 of the Book Series of the International Academy for Quality, Milwaukee
4. (1993) Special issue on Kano's methods for understanding customer-defined quality, Center of Quality Management Journal Vol 2 No 4 (Fall)
5. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
6. Johnson, Michael D (1998) For an analysis of the Kano model from the customer satisfaction perspective see "Customer Orientation and Market Action", Prentice-Hall, Upper Saddle River, New Jersey
7. Schneider and Bowen (1999) "Understanding Customer Delight and Outrage", Sloan Management Review 41, (Fall)
8. Fullerton and Taylor (2002) "Mediating Interactive and Non-linear Effects in Service Quality and Satisfaction with Services Research", Canadian Journal of Administrative Sciences 19, (June)
9. Mittal and Kamakura (2001) "Satisfaction, Repurchase Intent and Repurchase Behavior: Investigating the Moderating Effect of Customer Characteristics", Journal of Marketing Research 38, (February)
10. Jones and Sasser (1995) "Why Satisfied Customers Defect", Harvard Business Review 73, (November-December)
11. Oliver, Richard L (1997) "Satisfaction: A behavioural perspective on the consumer", McGraw-Hill, New York
12. Anderson and Mittal (2000) "Strengthening the Satisfaction-Profit Chain", Journal of Service Research, Vol 3 No 2
13. Keiningham and Vavra (2003) "The Customer Delight Principle", McGraw-Hill, Chicago
14. Westbrook, Robert A (1987) "Product/Consumption Based Affective Responses and Postpurchase Processes", Journal of Marketing Research 24, (August)
15. Holbrook and Hirschman (1982) "The Experiential Aspects of Consumption: Consumer Fantasies, Feelings and Fun", Journal of Consumer Research 9, (September)
16. Plutchik, Robert (1980) "Emotions: A Psychoevolutionary Synthesis", Harper and Row, New York
17. Oliver, Rust and Varki (1997) "Customer Delight: Foundations, Findings and Managerial Insight", Journal of Retailing 73, (Fall)
18. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
19. Taylor and Schroeder (2003) "Inside Intuit", Harvard Business School Press, Boston
20. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
21. Levitt, Theodore (1969) "The Marketing Mode", McGraw-Hill, New York
22. Levitt, Theodore (1980) "Marketing Success through Differentiation - of Anything", Harvard Business Review 58, (January-February)
23. Finkelman, Daniel (1993) "Crossing the zone of indifference", Marketing Management 2(3)
24. Myers, James H (1999) "Measuring Customer Satisfaction: Hot buttons and other measurement issues", American Marketing Association, Chicago, Illinois
25. Daffy, Chris (2001) "Once a Customer Always a Customer", Oak Tree Press, Dublin
26. Shaw and Ivens (2002) "Building Great Customer Experiences", Palgrave Macmillan, Basingstoke
27. Keiningham, Vavra, Aksoy and Wallard (2005) "Loyalty Myths", John Wiley and Sons, Hoboken, New Jersey
28. Finn, Adam (2005) "Reassessing the Foundations of Customer Delight", Journal of Service Research 8(2)
29. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
30. Parasuraman, Berry and Zeithaml (1985) "A conceptual model of service quality and its implications for future research", Journal of Marketing 49(4)
31. Brown, Churchill and Peter (1993) "Improving the measurement of service quality", Journal of Retailing 69(1)
32. Sutton and Rafaeli (1988) "Untangling the relationship between displayed emotions and organizational sales: The case of convenience stores", Academy of Management Journal 31(3)
33. Barwise and Meehan (2004) "Simply Better: Winning and keeping customers by delivering what matters most", Harvard Business School Press, Boston
34. McGovern, Court, Quelch and Crawford (2004) "Bringing Customers into the Boardroom", Harvard Business Review, November


CHAPTER FIFTEEN

Using surveys to drive improvement

Improving customer satisfaction is very difficult. Whilst customers are often quick to form negative attitudes if they receive poor service, they tend to be much slower to revise their opinions in a positive direction when a supplier improves, possibly because customers expect good service so take it for granted when it happens. Added to the difficulty of shifting customers' attitudes, organisations often display considerable inertia when it comes to making changes or improvements in processes, staff behaviours, policies or many other engrained practices that may need challenging to improve customer satisfaction. This is why it is essential that the CSM methodology contributes to rather than detracts from the organisation's ability to make improvements, and there are four main facets to this. First, it must be accurate, providing a measure that truly reflects how satisfied or dissatisfied customers actually feel. As we explained in Chapters 4 and 5, basing the survey on 'the lens of the customer' is the key methodological requirement in this respect. Second, it must be a tough measure, based on making customers more satisfied rather than making more customers satisfied, since a score that looks good will play right into the hands of any voices in the organisation that oppose change, investment or additional activity to improve customer satisfaction. This was illustrated in Chapter 3. Third, it must be sensitive enough to detect the small changes in customer satisfaction that typically occur. As we pointed out in Chapter 8, use of a 10-point numerical scale is the essential requirement here. Although the first three points are all essential foundations of a sound CSM methodology, the most important aspect for improving customer satisfaction is the fourth – the actionability of the outcomes generated by the survey. Since this is the biggest weakness of many customer satisfaction surveys, we will devote this chapter to addressing it.

At a glance

In this chapter we will:

a) Examine the disadvantages of too much data.

b) Review the type of survey information that should be reported.

c) Explore the differing reporting requirements of annual surveys compared with continuous tracking.


d) Explain how to justify the survey’s conclusions and recommendations for PFIs.

e) Consider the potential conflict between actionable outcomes and an accurate survey based on 'the lens of the customer'.

f) Explain how to use Customer Experience Modelling (CEM) to overcome this conflict.

g) Show how CEM can help organisations to monitor improvement.

h) Outline a simple method for demonstrating that customer satisfaction pays.

15.1 What are not actionable outcomes?

Judging by much of the CSM survey output that we have seen, many organisations, and even their research agencies, don't know the answer to this question. It is easy to give examples of common outcomes that are not useable for improving customer satisfaction.

15.1.1 Too much data

Worst of all is too much data, typically in the form of what researchers call 'cross-tabs', that split the results by every segment known to man. We have often seen table after table of cross-tabs filling a report the size of a telephone directory. Migrating from paper to a more up-to-date method of delivery, such as an interactive web reporting site, greatly improves the speed of finding specific pieces of information but does not address the problem of actionability of outcomes. This is because, rather like 'ideal point attributes', more is not better. Whilst we are in favour of drilling down into the data to identify any useful and statistically significant differences between customer types, customer attitudes (e.g. customers with different requirements or differing levels of satisfaction) or business units, most cross-tabs simply don't show them. Therefore, whilst drilling down is a painstaking but potentially useful task that should be carried out by someone in the organisation's research department (or its research agency), only the conclusions from the very small proportion of cross-tabs that will affect the action taken by the organisation to improve customer satisfaction should be presented to managers or employees. Wasting time studying and debating information that may be interesting but will make no difference to any action taken to improve customer satisfaction is one of the main reasons for the failure of many organisations' CSM processes.

KEY POINT
It is the quality, not the quantity, of CSM data that will lead to improvement in customer satisfaction.

15.1.2 'So what?' conclusions

The purpose of CSM is not to produce an interesting sociological study of customers' attitudes but to improve customer satisfaction, and hence customer loyalty and the business performance of the organisation. One could report, for example, that 55% of customers are satisfied with speed of service but 45% are dissatisfied with it, that over 50s are more satisfied than under 25s, or that 60% of customers are willing to recommend the organisation compared with 28% who are not (giving a net promoter score of 32%), but none of it would have any value. However interesting it might be, it adds no value to the organisation because it doesn't tell busy managers what to do to improve customer satisfaction. To judge the value of CSM survey conclusions, apply the 'so what?' test. What difference would knowing that information make to any decisions made or action taken by anyone in the organisation?

15.2 What are actionable outcomes?

15.2.1 Concise information

To produce change, CSM survey outcomes will have to engage the attention of senior management and operational middle management. To accomplish this difficult task, they must be presented with all the information they need (and none they don't need) in a clear and concise form. Whilst norms about exactly how much information should be provided to management will vary between organisations, it is possible to generalise about how much information is typically needed to convey the essentials of a customer satisfaction survey, and in this respect there will be differences between annual surveys and more frequent continuous tracking.

Annual surveys will need more background information in an executive summary since the basics of the CSM methodology may not be remembered by all managers from one year to the next. It can therefore be useful to remind people that the results provide an accurate reflection of customer satisfaction because the questions were based on the 'lens of the customer', identified through exploratory research. A reminder of the bare essentials of the data collection will also be helpful, including dates of fieldwork, method of data collection, representativeness of the sample and, especially for self-completion, the response rate. Next come brief details of the results – the headline measure of customer satisfaction and how good or poor this is. If the survey has been conducted previously, comparing against the company's own previous score provides the best performance yardstick, but benchmarking against other organisations is also useful (especially to stimulate action if satisfaction is poor), and is essential for first time surveys to provide context. Finally, and most importantly, come the PFIs – the actions that the organisation must take to improve customer satisfaction. For first time surveys it can also be helpful to add brief details of any other useful customer satisfaction improvement initiatives such as the internal and external feedback covered in the next two chapters. Even for updates, if they are a year or more apart, a brief reminder of the value of feedback would be advisable. In very concise form, this information can be squeezed onto one sheet of paper, but two pages are more realistic. Ideally, the information would be presented to management, giving them the opportunity to ask questions.


Executive summaries for continuous tracking surveys can be briefer since managers soon become familiar with the basics of the methodology. Consequently there are only two fundamental pieces of information that managers need to know on a regular basis. First, is the organisation succeeding in improving customer satisfaction, and second, what action should now be taken to improve it further (or reverse any decline)? This information can easily be provided on less than one page, although two aspects of continuous tracking should also be considered. Firstly, since monthly changes in customer satisfaction will typically be small they may not show through in the results and the headline measure may move up and down a little from one month to the next. To judge the organisation's performance, managers must therefore be given enough information to understand the trend. Ideally this should provide a short and medium term perspective. For example, 'At 84.3% the customer satisfaction index is now 3.1% above its baseline 2 years ago and has risen for 4 of the last 6 months.' This type of trending enables the organisation to avoid getting bogged down in month-by-month technical details such as confidence intervals, which almost always detract from, rather than add to, the company's propensity and ability to take action.
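A trend summary of this kind can be generated automatically each month rather than written by hand. A minimal sketch, using invented monthly index values chosen purely to reproduce the example sentence above:

```python
# Hypothetical monthly customer satisfaction index values, oldest first;
# the first value is treated as the baseline.
index = [81.2, 81.5, 81.3, 81.8, 82.0, 81.9, 82.3, 82.5, 82.4, 82.8,
         83.0, 82.9, 83.1, 83.3, 83.2, 83.5, 83.4, 83.7, 83.6, 83.9,
         84.1, 84.0, 84.3]

latest, baseline = index[-1], index[0]
last_six = index[-7:]              # seven values give six month-on-month changes
rises = sum(b > a for a, b in zip(last_six, last_six[1:]))

print(f"At {latest:.1f}% the customer satisfaction index is now "
      f"{latest - baseline:.1f}% above its baseline and has risen "
      f"for {rises} of the last 6 months.")
```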

A second characteristic of monthly tracking that affects reporting to management is the fact that due to the slowness of improving customer satisfaction, the PFIs may not change for quite a few months at a time, and this can give some managers the impression that no progress is being made. This can be mitigated by presenting suitable trend data for individual PFIs as well as for overall satisfaction, but periodically it will still be helpful to remind managers that improving customer satisfaction is a long haul rather than a quick fix and that due to the nature of customer satisfaction it is not abnormal for some requirements to be almost perpetual PFIs. This will apply when a requirement is extremely important to customers and is a 'more is always better' attribute, and will be exacerbated by the tendency for customers' expectations to increase, resulting in the organisation having to improve performance just to prevent customer satisfaction from declining. Organisations that are prepared to continually invest in new and better ways of 'doing best what matters most to customers' on a restricted range of requirements that are always critically important to customers will achieve much higher levels of customer satisfaction, loyalty and profit than those always looking for new quick wins.

KEY POINT
It is not unusual for some highly important customer requirements to be long term PFIs. The most successful organisations recognise this and accept the long haul of continually investing in new ways to do best what matters most to customers.

15.2.2 Clear authoritative conclusions

To have credibility with managers in most organisations, conclusions and recommendations need to be clear-cut and definitive. As we suggested in Section 15.1.2, trying to present all sides of an argument or providing a level of detail that leads to lengthy debates about the real meaning and messages in the information is always counter-productive for improving customer satisfaction. CSM should not follow the reporting conventions of most market and social research and all academic research, where all sides of the argument are typically reported even if it leads to no conclusions whatsoever. Customer satisfaction surveys are different. They have only one purpose – improving customer satisfaction. As we have seen in Chapter 12, customer satisfaction improvement is best achieved by focusing on only one or a very small number of PFIs, since trying to make improvements across the board results in changes being too small for customers to notice and to change their attitudes. When 15 to 20 customer requirements are measured in a CSM survey, it is often possible to create arguments for taking action on quite a number of them. Some may have low satisfaction scores, some will be extremely important to customers, some will have high impact coefficients, others will have attracted some very negative customer comments. Suggesting too many PFIs, or even reviewing too many options for potential PFIs, will almost always be detrimental to the organisation's success in improving customer satisfaction. Therefore, even when one could construct a credible case from the data for many of the requirements being PFIs, it is essential to select one or a very small number, using judgement if necessary as a tie-breaker, and authoritatively present them to management and colleagues as the conclusions and recommendations.

KEY POINT
Clear, unambiguous conclusions, authoritatively presented, are essential for improving customer satisfaction and loyalty.

15.2.3 Justifiable conclusions

The fact that conclusions should be clear and authoritative rather than too lengthy or open to debate does not take away the need to justify them. To believe in them, managers need to understand the logic behind focusing their customer satisfaction improvement efforts and resources on a very small number of PFIs. As well as explaining the rationale behind this approach, each PFI should be justified from the survey outcomes. It might have the largest satisfaction gap, the most negative comments, or possibly be a fundamental satisfaction maintainer where performance and customer satisfaction are not reaching the basic minimum level required. The table of outcomes shown in Figures 12.5 and 12.8 is an excellent concise and visual method for justifying the choice of PFIs.

15.2.4 Precise, actionable PFIs

To have the best chance of improving customer satisfaction, managers need to make the changes that will be most valued by customers. It will be much easier for them to do this if the survey generates very specific, tangible actions that are easy to implement rather than very broad areas for improvement. For example, saying that supermarket floor staff need to be more helpful to customers is rather vague, since it does not clarify how they should be more helpful. It is much more useful to recommend that when a customer asks for help in locating a product, staff should lead the customer to the product, ascertain precisely what the customer wants, place the required item(s) in the trolley and ask if the customer needs any more help.

15.3 Accuracy versus actionability

To maximise the actionability of customer survey outcomes, managers will understandably want to fill the questionnaire with highly specific questions, like the following examples. "If you asked for help to find a product, did the assistant take you to the correct location?" "Did the assistant ask you if you needed any more help with anything else?" There are three major problems with this approach.

1. Questions like those above are dichotomous rather than scaled, so they cannot be used to provide a measure of customer satisfaction. Without a trackable measure of customer satisfaction it will be impossible to judge unequivocally whether the organisation is making progress in improving customer satisfaction.

2. Covering the entire customer experience with highly specific questions like the examples shown would result in a questionnaire far longer than the ten minutes recommended in this book.

3. Most seriously, questions like the ones above are typical 'lens of the organisation' rather than 'lens of the customer' questions. As we explained in Chapter 4, when judging organisations, customers simply do not think in such specific, operational terms. Their attitudes are based on much more general impressions, such as how helpful or unhelpful the employees typically are.

KEY POINT
Highly specific questions aid actionability but often conflict with the lens of the customer.

As we have emphasised earlier, for an accurate measure of customer satisfaction that truly reflects how satisfied or dissatisfied customers feel, the questionnaire must be based on exactly the same criteria that customers use to make that judgement – hence the exploratory research to understand the 'lens of the customer' as the basis for designing the questionnaire. The disadvantage of the 'lens of the customer' approach is that the broad constructs used by customers to form their satisfaction judgements will, by definition, tend to be rather general and therefore not very actionable. However, there is a way to retain the accuracy of the 'lens of the customer' measure whilst also giving managers the actionable outcomes that they want and need, as explained in the next section.


15.4 Customer Experience Modelling

The examples given in Section 15.3 provide a good illustration of the accuracy versus actionability conflict. Customers tend to judge organisations on broad perceptions such as 'helpfulness of staff', but if that becomes a PFI many operational managers will not see it as an actionable outcome, since it doesn't tell them precisely how to make their staff more helpful. By contrast, the much more specific questions on helpfulness are totally actionable, so the starting point is to use any time available in the interview to insert some highly specific, actionable questions just on the PFIs. However, Customer Experience Modelling (CEM) involves far more than simply inserting a few additional questions. It adds three benefits to a CSM process:

1. It makes the survey outcomes more actionable.
2. It helps the organisation to monitor progress, especially when frequent tracking will not demonstrate a significant uplift in customer satisfaction every month.
3. It can provide information on the return on investment in customer satisfaction, whether on specific actions taken or on generally improving customer satisfaction.

We will start with actionability, firstly from the perspective of questionnaire design and then in terms of turning the answers into actionable outcomes.

15.4.1 Designing CEM questions

Since the first purpose of CEM is to provide additional, more focused information around the PFIs to make the outcomes more useful for managers, decisions about which questions to ask are crucially important. Three factors should influence the focus of CEM questions:

1. Customer comments generated from probing low satisfaction provide a good starting point for where to focus the questions. Analysing the comments for each PFI will identify the main causes of dissatisfaction with the requirement and provide excellent ideas for CEM questions.

2. Customer-facing employees often have considerable insight into the problems experienced by customers, and their views can be canvassed through workshops or discussion groups and, more quantifiably, through a mirror survey (see Chapter 16).

3. Since these are 'lens of the organisation' questions, they must fit in with the way the organisation works if they are to produce actionable outcomes, so managers should approve them for compatibility with processes, budgetary constraints and any other internal issues.

Having agreed the focus of the questions, their wording becomes critical to actionability. Dichotomous questions are often the most useful in this respect since they are very tangible and easy for customers to answer.

"Did your account manager agree specific deadlines with you for each phase of the project?"
"Did the customer service representative call you back on the agreed date?"


Sometimes, dichotomous questions are not precise enough. Where time is involved, you need to know how long something took, whether in minutes, weeks or whatever unit of time is appropriate. Examples include:

"How long is it since you met your account manager?"
"After you reported the fault, how long was it before the engineer arrived?"

Questions like these should be asked as open questions in interviews, with a coding frame for interviewers to classify responses into appropriate blocks of time. In a self-completion questionnaire, it would normally be a closed question with the customer ticking one of the response options. However, as we will explain in Section 15.4.2, the response options may be determined by how the question will be analysed.

Another effective CEM questioning routine is to take the customer through a short sequence of events, as with the following questions designed to understand customers' problem experience.

"Have you experienced a problem with XYZ Ltd in the last 3 months (or other appropriate length of time)?" Yes / No
If yes: "Please give brief details of the problem." The interviewer can code responses into the most common categories, e.g. "produce not fresh / faulty or substandard product (other than fresh produce) / unavailability of products / poor or off-hand service provided in-store / etc."
"Did you report the problem to anyone at XYZ?" Yes / No
If yes: "Who did you report it to?" (Appropriate response options here, e.g. "Store assistant / Customer Service Desk in store / Telephoned the customer helpline / Made a formal complaint to a manager")
If no: "Why didn't you report it?" (Appropriate response options here, e.g. "Didn't know who to report it to / Don't like complaining / Didn't think anything would be done about it / Problem discovered at home, difficult to report")
"How satisfied or dissatisfied were you with the way your problem was handled?" This is most usefully scored on the same 10-point scale as all the other satisfaction questions.
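For teams that script such a sequence electronically, the routing logic can be captured in a simple data structure. The Python sketch below is purely illustrative – the question codes, option lists and routing rules are assumptions made for this example, not part of the CSM method described here.

# A minimal sketch of the problem-experience question sequence described above.
# Question codes, option lists and routing rules are illustrative assumptions.
from typing import Optional

PROBLEM_SEQUENCE = [
    {"code": "Q1", "text": "Have you experienced a problem with XYZ Ltd in the last 3 months?",
     "options": ["Yes", "No"],
     "route": {"Yes": "Q2", "No": None}},          # customers with no problem skip the rest
    {"code": "Q2", "text": "Please give brief details of the problem.",
     "options": ["Produce not fresh", "Faulty or substandard product",
                 "Unavailability of products", "Poor or off-hand service in store"],
     "route": {"*": "Q3"}},
    {"code": "Q3", "text": "Did you report the problem to anyone at XYZ?",
     "options": ["Yes", "No"],
     "route": {"Yes": "Q4", "No": "Q5"}},
    {"code": "Q4", "text": "Who did you report it to?",
     "options": ["Store assistant", "Customer Service Desk", "Telephone helpline",
                 "Formal complaint to a manager"],
     "route": {"*": "Q6"}},
    {"code": "Q5", "text": "Why didn't you report it?",
     "options": ["Didn't know who to report it to", "Don't like complaining",
                 "Didn't think anything would be done", "Problem discovered at home"],
     "route": {"*": None}},                         # unreported problems end the sequence
    {"code": "Q6", "text": "How satisfied or dissatisfied were you with the way your problem was handled?",
     "options": list(range(1, 11)),                 # same 10-point scale as the other questions
     "route": {"*": None}},
]

def next_question(code: str, answer) -> Optional[str]:
    """Return the code of the next question for a given answer, or None if the route ends."""
    question = next(q for q in PROBLEM_SEQUENCE if q["code"] == code)
    return question["route"].get(str(answer), question["route"].get("*"))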

15.4.2 Driving actions

A key benefit of CEM is the provision of clear and simple results that link directly to things that employees are doing, or not doing. Dichotomous questions are particularly good for producing black and white, irrefutable information. A zero defects policy for the two behaviours reported in Figure 15.1 seems reasonable customer satisfaction practice, providing managers in the companies concerned with tangible information for taking action.


Time-based questions, such as the length of time since customers met their account manager or how long it took the service engineer to arrive, can be analysed in simple or complex ways. For actionability, simplicity is best, so the responses should be grouped into three or four time bands as shown in Figure 15.2. Clearly, if it is policy that all customers should see their account manager at least quarterly, or that service engineers should always visit within 3 days of a fault being reported, the organisations and managers involved in these examples have work to do. The actionability of CEM questions can be further improved by flagging customers by account manager, service team etc. so that the individuals or teams failing to meet the targets are highlighted. The simple act of reporting and monitoring this information usually stimulates significant performance improvement.
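As an illustration of the analysis side, the sketch below (in Python, with invented field names and example data) groups raw 'time since last visit' answers into the bands used in Figure 15.2 and flags the overdue percentage by account manager; it is a sketch of the approach, not a prescribed implementation.

# Illustrative only: banding time-based responses and flagging results by account manager.
from collections import Counter, defaultdict

def visit_band(months: float) -> str:
    if months < 1:
        return "Less than 1 month"
    if months <= 3:
        return "1-3 months"
    return "More than 3 months"

responses = [
    {"account_manager": "JC", "months_since_visit": 0.5},
    {"account_manager": "JC", "months_since_visit": 4.0},
    {"account_manager": "LH", "months_since_visit": 2.0},
    {"account_manager": "LH", "months_since_visit": 6.0},
]

overall = Counter(visit_band(r["months_since_visit"]) for r in responses)
by_manager = defaultdict(Counter)
for r in responses:
    by_manager[r["account_manager"]][visit_band(r["months_since_visit"])] += 1

total = len(responses)
for band, count in overall.items():
    print(f"{band}: {count / total:.0%}")
for manager, bands in by_manager.items():
    overdue = bands["More than 3 months"] / sum(bands.values())
    print(f"{manager}: {overdue:.0%} of customers not seen for more than 3 months")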

KEY POINT
For maximum actionability, link CEM questions to specific individuals or teams.

FIGURE 15.1 Dichotomous question outcomes
Specific deadlines agreed: Yes 94%, No 6%
Called back on agreed date: Yes 78%, No 22%

FIGURE 15.2 Time-based questions
Last saw account manager: Less than 1 month 32%, 1-3 months 40%, More than 3 months 28%
Speed of engineer's visit: Less than 24 hours 26%, 2 days 38%, 3 days 27%, More than 3 days 9%

When there is a sequence of questions, like the problem handling examples, it enables managers to identify precisely where failures are occurring, to a level of detail that may surprise even the employees doing the work. Answers to the individual questions can be reported as pie charts, but CEM becomes a more powerful tool when questions are linked together and reported as a flow chart. If several questions comprise the sequence, it is important not to overload the audience with too much detail. It is preferable to be selective, including just the questions necessary to make an actionable point. Figure 15.3 illustrates the useful outcome that whilst customers report most problems, they seem to be reluctant to report service problems.

If there is a reluctance to report problems, it is clearly useful to understand why. A good CEM question sequence might show that there is no simple answer. As illustrated in Figures 15.4 and 15.5, the reasons might vary according to the nature of the problem.
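A rough sketch of how linked answers can be rolled up into flow charts like those below is shown here; the record layout and example data are assumptions made purely for illustration.

# Illustrative sketch: reporting rates by problem category, as in Figures 15.3 to 15.5.
from collections import Counter

def reporting_rates(records):
    """records: list of dicts with 'problem_type' (or None) and 'reported' (bool)."""
    with_problem = [r for r in records if r["problem_type"] is not None]
    print(f"Problem experienced: {len(with_problem) / len(records):.0%}")
    by_type = Counter(r["problem_type"] for r in with_problem)
    for problem_type, count in by_type.items():
        reported = sum(1 for r in with_problem
                       if r["problem_type"] == problem_type and r["reported"])
        print(f"{problem_type}: {count / len(with_problem):.0%} of problems, "
              f"{reported / count:.0%} reported")

reporting_rates([
    {"problem_type": "Fresh produce", "reported": True},
    {"problem_type": "Poor service", "reported": False},
    {"problem_type": None, "reported": False},
])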

FIGURE 15.3 CEM flow chart (1)
Sample = 3000 customers
Problem experienced: Yes 42%, No 58%
Problem with (share of problems) and proportion reported:
Fresh produce 38% – reported by 73%
Other products 13% – reported by 94%
Unavailability of product 21% – reported by 76%
Poor service 28% – reported by 32%

FIGURE 15.4 CEM flow chart (2)
Sample = 353 customers
Service problems reported: Yes 28%, No 72%
Why not reported: Didn't think anything would be done 83%, Don't like complaining 11%, Didn't know who to report it to 6%, Problem discovered at home – hassle to report 0%


Since it is desirable for any problem to be reported and handled as soon as possible after it occurs, the company needs to convince customers that the in-store Customer Service Desk welcomes feedback on any type of problem or concern and that all of it will be taken very seriously, whether product or service related.

The situation with fresh produce problems is different. As Figure 15.5 shows, the minority that go unreported do so mainly because, once back home, customers are not clear how to report them unless they return to the store, which they may not do for another week or more. Raising awareness of the telephone helpline is the obvious solution here, and it may also help with service problems if some customers prefer a less personal medium to reporting face-to-face in store.

FIGURE 15.5 CEM flow chart (3)
Sample = 479 customers
Fresh produce problems reported: Yes 73%, No 27%
Why not reported: Problem discovered at home – hassle to report 63%, Didn't think anything would be done 17%, Didn't know who to report it to 11%, Don't like complaining 9%

15.4.3 Focusing PFI actions

The actionability of CEM outcomes is obvious, but the technique also enables organisations to make decisions about whether taking action is worthwhile. Whatever the policies of the organisation about maximum response times or the percentage of customers who should receive a follow-up call, there is no point investing in service enhancements that don't improve customer satisfaction.

As we said in Chapter 12, selection of PFIs should be based mainly on where the organisation is least meeting its customers' requirements, but we know from Chapter 14 that, due to the asymmetry of CSM data, some requirements can make more impact than others on improving customer satisfaction. The same principle applies to the very focused action that will be taken to address the chosen PFI(s). If our retailer had adopted 'handling problems and complaints' as its PFI, Figures 15.6 and 15.7 show how CEM could help it to determine the specific actions that would have most impact on improving customer satisfaction.

Whilst the initial analysis suggested that improving the reporting rate for service problems would be useful, Figure 15.6 demonstrates that it would not make much impact on customer satisfaction and loyalty since, in this example, customers not reporting service problems are almost as satisfied and loyal as those who do. By contrast, Figure 15.7 shows that although the failure to report problems with fresh produce appears to be a much smaller issue, it actually makes far more impact. Customers who find produce to be less than fresh when they arrive home, and end up throwing it away rather than reporting it, change their attitudes and future behaviour towards the store to a much greater degree, so promoting the use of the telephone helpline for such problems would be the best action to take.

FIGURE 15.6 Using CEM to focus PFI actions (1)
Sample = 353 customers
Service problem reported (Yes 28%): Satisfaction Index 79%, will definitely use XYZ for next shop 88%
Service problem not reported (No 72%): Satisfaction Index 76%, will definitely use XYZ for next shop 87%

FIGURE 15.7 Using CEM to focus PFI actions (2)
Sample = 479 customers
Fresh produce problem reported (Yes 73%): Satisfaction Index 86%, will definitely use XYZ for next shop 97%
Fresh produce problem not reported (No 27%): Satisfaction Index 67%, will definitely use XYZ for next shop 84%
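The comparison behind these two figures can be reproduced with a few lines of analysis. The sketch below assumes hypothetical field names and a handful of example records rather than real survey data; it simply illustrates the grouping.

# Illustrative sketch: comparing satisfaction and loyalty for reporters and non-reporters.
from statistics import mean

def compare_groups(customers, problem_type):
    relevant = [c for c in customers if c["problem_type"] == problem_type]
    for reported in (True, False):
        group = [c for c in relevant if c["reported"] == reported]
        if not group:
            continue
        label = "reported" if reported else "not reported"
        print(f"{problem_type} ({label}): "
              f"satisfaction index {mean(c['satisfaction_index'] for c in group):.0f}%, "
              f"definitely use again {mean(c['will_use_again'] for c in group):.0%}")

compare_groups([
    {"problem_type": "Fresh produce", "reported": True,
     "satisfaction_index": 86, "will_use_again": 1},
    {"problem_type": "Fresh produce", "reported": False,
     "satisfaction_index": 67, "will_use_again": 0},
], "Fresh produce")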


15.4.4 Monitoring improvement

The tangible nature of CEM outcomes makes it easy for managers to set targets for improvement, as well as enabling them to judge whether agreed actions and policies are being implemented and, more importantly, noticed by customers. Based on the information in Figure 15.2, the organisation clearly has a problem, with 27% of visits from service engineers taking longer than the specified maximum of three days from the fault being reported. Whilst it may not be realistic to expect immediate implementation of the three-day maximum, managers can set tangible targets for reducing the percentage of visits exceeding three days and can use CEM to monitor the company's progress – as perceived by customers.

The ability to monitor progress is a very useful feature of CEM, especially when continuous tracking may not demonstrate a significant uplift in customer satisfaction every month. This is particularly helpful with very broad customer requirements such as 'keeping promises and commitments', 'value for money' or 'ease of doing business', where improving customers' perceptions will be slow and challenging. CEM helps considerably to address the problem of the same unchanging PFIs by providing visible progress on a sequence of small, manageable steps towards addressing the PFI.

KEY POINT
Well designed CEM questions provide early feedback on the effectiveness of organisations' satisfaction improvement initiatives.

FIGURE 15.8 Monitoring progress
[Line chart tracking the percentage of visits over 3 days from reporting the fault, month by month from January to December, on a scale of 0% to 30%]
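A minimal sketch of the monthly calculation behind this kind of chart, using invented visit records and field names, might look like the following; it is an illustration of the idea rather than a prescribed report.

# Illustrative only: monthly share of engineer visits exceeding the three-day target.
from collections import defaultdict

def overdue_by_month(visits, target_days=3):
    monthly = defaultdict(lambda: [0, 0])          # month -> [overdue, total]
    for v in visits:
        monthly[v["month"]][1] += 1
        if v["days_to_visit"] > target_days:
            monthly[v["month"]][0] += 1
    return {m: overdue / total for m, (overdue, total) in sorted(monthly.items())}

visits = [{"month": "2007-01", "days_to_visit": 2},
          {"month": "2007-01", "days_to_visit": 5},
          {"month": "2007-02", "days_to_visit": 1}]
for month, share in overdue_by_month(visits).items():
    print(f"{month}: {share:.0%} of visits over 3 days")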


15.4.5 Demonstrating return on investment

A very important purpose of CEM is to provide information on the return on investment in customer satisfaction, whether the return on specific actions taken or the return on generally improving customer satisfaction. Figure 15.9 is an example of using CEM for this purpose. It shows that reducing the percentage of overdue service visits results in a big improvement in satisfaction with 'speed of service', a good improvement in overall customer satisfaction and, most importantly, an excellent improvement in loyalty, as illustrated by intention to renew the contract.

FIGURE 15.9 Using CEM to demonstrate return on investment
(1000 customers interviewed on both dates)
Date | % service visits over 3 days after fault reported | Satisfaction with speed of service | Customer satisfaction index | % will definitely renew contract
January 2006 | 27% | 6.1 | 79.2% | 44%
January 2007 | 5% | 8.4 | 81.7% | 59%
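Laying two waves side by side in this way is straightforward once the CEM measures are tracked consistently. The sketch below simply restates the figures quoted above; the data structure itself is an assumption made for illustration.

# Illustrative sketch: comparing two survey waves to show the return on a service improvement.
waves = {
    "January 2006": {"visits_over_3_days": 0.27, "speed_of_service": 6.1,
                     "satisfaction_index": 0.792, "will_renew": 0.44},
    "January 2007": {"visits_over_3_days": 0.05, "speed_of_service": 8.4,
                     "satisfaction_index": 0.817, "will_renew": 0.59},
}

before, after = waves["January 2006"], waves["January 2007"]
for metric in before:
    print(f"{metric}: {before[metric]} -> {after[metric]} "
          f"(change {after[metric] - before[metric]:+.2f})")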

Sometimes, actions taken by organisations will not translate into improved customer perceptions, and on other occasions they will not provide a financial return by driving customers' loyalty behaviours. CEM enables organisations conducting monthly tracking surveys to fine-tune actions, continuing with those that are making an impact until the rate of return levels off, but abandoning any actions that are clearly not making a difference so that resources can be switched to a different area. CEM will also help managers to keep employees motivated by providing quick feedback on the efforts they have made, as well as enabling them to report tangible improvements to senior management.

KEY POINT
CEM provides simple and convincing evidence of whether actions to improve customer satisfaction are also increasing profit.

Although CEM's ability to visibly link customer satisfaction improvement with business outcomes is of most value to commercial companies, it can also be useful to not-for-profit organisations, provided appropriate business outcomes are used in the model. All organisations are interested in cost control and, as we pointed out in Chapter 2, the cost of servicing customers normally increases as their satisfaction falls. Reputation and trust are also consequences of satisfaction that are desirable in the public as well as private sectors. Using a complaint handling example, Figure 15.10 shows a CEM model of relevance to profit and not-for-profit organisations.

FIGURE 15.10 Demonstrating business impact in not-for-profit sectors
Sample of 2000 customers. Had a problem: Yes 38%, No 62%.
Group | Average number of calls | Average total call duration | Cost to service
No problem | 0.4 | 3.6 minutes | £2.70 per customer
Did not report problem (15%) | 0.7 | 7.7 minutes | £5.78 per customer
Satisfied with problem handling (40%) | 1.9 | 17.1 minutes | £12.83 per customer
Partially satisfied with problem handling (19%) | 3.1 | 46.5 minutes | £34.88 per customer
Dissatisfied with problem handling (26%) | 6.2 | 136.4 minutes | £102.30 per customer

Since most organisations with contact centres have call logging data matched with individual customer codes, it is a simple task to look at the number of telephone contacts with each customer and calculate the average number of calls per customer in each category. If the information is available, an even more accurate figure would be the total duration of calls averaged in each category. This would be the total time spent by the call handler including, for example, writing-up time after the call. For organisations making outbound sales calls, the figures might have to be based on inbound calls only, although the system would ideally distinguish between sales calls and customer service calls, in which case only the former should be excluded. In reality, there would often be other costs associated with servicing customers, such as writing to complainants or dealing with problems face-to-face. If information is available on all relevant costs it should obviously be used in the model, but if not, the call log data used in Figure 15.10 will still illustrate the point, albeit under-estimating the real cost of customer dissatisfaction. The final step is to incorporate the cost of all this time spent servicing customers. This should obviously be total costs, not salaries alone. Based on a call centre hourly cost of £45, the chart shows that, for the organisation concerned, the cost of servicing customers who were dissatisfied with the way their problem was handled averaged over £100 per customer, compared with only £12.83 for customers whose problem was handled well. Best of all, of course, at only £2.70, is the cost of servicing customers who didn't have a problem in the first place.
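The arithmetic is easy to automate once call durations are matched to customer codes. The following sketch assumes a simple record layout and the £45 hourly cost used in the example above; it is illustrative rather than a prescribed costing model.

# Illustrative sketch of the cost-to-serve calculation behind Figure 15.10.
from collections import defaultdict
from statistics import mean

COST_PER_MINUTE = 45 / 60          # £45 per hour, as in the example above

def cost_to_serve(customers):
    """customers: list of dicts with 'category' and 'call_minutes' (total call duration)."""
    by_category = defaultdict(list)
    for c in customers:
        by_category[c["category"]].append(c["call_minutes"])
    for category, minutes in by_category.items():
        avg_minutes = mean(minutes)
        print(f"{category}: average {avg_minutes:.1f} minutes, "
              f"£{avg_minutes * COST_PER_MINUTE:.2f} per customer")

cost_to_serve([
    {"category": "Dissatisfied with problem handling", "call_minutes": 136.4},
    {"category": "No problem", "call_minutes": 3.6},
])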

KEY POINT
CEM can also link customer satisfaction improvement to relevant business outcomes in not-for-profit sectors.

Since customers who did not report their problem cost only £5.78 to service, some people might suggest that encouraging customer complaints was not a sensible policy. There are, however, other factors to consider. Commercial companies should include information about relevant loyalty behaviours, such as propensity to renew the contract or policy, likelihood of buying more products, or simply remaining a customer. All organisations can link the information with reputation and trust, as shown in Figure 15.11.

FIGURE 15.11 The impact of complaint handling on satisfaction and trust
Sample of 2000 customers. Had a problem: Yes 38%, No 62%.
Group | Satisfaction Index | Trust to look after my interests as a customer | Willing to recommend
No problem | 83.1% | 89.2% | 95.4%
Satisfied with problem handling (40%) | 82.8% | 87.3% | 85.9%
Partially satisfied with problem handling (19%) | 67.4% | 66.4% | 59.8%
Did not report problem (15%) | 61.5% | 42.0% | 34.8%
Dissatisfied with problem handling (26%) | 52.3% | 38.7% | 23.4%

It is now clear that customers who did not complain about their problem have lower levels of overall satisfaction and trust the organisation less than any group apart from those with a badly handled problem. It is also possible to quantify the impact on the reputation of the organisation by adding the following questions to the CEM sequence:

"Have you spoken to anyone else about XYZ organisation in the last three months?"
"Approximately how many people did you speak to?"
"What kind of things did you say about XYZ?" Interviewer to code as positive, negative or neutral.

The information shown in Figure 15.12 can be used to calculate the extent to which problems, or poor complaint handling, are damaging the reputation of the organisation. Focusing on the customers who did not report the problem, and assuming the organisation has one million customers, the calculation is shown in Figure 15.13.

FIGURE 15.12 Reputation damage
Sample of 2000 customers. Had a problem: Yes 38%, No 62%.
Group | Average number of people spoken to | Net positive or negative word of mouth
No problem | 4.3 | +98.4%
Satisfied with problem handling (40%) | 3.1 | +86.5%
Partially satisfied with problem handling (19%) | 5.7 | -23.8%
Did not report problem (15%) | 4.6 | -77.3%
Dissatisfied with problem handling (26%) | 9.6 | -97.9%

FIGURE 15.13 Calculating reputation damage
1,000,000 customers x 38% had a problem = 380,000 customers
380,000 customers x 15% did not report problem = 57,000 customers
57,000 customers x 4.6 people spoken to = 262,200 conversations
262,200 conversations x 77.3% net negative = 202,681 negative messages
REPUTATION DAMAGE = 202,681 NEGATIVE MESSAGES
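The same calculation can be expressed as a small function so that it can be repeated for other segments or other word-of-mouth assumptions. The figures used below are those quoted in Figures 15.12 and 15.13; everything else is an illustrative assumption.

# Illustrative sketch of the reputation damage arithmetic in Figure 15.13.
def reputation_damage(customer_base, problem_rate, segment_share,
                      people_spoken_to, net_negative_share):
    affected = customer_base * problem_rate * segment_share
    conversations = affected * people_spoken_to
    return conversations * net_negative_share

negative_messages = reputation_damage(
    customer_base=1_000_000,
    problem_rate=0.38,            # 38% had a problem
    segment_share=0.15,           # 15% of those did not report it
    people_spoken_to=4.6,
    net_negative_share=0.773,     # 77.3% net negative word of mouth
)
print(f"Reputation damage: {negative_messages:,.0f} negative messages")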


KEY POINT
The reputation damage of negative customer experiences can be quantified.

Conclusions

1. Since CSM has only one purpose, to improve customer satisfaction, it is wasteful and even detrimental to make available the huge volume of data that can be produced from a typical customer survey.
2. Information reported to managers must be brief, so don't waste time reporting facts and figures (however interesting) that do not contribute directly to busy managers' ability to improve customer satisfaction.
3. Some background methodological detail will be necessary when reporting annual surveys.
4. When reporting continuous tracking, managers must be given enough information to understand trends, and hence judge progress, for individual PFIs as well as for overall satisfaction.
5. It is essential to be authoritative as well as concise when presenting recommendations for PFIs to management.
6. PFI selection should be justified, and the Table of Outcomes is very helpful for this purpose.
7. Precise, tangible PFIs give managers the best chance of improving customer satisfaction but can be incompatible with a survey based on the lens of the customer, since customers typically judge organisations on broad, less actionable constructs.
8. Customer Experience Modelling (CEM) is the way to solve this problem since it allows the satisfaction measure to be based on the lens of the customer, thus providing an accurate reflection of how satisfied or dissatisfied customers feel, whilst using the additional questions to produce highly actionable information.
9. CEM results offer a solution to the difficulty of monitoring progress for organisations that continually track customer satisfaction.
10. Since it is worth investing only in improvements that make a difference, CEM can be used to demonstrate the return to the organisation from achieving specific service improvements. This enables managers to make fact-based decisions about whether to invest in further improvement in the same area or whether to switch emphasis to different actions.


CHAPTER SIXTEEN

Involving employees

A CSM process will not achieve its main goal of improving customer satisfaction unless employees are completely on board. There are some obvious factors around keeping staff informed about what's happening, such as when customers are being surveyed and how they are being surveyed, e.g. in-house or independently, interviews or self-completion. This chapter will focus on some specific initiatives that have been shown to increase employees' feeling of involvement and, consequently, to enhance the organisation's ability to improve customer satisfaction.

At a glance
This chapter will:

a) Explain how a mirror survey will make a very tangible difference to employees' involvement in the survey as well as identify 'understanding gaps'.

b) Suggest ways of ensuring that employees will feel that the survey relates to them and their work.

c) Explain how to effectively feed back the results of the survey to employees.

d) Consider the advantages of involving staff in decisions about how to improve customer satisfaction.

e) Examine ways of using reward and recognition as the ultimate technique for involving employees in the process.

f) Explore the concept of internal customers and its role in delivering external customer satisfaction.

16.1 The Mirror Survey

While carrying out a customer survey it can be very enlightening to survey employees at the same time as customers to identify 'understanding gaps' – areas where staff do not accurately understand what's important to customers or fail to realise that the level of service they provide is not good enough. This exercise is known as a 'mirror survey'. Studies have identified strong correlations between this type of employee communication, the development of a service-oriented culture and subsequent improvement in customer satisfaction [1, 2, 3].


16.1.1 Administering the survey

A mirror survey involves administering a slightly modified version of the customer questionnaire to employees. Exactly the same customer requirements are measured, but a mirror survey questionnaire asks employees:

“How important or unimportant do you think these requirements are to customers?”

And:

"How satisfied or dissatisfied do you think customers are with our performance in these areas?"

A mirror survey is normally based on a self-completion questionnaire on paper or in electronic form. If paper-based, it should be given out and collected back in from employees to achieve the highest possible response rate. To preserve confidentiality and ensure honest answers, the questionnaire should be anonymous and employees should be provided with an envelope to seal it in so that their response cannot be read by anyone collecting it. An electronic survey would usually be conducted on the organisation's intranet, but if using this method, extensive communications will be necessary to achieve a high response rate. It is also very useful to include a comments box that employees can use to highlight any barriers that hinder their ability to deliver customer satisfaction and to make suggestions for improvements.

KEY POINT
Make a mirror survey anonymous and implement measures to achieve a response rate of at least 50%.

Unlike an employee satisfaction survey, where, even in the largest organisations, a census is normal for reasons of inclusiveness, a mirror survey does not need to incur the cost of a census. For data reliability purposes, a minimum sample of 200 responses is adequate, with at least 50 in any sub-group such as a department. As with all self-completion surveys, the response rate is also critical: the target for a mirror survey should be at least 50%. Analytical techniques are the same as those already explained in Chapter 10, with the key outputs shown in Figures 16.1 and 16.2.

16.1.2 Understanding customers' requirements

Using the same results for the supermarket that we have seen earlier, the chart in Figure 16.1 shows the difference between the customers' mean score for the importance of each requirement and the average score given by employees. Employees think that most things are important to customers, scoring almost everything more highly for importance than the customers did. This is very healthy, because if employees think a requirement is at least as important as the customers do, they should be giving it sufficient attention. However, alarm bells should sound when employees under-estimate the importance of a customer requirement. The chart shows that employees significantly under-estimate the importance of 'expertise of staff', scoring it 0.8 lower than customers. Further inspection shows that employees gave broadly similar scores for all three staff requirements, demonstrating that they fail to understand the additional importance that customers place on their expertise compared with their helpfulness and appearance.

16.1.3 Understanding customer satisfaction

The second mirror survey chart, shown in Figure 16.2, shows the difference in mean scores for satisfaction given by customers and employees. The chart provides some very interesting information, as it shows that where employees perceive any difficulty in meeting customers' requirements they appear to assign responsibility to the company, scoring low for such corporate issues as 'choice of products', 'price', 'quality of products' and 'layout of store'. On all the requirements relating to their own behaviour, they over-estimate customer satisfaction, especially for 'helpfulness of staff', where the average satisfaction score given by employees was 1.2 higher than the one given by customers, and for 'expertise of staff', where employees score 0.9 higher.
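The gap analysis behind both mirror charts is simply the difference between the employee and customer mean scores for each requirement. The sketch below uses invented scores for two of the supermarket requirements purely to illustrate the calculation.

# Illustrative sketch of a mirror-survey gap analysis (employee mean minus customer mean).
from statistics import mean

def mirror_gaps(customer_scores, employee_scores):
    """Both arguments: dict of requirement -> list of scores on the same 10-point scale."""
    gaps = {}
    for requirement, cust in customer_scores.items():
        emp = employee_scores[requirement]
        gaps[requirement] = mean(emp) - mean(cust)   # negative = employees under-estimate
    return dict(sorted(gaps.items(), key=lambda item: item[1]))

gaps = mirror_gaps(
    customer_scores={"Expertise of staff": [9.4, 9.1], "Staff appearance": [7.8, 8.0]},
    employee_scores={"Expertise of staff": [8.5, 8.6], "Staff appearance": [8.3, 8.5]},
)
for requirement, gap in gaps.items():
    print(f"{requirement}: {gap:+.1f}")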

FIGURE 16.1 Importance mirror
[Bar chart comparing customers' and employees' mean importance scores, on a scale from 6.5 to 10, for: choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance]


As well as highlighting understanding gaps on specific attributes, a mirror survey will sometimes uncover a much deeper malaise in the organisation. Whilst employees in some organisations have an incredibly accurate understanding of customers' needs and perceptions, others can display a woeful misunderstanding across the board. For example, when employees give satisfaction scores that are consistently higher than those given by customers, it indicates a degree of unhealthy complacency across the organisation. By contrast, if employees give significantly lower satisfaction scores than customers for all the attributes, it is a sign of poor staff morale.

KEY POINT
A mirror survey will often present an enormous opportunity for staff training, providing tangible information to demonstrate key points to employees.

Even if the mirror survey does not identify any understanding gaps or highlight any wider problems within the organisation, taking part in the survey is a very tangible way of involving employees in the CSM process and making them think about the issues of importance to customers. Once the results have been analysed, employees find it very interesting to compare their scores with those given by customers, and this added interest helps to facilitate the internal feedback process.

FIGURE 16.2 Satisfaction mirror
[Bar chart comparing customers' and employees' mean satisfaction scores, on a scale from 6.5 to 10, for the same eight requirements]

16.2 Relating surveys to work

Most people in all organisations are focused on the day-to-day requirements of their job. Consequently, if CSM is to maximise its effectiveness across the organisation, it must be apparent to everyone how it relates to their daily work. Communications to feed back the survey results, the PFIs and the satisfaction improvement plans are highly beneficial, but they are not ever-present and will not be at the top of most employees' minds on a daily basis. Two techniques that will help employees to relate the surveys to their own jobs are outlined in the next two sub-sections.

16.2.1 Survey results

Where staff have direct contact with customers, it is highly beneficial if the survey results can be drilled down to the lowest possible level. In Chapter 12 we emphasised the effectiveness of internal benchmarking across business units, branches etc. In this context it is highly effective to break down the results to individual members of staff where possible, or at least to small teams, as well as to larger units such as call centres, stores or branches. Assuming the database links customers to the individual employee or team that handled their call or manages their account, the main issue is sample size. Provided the results are seen as indicative performance indicators, this is a worthwhile exercise with samples as low as 10 respondents, although 25 would be preferable. For organisations involved in continuous tracking of customer satisfaction, it will be possible to roll up the samples of individuals or teams over several months to improve the reliability of the data. An example of the type of output that can be produced is shown in Figure 16.3.

FIGURE 16.3 Satisfaction scores by individual
Requirement | JC | LH | BD | HW | CP | AJ | Max | Min | Difference
Understanding your requirements | 9.05 | 8.84 | 8.66 | 8.70 | 9.33 | 8.60 | 9.33 | 8.60 | 0.73
Clear points of contact | 9.10 | 9.08 | 8.93 | 8.91 | 9.44 | 8.90 | 9.44 | 8.90 | 0.54
Expertise of account manager | 9.38 | 9.56 | 9.24 | 9.39 | 9.44 | 9.11 | 9.56 | 9.11 | 0.45
The relationship with CM | 9.43 | 8.76 | 9.34 | 8.68 | 9.00 | 8.70 | 9.43 | 8.68 | 0.75
Professionalism of account manager | 9.33 | 9.08 | 9.14 | 9.00 | 9.25 | 8.90 | 9.33 | 8.90 | 0.43
Proactivity of account manager | 9.38 | 8.92 | 9.38 | 9.18 | 9.38 | 8.70 | 9.38 | 8.70 | 0.68
Helpfulness of account manager | 8.52 | 8.71 | 8.62 | 8.45 | 9.13 | 8.00 | 9.13 | 8.00 | 1.13
Presentation skills of account manager | 9.21 | 9.11 | 9.05 | 9.17 | 9.00 | 9.50 | 9.50 | 9.00 | 0.50
Scheduling of projects | 9.11 | 8.71 | 8.91 | 8.37 | 9.38 | 8.56 | 9.38 | 8.37 | 1.01
Project management | 8.43 | 9.16 | 8.48 | 8.50 | 9.33 | 8.78 | 9.33 | 8.43 | 0.90
Feedback on project progress | 8.52 | 9.40 | 8.93 | 8.87 | 8.78 | 8.67 | 9.40 | 8.52 | 0.88
Speed of response to requests | 8.71 | 8.96 | 8.71 | 8.30 | 8.67 | 8.78 | 8.96 | 8.30 | 0.66
Quality of designs | 8.52 | 9.08 | 9.00 | 8.39 | 9.33 | 8.40 | 9.33 | 8.39 | 0.94
Quality of advice | 8.62 | 8.79 | 8.65 | 8.84 | 8.89 | 8.40 | 8.89 | 8.40 | 0.49
Handling problems or complaints | 8.93 | 8.35 | 8.21 | 8.30 | 8.25 | 8.30 | 8.93 | 8.21 | 0.73
Value for money | 8.86 | 9.38 | 8.69 | 9.00 | 9.67 | 8.80 | 9.67 | 8.69 | 0.98
Customer Satisfaction Index | 89.9% | 90.5% | 88.9% | 87.5% | 91.4% | 87.3% | 91.4% | 87.3% | 4.10

This kind of internal benchmarking will be most effective if it is undertaken in a positive manner, without any hint of blame or recrimination attached to those with the lowest scores. However, it is clear from Figure 16.3 that even though these scores are all very high, demonstrating an excellent level of customer satisfaction, there is a large variation in staff performance regarding 'helpfulness of staff' and 'scheduling of projects'. It would clearly be beneficial to see what HW and AJ could learn from CP to improve their performance on these two customer requirements. Addressing these areas would make a big contribution to HW and AJ bringing their overall customer satisfaction indices closer to CP's index.
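The spread between the best and lowest scores for each requirement is what points to coaching opportunities like these. The sketch below uses two rows of Figure 16.3; the data structure is an assumption made for illustration.

# Illustrative sketch: finding the spread between the best and lowest performer per requirement.
scores = {
    "Helpfulness of account manager": {"JC": 8.52, "LH": 8.71, "BD": 8.62,
                                       "HW": 8.45, "CP": 9.13, "AJ": 8.00},
    "Scheduling of projects": {"JC": 9.11, "LH": 8.71, "BD": 8.91,
                               "HW": 8.37, "CP": 9.38, "AJ": 8.56},
}

for requirement, by_manager in scores.items():
    best = max(by_manager, key=by_manager.get)
    worst = min(by_manager, key=by_manager.get)
    spread = by_manager[best] - by_manager[worst]
    print(f"{requirement}: spread {spread:.2f} "
          f"(best {best} {by_manager[best]}, lowest {worst} {by_manager[worst]})")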

KEY POINT
Internal benchmarking is very powerful and will be more effective if the results are drilled down to the lowest possible level.

16.2.2 Customer comments

There will also be customer comments, especially if telephone interviews are used and low satisfaction scores are probed. Employees will usually be even more interested in the comments made by their own customers than they are in the scores. Comments are particularly useful for demonstrating to employees why they have recorded low scores for some requirements. In business-to-business markets the comments will be even more meaningful if attributed to specific customers, but this must be done only with respondents' permission. There is an argument that if the comments focus solely on reasons for low satisfaction scores it may be de-motivating for the staff concerned. This can be overcome by also probing top box scores so that employees understand what delights customers as well as what upsets them. Of course, it is important that the amount of probing does not unreasonably extend the interview length, so for organisations with very high customer satisfaction, a single open question about the account manager, consultant, customer service advisor etc. would be more appropriate. An example of a suitable question would be:

"Taking everything into account, how would you describe the service provided by your account manager and what could he/she do better?"

16.3 Employee communications

Feeding back the results to all employees is an essential element in the long-term health of a CSM programme. Little action will be taken to improve customer satisfaction if employees don't know enough about the results or their implications. The extent of the feedback provided to employees will also send messages about how important the customer survey is to the organisation. Many studies have shown the importance of suitable employee communications in building a service-focused climate [4, 5, 6]. Results can be communicated internally through a variety of media such as staff magazines or newsletters, notice boards, intranet and e-mail. A more effective method is to present the results personally, preferably to all employees but at least to all those who have an important role in delivering customer satisfaction. It is true that for larger organisations, face-to-face feedback such as workshops or road-shows for large numbers of employees will be a costly exercise. However, the financial benefits of improving customer satisfaction and loyalty will almost always justify the costs involved.

KEY POINT
The extent of CSM feedback will send a key message to employees about the importance of customers.

A suggested agenda for an internal presentation is shown in Figure 16.4. The session should start by demonstrating that the survey was professionally conducted and therefore provides a reliable measure – in short, that the right questions were asked of the right people. It is therefore important to explain the exploratory research that led to the design of a suitable questionnaire based on the lens of the customer, as well as the robustness of the sample.

KEY POINT
Extensive internal feedback of the results will justify its costs by increasing the customer focus of the organisation.

The results should then be presented, especially the importance scores, the satisfaction scores, the gap analysis and the satisfaction index. As suggested in Chapter 12, it is also helpful to put the satisfaction index and the satisfaction scores for the individual requirements into context and demonstrate to employees how their performance compares with that achieved by other organisations.

FIGURE 16.4 Internal feedback
Internal feedback of CSM results:
1. Questionnaire – exploratory research
2. Sampling – representative of the customer base; random, without bias
3. Survey results – importance scores, satisfaction scores, gap analysis, Satisfaction Index, benchmarking
4. Ideas for action – short term and long term


Finally, the workshop should look to the future, initially by reiterating the importance of the PFIs and then by taking the opportunity to invite ideas about how the PFIs might be addressed. Time permitting, it is very useful to break employees into small groups to discuss the issues. Ask them to brainstorm ways in which the PFIs could be addressed. Having generated a list of ideas, they should sort them into two categories: those that could be implemented easily, quickly and at low cost (the quick wins) and those that are longer term on grounds of cost or difficulty. Employees should then select the ones they consider to be the best three short-term and best three long-term ideas, and be prepared to present their suggestions to the rest of the workshop. This will result in a large number of ideas for action. The selection process can be taken a step further by asking everybody to score the full list of ideas in order to identify the best overall short-term and long-term ideas. Apart from the fact that employees, who are close to the action, will often think of good ideas which would not have occurred to management, the great advantage of this approach is that employees are far more likely to embrace enthusiastically a service improvement programme that they have helped to shape rather than one which has simply been handed down by management.

16.4 Reward and recognition

It has been well documented that employees are more motivated if their efforts are recognised and rewarded [7, 8]. Recognition is often achieved in simple ways, such as thanking an employee who has worked hard to deliver good service to customers. More public forms of recognition, such as a small monthly prize for the employee who has received the best customer commendation or colleague nomination for great service, can also work well in some organisations. Public recognition of teams, departments and the whole company will also be beneficial to celebrate success in increasing customer satisfaction, partly to recognise employees' hard work and also to demonstrate that it's important to management. The best way to emphasise the importance of customer satisfaction and to motivate employees to improve it is to introduce an element of customer satisfaction-related pay. However, as with all aspects of employees' remuneration, any kind of customer satisfaction-related bonus will come under heavy scrutiny, so its basis needs to be carefully considered before introduction.

16.4.1 Basis of the scheme

Two methods are commonly used for determining customer satisfaction-related pay. The first is a company-wide scheme that pays a bonus to all employees based on one measure (typically the company's customer satisfaction index). The bonus could be a flat-rate sum for all employees or a fixed percentage of salary. Either way, it is perceived as a simple and transparent system that is the same for everybody. Its disadvantage is that some employees may not perceive all colleagues as making an equal impact on the company's ability to achieve the customer satisfaction target [9]. However, this can be addressed by using action mapping (see Figure 12.9), cross-functional teams and a focus on the importance of satisfying internal customers as an essential step towards satisfying external customers (see Section 16.5).

The alternative model would be based on team- or department-specific schemes. It can even be appropriate to have individual customer satisfaction-related pay for some roles, such as account managers. Bespoke schemes can be very flexible, but bonus targets would typically be based on customer satisfaction with requirements that can be affected by the department concerned. Departments with little or no customer contact can have a bonus based on the overall customer satisfaction index or, preferably, on their ability to satisfy their own internal customers. The big advantage of this second model is that customer contact staff will see the bonus as much more closely linked to their day-to-day work. The disadvantage is that some employees who think other departments have more favourable schemes may see the system as unfair.

KEY POINT
Customer satisfaction-related pay will do more than anything to demonstrate the importance of customer satisfaction to the organisation. The existence of a scheme is more important than its details.

There is no simple answer to which of these two models suits a specific company. Often it will depend on the organisational structure and culture of the individual company. However, a good rule of thumb is that a company-wide scheme will usually be most suitable when customer satisfaction-related pay is first introduced, because it is clear and simple, and the fact that it is the same for everyone will be an advantage at the outset. As time passes, employees will become more familiar with the efforts required to improve customer satisfaction and the varying roles played by different individuals and teams. As customer satisfaction increases, it will also become more difficult to improve it further. For both these reasons, moving to a team-based or even an individual-based scheme, where appropriate, will become more effective as the scheme matures.

16.4.2 Targets

The targets are as important as the basis of the scheme itself. As well as being extremely visible, targets must be sufficiently ambitious to benefit the company whilst still being achievable [9]. It will be very de-motivating if targets are missed, especially in the first year, and this may result in the scheme ceasing to motivate employees. Unfortunately, this often happens, as senior management has a tendency to set unrealistically high targets for improving customer satisfaction. As mentioned earlier in this book, increases in customer satisfaction are usually quite small and achieved only with much effort. Consequently, an average annual improvement of 2% in the customer satisfaction index, if sustained over several years, would be a very good achievement and an ambitious target for a customer satisfaction-related pay scheme.


KEY POINT
Don't set unrealistically high targets for customer satisfaction improvement.

16.4.3 Frequency of measures

The frequency of the survey will dictate how often the organisation can pay customer satisfaction-related bonuses. The advantages of annual payments are that they can be linked with employees' performance appraisals and/or pay reviews and with annual business planning cycles for developing, justifying and implementing customer satisfaction improvement plans. The disadvantage is that an annual bonus may not motivate employees on a daily basis. This can be addressed through a planned programme of communications to keep the spotlight on customer satisfaction. For a typical B2B company with a relatively small customer base, annual surveys and bonuses will usually be most appropriate. A smaller-scale interim tracker survey slightly more than half way through the year will be useful to indicate whether the company is on course to meet the target and achieve the bonus. If it isn't improving, and provided at least four months remain before the annual survey, there should still be time to make renewed efforts to address the PFIs and make a positive impact on customer satisfaction.

In a very service-intensive environment, such as a call centre, more frequent measures and bonuses will usually be more effective. The customer base and throughput of calls need to be sufficiently large, but they will be in most call centres and B2C businesses. The American bank card issuer MBNA measures customer satisfaction, and pays bonuses on it, on a daily basis [10]. Its customer satisfaction index is based on telephone interviews and is displayed every day for the previous day's activity, providing immediate feedback to employees. Every day that the index is above target, the company contributes to a fund, which pays out the bonus on a quarterly basis. Since customer satisfaction is measured daily, every day when employees arrive at work they stand a fresh chance of earning a bonus, however good or bad their customer satisfaction scores were previously.

16.4.4 Credibility of the methodology

If customer satisfaction-related pay is to motivate, the measure on which the payment is based must be credible. This will depend on its statistical reliability and on the fact that it really is an accurate measure of how satisfied or dissatisfied customers feel. As we said in Chapter 6, a minimum sample of 200 responses is necessary for good reliability, whether the bonus is triggered on a daily, monthly or annual basis. Since interim tracker surveys are 'indicative' and will not be used as a basis for bonus payments, their samples can be smaller, even as low as 50 responses. However large the sample, it will not provide a suitable basis for customer satisfaction-related pay unless the survey asks the right questions. As we know from Chapters 4 and 5, this is based on thorough exploratory research. Explaining how the questions were determined, and that the index consequently provides a true reflection of customer satisfaction, is an essential step in convincing employees of the credibility of a customer satisfaction-related pay scheme.

16.5 Internal customers

In most organisations, all employees contribute to customer satisfaction even if they don't personally have direct contact with customers, because they are part of the capability and culture of the organisation and they provide essential services to other departments that do interface directly with customers. Consequently, in organisations that are truly customer-focused, all employees will see themselves as delivering services to customers [7, 11, 12]. The only difference is that some will be focusing on external customers, others on internal customers.

KEY POINT
Everyone in the company has customers, external or internal. Satisfying both is essential.

16.5.1 The importance of internal customers

Unless customer-facing employees receive good service from support functions within the organisation, they will be seriously handicapped in their ability to deliver good service to external customers. If poor internal service continues over time, there is little hope of improving customer satisfaction, since customer-facing staff will become de-motivated and will adopt the poor service culture of the organisation [7]. In a 1998 study by Schneider et al covering 132 bank branches, the score for 'inter-departmental service' was the strongest predictor of external customers' perceptions of service quality [13]. This has been widely supported by other research [14, 15, 16], and in the Service-Profit Chain, Heskett et al describe how companies like Southwest Airlines, that are renowned for external customer satisfaction, adopt measures such as staff training and team building exercises to encourage employees to focus on internal as well as external customers [10].

KEY POINT
Employee satisfaction with internal customer service is often the main determinant of the organisation's ability to satisfy external customers.

16.5.2 Measuring internal customer satisfaction

In view of its importance, growing numbers of customer-focused organisations are beginning to measure and monitor the satisfaction of internal customers. This can be done at departmental level, e.g. the IT department measuring the satisfaction of employees using IT services. More consistent, and more useful, however, is to conduct an internal customer satisfaction survey across the whole organisation. Some companies do this as frequently as quarterly, especially where many internal services are outsourced, although annual or bi-annual internal customer satisfaction surveys are more common.

Since customer satisfaction measurement is about understanding people's satisfaction judgements, it makes no difference to the methodology whether customers are internal or external. Whilst some researchers have attempted to compile a list of standard dimensions for measures of internal service [17], a measure that accurately reflects how satisfied or dissatisfied internal customers feel will be produced only if the questions are based on the criteria that the internal customers themselves use to judge the services. As covered in Chapters 4 and 5, exploratory research should be conducted and the questionnaire based on internal customers' most important requirements for each service. Surveys of internal customers can be conducted on paper, on the intranet or by telephone interviews. The points made in Chapter 7 about the advantages of telephone interviews, especially response rates and collecting detailed comments, apply equally to internal customer satisfaction surveys, although interviews will obviously be more costly than self-completion.

KEY POINT
If internal customer satisfaction is accurately and frequently measured, staff will be much more focused on the quality of service they provide to other employees.

The only significant differences between internal and external customer satisfaction surveys will be in questionnaire design, particularly where quite large numbers of internal services are involved. In a large organisation there may be as many as 12 to 15 internal services that need to be monitored, with a section on the questionnaire for each service. If the principles outlined in Chapter 9 were strictly followed this would result in an extremely long questionnaire. It is therefore normal practice for internal customer satisfaction surveys to base the questionnaire on a much smaller number of customer requirements, typically the four to six most important requirements for each service covered. It is also advisable to conduct quantitative exploratory research, partly to ensure that the small number of requirements included for each service really are the most important ones to internal customers, and to provide importance scores without having to further lengthen the questionnaire for the main survey by asking about importance. With, say, 15 services and an average of five requirements scored for each one, the questionnaire will still be long – 75 questions scored for satisfaction plus any classification questions. However, in most organisations, few people use every single service on a regular basis, so if employees are asked to score only services that they have used within the last month, respondents will, on average, only score around half of the sections, resulting in a reasonable completion time. For organisations where the questionnaire may still be too long, perhaps because they have even more services and/or employees use most of them, it is possible to split the sample, each half scoring a different set of services. Provided the sample is large enough and randomly selected, this will present no problems for comparability. However, when considering sample size it is necessary to bear in mind that the number of interviews conducted or questionnaires sent out must be sufficient to produce at least 200 responses for the less frequently used services.
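As a rough check on the numbers involved, the required number of invitations can be estimated from the proportion of employees who use a service and the expected response rate. The sketch below is purely illustrative: the helper name and the 30% usage and 40% response figures are assumptions, not figures from the survey itself.

```python
import math

def invitations_needed(target_responses, usage_rate, response_rate):
    """Estimate how many employees must be invited so that a service used by
    only a fraction of them still yields the target number of responses."""
    return math.ceil(target_responses / (usage_rate * response_rate))

# Illustrative assumptions: 30% of employees use the service, 40% of those invited respond.
print(invitations_needed(target_responses=200, usage_rate=0.30, response_rate=0.40))
# -> 1667 invitations to reach 200 responses for the least frequently used service
```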

Using the importance scores generated by the quantitative exploratory research (which must be updated every three years), it will be possible to calculate a weighted customer satisfaction index and satisfaction gaps for each department/service. Due to the small number of customer requirements for each service it is advisable to highlight just one PFI for each service. As well as enhancing the service culture of the organisation, measuring internal customer satisfaction is very useful for companies wanting to introduce customer satisfaction-related pay at departmental level. Where appropriate, employees' bonuses will be based on the index generated for their department by the internal customer satisfaction survey.
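For readers who prefer to see the arithmetic, the sketch below shows one way the weighted index and satisfaction gaps for a single internal service might be calculated. The service, requirement labels and scores are invented for illustration; importance weights would come from the quantitative exploratory research and satisfaction scores from the main survey, both on the 10-point scale used throughout this book.

```python
# Invented figures for one internal service (e.g. an IT helpdesk).
importance = {"Speed of response": 9.1, "Technical knowledge": 8.7,
              "Keeping me informed": 8.2, "Politeness of staff": 7.9}
satisfaction = {"Speed of response": 7.2, "Technical knowledge": 8.5,
                "Keeping me informed": 6.9, "Politeness of staff": 8.8}

# Weight each requirement by its share of total importance, then average satisfaction.
total_importance = sum(importance.values())
weights = {req: imp / total_importance for req, imp in importance.items()}
index_out_of_10 = sum(satisfaction[req] * weight for req, weight in weights.items())
csi_percent = index_out_of_10 / 10 * 100

# Satisfaction gaps (importance minus satisfaction); the largest gap is the single PFI.
gaps = {req: importance[req] - satisfaction[req] for req in importance}
pfi = max(gaps, key=gaps.get)

print(f"Weighted satisfaction index: {csi_percent:.1f}%")
print(f"Priority for improvement: {pfi} (gap {gaps[pfi]:.1f})")
```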

Conclusions

1. A mirror survey involves employees in the CSM process and identifies understanding gaps, which can be very serious if staff under-estimate the importance of a customer requirement or are complacent about the level of customer satisfaction they are delivering.

2. It is very important to achieve a response rate of at least 50% for a mirror survey, so employees' anonymity should be protected and paper questionnaires should be collected in sealed envelopes.

3. Mirror surveys often identify staff training needs and provide great information for developing the content of training courses. Comments as well as understanding gaps are extremely useful for this purpose.

4. Employees will be more motivated to improve customer satisfaction if survey results and customers' comments can be attributed to them personally, or at least to small teams.

5. Extensive feedback of CSM results to all employees is an essential pre-requisite to improving customer satisfaction.

6. Customer satisfaction-related pay will be very effective in motivating staff to make efforts to improve customer satisfaction. A company-wide scheme is usually the best starting point but individual, team or departmental-based schemes will work best in the long run.

7. Targets must be achievable. For most organisations, improving the customer satisfaction index by more than 2% year-on-year is not realistic.

8. To keep the spotlight on customer satisfaction, monthly or quarterly measures are advisable in B2C markets, but for most B2B companies annual surveys are more practical.

9. Fundamental to the success of customer satisfaction-related pay will be a credible CSM process that asks the right questions to the right customers, with samples of at least 200.

10. Providing good service to internal customers is essential to achieving high levels of external customer satisfaction, so many organisations have now adopted a formal process for monitoring internal customer satisfaction.

References

1. Schneider, Ashworth, Higgs and Carr (1996) "Design, validity and use of strategically focused employee attitude surveys", Personnel Psychology 49
2. Schneider and Bowen (1985) "Employee and customer perceptions of service in banks", Journal of Applied Psychology 70
3. Schmit and Allscheid (1995) "Employee attitudes and customer satisfaction: Making theoretical and empirical connections", Personnel Psychology 48
4. Trice and Beyer (1993) "The cultures of work organisations", Prentice-Hall, Englewood Cliffs, New Jersey
5. Schneider, Wheeler and Cox (1992) "A passion for service: Using content analysis to explicate service climate themes", Journal of Applied Psychology 77(5)
6. Schneider and Bowen (1995) "Winning the Service Game", Harvard Business School Press, Boston
7. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
8. Rynes and Gerhart (2000) "Compensation in Organizations: Current Research and Practice", Jossey-Bass, San Francisco
9. Robertson, Raymond (2007) "The Together Company", Cogent Publishing, Huddersfield
10. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
11. Barwise and Meehan (2004) "Simply Better: Winning and keeping customers by delivering what matters most", Harvard Business School Press, Boston
12. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York
13. Schneider, White and Paul (1998) "Linking service climate and customer perceptions of service quality: Test of a causal model", Journal of Applied Psychology 83(2)
14. Gronroos, C (1990) "Relationship approach to marketing in service contexts: The marketing and organizational behaviour interface", Journal of Business Research 20
15. Heskett, Sasser and Hart (1990) "Breakthrough Service", Free Press, New York
16. Hallowel and Schlesinger (2000) "The Service-profit chain: Intellectual roots, current realities and future prospects" in Swartz and Iacobucci (eds) "Handbook of Services Marketing and Management", Sage, Thousand Oaks, California
17. Reynoso and Moores (1995) "Towards the measurement of internal service quality", International Journal of Service Industry Management 6


CHAPTER SEVENTEEN

Involving customers

In Chapter 7 we emphasised the importance of involving customers from the outset and explained how to introduce the survey to customers to achieve the best possible response rate. We also emphasised that a key part of the introductory letter is the promise of feedback after the survey, and we will cover this important part of the CSM process in this chapter.

At a glance
In this chapter we will:

a) Explore the debate about whether customers' perceptions provide a reliable measure of an organisation's performance.

b) Consider the difference between ‘performance gaps’ and ‘perception gaps’.

c) Explain how to provide feedback to customers on the results of a customer satisfaction survey.

17.1 Perception or reality?

There has been much debate over the years about the extent to which customer satisfaction measures provide an accurate reflection of the organisation's performance or whether they are subjective judgements on the part of customers that may not reflect reality and therefore should be treated with caution, if not ignored as totally unreliable. The idea of measuring customer satisfaction originally grew out of the quality movement in the USA in the 1980s, when consistency of quality became seen as a key reason why Japanese manufacturers seemed to be more successful than their American competitors1. Quality measures tended to be factual and objective, typically concerned with the extent to which products conformed to specification. This became known as the 'technical approach'2 to quality management. Over time, however, most quality academics and practitioners came to favour the 'user-based approach' rather than the 'technical approach', on the grounds that the only measure of quality that mattered was the level of quality perceived by the customer2,3. Perceptions are basically mental maps made by people to give them a meaningful picture of the world on which they can base their decisions4. However, as Kotler points out, due to the way people see and remember things, two people may have differing perceptions of the same event or quality level5.


A very early subscriber to the user-based approach was Tom Peters and it was this principle that prompted him to coin his famous phrase6 "customer perception is the only reality". He emphasised that whilst customers' judgements may be "idiosyncratic, human, emotional, end-of-the-day, irrational, erratic", they are the attitudes on which customers everywhere base their future behaviours. As Peters says, the possibility that customers' judgements are unfair is scant consolation once they have taken their business elsewhere. The notion that due to time constraints most purchase decisions, in business as well as consumer markets, are made on less than perfect knowledge is widely supported in CSM literature. Customers rely on their memory to provide a level of information that makes them feel comfortable when making most purchase decisions7,8,9. As a consequence, it is customers' perceptions that organisations need to measure and customers' perceptions that they must attempt to manage.

KEY POINTCustomers’ perceptions may be “idiosyncratic and emotional” but companieswill dismiss them at their peril since they drive customers’ future behaviours.The fact that the organisation’s internal data provides a more accurate reflectionof its performance is scant consolation once the customers have defected.

This sequence of events was confirmed by AT&T as long ago as the late 1980s. They found that real changes in product quality, as defined by internal quality assurance data, drove subsequent changes in customers' perceptions of quality with an average three-month time lag10. AT&T also demonstrated that changes in customers' perceptions of quality were followed only two months later by changes in market share.

17.2 Performance and perception gaps

In this book we have placed considerable emphasis on the satisfaction gaps that exist when an organisation has not met its customers' requirements. However, since satisfaction judgements are based on customers' perception or recollection of events, we can distinguish two types of satisfaction gap – performance gaps and perception gaps. Most satisfaction gaps are performance gaps. For example, customers think the service in the restaurant is slow, and, due to inadequate staffing, it often is very slow. This is a performance gap and the only way to successfully close it is to invest in more staff to produce a real improvement in the speed of service.

Sometimes, however, satisfaction gaps will be perception gaps. This typically arises when an organisation has improved its performance but customers have not yet modified their attitudes. If a restaurant has a reputation for mediocre food, it may take several visits before customers revise their perception of food quality after a more skilled chef has been recruited. If it is possible that customers' perceptions may not be accurate or up to date, companies cannot assume that delivering high quality, excellent service and great value will guarantee customer satisfaction and loyalty. It will do so only if that's how customers perceive it. Suppliers therefore need a two-pronged approach to increasing customer satisfaction. Of course, they must deliver high quality, excellent service and great value, but they must also use communications to make sure that's how the customers see it too. As far as the CSM process is concerned, the main opportunity for influencing customers' perceptions comes from providing feedback on the survey results.

KEY POINT
Delivering high quality, excellent service and great value will result in customer satisfaction and loyalty only if that's how customers perceive it.

17.3 Feedback to customers

Informing customers about the CSM results increases interest in the survey but also provides an excellent opportunity to improve customer satisfaction by demonstrating the organisation's commitment to its customers. When providing feedback on a customer satisfaction survey, companies need to consider three things:

Which customers should receive feedback?
What information will be communicated?
How will it be communicated?

17.3.1 Which customers?

At the very least, feedback should be provided to all customers who took part in the survey. If the survey was an anonymous self-completion survey, the identity of the respondents will not be known, so targeted feedback to customers who actually took part will not be possible. If an agency has carried out the survey, respondent confidentiality can be assured without anonymity, so the agency would know which customers had responded and could send them feedback. A second possibility is simply to provide feedback to all customers in the sample, whether or not they responded, but if feedback is to be provided to non-respondents in the sample, why not to customers generally? For organisations with a very large customer base the obvious answer is cost. As with internal feedback, the pertinent question is whether the cost can be justified by the benefit. Many organisations fail to realise the potential value of feeding back the CSM results to the entire customer base. Most proactive communications that companies send to customers are selling messages, and are recognised as such by customers, who often have a cynical view about advertising, mailshots, promotions and other forms of marketing. Companies, however, invest huge budgets in marketing communications, often to little effect. Feeding back the results of a customer satisfaction survey provides a rare opportunity to send a different kind of communication to customers. Since it is not a selling message, it is more likely to engage customers' attention and interest and, consequently, to drive a positive change in their attitudes about the organisation.

KEY POINT
Providing CSM feedback to customers is one of the most under-exploited opportunities for improving customer satisfaction.

17.3.2 What information?

The starting point is to produce a short feedback report containing the information that will be provided to customers. This should cover four areas, which could be presented to customers in the following way:

1. Why do we survey customers?
2. How is the survey done?
3. What did you tell us?
4. What action are we taking?

Why do we survey customers?
This short introductory paragraph provides a great opportunity to improve customers' perception of the organisation's customer focus by emphasising that it is listening to customers and values their opinions highly. Assuming the survey is conducted on a regular basis, this should be explained and used to demonstrate the fact that continuous customer feedback forms a key input to management decisions. The date of the survey (or period to which the results apply) should also be stated.

How is the survey done?
If the survey is conducted by independent experts this will enhance its credibility, so this is the first point to make in this section. The second is the fact that the questions were based on what's important to customers, as specified by the customers themselves during a thorough consultation process. Brief details of the exploratory research (e.g. focus groups or depth interviews) should be provided to underpin this second important element in the survey's credibility. The third factor that builds customers' trust in the survey results is the representativeness of the sample; information that can be effectively illustrated by pie charts. Finally, the method of survey – e.g. telephone interviews, postal questionnaire, web survey – should be briefly stated.

What did you tell us?
Here the results of the survey should be reported factually and honestly, usually in the form of a clear, simple bar chart of the satisfaction scores. Whilst a truncated x-axis scale may be used for internal feedback to emphasise differences (as in Figures 10.4 and 12.1, for example), the full 1 to 10 scale should be used for the feedback report. This is purely for PR purposes since the wider scale results in longer bars that make the scores look better! As we have said before, it is better internally to make the scores look worse since this is more likely to stimulate action to improve satisfaction. Another divergence from internal presentation is that the requirements would be listed in questionnaire order rather than importance order since this will appear more logical to customers. In fact, it isn't necessary to feed back any information on importance or impact. Customers will be interested in how satisfied other customers are, whether that's improving and what's being done about it. Trend information, especially for the satisfaction index, will therefore be very useful for demonstrating improvement. In this respect it is very helpful to communicate the message that the organisation listens to its customers and acts on their feedback. Any specific actions that have been implemented from an earlier survey which help to explain higher satisfaction scores provide powerful evidence of the organisation's customer focus, so should be highlighted. It's in this way that the post-survey feedback begins to shift customers' perceptions about the organisation.

What action are we taking?
The "you told us so we are taking action" theme should be continued in this final, and most essential, part of the feedback report. It is helpful to provide as much detail as possible about the actions to be taken and the timescales for implementation. Informing customers about changes that will occur, or, for fast-moving organisations, have already happened, will help to ensure that they notice the improvements, modify their perceptions and become more satisfied and loyal.

KEY POINT
To enhance the organisation's reputation for customer focus, emphasise the message that it listens to customers and acts on what they say.

17.3.3 How to communicate it?

How the information is provided depends mainly on the size of the customer base. Personal presentation is by far the most effective method and is quite feasible for companies with a small number of key accounts. For a medium sized customer base, a copy of the feedback report should be mailed with a personalised letter. If very large numbers of customers are involved, mass market communications will need to be used. These might include a company newsletter or a brief survey report mailed with another customer communication such as a bill. Retailers and other organisations whose customers visit them can cost-effectively utilise point of sale material. This may include posters, leaflets, display cards or stands. Moreover, customer contact staff can be briefed to enhance the feedback through their verbal communications with customers. Point of sale displays might, for example, encourage customers to ask staff for further details. Even TV advertising has been used to communicate the survey results, and the fact that action is being taken, to very large customer bases.

A very low cost method of providing feedback to customers is a web page. This could have links to other parts of the web site and could be signposted elsewhere using any of the media mentioned above. It is low cost and easy to update. Examples of web and paper feedback reports can be found at www.leadershipfactor.com.

17.4 Other communications

Although feedback of the CSM results is extremely helpful, it should not be seen as the sum total of the organisation's efforts to use communications to modify customers' perceptions. Although people can form negative attitudes quickly following a bad customer experience, they tend to change them slowly, especially when it comes to feeling more satisfied. Consequently, organisations must do everything possible to speed up customers' attitude change and improve satisfaction by providing regular information about improvements that have occurred. All the channels of communication mentioned in the previous section should be considered and messages should emphasise the theme of listening to customers and acting on their views. It will also be useful to reinforce this principle by reminding customers about any existing information on CSM results, such as a web feedback page, and any opportunities for them to express their views such as an email address, toll-free number etc.

Conclusions

1. Satisfaction surveys provide a measure of customers' perceptions about their customer experience.

2. Whilst perceptions may not always be an accurate reflection of the organisation's performance, they drive customers' future behaviour so are the most useful measures to monitor.

3. As well as taking action to improve their performance, organisations should recognise that communications also provide excellent opportunities for improving customer satisfaction.

4. Providing information on the satisfaction survey results and the actions to be taken should be seen as an essential part of the CSM process.

References

1. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
2. Helsdingen and de Vries (1999) "Services marketing and management: An international perspective", John Wiley and Sons, Chichester, New Jersey
3. Oliver, Richard L (1997) "Satisfaction: A behavioural perspective on the consumer", McGraw-Hill, New York
4. Berelson and Steiner (1964) "Human Behaviour: An Inventory of Scientific Findings", Harcourt Brace Jovanovich, New York
5. Kotler, Philip (1986) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey
6. Peters and Austin (1986) "A Passion for Excellence", Fontana, London
7. Howard and Sheth (1969) "The Theory of Buyer Behaviour", John Wiley and Sons, New York
8. Webster and Wind (1972) "Organizational Buying Behavior", Prentice-Hall, Englewood Cliffs, New Jersey
9. Bagozzi, Gurhan-Canli and Priester (2002) "The Social Psychology of Consumer Behaviour", Open University Press, Buckingham
10. Gale, Bradley T (1994) "Managing Customer Value", Free Press, New York


CHAPTER EIGHTEEN

Conclusions

Customer satisfaction refers to the feelings customers have formed about a customer experience. These feelings are attitudes and they drive behaviours such as Harvard's 3Rs (retention, related sales and referrals) that are typically called loyalty. The purpose of CSM surveys is not to produce information but to improve customer satisfaction and loyalty. Whilst we have tried in this book to familiarise readers with all the current thinking behind customer satisfaction surveys and the references that underpin it, this level of knowledge goes beyond what most practitioners need in the real world. To successfully use CSM to improve customer satisfaction and loyalty, organisations do need to be very clever with the methodology, only quite clever with the analysis and very simple with the outcomes and recommendations. They must also never forget the power of communications. In this concluding chapter, we will review these critical essentials of CSM and pose a final challenge to readers.

At a glance
In this chapter we will:

a) Remind readers of the essential elements of a CSM methodology that will accurately reflect how satisfied or dissatisfied customers feel.

b) Review the arguments for relating the level of complexity of CSM analysis to the organisation's progress on its customer satisfaction journey.

c) Set some challenges for organisations in the public and private sectors.

18.1 The essentials of an accurate CSM methodology

If a CSM process does not provide a totally accurate reflection of how satisfied or dissatisfied customers feel, it is pointless and could even be detrimental to the organisation's future success. To ensure they don't fall at the first hurdle, organisations should adhere to all 10 of the CSM essentials listed below:

1. To produce an accurate measure of how satisfied or dissatisfied customers feel, surveys must be based on the same criteria the customers use to make that judgement. This means conducting exploratory research to understand the lens of the customer and basing the questionnaire on customers' most important requirements.

2. Since satisfaction is about the extent to which customers' requirements have been met, importance as well as satisfaction must be measured.


3. Stated importance is the only measure of the relative importance of customers' requirements. So-called statistically derived measures of importance actually measure impact.

4. The most accurate measure of relative impact is provided by correlation, not multiple regression (see the sketch after this list).

5. Robust samples are essential for reliability. This means at least 200 responses and a response rate of at least 30%.

6. To ensure there is no bias in the results, customers should be randomly sampled using a method such as systematic random sampling that produces samples that are representative as well as random.

7. The only rating scale that is suitable for CSM is a 10-point numerical scale. This is partly for its analytical benefits but mainly due to its far superior properties for improving customer satisfaction.

8. Questioning must be neutral and balanced, giving customers as much chance to be dissatisfied as satisfied.

9. Only an index provides a reliable overall satisfaction measure for tracking purposes. The customer satisfaction index is based on customers' most important requirements weighted for relative importance.

10. A loyalty measure should also be based on a composite index from several loyalty questions but would not normally be weighted.
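The sketch below illustrates the distinction drawn in points 3 and 4: stated importance is asked of customers directly, whereas impact is derived by correlating each requirement's satisfaction scores with overall satisfaction. The respondent scores are invented for illustration and the snippet uses only the Python standard library; in practice the calculation would be run on the full survey dataset.

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented 10-point scores from eight respondents.
overall = [8, 9, 6, 7, 9, 5, 8, 7]
requirements = {
    "Product quality":   [8, 9, 7, 7, 9, 6, 8, 7],
    "Speed of delivery": [9, 8, 5, 6, 9, 4, 7, 7],
    "Friendliness":      [7, 9, 8, 8, 8, 7, 9, 6],
}

# Derived impact: how strongly each requirement correlates with overall satisfaction.
for name, scores in requirements.items():
    print(f"{name}: impact r = {pearson(scores, overall):.2f}")
```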

These 10 essential steps to a reliable measure of customer satisfaction apply to all organisations, but when it comes to analysis the picture is more complicated, as explained in the next section.

18.2 Complexity of analysis

The level of complexity that organisations need to use when analysing CSM data depends on two factors: how satisfied their customers are and how long they've been measuring customer satisfaction. Of these, the former is much more important.

18.2.1 Level of customer satisfaction

The less satisfied customers are, the simpler the analysis can and should be. Organisations with a customer satisfaction index below 75% (calculated according to the method specified in Chapter 11) should keep customer satisfaction surveys very basic and action-focused. The only reason for such poor customer satisfaction is that one or more very important, and often basic, customer requirements are not being met. The organisation is simply not doing best what matters most to customers. The causes of this problem will be obvious from the simple gap analysis explained in Chapter 12. However long the organisation has been conducting customer satisfaction surveys, no amount of sophisticated statistical analysis will change this basic fact. All available time, effort and resources should be directed to addressing the largest satisfaction gaps (three at most), rather than indulging in pointless examination and debate of detailed data or clever statistical analysis. Full utilisation of the techniques outlined in Chapters 16 and 17 to involve employees and customers will also be very helpful.

18.2.2 The maturity of the CSM process

If organisations do take effective action on their PFIs and start doing best what matters most to customers, their customer satisfaction levels will improve. Consequently, many organisations find that as their CSM process matures, they have addressed the obvious satisfaction gaps, customer satisfaction has improved, and it is becoming increasingly difficult to improve it further. This is normal. As we pointed out in Chapter 11, the higher customer satisfaction is, the more difficult it becomes to improve it, so targets have to be reduced as satisfaction increases. Organisations in this situation will also typically find that the PFIs become less obvious, average scores and overall indices provide insufficient granularity and it starts to become difficult to recommend actionable outcomes. This is when the analysis of CSM data needs more time and sophistication. Survey outcomes that are particularly useful to organisations at this stage include:

Customer experience modelling (CEM)
As we explained in Chapter 15, this technique is very helpful for isolating highly specific and tangible actions that can be implemented by the organisation as well as for demonstrating progress as the actions start to make a difference to customer satisfaction. CEM can also be used to identify the effect of improvements on customers' loyalty as well as on their overall satisfaction.

Internal benchmarking
In the authors' experience, one of the main factors explaining the success of organisations with very high levels of customer satisfaction is their effective use of internal benchmarking. When customer satisfaction is very high, company-wide action on PFIs is often wasteful since many parts of the organisation will already be meeting or exceeding customers' requirements in those areas. It is therefore more effective to focus all efforts and investment on improving the performance of business units, stores, branches etc with the lowest customer satisfaction. To make this work, internal customer satisfaction league tables are necessary to focus the attention of managers in the poorer performing units. Also necessary is a very positive culture, demonstrating that the league tables are about opportunities for improvement, not about blame or naming and shaming. The culture must also ensure that units with high customer satisfaction will not guard their secrets but will gain reward and recognition through sharing them with less successful colleagues, e.g. through a customer satisfaction mentoring scheme.

Satisfaction enhancers
When organisations have very high levels of customer satisfaction, it is very likely that they are meeting all fundamental customer requirements. They will already be doing best what matters most to customers. As explained in Chapter 14, there is often little return on investment from improving satisfaction maintainers beyond a good level, so companies with very high customer satisfaction may need to focus on achieving exceptional performance on satisfaction enhancers such as 'helpfulness of staff' or 'treating me as a valued customer'.

Drilling down
Companies with particularly high levels of customer satisfaction and a mature CSM process will need to sharpen their focus even more. Using the asymmetric analysis explained in Chapter 14, they need to focus actions on where the best returns can be made. At very high levels of satisfaction, average scores and an overall index lose their utility – they are always very good, but the averages often mask the fact that the company is not totally consistent in delivering great customer experiences. Asymmetric analysis enables companies to target customers at a particular level of satisfaction, such as moving those in the 'zone of indifference', scoring 7s and 8s, into the 'zone of affection', where they score 9s and 10s for satisfaction and are much more loyal (a minimal illustration follows this list). Alternatively, it could mean focusing on a specific demographic segment, a group of customers with unique requirements that are not being fully met or a behavioural segment such as customers who transact with a certain frequency or via a specific channel.

Competitive advantage
Occasionally companies have succeeded in making customers highly satisfied, but their market is so competitive that customers are extremely promiscuous, spreading their category spend across several suppliers. They may even be loyal to more than one of the competing suppliers. In these circumstances companies need detailed information on customers' attitudes about all their main competitors. As described in Chapter 13, actions will often be focused on customer requirements where they under-perform competitors rather than the areas of lowest customer satisfaction. Decision tree analysis is often a particularly useful aid to targeting in these circumstances.
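As a minimal illustration of the drill-down idea, the sketch below groups respondents into satisfaction zones and compares average loyalty between them. All of the respondent data is invented, and the zone boundaries simply follow the 7-8 versus 9-10 split described above.

```python
# Invented respondent-level data: (overall satisfaction score out of 10, loyalty index %).
respondents = [(10, 95), (9, 92), (8, 74), (8, 70), (7, 66), (9, 90),
               (6, 52), (10, 97), (7, 61), (8, 72), (5, 40), (9, 88)]

def zone(score):
    """Assign a respondent to a satisfaction zone based on their overall score."""
    if score >= 9:
        return "zone of affection (9-10)"
    if score >= 7:
        return "zone of indifference (7-8)"
    return "below 7"

zones = {}
for score, loyalty in respondents:
    zones.setdefault(zone(score), []).append(loyalty)

# Average loyalty per zone shows where moving customers up a zone would pay off.
for name, loyalties in sorted(zones.items()):
    print(f"{name}: {len(loyalties)} customers, average loyalty "
          f"{sum(loyalties) / len(loyalties):.0f}%")
```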

18.3 The challenges

18.3.1 Challenges for the public sector

One of the strongest conclusions from the authors' many years of involvement in customer surveys is that for most organisations customer satisfaction isn't a real priority. They talk the talk very well but don't walk it. In the public sector especially, measures abound, but they're typically poor to useless. Their real purpose is to satisfy Government or regulators, not to improve customer satisfaction. This is evidenced by the frequent use of 5-point verbal scales and headline measures based on percentage satisfied, making mediocre performers look good or very good when results are reported. Yet they are very far from good. Most organisations in the public sector have very low levels of customer satisfaction. This is evidenced by an Institute of Customer Service study (the pre-cursor to the UKCSI), which showed that local councils had by far the lowest levels of customer satisfaction across the ten sectors measured1.

The challenges for the public sector are:

1. Adopt a consistent CSM methodology across the public sector instead of the plethora of incomparable measures that currently exist.

2. Have the guts to use a tough measure that accurately reflects how satisfied or dissatisfied customers feel, especially the use of a 10-point scale and basing the questions on the lens of the customer.

3. Abandon the obsession with the minutiae in the data and focus instead on action to address the obvious PFIs.

18.3.2 Challenges for the private sector

Even in the private sector, where there is plentiful evidence of the vastly superior profitability of highly satisfied customers, few companies go more than a perfunctory extra mile to achieve it. Very few follow all the customer satisfaction essentials listed above and even fewer put their money where their mouth is and include any element of customer satisfaction-related pay in employees' reward packages. Virtually none makes any serious attempt to link customer satisfaction to the company's financial performance, which would, of course, demonstrate its benefits to everyone in the organisation.

The challenges for the private sector are:

1. Since the goal is to improve customer satisfaction and loyalty rather than measure it, never forget the Harvard dictum that dissatisfaction with the status quo is an essential pre-cursor to change. So, resist the temptation to be popular by announcing good customer satisfaction news supported by great comments from satisfied customers. Instead focus the methodology on producing the maximum help to satisfaction improvement efforts. Have a tough measure on a 10-point scale, benchmark your company against the best, not just your own sector, and study the comments from dissatisfied customers to understand how to improve.

2. Take the long term view, not the short term one. Customer loyalty is very hard won but its value builds over time. Cost cutting may boost the bottom line now, but if there is any negative impact on customers it will come back to bite you.

3. Have the guts to use internal benchmarking in a big way. Don't let the poor performers on customer satisfaction hide. Publish high profile league tables sponsored from the top. Use them for reward and recognition. It will make managers take notice. But use them positively, saving the best rewards for the league leaders who coach and share secrets with the poor performers.

4. If you’re in the top quartile, and especially if you’re in a very competitivemarket, the best returns will come from the highest levels of customersatisfaction. It will therefore pay to invest heavily in the more advanced

Chapter eighteen 5/7/07 10:04 Page 293

Page 301: Customer Satisfaction

methodologies covered in Chapters 13-15 of this book. Reduce the focus onaverage scores and the overall index and use asymmetric analysis and decisiontree analysis to pinpoint the best customers to target, CEM to fine tune actionimplementation and competitive analysis to defend your own vulnerablecustomers and to identify competitors’ customers who are most likely to beattracted to your own company’s strengths.

5. Be consistent. Many companies could advance from good to great on customer satisfaction just by being more consistent. Drilling down into their results shows that most customers are highly satisfied but some are much less satisfied, or that most customer experiences are excellent but a few have a very poor experience. Clearly, the company has the systems, processes and people to achieve very high levels of customer satisfaction on average, but it doesn't always happen. Strong focus on the customers giving low or below target scores, with extensive use of CEM to pinpoint and eliminate the behaviours causing these problems, will move customer satisfaction from good to great and boost profits too.

6. Top management must be the biggest champions of customer satisfaction. What is seen to matter most at the top will drive the behaviours at all levels through the organisation. Points 7 and 8 are great ways for top management to demonstrate how important customer satisfaction is to the company.

7. Don’t just reward the managers. Include an element of customer satisfaction-related pay for all employees.

8. Invest in producing an accurate measure of Customer Lifetime Value. Calculate precisely how it relates back to customer satisfaction and forwards to profitability. Make it the cornerstone of the company's growth strategy. This will almost always show that highly satisfied customers are very profitable but that less satisfied ones (probably due to the inconsistency highlighted in point 5) are very costly to service. As well as reducing the high costs of customer dissatisfaction, CLV enables companies to do best what matters most … to those who matter most.

18.3.3 Challenge the authors

Whether you are offended by our last section, intrigued by any parts of this book or even inspired to take action, you may want to air your views, challenge the authors or debate with other like-minded professionals. If so, this book's website, www.customersatisfactionbook.com, is the place for you. It also provides links to other useful customer satisfaction sources such as events and training courses as well as news and updates about customer satisfaction and loyalty.

References

1. ICS Breakthrough Research Report (2006) "Customer Priorities: What customers really want", Institute of Customer Service, Colchester


Glossary

Ambiguous questions
A question which may confuse respondents, or which they may understand in a different way to that intended. For example, 'which newspapers do you read regularly?' – the meaning of the word 'regularly' is unclear.

Attitudinal questions
Questions that seek to understand attitudes, motives, values or beliefs of respondents.

Average
Correctly termed the arithmetic mean.

Baseline survey
Comprehensive customer survey carried out periodically to establish or update key benchmarks such as customers' priorities and organisational performance.

Behavioural questions
Questions that are concerned with what people do, as opposed to what they think.

Bivariate analysis
The analysis of the relationship between two variables – e.g. correlation.

Census
A survey of the entire population.

Classification questions
Used both for sampling and analysis, they serve as a check that the sample is representative (for example in terms of gender, age and social grade) and also form the basis of breakdown groups for cross-tabulations.

Closed questions
Questions to which respondents are asked to reply within the constraints of defined response categories.

Code of Conduct
The MRS Code of Conduct (available at http://www.mrs.org.uk/standards/guidelines.htm) consists of a set of rules and recommendations adhered to by the members of the society. The code prevents research being undertaken for the purpose of selling, and covers issues of client and respondent confidentiality.


Coding
The process of allocating codes to answers in order to categorise them into logical groups. For example, if the question was 'why are Xyz the best supplier?' coding might group answers under 'Product quality', 'Service quality', 'Lead times' etc.

Collinearity
A data condition that arises when independent variables are strongly related and is a problem when building regression models, leading to unstable beta coefficients. Approaches to counter this problem include factor analysis and ridge regression.

Confidence interval
The range either side of the sample mean within which we are confident that the population mean will lie. Usually this is reported at the 95% confidence level; in other words, we are sure that if we took 100 similar samples then the mean would fall into this range 95 times. Or, more simply, we are 95% sure that the population mean falls in this range.
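A minimal numeric illustration of the calculation, assuming a simple random sample and the normal approximation described under Normal distribution below; the satisfaction scores are invented and 1.96 is the standard multiplier for the 95% level.

```python
from math import sqrt
from statistics import mean, stdev

scores = [8, 7, 9, 6, 8, 7, 10, 8, 7, 9]           # invented 10-point satisfaction scores
margin = 1.96 * stdev(scores) / sqrt(len(scores))  # 95% margin of error
print(f"Mean {mean(scores):.2f}, 95% confidence interval "
      f"{mean(scores) - margin:.2f} to {mean(scores) + margin:.2f}")
```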

Consumer markets
Markets where the purchase is made by an individual for his or her own consumption or for the consumption of family, friends etc.

Convenience sample
A sample selected merely because it is convenient; such samples are liable to bias.

Correlation
When correlating two variables we measure the strength of the relationship between them. The correlation coefficient is in the range –1 to +1, with the absolute value indicating the strength. A negative coefficient indicates an inverse relationship (i.e. as one goes up the other goes down), 0 indicates no relationship and a positive coefficient indicates a positive relationship. In CSM we would only expect to find positive coefficients. The most common type of correlation is Pearson's Product Moment.

Creative comparisons
A projective technique in which respondents are asked to liken an organisation to something (frequently a car or an animal) and give reasons, which is what the researcher is interested in. For example: 'If Xyz was a car, what kind of car would it be? Why?' – "A Ford Mondeo, because it does its job, but it's unexceptional, there are lots of others that would do just as well."

CSM
Acronym for Customer Satisfaction Measurement.


Customer Loyalty
Customer loyalty has been achieved when an organisation is the preferred supplier for a customer, when the customer values his/her relationship with the organisation and enjoys dealing with it, and when the customer is prepared to go out of his/her way to recommend and use the supplier.

Customer Satisfaction Index (CSI)
A customer satisfaction index is the best headline measure of overall satisfaction. It is an average satisfaction score weighted according to the importance customers place on its component requirements. As a composite measure it is more sensitive and more reliable than any single measure.

Customer Satisfaction Measurement
Customer satisfaction measurement is a measure of how your organisation's "total product" performs in relation to a set of customer requirements. In other words, are you delivering what customers want?

Dependent variable
A variable that is assumed to be explained by a number of items (independent variables) also measured. 'Overall satisfaction' is the usual dependent variable in CSM.

Depth interview
A loosely structured, usually face-to-face interview used in exploratory research in business markets, or if the subject matter is considered too sensitive for focus groups.

Derived importance
Derived importance is based upon the covariation between an outcome variable and a predictor variable. It is usually established by correlation or multiple regression.

Desk research
Research into secondary data, for example Mintel reports.

Diagrammatic scale
Also known as a graphic scale, a form of scale without numerical or verbal descriptors but which uses pictures, lines or other visual indicators.

Discussion guide
The document used by the moderator of a focus group as the equivalent of an interview script, though it is much less structured and prescriptive.

DMU (Decision making unit)
A group (formal or informal) of individuals involved in a purchasing decision.


Double questions
Questions which have more than one aspect, for example 'were the staff friendly and helpful?' – what if the staff were friendly but not helpful?

ESM
Acronym for Employee Satisfaction Measurement.

Exploratory research
Research undertaken prior to the main survey in order to gain understanding of the subject. In CSM, exploratory research should be used to understand what customer requirements are.

Face-to-face interview
An interview conducted in person, often at the respondent's home or office or in the street.

Facilitator
See Moderator.

Factor analysis
Used to examine relationships in a set of data to identify underlying factors or constructs that explain most of the variation in the original data set. Factors are usually uncorrelated or weakly correlated with each other. Factor scores can be calculated and used in order to eliminate the problem of collinearity in data and reduce the number of variables.

Feedback
Communicating the results of the survey – usually both internally and outside the organisation.

Focus group
A mainstay of qualitative research, used at the exploratory stage. A group of around eight people is guided in a discussion of topics of interest by a trained facilitator/moderator. Used for exploratory CSM in consumer markets.

Friendly Martian
A projective technique in which respondents are asked to advise a friendly alien on the process of interest (say getting a meal at a restaurant), covering all the things he should do, what he should avoid and so on. Since the Martian has no assumed knowledge, the respondent will include things that are normally taken for granted.


Gap analysis
Achieved by subtracting satisfaction scores from importance scores to reveal where satisfaction is most falling short of requirements. Requires interval-level data.

Group discussion
See Focus group.

Independent variable
One of a battery of questions assumed to explain variance in an 'outcome' variable such as overall satisfaction – with CSM data these are usually individual requirements such as 'product quality'.

Internal benchmarking
Data gathered internally and used to quantify and monitor aspects of service performance such as delivery reliability.

Interval data
Numerical scales whose response options are equally spaced, but there is no true zero – e.g. the Celsius scale, the ten-point numerical scale.

Item
A question on the questionnaire.

Kruskal's relative importance
One measure of relative importance. Produces the squared partial correlation averaged over all possible combinations of the predictor variables in a regression equation. Computationally very intensive.

Latent Class Regression
LCR allows us to identify homogeneous subsets of people in the data who form opinions in the same way, and build separate regression equations for each of these groups. A very young technique that promises to revolutionise the way models are built, though as yet unproven.

Latent variable
A variable of interest that cannot be directly measured (for example intelligence) but has to be estimated through procedures such as factor analysis applied to a number of manifest variables deemed to be 'caused' by the latent variable (e.g. reading speed, exam results, etc.). Latent variables usually form the basis of Structural Equation Models.

Leading questions
A question that is prone to bias respondents to answer in a particular way, often positively. For example, 'how satisfied were you…' as opposed to 'how satisfied or dissatisfied were you…'.


Likert scale
A scale running from 'strongly agree' to 'strongly disagree' on which respondents rate a number of statements. These should be a combination of positive and negative statements to avoid bias.

Linear regression
See Regression; assumes that the relationship between variables can be summarised by a straight line.

Mean
The most common type of average – the sum of scores divided by the total number of scores.

Median
The central value in a group of ranked data – useful for ordinal-level data. On some occasions the median may be a 'truer' reflection of the norm than the mean – for instance, average income is usually a median, since the mean is distorted by a few people with very large salaries.

Mode
The most commonly occurring response.

Moderator
The researcher leading a focus group.

MRS
The Market Research Society (http://www.mrs.org.uk) – the professional body for market researchers in the UK. Implements the Code of Conduct by which most researchers abide and offers professional qualifications.

Multidimensional scaling (MDS)
This can be thought of as an alternative to factor analysis. In a similar way it aims to uncover underlying dimensions in the data, but a variety of measures of distance can be used. A common example is to take a matrix of distances between cities (such as that found at the front of a road atlas). Using MDS, an analysis in two dimensions would produce something very similar to a map.

Multiple regression
An extension of simple regression to include the effects of more than one predictor on an outcome variable.


Multivariate analysis
The analysis of relationships between several variables – e.g. factor analysis.

Mystery shopping
Also called 'mystery customer research' in business-to-business markets. Involves collection of information by posing as ordinary customers.

Nominal data
Scales that only categorise people, but have no logical ordering – e.g. Male/Female.

Non-response bias
A major potential source of bias, particularly in postal surveys, in that responders' opinions may differ from non-responders'. For example, it is typically those with extreme opinions who respond, or those who feel most involved with your organisation.

Normal distribution
Graphically represented as a bell curve. Most data has a tendency to fall into this pattern, with people clustering around the mean. The shape of this curve for a variable can be calculated from the mean and standard deviation. The characteristics of the normal distribution are that 68% of scores will be within 1 standard deviation of the mean and 95% will be within 2 standard deviations. This tendency is the basis of assumptions used in confidence interval estimation and hypothesis testing.

Numerical scale
A scale for which each response option has a numerical descriptor, commonly 1-5, 1-7 or 1-10. The endpoints are usually anchored to provide a direction of response, for example 'completely dissatisfied' and 'completely satisfied'.

Open questions
Questions to which respondents reply without explicit response categories. These are either coded at the time of interview into existing categories or post-coded.

Ordinal data
Response categories can be placed in a logical order, but the distance between categories is not equivalent – e.g. very likely – quite likely – not sure – quite unlikely – very unlikely.

Outcome variable
See Dependent variable.


Part correlation
See Semipartial correlation.

Partial correlation
The correlation between two numerical variables having accounted for the effects of other variables. This could be used to assess the independent contribution to overall satisfaction of 'staff friendliness' having removed a similar variable such as 'staff helpfulness'.

PFIs (priorities for improvement)
Those areas where improvements in performance would make the greatest contribution to increasing customer satisfaction.

Pilot surveys
A survey conducted prior to the main survey using the same instrument, used to assess the questionnaire for potential problems such as respondent confusion or poor routing of questions.

Population
The group from which a sample is taken, e.g. all of an organisation's customers for CSM.

Postal survey
Any survey in which the questionnaire is administered by post. A mail survey in American usage.

Post-coding
Coding the answers to a question after the survey is complete.

Pre-coding
The process of determining in advance the categories within which respondents' answers will fall.

Predictor variable
See Independent variable.

Primary data
Data collected specifically for the question of interest – the CSM survey produces primary data.

Probability sampling
See Random sampling.


Probing
A prompt from the interviewer to encourage more explanation or clarification of an answer. These do not suggest answers or lead respondents but tend to be very general: 'Anything else?', 'In what way?', or even just sounds such as 'uh-huh'.

Product
What is sold. It encompasses intangible services as well as tangible goods.

Projective techniques
Common in qualitative research, these are a battery of techniques that aim to overcome barriers of communication based on embarrassment, eagerness to please, giving socially-acceptable answers etc. Examples include theme boards, the 'Friendly Martian' and psychodrama.

Psychodrama
A projective technique also known as role playing. Participants are assigned roles and asked to improvise a short play.

Qualitative research
Research that aims not at measurement but at understanding. Sample sizes are small and techniques tend to be very loosely structured. Techniques used include focus groups and depth interviews.

Quantitative research
Research that aims to measure opinion in a statistically valid way, where the limits to the reliability of the measures can be accurately specified. Used at the main survey stage in CSM.

Quota sampling
A form of non-random sampling in which quotas are set for certain criteria in order to ensure that they are represented in the same proportions in the sample as they are in the population – for example a simple quota might specify a 40%-60% male-female split.

Random sampling
Every member of the population has an equal chance of being selected.

Ratio data
A scale that has a true zero – e.g. the Kelvin scale. You are unlikely to come across this type of data in CSM work.


Regression
A model that aims to assess how much one variable affects another. This is related to correlation, but implies causality.

Requirement
A single satisfaction/importance question.

Response rate
The number of admissible completed interviews, normally represented as a percentage of the number invited to participate.

Routing
Instructions to an interviewer (or respondent in self-completion questionnaires), usually directing them to the next question to be answered based on their previous responses.

Sample
The people selected from the population to be interviewed.

Sampling
Process of selecting a part, or subset, of a target population to investigate the characteristics of a population at reduced cost in terms of time, effort and money. A sample must therefore be representative of the whole.

Secondary data
Data that already exists, for example government statistics.

Self-completion questionnaire
A questionnaire that is completed by the respondent rather than by an interviewer. Usually postal surveys, though recent innovations allow web or email surveys to be used.

Semipartial correlation
The correlation between two variables with the effects of other variables removed from the predictor variable only.

SIMALTO scale
Acronym for Simultaneous Multi-Attribute Trade-Off. A complex scale that requires respondents to rate their expected, experienced and ideal levels of performance on a variety of key processes. Requires the presence of a skilled interviewer to be reliably completed.


Social grade The most common (though now somewhat dated) means of classifyingrespondents according to socio-economic criteria, based on the occupation of thechief income earner in a household. Classes are A, B, C1, C2, D and E, thoughthese are often grouped into four: AB, C1, C2, DE, or even two: ABC1 and C2DE.

Standard deviation The square root of the variance. It can be taken as the average distance that scoresare away from the mean. It gives us vital information to reveal the pattern ofscores lying behind a mean score.

Stratified sampling The population is divided into subgroups of interest and then sampled within these groups. This could be used to ensure that the sample is representative of the relative size/value of the subgroups.

Street interview A face-to-face interview conducted in the street or other public place.

Structural Equation Modelling (SEM) A close relation of Confirmatory Factor Analysis, this is a powerful technique for hypothesis testing, implemented through specialist software such as LISREL and AMOS. It is a state-of-the-art and very rigorous technique for testing models.

Sum The total of all the values for a question.

Systematic random sampling Divide the population by the required sample size (e.g. 4000/400 = 10), choose a starting point at random and then select every nth (e.g. 10th) person for interview.
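A minimal Python sketch of the procedure described above (4,000 customers, a sample of 400, so an interval of 10 and a random start within the first interval); the customer list passed in is hypothetical:

import random

def systematic_sample(population, sample_size):
    interval = len(population) // sample_size         # e.g. 4000 // 400 = 10
    start = random.randrange(interval)                # random starting point
    return population[start::interval][:sample_size]  # every nth person thereafter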

Theme board A projective technique involving the use of collages of pictures mounted on card to act as a starting point for a discussion among focus group participants. Pictures might vary from illustrative to metaphorical.

Total product The entire range of benefits that a company/organisation provides when the customer makes a particular purchase. In addition to the core product it may include added value benefits such as guarantees, fast delivery and free on-site maintenance.

Tracking Repeated surveys using the same basic questionnaire, either continuously or at regular intervals, to identify changes in respondents’ perceptions.

Unbalanced scale A scale with unequal numbers of positive and negative response categories, leading to a bias in responses. An example is “Excellent” – “Good” – “Average” – “Poor”.

Univariate analysis The analysis of a variable on its own – e.g. mean score, variance.

Variance A measure of the amount of diversity or variation in the scores received for a question. The analysis of variance is key to many statistical measures of association.

Verbal scale Any scale for which answers are given according to a range of phrases or words, as opposed to numerical or diagrammatic scales. The Likert scale is a common example.

Weighting The process of assigning numerical coefficients (weights or weighting factors) to each of the elements in a set in order to give them a desired degree of importance relative to one another.
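A minimal Python sketch, with hypothetical requirements and scores, of how weighting factors derived from importance scores can be applied to satisfaction scores to produce a weighted index:

importance   = {'reliability': 9.2, 'delivery': 8.5, 'price': 7.8}   # hypothetical scores
satisfaction = {'reliability': 8.1, 'delivery': 7.4, 'price': 6.9}

total_importance = sum(importance.values())
weights = {req: score / total_importance for req, score in importance.items()}
weighted_index = sum(weights[req] * satisfaction[req] for req in weights)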

Index

accuracy 17, 35, 38, 61, 65, 69, 71, 76, 77, 87, 88, 104, 117, 176, 178, 183, 197, 221, 255, 256
acquisition 9, 19, 20, 181, 216-218, 221, 222, 224
action mapping 275
actionability 6, 113, 138, 189, 250, 251, 255, 256, 258, 260
aggregating data 116
alternative suppliers 4, 215, 222
ambiguous questions 143
annual bonus 277
anonymity in surveys 86, 88, 94, 102-104, 127, 147, 280, 284
ACSI (American Customer Satisfaction Index) 18, 21, 22, 23, 29, 120, 123, 124, 194
asymmetry 227, 229
attitudes 215, 250, 251, 254, 255, 283, 285, 292
attitudes and behaviours 4, 14, 32, 217
attitudes - role in buying behaviour 206, 217, 230, 240, 242, 261, 287, 289
attitudinal questions 134
attractive quality 226, 228, 229, 233, 247
available customers 222
average 46, 47, 77, 84, 86
averages 150, 292
base profit 19, 20
baseline survey 253
behavioural questions 63
beliefs 217
benchmarking 29, 121, 166, 185, 192-196, 252
benchmarking - internal 198, 199, 272-274, 291, 293
benchmarking - league tables 34, 121, 291, 293
benefits 37, 38, 39, 45, 83, 87, 90, 97, 99, 100, 123, 166, 167, 181, 196, 209, 210, 211, 212, 216, 218, 221, 222, 223, 224, 232, 234, 238, 240, 241, 256, 274, 290, 293
beta coefficients 54
bias 38
bias - attitudinal 86
bias - interviewer 86, 94
bias - non-response 82, 84-87, 102
bias - positively biased rating scales 145, 146
bias - question induced 38
bivariate techniques 50-54
blame 272, 291
boosting response 87-91, 204
business impact 191, 192, 196, 198, 264
business to business markets 58, 69, 73, 75, 76, 92, 273
call backs 94, 95
Canadian Imperial Bank of Commerce 20
CAPI 91
Cellnet 1, 9
census surveys 78
challenges 93, 289, 292, 293
Chelsea Football Club 88
clarity of reporting 163, 185, 196, 198
classification questions 132, 138-140, 142, 147, 148, 279
cliff edge 133, 246
closed questions 128, 142
closing the questionnaire 147
coding 164, 257
collinearity 51, 52, 53, 54, 55, 153
commitment 16, 27, 41, 62, 100, 101, 131, 132, 133, 134, 180, 184, 215, 217, 225, 284
communications - employee 273
communications 216, 269, 277, 287, 289
communications - customer 284, 286, 287
comparison 56, 63, 109, 163, 192, 198, 199, 201, 202, 203, 204, 212, 223
competition 34, 41, 209, 233, 241
competitive positioning 209
competitor gap 207
competitor matrix 208
complaints 4, 6, 11, 12, 15, 20, 32, 102, 137, 172, 219, 261, 265, 272
concise information 196, 252
conclusions 14, 15, 23, 26, 36, 39, 43, 55, 67, 69, 79, 96, 98, 103, 107, 111, 123, 148, 164, 182, 185, 187, 197, 198, 199, 208, 222, 223, 226, 227, 237, 238, 239, 246, 251-254, 267, 280, 287, 289-294
confidence intervals 38, 71, 168, 169, 176-179, 183, 197, 219, 253
confidence level 175, 176, 178, 179, 183
confidentiality 65, 81, 88, 101, 102, 103, 104, 107, 269, 284
Conjoint Analysis 61
consulting customers 1, 11, 14, 97
consumer markets 57, 67, 93, 96, 99, 104, 107, 283
continuous tracking 105, 108, 250, 252, 253, 262, 267, 272
convenience samples 74
core questions 97, 107
correlation coefficient 48-51, 153
correlation matrix 51, 52
cost savings 19, 20
creative comparisons 63, 64
credibility 71, 127, 164, 185, 253, 277, 278, 285
cross-tabulations 219
culture 12, 24, 25, 41, 232, 268, 276, 278, 280, 291
customer behaviour 4, 32, 183, 206, 212
customer comments 92, 96, 129, 161, 254, 256, 273
customer decay 245, 246
customer expectations - alternative suppliers 2, 3, 6, 15
customer expectations 186
Customer Experience Modelling (CEM) 251, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 267, 291, 294
customer lifetime value 3, 18, 19, 23, 26, 33, 135, 167, 180, 181, 182, 183
customer perception - internal surveys 279
customer perception - methodology 10, 205
customer perception - public relations aspect of 78, 81, 84, 85, 92, 94, 96-98, 100, 103, 105-107, 204, 252
customer perception - sampling 183
customer perception survey 263, 281
customer perceptions 2, 3, 16, 19, 27, 218, 224
customer perceptions - analysis and reporting of results 290
customer perceptions - attitudinal questions 128
customer requirements 37, 46, 51, 52, 60, 61, 64, 66, 125, 131, 132, 139, 140, 146, 147, 148, 152, 154, 158, 169, 173, 174, 189, 190, 191, 198, 199, 205, 208, 212, 214, 224, 229, 231, 236, 238, 239, 247, 253, 254, 262, 269, 273, 279, 280, 290, 291, 292
customer value map 211
Data Protection Act 78, 81, 84, 85, 92, 94, 96-98, 100, 103, 105-107, 204, 252
deciders 59
decision making 7, 17, 38, 56, 58, 60, 98, 102, 106, 118, 123, 219, 222, 234
Decision Making Units (DMUs) 58-60
decision tree analysis 219, 220, 221, 222, 224, 292, 294
defect 3, 4, 6, 15, 248
delight 2, 3, 128, 196, 229, 230, 232, 234, 236, 239, 240, 243, 245, 247
Dell 22
depth interviews 37, 57, 58, 61, 67, 92, 212, 285
derived importance 47, 50, 152-154, 171
determinance 47
Deutsche Telekom 10
dichotomous questions 129, 137, 138, 256, 257
differentiators (in customer satisfaction) 8, 47, 150, 161, 162, 165, 191, 192, 196, 198
dissatisfaction 2, 9, 12, 32, 35, 86, 93, 94, 103, 108, 118, 128, 133, 136, 137, 140, 146, 150, 159, 160, 164, 165, 172, 190, 191, 192, 196, 198, 229, 230, 232, 242, 256, 264, 293, 294
DMU - as sampling variable 74, 75, 143
DMU - personnel 75
DMUs 140, 146, 150
doorstep interviews 91
double counting 52, 53, 54, 136
double questions 138, 144
drawing conclusions 69, 208, 222, 239
drilling down 78, 159, 251, 292, 294
electronic surveys 81-83, 85, 86, 107
employee satisfaction 18, 20, 21, 25, 26, 88, 119, 269, 278
enhancers 226, 230-235, 237, 238, 247, 291, 292
Enterprise Rent-A-Car 35
European Union 13
event driven 105
executive summary 1
exit interviews 11, 91
expectation scales 110, 117
exploratory research 36, 57-67, 81, 98, 125, 126, 139, 140, 144, 148, 152, 155, 169, 172, 173, 205, 212, 214, 228, 252, 255, 274, 277, 279, 280, 285, 289
face to face interviews 58, 91-95, 148
facilitator 62, 63, 64
feedback 11-14, 43, 87, 99-101, 106, 108, 110, 127, 139, 252, 260, 262, 263, 271, 272, 273, 274, 277, 280, 282, 284-287
feedback reports 287
fieldwork 252
financial performance 3, 6, 13, 19, 20, 22, 182, 240, 293
five point scale 118-121, 167
flirtatious customers 216, 222
flow chart 259, 260
focus groups 37, 57, 58, 61-64, 67, 212, 285
free markets 18, 23
frequency of measures 277
Friendly Martian 64
gatekeepers 59
Gateway 22
GDP 24-26
givens (in customer satisfaction) 47, 55, 117, 150, 154, 155, 165, 233, 240
halo effect 105
handling problems and complaints 6, 137, 261, 272
Harvard Business Review 37
headline measure 7, 10, 40, 45, 67, 113, 121, 166-170, 182, 183, 252, 253
hot alert system 103, 104, 108
Hyundai 22
image 25, 99, 171, 204, 212
impact 15, 40, 44, 47, 48, 49, 50, 54, 55, 56, 63, 65, 66, 67, 76, 79, 87, 90, 92, 94, 137, 150, 152, 153, 154, 155, 164, 165, 170, 171, 175, 182, 189-192, 196, 198, 199, 206, 207, 211, 212, 228, 230, 232, 240, 241, 247, 254, 260-265, 275, 277, 286, 290, 293
importance 43-56
importance - derived 47, 50, 152, 153, 154, 171
importance - stated 46, 47, 50, 55, 56, 65, 66, 153, 154, 171, 172, 182, 290
improving satisfaction 7, 292
incentives 62, 89, 90, 107, 108, 109
incomplete measures 11
indirect questioning 59
indirect questions 60
influencers 59, 68
insurance company 8, 180, 218
internal benchmarking 198, 199, 272-274, 291, 293
internal metrics 11, 12, 137, 234, 235, 247
interviews - personal 91, 92, 95, 96, 97
interviews - telephone 35, 86, 91-97, 107, 115, 123, 148, 223, 273, 277, 279, 285
intranet 82, 84, 269, 273, 279
introducing the survey 98, 99
introductory letters 88
intrusion 14
intuitive judgement 223
investing 5, 7, 9, 246, 253, 260, 267
invitation to customers to participate in surveys 83, 84
involving customers 282-288
IVR 82, 83
jargon 142, 144
Kano (model) 228, 229, 230, 233, 234, 247, 248
key drivers 55, 56
lagging measures 11, 15
late responses 85
latent class regression 221, 222
legal issues 104
lens of the customer 37, 39, 43, 45, 46, 55, 56, 57, 61, 66, 67, 126, 132, 135, 169, 170, 183, 193, 194, 199, 205, 228, 250, 251, 252, 255, 267, 274, 289, 293
lens of the organisation 38, 43, 66, 126, 132, 137, 148, 180, 255, 256
Likert scales 112, 113
linear 6, 15, 33, 123, 226, 227, 229, 235, 236, 237, 238, 243, 246, 247, 248
logging data 264
long haul 253
long term 19, 24, 34, 82, 100, 128, 223, 232, 253, 273, 274, 275, 293
low response 10, 39, 79, 82, 84, 107
loyalty differentiators 150, 161, 162, 165, 191, 198
loyalty index 134, 135, 161, 166, 179, 180, 215, 220, 244
loyalty myths 13, 17, 249
loyalty schemes 216
loyalty segmentation 222
maintainers 226, 230-235, 237, 238, 240, 241, 245, 247, 292
Manchester United 88
margin of error 38, 70, 71, 76, 78, 115, 168, 175, 176, 177, 178, 183
Market Research Society 17, 68, 80, 104, 108, 127, 147
market standing 201, 205, 206, 211, 212, 214, 224
maximising response rates 86
MBNA 23, 170, 216, 277
measurement error 70, 71, 76, 167, 168, 169, 182
measuring impact 48
median 150, 151
mid-point 110, 115, 116, 117, 151, 155, 156, 167
mirror effect 25
mirror survey 256, 268, 269, 270, 271, 280
mixed methods 96, 97
mode 150, 151
moderator 62
monitoring 1, 3, 10, 11, 22, 38, 39, 71, 110, 113, 115, 117, 120, 121, 123, 137, 160, 166-183, 258, 262, 267, 281
multiple choice question 129
multiple regression 50, 52-55, 153, 233, 290
multivariate techniques 52, 111, 113, 119, 122
mystery shopping 11, 12, 13, 15
NASDAQ 21
net promoter score 7, 132, 179, 252
non-linear 6, 227, 238, 243, 248
non-parametric measures 111
normal distribution curve 77
not-for-profit sector 18, 25, 263-265
number of points 118, 146, 169, 213, 214
numerical rating scales 110, 112, 115, 119, 120, 122, 123, 165
open questions 127, 128, 257
opt out 98
Orange 9, 10, 181
ordinal scales 112, 113
organisational goal 1
overall satisfaction 5, 48-54, 121, 166-169, 171, 173, 182, 220, 235, 238, 247, 253, 265, 267, 290, 291
paper based surveys 85
parametric statistics 111
Pareto Effect 73
performance indicators 272
performance measures 12
periodic surveys 105, 106, 107, 108
personal interviews 91, 92, 95, 96, 97
PFIs (Priorities for Improvement) 44-47, 125, 126, 147, 148, 152, 163, 174, 183, 185, 186, 187, 189, 192, 196-198, 207, 208, 216, 220, 254, 256, 260, 262, 267, 280, 294
poor performers 118, 122, 293
postage paid reply 85, 87, 91
postal surveys 84, 86, 97
precision 175, 176, 179, 183
preference 132, 136, 215, 218
price premium 19, 20
price sensitive 9, 26, 212, 213, 224
Priorities for Improvement (PFIs) 44-47, 125, 126, 147, 148, 152, 163, 174, 183, 185, 186, 187, 189, 192, 196-198, 207, 208, 216, 220, 254, 256, 260, 262, 267, 280, 294
private sector 19, 34, 293
problems and complaints 20, 137, 261
profiling customers 218
profitability 3, 19, 25, 35, 230, 293, 294
projective techniques 63, 64, 228
prompting 60, 192
public sector 26, 34, 130, 136, 292, 293
qualitative 37, 47, 57, 58, 59, 61, 63, 64, 67, 68, 92, 94, 140
quantitative surveys of customer satisfaction 47, 57, 58, 59, 64, 65, 67, 92, 172, 214, 279, 280
questionnaires 37, 43, 44, 45, 48, 55-61, 64-67, 81, 82, 84-91, 94, 95, 96, 98, 99, 100, 102, 104, 107, 108, 110, 120, 124-152, 164, 168, 170, 174, 183, 193, 204, 212, 214, 222, 243, 255, 256, 269, 274, 279, 285, 286, 289
questions 7, 10, 17, 37, 38, 39, 43, 44-57, 60, 62, 63, 66, 67, 69, 84, 86, 92, 94, 97, 102, 106, 119, 120, 121, 123, 125-149, 161, 162, 168, 174, 175, 179, 180, 182, 183, 193, 194, 198, 199, 214, 215, 218, 238, 243, 252, 255, 256, 257, 258, 259, 262, 265, 267, 274, 277, 278, 279, 281, 285, 290, 293
quick wins 8, 15, 191, 198, 253, 275
Qwest 22
random error 71, 76, 168, 175
random sampling 72, 73, 74, 75, 79, 97, 290
range 2, 7, 10, 26, 38, 46, 47, 50, 58, 61, 63, 66, 70, 76, 84, 92, 112, 123, 129, 131, 135, 151, 153, 156, 159, 162, 163, 172, 179, 181, 193, 202, 211, 212, 218, 230, 238, 244, 245, 253
Rater scale 36
rating scales 39, 61, 92, 110, 111, 115, 122, 128, 130, 132, 142, 145, 167, 290
rating scales - positively biased scales 39, 48, 114, 116, 120, 121, 122, 123, 124, 139, 141, 164, 193, 250, 290
rating scales - ten point scale 123
rating scales - types of scale 46, 47, 116, 118-121, 123, 125, 130, 132, 141, 146, 148, 151, 152, 155, 156, 159, 160, 167, 172, 190, 257, 293
recommendation 7, 26, 132, 134, 179-182, 220, 239
recruitment of respondents 62
recruitment of staff 25
referrals 2, 19, 20, 180, 181, 182, 216, 232, 289
relative perceived value 201, 208, 209, 210, 211, 212, 214, 224
reliability - statistical reliability 9, 22, 36, 38, 49, 65, 75, 76, 78, 79, 95, 97, 103, 107, 120, 142, 166, 172, 175, 176, 182, 183, 204, 222, 230, 269, 272, 277, 290
reliable samples 65, 66, 71, 95, 96
reminders 87, 88, 107
repeat purchase 3, 121
repeating research 57
reply paid envelope (for self completion surveys) 87, 147
reputation 14, 22, 25, 31, 65, 99, 204, 213, 264-267, 283, 286
research agencies 251
response rates 10, 39, 79, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 94, 95, 97, 98, 102, 104, 107, 108, 109, 204, 223, 279
return on investment 155, 231, 238, 239, 247, 256, 263, 292
reward 21, 23, 26, 35, 89, 90, 134, 216, 268, 275, 291, 293
reward and recognition 268, 275, 291, 293
routing 82, 84, 86, 94
running focus groups 62
Safeway 13, 20
sales and profit 20
sampling 14, 38, 59, 69, 70-81, 95, 97, 109, 168, 203, 274, 290
sampling frame 14, 71, 79
satisfaction gaps 113, 187, 188, 190, 191, 206, 207, 208, 224, 245, 280, 283, 290, 291
satisfaction improvement loop 106, 107
satisfaction index 16, 18, 20, 21, 23, 27, 29, 38, 40, 45, 67, 71, 113, 115, 120, 121, 123, 124, 150, 163, 164, 166, 169-176, 179, 180, 183, 193, 194, 195, 198, 200, 220, 244, 245, 253, 261, 263, 265, 272, 274, 275, 276, 277, 280, 286, 290
satisfaction related pay 23, 275, 276, 277, 278, 280, 293
satisfaction trap 3
satisfaction-loyalty relationship 5, 122, 242, 244, 245
satisfaction-profit chain 248
scatter plot 49
Sears Roebuck 21
segmentation 61, 138, 217, 218, 221, 222, 225
self completion questionnaires 35, 95, 96, 102, 104, 107, 120, 121, 126, 127, 130, 138, 140, 141, 147, 156, 214, 269
service quality 6, 11, 16, 22, 27, 31, 36, 38, 40, 41, 42, 56, 67, 68, 149, 152, 153, 155, 160, 162, 173, 184, 186, 187, 188, 192, 196, 197, 199, 240, 248, 249, 270, 271, 281, 287
SERVQUAL 36, 38, 40, 41, 42, 56, 68, 149, 170, 184, 199, 240
share of wallet 136, 215
shareholder value 10, 21, 22, 27
shareholders 10, 18, 19, 21, 22, 26, 27, 135
short term 14, 34, 128, 245, 246, 274, 275, 293
Smile School 13
software 82, 84, 85, 86, 133, 150, 159, 164, 231, 232
spend 4, 21, 24, 25, 32, 34, 100, 133, 134, 181, 182, 292
SPSS (computer software) 164, 165
standard deviation 77, 111, 113, 150, 158, 159, 164, 175, 177, 178, 179, 183
Starbucks 2, 3, 242
stated importance 46, 47, 50, 55, 56, 65, 66, 153, 154, 171, 172, 182, 290
statistical analysis 83, 110, 124, 164, 290, 291
statistical inference 69
statistical modelling 26, 36, 119
statistical reliability 38, 49, 75, 172, 277
stimulus material 46, 63
stock prices 22
stratified random sampling 73, 74, 75, 79, 97
street interviews 62, 91
structure of the questionnaire 130
sub-groups 78, 175, 176, 219, 269
switching 133, 136, 201, 215, 218, 222, 245, 246
systematic error 70, 74, 76
Table of Outcomes 254, 267
targets 166, 258, 262, 276, 277, 280, 291
telephone survey 65, 94, 95, 96, 97, 107, 108, 109, 156
thematic apperception 63
theme boards 63
time based questions 258
timing 105, 107
top performers 122
top priority 47, 61, 64, 152
total importance matrix 65, 66
total product concept 233
tracking 45, 58, 66, 67, 97, 105, 108, 118, 123, 137, 147, 167, 169, 172, 193, 247, 250, 252, 253, 256, 262, 263, 267, 272, 290
trend data 29, 253
trust 19, 25, 132, 135, 215, 264, 265, 285
unclassifiable data pattern 237
understanding gap (concept) 268, 271, 280
unrepresentative samples 69, 107
unscientific surveys 38
users 59, 84, 85, 98, 218, 240
USP 10
utility 18, 34, 86, 89, 118, 133, 222, 292
value 3, 6, 10, 11, 14, 16, 18-24, 26, 27, 32, 33, 36, 38, 58, 73, 74, 75, 80, 89, 90, 96, 97, 99, 102, 132, 135, 136, 143, 151, 153, 156, 167, 180-184, 186, 196, 201, 208, 209, 210, 211, 212, 214, 215, 219, 221, 224, 225, 232, 233, 242, 252, 262, 263, 272, 281, 284, 288, 293, 294
Value-Profit chain 6, 16, 27, 184, 225, 281
variance 58, 109, 111, 119, 122, 123, 134, 150, 156, 158, 177, 221
venues (for focus groups) 62, 141
verbal scales 47, 112-118, 120, 121, 123, 150, 162, 163, 165, 292
visual prompts 92
Vodafone 9
voodoo poll 69, 75
web surveys 82, 83, 84, 85, 86, 95, 131, 164
weighting 169
weighted index 36, 46, 166, 169, 170, 183, 205
wow 229, 240, 247
wowing the customer 1, 2
zones 4, 5, 15, 33, 34, 66, 118, 122, 145, 167, 209, 210, 211, 242-246, 249, 292
zone of delight 243
zone of mere satisfaction 34, 243
zone of opportunity 244, 245, 246
zone of pain 243
zone of stability 246

The main purpose of all organisations is meeting their customers’ requirements. In a democracy it’s the only reason for public sector organisations to exist. For private sector companies the rationale is pure business logic. They must maintain revenues just to survive and most are aiming much higher, so to achieve their objectives companies must constantly manage and optimise present and future cash flows from customers. In most markets customers are not locked in. They have choices. As Adam Smith told us over 200 years ago, people seek pleasure and avoid pain, so they move towards companies that give them a good experience and away from those subjecting them to a poor one. Nothing, therefore, is more important to companies’ future profits than understanding how their customers feel about the customer experience and how this will affect their future behaviour. Customer Satisfaction provides the first fully referenced and comprehensive guide to this vital subject.

“This book does a tremendous job of bringing to life customer satisfaction and its significance to modern businesses. The numerous examples contained within the book’s pages have proved a fresh and continuous source of inspiration and expertise as I work with my organisation in helping them understand why we should do what matters most to our customers and the lasting effect such actions will have on both our customer loyalty and retention. The authors are to be commended.”
Scott Davidson, Research Manager, Tesco Personal Finance

“I really enjoyed reading Customer Satisfaction. It was a good mix of academia, insights and case studies – this really carried the subject matter along and made it engaging. I would recommend it to managers looking at devising or revising a customer satisfaction strategy.”
Mark Adams, Head of Service Experience, Virgin Mobile

"Customer Satisfaction makes the case for monitoring and improving customersatisfaction in easy-to-read concise language, uncovering new insights anddebunking a few popular myths in the process. It includes thought-provokingexamples, comprehensible tables, graphs and diagrams, and engaging narrativesynthesised from leading academic research in the area and the authors’considerable experience. It will prove an invaluable tool for anyone tasked withimproving customer satisfaction in their organisation whatever their level ofknowledge or experience."Quintin Hunte, Customer Experience Manager, Fiat Auto UK

www.customersatisfactionbook.com