The Enterprise Story—Measuring What Matters

Posted by Chief Marketer Staff

When that slide hit the screen, CEO Andy Taylor remembers, “there was an audible gasp in the room.” All eyes turned toward founder and chairman Jack Taylor, Andy’s father, who had devoted his life to building a company that would serve customers better than any other. Jack was upset. After the morning presentations, Jack met privately with Andy, and his message was short. “Andrew,” he said, ever the paterfamilias, “we’ve got a big problem.”

Andy Taylor, who hadn’t been called Andrew by his father (or anyone else) since childhood, remembers this as a defining moment. He had been named president and chief operating officer of the closely held company in 1980, CEO in 1991. Now, he knew, it was up to him to change things. He vowed to ensure that Enterprise set new standards of excellence in service and customer relationships. The only question was how to go about it.

The company had been experimenting with customer-satisfaction surveys ever since 1989, when it first began marketing car rentals to consumers. But back then, many managers doubted that the surveys really meant much. Sure, the numbers indicated a few problems. But wasn’t the company growing? Wasn’t it making money? Any difficulties, some of the managers said, weren’t systemic; they could be addressed locally. That was more in keeping with Enterprise’s decentralized tradition.

But by the early 1990s Andy Taylor was worried, partly because he himself had been hearing more complaints than usual from customers. So he assigned a team of senior managers to work on the surveys. That team designed a new instrument—and like a lot of such instruments, it suffered from “question creep.” The initial version, one page long, included nine questions and asked for seventeen separate responses, including an open-ended “How could we have served you better?” At the top, however, was the question that would turn out to be central to the whole endeavor: “Overall, how satisfied were you with your recent car rental from Enterprise?” The five boxes a customer could check ran from “completely satisfied” to “completely dissatisfied.” Taylor and his team decided that the company would calculate the percentages in each category for this question. They would call the scores the Enterprise Service Quality index, or ESQi.
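As a rough illustration, the top-box calculation behind ESQi can be sketched in a few lines. The 1-to-5 numeric coding and the function name are assumptions for illustration; the article says only that Enterprise computed the percentage of responses falling in each category of the overall-satisfaction question.

```python
from collections import Counter

def esqi_top_box(responses):
    """Percentage of respondents checking the top box ('completely
    satisfied', coded here as 5) on the 5-point overall-satisfaction
    question. The 1-5 coding is an assumption for illustration."""
    if not responses:
        raise ValueError("no survey responses")
    counts = Counter(responses)
    return 100.0 * counts[5] / len(responses)

# Example: 6 of 10 customers completely satisfied
print(esqi_top_box([5, 5, 5, 5, 5, 5, 4, 4, 3, 2]))  # -> 60.0
```

The same tally would yield the percentage in each of the other four boxes; Enterprise’s headline number was simply the top-box share.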

Thus did Enterprise launch the measurement process, as Taylor later told Fortune Small Business, that “enabled us to go from being a nearly $2 billion business in 1994 to a $7 billion-plus business in 2004.” But in 1994, there was still a long way to go. Making ESQi into a useful, credible tool turned out to be a long, involved, and contentious process.

Enterprise’s first questionnaires went out in July 1994, and the company reported its first three months’ worth of results to senior managers in October. Overall, the ratings were only fair: 86% of respondents were at least moderately satisfied, but only 60% checked the “top box,” as the company called it, to indicate they were completely satisfied. That score, Taylor felt, was far lower than it should be.

Worse, there were huge disparities between the various regions, with some registering top-box scores in the 80s and others in the low 50s. One of the company’s biggest and most profitable regions came in at a dismal 54%. “We were pretty much at or near the bottom of the whole company,” acknowledged the region’s senior vice president for rental. “To competitive people like us, that was a real difficult pill to swallow, especially in front of our peers.”

Maybe not surprisingly, the first reaction among some managers was to shoot the messenger. Low scorers, Taylor remembers, “ripped the measurement, the survey questionnaire, and the sampling technique behind it.” The process didn’t allow for differences in branch size, the managers argued. It didn’t take into account that different regions of the country might have different expectations about customer service. Besides, they added, what did it all prove? ESQi might be a valid measurement of satisfaction, but did it have anything to do with growing the company? Was there really a connection between customer satisfaction and financial results?

So Taylor and his team continued to examine and refine their methods. They found that branch size and geographical region didn’t matter—top performers and poorer ones could be found in any category. The team also challenged the notion that senior managers already knew where the problems lay. When asked to rank their various operations above or below the company’s service average without looking at the latest ESQi scores, for example, the managers couldn’t correctly place more than half of them—no better than guessing.

The team also made three changes that would prove decisively important:

*Since the customer experience was primarily controlled by the local branch, team members reasoned, the company needed to score not just its regions but each of its several thousand branches. (Enterprise at the time had more than 1,800 branches; today it has well over 6,000.) Only with this degree of granularity could regional managers reliably hold the branches accountable for building good customer relationships. Each branch, moreover, would need feedback from at least 25 customers a month, so the sample size had to increase. A three-month moving average of this feedback would produce a reliable ranking.

*Listening to their field managers, the team also decided that the information had to be more timely. Customer-satisfaction scores that were gathered once a quarter and disseminated long after the quarter’s end didn’t really tell you much. Who could remember what had happened during that quarter to move the scores one way or the other? In fact, Taylor and his team wanted data in as close to real time as possible, so that frontline staffers could remember events that had influenced the feedback. Timely feedback would also allow branches to test new ideas and then to evaluate them when the survey scores arrived. To speed things up, the researchers switched from mail to telephone surveys and began reporting ESQi monthly, just like the monthly reporting of profits and other performance measures.

*Finally, since executives wanted proof that investments to increase ESQi scores would actually pay off, the team analyzed how well various questions on the surveys linked to customer behaviors such as repurchases and referrals—behaviors that drove growth. Researchers called back hundreds of customers who had taken the survey months earlier, asking how many positive and negative referrals those customers had made. They asked the customers how many cars they had rented since taking the survey and what Enterprise’s share of those rentals had been. These questions struck pay dirt: the one question at the top of the page, “Were you completely satisfied,” accounted for a startling 86% of the variation in customer referrals and repurchases. Those who gave the company a perfect 5 on a 5-point scale—the equivalent of promoters—were three times more likely to return to Enterprise than a customer giving a lower score. And nearly 90% of positive referrals were made by top-box customers. The bottom line: high top-box scores translated directly into growth and profit.
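The branch-level scoring rule described in the first change above—at least 25 customers a month, smoothed over a trailing three months—might be sketched as follows. The function name, the return of `None` for branches with too little data, and the requirement that every month in the window meet the floor are all assumptions for illustration; the article specifies only the 25-customer minimum and the three-month moving average.

```python
def branch_esqi(monthly_scores, monthly_counts, min_per_month=25):
    """Trailing three-month average of a branch's monthly top-box
    percentages. Returns None when any month in the window fell short
    of the minimum sample size (handling of shortfalls is an assumed
    detail, not described in the article)."""
    window_scores = monthly_scores[-3:]
    window_counts = monthly_counts[-3:]
    if len(window_scores) < 3 or any(n < min_per_month for n in window_counts):
        return None  # not enough data for a reliable ranking
    return sum(window_scores) / 3

print(branch_esqi([70.0, 74.0, 78.0], [30, 28, 41]))  # -> 74.0
print(branch_esqi([70.0, 74.0, 78.0], [30, 12, 41]))  # -> None
```

Averaging over three months damps the month-to-month noise that small branch samples would otherwise produce, which is why the team paired the sample-size floor with the moving average.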

All these findings quieted the skeptical executives. The measurements meant something. But nothing actually seemed to be improving the company’s scores, as the 1996 meeting showed. So Andy Taylor’s next challenge was to get his executives and his branches to do something about the measurements. It was, he wrote, a “time for leadership, time to put some teeth into our efforts.”

Taking ESQi Seriously

Taylor’s first step was to link ESQi scores to corporate recognition. At Enterprise, the granddaddy of recognition programs is the prestigious President’s Award, a coveted prize given to people who make truly exceptional contributions to the company. After 1996, you weren’t eligible unless your branch or region was at or above the corporate average for ESQi. Southern California’s Group 32, which had won a disproportionate number of these awards in the past, came up empty-handed for the next two years. The point hit home. “People said, ‘You know what? This company is serious about ESQi,’” remembered Tim Walsh, a former officer of Group 32.

Step two delivered an even stronger message. The company redesigned its monthly operating reports to highlight ESQi, listing every branch’s score right alongside the net profit numbers. The reports ranked every branch, region, and group manager in the company, so everyone immediately knew how he or she stacked up against everyone else. Moreover, the company announced that no one with a below-average ESQi score was eligible for promotion—and backed up its announcement by passing over a well-regarded California executive who Taylor says “would have been a shoo-in under the old system.”

Step three: communication and more communication. “ESQi became a key topic of every speech I gave internally,” says Taylor. “Customer satisfaction went on the agenda of every management and operations review meeting at all levels. When I was present, I would go right to the bottom of the ESQi rankings and pointedly ask the managers responsible to explain what was going on and what they were doing about it. Those were apt to be the first questions in a sustained grilling.”

Before long, ESQi was an inextricable part of Enterprise’s corporate culture. The promotion requirement of above-average ESQi came to be known as “jacks or better,” as in the traditional poker-table requirement of a pair of jacks or better to open the betting. The branches or groups that were below average and thus ineligible for promotions were said to be in “ESQi jail.” And gradually, ESQi scores began to improve. In 1994 the average had been around 67. By 1998 it had risen to 72, and by 2002 it hit 77. The gap between top performers and those at the bottom narrowed, shrinking from 28 points in 1994 to only 12 in 2001. Even Southern California brought its number up to above average, and again was winning some President’s Awards.

The Closed Loop

One decision that was critical to ESQi’s success was not to ask the survey vendor to diagnose the root causes of a customer’s score. Much to the vendor’s dismay, Taylor and his team insisted that attempting to generate both the score and the diagnosis with the same survey would lead to failure on both counts.

The reasoning was compelling. Anyone who has done root-cause analysis knows the problem: probing for the root cause of an individual customer’s concerns often requires knowing something about both the customer and the transaction. For example, it may be essential to know whether the branch was temporarily understaffed, whether the transaction was a first-time rental, or what the customer’s historic rental pattern has been. No outside phone interviewer can possibly have all that knowledge and understanding.

So whenever a customer communicates any dissatisfaction on the ESQi survey, the phone rep asks the “would you accept a call” question. More than 90% of these customers agree to be called—at which point an e-mail alert, including the customer’s phone number and the survey score, is automatically forwarded to the branch involved. Branch managers have been trained to call right away, to apologize, to probe for the root cause of the customer’s disappointment, and then to develop an appropriate solution. In some cases, the apology itself is all it takes to fix the problem. In others, a free rental is more appropriate. The primary diagnosis is always performed at the front line so that the branch can learn what needs to be fixed and fix it.
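The closed-loop trigger described above—dissatisfaction plus consent to a callback produces an alert to the branch—can be sketched as simple routing logic. The dissatisfaction threshold (any score below the top box), the data fields, and the e-mail address format are assumptions for illustration; the article says only that the alert carries the customer’s phone number and survey score.

```python
from dataclasses import dataclass

@dataclass
class SurveyResult:
    customer_phone: str
    branch_id: str
    score: int          # 1-5 overall-satisfaction response (assumed coding)
    accepts_call: bool  # answer to the "would you accept a call" question

def route_alert(result, send_email):
    """Forward an alert to the branch when a customer reports any
    dissatisfaction (here: any score below 5) and agrees to a callback.
    Returns True if an alert was sent. The address is hypothetical."""
    if result.score < 5 and result.accepts_call:
        send_email(
            to=f"branch-{result.branch_id}@example.com",
            body=f"Callback requested: {result.customer_phone}, score {result.score}",
        )
        return True
    return False
```

The diagnosis itself deliberately stays out of this code path: the alert only delivers the raw facts, and the branch manager supplies the local context that an outside interviewer cannot.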

Thanks to the closed loop, Enterprise has been highly successful in reducing detractors: the proportion of customers who rate their experience neutral or worse has declined from 12% to 5% since 1994. This drop by itself has improved the firm’s economics—there is less negative word of mouth. The increase in the percentage of promoters also improves the economics, both by driving growth and by reducing costs. For instance, Enterprise can spend less on advertising than Hertz and still grow faster due to Enterprise’s word-of-mouth advantage. Measuring and managing the number of customer promoters created at each branch allows the company to turn word of mouth from a soft benefit into a quantifiable competitive weapon.

Adapted by permission of Harvard Business School Press. THE ULTIMATE QUESTION: Driving Good Profits and True Growth, by Fred Reichheld. Copyright 2006 Harvard Business School Publishing Corporation. All Rights Reserved.
