Performance Perspectives

Is Your Scorecard Getting Stale?

Around this time of year, it is not uncommon to see clients challenging and refining the metrics they will use to evaluate performance as the year progresses. It is actually quite a timely and productive exercise, as many of us have just come out of year-end planning cycles in which a number of goals and objectives may have been modified from previous cycles. And while one might argue that metrics should be an outgrowth of the planning cycle itself, we all know how often those processes get short-circuited. So a quick inventory of our business metrics is always a healthy practice.


In talking with one of my clients last week, we got into a fairly lengthy discussion about the value of having measures that don't change very often. In his view, certain measures at his company were in fact "getting stale" and were hard to use meaningfully from a motivational or incentive standpoint because they lacked any interesting "movement" in reporting. Was he doing harm by keeping them on his scorecard? Was it time to bring some more "interesting" or "challenging" metrics to the table?

Anytime I get a question like this, I try to go back to asking how well the metrics line up with "where value is created" within the business…a question that can sound obvious and trite, but is often very valuable. Going through a challenge like this can reveal a lot about where changes might be necessary. A few things to consider along these lines:

– Most of the time, the measure itself is not what has gotten “stale”, but rather the target against which you judge success. I had a client recently tell me that his target for a particular metric was to “improve” or “get better” year on year. Quite frankly, I believe this is a clear recipe for lackluster improvement. Most sports teams that have achieved greatness (consistently) usually started with some pretty bold and specific turnaround or improvement aspirations. Resetting the bar with a healthy dose of ambition can really bring life back into what might appear to be a stale set of business metrics.

– Sometimes, it's not the measure or the target that is the problem, but rather how the measure is positioned. Simple metrics such as safety incidents or outage statistics can look stale, especially since success is evaluated based on the "absence" of something happening. Simply changing the way the metric is positioned, however, can have a huge impact on visibility and motivational value. Repositioning these metrics as "days since last incident", "near misses", or "time between failures" can turn a sleepy metric into something that grabs more attention.

– Some measures, however, are meant to fade into the background over time. From time to time, we add metrics to the scorecard because of a problem that needs fixing. A good example is in corporate services functions, where things like "help desk response times" and "recruiting cycle times" have become the centerpieces of metric reporting. In fact, most of these areas have gone so "cycle time happy" that, while I'm not sure anything is getting done, I am certain it's getting done FAST! Sure, these metrics were born because at some point in the past, cycle times in the associated areas were really, really bad. But at some point, you need to acknowledge when a gap has been closed and put a metric into what I'll call "maintenance mode". It might not have to go away altogether, but perhaps it should fade into the background a bit so that a new source of value can gain visibility and be exploited. A good example is how many call centers have decreased the importance of things like speed of answer and abandon rates, and have put more emphasis on the role reps can play in shifting customer behaviors and the service channels customers use.

– And yes, there are times when the metrics we use are simply crappy metrics. While they may have made sense at the beginning, they either no longer motivate the right behaviors or, worse, incentivize the wrong ones. Don't be afraid to trash some metrics periodically so that you don't end up creating layers of dead weight in your scorecard and Performance Management activities.
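The repositioning idea in the second point above can be made concrete with a small sketch. Below is a minimal, hypothetical Python example (the dates and variable names are invented for illustration) showing how the same incident log yields a static count versus the more visible "days since last incident" and average "time between failures":

```python
from datetime import date

# Hypothetical incident log for one facility (dates are illustrative).
incidents = [date(2010, 1, 14), date(2010, 3, 2), date(2010, 6, 21)]
today = date(2010, 12, 31)

# "Stale" framing: a count whose success is the absence of movement.
incident_count = len(incidents)

# Repositioned framings that show visible movement every single day.
days_since_last = (today - max(incidents)).days
gaps = [(later - earlier).days for earlier, later in zip(incidents, incidents[1:])]
mean_days_between = sum(gaps) / len(gaps)

print(incident_count)     # 3
print(days_since_last)    # 193
print(mean_days_between)  # 79.0
```

The underlying data never changes; only the framing does, and the repositioned numbers tick upward every day an incident does not occur.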

So if you are one of those managers in the throes of self-reflection, happy hunting. Just make sure you go through the process deliberately and methodically so that you don't end up throwing the proverbial "baby out with the bathwater" (which, incidentally, is a metaphor I hope is not based on any real history!).

Seriously, the key is ALWAYS TO MAKE SURE THE METRICS YOU USE MATCH UP WITH HOW YOUR FUNCTION, BUSINESS UNIT, OR COMPANY INTENDS TO BUILD VALUE FROM ITS EFFORTS during the current planning and reporting cycle.

-b
 

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

 



2010 – EPM (Enterprise Performance Management) Year in Review

As is my tradition in the final days of the year, and in anticipation of the one to come, I provide below some of the more significant trends we’ve observed through our Performance Management related work with clients and colleagues in 2010, and some thoughts on what we see as the major forces, issues and trends that are likely to shape the year ahead.

Certainly no one will argue that the most dominant force, regardless of industry, continues to be the economy. No single factor has affected the C-Suite and Performance Management executives more, both in its stifling effect on corporate growth and in the state of paralysis it has created in our ability to plan for and manage the inevitable yet unpredictable growth that lies ahead.

By definition, EPM (Enterprise Performance Management), or CPM (Corporate Performance Management) as it is sometimes called, is all about linking strategy and KPIs to the management and improvement initiatives that occur in the daily operation of the business. We accomplish this through effective measurement of performance, analysis of identified gaps, deployment of course corrections, and changes to business processes and operating protocols. This is challenging even in times of stable growth and "normal" operating conditions. But in times of unpredictability and chaos, which is where most of us find ourselves today, it adds a level of complexity (and stress) that would challenge the best of us.

The wise among us would say that these are the exact times to refocus ourselves on the things we CAN control versus those we can’t, and perhaps, more importantly, to understand the difference between the two. Recognizing and acknowledging that difference requires understanding the business well enough to make such a call.

However, as Performance Managers, what we often find is that if we understand the business and its drivers well enough, we can actually identify ways of controlling what initially appears uncontrollable. Such is the case with good Performance Management Systems: they help us understand the business at the depth and granularity necessary to proactively manage change amidst the levels of risk and uncertainty we all face today.

In my view, it was that transition in practice and philosophy that characterized the EPM discipline in 2010. Most of the improvements I’ve witnessed over the past twelve months were about how to make EPM within our organizations more flexible, dynamic, and better able to improve the manageable and change the changeable, while often managing and influencing what appeared to be unpredictable. To this end, we’ve seen changes in everything from the scope and focus of our EPM organizations to the manner in which we track, measure, and manage our performance. We’ve seen changes in how we define value, and when and how we declare victory. We’ve also become more keenly aware of the weaknesses and shortfalls of our EPM programs–from our ability to capture and track the right data, to the systems we use to report and analyze “mission-critical” information.

As in past years, there was no shortage of success stories. And, of course, none of us experienced these successes without our fair share of failures and setbacks.

Below are what I believe to be the most significant factors characterizing EPM successes and failures in 2010:

  • Clarity around the role of EPM as a discipline within the business
  • Better "line of sight" linkage between strategy, KPIs, and business improvements
  • More focus on “value capture” and bottom line results
  • A shift toward (more holistic) “profitability management”
  • Consolidation (for the better) among technology vendors
  • Standardization in the data environment (within companies/ across industry)
  • Investments in EPM skill building and cultural transformation

Some of the above are strategic in nature, while others are more tactical and operationally focused. But I believe they all have relevance to how EPM will move forward into 2011 and beyond. I offer these, and the expanded discussion of each below, for your reflection, and as a beacon for how we can all navigate the challenges that lie ahead in the coming years.

More clarity around the discipline of EPM/CPM

In 2010, many of us were able to build more clarity around the role, charter, and delivery systems for EPM within our organizations. While this might not have been as "clean" a process, or ended up as structured, as we might have liked, most of our organizations now see our role as more established, with a clearer sense of purpose than in years prior.

In previous years, the relationship between Corporate IT and the groups responsible for Enterprise Performance Management always contained some friction, mostly around the management of the data required for effective performance reporting (i.e., data warehouses, and the more recent Business Intelligence (BI) solutions). Many companies have also struggled with the interface between EPM and other corporate governance and support functions (Corporate Performance, Operations Analysis, Strategic Planning, and even areas like Auditing, Risk Management, and Capital Planning), where roles and boundaries are sometimes hazy at best. These conflicts in role clarity often forced EP Managers either to relegate their role to basic metric tracking, or to press on amidst the confusing roles and frequent "turf battles" that had come to define the relationship with these stakeholders.

Howard Dresner (the individual who first coined the term Business Intelligence (BI)) actually defines EPM as "BI with a Purpose." For me, that is a good summary of what began to take place in 2010 within many of our EPM organizations. In 2010, EPM began to find its identity amidst what was clearly a state of role confusion. Rather than battling over whether the company needed a Performance Management solution or a BI solution, one simply became a way of leveraging the other (i.e., while both disciplines utilize operational data, EPM is a discipline and set of processes for driving the effective management of performance, while good BI enables the data environment that makes all that possible). The same case can be made for each of the other disciplines mentioned above: they all use performance data to differing degrees and for different purposes. 2010 clarified many of these distinctions and brought a great deal into focus. And for EPM managers, that was a welcome change, providing a clear mission and charter to rally around.

2011 will hopefully build upon that clarity. But EPM managers need to continue their vigilance, adding new dimensions of value to what they’ve created for the business, while carefully nurturing their stakeholder relationships so that the role clarity achieved thus far evolves into lasting internal partnerships. The more EPM can deliver a clear and distinct “value add” from its efforts, and make clear the impact it has on P&L, the more visible and vital (and less redundant) the company’s investment in EPM will be perceived.


Better “line of sight” linkages (between strategy, KPIs, and business improvement initiatives)

2010 saw a marked improvement in the ability of EP Managers to show visible "line of sight" linkages between metric tracking and operational business improvement initiatives. For many, this journey has been painful, and some have found the boundaries previously referenced even harder to discern, as their EPM groups were sometimes forced (out of necessity) into directly driving the very operational changes that are, in the end, operational accountabilities!

But on balance, 2010 saw mostly positive developments in how companies manage the "downward" linkages between KPIs, business metrics, and the operational improvements underway in their organizations. That often required a clear process for identifying gaps in key measurements and quickly deploying business improvements using a variety of improvement methods (e.g., Lean, TQM, Kaizen, Six Sigma). EPM groups that have been able to demonstrate these kinds of linkages, and show examples of how they can work, have achieved something big, and should be proud of it. Soon, they will be able to step back into more facilitative roles, allowing the operating groups to take back the baton and continue propagating these changes within their respective Business Units.

The same however, cannot be said for the “upward” linkage between KPIs and Corporate Strategy. More often than not, the very successes we have had operationally have only highlighted areas where business strategies themselves have either not been defined or lack sufficient clarity. Over 70 percent of the organizations with whom we have worked in 2010 have expressed major concerns in this arena.

Just as EPM groups have successfully facilitated a "line of sight" linkage between measurement and operating improvements, many will need to apply the same facilitative role to marrying their company's strategy with the underlying measures and KPIs of the business. In some cases, where Company and Business Unit strategies do, in fact, exist, this will simply mean identifying the key gaps and weaknesses so that business strategy is clear, compelling, and integrated into the KPIs that are routinely tracked. In other cases, it will mean introducing some basic strategic thinking and frameworks (e.g., Porter models, options theory, etc.) to executive teams (particularly those who spend most of their time in the operational space) in order to kick-start or revisit the strategic planning process, and actually develop what may be the Company's "first REAL strategy." And for some, it may only mean serving as a catalyst to force a better integration between existing strategy and the company's KPI framework. But in all cases, this is likely to be a major challenge, as it will require a much stronger partnership between EPM and the highest-level executives and strategic planning support groups (Planning, Finance, Risk Management, etc.) within the organization. Building that "upward" linkage will be essential to completing the type of full "line of sight" visibility required of a successful and sustainable EPM environment.


More focus on “value capture” and “bottom line results”

Starting in 2009, and into 2010, we saw a much more deliberate focus on what some would call “finishing the race”. All too often, we have seen measures tracked and reported for the purpose of compliance or satisfying the optics of performance measurement. But for “best practice” EPM organizations, success is not only defined by the presence of a scorecard or dashboard, but also by being able to generate hard and sustainable results in terms of savings, service level improvements, or other (more substantive) sources of business value. We’ve seen numerous companies who have made the progression from not tracking downstream value at all, to being able to assign clear, single-point accountability for the full lifecycle of a particular KPI or critical business metric. This means not only owning the measure and the reporting of it, but also the accountability for meeting targets, closing critical gaps and being responsible for delivering downstream improvements and incremental value to the bottom line. Often, this requires a robust framework for identifying and managing these accountabilities, and an overall philosophy of “commitment management” that is embraced culturally by the company. There are many tools that have emerged in this arena, from the creation of “value registers” to formal “commitment tracking” protocols for executive and operating management.
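There is no standard format for a "value register," and the post does not prescribe one; purely as an illustrative sketch (all names and figures below are invented), here is one minimal way such a register might be structured, with single-point owners per KPI and a committed-versus-captured view of the value pipeline:

```python
# Hypothetical minimal "value register": each entry assigns single-point
# accountability for a KPI's full lifecycle, from commitment to capture.
register = [
    {"kpi": "Call handle time", "owner": "Ops", "committed": 1_200_000, "captured": 800_000},
    {"kpi": "Recruiting cycle time", "owner": "HR", "committed": 300_000, "captured": 300_000},
    {"kpi": "Unplanned outages", "owner": "Maintenance", "committed": 2_000_000, "captured": 500_000},
]

# Roll up the pipeline: what has been promised, what has actually landed,
# and the open gap that commitment management should be driving to zero.
committed = sum(entry["committed"] for entry in register)
captured = sum(entry["captured"] for entry in register)
open_pipeline = committed - captured

print(committed, captured, open_pipeline)  # 3500000 1600000 1900000
```

Even a structure this simple makes the "finishing the race" point visible: reporting the committed number alone flatters the program, while the captured and open-pipeline figures show whether value is actually reaching the bottom line.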

2011 will hopefully see a continuation of this trend, bringing true clarity to how EPM organizations should be measuring themselves as service providers. Back to the first observation, there is no better answer to clarifying the identity and value delivered by an EPM function than being able to generate and consistently deliver on a robust pipeline of value improvements to the business.


Shift toward (more holistic) “profitability management”

While the focus on "value capture" has had significant impact on the identity of EPM and on the bottom line directly, many EPM executives have realized that success needs to go beyond conventional sources of value. For most, the definition of value over the past few years has translated directly into cost savings and productivity gains, essentially answering the question "what and how much have you saved for me lately?" But if nothing else, the economic recoveries that followed past downturns have shown us the flaws and negative consequences of this singular focus on cost savings (a.k.a. the "death by a thousand cuts" solution). Plain and simple, it works for a while, but quickly becomes a debilitating force when the business inevitably returns to periods of rapid and dynamic growth.

Striking an appropriate balance between conventional cost savings and other (perhaps less obvious) sources of business value will become a critical success factor for companies in the years ahead. Some companies have begun this transition by changing how value is actually defined within the business. For these organizations, value is seen through the much wider lens of what actually drives profitability, and from what sources. Conventional thinking asks where we can add value by cutting the direct cost of goods sold and driving increases in operational outputs and labor productivity. More innovative and holistic thinking, on the other hand, delves well beyond direct operating costs and begins to tap into the savings embedded in corporate overheads (IT, HR, supply chain, etc.), value that is locked up in our supplier and business partner relationships, and even value that may reside in customer behavior and day-to-day interaction with the company (i.e., those areas that were historically regarded as uncontrollable or may have been considered "off limits").

We believe this expanded focus on profitability (versus simple operating costs and productivity) will have significant impact on what we measure in 2011, as well as how we define and claim value on the back end of our process. EPM can and should play a major role in LEADING this transformation, using its data and measurement frameworks to reveal new profitability drivers to the organization, and, in turn, growing the active pipeline of value improvements (a new corporate asset) for the business.


Technology focus/ vendor consolidation

A few years ago, the landscape of supporting technologies was characterized by a plethora of vendors, each touting its own unique (and often proprietary) version of a performance management “system”. In fact, the domains within which they all operated were blurry even for those companies and the external (independent) research organizations who tracked their capabilities on a regular basis. Were these BI vendors? Dashboard providers? Visualization tools? Reporting engines? All of the above? There was a time in the not-too-distant past when the number of quasi-credible players for a company looking for performance management software would have stretched well into the hundreds.

Today, the landscape looks very different. Not only are the "credible" PM technologies fewer in number, but the clarity of the domains within which they play has increased significantly for all involved. (Who are the real EPM vendors, versus those who are simply pieces of the puzzle?) In early 2010, Gartner published its Magic Quadrant analysis, which did a good job of illustrating the consolidation that began a few years earlier, when each of the major IT solution providers acquired various BI and other performance management/reporting niche players in what turned out to be the start of a major industry consolidation.

With the exception of the very small one-off solution providers, the credible list of EPM technologies (which I define as robust (features and capabilities), easily integrated (open versus closed systems), and scalable) can now be counted on one hand. That’s good news for those who have waited until 2011 to pull the trigger on their EPM technology purchase/ upgrade, as the job of vendor selection has gotten much easier, and the cost of deployment much smaller.

In 2011, we expect a significant increase in the set of capabilities and innovations each of these players bring to the table, with the biggest of these being integration (within and between other applications such as risk management, asset management, capital planning, portfolio management, and HR), automation (less manual data manipulation and conditioning, better leveraging of BI tools), and the portability/ flexibility of reporting mediums (e.g. mobile versus desktop reporting).


Standardization of the data environment itself

Key to some of the above changes will be improvements in how data at all levels are collected, synthesized, and reported. Some would say this all started years ago with increases in regulatory oversight and the application of clear reporting standards (everything from basic GAAP to SOx in the financial realm, to industry-specific reporting such as FERC and NERC in the Utilities sector), many of which have made reporting transparency a way of life. But for others closer to the world of financial reporting, those forces will likely pale in comparison to what is coming in the era of International Financial Reporting Standards (IFRS), where moving to a global standard for transparency and reporting will prove far more complex and daunting.

2011 should see the acceleration of these factors on the EPM radar screen. Changes will no doubt emerge in terms of how data must be collected and reported, so “tuning into” these changes now will allow you to get ahead of the curve and be in a position to influence this transition within your organization (rather than reacting from the sidelines on what emerges from within IT and Accounting, two of the most impacted functions within your organization). As with most such changes, the implementation is never straightforward, so staying ahead of the curve may even create opportunities to drive positive change in the overall data environment within which you operate and rely on. Use it to your advantage.


EPM skill building/ cultural improvements

As performance managers, we always talk about the importance of skill building and driving culture change. But with the exception of those companies that are heavily invested in one of today's major quality/business improvement platforms (Lean, Six Sigma, et al.), investment in a true performance-driven culture has fallen woefully short of what is necessary for a successful EPM environment. In fact, many companies that made significant investments in the above-referenced platforms have actually lost ground in recent years as these initiatives became viewed as "passing fads" that merely generated lots of "lip service". The bottom line is that there exists a broad spectrum of experiences in this space, from those who have invested heavily to those who have invested little to nothing.

What continued to concern us in 2010 was the number of organizations that had invested heavily in the EPM discipline (by building a support structure, acquiring dashboard technology, etc.), yet appeared to be moving backwards, largely because they had not made the corresponding investment in the EPM awareness and leadership skills required at even the most basic stakeholder levels. Many of these organizations had limited their investments to tactical skill building like diagnostic and analysis techniques (typical of operationally driven cultures) rather than the broader suite of skills demonstrated by leading EPM organizations.

Performance Management is a major investment in business infrastructure and governance, and implementing it without an aggressive yet targeted approach to EPM skills at all levels of management (Performance Leadership, Reporting, Analysis, Commitment Management, Managing Change, to name a few) will guarantee some major failures along the way.

The good news is that this is an area many have determined to be a priority, and many of those who have underinvested in the past intend to make up significant ground in the coming years. But by the same token, most companies have not adequately defined where these investments should be made or which specific skills should be the focus, and hence lack a credible "learning" program that can really accelerate their EPM success. The starting point is a solid inventory of EPM learning within your organization (defining the required skills and competencies, and understanding where you stand on each), followed by a comprehensive plan to introduce and reinforce these new behaviors in your business. As organizations, we know how to bring new skills into the business, having introduced effective learning programs in everything from technical skills to safety, diversity, and basic operating management skills and behaviors. Integrating EPM skills into these programs, consciously and deliberately, should be a major focus of EPM in the coming year.

———————–

So with that, we will put another volume of “EPM-year in review” on the shelf, hoping that it will be useful to you as you refine your strategies, plans, and tactics for the new year.

With any luck, 2011 will mark the long anticipated turnaround in the global economy, as well as the deployment of new EPM practices, tools and approaches that will help us navigate the new growth and ambition that will come with it. But let’s not lose sight of what enabled us to navigate through the challenges of 2010 amidst the unprecedented levels of uncertainty that surrounded all of us. Risk and unpredictability will always be present, whether visible to us, or merely lurking in the background. Being able to manage within that environment will continue to differentiate the best among us in the years ahead.

My sincerest best wishes for all of you over the Holiday season, and for a happy, prosperous and successful 2011!

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

The Primary Fuel of Dissatisfaction…

Following up on an earlier post, the question of what really "fuels" dissatisfaction has been a hard one to answer because it is both multidimensional (i.e., there is no single source of discontent) and unique to the individual customer. That notwithstanding, I do believe the answer revolves around one single category of emotions: FEAR and UNCERTAINTY.

Come on, really? Many customers are customers of simple products. Not all purchases are big ones like houses, cars, or other things that can stick you with really long-term regrets. Most daily purchases (your gas transaction, payment of a utility or cell bill, a hotel reservation, and the like) are obviously too simple to drive the emotions of fear and uncertainty, right?

To the contrary, I believe fear and uncertainty emanate from many sources, not the least of which is being "surprised" in a way that has negative consequences. How many of you have gone through the angst associated with the uncertainty of data charges from your cell phone company? Or wondered whether your identity will be compromised by an online purchase? Or whether your electric bill for this month will bust your budget? Or whether your credit card will exceed its limit and embarrass you among a group of close friends or family? Even something as simple as the uncertainty of missing an airport connection can create hours of angst, rendering any exceptional service you receive before, during, or after a flight pointless. Why? Because for most of us, worrying about something important distracts from everything in close proximity to it. Outside of the small handful of us who can compartmentalize emotions, most things beyond what's urgent and important take a backseat until what's important gets resolved.

Some companies seem to get this, although I wonder how much of what we see in this area is deliberate rather than simply random or haphazard success. Nevertheless, you have more than likely seen some examples of how uncertainty can be effectively minimized, if not overtly managed. Some simple examples include:

– airlines that announce connecting gates while still in the air
– unlimited calling and data plans
– leveled payment plans from electric and gas utilities
– notification of hold times and queue lengths

The proliferation of SMS alerts for everything from bank balances and data usage to first-class airline upgrades and flight delays helps customers avoid surprises. Still, I wonder if some companies are just doing these things for technology's sake rather than from a genuine understanding of customer mindset and motivating forces. In fact, most of this can be done without any technology intervention.

I am reminded of when United Airlines used to (maybe they still do) allow customers to tune their in-seat audio to the ATC (air traffic control) frequency so that they could monitor the flight. One of the reasons I liked that was that you could hear about turbulence being reported by other pilots in advance of the bumps, as well as all the requests by your pilot for faster routing and smoother altitudes, along with any unexpected delays. In fact, even now, when I am on an airplane going through turbulence for more than a few minutes, I start wondering if the pilot is working as hard as the United pilots did to find the smoother air. Of course they probably are, but at least with United I knew. And that made the uncertainty go away.

Here's another more recent example, and perhaps my favorite so far. The other day I ran into an electric utility that alerted (actually, reminded) customers that the summer months were approaching and bills would be spiking…thus opening up an opportunity to convert customers to both a leveled payment plan (the same amount every month) and a direct debit option, minimizing or eliminating the elements of uncertainty and surprise from the customer interaction. More importantly for the utility, it had the dual benefit of saving enormous amounts of money by minimizing transaction costs and eliminating a huge volume of inbound calls to the call center related to high-bill issues. (High-bill complaints are the highest-duration and highest-cost type of call for utility companies, and more than 50 percent of the time the call ends with the customer concluding the bill was similar in magnitude to the same time last year.) Can you think of many cases in which being proven wrong leads to a positive and happy state of mind?
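The arithmetic behind a leveled payment plan is simple. As an illustrative sketch only (the bill amounts below are invented, and real utilities typically add periodic true-ups), the level amount is just the average of the trailing twelve months of bills:

```python
# Hypothetical monthly bills for the trailing year, in dollars,
# with the summer spike the utility was warning customers about.
bills = [82, 78, 75, 80, 95, 140, 185, 190, 150, 100, 85, 90]

# A leveled (budget-billing) plan charges the trailing-average amount
# each month, removing the summer surprise from the customer's view.
level_payment = round(sum(bills) / len(bills), 2)

print(level_payment)  # 112.5
```

Instead of a swing from $75 up to $190, the customer sees the same predictable amount every month, which is exactly the "avoid the surprise" effect described above.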

I think the implications of adopting this “avoid the surprise” philosophy could be very large in terms of taking customer satisfaction to a new level. But it does require some fundamental changes in everything from how we view customer behavior, to how we design our offerings, and most importantly, how we define, measure and manage our success in this domain.

– Posted using BlogPress from my iPad

CSAT: the BIGGER picture…

First of all, my apologies for not having written in a long long time. Funny how the things that we enjoy the most take a back seat to the urgent priorities of the day (or in this case months) that are sometimes far less fun or rewarding. The good news is that there have been lots of interesting client experiences over the past several months, and hence lots of good fodder to expound on in the weeks and months ahead…assuming I can manage to carve out the hour or so a week it takes to get them down on paper.

Top of mind for me right now is what companies are doing (or, more importantly, NOT doing) to drive good customer service. I think this stems from both a failure to understand what really makes a customer tick, and the associated failure to measure it, and ultimately manage it. As a backdrop, I’d ask us all to think about the word “tick”. For most, the phrase “what makes a customer tick?” translates into the things that really “drive” or “motivate” them to buy something, or just feel good about your product or service. But today I want to focus on a more literal interpretation of the word “tick”. I’m thinking of something like the ticking of a timer, like a clock winding down to zero…at which point things go “boom”…which in today’s economy quickly translates into a lost relationship, a lost sale, or a lost client. In my judgment, today’s customer is much more focused on extracting maximum value from the services they have ALREADY bought and paid for, much more so than (or at least long before) they will entertain buying something else from you.

So with that as the backdrop, I think we’d all be a lot better served by taking another, perhaps closer, look at the drivers of DISSATISFACTION as our primary way of driving customer value. Putting the drivers of dissatisfaction ahead of all the bells, whistles, and other sources of customer delight will get you farther, because if you can’t avoid the dissatisfaction, then all the rest is a moot point. Of course, most of you understand that, right? You have already put in place measures to prevent a customer from getting to the point of dissatisfaction. You probably measure things like how fast calls are answered, how many are abandoned, how many issues are resolved on the first contact, and so on, and through those things you minimize the likelihood of a customer becoming dissatisfied, or at least staying dissatisfied, right? Not so fast.

Another perspective is that by the time a customer calls, the clock is ALREADY ticking, and whatever is done DURING the call is often occurring AFTER the clock has wound down to almost zero. For many of you, the picture may in fact look like this: the customer gets through without being dropped, bounces out of the automated call system in quick order, talks to a rep (who “resolves the call”), and ends up ostensibly satisfied because they didn’t call back or give a bad score on the automated survey, right? Of course there is another interpretation: the customer was already quite ticked when they called in, immediately concluded (based on the first three choices from the IVR) that they wouldn’t get anywhere with that route, bounced out of the IVR, ran into an unhelpful rep, politely ended the call without taking a survey, and left more upset than when they started. Call me cynical, but if that was a ticking time bomb to start with, chances are it went boom within minutes of the call ending, and did so with all of the measures and indicators pointing to the opposite, with the company thinking it has a happy customer whose ultimate dissatisfaction has been averted.

I submit that companies who score well on the traditional CSAT metrics are giving themselves a false sense of security and are probably missing the core elements of the customer’s perspective: the lingering sources of discomfort that are hard to express, let alone measure or quantify. If we can get our arms around this, we have a much higher likelihood of eliminating perhaps our single biggest blind spot in generating customer value and ultimately leapfrogging the competition.

The next few posts will focus on what some of these up-front drivers are, as well as the kinds of things we need to be measuring in this space. Fortunately, this is an area where many of you are not behind the pack, because nobody is really leading the pack. In the past several months, I’ve worked with some of the self-proclaimed “best” companies (those who perform well on the conventional indicators) and have interacted as a customer (as many of you have) with the “big names” in customer service, with less than adequate results and a time bomb in my gut that is still ticking long after the “polite” ending of the call to the company.

I think we would all be better served by making the following our key priorities in the drive to maximize customer satisfaction:

1. Redefine the drivers of customer satisfaction and, especially, dissatisfaction (the things that start the countdown on the time bomb)

2. Seriously rethink what we measure and track, and the baseline against which we evaluate success (my hunch is we will throw out a lot of what we measure today)

3. Correct the upfront flaws in the design of our offerings and processes so that dissatisfaction is minimized and we all have a more solid base on which to build in the years ahead

-b


Hunting for “Best Practices”

A lot is written about benchmarking as a vehicle for identifying best practices. Clearly the two are related, but sometimes too much weight is given to the connection.

The temptation is to look at high-performing companies, make a laundry list of what they are doing, and then go try to emulate it, the presumption being that most of what they are doing qualifies as “best practice”. In reality, what is often taking place at leading companies is a mix of three things:

1. Basic or core operating practices (“blocking and tackling”) that are simply executed at a level better than most

2. An effective “operating model” within which these practices reside

3. True “best practices”, of the innovative and breakthrough variety

The first two categories are clearly important, and in some cases more important than the third, because without those foundational aspects, all the best practices in the world will yield little incremental value. But assuming those are in place, true “best practices” are clearly the next place to look for innovation. The challenge is knowing what to look for.

When you invest in activities geared toward identifying these types of best practices (conferences, benchmarking studies, consulting projects, etc.), it’s important to have a set of standards on which to base your return on that investment. For a best practice to pass the “innovation” test, it must deliver some level of insight that goes beyond just doing the same things better. I offer the following as a checklist for assessing whether a specific practice passes this type of sniff test:

1. Is it definable?
Best practices are not general philosophies (e.g., “better management of risk”), but specific changes to process, technology, organization, policy, or operating protocol. A best practice refers to a specific change from the current state, typically involving something you will either add (start doing) or subtract (stop doing). Sometimes it’s a new process or technology altogether. But defining it requires being specific.

2. Is it unique?
Is this a practice you are likely to find most everywhere you go, just implemented at different levels of effectiveness? There is nothing wrong with focusing on better execution and implementation of core business practices as a driver of performance, but you are better off calling it what it is, an implementation breakdown, rather than disguising the issue as the failure to have a particular practice or policy that the organization knows is already in place in some way, shape, or form. Otherwise, you’ll be met with “this is just more of the same”.

3. Is it breakthrough?
Does the change in practice or policy create a step-level change in the results of a business process? Generally I look for a 10x payback within a relatively short horizon, and at least a 50% change in the current performance level of the affected business process. These standards can vary from company to company, but we are not talking 1 or 2 percent; we are talking something of material significance. A small, standard business case worksheet can help your employees do their own internal “sniff test” before consuming your time analyzing a myriad of small-ticket changes.

4. Is it leading edge or “bleeding edge”?
Often the temptation is to look at the coolest technology or system and proclaim it a best practice. Most of these are untested at best, and looking for “crash test dummies” to prove themselves on. Find companies that have implemented it, look at the business cases they used to justify it, and then look at how much of that value actually materialized.

5. Is it actionable?
My test for “actionable” is usually that it can be adopted (fully implemented) inside of a one-to-three-year timeframe. Otherwise, you’re adding new R&D into the pipeline. R&D is fine, but don’t let theoretical or speculative projects clutter up your best practices pipeline. Focus on things to which you can quickly assign ownership, and things you can get on with rather quickly.

6. Can I attach value?
Most importantly, can you attach dollars and a specific budget location to the achievement of implementation? And will someone “sign up” for that commitment? For example: if I implement XYZ, how many bodies go away, how much money will I save, and when? If you can’t answer those questions, we’re probably not talking about a credible “best practice”.
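Since a simple business case worksheet can front-load this screening, here is a minimal, purely illustrative sketch of what such an internal sniff test could look like if codified. Every function name, field name, and example figure below is my own assumption, not a real tool; the thresholds simply mirror the checklist above (roughly a 10x payback, at least a 50% change to the affected process, proven elsewhere, implementable within three years, and a named owner with a budget line).

```python
# Illustrative sniff test for a candidate "best practice".
# All names and numbers are hypothetical; thresholds follow the
# checklist in this post, not any standard framework.

def passes_sniff_test(practice: dict) -> list:
    """Return the checklist items the practice fails (empty list = pass)."""
    failures = []
    if not practice.get("specific_change"):                 # definable: a concrete change, not a philosophy
        failures.append("definable")
    if practice.get("already_common", False):               # unique: not something everyone already does
        failures.append("unique")
    cost = practice.get("cost", 0)
    benefit = practice.get("annual_benefit", 0)
    if cost <= 0 or benefit / cost < 10:                    # breakthrough: roughly 10x payback
        failures.append("breakthrough (payback)")
    if practice.get("performance_delta_pct", 0) < 50:       # breakthrough: >= 50% process change
        failures.append("breakthrough (step change)")
    if not practice.get("proven_elsewhere", False):         # leading edge, not bleeding edge
        failures.append("proven elsewhere")
    if practice.get("years_to_implement", 99) > 3:          # actionable: within one to three years
        failures.append("actionable")
    if not (practice.get("budget_line") and practice.get("owner")):  # value attached and signed up for
        failures.append("value attached")
    return failures

# A hypothetical candidate practice that clears every test:
candidate = {
    "specific_change": "automate meter-read exception handling",
    "already_common": False,
    "cost": 100_000,
    "annual_benefit": 1_200_000,
    "performance_delta_pct": 60,
    "proven_elsewhere": True,
    "years_to_implement": 2,
    "budget_line": "call-center operations",
    "owner": "VP Customer Service",
}
print(passes_sniff_test(candidate))  # [] means it clears every test
```

A one-page worksheet with these same fields does the job just as well; the point is that the screening questions are explicit and the thresholds are agreed on before anyone spends analysis time.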

Look: there is nothing wrong with focusing on doing the basics better, or having a better operating philosophy or business model. You need those elements to run the business. But when we talk about BEST practices, we are generally talking about doing something unique and different. And without that component of business improvement, it’s unlikely that you will get to, or remain at, a leading edge level of performance.

So make sure you have true best practices in your pipeline, and use these tests to make sure they pass the proverbial “sniff test”.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com