Spare No Expense? – When spending more money actually decreases performance

Can you tell where the risk is in this picture?

I realize for some, any discussion of performance in the Safety and Security arenas may feel like touching the “third rail” in your business and process improvement portfolio. In fact, that may be exactly what it is.

Let’s be frank. We are all certainly appreciative of those who serve on the front line of keeping us safe and secure, and indebted to those who continue to deter or prevent efforts to harm us on the home front, particularly since 9/11. But of late, it feels as if this “protective layer” of our society has taken on a life of its own. Safety and Security efforts, whether in public service or private enterprise, now appear largely “exempt” from the type of investment diligence or performance scrutiny that almost every other business process or function routinely endures.

No place is this more apparent than in the area of air travel. If you travel like me, week in and week out, you try to view the security around you as a “necessary inconvenience”, and if you’re lucky, sometimes you can make it fade into the background as more or less “white noise”. But at other times, it becomes almost unbearable, and makes you question whether we have gone too far and perhaps blown right past the point of what is rational…or what many would call the point of diminishing returns.

Last week, I traveled to Canada from the East Coast through Denver. From the start of the trip, you see security all around you. It starts with the almost inconspicuous. The handful of security officers outside the drop-off zone who essentially keep vehicles from getting too close to the airport for any length of time for the obvious reasons. Then there are the few security faces patrolling the ticketing area, although I see fewer and fewer of these of late. The real security doesn’t really begin until you enter the always crowded baggage screening area. And it begins in a BIG way. At any busy airport, there are multiple layers that emerge at this point of the process:

  • First there are those who guide you into the security line and make sure you have a boarding pass
  • Then there are the TSA ID checkers (for lack of a better term) – the ones who validate your boarding pass against your ID or passport
  • Of course, there are those who tell you what screening lane to go to
  • …and those who guide your bag onto the belt and make sure things get into the screener’s “black box”
  • There are those folks who apparently manage the “inventory” of the plastic containers that carry the laptops and other loose items over the screening belt
  • …and the agents who guide the passenger through the metal detector or body scan (the red light/ green light dude)
  • Occasionally, there is the added service of the person who guides you out of the metal detector/ body scan and toward your belongings (in case you forgot whether your conveyor was the one on the right or the left)
  • There are the screeners themselves (the ones behind the TV monitors, and in the case of the body scanners, somewhere behind closed doors)
  • …and those who make sure the items come out of their “black box” and keep the flow moving
  • There are those who collect the plastic bins so they can be recycled and brought back to the other end per the instructions of person #5 above
  • There are supervisors at the end of the chain, who are always “at the ready” to handle any problems that may arise
  • …and of course, all of the other stuff we don’t see (People behind the scenes monitoring cameras, immigration and customs agents, people reading the body scanners, et al.)

Perhaps my sarcasm is bleeding out of the above list. But it’s not because I don’t value the job these people do. It’s because I have serious doubts about three things: 1) that we are investing in the right areas (proportionally)…steering resources to the areas that have the biggest threat potential; 2) that the investment in each area is commensurate with the risk (which conventionally is defined as the product of probability and consequence); and 3) that the overall processes are as efficient and effective as possible.

My take is that we haven’t really asked these tough questions largely because the areas of safety and security have been treated like “blank checks” by most since 9/11. To answer the question is to put ourselves dangerously close to assigning probabilities and price tags to human lives. So we kind of “exempt” these areas from the whole performance debate. We don’t challenge the investment. We don’t challenge the efficiency of the process. And we certainly don’t subject the outcomes or value produced by these investments to any real scrutiny. In my view, it’s very much how we treat the performance of educators and teachers in our public schools. The process is simply “too important” to subject it to the pains of proactively measuring and managing real performance accountability.

So back to airline security for a minute…

  • ARE WE FOCUSING OUR RESOURCES APPROPRIATELY? If so, why are nearly all of our visible security resources located “behind” the security entrance? In Denver, I encountered 59 (yes, I actually counted them) visible security personnel from the time I walked in the door to the time I boarded the plane – 53 of whom were located inside the screening area itself. I know I am not the first person to draw this conclusion, but if you were trying to inflict harm and terror on a large group of people, you would probably do so BEFORE you got caught up in the first layer of security (i.e. in the line itself, amidst the biggest crowd). But that is not where their focus is.
    The majority of the $8.1 billion we spent on airline/airport security in 2009 was focused on the 50,000+ TSA agents (and the associated technology and equipment) that are situated INSIDE the boundaries of the screening area itself. Ergo, the nature of last week’s attack in Moscow should not have been a surprise at all.

TSA Agents at work

  • ARE OUR PROCESSES AS EFFICIENT AND EFFECTIVE AS THEY COULD BE? This one is easy. NO. Not even close. Those who defend the process sometimes steer the argument to the “deterrent value” achieved by the physical presence and show of force the TSA provides. Perhaps a little, but that argument falls apart quickly. To any reasonably educated person, the process on the surface simply LOOKS chaotic, uncoordinated and misguided. Anyone who has thought about this in a challenging way could probably tell you the top 10 ways they would beat the system. Instead, we spend over $8 billion on a process that could very likely be pierced in a heartbeat. In my view, this is because we layer new processes on top of bad ones…never revisiting the underlying value that each component activity or task contributes to the desired outcome. And this doesn’t stop with the TSA. Look at Customs and Immigration and you’ll see the same thing. Last week I sat on a plane at the gate for 40 minutes in Denver (after landing almost an hour early) because of a new regulation that required passengers entering the US to remain on the aircraft until they were within 10 minutes of their scheduled arrival time. I’m no immigration expert, but it’s hard for me to conceive of a risk that warrants this type of policy guideline (I’m all ears if there is one!). Another example is the safety briefing on the aircraft itself. Other than the 15 or so references aimed at BlackBerry and iPad users (which is a whole separate debate in and of itself), nothing materially has changed with this briefing in over four decades. We just keep “layering on” without ever pruning back. Come on…are there still people out there who don’t know how to buckle a seat belt???
  • ARE THE RESOURCES WE SPEND COMMENSURATE WITH THE RISKS WE ARE MITIGATING? This is the real question, isn’t it? But as I said before, answering it begins to place a value on the people and assets you are trying to protect. Again, we spend $8 billion that we didn’t spend before 2001. And that is really a small fraction of the overall investment, much of which we don’t see visibly. And since much of that goes to reactive strategies and tactics (dealing with the causes of events that have already happened), it is not likely to deter or prevent the NEXT real threat…be it mail rooms, baggage collection areas, biological attacks, railway threats, etc. Next time you travel (whether it’s by air, or just to the mall to buy something), look around you and ask yourself how much investment is being made in the spirit of “protecting” you, and then ask yourself how much of it really adds value. I continue to be amazed how much of it could be pruned back with little impact on the end result. The cost of getting to the “zero defect” solution – which is most likely not achievable anyway – would be enormous and would suck the lifeblood out of any industry trying to survive in today’s economic climate. (I’m reminded of Ralph Nader’s discussions of smoke hoods and air bags being mandated by the airlines (the latter is actually in testing as we speak!)) Simply speaking, the “safety and security” label has ostensibly given us a license not to challenge these investments and to view the money pit as bottomless.

While I’d like to think all of this is just impacting the airline industry and travel community, most of us know that is far from accurate. Many of us can see this in our day to day lives. I am reminded of this every time I enter a client’s office building and see 2-3 guards at the guard desk badging and signing in visitors (a process that in its traditional application is of marginal value to begin with, and is often bypassed altogether by the people it is designed to protect!). And for most businesses, that is just the visible part of the process. There are the 24/7 parking lot patrols, grounds security, and those in the policy and admin part of the process; NOT to mention all of the security surrounding corporate and IT systems risks which often dwarf what we see on the surface.

Then there are the “first responder” type functions we all have in our local communities, which REALLY IS the “third rail” if there is one, and one that I would be best served to avoid (but not just yet). Take a state like New Jersey, where we have literally dozens and dozens of small towns (some as small as 3-5 square miles in geography), each of which has its own Police, Fire and EMS departments and management infrastructure. Layer on top of that the often restrictive policy and union guidelines (one of which actually requires sending two EMS units to every 911 call), and you have a recipe for gross inefficiency, runaway budgets and little prayer of any future tax cuts.

The list clearly continues, and there is no shortage of examples like this. And while only a few of you may be directly associated with the management of safety and security functions in your organizations, all of us are in a position to initiate this debate, and perhaps even influence it.

The steps we need to take are not that different from what we would take in any business process. They revolve around the questions I’ve posed above: Are the investments commensurate with the outcomes we want to achieve? Are we effectively deploying these investments in the right areas? Are our processes efficient and effective? Does each activity make a contribution to value?

A simple start would be to begin measuring the function like we do many other business processes. Some benchmarking certainly wouldn’t hurt. But most importantly, look inside the process itself with an eye toward the forensics. Begin by establishing a linkage between every activity and the outcome and value it produces. Without that, the layering of bad process on top of bad process will no doubt continue.
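
To make that concrete, here is a minimal sketch (with entirely made-up figures) of what such an activity-to-value linkage might look like: each visible security activity is tagged with its cost and the risk it mitigates (probability times consequence, per the conventional definition above), and activities with a poor value-to-cost ratio surface as pruning candidates. The activity names, probabilities, and dollar amounts are illustrative assumptions, not real data.

```python
# Minimal sketch (hypothetical data): link each security activity to the risk it
# mitigates and the cost it consumes, then flag low-value candidates for pruning.
# Risk uses the conventional definition: probability x consequence.

activities = [
    # (activity, annual cost $M, probability of threat, consequence $M, fraction of risk removed)
    ("ID / boarding pass check",      400, 0.002, 5_000, 0.10),
    ("Checkpoint baggage screening", 3_000, 0.002, 5_000, 0.60),
    ("Bin inventory management",      150, 0.002, 5_000, 0.01),
    ("Pre-checkpoint patrols",        250, 0.004, 5_000, 0.40),
]

def value_ratio(cost, prob, consequence, reduction):
    """Expected risk avoided ($M) per $M of activity cost."""
    risk = prob * consequence  # expected loss if unmitigated
    return (risk * reduction) / cost

for name, cost, prob, cons, red in sorted(activities, key=lambda a: value_ratio(*a[1:])):
    ratio = value_ratio(cost, prob, cons, red)
    flag = "  <-- pruning candidate?" if ratio < 0.001 else ""
    print(f"{name:30s} risk avoided per $ spent = {ratio:.4f}{flag}")
```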

It’s not rocket science. But we must start by bringing these areas into focus and increasing the transparency of the issues. Continuing to exempt them from the debate will only delay the improvement that we all know is possible.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

Is Your Scorecard Getting Stale?

Around this time of year, it is not uncommon to see clients challenging and refining the metrics that they will use to evaluate performance as the year progresses. Actually, it is quite a timely and productive exercise to go through, as many of us have just come out of our year-end planning cycles, where a number of our goals and objectives may have been modified. And while one might argue that metrics should be an outgrowth of the planning cycle itself, we all know how often those processes get short-circuited. So doing a quick inventory of our business metrics is always a healthy practice to get into.


In talking with one of my clients last week, we got into a fairly lengthy discussion about the value of having measures that don’t change very often. In his view, certain measures at his company were in fact “getting stale” and were hard to do anything meaningful with from a motivational or incentive standpoint, because they really lacked any interesting “movement” in the reporting. Was he doing harm by keeping them on his scorecard? Was it time to bring some more “interesting” or “challenging” metrics to the table?

Anytime I get a question like this, I try to go back to asking how well their metrics line up with “where value is created” within their business…a sometimes obvious and trite, but often very valuable question. Going through a challenge like this can reveal a lot about where changes might be necessary. A few things to consider along these lines:

– Most of the time, the measure itself is not what has gotten “stale”, but rather the target against which you judge success. I had a client recently tell me that his target for a particular metric was to “improve” or “get better” year on year. Quite frankly, I believe this is a clear recipe for lackluster improvement. Most sports teams that have achieved greatness (consistently) usually started with some pretty bold and specific turnaround or improvement aspirations. Resetting the bar with a healthy dose of ambition can really bring life back into what might appear to be a stale set of business metrics.

– Sometimes, it’s not the measure or the target that is the problem, but rather how the measure is positioned. Simple metrics such as safety incidents, outage statistics, etc. can look stale, especially since success is evaluated based on the “absence” of something happening. Simply changing the way the metric is positioned, however, can have a huge impact on visibility and motivational value. Repositioning these metrics into things like “days since last incident”, “near misses”, or “time between failures” can turn a sleepy metric into something that grabs more attention (a simple sketch of this repositioning follows this list).

– Some measures, however, are meant to fade into the background over time. From time to time, we add metrics to the scorecard because of a problem that needs fixing. A good example of this is in corporate services functions, where things like “help desk response times”, “recruiting cycle times”, etc. have become the centerpieces of metric reporting. In fact, most of these areas have gone so “cycle time happy” that, while I’m not sure anything is getting done, I am certain it’s getting done FAST! Sure, these metrics were born because, at some point in the past, cycle times in the associated areas were really, really bad!!! But at some point, you need to acknowledge when a gap has been closed and put a metric into what I’ll call “maintenance mode”. It might not have to “go away” altogether, but perhaps it should fade into the background a bit so that a new source of value can gain visibility and be exploited. A good example of this is how many call centers have decreased the importance of things like speed of answer and abandon rates, and have put more emphasis on the role reps can play in shifting customer behaviors and the service channels utilized.

– and yes, there are times where the metrics we use are simply crappy metrics, and while they may have made sense at the beginning, they either no longer motivate the right behaviors, or worse, incentivize the wrong ones. Don’t be afraid to trash some metrics periodically so that you don’t end up creating layers of dead weight in your scorecard and Performance Management activities.
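
As promised above, here is a minimal sketch of the repositioning idea, using hypothetical incident dates: the same incident log, expressed as “days since last incident” and “mean time between incidents” rather than a raw count of things that didn’t happen.

```python
# Minimal sketch (made-up incident log): the same underlying safety data,
# repositioned as "days since last incident" and "mean time between incidents".
from datetime import date

incidents = [date(2010, 3, 14), date(2010, 7, 2), date(2010, 11, 20)]  # hypothetical dates
today = date(2011, 1, 31)

days_since_last = (today - max(incidents)).days
print(f"Days since last incident: {days_since_last}")

ordered = sorted(incidents)
gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
if gaps:
    print(f"Mean time between incidents: {sum(gaps) / len(gaps):.0f} days")
```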

So if you are one of those managers in the throes of self-reflection, happy hunting. Just make sure you go through the process a little more deliberately and methodically so that you don’t end up throwing the proverbial “baby out with the bathwater” (which incidentally is a metaphor that I hope is not based on any real history!!!).

Seriously, the key is ALWAYS TO MAKE SURE THE METRICS YOU USE MATCH UP WITH HOW YOUR FUNCTION, BUSINESS UNIT, OR COMPANY INTENDS TO BUILD VALUE FROM ITS EFFORTS during the current planning and reporting cycle.

-b
 


 



2010 – EPM (Enterprise Performance Management) Year in Review

As is my tradition in the final days of the year, and in anticipation of the one to come, I provide below some of the more significant trends we’ve observed through our Performance Management related work with clients and colleagues in 2010, and some thoughts on what we see as the major forces, issues and trends that are likely to shape the year ahead.

Certainly no one will dispute that the most dominant force, regardless of industry, continues to be the economy. No single factor has impacted the C-Suite and Performance Management executives more than the economy, in terms of both its stifling impact on corporate growth and the state of paralysis it has created in our ability to plan for and manage the inevitable yet unpredictable growth that lies ahead.

By definition, EPM (Enterprise Performance Management), or CPM (Corporate Performance Management) as it is sometimes called, is all about linking strategy and KPIs to the management and improvement initiatives that occur in the daily operation of the business. We accomplish this through effective measurement of performance, analysis of identified gaps, deployment of course corrections, and changes to business processes and operating protocols. This activity is challenging even in times of stable growth and “normal” operating conditions. But in times of unpredictability and chaos, which is what most of us find ourselves in today, it most certainly adds a level of complexity (and stress) that would challenge the best of us.

The wise among us would say that these are the exact times to refocus ourselves on the things we CAN control versus those we can’t, and perhaps, more importantly, to understand the difference between the two. Recognizing and acknowledging that difference requires understanding the business well enough to make such a call.

However, as Performance Managers, what we often find is that if we understand the business and its drivers well enough, we can actually identify ways of controlling what appears initially to be uncontrollable. Such is the case with good Performance Management Systems. They help us understand the business at the level of depth and granularity necessary in proactively managing change amidst the levels of risk and uncertainty we all find ourselves in today.

In my view, it was that transition in practice and philosophy that characterized the EPM discipline in 2010. Most of the improvements I’ve witnessed over the past twelve months were about how to make EPM within our organizations more flexible, dynamic, and better able to improve the manageable and change the changeable, while often managing and influencing what appeared to be unpredictable. To this end, we’ve seen changes in everything from the scope and focus of our EPM organizations to the manner in which we track, measure, and manage our performance. We’ve seen changes in how we define value, and when and how we declare victory. We’ve also become more keenly aware of the weaknesses and shortfalls of our EPM programs–from our ability to capture and track the right data, to the systems we use to report and analyze “mission-critical” information.

As in past years, there was no shortage of success stories. And, of course, none of us experienced these successes without our fair share of failures and setbacks.

Below are what I believe to be the most significant factors characterizing EPM successes and failures in 2010:

  • Clarity around the role of EPM as a discipline within the business
  • Better “line of sight” linkage between strategy, KPIs, and business improvements
  • More focus on “value capture” and bottom line results
  • A shift toward (more holistic) “profitability management”
  • Consolidation (for the better) among technology vendors
  • Standardization in the data environment (within companies/ across industry)
  • Investments in EPM skill building and cultural transformation

Some of the above are strategic in nature, while others are more tactical and operationally focused. But I believe they all have relevance to how EPM will move forward into 2011 and beyond. I offer these, and the expanded discussion of each below, for your reflection, and as a beacon for how we can all navigate the challenges that lie ahead in the coming years.

More clarity around the discipline of EPM/CPM

In 2010, many of us were able to build more clarity around the role, charter, and delivery systems for EPM within our organizations. While this might not have been as “clean” a process, or ended up as structured, as we might have liked, most of our organizations now see our role as more established, and with a clearer sense of purpose, than in years prior.

In previous years, the relationship between Corporate IT and the groups responsible for Enterprise Performance Management has always contained some friction, mostly around the management of the data required for effective performance reporting (i.e. data warehouses, and the more recent Business Intelligence (BI) solutions). Many companies have also struggled with the interface between EPM and other corporate governance/support functions (Corporate Performance, Operations Analysis, Strategic Planning, and even areas like Auditing, Risk Management and Capital Planning), where roles and boundaries are sometimes hazy at best. These conflicts in role clarity often forced EP Managers to either relegate their role to basic metric tracking, or risk continuing amidst the confusing roles and frequent “turf battles” that had come to define our relationships with these stakeholders.

Howard Dresner (widely credited with popularizing the term Business Intelligence (BI)) actually defines EPM as “BI with a Purpose.” For me, that is a good summary of what began to take place in 2010 within many of our EPM organizations. In 2010, EPM began to find its identity amidst what was clearly a state of role confusion. Rather than battling over whether the company needed a Performance Management solution or a BI solution, one simply became a way of leveraging the other (i.e., while both disciplines utilize operational data, EPM is a discipline and set of processes for driving the effective management of performance, while good BI enables a data environment that makes all that possible). The same case can be made for each of the other disciplines mentioned above. They all use performance data to differing degrees and for different purposes. 2010 identified and clarified many of these distinctions, bringing a great deal of that into focus. And for EPM managers, that was a welcome change, providing a clear mission and charter to rally around.

2011 will hopefully build upon that clarity. But EPM managers need to continue their vigilance, adding new dimensions of value to what they’ve created for the business, while carefully nurturing their stakeholder relationships so that the role clarity achieved thus far evolves into lasting internal partnerships. The more EPM can deliver a clear and distinct “value add” from its efforts, and make clear the impact it has on the P&L, the more visible and vital (and less redundant) the company’s investment in EPM will be perceived to be.


Better “line of sight” linkages (between strategy, KPIs, and business improvement initiatives)

2010 saw a marked improvement in the ability of EP Managers to show visible “line of sight” linkages between the activity of metric tracking and its impact on operational business improvement initiatives. For many, this journey has been painful, and some have found the boundaries previously referenced even harder to discern, as their EPM groups were sometimes forced (out of necessity) into directly driving the very operational changes that, in the end, are actually operational accountabilities!

But on balance, 2010 saw mostly positive developments in how companies manage the “downward” linkages between KPIs, business metrics, and the operational improvements underway in their organizations. That often required a clear process for identifying gaps in key measurements, and quickly deploying business improvements using a variety of improvement methods (e.g., Lean, TQM, Kaizen, Six Sigma). EPM groups that have been able to demonstrate these kinds of linkages, and show examples of how they can work, have achieved something big, and should be proud of it. Soon, they will be able to step back into more facilitative roles, allowing the operating groups to take back the baton and continue propagating these changes within their respective Business Units.

The same, however, cannot be said for the “upward” linkage between KPIs and Corporate Strategy. More often than not, the very successes we have had operationally have only highlighted areas where business strategies themselves have either not been defined or lack sufficient clarity. Over 70 percent of the organizations with whom we have worked in 2010 have expressed major concerns in this arena.

Just as EPM groups have successfully facilitated a “line of sight” linkage between measurement and operating improvements, many will need to apply the same facilitative role to marrying their company’s strategy with the underlying measures and KPIs of the business. In some cases, where Company and Business Unit strategies do, in fact, exist, this will simply mean identifying the key gaps and weaknesses so that business strategy is clear, compelling and integrated into the KPIs that are routinely tracked. In other cases, it will mean introducing some basic strategic thinking and frameworks (e.g., Porter models, options theory, etc.) to executive teams (particularly those who spend most of their time in the operational space) in order to kick-start or revisit the strategic planning process, and actually develop what may be the Company’s “first REAL strategy.” And for some, it may only mean serving as a catalyst to force a better integration between existing strategy and the company’s KPI framework. But in all cases, this is likely to be a major challenge, as it will require a much stronger partnership between EPM and the highest-level executives and strategic planning support groups (Planning, Finance, Risk Management, etc.) within the organization. Building that “upward” linkage will be essential to completing the type of full “line of sight” visibility required of a successful and sustainable EPM environment.
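
A minimal sketch of what a basic “line of sight” audit might look like, assuming hypothetical objective and KPI names: it simply flags KPIs with no upward linkage to a strategic objective, and objectives with no KPI coverage.

```python
# Minimal sketch (hypothetical names): a simple "line of sight" audit that flags
# strategic objectives with no KPI behind them and KPIs tied to no objective.

objectives = {"Grow regulated margin", "Improve network reliability", "Reduce cost to serve"}

kpi_to_objective = {
    "SAIDI (outage minutes)": "Improve network reliability",
    "O&M cost per customer":  "Reduce cost to serve",
    "Help desk cycle time":   None,   # KPI with no upward linkage
}

unlinked_kpis = [k for k, obj in kpi_to_objective.items() if obj not in objectives]
uncovered_objectives = objectives - {obj for obj in kpi_to_objective.values() if obj}

print("KPIs with no line of sight to strategy:", unlinked_kpis)
print("Objectives with no KPI coverage:", sorted(uncovered_objectives))
```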


More focus on “value capture” and “bottom line results”

Starting in 2009, and into 2010, we saw a much more deliberate focus on what some would call “finishing the race”. All too often, we have seen measures tracked and reported for the purpose of compliance or satisfying the optics of performance measurement. But for “best practice” EPM organizations, success is not only defined by the presence of a scorecard or dashboard, but also by being able to generate hard and sustainable results in terms of savings, service level improvements, or other (more substantive) sources of business value. We’ve seen numerous companies who have made the progression from not tracking downstream value at all, to being able to assign clear, single-point accountability for the full lifecycle of a particular KPI or critical business metric. This means not only owning the measure and the reporting of it, but also the accountability for meeting targets, closing critical gaps and being responsible for delivering downstream improvements and incremental value to the bottom line. Often, this requires a robust framework for identifying and managing these accountabilities, and an overall philosophy of “commitment management” that is embraced culturally by the company. There are many tools that have emerged in this arena, from the creation of “value registers” to formal “commitment tracking” protocols for executive and operating management.
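
For illustration only, here is a minimal sketch of what a single “value register” entry might capture, using hypothetical field names and figures: one owner accountable for the KPI’s full lifecycle, the gap against target, and the value committed versus the value actually captured.

```python
# Minimal sketch (illustrative fields and numbers): a "value register" entry that keeps
# single-point accountability for a KPI's full lifecycle, from gap to captured value.
from dataclasses import dataclass, field

@dataclass
class ValueRegisterEntry:
    kpi: str
    owner: str                    # single point of accountability
    target: float
    actual: float
    committed_value_musd: float   # improvement value committed to the bottom line ($M)
    captured_value_musd: float = 0.0
    initiatives: list = field(default_factory=list)

    @property
    def gap(self) -> float:
        # for a cost-type metric, a positive gap means we are above target
        return self.actual - self.target

    @property
    def capture_rate(self) -> float:
        return self.captured_value_musd / self.committed_value_musd

entry = ValueRegisterEntry(
    kpi="O&M cost per customer", owner="VP Operations",
    target=180.0, actual=195.0,
    committed_value_musd=12.0, captured_value_musd=7.5,
    initiatives=["Lean work-order redesign", "Supplier contract renegotiation"],
)
print(f"Gap vs. target: {entry.gap:.1f}; value captured: {entry.capture_rate:.0%} of commitment")
```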

2011 will hopefully see a continuation of this trend, bringing true clarity to how EPM organizations should be measuring themselves as service providers. Coming back to the first observation, there is no better way to clarify the identity and value delivered by an EPM function than to generate and consistently deliver on a robust pipeline of value improvements to the business.


Shift toward (more holistic) “profitability management”

While the focus on “value capture” has had significant impact on the identity of EPM and directly on the bottom line, many EPM executives have realized that success needs to go beyond conventional sources of value. For most, the definition of value over the past few years has translated directly to cost savings and productivity gains, essentially addressing the question “what and how much have you saved for me lately?” But if nothing else, the economic recoveries that followed past downturns have shown us the flaws and negative consequences associated with this kind of singular focus on cost savings (a.k.a. the “death by a thousand cuts” solution). Plain and simple, it works for a while, but quickly becomes a debilitating force when the business inevitably returns to periods of rapid and dynamic growth.

Striking an appropriate balance between conventional cost savings and other (perhaps less obvious) sources of business value will become a critical success factor for companies in the years ahead. Some companies have begun this transition by changing how value is actually defined within the business. For these organizations, value is seen through the much wider lens of what actually drives profitability, and from what sources. Conventional thinking asks where we can add value by cutting the direct cost of goods sold and driving increases in operational outputs and labor productivity. More innovative and holistic thinking, on the other hand, delves well beyond direct operating costs and begins to tap into the savings embedded in corporate overheads (IT, HR, supply chain, etc.), value that is locked up in our supplier and business partner relationships, and even value that may reside in customer behavior and day-to-day interaction with the company (i.e., those areas that were historically regarded as uncontrollable or may have been considered “off limits”).

We believe this expanded focus on profitability (versus simple operating costs and productivity) will have significant impact on what we measure in 2011, as well as how we define and claim value on the back end of our process. EPM can and should play a major role in LEADING this transformation, using its data and measurement frameworks to reveal new profitability drivers to the organization, and, in turn, growing the active pipeline of value improvements (a new corporate asset) for the business.


Technology focus/ vendor consolidation

A few years ago, the landscape of supporting technologies was characterized by a plethora of vendors, each touting its own unique (and often proprietary) version of a performance management “system”. In fact, the domains within which they all operated were blurry even for those companies and the external (independent) research organizations who tracked their capabilities on a regular basis. Were these BI vendors? Dashboard providers? Visualization tools? Reporting engines? All of the above? There was a time in the not-too-distant past when the number of quasi-credible players for a company looking for performance management software would have stretched well into the hundreds.

Today, the landscape looks very different. Not only are the “credible” PM technologies fewer in number, but the clarity of the domains within which they play has also increased significantly for all involved (who are the real EPM vendors, versus those who are simply pieces of the puzzle?). In early 2010, Gartner published its Magic Quadrant analysis, which did a good job of illustrating the consolidation that began a few years ago when each of the major IT solution providers acquired various BI and other performance management/reporting niche players in what turned out to be the start of a major industry consolidation.

With the exception of the very small one-off solution providers, the credible list of EPM technologies (which I define as robust (features and capabilities), easily integrated (open versus closed systems), and scalable) can now be counted on one hand. That’s good news for those who have waited until 2011 to pull the trigger on their EPM technology purchase/upgrade, as the job of vendor selection has gotten much easier, and the cost of deployment much lower.

In 2011, we expect a significant increase in the set of capabilities and innovations each of these players bring to the table, with the biggest of these being integration (within and between other applications such as risk management, asset management, capital planning, portfolio management, and HR), automation (less manual data manipulation and conditioning, better leveraging of BI tools), and the portability/ flexibility of reporting mediums (e.g. mobile versus desktop reporting).


Standardization of the data environment itself

Key to some of the above changes will be improvements in how data at all levels are collected, synthesized, and reported. Some would say this all started years ago with increases in regulatory oversight and the application of clear reporting standards (everything from basic GAAP to SOx in the financial realm, to industry-specific reporting such as FERC and NERC in the Utilities sector), many of which have made reporting transparency a way of life. But for others closer to the world of financial reporting, those forces will likely pale in comparison to what is coming in the era of International Financial Reporting Standards (IFRS), where moving to a global standard for transparency and reporting will prove far more complex and daunting.

2011 should see the acceleration of these factors on the EPM radar screen. Changes will no doubt emerge in terms of how data must be collected and reported, so “tuning into” these changes now will allow you to get ahead of the curve and be in a position to influence this transition within your organization (rather than reacting from the sidelines to what emerges from IT and Accounting, two of the most impacted functions). As with most such changes, the implementation is never straightforward, so staying ahead of the curve may even create opportunities to drive positive change in the overall data environment within which you operate and on which you rely. Use it to your advantage.


EPM skill building/ cultural improvements

As performance managers, we always talk about the importance of skill building and driving culture change. But with the exception of those companies that are heavily invested in one of today’s major quality/business improvement platforms (Lean, Six Sigma, et al.), investment in a true performance-driven culture has fallen woefully short of what is necessary in a successful EPM environment. In fact, many companies that have made significant investments in the above-referenced platforms have actually lost ground in recent years as these initiatives became viewed as “passing fads” that merely generated lots of “lip service”. The bottom line is that there exists a broad spectrum of experiences in this space, from those who have invested heavily to those who have invested little to nothing.

What continued to concern us in 2010 was the number of organizations that had invested heavily in the EPM discipline (by building a support structure, acquiring dashboard technology, etc.), yet appeared to be moving backwards, largely because they had not made the corresponding investment in the EPM awareness and leadership skills that are required at even the most basic stakeholder levels. Many of these organizations had limited their investments to tactical skill building like diagnostic and analysis techniques (typical of operationally driven cultures) rather than the broader suite of skills demonstrated by leading EPM organizations.

Performance Management is a major investment in business infrastructure and governance, and to implement it without an aggressive, yet targeted, approach to EPM skills at all levels of management (Performance Leadership, Reporting, Analysis, Commitment Management, Managing Change, to name a few) will guarantee some major failures along the way.

The good news is that this is an area many have determined to be a priority, and many of those who have underinvested in the past intend to make up significant ground in the coming years. But by the same token, most companies have not adequately defined where these investments should be made and what specific skills should be focused on, and hence lack a credible “learning” program that can really accelerate their EPM success. The starting point for all of this is doing a solid inventory of EPM learning within your organization (defining the required skills and competencies, and understanding where you stand on each), and then building a comprehensive plan to introduce and reinforce these new behaviors into your business. As organizations, we know how to bring new skills into the business, having introduced effective learning programs in everything from technical skills to safety, diversity, and basic operating management skills and behaviors. Integrating EPM skills into these programs, consciously and deliberately, should be a major focus of EPM in the coming year.
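
A minimal sketch of that starting-point inventory, using the skill areas named earlier in this section and made-up proficiency scores: compare required versus current proficiency for each EPM skill and rank the gaps a learning plan would need to close first.

```python
# Minimal sketch (illustrative scores): an EPM learning inventory that compares
# required vs. current proficiency and ranks the gaps to close first.

required = {"Performance Leadership": 4, "Reporting": 3, "Analysis": 4,
            "Commitment Management": 4, "Managing Change": 3}
current  = {"Performance Leadership": 2, "Reporting": 3, "Analysis": 3,
            "Commitment Management": 1, "Managing Change": 2}

gaps = {skill: required[skill] - current.get(skill, 0) for skill in required}

for skill, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    if gap > 0:
        print(f"{skill:24s} gap = {gap} proficiency level(s)")
```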

———————–

So with that, we will put another volume of “EPM-year in review” on the shelf, hoping that it will be useful to you as you refine your strategies, plans, and tactics for the new year.

With any luck, 2011 will mark the long anticipated turnaround in the global economy, as well as the deployment of new EPM practices, tools and approaches that will help us navigate the new growth and ambition that will come with it. But let’s not lose sight of what enabled us to navigate through the challenges of 2010 amidst the unprecedented levels of uncertainty that surrounded all of us. Risk and unpredictability will always be present, whether visible to us, or merely lurking in the background. Being able to manage within that environment will continue to differentiate the best among us in the years ahead.

My sincerest best wishes for all of you over the Holiday season, and for a happy, prosperous and successful 2011!

-b


The Primary Fuel of Dissatisfaction…

Following up on an earlier post, the question of what really “fuels” dissatisfaction has been a hard one to answer because it is both multidimensional (i.e., there is no single source of discontent) and unique to the individual customer. That notwithstanding, I do believe the answer revolves around one single category of emotions: FEAR and UNCERTAINTY.

Come on, really? Many customers are customers of simple products. Not all purchases are big purchases like houses, cars, or other things that will stick you with really long-term regrets. Most daily purchases – your gas transaction, payment of a utility or cell bill, a hotel reservation, and the like – are obviously too simple to drive the emotions of fear and uncertainty, right?

To the contrary, I believe fear and uncertainty emanate from many sources, not the least of which is being “surprised” in a way that has negative consequences. How many of you have gone through the angst associated with the uncertainty of data charges from your cell phone company? Or wondered if your identity will be compromised by an online purchase? Or if your electric bill for this month will bust your budget? Or if your credit card will exceed its limit and embarrass you among a group of close friends or family? Even something as simple as the uncertainty of missing an airport connection can create hours of angst, rendering any exceptional service you receive before, during or after a flight pointless. Why? Because for most of us, worrying about something important ends up crowding out everything in close proximity to it. Outside of a small handful of us who can compartmentalize emotions, most things outside of what’s urgent and important for us take a backseat until what’s important gets resolved.

Some companies seem to get this, although I wonder how much of what we see in this area is deliberate rather than simply random or haphazard success. Nevertheless, you more than likely have seen some examples of how uncertainty can be effectively minimized, if not overtly managed. Some simple examples include:

– airlines that announce connecting gates while still in the air
– unlimited calling and data plans
– leveled payment plans from electric and gas utilities
– notification of hold times and queue lengths

The proliferation of SMS alerts for everything from bank balances and data usage to first-class airline upgrades and flight delays all helps customers avoid surprises. Still, I wonder if some companies are just doing these things for technology’s sake rather than from a genuine understanding of the customer’s mindset and motivating forces. In fact, most of this can be done without any technology intervention.

I am reminded of when United Airlines used to (maybe they still do) allow customers to tune their in-seat audio to the ATC frequency so that they could monitor the flight. One of the reasons I liked that was that you could hear about turbulence being reported by other pilots in advance of the bumps, as well as all the requests by your pilot for faster routing or smoother altitudes, and any unexpected delays. In fact, even now, when I am on an airplane that is going through turbulence for more than a few minutes, I start wondering if the pilot is actually working as hard as the United pilots did to find the smoother air. Of course they probably are, but at least with United I knew. And that made the uncertainty go away.

Here’s another more recent example, and perhaps my favorite so far. The other day I ran into an electric utility that alerted (actually, reminded) customers that the summer months were approaching and bills would be spiking…thus opening up an opportunity to convert customers to both a leveled payment plan (same amount every month) and a direct debit option, minimizing or eliminating the elements of uncertainty and surprise from the customer interaction. More importantly for the utility, it had the dual benefit of saving enormous amounts of money by minimizing transaction costs and eliminating a huge volume of inbound calls to the call center related to high-bill issues (high-bill complaints are the highest-duration and highest-cost type of call for utility companies, and 50+ percent of the time the call actually ends with the customer concluding the bill was similar in magnitude to the same time last year). Can you think of many cases in which being proven wrong leads to a positive and happy state of mind?
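
The leveling arithmetic itself is simple; here is a minimal sketch with a made-up twelve-month bill history, showing how a flat “budget billing” amount removes the summer spike from the customer’s point of view.

```python
# Minimal sketch (hypothetical bill history): the arithmetic behind a leveled
# payment plan -- average the prior 12 months of bills into one flat monthly
# amount, removing the summer "surprise" spike for the customer.

monthly_bills = [95, 88, 82, 90, 110, 160, 210, 230, 150, 100, 92, 98]  # $/month, made up

leveled_payment = round(sum(monthly_bills) / len(monthly_bills), 2)
peak_bill = max(monthly_bills)

print(f"Leveled monthly payment: ${leveled_payment:.2f}")
print(f"Largest month-to-month surprise avoided: ${peak_bill - leveled_payment:.2f}")
```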

I think the implications of adopting this “avoid the surprise” philosophy could be very large in terms of taking customer satisfaction to a new level. But it does require some fundamental changes in everything from how we view customer behavior, to how we design our offerings, and most importantly, how we define, measure and manage our success in this domain.


CSAT- the BIGGER picture…

First of all, my apologies for not having written in a long, long time. Funny how the things that we enjoy the most take a back seat to the urgent priorities of the day (or, in this case, months) that are sometimes far less fun or rewarding. The good news is that there have been lots of interesting client experiences over the past several months, and hence lots of good fodder to expound on in the weeks and months ahead…assuming I can manage to carve out the hour or so a week it takes to get them down on paper.

Top of mind for me right now is what companies are doing (or, more importantly, NOT doing) to drive good customer service. I think this stems from both a failure to understand what really makes a customer tick, and the associated failure to measure it, and ultimately manage it. As a backdrop, I’d ask us all to think about the word “tick”. For most, the phrase “what makes a customer tick?” translates into the things that really “drive” or “motivate” them to buy something, or just feel good about your product or service. But today I want to focus on a more literal interpretation of the word “tick”. I’m thinking of something like the ticking of a timer – like a clock winding down to 0…at which point things go “boom”…which in today’s economy quickly translates into a lost relationship, a lost sale, or a lost client. In my judgment, today’s customer is much more focused on extracting maximum value from the services they have ALREADY bought or paid for, much more so than (or at least long before) they will entertain buying something else from you.

So with that as the backdrop, I think we’d all be a lot better served by taking another, perhaps closer, look at the drivers of DISSATISFACTION as our primary way of driving customer value. Putting the drivers of dissatisfaction ahead of all the bells, whistles, and other sources of delighting the customer will get you farther, because if you can’t avoid the dissatisfaction, then all of the rest is a moot point. Of course, most of you understand that, right?…and have already put in place measures to prevent a customer from getting to the point of dissatisfaction. All of you probably measure things like how fast calls are answered, how many are abandoned, how many issues are resolved in the first contact, etc., and through doing those things you minimize the likelihood of a customer being dissatisfied, or at least staying dissatisfied, right? Not so fast.

Another perspective is that by the time a customer calls, the clock is ALREADY ticking, and whatever is done DURING the customer call is often occurring AFTER the clock has wound down to almost zero. For many of you, the picture may in fact look like this: the customer gets through without being dropped, bounces out of the automated call system in quick order, talks to a rep (who “resolves the call”), ending with the customer ostensibly satisfied because they didn’t call back or give a bad score on the automated survey, right? Of course, there is another interpretation…which is that the customer was already quite ticked when they called in, at which point they immediately concluded (based on the first 3 choices from the IVR) that they wouldn’t get anywhere with that route, bounced out of the IVR, ran into an unhelpful rep, and politely left the call without taking a survey, more upset than when they started. Call me cynical, but if that was a ticking time bomb to start with, chances are it went boom within minutes of the call ending, and did so with all of the measures and indicators pointing to the opposite, and the company thinking they have a happy customer whose ultimate dissatisfaction has been averted.

I submit that companies that score well on the traditional metrics of CSAT are giving themselves a false sense of security and are probably missing the core elements of the customer’s perspective…those that largely revolve around lingering sources of discomfort that are hard to express, not to mention measure or quantify. If we can get our arms around this, we have a much higher likelihood of eliminating perhaps our single biggest blind spot in generating customer value and ultimately leapfrogging the competition.

The next few posts will focus on what some of these up-front drivers are, as well as the kinds of things we need to be measuring in this space. Fortunately, this is an area where many of you are not behind the pack, because there is nobody really leading the pack. In the past several months, I’ve worked with some of the self-proclaimed “best” companies (those who perform well on the conventional indicators) and have interacted as a customer (as many of you have) with the “big names” in customer service, with less than adequate results and a time bomb in my gut that is still ticking long after the “polite” ending of the call to the company.

I think we would all be better served by making the following our key priorities in our drive to maximize customer satisfaction:

1. Redefine the drivers of customer satisfaction, and dissatisfaction (the things that start the countdown on the time bomb)

2. Seriously rethink what we measure and track, and the baseline against which we evaluate success (my hunch is we will throw out a lot of what we measure today)

3. Correct the upfront flaws in the design of our offerings and processes so that dissatisfaction is minimized and we all have a more solid base on which to build in the years ahead

-b
