Performance Perspectives

Are our Strategic Plans Selling us Short?

This morning in the Northeast, we had another taste of Fall in the air. I’m sure it was just a fake-out, as the temp climbed back up into the upper 70’s. I know our friends down south, particularly those in the hot and humid “Big Easy”, see that as early winter. For the rest of us, it was a nice departure from the sweltering dog days of summer.

Being a native of the south, one of the things I like about the Northeast is the change in seasons. Not only do I like the change in weather patterns, but all of the other signs of change that come with it. Pretty soon, the leaves will be changing, the days will be cooler, and before long, the holidays will be in sight. I think all of us, deep inside, like the change that occurs in the seasons. And as much as we probably hate to admit it sometimes, some of us actually like change itself.

For performance managers, the change in seasons- particularly summer into fall- also means our lives are about to become pretty hectic. I like to think of it as our “tax season”. We start thinking about updating our strategic plan, preparing initiatives for the next year, and getting started on that old dreaded budget. And while we don’t look forward to the long hours that accompany it, this period of the year actually gives some of us the renewal we need to keep pressing ahead.

One of the things that has always fascinated me is the dichotomy that exists between planning and change: a contradiction, if you will, between change (which by its nature is dynamic) and strategic plans (which have historically been rather static, at least from the standpoint of the plan that results).

Historically, planning was thought of as an activity that was designed to reduce uncertainty, or “manage change”, if you will. Of course, our plans have all the basics- vision, mission, objectives, strategies, initiatives, tactics, financial projections, operational implications, performance measures, targets, and implementation plans. For organizations of our size, that’s a lot of STUFF…so it should be no surprise that many of our organizations take most of the fall and early winter to build, refine, or update our plans- both long and short term. And by December or early January, we have a work product that is based on extensive analysis of all that stuff- THE PLAN- which is often memorialized in a physical document (the proverbial “planning binder”, and the even more important “almighty budget”).

There is nothing inherently wrong with preparing a plan and using it to guide the organization forward. Memorializing our vision, strategies, and tactics is not only necessary, but is by all accounts a good thing to do. But it’s what you do with that plan, and how you use it, that can make all the difference in the world.

Unfortunately, many organizations allow their plan to get “fixed” into the culture of the business. The plan remains on the bookshelf, only to get referenced periodically to validate our actions and thinking. In many cases, the plan almost acts as a history book, or “bible” for our strategic thinking. And that’s where the problem arises.

Optimally, strategic plans should be dynamic, living documents. While it may seem odd to some, I am not a real advocate of documents that “memorialize” the strategy. I’d rather see the documentation focus on the assumptions, and the strategic options we would employ as these assumptions pan out or change. This kind of “options oriented plan” puts more emphasis on the process and the underlying assumptions than it does on memorializing the strategies and tactics chosen to respond to a fixed set of assumptions. “Options based planning” and its close relative “scenario planning” are both examples of dynamic planning. In both form and function, they are far more conducive to the reality of change that occurs daily in our business lives.

I offer these 10 questions to help you discern whether or not your strategic plan is “dynamic” or “static” from the standpoint of dealing with the realities of business and environmental uncertainties and change:

1. To what extent does your vision, mission, and key objectives “guide” versus “prescribe”? – I like to think of this as an airliner on autopilot, in which the system maintains the altitude and attitude of the plane within a band of acceptable limits that correlate to the set parameters. It is acceptable for the plane to deviate slightly, as long as it is within close proximity to the preset parameters, thus avoiding undue stress on the aircraft.

2. Does your organization have a “compelling narrative”- a strategic story that aligns with core elements of your strategic plan? In other words, how easily can your overarching strategy, mission, and key objectives be translated into a “30 second elevator speech” by each of your executives and key stakeholders? Or would their natural response be to go search for the most current strategic planning binder? Does your narrative contain a good description of “what success looks like“? Does it reflect overarching principles, or a prescribed set of tactics? (at this level, the former is preferred to the latter in what we call “dynamic planning”)

3. How aligned would that narrative be from stakeholder to stakeholder, and executive to executive? Is the core theme “embodied” in the fabric of the organization? And can it be repeated, at least thematically, laterally and vertically across the organization? Do your stakeholders understand the tactical flexibility that they have in implementing the strategic vision, or are they looking for a prescribed set of “to do’s”? Remember, a good strategic foundation/narrative does not prescribe tactics, but establishes a strategic direction in a way that allows lower levels of the organization to identify with, relate to, and ultimately link into it with corresponding tactics. It doesn’t define their specific actions. A good narrative will produce those naturally in the tactical phases of your process.

4. Does your plan allow for changes in the operating environment, or is it dependent on today’s snapshot of the current situation? For example, is the plan to become a competitive provider of business services (something that is based on today’s competitors and their position), or the low cost provider (which allows for the realities of new competitors and business models)?

5. How balanced is your strategic plan and roadmap? Rarely does a focus on a single measure survive past the current operating environment. In the above example, would we focus exclusively on cost, or would our strategic ambition include other areas like service delivery and customer retention? In the “autopilot example” in step 1, the plane does not fixate on only altitude, but also involves attitude, pitch, and other variables in its parameters.

6. Do the intermediate levels of your plan embrace the potential for different scenarios and contingencies? That is, do you have multiple options for achieving the same business model and outcome? It’s OK to weight one of your strategies or tactics more heavily than the others. But to become fixated on one strategy that has a 60% probability of success is shortsighted.

7. Does your planning process include some analysis of options value/ alternatives? Options strategy can be of enormous value in a strategic planning process, and the lessons learned here can be significant. For example, one of my past clients was able to discern the difference between saving a dollar of O&M versus saving a dollar of capital- almost a 7:1 tradeoff. Using that kind of analysis can really inform your planning and subsequent decision making processes.
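To make the O&M-versus-capital point concrete, here is a hypothetical sketch of why a recurring operating dollar can be "worth" several times a one-time capital dollar. The discount rate, horizon, and dollar figures below are invented for illustration, not the client numbers from the engagement mentioned above:

```python
# Hypothetical illustration: a dollar of O&M saved recurs every year,
# so its present value can dwarf a dollar of capital saved once.
# The 10% rate and 15-year horizon are assumed, not client figures.

def pv_of_annuity(annual_saving: float, rate: float, years: int) -> float:
    """Present value of a saving that recurs once a year for `years` years."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

one_time_capital_saving = 1.00  # a dollar saved once, today
recurring_om_saving = pv_of_annuity(1.00, rate=0.10, years=15)

ratio = recurring_om_saving / one_time_capital_saving
print(f"PV of $1/yr of O&M saved, 15 yrs at 10%: ${recurring_om_saving:.2f}")
print(f"Tradeoff vs $1 of one-time capital: roughly {ratio:.1f}:1")
```

With these assumed inputs the ratio lands in the neighborhood of 7:1, which is the kind of insight that can reshape where a plan directs its savings effort.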

8. Do your roll-down performance measures reflect the same level of balance, flexibility, and outcome orientation as your top-level plan does? This is really a reflection of how “connected your plan is” throughout various levels of the plan architecture. But it also says volumes about the degree of balance between your objectives and the level of completeness in your tactical and operating plans. For example, if your tactical plans and performance targets were achieved, would there be a corresponding level of success for each of the key strategic options identified for implementation?

9. How often do you review/ iterate your plan in an effort to “rebalance” and evaluate changes in contingency options? For example, does the review look like a once a year “dusting off” of the plan, or do you continuously review (monthly or quarterly) the relevance and changes to key assumptions and scenarios?

10. Do your plan and performance measurement system have “strategic staying power”? Can you effectively differentiate between a static plan that doesn’t change, and what we call “strategic staying power”? By the latter, we mean that once measures have been put in place and are renewed during plan review, do those measures survive changes in organization and personnel? One of our clients actually employed a “vesting approach” in which managers were compensated on the success of a particular measure whether or not they still had direct accountability for that area. This helped compensate for rapid-turnover environments in which managers would otherwise remain shortsighted as they “eyed” future opportunities. Instead, they ended up with high degrees of “carry-forward alignment” and teamwork in helping their successors achieve success.

In short, you don’t want your plan to get so locked onto a specific tactic or objective that you lose sight of other options and contingencies that would contribute equally or more to your overarching definition of success.

I know there are some who see a “lock in, and implement at all costs” approach as far superior, in that it maintains focus and eliminates the distraction of continuously iterating the plan. They may see the embedded flexibility here as a bit of a contradiction- something that prevents strategic focus. If you are in that camp, I would encourage you to look more closely at the distinction between different levels of the process. While I do endorse analysis, definition of options, and contingencies, I do concur with “locking in” on the business model, strategic intent, and the overarching narrative of the business. At the same time, however, I like to see a process that allows for identifying various ways to achieve the planned outcome, ways of dealing with fall-back contingencies, and the ability to revisit the underlying foundation as a last resort when operating or environmental conditions change.

And above all, remember this- planning is a process, not an outcome. If you maintain that perspective, it will be a lot easier to implement a planning solution that is dynamic, flexible, and effectively drives long term success.

Tomorrow, the temperature in the Northeast is expected to be back into the upper 80’s/ low 90’s. Some of my plans will change based on that. My plans for the weekend might look more like a “summer plan” than a “fall plan”. We roll with the changes in the environment we live in. We accept change, and if our mindset remains open, we can actually thrive on it.

As I look at the news today, there are many down on the South Central and Southwest Gulf Coast whose plans will no doubt be changing this weekend with the approach of Hurricane Rita. Our thoughts and prayers are certainly with them. That said, there is no better way to explain the importance of a flexible planning perspective than to look at what our brethren down south have been dealing with for weeks. We can learn much from them, in particular those who can roll with the punches and still keep perspective on what really matters, principle-wise. Those are the true heroes from whom we can learn much.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com


 

Peer Benchmarking Initiatives- Revisited

Over the past several weeks, I’ve gotten more than a few comments on my June 16th column regarding Peer Company sponsored initiatives. Given the volume of comments, which ranged from “spot on” to “downright delusional”, I thought it would be a good idea to take another look at this topic.

First, my acknowledgements to some of the more prominent programs that were mentioned in the article. My intention was not to endorse or condemn any of these programs specifically. In retrospect, my biggest failing may have been the fact that I may have “painted them all with the same brush”. That was not my intention. My failure to discern between the “good” and the “not so good”, while perhaps frustrating to the sponsor companies, was, however, deliberate. The programs I mentioned by name were mentioned only to help the reader identify with what I meant by “peer company sponsored initiatives”- NOT to endorse or condemn any one in particular. If it was interpreted as anything other than that, I do apologize to both the program sponsors and their participants.

For clarification, my main objective with the article was to offer a guide, or checklist, to help the reader discern what to LOOK for, and what to LOOK OUT for, in such programs. All programs, whether sponsored by peer companies, consultants, or independent facilitators, have strengths, weaknesses, and risks. In fact, if you look through my past columns on benchmarking (see article index), you’ll find that I offer similar analysis and guides for other types of programs as well. There is no perfect solution. Again, my only objective was simply to help the reader discern what is best for them.

This year alone, the number of programs available (peer sponsored, and others) will nearly double. And because of this, many companies are facing the tough decision of which ones to participate in, and which ones to pass on. Unlike the early 90’s, resource limitations prohibit companies from participating in everything out there. And despite what may be advertised by these programs, none of them are “free” on any dimension.

Our research has shown the cost of data collection alone to be many times the “entry fee” of such programs (assuming there is one). Offering a way for the reader to pick and choose in an educated manner was, again, my main objective. My other objective, while a bit in the background, was to encourage the facilitators of such programs to respond to these risks, and to help mitigate them for their members.

For example, within a few weeks of publishing the June column, I learned that one of these programs now requires executive review and approval prior to admitting a new member- a tactic that clearly manages one of the key risks identified. As new programs come on line, I encourage the facilitators and members to remain conscious of these risks/ issues, and continue managing them as appropriate. Those that do will no doubt end up as the best “draws” for future members.

As a refresher, I recommended several key questions for companies considering peer sponsored benchmarking initiatives. These questions are just as relevant today as they were in my original column. Some of the peer-sponsored programs manage them well, and some don’t. As I indicated in June, and barring any formal comparison of these initiatives (something we may elect to provide in the future), it’s up to the reader to decide what is right for them. These questions/ issues are offered only as a guide to help inform the prospective member.

1. Do you know the GENUINE REASON the company is offering such a program? (Is it documented, written down, and accepted by executives of both the sponsor company and the participating company?)

2. What is the REAL COST of the program? Again, both to the sponsor company’s shareholder, and the member? What will the real cost of data collection be? Is it redundant with other programs? What is the sponsor company doing to mitigate this cost?

3. How does the program INTEGRATE/ interact with other similar initiatives? Does it compete against them (creating more redundancy), or will it partner on data and other types of integration?

4. Does the program require both managerial and executive level APPROVAL and OVERSIGHT? Are competitive concerns and/or antitrust issues known and mitigated?

5. How will they PROTECT your data? What assurances do you have?

6. How ROBUST is the membership? Are there enough companies in their membership to provide meaningful information for your particular demographic or type of infrastructure?

 

While this may not be an exhaustive list, it is a start. I invite any of you to add to this list via commenting on it, and I will publish any additions/ modifications in future columns.

To me, some of these issues are obviously more important than others. As I go through the above list, I believe the most important issue to be managed TODAY is that of redundancy and duplication of resources. And as more of these programs come to market, this is a cost that will get more and more visible. For example, it would be nice to see some significant effort to merge data requirements so that the member only needs to collect the information once. Some of this occurs today, but only in a very informal and ad hoc manner. More often than not, these programs end up “competing” with each other for very scarce resources. While each program may have something unique to offer, most require very similar means (data required) to arrive at their specific end point.

An analogy to consider: there is a reason why there is only one set of wires running down my street. Anything more would result in stranded investment and underutilization of assets. And in a world of scarce resources, that can’t be good for the buyer. Likewise with benchmarking initiatives. There is a lot of this type of redundancy and stranded investment in the world of benchmarking. Call it wishful thinking, but why not have some type of “data clearinghouse” that feeds data to each program based on what it needs, while eliminating the data collection duplication that is present today?
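To make the clearinghouse idea concrete, here is a minimal sketch of what "collect once, feed many" could look like. The program names, data fields, and values below are all invented for illustration; a real clearinghouse would also need governance, definitions, and confidentiality rules far beyond this:

```python
# Hypothetical "data clearinghouse" sketch: the member reports each
# data element once, and the clearinghouse slices that single
# submission into the (overlapping) field lists each program needs.
# All names and numbers are invented for illustration.

canonical_submission = {
    "om_cost_per_customer": 312.40,
    "safety_incident_rate": 1.8,
    "customer_satisfaction": 87.5,
    "outage_minutes_per_customer": 94.0,
}

program_requirements = {
    "cost_benchmark_program": ["om_cost_per_customer", "outage_minutes_per_customer"],
    "service_quality_program": ["customer_satisfaction", "outage_minutes_per_customer"],
    "safety_roundtable": ["safety_incident_rate"],
}

def feed_programs(submission: dict, requirements: dict) -> dict:
    """Build each program's data feed from the single canonical submission."""
    return {
        program: {field: submission[field] for field in fields}
        for program, fields in requirements.items()
    }

feeds = feed_programs(canonical_submission, program_requirements)
# Note: outage_minutes_per_customer is collected once but feeds two
# programs- that overlap is exactly the duplicated collection cost
# that a clearinghouse would eliminate.
```

The point of the sketch is the shape of the solution, not the code: one collection effort, many consumers, with the overlap handled by mapping rather than by re-surveying the member.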

We must approach and address each of the other risks in a similar manner. Only in this way can we have programs that truly offer a win-win for both the member and the sponsor.

Again, if I offended any of the program facilitators or members by my words or tone in the June 16th column, I sincerely apologize. But I do, and will continue to, strive to offer observations and feedback that strengthen our collective ability to better manage our performance. And to this end, your comments and feedback are both welcome and appreciated. Please direct any of your comments to

rchampagne@epgintl.com

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

 

Capturing Value from Performance Management- One step at a time…

I suspect all of you out there have someone that you rely on for insight and perspective – that wise old mentor who seems to have an unlimited depth of experience to draw from in helping you navigate life’s little challenges. You know, those little parables and anecdotal tales that always relate perfectly to that very problem you’re trying to solve. Today, I go to that well of experience in responding to a problem I know many of you are facing right now- squeezing out that last drop of improvement that never fails to elude us.

First, the problem: Most of you out there in the performance management world have worked for years trying to find hidden value inside your organizations. Along that journey, some of that value (be it cost savings, productivity improvements, or gains in service delivery and customer satisfaction) has come pretty easily.

We’ve all heard the term LOW HANGING FRUIT (I’ll refer to this as LHF). Problem is that many of us haven’t heard that term lately. Why? Because most of your organizations have already captured those kinds of improvements… the kind that smack you in the face right away even when you’re not looking…and are very easy (perhaps too easy) to implement.

And along the way, capturing that LHF has created some real superstars within your company. How familiar is this- upon finding a bottomless pit of LHF, one of your esteemed, but intellectually challenged colleagues becomes the “instant hero” overnight! Reminds me of that FedEx commercial where they go on that long retreat, and the woman gets the idea of using FedEx in the first 30 seconds, and proceeds to get showered with kudos.

Of course now, months or years later, you’re in the role of continuing in your esteemed colleague’s (aka the new hero’s) footsteps. Only the LHF you’re looking for has become much harder to find. Those “big hits” and “home runs” have become fewer in frequency and impact. You work day and night to make your assessments, ground your conclusions, align management, and facilitate the necessary organizational changes. You put in more (much more!) work than your esteemed colleague ever put into it, but capture nothing close to the impact of those initial wins. Where did all that LHF go?

Truth is that it got replaced by the practice of “squeezing blood out of the proverbial turnip”. And you’re the corporate bloodsucker! You find yourself in that under-appreciated job that no doubt produces a lot of value, but not without several times the effort that used to be required. That’s life for many of our performance managers today. Boy, wouldn’t it be nice if someone made this job a little easier?

Enter my mentor, and one of his anecdotal gems. He starts by asking me to recall the first time I went out to the golf course. I’m sure if you’re a golfer, his experience will bring back similar memories. If not, just read on, as I’m sure you’ll be able to relate, at least in spirit.

He recalls his first time on the course: more misses than hits, a verrrrrrrrrry frustrating intro to the game…and probably one he figured he’d give up pretty quickly. But he continued despite his apprehension. It’s amazing how much one good shot (out of over 100) will keep you coming back, but I digress.

5 rounds later, AT LAST, he has more hits than misses- that moment of truth when the golfer gets “hooked”, and the confidence starts to build.

Round 10: now he’s starting to get the hang of it, and feeling pretty good.

Round 15- end of his first full year- he’s cut his score by a whopping 25% (although he won’t mention his previous scores!).

Round 20- the following season- after a couple of rounds dusting off the “off-season rust”, he shaves off another 10-15%, and he’s now shooting somewhat respectable scores (measured, of course, not by the total stroke count, but rather by the number of people who aren’t embarrassed to have him in their foursome). He, on the other hand (although borderline delusional) is now thinking Senior Tour!

Rounds 20-320 (over the course of 30 years): he spends the rest of his golfing career trying to cut his score by another 5%. Where the heck is the LHF now, he asks?

And, of course, the story gets worse….

He has spent virtually no money (except for those second-hand clubs) getting that first 40% improvement, but could damn near retire on the money he spent chasing the next 5% (the latest driver, several hundred sleeves of those new balls that go farther and straighter (NOT), greens fees at clubs he has no business playing at…you get the idea). How in the heck can I get that next 5%, he pleads? Am I that poor of an athlete? Should I change my swing?

Now here’s the punch line… Most professionals don’t have this problem. Why, you might ask. Are they just naturals? Sure, that’s part of it. But there is more. My mentor calls this secret ingredient “the art of diagnosis”, something all of us could be much better at.

If you’ve ever read about the trials and tribulations of pro golfers (a far better investment of time than that new triple titanium, variable weighted, moon dust infused driver !), you’ll find one common thread. They know how to diagnose their game at a level we would never think of.

When we diagnose our game as amateurs (and it’s a stretch to call most of us amateurs!)- assuming we diagnose it at all- we think about things like % of fairways hit, greens in regulation, # of total putts, etc…and that’s OK for a first cut. But the pros go much deeper.

I recently read some work by Dave Pelz- Phil Mickelson’s short game coach and advisor to several tour players. For those of you who don’t know, Dave is an ex-NASA engineer who worked on the first Lunar Module design and who, over the course of the last several years, has applied his expertise to diagnosing and fixing the flaws of pro golfers. An interesting career shift to say the least, but it has paid off. His mission is to help them find those next 5, 4, 3, 2, and 1% (more like .001%) improvements. And how exactly does he do that?

Dave knows the art of diagnosis, which is no doubt driven by his engineering, scientific and technological prowess. Last weekend, I saw a special Dave ran on the Golf Channel in which he encouraged us amateurs to come up with a “short game handicap”. Without going into a lot of detail, this SGH didn’t just deal with one or two metrics, but many that worked together. Things like shot dispersion with different clubs, hit/miss ratios from points inside 30 yards for a variety of shot types, putting success from a half dozen different putt lengths and types, etc. Whether you’re a golfer, or identify better with another sport, the message is the same.

Many of us would cringe at the depth of analysis that goes into this one area of focus. In this case, the SGH only deals with shots inside 100 yards. But if you talk to pros, they’ll tell you that this guy is a miracle worker. Not because of his athletic ability, but because of his savvy at the art of diagnosis. He makes a living off of people (pros and amateurs alike) who have fully captured the LOW HANGING FRUIT, and want to begin sucking that turnip for some more blood. And that’s very similar to our jobs in today’s business environment as performance managers.

As performance managers, we must think like Dave. We must design scorecards that operate effectively at the executive/ “results” level. But we must also possess diagnostic measures that explore strengths and weaknesses in the very processes that PRODUCE and/or CONTRIBUTE to those executive level results. We must be BRUTALLY HONEST with our baseline, and diligent in our goal setting. We must diagnose, challenge, and set new goals at the work-face- goals that, if achieved, will make a difference in one or more sub-processes. In short, we must develop the equivalent of Dave’s breakdown of the TOTAL handicap into SUB-HANDICAPS like his Short Game and Putting-Only metrics.
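As a rough sketch of that decomposition idea, the fragment below breaks a single top-level result into diagnostic sub-scores so the weakest sub-process stands out. The metric names, observations, and weights are invented for illustration, not from any client engagement or specific scorecard tool:

```python
# Hypothetical "sub-handicap" sketch: roll diagnostic sub-scores up
# to an executive-level result, while keeping the breakdown visible
# so the weakest sub-process can be targeted. All names and numbers
# below are invented for illustration.

from statistics import mean

# Raw diagnostic observations per sub-process (1.0 = on target)
observations = {
    "unit_cost_vs_target":  [0.95, 0.92, 0.97],
    "cycle_time_vs_target": [0.80, 0.85, 0.78],
    "first_pass_quality":   [0.99, 0.98, 0.97],
}

def sub_scores(obs: dict) -> dict:
    """One diagnostic score ("sub-handicap") per sub-process."""
    return {name: mean(values) for name, values in obs.items()}

def rollup(scores: dict, weights: dict) -> float:
    """Weighted rollup of sub-scores to the executive-level result."""
    return sum(scores[k] * w for k, w in weights.items()) / sum(weights.values())

weights = {"unit_cost_vs_target": 2, "cycle_time_vs_target": 1, "first_pass_quality": 1}
scores = sub_scores(observations)
overall = rollup(scores, weights)

# The rollup looks reasonably healthy, but the sub-scores expose
# cycle time as the place to dig for the next increment of gain.
weakest = min(scores, key=scores.get)
```

The design point is the same one Pelz makes with the short game handicap: the top-level number tells you where you stand, but only the sub-scores tell you where to work.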

There are many tools available to us that can help us achieve this. Tools that help us design these kinds of narrowly focused, but strategically connected, metrics and scorecards. Tools that help us integrate these scorecards so that we can see the rollup and rolldown effects on the bigger picture. Tools that help us translate our strategic plan into its manageable components. Tools that help us baseline and set targets. Tools that help us benchmark against the outside world. One of the things our company has spent many years focusing on is developing these types of integrated scorecards for business, and helping organizations use them to manage the small but vital pieces of that equation.

But whatever tools you select, the biggest challenge you will face is changing the culture and mindset of the business. Essentially, developing a mindset that recognizes the new game we are in, and that those 1% gains are going to be a lot harder to come by. A game where the entire focus is on that turnip, and how to get that last drop of blood out of it. It is a culture of ACTIVE diagnosis and analysis, not one of PASSIVE enterprise-level KPI tracking and reporting.

And that challenge starts with you, the performance manager. The bad news is that it will take the right tools, the right culture, and a lot of hard work. The good news is that if you can apply this art of diagnosis in the corporate world, you’ll begin to find that next tier of performance improvement, not to mention, a much lower golf handicap.

…And with any luck, I’ll see you on the Senior Tour!

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

A Word (or three…) about Data Standards

I’ve had a number of recent conversations with my clients and business partners about the importance (or lack thereof) we, as a community of performance managers, are placing on data standards. After all, the ability to compare, analyze, and effectively mine for insights among peers depends on being able to have some type of common data lexicon to rely on for our conclusions.

Throughout the course of history in the PM discipline (which is still relatively young), we have seen some bright spots. For example, many industries have in fact established reporting standards that exist to this day. Safety concerns, for example, led to the establishment (and proliferation, I might add) of airline accident and incident reports. If you wanted to (and I wouldn’t suggest doing this before your next plane trip), you could literally find dozens of online databases that profile this type of data and be reasonably assured that the data is comparable across carriers. That is because the NTSB and FAA (save for some inter-agency inconsistencies and recent infighting) require very clear standards regarding when, how, and to what level of detail incidents should be reported. We also see this in the nuclear power sector, where again, the main driver is public safety.

But what about where the main driver is something other than public safety? In regulated industries, we see evidence of similar reporting standards having emerged in banking, healthcare, and local transmission and distribution utilities. In these cases, the driver was more the regulator (e.g., FERC for the utility sector), who set up these standards to prevent monopolistic control from being exerted on the industry and to hold these otherwise shareholder-driven companies to “reasonable” levels of performance. Most of these reporting standards worked well for a while. But as deregulation occurred and competitive markets began to control themselves from a performance standpoint (as capital markets often do), the need for these standards began to wane. Sure, we still see some of the reporting artifacts in place today (for example, the utility sector still requires FERC reporting), but it is almost always viewed as a necessary distraction by the real operational executives within these organizations. The vehicles that were designed to produce a solid reporting standard now produce some of the least reliable information around. And that should be no surprise. When standards are set up to oversee or punish an organization for not achieving a result, you can bet the data will be “worked” and stretched to the maximum extent possible in order to achieve that result.

While I may be accused of being a bit of a Pollyanna, I genuinely believe that the setting of data standards can in fact work well. But the underlying PURPOSE that drives data standardization must change. I am of the opinion that if these systems were set up to enable each company to achieve its fullest potential, from both the cost and effectiveness sides of the equation, companies would be a lot more diligent and honest in their adherence to standards.

In the power sector, for example, there are some benchmarking and best-practice sharing programs that do a great job with reporting consistency. In fact, if I were to bet on the result, I'd put my money on the data reported in these programs well before I would trust the more institutional standards like FERC, NERC, and the NRC (all of which have many more years of history under their belts). Why? Because the benefit of complying in the former case is directly proportional to how much an organization LEARNS from the data that is shared. If you twist the data to reveal a better overall position on the scorecard, you're only hurting yourself.

I believe there is room for a new standard to emerge across all of the industries I mentioned and beyond. A standard that originates from a desire to be competitive and learn, versus one that is set up to regulate and punish. Such a system would need to revolutionize everything from accounting treatment to the work management processes themselves- where a dollar means a dollar (no red dollars and blue dollars) and an outage means an outage. Sure, it's complex, but as performance managers, we've seen these standards achieved before, back when it was done for regulatory and safety purposes. It didn't last, but it did work. It failed not because it couldn't be done, but because its underlying driver was flawed. A good system became just another form to fill out.

So let's try this again, only this time let's make sure that the people reporting the data see the clear benefit of complying with a standard. Only in this way will we end up with a system that is sustainable in the long run.

Poor Bernie? (Getting our heads out of the sand)

I must say that I've had numerous reactions to the Bernie Ebbers news of late- from "good riddance" to pure apathy. Actually, I did feel a flash of compassion as I read about his blubbering crying episode in the courtroom- although it was a very quick flash, kind of like when Jim Bakker was filmed walking out from his sentencing. But by far, my overwhelming emotion was one of justice. For once, executives in corporate America were given the message that we don't reward corporate mischief. In fact, we now punish it- hard. And that's a big step forward that was a long time coming.

But there's a lot more to do in terms of how we reward executive management in this country. Things have gone seriously awry when our executives are given enormous sums of money and other rewards long before they actually perform. And while they may fail to get their second- or third-tier bonus when they miss key targets (some actually still get it, by the way), their base compensation levels are often left untouched. Sure, maybe they lose their jobs somewhere down the road, but only after they've banked millions during their performance backslide.

While the recent conviction and big-time sentencing of Bernie shows that we are not TOTALLY blind as a society of shareholders, there is a long way to go. Punishing those who overstep the line of executive integrity is a start (hopefully it won't take a year next time). But what about the incompetent executive who brings a company down in flames without having committed a federal crime? There should be clear disincentives that stop that kind of poor performance in its tracks. Just stopping the flow of rewards earlier in the backslide cycle would be a start. To me, this is clearly the next battleground in executive performance management.

Why isn't this happening today? Sure- part of it is that many boards and CEOs continue to "wish these problems away" rather than taking swift action in terms of consequences. Part of it also is the poor design of our compensation schemes, which possess precious little in the way of compensation DOWNSIDE for poor performance. Sales teams know this well. Some of the best guys and gals I know in sales have upwards of 90% of their compensation "at risk". Too excessive? Maybe. But no downside to base comp is equally ridiculous. Executive compensation, in design alone, could use some big-time overhauling.
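The asymmetry above can be made concrete with a tiny sketch. Everything here- the dollar figures, the linear payout rule, and the function name `total_comp`- is an illustrative assumption for the sake of the comparison, not a description of any real compensation plan:

```python
# Hypothetical comparison of two pay structures, to make the "downside" point
# concrete. Numbers and the linear payout rule are illustrative assumptions.

def total_comp(target: float, at_risk_pct: float, performance: float) -> float:
    """Total pay when at_risk_pct of target comp varies with performance.

    performance = 1.0 means targets fully met; 0.0 means fully missed.
    """
    fixed = target * (1 - at_risk_pct)          # guaranteed regardless of results
    variable = target * at_risk_pct * performance  # earned only via performance
    return fixed + variable

target = 1_000_000  # target annual comp (illustrative)

# A salesperson with 90% at risk feels a bad year (20% of target hit) immediately:
# 100k fixed + 180k earned, roughly 280k in total.
sales_bad_year = total_comp(target, 0.90, 0.2)

# An executive with 0% at risk collects the full million in the same bad year.
exec_bad_year = total_comp(target, 0.00, 0.2)
```

The point of the sketch is the last two lines: with nothing at risk, the payout function is flat in `performance`, which is exactly the "no downside to base comp" design flaw described above.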

But even if we had well-intentioned boards, operating inside a near-perfect comp structure, willing to act when they detected a performance breakdown, my guess is that the system would still fail to stop poor performers any earlier than it does today. Why? Because of the way performance is reported. The metrics we use are often compiled by individuals or sophisticated algorithms and reported on a periodic basis- weekly, monthly, or quarterly. Sometimes (actually more often than not), at the executive level, the data is reported annually. Hard to believe in the kind of information environment we now find ourselves in.

If we are to reform executive compensation, we must fix all of the things I mention above. But without more timely, accurate, and available performance feedback, even the most perfect system will fail. We must take our collective "heads out of the sand" and bring our performance information to light much more quickly and frequently than we do today. It must be broadly accessible, and accessible on demand. In the information age we live in, there is almost no excuse for the kind of "back room" reporting that still takes place today. The more timely and accessible the information, the less poor-performing executives will be able to hide behind inadequate reporting.

So as you navigate forthcoming rounds of executive hiring at your company, do your part to drive performance information into the open forum. There are many tools and systems that will help you do that, in a manner that is more timely, accurate, and accessible. You might not be the most liked person at first, but if you survive the initial pain, you and your company will have a much brighter future.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com