Making Your Targets Achievable…It’s About Progress, Not Perfection!

I spent part of the weekend relaxing and watching the AT&T Pro-Am, where more than a few players (and celebs) entered the final day well within contention. Many golfers are entering 2011 in good form for so early in the year, and while there was only one winner, there were at least a dozen players (from 40-somethings like Phil and VJ to the cadre of “young guns” who appear to have come out of nowhere and are now taking center stage) within striking distance when the day started on Sunday.

As it turned out, the winning team featured one of those young guns (D.A. Points) alongside his celebrity playing partner, none other than the “old fart” we all remember from Caddyshack, Mr. Bill Murray himself. And what a finish it was.

But what was interesting about this day was how many guys were playing great golf, having clearly taken their game to the next level. It was kind of odd listening to Mickelson (after mucking up a few key holes that likely cost him a come-from-behind win) describe the tournament as “really fun” and his play as “much improved”. Sure, anyone making tens of millions a year could probably maintain that attitude after “letting one slip away”. But I actually watched that interview thinking the guy was genuine. In my view, good athletes get where they are not only through hard work and unwavering commitment, but also by recognizing and reinforcing the principle of improvement as much as they do the actual win.

As you read the post below, I encourage you to think about how you can apply this principle to your periodic goal-setting process, whether it’s setting new goals or negotiating mid-course corrections. I hope you enjoy what I believe is some pretty timeless advice.

—————————

From my previous post “PROGRESS, NOT PERFECTION”…

Good performance managers can separate the “aspiration” from the “journey” toward it. Notice I said “toward it”, not “to it”. Performance management is a process, not an end game. It’s a journey “toward” a state of perfection, knowing that you may never fully achieve it. It’s working damn hard at something knowing that you never really graduate or declare a perfect ending. There’s always something else to aspire to. Our job as performance managers is to manage the process, the journey, using the “end game” only as a beacon to navigate toward.

To some of you, this may contradict one of my earlier writings on ‘not accepting mediocrity’. In fact, there is a contradiction, and it’s by design. Goal setting is an art, a constant search for the balance between stretching too far and accepting mediocrity. Good goal setting will stretch the capabilities of the individual without demoralizing them with repeated failure. For example, an organization may aspire to six sigma performance standards, but manage the process in a way that reinforces and rewards milestones along the way. And even at six sigma, there’s still something to aspire to.

Think about the game of golf. Hogan once said that man will never play a perfect round of golf, because of the nature of the game. Think about it: a perfect score of 18 (a hole-in-one on every hole) is beyond human reach in the game as we know it. Hogan also said that when he played a round of golf, he could expect only a handful of shots to go exactly as he planned them. Wow! Now that’s amazing. Here’s a world-class golfer at his peak saying that out of 65-75 strokes, only 4 or 5 would pass his test of perfection.

But despite the fact that we’ll never achieve that perfect end state, the game of golf does challenge us with goals of par (what a good golfer should shoot), birdies, eagles, double eagles, and those rare but attainable holes-in-one. The game’s scoring is also adjusted for a player’s handicap, which changes as his skill improves. Few sports encourage and motivate players ‘toward’ a level of perfection, without ever fully achieving it, the way golf does.

So, sticking with this analogy, how do golfers motivate themselves in a world where they’ll never fully achieve “perfection”? Most good golfers play one stroke at a time, putting far more focus on fairways hit, greens in regulation (GIRs), sand saves, up-and-downs, and average putts per green. That’s how they do it. They set meaningful and achievable milestones for the journey, knowing that if they achieve those, the final score will take care of itself. Turn on the TV any Sunday afternoon and you’ll see it in action. Even if you don’t like golf, you can’t help but be impressed by how these men and women manage their game (their journey).

If you like these analogies and can relate to them, there are some great writings on the subject that illustrate this point better than I ever could. Three that I recommend are Golf Is Not a Game of Perfect, Life Is Not a Game of Perfect, and The Golf of Your Dreams, all written by Dr. Bob Rotella, a noted sports psychologist. While these may appeal most to the golfers among us, his style of writing lends itself to wide application of these principles, from the workplace to life in general.

So as you set goals and manage your people toward achieving them, remember not only to focus on the ‘end game’ or ultimate aspiration of perfection, but also to place an equal if not greater focus on the journey and the milestones you must achieve along the way.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

First Things First- PROCESS before Technology…

Here is a post I wrote a few years back. I was reminded of it while on a call today about Sales Enablement and Automation. Not surprising that it always comes back to “doing the RIGHT things RIGHT”. BTW, an update on the article below: three years later, the process at EWR is STILL THE SAME!!!!

———————————

Here’s a brief story from my departure out of Newark International Airport following a recent business trip. Hard to believe, but true.

After a long flight home from the West Coast, I took a short train ride to the long-term parking facility, located my car (which is becoming more difficult with age, it seems), and proceeded to the parking exit. Note that it’s been a while since I’ve used the long-term parking facility, as I normally use a car or taxi service, so I was largely unfamiliar with their new “high tech” customer solutions.

As I pulled up to the pay station (expecting the attendant to inform me of my charge), she immediately looked at me with the gaze of a very frustrated woman who had obviously done this before. In a short tone, she barked out an instruction suggesting that I had passed an automated ticket booth, into which I should have inserted my ticket and noted the charge. I complied, quietly wondering why this woman was in the booth at all, given that the machine and I pretty much had this thing licked. I concluded, of course, that she must be there to collect the money, so I proceeded to pay her. Not a good assumption, as she pointed me back to the machine to insert my payment. OK, I get it, I interact with the machine for this too…no problem, I thought; this is a pretty good solution. I then waited for the machine to give me my receipt, an obvious assumption given how the first two steps went. Nope…wrong again. This time she wanted me to drive to her and pick up my receipt, at which point she pressed a button, lifted the gate, and I was on my merry way.

I can’t help thinking about all the time and money that went into implementing this slick new solution, which probably cost an arm and a leg, had little to no impact on cost savings, destroyed customer satisfaction, and obviously put the employee in a perpetual state of ‘grumpy’. What this was, was yet another example of “technology for technology’s sake”.

When I work with organizations on business improvement, one of the most important themes I try to drill home is PROCESS FIRST, then technology. You don’t implement technology on top of a broken process. Nor do you attempt to fix a broken process with technology alone.

The right path is to measure the effectiveness of the process before you begin. Establish a baseline. Understand how the process works today (the ‘As Is’ state). Look for places to improve the process. Define changes. Examine the effect of each potential change on overall performance. Then, and only then, define the technology, systems, skills, and organization needed to support the new process. Develop cost-benefit analyses and business cases. Re-examine the degree to which performance will improve over the baseline. And then you’re almost ready for implementation.

It’s a simple principle, but one that often gets overlooked. Pay attention to this in your everyday life and you’ll probably see many similar examples. Then use them as lessons learned, and start living by the mantra “First Things First”: process first, technology later.

 

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

 

Don’t Go Overboard on KPIs

While much has been written in the past about performance management, most of it has dealt with things like the design of measures, development of targets, benchmarking, reporting methods, and IT solutions. Precious little has been written on the quantity of measures…essentially the question of how many measures an organization should have as it cascades past the first few levels.

As most of you know from my past writings, I am a big fan of the “fewer is better” principle, the reason being that focus becomes distorted once you get past a certain number. Quite frankly, I don’t know psychologically why that is, nor do I really care. The less people need to remember, recall, and process, the more likely it is to stick. Ever wonder why things like social security numbers and phone numbers are broken up into three- and four-digit clusters? It’s well established that people recall numbers of seven digits or fewer far better than longer ones, and that recall improves further when the digits are broken into three- and four-digit “chunks”.

The number of measures shouldn’t be any different. In fact, the word KEY in key performance indicators (KPIs) suggests the need for that very level of focus. But for some reason, the design principle steering today’s KPI development seems to favor “more is better” over more focused measurement design. In the last three weeks, I either spoke with or visited five companies that have an executive KPI “dashboard” in place. Four of the five organizations (and they were NOT alike in any way: different industries, geographies, and cultures) had more than 15 KPIs, with one of them nearing 40!

So here are some things to check to ensure you have the right number and type of KPIs:

1. Don’t confuse “balance” with volume:

While organizations are encouraged to have a “balanced” set of KPIs (e.g., a “balanced scorecard”), that does not mean every business unit and functional workgroup in the organization’s structure needs the same degree of balance. Some functions exist for the sole purpose of moving one or two key indicators, and may legitimately have nothing to do with others. You’re better off with that group being responsible for 3-4 relevant indicators instead of a “balanced” suite of 25.

2. Don’t let the complexity of your metrics portfolio dilute the vision and compelling narrative of the business:

Some of the best companies out there have developed a short and compelling narrative or “elevator pitch” that encapsulates the essence of the company’s vision, mission, and strategic plan (our history, current vision, purpose, main points about strategy, and how we will measure success). What’s important here is the ability to drive “recall” of the vision by the employees who are responsible for internalizing it and carrying it out. Better to have a few indicators they can relate to, internalize, and influence than a multitude of indicators that go largely unnoticed.

3. Make the numbers mean something:

Often, that will mean avoiding “index” or “roll-up” indicators. These types of indicators often have meaning only to the person who built the underlying algorithm. While it is OK to use them sparingly (perhaps at high levels where they can be easily interpreted), I’d be inclined to get these indexes quickly translated into units that represent results. Think of a CSI (customer satisfaction index) of 45 versus metrics like % of customers dissatisfied with a service call, % rework, and first-call resolution %. If you can create meaningful numbers, the need to measure a large number of “component” metrics typically goes down, freeing up attention for the drivers and causal factors that will have much more impact on maximizing your PM dollar.
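
To make the contrast concrete, here is a minimal Python sketch. The CSI weights, metric names, and values are entirely hypothetical, invented for illustration only; the point is simply that the roll-up yields one opaque number, while each component points at a concrete action.

```python
# Hypothetical illustration: a roll-up index vs. its component metrics.
# All weights, names, and values below are invented for this sketch.

weights = {
    "dissatisfied_pct": 0.5,            # % of customers dissatisfied with a service call
    "rework_pct": 0.3,                  # % of work requiring rework
    "first_call_resolution_pct": 0.2,   # % of issues resolved on the first call
}

def csi(components: dict) -> float:
    """A made-up Customer Satisfaction Index: a weighted blend of components."""
    # First-call resolution is a "higher is better" metric, so it enters inverted.
    return round(
        weights["dissatisfied_pct"] * components["dissatisfied_pct"]
        + weights["rework_pct"] * components["rework_pct"]
        + weights["first_call_resolution_pct"] * (100 - components["first_call_resolution_pct"]),
        1,
    )

components = {"dissatisfied_pct": 12.0, "rework_pct": 8.0, "first_call_resolution_pct": 78.0}

print(f"CSI = {csi(components)}")        # one opaque number: is this good or bad?
for name, value in components.items():   # each component suggests a concrete action
    print(f"  {name}: {value}%")
```

An index value by itself tells a manager very little; “12% of customers dissatisfied” tells them exactly where to look.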

So there you have it, a simple list of three tips (not 5, 8, or 10, but 3)…hopefully simple enough to recall as you continue to improve your PM process.

-b


Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com


Garbage In-Garbage Out…

One of the age-old problems we encounter as performance managers is data reliability. While it should intuitively be the most important aspect of performance management, it is, relatively speaking, given much lower priority than its “sexier” relatives.

ERPs, data warehouses, analysis engines, web reports…the list goes on. Comparatively speaking, each and every one of these important PM dimensions gets its fair share of mind space and investment capital. But as the old adage goes: “garbage in, garbage out” (GIGO). We all know that data quality is a necessary prerequisite for any of these tools to work as designed. So why does so little time and attention go into cleaning up this side of the street?

Tell me you can’t identify with this picture. You’re sitting in a senior management presentation of last quarter’s sales results. Perhaps you’re even the presenter. You get to a critical part of the presentation, which shows a glaring break in a trend that has been steadily improving for months. It signals the obvious: something bad has happened and we need to address it now! Conversation turns to the sales force, the lead qualification process, the marketing department, the competition… 45 minutes later there’s no real clarity, except for lots of “to-dos” and follow-up commitments.

Fast-forward two weeks (and many man-hours of investment). The Sales VP is pummeling one of his sales managers to “step up” performance and wants new strategies. A new commission structure is discussed, which brings in the need to get HR and IT involved. A few days later, while working on implementing some of the new strategies, a new story begins to unfold. An IT analyst, deep in the bowels of the organization, astutely recognizes THE big missing piece of the puzzle. You see, last month the manager of the Eastern Region changed the way he wants “sales closes” reported (the way deals are essentially recorded), from a definition based on “client authorization” to one based on “having the contract in hand”, a very useful distinction, particularly when viewed from a cash flow and accounting perspective. The only problem is that it was applied locally, not corporate-wide, resulting in the apparent data anomaly.

It sounds a bit too simple for a modern corporation well into the technology age. But unfortunately, this kind of story is all too common. We all understand the principles of GIGO, yet it continues to chew up corporate resources unnecessarily.

Overcoming the GIGO problem should be our number one priority- before systems, before reports, before analysis, before debate, and before conclusions are drawn. Before anything else, data quality is THE #1 priority.

Here are a few tactics for getting a solid “data quality” foundation in place:

1. Understand the “cost of waste”-

We measure everything else; why not measure the cost of poor data quality? Take a few of your last GIGO experiences and quantify what the organization wasted on unnecessary analysis, debate, and dialog around seemingly valid conclusions gone awry. This doesn’t have to be complex. Do it on the back of an envelope if you have to. Include everything that goes into it, including all the levels of management and staff that get involved. Then communicate it to your entire PM team. Make it part of your team’s mantra: data quality matters!
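
Here is what that back-of-envelope math might look like as a quick script. Every figure below is a hypothetical placeholder; plug in your own rates, headcounts, and incident frequency.

```python
# Back-of-envelope "cost of waste" for a single GIGO incident.
# All figures are hypothetical placeholders; substitute your own.

loaded_hourly_rate = 150    # fully loaded cost per management/analyst hour ($)
meeting_attendees = 8       # people in the review that went sideways
meeting_hours = 0.75        # the 45-minute debate over a phantom trend
follow_up_hours = 40        # analysts chasing the anomaly for two weeks
rework_hours = 16           # redoing the analysis once the data was corrected

cost_per_incident = loaded_hourly_rate * (
    meeting_attendees * meeting_hours + follow_up_hours + rework_hours
)
incidents_per_year = 12     # say, one bad-data fire drill per month

print(f"Cost per incident:    ${cost_per_incident:,.0f}")
print(f"Annual cost of waste: ${cost_per_incident * incidents_per_year:,.0f}")
```

Even with these modest placeholder numbers, a monthly fire drill adds up to six figures a year, which is usually enough to get data quality onto the agenda.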

2. Become the DQ (Data Quality) CZAR in your company-

Most performance managers got where they are by exposing that “diamond in the rough”. We got where we are by using data to advocate for change. It’s hard to imagine getting executive attention and recognition for something as “boring” as getting the data right. But that is what needs to happen. The increased visibility of post-Enron audit departments, SOX initiatives, and other risk management strategies has already started this trend. Performance managers must follow. You need to embrace DQ as something you and your department stand for.

3. Create Data Visibility-

In some respects, this has already begun, but we have to do more. Our IT environments can disseminate information to every management level and location within minutes of publishing it. But let’s go one step further. Let’s “open the book” earlier in the process, so that those who can spot data issues get into the game sooner. People have different roles when it comes to performance management: some are consumers, and some are providers. It’s just as important to create visibility for the input factors as it is to publish those sexy performance charts. You’ll get the input of that fourth-level IT analyst I mentioned above much earlier in the process.

4. Utilize External Benchmarks Where Possible-

Benchmarks are often used within organizations to set targets, justify new projects, defend management actions, and discover new best practices. These are all good and noble reasons to benchmark. One of the most overlooked benefits of benchmarking, however, is the role it plays (or should play) in your DQ process. I can’t tell you how many meetings I’ve been in where the presence of an external benchmark highlighted a key problem in data collection. Sometimes seeing your data compared against a seemingly erroneous metric can expose major breakdowns that would otherwise have gone undetected. Using comparisons to highlight reporting anomalies can be a very valuable use of external benchmarks.
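
As a rough sketch of the idea, the check below compares internal metrics against external peer ranges and flags anything that falls outside them as a data-quality question before it becomes a performance debate. The benchmark ranges, metric names, and values are all hypothetical.

```python
# Hypothetical illustration: external benchmarks as a data-quality screen.
# Peer ranges, metric names, and internal values are invented for this sketch.

benchmark_range = {                       # plausible peer range: (low, high)
    "cost_per_customer": (180.0, 260.0),
    "first_call_resolution_pct": (65.0, 85.0),
    "sales_close_rate_pct": (18.0, 30.0),
}

internal = {
    "cost_per_customer": 212.0,
    "first_call_resolution_pct": 74.0,
    "sales_close_rate_pct": 4.5,          # suspiciously far below the peer range
}

for metric, value in internal.items():
    low, high = benchmark_range[metric]
    if not low <= value <= high:
        print(f"CHECK THE DATA: {metric} = {value} is outside peer range {low}-{high}")
```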

5. Establish a DQ process-

It would be nice if all data were collected in an automated manner, where definitions could be hard-coded and “what to include” would never be in question. But in most companies, that is simply not the case. Our research has shown that over 50% of the data used in performance management processes is still collected manually, yet very few companies have a defined and auditable process for doing so. This does not have to be complicated, as some very useful tools are emerging that help collect, validate, approve, and publish required data, just as there are for data reporting and scorecarding. Having a process, and a system to ensure that process is followed, is a critical element of data collection, and hence makes for a very good investment.
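
To make that concrete, here is a minimal sketch of a validate-then-approve step for manually collected data. The field names and rules are hypothetical, but note that the first rule would have caught the Eastern Region definition change from the story above before anything was published.

```python
# Hypothetical illustration: validate manually collected data before approval.
# Field names and rules are invented for this sketch.

submissions = [
    {"region": "East", "sales_closes": 42, "definition": "contract_in_hand"},
    {"region": "West", "sales_closes": 55, "definition": "client_authorization"},
]

def validate(batch):
    errors = []
    # Rule 1: every region must report against the same metric definition,
    # exactly the failure mode in the Eastern Region story above.
    definitions = {row["definition"] for row in batch}
    if len(definitions) > 1:
        errors.append(f"Inconsistent definitions in use: {sorted(definitions)}")
    # Rule 2: basic sanity checks before anything is published.
    for row in batch:
        if row["sales_closes"] < 0:
            errors.append(f"{row['region']}: negative sales_closes")
    return errors

issues = validate(submissions)
print("APPROVED" if not issues else f"REJECTED: {issues}")
```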

6. Don’t forget the Culture –

As I said above, most data, for the time being, will be collected manually, without fancy IT infrastructure. People will still be at the heart of that process. Invest time in helping them see the importance of the information they are collecting, how it will be used, and what process will be followed. Many organizations spend tens of millions on a systems solution to what is largely a people and culture problem. Investing in training and coaching can deliver as high a payback as those mega systems investments.

* * * * * * * * * * * * * * * * * * * * * * * *

So as you navigate your internal data collection efforts, try to keep these tips in mind. Sometimes it’s the simple “blocking and tackling” that makes the difference between the winners and those in second place.

 

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com


Managing Those Elusive Overheads

One of the biggest challenges faced by operations management is how to improve costs and service levels, especially when such a large portion of these costs is perceived to be “outside” of their control.

Despite recent attempts to control corporate overheads, it’s still very common for corporations to saddle operating management with an “automatic” allocation for overhead costs such as Employee Benefits, IT, Legal, Facilities Management, Accounting…the list goes on. Our studies show that most of these costs are still allocated back to management as a direct “loader”, or percentage markup, on staff employed in the operating business units. Not only does this unfairly disadvantage operating managers, who have little perceived influence on these costs, but it also creates a “masking” effect as these costs mysteriously get buried in the loading factor itself. Operating units struggle from year to year trying to capture that next 1, 2, or 5% of efficiency gains, while over 50% of their costs are, in effect, off limits.

But some organizations clearly understand the challenge and have begun to make real strides with corporate overheads. For some, it has involved ugly corporate battles, political in-fighting, and the “muscling through” of allocation changes. For others, the challenge has been a bit easier, by focusing on what really matters: visibility of overheads, and a direct path toward managing them.

Here’s a quick list of areas you can focus on to improve the way overheads are managed:

Transparency– The first, and most important, driver for successfully managing overheads is making them visible to the enterprise. All too often, overheads from shared services functions are not visible to anyone outside the shared services organizations themselves. In fact, the word “overhead” has an almost mystical connotation: something that just shows up like a cloud over your head.

One of my clients once said, “The most important thing leadership can do is to expose the ‘glass house’.” Overheads need to be taken out of the “black box” and put into the “fish bowl.” Once you can see the costs clearly, both operating and corporate management can begin making rational assessments about how best to control them.

Accountability– This is arguably one of the trickier overhead challenges, since managing overheads involves accountability at multiple levels. To simplify this challenge, most companies simply define accountability at the shared service level (VP IT, or VP Legal, for example) and leave it at that.

More successful organizations, on the other hand, split this accountability into its manageable components. For example, management of shared services functions can be accountable for policy, process, and the manner in which work gets performed. But there is a second layer that deals with “how much of a particular service” gets provided, and it’s that component that must be managed by operations if we are to hold them accountable for real profit and loss (discussed below).

Doing this right requires some hard work on the front end to define the “drivers” of overhead costs that are truly within line management’s control. A simple example is Corporate IT, where the IT department defines overall hardware standards and security protocols, while the variable costs associated with local support are based on actual usage and consumption of IT resources. That’s an overly simplified example, but still illustrative of how the process can work. Most overhead costs have a controllable driver. Defining those drivers, and distributing accountability for each, will go a long way toward showing how and where these costs can be managed, as the sketch below illustrates.
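
As a rough illustration of the difference, the sketch below allocates the same IT overhead two ways: a flat headcount loader versus a driver-based charge keyed to usage. The rates, usage figures, and unit names are entirely hypothetical.

```python
# Hypothetical illustration: flat headcount loader vs. driver-based allocation.
# All rates, usage figures, and names are invented for this sketch.

units = {
    "Plant A": {"headcount": 200, "tickets": 150, "devices": 180},
    "Plant B": {"headcount": 200, "tickets": 600, "devices": 420},
}
it_overhead = 1_000_000  # annual shared IT cost to recover ($)

# Old way: identical loader per head, regardless of actual consumption.
total_heads = sum(u["headcount"] for u in units.values())
loader = {n: it_overhead * u["headcount"] / total_heads for n, u in units.items()}

# Driver-based way: weight each unit by the usage drivers operations controls
# (support tickets and devices, with hypothetical relative cost weightings).
weights = {n: u["tickets"] * 800 + u["devices"] * 1000 for n, u in units.items()}
total_weight = sum(weights.values())
driver = {n: it_overhead * w / total_weight for n, w in weights.items()}

for n in units:
    print(f"{n}: flat loader = ${loader[n]:,.0f}  driver-based = ${driver[n]:,.0f}")
```

Under the flat loader both plants pay the same, even though one consumes several times the support; the driver-based charge puts that difference, and the means to manage it, in operating management’s hands.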

“P&L” Mindset– There’s been a lot of debate around whether shared services functions can truly operate like real profit centers. The profit center “purists” argue that internal services should behave just like “best in class” outsourcers, and if they can’t compete, they should get out of the way. The more traditional view is that once a service is inside the corporate wall, it becomes somewhat insulated from everyday price and service-level competition, the reasoning being that opening these services up to competition would be too chaotic and would ignore the sunk costs associated with starting up or winding down one of these functions.

A more hybrid solution that I like is to treat the first few years of a shared service function like a “business partnership” with defined parameters and conditions that must be met for the contract to continue. It takes a little bit of the edge, or outsourcing “threat”, off the table, and allows the operating unit and shared service function to collectively work on solving the problems at hand.

Still, shared services functions must look toward an “end state” where they begin to appear more and more like their competitors in the external marketplace and less like corporate entitlements. In the end, they must view their services as “universally contestable” with operating management as their #1 customer. For many organizations, particularly the larger ones, that’s a big change in culture.

Pricing– Save for the conservationists and “demand-siders”, most modern-day economists will tell you that the “price tag” is the way to control consumption of almost anything, from drugs to air travel. And it’s no different in the game of managing corporate overheads.

Once you’ve got the accountabilities squared away and you’ve determined the cost drivers that are controllable by operating management, the price tag is the next big factor to focus on. One of the most important pieces of the service contract you have with operations management is the monthly invoice, assuming it’s real and complete. It needs to reflect the service provider’s true cost, not just the direct or variable costs of serving operations; otherwise, it’s a meaningless number. In the end, the pricing mechanism needs to be something that can be compared and benchmarked against leading suppliers of a particular service. For that to be possible, price needs to reflect the true cost of doing business.

Value Contribution– So far, we’ve only focused on the cost side of the equation. Now, let’s look at service levels.

For the more arcane areas of corporate overheads, where a pricing-for-service approach is more difficult, it is usually worth the time to understand the area’s value contribution to your business unit. Finding the one or two key value contributors becomes the task at hand. For example, in US-based companies the Tax Department is generally staffed with high-end professionals and often holds a substantial tax attorney budget. When treated from a pure cost perspective, a common rumbling among operating management becomes: why am I paying so much for my tax return?

A better question would be: what value am I getting for my money? In this case, taking advantage of key US Tax code provisions can be expensive, but the cash flow impact (in terms of lower effective tax rates) can be a significant benefit to the operating unit. Clearly delineating and quantifying the value, combined with presenting an accurate picture of the cost to achieve that value (OH charges from the Tax department) can bring a whole new level of awareness to these types of overheads.

Of course, for this to work, you need to ensure that parity exists between the function benefiting from the value generated and the function bearing the costs. So before you allocate costs, make sure you match budget responsibility with the function that ultimately reaps the benefits you define.

Service Level Agreements– This is the contract that manages the relationship between you and your internal service provider. It contains everything from pricing, to service level standards, to when and how outsourcing solutions can and would be employed. There must be a process in place to negotiate the standards, bind the parties, and review progress at regular intervals. While this can be a rather time-consuming process (especially the first time out of the gate), it is essential in setting the stage for more commercial relationships between the parties.

Leadership– As with any significant initiative, competent and visible leadership is key. A good executive sponsor is essential in getting through the inter-functional friction and the natural cultural challenges that will likely emerge during the process. Leadership must treat controlling overheads as a significant priority, make the enormity of the problem visible to both sides, and set the “rules of engagement” for addressing the challenges at hand. Without good leadership, the road toward efficient, high-value overheads becomes much more difficult to navigate.

——————–
So there you have it…my cut at the top ingredients in managing corporate overheads and shared service functions. The road is not an easy one, but if you build in the right mechanisms from the start, you will avoid some of the common pitfalls that your organization is bound to face in its pursuit of a more efficient overhead structure.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com