Midway through lunch I started thinking about this and asked my friends if it was really possible that three million people went into one building complex each day. This was before the advent of online search, so for facts you had to rely on the old guy who hung around on a bench at your local gas station. Since that guy wasn’t handy, we agreed that this was probably a bad fact. I bring this up because we do things like this all the time. We read or hear “facts” and accept them without critical thought. This article reviews a sample of the kinds of “facts” we accept in the workplace that we sometimes should not.
There has been exponential growth in the availability of data and in the ability to analyze that data in business functions. Automated systems in operations such as HR, finance and purchasing create opportunities to easily access huge amounts of data that would have been unimaginable just 20 years ago. Magnificent spreadsheet software and online surveys have further enabled our ability to collect and analyze data. These technologies have facilitated advancements in the use of statistical process control tools in the business environment.
However, with great data comes great responsibility. Many times data is misunderstood and, therefore, used to draw inaccurate conclusions and make lousy business decisions. You will be better off if you can identify a few ways data is often misunderstood and misused; the following examples should help you build that understanding and encourage a healthy skepticism when presented with “facts.” (Note: The author is just a working stiff who observes things over time – not a mathematician or scientist – so if you disagree with the analysis below, you should leave a comment and/or write your own article.) What follows are some common problems you may encounter when collecting or reviewing data.
Results We Forget to Measure
I first discovered Six Sigma when the CEO of a technology company I worked at discussed the concept in an employee presentation. He commented on a goal of achieving only 3.4 bad parts per million items produced. I remember thinking to myself:
1. There’s nothing I’m ever going to do a million times.
2. I probably made 3.4 mistakes before lunch that day.
I was all for improvement but this kind of goal seemed unimaginable to me.
Our corporation was somewhat famous for its program-of-the-month method of management. We had a saying that CEOs came and went but department managers were forever. This expressed our work philosophy of just biding our time until the current fad went away. Now, it turns out our CEO was kind of pushy (others might say driven), and he just wouldn’t let this concept go. So we decided to collect some baseline data on our error rate so we could show we were team players. As it turned out, we learned a lot.
We were not without measures in our business. We had cost measures, weekly spending-versus-goal measures, response time measures and customer satisfaction measures. However, we had never really thought about measuring our defect rate until we contemplated the Six Sigma challenge introduced by our CEO. We decided to try to measure errors, and it proved enlightening.
In my section of the corporation, we were in the business of answering customer questions – typically of moderate complexity. The questions weren’t such that you knew the answer right away, but they could be answered by people after several months of training; an advanced university degree was not required. We assigned a couple of people on our team to listen in on the incoming questions and our answers for a period of several weeks, and we evaluated each answer for both accuracy (did we give the wrong answer?) and completeness (was it the right answer but missing information someone might have found useful?). Prior to this, we had mostly been measuring how many calls were handled per associate.
We found that our error rate was something north of 20 percent, or 200,000 bad answers per million opportunities. We weren’t all great at doing math in our heads but we could all see that this was slightly worse than Six Sigma. We were shocked to find out we were that bad at what we did, but this drove us to significantly reengineer our whole way of doing business.
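To put numbers on that gap, here is a minimal sketch in Python of the arithmetic. The audit sample size below is made up for illustration (our real finding was simply a defect rate of roughly 20 percent), and the sigma-level conversion uses the conventional 1.5-sigma shift from the Six Sigma literature.

```python
from statistics import NormalDist

def dpmo(defects: int, opportunities: int) -> float:
    """Defects per million opportunities."""
    return defects / opportunities * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Approximate short-term sigma level, using the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Hypothetical audit sample: 200 bad answers out of 1,000 calls reviewed (about 20%).
our_dpmo = dpmo(defects=200, opportunities=1_000)
print(f"Our DPMO: {our_dpmo:,.0f}")                      # 200,000
print(f"Our sigma level: {sigma_level(our_dpmo):.1f}")   # roughly 2.3
print(f"Six Sigma target: {sigma_level(3.4):.1f}")       # roughly 6.0
```

Roughly 2.3 sigma versus a target of 6 – hence the shock in the room.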
The specific actions we took to address the problems aren’t important, but we invested in additional automation and used a Lean tool known as standard work. We never approached Six Sigma results, but we did get our error rate into the 5 percent range, which was a dramatic improvement. The big step toward driving improvement was the decision to measure our defects.
The mistake we were making wasn’t not having measures – we had lots of them. Our problem was that we were not measuring all the things that were important. Lean discusses the concept of understanding value from the perspective of the customer, and we had missed a large part of our work that customers valued.
Things We Know for Sure That Aren’t So
There is one special kind of personal experience that is especially harmful to improvement, and that is a personal experience repeated so many times by so many people that it becomes data. There’s a saying attributed to a number of people, including Will Rogers and Lou Holtz, that goes, “We’re not hurt so much by the things we don’t know, but rather by the things we know for sure that aren’t so.” Too often in the workplace we accept certain truths as data and work to solve problems by attacking the wrong things.
Let me give you an example. The CEO of a company I worked for was convinced that the main reason people quit was poor leadership. She felt that if we invested heavily in leadership development, the training would pay for itself in turnover reduction. So the company embarked on a massive global leadership training program based on the “data” that “people don’t quit their employer, they quit their supervisor.” Let’s examine this decision.
First, the decision was made based on a widely accepted truism – people quit because of bad leadership. Labor mobility is a complex topic and I’ve studied turnover data for most of my life. I would say that sometimes people do quit because they hate their boss, but in the data I’ve seen, I never concluded that this was the overwhelming number one key to retention. In fact, a study I read in a leading publication that looked at thousands of people who left their jobs found that people leave great and lousy supervisors at equal rates. It turns out that people working for leaders they view as great tend to receive more development opportunities, have more self-confidence and feel encouraged by their boss to learn and grow. They frequently feel empowered to take chances and move on, and often they do just that.
Second, the decision to focus on only one data point excluded an attack on all the other reasons people leave their jobs. We didn’t look at job flexibility, compensation (people do quit because of money, no matter what those pesky HR people tell you), employee involvement in decisions, work hours or any number of other things. We invested most of our resources in training leaders.
Finally, we worked on the premise that you can get results by training leaders. Over the years I’ve seen a large number of bad leaders go through training, and the output is mostly trained bad leaders. So we worked with another assumption, which turned out to be bad data – that a couple of weeks of leadership training per year will actually drive behavioral change. I’m not saying it can’t, but ours didn’t.
Our large training effort had no impact other than to consume a lot of cash, which wasn’t surprising. A lot of very good work was done chasing a problem that wasn’t real. It’s hard to challenge these “true” stories that aren’t actually true. And although sometimes, for the sake of your career, you have to play along and live to fight another day, I encourage you not to accept commonly held assumptions just because they sound cool.
Convincing Ourselves That Flawed Processes Are In Control
I’ve worked for a number of corporations over the years that decided to invest in developing the perfect measures for processes that can’t be accurately measured. In Lean we call this the waste of over-processing. I have observed lots of these, but let’s take one very common process – performance assessment.
Many businesses decide to develop the perfect performance assessment, performance review or performance appraisal process. The process names have changed over the years for the same reason I suppose that convicted felons frequently change their names upon release from prison – to hide from their past misdeeds.
The only clear, valid data point concerning the performance evaluation process is that managers and workers universally feel these processes suck. No one likes them, they consume massive amounts of resources and there is no evidence in behavioral science research that performance can be measured reliably.
People have asked me over the course of my career for ideas on how they could improve their performance. I’ve told them one of the best ways to improve your performance is to change bosses. You can do exactly the same work in the same way, and one of your managers will hate it while another will feel you are the employee of the year. I was in a job where one of my managers felt I had no process skills and just made up crap as I went. I got a new boss who felt I was creative and a maverick who really knew how to work the system to get results. I was doing the same work for both of them; they saw my results very differently. It’s because real performance can’t be measured objectively.
Why can’t we measure work performance? Because there are too many sources of error we can’t control. Recency error, halo error, lack of inter-rater reliability, external events, rater bias and other factors all combine to make the performance evaluation process pretty crappy from a measurement – and therefore fairness – point of view. Sure, if you want to measure things like whether someone ever shows up for work or sleeps on the job, you can probably capture that, but fine differentiation among reasonably hard-working people can’t be done.
Many organizations have some form of merit pay that will force the use of some kind of performance assessment process, but it won’t be objective, fair and correct. Recognize that you can’t measure this, and if you have some role in the design or implementation of the process, focus your efforts on making it consume as few resources as possible. Simplify and automate what you can, focus on the interests and goals of your employees and call it a day. Job performance measurement is just one example; there are many other processes that cannot be accurately measured. Focus your efforts on avoiding the waste of over-processing.
Improvement That Really Isn’t
Here’s the situation. I’m at a tailgate party and we have a massive cooler full of many varieties of beer, with 100 bottles on ice. I decide to get a beer for myself, and four of my lazy companions ask me to grab them one since I’m up. I open the cooler and, without looking, pull out five bottles. Three of the bottles are Corona, one is a Smithwicks and one is a Rolling Rock. What’s the chance that three out of five, or 60 percent, of the bottles in the cooler are Coronas, just like the sample I pulled? This is a question of sampling and margin of error. I think we’d all agree that none of us would want to put a lot of money on a bet that 60 of the 100 beers in that cooler are Coronas. If I pulled 50 beers and 30 of them were Corona, then we’d feel better about the bet.
This is the concept of sampling: making assumptions about the bigger population based on the sample. The reason to discuss this is that so many times in my life I have been in meetings where we were discussing changes in data that I’m pretty sure were within the margin of error. The margin of error is a range indicating how sure you can be that the sample looks like the population. The margin of error in my beer sample of five, by the way, is about 43 percent.
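If you want to check that figure yourself, here’s a minimal sketch in Python of the standard margin-of-error formula for a sampled proportion. (The normal approximation is shaky with a sample of only five, but it reproduces the ballpark number above.)

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sampled proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Five-beer sample: 3 of 5 were Corona, so p_hat = 0.6.
print(f"n = 5:  +/- {margin_of_error(0.6, 5):.0%}")    # about 43%
# Pull 50 beers instead and the estimate tightens considerably.
print(f"n = 50: +/- {margin_of_error(0.6, 50):.0%}")   # about 14%
```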
I’ve been in meetings where we discussed a 5 percent drop in the net promoter score or employee satisfaction score and talked for hours about why it may have occurred. Conversely, I’ve seen similar events where we were all patting ourselves on the back for improving. I have a hunch that a fair number of these conversations have been about data that is within the sampling margin of error. In other words, there was actually no drop or improvement in results; we dedicated significant time and consumed valuable resources talking about what was nothing. We were chasing facts that weren’t real, like corporate cats chasing invisible laser pointer images. Anytime you begin a discussion of survey data, start with the margin of error.
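One quick sanity check for those meetings is to compare the observed change against the margin of error of the difference between the two surveys. The sketch below uses invented survey sizes and percentages just to show the mechanics.

```python
import math

def diff_margin(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for the difference of two sampled proportions."""
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical quarterly surveys of 150 employees each:
# 70% favorable last quarter, 65% favorable this quarter -- the dreaded "5 percent drop."
drop = 0.70 - 0.65
moe = diff_margin(0.70, 150, 0.65, 150)
print(f"Observed drop: {drop:.0%}, margin of error: +/- {moe:.0%}")
print("Probably just noise" if drop < moe else "Probably a real change")
```

With samples that size, a 5 percent swing sits well inside a roughly 11 percent margin of error – exactly the kind of “change” we spent hours explaining.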
Can We Work Santa Into the Summary?
There are other types of process control errors you may encounter, like multiple variables affecting outcomes (someday we can discuss the joys of multivariate regression analysis), sample bias, manipulation of output presentation and scaling and, of course, outright fudging of facts. This isn’t meant to be an all-inclusive guide, but rather a suggestion to be more inquisitive when presented with data. We waste so much time and money chasing bad facts, and my hope is that this gives you a little encouragement to challenge assumptions rather than accept them at face value.
I know that many of you work for large corporations and often feel powerless to effect meaningful change. When I was young and working in a large firm, I often felt like one of Santa’s elves; I toiled away all year in anonymity while some old guy in an expensive suit got all the recognition at year’s end. I challenged the process many times and was shut down, but every once in a while I changed the direction, and that made all the other attempts worthwhile. I hope you keep challenging.