Saturday 30 March 2019

Basic Sampling Strategies: Sample vs. Population Data


Information is not readily found at a bargain price. Gathering it is costly in terms of salaries, expenses and time. Taking samples of information can help ease these costs because it is often impractical to collect all the data. Sound conclusions can often be drawn from a relatively small amount of data; therefore, sampling is a more efficient way to collect data. Using a sample to draw conclusions is known as statistical inference. Making inferences is a fundamental aspect of statistical thinking.

Selecting the Most Appropriate Sampling Strategy


There are four primary sampling strategies:

◈ Random sampling
◈ Stratified random sampling
◈ Systematic sampling
◈ Rational subgrouping

Before determining which strategy will work best, the analyst must determine what type of study is being conducted. There are normally two types of studies: population and process. With a population study, the analyst is interested in estimating or describing some characteristic of the population (inferential statistics).

With a process study, the analyst is interested in predicting a process characteristic or change over time. It is important to make the distinction for proper selection of a sampling strategy. The “I Love Lucy” television show’s “Candy Factory” episode can be used to illustrate the difference. For example, a population study, using samples, would seek to determine the average weight of the entire daily run of candies. A process study would seek to know whether the weight was changing over the day.

Random Sampling


Random samples are used in population sampling situations when reviewing historical or batch data. The key to random sampling is that each unit in the population has an equal probability of being selected in the sample. Using random sampling protects against bias being introduced in the sampling process, and hence, it helps in obtaining a representative sample.

In general, random samples are taken by assigning a number to each unit in the population and using a random number table or Minitab to generate the sample list. Absent knowledge about the factors for stratification for a population, a random sample is a useful first step in obtaining samples.

For example, an improvement team in a human resources department wanted an accurate estimate of what proportion of employees had completed a personal development plan and reviewed it with their managers. The team used its database to obtain a list of all associates. Each associate on the list was assigned a number. Statistical software was used to generate a list of numbers to be sampled, and an estimate was made from the sample.
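A hedged sketch of that selection step is shown below (it is not from the article, and the statistical software used there could be Minitab or anything else); the associate IDs and sample size are hypothetical placeholders.

```python
import random

# Hypothetical population: every associate ID pulled from the HR database.
associates = [f"EMP{n:04d}" for n in range(1, 1201)]

random.seed(42)  # fixed seed only so the sketch is repeatable
# Simple random sample: every associate has an equal chance of being selected.
sample = random.sample(associates, k=100)

# The proportion with a completed development plan would then be estimated
# from the surveyed sample, e.g.:
#   proportion = sum(has_plan[emp] for emp in sample) / len(sample)
print(sample[:5])
```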

Stratified Random Sampling


Like random samples, stratified random samples are used in population sampling situations when reviewing historical or batch data. Stratified random sampling is used when the population has different groups (strata) and the analyst needs to ensure that those groups are fairly represented in the sample. In stratified random sampling, independent samples are drawn from each group. The size of each sample is proportional to the relative size of the group.

For example, the manager of a lending business wanted to estimate the average cycle time for a loan application process. She knows there are three types (strata) of loans (large, medium and small). Therefore, she wanted the sample to have the same proportion of large, medium and small loans as the population. She first separated the loan population data into three groups and then pulled a random sample from each group.
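As a rough illustration only (the loan counts below are hypothetical), proportional stratified sampling can be sketched like this:

```python
import random

# Hypothetical loan records grouped into the three strata.
loans = {
    "large":  [f"L{i}" for i in range(200)],
    "medium": [f"M{i}" for i in range(500)],
    "small":  [f"S{i}" for i in range(1300)],
}

total = sum(len(records) for records in loans.values())
sample_size = 100
random.seed(1)

stratified_sample = []
for stratum, records in loans.items():
    # Each stratum contributes in proportion to its share of the population.
    k = round(sample_size * len(records) / total)
    stratified_sample.extend(random.sample(records, k))

print(len(stratified_sample))  # about 100 records, split roughly 10/25/65
```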

Systematic Sampling


Systematic sampling is typically used in process sampling situations when data is collected in real time during process operation. Unlike in population sampling, a sampling frequency must be selected. It also can be used for a population study if care is taken that the frequency is not biased.

Systematic sampling involves taking samples according to some systematic rule – e.g., every fourth unit, the first five units every hour, etc. One danger of using systematic sampling is that the systematic rule may match some underlying structure and bias the sample.

For example, the manager of a billing center is using systematic sampling to monitor processing rates. At random times around each hour, five consecutive bills are selected and the processing time is measured.
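A minimal sketch of that systematic rule, with made-up processing times and an assumed volume of 60 bills per hour:

```python
import random

random.seed(7)
# Hypothetical stream of bill processing times (minutes), one value per bill.
bills = [round(random.uniform(2, 15), 1) for _ in range(480)]
bills_per_hour = 60

samples = []
for hour_start in range(0, len(bills), bills_per_hour):
    # At a random point within the hour, take five consecutive bills.
    start = hour_start + random.randint(0, bills_per_hour - 5)
    samples.append(bills[start:start + 5])

print(samples[0])  # the five processing times sampled in the first hour
```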

Rational Subgrouping


Rational subgrouping is the process of putting measurements into meaningful groups to better understand the important sources of variation. Rational subgrouping is typically used in process sampling situations when data is collected in real time during process operations. It involves grouping measurements produced under similar conditions, sometimes called short-term variation. This type of grouping assists in understanding the sources of variation between subgroups, sometimes called long-term variation.

The goal should be to minimize the chance of special causes of variation within each subgroup and maximize the chance of special causes appearing between subgroups. Subgrouping over time is the most common approach; subgrouping can also be done by other suspected sources of variation (e.g., location, customer, supplier, etc.).

For example, an equipment leasing business was trying to improve equipment turnaround time. They selected five samples per day from each of three processing centers. Each processing center was formed into a subgroup.

When using subgrouping, form subgroups with items produced under similar conditions. To ensure items in a subgroup were produced under similar conditions, select items produced close together in time.

Determining Sample Size


This article focused on basic sampling strategies. An analyst must determine which strategy applies to a particular situation before determining how much data is required for the sample. Depending on the question the analyst wants to answer, the amount of sample data needed changes. The analyst should collect enough baseline data to capture an entire iteration (or cycle) of the process.

An iteration should account for the different types of variation seen within the process, such as cycles, shifts, seasons, trends, product types, volume ranges, cycle time ranges, demographic mixes, etc. If historical data is not available, a data collection plan should be instituted to collect the appropriate data.

Factors affecting sample size include:

◈ How confident (accurate) the analyst wants to be that the sample will provide a good estimate of the true population mean. The more confidence required, the greater the sample size needed.
◈ How close (precision) to the “truth” the analyst wants to be. For more precision, a greater sample size is needed.
◈ How much variation exists or is estimated to exist in the population. If there is more variation, a greater sample size is needed.

Sample size calculators are available to make the determination of sample size much easier; it is best, however, that an analyst consults with a Master Black Belt and/or Black Belt coach until he or she is comfortable with determining sample size.
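For the common case of estimating a population mean, the textbook sample size formula is n = (z·σ/E)², where z reflects the desired confidence, σ the estimated variation and E the desired precision. A minimal sketch (the numbers are hypothetical) follows:

```python
import math

def sample_size_for_mean(sigma, margin_of_error, z=1.96):
    """Approximate sample size for estimating a population mean.

    sigma           -- estimated population standard deviation
    margin_of_error -- how close to the true mean the estimate should be
    z               -- z-score for the desired confidence (1.96 ~ 95 percent)
    """
    return math.ceil((z * sigma / margin_of_error) ** 2)

# Hypothetical example: sigma of 12 minutes, desired precision of +/- 2 minutes.
print(sample_size_for_mean(sigma=12, margin_of_error=2))  # 139
```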

Friday 29 March 2019

Implementation of Six Sigma Priority Matrixes and a Tollgate Process

Working for a large multinational corporation has given me insight into, and sympathy for, the hearts and minds of department and cost center leaders. Each year they are required by company leadership to find millions of dollars using Six Sigma productivity initiatives – a subtle way of saying “savings.” Some people volunteer, but many are volunteered for the initiatives, without guidance or direction; this challenge creates either panic or complacency. Either way, these intrepid individuals strike out to save millions of dollars.


Different departments will rush to their senior managers claiming that they have found a way to save millions of dollars by changing internal processes. Adoration and praise are heaped upon the company’s white knights.

Unfortunately, productivity measures sometimes end up in direct conflict with one another. Depending on the business structure each group may be told to proceed, even though their initiatives are contrary to the overall health of the company.

Conflicting Project Approvals


A product supply manager determines that producing in only a limited number of locations would cut warehousing, inventory and outside storage costs in half (saving approximately $155M). Excellent! But, wait. . . The director of transportation has determined that multiple production locations would save $200M. Even better! Now we’ll save $355M!

We can’t have production in limited locations to save inventory and warehousing costs, however, while also having production at all locations to reduce transportation costs! These conflicting analyses led me to volunteer to be the gatekeeper for all Six Sigma projects related to production supply/logistics.

I convinced the senior management team to create a single process improvement/Six Sigma body. The role of this group was to evaluate each project in relation to the total company – a truly holistic view. I became the (unenviable) financial gatekeeper, ensuring that the analysis provided by the various managers was accurate and would not simply shuffle the costs to another cost center; I did this by reviewing the net savings to the entire company.

Initially, the holistic results or net savings, when compared to the original Six Sigma projects, were reduced from millions of dollars in savings down to thousands. In some cases the projects were totally scrapped.

Prioritization Tool


I introduced two new tools to the company. The first was a mechanism to determine which projects should be attempted and in which order. I took each project and separated it into three components: 1) determining the ease or difficulty of implementation, 2) available human capital (Was there sufficient time to work on the project?) and 3) the net financial impact. (See table below.) The project with the highest score was the first Six Sigma project we tackled, then the project with the next highest score, and so on. This helped keep the company focused on the most significant projects in the pipeline. In a perfect world, the next year we would keep going down the list to the next projects until eventually all of the projects were completed.

After poking, prodding, vetting and rewriting, the prioritization tool was approved. The tool worked better than expected; it was used to identify and rank the programs that should be prioritized. By happy coincidence, three by-products of this tool were identified: 1) it identified individuals who were selected to be on multiple projects when it was impossible for them to participate on all of the projects, 2) project managers had to use realistic date/time assumptions – not a generic “TBD” and 3) finance was required to provide input on the projects, reducing the chance of project leaders using imaginary or unrealistic numbers to make their projects look more appealing.

The following chart is an example of this Six Sigma tool.

Project | Ease of Implementation (1: easy to 5: difficult) | Time to Implement (1: resources available to 5: stretched resources) | Financial Impact (1: <$100K; 2: $101K-$300K; 3: $301K-$500K; 4: $501K-$750K; 5: >$750K) | Total Project Rank
A | ... | ... | ... | 15
B | ... | ... | ... | 10
C | ... | ... | ... | ...

Project Rank | Priority Description
13-15 | High priority
10-12 | Medium priority
7-9 | Valuable but not essential
4-6 | Low value and not essential
1-4 | Low priority
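As a hedged illustration only (not part of the original tool), the scoring and ranking logic described above can be sketched in a few lines of Python. The factor scores below are hypothetical placeholders, except that the totals for projects A and B mirror the 15 and 10 shown in the example table.

```python
def total_score(ease, time_to_implement, financial_impact):
    # Each factor is scored 1-5 per the table above, so totals range from 3 to 15.
    return ease + time_to_implement + financial_impact

def priority_band(total):
    if total >= 13:
        return "High priority"
    if total >= 10:
        return "Medium priority"
    if total >= 7:
        return "Valuable but not essential"
    if total >= 4:
        return "Low value and not essential"
    return "Low priority"

# Hypothetical factor scores: (ease, time to implement, financial impact).
projects = {"A": (5, 5, 5), "B": (3, 3, 4), "C": (1, 2, 1)}

# Work the highest-scoring project first, as described in the article.
for name, scores in sorted(projects.items(),
                           key=lambda kv: total_score(*kv[1]), reverse=True):
    total = total_score(*scores)
    print(name, total, priority_band(total))
```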

Visual Tollgate


I also decided to implement a visual Six Sigma tollgate. This tollgate process required the project’s Champion, sponsor and key resources to provide an update on their project twice a month. I used simple colors: green, yellow and red. If the project was green, the project was on course; yellow meant to keep a close eye on the project; and red indicated that the project was falling behind or needed to be revamped.

For example, if the project was projected to save $4M in a calendar year, but as of August the project had only saved $1M, it would be virtually impossible to capture the remaining $3M in the last quarter of the year. At this point, the project was marked red, the financial forecast was adjusted downward and additional resources could be deployed to the project.

On the Same Page


Between the implementation of the tollgate and the prioritization tool, the company was able to eliminate several million dollars of erroneous and duplicate cost savings. This holistic view allowed the business to compare its actual operational results with those of a realistic baseline in order to determine operational and financial performance. Based upon these metrics the company has identified some problem areas, but with its focus on Six Sigma, those weaknesses will soon be eliminated, leaving only opportunities and a burning desire to become a market leader.

Thursday 28 March 2019

Understanding Process Variation

It is well established that there exist eight dimensions of quality:

1. Conformance
2. Performance
3. Features
4. Reliability
5. Durability
6. Serviceability
7. Aesthetics
8. Perceived quality

Each dimension can be explicitly defined and is distinct from the other dimensions of quality. A customer may rate your service or product high in conformance but low in reliability. Or they may view two dimensions as working in conjunction with each other, such as durability and reliability.

This article will discuss the dimension of conformance and how process variation should be interpreted. Process variation is important in the Six Sigma methodology because the customer is always evaluating our services, products and processes to determine how well they are meeting their critical-to-quality requirements (CTQs); in other words, how well they conform to the standards.

Understanding Conformance


Conformance can simply be defined as the degree to which your service or product meets the CTQs and predefined standards. For the purpose of this article, it should be noted that your organization’s services and products are a function of your internal processes, as well as your supplier’s processes. (We know that everything in business is a process, right?)

Here are a few examples:

1. You manufacture tires and the tread depth needs to be 5/8 inch plus or minus 0.05 inch.
2. You approve loans and you promise a response to the customer within 24 business hours of receipt.
3. You write code and your manager expects less than five bugs found over the life of the product per thousand lines of code written.
4. You process invoices for healthcare services and your customers expect zero errors on their bills.

A simple way to teach the concept of how well your service or product conforms to the CTQs is with a picture of a target. A target, like those used in archery or shooting, has a series of concentric circles that alternate color. In the center of the target is the bullseye. When services or products are developed by your organization, the bullseye is defined by CTQs, the parts are defined by dimensional standards, and the materials are defined by purity requirements. As we see from the four examples above, the conformance CTQs usually involve a target dimension (the exact center of the target), as well as a permissible range of variation (center yellow area).


Figure 1: Targeting Process Variation

In Figure 1, three pictures help explain the variation in a process. The picture on the left displays a process that covers the entire target. While all the bullets appear to have hit the target, very few are in the bullseye. This is an example of a process that is centered around the target but seldom meets the CTQs of the customer.

The middle picture in Figure 1 displays a process that is tightly grouped (all the bullets hit the target in close proximity to each other) but is well off the bullseye. In this picture – like in the first picture – almost every service or product produced fails to meet the customer CTQs.

The far right picture in Figure 1 displays a process that is well grouped on the target, and all the bullets are within the bullseye. This case displays a process that is centered and is within the tolerance of the customer CTQs. Because this definition of conformance defines “good quality” with all of the bullets landing within the bullseye tolerance band, there is little interest in whether the bullets are exactly centered. For the most part, variation (or dispersion) within the CTQ specification limits is not an issue for the customer.

Relating the Bullseye to Frequency Curves


In the real world, we seldom view our processes as bullseyes (unless you work at a shooting range). So how can you determine if your process is scattered around the target, grouped well but off the bullseye, or grouped well on the bullseye? We can display our data in frequency distributions showing the number (percentage) of our process outputs having the indicated dimensions.


Figure 2: Targeting Process Variation With the Process Capability Ratio

One can easily see the direct relationship of Figure 2 to Figure 1. In Figure 2, the far left picture displays wide variation that is centered on the target. The middle picture shows little variation, but off target. And the far right picture displays little variation centered on the target. Shaded areas falling between the specification limits indicate process output dimensions meeting specifications; shaded areas falling either to the left of the lower specification limit or to the right of the upper specification limit indicate items falling outside specification limits.
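As a rough, hypothetical sketch (not part of the original article), the same idea can be checked numerically: given a sample of measurements and the specification limits, estimate how much of the output falls within the limits and how well centered the process is. The tread-depth target of 5/8 inch plus or minus 0.05 inch from the examples above is reused; the measurements themselves are simulated.

```python
import random
import statistics

random.seed(3)
# Simulated tread-depth measurements (inches); spec is 0.625 +/- 0.05 inch.
lsl, usl = 0.575, 0.675
data = [random.gauss(0.625, 0.015) for _ in range(200)]

mean = statistics.mean(data)
stdev = statistics.stdev(data)

# Share of output inside the specification limits (conformance).
within_spec = sum(lsl <= x <= usl for x in data) / len(data)

# Cpk compares both spread and centering to the specification limits.
cpk = min(usl - mean, mean - lsl) / (3 * stdev)

print(f"mean={mean:.4f}  stdev={stdev:.4f}  within spec={within_spec:.1%}  Cpk={cpk:.2f}")
```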

Interpreting Process Variation


Most Black Belts have little time to completely understand the variation of their process before they move into the Improve phase of DMAIC (Define, Measure, Analyze, Improve, Control). For instance, do the critical X’s of your process have a larger impact on variation (spread) or central tendency (centering)? Segmenting or subgrouping the data can help you find the correct critical X. Hypothesis testing will help you prove that it is so.
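A minimal, hypothetical sketch of that kind of hypothesis test is shown below, assuming SciPy is available. The data is simulated and the suspected critical X (two machines) is a placeholder; the centering question is tested with a two-sample t-test and the spread question with Levene’s test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated cycle times split by a suspected critical X (e.g., machine 1 vs. 2).
machine_1 = rng.normal(loc=10.0, scale=1.0, size=50)
machine_2 = rng.normal(loc=10.8, scale=1.0, size=50)

# Does the X shift the center of the process?
_, p_center = stats.ttest_ind(machine_1, machine_2)

# Or does it mainly change the spread?
_, p_spread = stats.levene(machine_1, machine_2)

print(f"centering: p = {p_center:.3f}   spread: p = {p_spread:.3f}")
```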

Wednesday 27 March 2019

3 Benefits of Implementing ITSM via ITIL


While Information Technology Service Management (ITSM) is prevalent in larger organizations, other organizations are still looking at implementing ITSM and haven’t taken the plunge yet. Chances are good some of those same shops have created their own do-it-yourself (DIY) service delivery models over the years, and they don’t see the value in switching to a formal ITSM environment yet. After all, why change something that’s not broken?

In today’s post I’ll attempt to answer the question “Why bother implementing ITSM if what you have is working fine?” I’ll do this by highlighting three specific benefits most shops experience when implementing ITSM inside an Information Technology Infrastructure Library (ITIL) framework.

Three of the key benefits of implementing ITSM and ITIL are:

#1 – ITSM and ITIL provide a framework to continually deploy, improve, and retire services


The ITIL service lifecycle (shown in figure 1) provides a framework for creating, deploying, delivering, improving, and retiring IT services, through these five core areas.


Figure 1: The ITIL service lifecycle

◈ The ITIL Service Strategy core area helps you create a service portfolio that consists of the service pipeline, the service catalog, and retired services. It provides a past, present, and future view of what services an organization offers, has offered, or will offer to its customers. The service pipeline tracks services that have been proposed or are being developed, while the service catalog exposes all the available services that a customer can order now. Retired services are kept for reference and possible reactivation or redevelopment as pipeline services.

◈ The ITIL Service Design core area provides the roadmap for adding or changing services in the ITSM/ITIL environment.

◈ The ITIL Service Transition core area defines how IT services move from one state to another in the service portfolio, transitioning from pipeline to catalog to retirement.

◈ The ITIL Service Operation core area helps organizations ensure that their IT services are delivered effectively and efficiently, through incident management, problem management, and event management (more in a minute).

◈ The ITIL Continual Service Improvement (CSI) core area measures the effectiveness of IT services, and provides a template for how services are evaluated, improved, monitored, and corrected.

The ITIL service lifecycle helps you define and set up standard processes for how IT services are created, delivered, evaluated, improved, and ultimately retired. It provides industry standard practices and consistency for your IT service delivery and management, more so than a DIY service management solution that may have been created in a previous decade.

#2 – ITSM and ITIL help provide standard IT service delivery through a service catalog


The ITIL framework helps you develop a standardized framework for users to request and receive services. An ITIL service catalog provides a single point of contact for customers requesting a specific service or requesting help when an incident occurs. Shops without an ITSM framework typically have separate (frequently home-grown) interfaces for user adds, deletes, and changes; IT requests; and Help Desk tickets. Developing an ITIL service catalog and service desk for your customers provides a single, standardized URL for requesting and receiving IT services, cleaning up the myriad interfaces that a customer might have to navigate when they need something.

#3 – ITSM and ITIL provide a standardized framework for dealing with IT incidents, problems, and events


ITIL also helps provide consistency and defined roles when IT service delivery goes wrong, including:

◈ Incident management that responds to unplanned interruptions in IT services, where the goal is to restore the service as soon as possible. An example of an incident would be an application failure that corrupts an in-process order, where the fix is to correct the order so it can be processed.

◈ Problem management that identifies and corrects the underlying root causes (problems) causing multiple recurring incidents. A typical problem management situation might involve identifying why a customer ordering application occasionally crashes (causing incidents with order entry and fulfilment) and fixing the app.

◈ Event management for constantly monitoring IT services or configuration items that are changing from one state to another, determining which event changes are normal and which event changes are exceptions. A change of state can be normal (e.g., an IBM i system has entered backup mode at its usually scheduled time) or it can be an exception (e.g., an IBM i system did not exit from backup mode at its expected time). Since not every system event needs attention, event management filters each event, determines which events are exceptions, and initiates the appropriate actions and alerts for any exceptions it finds.

Applying the practices listed in the ITIL Service Operations core areas, organizations can set up processes to restore IT services that have been interrupted, repair root causes that result in multiple incidents, and initiate responses and alerts for IT services that need attention as they change state.
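As a purely illustrative sketch (not from the article or any ITIL tooling), the event-filtering idea described above – classify each monitored state change as normal or an exception, and act only on exceptions – might look something like this; the service names, states and rules are hypothetical.

```python
# Hypothetical rules: state changes that are considered normal when on schedule.
EXPECTED_STATES = {
    ("backup", "entered"): "normal",   # e.g., system enters backup mode on schedule
    ("backup", "exited"): "normal",
}

def classify(service, state, on_schedule):
    """Filter a single event: return 'normal' or 'exception'."""
    if not on_schedule:
        return "exception"             # e.g., backup mode did not exit on time
    return EXPECTED_STATES.get((service, state), "exception")

events = [
    ("backup", "entered", True),       # normal, no action needed
    ("backup", "exited", False),       # exception, initiate alert
]

for service, state, on_schedule in events:
    if classify(service, state, on_schedule) == "exception":
        print(f"ALERT: {service} {state} (off schedule)")  # appropriate action/alert
```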

The Big Three


To summarize, these are three of the biggest benefits an organization can realize when implementing an ITSM environment using ITIL.

1. A consistent framework for creating, delivering, evaluating, improving, and ultimately retiring IT services

2. Standardized IT service delivery through an organization-specific service catalog

3. Consistency and defined IT roles for dealing with IT services that have failed, fixing the root causes of persistent problems, and monitoring services and taking appropriate action when an exception occurs in a running service or configuration item

Tuesday 26 March 2019

Ideas for Using Lean Six Sigma in the Marine Corps

The U.S. Marine Corps’ continuous process improvement (CPI) effort is aimed at enhancing all aspects of support provided to operating forces to maximize combat readiness and war fighting capability. That means the Marine Corps is emphasizing that the CPI program is for every level and every element of the corps.


Here is what is being recommended to Marine Corps units:

Support Elements


Examine current practices and procedures, and identify appropriate areas where LSS could improve support to the war fighter. Review established processes that cycle regularly, can be easily measured and are in need of improvement but do not hold an obvious solution. Identify and train motivated candidates for LSS certification, apply DMAIC, and remember the sixth troop leading step – supervise. Educate marines and sailors about the benefits of CPI and LSS.

Operating Forces


Do not tune out the business-speak and methodology of CPI and LSS. The secretary of the Navy served successfully in high positions with corporate giants Northrop Grumman and TRW Systems, both of which boast robust Six Sigma programs, and he now leads the U.S. Navy and Marine Corps. Leaders, therefore, should not be surprised by the growing use of, and reliance on, business-friendly terms and methods. Learn about LSS and discover how it might help any unit better accomplish its mission. Apply LSS tenets as desired and able.

Headquarters, Marine Corps


Many marines do not like the LSS terminology when it mirrors USMC terms but carries significantly different meanings. LSS experts are identified by martial arts belt colors, titles that elicit a chuckle from those who have received similar designations via the Marine Corps Martial Arts Program (MCMAP). Similarly, the implementation of an LSS project is referred to as a “deployment,” something sure to make corporals returning from their third combat tour wince. And tenant units and war fighters are now routinely referred to as “customers” by base staff personnel, an obvious but probably unnecessary nod to the civilian sector. Just as former USMC commandant Gen. James Jones created a USMC-unique program when establishing MCMAP, meshing certain concepts and tenets of different martial arts disciplines into one tailored to the Marine Corps’ needs, so too should the Corps adopt the appropriate tenets of LSS rather than adopting the methodology wholesale and applying it as-is. Change the terminology and titles so they fit the warrior ethos – instead of “Green Belt,” use “LSS Level One Certified.”

LSS makes sense for certain parts of the Marine Corps, but not necessarily for all. Do not force this process onto units already struggling to keep pace with their extremely challenging operational tempo. LSS was never intended to replace the sound strategy and tactics of U.S. warriors. Rather, its utility lies in its ability to improve the support of those that are in the fight. Allow battalions and squadrons of the active operating forces to voluntarily participate in USMC business enterprise initiatives such as LSS.

DoD policy states that all organizations that generate cost savings and expense reductions from CPI efforts can retain those savings. Commanders need to ensure units successful with LSS efforts recoup any savings so they can further improve their support for marines and sailors. Remain open to suggestions for fleet-wide LSS implementation, responsive to developing improvements in the CPI Program itself, and have a long-term backup plan to continue LSS Program benefits.

All Hands


Understand that LSS is not TQM. It has proven benefits both in the corporate and military world, and it has the potential to make the Corps better. Research the process and explore where it might be beneficial to any unit and mission. LSS is ultimately about changing the culture – from what Vice Admiral Massenburg identified as the “culture of consumption” (spend it or lose it), into what David R. Clifton, director of Marine Corps Business Enterprise, calls a “culture of innovation.” That culture change must start at the top and be echoed throughout the leadership ranks. As Mr. Clifton told an audience at Marine Corps Base Hawaii, LSS is “not all about the tools; it is about the culture of always wanting to make things better.” This is basic Marine/Navy culture and is a philosophy that all marines and sailors can surely embrace.

Monday 25 March 2019

PRINCE2 Methodology & Advantages

Projects IN Controlled Environments, better known as PRINCE2, is a project management methodology that is applicable to all types of projects.


It evolved from an earlier method called Project Resource Organisation Management Planning Techniques (PROMPT II). In 1989 the Central Computer and Telecommunications Agency (CCTA) began using one of the PROMPT II methodologies for its information system project management as a UK Government standard, and later named it “PRojects IN Controlled Environments,” giving the acronym PRINCE2.

PRINCE2 METHODOLOGY


PRINCE2 is not like other framework structures; it can readily be tailored to a project management framework of any size or type. The PRINCE2 methodology comprises four integrated elements: 1. Core principles; 2. Themes; 3. Processes; and 4. Tailoring.
  • Core Principles: PRINCE2 defines seven principles, which are used to evaluate whether a project is being run well. Those principles are:
    • Continued business justification.
    • Learn from experience.
    • Define roles and responsibilities.
    • Manage by stages.
    • Manage by exception.
    • Focus on products.
    • Tailor to suit the project environment.
  • Themes: Aspects of project management that ought to be addressed continually throughout the project. PRINCE2 gives a good overview of how to address each of these aspects: Business Case, Organization, Quality, Plans, Risk, Change and Progress.
  • Processes: From the beginning to the end of a project, seven processes are used step by step to complete the project. Those processes are:
    • Starting Up a Project.
    • Initiating a Project.
    • Directing a Project.
    • Controlling a Stage.
    • Managing Product Delivery.
    • Managing a Stage Boundary.
    • Closing a Project.
  • Tailoring: Tailoring is about deciding how best to apply the method to a given project so as to strike a good balance between project control and administrative overhead. Tailoring reviews:
    • Terms and Language.
    • Management Products.
    • Roles and Responsibilities.

Advantages of PRINCE2 methodology


PRINCE2 has many benefits and capabilities, which are highlighted below.

Key Benefits:

◈ Delivery confidence.
◈ Speak the same language.
◈ Customer focused.
◈ Delivers reliability.
◈ Supports cross-functional working.
◈ Delivers what the business needs.
◈ Flexible control.
◈ Career development.
◈ Part of an integrated set.

Key Capabilities:

◈ Support business outcomes.
◈ Enable business change.
◈ Manage risk in line with business needs.
◈ Enhance the customer experience.
◈ Demonstrate value for money.
◈ Continually improve.

Pros and Cons of the PRINCE2 Methodology

PRINCE2 also has its own pros and cons. In fact, PRINCE2 has only one small con, and it is easy to manage.

Its Pros are:

◈ PRINCE2 provides a method for managing projects within a clearly defined framework.
◈ It describes what to do if the project does not develop as planned.
◈ In this method, every process is specified with its key inputs and outputs, and with specific objectives and activities to be completed.

And its Con is:

◈ PRINCE2 can occasionally be unsuitable for small projects, or for projects where requirements are expected to change, because of the work required in creating and maintaining documents, logs and records.

Saturday 23 March 2019

Improving Personal Health Using Data and Six Sigma

Four times a year, I donate blood at a local blood bank and receive readings on my cholesterol, blood pressure and pulse. Although donating blood does not improve a person’s health, the regular readings of the vital signs, along with analysis and actions stemming from this data, can be significant in the fight against heart disease, strokes and other medical problems.


Six Sigma practitioners, because of their knowledge of statistical analysis and their belief in continuous improvement, are at a particular advantage for using this health data to make positive life changes. For instance, practitioners can conduct regression analysis to show a correlation of the vital sign variables and their relationship to exercise and diet. Also, they can use a time-phasing process to find leading indicators. Six Sigma concepts and tools have already proven useful in practitioners’ professional lives – but the concepts and tools also can be beneficial in practitioners’ personal lives.

Collecting Data


Heart disease, related to high cholesterol or elevated blood pressure, can affect anyone – young, old, male, female. Uncontrollable factors, such as one’s heredity, as well as controllable factors (diet, weight, exercise, stress) influence a person’s cholesterol and blood pressure.

Cholesterol

After each blood donation, the San Diego Blood Bank (SDBB) provides data on my cholesterol and blood pressure via a secure website. Having this data available every three months helps me monitor and improve my health. High blood pressure is referred to as the silent killer because the symptoms may not be apparent. One in three Americans suffers from high blood pressure.

Figure 1 is a chart of my cholesterol. Although the National Heart Association recommends maintaining a cholesterol level of 200 (the red line) or below, my goal is to stay under 180 (the yellow line).


Figure 1: Ray’s Cholesterol as Reported By the SDBB

For me, exercise and healthy eating significantly lower my cholesterol levels. As illustrated in Figure 1, my July 2008 and October 2008 cholesterol levels were favorable. I recall that I was eating right and exercising in preparation for a Multiple Sclerosis bike ride held in October 2008. I remember starting this exercise and eating routine after my January 2008 physical and January blood donation results.

There were three periods in which I missed my goal of keeping my total cholesterol value below 180; these were periods preceded by stress, overeating and minimal exercise.

Blood Pressure

The SDBB also provides blood pressure readings, which I downloaded to create Figure 2. The red and orange lines are my blood pressure. My goal is to keep the systolic blood pressure under 120. Fortunately, I was born with good genes for low (good) blood pressure.


Figure 2: Ray’s Blood Pressure and Cholesterol as Reported By the SDBB

The data shows that cholesterol value could be a leading indicator of blood pressure for me, as illustrated by the dotted green lines. For example, my cholesterol peaked in Jan. 2008 (bad) and my blood pressure peaked (bad) three months later (April 2008). Likewise, my cholesterol was good in July 2008 and my blood pressure was good three months later (Oct. 2008).

Regression Analysis


After noticing the relationship between my cholesterol and blood pressure levels, I conducted a mathematical regression analysis, which produced the same results.

With the original data, a regression analysis on blood pressure (systolic) to cholesterol showed no correlation; the coefficient of correlation is near zero (0 is no correlation, weak is .25, moderate is .50, strong is .75, perfect is 1.0).

However, when blood pressure is shifted three months backward, the coefficient of correlation is .58 (moderate to strong). The coefficient of determination, or the proportion of the total variation in blood pressure attributed to the cholesterol reading, is 34 percent in this situation. These calculations can easily be produced in Excel using the data analysis add-in.
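A minimal sketch of that lag-and-correlate calculation is shown below for anyone who prefers Python over the Excel data analysis add-in. The quarterly readings are hypothetical placeholders, not the SDBB values from the charts.

```python
import numpy as np

# Hypothetical quarterly readings (one value per blood donation, ~3 months apart).
cholesterol = np.array([195, 170, 165, 185, 200, 175, 168, 182], dtype=float)
systolic    = np.array([118, 124, 112, 108, 121, 127, 115, 110], dtype=float)

def lagged_r(x, y, lag):
    """Correlate x with y shifted 'lag' readings later (one reading = ~3 months)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

r0 = lagged_r(cholesterol, systolic, 0)   # no shift
r1 = lagged_r(cholesterol, systolic, 1)   # blood pressure one quarter after cholesterol
print(f"r (no lag) = {r0:.2f}, r (3-month lag) = {r1:.2f}, R^2 (lagged) = {r1**2:.2f}")
```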

Although this shows a mathematical correlation of blood pressure to cholesterol, correlation does not prove the causal relationship. Both blood pressure and cholesterol levels could be independently influenced by my exercise and eating routine. The mathematical exercise does reinforce that something happened as both variables changed, somewhat in tandem with the time shift. The analysis could also imply, at least with this data, the predictive nature of blood pressure by cholesterol; that is, cholesterol as a leading indicator.

Leading Indicators


The other lesson is that of leading indicators. When presented with data, such as that in Figure 2, practitioners should look for the phasing relationship, as indicated by the green lines. Many times there is correlation of the variables if the timeframe of one or more variables is shifted. There are mathematical means of determining a phasing relationship, but for many applications, simple charting and shifting data up or down in a spreadsheet is sufficient. More data and examples are needed to make a conclusion on a leading indicator and the causal relationship.

Pulse is another good health indicator. Years ago I read that a heart will have only so many beats in a lifetime. When people exercise vigorously, their hearts beat faster per minute. However, the exercise strengthens the heart muscle, such that when they are not exercising (which is most of the time), the heart beats slower, thus providing a long heart life. As illustrated in Figure 3, my improved (lower) pulse rate correlates to those times when I exercise more. My goal is to keep my pulse in the 60 beats per minute range.


Figure 3: Ray’s Pulse as Reported By the SDBB

Personal Takeaways


At age 61, my key takeaway is to eat healthy, avoid refined sugars and minimize fat. Avoiding salt and minimizing alcohol also are good practices. Minimizing fast food and processed food helps in this endeavor. Walking, hiking, and cycling are my favorite types of exercise and, as correlated to the charts, contribute to better health conditions.

Another important factor that goes with the healthy eating and exercises is weight control, which, for me, is a struggle. There are good medications that reduce cholesterol and blood pressure; however, my goal is try to avoid these as long as possible due to the dependency, cost and side effects. Fortunately, I have been blessed with good health.

Each year I get a physical from a primary care physician. There are a number of ways to manage heart disease once it is diagnosed; however, frequent monitoring, trending and taking proactive measures for disease prevention are limited. Once a year measurements are limited in terms of trending, providing body knowledge and creating focus. The quarterly vital sign updates I receive after donating blood help to fill the gaps.

Six Sigma Advantage


Every person is different. There is a tendency for people to compare their results to others or to averages. Or, people sometimes conclude that they can’t change their health condition. Six Sigma practitioners, however, know that this is not the right attitude. By approaching problems in an objective way, applying systems thinking and excluding the personalization of problems, it is possible to make continuous improvements. Data, although good, does not, by itself, change things. Decisions must be made based on both emotional and logical factors.

Behavioral change is difficult. Dr. W. Edwards Deming, the statistician and quality guru, developed 14 principles on quality and change to help with the people and emotional challenges. The applicable Deming principles include:

◈ No. 1: Have a constancy of purpose.
◈ No. 14: Take action on the transformation.
◈ No. 5: Improve the system constantly.
◈ No. 11: Eliminate numerical quotas.

The last principle about numerical quotas may seem contradictory, as Dr. Deming used so many charts and graphs. But his emphasis was on measuring and improving the process, and not judging the person. Healthcare is personal; cholesterol, blood pressure and weight numbers are your numbers. However, when Six Sigma practitioners approach the change as a process and system, it can help them to address the problem more directly and objectively.

“You can’t improve what you can’t measure” is a common concept in business quality management and Six Sigma. Charting data, monitoring trends, finding correlation and taking action (the Plan, Do, Check, Act cycle) can also be applied to personal health. Practitioners should challenge themselves to look for opportunities to apply Six Sigma concepts, grow the body of knowledge and share with others. And, if you are able to donate blood, check out the local blood bank. There is always a pressing need for blood and the personal health data every three months can be beneficial to your health.

Note that the information provided is based on personal experience and data. This article is not intended to be a substitute for professional medical advice.

Friday 22 March 2019

Leveraging Lean Six Sigma Tools for Strategic Planning

The Northern Regional Medical Command (NRMC) Office of the Chief Medical Information Officer (CMIO) is a team of high-caliber military and civilian leaders tasked with deploying and supporting the use of information technology across 33 hospitals and clinics up and down the Eastern seaboard and extending as far west as the Mississippi River. This monumental task requires a sound strategic plan. Beyond simply building a plan, a process to facilitate execution of that strategic plan while simultaneously building workforce capabilities was undertaken.


NRMC CMIO began development of a strategic plan in October 2013. They conducted a strategic planning off-site incorporating a project identification selection workshop to brainstorm initiatives/projects to close identified gaps in meeting strategic objectives. Lean Six Sigma tools were used throughout the strategic planning activities resulting in value-added outputs.

Preparation for the off-site included staff members reading key documents, including: The Army Medicine 2020 Campaign Plan, The Information Management/Information Technology Campaign Plan 2020, Army Medicine CIO Themes to IMIT Strategic Metrics Matrix, IMIT Key Performance Indicators, the NRMC Balanced Scorecard, and the CMIO strategic documents from the previous strategic planning session. The purpose of the pre-work was to set the stage to cascade CMIO strategy from higher headquarters’ strategic plans and ensure alignment with their intent.

During the off-site session, staff reviewed and made minor updates to the mission, vision and goals. The focus then shifted to conducting brainstorming activities to glean the strengths and opportunities of the NRMC’s military treatment facilities and the region as a whole. Brainstormed ideas were then grouped together in an affinity diagram. This information was captured and the session came to a close.

Multi-voting/nominal group technique was conducted electronically by each staff member to vote on the importance of each initiative. The mean was calculated for each initiative and a benefits and efforts matrix was created documenting the results. During follow-up meetings team leads were assigned for each initiative. Each lead developed SIPOCs and/or quad charts with charter information for each initiative. Priorities were reassessed based on a clearer definition and understanding of each initiative. Initiatives were then classified as just-do-its (four) or DMAIC (Define, Measure, Analyze, Improve, Control) projects (three). Team leads were tasked to complete A3 thinking model templates for the just-do-its.


Four staff members were identified to become Black Belt candidates to work the DMAIC projects and slated to attend training in April and June 2014. A certified Black Belt attended Master Black Belt training and was assigned the responsibility of mentoring the Black Belt candidates with oversight from the CMIO certified Master Black Belt. Training and certification of staff gives the CMIO the capability to identify future continuous process improvement opportunities using proven methodologies to make improvements and monitor for sustainment.

Microsoft InfoPath was used to create the CMIO project charter template to capture the critical elements required in a charter and signature capability to have documented approval of the project by the project sponsor; Belt candidate acknowledgement of the project intent; and the Belt candidate’s supervisor agreeing to dedicate time for the Belt candidate to complete the project.

A Microsoft SharePoint site was created to capture all identified projects, resulting in a process for team leads to document and provide real-time updates and facilitate the briefing process at CMIO and CIO meetings. This site also provides a snapshot of the status of a project following the red, amber, green methodology, which allows the project sponsor to identify projects where higher-level assistance may be needed to move the project forward.

A continuous process improvement SOP was developed defining the CMIO intent to tie into the strategic goals. The SOP outlines the steps of the strategic planning process, identification/clarification of staff roles and responsibilities (RACI chart), project identification, charter development, project approval, scope control and reporting mechanisms that include quarterly internal performance reviews.

A survey was given to the staff to assess the value of conducting strategic planning sessions in this manner. The survey results showed staff satisfaction with the process, along with recommended improvements.

This model for strategic planning is easily replicated and will be used to facilitate the NRMC HQS ACSIM Strategic Planning Session.

Wednesday 20 March 2019

Proper Data Granularity Allows for Stronger Analysis

Can we trust the data? This is the fundamental question behind measurement system analysis (MSA). The question can come up in any data-based project. Shortcomings in a measurement system’s accuracy (bias, linearity, stability or correlation) and precision (resolution, repeatability or reproducibility) can obstruct analysis purposes.

One often-overlooked aspect of resolution is data granularity, or the measurement increment. Compared to other aspects of MSA, granularity is straightforward to identify and relatively easy to fix. However, ignoring the importance of granularity may unnecessarily limit root cause analysis and the ability to manage and continually improve a process.

Granularity Makes a Difference


Simulated data from an imaginary delivery process is used here to illustrate the role of granularity. An exponential distribution is assumed for two suppliers, A and B, where the scale of the distribution is 70 and 90 hours, respectively. For supplier A, 20 percent of its deliveries (one day out of five) see a normally distributed additional delay of 100 hours, with a standard deviation of 20 hours. This could be due to delay accumulated in deliveries initiated the last day of the week.

To understand the impact of granularity, 250 data points from each of suppliers A and B are rounded up to hourly and daily resolution. Practitioners then complete the analysis by:

1. Visualizing the data
2. Identifying the data’s underlying distributions
3. Testing the suppliers’ influence on spread and central tendency
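The simulated data described above can be reproduced with a short sketch like the following, assuming NumPy is available; the random seed and sample handling are arbitrary choices, not part of the original study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250

# Lead times in hours: exponential with scale 70 (supplier A) and 90 (supplier B).
a = rng.exponential(scale=70, size=n)
b = rng.exponential(scale=90, size=n)

# 20 percent of supplier A's deliveries pick up an extra, normally distributed
# delay (mean 100 h, standard deviation 20 h) -- the "Friday effect".
friday = rng.random(n) < 0.20
a[friday] += rng.normal(loc=100, scale=20, size=friday.sum())

# Round up to the two measurement increments being compared.
a_hourly, a_daily = np.ceil(a), np.ceil(a / 24)
b_hourly, b_daily = np.ceil(b), np.ceil(b / 24)

print(a_hourly[:5], a_daily[:5])
```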

Visualizing the Data


Probability plots are far more capable than histograms of revealing data granularity. For this purpose, normal probability plots also can be used for non-normal data, as in this example. The histograms (Figure 1) and probability plots (Figure 2) of the cumulated data from both suppliers A and B with hourly resolution (above) and daily resolution (below) are shown here. Notice that the additional delay in 20 percent of the data is barely visible in the histogram of the hourly resolution data and not visible in the data with daily resolution.


Figure 1: Histograms of Supplier Lead Times by Hourly and Daily Resolution


Figure 2: Probability Plot of Supplier Lead Times by Hourly and Daily Resolution

Identifying Individual Distributions


Because it is known how the data has been produced, the question is whether the differences between suppliers A and B can be revealed. A powerful method to start the data analysis is to identify the underlying distributions. In this situation, the method yields results only for the data with hourly resolution (Figure 3): unlike the data from supplier B, the data from supplier A does not fit to any distribution. Daily resolution does not offer insight into the shape of the distribution (Figure 4).


Figure 3: Goodness of Fit Test for Suppliers A and B by Hourly Resolution


Figure 4: Goodness of Fit Test for Suppliers A and B by Daily Resolution

Thus, viewing data with appropriate granularity (in this example, hourly resolution appears to be sufficient) allows practitioners to find differences between the two data sets. Improvement teams can then conduct further investigation by asking “why?” and identifying special causes for the differences, such as the additional delay of shipments from supplier A one day a week.

When dealing with data with inappropriate granularity (in this example, daily resolution is of too low a granularity), neither set of data can be described by any distribution and special causes remain invisible (Table 1).

Table 1: Summary Results of Individual Distribution Identification
                  | Supplier A                                  | Supplier B
Hourly resolution | No best-fit distribution with P > 1 percent | Gamma, Exponential and Weibull distributions fit with P-values > 25 percent
Daily resolution  | No best-fit distribution with P > 1 percent | No best-fit distribution with P > 1 percent

Testing Spread and Central Tendency


Studying spread and central tendency are other methods used to reveal differences. However, they are less powerful than the individual distribution identification; multiple distributions, as well as mixed or merged distributions like the data for supplier A, can have the same spread or central tendency. Thus, individual distribution identification has a stronger discrimination power.

For non-normal data, the appropriate tests for spread and central tendency are Levene’s and Kruskal-Wallis (or Mood’s median), respectively (Table 2). The central tendency test uses the median if data is non-normal, as in the current situation (Figure 5).
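A minimal sketch of the two tests named above, assuming SciPy is available; the supplier data here is a placeholder and would be replaced by the actual hourly resolution lead times.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder lead-time samples (hours) standing in for the real data.
supplier_a = rng.exponential(scale=70, size=250)
supplier_b = rng.exponential(scale=90, size=250)

# Levene's test: do the two suppliers differ in spread?
_, p_spread = stats.levene(supplier_a, supplier_b)

# Kruskal-Wallis test: do the suppliers differ in central tendency (medians)?
_, p_center = stats.kruskal(supplier_a, supplier_b)

print(f"Levene p = {p_spread:.3f}, Kruskal-Wallis p = {p_center:.3f}")
```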


Figure 5a: Individual Value Plots for Both Suppliers with Median Values Highlighted


Figure 5b: Individual Value Plots for Both Suppliers with Median Values Highlighted

Table 2: Results of Statistical Tests for the Two Sets of Data
                                       | Hourly Resolution                                              | Daily Resolution
Spread (Levene's test)                 | A: 80.7 hours; B: 82.4 hours (P-value = 4 percent)             | A: 3.36 days; B: 3.43 days (P-value = 4 percent)
Central tendency (Kruskal-Wallis test) | A: 108.5 hours; B: 57.5 hours (P-value = less than 1 percent)  | A: 5 days; B: 3 days (P-value = less than 1 percent)

Differences in spread were found to be marginally significant (P-value = 4 percent) from a statistical point of view. But both sets of data led to the conclusion that supplier B is faster. However, without the knowledge gained through the individual distribution identification, it is impossible to attribute this either to common cause variation (variation due to the nature of the distribution, such as Gamma, Weibull or exponential) or to special cause variation (like the “Friday effect” in the current example).

Another way to look at differences is to stratify the data by suppliers A and B within histograms. The special cause known to be active for supplier A becomes visible only in the data with hourly resolution (Figure 6), where two modes – which originate from the mixed nature of the underlying lead time distribution – are visible. Important indications may thus be overlooked if the data is of too low a granularity.


Figure 6: Histogram of Supplier A Delivery Data with a Best-fit Exponential Distribution

Finding the Right Granularity


Measurement systems used for tracking adherence to customer specifications and other process performance standards are usually judged by the ratio of precision to tolerance (P/T). In the context of granularity, the equivalent of precision is the measurement increment.

Practitioners should use the ratio of the measurement increment to the process sigma (spread) as a way to quantify the impact of granularity on the ability to detect special causes of variation. As a rough guideline, this ratio should be smaller than 10 percent for granularity to be acceptable, where a ratio much smaller than 10 percent is preferable. A ratio greater than 30 percent would be unacceptable.

For the tolerance window, defined by the customer specification limits, similar requirements hold: in order to tell whether a value is within or outside the specification limits, the measurement increment must be smaller than a tenth of the width of the tolerance window. Notice that, in conjunction with the first requirement, this only has practical relevance in cases of poor process capability when the tolerance window is about as wide as or even wider than the process spread.

Table 3 summarizes these parameters for the two data sets with hourly and daily granularity where the tolerance is given by an upper specification limit of 5 days (120 hours). For lead time data, the lower bound is zero. Clearly, the measurement system with a daily increment is at best marginally acceptable both from a customer’s perspective and from a process perspective.

Table 3: Acceptability of Granularity
                  | Tolerance (T) | Spread   | Measurement Increment (MI) | MI/T        | MI/Spread
Hourly resolution | 120 hours     | 83 hours | 1 hour                     | 0.8 percent | 1.2 percent
Daily resolution  | 5 days        | 3.4 days | 1 day                      | 20 percent  | 29 percent
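A small sketch of that acceptability check, using the guideline thresholds stated above (below 10 percent acceptable, above 30 percent unacceptable, marginal in between) and the Table 3 values; the helper function itself is only an illustration.

```python
def granularity_check(measurement_increment, tolerance, spread):
    """Return (MI/T, MI/spread) with a rough acceptability verdict for each ratio."""
    def verdict(ratio):
        if ratio < 0.10:
            return "acceptable"
        if ratio <= 0.30:
            return "marginal"
        return "unacceptable"

    mi_t = measurement_increment / tolerance
    mi_spread = measurement_increment / spread
    return (mi_t, verdict(mi_t)), (mi_spread, verdict(mi_spread))

# Values from Table 3, expressed in hours: tolerance 120 h, spread 83 h.
print(granularity_check(1, 120, 83))    # hourly increment: both ratios ~1 percent
print(granularity_check(24, 120, 83))   # daily increment (24 h): ~20 and ~29 percent
```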

Taking Action


These observations lead to the following implications:

1. When releasing Measure phase tollgates, particularly for lead time reduction projects, Master Black Belts should require an MSA and make sure granularity is considered.

2. A few slides about granularity should be included in the training materials for all Green Belts and Black Belts.

3. Organizations should screen their databases for delivery lead time data. Many companies only store “shipment day” and “reception day” data. This produces lead time data of daily granularity, which can keep practitioners from detecting delays and special causes.

4. In most such situations, providing a sound operational definition of the “start” and “end” of the process, along with mapping out the process, can yield immediate improvement potential. This is especially true at the beginning and end of a delivery process, where delays may come from letters waiting in signature folders or parcels left in shipment areas.

Monday 18 March 2019

Key Differentiator for Six Sigma: Its Infrastructure

Some detractors of Six Sigma have said that it just borrowed tools and techniques from earlier improvement systems – chiefly total quality management (TQM) – and repackaged and marketed them under a different name. While that is partly true, there is one reason why Six Sigma is different from its predecessors and why it is such a powerful improvement system – its infrastructure to manage and sustain organization-wide process improvement.


Here are seven success factors that make Six Sigma a proven, effective business change initiative.

  1. Top management driven: The organization’s CEO or president leads the Six Sigma infrastructure and drives process improvement by actively promoting Six Sigma internally.
  2. Alignment with vision and objectives: Six Sigma leadership ensures that all projects are aligned with the corporate vision and the objectives of all key stakeholders (customers, shareholders, employees).
  3. Guiding coalition: Everyone in the Six Sigma infrastructure is bound by a common conviction to achieve business improvement. The change initiative is guided by, and deployed from, the top and executed at every level. The infrastructure includes leaders who work in boundary-less collaboration, abandoning the traditional, inflexible silo mentality.
  4. Strong buy-in from the people: Six Sigma is a vehicle for employees to actively engage in process improvement, thus enriching job and workplace satisfaction.
  5. Clear performance goals: Project teams establish clear performance and financial improvement goals to achieve overall corporate objectives.
  6. Simple and rigid methodology: Six Sigma infrastructure imposes a robust, clear and simple DMAIC (Define, Measure, Analyze, Improve, Control) methodology. DMAIC harnesses a set of well-known tools and techniques for each phase that are proven successful in bringing improvement to the process and change to the organization as a whole.
  7. Proven successful results: Six Sigma practitioners are customer-centric, process-focused, data-driven and analytically rigorous, which strengthens the probability of meeting both project and corporate goals and expectations.

Friday 15 March 2019

Project Management vs General Management


The differences between project management and general management are not very distinct. However, a few key differences set the two apart and give each a definition of its own.

What is Project Management?


Project management consists of organizing, planning, motivating and controlling procedures, resources and protocols to achieve the specific goals of a particular project. A project is a temporary, time-constrained undertaking geared toward producing a specific result, product or service, and it is often constrained by funding and other resources as well. The aim of project management is to channel that limited time and those limited resources toward the project's goal so as to achieve optimum results that are beneficial and add value.

There are many approaches to project management, and some projects do not follow a structured process at all. However, the traditional approach consists of five phases:

1. Initiation
2. Planning and design
3. Execution and construction
4. Monitoring and controlling systems
5. Completion

What is General Management?


General management can be defined as coordinating the use of available resources and time toward accomplishing a specific goal or objective of an organization or business. This task usually involves organizing, planning, staffing, leading, controlling or directing specific resources, time or people. It also includes managing human, financial, technological or natural resources for the maximum benefit of the cause at hand.

In for-profit organizations, the main function of general management is to satisfy stakeholders. This usually involves making a profit, creating employment opportunities for employees and producing quality goods and services at a low cost to customers. Most organizations have a board of directors, elected by shareholders, to carry out general management functions. Some use other methods, such as employee voting systems, though these are quite rare.

According to Henri Fayol, one of the most prominent contributors to modern management theory, management has six functions:

1. Forecasting
2. Planning
3. Organizing
4. Commanding
5. Coordinating
6. Controlling

Today, management is also an academic discipline, taught in schools and universities all over the world.

What is the difference between Project Management and General Management?


Although the functions and duties of project management and general management are very similar, a few differences make them unique functions with identities of their own.

• Project management is usually employed in projects that are temporary and time constrained. General management is employed for the ongoing procedures and functions of an organization or business.

• Usually, in project management, resources are limited. In contrast, general management is also responsible for procuring whatever resources are deemed necessary for the continuation of the organization's functions.

• Management is an academic discipline taught in schools and universities all over the world. Project management often falls under this broad discipline of management.

• Therefore, one can say that the difference between project management and general management does not lie in leadership or other qualities required, but in the scope of responsibilities that lie within each role.

Thursday 14 March 2019

Hypothesis Testing: Fear No More

When analyzing data as part of a Lean Six Sigma project, some Belts can become confused to the point of fear when their coach tells them they need to perform a hypothesis test. This fear often comes from two sources: 1) the selection of the appropriate hypothesis test and 2) the interpretation of the results.

But using hypothesis tests does not need to be scary. By breaking down the process and keeping several reminder charts on hand, Belts can ease their fear of using these powerful statistical tools.

Determining Normality


Not all data is normal. Many Lean Six Sigma students are told to assume normality because it makes the job of analyzing data easier, but that assumption is often far from the truth, especially when practitioners focus on projects calling for process cycle-time reduction, which involve data that is typically not normal. With hypothesis testing, Belts must know whether or not the data is normal because different tests apply in different circumstances.

So the first step is to determine normality. Normal data is defined as data whose variation follows the standard bell curve, centered on the mean (the data's central tendency). The distribution also follows these rules (as shown in Figure 1):

◈ Plus or minus 1 standard deviation around the mean contains roughly 68 percent of all data
◈ Plus or minus 2 standard deviations around the mean contains roughly 95 percent of all data
◈ Plus or minus 3 standard deviations around the mean contains roughly 99.7 percent of all data


Figure 1: Normal Distribution
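
As a quick, hedged check of these coverage figures, the following Python sketch uses scipy's normal distribution (this is an illustration, not part of the original article's analysis):

    # Verify the 68-95-99.7 rule for a normal distribution using scipy's CDF.
    from scipy.stats import norm

    for k in (1, 2, 3):
        coverage = norm.cdf(k) - norm.cdf(-k)   # probability within +/- k standard deviations
        print(f"+/- {k} standard deviation(s): {coverage:.1%} of the data")
    # Prints roughly 68.3%, 95.4% and 99.7%, matching Figure 1.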

It is these properties that enable a practitioner to use a tool called a probability plot to determine if data is normally distributed. A probability plot is a graphical, or visual, method to show normality, but, unlike the histogram, it is based on a statistical test.

Specifically, the probability plot is a graph drawn on probability paper (i.e., the y-axis is not linear) that plots each value on the x-axis against the cumulative percentage of the data at or below that value on the y-axis. For example, in Figure 2, the mean of around 15 on the x-axis translates to 50 percent on the y-axis; in other words, 50 percent of the data is below the mean and 50 percent is above it.


Figure 2: Probability Plot of C1

To interpret this chart, if the data is distributed throughout the range of the measurement following a random variation pattern, a practitioner would expect 50 percent of the data to fall to the lower side of the median, 50 percent above the median and so on. The breakdown would follow the percentages listed in the picture of the normal curve in Figure 1. In a probability plot, this would result in a straight line, plotting the value of the data on the x-axis and the percentage less than the value of the data point on the y-axis.

Normal data would therefore approximate a straight line, as shown in Figure 2, indicating that as the value increases, a higher percentage of the total data falls within that range.

As a rule of thumb, if 80 to 90 percent of the data points fall between the lines (the confidence bands on the plot), a practitioner can conclude that the data is approximately normal.

The probability plot serves as the first key point for determining which hypothesis test to use.
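
For Belts working outside Minitab, roughly the same check can be sketched in Python with scipy and matplotlib; the sample data below is made up for illustration, and the Shapiro-Wilk test stands in for the normality test that accompanies the plot:

    # Assess normality with a probability plot plus a normality test.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(seed=1)
    cycle_time = rng.normal(loc=15, scale=2, size=100)   # stand-in for real cycle-time data

    # Probability plot: points that follow the reference line suggest normal data
    stats.probplot(cycle_time, dist="norm", plot=plt)
    plt.show()

    # Shapiro-Wilk test: p-value > 0.05 means we fail to reject normality
    stat, p_value = stats.shapiro(cycle_time)
    print(f"Shapiro-Wilk p-value: {p_value:.3f}")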

Selecting the Appropriate Test


At this point in the analysis, Belts need to decide on a direction to follow and the type of hypothesis test to use. For the inexperienced Belt, this can be an overwhelming task. Using the flow chart in Figure 3 can help determine the appropriate test to use.


Figure 3: Hypothesis Test Flow Chart

The following example can assist Belts in using the flow chart:

A Belt is analyzing why three different manufacturing lines that produce the same product have different cycle times. The data pulled showed that all lines have a normal variation, but with different means. The Belt is attempting to determine if this difference is statistically significant or just part of normal variation. In this case, statistically significant means the difference is not due to common-cause variation.

Following the flow chart, the data is normal for each line as determined from probability plots, so the Belt moves to the next decision: Is this a comparison of more than two groups? The answer is yes because the Belt is looking at three lines. Again, the Belt continues down to the next decision: Do the groups have equal variances? Here, the Belt needs to perform a test for equal variances (such as an F-test, Bartlett's test or Levene's test). If the results of the test determine that the groups have equal variances, the Belt proceeds down the chart and uses ANOVA.
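
A minimal sketch of this path through the flow chart, using scipy and made-up cycle-time data for the three lines, might look like this:

    # Equal-variance test followed by one-way ANOVA for three manufacturing lines.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=2)
    line_a = rng.normal(12.0, 1.5, 50)   # illustrative cycle times, line A
    line_b = rng.normal(12.4, 1.5, 50)   # line B
    line_c = rng.normal(13.1, 1.5, 50)   # line C

    # Bartlett's test for equal variances (appropriate for normal data)
    bart_stat, bart_p = stats.bartlett(line_a, line_b, line_c)
    print(f"Bartlett p-value: {bart_p:.3f}")   # > 0.05 -> variances look equal

    # One-way ANOVA: do the three lines share the same mean cycle time?
    f_stat, anova_p = stats.f_oneway(line_a, line_b, line_c)
    print(f"ANOVA p-value: {anova_p:.3f}")     # < 0.05 -> at least one mean differs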

It is important to note that if the data is not normal, there are different avenues that can be taken. For example, using the same example above of comparing cycle times of three manufacturing lines, assume the data was not normal. Moving to the right on the flow chart, the next question is: Can you convert the data to discrete? In other words, can the Belt convert the individual continuous data points to discrete data by comparing them to a specification or by “bucketing” the data into groups? Doing that will result in a contingency table, which could be used in a chi-square test. The chi-square test would help determine which line is not performing like the others using discrete data instead of continuous data.
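
For example, if each line's output were bucketed into within-spec and out-of-spec counts, the resulting contingency table could be tested as sketched below (the counts are invented for illustration):

    # Chi-square test on a contingency table of bucketed (discrete) data.
    from scipy.stats import chi2_contingency

    #          within spec, out of spec
    table = [[46, 4],    # line A
             [44, 6],    # line B
             [35, 15]]   # line C

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"Chi-square p-value: {p_value:.3f}")   # < 0.05 -> at least one line differs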

If converting to discrete data is not an option, there is a group of tests known as non-parametric hypothesis tests, such as Mood's median test or the Kruskal-Wallis test, which answer the same kind of question for non-normal data. In almost all cases, these are a good alternative.
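
Both tests are readily available; the sketch below runs them on made-up skewed (exponential) samples standing in for non-normal cycle times:

    # Non-parametric comparisons of three groups with non-normal data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=3)
    line_a = rng.exponential(scale=10.0, size=50)   # illustrative skewed cycle times
    line_b = rng.exponential(scale=10.0, size=50)
    line_c = rng.exponential(scale=14.0, size=50)

    # Kruskal-Wallis: are the group medians (distributions) the same?
    h_stat, kw_p = stats.kruskal(line_a, line_b, line_c)
    print(f"Kruskal-Wallis p-value: {kw_p:.3f}")

    # Mood's median test: a more outlier-tolerant comparison of medians
    m_stat, mood_p, grand_median, counts = stats.median_test(line_a, line_b, line_c)
    print(f"Mood's median p-value: {mood_p:.3f}")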

Lastly, if conversion to discrete data is not possible and the Belt cannot use non-parametric tests, the data can be transformed toward normality using a Box-Cox transformation and then analyzed with the appropriate normal-data test.
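
A hedged sketch of that last resort, again on made-up skewed data, transforms the values and then re-checks normality:

    # Box-Cox transformation of positive, skewed data toward normality.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=4)
    lead_times = rng.exponential(scale=10.0, size=100)    # illustrative non-normal data

    transformed, best_lambda = stats.boxcox(lead_times)   # lambda is estimated from the data
    print(f"Estimated lambda: {best_lambda:.2f}")

    # Re-check normality of the transformed data before applying a normal-data test
    stat, p_value = stats.shapiro(transformed)
    print(f"Shapiro-Wilk p-value after transformation: {p_value:.3f}")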

Interpreting the Results


Most Green Belts and Black Belts hear the statement, “Just look for the p-value” when being taught how to interpret the results of a hypothesis test. The p-value is what practitioners look to in order to determine whether they should reject or fail to reject the null hypothesis (H0). The H0 almost always involves the statement, “There is no difference between groups.”

To clarify this, Belts can use the chart below.

Table 1: How to Interpret Test Results

Test Name | Purpose of Test | p-value < 0.05 | p-value > 0.05
Two-sample t-test and paired t-test | Test if the difference between two means is statistically significant | Reject H0, confirming a difference exists | Fail to reject H0, confirming no difference exists
One-way ANOVA, Welch's ANOVA | Test if the difference between two or more means is statistically significant | Reject H0, confirming at least one mean is different | Fail to reject H0, confirming no difference exists
Kruskal-Wallis | Test if the difference between two or more medians is statistically significant, if data has outliers | Reject H0, confirming a difference exists | Fail to reject H0, confirming no difference exists
Mood's median | Test if the difference between two or more medians is statistically significant | Reject H0, confirming a difference exists | Fail to reject H0, confirming no difference exists
Chi-square | Test if the difference between two or more group proportions is statistically different | Reject H0, confirming at least one group is different | Fail to reject H0, confirming no difference exists
F-test (Bartlett's for normal data, Levene's for non-normal data) | Test if the difference between two variances is statistically different | Reject H0, confirming a difference exists | Fail to reject H0, confirming no difference exists

Note that if the hypothesis test confirms that a difference exists, all that has been proven is that a difference exists. It is now up to the Belt to look at the details of the process and use investigative skills to identify the potential causes for the difference.