Sunday, 13 October 2019

Resolving Common Issues with Performance Indices

Measurements affect behavior. Wrong behavior results when metrics are confusing or do not represent what is truly happening. Leaders of many respected companies are paying the price for creating an environment in which measurements did not reflect accurately what was occurring in their organization.


In contrast, the wise selection of metrics and their tracking within an overall business system can lead to activities that result in moving toward achieving the three Rs of business: everybody doing the right things, doing them right and doing them at the right time.

Measurement issues can be prevalent at all levels of an organization. To add to this dilemma, the basic calculation and presentation of metrics can sometimes be deceiving. Organizations often state that suppliers must meet process capability objectives, typically measured in Cp, Cpk, Pp and Ppk. The requesters of these objectives often do not realize, however, that these reported numbers can be highly dependent upon how data is collected and interpreted. Also, these process capability metrics typically are utilized only at a component part level. To resolve these issues, practitioners need a common, easy-to-use fundamental measurement for making process stability and capability assessments at all levels of a business, independent of who is making the assessment – something beyond Cp, Cpk, Pp and Ppk.

Calculating Process Capability and Performance Indices


The process capability index Cp represents the allowable tolerance interval spread in relation to the actual spread of the data when the data follows a normal distribution. The equation to calculate this index is:

Cp = (USL – LSL) / 6s, where

USL and LSL are the upper specification limit and lower specification limit, respectively, and 6s describes the range or spread of the process. Data centering is not taken into account in this equation.

Cp addresses only the spread of the process; Cpk is used concurrently to consider both the spread and the mean shift of the process. Mathematically, Cpk can be represented as the minimum of two quantities, as shown in this formula:

Cpk = min[(USL – x̄) / 3s, (x̄ – LSL) / 3s]

The relationship between Pp and Ppk is similar to that between Cp and Cpk. Differences in the magnitudes of the indices come from differences in the standard deviation (s) values: Cp and Cpk are determined from the short-term standard deviation, while Pp and Ppk are determined using the long-term standard deviation. Pp and Ppk are sometimes referred to as process performance indices.

Although standard deviation is an integral part of the calculation of process capability, the method used to calculate it is rarely adequately scrutinized. It can be impossible to determine a desired metric if data is not collected in the appropriate fashion. Consider the following three sources of continuous data:

◈ Situation 1: Small groups of data are taken periodically and could be tracked in a time series using an x-bar and R control chart.
◈ Situation 2: Single data points are taken periodically and could be tracked in a time series using an individuals (X) chart.
◈ Situation 3: Random data is taken from a set of completed transactions or products where there is no time dependence.

All three of the above situations are possible sources of information, each with its own approach for determining standard deviation in the process capability and performance equations. The figure below illustrates the mechanics of these and three additional approaches for this calculation. Methods 2 to 5 require practitioners to maintain time order, while 1 and 6 do not.

Various Ways to Calculate Process Standard Deviation (Source: Integrated Enterprise Excellence Volume III – Improvement Project Execution: A Management and Black Belt Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard [Bridgeway Books,2008])

Practitioners need to be careful about the methods they use to calculate and report process capability and performance indices. A customer may ask for Cp and Cpk metrics when the documentation may really stipulate the use of a long-term estimate for standard deviation. Using Cp and Cpk values, which account for short-term variability, could yield a very different conclusion about how a product is performing relative to customer needs. A misunderstanding like this between customer and supplier could be costly.
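To make the short-term versus long-term distinction concrete, the following minimal sketch (Python; the data, specification limits and subgroup structure are hypothetical, not from the article) contrasts a short-term standard deviation estimated from subgroup ranges (R-bar/d2) with a long-term overall sample standard deviation, and shows how Cpk and Ppk can diverge on the same data.

```python
# Minimal sketch: short-term (Rbar/d2) vs. long-term (overall s) estimates
# of sigma, and the Cpk/Ppk values each one produces.
import numpy as np

rng = np.random.default_rng(1)
# 20 daily subgroups of 5 parts; a random day-to-day shift inflates
# long-term variation but leaves within-subgroup variation untouched.
data = rng.normal(10.0, 0.5, size=(20, 5)) + rng.normal(0.0, 0.4, size=(20, 1))

USL, LSL = 12.0, 8.0   # hypothetical specification limits
d2 = 2.326             # control chart constant for subgroup size n = 5

sigma_st = np.mean(np.ptp(data, axis=1)) / d2  # short-term: mean range / d2
sigma_lt = np.std(data, ddof=1)                # long-term: overall sample s
xbar = data.mean()

cpk = min((USL - xbar) / (3 * sigma_st), (xbar - LSL) / (3 * sigma_st))
ppk = min((USL - xbar) / (3 * sigma_lt), (xbar - LSL) / (3 * sigma_lt))
print(f"sigma_st={sigma_st:.3f} sigma_lt={sigma_lt:.3f} Cpk={cpk:.2f} Ppk={ppk:.2f}")
```

Because the simulated day-to-day shifts inflate only the long-term estimate, Ppk comes out lower than Cpk here, which is precisely the kind of gap that can cause the customer-supplier misunderstanding described above.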

Common Issues with Process Capability and Performance Indices


There are a number of prevalent issues with process capability and performance indices. For instance, capability reports should be accompanied by a statistical control chart that demonstrates the stability of the process. If this is not done and the process has shifted, the index would effectively be calculated across two different processes, when in reality only one of them can exist at a time. Also, if the source data is skewed rather than normally distributed, the actual out-of-specification rate may be significantly greater than the Cpk and Ppk values predict. Other issues include:

◈ When an x-bar and R process control chart is not in control, calculated short-term standard deviations typically are significantly smaller than the long-term standard deviations, which results in Cp and Cpk indices that make a process performance appear better than reality.

◈ Different types of control charts for a given process can provide differing perspectives relative to stability. For example, an x-bar and R control chart can appear to be out of control due to regular day-to-day variability effects such as raw material, while a daily subgrouping individuals control chart could indicate the process is stable.

◈ Determined values for capability and performance indices are more than a function of chance; they depend on how someone chooses to sample from a process. For example, if a practitioner were to choose a daily subgrouping of five sequentially produced parts and determine process capability, they could get a much smaller Cp and Cpk value than someone who had a daily subgrouping of one.

◈ The equations above for determining these indices assume normality and that a specification exists, which is often not the case.

Changing the Approach


One alternative to the capability index is reporting the performance for stable processes as a predicted long-term nonconformance percentage out of tolerance or PPM defective rate. This alternative is not dependent on any condition except for long-term predictability. If no tolerance exists, a recommended capability metric is a median and 80 percent frequency of occurrence range from a long-term predictable process. This methodology allows for the assessment of non-normal data capability, such as found in time and many other process metrics.

These measurement objectives are accomplished when utilizing the following scorecard process:

1. Assess process predictability (i.e., whether it is in statistical control)

2. When the process is considered predictable, formulate a prediction statement for the latest region of stability. The usual reporting format for this prediction statement is the following:

a. Nonconformance percentage or defects per million opportunities (DPMO) when there is a specification requirement

b. Median response and 80 percent frequency of occurrence rate when there is no specification requirement
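As a minimal illustration of the two reporting formats in step 2 (not the scorecard tooling itself), the following sketch computes both from illustrative data; the lognormal fit and the specification limit are assumptions chosen for the example.

```python
# Minimal sketch of a prediction statement for a stable process.
# Data, fitted distribution and USL are all hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
y = rng.lognormal(mean=1.0, sigma=0.4, size=500)   # cycle-time-like data

# 2a. With a specification: predicted nonconformance percentage beyond the
#     USL, from a lognormal fit (use whatever distribution fits your data).
USL = 5.0
shape, loc, scale = stats.lognorm.fit(y, floc=0)
pct_nc = 100 * stats.lognorm.sf(USL, shape, loc=loc, scale=scale)

# 2b. Without a specification: median response and the 80 percent
#     frequency-of-occurrence range (10th to 90th percentile).
median = np.median(y)
p10, p90 = np.percentile(y, [10, 90])

print(f"predicted nonconformance: {pct_nc:.2f}%")
print(f"median {median:.2f}; 80% of occurrences between {p10:.2f} and {p90:.2f}")
```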

With this approach, day-to-day raw material variability is considered a noise variable to the overall process. In other words, raw material would be considered a potential source of common cause variability. Because of this, practitioners need a measurement strategy that considers the between-day variability when establishing control limits for a control chart. This does not occur with a traditional x-bar and R control chart because within-subgroup variability mathematically determines the control limits.

Benefits of Zooming Out


Practitioners track the process using an individuals control chart that has an infrequent sampling plan, in which the typical noise variation of the process occurs between samples. It should be noted that the purpose of this 30,000-foot-level metric reporting is not to give timely control to a process or insight into what may be causing unsatisfactory process capability. The only intent of the metric is to provide an operational picture of what the customer of the process experiences and to separate common cause from special cause variability.

For continuous data, whenever a process is within the control limits and there are no trends, the process typically is said to be in control, or stable. For processes that have a recent region of stability, it is possible to say that the process is predictable. Data from this region of predictability can be considered a random sample of the future.

A probability plot is a powerful approach for providing the process prediction statement. For this high-level process view, practitioners can quantify the capability of the process in a form that everyone understands. If no specification exists, a best-estimate 80 percent frequency of occurrence band could be reported, along with a median estimate.

When specifications or nonconformance regions do exist, practitioners can determine the capability of the process in some form of proportion units beyond the criteria, such as PPM, DPMO or percent nonconformance, along with a related cost of poor quality (COPQ) or the cost of doing nothing differently (CODND) monetary impact. The COPQ or CODND metric, along with any customer satisfaction issues, can then be assessed to determine whether improvement efforts are warranted.

Transformations may be appropriate for both assessing process predictability and making a prediction statement; only transformations that make physical sense for a given situation should be used.

Also, this 30,000-foot-level metric reporting structure can be used throughout an enterprise as an organization performance metric, which may keep practitioners from firefighting common cause variability as though it were special cause. A similar approach also would apply to the tracking of attribute data, allowing for a single unit of measurement for both attribute and continuous data situations within manufacturing, development and transactional processes.

Switching Methods


Most people find it easy to visualize and interpret an estimated PPM rate beyond customer objectives as a reported process capability or performance index. Because of this, organizations should consider using the metric described here with appropriate data transformations, in lieu of Cp, Cpk, Pp and Ppk and other process metrics, whenever possible. When a customer asks for these, a supplier could provide the calculated indices along with a control chart that shows predictability, while highlighting the predicted nonconformance proportion.

A Lean Six Sigma deployment needs a sound infrastructure for the selection of projects, which should be linked to the goals and metrics of the overall business. A high-level individuals control chart can be a useful performance metric not only for projects, but also for value-chain performance metrics. This metric, coupled with an effective project execution roadmap, can lead to significant bottom-line benefits and improved customer satisfaction for an organization.

Saturday, 12 October 2019

Six Qualities of Successful Green Belts

Many factors play a role in the success of a Lean Six Sigma Green Belt: support of top management, a well-defined and properly scoped project, a solid project team and more. One element that may be overlooked is the qualities of the candidates themselves. Everyone can contribute to continuous improvement efforts, but the Green Belt role is not for everyone. Too often, prospective Green Belts are selected for one specific characteristic or skill: a reputation for fixing problems, personal ambition, their specific job title or responsibilities, or an aptitude for statistics. While these may be part of the selection criteria, a more complete set of essential qualities is needed to ensure someone is suited to being a Green Belt.


When managers decide who will attend the next wave of Green Belt training, they must make the time to choose candidates who have all the qualities to be successful in this demanding role. Failure to do so can be damaging to the project, the project team and to the overall continuous improvement program. It is also detrimental to the person thrust into the Green Belt role who is unlikely to be successful in that position.

The following six qualities (in no particular order) are key for the successful Lean Six Sigma Green Belt candidate:

1. Perseverance: Green Belts must be change agents for their businesses, regardless of the level that they influence. Implementing change is difficult and almost always involves both technical and cultural obstacles. Some Green Belts struggle with an inability to work through resistance to change and become discouraged. Successful Green Belt candidates understand that pushback against change is inevitable and, indeed, part of being human; they face up to this resistance with determination. It is important to remember that a Green Belt is not a dedicated resource like a Black Belt. Green Belts must be focused and tenacious to carve out time from their normal job responsibilities to devote to their continuous improvement team and project.

2. A logical, analytical mind: An individual with a logical, analytical mind may be perceived as having the math skills necessary to understand statistical analysis. While math ability is important, it may not be as crucial as being able to work a problem methodically and logically. Most organizations have people who are whizzes at “firefighting” problems; these are often the first employees sent to Green Belt training. Lean Six Sigma, however, requires that the Belts go beyond firefighting to fully understand the problem, measure the current state, identify and address root causes, and then put controls in place to prevent reoccurrence. Shooting from the hip is not sufficient for continuous improvement efforts, nor is putting a bandage on the issue. The successful Green Belt understands that the process for problem solving is as important as arriving at a solution. Many good Green Belts are problem solvers who are tired of the firefighting approach and are eager to embrace a better way.

3. A passion for improvement: Some candidates possess the people skills and the analytical ability to do the job, but are unable to see the need for improvement in their organizations. They struggle in the Green Belt role because they do not see the need to question the current processes, nor do they dig below the surface to get to the root causes of problems and inefficiencies. Managers should not draft their employees into Green Belt training; managers should instead seek out those who want to participate. The successful Green Belt candidate is one who is never satisfied with the status quo – who sees not just problems, but also opportunities.

4. Leadership skills: The ability to understand and apply Lean Six Sigma tools by itself is not enough. The Green Belt is a project manager and a team leader. Like all project leaders, they must manage time and resources, assign tasks, follow up and report results to stakeholders. They must understand how to motivate their team. They cannot be afraid to prioritize and make tough decisions. Green Belt leadership styles can vary from fiery to analytical to laid-back – as long as the individual style provides results.

5. Initiative: When Green Belts require prodding from their sponsors to move forward, energy is wasted. When Green Belts step up to the challenge and are proactive with facing problems, the team and the project’s stakeholders are able to be excited and engaged. In addition to their projects, Green Belts have a “regular” job. Proactive Green Belts are successful because they carve out time and actively seek out opportunities to move their projects along without letting day-to-day tasks sweep them away.

6. People skills: In the long run, the so-called “soft” skills end up being more important to Green Belt success than the “hard” skills of technical knowledge. Overcoming resistance to change is futile if a Green Belt cannot understand and address the human factors in a company. A manager accustomed to working only with their direct reports may struggle in leading a cross-functional Green Belt team of people who report to others. Green Belt projects require buy-in from those involved; building that buy-in requires clear communication and an understanding of the needs and motivations of stakeholders. The successful Green Belt must understand and actively manage the dynamics of their team – seeking out and capitalizing on team diversity, having team members play to their strengths, and establishing team norms that make for a focused and effective environment.

A Lean Six Sigma Green Belt program can be an effective way for organizations to develop leaders and uncover hidden talent while improving the bottom line. Many great Green Belt prospects are not already in a leadership role, but are ready to step up and make a positive change in the company if identified. Careful selection of candidates for Green Belt training is the first step in the success of the person, the project and the program. Use these six qualities to decide who should be a Lean Six Sigma Green Belt in any organization – and set them up for success.

Friday, 11 October 2019

Using the Power of the Test for Good Hypothesis Testing

In any hypothesis test, there are four possible outcomes. The table below illustrates the only possibilities.

Table 1: Possible Outcomes of a Hypothesis Test

Decision     Reality: Ho is true                                  Reality: Ha is true
Accept Ho    Correct decision (p = 1 – α, the confidence level)   Type II error (p = β)
Reject Ho    Type I error (p = α, the significance level)         Correct decision (p = 1 – β, the power of the test)


What should every good hypothesis test ensure? Ideally, it should make the probabilities of both a Type I error and a Type II error very small. The probability of a Type I error is denoted as α and the probability of a Type II error is denoted as β.

Understanding α


Recall that in every test, a significance level is set, normally α = 0.05. In other words, that means one is willing to accept a probability of 0.05 of being wrong when rejecting the null hypothesis. This is the α risk that one is willing to take, and setting α at 0.05, or 5 percent, means one is willing to be wrong 5 out of 100 times when rejecting Ho. Hence, once the significance level is set, there is really nothing more that can be done about α.

Understanding β and 1 – β


Suppose the null hypothesis is false. One would want the hypothesis test to reject it all the time. Unfortunately, no test is foolproof, and there will be cases where the null hypothesis is in fact false but the test fails to reject it. In this case, a Type II error would be made. β is the probability of making a Type II error, and it should be as small as possible. Consequently, 1 – β is the probability of correctly rejecting a null hypothesis that is in fact false, and this number should be as large as possible.

The Power of the Test


Rejecting a null hypothesis when it is false is what every good hypothesis test should do. Having a high value for 1 – β (near 1.0) means it is a good test, and having a low value (near 0.0) means it is a bad test. Hence, 1 – β is a measure of how good a test is, and it is known as the “power of the test.”

The power of the test is the probability that the test will reject Ho when in fact it is false. Conventionally, a test with a power of 0.8 is considered good.

Statistical Power Analysis


Consider the following when doing a power analysis:

1. What hypothesis test is being used
2. Standardized effect size
3. Sample size
4. Significance level, or α
5. Power of the test, or 1 – β

The computation of power depends on the test used. One of the simplest examples for power computation is the t-test. Assume that there is a hypothesized population mean of μ = 20 and that a sample of n = 44 is collected, with a sample mean of x̄ = 22 and a sample standard deviation of s = 4. Did this sample come from a population with mean = 20 if α is set at 0.05?

Ho: μ = 20
Ha: μ ≠ 20
α = 0.05, two-tailed test

The effect size being tested is 2 (the distance between the sample mean of 22 and the hypothesized mean of 20). Since this is an absolute value, it needs to be standardized into a t-value using the standard error of the mean, s/√n = 4/√44 ≈ 0.603:

t = 2 / 0.603 ≈ 3.32

The critical value of t at 0.05 (two-tailed) for DF = 43 is 2.0167 (using spreadsheet software [e.g., Excel], TINV[0.05,43] = 2.0167). Since the calculated t (3.32) is greater than the critical value, the null hypothesis is rejected. But how powerful was this test?
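The same calculation can be scripted; here is a minimal sketch using scipy, which reproduces the spreadsheet TINV result:

```python
# Minimal sketch of the worked example: standardize the effect size of 2
# into a t-value and compare it with the two-tailed critical value.
from math import sqrt
from scipy import stats

n, s, alpha = 44, 4.0, 0.05
se = s / sqrt(n)                                # standard error ~ 0.603
t = 2.0 / se                                    # standardized effect ~ 3.32
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # 2.0167, matches TINV(0.05,43)
print(f"t = {t:.2f}, critical t = {t_crit:.4f}, reject Ho: {abs(t) > t_crit}")
```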

Computing the Value of 1 – β


The critical value of t at 0.05 (two tailed) for DF = 43 is 2.0167. The following figure illustrates this graphically.


These critical values of t = ±2.0167 correspond, in the hypothesized distribution, to 20 ± 0.603(2.0167), that is, 20 + 0.603(2.0167) = 21.216 and 20 – 0.603(2.0167) = 18.784.


The next figure shows an alternative distribution of μ = 22 and s = 4. This is the original distribution shifted two units to the right.


What is the probability of being less than 21.216 in this alternative distribution? That probability is β: accepting Ho when in fact it is false. This is because, for any value within that region, one would have accepted Ho under the original probability distribution. How does one find this β? What is the t-value of 21.216 in the alternative distribution?

t = (21.216 – 22) / 0.603 = –1.3

What is the corresponding probability of being less than t = –1.3? From the t-tables, using one-tailed, DF = 43, t = 1.3, one finds 0.10026 (using spreadsheet software, TDIST[1.3,43,1] = 0.10026). Hence β = 0.10026 and 1 – β ≈ 0.9, which is the power of the test in this example.
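The β and power calculation can be scripted as well; this minimal sketch simply automates the steps worked out above (using only the upper cutoff, as the example does):

```python
# Minimal sketch of the beta/power calculation: find the upper acceptance
# cutoff under Ho (mean 20), then compute the chance of falling below it
# when the true mean is 22.
from math import sqrt
from scipy import stats

n, s, alpha = 44, 4.0, 0.05
se = s / sqrt(n)                                  # ~ 0.603
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)     # 2.0167
cutoff = 20 + t_crit * se                         # 21.216

beta = stats.t.cdf((cutoff - 22) / se, df=n - 1)  # P(t < -1.3) ~ 0.10
power = 1 - beta                                  # ~ 0.90
print(f"beta = {beta:.4f}, power = {power:.2f}")
```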

Below is the statistical software output (Minitab version 15) using the same example:


What Influences the Power of the Test?


Three key factors affect the power of the test.

Factor 1

The difference, or effect size, affects power. If the difference one was trying to detect was not 2 but 1, the overlap between the original distribution and the alternative distribution would have been greater. Hence, β would increase and 1 – β, or power, would decrease.


Hence, as effect size increases, power will also increase.

Factor 2

Significance level, or α, affects power. Imagine using a significance level of 0.1 in the example instead. What would happen?

Table 2: Using a Different Significance Level

Significance Level   DF   Critical t   Value in Original Distribution
0.05                 43   2.016692     21.216
0.10                 43   1.681071     21.014

The critical t would shift from 2.0167 to 1.68. This makes β smaller and 1 – β larger. Hence, as the significance level of the test increases, the power of the test also increases. However, this comes at a high price, because the α risk also increases.


Factor 3

Sample size affects power. Why? Consider the following equation:

t = (x̄ – μ) / (s/√n) = effect size / SE mean

How can t be increased? As t increases, it becomes easier to reject Ho. One way is to increase the numerator or the effect size. As the effect size increases, power also increases. Also, as the denominator or the standard error of the mean (SE mean) decreases, t also will increase, and consequently the power of the test also will increase. How can the denominator be decreased? As the sample size increases, the SE mean decreases. Hence, as sample size increases, t also will increase and the power of the test also will increase.

Power Curve for the One-Sample t-Test

In general, to improve power, only the sample size can realistically be increased: the significance level is usually fixed (0.05 is the convention in Six Sigma work), and there is usually little that can be done to change the difference to be detected.

Since a power of 0.8 is considered good enough, one can use statistical software to find the sample size that needs to be collected prior to hypothesis testing in order to obtain a good power of the test.
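For example, here is a minimal sketch using statsmodels to solve for that sample size, assuming the standardized effect size from the earlier example (2/4 = 0.5):

```python
# Minimal sketch: solve for the sample size of a one-sample t-test that
# achieves power = 0.8 at alpha = 0.05 for a standardized effect of 0.5.
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                             alternative='two-sided')
print(f"required sample size: {n:.1f}")  # round up to the next whole unit
```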

Thursday, 10 October 2019

Understanding the Uses for Mood's Median Test

When comparing the average of two or more groups with the help of hypothesis tests, the assumption is that the data is a sample from a normally distributed population. That is why hypothesis tests such as the t-test, paired t-test and analysis of variance (ANOVA) are also called parametric tests.

Nonparametric tests do not make assumptions about a specific distribution. If assumptions do not hold, nonparametric tests are a better safeguard against drawing wrong conclusions.

The Mood’s median test is a nonparametric test that is used to test the equality of medians from two or more populations. Therefore, it provides a nonparametric alternative to the one-way ANOVA. The Mood’s median test works when the Y variable is continuous, discrete-ordinal or discrete-count, and the X variable is discrete with two or more attributes.

When to Use Mood’s Median Test


Examples for the usage of the Mood’s median test include:

◈ Comparing the medians of manufacturing cycle time (Y) of three different production lines (X = Lines A, B and C)

◈ Comparing the medians of the monthly satisfaction ratings (Y) of six customers (X) over the last two years

◈ Comparing the medians of the number of calls per week (Y) at a service hotline separated by four different call types (X = complaint, technical question, positive feedback or product info) over the last six months

A Project Example


A project team wants to determine what drives the lead times of quality control (QC) analyses. One potential X they analyze is the product (A, B or C). Thus, they collect the data of all analysis times over the last three months. A dot plot (Figure 1) of the data shows a lot of overlap between the lead times of the three product groups, but it is hard to tell whether there are significant differences.

Figure 1: Quality Control Analysis Time for Each Product

The team decides to use a hypothesis test to determine if there are “true differences” between the three product types or simply random differences due to the samples taken. 

A normality test (Figure 2) shows that the analysis times do not follow a normal distribution, so a parametric test such as the one-way ANOVA is not appropriate.

Figure 2: Normality Test of Quality Control Analysis Time

The team now has the choice between the nonparametric Kruskal-Wallis and the Mood’s median test. Because the latter is more robust against outliers and some extreme values are observed in the QC data, the team decides to use the Mood’s median test. 

The null hypothesis, Ho, is: The samples come from the same distribution, or there is no difference between the medians of the three products’ analysis times.

The alternative hypothesis, Ha, states: The samples come from different distributions (i.e., at least one median is different).

Although the Mood’s median test does not require normally distributed data, that does not mean that it is assumption free. The assumptions of Mood’s median test are that the data from each population is an independent random sample and the population distributions have the same shape. 

Testing for the same shape can ideally be done with a probability plot. A practitioner would look for a distribution that fits all three product groups.

In this case, the probability plot (Figure 3) shows that all data follows a lognormal distribution (p > 0.05), which is also typical for cycle time data. If the probability plot does not reveal a distribution that matches all groups under comparison, a visual check of the data may help. Do the distributions look similar (e.g., are they all left- or right-skewed, with only some extreme values)?


Figure 3: Lognormality Test of Quality Control Analysis Time

If the assumptions are met, the Mood’s median test can be conducted. If the p-value is less than the agreed Alpha risk of 5 percent (0.05), the null hypothesis is rejected and at least one significant difference can be assumed. For the QC analysis time, the p-value is 0.016 – in other words, less than 0.05. 

The 95 percent confidence intervals of the individual group medians now help to find where the significant difference is. The rule is: If there is no overlap between the confidence intervals, a significant difference can be assumed. In this example, at least product A and C have significantly different analysis times (Figure 4). 

Figure 4: Mood’s Median Test: Quality Control Analysis Time Versus Product

Mood’s median test for QC analysis time
Chi-square = 8.27; degrees of freedom (DF) = 2; p = 0.016

Product   N≤   N>   Median   Q3–Q1   Individual 95 percent CIs
A         20   10   1.02     2.37    (--*--------)
B         16   14   1.58     1.96    (--*------)
C          9   21   3.13     3.94    (-------*-----------)
                                     -----+---------+---------+---------+--
                                         1.2       2.4       3.6       4.8
(N≤ and N> are the counts of observations at or below, and above, the overall median.)
Overall median = 1.66

How the Mood’s Median Test Works 


The test statistic of the Mood’s median test is actually based on another well-known hypothesis test: the chi-square test. This test is usually used to find differences between the proportions of two or more groups. But how can it be used to compare medians?

First, practitioners should aggregate the original data into a two-way table following this procedure: 

1. Calculate the overall median of all the data (here: 1.66)

2. Calculate the number of observations per group less than or equal to the overall median and greater than the overall median. Note that only groups containing two or more observations are included in the analysis. If there are relatively few observations greater than the median due to ties with the median, then observations equal to the median may be counted with those greater than the median.

3. Display the data with a two-way contingency table (Table 1). 

Table 1: Two-way Contingency Table

Overall median = 1.66                         Product Type
Number of Observations…                       A     B     C
Less than or equal to the overall median      20    16    9
Greater than the overall median               10    14    21

The assumption (or null hypothesis) is that if there were no median difference between the groups, the percentage of values below and above the overall median would be equal for each group. The chi-square test can now be used to test this assumption. Low chi-square values support the assumption; large values indicate that the null hypothesis is false.

In this project example, the chi-square value is 8.27. The p-value of 0.016 indicates that the probability that such a chi-square value occurs if there are actually no differences between the product type groups is only 1.6 percent. Therefore, the practitioners can conclude that there is at least one significant difference between the groups, with just a 1.6 percent risk of being wrong.
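SciPy implements this same procedure as scipy.stats.median_test. Here is a minimal sketch on three small hypothetical samples standing in for products A, B and C (note that SciPy versions before 1.7 return a plain tuple rather than a result object):

```python
# Minimal sketch of Mood's median test via scipy. The three lists are
# hypothetical analysis times, not the article's data.
from scipy.stats import median_test

a = [1.1, 0.8, 1.5, 2.9, 0.9, 1.2, 3.0, 0.7, 1.6, 1.0]  # product A
b = [1.4, 1.7, 2.1, 1.2, 1.9, 1.5, 2.4, 1.3]            # product B
c = [3.2, 2.8, 4.1, 3.6, 2.5, 5.0]                      # product C

res = median_test(a, b, c)
print(f"chi-square = {res.statistic:.2f}, p = {res.pvalue:.3f}")
print(f"overall median = {res.median}")
print(res.table)  # row 0: counts above the grand median; row 1: at or below
```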

Wednesday, 9 October 2019

What is Lean Manufacturing?


Want better products, delivered in shorter times and at a lower cost? No, it is not just a dream.

With Lean management, you can have all three.

At its core, the philosophy of Lean management is to maximize customer value while minimizing waste. It is one of the most popular management systems and for good reason – no matter which industry you are in, Lean is a universal tool that can have a positive impact on any company’s performance.

It is not just a quick business hack – it is an entire business philosophy that has been around for decades and can have dramatic impacts on your business, from top to bottom.

In short, it can radically improve your business.

What Is Lean Management?


The foundations of Lean management, often shortened to just Lean, are built upon removing processes that do not bring value to the end product.

This is best described in the three key pillars of Lean (Figure 1):

1. Delivering value from your customers’ perspective

2. Eliminating waste (again, from your customers’ perspective)

3. Continuously improving your processes to better serve customers



Figure 1: 3 Pillars of Lean Management

While they sound simple, these three key pillars of Lean strategy can have staggering results in improving efficiency, productivity and time management. By stripping away everything that does not bring value to the final product or service, you create a business that is 100 percent efficient with absolutely no wastage.

The Roots of Lean


Lean was not born in a classroom. In fact, Lean originally comes from the car company Toyota, where the term was coined to describe the unique business model used by this uber-successful car manufacturing giant. The Lean movement is therefore often referred to as the Toyota Way because it derived from the Toyota Production System.

Leading the team that shared this with the world was Dr. James Womack, who went to Japan. Together with Daniel Jones and Daniel Roos he wrote The Machine That Changed the World, and later Womack and Jones wrote another book about the Toyota Way called Lean Thinking. Both books remain a centerpiece of Lean management education to this day.

Inspired by Toyota’s methodologies, Lean provides a comprehensive framework to help business owners answer the types of questions that matter the most, no matter what stage their business is at. Knowing where best to invest time or money can be critical for future performance: should I invest in new equipment, hire new people or make my staff work longer hours?

The answer lies in shorter lead times, higher quality and lower costs. And Lean management shows us that all three are possible.

While owners often think, “How can I grow my business faster?” under Lean, the real question is: “How can I provide more value to my customer?”

The Four Principles of Lean


To get a good idea of how Lean works, this video from Four Principles shows us more about this revolutionary management style.


As the video highlights, there are four central ideologies at the heart of Lean management:

◈ Pull: Rather than produce as much as possible, customer demand “pulls” goods and services through the manufacturing process, minimizing production, inventory and working capital.

◈ One-piece flow: Focusing on one piece at a time minimizes work in progress, process interruptions as well as lead and wait times, all while increasing quality and flexibility.

◈ Takt: Takt is the heartbeat of a Lean system and is defined as the pace at which you need to manufacture a product in order to meet demand. Takt allows you to balance work content, achieve a continuous flow and respond flexibly to changes in the marketplace. (A small worked example follows this list.)

◈ Zero defects: Mistakes happen, but a Lean company does not pass on defects. Mistakes from previous steps must be corrected before going on.
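As a small worked example of the takt calculation mentioned above (all figures are hypothetical):

```python
# Minimal sketch: takt time = available production time / customer demand.
available_minutes = 2 * 8 * 60 - 2 * 30  # two 8-hour shifts, minus breaks
daily_demand = 180                       # units the customer pulls per day

takt = available_minutes / daily_demand  # minutes available per unit
print(f"takt time: {takt:.1f} minutes per unit")  # pace work to this beat
```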

With these four principles, companies can respond quickly to changing consumer demands at very little cost; with no inventory, a massive shift in product type or service offering will not be a problem for Lean companies. The principles of pull, one-piece flow, takt and zero defects will help your company to stay ahead of the competition in a constantly changing marketplace.

The final piece to the Lean puzzle is vitally important and is the glue that holds the whole system together: continuous improvement.

5 Steps to Lean


Implementing Lean can be tricky. To help, there are five basic steps to achieve maximum efficiency, which can be summarized in the Lean improvement cycle (Figure 2).


Figure 2: Five Steps of Lean Management

1. Identify value: The first step in Lean implementation is to identify value from the standpoint of the end customer. To do this, you need to understand what drives sales. What is it that makes your company so good? What value are consumers getting from your company?

2. Value stream mapping: The second step involves mapping out all the steps for each individual product or service. This includes all the actions your business takes to deliver the identified value to your customer.

3. Create one-piece flow: Now that you’ve mapped out your value stream, what improvements can be made? Identify steps that do not create value and, if you can, eliminate them. If you can’t eliminate them, improve them. Make the value-creating steps occur in a tight sequence so the product flows smoothly to the customer in the fewest number of steps. Make it impossible to pass on defects down the chain.

4. Create pull: As flow is introduced, let customers pull value from the next upstream activity so that work is done only if there is a need for it. This step is key in eliminating waste – both resources and time.

5. Continuous improvement: After one run through of this process it is vital that you go back to the beginning and repeat this same process, again and again. Once flow and pull have been created, begin the cycle again and again, until you reach a state of perfection where every step in the process provides maximum value.

Over time, this continual refinement of production processes will eliminate bottlenecks and interruptions, increasing production speed without sacrificing quality. Information management becomes simpler and problems are more easily removed. Smoothing the process will remove any unwanted waste while giving employees a sense of value.

It is important that every employee feels valuable because you need to ensure that this process works from the ground up. That includes everyone from the receptionist right up to the CEO. By having a tight process line it is clear what everyone is working on and why.

There are various ways you can encourage this type of behavior at your workplace such as daily review sessions and rewards. The idea is to produce a stable production system where every step of the way creates value. This can only be achieved through continuous effort and improvement.

How Can Lean Benefit My Business?


While Lean seems particularly well suited to manufacturing type companies like Toyota, it doesn’t matter what type of business you run. Lean is a universal management tool that can have a positive impact on any company’s performance, whether you run a PR company, produce high-tech software or run a service business in the healthcare industry.

Lean focuses on the entire business – not just the manufacturing of a product – so it does not matter whether your business has a physical product or not. Unlike other management systems, Lean improves the entire value stream and not just an isolated point in the chain, which makes it easily replicable between business types.

For example, Lean has been instrumental for computing leader Intel to stay ahead of the competitive curve: In 2014, Joe Foley, factory manager at Intel Fab Operations in Ireland, said: “Five years ago, it took us 14 weeks to introduce a new chip to our factory; now it takes 10 days. We were the first Intel factory to achieve these times using Lean principles.”

At Seattle-based healthcare provider Virginia Mason, basic tenets of the Toyota Production System were combined with elements of Kaizen and Lean in 2002 to create a unique version of Lean called the Virginia Mason Production System, which is focused on improving patient safety and quality. Erica Cumbee, a faculty member at Virginia Mason Institute, said: “Focusing on the highest quality and safety means pursuing zero defects in health care by removing waste and designing mistake-proofed processes. The tools of the Virginia Mason Production System support this work, but it is the culture that sustains it.”

For John Deere, the agricultural machinery expert, it is a similar story: In 2003 John Deere spent $100 million on transforming its business model using Lean principles. Project manager Kallin Kurtz said: “This project transformed our manufacturing engineering mindset. We have put a great deal of effort into identifying non-value-added activities and eliminating them where possible.”

Transformation Through Lean

Implementing Lean requires a long-term perspective. It won’t happen overnight but can be achieved by a rigorous feedback cycle of constant improvement. The goal is to reach zero waste in the company and thus achieve maximum customer value.

By using the five-step improvement process to implement the key principles of Lean, you can encourage your business to work smarter, not harder. The methodology of Lean focuses on creating value at every level, which builds overall value for your company and when done correctly, can be transformational.

Through Lean, it is possible to have better products, made in a shorter time and at lower costs. It will take time, but continuous work at improvement will bring you closer to your Lean goal. Have trust in everyone throughout your organization, and over time your chances for success will rapidly increase.

Monday, 7 October 2019

Bridging Functional Silos to Achieve ‘Customer Impact’

During the Define phase, it is common that a project is made smaller and more manageable by limiting the scope of the business process it will address. This may, however, cause problems during the Measure and Analyze phases when root causes outside the project scope are found.

An exploration of this issue is found in a case study that demonstrates how one company employed an integrated approach to bridge functional silos and achieve “customer impact” when the project scopes appeared mismatched to the resources available. By combining upper level process mapping with a series of failure modes and effects analyses (FMEAs), the company effectively scoped and managed an organization-wide initiative with hundreds of Six Sigma projects.

Finance Project That Went Out-of-Scope


An accounts receivable project was initiated in finance where only the latter end of the transaction with the customer was deemed as in-scope for the team. As the team brainstormed the causes of late payments, it found one of the largest causes of “late” payments was from mismatches in terms and conditions of service contracts. In short, the customers were not paying on time because there was a difference between the terms and conditions and what was delivered. Unfortunately, given the authority of the team, this root cause was deemed out-of-scope.

The project would have been too large if it were expanded to include the entire transaction, but the limited scope given to the original project would have resulted in an ineffective solution. Figure 1 shows one of many rework loops.


Figure 1: High-Level Map of Quote-to-Remittance Process with Rework Loop

The finance Master Black Belt on the project came to the quality leader with what was beginning to become a trend in the company – solving problems by throwing them over the wall to another part of the company, or executing sub-optimal solutions within business functions. The problem with the Six Sigma implementation at this business was one of trying to assign process ownership to systemic problems, when the project Champions, sponsors, Master Black Belts, Black Belts and projects were all being managed at a business function level. Since projects were managed at the business function level, it was extremely difficult to assign ownership to projects that crossed the functional boundaries, hence solutions tended to be sub-optimal.

The quality leader for the entire business initiated a program with the purpose of making an impact on the customer while identifying and managing hundreds of individual projects within the business. The quality leader led “customer impact” projects using Six Sigma specifically for the purpose of defining roles and responsibilities and managing critical-to-quality (CTQ) elements. The CEO was the project sponsor and Champion. The operational definition of a defect for “customer impact” was: “Anytime the customer does not receive what we agreed to, when we agreed to supply it to them.” This project was not considered closed until all of the associated Six Sigma projects that impacted the problem had been identified, defined, executed and closed.

Defining and Mapping ‘Customer Impact’


The quality leader set out to map the entire quote-to-remittance process in detail across all the functional groups of the company. Since the company had more than 5,000 employees at various worldwide locations the entire process had never been mapped in great detail – it was simply too big.

Since the process was so big, it had to be broken into manageable pieces while preserving the integrity of the overall transactional flow. Process mapping for each functional group within the company was assigned to the respective Master Black Belt for that business function. The Master Black Belts coordinated the process mapping within their business functions. The quality leader provided specific guidance to ensure that the process mapping was done consistently across the functional areas, according to a common set of conventions for symbols, and to an appropriate level of detail. A clear definition of the handoff points between functional areas was particularly important.

When the functional process mapping was complete, each Master Black Belt brought their process maps to a “customer impact” war room to assemble them into a master process map of the entire transaction. The process maps were spaced out and taped to the whiteboard walls of the room. Even though this map contained only a moderate amount of detail, it had more than 500 steps and covered three walls of the room.

The next step was to define the rework loops between the functional groups. The subsequent data collection for this phase of “customer impact” was mostly centered on gathering data about rework and cross-functional problems. Cross-functional rework loops were added to the components of the master process map using dry erase markers. These cross-functional rework loops (Figure 2) were used to define the details of a communication plan, as well as to assign Champion ownership to specific project teams so that problems could be investigated and corrected.


Figure 2: Rework Loops Added to Quote-to-Remittance Process Maps from All Functional Units

Process FMEA for ‘Customer Impact’


Even though the company had defined the overall process, it still lacked a clear representation of the relative size of each contribution to the overall problem. The Master Black Belts were then asked to go back to their “customer impact” teams to first construct a more detailed process map to be used for diagnosis, then initiate a detailed FMEA of their section of the overall process map, conduct interviews with process owners and review any available historical data. Once again, the quality leader was active in defining common scales of measurement and scope of impact. The severity rating used in the FMEAs was addressed in terms of the impact on the customer of the sub-process – in other words, the next functional unit in the master process map. The addition of the cross-functional rework loops allowed the teams to evaluate the impact of defects outside their process areas without having to expand the scope of projects to include the entire process. These evaluations were conducted as defined in the new communication plan. Figure 3 shows the completed FMEAs and risk priority number (RPN) Pareto charts for the first two functional areas, sales and contract preparation.

When all FMEAs were completed for each of the functional areas, they were combined into one master FMEA and Pareto chart. The quality leader used the master Pareto chart to divide the ownership of problems between functional areas. This allowed him to see the vital Xs of a very large and complex process from a single, common viewpoint.


Figure 3: FMEAs for Sales and Contract Preparation for the Functional Areas

Six Sigma Project Management


The quality leader used the master process map, master FMEA and master Pareto chart to coordinate the entire set of projects needed to achieve the desired “customer impact.” He defined defect reduction themes within each functional area that would have an effect at the transactional level.

The Master Black Belts took the relevant defect reduction targets, and copies of the master process map and master FMEA back to their functional areas. There they defined projects, assigned project sponsors and Black Belts, executed and closed out projects and reported their progress back to the quality leader. The individual defect reduction themes within functional areas of the business were thus aligned with customer satisfaction at the upper level. The master process map allowed tightly scoped projects to be defined within functional areas while preventing sub-optimal solutions.

Saturday, 5 October 2019

How to Avoid Common Mistakes When Measuring Performance

In manufacturing, key quality indices – process capability index (Cpk), defects per million opportunities (DPMO) and first pass yield (FPY) – are prevalent criteria for gauging the performance of products and processes. These indices, however, are often interpreted wrongly and used without taking into account the conditions of application. Moreover, alternative indices such as rolled throughput yield (RTY) are sometimes ignored. The following case studies illustrate the proper use of Cpk, DPMO and FPY, and can be used as a guide for practitioners who apply these indices.

Problems with Cpk


Despite the importance placed on Cpk, continuous improvement practitioners often face problems when applying this indicator.

Losing Sight of Distribution

Cpk is calculated based on the premise that a process is statistically controlled (stable) and that product data follows a normal distribution. Processes can be deemed in or out of control based on process data compiled in an Xbar-R chart. The process is out of control if any of the following occurs (a minimal check of these rules is sketched after the list):

1. A single point exceeds the 3 sigma control limits.
2. At least seven successive points fall on the same side of the centerline.
3. Seven successive points occur in ascending or descending order.
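Here is a minimal sketch of these three rules applied to a series of subgroup means (the data, centerline and sigma values are hypothetical):

```python
# Minimal sketch: check the three out-of-control rules on an Xbar chart.
import numpy as np

def out_of_control(means, center, sigma_xbar):
    ucl, lcl = center + 3 * sigma_xbar, center - 3 * sigma_xbar
    # Rule 1: a single point beyond the 3 sigma control limits.
    rule1 = bool(np.any((means > ucl) | (means < lcl)))
    # Rule 2: at least seven successive points on one side of the centerline.
    side = np.sign(means - center)
    rule2 = any(abs(side[i:i + 7].sum()) == 7 for i in range(len(means) - 6))
    # Rule 3: seven successive points ascending or descending
    # (equivalently, six consecutive differences of the same sign).
    diffs = np.sign(np.diff(means))
    rule3 = any(abs(diffs[i:i + 6].sum()) == 6 for i in range(len(diffs) - 5))
    return rule1 or rule2 or rule3

means = np.array([7.90, 7.95, 7.92, 7.97, 8.01, 8.03, 8.05, 8.08, 8.10, 8.12])
print(out_of_control(means, center=8.0, sigma_xbar=0.05))  # True: rule 3 trips
```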

Cpk is typically calculated using this equation:

Cpk = min[(USL – x̄) / 3σ̂, (x̄ – LSL) / 3σ̂], where

the estimated process standard deviation is σ̂ = R̄/d2, x̄ is the process mean, R̄ is the mean range value, d2 is a constant determined by subgroup size, and LSL and USL stand for lower specification limit and upper specification limit, respectively.

The following case study illustrates the application of Cpk in conjunction with an Xbar-R chart. Supplier A produces rubber pads with a thickness specification between 7.56 mm and 8.32 mm. In order to evaluate the performance of the process, 45 finished products are randomly chosen for analysis (Figure 1).


Figure 1: Process Capability of Thickness

The Xbar-R chart produced in the analysis demonstrates that the process is under control. Nonetheless, the normal probability plot, which specifically tests normal distribution, shows that the 45 samples do not conform to a normal distribution because the p-value (0.005) is far less than 0.05. Based on hypothesis testing, if the p-value is less than 0.05, the practitioner can reject the null hypothesis that the population follows a normal distribution. The capability histogram also displays the pattern of non-normal distribution. Therefore, the Cpk value of 0.53 fails to reflect the true process performance, showing that the Cpk statistic should be ignored when the data distribution is substantially non-normal.

In real-world situations, it’s common for raw data not to be distributed normally; usually it fits other distribution patterns such as lognormal, exponential or Weibull. Statistical analysis software was used to verify whether the 45 samples from this case followed Weibull distribution (Figure 2).


Figure 2: Probability Plot of Thickness

The data did not conform to a Weibull distribution at a 95 percent confidence level; the p-value (0.018) is less than 0.05. Upon further analysis, the data is proven not to conform to either a lognormal or an exponential distribution. Under these circumstances, a Box-Cox transformation is used to transform the data before Cpk is calculated (Figure 3).
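Here is a minimal sketch of this transformation step, using SciPy's Box-Cox routine on stand-in data (the specification limits are those from the case study; the data itself is simulated):

```python
# Minimal sketch: Box-Cox transform skewed data, transform the spec limits
# with the same lambda, then compute a capability index on the new scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
thickness = rng.lognormal(mean=2.07, sigma=0.05, size=45)  # skewed stand-in

transformed, lam = stats.boxcox(thickness)  # lambda chosen by max likelihood

LSL, USL = 7.56, 8.32
# The same transform must be applied to the limits (for lambda near 0 this
# approaches a log transform).
lsl_t = (LSL ** lam - 1) / lam
usl_t = (USL ** lam - 1) / lam

xbar, s = transformed.mean(), transformed.std(ddof=1)
ppk = min((usl_t - xbar) / (3 * s), (xbar - lsl_t) / (3 * s))
print(f"lambda = {lam:.2f}, Ppk on the transformed scale = {ppk:.2f}")
```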


Figure 3: Box-Cox Transformation of Sample Data

Ultimately, Cpk turned out to be 0.20, which is dramatically different from the value of 0.53 before transformation. However, the curves in Figure 3 disclosed that the transformed data remained non-normally distributed. That is, the raw data was characterized by irregularity, so there was no point in computing Cpk. In this case, nonconforming DPMO was used to gauge process performance. Figure 3 shows the short-term and long-term DPMO were approximately 322,000 and 341,000, respectively.

Based on this case study, it is unreasonable to calculate Cpk directly without considering the distribution pattern. Furthermore, an Xbar-R chart is an indispensable tool for monitoring process status and should be required for analysis. As for supplier A, its rubber pads are made from scrap tires. Most of the time, scrap tires have been abraded unevenly and some chunks of tire may have come off; therefore, uniform thickness is not guaranteed, which is why the small Cpk emerged. From a manufacturing perspective, one approach to enhance the Cpk value of the process would be to fabricate rubber pads out of raw rubber instead of scrap tires. Of course, the additional cost from this process change should be taken into consideration. Because of the narrow profit margin for rubber pads, it is inadvisable to control the Cpk value of the process beyond 1.33, as long as thickness is qualified. 

Multiple Machines and Operators


In mass production, various operators using multiple identical machines aim to make identical products, which all must meet set thresholds in characteristics such as shaft diameter and sheet thickness. Accordingly, disparities in operators’ skills and machine performance are worth consideration when calculating Cpk. 

The following case study illustrates this point: In a compressor factory, two operators are assigned to spray paint onto crankcases manually. Each operator has his own spray gun. All of the wet paint is prepared by a process engineer. A quality inspector routinely performs random checks of 10 crankcase surfaces for paint thickness. To determine the spraying process capability, seven units painted by the two operators are also randomly chosen and their paint thickness spot checked (Figure 4).


Figure 4: Process Capability of Paint Thickness Measured Oct. 24

The normal probability plot in Figure 4 shows a p-value of 0.822, greater than 0.05, signifying that the thickness measurements from Oct. 24 follow a normal distribution. However, the histogram signals a bimodal distribution – a mixture of two normal distributions. The root cause of this phenomenon is the use of two spray guns. Bimodal distributions are commonplace in processes where two pieces of manufacturing equipment are employed. Care should be exercised when multiple identical machines produce the same parts, because the pooled distribution may appear non-normal, and troubleshooting is difficult if the data is not categorized by machine and operator.
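A quick simulation illustrates the trap. In the sketch below (synthetic data, hypothetical gun means), each gun's output passes a normality test while the pooled sample fails it:

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)
gun_a = rng.normal(loc=80.0, scale=2.0, size=35)  # hypothetical thickness, gun A
gun_b = rng.normal(loc=92.0, scale=2.0, size=35)  # hypothetical thickness, gun B
pooled = np.concatenate([gun_a, gun_b])

for label, data in [("gun A", gun_a), ("gun B", gun_b), ("pooled", pooled)]:
    _, p = stats.shapiro(data)
    print(f"{label:>6}: Shapiro-Wilk p = {p:.4f}")
# Each gun's data typically passes the test; the pooled, bimodal sample fails.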

In this case, one operator was well trained and experienced, while the other had comparatively little experience. In addition, one of the operators was usually in a hurry to complete the spraying task, so the irregular distribution of paint thickness is not surprising (Figure 5). A Cpk value calculated from the pooled data under these conditions is misleading.


Figure 5: Histogram with Isolated Islands

For quality control, it is much better to carry out the Cpk analysis classified by operator in order to distinguish the abilities of different operators. To diagnose the capability of the equipment itself, the machine capability index Cmk should be used:

Cmk = min[(USL - x̄) / 3s, (x̄ - LSL) / 3s], where

s is the standard deviation of the samples:

s = √[Σ(xᵢ - x̄)² / (n - 1)]

The difference between Cpk and Cmk lies in the denominators of the equations: Cmk uses the sample standard deviation from a short run of a single machine, rather than the within-subgroup estimate used for Cpk. Commonly, machine capability is considered acceptable when Cmk is greater than 1.67. In the case of paint thickness, if the Cmk values of the two spray guns differ greatly from each other, the thickness data of crankcases painted by the two guns will follow a bimodal distribution. Note that the Cmk calculation assumes that variability from materials, human factors and the environment has been removed.
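A minimal sketch of the Cmk calculation, with a hypothetical trial run and hypothetical specification limits, looks like this:

import numpy as np

LSL, USL = 85.0, 95.0                           # hypothetical thickness limits
run = np.array([88.1, 90.3, 89.5, 91.0, 88.8,   # hypothetical short trial run
                90.6, 89.9, 90.2, 89.1, 90.4])  # from a single spray gun

x_bar = run.mean()
s = run.std(ddof=1)  # sample standard deviation, as in the formula above
cmk = min((USL - x_bar) / (3 * s), (x_bar - LSL) / (3 * s))
print(f"Cmk = {cmk:.2f} (acceptable if greater than 1.67)")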

Problems with DPMO


DPMO is the most frequently used yardstick for evaluating product quality. In quality management, DPMO is often associated with the defect percentage of a given population. One example of its application: in a factory that produces chillers, the DPMO equation is

DPMO = (defective units/total units sold) x 1,000,000

In general, the total units sold monthly ranges from 200 to 700. Table 1 shows the relation between total units sold and defective units at various sigma quality levels. Note that a shift of 1.5 sigma is considered when converting sigma into DPMO. 

Table 1: Relation Between Total Units Sold and Defective Units

Sigma Level | Total Units Sold Monthly | Defective Units | Sigma Level | Total Units Sold Monthly | Defective Units
4 sigma | 161 | 1 | 3 sigma | 210 | 14
4 sigma | 322 | 2 | 3 sigma | 449 | 30
4 sigma | 802 | 5 | 3 sigma | 704 | 47
4.5 sigma | 741 | 1 | 3.5 sigma | 220 | 5
4.5 sigma | 1,482 | 2 | 3.5 sigma | 703 | 16

This table indicates that the factory cannot demonstrate a 4.5 sigma quality level unless at least 741 units are sold monthly with only one nonconforming unit, or at least 1,482 units are sold monthly with no more than two defective units. Because a zero-defect rate is rare, the paramount constraint on achieving a 4.5 sigma level is total units sold; in other words, market demand and sales volume become the deciding factors in the reported sigma quality level. There is, however, a reasonable chance that a 4 sigma level can be met in this factory if proper engineering, manufacturing and management actions are taken and the process is monitored strictly.
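The conversions behind Table 1 are easy to reproduce. The sketch below assumes the conventional 1.5-sigma shift and recovers the 4 sigma rows of the table:

from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    # Long-term defects per million at a given short-term sigma level
    return norm.sf(sigma_level - shift) * 1_000_000

print(f"4 sigma   -> {dpmo(4.0):>7,.0f} DPMO")  # about 6,210
print(f"4.5 sigma -> {dpmo(4.5):>7,.0f} DPMO")  # about 1,350

# Reproducing the 4 sigma rows of Table 1
for sold in (161, 322, 802):
    allowed = sold * dpmo(4.0) / 1_000_000
    print(f"{sold:>4} units sold -> about {allowed:.0f} defective unit(s) allowed")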

Practitioners should also calculate DPMO by component category – compressors, valves and so on. In the factory here, each chiller is equipped with four compressors, the component with the highest defect rate. Suppose 700 chillers (2,800 compressors) are sold monthly and 10 compressors have quality problems. Relative to the 700 chillers sold, the DPMO attributable to compressor defects is (10 ÷ 700) x 1,000,000 ≈ 14,286; note that computing it against the 2,800 compressors themselves would instead give about 3,571, so the choice of denominator must be stated explicitly. This category-level metric is more meaningful than the total DPMO of nonconforming chillers because it helps prioritize projects for quality improvement. Factories naturally put the most energy into the issues that account for the largest portion of quality cost, and calculating the DPMO of each individual issue helps decision makers understand each one and solve problems cost-efficiently.
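The arithmetic is trivial but worth making explicit, because the result changes by a factor of four depending on the denominator:

chillers_sold = 700
compressors = chillers_sold * 4   # four compressors per chiller
defective = 10

print(f"Per chiller:    {defective / chillers_sold * 1_000_000:,.0f} DPMO")  # about 14,286
print(f"Per compressor: {defective / compressors * 1_000_000:,.0f} DPMO")    # about 3,571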

Problems with FPY


The metric FPY is used to assess the performance of a process; rework and repairs are not part of the FPY calculation. Once rework enters the picture, rolled throughput yield (RTY) is the better metric. RTY is obtained by multiplying together the qualification rates of the individual process steps.

For instance, suppose 100 units go through 10 operations in an entire process (Figure 6). Faulty units are detected at various steps of the process; some can be repaired and returned to the operation, while others are scrapped.


Figure 6: Process Flow Chart

In this case, FPY = 96/100 = 96 percent, because four units were scrapped over the whole process. However, RTY = (100-4)/100 x (99-3)/99 x (97-3)/97 = 90.2 percent: only 90.2 percent of units passed every inspection the first time, while six percent of the units became usable only after repair. RTY is more informative than FPY because it conveys the qualification rate of each workstation as well as the overall picture of scrap, rework and repair. Practitioners should calculate both FPY and RTY for a panoramic view of process quality.
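The arithmetic can be captured in a few lines; the sketch below uses the unit counts from Figure 6:

def rolled_throughput_yield(step_yields):
    # RTY is the product of the first-pass yields of the individual steps
    rty = 1.0
    for y in step_yields:
        rty *= y
    return rty

# (units entering, units failing first inspection) at the three steps with defects
steps = [(100, 4), (99, 3), (97, 3)]
step_yields = [(n - d) / n for n, d in steps]

fpy = 96 / 100  # four units scrapped across the whole process
print(f"FPY = {fpy:.1%}, RTY = {rolled_throughput_yield(step_yields):.1%}")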

Know the Metrics


The quality indices Cpk, DPMO and FPY are used extensively in many enterprises; however, computing these metrics should not become a superficially mechanical task. Practitioners should take the time to fully understand the conditions under which each metric applies. They should also keep in mind the value of Xbar-R charts and of alternative performance metrics such as RTY.