Friday 29 January 2021

Combining Six Sigma and CMMI Can Accelerate Improvements

Capability Maturity Model Integration (CMMI) is a framework for business process improvement. Like any model, CMMI reflects one version of reality, and like most models, it may be more idealistic than realistic in some ways. Organizations can use the model as a jumping-off point to create process improvement solutions that fit their unique development environment.

One way organizations have adapted CMMI is to integrate it with the Six Sigma DMAIC roadmap. The two frameworks complement each other's strengths, and combining them can help accelerate quality improvements.

Six Sigma Tools in CMMI


Like a Six Sigma project, CMMI is essentially process-oriented, and each follows a defined progression. For a typical Six Sigma project, the phases are Define, Measure, Analyze, Improve and Control. CMMI follows maturity levels, from L2 (Managed) to L3 (Defined) to L4 (Quantitatively Managed) to L5 (Optimizing). Some of the tools used when following the DMAIC roadmap also can aid an organization in reaching the next CMMI level (Table 1).

Table 1: Six Sigma-CMMI Matrix

Level: Process Area – Linked Six Sigma Phase

L2 (Managed)
◉ Process and product quality assurance
◉ Measurement and analysis – Measure
◉ Configuration management
◉ Supplier agreement management
◉ Project monitoring and control – Control
◉ Project planning – Define
◉ Requirements management – Analyze

L3 (Defined)
◉ Decision analysis and resolution – Analyze
◉ Organizational training – Improve
◉ Organizational process definition – Define
◉ Organizational process focus – Improve
◉ Risk management
◉ Integrated project management – Define, Control
◉ Validation
◉ Verification
◉ Product integration
◉ Technical solution
◉ Requirements development – Define

L4 (Quantitatively Managed)
◉ Organizational process performance – Analyze
◉ Quantitative project management – Measure, Analyze, Control

L5 (Optimizing)
◉ Causal analysis and resolution – Analyze
◉ Organizational innovation and deployment – Improve

Many of the process areas can be linked to corresponding DMAIC phases, as shown in Figure 1.

Figure 1: DMAIC and CMMI

This mapping can help spread the CMMI message through the more widely known Six Sigma approach, and it may help dispel the myth that CMMI is used predominantly in the IT sector: the model applies irrespective of industry or domain.

Case Study


When one service-industry company decided to pursue CMMI L5 certification, it leveraged the strengths of Six Sigma, using the method as a springboard to achieve the certification in less time, without redoing documentation or reinventing the wheel.

To transition to L5, the company needed to make quality improvements. They did this by adopting Six Sigma to help stabilize processes and reduce variation.

For example, one of the metrics the company was trying to improve was cost of quality (COQ). This was measured as the total hours spent on quality control and rework for a deliverable, divided by the total hours taken to complete the deliverable, multiplied by 100. The team approached this issue using DMAIC.
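To make the metric concrete, here is a minimal sketch of the COQ calculation in Python; the hours are hypothetical, not figures from the case study:

    # Hypothetical COQ calculation for one deliverable (illustrative numbers only)
    qc_hours = 9.0       # hours spent on quality control for the deliverable
    rework_hours = 6.0   # hours spent reworking defects found
    total_hours = 100.0  # total hours taken to complete the deliverable

    coq = (qc_hours + rework_hours) / total_hours * 100
    print(f"COQ = {coq:.1f}%")  # COQ = 15.0%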

Define

The team used voice of the customer and a SIPOC (suppliers, input, process, output, customers) diagram to define the problem and identify critical-to-quality characteristics. They defined the problem as high cost of quality, where COQ for the month was the average of the COQ values for that month's deliverables.

Measure

They measured the current COQ level, using the formula above.

Analyze

The analysis involved identifying the types of errors in the deliverables, counting the errors identified, and capturing both the quality control time taken to find those errors and the time taken to rectify them. The team created a Pareto chart of the COQ for the various deliverables. Then, they analyzed the causes of the errors that were driving the high COQ.

Improve

The team brainstormed to determine an action plan for addressing the root causes. They implemented the action items and gathered data for the next set of deliverables.

Next, they calculated the COQ for the new set of deliverables. Using the two-sample t-test, they compared the new average against the corresponding average before implementing the actions.
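A minimal sketch of that comparison in Python, assuming the per-deliverable COQ values are collected into two lists (the values below are hypothetical):

    from scipy import stats

    # Hypothetical per-deliverable COQ values (%) before and after the actions
    coq_before = [18.2, 15.7, 21.4, 17.9, 19.3, 16.8, 20.1]
    coq_after = [12.4, 10.9, 14.2, 11.7, 13.1, 12.8, 11.5]

    # Welch's two-sample t-test (does not assume equal variances)
    t_stat, p_value = stats.ttest_ind(coq_before, coq_after, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A p-value below 0.05 suggests the drop in average COQ is statistically significant.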

Control

The change in the averages indicated the effectiveness of the process change. The team continued to monitor the COQ data using a control chart for the next few months; any recurrence of an out-of-control condition would be handled by following the DMAIC approach again.
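For a single COQ value per period, an individuals (I) chart is a common choice. Here is a minimal sketch of the control limits, computed from the average moving range; the monthly values are hypothetical:

    # Individuals (I) chart limits for monthly COQ (illustrative data only)
    monthly_coq = [13.1, 12.4, 13.8, 12.9, 13.5, 12.2, 13.0]

    center = sum(monthly_coq) / len(monthly_coq)
    moving_ranges = [abs(b - a) for a, b in zip(monthly_coq, monthly_coq[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)

    # 2.66 = 3 / d2 (d2 = 1.128 for a moving range of two observations)
    ucl = center + 2.66 * mr_bar
    lcl = max(center - 2.66 * mr_bar, 0.0)  # COQ cannot go below zero
    print(f"CL = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")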

Once the process is established and mature, and the variation in the data has been stable for a couple of months, the internal target is redefined and the bar is raised further. This revised target becomes the goal for the subsequent months (Figure 2). This demonstrates the maturity of the process and the effectiveness of Six Sigma in achieving and sustaining CMMI L5.

Figure 2: Achieving CMMI L5 Through Use of Six Sigma

Recognizing the Differences


Although Six Sigma can help a company achieve higher levels in CMMI, the two methods do have some significant differences (Table 2).

Table 2: Differences Between Six Sigma and CMMI

Six Sigma | CMMI
Assumes processes have been identified and defined | Focuses on defining management and technical processes early
Does not distinguish between organizational standards and project processes | Uses organizational process definition to capture best practices
Emphasizes training to motivate and communicate skills | Emphasizes infrastructure to ensure that key processes are addressed
Relies on statistical methods to manage performance | Intends a statistical approach, but it is often not implemented
Focuses on learning from internal experience and data | Adds mechanisms to leverage external technology
Prioritizes efforts based on business payoff | Has a weak, often ignored link to strategic planning
Certifies individual practitioners, not organizations | Certifies assessors and organizations, not practitioners

Wednesday 27 January 2021

The trends we are carrying forward for an improved future

As we head into a new year there are many reasons to feel optimistic. The trends which defined 2020 can be carried forward and, despite the doom and gloom, there is much emerging from the pandemic that we can celebrate and take pride in. By continuing to embrace these developments we can look to an improved future.

Rapid project development

In the past 12 months medical tech has achieved great things. The rapid pace at which treatments, testing and vaccines have been developed for the COVID-19 coronavirus is astounding! As well as being a sign of hope for the future of vaccinations, disease treatment and cure, this has also proven to be a key lesson in project management. With the right funding, a sense of urgency and improved resources, phenomenal results can be achieved.

Vaccine development project model timeline

Online learning

Whilst we may be sick of online meetings, Zoom family quizzes and video-conferencing, online learning is a trend we are sure to carry forward. After all, why should our physical location restrict our development? Here at PRINCE2 we have seen a vast uptake of our virtual learning and e-learning courses, and this is a trend sure to continue.

Lockdowns have seen us seeking out new ways to develop our skills, and the lasting effect is that we have learned the value of continual learning. It is good for our mental health and well-being as well as for our career goals. With online learning, upskilling is accessible and flexible.

Local community

Clapping on our doorsteps may be over, but through lockdowns we have got to know our neighbours like never before. The pandemic has instilled a strong sense of community which is set to continue. We are shopping locally and choosing independent businesses over large chains more than ever. This shake-up in our habits as consumers has undoubtedly been the key to survival for small businesses.

What’s more, there is an increasing desire to know the people behind the business. As consumers, we are making more meaningful and mindful choices – shopping in the community and seeking out local companies to support. For small, local businesses to thrive they must connect with the community.

The environment and the planet

Climate change has been a key concern for some time, but the wake-up call for many of us came when our headlines were flooded with reports of the environmental change lockdowns had made. The lack of road traffic meant that air pollution fell by up to 50% in London and other major cities, and worldwide greenhouse gas emissions were slashed.

For a short time the environment saw the benefits of a slowed society, and there is a real desire for that to continue. A hot topic for 2021 and beyond is going to be ‘green recovery’: placing sustainability and tackling climate change at the forefront of business and of all we do.

Flexible working

Flexible working has long been on our career wishlists, but until last year it was simply not available to most professionals. The pandemic gave millions of workers the opportunity to prove that flexible working could work. And they did just that.

Free from the 9-5 and the wasted hours commuting, professionals benefitted from an improved work/life balance. They showed themselves to be just as (if not more) productive by working to their own schedules. And so, it is no surprise that huge numbers of companies are set to continue to embrace flexible and remote working well into the future.

5G Connectivity

The past year has seen a huge increase in demand for higher-speed internet and well-connected homes. After all, we are innovating and accelerating at a pace never seen before. The hot topic of 5G connectivity is therefore a trend we are sure to continue to hear plenty about in 2021. We are on track to see new infrastructure, shifts towards smarter cities, and investments in 5G and 6G technologies.

Electronic travel

Looking to the future, all factors point to us increasingly using electric travel to get around. For one, a slowed pace of life at the hands of lockdowns has seen us increasingly get on our bikes, e-bikes and e-scooters. What’s more, cities across the world are investing in dedicated cycle paths in anticipation of people making greener travel choices. Lastly, here in the UK, new diesel and petrol car sales will be banned after 2030, which is driving an interest in electric alternatives.

Physical digital experiences

Social isolation has increased our reliance on digital interactions. The knock-on effect is that there is an increasing desire for digital experiences to be more physical and more human. In this way, organisations have been forced to innovate and embrace digital in order to satisfy their customers.

2021 will see companies investing in virtual interfaces and augmented reality technologies. Examples we have already seen include the use of virtual avatars to ‘try on’ everything from clothes and make-up to glasses. Businesses will be looking at how they can transform to deliver their services in a digital form which still feels like a personalised and connected experience.

An improved culture

To round it up, 2021 is set to be a continuation of 2020, but that doesn’t have to be a bad thing. There are so many great evolutions from the past year that we can carry forward in order to shift closer to a better future. Our values have shifted, but so too have our expectations. We are valuing philanthropy, an improved culture and a new normal with innovation at its core.

Source: prince2.com

Monday 25 January 2021

Using DMAIC to Improve Another Improvement Process – CAPA

Six Sigma and its DMAIC (Define, Measure, Analyze, Improve, Control) methodology provide a structured process for solving problems and improving processes. For this project, our team used DMAIC to improve a problem-solving process used in the medical industry – the CAPA process.

Medical device companies are required to demonstrate compliance with the Food and Drug Administration (FDA) regulation 21 CFR 820.100, Corrective Action and Preventive Action (CAPA), to be able to sell medical devices in the United States.

The CAPA process at Medtronic complies with the regulations of the FDA and applicable international standards to address quality issues – device complaints, non-conformances and audit findings. The CAPA process is divided into three key phases, which align with the DMAIC phases as shown in the table below.

CAPA Phase | What Happens in Phase | DMAIC Phase Correlation
Investigation | Determine root cause | Define, Measure, Analyze
Action | Take corrective action | Improve
Effectiveness | Verify the success of the corrective action | Control

Overview


In this project, the team of quality managers and engineers used a DMAIC process to improve the CAPA process. The existing process was complex, leading to several inefficiencies including rework of CAPA tasks and delays in getting the tasks completed on time. The CAPA owners faced several challenges in writing the tasks and needed guidance to complete their work.

The team gathered the voice of the internal customers by:

◉ Soliciting feedback from CAPA owners (people who were responsible for following the CAPA problem-solving process to resolve a specific problem)
◉ Performing KJ analysis or affinity diagramming to group the voices by common themes
◉ Prioritizing sets of customer needs and converting them to measurable requirements

The measurable requirements were flowed down; concepts were generated using TRIZ (the Theory of Inventive Problem Solving – more on this later), and a concept was selected using a Pugh matrix. Risks were evaluated, and the potential favorable or unfavorable impact was statistically modeled using Monte Carlo simulation.

Define


The Define phase began with gathering the voice of the customer. The team gave stakeholders of the CAPA process and quality managers the survey shown in Figure 1, a questionnaire asking individuals what was going well with the CAPA process, what was not going well and what they would recommend to improve it. Each question was rated from 0 to 10, where 0 is the worst score and 10 is the best. The data gathered from this survey gave the team insight into the customer needs: the CAPA process stakeholders needed guidance and examples to help with writing a new CAPA.

Figure 1: Results of the CAPA Process Survey

To gather ideas on the type of platform to use to provide the guidance material to CAPA owners, a KJ analysis (similar to affinity diagramming) was performed with the stakeholders of the CAPA process, including CAPA owners and quality managers. The stakeholders brainstormed ideas based on the customer inputs; ideas were written on sticky notes, organized on a white board and prioritized based on the key customer themes (Figure 2).

Figure 2: Results of KJ Analysis

The KJ analysis identified the main theme to improve: provide CAPA owners with easily accessible examples and templates that they can quickly look up while writing their CAPA tasks.

This proposed solution took the form of a CAPA Portal: a web-based system that provides several templates and examples to complete CAPAs while meeting the compliance requirements and international standards.

Measure


The measurable requirements for the CAPA process are timely completion and few rework cycles, while meeting the FDA’s quality and compliance requirements. The CAPA Portal was developed with these three requirements for measurement:

1. CAPA disposition time: The total time to resolve a problem through the CAPA process
2. Number of CAPA rework loops: The number of times each step in the CAPA process must be repeated to fix issues
3. CAPA resolution time: The time that it takes to resolve a CAPA issue

These key requirements were flowed down from customer expectations using the House of Quality partially shown in Figure 3, and the team prioritized the requirements for the CAPA Portal. The prioritization involved assessing how well each measurable system requirement (left to right) could fulfill each customer requirement (top to bottom). If the system requirement could strongly improve meeting a specific customer requirement, an H for High was entered and assigned a relative value of 9. If there was a medium improvement, an M for Medium and a relative value of 3 was assigned. If there was a low improvement, then an L for Low and a relative value of 1 was assigned. The requirements were then flowed down to sub-system, component and lower-level component requirements.

Six Sigma DMAIC, Six Sigma Preparation, Six Sigma Career, Six Sigma Leaning, Six Sigma Guides
Figure 3: House of Quality

For each column associated with each system requirement, the value of 1, 3 or 9 was multiplied by the relative importance (“Imp”) of the associated customer requirement and summed for the column. This resulted in high priorities for the CAPA Portal (aka the “CAPA Playbook”) to provide guidance, along with interface system requirements and expectations for a CAPA Dashboard to summarize progress and results.
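A minimal sketch of that column scoring, with hypothetical requirement names, importance weights and ratings (the real matrix is only partially shown in Figure 3):

    # House of Quality scoring: H = 9, M = 3, L = 1, blank cell = 0 (illustrative data only)
    rating_value = {"H": 9, "M": 3, "L": 1, "": 0}

    # Hypothetical customer requirements with relative importance ("Imp")
    customer_reqs = [("Easy-to-find guidance", 5), ("Fewer rework loops", 4), ("Faster approval", 3)]

    # Hypothetical H/M/L ratings per system requirement, in customer_reqs order
    system_reqs = {
        "CAPA Playbook guidance": ["H", "H", "M"],
        "Interface requirements": ["M", "L", "M"],
        "CAPA Dashboard": ["L", "M", "H"],
    }

    for name, ratings in system_reqs.items():
        score = sum(rating_value[r] * imp for r, (_, imp) in zip(ratings, customer_reqs))
        print(f"{name}: {score}")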

Analyze


Before implementing the requirements, it was necessary to identify the potential sources of failure that can lead to an ineffective CAPA. In the Analyze phase, FDA regulatory expectations and internal business expectations for effectiveness drove the team to understand those sources of failure. Fault tree analysis (FTA) was performed to better understand what leads to a deficient CAPA record or an ineffective CAPA. The source of the failure? Deficient CAPA records are due to lack of access to guidance and inadequacies in training. Moreover, the analysis gave the team insight into the causes of poor outcomes, as shown in Figure 4.

Figure 4: Fault Tree Analysis

From the FTA, a primary cause of ineffective CAPAs was the lack of to-the-point training material. This reinforced the team’s belief that the CAPA Portal must behave as a CAPA Playbook, providing step-by-step instructions with helpful examples and templates.

The requirement flow-down indicated that the user interface should make it easy for multiple CAPA owners to access and share information. Sharing and communication could enable more rapid resolution of the CAPA issues.

Developing a user interface involves tradeoffs. The TRIZ (Theory of Inventive Problem Solving) concept-generation approach gave the team a way to dispassionately consider the tradeoffs and find an innovative solution that meets the expectations involved in each tradeoff. TRIZ converts a specific tradeoff into a generic tradeoff and recommends a small set of TRIZ principles that have resolved that sort of generic tradeoff in the past, based on engineer and inventor Genrich Altshuller’s research of millions of patents. Our team used TRIZ to find solutions for the following tradeoffs.

Tradeoff 1:

◉ Feature to improve: Report out on CAPA metrics to increase productivity by 25 percent
◉ Undesired result: The user could be too overwhelmed by content to consume information

Tradeoff 2:

◉ Feature to improve: Provide a workspace for CAPA owners to fill out the templates
◉ Undesired result: Inability to download file due to slow speed

Tradeoff 3:

◉ Feature to improve: Make content available with templates and examples on a dedicated page
◉ Undesired result: Site unable to load all the content

From here, we identified three TRIZ principles (aka known solutions), which were applied to the CAPA Portal to address the aforementioned tradeoffs. The three principles applied to the CAPA Playbook were:

1. Principle of Universality: Allows a part of the system to perform multiple functions so other parts can be eliminated. This principle was applied to create dashboards and organize information to report out on CAPA metrics such as the number of open and closed CAPAs, CAPA age, etc. This solution addressed Tradeoff 1 to resolve adaptability versus productivity of the CAPA Portal.

2. Principle of Preliminary Action: Allows pre-arranging the elements of the system so that they perform rapidly. This principle was used to attach notes to guide the user and files that serve as a CAPA template that users can access directly from the system. This solution addressed Tradeoff 2 to solve speed versus extent of automation of the CAPA Playbook.

3. Principle of Segmentation: Allows separating an element of a system into smaller interconnected elements. This principle was used to provide dedicated links on the CAPA Portal Interface to access the three key phases of the CAPA process: investigation, action and effectiveness. This solution addressed Tradeoff 3 to solve productivity versus reliability of the CAPA Playbook.

Using these TRIZ principles, the team was able to design the system interface and dashboard for the CAPA Portal.

Improve


Based on the user criteria for the CAPA Portal established in the Define and Measure phases, a Pugh matrix was used to evaluate the strengths and weaknesses of the available systems – Sitebuilder, SharePoint, MAP AGILE and Confluence – with each rated using S = neutral (0), + = add 1 and – = subtract 1 for each criterion (Figure 5). The total score and the weighted total were then calculated to identify the system with the highest score. The CAPA Portal was developed using a web-based system that can be shared with multiple users and can be used to easily access guidance material such as templates and examples.

Figure 5: Analysis Using Pugh Matrix
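A minimal sketch of the Pugh scoring logic, using hypothetical criteria, weights and ratings (the actual evaluation is shown in Figure 5):

    # Pugh matrix scoring: S = 0, + = +1, - = -1 (illustrative data only)
    symbol_value = {"S": 0, "+": 1, "-": -1}
    criteria_weights = {"Ease of access": 5, "Template support": 4, "Multi-user sharing": 3}

    # Hypothetical ratings of two candidate systems against each criterion
    candidates = {
        "SharePoint": {"Ease of access": "+", "Template support": "+", "Multi-user sharing": "S"},
        "Sitebuilder": {"Ease of access": "S", "Template support": "-", "Multi-user sharing": "S"},
    }

    for name, ratings in candidates.items():
        total = sum(symbol_value[r] for r in ratings.values())
        weighted = sum(symbol_value[r] * criteria_weights[c] for c, r in ratings.items())
        print(f"{name}: total = {total:+d}, weighted total = {weighted:+d}")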

To ensure that the proposed CAPA Portal would meet users’ expectations over a range of use conditions or noise factors, a P-diagram (parameter diagram) was used (Figure 6). It showed the interactions of the system: the inputs and outputs, the noise factors, the control factors and the error states. Error states from the P-diagram were evaluated further as failure modes through failure mode and effects analysis (FMEA). The FMEA helped the team analyze and prioritize risks and take actions to mitigate them. From this FMEA, the CAPA Portal was designed to anticipate potential error states, provide early warnings and direct users to mitigation through help options.

Figure 6: Parameter Diagram

Through implementation in the Improve phase of DMAIC, users of the CAPA Portal began accessing the new CAPA Playbook to guide them in their CAPA tasks. The value of this system was measured using the critical parameters of disposition time, rework and resolution time for each CAPA task.

The critical parameters were measured from May 1, 2019, to January 11, 2020 (before the deployment of the CAPA Portal), and from January 12, 2020, to July 7, 2020 (after the deployment of the CAPA Portal):

◉ The number of rework loops in approving a CAPA task decreased by 61 percent.
◉ The total time in review and approval of a CAPA task decreased by 53 percent.
◉ The total time to resolve a rejection of a CAPA task decreased by 43 percent.

Control


The transition from the Improve phase to the Control phase of DMAIC typically includes overcoming resistance to change, implementing control mechanisms such as control charting and mistake proofing (poka yoke), and institutionalization.

Since the users and other stakeholders were engaged throughout this project, from gathering their own voices through being involved in generating and selecting concepts, there was little resistance to overcome. Rather, users were extremely receptive.

The control mechanism was provided by the CAPA Dashboard that was integrated into the CAPA Portal. Poka yoke was integrated into the user interfaces and help systems. The CAPA Playbook was institutionalized within the original organization through documented and controlled processes; it is being used for all CAPAs, with immediate feedback and control.

The results were rapidly shared with executives, including the vice presidents of quality and manufacturing operations. The executives were impressed and requested that the CAPA Playbook and associated improvements to the CAPA process be replicated in other parts of the organization.

Friday 22 January 2021

How to Avoid The Evils Within Customer Satisfaction Surveys

When the Ritz-Carlton Hotel Company won the Malcolm Baldrige National Quality Award for the second time in 1999, companies across many industries began trying to achieve the same level of outstanding customer satisfaction. This was a good thing, of course, as CEOs and executives began incorporating customer satisfaction into their company goals while also communicating to their managers and employees about the importance of making customers happy.

When Six Sigma and other metrics-based systems began to spread through these companies, it became apparent that customer satisfaction needed to be measured using the same type of data-driven rigor that other performance metrics (processing time, defect levels, financials, etc.) used. After all, if customer satisfaction was to be put at the forefront of a company’s improvement efforts, then a sound means for measuring this quality would be required.

Enter the customer satisfaction survey. What better way to measure customer satisfaction than asking the customers themselves? Companies jumped on the survey bandwagon – using mail, phone, email, web and other survey platforms. Point systems were used (e.g., ratings on a 1-to-10 scale) which produced numerical data and allowed for a host of quantitative analyses. The use of the net promoter score (NPS) to gauge customer loyalty became a standard metric. Customer satisfaction could be broken down by business unit, department and individual employee. Satisfaction levels could be monitored over time to determine upward or downward trends; mathematical comparisons could be made between customer segments as well as product or service types. This was a CEO’s dream – and it seemed there was no limit to the customer-produced information that could help transform a company into the “Ritz-Carlton” of its industry.

In reality, there was no limit to the misunderstanding, abuse, wrong interpretations, wasted resources, poor management and employee dissatisfaction that would result from these surveys. Although some companies were savvy enough to understand and properly interpret their survey results, the majority of companies did not. This remains the case today.

What could go wrong with the use of customer satisfaction surveys? After all, surveys are pretty straightforward tools that have likely been used since the times of the Egyptians (pharaoh satisfaction levels with pyramid quality, etc.). Survey data, however, has a lot of potential issues and limitations that make it different from other “hard” data that companies utilize. It is critical to recognize these issues when interpreting survey results – otherwise what seems like a great source of information can cause a company to do many bad things.

Survey Biases and Limitations

Customer satisfaction surveys are everywhere; customers are bombarded with email and online survey offers from companies who want to know what customers think about their products and services. In the web-based world, results from these electronic surveys can be immediately stored in databases and analyzed in a thousand different ways. In nearly all of these instances, however, the results are fraught with limitations and flaws. The most common survey problems include various types of bias, variation in customer interpretations of scales and lack of statistical significance. These issues must be considered if sound conclusions are to be drawn from survey results.

Non-response Bias

Anyone who has called a credit card company or bank is likely to have been asked to stay on the line after their call is complete in order to take a customer satisfaction survey. How many people stay on the line to take that survey? The vast majority of people hang up as soon as the call is complete. But what if the service that a customer received on the phone call was terrible and the agent was rude? It is more likely that the customer would stay on the call and complete the survey at the end of the call. And that is a perfect example of the non-response bias at work.

Although surveys are typically offered to a random sample of customers, the recipient’s decision whether or not to respond to the survey is not random. Once a survey response rate dips below 80 percent or so, the inherent non-response bias will begin to affect the results. The lower the response rate, the greater the non-response bias. The reason for this is fairly obvious: the group of people who choose to answer a survey is not necessarily representative of the customer population as a whole. The survey responders are more motivated to take the time to answer the survey than the non-responders; therefore, this group tends to contain a higher proportion of people who have had either very good, or more often, very bad experiences. Changes in response rates will have a significant effect on the survey results. Typically, lower response rates will produce more negative results, even if there is no actual change in the satisfaction level of the population.

Survey Methodology Bias

The manner in which a customer satisfaction survey is administered can also affect the results. Surveys that are administered in person or by phone tend to result in higher scores than identical surveys distributed by email, snail mail or on the Internet. This is due to people’s natural social tendency to be more positive when there is another person directly receiving feedback (even if the recipient is an independent surveyor). Most people do not like to give another individual direct criticism, so responses tend to be more favorable about a product (or service, etc.) when speaking in person or by phone. Email or mail surveys have no direct human interaction and, therefore, the survey taker often feels more freedom to share negative feedback – criticisms are more likely to fly.

In addition, the manner in which a question is asked can have a significant effect on the results. Small changes in wording can affect the apparent tone of a question, which in turn can impact the responses and the overall results. For example, asking “How successful were we at fulfilling your service needs?” may produce a different result than “How would you rate our service?” although they are similar questions in essence. Even the process by which a survey is presented to the recipient can alter the results – surveys that are offered as a means of improving products or services to the customer by a “caring” company will yield different outcomes than surveys administered solely as data collection exercises or surveys given out with no explanation at all.

Regional Biases

Another well-known source of bias that exists within many survey results is regional bias. People from different geographical regions, states, countries, urban vs. suburban or rural locations, etc. tend to show systematic differences in their interpretations of point scales and their tendencies to give higher or lower scores. Corporations that have business units across diverse locations have historically misinterpreted their survey results this way. They will assume that a lower score from one business unit indicates lesser performance, when in fact that score may simply reflect a regional bias compared to the locations of other business units.

Variation in Customer Interpretation and Repeatability of the Rating Scale

Imagine that your job is to measure the length of each identical widget that your company produces to make sure that the quality and consistency of your product is satisfactory. But instead of having a single calibrated ruler with which to make all measurements, you must make each measurement with a different ruler. This is not a problem if all the rulers are identical, but you notice that each ruler has its own calibration. What measures as one inch for one ruler measures 1¼ inches for another ruler, ¾ of an inch for a third ruler, etc. How well could you evaluate the consistency of the widget lengths with this measurement system if you need to determine lengths to the nearest 1/16 of an inch? Welcome to the world of customer satisfaction surveys.

Unlike the scale of a ruler or other instrument which remains constant for all measurements (assuming its calibration remains intact), the interpretation of a survey rating scale varies for each responder. In other words, the people who complete the survey have their own “calibrations” for the scale. Some people tend to be more positive in their assessments; other people are inherently more negative. On a scale of 1 to 10, the same level of satisfaction might solicit a 10 from one person but only a 7 or 8 from another.

In addition, most surveys exhibit poor repeatability. When survey recipients are given the exact same survey questions multiple times, there are often differences in their responses. Surveys rarely pass a basic gage R&R (repeatability and reproducibility) assessment. Because of these factors, surveys should be considered noisy (and biased) measurement systems – their results cannot be interpreted with the same precision and discernment as data that is produced by a physical measurement gauge.

Statistical Significance

Surveys are, by their very nature, a statistical undertaking and thus it is essential to take the statistical sampling error into account when interpreting survey data. Sample size is part of the calculation for this sampling error: if a survey result shows a 50 percent satisfaction rating, does that represent 2 positive responses out of 4 surveys or 500 positives out of 1,000 surveys? Clearly the margin of error will be different for those two cases.

There are undoubtedly thousands of examples of companies failing to take margin of error into account when interpreting survey results. A well-known financial institution routinely punished or rewarded its call center personnel based on monthly survey results – a 2 percent drop in customer satisfaction would solicit calls from executives to their managers demanding to know why the performance level of their call center was decreasing. Never mind that the results were calculated from 40 survey results with a corresponding margin of error of ±13 percent, making the 2 percent drop statistically meaningless.

An optical company set up quarterly employee performance bonuses based on individual customer satisfaction scores. By achieving an average score between 4.5 and 4.6 (based on a 1-to-5 scale), an employee would get a minimum bonus; if they achieved an average score between 4.6 and 4.7, they would get an additional bonus; and if their average score was above 4.7, they would receive the maximum possible bonus. As it turned out, each employee’s score was calculated from an average of less than 15 surveys – the margin of error for those average scores was ±0.5. All of the employees had average scores within this margin of error and, thus, there was no distinction between any of the employees. Differences of 0.1 points were purely statistical noise with no basis in actual performance levels.

When companies fail to take margin of error into account, they wind up making decisions, rewarding or punishing people, and taking actions based purely on random chance. As statistician W. Edwards Deming shared 50 years ago, one of the fastest ways to completely discourage people and create an intolerable work environment is to evaluate people based on things that are out of their control.

Proper Use of Surveys


What can be done? Is there a way to extract useful information from surveys without misusing them? Or should customer satisfaction surveys be abandoned as a means of measuring performance?

It is better not to use surveys at all than to misuse and misinterpret them. The harm that can be done when biases and margin of error are not understood outweighs any benefit of the information collected. If the information from surveys can be properly understood and interpreted within its limitations, however, then surveys can help guide companies in making their customers happy. The following are some ways that can be accomplished.

Determine the Drivers of Customer Satisfaction and Measure Them

Customers generally are not pleased or displeased with companies by chance – there are drivers that influence their level of satisfaction. Use surveys to determine what those key drivers are and then put performance metrics on those drivers, not on the survey results themselves. Ask customers for the reasons why they are satisfied or dissatisfied, then affinitize those responses and put them on a Pareto chart. This information will be more valuable than a satisfaction score, as it will identify root causes of customer happiness or unhappiness on which measurements and metrics can then be developed.

For example, if it can be established that responsiveness is a key driver in customer satisfaction then start measuring the time between when a customer contacts the company and when the company responds. That is a hard measurement and is more reliable than a satisfaction score. The more that a company focuses on improving the metrics that are important to the customer, the more likely that company will improve real customer satisfaction (which is not always reflected in biased and small-sample survey results).

Improve Your Response Rate

If survey results should reflect the general customer population (and not a biased subset of customers) then there must be a high response rate to minimize the non-response bias. Again, the goal should be at least an 80-percent response rate. One way to achieve this is to send out fewer surveys but send them to a targeted group that has been contacted ahead of time. Incentives for completing the survey along with reminder messages can help increase the response rate significantly.

Making the surveys short, fast and painless to complete can go a long way toward improving response rates. As tempting as it may be to ask numerous and detailed questions to squeeze every ounce of information possible out of the customer, a company is likely to have survey abandonment when customers realize the survey is going to take longer than a few minutes to complete. A company is better off using a concise survey that is quick and easy for the customers to complete. Ask a few key questions and let the customers move on to whatever else they need to attend to; the company will end up with a higher response rate.

Do Not Make Comparisons When Biases Are Present

A lot of companies use customer survey results to try to score and compare their employees, business units, departments, and so on. These types of comparisons must be taken with a grain of salt, as there are too many potential biases that can produce erroneous results. Do not try to compare across geographic regions (especially across different countries for international companies), as the geographic bias may lead to the wrong conclusions. If the business is a national or international company and wishes to sample across a large customer base, use stratified random sampling so that the customers are sampled in the same geographic proportion that is representative of the general customer population.

Also, do not compare results from surveys that were administered differently (phone versus mail, email, etc.) – even if the survey questions were identical. The survey methodology can have a significant influence on the results. Be sure that the surveys are identical and are administered to customers using the exact same process.

Surveys are rarely capable of passing a basic gage R&R study. They represent a measurement system that is noisy and flawed; using survey results to make fine discernments, therefore, is usually not possible.

Always Account for Statistical Significance in Survey Results

This is the root of the majority of survey abuse – management making decisions based on random chance rather than on significant results. In these situations, Six Sigma tools can be a significant asset, as it is critical to educate management on the importance of proper statistical interpretation of survey results (as with any type of data).

Set a strict rule that no survey result can be presented without including the corresponding margin of error (i.e., the 95 percent confidence interval). For survey results based on average scores, the margin of error will be roughly

margin of error ≈ 1.96σ/√n

where σ is the standard deviation of the scores and n is the sample size. (Note: For sample sizes < 30, the more precise t-distribution formula should be used.) If the survey results are based on percentages rather than average scores, then the margin of error can be expressed as

margin of error ≈ 1.96√(p(1 − p)/n)

where p is the resulting overall proportion (note that the Clopper-Pearson exact formula should be used if np < 5 or n(1 − p) < 5). Mandating that a margin of error be included with all survey results helps frame results for management, and will go a long way toward getting people to understand the distinction between significant differences and random sampling variation.
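A minimal sketch of both calculations; the inputs are hypothetical, though note that with n = 15 and a score standard deviation near 1.0 the first formula reproduces roughly the ±0.5 margin described in the optical-company example above:

    import math

    def moe_mean(sigma, n, z=1.96):
        """Margin of error for an average score (95% confidence by default)."""
        return z * sigma / math.sqrt(n)

    def moe_proportion(p, n, z=1.96):
        """Margin of error for a proportion (95% confidence by default)."""
        return z * math.sqrt(p * (1 - p) / n)

    print(f"±{moe_mean(1.0, 15):.2f} points")         # ≈ ±0.51 on a 1-to-5 scale
    print(f"±{moe_proportion(0.5, 100) * 100:.0f}%")  # ≈ ±10% for 100 yes/no responses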

Also, be sure to use proper hypothesis testing when making survey result comparisons between groups. Use the following tools as appropriate for the specific scenario:

◉ For comparing average or median scores, there are t-tests, analysis of variance, or Mood’s Median tests (among others).

◉ For results based on percentages or counts there are proportions tests or chi-squared analysis.

If comparing a large number of groups or looking for trends that may be occurring over time, the data should be placed on the appropriate control chart. Average scores should be displayed on an X-bar and R, or X-bar and S chart, while scores based on percentages should be shown on a P chart. For surveys with large sample sizes, an I and MR chart may be more appropriate to account for variations in the survey process that are not purely statistical (such as biases changing from sample to sample, which is common). Control charts go a long way in preventing management overreaction to differences or changes that are statistically insignificant.
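As one example of such a comparison, satisfied/dissatisfied counts from two groups can be tested with a chi-squared analysis; the counts below are hypothetical:

    from scipy.stats import chi2_contingency

    # Hypothetical counts: [satisfied, dissatisfied] for two business units
    table = [[180, 40],   # unit A
             [150, 70]]   # unit B

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
    # A p-value above 0.05 would mean the apparent difference between the units
    # could easily be random sampling variation.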

Finally, if goals or targets are being set based on customer satisfaction scores, make sure those target levels are statistically distinguishable from the current level of performance, based on the margin of error. Otherwise, people are rewarded or punished based purely on chance. In general, it is always better to set goals based on the drivers of customer satisfaction (the hard metrics) rather than on the satisfaction scores themselves.

Tuesday 19 January 2021

Improving Help Desk Functions by Using Lean Six Sigma

Help desks often face a barrage of complaints when voice of the customer (VOC) data is collected. These typically include:

◉ “I had to describe the problem to more than one person before it was solved.”

◉ “I had to describe the problem to more than one person, and each time I was asked to do the same set of diagnostic steps all over again.”

◉ “Sometimes I received rapid service and sometimes it took way too long. Some of the support people I spoke with were able to find the problem quickly and others were not.”

◉ “The problem was pretty simple. I wish there was a website where I could first search for a solution before calling the help desk for support.”

◉ “Before I could speak to a human being, I had to go through a long and tedious [interactive voice response] menu.”

Lean Six Sigma can help overcome these issues through the use of an efficient, effective process to review and improve help desk functions. The Lean technique focuses on process cycle efficiency (PCE) as a measure of process execution speed, the first step in understanding how a function works, according to Michael L. George, author of the book Lean Six Sigma.

Process cycle efficiency = value-added time / total elapsed time

A Lean process produces a PCE of 25 percent or more. Most service processes, like help desks, are not Lean; according to George, 20 percent of the activities contribute 80 percent of the waste in the process. One of the main goals of Lean is to increase process velocity, and improving PCE helps achieve that goal by eliminating non-value-added activities from the process. Lean methods such as value stream mapping also provide a systematic way to identify and eliminate waste.
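A minimal worked example of the PCE calculation, with hypothetical times for a single ticket:

    # PCE for a hypothetical help desk ticket (illustrative numbers only)
    value_added_minutes = 22     # diagnosis and fix time that benefits the customer
    total_elapsed_minutes = 180  # open-to-close time, including queues and handoffs

    pce = value_added_minutes / total_elapsed_minutes * 100
    print(f"PCE = {pce:.1f}%")   # PCE = 12.2%, well short of the 25% Lean threshold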

Identifying relevant qualitative and quantitative metrics for a specific function plays a key role in understanding variation within that function. Measuring variation before and after implementation of a Lean process allows the business to identify random variation in critical-to-quality (CTQ) measures and focus root cause analysis on only those instances where the function routinely impacts CTQs. Measurement of key CTQs and statistical process control helps ensure that implementing a Lean help desk will create a better service experience for the client.

Value stream analysis can uncover activities that add waste to a typical help desk organization. Then that analysis can naturally lead to the implementation of a Lean Six Sigma help desk design. That process and descriptions of what is necessary at the organizational level to ensure that the Lean help desk succeeds are explored here.

Typical Help Desk Design


Figure 1 depicts the typical design of most help desks, where a problem or question enters the process stream and is touched from one to three times depending on the skill and knowledge of the support personnel handling the request. While this design has improved over time, the accompanying value stream analysis reflects the continued presence of waste activities within the process flow. A majority of the activities are non-value-added and represent a large portion of the overall process time.

Figure 1: Value Stream Analysis of Typical Help Desk Design

An activity adds value if it directly benefits the customer. Non-value-added activities may help the organization provide a service; however, they do not add benefit for the customer. The objective of Lean Six Sigma is to increase the velocity of the process, and the best way to do that is to eliminate as many non-value-added activities as possible, reducing the total elapsed time of the process. In addition, reducing the process time of value-added activities also is important, as long as critical-to-quality metrics are not compromised.

Critical-to-Quality Metrics: A number of quantitative and qualitative critical-to-quality metrics are used to monitor help desk processes. Quantitative measures include average resolution time from opening a ticket to closing it, average cycle time for Level 1 support, average cycle time for Level 2 support, etc. For the initial phone call, other measures such as average handle time, average hold time, and number of calls held also are good measures. It is important to measure the variation in these metrics in an effort to maintain a consistent client experience. Qualitative measures are obtained from VOC efforts such as customer satisfaction surveys, which measure overall customer satisfaction with the entire help desk process as well as satisfaction with individual process steps. Using Lean Six Sigma to improve the help desk design requires measuring CTQ metrics before and after improvement efforts are made, ensuring that they reflect an improved process and that they remain in statistical process control.

Lean Help Desk Design


Figure 2 presents a possible Lean help desk implementation addressing a number of issues typically identified when collecting VOC data. This design also helps achieve the goal to improve the velocity of the help desk process. A value stream analysis is depicted, reflecting the implementation of the following improvements to the process:

◉ Introducing web-based self-service knowledge management and frequently asked questions (FAQs) functionality has the potential to significantly reduce calls to the help desk. Customers first search the support section of the website to see if their question was already answered for another customer. If not, they enter the question and await an answer by email. Over time, the problem knowledge base covers more and more FAQs or commonly seen problems, and calls to the help desk are reduced. Since the knowledge management function is available 24/7, it provides convenience to customers with questions.

◉ Improving first-call resolution is accomplished by ensuring that the help desk person receiving the call has the skills needed to solve the problem right away. This improvement minimizes total resolution time, increasing customer satisfaction.

Overall, the entire process has fewer steps and also incorporates ideas that increase process velocity.

Figure 2: Value Stream Analysis of Lean Help Desk Design

Organizational Support Requirements


There are a number of organizational requirements that are needed for the successful implementation of a Lean help desk. They include:

◉ Active promotion of self-service options: Offer encouragement and incentives for customers to try the web-based self-service knowledge base. Promote the site with customers at every opportunity and ensure that it is available when customers log in 24/7.

◉ Job rotation and multi-skilled workforce: Switching from a help desk where the first responder passes the call to an expert for resolution, to a system where resolution is attempted on the first call, requires a population of agents that have a diverse set of skills. This is likely to involve more extensive training of agents. Some help desks designed for first-call resolution use an approach where agents work as a team on difficult problems, leveraging the skills of more experienced agents.

◉ Empowered help desk agents: A Lean help desk involves giving agents more authority to resolve problems. That requires a full understanding of the organization and of how issues cut across functional areas.

◉ Restructured incentives and awards: Incentives and awards in traditional help desks are more tuned toward agents working on problems individually. A Lean help desk should emphasize teamwork, velocity of problem resolution and customer satisfaction more than metrics tailored for the individual agent.

Lean Six Sigma offers a systematic way to analyze a help desk design and identify ways of increasing process velocity through reducing non-value-added activities. Six Sigma provides the tools to measure variation in critical-to-quality metrics and monitor the level of statistical control, helping companies improve help desk processes.

Monday 18 January 2021

Making SCOR Model More Effective with Lean Six Sigma

Many Six Sigma practitioners have asked how the Supply-Chain Operations Reference model, or SCOR®, relates to Six Sigma and Lean. However, perhaps a more relevant question is, “How can Six Sigma and Lean make a SCOR model more effective?”

SCOR, a trademark of the Supply-Chain Council, consists of several hierarchical levels. These are shown in Figure 1 and include an evaluation of strategic goals and objectives in the context of competitive supply chain analysis, value stream mapping of major process workflows and, at the lowest level, an operational analysis of work tasks, standards and metrics. These hierarchical levels form a generic supply chain model which provides standardized workflows, procedures and metrics. The usefulness of a SCOR model is that it can be customized to fit the specific supply chain of almost any organization, while its major process workflows remain standardized and include basic control elements based on the combined knowledge of many industry experts.

Building a supply chain model or process using a highly standardized structure enables an organization to detect and eliminate process variation, because variation from a well-known standard is easily detected, analyzed and eliminated using Lean, Six Sigma and similar improvement methodologies. This concept is shown in Figure 1 at the second and third levels of the pyramid. As an example, Lean methods can be applied to simplify, standardize and mistake proof process workflows prior to migration to a SCOR model.

On the other hand, Six Sigma methods can be used to analyze customer and process data to identify repeatable patterns associated with the root causes of poor operational performance. These analyses also can be used to build quantified models of a system’s metrics, as described by the Six Sigma expression Y = f(X). In other situations, when customization is required to match a specific supply chain operational design, Design for Six Sigma tools and methods can be used to augment the basic SCOR model; at that point a team should consider using the Supply-Chain Council’s Design-Chain Operations Reference (DCOR) model.

Integrating Organizational Process Workflows

A SCOR model also helps to integrate organizational process workflows and their internal operational work tasks into a coherent whole within a global supply chain. Integration is especially important because global supply chains are heavily dependent on information technology to build and manage their demand and capacity planning infrastructure. Standardizing operational methodologies ensures that supply chain participants take a common approach to developing procedures and metrics, which tends to integrate rather than isolate operational segments.

Also, to the extent that there are process breakdowns, Lean Six Sigma methods can be used to identify and eliminate their root causes. As an example, Six Sigma methods are very useful in identifying and analyzing the voice of the customer during the deployment of SCOR methodology and development of a SCOR model within a supply chain. Lean methods are used to simplify and eliminate non-essential operations by identifying value-adding and non-value-adding operations.

Figure 1: SCOR Model’s Hierarchal Levels

A major goal of applying a SCOR model is to develop a value stream map describing a supply chain’s major process workflows. Figure 2 shows that a SCOR model consists of five major process workflows. These are:

◉ Demand and supply planning

◉ Sourcing strategies

◉ Transformation processes

◉ Warehousing and delivery

◉ Reverse logistics

These major process workflows are then broken into lower-level workflows. As a simple example, using the SCOR modeling method, the “sourcing” component is broken into several lower-level process workflows: S1 (source stocked product), S2 (source make-to-order product) and S3 (source engineer-to-order product). Each of these workflows is then broken down further into the operations within each workflow and then into the specific work tasks of each operation.

Providing Definitions, Metrics, Tools and Methods


A SCOR model also provides operational definitions and expected performance metrics for basic supply chain operations, as well as best-in-class tools, methods and systems for its members to use to modify their process workflows. This approach enables users to more easily adapt a SCOR model to their specific process workflows. As an example, S1, source stocked product, is broken into the operations S1.1, schedule product deliveries; S1.2, receive product; S1.3, verify product; and S1.4, authorize supplier payment. A SCOR model also provides metrics that have been shown to be predictors of best-in-class global supply chain performance, including perfect order fulfillment, order fulfillment cycle time, system flexibility, cash-to-cash cycle time and similar metrics. Using this metric linkage concept, a global supply chain can be analyzed relative to its major components to compare actual performance against best-in-class benchmarks. As a result, supply chains using a SCOR model provide Lean Six Sigma practitioners with a highly standardized supply chain process, but one that remains operationally flexible as needs change over time.
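
As a worked example of one of these metrics, cash-to-cash cycle time is conventionally computed as days inventory outstanding plus days sales outstanding minus days payables outstanding. The sketch below compares a hypothetical supply chain against an equally hypothetical benchmark.

    # Cash-to-cash cycle time = days inventory outstanding + days sales
    # outstanding - days payables outstanding. All figures are illustrative.
    def cash_to_cash(dio_days, dso_days, dpo_days):
        return dio_days + dso_days - dpo_days

    actual = cash_to_cash(dio_days=60, dso_days=45, dpo_days=30)  # 75 days
    benchmark = 35   # hypothetical best-in-class value
    print(f"cash-to-cash: {actual} days, gap to benchmark: {actual - benchmark} days")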

Figure 2: SCOR Model’s Five Major Process Workflows

Some of the major process workflows within supply chains can help demonstrate where Lean Six Sigma methods can be used in relation to a SCOR model. These include:

◉ Ordering materials, components and items

◉ Managing purchase orders and accounts payable systems

◉ Selecting and managing suppliers

◉ Estimating freight rates

◉ Choosing shipping modes and managing claims and returns systems

◉ Managing fleet maintenance and operations

◉ Forecasting workloads

◉ Maintaining equipment

◉ Developing inventory storage strategies and rules

◉ Receiving inbound trucks

◉ Order-picking, including pack-out

◉ Dock auditing and loading outbound trucks

However, supply chain processes do not always perform according to standard, for myriad reasons. As an example, Lean Six Sigma projects have been deployed to improve customer service levels and on-time delivery, reduce overdue backlogs, increase inventory asset utilization (turns), reduce unplanned orders, reduce scheduling changes, improve material availability, improve forecast accuracy and reduce operational lead time. It would be easier if processes were standardized upfront using SCOR principles, prior to deploying Lean Six Sigma projects aimed at identifying and eliminating the root causes of process breakdowns.

Starting a SCOR Deployment


How would one start a SCOR deployment? The experts at the Supply-Chain Council can help, but at an elementary level, an organization can compare its current supply chain, at all operational levels, to the established SCOR procedures, standards, metrics and benchmarks. Then it can value stream map, simplify and standardize its major process workflows. In parallel, it should apply Lean Six Sigma methods wherever there are process breakdowns whose root causes require analysis and a reduction in process variation.

Friday 15 January 2021

Managing for Continuous and Breakthrough Improvement


As a set of state-of-the-art tools for solving operations problems, Six Sigma can be used for both continuous and breakthrough improvement. What separates the two is the structure by which they are managed. When managers confuse the two types, the result is usually below-par performance. Even worse, such confusion could result in yet another dead-end quality program.

Continuous and Breakthrough Improvement

Continuous improvement is about many small improvements, initiated and implemented by anyone and everyone in the organization, to improve the quality of their working processes and practices. Simplifying administrative processes by eliminating unnecessary copies, installing racks to organize equipment in a more visual and orderly fashion, and color-coding dossiers in a lab for easy identification are all examples of continuous improvement. It both reflects and creates a culture of quality.

Breakthrough improvement involves major improvements in key business areas. These projects often solve chronic problems permanently through focused, dedicated resources working for a limited period of time. Because of the investment in time and attention required, breakthrough improvement projects are selected by a management group that typically acts as a steering group. The goal is a 50 to 95 percent improvement in four to 12 months, depending on project scope. Usually the scope of inquiry crosses multiple functional boundaries. These projects are good opportunities for developing next-generation leaders, an equally important aspect of creating an enduring quality culture. Breakthrough improvement projects yield the highest economic return in the short to medium term.

In Six Sigma parlance, continuous improvement is done by Yellow Belts and White Belts trained in the basic DMAIC approach and tools. Black Belts are involved in breakthrough improvements. Depending on the project, Green Belts could be involved in both types of improvement. Design for Six Sigma projects typically aim for breakthroughs.

Guidelines for managing both continuous and breakthrough improvement are outlined in the figure below.

Two Management Approaches

The following examples of using Six Sigma for continuous and breakthrough improvements are from a medium-sized European pharmaceutical company.

Reducing Packaging Equipment Downtime


A pharmaceutical company began a restructuring process that included an investment of €20 million to create a Lean operations factory. After coping with the teething problems of starting up the new plant, the plant manager was confronted with yet another challenge: The mind-set of the employees had not changed. While raw materials and products could now flow continuously from granulation to the packaging of finished products, supervisors and operators followed old practices in running the equipment and managing time within their departments.

Six Sigma was chosen as the continuous improvement methodology to identify and solve the problems relating to continuous flow production. The first step was to coach packaging supervisors and operators in the application of the Six Sigma method, and thereby provide them with the tools to quantify the downtime problem, identify root causes, develop solutions and implement them.

The initiative started with a three-day team training project. It was designed to give participants an overview of Six Sigma, further refine the scope of their project and build cohesion among a group of people who were not accustomed to working in project teams. At first, there was resistance to the whole idea of measuring because operators saw it as a camouflaged step toward reducing their work freedom. However, what seemed to open their minds to measuring were the benefits (know-how and insight) to be gained by taking part in the Six Sigma teams.

The operators identified several potential root causes and succeeded in building a measurement system to document cause-and-effect relationships. By fixing chronic issues that irritated the operators, the maintenance engineering staff won the credibility of the operators. The project indicated to them that management was serious about improvement.

Over a period of nine months, the company reduced unplanned downtime in packaging by identifying and addressing several critical problems – dust, lack of cooperation between packaging lines, incorrect machine calibration and lack of proactive maintenance. This situation called for a new common working culture. Measuring, analyzing and taking action to reduce downtime became part of everyday practice.
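
A simple Pareto-style tally is one way such a team can keep downtime measurement part of everyday practice. The cause categories below follow the article; the hour totals are invented for illustration.

    # Rank downtime causes by hours lost so the team sees where to act
    # first (cause names from the article, hour totals hypothetical).
    downtime_hours = {
        "dust": 42,
        "lack of cooperation between lines": 35,
        "incorrect machine calibration": 18,
        "lack of proactive maintenance": 55,
    }

    total = sum(downtime_hours.values())
    running = 0
    for cause, hours in sorted(downtime_hours.items(), key=lambda kv: -kv[1]):
        running += hours
        print(f"{cause:35s} {hours:3d} h   cumulative {100 * running / total:5.1f}%")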

All in all, the most durable benefit was the teamwork and leadership skills that were learned. The project not only succeeded in implementing proactive procedures for machine maintenance, but also created closer cooperation among technicians, operators and packaging lines. This increased operator responsibility and empowerment toward preventative maintenance – all improvements that control measures have validated as durable.

Solving a Chronic Yield Problem


For 15 years, a pharmaceutical manufacturer experienced variation in a production process that resulted in periodic batch failures. Everyone in the company had their own theories about the reasons why – the chemistry of particle sizes, humidity depending on the time of the year, equipment settings, etc.

The company used Six Sigma for breakthrough improvement to solve this chronic production problem. A group consisting of a quality manager and a cross-functional team of operators, lab technicians and process engineers started out by re-labeling every pet theory as a hypothesis and testing each against data. As it turned out, several favorite theories failed to stand up to the scrutiny of fact-based analysis.
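
To show what testing a pet theory against data can look like, here is a minimal sketch using a two-sample t-test from SciPy. The theory, yield figures and seasonal grouping are hypothetical stand-ins, not the company's actual data.

    # Treat a pet theory ("humidity drives batch failures") as a testable
    # hypothesis: compare yields from high- and low-humidity months.
    from scipy import stats

    summer_yields = [92.1, 93.4, 91.8, 94.0, 92.7]   # illustrative values
    winter_yields = [93.0, 92.5, 93.8, 91.9, 93.1]

    t_stat, p_value = stats.ttest_ind(summer_yields, winter_yields)
    print(f"p = {p_value:.2f}")   # a large p-value fails to support the theory

A theory that survives this kind of scrutiny is worth pursuing; one that does not is retired, which is how several of the company's favorite explanations were eliminated.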

The process led to interesting results, the breakthrough coming from the collective knowledge of the cross-functional team. Taking a closer look at the data on lost batches, the operators remarked that problems in content uniformity occurred at the same time as agglomerates in the base material. Also, the team found that the agglomerates had more than the expected amounts of flavor additive. Finally, the team concluded that the process of mixing flavor additives was one of the root causes for lumps resulting in too little vitamin D3 in some tablets. This gave the team a chance to develop and implement simple procedural changes for mixing in flavor additives.

Control measures are now in place to monitor other critical variables. Furthermore, the work pointed to the need for a future variation reduction project further upstream. Now, more than ever before, there are frequent fact-based discussions between functions and levels in the organization working together to improve production yields.

The company succeeded in reducing batch failures from a yearly average of 12 to two, and is on its way to eliminating them entirely. By approaching the problems with fact-based analytical tools, the company improved the production process to the extent that the company generated net savings of nearly €500,000 per year. As it improved the uniformity of content, the company set off a positive chain reaction affecting downstream operations of tablet compression and packaging.